INSTRUCTOR’S MANUAL
for
Making Hard Decisions
with DecisionTools, 3rd Ed.
Revised 2013
Samuel E. Bodily
University of Virginia
Robert T. Clemen
Duke University
Robin Dillon
Georgetown University
Terence Reilly
Babson College
Table of Contents
GENERAL INFORMATION
Introduction
Influence Diagrams
Decision Analysis Software
CHAPTER NOTES AND PROBLEM SOLUTIONS
Chapter 1
Chapter 2
Chapter 3
Chapter 4
Chapter 5
Chapter 6
Section 1 Cases
Athens Glass Works
Integrated Siting Systems, Inc.
International Guidance and Controls
George’s T-Shirts
Chapter 7
Chapter 8
Chapter 9
Chapter 10
Chapter 11
Chapter 12
Chapter 13
Section 2 Cases
Lac Leman Festival de la Musique
Sprigg Lane
Appshop, Inc.
Calambra Olive Oil
Scor-eStore.com
Chapter 14
Chapter 15
Chapter 16
Chapter 17
Section 3 Cases
John Carter
Sleepmore Mattress Manufacturing
Susan Jones
TOPICAL INDEX TO PROBLEMS
GENERAL INFORMATION
INTRODUCTION
Making Hard Decisions with DecisionTools, 3rd Edition presents the basic techniques of modern decision
analysis. The emphasis of the text is on the development of models to represent decision situations and the
use of probability and utility theory to represent uncertainties and preferences, respectively, in those
models. This is a new edition of the text. New examples and problems have been added throughout the text
and some chapters have either been completely rewritten (Chapters 5 & 11) or are entirely new (Chapters 6
& 13). In addition, we have added 15 cases from Darden Business Publishing. The Darden cases are
grouped together at the end of each of the three sections.
The first section of the book deals with structuring decision models. This part of the process is undoubtedly
the most critical. It is in the structuring phase that one comes to terms with the decision situation, clarifies
one’s objectives in the context of that situation, and confronts questions regarding the problem’s essential
elements. One must decide exactly what aspects of a problem are to be included in a model and make
fundamental modeling choices regarding how to represent each facet. Discussions with decision analysts
confirm that in most real-world applications, the majority of the time is spent in structuring the problem,
and this is where most of the important insights are found and creative new alternatives invented. The
discussion of model structuring integrates notions of tradeoffs and multiple objectives, something new to
the second edition of the book. Although complete discussion of modeling and analysis techniques is put
off until later, students should have enough information so that they can analyze simple multiattribute
models after finishing Chapter 4. This early introduction to this material has proven to be an excellent
motivator for students. Give them an interesting problem, ask them to discuss the objectives and tradeoffs,
and you will have trouble getting them to be quiet!
Making Hard Decisions with DecisionTools provides a one-semester introduction to the tools and concepts
of decision analysis. The text can be reasonably well adapted to different curricula; additional material
(readings, cases, problems from other sources) can be included easily at many different points. For
example, Chapters 8 and 15 discuss judgmental aspects of probability assessment and decision making, and
an instructor can introduce more behavioral material at these points. Likewise, Chapter 16 delves into the
additive utility function for decision making. Some instructors may wish to present goal programming or
the analytic hierarchy process here.
The Darden cases are grouped at the end of each of the three sections. Instead of tying each case to a particular
chapter, a group of cases is associated with a group of chapters. The goal is to show that the various
concepts and tools covered throughout the section can be applied to the cases for that section. For example,
to solve the cases at the end of Section One, Modeling Decisions, students will need to understand the
objectives of the decision maker (Chapter 2), structure and solve the decision model (Chapters 3 and 4),
perform a sensitivity analysis (Chapter 5), and, perhaps, incorporate organizational decision making
concepts (Chapter 6). Instructors can either assign a case analysis after covering a set of chapters, asking the
students to incorporate all the relevant material, or assign a case after each chapter, highlighting that
chapter’s material. Students need to understand that a complete and insightful analysis is based on
investigating the case using more than one or two concepts.
Incorporating Keeney’s (1992) value-focused thinking was challenging because some colleagues preferred
to have all of the multiple-objective material put in the same place (Chapters 15 and 16), whereas others
preferred to integrate the material throughout the text. Ultimately the latter approach was chosen, especially stressing
the role of values in structuring decision models. In particular, students must read about structuring values
at the beginning of Chapter 3 before going on to structuring influence diagrams or decision trees. The
reason is simply that it makes sense to understand what one wants before trying to structure the decision.
In order for an instructor to locate problems on specific topics or concepts without having to read through
all the problems, a topical cross-reference for the problems is included in each chapter and a topical index
for all of the problems and case studies is provided at the end of the manual.
INFLUENCE DIAGRAMS
The most important innovation in the first edition of Making Hard Decisions was the integration of
influence diagrams throughout the book. Indeed, in Chapter 3 influence diagrams are presented before
decision trees as structuring tools. The presentation and use of influence diagrams reflects their current
position in the decision-analysis toolkit. They appear to be most useful for (1) structuring problems and (2)
presenting overviews to an audience with little technical background. In certain situations, influence
diagrams can be used to great advantage. For example, understanding value-of-information analysis is a
breeze with influence diagrams, but tortuous with decision trees. On the other hand, decision trees still
provide the best medium for understanding many basic decision-analysis concepts, such as risk-return
trade-offs or subjective probability assessment.
Some instructors may want to read more about influence diagrams prior to teaching a course using Making
Hard Decisions with DecisionTools. The basic reference is Howard and Matheson (1981, reprinted in 2005).
This paper offers a very general overview, but relatively little in the way of nitty-gritty, hands-on help.
Aside from Chapters 3 and 4 of Making Hard Decisions with DecisionTools, introductory discussions of
influence diagrams can be found in Oliver and Smith (1990) and McGovern, Samson, and Wirth (1993). In
the field of artificial intelligence, belief nets (which can be thought of as influence diagrams that contain
only uncertainty nodes) are used to represent probabilistic knowledge structures. For introductions to belief
nets, consult Morawski (1989a, b) as well as articles in Oliver and Smith (1990). Matzkevich and
Abramson (1995) provide an excellent recent review of network models, including influence diagrams and
belief nets.
The conference on Uncertainty in Artificial Intelligence has been held annually since 1985, and the
conference always publishes a book of proceedings. For individuals who wish to survey the field broadly,
these volumes provide up-to-date information on the representation and use of network models.
Selected Bibliography for Influence Diagrams
Howard, R. A. (1989). "Knowledge Maps." Management Science, 35, 903-922.
Howard, R. A., and J. E. Matheson (1981). "Influence Diagrams." In R. Howard and J. Matheson (Eds.), The Principles and Applications of Decision Analysis, Vol. II, Palo Alto, CA: Strategic Decisions Group (1984), 719-762. Reprinted in Decision Analysis, 2 (2005), 127-147.
Matzkevich, I., and B. Abramson (1995). "Decision Analytic Networks in Artificial Intelligence." Management Science, 41, 1-22.
McGovern, J., D. Samson, and A. Wirth (1993). "Influence Diagrams for Decision Analysis." In S. Nagel (Ed.), Computer-Aided Decision Analysis. Westport, CT: Quorum, 107-122.
Morawski, P. (1989a). "Understanding Bayesian Belief Networks." AI Expert (May), 44-48.
Morawski, P. (1989b). "Programming Bayesian Belief Networks." AI Expert (August), 74-79.
Neapolitan, R. E. (1990). Probabilistic Reasoning in Expert Systems. New York: Wiley.
Oliver, R. M., and J. Q. Smith (1990). Influence Diagrams, Belief Nets and Decision Analysis (Proceedings of an International Conference, 1988, Berkeley). New York: Wiley.
Pearl, J. (1988). Probabilistic Reasoning in Intelligent Systems. San Mateo, CA: Morgan Kaufmann.
Shachter, R. D. (1986). "Evaluating Influence Diagrams." Operations Research, 34, 871-882.
Shachter, R. D. (1988). "Probabilistic Inference and Influence Diagrams." Operations Research, 36, 589-604.
Shachter, R. D., and C. R. Kenley (1989). "Gaussian Influence Diagrams." Management Science, 35, 527-550.
DECISION ANALYSIS SOFTWARE
Making Hard Decisions with DecisionTools integrates Palisade Corporation's DecisionTools, version 6.0,
throughout the text. DecisionTools consists of six programs (PrecisionTree, TopRank, @RISK, StatTools,
NeuralTools, and Evolver), each designed to help with different aspects of modeling and solving decision
problems. Instructions on how to use PrecisionTree and @RISK are typically given at the end of the
corresponding chapter. PrecisionTree is a versatile program that allows the user to construct and solve both
decision trees and influence diagrams. @RISK allows the user to insert probability distributions into a
spreadsheet and run a Monte Carlo simulation. Each of these programs is an Excel add-in, which means that
it runs within Excel by adding its ribbon of commands to Excel's toolbar.
In the textbook, instructions have been included at the ends of appropriate chapters for using the programs
that correspond to the chapter topic. The instructions provide step-by-step guides through the important
features of the programs. They have been written to be a self-contained tutorial. Some supplemental
information is contained in this manual, especially related to the implementation of specific problem
solutions.
Some general guidelines:
• To run an add-in within Excel, it is necessary to have the "Ignore other applications" option turned off. Choose Tools on the menu bar, then Options, and click on the General tab in the resulting Options dialog box. Be sure that the box by Ignore other applications is not checked.
• Macros in the add-in program become disabled automatically if the security level is set to High. To change the security level to Medium, in the Tools menu, point to Macros and then click Security.
• When the program crashes, restart the computer. It may appear as if the program has closed properly and can be reopened, but it probably has not, and it is best to restart the computer.
• The student version of PrecisionTree may limit the tree to 50 nodes. Some of the problems that examine the value of information in Chapter 12 can easily exceed this limit.
• When running @RISK simulations in the student version, make sure that only one worksheet is open at a time. Otherwise, the program will display the error message "Model Extends Beyond Allowed Region of Worksheet."
More tips are provided throughout this manual as they relate to implementing specific problem solutions.
JOIN THE DECISION ANALYSIS SOCIETY OF INFORMS
Instructors and students both are encouraged to join the Decision Analysis Society of INFORMS (the
Institute for Operations Research and the Management Sciences). This organization provides a wide array
of services for decision analysts, including a newsletter, an Internet list server, a site on the World Wide
Web (https://www.informs.org/Community/DAS), annual meetings, and information on job openings and
candidates for decision-analysis positions. For information on how to join, visit the web site.
CHAPTER 1
Introduction to Decision Analysis
Notes
This chapter serves as an introduction to the book and the course. It sets the tone and presents the basic
approach that will be used. The ideas of subjective judgment and modeling are stressed. Also, we mention
some basic aspects of decisions: uncertainty, preferences, decision structure, and sensitivity analysis.
In teaching decision analysis courses, it is critical to distinguish at the outset between good decisions and
good outcomes. Improving decisions mostly means improving the decision-making process. Students
should make decisions with their eyes open, having carefully considered the important issues at hand. This
is not to say that a good decision analysis foresees every possible outcome; indeed, many possible
outcomes are so unlikely that they may have no bearing whatsoever on the decision to be made. Often it is
helpful to imagine yourself in the future, looking back at your decision now. Will you be able to say,
regardless of the outcome: “Given everything I knew at the time — and I did a pretty good job of digging
out the important issues — I made the appropriate decision. If I were put back in the same situation, I
would go through the process pretty much in the same way and would probably make the same decision.”
If your decision making lets you say this, then you are making good decisions. The issue is not whether you
can foresee some unusual outcome that really is unforeseeable, even by the experts. The issue is whether you
carefully consider the aspects of the decision that are important and meaningful to you.
Chapter 1 emphasizes a modeling approach and the idea of a requisite model. If the notion of a requisite
model seems a bit slippery, useful references are the articles by Phillips. (Specific references can be found
in Making Hard Decisions with DecisionTools.) The concept is simple: A decision model is requisite if it
incorporates all of the essential elements of the decision situation. The cyclical process of modeling,
solution, sensitivity analysis, and then modeling again, provides the mechanism for identifying areas that
require more elaboration in the model and portions where no more modeling is needed (or even where
certain aspects can be ignored altogether). After going through the decision analysis cycle a few times, the
model should provide a reasonable representation of the situation and should provide insight regarding the
situation and available options. Note that the process, being a human one, is not guaranteed to converge in
any technical sense. Convergence to a requisite model must arise from 1) technical modeling expertise on
the part of the analyst, and 2) desire on the part of the decision maker to avoid the cognitive dissonance
associated with an incomplete or inappropriate model.
Also important is that the modeling approach presented throughout the book emphasizes value-focused
thinking (Keeney, 1992), especially the notion that values should be considered at the earliest phases of the
decision-making process. This concept is initially introduced on pages 5-6.
To show that decision analysis really is used very broadly, we have included the section “Where is
Decision Analysis Used?” Two references are given. The Harvard Business Review article by Ulvila and
Brown is particularly useful for students to read any time during the course to get a feel for real-world
applications of decision analysis.
Finally, we have included the section “Where Does the Software Fit In?” to introduce the DecisionTools
suite of programs.
Topical cross-reference for problems
Constructionist view: 1.12, Du Pont and Chlorofluorocarbons
Creativity: 1.8
Rice football: 1.7
Requisite models: 1.2
Subjective judgments: 1.3, 1.5
Solutions
1.1. Answers will be based on personal experience. It is important here to be sure the distinction is made
between good decisions on one hand (or a good decision-making process) and lucky outcomes on the other.
1.2. We will have models to represent the decision structure as well as uncertainty and preferences. The
whole point of using models is to create simplifications of the real world in such a way that analysis of the
model yields insight regarding the real-world situation. A requisite model is one that includes all essential
elements of the problem. Alternatively, a requisite model is one which, when subjected to sensitivity
analysis, yields no new intuitions. Not only are all essential elements included, but also all extraneous
elements are excluded.
1.3. Subjective judgments will play large roles in the modeling of uncertainty and preferences. Essentially
we will build representations of personal beliefs (probabilities) and preferences (utilities). In a more subtle
— and perhaps more important — way, subjective judgments also direct the modeling process. Subjective
judgments are necessary for determining the appropriateness of a model’s structure, what should be
included in the model, and so on. Thus, subjective judgments play central roles in decision analysis. Good
decision analysis cannot be done without subjective judgments.
1.4. An appropriate answer would be that decision analysis can improve your decisions — the way you
make decisions — by providing a framework for dealing with difficult decisions in a systematic way.
Along with the analytical framework, decision analysis provides a set of tools for constructing and
analyzing decision models, the purpose of which is to obtain insight regarding difficult decision problems.
1.5. You require her subjective judgments on a number of matters. First is the problem of identifying
important aspects of the problem. Her input also will be required for the development of models of her
uncertainty and her preferences. Thus, her judgments will be critical to the analysis.
This question may also lead students to consider the implications of delegating decisions to agents. How
can you ensure that the agent will see things the way you do? Will the same aspects of the problem be
important? Does the agent agree with you regarding the uncertainty inherent in the situation (which
outcomes are more or less likely)? Does the agent have the same feeling regarding trade-offs that must be
made? In many cases it may be appropriate to obtain and use an expert’s information. Can you identify
some specific decision situations where you would be willing to accept an agent’s recommendation? Does
it matter who the agent is? Can you identify other situations in which some of the agent’s input can be
taken at face value (a forecast, say), but must be incorporated into a model based primarily on your own
judgments?
1.6. Answers will be based on personal experience.
1.7. Some of the issues are 1) the monetary costs of staying in Division 1-A and of moving to Division III,
2) impact on both alumni and local businesses of moving to Division III, 3) political and social impact on
campus of changing divisions.
Alternatives include 1) stay in Division 1-A, 2) move to Division III, 3) move to Division II, 4) delay the
decision for a year or more to gather information, 5) investigate other sources of funding to cover the
deficit, 6) drop out from the NCAA altogether ...
There is considerable uncertainty around the impact on the school of switching divisions. What will the
fallout be from the faculty, students, alumni, and local businesses if Rice went to Division III? Will it
impact recruiting? If so, how? What are the financial consequences? Is the deficit due to mismanagement or
is it structural? What are the long-term consequences versus the immediate uproar? Sources of information
could be surveys given to each constituency and/or interviews with leaders of the constituencies. Perhaps
other schools have changed divisions, and information can be found from their experience.
The objectives that different groups want to work toward include 1) minimize short-term and long-term
deficit, 2) minimize social upheaval, 3) maximize enjoyment of collegiate sports, 4) maximize student
opportunity to participate in sports, 5) maximize quality of sports programs. Some students may identify
still other objectives. Trading off these objectives may mean trying to balance the issues that are important
to different constituent groups.
1.8. This is a creativity question. The Friends of Rice Athletics could raise funds, tuition and/or ticket prices
could be increased, the stadium's naming rights could be sold, the athletic staff could all take a pay cut, etc.
1.9. Answers will be based on personal experience.
1.10. Instead of thinking only about risk versus return, the socially responsible investor also must consider
how to trade off risk and return for ethical integrity. It would not be unreasonable to suspect that to obtain a
higher level of ethical integrity in the portfolio, the investor must accept a lower expected return, higher
level of risk, or both.
1.11. For the most part, decision analysis is most appropriate for strategic, or one-time, decisions. These are
situations that we have not thought about before and “don’t know what to do.” Hence, it is worthwhile to
engage in some “decision making,” or decision analysis, to figure out what would be an appropriate action.
This is not to say that decision analysis is inappropriate for repetitive decisions. In fact, if a decision is
repeated many times, the savings that can be achieved over time by improving the decision-making process
can be substantial. In fact, this is the basis of much of management science. However, the reliance on
subjective judgments for the construction of tailored decision models in each decision situation may render
decision analysis, as portrayed here, unsuitable for dealing with repetitive situations. The point, though, is
that if one anticipates a long string of repetitive decisions in the future, and an optimal decision strategy has
not been previously developed, then the situation is indeed one of “not knowing what to do.” A decision-modeling approach would indeed be appropriate in that case.
1.12. Beliefs and values do appear to change and develop over time as we think about new issues. Decision
analysis implicitly provides a framework for such changes through the identification and modeling of
decision problems, beliefs regarding uncertainty, and preferences.
Case Study: Commercial Space Travel
A student's answer to being an early adopter or waiting until the industry matures is a personal choice and
depends on many factors. Some of these are: track record of the industry, affordability, health of the student vis-à-vis the demands of space travel, interest level, etc.
It certainly is true that new firms can come along and change an industry with leaner production or
management systems. Often, these firms do not have to contend with the legacy of older systems in more
established firms. In addition, the savings of a younger workforce and less established pension program can
be quite significant. Thus, it is reasonable that the new furry animals can be competitive with a massive
governmental organization.
On the other hand, the lack of experience of extreme situations might turn into a disaster for a newly
established firm. The cost savings of the newer firms could come from more efficient operations or it could
come from not having the equipment and policies in place to handle unusual situations. A space-flight
disaster would make headlines across the world and probably doom the responsible for-profit company. To
continue the survival-of-the-fittest analogy, it is not that every for-profit company will survive by avoiding
life-threatening situations; it is that a subgroup will survive. Would you want to put your life or the life of a
loved one on the line given the uncertainties surrounding early adopters in space travel?
Case Study: Du Pont and Chlorofluorocarbons
The major issues include shareholder welfare, social and environmental responsibility, and ethics. Of
course, all of these might be thought of as means for ensuring the long-run profitability or survivability of
the firm. The major sources of uncertainty involve research and development. Will substitute products
work? Will they be accepted? The CEO might wonder whether the ozone problem really is a problem, or
whether the observed recent changes are part of a normal cycle. Finally, could Du Pont’s efforts really have
an effect, and how much?
It is undoubtedly the case that Du Pont’s views of the situation have changed over time. Early on, the
chlorofluorocarbon issue was essentially ignored; no one knew that a problem existed. In the 1970s and
1980s, it became apparent that a problem did exist, and as scientific evidence accumulated, the problem
appeared to become more serious. Finally, we have arrived at a position where the ozone issue clearly
matters. (In fact, it matters mostly because of consumers’ views and preferences rather than because of the
scientific evidence, which appears to be less than conclusive.) Du Pont appears to be asking “Can we do
anything to help?” Many companies have developed a kind of “social awareness” in the past two decades
as a way to maintain a high-integrity profile.
Case Study: Choosing a Vice-Presidential Candidate
A vice president tends not to have an important role in American politics except in gaining electoral votes
during the election. A running mate is often chosen to balance the ticket geographically and ideologically.
For example, choosing a conservative woman from Alaska helped McCain appeal to the conservative base
of the Republican Party and to women. Alaska, however, has the minimum number of possible electoral
votes at 3. While McCain could reasonably count on winning Alaska’s 3 electoral votes, he could have
chosen someone else from a more populous state for the electoral votes. McCain must have thought that
Ms. Palin would provide a ticket with a wide appeal and that she could help pick up votes across the whole
country.
It is hard to know how McCain’s health affected his choice of Ms. Palin. Clearly, he knew how he felt, and
given that he is still in office eight years later, it is reasonable to assume that his health was not a major
concern when choosing Ms. Palin. A portion of the population, however, did find his age coupled with her
inexperience troubling. If he personally was not concerned, he might at least have considered how the
voters would perceive Ms. Palin being one heartbeat away from the presidency of the U.S.A.
The president is constantly gathering information, from the daily threat-assessment reports to meetings with
his cabinet, congressional members, and world leaders. However, even with all of these intelligence
reports, much uncertainty still remains, often requiring the president to make a judgment call. One of the
more famous examples of this is President Obama’s decision to send U.S. forces into Pakistan after Osama
bin Laden. Although it was thought that bin Laden was hiding inside a residence, there was not definitive
proof. Moreover, Obama also had to make judgment calls concerning the size of the force to send in and
whether to alert Pakistani officials. Generally, the president’s decisions are based (hopefully) on both facts
and judgments. McCain’s choice of Sarah Palin led many voters to question his judgment.
Choosing Sarah Palin might have turned out to be a very good choice for the United States, but it certainly
had many political overtones. In all fairness, the choice of a vice-presidential running mate is a very
political decision, one specifically aimed at winning the election – a political event. On the other hand,
appearances are of utmost importance in elections, and even an unsubstantiated rumor can completely
derail a candidate. Thus, in choosing his running mate, McCain probably should have weighed the pros and
cons of each candidate using his fundamental objectives, the fundamental objectives of his party, and, of
course, the fundamental objectives of the United States as a whole.
CHAPTER 2
Elements of Decision Problems
Notes
This chapter is intended to start the reader thinking about decision problems in decision-analysis terms.
Thus, we talk about decisions to make, uncertain events, and valuing consequences. To make sure that the
richness of the terrain is understood, we introduce the concepts of dynamic decision making, a planning
horizon, and trade-offs.
In our definition of terms, we refer to a decision maker's objectives; the term values is used to refer to
the decision maker's set of objectives and their structure. The terms decision and alternative are adopted,
and are used throughout the book rather than similar terms such as “choice” and “option.” Likewise, we
have adopted the term uncertain event (and sometimes chance event), which then has outcomes. Finally,
and perhaps most significant, we have adopted Savage’s term consequence to refer to what the decision
maker experiences as a result of a combination of alternative(s) chosen and chance outcome(s). Another
term that we use that comes from Keeney’s value-focused thinking is the notion of decision context. This
term is discussed in the text and still more thoroughly in Keeney’s book. Briefly, it refers to the specific
identification of the problem (from which we might suspect that when one solves the wrong problem, one
has used the wrong decision context). It also can be used as a way to identify the class of alternatives that
one is willing to consider; a broader context (safety in auto travel as compared to specific traffic laws, for
example) leads a decision maker to consider a broader class of alternatives.
The time value of money appears in Chapter 2 and may seem out of place in some ways. It is here because
it is a fundamental way that streams of cash flows are valued, and because it provides a nice example of a
basic trade-off. Also, we have found that since most students have already been exposed to discounting, we
have been able to incorporate NPV calculations into problems and case studies throughout the book. For
the few students who have not encountered the topic, the early introduction to discounting in Chapter 2
provides enough information for them to proceed. Of course, the section on NPV may be skipped and used
as a reference later for problems that require discounting or for the discussion of trade-offs in Chapter 15.
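For students who want to see the arithmetic behind discounting outside of Excel, a small helper function can make the NPV formula explicit. The sketch below is in Python and is only illustrative; the text and solution files use Excel's NPV function. Note one convention difference: Excel's NPV discounts its first value by one period, whereas this helper treats the first cash flow as occurring at time 0.

    def npv(rate, cash_flows):
        # cash_flows[t] occurs at the end of period t; cash_flows[0] is time 0
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

    # Illustrative numbers only: pay 1000 now, receive 600 at the end of each
    # of the next two years, discounted at 10% per year.
    print(round(npv(0.10, [-1000, 600, 600]), 2))   # 41.32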
Topical cross-reference for problems
Requisite models: 2.13
“Secretary” problem: 2.6
Sequential decisions: 2.2, 2.6, Early Bird, Inc.
Time value of money: 2.9-2.12, The Value of Patience
Solutions
2.1. a. Some objectives might be to minimize cost, maximize safety, maximize comfort, maximize
reliability, maximize cargo capacity (for shopping or vacationing), maximize maneuverability (in city
traffic). Students will undoubtedly come up with others as well.
b. In this new context, appropriate objectives might be minimize travel time, maximize exercise, minimize
total transportation cost, minimize use of fossil fuels, maximize ease (suitably defined) of visiting friends
and shopping. New alternatives to consider include using a bicycle or public transportation, walking,
rollerblading, skateboarding, motorcycle or scooter, renting a car, such as Zipcar. One might even consider
moving in order to live in a more convenient location.
2.2. Future alternatives can affect the eventual value of the consequence. For example, a university faculty
member, when accepting a position at a different institution, may not immediately resign his or her position
at the first university. Instead, a leave of absence may be taken. The leave of absence provides the
opportunity to decide in the future whether to stay at the new institution or return to the old one. A faculty
member would most likely think about the two different situations — resigning the current position
immediately versus taking a leave and postponing a permanent decision — in very different ways.
Another good example is purchasing a house. For many people in our mobile society, it is important to
think about the potential for selling the house in the future. Many purchasers might buy an unusual house
that suits them fine. However, if the house is too unusual, would-be purchasers might be afraid that, if they
decide to sell the house in the near future, it may be difficult to find a buyer and the sales price might be
lower than it would be for a more conventional house.
Finally, the current choice might eliminate a future valuable option. For example, our policy of powering
cars with fossil fuels reduces our options for using oil for potentially more valuable and less destructive
future activities.
2.3. In the first case, the planning horizon may be tied directly to the solution of the specific problem at
hand. If the problem is an isolated one not expected to repeat, this is a reasonable horizon. If more similar
problems are anticipated, the planning horizon might change to look forward in time far enough to
anticipate future such situations. If the firm is considering hiring a permanent employee or training existing
employees, then a planning horizon should be long enough to accommodate employee-related issues
(training, reviews, career advancement, and so on). In this broader context, the firm must consider
objectives related to hiring a new person (or training), which might include maximizing the welfare of
current employees, minimizing long-term costs of dealing with the class of problems, satisfying
affirmative-action requirements, or equity in treatment of employees.
2.4. In making any decision, it is important to 1) use all currently available information and 2) think
carefully about future uncertainty. Thus it is necessary to keep track of exactly what information is
available at each point in time. If information is lost or forgotten, then it will either be treated as an
uncertainty or simply not used when deciding. Clearly, the farmer would want to keep up to date on the
weather and incorporate any change to the forecast.
2.5. Some possibilities: insurance, hire another firm to manage the protection operation, press for
regulatory decisions and evaluations (i.e., get public policy makers to do the necessary analysis), do
nothing, develop a “cleanup cooperative” with other firms, or design and develop equipment that can serve
a day-to-day purpose but be converted easily to cleanup equipment. Students may come up with a wide
variety of ideas.
2.6. The employer should think about qualifications of the applicants. The qualifications that he seeks
should be intimately related to what the employer wants to accomplish (objectives — e.g., increase market
share) and hence to the way the successful applicant will be evaluated (attributes — e.g., sales). The
planning horizon may be critical. Is the employer interested in long-term or short-term performance? The
uncertainty that the employer faces, of course, is the uncertainty regarding the applicant’s future
performance on the specified attributes.
If the decision maker must decide whether to make a job offer at the end of each interview, then the
problem becomes a dynamic one. That is, after each interview the decision maker must decide whether to
make the offer (and end the search) or to continue the search for at least one more interview, at which time
the same decision arises. In this version of the problem, the decision maker faces an added uncertainty: the
qualifications of the applicants still to come. (This dynamic problem is sometimes known as the “Secretary
Problem,” and has been analyzed extensively and in many different forms in the operations-research
literature. For example, see DeGroot (2004) Optimal Statistical Decisions, Hoboken, NJ: Wiley & Sons. P.
325.)
2.7. Decisions to make: How to invest current funds. Possible alternatives include do nothing, purchase
specific properties, purchase options, etc. Other decisions might include how to finance the purchase, when
to resell, how much rent to charge, and so on. Note that the situation is a dynamic one if we consider future
investment opportunities that may be limited by current investments.
Uncertain events: Future market conditions (for resale or renting), occupancy rates, costs (management,
maintenance, insurance), and rental income.
Possible outcomes: Most likely such an investor will be interested in future cash flows. Important trade-offs
include time value of money and current versus future investment opportunities.
2.8. Answers depend on personal experience and will vary widely. Be sure to consider current and future
decisions and uncertain events, the planning horizon, and important trade-offs.
2.9. NPV = -2500/1.13^0 + 1500/1.13^1 + 1700/1.13^2
         = -2500 + 1327.43 + 1331.35
         = $158.78.
Or use Excel’s function NPV:
=-2500+NPV(0.13,1500,1700) = $158.78
The Excel file, “Problem 2.9.xls” has the equation set-up as a reference to cells that contain the cash flows.
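As a quick arithmetic check, the same sum can be written out directly. A minimal Python sketch, mirroring the calculation above rather than the Excel file:

    npv = -2500 + 1500 / 1.13 + 1700 / 1.13**2
    print(round(npv, 2))   # 158.78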
2.10. NPV = -12,000/1.12 + 5000/1.12^2 + 5000/1.12^3 - 2000/1.12^4 + 6000/1.12^5 + 6000/1.12^6
          = -10,714.29 + 3985.97 + 3558.90 - 1271.04 + 3404.56 + 3039.79
          = $2003.90
Using Excel’s NPV function:
=NPV(0.12,-12000,5000, 5000,-2000,6000,6000)
= $2,003.90
The internal rate of return (IRR) for this cash flow is approximately 19.2%.
The Excel file, “Problem 2.10.xls” has the equation set-up as a reference to cells that contain the cash
flows.
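The NPV and the approximate IRR can also be checked numerically. The following Python sketch is illustrative only and is not part of the solution files; it uses the same timing convention as the Excel formula above (every cash flow, including the -12,000, is discounted, starting at period 1) and finds the IRR by bisection.

    flows = [-12000, 5000, 5000, -2000, 6000, 6000]

    def npv(rate, cfs):
        # discounting starts at period 1, matching Excel's NPV(rate, values)
        return sum(cf / (1 + rate) ** (t + 1) for t, cf in enumerate(cfs))

    print(round(npv(0.12, flows), 2))    # 2003.9, i.e., $2003.90

    lo, hi = 0.0, 1.0                    # NPV is positive at 0% and negative at 100%
    for _ in range(60):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if npv(mid, flows) > 0 else (lo, mid)
    print(round(lo, 3))                  # about 0.192, matching the 19.2% above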
2.11.
If the annual rate = 10%, then the monthly (periodic) rate r = 10% / 12 = 0.83%.
NPV(0.83%) = -1000 + 90/1.0083 + 90/1.0083^2 + ... + 90/1.0083^12 = $23.71.
Or use Excel’s NPV function, assume the 12 payments of $90 appear in cells B13:B24:
=-1000+NPV(0.1/12,B13:B24)= $23.71
(As shown in the Excel file “Problem 2.11.xls”)
If the annual rate = 20%, then the monthly (periodic) rate r = 20% / 12 = 1.67%.
NPV(1.67%) = -1000 + 90/1.0167 + 90/1.0167^2 + ... + 90/1.0167^12 = -$28.44.
Or use Excel’s NPV function, assume the 12 payments of $90 appear in cells B13:B24:
=-1000+NPV(0.2/12,B13:B24)= $-28.44
(As shown in the Excel file “Problem 2.11.xls”)
The annual interest rate (IRR) that gives NPV=0 is approximately 14.45%. You can verify this result by
substituting 14.45% / 12 = 1.20% for r in the calculations above.
Or with Excel’s IRR function, IRR(Values, Guess), assume the series of payments (the initial $1000
payment and the series of 12 payments of $90) are in cells B12:B24:
=IRR(B12:B24,0) = 1.20%
(As shown in the Excel file “Problem 2.11.xls”)
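A small Python sketch (illustrative only) makes the periodicity point concrete: divide the annual rate by 12 before discounting the monthly payments.

    def deal_npv(annual_rate, payment=90, months=12, principal=1000):
        r = annual_rate / 12            # monthly rate to match the monthly payments
        return -principal + sum(payment / (1 + r) ** k for k in range(1, months + 1))

    print(round(deal_npv(0.10), 2))     # about 23.71
    print(round(deal_npv(0.20), 2))     # about -28.44
    print(round(deal_npv(0.1445), 2))   # close to 0, i.e., roughly the break-even (IRR) rate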
2.12. a. If the annual rate = 10%, then the monthly rate r = 10%/12 = 0.83%. Always match the periodicity
of the rate to that of the payments or cash flows.
NPV(Terry) = 600 - 55/1.0083 - 55/1.0083^2 - ... - 55/1.0083^12 = -$25.60.
Be sure to get the orientation correct. For Terry, the loan is a positive cash flow, and the payments are
negative cash flows (outflows). Thus, the NPV is negative. Because of the negative NPV, Terry should
know that this deal is not in his favor and that the actual interest rate being charged is not 10% annually. If
it were, then NPV should equal zero. The actual annual interest being charged must be greater than 10% as
NPV is less than zero.
Or with Excel’s NPV function, assume the series of 12 payments of $55 are in cells B12:B23.
=NPV(0.1/12,B12:B23)+600
= -$25.60
These calculations and those associated with the remaining parts of the question are shown in the Excel file
“Problem 2.12.xls”.
b. For the manager, the $600 loan is a negative cash flow, and the payments are positive cash flows. Hence,
NPV(Mgr) = -600 + 55/1.0083 + 55/1.0083^2 + ... + 55/1.0083^12 = $25.60.
Or with Excel’s NPV function, assume the series of 12 receipts of $55 are in cells B12:B23.
=NPV(0.1/12,B12:B23)-600
= $25.60
c. If the annual rate is 18%, then NPV is about $-0.08. In other words, the actual rate on this loan (the
internal rate of return or IRR) is just under 18%.
Using Excel’s IRR function, and assuming the cash flows are in cells B11:B23:
=IRR(B11:B23,0)*12
= 17.97% annually
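The mirror-image structure of parts a and b is easy to see in a short Python sketch (illustrative only): the present value of the twelve $55 payments is the same in both cases, and only the sign of the $600 changes.

    r = 0.10 / 12
    pv_payments = sum(55 / (1 + r) ** k for k in range(1, 13))
    print(round(600 - pv_payments, 2))   # Terry's view: about -25.60
    print(round(pv_payments - 600, 2))   # manager's view: about 25.60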
2.13. Should future decisions ever be treated as uncertain events? Under some circumstances, this may not
be unreasonable.
If the node for selling the car is included at all, then possible consequences must be considered. For
example, the consequence would be the price obtained if he decides to sell, whereas if he keeps the car, the
consequence would be the length of the car’s life and cost to maintain and repair it.
If the node is a decision node, the requisite model would have to identify the essential events and
information prior to the decision. If the node is a chance event, this amounts to collapsing the model, and
hence may be useful in a first-cut analysis of a complicated problem. It would be necessary to think about
scenarios that would lead to selling the car or not, and to evaluate the uncertainty surrounding each
scenario.
2.14. Vijay’s objectives include maximizing profit, minimizing unsavory behavior, minimizing legal costs,
and maximizing Rising Moon’s appeal. Students will think of other objectives. Vijay’s decision is to apply
for a liquor license, and if granted, then he could decide on how to manage drinking at Rising Moon. For
example, he might be able to create a separate area of his place, such as a beer garden, where drinking
alcohol is allowed. Vijay could also decide to broaden his menu in other ways than serving alcohol. The
uncertainties include future sales and profit for Rising Moon, market reaction to offering alcohol, amount
of disruption occurring from serving alcohol, and legal liabilities. Consequence measures for sales, profit,
and legal costs are clear. He could simply count the number of disruptions to the business due to alcohol or
he could try to associate a cost figure to the unsavory behavior. Rising Moon’s appeal could be measured
by the change in sales volume due to introducing alcohol.
Vijay will certainly, as law requires, hedge by carrying insurance, and he will want to think carefully about
the level of insurance. As mentioned, he might be able to have a designated area for drinking alcohol. He
could gather information now via surveys or speaking to other local merchants. And he can always change
his mind later and stop serving alcohol.
Case Study: The Value of Patience
The Excel solution for this case is provided in the file “Value of Patience case.xlsx”.
1. NPV = -385,000 + 100,000/1.18 + 100,000/1.18^2 + ... + 100,000/1.18^7 = -$3847.
Thus, Union should not accept the project because the NPV is negative.
Using Excel’s NPV function and assuming the series of 7 payments of $100,000 are in cells B12:B18:
=-385000+NPV(0.18,B12:B18)
= -$3847
2. NPV = -231,000 + 50,000/1.10 + 50,000/1.10^2 + ... + 50,000/1.10^7 = $12,421.
This portion of the project is acceptable to Briggs because it has a positive NPV.
Using Excel’s NPV function and assuming the series of 7 payments of $50,000 are in cells E12:E18:
= -231,000+NPV(0.1,E12:E18)
= $12,421
3. NPV = -154,000 + 50,000/1.18 + 50,000/1.18^2 + ... + 50,000/1.18^7 = $36,576.
Thus, this portion of the project is profitable to Union.
Using Excel’s NPV function and assuming the series of 7 payments of $50,000 are in cells H12:H18:
= -154,000+NPV(0.18,H12:H18)
= $36,576
Some students will want to consider the other $231,000 that Union was considering investing as part of the
entire project. Note, however, that if Union invests this money at their 18% rate, the NPV for that particular
investment would be zero. Thus the NPV for the entire $385,000 would be the sum of the two NPVs, or
$36,576.
4. Patience usually refers to a willingness to wait. Briggs, with the lower interest rate, is willing to wait
longer than Union to be paid back. The higher interest rate for Union can be thought of as an indication of
impatience; Union needs to be paid back sooner than Briggs.
The uneven split they have engineered exploits this difference between the two parties. For Briggs, a
payment of $50,000 per year is adequate for the initial investment of $231,000. On the other hand, the less
patient Union invests less ($154,000) and so the $50,000 per year is satisfactory.
As an alternative arrangement, suppose that the two parties arrange to split the annual payments in such a
way that Union gets more money early, and Briggs gets more later. For example, suppose each invests half,
or $192,500. Union gets $100,000 per year for years 1-3, and Briggs gets $100,000 per year for years 4-7.
This arrangement provides a positive NPV for each side: NPV(Union) = $24,927, NPV(Briggs) = $45,657.
Briggs really is more patient than Union!
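The NPVs for this alternative arrangement can be verified with a short Python sketch (illustrative only; the case solution file uses Excel):

    def npv(rate, invest, receipts):
        # receipts[t-1] arrives at the end of year t
        return -invest + sum(cf / (1 + rate) ** t
                             for t, cf in enumerate(receipts, start=1))

    union  = [100_000, 100_000, 100_000, 0, 0, 0, 0]        # Union paid in years 1-3
    briggs = [0, 0, 0, 100_000, 100_000, 100_000, 100_000]  # Briggs paid in years 4-7
    print(round(npv(0.18, 192_500, union)))    # about 24,927
    print(round(npv(0.10, 192_500, briggs)))   # about 45,657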
Case Study: Early Bird, Inc.
1. The stated objective is to gain market share by the end of this time. Other objectives might be to
maximize profit (perhaps appropriate in a broader strategic context) or to enhance its public image.
2. Early Bird’s planning horizon must be at least through the end of the current promotion. In a first-cut
analysis, the planning horizon might be set at the end of the promotion plus two months (to evaluate how
sales, profits, and market share stabilize after the promotion is over). If another promotion is being planned,
it may be appropriate to consider how the outcome of the current situation could affect the next promotion
decision.
3, 4. [Influence diagram: a decision node "Move up promotion start date?" (made now), chance nodes "New Morning's reaction" and "Customer response," a later decision node "Reaction to New Morning," and a consequence node measuring market share or profits.]
CHAPTER 3
Structuring Decisions
Notes
Chapters 3 and 4 might be considered the heart of Making Hard Decisions with DecisionTools. Here is
where most of the action happens. Chapter 3 describes the process of structuring objectives and building
influence diagrams and decision trees. Chapter 4 discusses analysis. The chapter begins with a
comprehensive discussion of value structuring and incorporates value-focused thinking throughout.
Constructing influence diagrams and decision trees to reflect multiple objectives is demonstrated, and the
chapter contains a discussion of scales for measuring achievement of fundamental objectives, including
how to construct scales for objectives with no natural measures.
Understanding one’s objectives in a decision context is a crucial step in modeling the decision. This section
of the chapter shows how to identify and structure values, with an important emphasis on distinguishing
between fundamental and means objectives and creating hierarchies and networks, respectively, to
represent these. The fundamental objectives are the main reasons for caring about a decision in the first
place, and so they play a large role in subsequent modeling with influence diagrams or decision trees.
Students can generally grasp the concepts of influence diagrams and how to interpret them. Creating
influence diagrams, on the other hand, seems to be much more difficult. Thus, in teaching students how to
create an influence diagram for a specific situation, we stress basic influence diagrams, in particular the
basic risky decision and imperfect information. Students should be able to identify these basic forms and
modify them to match specific problems. The problems at the end of the chapter range from simple
identification of basic forms to construction of diagrams that are fairly complex.
The discussion of decision trees is straightforward, and many students have already been exposed to
decision trees somewhere in their academic careers. Again, a useful strategy seems to be to stress some of
the basic forms.
Also discussed in Chapter 3 is the matter of including appropriate details in the decision model. One issue
is the inclusion of probabilities and payoffs. More crucial is the clarity test and the development of scales
for measuring fundamental objectives. The matter of clarifying definitions of alternatives, outcomes, and
consequences is absolutely crucial in real-world problems. The clarity test forces us to define all aspects of
a problem with great care. The advantage in the classroom of religiously applying the clarity test is that the
problems one addresses take on much more realism and relevance for the students. It is very easy to be lazy
and gloss over definitional issues in working through a problem (e.g., “Let’s assume that the market could
go either up or down”). If care is taken to define events to pass the clarity test (“The market goes up means
that the Standard & Poor’s 500 Index rises”), problems become more realistic and engaging.
The last section in Chapter 3 describes in detail how to use PrecisionTree for structuring decisions. The
instructions are intended to be a self-contained tutorial on constructing decision trees and influence
diagrams. PrecisionTree does have an interactive video tutorial along with video tutorials on the basics and
videos from experts. These videos along with example spreadsheets and the manual all can be found in the
PrecisionTree menu ribbon under Help, then choosing Welcome to PrecisionTree.
Please note that if you enter probability values that do not sum to 100% for a chance node, then the
program uses normalized probability values. For example, if a chance node has two branches and the
corresponding probabilities entered are (10%, 80%), then the model will use (11.11%, 88.89%) – i.e.,
0.1/(0.1+0.8) and 0.8/(0.1+0.8).
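The normalization itself is simple arithmetic; here is a minimal Python sketch of the calculation described above (not PrecisionTree code):

    entered = [0.10, 0.80]                     # probabilities typed on the two branches
    total = sum(entered)
    normalized = [p / total for p in entered]
    print(normalized)                          # [0.111..., 0.888...], i.e., 11.11% and 88.89%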
Because this chapter focuses on structuring the decision, many of the problems do not have all of the
numbers required to complete the model. In some cases, the spreadsheet solution provides the structure of
the problem only, and the formulas were deleted (for example, problem 3.10). In other cases, the model is
completed with representative numbers (for example, problem 3.5). In the completed model, you will see
expected values and standard deviations in the decision trees and influence diagrams. These topics are
discussed in Chapter 4.
Topical cross-reference for problems
Branch pay-off formula: 3.25 - 3.28
Calculation nodes: 3.14, Prescribed Fire
Clarity test: 3.4, 3.5, 3.12, 3.21, Prescribed Fire
Constructed scales: 3.3, 3.14, 3.20, 3.21
Convert to tree: 3.11, 3.26, 3.28, Prescribed Fire
Decision trees: 3.5 - 3.7, 3.6 - 3.11, 3.13, 3.20 - 3.28, Prescribed Fire, S.S. Kuniang, Hillblom Estate
Imperfect information: 3.9, 3.11
Influence diagrams: 3.4, 3.6 - 3.9, 3.11, 3.14, 3.16, 3.20, 3.21, 3.26, 3.28, Prescribed Fire, S.S. Kuniang
Net present value: 3.24, 3.25
Objectives: 3.1 - 3.3, 3.7, 3.10, 3.14 - 3.19, 3.21, 3.23, Prescribed Fire
PrecisionTree: 3.5, 3.9, 3.24 - 3.28, Prescribed Fire, S.S. Kuniang
Sensitivity analysis: 3.20
Umbrella problem: 3.9
Solutions
3.1. Fundamental objectives are the essential reasons we care about a decision, whereas means objectives
are things we care about because they help us achieve the fundamental objectives. In the automotive safety
example, maximizing seat-belt use is a means objective because it helps to achieve the fundamental
objectives of minimizing lives lost and injuries. We try to measure achievement of fundamental objectives
because we want to know how a consequence “stacks up” in terms of the things we care about.
Separating means objectives from fundamental objectives is important in Chapter 3 if only to be sure that
we are clear on the fundamental objectives, so that we know what to measure. In Chapter 6 we will see that
the means-objectives network is fertile ground for creating new alternatives.
3.2. Answers will vary because different individuals have different objectives. Here is one possibility.
(Means objectives are indicated by italics.)
[Objectives hierarchy for "Best Apartment." Fundamental objectives: minimize rent; minimize travel time (to school, to shopping, to leisure-time activities); maximize ambiance; maximize use of leisure time (alone, with friends, with neighbors); maximize discretionary $$. Means objectives: centrally located; maximize features (e.g., pool, sauna, laundry); parking at apartment; maximize windows and light.]
3.3. A constructed scale for “ambiance” might be the following:
Best: Many large windows. Unit is like new. Entrance and landscape are clean and inviting with many plants and open areas.
Second best: Unit has excellent light into living areas, but bedrooms are poorly lit. Unit is clean and maintained, but there is some evidence of wear. Entrance and landscaping include some plants and usable open areas but are not luxurious.
Middle: Unit has one large window that admits sufficient light to the living room. Unit is reasonably clean; a few defects in walls, woodwork, floors. Entrance is not inviting but does appear safe. Landscaping is adequate with a few plants. Minimal open areas.
Second worst: Unit has at least one window per room, but the windows are small. Considerable wear. Entrance is dark. Landscaping is poor; few plants, and small open areas are not inviting.
Worst: Unit has few windows, is not especially clean. Carpet has stains, woodwork and walls are marred. Entrance is dark and dreary, appears unsafe. Landscaping is poor or nonexistent; no plants, no usable open areas.
3.4. It is reasonable in this situation to assume that the bank’s objective is to maximize its profit on the
loan, although there could be other objectives such as serving a particular clientele or gaining market share.
The main risk is whether the borrower will default on the loan, and the credit report serves as imperfect
information. Assuming that profit is the only objective, a simple influence diagram would be:
[Influence diagram: decision node "Make Loan?"; chance nodes "Default?" and "Credit report," with an arrow from "Default?" to "Credit report" and the report known before the loan decision; consequence node "Profit."]
Note the node labeled “Default?” Some students may be tempted to call this node something like “Credit
worthy?” In fact, though, what matters to the bank is whether the money is paid back or not. A more
precise analysis would require the banker to consider the probability distribution for the amount paid back
(perhaps calculated as NPV for various possible cash flows).
Another question is whether the arrow from “Default” to “Credit Report” might not be better going the
other way. On one hand, it might be easier to think about the probability of default given a particular credit
report. But it might be more difficult to make judgments about the likelihood of a particular report without
conditioning first on whether the borrower defaults.
Also, note that the “Credit Report” node will probably have as its outcome some kind of summary measure
based on many credit characteristics reported by the credit bureau. It might have something like ratings that
bonds receive (AAA, AA, A, and so on). Arriving at a summary measure that passes the clarity test could
be difficult and certainly would be an important aspect of the problem.
If the diagram above seems incomplete, a “Credit worthiness” node could be included and connected to
both “Credit report” and “Default”:
[Influence diagram: the same structure, with an added chance node "Credit worthiness" that has arrows to both "Credit report" and "Default?".]
Both of these alternative influence diagrams are shown in the Excel file “Problem 3.4.xlsx.” Two different
types of arcs are used in the diagrams: 1) value only and 2) value and timing, and these are explained in the
text. A value influence type influences the payoff calculation and a timing type exists if the outcome
precedes that calculation chronologically (or is known prior to the event).
3.5. This is a range-of-risk dilemma. Important components of profit include all of the different costs and
revenue, especially box-office receipts, royalties, licensing fees, foreign rights, and so on. Furthermore, the
definition of profits to pass the clarity test would require specification of a planning horizon. At the
specified time in the future, all costs and revenues would be combined to calculate the movie’s profits. In
its simplest form, the decision tree would be as drawn below. Of course, other pertinent chance nodes could
be included.
[Decision tree: the "Make movie" branch leads to a continuous chance node for Revenue with consequence Profit = Revenue - Cost; the "Don't make movie" branch yields the value of the best alternative.]
The revenue for the movie is drawn as a continuous uncertainty node in the above decision tree.
Continuous distributions can be handled two ways in PrecisionTree either with a discrete approximation
(see Chapter 8 in the text) or with simulation (see Chapter 11 in the text). This decision tree with a discrete
approximation of some sample revenue values is shown in the Excel file “Problem 3.5.xlsx.” A potentially
useful exercise is to have the students alter the sample values to see the effect on the model and specifically
the preferred alternative.
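If students want to experiment outside of PrecisionTree, a discrete approximation of the revenue node reduces to a simple expected-value calculation. The Python sketch below uses made-up numbers; the sample values in "Problem 3.5.xlsx" may differ.

    # hypothetical three-point approximation of revenue (illustrative values only)
    revenues = [20e6, 50e6, 90e6]
    probs    = [0.25, 0.50, 0.25]
    cost     = 40e6                    # hypothetical production cost
    best_alternative = 0.0             # value of not making the movie

    ev_make = sum(p * (rev - cost) for p, rev in zip(probs, revenues))
    print(ev_make)                              # 12,500,000 with these numbers
    print(max(ev_make, best_alternative))       # expected value of the better choice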
3.6.
                     Strengths                                    Weaknesses
Influence Diagrams   Compact; good for communication;             Details suppressed
                     good for overview of large problems
Decision Trees       Display details; flexible representation     Become very messy for large problems
Which should he use? I would certainly use influence diagrams first to present an overview. If details must
be discussed, a decision tree may work well for that.
3.7. This problem can be handled well with a simple decision tree and consequence matrix. See Chapter 4
for a discussion of symmetry in decision problems.
Best representation:

                        Influence Diagram                 Decision Tree
Max communication
    Overview            Excellent for large problems     Poor, due to complexity
    Details             Details hidden                   Details displayed
Max flexibility         Best for symmetric               Very flexible for
                        decision problems                asymmetric decisions
3.8.
[Influence diagram: the "Run for Senate?" decision and the "Win Senate election?" chance node both feed the "Outcome" node. Decision tree: the decision (Run for Senate or Run for House) is followed by the "Win Senate?" chance node; running for the Senate leads to US Senator (Yes) or Lawyer (No), while running for the House leads to US Representative in either case.]
Note that the outcome of the "Win Senate" event is vacuous if the decision is made to run for the House.
Some students will want to include an arc from the decision to the chance node on the grounds that the
chance of winning the election depends on the choice made:
[Influence diagram: as above, but with an arc from "Run for Senate?" to "Win election?" Decision tree: Run for Senate is followed by "Win Election?" (Yes: US Senator; No: Lawyer), while Run for House leads directly to US Representative.]
Note that it is not possible to lose the House election.
The arc serves only to capture the asymmetry of the problem. To model asymmetries in an influence diagram,
PrecisionTree uses structure arcs. When a structural influence is desired, it is necessary to specify how the
predecessor node will affect the structure of the outcomes from the successor node. By using a structure
arc, if the decision is made to run for the House, the "Win election?" node is skipped. This influence
diagram is shown in the Excel file "Problem 3.8.xlsx."
3.9. (Thanks to David Braden for this solution.) The following answers are based on the interpretation that
the suit will be ruined if it rains. They are a good first pass at the problem structure (but see below).
[Decision tree: Take umbrella: if rain, the suit is not ruined and there is a sense of relief; if no rain, the suit is not ruined but the inconvenience is incurred. Do not take umbrella: if rain, the suit is ruined; if no rain, the suit is not ruined.]
(A) Decision tree
[(B) Basic risky decision: influence diagram with the "Take Umbrella?" decision and the "Rain" chance node feeding "Satisfaction." (C) Imperfect information: the same diagram with a "Weather Forecast" chance node, influenced by "Rain," informing the "Take Umbrella?" decision.]
The Excel solution “Problem 3.9.xlsx” shows a realization of this problem assuming the cost of the suit is
$200, the cost of the inconvenience of carrying an umbrella when it is not raining is $20, the probability of
rain is 0.25, and the weather forecaster is 90% accurate.
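For instructors who want a quick numeric check on those assumptions, the expected costs can be computed directly; the sketch below is ours (the variable names are not from the spreadsheet) and uses only the prior probability of rain, ignoring the forecast.

```python
# Expected-cost check for the umbrella decision, using the values assumed above.
p_rain = 0.25
suit_cost = 200         # loss if the suit is ruined
inconvenience = 20      # cost of carrying the umbrella on a dry day

# Take the umbrella: the suit is safe, but a dry day incurs the inconvenience.
emv_take = (1 - p_rain) * (-inconvenience)

# Leave the umbrella: rain ruins the suit.
emv_leave = p_rain * (-suit_cost)

print(f"EMV(take umbrella)  = {emv_take:.2f}")    # -15.00
print(f"EMV(leave umbrella) = {emv_leave:.2f}")   # -50.00
```

With these particular numbers, taking the umbrella is the better bet even before the forecast is consulted.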
Note that the wording of the problem indicates that the suit may be ruined if it rains. For example, the
degree of damage probably depends on the amount of rain that hits the suit, which is itself uncertain! The
following diagrams capture this uncertainty.
[Decision tree: under each alternative, rain is followed by a further uncertainty about whether the suit is ruined. Take umbrella: rain leads either to a ruined suit or to a suit not ruined plus a sense of relief; no rain means the suit is not ruined but the inconvenience is incurred. Do not take umbrella: rain leads either to a ruined suit or to a suit not ruined but with some effort spent to avoid ruining it; no rain means the suit is not ruined.]
(A) Decision tree
[(B) Basic risky decision: influence diagram in which the "Take Umbrella?" decision and the "Rain" chance node influence "Ruin Suit," and all three feed "Satisfaction." (C) Imperfect information: the same diagram with a "Weather Forecast" node, influenced by "Rain," informing the "Take Umbrella?" decision.]
3.10. (Thanks to David Braden for this solution.)
The decision alternatives are (1) use the low-sodium saline solution, and (2) don't use the low-sodium
saline solution. The uncertain variables are: (1) the effect of the saline solution, the consequences of which
are patient survival or death; (2) the possibility of court-martial if the saline solution is used and the patient
dies. The possible consequences are court-martial or no court-martial. The decision tree:
[Decision tree: Use saline solution: if the patient is saved, the patient survives and the use of the saline solution is justified for other patients; if the patient dies, a further uncertainty determines court-martial (patient dead and doctors suffer) or no court-martial (patient dead). Do not use saline solution: patient dead.]
This decision tree is drawn in the Excel file “Problem 3.10.xlsx.”
3.11. a.
[Influence diagram: the "Party decision" (Outdoors, Indoors, No party) and the "Weather" chance node (Sunny, Rainy) feed "Satisfaction," with possible outcomes Best, Good, Bad, and Terrible.]
This influence diagram is drawn in the Excel file “Problem 3.11.xlsx” with some sample values assumed
(on a utility scale, a sunny party outside is worth 100, an indoors party is worth 80, no party is worth 20, a
party outside in the rain is worth 0, and the probability of rain is 0.3). A structure-only arc is added in the
file between the party decision and weather to capture the asymmetries, skipping the weather uncertainty if
the decision is made to have no party or to have one indoors.
The second worksheet in the file shows the default decision tree created by the "Convert to Tree" button on
the influence diagram settings dialog box. (Click on the name of the influence diagram "Problem 3.11a" to
access the influence diagram settings.) The Convert to Decision Tree button creates a decision tree from the
current influence diagram. This can be used to check the model specified by an influence diagram to ensure
that the specified relationships and chronological ordering of nodes are correct. Conversion to a decision
tree also shows the impacts of any Bayesian revisions made between nodes in the influence diagram.
Once a model described with an influence diagram is converted to a decision tree, it may be further edited
and enhanced in decision tree format. However, any edits made to the model in decision tree format will
not be reflected in the original influence diagram.
b. The arrow points from “Weather” to “Forecast” because we can easily think about the chances
associated with the weather and then the chances associated with the forecast, given the weather. That is, if
the weather really will be sunny, what are the chances that the forecaster will predict sunny weather? (Of
course, it is also possible to draw the arrow in the other direction. However, doing so suggests that it is easy
to assess the chances associated with the different forecasts, regardless of the weather. Such an assessment
can be hard to make, though; most people find the former approach easier to deal with.)
[Influence diagram: "Weather" influences "Forecast"; "Forecast" informs the "Party decision"; "Weather" and "Party decision" feed "Satisfaction."]
[Decision tree: the "Forecast" chance node (forecast = "Sunny" or forecast = "Rainy") comes first, followed by the party decision (Outdoors, Indoors, No party) and then the "Weather" chance node (Sunny, Rainy), with the same Best, Good, Bad, and Terrible consequences as before.]
The influence diagram including the weather forecast is shown in the third worksheet and the associated
default decision tree created by the “Convert to Tree” function is shown in the fourth worksheet.
Additionally, we assumed that the weather forecaster is 90% accurate.
3.12. The outcome “Cloudy,” defined as fully overcast and no blue sky, might be a useful distinction,
because such an evening outdoors would not be as nice for most parties as a partly-cloudy sky. Actually,
defining “Cloudy” to pass the clarity test is a difficult task. A possible definition is “At least 90% of the sky
is cloud-covered for at least 90% of the time.”
The NWS definition of rain is probably not as useful as one which would focus on whether the guests are
forced indoors. Rain could come as a dreary drizzle, thunderstorm, or a light shower, for instance. The
drizzle and the thunderstorm would no doubt force the guests inside, but the shower might not.
One possibility would be to create a constructed scale that measures the quality of the weather in terms that
are appropriate for the party context. Here are some possible levels:
(Best)   Clear or partly cloudy. Light breeze. No precipitation.
--       Cloudy and humid. No precipitation.
--       Thunderclouds. Heavy downpour just before the party.
--       Cloudy and light winds (gusts to 15 mph). Showers off and on.
(Worst)  Overcast. Heavy continual rain.
3.13.
[Decision tree: the engineer's report ("Fix #3." or "#3 OK.") is followed in either case by the decision to replace #3 or to change the product. Changing the product puts the project behind schedule. Replacing #3 is costly: if #3 is defective, the project stays on schedule; if #3 is not defective, the project falls behind schedule.]
This decision tree is drawn in the Excel file “Problem 3.13.xlsx.”
3.14.
[Influence diagram: "Forecast," influenced by "Hurricane Path," informs the safety decision; the decision and "Hurricane Path" determine "Safety," the decision alone determines "Evacuation Cost," and "Safety" and "Evacuation Cost" feed the overall "Consequence."]
Note that Evacuation Cost is high or low depending only on the evacuation decision. Thus, there is no arc
from Hurricane Path to Evacuation Cost.
This influence diagram is drawn in the Excel file "Problem 3.14.xlsx." Because PrecisionTree allows only
one payoff node per influence diagram, the "Safety" and "Evacuation Cost" nodes are represented by
calculation nodes. A calculation node (represented by a rounded blue rectangle) takes the results from
predecessor nodes and combines them using calculations to generate new values. These nodes can be used
to score how each decision either maximizes safety or minimizes cost.
The constructed scale for safety is meant to describe conditions during a hurricane. Issues that should be
considered are winds, rain, waves due to the hurricane’s storm surge, and especially the damage to
buildings that these conditions can create. Here is a possible constructed scale:
(Best)   Windy, heavy rain, and high waves, but little or no damage to property or infrastructure. After the
         storm passes there is little to do beyond cleaning up a little debris.
--       Rain causes minor flooding. Isolated instances of property damage, primarily due to window
         breakage. For people who stay inside a strong building during the storm, risk is minimal. Brief
         interruption of power service.
--       Flooding due to rain and storm surge. Buildings within 100 feet of shore sustain heavy damage.
         Wind strong enough to break many windows, but structural collapse rarely occurs. Power service
         interrupted for at least a day following the storm.
--       Flooding of roads and neighborhoods in the storm's path causes areas with high property damage.
         Many roofs are severely damaged, and several poorly constructed buildings collapse altogether.
         Both electrical power and water service are interrupted for at least a day following the storm.
(Worst)  Winds destroy many roofs and buildings, putting occupants at high risk of injury or death.
         Extensive flooding in the storm's path. Water and electrical service are interrupted for several days
         after the storm. Structural collapse of older wood-frame buildings occurs throughout the region,
         putting occupants at high risk.
3.15. Answers to this question will depend largely on individual preferences, although there are some
typical responses. Some fundamental objectives: improve one’s quality of life by making better decisions,
help others make better decisions, improve career, graduate from college, improve one’s GPA (for one who
is motivated by grades). Some means objectives: satisfy a requirement for a major, improve a GPA (to have
better job opportunities), satisfy a prerequisite for another course or for graduate school. Note that “making
better decisions” is itself best viewed as a means objective because it can provide ways to improve one’s
life. Only a very few people (academics and textbook writers, for example), would find the study of
decision analysis to be its own reward!
The second set of questions relates to the design of the course and whether it is possible to modify the
course so that it can better suit the student’s objectives. Although I do not want to promote classroom
chaos, this is a valuable exercise for both students and instructor to go through together. (And the earlier in
the term, the better!) Look at the means objectives, and try to elaborate the means objectives as much as
possible. For example, if a class is primarily taking the course to satisfy a major requirement, it might make
sense to find ways to make the course relate as much as possible to the students’ major fields of study.
3.16. While individual students will have their own fundamental objectives, we based our hierarchy on a
study titled "Why do people use Facebook?" (Personality and Individual Differences, Volume 52, Issue 3,
February 2012, Pages 243–249). The authors, Ashwini Nadkarni and Stefan G. Hofmann from Boston
University, propose that Facebook meets two primary human needs: (1) the need to belong and (2) the need
for self-presentation. Thus, a particular student’s objectives may be a variation on these themes. Students
may find it interesting to see how their fundamental objectives for their own Facebook page compare and
contrast to the study.
The fact that one’s private area can sometimes be seen by outsiders, particularly employers or future
employers, interferes with presenting your private self. If the private area were truly private, then a user
could be more truthful discussing their revelries and celebrations or even their private thoughts. If you
believe someone is eavesdropping on your conversation, then you are naturally more guarded with your
speech. Thus, Facebook does not provide a good means of expressing your private self.
[Fundamental-objectives hierarchy: Maximize Need for Belonging (Maximize Socialization; Maximize Self Esteem; Maximize Self Worth; Minimize Loneliness) and Maximize Need for Self Presentation (Maximize Public Persona; Maximize Private Persona).]
Clearly, each individual will have their own FOH, but there are some facts that students may want to take
into consideration. First, it is naïve to think that future employers will not “Google” or “Facebook” you.
Such concerns are not usually on the mind of a college freshman or particularly a high-school student, but
there are a host of problems that can arise from indiscriminately sharing private information. Even if there
is no illegal activity being shown (underage drinking, drug use, etc.), different audiences will have different
norms, and pictures of drinking, dancing, and partying could be considered compromising or
unprofessional.
Second, a future employer or even your current employer may be interested in your postings. They may
want to know what religion you practice, what your interests are, what organizations you belong to, such as
the NRA. All of these could bias them, good or bad, towards you. Also, discretion might be an important
aspect of your position, and employers might view your postings to determine if you can be trusted with
proprietary information.
Posted information can also be used by competing firms, either to gain a direct benefit or, more nefariously,
to befriend you and thereby learn more about their competitor. Your personal profile may include job
details and thus provide an opening for unscrupulous 'cyber sharks' or for competing businesses hoping to
learn from the eyes and ears of the opposition. You may even have posted confidential information without
realizing its sensitive nature.
Facebook, Inc. itself has rights to your private information and has wide latitude on how it can use your
info. Facebook’s privacy agreement states "We may use information about you that we collect from other
sources, including but not limited to newspapers and Internet sources such as blogs, instant messaging
services, Facebook Platform developers and other users of Facebook, to supplement your profile.”
Facebook can also sell a user's data to private companies, stating: "We may share your information with
third parties, including responsible companies with which we have a relationship."
There are also many data mining and identity theft issues that could result from even the public areas of
your Facebook page. Searches can be performed to discover what movies you like, what music you listen
to, what organizations you belong to, etc. This information can be used for marketing, recruiting, or even
identity theft.
Finally, a student's short-term and long-term objectives may differ. Short term, the focus will most likely
be on being connected and being cool, i.e., partying, sexual prowess, being wild-n-crazy, etc. Many high-school
and college students compete using Facebook to see who can have the most friends. Having 500 to
1,000 Facebook friends is actually common. For these users, the objectives are to attract their friends'
friends and present a profile attractive to large swaths of the population. Long term, the focus will most
likely be on being professional and staying connected to their college friends.
3.17. Here are my (Clemen’s) fundamental-objectives hierarchy and means-objective network (italics) in
the context of purchasing or building a telescope. The diagram does provide insight! For example, many
astronomers focus (so to speak) on image quality, and so there is a tendency to overemphasize aperture and
quality of eyepieces. But for me, an important issue is enjoying myself as much as possible, and that can
mean taking the telescope out of the city. All of the means objectives related to “Maximize enjoyment of
viewing sessions” (which is intended to capture aspects other than enjoying the quality of the images
viewed) push me toward a smaller, lighter, less expensive telescope. Thus, it is arguable that the basic
question I must ask is whether I just want to get out at night and enjoy seeing a few interesting sights, or
whether my interest really is in seeing very faint objects with as much clarity as possible.
Of course, the creative solution would be to find an inexpensive and highly transportable telescope with
large aperture, excellent optics, and very stable mount. Unfortunately, such a telescope doesn’t exist; all of
the desired physical features would lead to a very expensive telescope!
[Fundamental-objectives hierarchy for "Best Telescope": Maximize enjoyment of viewing sessions; Maximize image clarity (Maximize brightness; Maximize image definition); Maximize quality of astrophotography (Maximize stability). Linked means objectives (shown in italics in the original network): maximize aperture, maximize quality of optics, maximize quality of tracking device, maximize stability of mount, maximize additional viewing accessories, minimize light pollution in sky, maximize visits to dark-sky site, minimize cost of telescope, minimize total weight, maximize transportability, and minimize telescope size.]
3.18. Objectives are, of course, a matter of personal preferences, and so answers will vary considerably.
a. Here is an objectives hierarchy for the decision context of going out to dinner:
Minimize cost
    Time (driving time; ordering/serving)
    Dollar cost
Maximize experience
    Atmosphere
    Location
    Menu
    Quality of food
b. A simple hierarchy for deciding from among different trips:
Maximize experience
    Company (friends)
    Learning
    Relax
Minimize cost
    Time (preparation)
    Expense (travel cost)
Some means objectives might be going to a particular kind of resort; maximizing time spent shopping,
golfing, or on the beach; maximizing nights spent in youth hostels; using a travel agent (to reduce time
spent in preparation); maximizing time in a foreign country (to maximize language learning, for example).
c. Here is a possible fundamental-objectives hierarchy for choosing a child’s name:
Maximize family ties
    Similarity of name
    Closeness of namesake
Ease of use/learning
    by child
    by child's playmates
    by relatives
Minimize negative potential
    Nicknames
    Teasing by "friends"
3.19. Responding to this question requires considerable introspection and can be very troubling for many
students. At the same time, it can be very enlightening. I have had students face up to these issues in
analyzing important personal decisions such as where to relocate after graduation, whether to make (or
accept) a marriage proposal, or whether to have children. The question, “What is important in my life?”
must be asked and, if answered clearly, can provide the individual with important insight and guidance.
3.20. a. These influence diagrams and decision trees are drawn in the Excel file “Problem 3.20.xlsx.”
[Influence diagram: the "Have surgery?" decision and the "Surgery results" chance node feed "Quality of life." Decision tree: Have surgery: recover fully leads to a long, healthy life; die leads to death. Don't have surgery: progressive debilitation.]
b.
[Influence diagram: as in part a, with "Complications" added between the surgery decision and "Quality of life." Decision tree: Have surgery: recover fully leads to a long, healthy life; die leads to death; complications lead to a further uncertainty with three outcomes: full recovery (long healthy life after difficult treatment), partial recovery (invalid after difficult treatment), or death (death after difficult treatment). Don't have surgery: progressive debilitation.]
Given the possibility of complications and eventual consequences, the surgery looks considerably less
appealing.
c. Defining this scale is a personal matter, but it must capture important aspects of what life would be like
in case complications arise. Here is one possible scale:
(Best)   No complications. Normal, healthy life.
--       Slight complications lead to minor health annoyances, need for medication, frequent visits. Little or
         no pain experienced. Able to engage in most age-appropriate activities.
--       Recovery from surgery requires more than two weeks of convalescence. Pain is intense but
         intermittent. Need for medication is constant after recovery. Unable to engage in all age-appropriate
         activities.
--       Recovery requires over a month. Chronic pain and constant need for medication. Confined to
         wheelchair 50% of the time.
(Worst)  Complete invalid for remainder of life. Restricted to bed and wheelchair. Constant pain, sometimes
         intense. Medication schedule complicated and occasionally overwhelming.
3.21. This question follows up on the personal decision situation that was identified in problem 1.9.
3.22 This decision tree is drawn in the Excel file “Problem 3.22.xlsx.”
[Decision tree: "To be" (continue to live) leads to "Bear fardels" (the burdens of life); "Not to be" (commit suicide) leads to the chance node "What dreams may come" (what comes after death?).]
3.23. This decision tree is drawn in the Excel file “Problem 3.23.xlsx.”
[Decision tree: Shoot: if the aircraft is hostile, the crew is safe; if the aircraft is not hostile, the crew is safe but civilians are killed. Don't shoot: if the aircraft is hostile, harm to crew; if the aircraft is not hostile, crew safe and civilians safe.]
Rogers’s most crucial objectives in this situation are to save lives, those of his crew and of any civilians
who are not involved. It is not unreasonable to consider objectives of saving his ship or improving the
relationship with Iran, but in the heat of action, these were probably not high on Rogers’ list.
The risk that Rogers faces is that the blip on the radar screen may not represent a hostile aircraft. The main
trade-off, of course, is the risk to his crew versus possibly killing innocent civilians.
As usual, there are a lot of ways this decision tree could be made more elaborate. For example, a “Wait”
alternative might be included. The tree above assumes that if the decision is to shoot, the incoming aircraft
would be hit (generally a safe assumption these days), but one might want to include the possibility of
missing.
3.24. This is a straightforward calculation of NPV. Assuming that all the cash flows happen at the end of
the year, the following table shows the cash flows:
Cash flows:

Year   Stop   Continue     Continue    Continue Patent   Continue Patent   Continue Patent
              No Patent    Patent      Develop           Develop           Develop
                           License     Dem. High         Dem. Med          Dem. Low
  0      0        0           0              0                 0                 0
  1      0       -2          -2             -2                -2                -2
  2      0        0           0              0                 0                 0
  3      0        0           5             -5                -5                -5
  4      0        0           5             -5                -5                -5
  5      0        0           5             11               6.6                 3
  6      0        0           5             11               6.6                 3
  7      0        0           5             11               6.6                 3
  8      0        0           0             11               6.6                 3
  9      0        0           0             11               6.6                 3
Present values are calculated by applying the appropriate discount factor to each cash flow; the discount
factor for the cash flow in year i is 1/1.15^i. Finally, NPV is the sum of the present values. Also, the NPV
function in Excel can be used for the calculations as shown in the Excel file "Problem 3.24.xlsx."
Present values:

Year   Discount   Stop   Continue    Continue    Continue Patent   Continue Patent   Continue Patent
       Factor            No Patent   Patent      Develop           Develop           Develop
                                     License     Dem. High         Dem. Med          Dem. Low
  0     1.0000    0.00      0.00        0.00         0.00              0.00              0.00
  1     0.8696    0.00     -1.74       -1.74        -1.74             -1.74             -1.74
  2     0.7561    0.00      0.00        0.00         0.00              0.00              0.00
  3     0.6575    0.00      0.00        3.29        -3.29             -3.29             -3.29
  4     0.5718    0.00      0.00        2.86        -2.86             -2.86             -2.86
  5     0.4972    0.00      0.00        2.49         5.47              3.28              1.49
  6     0.4323    0.00      0.00        2.16         4.76              2.85              1.30
  7     0.3759    0.00      0.00        1.88         4.14              2.48              1.13
  8     0.3269    0.00      0.00        0.00         3.60              2.16              0.98
  9     0.2843    0.00      0.00        0.00         3.13              1.88              0.85
NPV               0.00     -1.74       10.93        13.20              4.76             -2.14
In file “Problem 3.24.xlsx”, the decision tree references the NPV calculations to demonstrate the process of
choosing to continue or stop development. The ability to build these trees in Excel and reference cells as
done in this problem makes this a powerful program. The payoff for each branch of the tree is a formula
that corresponds to the correct cell in the NPV calculations worksheet.
Alternative assumptions can be made about the timing of the cash flows. For example, it would not be
unreasonable to believe that the expenses must be paid at the beginning of the year and that revenue arrives
at the end of the year. The most realistic scenario, however, is that all cash flows are evenly spread out over
the year for which they are specified.
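A quick way to verify the spreadsheet (or to let students check their own work) is to recompute the NPVs outside Excel. The sketch below is a minimal Python version, not part of the Excel files; the cash-flow vectors come from the table above, and the 15% rate is implied by the discount factors shown.

```python
# NPV check for Problem 3.24 (cash flows from the table above, 15% discount rate).
def npv(cash_flows, rate=0.15):
    """Cash flow for year i is discounted by 1/(1 + rate)**i."""
    return sum(cf / (1 + rate) ** i for i, cf in enumerate(cash_flows))

license_cf   = [0, -2, 0, 5, 5, 5, 5, 5, 0, 0]                 # patent and license
develop_high = [0, -2, 0, -5, -5, 11, 11, 11, 11, 11]          # patent, develop, high demand
develop_med  = [0, -2, 0, -5, -5, 6.6, 6.6, 6.6, 6.6, 6.6]
develop_low  = [0, -2, 0, -5, -5, 3, 3, 3, 3, 3]

for name, cf in [("License", license_cf), ("Develop/high", develop_high),
                 ("Develop/med", develop_med), ("Develop/low", develop_low)]:
    print(f"{name:13s} NPV = {npv(cf):6.2f}")
# License ~10.93, Develop/high ~13.20, Develop/med ~4.76, Develop/low ~-2.14
```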
3.25. This decision tree is relatively complex compared to the ones that we have seen so far. Buying the
new car does not involve any risk. However, the used car has an uncertainty each year for the next three
years. The decision tree is shown below. Note that it is also possible to calculate the NPVs for the ends of
the branches; the natural interest rate to use would be 10%, although it would be best to use a rate that
reflects what you could earn in another investment. This decision tree representation does not discount the
values.
[Decision tree: The new car requires a $5,500 down payment, maintenance and loan payments of $2,522.20, $2,722.20, and $2,722.20 in years 1-3, and has a net salvage value of $3,626, for a three-year net total of -$9,841. The used car requires a $5,500 down payment, has a net salvage value of $2,000, and faces a repair-cost uncertainty in each of the three years: year 1 repairs are -$650 (0.2), -$1,650 (0.6), or -$2,650 (0.2); year 2 repairs are -$700 (0.2), -$1,700 (0.6), or -$2,700 (0.2); year 3 repairs are -$500 (0.2), -$1,500 (0.6), or -$2,500 (0.2). The 27 possible used-car totals range from -$5,350 to -$11,350.]
The Excel file “Problem 3.25.xlsx” contains two solutions for this problem. The workbook consists of 3
worksheets. The first worksheet is titled Data & Formulas, and contains the input data for the problem (car
costs, loan value, etc.) along with some formulas. The formulas in cells G29 and H29 calculate the 3-year
net cost of the new and used car respectively. Remember to include the initial $5,500 payment. The
formulas in M12 and L12 also calculate the 3-year net cost but incorporate the time value of money. We
used 10% as the interest rate.
The next two worksheets show the decision trees, with and without incorporating the time value of money.
Both of these trees are linked trees, which we introduce in Chapter 4. There is a method for students to
solve the problem without using linked trees, and this is explained in the text box as Option 1. Essentially,
students will need to create 27 formulas for 3-year net cost, one for each unique combination of
maintenance costs across the 3 years. Once these 27 formulas have been created, it is a simple matter to use
Excel referencing to reference the end node with its corresponding formula. Assigning this problem in
Chapter 3 will help the students realize the flexibility of linked decision trees when they encounter them in
Chapter 4. Additional hints might be needed.
Finally, this problem can also be used to exhibit good modeling practice to the students by separating
out the data inputs and placing them all in one area. Not only does this help with constructing the
model, but it also facilitates running a sensitivity analysis.
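As a further check on the linked trees, the 27 repair-cost combinations can be enumerated directly. This sketch (ours, not part of the Excel files) uses the payments, repair distributions, and salvage values shown in the tree above and ignores the time value of money.

```python
from itertools import product

# Used-car 3-year net cost (Problem 3.25), undiscounted, values from the tree above.
down_payment = -5500
salvage = 2000
repairs_y1 = [(-650, 0.2), (-1650, 0.6), (-2650, 0.2)]
repairs_y2 = [(-700, 0.2), (-1700, 0.6), (-2700, 0.2)]
repairs_y3 = [(-500, 0.2), (-1500, 0.6), (-2500, 0.2)]

expected_used = 0.0
for (c1, p1), (c2, p2), (c3, p3) in product(repairs_y1, repairs_y2, repairs_y3):
    net = down_payment + c1 + c2 + c3 + salvage
    expected_used += p1 * p2 * p3 * net

new_car = -5500 - 2522.20 - 2722.20 - 2722.20 + 3626   # about -9,841, as in the tree
print(f"Expected used-car net cost: {expected_used:,.0f}")   # -8,350
print(f"New-car net cost:           {new_car:,.0f}")
```

On an undiscounted expected-value basis the used car comes out cheaper here; the linked trees make it easy to test how discounting or different repair assumptions change that comparison.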
3.26a. The influence diagram from Exercise 3.11(a) is shown here drawn in PrecisionTree.
[Influence diagram: "Weather" and "Party decision" feed "Satisfaction."]
In order for the “Convert To Tree” button to automatically adjust for the asymmetries, a structure arc is
needed from “Party Decision” to “Weather” (represented by the dotted arrow). This influence diagram and
the corresponding converted decision tree are shown in the first two worksheets in the Excel file “Problem
3.26.xlsx.” Some assumed values for outcomes and probabilities are shown in the converted decision tree.
[Converted decision tree for part (a): with P(Sunny) = 0.7, the outdoor party has expected satisfaction 0.7(100) + 0.3(0) = 70, the indoor party scores 80, and no party scores 20, so the indoor party is optimal with a value of 80.]
b. Adding the arrow from “Weather” to “Party Decision” means that the information regarding the weather
is known before the time of the decision. Therefore, in the Converted Decision Tree, the “Weather” chance
events will appear in the tree prior to the “Party Decision.”
[Influence diagram: as above, with an added informational arc from "Weather" to "Party Decision."]
[Converted decision tree for part (b): the "Weather" node now comes first. If Sunny (0.7), the best choice is Outdoors (100); if Rainy (0.3), the best choice is Indoors (80). The expected satisfaction is 0.7(100) + 0.3(80) = 94.]
c.
[Influence diagram: "Weather" influences "Forecast"; "Forecast" informs the "Party decision"; "Weather" and "Party decision" feed "Satisfaction."]
If the arrow went from "Party" to "Forecast", then you would have to make the party decision before you
got the forecast. If an arrow started at "Forecast" and went to "Weather", we would be stating that somehow
the forecast influences the weather.
[Converted decision tree for part (c), assuming a 90% accurate forecaster: P("Sunny" forecast) = 0.66 and P(Sunny | "Sunny" forecast) = 95.5%, so Outdoors is best after a "Sunny" forecast, with expected satisfaction 95.455. P("Rainy" forecast) = 0.34 and P(Sunny | "Rainy" forecast) = 20.6%, so Indoors (80) is best after a "Rainy" forecast. The overall expected satisfaction is 0.66(95.455) + 0.34(80) = 90.2.]
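The forecast and posterior probabilities in the converted tree follow from a standard Bayes calculation with the assumed prior P(sunny) = 0.7 and the assumed 90% forecaster accuracy; a minimal sketch:

```python
# Bayesian revision behind the converted tree for Problem 3.26(c).
p_sunny = 0.7            # assumed prior probability of sunny weather
accuracy = 0.9           # forecaster says "sunny" given sunny, "rainy" given rainy

p_forecast_sunny = accuracy * p_sunny + (1 - accuracy) * (1 - p_sunny)        # 0.66
p_sunny_given_fs = accuracy * p_sunny / p_forecast_sunny                       # 0.9545
p_sunny_given_fr = (1 - accuracy) * p_sunny / (1 - p_forecast_sunny)           # 0.2059

print(f"P(forecast sunny)         = {p_forecast_sunny:.3f}")
print(f"P(sunny | forecast sunny) = {p_sunny_given_fs:.3f}")
print(f"P(sunny | forecast rainy) = {p_sunny_given_fr:.3f}")
```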
3.27. This problem is more challenging than some previous ones as it pushes the students to incorporate
numerical values into the decision model, and to do so, using formulas. Also, the data were provided in two
different formats (annual and monthly) requiring the students to pick one format for the consequence
measure. The solution in the spreadsheet “Problem 3.27.xlsx” uses monthly values. A portion of the tree is
shown below.
The formula for Jameson’s monthly take home if he stays the course is
$2,000 + (Actual Revenue – VJ’s Cost) x (1 – Tax Rate)
Subtracting the required $2,400 gives the surplus/deficit Jameson faces, which is the consequence measure
we used.
If the student uses the revenue values, then he or she will need to subtract $2,400 after the analysis. This
works only because the same value is being subtracted from all end nodes. Also, students will run into
PrecisionTree adding the values along the branches (cumulative method). For example, using monthly
revenue values of $2,300 in year 1 and $2,400 in year 2 gives a cumulative value of $4,700, in which case
$4,800 would need to be subtracted. It is better to use the consequence measure that fits the problem.
The formula for Jameson’s monthly take home if he pursues an MSW in 2 years is
(Annual salary/12) x (1 – Tax Rate) – Loan Payment
Again, we subtracted $2,400 for the consequence measure.
So what should Jameson do? The future looks bleak for him, but bleaker if he stays the course. Over the
next 3 years, we see him having a monthly deficit of $570 on average if he stays the course, but only a $75
deficit if he gets his MSW. Either way, he is falling behind every month, and expenses associated with
raising children grow almost as fast as the children do.
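For students who prefer to prototype the consequence formulas before building the tree, the two take-home formulas can be scripted directly. The sketch below mirrors the formulas above; the revenue, cost, tax-rate, salary, and loan-payment numbers are placeholders, not values from the problem.

```python
# Monthly surplus/deficit measure for Problem 3.27 (illustrative inputs only).
REQUIRED = 2400          # monthly amount Jameson needs, from the problem

def stay_the_course(actual_revenue, vj_cost, tax_rate, base=2000):
    take_home = base + (actual_revenue - vj_cost) * (1 - tax_rate)
    return take_home - REQUIRED

def pursue_msw(annual_salary, tax_rate, loan_payment):
    take_home = (annual_salary / 12) * (1 - tax_rate) - loan_payment
    return take_home - REQUIRED

# Placeholder numbers, chosen only to show the mechanics of the two formulas:
print(stay_the_course(actual_revenue=2300, vj_cost=2000, tax_rate=0.25))
print(pursue_msw(annual_salary=36000, tax_rate=0.25, loan_payment=300))
```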
3.28. From an income-maximization point of view, Susan should not quit. There are no financial advantages
to her quitting; as a matter of fact, the only financial ramification is the loss of 6 months of salary, or $12,000.
It could be that Susan has set her mind on quitting and cannot bring herself to reconsider based on something
that might only happen. She is anchored to quitting, and is probably excited to start a new life.
Susan is not as concerned with income as she is with running through their savings and being evicted. To
model this, we consider all the different scenarios that were presented for Susan and Denzel, and for each
calculate what would be left in their savings account. If this ever drops below zero, then they run the risk of
eviction. The timeframe for them is clearly 6 months because both Susan and Denzel will be on their feet
by then with their new careers.
The solution, shown below and in the file “Problem 3.28.xlsx,” shows the structure of the problem and the
values we used for the end nodes. To help understand the formulas, the spreadsheet cells have been named
and thus the formulas are of the form:
=Savings+6*(Denzel’s Contr if Laid Off + Assistance) - Six_Months_Req.
This formula reports their end savings account balance when Denzel is laid off, when Susan is not
contributing, and they do receive assistance. See the file for complete details. Please note that the above
equals a negative value (-$2,000), but we are not saying the savings account balance can go negative.
Rather, this measures the amount of their deficit.
Susan should definitely not quit her job at the coffee shop. If she does, there is a 70% chance they will have
at least a $2,000 deficit. Any deficit is to be avoided. If she stays working at Joe's, then no matter what
happens to Denzel, they will have at least $4,000 left in their savings account.
To exhibit the iterative natures of modeling, we extended the model by adding a third alternative, namely,
staying at the coffee shop for a few months. Cell E7 allows you to enter any value between 0 and 6 for the
number of months Susan stays at Joe's Coffee. By making this a cell reference, she can dynamically
change its value to view intermediate alternatives. For example, if she stays at Joe's for 3 months, then the
probability of a deficit drops to 8%.
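The ending-savings formula quoted above can be prototyped the same way; the dollar amounts below are placeholders (the actual inputs are in "Problem 3.28.xlsx"), so only the structure of the calculation is meaningful.

```python
# Ending savings balance for one scenario in Problem 3.28 (placeholder inputs).
def ending_savings(savings, monthly_contributions, assistance, six_months_required,
                   months=6):
    """Mirrors =Savings + 6*(contributions + assistance) - Six_Months_Req."""
    return savings + months * (monthly_contributions + assistance) - six_months_required

# Placeholder numbers only, to show how a deficit (negative balance) appears:
balance = ending_savings(savings=10000, monthly_contributions=1200,
                         assistance=300, six_months_required=21000)
print(f"Ending balance: {balance:,}")   # a negative result is the size of the deficit
```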
Case Study: Prescribed Fire
1. Appropriate objectives for the agency in the context of this decision relate to costs. In particular, they
would like to minimize the cost of disposing of the material, and they would like to minimize the cost of
containing fires that go out of control. The latter could include damages if a fire escapes.
2. Many influence-diagram representations are possible. Here is one popular alternative:
[Influence diagram: the "Burn or YUM & Burn" decision determines "Treatment cost"; the "Fire behavior" and "Problem cost" chance nodes determine the "Total cost of containment"; "Treatment cost" and "Total cost of containment" combine into the "Total cost" payoff node.]
A possible decision tree:
[Decision tree: for each alternative (Burn, YUM & Burn), "Fire behavior" results in Success, Problems, or Escape; when problems occur, the problem cost is High, Medium, or Low. Each path ends in a total cost.]
To pass the clarity test, the costs must be precisely defined. Also, fire behavior must be defined. For
example, it would be necessary to distinguish between an escaped fire and a fire that develops problems
that can be brought under control.
Excel file “Prescribed fire case study.xlsx” shows the above influence diagram with one additional
structural arc. In order for the “Problem Cost” uncertainty to occur only if the “Fire behavior” results in
problems as shown in the above decision tree, an additional structural arc was added between these two
nodes. The file also contains the resulting decision tree from automatically converting the influence
diagram. While the above decision tree treats the resulting multiple objectives at the outcome of each
branch, we used calculation nodes for "Treatment cost" and "Total cost of containment," which are thus
represented as additional nodes in the converted decision tree.
Case Study: The S.S. Kuniang
1. Again, many different influence-diagram representations are possible here. A popular alternative is:
[Influence diagram: a "Bid amount" decision; chance nodes "Highest competitor bid," "Winning bid," and "Coast Guard judgment"; and a "Cost" payoff node. An accompanying cost table gives the cost for each combination of bid, winning bid, and Coast Guard judgment; for example, winning with a bid of 6 under a high Coast Guard judgment costs 6 plus the refit cost, while losing the bid costs the cost of the next best alternative.]
This representation shows that the decision maker has to think about chances associated with the highest
bid from a competitor and the Coast Guard’s judgment. An arc might be included from “Winning bid” to
“Coast Guard judgment” if the decision maker feels that different winning bids would affect the chances
associated with the coast guard’s judgment (which is a distinct possibility). Also, it would be reasonable to
include a separate decision node for purchasing alternative transportation if the bid fails. The representation
above includes that consideration implicitly in the cost table.
A possible decision tree:
[Decision tree: the "Bid amount" decision is followed by a win/lose chance node; winning the bid leads to the "Coast Guard judgment" chance node and the cost of the S.S. Kuniang, while losing the bid leads to the cost of the next best alternative.]
Additional nodes might be added. As with the influence diagram, a decision regarding what to do if the
bid is lost could be included. Also, a decision to keep or sell Kuniang might be included in the event of a
high Coast Guard judgment. The file “SSKuniang I.xlsx” shows representations of both the above influence
diagram and the decision tree.
Case Study: The Hillblom Estate, Part I
No solution is provided for this case. We recommend that the instructor consult the article by Lippman and
McCardle, (2004), “Sex, Lies and the Hillblom Estate,” Decision Analysis, 1, pp 149–166.
CHAPTER 4
Making Choices
Notes
After having structured a number of decisions in Chapter 3, it seems natural to talk about analyzing models.
Thus, Chapter 4 is titled “Making Choices,” and discusses the analysis of decision models. We cover
decision trees, influence diagrams, risk profiles and first-order stochastic dominance, and the analysis of a
two-attribute decision model. The last section in Chapter 4 provides additional instructions for using some
of the advanced features in PrecisionTree.
The opening example concerning the Texaco-Pennzoil negotiations is quite useful because it includes
multiple decisions and multiple chance nodes. Furthermore, the problem could be quite complex if
analyzed fully. Our analysis here is a simplification, and yet the modeling simplifications made are
reasonable and would provide an excellent springboard for a more complete analysis. In fact, we will
revisit Texaco-Pennzoil in later chapters.
The discussion of the algorithm for solving decision trees is straightforward and takes at most part of one
class period. Solving influence diagrams, however, is somewhat more involved, partly because of the
symmetric representation that influence diagrams use. While PrecisionTree solves both of these models
with a click of a button, it is important to make sure the students still understand the steps involved.
The discussion of risk profiles and dominance is placed in Chapter 4 so that students get an early idea
about using dominance to screen alternatives. The discussion is not technically deep because probability
has not yet been introduced; we revisit the idea of stochastic dominance later in Chapter 7 after discussing
probability and CDFs. Also, note that the text only discusses first-order stochastic dominance. Two
problems in Chapter 14 (14.19 and 14.20) can be used as introductions to second-order stochastic
dominance, but only after the students have begun to think about risk aversion.
The chapter continues the topic of multiple attributes by leading the students through the analysis of a
simple two-attribute job choice. Much of the discussion is about how to construct an appropriate two-attribute value function. With the value function in place, we see how to calculate expected value, create
risk profiles, and determine stochastic dominance when there are multiple objectives.
The last section provides additional instructions (beyond those in Chapter 3) for using PrecisionTree’s
advanced features including reference nodes to graphically prune the tree without leaving out any of the
mathematical details and linked trees (decision trees linked to a spreadsheet model). An example is also
provided that demonstrates a multiple-attribute model using formulas set up in a spreadsheet. Finally,
instructions are provided for analyzing the model and generating risk profiles.
Topical cross-reference for problems
Branch pay-off formula     4.19, Southern Electronics
Calculation nodes          Southern Electronics
Certainty equivalent       4.3
Convert to tree            Southern Electronics
Decision criterion         4.20, 4.23
Decision trees             4.1, 4.4, 4.6-4.8, 4.14, 4.15, 4.19, Southern Electronics, Strenlar,
                           SS Kuniang Part II, Marketing Specialists
Goal Seek                  4.15, Marketing Specialists
Linked decision trees      4.19, Marketing Specialists
Influence diagrams         4.1, 4.5, 4.10, 4.11, 4.13, Southern Electronics
Multiple objectives        4.12, 4.18, Job Offers, Strenlar
Negotiations               Southern Electronics
PrecisionTree              4.4-4.8, 4.10, 4.15, 4.16, 4.19, Job Offers, Strenlar, Southern Electronics,
                           S.S. Kuniang Part II, Marketing Specialists
Probability                4.21
Risk aversion              4.3, Strenlar
Risk profiles              4.9, 4.14-4.17, Job Offers, Marketing Specialists
Sensitivity analysis       4.11, 4.15, Strenlar
Solver                     S.S. Kuniang, Part II
Stochastic dominance       4.2, 4.6, 4.7, 4.9, 4.11, 4.14-4.17, 4.22, Job Offers, Marketing Specialists
Stock options              4.15
Texaco-Pennzoil            4.3
Time value of money        4.14, Strenlar, Marketing Specialists
Trade-off weights          4.12, 4.18, Job Offers
Umbrella problem           4.13
Solutions
4.1. No. At least, not if the decision tree and influence diagram each represent the same problem (identical
details and definitions). Decision trees and influence diagrams are called “isomorphic,” meaning that they
are equivalent representations. The solution to any given problem should not depend on the representation.
Thus, as long as the decision tree and the influence diagram represent the same problem, their solutions
should be the same.
4.2. There are many ways to express the idea of stochastic dominance. Any acceptable answer must capture
the notion that a stochastically dominant alternative is a better gamble. In Chapter 4 we have discussed
first-order stochastic dominance; a dominant alternative in this sense is a lottery or gamble that can be
viewed as another (dominated) alternative with extra value included in some circumstances.
4.3. A variety of reasonable answers exist. For example, it could be argued that the least Liedtke should
accept is $4.63 billion, the expected value of his “Counteroffer $5 billion” alternative. However, this
amount depends on the fact that the counteroffer is for $5 billion, not some other amount. Hence, another
reasonable answer is $4.56 billion, the expected value of going to court.
If Liedtke is risk-averse, though, he might want to settle for less than $4.56 billion. If he is very risk-averse,
he might accept Texaco’s $2 billion counteroffer instead of taking the risk of going to court and coming
away with nothing. The least that a risk-averse Liedtke would accept would be his certainty equivalent (see
Chapter 13), the sure amount that is equivalent, in his mind, to the risky situation of making the best
counteroffer he could come up with. What would such a certainty equivalent be? See the epilogue to the
chapter to find out the final settlement.
The following problems (4.4 – 4.9) are straightforward and designed to reinforce the computations used in
decision trees. Because PrecisionTree does all the calculations, the instructor may want the students to do
some of these problems by hand to reinforce the methodology of calculating expected value or creating a
risk profile.
4.4. The Excel file “Problem 4.4.xlsx” contains this decision tree. The results of the run Decision Analysis
button (fifth button from the left on the PrecisionTree toolbar) are shown in the worksheets labeled
Probability Chart, Cumulative Chart, and Statistical Summary.
EMV(A) = 0.1(20) + 0.2(10) + 0.6(0) + 0.1(-10) = 3.0
EMV(B) = 0.7(5) + 0.3(-1) = 3.2
Alternative B has a larger EMV than A and a lower standard deviation or narrower range. While B’s
maximum payoff is only 5, B’s payoff will be larger than A’s payoff 70% of the time. Neither A nor B
dominates.
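If students want to verify these figures outside PrecisionTree, a two-line expected-value helper is enough; this simply scripts the hand calculation above.

```python
# EMV check for Problem 4.4.
def emv(outcomes):
    """outcomes: list of (payoff, probability) pairs."""
    return sum(payoff * prob for payoff, prob in outcomes)

alt_a = [(20, 0.1), (10, 0.2), (0, 0.6), (-10, 0.1)]
alt_b = [(5, 0.7), (-1, 0.3)]
print(emv(alt_a), emv(alt_b))   # 3.0 and 3.2, so B has the larger EMV
```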
4.5. The Excel file "Problem 4.5.xlsx" contains this influence diagram. The most challenging part of
implementing the influence diagram is to enter the payoff values. The payoff values reference the outcome
values listed in the value table. The value table is a standard Excel spreadsheet with values of influencing
nodes. In order for the influence diagram to calculate the expected value of the model, it is necessary to fill
in the value tables for all diagram nodes. The summary statistics are shown in the upper left of the
worksheet. Change a value or probability in the diagram, and you will immediately see the impact on the
statistics of the model. It is possible to use formulas that combine values for influencing nodes to calculate
the payoff node values.
The results of the run Decision Analysis button (fifth button from the left on the PrecisionTree toolbar) are
shown in the worksheets labeled Statistics, RiskProfile, CumulativeRiskProfile, and ScatterProfile. The
results of an influence diagram will only show the Optimal Policy. An analysis based on a decision tree will
produce the risk profiles and summary statistics for all the alternatives, not just the optimal one. The
following figure shows how to solve the influence diagram by hand. See the online supplement for more
explanation.
[Influence diagram: the "Choice" node (A or B) and the chance nodes "Event A" (outcomes 20, 10, 0, -10 with probabilities 0.1, 0.2, 0.6, 0.1) and "Event B" (5 with probability 0.7, -1 with probability 0.3) feed the "Value" node; the payoff equals the Event A outcome when Choice = A and the Event B outcome when Choice = B.]

Solution:
1. Reduce Event B. Under Choice A the payoff does not depend on Event B, so the expected values are simply 20, 10, 0, and -10 for the respective Event A outcomes. Under Choice B the expected value is 0.7(5) + 0.3(-1) = 3.2 for every Event A outcome.
2. Reduce Event A. EMV(A) = 0.1(20) + 0.2(10) + 0.6(0) + 0.1(-10) = 3.0; EMV(B) = 3.2.
3. Reduce Choice. Choose B, with EMV 3.2.
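The same reduction can be scripted node by node (expectation over Event B, then Event A, then the choice), which some students find a helpful way to internalize the algorithm; a minimal sketch using the payoffs and probabilities above:

```python
# Node-by-node reduction of the Problem 4.5 influence diagram.
event_a = [(20, 0.1), (10, 0.2), (0, 0.6), (-10, 0.1)]
event_b = [(5, 0.7), (-1, 0.3)]

def expect(outcomes):
    return sum(v * p for v, p in outcomes)

# Step 1: reduce Event B. Under Choice A the payoff is the Event A outcome;
# under Choice B it is the Event B outcome, so Event A is irrelevant there.
emv_event_b = expect(event_b)          # 3.2

# Step 2: reduce Event A.
emv_choice_a = expect(event_a)         # 3.0
emv_choice_b = emv_event_b             # 3.2

# Step 3: reduce the decision node by taking the best alternative.
best = max([("A", emv_choice_a), ("B", emv_choice_b)], key=lambda t: t[1])
print(best)                            # ('B', 3.2)
```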
4.6. The Excel file “Problem 4.6.xlsx” contains this decision tree. The results of the run Decision Analysis
button are shown in the worksheets labeled Probability Chart, Cumulative Chart, and Summary Statistics.
EMV(A) = 0.1(20) + 0.2(10) + 0.6(6) + 0.1(5) = 8.1
EMV(B) = 0.7(5) + 0.3(-1) = 3.2
This problem demonstrates deterministic dominance. All of the outcomes in A are at least as good as the
outcomes in B.
4.7. Choose B, because it costs less for exactly the same risky prospect. Choosing B is like choosing A but
paying one less dollar.
EMV(A) = 0.1(18) + 0.2(8) + 0.6(-2) + 0.1(-12) = 1.0
EMV(B) = 0.1(19) + 0.2(9) + 0.6(-1) + 0.1(-11)
       = 1 + 0.1(18) + 0.2(8) + 0.6(-2) + 0.1(-12)
       = 1 + EMV(A) = 2.0
The Excel file “Problem 4.7.xlsx” contains this decision tree. The dominance of alternative B over
alternative A is easily seen in the Risk Profile and the Cumulative Risk Profile.
4.8. The Excel file “Problem 4.8.xlsx” contains this decision tree. The results of the run Decision Analysis
button are shown in the worksheets labeled Probability Chart, Cumulative Chart, and Summary Statistics.
[Decision tree: Alternative A leads to a chance node with a 0.27 branch and a 0.73 branch. On the 0.27 branch there is a further decision between A1 (a sure $8) and A2 (a 50-50 chance at $15 or $0, EMV = 7.5); the 0.73 branch pays $4. Taking the better sub-alternative A1 gives EMV(A) = 0.27(8) + 0.73(4) = 5.08. Alternative B offers a 0.45 chance at $10 and a 0.55 chance at $0, so EMV(B) = 0.45(10) + 0.55(0) = 4.5.]
4.9. The risk profiles and cumulative risk profiles are shown in the associated tabs in the Excel file
“Problem 4.8.xlsx.” The following risk profiles were generated by hand. The profiles generated by
PrecisionTree only include the two primary alternatives defined by the original decision “A” or “B”. To
also include the A-A1 and A-A2 distinction, the decision tree would need to be restructured so that there
was only one decision node with three primary alternatives, “A-A1”, “A-A2”, and “B”.
Risk profiles and cumulative risk profiles:
[Charts: hand-drawn risk profiles and cumulative risk profiles for A-A1, A-A2, and B, plotted over consequence values from 0 to 14.]
None of the alternatives is stochastically dominated (first-order) because the cumulative risk-profile lines
cross.
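The crossing of the cumulative risk profiles can also be checked numerically. The sketch below compares the cumulative distributions on a common grid, using the outcome probabilities from the Problem 4.8 tree; the profile encoding is ours.

```python
from itertools import permutations

# First-order stochastic dominance check for the Problem 4.9 risk profiles.
profiles = {
    "A-A1": [(4, 0.73), (8, 0.27)],
    "A-A2": [(4, 0.73), (0, 0.135), (15, 0.135)],
    "B":    [(0, 0.55), (10, 0.45)],
}

def cdf(profile, x):
    """P(payoff <= x) for a discrete risk profile given as (value, probability) pairs."""
    return sum(p for v, p in profile if v <= x)

grid = sorted({v for prof in profiles.values() for v, _ in prof})
for a, b in permutations(profiles, 2):
    # a (weakly) dominates b if a's cumulative curve is never above b's.
    dominates = all(cdf(profiles[a], x) <= cdf(profiles[b], x) for x in grid)
    print(f"{a} dominates {b}? {dominates}")
# Every comparison prints False, matching the crossing cumulative risk profiles above.
```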
4.10. The Excel file “Problem 4.10.xlsx” contains two versions of this influence diagram. The first version
is as shown below. The second version (in Worksheet “Alt. Influence Diagram”) takes advantage of
PrecisionTree's structural arcs to incorporate the asymmetries associated with the model. For example, if
you choose A for Choice 1, you don’t care about the result of Event B. In the basic influence diagram, you
need to consider all combinations, but in the alternative diagram, structural arcs are used to skip the
irrelevant nodes created by the asymmetries. Compare the value tables for the two alternatives –
incorporating structure arcs makes the value table much simpler because the asymmetries are captured by
the structure arcs. As mentioned previously, the results generated by PrecisionTree when analyzing an
influence diagram show only the optimal alternative, in this case Choice 1 = A.
To solve the influence diagram manually, see the diagrams and tables below. The online supplement for
Chapter 4 provides a detailed explanation.
[Influence diagram: "Choice 1" (A or B), "Choice 2," and the chance nodes "Event A," "Event A'," and "Event B" feed the "Value" node; the accompanying value tables list the payoff for every combination of outcomes.

Solving the diagram by hand:
1. Reduce Event B: wherever the payoff depends on Event B, its expected value is 4.5.
2. Reduce Event A': wherever the payoff depends on Event A', its expected value is 7.5.
3. Reduce Choice 2: the sure payoff of 8 beats the expected 7.5 from Event A', so Choice 2 takes the 8.
4. Reduce Event A: EMV(Choice 1 = A) = 5.08; EMV(Choice 1 = B) = 4.5.]
Finally, reducing Choice 1 amounts to choosing A because it has the higher EMV. This problem shows
how poorly an influence diagram works for an asymmetric decision problem!
4.11. If A deterministically dominates B, then for every possible consequence level x, A must have a
probability of achieving x or more that is at least as great as B’s probability of achieving x or more:
P(A ≥ x) ≥ P(B ≥ x) .
The only time these two probabilities will be the same is 1) for x less than the minimum possible under B,
when P(A ≥ x) = P(B ≥ x) = 1.0 — both are bound to do better than such a low x; and 2) when x is greater
than the greatest possible consequence under A, in which case P(A ≥ x) = P(B ≥ x) = 0. Here neither one
could possibly be greater than x.
4.12. It is important to consider the ranges of consequences because smaller ranges represent less
opportunity to make a meaningful change in utility. The less opportunity to make a real change, the less
important is that objective, and so the lower the weight.
4.13. Reduce “Weather”:
Take umbrella?    EMV
Take it           80
Don't take it     p(100)
Reducing “Take Umbrella?” means that “Take it” would be chosen if p ≤ 0.8, and “Don’t take it” would be
chosen if p > 0.8.
The Excel file “Problem 4.13.xlsx” contains the influence diagram for this problem. PrecisionTree allows
you to link the probability of weather to a cell location for variable p. Thus, to consider different
probability values, you simply need to change the value for the probability in cell J6, the location we chose
for p.
4.14. a. There is not really enough information here for a full analysis. However, we do know that the
expected net revenue is $6000 per month. This is a lot more than the sure $3166.67 = $400,000
× (0.095/12) in interest per month that the investor would earn in the money market.
b. If the investor waits, someone else might buy the complex, or the seller might withdraw it from the
market. But the investor might also find out whether the electronics plant rezoning is approved. He still will
not know the ultimate effect on the apartment complex, but his beliefs about future income from and value
of the complex will depend on what happens. He has to decide whether the risk of losing the complex to
someone else if he waits is offset by the potential opportunity to make a more informed choice later.
c. Note that the probability on the rezoning event is missing. Thus, we do not have all the information for a
full analysis. We can draw some conclusions, though. For all intents and purposes, purchasing the option
dominates the money-market alternative, because it appears that with the option the investor can do
virtually as well as the money-market consequence, no matter what happens. Comparing the option with
the immediate purchase, however, is more difficult because we do not know the precise meaning of
“substantial long-term negative effect” on the apartment complex’s value. That is, this phrase does not pass
the clarity test!
The point of this problem is that, even with the relatively obscure information we have, we can suggest that
the option is worth considering because it will allow him to make an informed decision. With full
information we could mount a full-scale attack and determine which alternative has the greatest EMV.
The structure of the tree is drawn in the Excel file “Problem 4.14.xlsx.” All the numbers necessary to do a
complete analysis are not provided.
[Decision tree: Buy option: if rezoning is denied, either buy the complex (expect $6,000 per month plus higher value) or invest $399,000 in the money market ($3,158.75 per month = $399,000 × 0.095/12); if rezoning is approved, either buy the complex (expect $6,000 per month plus lower value) or invest $399,000 in the money market. Buy complex now: rezoning denied means $6,000 per month plus higher value; rezoning approved means $6,000 per month plus lower value. Do nothing: invest $400,000 in the money market, $3,166.67 per month.]
4.15. a.
[Cumulative risk profiles for the three alternatives, plotted over net contributions from -500 to 3000: "Do nothing" is a sure small gain (about 3.35), "Buy stock" steps at -46.65 and 85.10, and "Option" steps at -500 and 3000.]
No immediate conclusions can be drawn. No one alternative dominates another.
b. The Excel file “Problem 4.15.xlsx” contains this decision tree and the risk profiles generated by
PrecisionTree.
[Decision tree (net contributions): Buy option on 1000 shares: if Apricot wins (0.25), the option is worth $3.5 per share, for 3000 = 3500 - 500; if Apricot loses (0.75), the option is worthless, for -500. Buy 17 shares for $484.50 and invest $15.50 in the money market: winning gives 85.10 = 33.5(17) + 15.5(1.0067) - 500; losing gives -46.65 = 25.75(17) + 15.5(1.0067) - 500. Do nothing: invest $500 in the money market for 1 month, 3.35 = 500(1.0067) - 500.]
Analyzing the decision tree with p = 0.25 gives
EMV(Option) = 0.25(3000) + 0.75(-500) = $375
EMV(Buy stock) = 0.25(85.10) + 0.75(-46.65) = -$13.71
EMV(Do nothing) = $3.33.
Thus, the optimal choice would be to purchase the option. If p = 0.10, though, we have
EMV(Option) = 0.10(3000) + 0.90(-500) = $-150
EMV(Buy stock) = 0.10(85.10) + 0.90(-46.65) = -$33.48
EMV(Do nothing) = $3.33.
Now it would be best to do nothing.
To find the breakeven value of p, set up the equation
p (3000) + (1 - p)(-500) = 3.33
Solve this for p:
p (3000) + p (500) = 503.33
p (3500) = 503.33
p = 503.33/3500 = 0.1438.
When p = 0.1438, EMV(Option) = EMV(Do nothing).
The break-even analysis can also be found using the built-in Excel tool Goal Seek. Because we are altering
a probability value, it is necessary to use formulas to guarantee that the probabilities always add to one.
These formulas are incorporated in the decision tree model. Then, use Excel's Goal Seek (from the Tools
menu). Select the outcome of the Buy option branch (cell $C$13) as the "Set Cell", the value of the money
market branch (3.33) as the "To value" (you can't enter a cell reference here with Goal Seek), and the
probability of winning the lawsuit (cell $F$5) as the "By changing cell." Goal Seek will find the
probability 14.4%.
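The Goal Seek result can also be confirmed with a line of algebra or code, since EMV(Option) is linear in p; a sketch:

```python
# Break-even probability for Problem 4.15 (option vs. money market).
win_payoff, lose_payoff, money_market = 3000, -500, 3.33

# EMV(option) = p*win + (1-p)*lose; set it equal to the money-market payoff and solve for p.
p_breakeven = (money_market - lose_payoff) / (win_payoff - lose_payoff)
print(f"Break-even probability: {p_breakeven:.4f}")   # about 0.1438
```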
4.16. The decision tree for this problem is shown in the Excel file “Problem 4.16.xlsx.” The cumulative risk
profile generated by PrecisionTree is shown in the second worksheet. Cumulative risk profiles:
[Chart: cumulative risk profiles for the Make and Buy alternatives over costs from 30 to 55; the Make profile lies to the left of the Buy profile.]
Johnson Marketing should make the processor because the cumulative risk profile for “Make” lies to the
left of the cumulative risk profile for “Buy.” (Recall that the objective is to minimize cost, and so the
leftmost distribution is preferred.) Making the processor stochastically dominates the “Buy” alternative.
4.17. Analysis of personal decision from problems 1.9 and 3.21.
4.18. a. Stacy has three objectives: minimize distance, minimize cost, and maximize variety. Because she
has been on vacation for two weeks, we can assume that she has not been out to lunch in the past week, so
on Monday, all of the restaurants would score the same in terms of variety. Thus, for this problem, we can
analyze the problem in terms of cost and distance. The following table gives the calculations for part a:
                    Distance   Distance    Cost     Cost     Overall
                               Score                Score    Score
Sam's                  10          0       $3.50      89       45
Sy's                    9         13       $2.85     100       56
Bubba's                 7         38       $6.50      41       39
Blue China              2        100       $5.00      65       83
The Eating Place        2        100       $7.50      24       62
Excel-Soaring           5         63       $9.00       0       31
In the table, “Distance Score” and “Cost Score” are calculated as in the text. For example, Sam’s cost score
is calculated as 100(3.50 - 9.00)/(2.85 - 9.00) = 89. The overall score is calculated by equally weighting the
cost and distance scores. Thus, S(Sam’s) = 0.5(0) + 0.5(89) = 45. The overall scores in the table are
rounded to integer values.
Blue China has the highest score and would be the recommended choice for Monday’s lunch.
b. Let’s assume that Stacy does not go out for lunch on Tuesday or Wednesday. For Thursday’s selection,
we now must consider all three attributes, because now variety plays a role. Here are Stacy’s calculations
for Thursday:
Sam’s
Sy’s
Bubba’s
Blue China
Eating
Excel-Soaring
Distance
10
9
7
2
2
5
Distance
Score
0
13
38
100
100
63
Cost
$3.50
$2.85
$6.50
$5.00
$7.50
$9.00
Cost
Score
89
100
41
65
24
0
Variety
Score
100
100
100
0
100
100
Overall
Score
63
71
59
55
75
54
The score for variety shows Blue China with a zero and all others with 100, reflecting Monday’s choice.
The overall score is calculated by giving a weight of 1/3 to each of the individual scores. Now the
recommended alternative is The Eating Place with an overall score of 75.
If we assume that Stacy has been out to eat twice before making Thursday’s choice, then the table would
have zeroes under variety for both Blue China and The Eating Place, and the recommended choice would
be Sy’s.
Note that it is necessary to do the calculations for part b; we cannot assume that Stacy would automatically
go to the next best place based on the calculations in part a. The reason is that a previous choice could be so
much better than all of the others on price and distance that even though Stacy has already been there once
this week, it would still be the preferred alternative.
4.19. The linked-tree is in the Excel file “Problem 4.19.xlsx.” The spreadsheet model defines the profit
based on the problem parameters: cost per mug and revenue per mug, the order decision, and the demand
uncertainty. The value of the order decision node is linked to the “Order Quantity” cell in the spreadsheet
model ($B$5). The values for the demand chance nodes are linked to the "Demand" cell in the spreadsheet
model ($B$8), and the outcome node values are linked to the Profit (no resale) in the spreadsheet model
($B$12) for part a, and to the Profit (resale) in the spreadsheet model ($B$13) for part b.
The results of the model show that they should order 15,000 if they can't sell the extras at a discount, and
they should order 18,000 if they can.
Spreadsheet Model
Order Quantity:       12000
Cost per mug:         6.75
Cost:                 =Order_Quantity*Cost_per_mug
Demand:               10000
Number Sold:          =IF(Demand<Order_Quantity,Demand,Order_Quantity)
Revenue per mug:      23.95
Revenue:              =B10*Number_Sold
Profit (no resale):   =Revenue-Cost
Profit (resale):      =Revenue-Cost+MAX(0,Order_Quantity-Number_Sold)*5
PrecisionTree results for Problem 4.19 (part a, no resale of leftover mugs):
Order 12,000 (FALSE): EMV = 194,425
   Demand is 10,000 (25%): profit 158,500
   Demand is 15,000 (50%): profit 206,400
   Demand is 20,000 (25%): profit 206,400
Order 15,000 (TRUE): EMV = 228,062.5
   Demand is 10,000 (25%): profit 138,250
   Demand is 15,000 (50%): profit 258,000
   Demand is 20,000 (25%): profit 258,000
Order 18,000 (FALSE): EMV = 225,775
   Demand is 10,000 (25%): profit 118,000
   Demand is 15,000 (50%): profit 237,750
   Demand is 20,000 (25%): profit 309,600
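The same EMVs are easy to reproduce outside PrecisionTree. Here is a minimal sketch (Python) of the profit model, using the parameters from the spreadsheet above.

    # Problem 4.19: mug order quantity. Cost $6.75, price $23.95, resale $5 per leftover mug.
    cost, price, resale = 6.75, 23.95, 5.0
    demand_dist = {10_000: 0.25, 15_000: 0.50, 20_000: 0.25}

    def profit(order, demand, allow_resale):
        sold = min(order, demand)
        p = price * sold - cost * order
        if allow_resale:
            p += resale * (order - sold)   # leftover mugs sold at the discount price
        return p

    for allow_resale in (False, True):
        emvs = {q: sum(pr * profit(q, d, allow_resale) for d, pr in demand_dist.items())
                for q in (12_000, 15_000, 18_000)}
        print(allow_resale, emvs, "-> best order:", max(emvs, key=emvs.get))
    # Without resale the best order is 15,000 (EMV 228,062.50); with resale it is 18,000.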
4.20. The most compelling argument for using EMV when deciding among alternatives is presented later in
the book when we discuss utility theory (Chapters 14 & 15). There we show that if you agree with a few
postulates and if you are risk neutral, then you are most satisfied when choosing the alternative that
maximizes EMV. As an example of one of the postulates, consider transitivity, which states: if you prefer
A to B and prefer B to C, then you prefer A to C. As most people readily agree to the transitivity and other
postulates, most people are served well by choosing the alternative that maximizes EMV.
The main con of using EMV as a decision criterion is that the decision maker may not be risk neutral; that is, the amount of risk associated with the different alternatives does matter to him or her. The EMV still serves as the
single best summary of what could happen, but in this case, the decision maker is interested in more than a
single number summary. To choose the alternative they like the most, the risk level of the alternative needs
to be taken into consideration. Typically, we do this by examining the range of possible payouts via the risk
profile. Not only does the risk profile report the different payouts, but it also reports the probability of those
payouts occurring.
The pros of using EMV are that it makes logical sense and that it is straightforward. Incorporating risk into the decision criterion involves making tradeoffs. Comparing risk profiles requires the decision maker to somehow balance the rewards and risks, whereas comparing EMVs requires no tradeoffs. It is quite easy to
view a list of EMVs and choose the largest one. Even in one-off cases, the EMV summarizes all the
different payoffs using both the monetary values and their probabilities into a coherent, easy to understand
single value useful for making comparisons.
4.21. This is an algebra problem. We are given:
p1 + p2 + p3 = 1;  q1 + q2 + q3 = 1;  r1 + r2 + r3 = 1;  and t1 + t2 + t3 = 1.
We want to show that the nine end-node probability values sum to one:
p1r1 + p1r2 + p1r3 + p2q1 + p2q2 + p2q3 + p3t1 + p3t2 + p3t3 = 1.
We do this by grouping terms:
p1r1 + p1r2 + p1r3 + p2q1 + p2q2 + p2q3 + p3t1 + p3t2 + p3t3
  = p1(r1 + r2 + r3) + p2(q1 + q2 + q3) + p3(t1 + t2 + t3)
  = p1 + p2 + p3 = 1.
4.22. Yes, if A dominates B, then the EV(A) is always better than the EV(B). For example, assume our
consequence measure is profit, and thus the more profit the better. If alternative A dominates alternative B,
then for any probability value 𝑝, 0 ≤ 𝑝 ≤ 1, A’s associated payoff or profit is always as good as and at
times better than B’s associated profit. Because EV(A) is a weighted average of the profit values, weighted
by the probabilities, EV(A) must be greater than EV(B).
EV(A) = Σp p × ProfitA(xp) > Σp p × ProfitB(xp) = EV(B),
where ProfitA(xp) is A's profit for the outcome xp that occurs with probability p.
4.23. Students often mix up the mode and mean, and this problem is to help them distinguish these
statistics. The Excel solution is given in file “Problem 4.23.xlsx,” and we copied the summary statistics
below. Note that the values (in model and below) are given in $1000s. Thus, bidding $750,000 has an EMV
of $35,480 and bidding $700,000 has an EMV of $40,950. Because the higher bid amount has only a 30% chance of being accepted, versus a 60% chance for the lower bid amount, we see that bidding the lower amount produces the higher average profit.
The modal value for both alternatives is zero. This is because 0 is the most likely outcome for either bid
amount. As a matter of fact, bidding $750,000 results in a $0 profit 70% of the time and bidding $700,000
results in a $0 profit 40% of the time. These probabilities can be read from either the probability or
cumulative charts. Clearly, the mean or EMV is different from the mode and tells us a different story. Both
statistics give us different insights, and both are useful.
Statistics (values in $1000s):
                    Bid = $750,000    Bid = $700,000
Mean                    $35.48            $40.95
Mode                        $0                $0
Minimum                     $0              -$20
Maximum                   $205              $155
Std. Deviation             $58               $46
Skewness                1.3130            0.7538
Kurtosis                3.2847            2.4924
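For students who want to see the mean/mode distinction computed directly, a tiny sketch along these lines works; the outcome-probability pairs below are hypothetical, not the distribution from the problem (which is in "Problem 4.23.xlsx").

    # Mean (EMV) versus mode of a discrete risk profile (hypothetical numbers).
    profile = {0: 0.70, 100: 0.20, 205: 0.10}       # profit ($1000s) -> probability
    mean = sum(x * p for x, p in profile.items())   # probability-weighted average
    mode = max(profile, key=profile.get)            # single most likely outcome
    print(mean, mode)   # 40.5 and 0: the most likely profit is $0 even though the EMV is positive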
Case Study: Southern Electronics, Part I
1. Steve’s reservation price is $10 Million, the value of the offer from Big Red. This is the least he should
accept from Banana.
2. This influence diagram is modeled in the Excel file “Southern Case Parts 1 & 2.xlsx.” An additional
structure arc is included between “Choice” and “EYF Success” to model the fact that if Steve chooses Big
Red, the probability of EYF Success is irrelevant.
The influence diagram has chance node "EYF Success," decision node "Choice," and nodes "Banana Share Price" and "Value," with the following tables:
• EYF Success: Yes (0.4), No (0.6).
• Banana Share Price: $40 if the EYF succeeds, $25 if it fails.
• Choice: Big Red or Banana.
• Value ($ millions): 10 for Big Red at either share price; 11 for Banana if the share price is $40 and 8.75 if it is $25.
3. The decision tree (values in $ millions):
• Banana: EYF succeeds (0.4) → 11 M = 5 M + (40 × 150,000); EYF fails (0.6) → 8.75 M = 5 M + (25 × 150,000). EMV = $9.65 M. (The expected share price is $40 given success and $25 given failure.)
• Big Red: 10 M = 5 M + (50 × 100,000).
The decision tree shown in the second worksheet of the Excel file “Southern Case.xlsx” is the version of
the tree that is automatically generated by the “Convert to Tree” function for the associated influence
diagram in the first worksheet.
It shows calculation nodes for the Banana share price as represented in the influence diagram but not
explicitly in the above decision tree. Also, formulas are used in the calculation of the output values for the
Banana decision where the formula includes the certain $5M plus 150000 shares * BranchVal("Banana
Share Price").
4. Obviously, Steve cannot accept the Banana offer because for him it has a lower EMV than Big Red’s
offer. Furthermore, it is riskier than Big Red’s.
Note that Banana calculates the expected value of its offer as
EMV
= 0.6[$5 M + (50 × 150,000)] + 0.4[$5 M + (25 × 150,000)]
= 0.6 (12.5 M) + 0.4 (8.75 M)
= 11 M.
So naturally Banana thinks it is making Steve a good offer!
Case Study: Southern Electronics, Part II
1. The second part of the Southern Electronics case now includes an option to hedge against the risk of the
share price. This decision tree is shown in the third worksheet in the Excel file
“Southern Electronics Parts 1 & 2.xlsx.”
The decision tree with the hedge (values in $ millions):
• Banana: EYF success (0.4), expected share price $40 → $11.73 M = $530,000 + (40 × 280,000); EYF failure (0.6), Steve exercises the option and sells the shares for $30 each → $8.93 M = $530,000 + (30 × 280,000). EMV = $10.05 M.
• Big Red: $10 M.
Steve’s expected value is
EMV(Steve) = 0.4($11.73 M) + 0.6($8.93 M) = $10.05 M.
2. Banana’s expected cost is calculated by noting that
• Banana pays $530,000 in cash.
• Banana must hand over 280,000 shares worth $30 per share. Thus,
280,000 × $30 = $8.4 M
• If the EYF fails (which they assess as a 40% chance), Banana must buy back the shares at an
expected price of $5 per share over the market value ($30 - $25). In this case Banana would incur
a cost of $5 × 280,000 = 1.4 M.
Thus, Banana’s expected cost is
E(cost) = $0.53 M + $8.4 M + 0.4 (1.4 M) = $9.49 M.
The expected cost of this offer thus is less than the $9.5 M cost for their original offer. Of course, Steve’s
offer is somewhat riskier than the original offer.
This problem demonstrates how two parties in a negotiation can exploit differences in probability
judgments to arrive at an agreement. The essence of such exploitation is to set up some sort of a bet. In this
case, the bet involves whether the EYF fails. Accepting this offer would mean that Banana is “betting” that
the EYF succeeds, and Steve is “betting” (or is protected) if the EYF fails.
(Side note from Clemen: EYF stands for Elongated Yellow Fruit. I stole the abbreviation from James
Thurber, whose editor once complained that Thurber used fancy phrases (elongated yellow fruit) instead of
simple ones (banana)).
Case Study: Strenlar
Strenlar is a very messy case. Here is the structure in an influence diagram. This model is shown in the
Excel file “Strenlar.xlsx.” This influence diagram is shown in the first worksheet.
The influence diagram has a decision node ("Decision?"), two chance nodes ("Court outcome?" and "Mfg process success?"), and a payoff node ("Payoff").
In order to do a good job with the analysis, a number of assumptions must be made. Here is a reasonable
set:
• $8 million in profits is the present value of all profits to be realized over time.
• $35 million in sales is also the present value of all future sales.
• Fred Wallace’s interest rate is 7.5% per year.
• Fred’s time frame is 10 years. We will use 10 years to evaluate the job offer and the lump sum.
• If Fred goes to court, he continues to be liable for the $500,000 he owes. The $8 million in
profits is net after repaying the $500,000. As indicated in the case, if Fred and PI reach an agreement, then PI repays the debt.
• If Fred accepts the lump sum and options, and if the manufacturing process works, then the options pay 70,000 × $12 at a point 18 months in the future. Thus, the present value of the options is (12 × 70,000)/1.0375^3 = $752,168. That is, we figure three periods at 3.75% per period. The purpose of this is simply to put a value on the options.
• The options are non-transferable. Thus, there is no need to figure out the “market value” of the
options.
These assumptions are set up in a spreadsheet model on the second worksheet in the Excel file
“Strenlar.xlsx.” These assumptions are then used in the decision tree model on the same worksheet. For
example, the outcome of going to court, winning, and the manufacturing process is the NPV of all profits
from Strenlar over time (cell $B$3). Also, in cell H3 the NPV of 10 years making $100,000 per year is
calculated at $686,408 and in H4 the NPV of the option is calculated at $752,168.
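The two present values used in the model are quick to verify; a minimal sketch (Python) under the assumptions above:

    # Present values for the Strenlar model (assumptions listed above).
    rate = 0.075                      # Fred's annual interest rate
    # NPV of the PI salary: $100,000 per year for 10 years.
    pv_salary = sum(100_000 / (1 + rate) ** t for t in range(1, 11))
    # PV of the options: 70,000 shares x $12, 18 months out,
    # discounted for three 6-month periods at 3.75% per period.
    pv_options = 70_000 * 12 / 1.0375 ** 3
    print(round(pv_salary), round(pv_options))   # about 686,408 and 752,168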
Fred’s tree looks like this:
An explanation of the end-node values:
• If Fred wins in court, but cannot fix the manufacturing process, then he must pay back the
$500,000.
• If he loses in court, then he must also pay the $60,000 in court fees.
• If he accepts the job (present value of $686,408) and fixes the manufacturing process, he also
earns 6% of $35 million, which is $2.1 million.
• If he accepts the job and the process cannot be fixed, he has $686,408.
• If he takes the lump sum, then either he earns $500,000 or an additional $752,168 as the present
value of the options.
Going to court is clearly Fred’s best choice if he wants to maximize EMV. In fact, this conclusion is
remarkably robust to the assumptions that are made (as long as the assumptions are reasonable). If he is risk
averse—and the case indicates that he may be substantially so—one of the other alternatives may be
preferable. Furthermore, it would be appropriate to consider alternatives he might pursue over the next ten
years if he were to take the lump sum.
The analysis above is OK in financial terms, but that is as far as it goes. It does ignore other objectives that
Fred might have. It may be helpful to characterize Fred’s three alternatives:
1. Stick with Strenlar; don’t succumb to PI. This alternative has by far the greatest “upside potential”
of $8 Million. Such a consequence also would be quite satisfying and his future would be secure.
He could also fail, however, or suffer from hypertension. This clearly is the riskiest choice.
2. Play it safe. Accept the PI job offer. After all, there is a good chance Fred will earn a lot of money
in royalties. But will things be the same at PI as they were in the past?
3. Accept the lump sum. Fred can have $500,000 immediately and ten years to do something else,
plus a chance at an additional $840,000 18 months in the future. What could he do with resources
like this?
From this characterization, it is clear that Fred has to think about whether the potential wealth to be gained
from Strenlar is worth substantial risk, both financially and in terms of his health. If he decides that it is not,
then he should consider whether the security of the PI job outweighs his potential for alternative pursuits if
he were to take the lump sum.
Wallace’s risk profiles show clearly that refusing PI is by far the riskiest choice. The cumulative risk profile
shows, at least from the financial perspective, that taking the job dominates the lump sum offer.
[Cumulative risk profiles for decision tree 'Strenlar' (choice comparison for node 'Decision'): curves for Go to Court, Accept job, and Accept lump sum, plotted over roughly -$1,000,000 to $9,000,000.]
Case Study: Job Offers
Robin's decision tree is in the Excel file "Job Offers.xlsx" and is shown below. The consequences in the tree use formulas in the cells under each branch to link each path to the associated overall score in the spreadsheet model.
[PrecisionTree decision tree for the Job Offers case, from "Job Offers.xlsx." For Madison Publishing, a chance node for disposable income ($1,500 or $1,300) is followed by a chance node for snowfall (probabilities 15%, 70%, 15%); MPR Manufacturing has a snowfall chance node; Pandemonium Pizza has a certain outcome. Each end node is linked to the overall score in the spreadsheet model (weights 0.17 on income, 0.33 on snowfall, 0.50 on the magazine rating). Madison Publishing is the optimal (TRUE) branch, with an expected overall score of about 54; MPR (about 36) and Pandemonium (50) are FALSE.]
1. The student must calculate the proportional scores in the consequence matrix. For example, the snowfall
rating when it snows 100 cm is 100(100 cm - 0)/(400 cm - 0) = 25. These formulas are shown in the
spreadsheet model.
2. Annual snowfall obviously is a proxy in this situation. It may not be perfect, depending on other climatic
conditions. For example, 200 cm of snowfall in Flagstaff may be very different from 200 cm in Minnesota;
snow in Flagstaff typically melts after each storm, but not so in Minnesota. Other possible proxies might be
proximity to mountains, average snowpack during February (for example), or the average high temperature
during winter months. Another possibility is to consider a non-weather index such as the amount of winter-sports activity in the area.
3. The weights are km = 1/2, ks = 1/3, and ki = 1/6. Calculate these by solving the equations km = 1.5ks, km
= 3ki, and km +ks + ki = 1.
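The same weights can be obtained directly from the two ratio judgments and the requirement that the weights sum to one; a quick check (Python):

    # Solve km = 1.5*ks, km = 3*ki, km + ks + ki = 1.
    # Substituting ks = km/1.5 and ki = km/3 into the sum gives km*(1 + 1/1.5 + 1/3) = 1.
    km = 1 / (1 + 1 / 1.5 + 1 / 3)
    ks, ki = km / 1.5, km / 3
    print(km, ks, ki)   # 0.5, 0.3333..., 0.1666...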
Using these weights, we can calculate the overall scores at the end of each branch of the tree:
Weights:                      Income 0.17   Snowfall 0.33   Magazine 0.50   Overall Score
Ratings:
Madison ($1,500; 100 cm)            75              25              56              49
Madison ($1,500; 200 cm)            75              50              56              57
Madison ($1,500; 400 cm)            75             100              56              74
Madison ($1,300; 100 cm)            25              25              56              41
Madison ($1,300; 200 cm)            25              50              56              49
Madison ($1,300; 400 cm)            25             100              56              66
MPR (150 cm)                       100            37.5               0              29
MPR (230 cm)                       100            57.5               0              36
MPR (320 cm)                       100              80               0              43
Pandemonium                          0               0             100              50
4. Expected values for the three individual attributes and the overall score are given in the following table:
EXPECTED VALUES
               Income     Snowfall     Magazine Score     Overall Score
Pandemonium     $1200         0 cm            95                 50
MPR             $1600     231.5 cm            50                 36
Madison         $1420       215 cm            75                 55
5, 6. The cumulative risk profiles are shown below. Note that different alternatives are stochastically
dominant for different attributes. Looking at the overall score, MPR is clearly dominated. The Excel file
“Job Offers Case.xlsx” shows the cumulative risk profile for all three measures and separate risk profiles
for each measure individually.
In expected value, Madison Publishing comes in second on each attribute but is a clear winner overall.
Likewise, Madison looks good in the overall score risk profile, clearly dominating MPR. A good strategy
for Robin might be to go to Madison Publishing in St. Paul and look for a good deal on an apartment.
Nevertheless, Robin would probably want to perform some sensitivity analysis on the weights used to
determine how robust the choice of Madison is (in terms of the calculated expected values and risk profiles)
to variations in those weights.
[Cumulative risk profiles (legend: Madison, MPR, Pandemonium) for each measure: Income (about $1,000 to $1,800), Snowfall (0 to 400 cm), Magazine Score (0 to 100), and Overall Score (0 to 100).]
Case Study: SS Kuniang, Part II
1. The problem is how to sort through all of the possible bids between $3 and $10 million to find the
one with the lowest expected cost. One possibility is to construct a model that calculates the expected cost
for a given bid and then search for the optimum bid by trial and error. A better approach would be to
construct a model and use an optimization routine to find the optimum bid; for example, a spreadsheet
model can be constructed in Microsoft Excel, and then Excel’s Solver can be used to find the optimum.
This model is constructed in the Excel file “SSKuniang II.xlsx.”
2. The details provided lead to the following decision tree:
Bid amount (decision):
• Win bid (p | Bid) → Coast Guard judgment: $9 M (0.185) → cost = Max(Bid, $13.5 M); $4 M (0.630) → cost = Max(Bid, $6 M); $1.5 M (0.185) → cost = Max(Bid, $2.25 M).
• Lose bid (1 - p | Bid) → cost = $15 M (tug-barge).
Two things are notable in the decision tree. First, the probability of submitting the winning bid, p | Bid, is
calculated according to the formula given in the problem. (Incidentally, this way of calculating the
probability is consistent with a belief that the highest competitive bid is uniformly distributed between $3
and $10 million; see Problems 9.27 and 9.28). Second, there is, strictly speaking, a decision following the
Coast Guard judgment, and that would be whether to complete fitting out the Kuniang or to go with the tug
and barge. Because the bid will never be more than $10 million, however, the final cost after the Coast
Guard judgment will never be more than $13.5 million, less than the $15 million for the tug and barge.
To run the model constructed in the spreadsheet, use the built-in Solver tool to minimize the cost of the
Decision (cell $B$6) while constraining the bid amount (cell $A$22) between $3 and $10. To run the
Solver tool, select Solver from the Tools menu. Using this model, the optimal bid is $9.17 million with an
expected cost of $10.57 million.
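The optimization is also easy to reproduce outside Excel. The sketch below (Python) assumes that the probability of winning with a bid of b million dollars is p = (b - 3)/7, the form consistent with the uniformly distributed competing bid mentioned above; with that assumption it reproduces the $9.17 million bid and the $10.57 million expected cost.

    # SS Kuniang, Part II: expected cost of a bid b (in $ millions).
    # Assumes P(win | bid b) = (b - 3)/7 for 3 <= b <= 10 (uniform competing bid).
    def expected_cost(b):
        p_win = (b - 3) / 7
        cost_if_win = (0.185 * max(b, 13.5) +    # Coast Guard judges $9 M
                       0.630 * max(b, 6.0) +     # Coast Guard judges $4 M
                       0.185 * max(b, 2.25))     # Coast Guard judges $1.5 M
        return p_win * cost_if_win + (1 - p_win) * 15.0   # lose: $15 M tug-barge

    bids = [3 + i * 0.01 for i in range(701)]    # simple grid search from $3 M to $10 M
    best = min(bids, key=expected_cost)
    print(round(best, 2), round(expected_cost(best), 2))   # about 9.17 and 10.57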
Case Study: Marketing Specialists, Ltd.
1. See the file “Marketing Specialists.xlsx” for the full analysis.
2. With the commission rate set to 15%, we can use PrecisionTree to calculate the following:
Cost Plus:    EMV = €291,391;  Std Deviation = €20,040
Commission:   EMV = €56,675;   Std Deviation = €958,957
P(Cost Plus better than Commission) = 51.5%
The risk profiles (below) show how risky the Commission option is compared to Cost Plus. In addition, the
cumulative risk profile shows that neither option stochastically dominates the other. At a commission rate
of 15%, it would be difficult to justify the Commission Option.
3. Using Goal Seek, it is straightforward to find that the breakeven commission rate is about 16.96%.
Although this sets the EMVs equal, the risk profiles are still vastly different (see below). As a result, it is
still difficult to argue for the Commission Option. Why take the Commission when it gives the same EMV
but is so much riskier?
So the problem Grace Choi faces is to find a commission rate that gives a high enough EMV to offset the
increased risk. For example, when the commission rate is 25%, the statistics and risk profiles are:
EMV = €850,136; Std Deviation = €518,868; P(Cost Plus better than Commission) = 12.25%
This looks a lot better for Commission! Note that some students will want a commission rate of 30% or
more to be sure that the Commission generates at least as much as Cost Plus. That might be a good opening
offer from which to negotiate, but probably there is a lower commission rate that would work for Grace and
Marketing Specialists. A good follow-up exercise would be to incorporate the idea of risk tolerance
(Chapter 14) to find the commission rate that makes the expected utilities of the two options the same.
CHAPTER 5
Sensitivity Analysis
Notes
Because no optimal procedure exists for performing sensitivity analysis, this chapter is somewhat “looser”
than the preceding. An effort has been made to present some of the more basic sensitivity analysis
approaches and tools. It is important to keep in mind that the purpose of sensitivity analysis is to refine the
decision model, with the ultimate objective of obtaining a requisite model.
For those instructors who enjoy basing lectures on problems and cases, an appealing way to introduce
sensitivity analysis is through the Southern Electronics cases from Chapter 4. If this case has just been
discussed, it can be used as a platform for launching into sensitivity analysis. Start by constructing a simple
(one bar) tornado diagram for the value to Steve of Banana’s offer. The endpoints of the bar are based on
the value of Banana’s stock ($20 to $60, say). Because the bar crosses the $10 million mark, the value of
Big Red’s offer, it would appear that the uncertainty about the stock price should be modeled. A second
tornado diagram can be created which considers the sensitivity of the expected value to 1) the probability of
success (from 0.35 up to 0.5, say); 2) the value of Banana’s stock if the EYF succeeds ($35 to $60); and 3)
the value of Banana’s stock if the EYF fails ($20 to $30). An advantage of this is to show that tornado
diagrams can be used on EMVs as well as sure consequences.
After showing tornado diagrams for Southern Electronics, it is natural to construct a two-way sensitivity
graph for the problem. One possibility is to construct a three-point probability distribution for the value of
the stock if the EYF succeeds. Denote two of the three probabilities by p and q, and construct the graph to
show the region for which Banana’s offer is preferred.
Sensitivity analysis is one of those topics in decision analysis that can be tedious and boring if done by
hand but quite exciting when a computer can be used. PrecisionTree, along with Goal Seek and Data Tables
in Excel provide some very powerful tools for sensitivity analysis. The software provides the capability to
determine which inputs have the largest effect on a particular output, how much change you can expect in
an output when a given input changes by a defined amount, and which variables in the model change the
rank ordering of the alternatives. The chapter provides step-by-step instructions for setting up a sensitivity
analysis in PrecisionTree to create tornado diagrams, sensitivity graphs, and spider graphs. Instructions are
also provided on how to use Goal Seek and Data Tables as sensitivity analysis tools.
PrecisionTree saves the entries for the sensitivity analysis dialog box, which is helpful upon returning to the
model. When creating student handout worksheets, however, you may want your students to make their
own entries, in which case, make sure the dialog box is empty.
In the Excel solution files, the variables or cells used in the sensitivity analysis have been shaded green.
This should help in reading and understanding the analysis a little better.
Topical cross-reference for problems

Cost-to-loss ratio problem        5.6, 5.7
Linked Decision Trees             5.9, 5.12
Multiple objectives               5.10, 5.12, Job Offers Part II, MANPADS
Negotiations                      DuMond International
Net present value                 5.10
PrecisionTree                     5.8-5.12, DuMond International, Strenlar Part II, Job Offers Part II, MANPADS
Requisite models                  5.4
Sensitivity analysis              5.1-5.12, DuMond International, Strenlar Part II, Job Offers Part II, MANPADS
Texaco-Pennzoil                   5.11
Tornado diagrams                  5.12, Strenlar Part II
Trade-off weights                 5.9, Job Offers Part II
Two-way sensitivity analysis      5.5, 5.10, 5.11, Job Offers Part II, Strenlar Part II, MANPADS
Umbrella problem                  5.6
Solutions
5.1. Sensitivity analysis answers the question “What matters in this decision?” Or, “How do the results
change if one or more inputs change?” To ask it still another way, “How much do the inputs have to change
before the decision changes?” Or “At what point does the most preferred alternative become the second
most preferred and which alternative moves into the number one spot?”
We have framed the main issue in sensitivity analysis as “What matters” because of our focus on
constructing a requisite model. Clearly, if a decision is insensitive to an input—the decision does not
change as the input is varied over a reasonable range—then variation in that input does not matter. An
adequate model will fix such an input at a “best guess” level and proceed.
By answering the question, “What matters in this decision,” sensitivity analysis helps identify elements of
the decision situation that must be included in the model. If an input can vary widely without affecting the
decision, then there is no need to model variation in that input. On the other hand, if the variation matters,
then the input’s uncertainty should be modeled carefully, as an uncertain variable or as a decision if it is
under the decision maker’s control.
5.2. This is a rather amorphous question. The decision apparently is, “In which house shall we live?”
Important variables are the value of the current home, costs of moving, purchase price, financing
arrangements (for the current house as well as for a new one), date of the move, transportation costs, and so
on.
What role would sensitivity analysis play? The couple might ask whether the decision would change if they
took steps to reduce driving time in the future (e.g., by deciding to have no more children). How does the
attractiveness of the different alternatives vary as the family’s size and the nature of future outings are
varied? (For example, how much more skiing would the family have to do before living out of town is
preferred?) Can the family put a price tag on moving into town; that is, is there an amount of money (price,
monthly payment, etc.) such that if a house in town costs more than this amount, the family would prefer to
stay in the country?
5.3. Another rather amorphous question. Students may raise such issues as:
• Is a retail business the right thing to pursue? (Is the right problem being addressed?)
• Does the father really want to be a retailer?
• Operating costs and revenues may vary considerably. These categories cover many possible
inputs that might be subjected to sensitivity analysis.
To use sensitivity analysis in this problem, the first step would be to determine some kind of performance
measure (NPV, cash flow, payback, profit). Then a tornado diagram could be constructed showing how the
selected performance measure varies over the range of values for the inputs. The tornado diagram will
suggest further modeling steps.
5.4. From the discussion in the text, the basic issue is whether some relevant uncertainty could be resolved
during the life of the option. Some possibilities include:
• Obtaining a new job, promotion, or raise,
• Obtaining cash for a down payment,
• Learning about one’s preferences. “Is this house right for me?”
• Are there hidden defects in the house that will require repairs?
• Are zoning decisions or other developments hanging in the balance?
If such an uncertainty exists, then the option may have value. If not, it may be a dominated alternative.
5.5. Each line is a line of indifference where two of the alternatives have the same EMV. Now imagine a
point where two lines cross. At that point, all three of the alternatives must have the same EMV.
Point D is at t = 0.4565 and v = 0.3261 and is the point where all three EMVs are equal. Thus, at D, the
EMV(High-Risk Stock) = EMV(Low-Risk Stock) = $500. The exact location of D can be found using
algebra, using Solver, or using a combination of algebra and Goal Seek. Because there are two unknowns,
Goal Seek needs additional information. For example, to use Goal Seek: Substitute 𝑣 = (9 − 14𝑡)⁄8,
which corresponds to Line CDE, into the expression for EMV(High-Risk Stock). Now you have one
equation and one unknown, and you can use Goal Seek to find the value of t for which the new expression
equals 500. Point D is unique in that there is no other point at which all 3 alternatives have equal expected
monetary values.
5.6.
Cost of protective action = C
Expected loss if no action is taken = pL
Set C = pL and solve for p: p = C/L.
Thus, if p ≥ C/L, take protective action.
The only information needed is p and C/L. Note that the specific values of C and L are not required, only their relative values.
5.7. The best way to see whether it is necessary to model the uncertainty about D is to revert to the regular cost-to-loss problem. If pL < C (< C + D), then one would not take action, and if pL > C + D, then the optimal choice would be to take action. However, if C < pL < C + D, then the occurrence of damage D does matter: the choice between taking action and not taking action is unclear, and one would want to include D in the decision tree.
5.8. This problem can either be set up to maximize expected crop value (first diagram below) or minimize
expected loss (second diagram below). The solution is the same either way; the difference is the perspective
(crop value vs. loss). The decision tree that maximizes expected crop value is modeled in the Excel file
“Problem 5.8.xlsx” along with the sensitivity reports.
The expected loss from doing nothing is much greater than for either of the two measures, and so it is
certainly appropriate to take some action. The expected loss for burners is almost entirely below that for
sprinklers, the only overlap being between $14.5K and $15K. It would be reasonable to set the burners
without pursuing the analysis further.
Another argument in favor of this is that most likely the same factors lead to more or less damage for both
burners and sprinklers. With this reasoning, there would be a negligible chance that the burners would
produce a high loss and the sprinklers a low loss.
A final note: Some students may solve this problem without calculating the expected loss, comparing the
range of losses from burners or sprinklers if damage occurs with the $50K loss from doing nothing.
However, if uncertainty about the weather is ignored altogether, the appropriate analysis has the loss
ranging from $0 to $50K for no action, $5 to $25K for the burners, and $2 to $32K for the sprinklers.
Because the three ranges overlap so much, no obvious choice can be made. It is, therefore, appropriate and
necessary to include the probability of adverse weather and calculate the expected losses.
Decision tree (losses in $000s); the same tree can be set up either to maximize expected crop value or to minimize expected loss:
• Do nothing: Freeze (0.5) → 50; No freeze (0.5) → 0. Expected loss = $25K.
• Set burners: Freeze (0.5) → 20 to 25; No freeze (0.5) → 5. Expected loss = $12.5K to $15K.
• Use sprinklers: Freeze (0.5) → 27 to 32; No freeze (0.5) → 2. Expected loss = $14.5K to $17K.
5.9. This decision tree (shown in Figure 4.40 in the text) is modeled in the Excel file "Problem 5.9.xlsx." The model is a linked tree where the uncertainty node for the amount of fun is linked to cell $F$6 in the spreadsheet model ("Fun Level for Forest Job"), and the uncertainty node for the salary level is linked to cell $G$7 in the spreadsheet model ("Salary Level for In-town Job"). The outcome nodes for the Forest
Job are linked to cell $F$8 and the outcome nodes for the In-Town Job to cell $G$8. The user can then vary
the weights to see that Sam will still prefer the forest job. The sensitivity analysis gives the following
results:
Expected Overall Score
                 ks = 0.50    ks = 0.75
Forest Job         71.25       76.125
In-Town Job        57.50       56.25
Thus, regardless of the precise value of ks, the optimal choice is the forest job. In fact, a much stronger
statement can be made; it turns out that for no value of ks between zero and one is the in-town job
preferred. Smaller values of ks favor the in-town job, but even setting ks = 0 leaves the expected overall
scores equal to 60 and 61.5 for the in-town and forest jobs, respectively.
Another way to show the same result is to realize that the expected overall scores are linear in the weights
and in the expected scores for the individual attributes. Because the forest job has higher expected scores
on both attributes, there cannot exist a set of weights that makes the in-town job have the higher overall
expected score.
5.10. Using the base values of $5000 for the cash flows and 11% for the interest rate,
NPV = -14,000 + 5000/1.11 + 5000/1.11^2 + ... + 5000/1.11^6 = $7,153.
When the cash flows are varied from $2500 to $7000, and the interest rate is varied from 9.5% to 12%, the
following tornado diagram results:
[Tornado diagram for NPV: a bar for cash flows ($2,500 to $7,000) and a bar for interest rate (9.5% to 12%), with NPV plotted from about -$4,000 to $16,000; the cash-flow bar is far wider.]
This graph assumes that the cash flows vary between $2500 and $7000 and the amount is the same across
all 6 years. The range of possible interest rates appears not to pose a problem; NPV remains positive within
a relatively narrow range. On the other hand, NPV is more sensitive to the range of cash flows. It would be
appropriate to set the interest rate at 11% for the analysis but to model the uncertainty about the cash flows
with some care.
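The NPV and the one-way swings behind the tornado diagram are straightforward to verify; a minimal sketch (Python) using the base values and ranges above:

    # Problem 5.10: NPV of -$14,000 now plus equal annual cash flows for 6 years.
    def npv(cash_flow, rate, years=6, outlay=14_000):
        return -outlay + sum(cash_flow / (1 + rate) ** t for t in range(1, years + 1))

    print(round(npv(5000, 0.11)))                          # base case: about 7,153
    # One-way swings, holding the other input at its base value.
    print(round(npv(2500, 0.11)), round(npv(7000, 0.11)))  # cash-flow endpoints
    print(round(npv(5000, 0.12)), round(npv(5000, 0.095))) # interest-rate endpoints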
The tornado diagram generated by PrecisionTree is shown in the Excel file "Problem 5.10.xlsx." The
solution file also shows a two-way data table to calculate the swing of NPV as the annual payment and the
interest rate vary. Note that the table reports many more values than the tornado diagram, as the diagram
only incorporates one column and one row of the table.
[PrecisionTree tornado graph of decision tree 'Problem 5.10' (expected value of entire model), with bars for Annual Payment (B8) and Interest Rate (B13).]
The problem can also be modeled by varying each year between $2500 and $7000, one at a time. This
model is shown in the Excel file “Problem 5.10.xlsx.” Because of the discounting factor, the early year
payments are more influential than the later years. Said differently, NPV is more sensitive to the early year
payments than to later years.
[PrecisionTree tornado graph of decision tree 'Problem 5.10 by Year' (expected value of entire model), with one bar for each of Year 1 (B8) through Year 6 (G8); the bars shrink from Year 1 to Year 6.]
Some students may see the error message "Model Extends Beyond Allowed Region of Worksheet," which means that their version of the software is limited to one decision model per workbook.
5.11. First, sensitivity analysis by hand: Let’s establish some notation for convenience:
Strategy A = Accept $2 billion.
Strategy B = Counteroffer $5 billion, then refuse if Texaco offers $3 billion.
Strategy C = Counteroffer $5 billion, then accept if Texaco offers $3 billion.
EMV(A) = 2
EMV(B) = 0.17 (5) + 0.5 [p 10.3 + q 5 + (1-p - q) 0] + 0.33 [p 10.3 + q 5 + (1-p - q) 0]
= 0.85 + 8.549 p + 4.15 q.
EMV(C) = 0.17 (5) + 0.5 [p 10.3 + q 5 + (1-p - q) 0] + 0.33 (3)
= 1.85 + 5.15 p + 2.5 q.
Now construct three inequalities:
• EMV(A) > EMV(B): 2 > 0.85 + 8.549 p + 4.15 q, i.e., 0.135 - 0.485 q > p.  (1)
• EMV(A) > EMV(C): 2 > 1.85 + 5.15 p + 2.5 q, i.e., 0.03 - 0.485 q > p.  (2)
• EMV(B) > EMV(C): 0.85 + 8.549 p + 4.15 q > 1.85 + 5.15 p + 2.5 q, i.e., p > 0.294 - 0.485 q.  (3)
Plot these three inequalities as lines on a graph with p on the vertical axis and q on the horizontal axis. Note
that only the region below the line p + q = 1 is feasible because p + q must be less than or equal to one.
[Graph of the three indifference lines with p on the vertical axis and q on the horizontal axis (both 0 to 1); the lines divide the feasible region (p + q ≤ 1) into four regions labeled I, II, III, and IV.]
These three lines divide the graph into four separate regions, labeled I, II, III, and IV. Inequality (3) divides
regions I and II. For points above this line, p > 0.294 - 0.485 q, and so EMV(B) > EMV (C). Inequality (1)
divides regions II and III. For points above this line, p > 0.135 - 0.485 q, and EMV(B) > EMV(A). As a
result of this, we know that B is the preferred choice in region I and that C is the preferred choice in region
II [where EMV(C) > EMV (B) > EMV(A)].
Inequality (2) divides regions III and IV. For points above this line, p > 0.03 - 0.485 q, and EMV(C) >
EMV (A). Thus, we now know that C is the preferred choice in region III [where EMV(C) > EMV(A) and
EMV(C) > EMV(B)], and A is preferred in region IV. Thus, we can redraw the graph, eliminating the line
between regions II and III:
[The same graph redrawn with the line between regions II and III removed, leaving regions labeled B (upper), C (middle), and A (near the origin); the lines p = 0.15 and q = 0.35 bound the shaded rectangle of plausible probabilities, which lies entirely in region B.]
The shaded area in the figure represents those points for which p > 0.15 and q > 0.35. Note that all of these
points fall in the “Choose B” region. Thus, Liedtke should adopt strategy B: Counteroffer $5 billion, then
refuse if Texaco offers $3 billion.
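The algebraic regions are easy to confirm numerically; the sketch below (Python) evaluates the three EMVs (in $ billions) from the expressions above at a few (p, q) pairs and reports the preferred strategy.

    # Texaco-Pennzoil: which strategy has the largest EMV at a given (p, q)?
    # p = probability of the $10.3 billion award, q = probability of the $5 billion award.
    def emvs(p, q):
        return {
            "A: Accept $2 billion":    2.0,
            "B: Counter, refuse $3B":  0.85 + 8.549 * p + 4.15 * q,
            "C: Counter, accept $3B":  1.85 + 5.15 * p + 2.5 * q,
        }

    for p in (0.01, 0.05, 0.2, 0.4):
        for q in (0.01, 0.2, 0.4):
            if p + q <= 1:                       # only feasible probability pairs
                vals = emvs(p, q)
                print(f"p={p:.2f}, q={q:.2f}:", max(vals, key=vals.get))
    # Every point with p > 0.15 and q > 0.35 falls in the "choose B" region, as argued above.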
The two-way SA feature of PrecisionTree can be used here, but with limited results. The widest any one
probability can vary is up to the point that the remaining probabilities sum to 0 or sum to 1. In this case, the
most we can vary “Probability of Large Award” is between 0 and 0.5. The model is in the Excel file
“Problem 5.11.xlsx.” Because cell D18 (“Probability of No Award”) contains the formula “=1 – (D12 +
D16)”, “Probability of Large Award” can vary at most from 0 and 0.5. The same is true for “Probability of
Medium Award”. In deciding how much you can vary the variables in a two-way analysis, the complete
rectangle defined by the range of values must fit inside the large triangular region shown above.
The two-way sensitivity graph is on the second tab and is hard to read. Where there is a kink in the 3D graph, the optimal alternative has changed; in other words, the alternative with the maximum EMV has changed. Easier to read, but perhaps not available in the student version, is the strategy graph. The workbook contains two strategy graphs; the second one uses 50 steps and thus computes 2,500 combinations. From these graphs, we clearly see which alternative has the maximum EMV. Remember that PrecisionTree is limited to working with rectangular regions, and the whole region needs to fit within the
triangle. Therefore, the PrecisionTree two-way SA provides a limited and incomplete analysis compared to
the algebraic solution previously given.
If the student version does not allow strategy graphs, then by calculating the differences from row to row,
we can see where the value changes (i.e., where the kink occurs). As shown below, the differences show
which alternative is optimal. Taking the differences, as we have done, only works for linear models.
[Two-way data table (not reproduced here): the EMV of the optimal strategy for Probability of Large Award from 0 to 0.5 (rows, steps of 0.05) by Probability of Medium Award (columns, steps of 0.05), followed by the successive row-to-row differences. A row-to-row difference of 0.2575 (= 0.05 × 5.15) marks the region where "Counter and accept $3B" is optimal, a difference of 0.42745 (= 0.05 × 8.549) marks the region where "Counter and refuse $3B" is optimal, and the small region near the origin where the EMV stays at $2 billion corresponds to "Accept $2B." The jump in the differences shows where the kink, and hence the change of optimal strategy, occurs.]
5.12. The first thing to notice is that the net annual cost values have changed for Barnard and Charlotte, but
not for Astoria, as Astoria has no rental. Specifically, when incorporating the taxes on rental income and
depreciation, Barnard’s net annual cost has dropped by almost $5,000 and Charlotte’s by over $3,000.
Thus, instead of there being an $11,000 difference in annual costs among the three properties, there is only a
$6,000 difference. This helps improve the attractiveness of Barnard and Charlotte.
The Excel file for the model is “Problem 5.12.xlsx” and the sensitivity graphs are found in “Problem 5.12
Sensitivity Graphs.” Remember: what we call sensitivity graphs, PrecisionTree calls strategy graphs.
Many of the sensitivity analysis insights discussed in the chapter also hold here. Astoria is still the cheapest
no matter the value of any of the 8 input variables, except monthly rent. The main difference, as mentioned above, is that the properties are now closer in value. For example, when varying the interest rate between 4% and 8%, Astoria's net annual cost is always less than the other two properties' net annual cost values. At an interest rate of 4%, Charlotte is within $700 of Astoria's net annual cost, but
previously, when we did not incorporate depreciation and taxes on the rental income, then the closest
Charlotte came to Astoria was $4,000.
A subtle difference between the analyses is the influence of the tax rate. Previously, taxes could swing
Charlotte's and Barnard's net annual costs by $4,000 to $5,000, whereas now it swings them by only $2,000 to $2,500. Remember that the tax rate's impact is counterintuitive in that costs go down as the tax rate
increases. When incorporating a more complete tax analysis, the effect of tax rate overall is muted by taxes
on the rental income.
While the months-occupied variable never changes the rank ordering of the properties, the analysis does
show a significant impact on net annual cost. Previously, when "Months Occupied" = 0, Barnard's net annual cost was nearly $60,000, whereas now it is only $48,218.
The monthly rent value again can make Charlotte the least expensive property. Previously, when
Charlotte's rent was $2,100 or more per month, it was cheaper than Astoria. Now, Charlotte's rent need only be above $1,900 per month for it to have a smaller net annual cost. Barnard is always the most
expensive no matter the monthly rent (up to $2,900/month), but now it is within $200 at $2,900/month.
The Excel file "Problem 5.12 Tornado Diagrams.xlsx" reports the three tornado diagrams and now shows that
the loan’s interest rate is more influential than months occupied for both Barnard and Charlotte. For
example, previously months occupied could vary net annual cost by $24,000 for Barnard ($2,000 monthly
rent), and now varies it by approximately $16,000. This is the buffer we mentioned. Because the interest
rate can vary Barnard’s net annual cost by $19,276, interest rate has become the most influential variable
for Barnard. The same holds true for Charlotte; interest rate is now more influential than months occupied.
We choose to run a two-way SA on “Monthly Rent” and “Interest Rate.” Remember that Charlotte’s rent is
set at $500 less than Barnard’s. The Excel file is “Problem 5.12 Two-Way SA.xlsx.” As expected, the
results show a larger region where Charlotte is the cheapest than in the previous analysis, and, again, Barnard is never the cheapest within this rectangle. The two-way graphs show that when the interest rate is 4%,
Charlotte has a lower net annual cost than Astoria when Charlotte’s rent is $1,629 or more.
[Sensitivity graph of decision tree 'Problem 5.12' (expected value of node 'Housing Decision', B34) as Monthly Rent (Barnard) (C10) varies from $1,500 to about $2,900; expected values run from roughly $20,000 to $36,000.]

[Strategy region for node 'Housing Decision': Interest Rate (B4), 4.0% to 8.0%, versus Monthly Rent (Barnard) (C10), $1,400 to $3,000, showing the regions in which Astoria St and Charlotte St have the lower net annual cost.]
5.13. The consequence measure “Appreciation + Equity – Net Annual Cost” substantially changes the
preference order. Barnard St is now ranked first at $15,645 and Astoria St. is ranked last at $5,582. With
this measure, larger values are more preferred. The Excel file “Problem 5.13.xlsx” contains the spreadsheet
model and note that the values given in the text for Barnard and Charlotte are off. Charlotte should be
$9,584.
[Strategy region graph of decision tree 'Problem 5.13' (expected value of node 'Decision', B34) with variation of Interest Rate (B4) from 3.5% to 8.5%; curves for Astoria St, Barnard St, and Charlotte St with expected values between $0 and $25,000.]
The Barnard Street house is the most preferred property for all the variables listed in Table 5.2, except
“Appreciation” and “Months Occupied,” for every value in the range. For example, the sensitivity graph for
the interest rate between 4% and 8% is shown above. For each interest rate in this range, Barnard always has the maximum value. The only time the rank ordering of the alternatives changes is for the variables:
“Appreciation” and “Months Occupied.”
[Strategy region graph of decision tree 'Problem 5.13' with variation of Months Occupied (C18) from 0 to 12; curves for Astoria St, Barnard St, and Charlotte St with expected values from about -$10,000 to $25,000.]
[Strategy region graph of decision tree 'Problem 5.13' with variation of Appreciation Percentage (B26) from -4% to 12%; curves for Astoria St, Barnard St, and Charlotte St with expected values from about -$50,000 to $50,000.]
From 0 to 12 months, Barnard’s “Appreciation + Equity – Net Annual Cost” value is always larger than
Charlotte's, but is only larger than Astoria's when months occupied is greater than or equal to 5.
“Appreciation” is now the most influential variable. Notice that the y-axis now goes from -$50,000 to
$50,000, showing that as the appreciation percentage changes, the “Appreciation + Equity – Net Annual
Cost” value can vary up to $100,000. For annual appreciation rates less than 2.8%, Astoria is the better
choice; above 2.8%, Barnard is preferred.
Part c asks the students to append a chance node to Figure 5.14 to account for the uncertainty in the
appreciation percentage. You can either let the students define their own chance node (number of
outcomes, probabilities, etc.), or you can give them specific instructions. The file “Problem 5.13 part
c.xlsx” contains our solution, and, as shown below, it has 3 outcomes. Note also this is a linked tree. You
can tell this because the branch values are not payoffs, but the input values used to calculate the payoffs.
Running a two-way SA on the probabilities "p1" and "q1" in Figure 5.14 shows that Barnard is optimal for
all probability values. We set the steps equal to 50 for each “p1” and “q1” resulting in 2500 evaluation
points. On this measure, Barnard dominates the other two properties.
In part d, it is clear that "Net Annual Cost" is attempting to capture the real, or out-of-pocket, costs of home ownership. These are items that Sanjay and Sarah will be paying cash for. The other consequence ("Appreciation + Equity – Net Annual Cost") is harder to understand. In some ways, it is measuring the annual value Sanjay and Sarah are earning in their home. When they sell their home, they will realize both the appreciation and the equity. This, however, does not happen incrementally as this measure indicates; either it happens when they sell their house, or it comes into play when they take out a home-equity loan. In the screenshot of the model, we see that Appreciation + Equity – Net Annual Cost =
$35,470 for Barnard when there is full occupancy, $8,000 in repair and upkeep, and a 9% appreciation rate.
This does not mean that Sanjay and Sarah have an additional $35,000 in their hands nor can they take a
loan out for that amount.
[Strategy region for node 'Housing Decision' from the two-way SA in part c: Prob of 12 Months Occup (J11), 0% to 100%, versus Prob of High R&U Cost (J12), 0% to 35%; Barnard is optimal over the entire region.]
Case Study: DuMond International
1. If the changes suggested by Dilts and Lillovich are incorporated, EMV(Current product) increases to
$930,000, but EMV(New product) stays the same at $1,100,000. Thus, the new product would be preferred.
If the changes suggested by Jenkins and Kellogg are incorporated, EMV(New product) drops to $925,000.
Recall, though, that Jenkins and Kellogg were satisfied with Milnor’s analysis of the current product, so
their changes still leave the new product as the preferred choice.
If all of the suggested changes are incorporated, EMV(New product) = $925,000 and EMV(Current
product) = $930,000. Thus, the combination of optimism about the current product and pessimism about
the new leads to a scenario in which the current product barely has a better EMV than the new one.
Because no one embraced all of the changes, though, all board members should be convinced that the new
product is the preferred choice.
This case is set up as a spreadsheet model in the Excel file "Dumond Case.xlsx." The decision tree is
structured so that it references cells in the spreadsheet, so that the user can vary the parameters of the
model, and see how the preferred decision changes.
Case Study: Strenlar, Part II
1. The solution to this case depends to a great extent on how the decision was modeled in the first place.
The sensitivity analysis that follows is based on the model discussed above in the solution for Strenlar, Part
I, in Chapter 4. The Excel solution file is “Strenlar Case Part II.xlsx.”
The table shows the parameters for which we wish to perform a sensitivity analysis. For the Reject PI
option, this includes P(Win Lawsuit), P(Manufacturing Process Works), legal fees, and profits. For the
Accept Job option, the parameters are the interest rate, gross sales, and P(Manufacturing Process Works).
For the Accept Lump Sum and Options alternative, the analysis focuses on the interest rate, stock price if
Strenlar succeeds, and P(Manufacturing Process Works).
                                    Pessimistic        Base         Optimistic
P(Win Lawsuit)                          50%              60%             75%
P(Mfg Process Works)                    70%              80%             90%
Gross Sales                        $25,000,000      $35,000,000     $45,000,000
Legal Fees                             $90,000          $60,000         $20,000
Fixed Cost                          $8,000,000       $5,000,000      $2,000,000
Variable Cost (% of Gross Sales)         80.0%            62.9%           60.0%
PI Stock Price                          $48.00           $52.00          $57.00
Interest Rate                             5.0%             7.5%           12.5%

Sensitivity Analysis Table for Strenlar Model.
Because Refuse PI depends on profits, which in turn depends on gross sales, we have chosen to expand the
model slightly. We have assumed as base values that fixed costs would equal $5 million and that, for sales
of $35 million, the variable cost would be $22 million. This leaves profits of $8 million as specified in the
case. Specifically, we assumed:
Profit = Gross Sales – Variable Cost – Fixed Cost,
where Fixed Cost = $5 million and Variable Cost = (22/35) × Gross Sales (about 62.9% of gross sales).
Now it is possible to run sensitivity analysis using PrecisionTree on all three variables (Gross Sales,
Variable Costs, Fixed Costs) and obtain comparable results for the Refuse PI and Accept Job alternatives.
The tornado graph for the Refuse PI alternative shows that Variable Cost and Gross Sales are the two most
influential variables. These two variables are also the ones to cause Fred’s payoff to drop below $3.9
million.
[Tornado graph for the Refuse PI alternative (expected value of the entire model, roughly $1 million to $7 million), with bars for Variable Costs (B14), Gross Sales (B5), Fixed Costs (B13), Prob of Winning Case (B4), Prob(Mfg Process Works) (B3), Legal Fees (B8), Interest Rate (B9), and PI Stock Price (B10), listed from widest to narrowest.]
The tornado diagram for Accept Job shows that Fred’s payoffs are always below $3.2 million and that
Gross Sales is the most influential variable pushing his payoff below $2 million. The sensitivity graph
shows that Refusing PI has the better payoff unless gross sales fall below $27 million.
[Tornado graph for the Accept Job alternative (expected value of the entire model, roughly $1.8 million to $3.2 million), with bars for Gross Sales (B5), Prob(Mfg Process Works) (B3), Interest Rate (B9), Legal Fees (B8), Prob of Winning Case (B4), PI Stock Price (B10), Fixed Costs (B13), and Variable Costs (B14).]
[Strategy region graph of decision tree 'Strenlar' (expected value of node 'Decision', B23) with variation of Gross Sales (B5) from $20 million to $50 million; curves for Refuse PI, Accept job, and Accept lump sum.]
The only other variable to cause a change to the rank ordering of the alternatives is Variable Costs. The
sensitivity graph shows that the variable cost affects only the Refuse PI alternative. If he takes the job at PI,
then his royalties are tied to gross sales and not profit.
[Strategy region graph of decision tree 'Strenlar' with variation of Variable Costs (B14) from 55% to 85% of gross sales; curves for Refuse PI, Accept job, and Accept lump sum, with only the Refuse PI curve changing.]
Another intriguing possibility is to see how pessimistic Fred could be before Refuse PI is no longer optimal. Consider Scenario 1 in the table below. In this pessimistic scenario, all the variables are at their base (best-guess) values or worse, and four of them are at their lowest values. Even in this case, refusing PI has a larger EMV than accepting the job (compare $1.77 million to $1.74 million). In Scenario 2, we kept all the values from Scenario 1, except that we increased the stock price to its maximum upper bound of $57. Doing so only changed the lump-sum payoff, from $0.9 million to $1.4 million, a $500,000 gain, but not enough to overtake the other alternatives. Scenario 3 keeps all the same values as Scenario 1, except for increasing the fixed cost by $89,505. The EMVs of Refuse PI and Accept Job are equal in Scenario 3. In other words, Fred would have to be relatively pessimistic overall before the Refuse PI option is no longer optimal.
Variable                              Scenario 1                    Scenario 2       Scenario 3
P(Win Lawsuit)                        60% (base value)              60%              60%
P(Mfg Process Works)                  70% (pessimistic)             70%              70%
Gross Sales                           $25,000,000 (pessimistic)     $25,000,000      $25,000,000
Legal Fees                            $90,000 (pessimistic)         $90,000          $90,000
Fixed Cost                            $5,000,000                    $5,000,000       $5,089,505 (slightly pessimistic)
Variable Cost (% of Gross Sales)      60%                           60%              60%
PI Stock Price                        $48.00 (pessimistic)          $57.00           $48.00
Interest Rate                         7.5% (base value)             7.5%             7.5%
Case Study: Job Offers, Part II
1. The sensitivity analysis gives the following results:
Expected Overall Score
                  P($1500) = 0      P($1500) = 1
Madison                50                58
MPR                    36                36
Pandemonium            50                50
Thus, there is no question about the preference for Madison Publishing; even with the probability set at its most pessimistic value of 0, Madison and Pandemonium are equivalent. MPR, of course, is never in the running
at all. A one-way sensitivity plot for the probability of disposable income being $1500 for the Madison job
generated by PrecisionTree is shown in the second worksheet. The one-way plot does not show the
alternative policies, only the range of possible outcome results.
2. The algebraically generated two-way sensitivity graph is shown below. The labels indicate the optimal
choice in each region. The graph makes good sense! When the weights on snowfall and income are small
(and hence the weight for the magazine score is high), Pandemonium is the best, reflecting its strong
showing in the magazine dimension. Likewise, when the magazine weight is low, MPR is best, reflecting
its strength in the income and snowfall dimensions, but poor showing with the magazine score.
[Two-way sensitivity graph with ks (the snowfall weight) on the horizontal axis and ki (the income weight) on the vertical axis, both from 0 to 1. Within the feasible triangle, Pandemonium is optimal near the origin, Madison in an intermediate region, and MPR where the snowfall and income weights are larger.]
Sensitivity analysis using PrecisionTree. The decision model is in the Excel file “Job Offers Case II.xlsx.”
Question 1: To vary only the probability of $1,500 from 0 to 1, you must choose Spreadsheet Cell for Type of Value in the sensitivity-analysis dialog box.
Question 2: You can use PrecisionTree to create the same optimal regions that we derived algebraically above. To do so, use the model and run a two-way sensitivity analysis, varying the weight for snowfall on the x-axis and the weight for income on the y-axis. The weight for magazine rating is given by the formula 1 – (snowfall wt + income wt). Vary both weights from 0 to 1. This creates a rectangular region, not the desired triangular region; see the figure below. However, we can simply ignore the region above the main diagonal. We can do this for weights but not for probabilities: if one of the weights is negative, the model still calculates a value, and that value is meaningless, which is why we ignore the upper triangular region of the two-way analysis. We cannot do the same for probabilities, because probabilities can never be negative; once PrecisionTree encounters a negative probability, it will not calculate any results.
[PrecisionTree strategy region for node 'Decision': Snowfall Wt (G4) on the horizontal axis and Income Wt (F4) on the vertical axis, both from 0 to 1, with regions for Madison Publishing, MPR Manufacturing, and Pandemonium Pizza; only the region on or below the main diagonal is meaningful.]
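A rough Python sketch of this two-way weight analysis follows. The attribute scores below are illustrative placeholders only (the actual scores are in "Job Offers Case II.xlsx"); the point is simply the mechanics of setting the magazine weight to 1 - (snowfall wt + income wt) and skipping the infeasible region.

    # Illustrative (snowfall, income, magazine) scores on a 0-100 scale; NOT the case's values.
    scores = {
        "Madison": (60, 50, 75),
        "MPR": (80, 90, 20),
        "Pandemonium": (20, 40, 95),
    }

    def best_alternative(k_snow, k_inc):
        """Best job for the given snowfall and income weights; None where the weights are infeasible."""
        k_mag = 1.0 - (k_snow + k_inc)
        if k_mag < 0:  # above the main diagonal: weights sum past 1, so skip
            return None
        return max(scores, key=lambda job: k_snow * scores[job][0]
                   + k_inc * scores[job][1] + k_mag * scores[job][2])

    grid = [i / 10 for i in range(11)]
    for k_inc in reversed(grid):  # rows from high income weight down to low
        print([(best_alternative(k_snow, k_inc) or "--")[:3] for k_snow in grid])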
Case Study: MANPADS
See file “MANPADS.xlsx” for the decision tree and the full analysis.
1. Decision tree shown below. Using the values given in the case,
E(Cost | Countermeasures) = $14,574 million
E(Cost | No Countermeasures) = $17,703 million
Putting the tree together is relatively straightforward. The only odd aspect of the model is working out
P(Interdiction | Attempt, Countermeasures). The information in the case is not entirely clear; in this
solution f is interpreted as the extent to which P(Interdiction | Attempt) is increased. The wording in the
case suggests that f should be interpreted as the proportional decrease in P(Interdiction | Attempt), but that
does not necessarily make sense. In the original article, the base value for f was set to 0, and the range was 0-0.25. You may want to be lenient with the students on this point.
[Decision tree for the MANPADS case; see the file "MANPADS.xlsx".]
2. Risk profiles shown below. The real difference is that adopting countermeasures greatly reduces the
chance of a very large loss. So the policy question is whether the cost is worth that reduction in risk. The
cumulative graph (see the Excel file) shows that neither alternative stochastically dominates the other.
[Risk profiles for decision tree 'MANPADS': choice comparison for node 'Decision', plotting probability against cost for the Countermeasures and No Countermeasures alternatives; probabilities run up to about 80%, and costs range from roughly -20,000 to 120,000 ($ millions).]
3. See the Excel file for both one-way and two-way sensitivity analyses. Changing the inputs in this model
can result in changes to both alternatives, so this sensitivity analysis has been run on the difference between
the expected costs (B45 on "MANPADS Tree"). When the difference is positive, Countermeasures has the
lower expected cost. Also, without any real guidance on reasonable ranges, I've varied each input by plus or
minus 25%.
The tornado and spider charts in the Excel file show that the top three variables in terms of sensitivity are
P(Attempt), Economic Loss, and P(Hit | Attack). Moreover, only the first two could result in No
Countermeasures being preferred. But remember that this is a one-way analysis.
The two-way analysis in the file shows how the strategy changes as we vary P(Attempt), Economic Loss,
and P(Hit | Attack). (So it is actually a three-way analysis.) When P(Hit | Attack) = 0.80, the region for No
Countermeasures is fairly small. When P(Hit | Attack) decreases to 0.60, though, the No Countermeasures
region is much larger. So it would not take too much movement of these three variables together to result in
No Countermeasures having the lower expected cost.
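A hedged sketch of the plus-or-minus 25% sensitivity on the cost difference is below. The delta() formula and the base values are placeholders standing in for cell B45 of the "MANPADS Tree" worksheet, not the actual model; only the looping pattern is the point.

    # Placeholder base values; the real inputs live in the Excel model.
    base = {"p_attempt": 0.05, "economic_loss": 70_000, "p_hit_given_attack": 0.70}

    def delta(p_attempt, economic_loss, p_hit_given_attack):
        # Stand-in for E(Cost | No Countermeasures) - E(Cost | Countermeasures);
        # positive values favor adopting countermeasures.
        return p_attempt * p_hit_given_attack * economic_loss - 2_000

    for name, value in base.items():
        low = delta(**{**base, name: 0.75 * value})
        high = delta(**{**base, name: 1.25 * value})
        print(f"{name}: difference ranges from {low:,.0f} to {high:,.0f}")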
CHAPTER 6
Organizational Use of Decision Analysis
Notes
This chapter puts the activity of decision analysis directly into an organizational context. This context is
very different from that of an individual thinking alone about a personal decision. Generally, organizational
context is given short shrift in textbooks about the methodology of decision analysis. Yet it is where
decision analysts will earn their keep. And it is where the creation of value from better decisions will be
most strongly demonstrated. Efforts to create organizational buy-in and to create change will have huge
rewards.
This doesn’t mean that individual ability and ideas don’t matter. Organizational success depends strongly
on individual creativity. This is another area that may not be treated in textbooks on decision analysis. How
to foster that creativity and how to capture the value it creates is an integral part of this chapter and can be a
huge component of value creation in the organization.
A six-step process for the organizational use of decision analysis is described in the chapter. This approach
laces together a decision board and a strategy team as the work of the organizational decision analysis
proceeds through the six steps. Not only does it help in the selection of the path down which to proceed, but
it also looks towards the final desired results and focuses effort on the change necessary to achieve those
results.
Creativity comes into play in several of the steps. Creativity is needed to identify issues, determine
alternatives and develop a vision statement. A key in the organization is to establish a corporate culture and
tradition that stay out of the way of creativity.
We simply had more to say about creativity than would fit into the book pages. So there is an online
Supplement to Chapter 6. This has many more ideas on reducing blocks to creativity and additional
creativity techniques. The supplement also contains questions and problems of its own. The solutions to
those problems are included in a separate section at the end of this document.
Topical cross-reference for problems (S prefix indicates problems in the Online Supplement).
Analytical complexity: 6.3
Barriers to implementation: 6.1, Eastman Kodak
Brainstorming: 6.10
Commitment to action: 6.2
Creativity: 6.4, 6.8, 6.10, 6.12, Eastman Kodak, S6.2, S6.3, S6.4
Creativity blocks: S6.1
Decision board: 6.2
Decision quality: 6.13
Implementation: 6.1
Incubation: 6.11
Means objectives: 6.6, 6.9
Organizational characteristics: 6.3, 6.7
Six-step process: 6.4
Spider diagram: 6.13
Strategy table: 6.5
This technical note was written by Samuel E. Bodily, John Tyler Professor at the University of Virginia Darden Graduate School of Business Administration. Copyright © 2013 by Samuel E. Bodily. It accompanies Making Hard Decisions with DecisionTools, 3rd Ed., by Robert T. Clemen and Terence Reilly, Mason, OH: South-Western. Some answers were contributed by Robert T. Clemen from a previous edition of the book.
Solutions
6.1. Here we are looking to tie the concepts from the chapter to the students’ personal experiences. Students
may mention barriers such as turf wars, lack of people skills, inadequate selling of the proposal, inadequate
attempts to get buy-in from stakeholders, missed attributes, lack of transparency, incorrect understanding of
who will make the decision, to list just a few. Students can explain how ideas from the chapter can reduce
these barriers. For example, bringing together a carefully composed decision board and analysis team
through the lacing of the six-step process can improve buy-in from stakeholders, reduce missed attributes,
improve transparency and improve identification of the decision maker(s).
6.2. First the decision-board participants must be wisely chosen. They must be those individuals who have
a stake in the decision, such as project managers or engineers on the project that the decision most affects.
The participants must also be compatible in that they work well together and can communicate freely
among themselves.
Second, a simple vision statement must provide the decision-board with a definitive purpose to the project
and set specific goals for the project. This will give incentive for each board member to commit to the
project and follow through until the end.
6.3. a. If you are single and your retirement affects only yourself, this decision would have low organizational complexity. However, if you are married with children and your retirement affects all of these people, then the decision could have higher organizational complexity.
This decision is analytically complex because of the uncertainty about the different investments and the usually large number of investment options to choose among. Therefore this decision may be in the upper right quadrant or the lower right quadrant.
b. This would generally have a high level of organizational complexity, unless the organization was very
small, because of the large number of people with different personalities and because there would most
likely be conflicts and power struggles among the players. It may also have a fairly high degree of
analytical complexity due to the fact that you would be uncertain of the reactions of the many choices that
could be made for each position and the group dynamics involved at each level of the newly created
organizational chart. Therefore it might be in the upper right quadrant, possibly the upper left quadrant.
c. The organizational complexity depends on the size and shape of this organization. If this were a small
organization with very few divisions and/or employees, it would have a low complexity. However if this
was a company of the size, say, of Johnson & Johnson, this decision would have a high level of
organizational complexity.
This would in most cases have a high degree of analytical complexity. The uncertainties, the dynamics and
the many interrelated and numerous factors would make this analytically complex. Therefore, probably
upper right.
6.4. Here students bring in personal experiences and relate the many positives of the six-step decision-making process to real-life problems.
6.5. In the strategy table below we show, using brackets, a set of choices that fits with the "Broad-Based Expansion" designation. It is a consistent strategy. It might be hard to argue that it necessarily combines the best options of that figure without more information about the context. Each alternative strategy comprises a consistent set of choices, including one option in each column.
[Strategy table (Bodily & Allen, Interfaces 29:6), listing options under each decision area (Worldwide Presence, Domestic Marketing, Licensing and Joint Ventures, Generics, R&D Strategy) for the strategies Current, Short-Term Profit Improvement, Focused Reallocation, Joint Venture and License Focus, Broad-Based Expansion, and International Focus. The set of options making up the Broad-Based Expansion strategy is bracketed in the original figure.]
6.6. Here we are looking for the students to experience the process of identifying fundamental objectives
and listing means objectives, from which they may generate new alternatives. Suppose a student picks the
decision about a job to take, in other words to “choose an employer.” One of their objectives might be to
“have the opportunity to work with leaders in my area of expertise.” An example means objective might be
to meet and spend time with leaders in that area of expertise. The student might then imagine some
alternatives such as “go to a professional meeting with leaders in my area of expertise” or “send a piece of my best writing on new ideas to a leader in the area of expertise.”
6.7. Students may reference examples of young companies in Silicon Valley where the culture is one of
enjoying oneself creatively, with few rules and walls (both figuratively and literally). Students’ answers
would reflect their own views about freedom to create. An organization that will thrive on chaos will most
likely be relatively small, with managers that are highly flexible and creative themselves. In fact, the idea
of a “shell” or “virtual” organization may be appropriate; such a firm may provide a minimum amount of
structure required legally and for the sake of outsiders. However, within the shell the organization may be
extremely flexible, or even constantly in flux. Imagine a holding company which permits its subsidiaries to
metamorphose, segment, and recombine at will. Managers in such a situation must be the antithesis of
bureaucrats. They must eschew traditional empire-building because such behavior attempts to lock in
certain organizational structures. Success and excellence must be defined in new ways; meeting next
quarter’s sales quota will probably not be a universally appropriate objective. Other issues include the kinds
of people one hires and nurturing a corporate culture that values change and evolution while disdaining the
status quo.
6.8. Those of us who attended formal schooling need to take a step back and determine whether we have
had the curiosity and creativity schooled out of us. And if we are less curious and creative we need to take
steps to get that curiosity and creativity back into our everyday lives.
Today’s educators need to be aware of the importance of curiosity and creativity in today’s schoolchildren.
Instead of simply correcting a child for an incorrect answer, the educator should try to see in what ways
that answer was creative and compliment the student for being creative before revealing the correct answer.
6.9. Here are some possible alternatives linked to specific objectives:
• Require safety features – Require the improvement of current safety features such as seatbelts and
airbags.
• Educate the public about safety – Require that those about to obtain a license receive a minimum
of car safety education.
• Enforce traffic laws – Require adults to take refresher driving courses (education) if they accumulate too many traffic tickets.
• Have reasonable traffic laws – Have a citizen/police panel that reviews traffic laws.
• Minimize driving under the influence of alcohol – Provide rides home from social gatherings that
involve heavy use of alcohol; establish programs that remove keys from those who may have
consumed too much alcohol.
6.10. This is a good exercise to show how brainstorming and value-focused thinking can be combined. A
good list of alternatives will include suggestions that target all aspects of PeachTree’s fundamental
objectives. Other than the typical suggestions of distributing flyers, posters, and placing advertisements in
local college papers, PeachTree might sponsor a contest, have an open house, have managers be guest
speakers at local schools, and so on.
6.11. This should get the students to think about unconscious incubation from their own lives, maybe about
the idea that came to them in the shower or while taking a run. It can be informative about how to set up
opportunities for incubating ideas in the future. You can help students realize that while it is fantastic when
it works, it won’t always produce break-through ideas. It is said that the inventor of television first had the
idea for sending electronic pictures as a stream of rows of dots when, as a kid on the farm, he saw rows and
rows of individual plants in a field forming a mosaic picture.
6.12. Some additional examples could be the following:
• Internal management of the firm: Inventory reduction, operational improvements, reduction of SKUs.
• External Arrangements: Mergers and acquisitions, data mining and sharing.
6.13. Answers will be very individual.
Case Study: Eastman Kodak
There are a couple of points to this case. One is that the kind of decision making needed early in the typical business life cycle may not be the same as the kind needed late in the life cycle. Early on, the kind of creativity needed would be largely related to the technological challenges of being the world’s leader in putting chemical emulsions onto plastic film. The mindset may be that of the chemical engineer.
Late in its business life-cycle Kodak found itself making decisions regarding two different paths. One path
choice was about how to contract successfully a chemical film business that was being disrupted by digital
technology. Another path choice related to developing potential business(es) that might renew the
company.
By that time the management of Eastman Kodak was primarily chemical engineers who had grown to their
positions by being very good at laying down chemical emulsions onto plastic film.
1. Virtually any of the creativity-enhancing techniques could be used to generate alternatives for Kodak.
You can ask students what ideas these techniques produced and which were more useful. Ask also how
these techniques might help someone get out of the mindset that so successfully brought the company
success, but which might not now lead to success.
2. You can imagine how difficult it would be for Kodak to come by organizational change. It could have
been helpful perhaps in this situation to add individuals to the decision board that were completely outside
the company and even the industry. For the decision about “renewing” it might be helpful to set up a board
that was largely independent of existing management. This isn’t easy for most organizations to do well.
Chapter 6 Online Supplement: Solutions to Questions and Problems
S6.1. Most students will imagine the engineer as male mainly because women rarely wear neckties and
their hair is usually longer. Therefore, you would not necessarily mention these factors if it were a female.
S6.2. This is a personal question and the questions generated should be different for each student. This
question is included so that students will think about what is important to them. If they come up with a
good list of questions, they should be better prepared for decisions like the job choice problem. Lists of
questions should be complete but not redundant. A list of questions regarding potential jobs might include:
• Does this job increase my salary?
• Could I live in a nicer house or neighborhood?
• Does it improve the opportunities available for my children?
• Does it increase my opportunities for outdoor recreation?
• Does it enhance my professional development?
Asking such questions up front can help one to focus on exactly what matters, and the exercise may
enhance one’s creativity in generating alternatives. Note that the lists described in the chapter tend to focus
on characteristics of alternatives. Here the purpose of the list is to be sure that alternatives are examined in
ways that are particularly meaningful to the decision maker.
S6.3. Many creative solutions will be generated by the students, such as: chopped up for insulating material, melted into art projects, used as buoys to identify crab traps, sterilized for reuse as containers for home-made wine. If the student’s list is short with a broad range of possibilities, (s)he was most likely using flexible thinking. On the other hand, if the student’s list is fairly long but more narrowly focused, (s)he was most likely using fluid thinking.
S6.4. This question works well because students generally have a lot of good ideas about good and bad
aspects of their programs. If left to their own devices, most students try brainstorming. As an instructor,
your job is to be as open-minded as possible when you hear their ideas!
CHAPTER 7
Probability Basics
Notes
Making Hard Decisions with DecisionTools, 3rd Ed. assumes that students have had some introduction to
probability. In Chapter 7, probability basics are presented and examples worked out in order to strengthen
students’ understanding of probability and their ability to manipulate probabilities. The focus is on
manipulations that are useful in decision analysis. Bayes’ theorem of course is useful for dealing with
problems involving information, and the “law of total probability” is pertinent when we discuss probability
decomposition in Chapter 8. The use of PrecisionTree to perform Bayesian calculations (“flipping the
tree”) is discussed in the last section of this chapter.
Of particular note in Chapter 7 is the topic of conditional independence. This concept plays a central role in
the development of probability models in decision analysis. The nature of the role is particularly obvious in
the construction of influence diagrams; the absence of an arc between two chance nodes that may be
connected through other nodes is an indication of conditional independence. Care should be taken in the
construction of influence diagrams to identify conditional independence. Each statement of conditional
independence means one less arc, which means less probability assessment or modeling.
At the same time that conditional independence is important for modeling in decision analysis, it is most
likely a new concept for students. Probability and statistics courses teach about marginal independence, a
special case of conditional independence in which the condition is the sure outcome. However, students
sometimes have difficulty with the idea that a common conditioning event can be carried through all
probabilities in a calculation as is the case in conditional independence. In addition, the intuition behind
conditional independence often is new to students; if I already know C, then knowing B will not tell me any
more about A.
This chapter also includes an online supplement on covariance and correlation. The solutions to the
problems in the online supplement are included at the end of this chapter.
Topical cross-reference for problems
Bayes’ theorem: 7.14, 7.21, 7.28, 7.30, AIDS
Conditional independence: 7.22, 7.23, 7.28, 7.34
Conjunction effect: 7.24, 7.25
Linear transformations: 7.18, 7.19, 7.20
PrecisionTree: 7.14, 7.21, 7.30
Sensitivity analysis: 7.32, 7.33, AIDS
Simpson’s paradox: Decision Analysis Monthly, Discrimination and the Death Penalty
Two-way sensitivity analysis: 7.33
Texaco-Pennzoil: 7.29
Solutions
7.1. We often have to make decisions in the face of uncertainty. Probability is a formal way to cope with
and model that uncertainty.
7.2. An uncertain quantity or random variable is an event that is uncertain and has a quantitative outcome
(time, age, $, temperature, weight, . . . ). Often a non-quantitative event can be the basis for defining an
uncertain quantity; specific non-quantitative outcomes (colors, names, categories) correspond to
quantitative outcomes of the uncertain quantity (light wavelength, number of letters, classification number).
Uncertain quantities are important in decision analysis because they permit us to build models that may be
subjected to quantitative analysis.
7.3.
P(A and B) = 0.12
P(B̄) = 0.35
P(A and B̄) = 0.29
P(A) = 0.41, so P(B | A) = 0.12/0.41 = 0.293
P(B) = 0.65, so P(A | B) = 0.12/0.65 = 0.185
P(Ā | B̄) = 0.06/0.35 = 0.171
7.4.
P(A or B) = P(A and B) + P(A and B̄) + P(Ā and B) = 0.12 + 0.29 + 0.53 = 0.94
or P(A or B) = P(A) + P(B) - P(A and B) = 0.41 + 0.65 - 0.12 = 0.94
or P(A or B) = 1 - P(Ā and B̄) = 1 - 0.06 = 0.94
7.5.
[Venn diagram: overlapping outcomes A and B, with regions labeled "A and B̄", "A and B", and "Ā and B".]
From the diagram, it is clear that
P(A) = P(A and B) + P(A and B̄)
and
P(B) = P(A and B) + P(Ā and B).
But P(A or B) clearly equals P(A and B) + P(A and B̄) + P(Ā and B) because of property 2. Thus,
P(A or B) = P(A and B) + P(A and B̄) + P(Ā and B)
= P(A) + P(B) - P(A and B).
7.6.
a. Joint: P(left-handed and red-haired) = 0.08
b. Conditional: P(red-haired | left-handed) = 0.20
c. Conditional: P(Cubs win | Orioles lose) = 0.90
d. Conditional: P(Disease | positive) = 0.59
e. Joint: P(success and no cancer) = 0.78
f. Conditional: P(cancer | success)
g. Conditional: P(food prices up | drought)
h. Conditional: P(bankrupt | lose crop) = 0.50
i. Conditional, but with a joint condition: P(lose crop | temperature high and no rain)
j. Conditional: P(arrest | trading on insider information)
k. Joint: P(trade on insider information and get caught)
7.7. For stock AB, E(Return of AB) = 0.15(-2%) + 0.50(5%) + 0.35(11%) = 6.1%. Similarly, E(Return of CD) = 5.0%, and E(Return of EF) = 9.0%. These are needed to calculate the variances and standard deviations.
Var(Return of AB) = 0.15(-2% - 6.1%)² + 0.50(5% - 6.1%)² + 0.35(11% - 6.1%)²
= 0.15(-0.02 - 0.061)² + 0.50(0.05 - 0.061)² + 0.35(0.11 - 0.061)² = 0.0019
Thus, the standard deviation of the return on AB is √0.0019 = 0.043 = 4.3%.
Similarly, Var(Return of CD) = 0.0003 and the standard deviation of the return on CD is 1.8%. Also, Var(Return of EF) = 0.0243 and the standard deviation of the return on EF is 15.6%.
7.8.
          A          Ā
B       0.2772     0.1450     0.4222
B̄       0.1428     0.4350     0.5778
        0.42       0.58       1.00

P(A) = 0.42 is given, so P(Ā) = 1 - P(A) = 1 - 0.42 = 0.58
P(B̄ | A) = 1 - P(B | A) = 1 - 0.66 = 0.34
P(B̄ | Ā) = 1 - P(B | Ā) = 1 - 0.25 = 0.75
P(B) = P(B | A) P(A) + P(B | Ā) P(Ā) = 0.66(0.42) + 0.25(0.58) = 0.4222
P(B̄) = 1 - P(B) = 1 - 0.4222 = 0.5778
P(A | B) = P(A and B)/P(B) = P(B | A) P(A)/P(B) = 0.66(0.42)/0.4222 = 0.6566
P(Ā | B) = 1 - P(A | B) = 1 - 0.6566 = 0.3434
P(A | B̄) = P(A and B̄)/P(B̄) = P(B̄ | A) P(A)/P(B̄) = 0.34(0.42)/0.5778 = 0.2471
P(Ā | B̄) = 1 - P(A | B̄) = 1 - 0.2471 = 0.7529
7.9. P(Ā) = 1 - P(A) = 1 - 0.10 = 0.90
P(B̄ | A) = 1 - P(B | A) = 1 - 0.39 = 0.61
P(B̄ | Ā) = 1 - P(B | Ā) = 1 - 0.39 = 0.61
P(B) = P(B | A) P(A) + P(B | Ā) P(Ā) = 0.39(0.10) + 0.39(0.90) = 0.39
P(B̄) = 1 - P(B) = 1 - 0.39 = 0.61
At this point, it should be clear that A and B are independent because P(B) = P(B | A) = P(B | Ā) = 0.39.
Thus, P(A) = P(A | B) = P(A | B̄) = 0.10, and P(Ā) = P(Ā | B) = P(Ā | B̄) = 0.90. (Actually, the fact that A and B are independent can be seen in the statement of the problem.)
7.10.
a. P(Y = 2) = 0.4, but P(Y = 2 | X = -2) = P(Y = 2 | X = 2) = 1.0 and P(Y = 2 | X = 0) = 0
b. P(X= -2) = 0.2, but P(X = -2 | Y = 2) = 0.5 and P(X = -2 | Y = 0) = 0
c. X and Y are dependent. In fact, Y = |X|. But the relationship is not linear, and the covariance does not capture this nonlinear relationship.
7.11. The influence diagram would show conditional independence between hemlines and stock prices, given adventuresomeness:
[Influence diagram: Adventuresomeness has arcs to both Hemlines and Stock Prices; there is no arc between Hemlines and Stock Prices.]
Thus (blatantly ignoring the clarity test), the probability statements would be P(Adventuresomeness), P(Hemlines | Adventuresomeness), and P(Stock prices | Adventuresomeness).
7.12. In many cases, it is not feasible to use a discrete model because of the large number of possible
outcomes. The continuous model is a “convenient fiction” that allows us to construct a model and analyze
it.
7.13.
          B          B̄
A       0.204      0.476      0.68
Ā       0.006      0.314      0.32
        0.21       0.79       1.00

P(B and A) = P(B | A) P(A) = 0.30(0.68) = 0.204
P(B and Ā) = P(B | Ā) P(Ā) = 0.02(0.32) = 0.006
P(B̄ | A) = P(B̄ and A)/P(A) = 0.476/0.68 = 0.70
P(B̄ | Ā) = P(B̄ and Ā)/P(Ā) = 0.314/0.32 = 0.98
P(B) = P(B and A) + P(B and Ā) = 0.204 + 0.006 = 0.21
P(B̄) = 1 - P(B) = 1 - 0.21 = 0.79
P(A | B) = P(A and B)/P(B) = 0.204/0.21 = 0.970
P(Ā | B) = 1 - P(A | B) = 1 - 0.970 = 0.030
P(A | B̄) = P(A and B̄)/P(B̄) = 0.476/0.79 = 0.603
P(Ā | B̄) = 1 - P(A | B̄) = 1 - 0.603 = 0.397
7.14.
P(offer) = 0.50
P(good interview | offer) = 0.95
P(good interview | no offer) = 0.75
P(offer | good interview) = P(good | offer) P(offer) / [P(good | offer) P(offer) + P(good | no offer) P(no offer)]
= 0.95(0.50) / [0.95(0.50) + 0.75(0.50)]
= 0.5588
See also “Problem 7.14.xlsx” for a solution using PrecisionTree.
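For students who prefer to verify the arithmetic in Python, here is a one-function Bayes' theorem check; the helper name bayes() is ours, not from the text or the spreadsheet.

    def bayes(prior, like_if_true, like_if_false):
        # P(event | evidence) for a binary event, via Bayes' theorem.
        numerator = like_if_true * prior
        return numerator / (numerator + like_if_false * (1 - prior))

    print(bayes(0.50, 0.95, 0.75))  # P(offer | good interview) = 0.5588...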
7.15. a. E(X) = 0.05(1) + 0.45(2) + 0.30(3) + 0.20(4)
= 0.05 + 0.90 + 0.90 + 0.80
= 2.65
Var(X) = 0.05(1 - 2.65)² + 0.45(2 - 2.65)² + 0.30(3 - 2.65)² + 0.20(4 - 2.65)²
= 0.05(2.72) + 0.45(0.42) + 0.30(0.12) + 0.20(1.82)
= 0.728
σX = √0.728 = 0.853
b. E(X) = 0.13(-20) + 0.58(0) + 0.29(100)
= -2.60 + 0 + 29
= 26.40
Var(X) = 0.13(-20 - 26.40)² + 0.58(0 - 26.40)² + 0.29(100 - 26.40)²
= 0.13(2152.96) + 0.58(696.96) + 0.29(5416.96)
= 2255.04
σX = √2255.04 = 47.49
c. E(X) = 0.368(0) + 0.632(1) = 0.632
Var(X) = 0.368(0 - 0.632)² + 0.632(1 - 0.632)²
= 0.368(0.632)² + 0.632(0.368)²
= 0.368(0.632)[0.632 + 0.368]
= 0.368(0.632)
= 0.233
σX = √0.233 = 0.482
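All three parts can be checked with a small helper that takes (probability, value) pairs; summarize() below is our name, not the text's.

    from math import sqrt

    def summarize(dist):
        # dist is a list of (probability, value) pairs for a discrete uncertain quantity.
        mean = sum(p * x for p, x in dist)
        var = sum(p * (x - mean) ** 2 for p, x in dist)
        return mean, var, sqrt(var)

    print(summarize([(0.05, 1), (0.45, 2), (0.30, 3), (0.20, 4)]))  # about (2.65, 0.728, 0.853)
    print(summarize([(0.13, -20), (0.58, 0), (0.29, 100)]))         # about (26.4, 2255, 47.5)
    print(summarize([(0.368, 0), (0.632, 1)]))                      # about (0.632, 0.233, 0.482)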
7.16.
E(X) = (1 - p)(0) + p(1) = p
Var(X) = (1 - p)(0 - p)² + p(1 - p)²
= (1 - p)p² + p(1 - p)²
= (1 - p)p[p + (1 - p)]
= (1 - p)p
7.17. It is true that P(A | B) + P(Ā | B) = 1, because the condition (B) is the same in each probability. Thus, these two probabilities are complements. However, the question is about P(A | B) + P(A | B̄), which can equal anything between 0 and 2. There is no requirement that these two probabilities add up to 1, because the conditions (B and B̄) are different.
7.18. a. E(Revenue from A) = $3.50 E(Unit sales)
= $3.50(2000)
= $7000
Var(Revenue from A) = 3.50² Var(Unit sales)
= 3.50²(1000)
= 12,250 “dollars squared”
b. E(Total revenue) = $3.50(2000) + $2.00(10,000) + $1.87(8500)
= $42,895
Var(Total revenue) = 3.50²(1000) + 2.00²(6400) + 1.87²(1150)
= 41,871 “dollars squared”
7.19. Let X1 = random number of breakdowns for Computer 1, and X2 = random number of breakdowns for Computer 2.
Cost = $200(X1) + $165(X2)
E(Cost) = $200 E(X1) + $165 E(X2) = $200(5) + $165(3.6) = $1594
If X1 and X2 are independent, then
Var(Cost) = 200² Var(X1) + 165² Var(X2) = 200²(6) + 165²(7) = 430,575 “dollars squared”
σCost = √430,575 = $656.18
The assumption made for the variance computation is that the computers break down independently of one
another. Given that they are in separate buildings and operated separately, this seems like a reasonable
assumption.
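The expectation and, under the independence assumption just discussed, the variance of this linear combination can be verified with a short sketch; linear_combo() is our helper name.

    from math import sqrt

    def linear_combo(coeffs, means, variances):
        # E(sum a_i X_i) and, assuming the X_i are independent, Var(sum a_i X_i).
        mean = sum(a * m for a, m in zip(coeffs, means))
        var = sum(a ** 2 * v for a, v in zip(coeffs, variances))
        return mean, var, sqrt(var)

    print(linear_combo([200, 165], [5, 3.6], [6, 7]))  # (1594.0, 430575.0, about 656.18)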
7.20. The possible values for revenue are 100($3) = $300 and 300($2) = $600, each with probability 0.5. Thus, the expected revenue is 0.5($300) + 0.5($600) = $450. The manager’s mistake is in thinking that the expected value of the product is equal to the product of the expected values, which is true only if the two variables are independent, and that is not the case here.
7.21. Notation: “Pos” = positive, “Neg” = negative, “D” = disease, “D̄” = no disease.
P(Pos) = P(Pos | D) P(D) + P(Pos | D̄) P(D̄)
= 0.95(0.02) + 0.005(0.98)
= 0.0239
P(Neg) = 1 - P(Pos) = 1 - 0.0239 = 0.9761
P(D | Pos) = P(Pos | D) P(D) / P(Pos) = 0.95(0.02)/0.0239 = 0.795
P(D | Neg) = P(Neg | D) P(D) / P(Neg) = 0.05(0.02)/0.9761 = 0.0010

[Flipped probability tree: Test positive (0.0239) branches to Disease (0.795) and No disease (0.205); Test negative (0.9761) branches to Disease (0.0010) and No disease (0.9990).]

Probability Table
          Pos        Neg
D       0.0190     0.0010     0.02
D̄       0.0049     0.9751     0.98
        0.0239     0.9761     1.00
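The "flipped tree" above can also be reproduced by building the joint probability table directly; a minimal sketch (the variable names are ours) is:

    p_disease = 0.02
    p_pos_given_disease = 0.95
    p_pos_given_healthy = 0.005

    # Joint probabilities P(disease status and test result).
    joint = {
        ("D", "Pos"): p_pos_given_disease * p_disease,
        ("D", "Neg"): (1 - p_pos_given_disease) * p_disease,
        ("no D", "Pos"): p_pos_given_healthy * (1 - p_disease),
        ("no D", "Neg"): (1 - p_pos_given_healthy) * (1 - p_disease),
    }
    p_pos = joint[("D", "Pos")] + joint[("no D", "Pos")]
    print(round(p_pos, 4))                              # P(Pos) = 0.0239
    print(round(joint[("D", "Pos")] / p_pos, 3))        # P(D | Pos) = 0.795
    print(round(joint[("D", "Neg")] / (1 - p_pos), 4))  # P(D | Neg) = 0.001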
7.22. Test results and field results are conditionally independent given the level of carcinogenic risk.
Alternatively, given the level of carcinogenic risk, knowing the test results will not help specify the field
results.
7.23.
P(TR+ and FR+ | CP high) = P(TR+ | FR+ and CP high) P(FR+ | CP high)
= P(TR+ | CP high) P(FR+ | CP high)
The second equality follows because FR and TR are conditionally independent given CP. In other words,
we just multiply the probabilities together. This is true for all four of the probabilities required:
P(TR+ and FR+ | CP high) = 0.82 (0.95) = 0.779
P(TR+ and FR- | CP high) = 0.82 (0.05) = 0.041
P(TR- and FR- | CP low) = 0.79 (0.83) = 0.6557
P(TR- and FR+ | CP low) = 0.79 (0.17) = 0.1343
7.24. Students’ answers will vary considerably here, depending on their opinions. However, most will rate
h as more likely than f. Tversky and Kahneman (1982) (see reference in text) found that as many as 85% of
experimental subjects ranked the statements in this way, which is inconsistent with the idea of joint
probability (see the next question). Moreover, this phenomenon was found to occur consistently regardless
of the degree of statistical sophistication of the subject.
7.25. a. The students’ explanations will vary, but many of them argue on the basis of the degree to which
Linda’s description is consistent with the possible classifications. Her description makes her sound not
much like a bank teller and a lot like an active feminist. Thus, statement h (bank teller and feminist) is more
consistent with the description than f (bank teller). Tversky and Kahneman claim that the conjunction effect
observed in these responses stems from the representativeness heuristic. This heuristic is
discussed in Chapter 8 of Making Hard Decisions with DecisionTools.
b.
[Venn diagram: the outcome "Bank teller and feminist" is the intersection of the "Bank teller" and "Feminist" outcomes, so its area cannot exceed the area of "Bank teller".]
P(Bank teller and feminist) = P(Feminist | Bank teller) P(Bank teller). Since P(Feminist | Bank teller) must
be less than or equal to one, P(Bank teller and feminist) must be less than or equal to P(Bank teller). The
area for the intersection of the two outcomes cannot be larger than the area for Bank teller.
c. The friend is interpreting h as a conditional outcome instead of a joint outcome. Statement h clearly is a
joint outcome, because both outcomes (bank teller and feminist) occur.
7.26. To start, we need some labels. Let us say that we have chosen Door A. The host has opened Door B,
revealing the goat, and Door C remains closed. The question is whether we should switch to C. The
decision rule is simple: switch if the probability of the car being behind Door C is greater than the
probability that it is behind A. Let “Car C” denote the outcome that the car is behind C, and likewise with
the goats and the other doors. We want to calculate P(Car C | Goat B). Use Bayes theorem:
P(Car C | Goat B) = P(Goat B | Car C) P(Car C) / [P(Goat B | Car A) P(Car A) + P(Goat B | Car B) P(Car B) + P(Goat B | Car C) P(Car C)]
The prior probabilities P(Car A), P(Car B), and P(Car C) are all equal to 1/3. For the conditional
probabilities, the key is to think about the host’s behavior. The host would never open a door to reveal the
car. Thus, P(Goat B | Car C) = 1 and P(Goat B | Car B) = 0. Finally, what if the car is behind A? What is
P(Goat B | Car A)? In this case, we assume that the host would randomly choose B or C, so
P(Goat B | Car A) = 0.5. Plug these numbers into the formula to get:
P(Car C | Goat B) = 1(1/3) / [0.5(1/3) + 0(1/3) + 1(1/3)] = 2/3
Thus, you should always switch when the host reveals the goat!
Here’s another way to think about it: You had a one-third chance of getting the correct door in the first
place. Thus, there is a two-thirds chance that the goat is behind B or C. By showing the goat behind B, the
host has effectively shifted the entire two-thirds probability over to Door C.
Still another way to think about the problem: If you played this game over and over, one-third of the time
the car would be behind A, and two-thirds of the time it would be behind one of the other doors. Thus, two-thirds of the time, the host shows you which door the car is not behind. If you always switch to the door
that the host did not open, you will find the car 2/3 of the time. The other 1/3 the car is behind the door you
chose in the first place.
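The repeated-play argument is easy to confirm with a quick simulation (a sketch, not part of the text's solution):

    import random

    def play(switch, doors=("A", "B", "C")):
        car = random.choice(doors)
        pick = random.choice(doors)
        # The host opens a door that holds a goat and is not the contestant's pick.
        opened = random.choice([d for d in doors if d != pick and d != car])
        if switch:
            pick = next(d for d in doors if d != pick and d != opened)
        return pick == car

    trials = 100_000
    print(sum(play(True) for _ in range(trials)) / trials)   # about 2/3
    print(sum(play(False) for _ in range(trials)) / trials)  # about 1/3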
This question was asked of Marilyn Vos Savant, the person with the highest recorded I.Q. Her published
answer was correct, but it created quite a stir because many people (including PhDs) did not understand
how to solve the problem.
7.27. The host is proposing a decision tree that looks like this:
[Decision tree proposed by the host: Keep yields x; Switch yields x/2 with probability 0.5 and 2x with probability 0.5.]
But this is not correct. Suppose that x is equal to $100. Then the host is saying that if you swap, you have
equally likely chances at an envelope with $200 and an envelope with $50. But that’s not the case! (If it
were true, you would definitely want to switch.)
Labeling the two envelopes A and B, the contestant correctly understands that the decision tree is as
follows:
[Decision tree: Keep A yields x if A has x (probability 0.5) and x/2 if B has x (probability 0.5); Switch to B yields x/2 if A has x (probability 0.5) and x if B has x (probability 0.5).]
The two decision branches are equivalent from the point of view of the decision maker.
7.28. The solution is a straightforward application of Bayes’ theorem. For any Ai and Bj, with 1 ≤ i ≤ n and 1 ≤ j ≤ m, we are given that P(Ai | Bj) = P(Ai). By Bayes’ theorem:
P(Bj | Ai) = P(Ai | Bj) P(Bj) / P(Ai) = P(Ai) P(Bj) / P(Ai) = P(Bj).
This is what we were to show.
7.29.
E(Payoff) = $4.56 billion (calculated previously)
Var(Payoff) = 0.2(10.3 - 4.56)² + 0.5(5 - 4.56)² + 0.3(0 - 4.56)²
= 12.9244 billion-dollars-squared
σPayoff = √12.9244 = $3.5951 billion
7.30. Let “+” indicate positive results, and “-” indicate negative results.
P(+) = P(+ | Dome) P(Dome) + P(+ | No dome) P(No dome)
= 0.99(0.6) + 0.15(0.4)
= 0.654
P(Dome | +) = P(+ | Dome) P(Dome) / [P(+ | Dome) P(Dome) + P(+ | No dome) P(No dome)]
= 0.99(0.6) / [0.99(0.6) + 0.15(0.4)]
= 0.908
P(No dome | +) = 1 - 0.908 = 0.092
We can now calculate the EMV for Site 1, given test results are positive:
EMV(Site 1 | +) = (EMV | Dome) P(Dome | +) + (EMV | No dome) P(No dome | +)
= ($52.50 K) 0.908 + (-$53.75 K) 0.092
= $42.725 K
[EMV|Dome and EMV|No dome have been calculated and appear in Figure 7.15.]
EMV(Site 1 | +) is greater than EMV(Site 2 | +). If the test gives a positive result, choose Site 1.
If the results are negative:
P(-) = 1 - P(+) = 1 - 0.654 = 0.346
P(Dome | -) = P(- | Dome) P(Dome) / [P(- | Dome) P(Dome) + P(- | No dome) P(No dome)]
= 0.01(0.6) / [0.01(0.6) + 0.85(0.4)]
= 0.017
P(No dome | -) = 1 - 0.017 = 0.983
We can now calculate the EMV for Site 1, given test results are negative:
EMV(Site 1 | -) = (EMV | Dome) P(Dome | -) + (EMV | No dome) P(No dome | -)
= ($52.50 K) 0.017 + (-$53.75 K) 0.983
= -$51.944 K
EMV(Site 1 | -) is less than the EMV(Site 2 | -). If the test gives a negative result, choose Site 2.
7.31.
P(+ and Dome) = P(+ | Dome) P(Dome) = 0.99 (0.60) = 0.594.
P(+ and Dome and Dry) = P(Dry | + and Dome) P(+ and Dome)
But P(Dry | + and Dome) = P(Dry | Dome) = 0.60. That is, the presence or absence of the dome is what
matters, not the test results themselves. Therefore:
P(+ and Dome and Dry) = 0.60 (0.594) = 0.356
Finally,
P(Dome | + and Dry) = P(Dome and + and Dry) / P(+ and Dry)
But
P(+ and Dry) = P(+ and Dry | Dome) P(Dome) + P(+ and Dry | No dome) P(No dome)
and
P(+ and Dry | Dome) = P(Dry | + and Dome) P(+ | Dome) = P(Dry | Dome) P(+ | Dome) = 0.6(0.99)
P(+ and Dry | No dome) = P(Dry | + and No dome) P(+ | No dome) = P(Dry | No dome) P(+ | No dome) = 0.85(0.15)
Now we can substitute back in:
P(+ and Dry) = 0.6(0.99)(0.6) + 0.85(0.15)(0.4) = 0.407
and
P(Dome | + and Dry) = 0.356 / 0.407 = 0.874.
7.32. EMV(Site 1) = p(52.50) + (1 - p)(-53.75)
Set this equal to EMV(Site 2) = 0, and solve for p:
p(52.50) + (1 - p)(-53.75) = 0
52.50p + 53.75p = 53.75
p = 53.75 / (52.50 + 53.75) = 0.5059
Because this breakeven probability of 0.5059 lies below the range, if 0.55 < P(Dome) < 0.65, then the optimal choice over the entire range is to drill at Site 1.
7.33. Choose Site 1 if
EMV(Site 1) > EMV(Site 2)
q(52.50) + (1 - q) (-53.75) > p(-200) + (1-p) (50)
q > -2.3529 p + 0.9765
[Two-way sensitivity graph: p = P(Dry at Site 2) on the horizontal axis and q = P(Dome at Site 1) on the vertical axis. The line q = -2.3529p + 0.9765 crosses the q-axis at 0.9765 and the p-axis at 0.4150; Site 1 is preferred above the line and Site 2 below it.]
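The strategy regions can also be traced numerically; the sketch below simply evaluates the two EMVs at a few (p, q) points and reports which site wins (best_site() is our helper name).

    def best_site(p_dry_site2, q_dome_site1):
        emv_site1 = q_dome_site1 * 52.50 + (1 - q_dome_site1) * (-53.75)
        emv_site2 = p_dry_site2 * (-200) + (1 - p_dry_site2) * 50
        return "Site 1" if emv_site1 > emv_site2 else "Site 2"

    # A few points on either side of the line q = -2.3529 p + 0.9765.
    for p, q in [(0.2, 0.7), (0.2, 0.4), (0.5, 0.1)]:
        print(p, q, best_site(p, q))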
7.34. P(FR pos) = P(FR pos | CP High) P(CP High) + P(FR pos | CP Low) P(CP Low)
= 0.95(0.27) + 0.17(0.73)
= 0.3806
P(FR + | TR +) = P(FR + | CP High and TR +) P(CP High | TR +) + P(FR + | CP Low and TR +) P(CP Low | TR +)
But FR and TR are conditionally independent given CP, so
P(FR + | CP High and TR +) = P(FR + | CP High) = 0.95
P(FR + | CP Low and TR +) = P(FR + | CP Low) = 0.17
We can calculate P(CP High | TR +) using Bayes’ theorem:
P(CP High | TR +) = P(TR + | CP High) P(CP High) / [P(TR + | CP High) P(CP High) + P(TR + | CP Low) P(CP Low)]
= 0.82(0.27) / [0.82(0.27) + 0.21(0.73)]
= 0.5909
Therefore
P(CP Low | TR +) = 1 - P(CP High | TR +) = 1 - 0.5909 = 0.4091.
Substitute back to obtain
P(FR + | TR +) = 0.95 (0.5909) + 0.17 (0.4091) = 0.6309.
It is important to note that P(FR + | TR +) ≠ P(FR +) = 0.3806. Thus, the two are not fully independent,
even though they are conditionally independent given CP. Another way to say it is that conditional
independence does not necessarily imply regular (marginal) independence.
7.35. Students have a hard time understanding what they are to show here, as it seems obvious, but some algebra is required to show that the return of the portfolio is the weighted average of the individual returns. The trick for this problem is to break the portfolio weights down into numbers of shares and stock prices. Let nAB, nCD, and nEF be the numbers of shares of stocks AB, CD, and EF in the portfolio, and let ABPrice0 and ABPrice1 denote the beginning and ending prices of stock AB (and similarly for CD and EF). The weight wAB satisfies
wAB = nAB ABPrice0 / (nAB ABPrice0 + nCD CDPrice0 + nEF EFPrice0),
and similarly for wCD and wEF. Let Port0 be the initial value of the portfolio and Port1 be its value after one period. To calculate the ending value of the portfolio, we need to know how many shares of each stock we own, because the ending value is the sum of the number of shares times the ending stock price. Thus, Port0 and Port1 satisfy:
Port0 = nAB ABPrice0 + nCD CDPrice0 + nEF EFPrice0
Port1 = nAB ABPrice1 + nCD CDPrice1 + nEF EFPrice1
Therefore, the return of the portfolio, RP, is
RP = (Port1 - Port0) / Port0
= [nAB(ABPrice1 - ABPrice0) + nCD(CDPrice1 - CDPrice0) + nEF(EFPrice1 - EFPrice0)] / Port0
= [nAB ABPrice0 / Port0] × [(ABPrice1 - ABPrice0) / ABPrice0] + [nCD CDPrice0 / Port0] × [(CDPrice1 - CDPrice0) / CDPrice0] + [nEF EFPrice0 / Port0] × [(EFPrice1 - EFPrice0) / EFPrice0]
= wAB RAB + wCD RCD + wEF REF
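A quick numerical check of this identity, using made-up share counts and prices (not values from the problem), is sketched below.

    shares = {"AB": 100, "CD": 50, "EF": 200}
    price0 = {"AB": 20.0, "CD": 40.0, "EF": 10.0}   # beginning-of-period prices
    price1 = {"AB": 22.0, "CD": 38.0, "EF": 11.5}   # end-of-period prices

    port0 = sum(shares[s] * price0[s] for s in shares)
    port1 = sum(shares[s] * price1[s] for s in shares)
    portfolio_return = (port1 - port0) / port0

    weights = {s: shares[s] * price0[s] / port0 for s in shares}
    returns = {s: (price1[s] - price0[s]) / price0[s] for s in shares}
    weighted_average = sum(weights[s] * returns[s] for s in shares)

    print(round(portfolio_return, 6), round(weighted_average, 6))  # the two agree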
Case Study: Decision Analysis Monthly
May:
P(Renew) = P(Renew | Gift) P(Gift) + P(Renew | Promo) P(Promo)
+ P(Renew | Previous) P(Previous)
= 0.75 (0.70) + 0.50 (0.20) + 0.10 (0.10)
= 0.6350
June:
P(Renew) = 0.85 (0.45) + 0.60 (0.10) + 0.20 (0.45)
= 0.5325
There is good news because the proportion of renewals in each category increased from May to June.
However, as indicated by Calloway, the overall proportion has indeed decreased because the mix of gift,
promotional, and previous subscriptions has changed along with the increase in proportion renewed. The
overall decrease should indeed be looked at as bad news in a sense. If the editors can project the future mix
of expiring subscriptions, they will be able to make an educated guess as to whether the trend is toward an
overall increase or decrease in subscriptions. (This problem is an example of Simpson’s paradox.)
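The law-of-total-probability arithmetic behind the paradox is compact enough to script; overall_rate() below is our name for the helper, not anything from the text.

    def overall_rate(renewal_rates, mix):
        # Law of total probability: P(Renew) = sum over categories of P(Renew | category) P(category).
        return sum(renewal_rates[c] * mix[c] for c in renewal_rates)

    may = overall_rate({"gift": 0.75, "promo": 0.50, "previous": 0.10},
                       {"gift": 0.70, "promo": 0.20, "previous": 0.10})
    june = overall_rate({"gift": 0.85, "promo": 0.60, "previous": 0.20},
                        {"gift": 0.45, "promo": 0.10, "previous": 0.45})
    print(may, june)  # 0.635 and 0.5325: every category improves, yet the overall rate falls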
Case Study: Screening for Colorectal Cancer
1. We know that P(Blood) = 0.10 from the text: “...10% had blood in their stools.” These 10% underwent
colonoscopy, and 2.5% of those actually had cancer, which gives us P(Cancer | Blood) = 0.025. Thus,
P(Cancer and Blood) = 0.025 (0.10) = 0.0025. Finally, we know that “... approximately 5 out of 1000” had
no blood but did develop cancer. This gives P(No Blood and Cancer) = 0.005.
With these numbers, we can construct the complete probability table. The probabilities that we already had are marked with an asterisk:

              Cancer       No Cancer
Blood         0.0025*      0.0975        0.10*
No Blood      0.0050*      0.8950        0.90
              0.0075       0.9925        1.00

P(Cancer | Blood) = 0.025 as indicated above. We can calculate P(Cancer | No Blood) = 0.005/0.90 = 0.0056.
2. The expected cost of the policy is 60 million($10) + 0.10(60 million)($750) = $5.1 billion. The expected number of people who must undergo colonoscopy is 6 million. And the number of people who have the colonoscopy done needlessly is 0.975(6 million) = 5.85 million.
3. If we save 3 lives per thousand, then out of 60 million people tested we would expect to save 60 million
times 3/1000 or 180,000 lives. The total cost, ignoring the time value of money, would be $5.1 billion times
13, or $66.3 billion. Divide cost by number of lives to get the cost per life: $368,333 per life saved.
4. This is a tough question, but we face such questions constantly. On one hand, a lot of people are
inconvenienced needlessly by this screening procedure. On the other hand, the cost of $368,333 is a
relatively low figure. Economic analyses of the value of a life typically give a figure in the neighborhood of
$4 million.
However, this analysis is not complete. The real issue is that the colonoscopy procedure itself can lead to
complications and misinterpretations, and hence it is itself a risky procedure. A complete analysis would
have to take this into account. Doing so will increase the overall cost of the screening policy. Moreover, we
have put no dollar figures on the inconvenience, concern, and worry that so many people must go through
needlessly.
Case Study: AIDS
1. P(Inf | ELISA+) = P(ELISA+ | Inf) P(Inf) / [P(ELISA+ | Inf) P(Inf) + P(ELISA+ | Not Inf) P(Not Inf)]
= 0.997(0.0038) / [0.997(0.0038) + 0.015(1 - 0.0038)]
= 0.20
2.
[Graph of P(Inf | ELISA+) as a function of the prior P(Inf), for priors from 0 to about 0.4. Plotted points: NJ recruits, posterior approximately 0.13; RI gays, posterior approximately 0.73; NY drug users, posterior approximately 0.98.]
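The sensitivity of the posterior to the prior can be traced with a few lines of Python; the priors looped over below are illustrative only (only the 0.0038 value comes from question 1).

    def posterior(prior, sensitivity=0.997, false_positive=0.015):
        # P(Inf | ELISA+) as a function of the prior P(Inf).
        return sensitivity * prior / (sensitivity * prior + false_positive * (1 - prior))

    for prior in (0.001, 0.0038, 0.01, 0.05, 0.10, 0.40):
        print(prior, round(posterior(prior), 3))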
3. P(Inf | ELISA-) = P(ELISA- | Inf) P(Inf) / [P(ELISA- | Inf) P(Inf) + P(ELISA- | Not Inf) P(Not Inf)]
= 0.003(0.0038) / [0.003(0.0038) + 0.985(1 - 0.0038)]
= 0.0000116
[Graph of P(Inf | ELISA-) as a function of the prior P(Inf).]
4. P(Inf | WB+, ELISA+)
= P(WB+ | Inf, ELISA+) P(Inf | ELISA+) / [P(WB+ | Inf, ELISA+) P(Inf | ELISA+) + P(WB+ | Not Inf, ELISA+) P(Not Inf | ELISA+)]
We have the values for specificity, sensitivity, false positive rate, and false negative rate for the Western Blot from the text of the case. As indicated in the problem, all of these figures are conditioned on a positive ELISA result:
P(WB+ | Inf, ELISA+) = 0.993
P(WB- | Inf, ELISA+) = 0.007
P(WB+ | Not Inf, ELISA+) = 0.084
P(WB- | Not Inf, ELISA+) = 0.916
Likewise, we calculated P(Inf | ELISA+) in question 1:
P(Inf | ELISA+) = 0.20
P(Not Inf | ELISA+) = 0.80
Thus, we have
P(Inf | WB+, ELISA+) = 0.993(0.20) / [0.993(0.20) + 0.084(0.80)] = 0.75
We can also calculate
P(Inf | WB-, ELISA+) = P(WB- | Inf, ELISA+) P(Inf | ELISA+) / [P(WB- | Inf, ELISA+) P(Inf | ELISA+) + P(WB- | Not Inf, ELISA+) P(Not Inf | ELISA+)]
= 0.007(0.20) / [0.007(0.20) + 0.916(0.80)]
= 0.00193
Note that in using P(Inf | ELISA+) = 0.20, we are implicitly using as a prior P(Inf) = 0.0038.
5.
[Graphs of P(Inf | ELISA+, WB+) and P(Inf | ELISA+, WB-) as functions of the prior P(Inf).]
6. Students’ answers will vary considerably here. The question itself provides some guidance for
identifying the costs associated with false negatives and false positives. Clearly, a false positive could lead
an individual to unnecessary psychological distress. At a societal level, a high false positive rate could lead
to an unduly high level of expenditures on the disease. On the other hand, false negatives could have severe
social impact as infected individuals could spread the disease unknowingly.
One of society’s fundamental choices is to balance the rates of false positives and false negatives. If false
negatives are deemed much more serious than false positives, for example, then it would be appropriate to
develop tests that have a very high level of sensitivity.
It is appropriate to draw an analogy here with Type I and Type II errors in statistical hypothesis testing. The
probabilities of these errors (often labeled α and β, respectively), correspond to false positive and false
negative rates. When the probability of a Type I error is reduced by changing the decision rule (everything
else being equal), the probability of a Type II error increases.
Case Study: Discrimination and the Death Penalty
1. Let DP denote “Death Penalty,” DW “Defendant White,” and DB “Defendant Black.” From Table 7.5,
P(DP | DW) = 19/160 = 0.119
P(DP | DB) = 17/166 = 0.102
Based on these data, there appears to be little difference in the rate at which black defendants get the death
penalty.
2. Let VW and VB denote “Victim White” and “Victim Black,” respectively. Based on Table 7.6, for white
victims:
P(DP | DW, VW) = 19/151 = 0.126
P(DP | DB, VW) = 11/63 = 0.175
For black victims:
P(DP | DW, VB) = 0/9 = 0
P(DP | DB, VB) = 6/103 = 0.058
Now the interpretation is different. After disaggregating the data on the basis of victim race, blacks appear
to get the death penalty more frequently (by about 5 percentage points) than whites, regardless of the race
of the victim.
3. How do we resolve the apparent paradox? How could there be no difference between the overall rate of
death penalties (or even a slightly lower rate for blacks) with the aggregate data, but a clear difference — in
the opposite direction — with the disaggregate data?
This is an example of Simpson’s paradox. The problem is that the mix of victim races differs considerably
from white defendants to black defendants. There are so few black victims of white defendants that the low death-penalty rate in this case (0) plays a very small role in calculating the overall death-penalty rate for white defendants. Likewise, there are so many black victims of black defendants that the relatively low death-penalty rate for the black defendant/black victim combination brings down the overall death-penalty rate for black defendants.
The Decision Analysis Monthly case is another example of Simpson’s paradox.
Chapter 7 Online Supplement: Solutions to Problems
7S.1. P(X=2, Y=10) = P(Y=10 | X=2) P(X=2) = 0.9 (0.3) = 0.27. Likewise,
P(X=2, Y=20) = P(Y=20 | X=2) P(X=2) = 0.1 (0.3) = 0.03
P(X=4, Y=10) = P(Y=10 | X=4) P(X=4) = 0.25 (0.7) = 0.175
P(X=4, Y=20) = P(Y=20 | X=4) P(X=4) = 0.75 (0.7) = 0.525
E(X) = 0.3(2) + 0.7(4) = 3.4
P(Y = 10) = P(Y=10 | X=2) P(X=2) + P(Y=10 | X=4) P(X=4) = 0.27 + 0.175 = 0.445
P(Y = 20) = 1-0.445 = 0.555
E(Y) = 0.445 (10) + 0.555 (20) = 15.55
Now calculate:
 X      Y      X - E(X)    Y - E(Y)    (X-E(X))(Y-E(Y))    P(X, Y)
 2     10       -1.4        -5.55            7.77            0.27
 2     20       -1.4         4.45           -6.23            0.03
 4     10        0.6        -5.55           -3.33            0.175
 4     20        0.6         4.45            2.67            0.525

The covariance is the expected value of the cross products in the next-to-last column. To calculate it, use the joint probabilities in the last column:
Cov(X, Y) = 0.27(7.77) + 0.03(-6.23) + 0.175(-3.33) + 0.525(2.67) = 2.73
Calculate the standard deviations by squaring the deviations in the third and fourth columns (for X and Y, respectively), finding the expected value of the squared deviations, and taking the square root:
σX = √[0.3(-1.4)² + 0.7(0.6)²] = 0.917
σY = √[0.445(-5.55)² + 0.555(4.45)²] = 4.970
Thus, the correlation is ρXY = 2.73 / [0.917(4.970)] = 0.60.
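The same covariance and correlation can be computed directly from the joint distribution; the sketch below uses a dictionary layout of our own choosing.

    from math import sqrt

    joint = {(2, 10): 0.27, (2, 20): 0.03, (4, 10): 0.175, (4, 20): 0.525}  # P(X = x, Y = y)

    ex = sum(p * x for (x, _), p in joint.items())
    ey = sum(p * y for (_, y), p in joint.items())
    cov = sum(p * (x - ex) * (y - ey) for (x, y), p in joint.items())
    sd_x = sqrt(sum(p * (x - ex) ** 2 for (x, _), p in joint.items()))
    sd_y = sqrt(sum(p * (y - ey) ** 2 for (_, y), p in joint.items()))
    print(round(cov, 3), round(cov / (sd_x * sd_y), 2))  # 2.73 and 0.60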
7S.2. The basic setup for this problem is the same as it is for the previous problem. We already have the
joint probabilities, so we can start by calculating the expected values of X and Y:
E(X) = [(-2) + (-1) + 0 + 1 + 2] / 5 = 0
E(Y) = 0.2(0) + 0.4(1) + 0.4(2) = 1.2
Thus, we have the following table:

 X      Y      X - E(X)    Y - E(Y)    (X-E(X))(Y-E(Y))
-2      2        -2          0.8             -1.6
-1      1        -1         -0.2              0.2
 0      0         0         -1.2              0
 1      1         1         -0.2             -0.2
 2      2         2          0.8              1.6
The covariance is the expected value of the numbers in the last column, each of which can occur with
probability 1/5. Calculating this expected value gives a covariance of zero. Likewise, the correlation equals
zero.
CHAPTER 8
Subjective Probability
Notes
Chapter 8 is the key chapter in Section 2 of Making Hard Decisions with DecisionTools, 3rd Ed. While
probability models can, and often are, created in other ways, it is the subjective interpretation of probability
and the use of personal judgments that make decision analysis unique.
At the same time, subjective probability has been a source of considerable criticism. The only counter to
this criticism is to understand the nature of the subjective judgments that must be made and to make those
judgments with great care. And the only way to do so is to spend the time required to think hard about what
is known regarding any given uncertain event. The probability-assessment framework presented in the
chapter is just the framework; the decision maker or assessor must provide the hard thinking and work to
generate good subjective probability models.
Question 8.4 is an excellent in-class exercise for motivating quantitative modeling of subjective
probabilities. Have the students answer the questions, and tabulate their results. A wide variety of
interpretations exist for many of these verbal phrases.
Two books are particularly good sources for probabilities of everyday events for which you might want to
have students assess probabilities. These books can be used to compare the assessments with data-based
risk estimates:
Laudan, L. (1994) The Book of Risks. New York: Wiley
Siskin, B., J. Staller, and D. Rorvik (1989) What are the Chances. New York: Crown
The online supplement introduces the idea of a Dutch Book and shows why subjective assessments should
follow the laws of probability. Solutions to the two online problems (S8.1 and S8.2) are included here.
Topical cross-reference for problems
Ambiguity: 8.23, S8.1, S8.2, Assessing Cancer Risk, Space Shuttle Challenger
Bayes’ theorem: 8.10
Conjunction effect: 8.21
CDFs: 8.11 - 8.16
Decomposition: 8.5, 8.7, 8.17 - 8.19
Ellsberg paradox: 8.25
Odds: 8.9, 8.10, 8.26
Discrete approximations: 8.11, 8.15, 8.16
Probability assessment: 8.3, 8.6, 8.11 - 8.14
Probability assessment heuristics: 8.20 - 8.22
Requisite models: 8.23
Scientific information: Assessing Cancer Risk, Breast Implants, Space Shuttle Challenger
Sensitivity analysis: 8.24
Subjective judgments: 8.2
Solutions
8.1. Answers will vary considerably, but students might think about subjective probability as a degree of
belief, uncertainty in one’s mind, or a willingness to bet or accept lotteries. They may appropriately
contrast the subjective interpretation of probability with a frequency interpretation.
8.2. The model under discussion relates a number of financial ratios to the probability of default. First,
there is a good deal of subjective judgment involved in deciding which financial ratios to include in such a
model. Second, in using past data your friend has made the implicit assumption that no fundamental
changes in causes of default will occur, or that the data from the past are appropriate for understanding
which firms may default in the future. A bank officer using this model is making an implicit subjective
judgment that the model and data used are adequate for estimating the probability of default of the
particular firms that have applied for loans. Finally, the bank officer implicitly judges that your friend has
done a good job!
8.3. Assessing a discrete probability requires only one judgment. Assessing a continuous probability
distribution can require many subjective judgments of interval probabilities, cumulative probabilities, or
fractiles in order to sketch out the CDF. Even so, the fundamental probability assessments required in the
continuous case are essentially the same as in the discrete case.
8.4. Answers will, of course, vary a lot. As a motivation for careful quantitative modeling of probability, it
is instructive to collect responses from a number of people in the class and show the ranges of their
responses. Thus, it is clear that different people interpret these verbal phrases in different ways.
Answers can be checked for consistency, as well. In particular, a > 0.5, g > 0.5, l < j, e < j, p < m < i, and o
< f < k.
8.5. Answers will vary here, too, but many students will decompose the assessment into how well they will
do on homework (for which they have a good deal of information) and how well they will do on a final
exam or project.
(Influence diagram: Final Exam and Homework as predecessors of Course Grade.)
8.6. The students’ assessments may vary considerably. However, they should be reasonable and indicate in
some way that some effort went into the assessment process. Answers should include some discussion of
thought processes that led to the different assessments.
8.7. It is possible to assess probabilities regarding one’s own performance, but such assessments are
complicated because the outcome depends on effort expended. For example, an individual might assess a
relatively high probability for an A in a course, thus creating something of a personal commitment to work
hard in the course.
8.8. Considering the assessments in 8.7, it might be appropriate to decompose the event into uncertain
factors (homework, professor’s style, exams, and so on) and then think about how much effort to put into
the course. It would then be possible to construct risk profiles for the final grade, given different levels of
effort.
8.9. This problem calls for the subjective assessment of odds. Unfortunately, no formal method for
assessing odds directly has been provided in the text. Such a formal approach could be constructed in terms
of bets or lotteries as in the case of probabilities, but with the uncertainty stated in odds form.
8.10. First, let us adopt some notation. Let NW and NL denote “Napoleon wins” and “Napoleon loses,”
respectively. Also, let “P&E” denote that the Prussians and English have joined forces.
The best way to handle this problem is to express Bayes’ theorem in odds form. Show that
P(NW | P&E) / P(NL | P&E) = [P(P&E | NW) / P(P&E | NL)] × [P(NW) / P(NL)].
We have that P(NW) = 0.90 and P(NW | P&E) = 0.60, and so P(NL) = 0.10 and P(NL | P&E) = 0.40. Thus,
we can substitute:
0.60 / 0.40 = [P(P&E | NW) / P(P&E | NL)] × (0.90 / 0.10)
or
P(P&E | NW) / P(P&E | NL) = 1/6.
Thus, Napoleon would have had to judge the probability of the Prussians and English joining forces as six times more likely if he is to lose than if he is to win.
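The odds-form calculation can be verified with a few lines of arithmetic. The sketch below (plain Python, offered only as a check of the algebra above) backs out the likelihood ratio from the prior and posterior odds.

```python
# Odds-form Bayes for Problem 8.10: from Napoleon's prior and posterior odds,
# back out the likelihood ratio he must implicitly have assigned.
prior_odds = 0.90 / 0.10        # P(NW) / P(NL)
posterior_odds = 0.60 / 0.40    # P(NW | P&E) / P(NL | P&E)
likelihood_ratio = posterior_odds / prior_odds   # P(P&E | NW) / P(P&E | NL)
print(likelihood_ratio)         # 1/6: joining forces is six times as likely if Napoleon loses
```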
8.11. This problem requires a student to assess a subjective CDF for his or her score in the decision analysis
course. Thus, answers will vary considerably, depending on personal assessments. For illustrative purposes,
assume that the student assesses the 0.05, 0.25, 0.50, 0.75, and 0.95 fractiles:
x0.05 = 71
x0.25 = 81
x0.50 = 87
x0.75 = 89
x0.95 = 93
These assessments can be used to create a subjective CDF:
(Figure: the subjective CDF plotted through these fractiles, with cumulative probability on the vertical axis and DA score from 60 to 100 on the horizontal axis.)
To use these judgments in deciding whether to drop the course, we can use either Swanson-Megill or the
Pearson-Tukey method. The Pearson-Tukey method approximates the expected DA score as:
EP-T(DA Score) ≈ 0.185 (71) + 0.63 (87) + 0.185 (93) = 85.15.
Assume that the student has a GPA of 2.7. Using EP-T(DA Score) = 85.15 to calculate expected salary,
E(Salary | Drop Course) = $4000 (2.7) + $24,000 = $34,800
E(Salary | Don’t drop) = 0.6 ($4000 x 2.7) + 0.4 ($170 × EP-T(DA Score)) + $24,000
= 0.6 ($4000 × 2.7) + 0.4 ($170 × 85.15) + $24,000
= $36,270.
Thus, the optimal choice is not to drop the course.
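A small Python sketch of the same calculation may be handy for checking students' numbers; it uses the illustrative fractiles and the GPA of 2.7 assumed above, and the salary formula is the one used in this solution.

```python
# Pearson-Tukey three-point approximation and the drop/keep comparison for
# Problem 8.11, using the illustrative fractiles assessed above.
x05, x50, x95 = 71, 87, 93
ept_score = 0.185 * x05 + 0.63 * x50 + 0.185 * x95       # about 85.15

gpa = 2.7
drop = 4000 * gpa + 24000                                 # $34,800
keep = 0.6 * (4000 * gpa) + 0.4 * (170 * ept_score) + 24000   # about $36,270
print(ept_score, drop, keep, "keep the course" if keep > drop else "drop the course")
```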
8.12. Again, the assessments will be based on personal judgments and will vary among students. As an
example, suppose the following assessments are made:
P(DJIA ≤ 2000) = 0.05
P(DJIA > 3000) = 0.05
P(DJIA ≤ 2600) = 0.50
P(DJIA ≤ 2350) = 0.25
P(DJIA ≤ 2800) = 0.75
These assessments result in the following graph:
8.13. If you have worked this problem and looked in the back of the book for the answers, you have most
likely found that relatively few of the answers fell within the ranges you stated, indicating that you, like most people, are quite overconfident in your judgments. Given this, it would make sense to return to
Problem 8.12 and broaden the assessed distributions. (Note from Dr. Clemen: I have tried to do exercises
like this myself, knowing about the overconfidence phenomenon and how to make subjective probability
judgments, and I still make the same mistake!)
8.14. The cumulative distribution function provides a “picture” of what the forecaster sees as reasonable
outcomes for the uncertain quantity. If the CDF is translated into a probability density function, it is even easier to see how the forecaster thinks about the relative chances of the possible outcomes. The key
advantages of probabilistic forecasting are 1) that it provides a complete picture of the uncertainty, as
opposed to a point forecast which may give no indication of how accurate it is likely to be or how large the
error might be; and 2) the decision maker can use the probability distribution in a decision analysis if
desired. The disadvantage is that making the necessary assessments for the probabilistic forecast may take
some time. Some decision makers (the uninitiated) may have difficulty interpreting a probabilistic forecast.
8.15. a. This question requires students to make personal judgments. As an example, suppose the following
assessments are made:
P(S ≤ 65) = 0.05
P(S > 99) = 0.05
P(S ≤ 78) = 0.25
P(S ≤ 85) = 0.50
P(S ≤ 96) = 0.75
b. Now the student would ask whether she would be willing to place a 50-50 bet in which she wins if 78 <
S ≤ 85 and loses if 85 < S ≤ 96. Is there a problem with betting on an event over which you have some
control? See problems 8.8, 8.9.
c. Three-point Extended Pearson-Tukey approximation:
EP-T(Score) ≈ 0.185 (65) + 0.63 (85) + 0.185 (99) = 83.89.
d. Three-point Extended Swanson-Megill approximation:
ES-M(Score) ≈ 1/6 (68.25) + 2/3 (85) + 1/6 (98.25) = 84.42.
Because the assessments did not include the tenth and ninetieth percentiles, we use a straight-line
interpolation. The tenth percentile is calculated as: 65 + (0.10 – 0.05)*(13/0.2) = 68.25.
The ninetieth percentile is calculated as: 96 + (0.90 – 0.75)*(3/0.2) = 98.25.
The Extended P-T and S-M are quite close to each other, within 0.53 percentage points.
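The interpolation and the extended Swanson-Megill approximation can be scripted as follows. This is only a sketch using the illustrative fractiles assessed in part a; the interp helper is ours, not part of any package.

```python
# Extended Swanson-Megill approximation for Problem 8.15d, including the
# straight-line interpolation of the 10th and 90th percentiles from the
# assessed fractiles (0.05 -> 65, 0.25 -> 78, 0.75 -> 96, 0.95 -> 99).
def interp(p, p_lo, x_lo, p_hi, x_hi):
    """Linear interpolation of the p fractile between two assessed fractiles."""
    return x_lo + (p - p_lo) * (x_hi - x_lo) / (p_hi - p_lo)

x10 = interp(0.10, 0.05, 65, 0.25, 78)    # 68.25
x90 = interp(0.90, 0.75, 96, 0.95, 99)    # 98.25
x50 = 85
esm = (1/6) * x10 + (2/3) * x50 + (1/6) * x90
print(x10, x90, round(esm, 2))            # 68.25, 98.25, 84.42
```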
8.16. We continue with the example given above in 8.12, using the Dow Jones Industrial Average and the CDF assessed there.
a.
EP-T(DJIA) = 0.185 (2000) + 0.63 (2600) + 0.185 (3000) = 2563
b.
ES-M(DJIA) = 1/6 (2087.5) + 2/3 (2600) + 1/6 (2950) = 2573.
Because the assessments did not include the tenth and ninetieth percentiles, we use a straight-line
interpolation. The tenth percentile is calculated as: 2000 + (0.10 – 0.05)*(350/0.2) = 2087.5.
The ninetieth percentile is calculated as: 2800 + (0.90 – 0.75)*(200/0.2) = 2950.
8.17. The issue in this problem is whether an assessment made in one way is better than another. The
assessments will probably not be perfectly consistent. That is, P(Mets win series) = p will probably not be
exactly equal to P(Mets win series | Mets win Pennant) × P(Mets win pennant). For example, P(Mets win
series) may be holistically assessed as 0.02, while P(Mets win series | Mets win Pennant) = 0.6 and P(Mets
win pennant) = 0.1, giving P(Mets win series) = 0.06. For many individuals, particularly those with some
knowledge of the Mets’ current team, the decomposed assessment may be easier, and they may have more
confidence in the final result.
8.18. Students must assess P(Hospitalized | Accident), P(Hospitalized | No Accident), and P(Accident).
With these assessments, they could calculate
P(Hospitalized) = P(Hospitalized | Accident) P(Accident) + P(Hospitalized | No Accident) P(No accident)
It would be possible to decompose the assessment further or in other ways. For example, one might
consider the possibility of contracting a serious disease or requiring hospitalization for mental illness.
8.19. This problem is actually a full-scale project requiring considerable research. The students might
consider assessing a distribution for industry sales and a distribution for the firm’s market share. Together,
these two quantities give the distribution of revenue for the firm:
(Influence diagram: Industry Sales and Market Share as predecessors of Revenue.)
8.20. Tversky and Kahneman (1971) attribute the gambler’s fallacy to the representativeness heuristic and a
misunderstanding of random processes. People tend to think that small segments of a random process will
be highly representative of the overall process. Hence, after a string of red on a roulette wheel, it is thought
that black must occur to balance the sample and make it more representative of the overall process (50%
red, 50% black). Source: Tversky, A., and Kahneman, D. (1971) “Belief in the Law of Small Numbers,”
Psychological Bulletin, 76, 105-110.
8.21. “Linda” (Problem 7.25) is a classic example of the representativeness heuristic. Many people judge
that Linda’s description is less representative of a bank teller than of a feminist bank teller.
8.22. The “regression to the mean” phenomenon could easily be at work here. The D is most likely an
“outlier” and, hence, is likely to be followed by an improvement. Another argument is that your parents
should not use the D as a basis to compare you to other D students (representativeness heuristic). In
contrast, they should consider your “base rate” (as a B student) and not overweight the poor exam
performance.
8.23. In principle, the notion of a requisite model is appropriate in answering this question, but the
application of the concept is delicate. It is possible to perform sensitivity analysis on, say, a three-point
discrete approximation by wiggling the representative points. Do small changes in the 0.05, 0.5, or 0.95
fractiles affect the choice? If not, then further assessments are probably not necessary. If they do, then more
assessments, possibly decomposed assessments, and a clearer picture of the CDF are needed to obtain an
unequivocal decision.
8.24. There are a variety of different possibilities here. Perhaps the most straightforward is to obtain
“upper” and “lower” probabilities and perform a sensitivity analysis. That is, solve the decision tree or
influence diagram with each probability. Does the optimal choice change? If not, there is really no problem.
If the decision is sensitive to the range of probabilities, it may be necessary to assess the probabilities more
carefully. Another simple solution is to assess a continuous distribution for the probability in question (a
“second-order” probability distribution). Now estimate the expected value of this distribution using bracket
medians or the Pearson-Tukey method. Finally, use the expected value as the probability in the decision
problem.
8.25. a, b. Most people choose A and D because these two are the options for which the probability of
winning is known.
c. Choosing A and D may appear to be consistent because both of these involve known probabilities.
However, consider the EMVs for the lotteries and the implied values for P(Blue). If A is preferred to B,
then
EMV(A) > EMV(B)
(1/3)(1000) > P(Blue)(1000)
P(Blue) < 1/3.
However, if D is preferred to C, then
EMV(D) > EMV(C)
P(Blue)(1000) + P(Yellow)(1000) > (1/3)(1000) + P(Yellow)(1000)
P(Blue) > 1/3.
The inconsistency arises because it clearly is not possible to have both P(Blue) < 1/3 and P(Blue) > 1/3.
(Exactly the same result obtains if we use the utility of $1000, instead of the dollar value.)
Case Study: Assessing Cancer Risk — From Mouse to Man
1. Some assumptions and judgments that must be made:
• Organisms react in a specified way to both high and low doses of the substance. Researchers have
developed dosage response models, but validating those models has been difficult. Threshold
effects may exist; at a low dosage level, an organism may be able to process a particular toxin
effectively, although at a higher level (beyond the threshold) reactions to the toxin may appear.
• The test species is judged to react to the toxin in the same way that humans do. Some evidence
indicates, though, that human livers are better at processing toxins than mouse or rat livers. Thus,
dosage responses may not be the same across species.
• Lab and field exposures are similar in nature. However, in the field many more complicating and
possibly interactive effects exist.
The first two assumptions take shape in what is called a “dose-response” model. This is a mathematical
relationship that estimates the magnitude of the physiological response to different doses of the chemical.
What kinds of evidence would help to nail down the effects of toxic substances? Long term studies of
toxins in the field and in the lab would be most useful. We need to know effects of low doses on humans in
the field, in order to refine the human dose-response model, but this information may be very difficult to
gather. Certainly, no controlled studies could be performed!
2. The question is whether one bans substances on the grounds that they have not been demonstrated to be
safe, or does one permit their use on the grounds that they have not been demonstrated to be dangerous.
The choice depends on how the decision maker values the potential economic benefits relative to the
potential (but unknown) risks.
3. The issue of credibility of information sources is one with which scientists are beginning to wrestle, and
it is a complicated one. Intuitively, one would give more weight to those information sources that are more
credible. However, systematic ways of assessing credibility are not yet available. Furthermore, the overall
impact of differences in source credibility on the decision maker’s posterior beliefs is unclear.
Case Study: Breast Implants
1. There clearly are differences in the quality of information that is available in most situations. Science
teaches us to beware of inferences based on small samples, yet anecdotes can be used to paint compelling
scenarios. Are judges prepared to make judgments regarding the quality of information presented as
“scientific”? How can a judge, not trained in the science himself, be expected to make reasonable
judgments in this respect? And if a judge is ill prepared, what about jurors?
2. The questions asked in the last paragraph of the quoted passage clearly relate primarily to preferences. In
a democratic, capitalistic society, we generally assume that individual consumers should get to make their
own decisions, based on their own preferences. In this case, however, the issue of preference deals with
how much risk is acceptable. And that question presumes that the decision maker knows what the risk is.
The level of risk, however, is a matter of uncertainty (“facts,” in contrast to “values”), and it takes experts
to measure that risk. In situations where individual consumers cannot realistically be expected to
understand fully the risks, we often expect the government to step in to regulate the consumer risk. It is not
so much a matter of protecting the consumer from himself as it is being sure that the risks are appropriately
measured, the information disseminated to the consumers, and, where appropriate, standards set.
Case Study: The Space Shuttle Challenger
1. With little or no information, does one refrain from launching the spacecraft on the grounds that no proof
exists that the launch would be safe, or does one launch on the grounds that there is no proof that doing so
is unsafe? Since the Challenger accident, NASA has implemented a system whereby the policy is clearly
stated: Do not launch if there are doubts as to the safety.
2. These subjective estimates made by different people are based on different information and different
perspectives, and are used for different purposes. It is important for a decision maker to look beyond the
biases, try to judge the “credibility” of the judgments, and take these into account in developing his or her
own probabilities or beliefs. The same caveats as in question 3 of the Cancer Risk case apply here,
however. That is, even though it seems appropriate to weight more credible sources more heavily, neither
precise methods nor an understanding of the impact of doing so exist at this time.
3. The overall effect of slight optimism in making each individual assessment would be a very
overoptimistic probability of failure. That is, P(failure) would wind up being much lower than it should be.
4. Reichhardt’s editorial raises the question of what is an “acceptable risk.” How should society determine
what an acceptable risk would be? How society should choose an acceptable risk level for enterprises such
as nuclear power generation, genetic research, and so on, has been a hotly debated topic. Furthermore,
different people are willing to accept different levels of risk. For example, an astronaut, thrilled with the
prospect of actually being in space, may be more willing to accept a high level of risk than a NASA
administrator who may be subject to social and political repercussions in the event of an accident. Because
of the diversity of preferences, there is no obvious way to determine a single level of acceptable risk that
would be satisfactory for everyone.
Further reading: Fischhoff, B., S. Lichtenstein, P. Slovic, S. Derby, & R. Keeney (1981) Acceptable Risk.
Cambridge: Cambridge University Press.
****************
Chapter 8 Online Supplement: Solutions to Questions and Problems
S8.1. a. For each project, the investor appears to believe that
E(profit) = 0.5 (150,000) + 0.5 (-100,000)
= 75,000 - 50,000
= 25,000
b. However, since only one of the projects will succeed, he will gain $150,000 for the successful project,
but lose $100,000 for each of the other two. Thus, he is guaranteed to lose $50,000 no matter what happens.
c. For a set of mutually exclusive and collectively exhaustive outcomes, he appears to have assessed
probabilities that add up to 1.5.
d. “Knowing nothing” does not necessarily imply a probability of 0.5. In this case, “knowing nothing” is
really not appropriate, because the investor does know something: Only one project will succeed. If, on top
of that, he wants to invoke equally likely outcomes, then he should use P(Success) = 1/3 for each project.
Note that it is also possible to work parts a and b in terms of final wealth. Assume that he starts with
$300,000. For part a, expected final wealth, given that he invests in one project, would be:
E(wealth) = 0.5 (450,000) + 0.5 (200,000)
= 325,000.
Because $325,000 is an improvement on the initial wealth, the project looks good. For part b, though, if he
starts with $300,000 and invests in all three projects, he will end up with only $250,000 for the project that
succeeds. As before, he is guaranteed to lose $50,000.
S8.2. a. If he will accept Bet 1, then it must have non-negative expected value:
P(Cubs win) ($20) + [1 - P(Cubs win)] (-$30) ≥ 0.
Increasing the “win” amount to something more than $20, or decreasing the amount he must pay if he loses
($30) will increase the EMV of the bet. However, reducing the “win” amount or increasing the “lose”
amount may result in a negative EMV, in which case he would not bet. The same argument holds true for
Bet 2.
b. Because he is willing to accept Bet 1, we know that
P(Cubs win) ($20) + [1 - P(Cubs win)] (-$30) ≥ 0,
which can be reduced algebraically to
P(Cubs win) ≥ 0.60.
Likewise, for Bet 2, we know that
P(Cubs win) (-20) + [1 - P(Cubs win)] ($40) ≥ 0.
This can be reduced to P(Cubs win) ≤ 0.67. Thus, we have 0.60 ≤ P(Cubs win) ≤ 0.67.
c. Set up a pair of bets using the strategy from Chapter 8. From Bet 1 we infer that P(Cubs win) = 0.60, and
from Bet 2 P(Yankees win) = 0.33. Use these to make up Bets A and B:
A: He wins 0.4 X if Cubs win
He loses 0.6 X if Yankees win
B: He wins 0.67 Y if the Yankees win
He loses 0.33 Y if the Cubs win
We can easily verify that the EMV of each bet is equal to 0:
EMV(A) = P(Cubs win) (0.4 X) + [1 - P(Cubs win)] (- 0.6 X)
= 0.6 (0.4 X) - 0.4 (0.6 X)
=0
EMV(B) = P(Yankees win) (0.67 Y) + [1- P(Yankees win)] (-0.33 Y)
= 0.33 (0.67 Y) - 0.67 (0.33 Y)
=0
If the Cubs win, his position is:
0.4 X - 0.33 Y = W
If the Yankees win:
-0.6 X + 0.67 Y = Z
Following the strategy in the book, set W = Z = -$100 to be sure that he pays us $100 net, regardless of
what happens:
0.4 X - 0.33 Y = -$100
-0.6 X + 0.67 Y = -$100
Now solve these two equations for X and Y to obtain X = Y = -$1500. Thus, the original bets A and B
become:
A: He wins -$600 if Cubs win
He loses -$900 if Yankees win
B: He wins -$1000 if the Yankees win
He loses -$500 if the Cubs win
The minus sign means that he is taking the “other side” of the bet, though (i.e. winning -$600 is the same as
losing $600). Thus, these two bets really are:
A: He loses $600 if Cubs win
He wins $900 if Yankees win
B: He loses $1000 if the Yankees win
He wins $500 if the Cubs win
Finally, compare these bets to Bets 1 and 2 in the book. He said he would bet on the Cubs at odds of 3:2 or
better, but we have him betting on the Cubs (in Bet B) at odds of 2:1, which is worse. (That is, he has to put
up 2 dollars to win 1, rather than 1.5 to win 1.) The same problem exists with bet A: he is betting on the
Yankees at odds of 2:3, which is worse than 1:2. As a result, he will not accept either of these bets!
The reason for this result is that the solutions for X and Y are negative. In fact, it is possible to show
algebraically that if W and Z are both negative, then X and Y will both be negative. This has the effect of
reversing the bets in such a way that your friend will accept neither. The conclusion is that, even though his
probabilities appear incoherent, you cannot set up a Dutch book against him.
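The two "sure loss" equations can be solved exactly. The sketch below uses exact fractions, with P(Cubs win) = 3/5 and P(Yankees win) = 1/3 taken from Bets 1 and 2 as in the solution above, and confirms X = Y = −$1,500.

```python
# Solving the two equations in S8.2c for the stakes X and Y by Cramer's rule.
# With exact fractions the system is
#   (2/5) X - (1/3) Y = -100      (if the Cubs win)
#  -(3/5) X + (2/3) Y = -100      (if the Yankees win)
from fractions import Fraction as F

a, b, w = F(2, 5), F(-1, 3), F(-100)      # first equation: a*X + b*Y = w
c, d, z = F(-3, 5), F(2, 3), F(-100)      # second equation: c*X + d*Y = z
det = a * d - b * c
x = (w * d - b * z) / det
y = (a * z - w * c) / det
print(x, y)                                # both -1500, as stated above
```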
CHAPTER 9
Theoretical Probability Models
Notes
Chapter 9 is a straightforward treatment of probability modeling using six standard distributions. The six
distributions (binomial, Poisson, exponential, normal, triangular, and beta) were chosen because they cover
a variety of different probability-modeling situations. To calculate probabilities, students can use Excel,
@RISK, or online probability calculators.
In addition to the six distributions treated in the main part of the chapter, the uniform distribution is
developed in problems 9.27 - 9.29 and the lognormal distribution in problem 9.37 and the Municipal Solid
Waste case study. Depending on the nature of the course and the level of the students, instructors may wish
to introduce other distributions.
One note of caution: Chapter 9 provides an introduction to probability distributions that are used in Chapter
10 (fitting model parameters and natural conjugate priors) as well as Chapter 11 (creating random variates
in a simulation). In particular, if the course is intended to move on to Chapter 11, it is important to expose
students to the uniform distribution, which is fundamental to the simulation process, and some of the other
distributions as well.
Although @RISK can run a full simulation, in this chapter, we use @RISK only to view a distribution’s
graph and to calculate probability values from the distribution. Step-by-step instructions for @RISK are
provided in the chapter. The Define Distribution button is used to choose a distribution, and the desired
probabilities can be determined by sliding the left and right delimiters to the appropriate values. The values
can also be typed into the text boxes above the delimiters, and probabilities can be typed directly into the sections of the probability bar above the graph.
Topical cross-reference for problems
Bayes’ theorem: 9.17, 9.21, 9.35
Beta distribution: 9.4, 9.5, 9.6, 9.15, 9.24
Binomial distribution: 9.1, 9.5, 9.16, 9.18, 9.19, 9.29, 9.30, 9.32, 9.36, Overbooking
Central limit theorem: 9.37, Municipal Solid Waste
Empirical rule: 9.11
Exponential distribution: 9.5, 9.6, 9.7, 9.12, 9.14, 9.15, Earthquake Prediction
Linear transformations: 9.14, 9.22, 9.25, 9.26
Lognormal distribution: 9.37, Municipal Solid Waste
Memoryless property: 9.12, 9.13, 9.28
Normal distribution: 9.2, 9.5 - 9.7, 9.11, 9.25, 9.26, 9.31, 9.32, Municipal Solid Waste
Pascal sampling: 9.18
Poisson distribution: 9.3, 9.5, 9.8, 9.9, 9.13, 9.15, 9.19, 9.20, 9.33, 9.34, Earthquake Prediction
PrecisionTree: 9.36
Requisite models: 9.23
@RISK: 9.1 - 9.9, 9.11 - 9.16, 9.18, 9.19, 9.21, 9.24, 9.30, 9.35, 9.36
Sensitivity analysis: 9.17, 9.24, Earthquake Prediction
Uniform distribution: 9.27 - 9.29
Solutions
9.1. Find P(Gone 6 or more weekends out of 12)
= PB(R ≥ 6 | n = 12, p = 0.65)
= PB(R' ≤ 6 | n = 12, p = 0.35) = 0.915.
Being gone 6 or more weekends out of 12 is the same as staying home on 6 or fewer.
Using @RISK: select the binomial as the distribution type, 12 for the n parameter, and 0.65 for the p
parameter. Move the left delimiter to 5.5, and the right delimiter to 12. The probability bar then shows the
desired probability: 91.54%. See “Problem 9.1.xlsx.”
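For classes that prefer a scripting check to @RISK, the same binomial probability can be computed with scipy (this is an alternative to, not part of, the DecisionTools instructions above):

```python
# Problem 9.1 without @RISK: the same binomial probability via scipy.stats.
from scipy.stats import binom

p_gone_6_or_more = binom.sf(5, 12, 0.65)       # P(R >= 6) = P(R > 5) with p = 0.65
p_home_6_or_fewer = binom.cdf(6, 12, 0.35)     # equivalent complement form with p = 0.35
print(round(p_gone_6_or_more, 3), round(p_home_6_or_fewer, 3))   # both about 0.915
```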
9.2. P(Loss) = PN(X < 0 | µ = 2000, σ = 1500) = P(Z < (0 − 2000)/1500) = P(Z < −1.33) = 0.0918.
P(Gain greater than 4000) = PN(X > 4000 | µ = 2000, σ = 1500) = P(Z > (4000 − 2000)/1500)
= P(Z > 1.33) = 1 − P(Z ≤ 1.33) = 1 − 0.9082 = 0.0918.
Note that P(Z ≤ -1.33) = P(Z ≥ 1.33) because of symmetry of the normal distribution.
Using @RISK: select the normal distribution, 2000 for the µ parameter, and 1500 for the σ parameter. To
determine the probability of a loss, move the left delimiter to 0. The left-hand side of the probability bar
then shows the desired probability: 9.1%. To determine the probability that the return will be greater than
4000, move the right delimiter to 4000, and the right-hand side of the probability bar shows the desired probability: 9.1%. See “Problem 9.2.xlsx.”
9.3. P(No chocolate chips) = PP(X = 0 | m = 3.6) = 0.027
P(Fewer than 5 chocolate chips) = PP(X < 5 | m = 3.6)
= PP(X ≤ 4 | m = 3.6) = 0.706
P(More than 10 chocolate chips) = PP(X > 10 | m = 3.6)
= 1 - PP(X ≤ 10 | m = 3.6) = 1 - 0.999 = 0.001.
Using @RISK: select the Poisson distribution and 3.6 for 𝑚 = 𝜆. To determine the probability of no
chocolate chips, move the left delimiter to 0.5. The probability bar then shows the probability of 2.7%. To
determine the probability of fewer than 5 chips, move the left delimiter to 4.5. The probability of more than 10 can be found by moving the right delimiter to 10.5; the left-hand side of the bar then shows 99.9%, so the probability of more than 10 chips is the remaining 0.1%. See “Problem 9.3.xlsx.”
9.4. P(Net Contribution > $600000) = P(1,680,000Q – 300000 > $600000) =
𝑃𝐵 (𝑄 > 0.536|𝛼1 = 1.1, 𝛼2 = 3.48) = 8.0%.
P(Net Contribution < $100000) = P(1,680,000Q – 300000 < $100000) =
𝑃𝐵 (𝑄 < 0.238|𝛼1 = 1.1, 𝛼2 = 3.48) = 57.3%.
Using @RISK: Select the beta distribution, 1.1 for the 𝛼1 parameter, and 3.48 for the 𝛼2 parameter. To
determine the probability of net contribution being greater than $600,000, move the right delimiter to
0.536. The right-hand side of the probability bar then shows the probability is 8.0%. To determine the
probability of net contribution being less than $100,000, move the left delimiter to 0.238. The left-hand side of the probability bar then shows the probability is 57.3%. See “Problem 9.4.xlsx.”
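A scipy-based check of the same beta-tail calculations (again, an alternative to the @RISK steps above; the dollar-to-Q conversion follows the solution):

```python
# Problem 9.4 as a scipy check: translate the dollar thresholds into values of
# the market-share fraction Q and evaluate the Beta(1.1, 3.48) tail areas.
from scipy.stats import beta

a1, a2 = 1.1, 3.48
q_hi = (600_000 + 300_000) / 1_680_000     # about 0.536
q_lo = (100_000 + 300_000) / 1_680_000     # about 0.238
print(round(beta.sf(q_hi, a1, a2), 3))     # P(net contribution > $600,000), about 0.08
print(round(beta.cdf(q_lo, a1, a2), 3))    # P(net contribution < $100,000), about 0.57
```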
9.5. This problem requires significant trial and error with a dose of calculus if you are going to solve it
algebraically. @RISK makes the problem easier to solve and the solution can be found in “Problem
9.5.xlsx.” To use @RISK, first enter the parameter values for the desired theoretical distribution. For
example, to solve Part a, pull up the binomial distribution and enter 12 for n and 0.85 for p. Next, enter the
desired probability value into the probability bar of the Define Distribution window of @RISK. Note that
you can actually type in the value and that the values are stated in percentages. Thus, to finish solving Part
a, type 2.4 (for 2.4%) into the leftmost section of the probability bar as shown in the figure below. We
choose the left hand side of the probability bar because Part a is asking for the area to the left of r:
𝑃𝐵 (𝑅 ≤ 𝑟 |𝑛 = 12, 𝑝 = .85) = 0.024. The answer is the value above the appropriate delimiter, the left
delimiter in this case.
See “Problem 9.5.xlsx” for the answers to the remaining parts. You need to be careful when working with
discrete distributions because probability values do not change until an outcome occurs. Thus, for Part a, r
could be any value from 7 (inclusive) to 8 (exclusive).
(Figure: the @RISK Define Distribution window for Part a, with 2.4 typed into the left section of the probability bar.)
9.6. This problem requires @RISK. First, using the alternative-parameter method, enter the percentiles given for each part. Note that for Part b, the 25th percentile should be 25 and not 125. After defining the distribution using the percentiles, you can determine the parameter values by undoing the alternative-parameter method. After unchecking the Alternative Parameters checkbox and clicking OK, the standard
parameter values are reported. See “Problem 9.6.xlsx.”
Warning: Do not use cell references for the stated percentiles in this problem. The alternative-parameter
method will not report the parameter values if the percentiles are cell locations.
The beta distribution does not allow the alternative-parameter method, but the betageneral does. By setting
the minimum to zero and the maximum to one, the betageneral is the same as a beta distribution.
9.7. If PE(T ≥ 5 | m) = 0.24, then ∫₅^∞ m e^(−mt) dt = e^(−5m) = 0.24. Take logs of both sides:
ln[e^(−5m)] = ln(0.24)
−5m = −1.4271
m = 0.2854.
Alternatively, the file “Problem 9.7.xlsx” contains the @RISK solution where the alternative-parameter
method is used. Remember that @RISK uses 𝛽 for the exponential parameter and we used 𝑚 in the text.
The relationship is 𝑚 = 1⁄𝛽.
9.8. One can either use tables of Poisson values to find m or use Excel’s Goal Seek feature. The Excel file
“Problem 9.8.xlsx” contains the solution. Note that the problem reports two conditions that are desired, but
the Poisson has only one parameter. Therefore, you need only use one of the conditions to solve for m, then
check to make sure the other condition also holds. Because the Poisson has no upper limit for x, it is much
easier to use: 𝑃(𝑋 ≤ 2|𝑚) = 0.095. Solution m = 5.394.
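If Excel's Goal Seek is not available, a numerical root finder does the same job. The sketch below assumes scipy and solves P(X ≤ 2 | m) = 0.095 for m:

```python
# Problem 9.8 without Goal Seek: solve P(X <= 2 | m) = 0.095 for m numerically.
from scipy.stats import poisson
from scipy.optimize import brentq

# poisson.cdf(2, m) decreases in m, so bracket the root between 0.1 and 20.
m = brentq(lambda mu: poisson.cdf(2, mu) - 0.095, 0.1, 20)
print(round(m, 3))       # about 5.394, matching the solution above
```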
9.9. If PP(X = 0 | m) = 0.175, then
(e^(−m) m⁰)/0! = 0.175
e^(−m) = 0.175 because m⁰ = 0! = 1
m = −ln(0.175) = 1.743.
Alternatively, Excel’s Goal Seek can be used as explained in Problem 9.8 above and in the solution file
“Problem 9.9.xlsx.”
9.10. Having more information rather than less is always better. Here we are assuming that the quality of
the information remains constant. If we think about gathering data one value at a time, then we can see that
as data are gathered, each successive data point helps refine the choice of distribution. If the data were
completely accurate (no biases, no incorrectly calibrated values, etc.), then eventually a unique distribution
will be identified. Additional data values will not disconfirm the distribution. That is, each additional data
value will match the distribution. If, however, the data have inaccuracies, then there may not be any one
theoretical distribution that matches all the data.
Even in cases where additional data show that there is no one theoretical distribution matching all the data,
having that data can be helpful. For example, it tells us that either the underlying uncertainty does not
match a theoretical distribution and/or the data are not completely accurate. Suppose you felt a normal
distribution was a good match to the uncertainty in question. The first two data values then uniquely define a
normal. The third value may or may not confirm the choice. If it doesn’t, then this tells us valuable
information. The more data we gather, the better our estimates of the mean and standard deviation.
9.11. PN(µ − σ < Y ≤ µ + σ | µ, σ) = PN(Y ≤ µ + σ | µ, σ) − PN(Y ≤ µ − σ | µ, σ)
= P(Z ≤ [(µ + σ) − µ]/σ) − P(Z ≤ [(µ − σ) − µ]/σ)
= P(Z ≤ 1) − P(Z ≤ −1) = 0.8413 − 0.1587 = 0.6826.
PN(µ − 2σ < Y ≤ µ + 2σ | µ, σ) = PN(Y ≤ µ + 2σ | µ, σ) − PN(Y ≤ µ − 2σ | µ, σ)
= P(Z ≤ [(µ + 2σ) − µ]/σ) − P(Z ≤ [(µ − 2σ) − µ]/σ)
= P(Z ≤ 2) − P(Z ≤ −2) = 0.9772 − 0.0228 = 0.9544.
Using @RISK, select the normal distribution, enter 0 for µ and 1 for σ. Moving the left and right delimiters
to -1 and 1 respectively shows the probability of 68.3%. Similarly, setting the delimiters at -2 and 2 shows
a probability value of 95.4%. See “Problem 9.11.xlsx.”
9.12. If the mean is 10 days, then m = 1/10 = 0.1.
a.
PE(T ≤ 1 | m = 0.1) = 1 - e-1(0.1) = 0.0952.
Using @RISK, select the exponential distribution and enter 10 for the β parameter. Sliding the left
delimiter to 1 shows the desired probability: 9.5%.
b. PE(T ≤ 6 | m = 0.1) = 1 - e-6(0.1) = 0.4512.
Move the left delimiter to 6 and the probability is 45.1%.
c.
PE(6 ≤ T ≤ 7 | m = 0.1) = e-6(0.1) - e-7(0.1) = 0.0522.
Set the left delimiter to 6 and the right to 7. The probability in the middle of the bar is 5.2%. If you cannot
read the probability from the bar, you can from the Statistics along the right in the “Dif. P” row.
d. P(T ≤ 7 | T ≥ 6, m = 0.1) = P(T ≤ 7 and T ≥ 6 | m = 0.1) / P(T ≥ 6 | m = 0.1)
= PE(6 ≤ T ≤ 7 | m = 0.1) / PE(T ≥ 6 | m = 0.1)
= 0.0522 / [1 − PE(T ≤ 6 | m = 0.1)]
= 0.0522 / 0.5488 = 0.0952.
This problem demonstrates the “memoryless” property of the exponential distribution; the probability of
time T lasting another day is the same no matter how much time has already elapsed. See also problem
9.28. The solution is in “Problem 9.12.xlsx.”
9.13. Because of independence among arrivals, the probability distribution for arrivals over the next 15
minutes is independent of how many arrived previously. Thus, for both questions,
PP(X = 1 in 15 minutes | m = 6 per hour)
= PP(X = 1 in 15 minutes | m = 1.5 per 15 minutes)
= 0.335
Using @RISK, select the Poisson distribution and enter 1.5 for the λ parameter. Set the left delimiter to 0.5
and the right to 1.5. The middle of the bar shows the desired probability: 33.5%. See “Problem 9.13.xlsx.”
9.14. a. E(TA) = 5 years and E(TB) = 10 years, choose B.
b. P(TA ≥ 5 | m = 0.2) = e-5(0.2) = 0.368
P(TB ≥ 10 | m = 0.1) = e-10(0.1) = 0.368.
For exponential random variables, the probability is 0.368 that the random variable will exceed its expected
value. Using @RISK, select the exponential distribution and enter 𝛽 = 1/𝑚 = 1/.2 = 5. Repeat for the
second exponential, this time setting 𝛽 = 10. See file “Problem 9.14.xlsx.”
c. i. Average lifetime T̄ = 0.5 TA + 0.5 TB.
E(T̄) = 0.5 (5) + 0.5 (10) = 7.5.
Var(T̄) = 0.5² (25) + 0.5² (100) = 31.25.
ii. Difference ∆T = TB − TA.
E(∆T) = E(TB) − E(TA) = 10 − 5 = 5.
Var(∆T) = 1² Var(TB) + (−1)² Var(TA) = 100 + 25 = 125.
9.15. a. PE(T ≥ 2 hours | m = 8.5 cars per 10 hours)
= PE(T ≥ 2 hours | m = 0.85 cars per hour)
= e-2(0.85) = 0.183.
Using @RISK, select the exponential distribution and enter 𝛽 = 1/𝑚 = 1/.85 = 1.176. Moving the right
delimiter to 2 gives the probability of 18.3%. See file “Problem 9.15.xlsx.”
b. PP(X = 0 in 2 hours | m = 8.5 cars per 10 hours)
= PP(X = 0 in 2 hours | m = 1.70 cars per 2 hours)
= 0.183.
Using @RISK, select the Poisson distribution and enter 𝜆 = 8.5/5 = 1.70. Moving the left delimiter to 0
gives the probability of 18.3%. See file “Problem 9.15.xlsx.”
This probability must be the same as the answer to part a because the outcome is the same. The first sale
happening after 2 hours pass (T ≥ 2) is the same as no sales (X = 0) in the first two hours.
c. E(Bonus) = $200 PP(X = 13 | m = 8.5) + $300 PP(X = 14 | m = 8.5)
+ $500 PP(X = 15 | m = 8.5) + $700 PP(X ≥ 16 | m = 8.5)
= $200 (0.0395) + $300 (0.024) + $500 (0.0145) + $700 (0.0138)
= $32.01
@RISK can be used to calculate the probabilities reported above, and then inserted into the expected bonus
formula. See file “Problem 9.15.xlsx.”
d. P(Pay $200 bonus) = PP(X = 13 | m = 8.5) = 0.0395. But this is the probability for any given day. There
are 8 such days, for each of which P(Pay $200 bonus) = 0.0395. Now we have a binomial problem. Find the
probability of 2 out of 8 “successes” when P(success) = 0.0395:
PB(R = 2 | n = 8, p = 0.0395) = 0.033.
Using @RISK, select the binomial distribution and n = 8 and p = 0.0395. Moving the left delimiter to 1.5
and the right delimiter to 2.5 gives the probability of 3.4%. See file “Problem 9.15.xlsx.”
9.16. a. Assuming independence from one questionnaire item to the next, we can calculate the probability
using the binomial distribution: PB(R ≤ 2 | n = 10, p = 0.90). Using Excel’s BinomDist() function, we can
calculate BinomDist(2, 10, 0.90, TRUE) to obtain 0.000000374.
Using @RISK, select the binomial distribution and n = 10 and p = 0.90. Moving the left delimiter to 1.5
and the right delimiter to 2.5 gives a probability of 0.0%. See file “Problem 9.16.xlsx.”
b. Why might the occurrences not be independent? Imagine the following: You have just learned that the
first five answers fell outside of the ranges that you specified. Would you want to modify the ranges of the
remaining five? Most of us would. And that would mean that, given data regarding hits or misses on some,
you might change the probability of a hit or miss on the others. Thus, the data — whether each item is a hit
or a miss — may not be independent, but only because of uncertainty about the level of calibration of the
probability assessor! For a more complete discussion, see Harrison, J.M. (1977), “Independence and
Calibration in Decision Analysis,” Management Science, 24, 320-328.
9.17. a. A hit is defined as market share = p = 0.3. A flop is market share = p = 0.1. Given n = 20 and x = 4,
we can find, for example,
P(x = 4 | n = 20, p = 0.3) = PB(R = 4 | n = 20, p = 0.3) = 0.130
and
P(x = 4 | n = 20, p = 0.1) = PB(R = 4 | n = 20, p = 0.1) = 0.090.
Thus,
P(Hit | X = 4) = P(X = 4 | Hit) P(Hit) / [P(X = 4 | Hit) P(Hit) + P(X = 4 | Flop) P(Flop)]
= 0.13 (0.2) / [0.13 (0.2) + 0.09 (0.8)] = 0.265.
b. For P(Hit) = 0.4,
P(Hit | X = 4) = 0.13 (0.4) / [0.13 (0.4) + 0.09 (0.6)] = 0.491.
c. For P(Hit) = 0.5,
P(Hit | X = 4) = 0.13 (0.5) / [0.13 (0.5) + 0.09 (0.5)] = 0.591.
d. For P(Hit) = 0.75,
P(Hit | X = 4) = 0.13 (0.75) / [0.13 (0.75) + 0.09 (0.25)] = 0.813.
e. For P(Hit) = 0.9,
P(Hit | X = 4) = 0.13 (0.9) / [0.13 (0.9) + 0.09 (0.1)] = 0.929.
f. For P(Hit) = 1.0,
P(Hit | X = 4) = 0.13 (1.0) / [0.13 (1.0) + 0.09 (0)] = 1.0.
(Figure: the posterior probability P(Hit | X = 4) plotted against the prior probability P(Hit), both running from 0 to 1; the curve bows slightly above the 45° line.)
The very slight bow relative to the 45° line indicates that 4 favorable responses out of 20 people lead to a
slight but undramatic revision of the prior probability.
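The whole prior-to-posterior curve can be generated with a few lines of code, which is convenient for producing the plot described above. This sketch uses scipy for the binomial likelihoods; the specific prior values are those from parts a through f.

```python
# Problem 9.17: posterior P(Hit | X = 4) as a function of the prior P(Hit),
# using the two binomial likelihoods computed above (about 0.13 for a hit,
# about 0.09 for a flop).
from scipy.stats import binom

like_hit = binom.pmf(4, 20, 0.3)     # P(X = 4 | market share 0.3)
like_flop = binom.pmf(4, 20, 0.1)    # P(X = 4 | market share 0.1)

def posterior(prior_hit):
    return like_hit * prior_hit / (like_hit * prior_hit + like_flop * (1 - prior_hit))

for p in (0.2, 0.4, 0.5, 0.75, 0.9, 1.0):
    print(p, round(posterior(p), 3))   # reproduces parts a through f above
```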
9.18. a. PB(R > 15 | n = 20, p = 0.6) = PB(R' = 20 - R < 5 | n = 20, p = 0.4)
= PB(R' ≤ 4 | n = 20, p = 0.4) = 0.051.
To use @RISK, select the binomial distribution with parameters n = 20 and p = 0.60. Set the right
delimiter to 15.5. The right-hand side of the bar shows the desired probability of 5.1%. See “Problem
9.18.xlsx.”
b. P(First 12 in favor) = (0.6)¹² = 0.0022
To use @RISK, select the binomial distribution with parameters n = 12 and p = 0.60. Move the right
delimiter to 11.5, and the right-hand side shows a probability of 0.2%. See “Problem 9.18.xlsx.”
P(Stop after 13th) = P(11 out of 12 in favor and 13th in favor)
= PB(R = 11 | n = 12, p = 0.6) × 0.6
= PB(R' = 12 - R = 1 | n = 12, p = 0.4) × 0.6
= 0.017 (0.6) = 0.0102.
To use @RISK, select the binomial distribution with parameters n = 12 and p = 0.60. Move the left
delimiter to 10.5 and the right to 11.5. The probability of R = 11 is shown in the middle of the bar or to the
right in the “Dif. P” row of the statistics table. See “Problem 9.18.xlsx.”
P(Stop after 18th) = P(11 out of 17 in favor and 18th in favor)
= PB(R = 11 | n = 17, p = 0.6) × 0.6
= PB(R' = 17 - R = 6 | n = 17, p = 0.4) × 0.6
= 0.184 (0.6) = 0.110.
To use @RISK, select the binomial distribution with parameters n = 17 and p = 0.60. Move the left
delimiter to 10.5 and the right to 11.5. The probability of R = 11 is shown in the middle of the bar or to the
right in the “Dif. P” row of the statistics table. See “Problem 9.18.xlsx.”
9.19. a. PP(X > 2 | m = 1.1) = 1 - PP(X ≤ 2 | m = 1.1) = 1.00 - 0.90 = 0.1.
To use @RISK, select the Poisson distribution with the parameter λ = 1.1. Set the right delimiter to 2.5 and
the desired probability value is shown in the right-hand side of the bar to be 10%.See “Problem 9.19.xlsx.”
b. Now we have to obtain 18 or more that conform out of 20, given that the probability of conformance
equals 0.90 and the probability that a box is rectified is 0.1. This is a binomial model:
P(18 or more pass) = P(2 or fewer are rectified)
= PB(R ≤ 2 | n = 20, p = 0.1) = 0.677.
To use @RISK, select the binomial distribution with parameters n = 20 and p = 10%. Set the delimiter to
2.5, and the desired probability of 67.7% is shown in the left-hand side. See “Problem 9.19.xlsx.”
c. Let X be the random number of cases out of 20 that are rectified. Then X cases will have no
nonconforming bottles. Also, (20 - X) may have up to 11 non-conforming bottles; we know that at least one
bottle in each case is OK. Given X cases are rectified, the expected number of nonconforming bottles out of
20 × 11 bottles is
0 (X) + 11 (0.1) (20 - X) = 1.1 (20 - X).
The expected number of cases rectified is np = 20(0.1) = 2. Therefore E(X) = 2, and
E(# of nonconforming bottles) = 1.1 (20 - 2) = 1.1 (18) = 19.8.
9.20. The Poisson process is often used to model situations involving arrivals, and here we are using it to
model the arrivals of the Definitely-Purchase responses. The issue here is whether the expected arrival rate
(m) is constant over time. This rate may vary during the day. For example, people who arrive early to the
conference might have a different acceptance rate than those arriving at day’s end. It might be possible to
break the day into a few different periods, each with its own rate, and use a separate Poisson process for
each period.
(Decision tree for Problem 9.21a. New machines: the breakdown rate is m = 3 per month with probability 0.5 or m = 1.5 per month with probability 0.5, at $1,700 per breakdown, so E(Cost) = $3,825. Old machines: Poisson breakdowns with m = 2.5 per month at $1,500 per breakdown, so E(Cost) = $3,750.)
9.21. a. The older machines appear to have the edge. They have both lower expected cost and less
uncertainty. See Excel file “Problem 9.21.xlsx.”
b. Find P(m = 1.5 per month | X = 6 in 3 months) = P(m = 4.5 per 3 months | X = 6 in 3 months).
Using Bayes' theorem,
P(m = 4.5 per 3 months | X = 6 in 3 months)
= PP(X = 6 | m = 4.5) P(m = 4.5) / [PP(X = 6 | m = 4.5) P(m = 4.5) + PP(X = 6 | m = 9) P(m = 9)]
= 0.128 (0.50) / [0.128 (0.50) + 0.091 (0.50)] = 0.5845.
c. Now the expected cost of the new machines would be
E(Cost) = 0.5845 (1.5 x $1700) + 0.4155 (3.0 x $1700) = $3,610.
This is less than $3,750, so the new information suggests that the new machines would be the better choice.
d. The source of the information often provides us with some indication of credibility or biasedness. We
might discount information from the distributor because of incentives to sell the product. A trade magazine
report might be viewed as more credible or less biased. Does this mean Bayes’ theorem is inappropriate?
Not entirely, but very simple applications that do not account for bias or differential credibility may be.
One straightforward way to incorporate the idea of biasedness or credibility would be to think in terms of
“equivalent independent information.” For example, a positive report from a source you think has a positive
bias might be adjusted downward. If the distributor claimed “6 out of 10” preferred the product, say, this
might be adjusted down to 5 of 10 or 4 of 10. In other cases, it might be judged that the information
presented is to some extent redundant with information already obtained, in which case the reported sample
size might be adjusted downward. Thus “6 out of 10” from a redundant sample might be judged to be
equivalent to an independent test in which the results were “3 of 5.” The “adjusted” or “equivalent
independent” sample results could then be used in Bayes’ theorem.
9.22. This is a straightforward application of the formula on page 284: if 𝑌 = 𝑎 + 𝑏𝑋, then 𝐸(𝑌) = 𝑎 +
𝑏𝐸(𝑋).
9.23. All we need to do is to create a model or representation of the uncertainty that adequately captures the
essential features of that uncertainty. A requisite model of uncertainty will not always be a perfect model,
but the aspects that are important will be well modeled. For example, if a normal distribution is used to
represent the weights of lab animals, the mean may be 10 grams with standard deviation 0.3 grams. In fact,
all of the animals may weigh between 9 and 11 grams. According to the normal distribution, 99.92% of the
animals should fall in this range.
9.24. a. The assessments give P(Q < 0.08) = P(Q > 0.22) = 0.10 and P(Q < 0.14) = 0.50. Therefore,
P(0.08 < Q < 0.14) = P(Q < 0.14) - P(Q < 0.08) = 0.40.
Likewise,
P(0.14 < Q < 0.22) = P(Q < 0.22) - P(Q < 0.14) = 0.40.
b. Answer: 𝛼1 = 5.97 and 𝛼2 = 39.68. See “Problem 9.24.xlsx.”
Using the alternative-parameter method, we choose the betageneral distribution (as the beta does not allow
alternative parameters). The betageneral has 4 parameters and we have 3 assessments:
P(Q < 0.08) = 0.10, P(Q < 0.14) = 0.50, and P(Q < 0.22) = 0.90. We used these 3 assessments and set the
minimum equal to 0 for the fourth input. To find the parameter values for 𝛼1 and 𝛼2, undo the alternative-parameter method. The resulting distribution matches the assessments perfectly, but as Problem 9.23
indicates, it is not an exact representation. For example, the maximum is 110%, which is clearly impossible
for Q. Effectively, however, the maximum is 40% as there is a 99.9% chance that Q is less than 40%.
c. E(Q) = 14.6%. There is a 54% chance that market share will be less than 14.6% and a 57% chance that it will be less than 15%.
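An alternative to @RISK's alternative-parameter method is to fit the beta parameters with a small optimization. The sketch below does a least-squares fit of a standard beta on [0, 1] to the three assessed fractiles; because the @RISK fit above uses a betageneral whose maximum floats above 1, the fitted shape parameters here need not equal 5.97 and 39.68 exactly, though the implied distribution should be similar.

```python
# Least-squares fit of Beta(a1, a2) on [0, 1] to the assessed fractiles of
# Problem 9.24: P(Q < 0.08) = 0.10, P(Q < 0.14) = 0.50, P(Q < 0.22) = 0.90.
from scipy.stats import beta
from scipy.optimize import minimize

targets = [(0.08, 0.10), (0.14, 0.50), (0.22, 0.90)]   # (q, P(Q < q))

def misfit(params):
    a1, a2 = params
    return sum((beta.cdf(q, a1, a2) - p) ** 2 for q, p in targets)

res = minimize(misfit, x0=[2.0, 10.0], bounds=[(0.01, None), (0.01, None)])
a1, a2 = res.x
print(round(a1, 2), round(a2, 2))   # fitted shape parameters
print(round(a1 / (a1 + a2), 3))     # implied mean of the fitted beta
```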
9.25. a. No, because the second investment is substantially riskier as indicated by the higher standard
deviation.
b. It makes some sense. We need a single peak (mode) and a reasonably symmetric distribution for the
normal distribution to provide a good fit. Returns can be positive or negative, and we might expect
deviations around a central most-likely value to be evenly balanced between positive and negative
deviations.
c. P(R1 < 0%) = PN(R1 ≤ 0 | µ = 10, σ = 3) = P(Z ≤ (0 − 10)/3) = P(Z < −3.33) = 0.0004.
To use @RISK, select the normal distribution; enter 0.1 for µ and 0.03 for σ. Set the left delimiter to 0, and
the probability in the left-hand side is 0.04%. @RISK may not show the necessary 4 decimal places.
P(R2 < 0%) = PN(R2 ≤ 0 | µ = 20, σ = 12) = P(Z ≤ (0 − 20)/12) = P(Z < −1.67) = 0.0475.
To use @RISK, select the normal distribution; enter 0.2 for µ and 0.12 for σ. Set the left delimiter to 0, and
the probability shown is 4.8%.
P(R1 > 20%) = PN(R1 > 20 | µ = 10, σ = 3) = P(Z > (20 − 10)/3) = P(Z > 3.33) = 0.0004.
To use @RISK, select the normal distribution; enter 0.1 for µ and 0.03 for σ. Set the right delimiter to 0.20, and the probability in the right-hand side is 0.04%. @RISK may not show the necessary 4 decimal places.
P(R2 < 10%) = PN(R2 < 10 | µ = 20, σ = 12) = P(Z ≤ (10 − 20)/12) = P(Z < −0.83) = 0.2033.
To use @RISK, select the normal distribution; enter 0.2 for µ and 0.12 for σ. Set the left delimiter to 0.10,
and the probability in the left-hand side is 20.2%.
d. Find P(R1 > R2) = P(R1 − R2 > 0) = P(∆R > 0)
= PN(∆R > 0 | µ = 10 − 20, σ = √(3² + 12² − 0.5 (3)(12)))
= PN(∆R > 0 | µ = −10, σ = 11.62)
= P(Z > (0 − (−10))/11.62) = P(Z > 0.86) = 1 − P(Z < 0.86) = 0.1949.
Part d cannot be calculated using the Define Distribution window as we have done in the previous
problems. Using the simulation feature of @RISK would allow one to complete this problem, but we wait
until Chapter 11 for running simulations.
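For completeness, here is the kind of Monte Carlo check that @RISK would run for part d. The covariance of 9 (correlation 0.25) is not stated explicitly in the solution above; it is inferred from the standard deviation of 11.62 used there, so treat it as an assumption.

```python
# A small Monte Carlo check of Problem 9.25d.
import numpy as np

rng = np.random.default_rng(0)
mean = [10, 20]                       # expected returns, in percent
cov = [[3**2, 9], [9, 12**2]]         # covariance of 9 inferred from sigma = 11.62 above
r1, r2 = rng.multivariate_normal(mean, cov, size=1_000_000).T
print(round(np.mean(r1 > r2), 4))     # should be close to the 0.1949 computed above
```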
e. The probability distributions could be used as the basis for a model about the uncertainty regarding their
returns. This uncertainty can be included in a decision tree with appropriate portfolio alternatives.
9.26. Let X denote return (in percent), M = McDonalds, and S = US Steel. We have prior probability P(M)
= 0.80.
a.
P(6 < X < 18 | M) = PN(6 < X < 18 | µ = 14, σ = 4) = P((6 − 14)/4 < Z < (18 − 14)/4) = P(−2 < Z < 1) = 0.8185.
To use @RISK, select the normal distribution; enter 14 for µ and 4 for σ. Set the left delimiter to 6 and the
right 18. The desired probability is shown in the middle of the bar to be 81.9%.
P(6 < X < 18 | S) = PN(6 < X < 18 | µ = 12, σ = 3) = P((6 − 12)/3 < Z < (18 − 12)/3) = P(−2 < Z < 2) = 0.9544.
To use @RISK, select the normal distribution; enter 12 for µ and 3 for σ. Set the left delimiter to 6 and the
right 18. The desired probability is shown in the middle of the bar to be 95.5%.
b. P(6 < X < 18) = P(6 < X < 18 | M) P(M) + P(6 < X < 18 | S) P(S)
= 0.8185 (0.8) + 0.9544 (0.2) = 0.84568.
c. P(X > 12 | M) = PN(X > 12 | µ = 14, σ = 4) = P(Z > (12 − 14)/4) = P(Z > −0.5) = 0.6915.
To use @RISK, select the normal distribution; enter 14 for µ and 4 for σ. Set the right delimiter to 12. The desired probability is shown in the right-hand side of the bar to be 69.1%.
P(X > 12 | S) = PN(X > 12 | µ = 12, σ = 3) = P(Z > (12 − 12)/3) = P(Z > 0) = 0.5.
To use @RISK, select the normal distribution; enter 12 for µ and 3 for σ. Set the right delimiter to 12. The desired probability is shown in the right-hand side of the bar to be 50.0%.
P(M | X > 12) = P(X > 12 | M) P(M) / [P(X > 12 | M) P(M) + P(X > 12 | S) P(S)]
= 0.6915 (0.8) / [0.6915 (0.8) + 0.5 (0.2)] = 0.847.
d. E(Return) = 0.5 E(X | M) + 0.5 E(X | S) = 0.5 (14) + 0.5 (12) = 13%.
Var(Return) = 0.5² Var(X | M) + 0.5² Var(X | S) = 0.25 (4²) + 0.25 (3²) = 6.25.
9.27. a. This problem introduces the uniform probability distribution. The density function is
fU(x | a, b) = 1/(b − a) when a ≤ x ≤ b, and 0 otherwise.
(Figure: the density is a horizontal line at height 1/(b − a) between x = a and x = b, and zero elsewhere.)
The area under the density function is just the area of the rectangle: [1/(b − a)] (b − a) = 1.
b. PU(X < c | a, b) = (c − a)/(b − a). When a = 3 and b = 5, then
PU(X < 4.5 | a = 3, b = 5) = (4.5 − 3)/(5 − 3) = 1.5/2.0 = 0.75.
To use @RISK, select the uniform distribution; enter 3 for the minimum parameter and 5 for the maximum
parameter. Set the left delimiter to 4.5. The desired probability is shown in the left-hand side of the bar to
be 75.0%.
c. P(X < 4.3 | a = 3, b = 5) = (4.3 − 3)/(5 − 3) = 1.3/2.0 = 0.65.
Moving the left delimiter to 4.3 results in left-hand side of the bar showing a probability to be 65.0%.
P(0.25 < X < 0.75 | a = 0, b = 1) = (0.75 − 0.25)/(1 − 0) = 0.5.
To use @RISK, select the uniform distribution; enter 0 for the minimum parameter and 1 for the maximum
parameter. Set the left delimiter to 0.25 and the right delimiter to 0.75. The desired probability is shown in
the middle of the bar to be 50.0%.
P(X > 3.4 | a = 0, b = 10) = (10 − 3.4)/(10 − 0) = 0.66.
To use @RISK, select the uniform distribution; enter 0 for the minimum parameter and 10 for the
maximum parameter. Set the right delimiter to 3.4. The desired probability is shown in the right-hand side
of the bar to be 66.0%.
P(X < 0 | a = −1, b = 4) = (0 − (−1))/(4 − (−1)) = 1/5 = 0.2.
To use @RISK, select the uniform distribution; enter -1 for the minimum parameter and 4 for the
maximum parameter. Set the left delimiter to 0.0. The desired probability is shown in left-hand side of the
bar to be 20.0%.
d. (Figure: the CDF of the uniform distribution, P(X < x), rises linearly from 0 at x = a to 1 at x = b.)
e. E(X) = (a + b)/2 = (3 + 5)/2 = 4. Var(X) = (b − a)²/12 = (5 − 3)²/12 = 0.33.
9.28. a. (Figure: the uniform density on [0, 10.5], a horizontal line at height 1/10.5.)
P(S < 1) = height × width = (1/10.5)(1.0) = 0.0952.
To use @RISK, select the uniform distribution; enter 0 for the minimum parameter and 10.5 for the
maximum parameter. Set the left delimiter to 1.0. The desired probability is shown in left-hand side of the
bar to be 9.5%.
b. P(S < 6) = 6 × (1/10.5) = 0.5714.
With the setup in Part a, move the left delimiter to 6 and the left-hand side of the bar shows 57.1%.
c. P(6 < S < 7) = 1 × (1/10.5) = 0.0952.
With the setup in Part a, move the left delimiter to 6 and the right delimiter to 7. The middle of the bar shows 9.5%.
d. P(S ≤ 7 | S > 6) = P(6 < S < 7) / P(S > 6) = 0.0952 / (1 − 0.5714) = 0.2222.
Note that this result is different from the probability in part a. The uniform distribution does not have a
“memoryless” property like the exponential does (problem 9.12).
9.29. a. (Figure: the uniform density for shopping time t on the interval from 0 minutes to 1.5 hours, i.e., 0 to 90 minutes.)
P(T ≤ 36) = 36/90 = 0.40.
To use @RISK, select the uniform distribution; enter 0 for the minimum parameter and 90 for the
maximum parameter. We are using minutes as the base unit. Set the left delimiter to 36. The desired
probability is shown in left-hand side of the bar to be 40.0%.
b. Use a binomial model with 𝑝 = 𝑃(𝑇 ≤ 36) = 0.40. Each of 18 customers has a 0.40 chance of being a
“success” (shopping for 36 minutes or less). We require the probability that 10 or more are “successes” out
of 18:
PB(R ≥ 10 | n = 18, p = 0.4) = 1 - PB(R ≤ 9 | n = 18, p = 0.4) = 1 - 0.865 = 0.135.
To use @RISK, select the binomial distribution; enter 18 for n and 0.4 for p. Set the right delimiter to 9.5.
The desired probability is shown in right-hand side of the bar to be 13.5%.
9.30. a. We define 𝑅 to be the number of working engines on the plane. Thus, for a two-engine plane,
𝑅 ≥ 1 is required for the plane to land safely, and for a four-engine plane, 𝑅 ≥ 2.
Does the binomial distribution fit the uncertainty? We have dichotomous outcomes (engine working or
not). The questions are whether each engine has the same probability of working and whether failures are independent. Because all airplane engines undergo routine safety checks and parts are replaced on a schedule even while they are still working, it is likely that both independence and a constant probability hold true.
b. For a given p, PB(R ≥ 1 | n = 2, p) = 2p(1 − p) + p² and PB(R ≥ 2 | n = 4, p) = 6p²(1 − p)² + 4p³(1 − p) + p⁴. See the file “Problem 9.30.xlsx” for the complete solution. For probability values
above 67%, the four-engine plane is safer and for probability values below 67%, the two-engine plane is
safer. Counterintuitively, as the engines become less reliable, the safety of the two-engine plane surpasses
the safety of the four-engine plane. Overall, the safety drops for both planes, but does so more quickly for
the four-engine plane.
Probability of engine working    Prob of 2-engine landing safely    Prob of 4-engine landing safely
0.95                             99.750%                            99.952%
0.8                              96.000%                            97.280%
0.67                             89.110%                            89.183%
0.1                              19.000%                            5.230%
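The table and the crossover near p = 0.67 are easy to reproduce in a few lines of Python (a sketch, equivalent to the spreadsheet in "Problem 9.30.xlsx"):

```python
# Problem 9.30b: safety of the two- and four-engine planes as a function of the
# probability p that a single engine works.
def safe_two(p):
    return 2 * p * (1 - p) + p**2                               # at least 1 of 2 engines works

def safe_four(p):
    return 6 * p**2 * (1 - p)**2 + 4 * p**3 * (1 - p) + p**4    # at least 2 of 4 engines work

for p in (0.95, 0.8, 0.67, 0.1):
    print(p, round(safe_two(p), 5), round(safe_four(p), 5))     # reproduces the table above
```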
9.31. Let X be the score of a problem drinker and Y be the score of a non-problem drinker.
𝑃𝑁 (𝑋 > 75|𝜇 = 80, 𝜎 = 5) = 84.1%
and
𝑃𝑁 (𝑌 > 75|𝜇 = 60, 𝜎 = 10) = 6.7%
9.32 Let L denote the uncertain length of an envelope.
a. PN(L > 5.975 | µ = 5.9, σ = 0.0365) = P(Z > (5.975 − 5.9)/0.0365) = P(Z > 2.055) = 0.02.
To use @RISK, select the normal distribution; enter 5.9 for µ and 0.0365 for σ. Set the right delimiter to
5.975 and the desired probability is shown in the right-hand side to be 2.0%.
b. We will use a binomial model, in which p = P(Envelope fits) = 0.98. We need
PB(R ≤ 18 | n = 20, p = 0.98) = PB(R ≥ 2 | n = 20, p = 0.02)
= 1 - PB(R ≤ 1 | n = 20, p = 0.02) = 1 - 0.94 = 0.06.
That is, about 6% of the boxes will contain 2 or more cards that do not fit in the envelopes.
To use @RISK, select the binomial distribution; enter 20 for n and 0.98 for p. Set the left delimiter to 18.5
and the desired probability is shown in the left-hand side to be 6.0%.
9.33. a. Reasons for using the Poisson distribution include:
- Machines often break down independently of one another.
- The probability of a breakdown is small.
- The rate of breakdowns is roughly constant over time.
- Machines can break down at any point in time.
b. There are a few ways to solve this problem. The brute-force way is to alter the λ parameter until the
Poisson distribution closely matches the desired probabilities. At λ = 0.7, we have a close fit:
P(X = 0) = 0.50
P(X = 1) = 0.35
P(X = 2 or 3) = 0.15
P(X ≥ 4) = 0
PP(X = 0 | m = 0.70) = 0.497
PP(X = 1 | m = 0.70) = 0.348
PP(X = 2 or 3 | m = 0.70) = 0.150
PP(X ≥ 4 | m = 0.70) = 0.006
Another approach is to use the Poisson formula on one of the probabilities. The probability that X = 0 is the easy one:
P(X = 0) = 0.5 = e^(-m) m^0 / 0! = e^(-m).
Solve for m to get m = 0.693.
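A quick computational check of the λ = 0.7 fit (a sketch, not part of the original solution) can be done with scipy.stats:

    # Sketch: compare the Poisson probabilities at m = 0.7 with the assessed values.
    from scipy.stats import poisson

    m = 0.7
    print(poisson.pmf(0, m))                      # about 0.497 (target 0.50)
    print(poisson.pmf(1, m))                      # about 0.348 (target 0.35)
    print(poisson.pmf(2, m) + poisson.pmf(3, m))  # about 0.150 (target 0.15)
    print(1 - poisson.cdf(3, m))                  # about 0.006 (target 0)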
c. The expected number of breakdowns is m, whatever value was chosen.
9.34. Decision tree:
Leave plant open: 0 or 1 failure (0.736) costs $0; 2 or more failures (0.264) costs $15,000; E(Cost) = $3,960.
Close plant: $10,000.
P(0 or 1 failure) = PP(X = 0 | m = 1) + PP(X = 1 | m = 1) = 0.368 + 0.368 = 0.736.
P(2 or more failures) = 1 - P(0 or 1 failure) = 1 - 0.736 = 0.264.
Because E(Cost) for leaving the plant open is $3960 < $10,000, the choice would be to leave the plant
open.
9.35. This problem is designed to show that Bayesian updating either done sequentially or all at once
produces the same posterior probabilities.
a. The posterior probabilities after the first hour are: P(Failure) = 0.04, P(Potential) = 0.44, and P(Success)
= 0.52. These become the prior probabilities for the second hour. Given 17 Definitely-Purchase responses
in the second hour, we update as before:
P(Failure | D = 17)
= P(D = 17 | Failure)P(Failure) / [P(D = 17 | Failure)P(Failure) + P(D = 17 | Potential)P(Potential) + P(D = 17 | Success)P(Success)]
= PP(D = 17 | m = 10)(0.04) / [PP(D = 17 | m = 10)(0.04) + PP(D = 17 | m = 15)(0.44) + PP(D = 17 | m = 20)(0.52)]
= 0.0128(0.04) / [0.0128(0.04) + 0.0847(0.44) + 0.0760(0.52)]
= 0.0073.
Similarly, P(Potential|𝐷 = 17) = .4793 and P(Success|𝐷 = 17) = .5135
b. The prior probabilities are: P(Failure) = P(Potential) = P(Success) = 1/3. Given 35 Definitely-Purchase
responses in the first two hours, we update as before:
P(Failure | D = 35 in 2 hrs)
= P(D = 35 | Failure)P(Failure) / [P(D = 35 | Failure)P(Failure) + P(D = 35 | Potential)P(Potential) + P(D = 35 | Success)P(Success)]
= PP(D = 35 | m = 20)(1/3) / [PP(D = 35 | m = 20)(1/3) + PP(D = 35 | m = 30)(1/3) + PP(D = 35 | m = 40)(1/3)]
= 0.00069(1/3) / [0.00069(1/3) + 0.04531(1/3) + 0.04854(1/3)]
= 0.0073.
Similarly, P(Potential | D = 35 in 2 hrs) = 0.4793 and P(Success | D = 35 in 2 hrs) = 0.5135, matching the sequential results in part a. You need to be very careful with rounding in this problem, and thus should not use @RISK, as it only reports the first decimal place. See
"Problem 9.35.xlsx" for a complete solution.
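The sequential and all-at-once updates can also be checked with a short script. This is a sketch under the problem's assumptions (hourly Poisson means of 10, 15, and 20 for the three outcomes, with 18 Definitely-Purchase responses in the first hour and 17 in the second); it is not part of the original solution.

    # Sketch: Bayesian updating with Poisson likelihoods, sequentially and all at once.
    from scipy.stats import poisson

    rates  = {"Failure": 10, "Potential": 15, "Success": 20}   # expected responses per hour
    priors = {s: 1/3 for s in rates}

    def update(prior, count, hours):
        # Multiply each prior probability by the Poisson likelihood, then renormalize.
        like = {s: prior[s] * poisson.pmf(count, rates[s] * hours) for s in prior}
        total = sum(like.values())
        return {s: like[s] / total for s in like}

    sequential  = update(update(priors, 18, 1), 17, 1)   # hour 1, then hour 2
    all_at_once = update(priors, 35, 2)                  # 35 responses over 2 hours
    print(sequential)    # roughly {Failure: 0.007, Potential: 0.48, Success: 0.51}
    print(all_at_once)   # the same posterior, up to rounding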
9.36. a, b. Reasons for using the binomial distribution are the following: The number of machines to break
down can be between 0 and 50. The probability of any machine breaking down is 0.004. The machines
appear to break down independently of each other. Thus, a binomial distribution with p = 0.004 and n = 50
would be appropriate.
Reasons for using the Poisson distribution: The probability of an individual machine breaking down is
small and can occur any time during the day. Breakdowns seem to occur independently of one another. The
expected number of breakdowns in one day is 50 × 0.004 = 0.2, so a Poisson distribution with m = 0.2
would be appropriate.
The decision tree has three alternatives (stock 0, 1, or 2 parts), each followed by a chance node for the number of parts required (breakdowns): 0 (0.819), 1 (0.164), 2 (0.016), 3 (0.001), and so on. The endpoint costs are $0, $65, $130, $195, ... when stocking 0 parts; $10, $10, $75, $140, ... when stocking 1 part; and $20, $20, $20, $85, ... when stocking 2 parts. The expected costs are E(Cost) = $13.00 for stocking 0 parts, $11.24 for stocking 1 part, and $20.13 for stocking 2 parts.
c. Calculations for E(Cost):
E(Cost | Stock 0) = E(Breakdowns) × $65.00 = 0.2 × $65.00 = $13.00
E(Cost | Stock 1) = $10 + $65 × Σ(x=1 to 50) [x - 1] P(X = x)
= $10 + $65 × [Σ(x=1 to 50) x P(X = x) - Σ(x=1 to 50) P(X = x)]
= $10 + $65 × [E(# breakdowns) - {1 - P(X = 0)}]
= $10 + $65 × [0.2 - (1 - 0.819)]
= $11.235.
E(Cost | Stock 2) = $20 + $65 × Σ(x=2 to 50) [x - 2] P(X = x)
= $20 + $65 × [Σ(x=2 to 50) x P(X = x) - 2 Σ(x=2 to 50) P(X = x)]
= $20 + $65 × [Σ(x=1 to 50) x P(X = x) - P(X = 1) - 2 {1 - P(X = 0) - P(X = 1)}]
= $20 + $65 × [E(# breakdowns) - P(X = 1) - 2 {1 - P(X = 0) - P(X = 1)}]
= $20 + $65 × [0.2 - 0.164 - 2 {1 - 0.819 - 0.164}]
= $20.13.
This decision tree is modeled in the Excel file “Problem 9.36.xlsx.” The Binomial distribution is included
in the spreadsheet model using the Excel function BINOMDIST. For example, to find the probability of no
breakdowns use the formula “=BINOMDIST(0,50,0.004,FALSE)”.
More possible outcomes are included in the spreadsheet model (up to 7 parts required due to breakdowns); therefore, there is some round-off difference between the solution above and the answers in the spreadsheet. The preferred alternative is to stock 1 part, with an expected cost of $11.20.
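The expected costs can also be computed directly; here is a sketch (not part of the original spreadsheet, which uses BINOMDIST) that uses the Poisson model with m = 0.2. Small differences from the values above come from rounding.

    # Sketch: expected cost of stocking s spare parts when daily breakdowns are Poisson(0.2).
    from scipy.stats import poisson

    m, part_cost, downtime_cost = 0.2, 10, 65

    def expected_cost(s, max_breakdowns=50):
        cost = part_cost * s                          # $10 per part stocked
        for x in range(max_breakdowns + 1):
            shortfall = max(x - s, 0)                 # breakdowns not covered by stocked parts
            cost += downtime_cost * shortfall * poisson.pmf(x, m)
        return cost

    for s in range(3):
        print(s, round(expected_cost(s), 2))          # close to $13.00, $11.24, $20.13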
9.37. a. E(X) = e^(10 + 0.5(0.09)) = $23,040.
Var(X) = e^(2(10)) (e^0.09 - 1) e^0.09 = 49,992,916
σX = $7,071.
b. PL(X > 50,000 | µ = 10, σ = 0.3) = PN(Y = ln(X) > ln(50,000) | µ = 10, σ = 0.3)
= PN(Y > 10.8198 | µ = 10, σ = 0.3)
= P(Z > (10.8198 - 10)/0.3) = P(Z > 2.73) = 0.0032.
c. First, find the probability distribution for Q = 200X. According to the hint, Q will be approximately normal with mean µ = 200(23,040) = $4.608 million, variance σ² = 200(49,992,915.95) = 9,998,583,190, and standard deviation σ = $99,992.91, or about $0.1 million.
Now find the 0.95 fractile of this normal distribution. That is, find the value q such that PN(Q ≤ q | µ = 4.608, σ = 0.1) = 0.95. We know that P(Z ≤ 1.645) = 0.95. Therefore,
P(Z ≤ (q - 4.608)/0.1) = 0.95
only if
(q - 4.608)/0.1 = 1.645
or
q = 4.608 + 1.645(0.1) = $4.772 million.
Thus, if the company has on hand $4.772 million, they should be able to satisfy all claims with 95%
probability.
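The same answer can be obtained numerically; the following sketch (not part of the original solution) uses the normal approximation for the total of the 200 lognormal claims.

    # Sketch: 0.95 fractile of total claims under the normal approximation.
    from math import exp, sqrt
    from scipy.stats import norm

    mu, sigma, n = 10, 0.3, 200
    mean_claim = exp(mu + 0.5 * sigma**2)                           # about $23,040 per claim
    var_claim  = exp(2 * mu) * (exp(sigma**2) - 1) * exp(sigma**2)  # about 49,992,916
    total_mean, total_sd = n * mean_claim, sqrt(n * var_claim)      # about $4.608M and $0.1M
    print(norm.ppf(0.95, loc=total_mean, scale=total_sd))           # about $4.77 million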
Case Study: Overbooking
1. Use a binomial distribution, and let Res = n = number of reservations sold.
PB(R > 16 | Res = 17, p = 0.96) = PB(R = 0 | n = 17, p = 0.04) = 0.4996.
PB(R > 16 | Res = 18, p = 0.96) = PB(R ≤ 1 | n = 18, p = 0.04) = 0.8393.
PB(R > 16 | Res = 19, p = 0.96) = PB(R ≤ 2 | n = 19, p = 0.04) = 0.9616.
This model is shown in the Excel file "Overbooking I.xlsx." The probabilities for the possible numbers of arrivals, given a certain number of reservations, are shown in a table in the second worksheet in the file. For example, the probability that 17 people show up given that 17 reservations are taken (i.e., 0 no-shows) can be found using the Excel formula "=BINOMDIST(0,17,0.04,TRUE)". @RISK can be used to determine this value, but you would then need to manually transfer the values from @RISK into the spreadsheet model, whereas Excel provides built-in formulas for many of the distributions. This case and the following cases use the built-in Excel functions rather than finding the value in @RISK and typing it into the spreadsheet.
2.
E(R | Res = 16) = $225 (16) = $3600
E(C1 | Res = 16) = $900 + $100 Σ(x=0 to 16) x PB(X = x | n = 16, p = 0.96)
= $900 + $100 E(X)
= $900 + $100 (15.36)
= $2436.
E(C2 | Res = 16) = 0 (No extra passengers can arrive.)
E(Profit | Res = 16) = $3600 - $2436 - $0 = $1164.
These calculations are shown in the first worksheet in the Excel file.
3.
E(R | Res = 17) = $225 (17) = $3825
E(C1 | Res = 17) = $900 + $100 Σ(x=0 to 16) x PB(X = x | n = 17, p = 0.96) + $1600 PB(X = 17 | n = 17, p = 0.96)
= $900 + $782.70 + $799.34 = $2482.04
E(C2 | Res = 17) = $325 PB(X = 17 | n = 17, p = 0.96) = $325 (0.4996) = $162.37
E(Profit | Res = 17) = $3825.00 - $2482.04 - $162.37 = $1180.59.
E(R | Res = 18) = $225 (18) = $4050
E(C1 | Res = 18) = $900 + $100 Σ(x=0 to 16) x PB(X = x | n = 18, p = 0.96) + $1600 PB(X ≥ 17 | n = 18, p = 0.96)
= $900 + $253.22 + $1342.89 = $2496.11
E(C2 | Res = 18) = $325 PB(X = 17 | n = 18, p = 0.96) + $650 PB(X = 18 | n = 18, p = 0.96)
= $325 (0.3597) + $650 (0.4796) = $428.65
E(Profit | Res = 18) = $4050.00 - $2496.11 - $428.65 = $1125.24.
E(R | Res = 19) = $225 (19) = $4275
E(C1 | Res = 19) = $900 + $100 Σ(x=0 to 16) x PB(X = x | n = 19, p = 0.96) + $1600 PB(X ≥ 17 | n = 19, p = 0.96)
= $900 + $60.74 + $1538.56 = $2499.30
E(C2 | Res = 19) = $325 PB(X = 17 | n = 19, p = 0.96) + $650 PB(X = 18 | n = 19, p = 0.96) + $975 PB(X = 19 | n = 19, p = 0.96)
= $325 (0.1367) + $650 (0.3645) + $975 (0.4604) = $730.26
E(Profit | Res = 19) = $4275.00 - $2499.30 - $730.26 = $1045.44.
The optimal amount of overbooking is one seat. By selling 17 reservations, Mockingbird obtains the highest expected profit.
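These expected profits can be verified with a short loop. The sketch below (not part of the original solution) assumes the cost structure used in the calculations above: a $225 fare per reservation, a $900 fixed cost plus $100 per arriving passenger up to the 16 seats, and $325 per bumped passenger.

    # Sketch: expected profit as a function of the number of reservations sold.
    from scipy.stats import binom

    def expected_profit(res, p=0.96, fare=225, seats=16):
        revenue = fare * res
        e_cost = 0.0
        for x in range(res + 1):                    # x = number of passengers who show up
            prob = binom.pmf(x, res, p)
            c1 = 900 + 100 * min(x, seats)          # operating cost
            c2 = 325 * max(x - seats, 0)            # bumping cost
            e_cost += prob * (c1 + c2)
        return revenue - e_cost

    for res in range(16, 20):
        print(res, round(expected_profit(res), 2))  # about 1164, 1181, 1125, 1045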
Case Study: Earthquake Prediction
Excel provides many useful functions to solve for the probabilities. The model is shown in the Excel file
“Earthquake Prediction.xlsx.”
If you want to do the raw calculations in a spreadsheet for this problem, a recursive formula for calculating
Poisson probabilities is helpful:
PP(X = k + 1 | m) = e^(-m) m^(k+1) / (k + 1)! = [e^(-m) m^k / k!] × [m / (k + 1)] = PP(X = k | m) × m / (k + 1).
A recursive formula like this can be entered and calculated easily in a spreadsheet. It saves laborious
calculations of many factorial terms.
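The same recursion is easy to implement outside a spreadsheet as well; here is a sketch (not part of the original solution) in Python:

    # Sketch: Poisson probabilities via the recursion P(X = k+1) = P(X = k) * m / (k + 1).
    from math import exp

    def poisson_probs(m, k_max):
        probs = [exp(-m)]                   # P(X = 0) = e^(-m)
        for k in range(k_max):
            probs.append(probs[-1] * m / (k + 1))
        return probs

    print(sum(poisson_probs(24.93, 10)))    # P(X <= 10 | m = 24.93), about 0.0006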
1.
PP(X ≤ 10 in next year | m = 24.93/year) = PP(X ≤ 10 | m = 24.93) = 0.000613
Or this value can be found using the Excel formula “=POISSON(10,24.93,TRUE)”
PP(X ≤ 7 in next 6 months | m = 24.93/year) = PP(X ≤ 7 | m = 12.465) = 0.071069
This value can be found using the Excel formula “=POISSON(7,12.465,TRUE)”
PP(X > 3 in next month | m = 24.93/year) = 1 - PP(X ≤ 3 | m = 2.0775) = 0.157125
This value can be found using the Excel formula “=1-POISSON(3,2.0775,TRUE)”
These formulas are shown in the first worksheet of the Excel file.
2. [Figure: density function for earthquake magnitude, f(x), with magnitude on the horizontal axis from 4.0 to 8.5.]
PE(M ≤ 6.0 | m = 1.96) = 1 - e^(-1.96(6.0-4.0)) = 0.980
PE(5.0 ≤ M ≤ 7.5 | m = 1.96) = e^(-1.96(5.0-4.0)) - e^(-1.96(7.5-4.0)) = 0.140
PE(M ≥ 6.4 | m = 1.96) = e^(-1.96(6.4-4.0)) = 0.009
Range        Probability
4.0 - 5.0    0.859
5.0 - 6.0    0.121
6.0 - 7.0    0.017
7.0 - 8.0    0.002
8.0 +        0.001
Sensitivity analysis: Find m that corresponds to the data in Table 9.1:
Magnitude (x)    Empirical P(M ≥ x)     Exponential rate m
8.0+             1/2493 = 0.0004        1.96
7.0              13/2493 = 0.0052       1.75
6.0              93/2493 = 0.0373       1.64
5.0              493/2493 = 0.1978      1.62
Thus, m could be less than 1.96. What value should be chosen for the model? A value of 1.96 makes sense
if the concern is primarily with large earthquakes. Furthermore, this choice will result in larger probabilities
of large earthquakes, a kind of “worst case” scenario.
These calculations are shown in the second worksheet of the Excel file “Earthquake Prediction.xls”. Also,
the probability as a function of m is implemented such that m can be varied to see the impact on the
probability.
3. Performing the calculations as described in the case gives
P(At least 1 quake with M > 8 in next year) ≈ 0.01
P(At least 1 quake with M > 8 in next 5 years) ≈ 0.124.
These calculations are shown in the third worksheet of the Excel file “Earthquake Prediction.xls”.
4. This question asks students to think about preparation for serious earthquakes. Clearly, the potential
exists for substantial damage. Other than the issues raised in the case, one would want to look at training
for residents, protection for utilities, emergency measures for providing power and water to hospitals, and
distributing water, food, shelter, and other necessities in the event of a severe quake. This question may
stimulate an interesting discussion, especially if some of the students have experienced a large quake.
Case Study: Municipal Solid Waste
1. The pollutant levels follow lognormal distributions, and so the log-pollutant levels follow normal
distributions with corresponding parameters. Thus, we need to find the probability that the log-pollutant
level exceeds the logarithm of the corresponding established level from Table 9.3. This can be done using
the Normal Distribution Probability Tables or with Excel functions. This case is modeled in the spreadsheet
“Municipal Solid Waste.xlsx.”
For a small plant:
Dioxin/Furan (DF) permit level = 500 ng/Nm3
P(DF > 500 ng/Nm3) = PN(ln(DF) > ln(500) | µ = 3.13, σ = 1.2)
= P(Z > (ln(500) - 3.13)/1.2) = P(Z > 2.57) = 0.0051.
Or with the Excel function:
=1-NORMDIST(LN(500),3.13, 1.2, TRUE)
= 0.0051
Particulate Matter (PM) permit level = 69 mg/dscm
P(PM > 69 mg/dscm) = PN(ln(PM) > ln(69) | µ = 3.43, σ = 0.44)
= P(Z > (ln(69) - 3.43)/0.44) = P(Z > 1.83) = 0.0338.
Or with the Excel function:
=1-NORMDIST(LN(69),3.43, 0.44, TRUE)
= 0.0338
For a medium plant:
DF permit level = 125 ng/Nm3
P(DF > 125 ng/Nm3) = PN(ln(DF) > ln(125) | µ = 3.13, σ = 1.2)
= P(Z > (ln(125) - 3.13)/1.2) = P(Z > 1.42) = 0.0785.
Or with the Excel function:
=1-NORMDIST(LN(125),3.13, 1.2, TRUE)
= 0.0785
Particulate Matter: Analysis is the same as for the small plant.
2. SO2 permit level = 30 ppmdv
For a single observation, the analysis follows question 1:
P(SO2 > 30 ppmdv) = PN(ln(SO2) > ln(30) | µ = 3.2, σ = 0.39)
= P(Z > (ln(30) - 3.2)/0.39) = P(Z > 0.52) = 0.3030.
Or with the Excel function:
=1-NORMDIST(LN(30),3.2, 0.39, TRUE)
= 0.3030
For a geometric average of 24 independent observations of SO2, denoted by S̄,
P(S̄ > 30 ppmdv) = PN(ln(S̄) > ln(30) | µ = 3.2, σ = 0.39/√24)
= P(Z > (ln(30) - 3.2)/(0.39/√24)) = P(Z > 2.53) = 0.0057.
Or with the Excel function:
=1-NORMDIST(LN(30),3.2, 0.0796, TRUE)
= 0.0057
Naturally, this is a much smaller probability than for the single-observation case. It is due to the effect of
the Central Limit Theorem on the distribution of averages.
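The noncompliance probabilities in questions 1 and 2 can also be checked with a few lines of code; this is a sketch (not part of the original solution) using the SO2 parameters.

    # Sketch: probability a lognormal pollutant level exceeds its permit level.
    from math import log, sqrt
    from scipy.stats import norm

    mu, sigma, limit, n = 3.2, 0.39, 30, 24
    print(1 - norm.cdf(log(limit), mu, sigma))             # single observation, about 0.303
    print(1 - norm.cdf(log(limit), mu, sigma / sqrt(n)))   # geometric average of 24, about 0.006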
3. They appear to be able to satisfy the less strict requirements for a small plant more easily. However, for a
medium plant they also appear to be in good shape; the highest probability of noncompliance is for Dioxins
and Furans (0.0785). A larger plant might prove to be better in the long run because of extra capacity and
the stricter incineration standards. In addition, the officials should consider the effects of recycling program
growth, measures that would require recyclable packaging, and other potential impacts on waste disposal
needs.
CHAPTER 10
Using Data
Notes
This chapter is a combination of elementary data analysis and regression analysis. It is clear that data can
be used as a basis for developing probability models in decision analysis. The first part of the chapter does
this in a straightforward way, using data to generate histograms and CDFs, and fitting theoretical
distributions to data. For most students, the only new element of this early section is the development of
data-based CDFs. Additional reading on this topic can be found in Vatter, P., S. Bradley, S. Frey, & B.
Jackson (1978) Quantitative Methods in Management: Text and Cases, Homewood, IL: Richard D. Irwin,
Inc.
The chapter then provides instructions for using @RISK to fit distributions to data. @RISK allows one to
analyze data to find a best-fitting distribution from among a set of candidate families. It will run a fully
automated fitting procedure and generate a report that ranks a set of distributions according to how well
they fit the input data. The rankings are based on one of six goodness-of-fit measures.
The second part of the chapter considers the use of data to understand relationships via regression analysis.
For instructors, it is important to realize that we do not intend for this section to be a stand-alone treatment
of regression. In fact, we can practically guarantee that classical statisticians who teach basic statistics
courses will disapprove of the treatment of regression in this chapter. The chapter's purpose is to relate regression to decision analysis so that students understand how this statistical tool can be used in the decision-modeling process; it thus does not cover the typical statistical-inference procedures. Nevertheless, this section does
provide a review and a decision-analysis perspective for students who have already covered regression in a
basic statistics course. The chapter concludes by reinforcing the distinction between decision-analysis
modeling and conventional statistical inference.
Case studies that use regression in a decision-making context make a good addition to the problems in the
book. For example, in a bidding context it is sometimes possible to construct a regression model of the
highest competitive bids in previous auctions. For an upcoming auction, this model can serve as the basis
for developing a probability distribution for the highest competitive bid. From this distribution, the bidder
can derive the probability of winning the auction with a specific bid amount, which in turn provides a
framework for choosing an optimal (expected-value-maximizing) bid. For an example of such a case, see
“J.L. Hayes and Sons” in Vatter, et al (1978).
A brief introduction to Bayesian inference using natural conjugate priors has been placed online
(cengagebrain.com). Conceptually, this is very difficult material. In fact, this material was included only
after a number of reviewers of Making Hard Decisions specifically requested it! Of course, this online
section has as a strict prerequisite the sections in Chapter 9 on the binomial, normal, and beta distributions.
Additional reading at a level that students can understand is in An Introduction to Bayesian Inference and
Decision, by R. Winkler, New York: Holt, Rinehart, Winston (1972), and Statistics: A Bayesian
Perspective, by D. Berry, Belmont, CA: Duxbury (1995).
Topical cross-reference for problems
@RISK: 10.4, 10.8 – 10.12, Taco Shells
Bayesian Analysis: 10S.1 – 10S.5, Forecasting Sales, Overbooking: Bayesian Analysis
Beta distribution: 10.12, 10.17, 10.18
Binomial distribution: 10.10
CDFs: 10.4, 10.6, 10.8, 10.9, 10.11, 10.13, 10.14, Taco Shells
Exponential distribution: 10.11
Histograms: 10.5, 10.7, 10.10, 10.12
Natural conjugate prior: 10S.1 – 10S.5, Forecasting Sales, Overbooking: Bayesian Analysis
Normal distribution: 10.8, 10.9
Pearson-Tukey approximation: Taco Shells
Poisson distribution: 10.10
PrecisionTree: Taco Shells
Regression: 10.3, 10.13, 10.14
Subjective judgments: 10.2
Solutions
10.1. As long as what happened in the past can be reasonably expected to extend into the future, then
patterns of past outcomes can be used to estimate probability distributions. Even if such extrapolation is not
fully justified, an empirically derived probability distribution may provide a basis that can be adjusted
subjectively.
10.2. Many decision makers trust “objective” data more than “subjective” judgments. However, implicit in
such a position is the subjective judgment that the data and analysis are appropriate for constructing a
model of the uncertainty for the current situation. In many cases, the data may not be adequate in this
regard. For example, estimating a failure rate among machines implicitly assumes that the failure rate from
the past (old machines) will be the same in the future (new machines). In the case of economic forecasting,
the assumption is that the economy has not changed in any material way.
10.3. a. Important subjective judgments include the specification of the particular explanatory variables
(and hence exclusion of others), the form of the regression function, and that past data will be appropriate
for understanding property damage due to future storms. Another important modeling decision is the use of
property damage as the response variable; presumably, this variable is useful for the insurance company.
Because the population density is not constant along the coastline, however, it may be better to use
something like property damage per unit of population density.
b. The interpretation of β1 would be the expected change in property damage for every additional mile of
diameter holding constant the other variables in the model, specifically, holding constant the barometric
pressure, the wind speed, and the time of year. The sign of 𝛽1 indicates the direction of change; positive 𝛽1
means an expected increase in property damage while a negative 𝛽1 means an expected decrease in
property damage.
c. The interpretation of β5 would be the expected change in property damage due to the hurricane hitting a
city. (Including such a categorical variable is one way the insurance company could account for variable
population density. Redefining the response variable as described in the answer to part a is another.)
10.4. One alternative is to estimate the fractiles by drawing lines on Figure 10.7:
[Figure: "Empirical Distribution of Costs," cumulative probability (0 to 1) versus operating costs ($600 to $1,500).]
From this, we estimate 𝑥0.65 ≈ $1,100 and 𝑥0.35 ≈ $900.
To obtain a more precise estimate, use the Distribution-Fitting button in @RISK to fit the data. Simply highlight the data and click the button. The input distribution (in blue) in the results window is the empirical CDF. Typing 35% in the left-hand and right-hand sides of the probability bar shows that x0.65 = $1,070 and x0.35 = $903. See the figure below.
10.5. Answers to this question will vary depending on the intervals chosen. A histogram with six bins or intervals is shown below. The bin widths are each $116.55 and were chosen using the Auto (automatic) feature of the DecisionTools program StatTools. The numerical details of the bins are given below the histogram.
[Figure: histogram of the 20 annual operating costs for Big Berthas, with frequencies of 3 or 4 per bin and bin midpoints from $714.75 to $1,297.51.]
A random sample of 20 observations for the annual operating costs of Big Berthas / Data Set #1
Bin    Bin Min     Bin Max     Bin Midpoint   Freq.   Rel. Freq.   Prb. Density
#1     $656.47     $773.03     $714.75        3       0.1500       0.00129
#2     $773.03     $889.58     $831.30        3       0.1500       0.00129
#3     $889.58     $1006.13    $947.85        4       0.2000       0.00172
#4     $1006.13    $1122.68    $1064.40       4       0.2000       0.00172
#5     $1122.68    $1239.23    $1180.96       3       0.1500       0.00129
#6     $1239.23    $1355.78    $1297.51       3       0.1500       0.00129
10.6. Either draw lines as was done for Exercise 10.4 or use @RISK's distribution-fitting procedure on the residuals. For the regression using only Ads, the 0.20 fractile is -$1,600 and the 0.80 fractile is $1,632. For the regression using all three variables, the 0.20 fractile is -$361 and the 0.80 fractile is $369.
When predicting sales using only Ads, we would be 60% confident that actual sales will be between -$1,600 and $1,632 of the predicted value. When predicting sales using all three variables, we would be 60% confident that actual sales will be between -$361 and $369 of the predicted value.
10.7. Answers will vary considerably here. There is no really good way to resolve this problem in the
context of using data alone. Here are some possibilities:
a. Fit a theoretical distribution to the data. Use the theoretical distribution as a basis for determining small probabilities.
b. Use a combination of subjective assessments and the data to come up with probabilities.
c. Collapse the categories. That is, look at situations where two or more failures occurred, being sure that the newly defined categories all have at least five observations.
d. Do nothing. Estimate the probability of three failures as 1/260.
10.8. Using the distribution-fitting feature of @RISK, we have the following comparisons.
Operating Cost    Cumulative Empirical Prob    Cumulative Normal Prob
$700              10%                          6.8%
$800              20%                          15.8%
$900              30%                          30.4%
$1,000            50%                          49.1%
$1,200            20%                          17%
$1,400            0%                           2.7%
The figure below shows the data-based empirical distribution overlaying the normal distribution with
𝜇 = $1,004.71 and 𝜎 = $204.39. From the cumulative probabilities above, the normal is underestimating
the left-hand tail (compare 10% to 6.8% for $700 operating costs) and overestimating the right-hand tail
(compare 0% to 2.7% for $1,400 operating costs). Overall, the normal is a very good fit. The figure shows
that it is the best fitting for the chi-square, K-S, and A-D fit measures and is second best for the AIC and
BIC measures of fit.
10.9. a. The figure below shows the empirical CDF superimposed on the best-fitting normal distribution, which has a mean of 9.7620 grams and a standard deviation of 1.2008 grams.
Weight (grams)    Cumulative Empirical Prob    Cumulative Normal Prob
8                 6.7%                         7.1%
9.5               33.3%                        43.3%
11.5              6.7%                         7.4%
b. The normal distribution probability is not terribly close, but neither is it terribly far off. The only serious departure from normality is that the distribution is slightly skewed. @RISK shows that the Laplace distribution is the best-fitting distribution for all five measures of fit. The Laplace has an unusual shape, as shown below. The peak in the middle results from two exponential functions being placed back to back. Is the Laplace better than the normal? To answer this, it helps to know what the Laplace is used to model. Researchers have recently been using the Laplace instead of the normal when they believe the tails need more weight. In this case, the scientist has to ask whether the weights of the lab animals can be extreme. If so, the Laplace could be better than the normal.
10.10. a. @RISK recommends a Poisson distribution to model the data with m = 0.39. The Poisson is a natural choice here. In Chapter 9, we discussed the four conditions necessary to use the Poisson. The independence condition (#3 in Chapter 9) is always tricky to confirm. From the fit results, the Poisson is a very good choice. The parameter m = 0.39 is interpreted as the expected number of defective bulbs per box.
b, c. The parameter m = 0.39 did not change for the Poisson distribution. @RISK reports a binomial with n = 3 and p = 0.12333. See below. It is not necessary to use @RISK's results. For example, a binomial distribution with n = 12 and p = 39/1200 would be a reasonable distribution. The total number of bulbs checked is 1,200, and 39 were found defective.
10.11. a. An exponential distribution often is used to model times between arrivals, especially when the
arrivals are independent (in this case, no groups of shoppers arriving at once).
b. No, @RISK reports that the triangular distribution is the #1 fit for four of the five fit measures. The
exponential is certainly close enough to be used here.
c. The fit for the exponential distribution improved only slightly. For example, the K-S distance went from 0.17 to 0.15. The fit in Part b had a shift of 0.00115 and a mean of 1.77. The fit now has no shift and a mean of 1.87. This is based on the sample mean equaling 1.87 minutes. Thus, we will use an exponential distribution with β = 1.87 or m = 1/1.87 = 0.53.
10.12 a. A Poisson distribution might be appropriate for these data. The problem involves the occurrence of
outcomes (nests) across a continuum (area). Certainly, the probability of a nest at any given location is
quite small. The occurrence of nests may not be perfectly independent due to territorial behavior by birds,
however; the probability of a nest may be reduced if another nest is nearby.
b. The Poisson has the second or third best fit. The sample mean and sample standard deviation are compared to the means and standard deviations estimated from the theoretical fitted distributions in the table below.
            Sample    IntUniform    NegBin    Poisson    Geomet
Mean        6.375     7.000         6.375     6.375      6.375
St. Dev.    3.294     3.742         3.385     2.525      6.857
c. The figure below shows the spike at 9 nesting sites. The data show nearly a 21% chance of finding 9 nesting sites per five acres, whereas the Poisson reports 8.2%. Because none of the fitted distributions does a good job of modeling this uncertainty, even though the situation seems a natural fit for the Poisson, we would use the empirical distribution to model this uncertainty.
10.13. a. The regression equation is:
E(Sales Price | House Size, Lot Size, Attractiveness)
= 15.04 + 0.0854 (House Size) + 20.82 (Lot Size) + 2.83 (Attractiveness).
b. Creating this graph requires calculation of the residuals from the regression (which is done automatically with Excel's regression procedure or with StatTools). Then use the residuals in exactly the same way that we used the halfway-house data in Table 10.9 in the text to create a CDF. To use @RISK to create the CDF of the residuals, use either the RiskCumul distribution or @RISK's fitting procedure, which constructs the empirical distribution. The file "Problem 10.13.xlsx" uses the RiskCumul formula. Even though we did not ask, notice in the Fit-Results window below that the normal distribution fits the residuals quite well.
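As a rough illustration of the residual-CDF idea (not part of the original solution, and using made-up numbers rather than the actual house data in the text), the steps look like this:

    # Sketch: fit the regression, then build an empirical CDF of the residuals.
    import numpy as np

    house_size = np.array([2100, 2300, 2500, 2700, 2900])   # hypothetical data
    lot_size   = np.array([1.2, 1.5, 1.8, 1.6, 2.0])
    attract    = np.array([60, 70, 65, 75, 80])
    price      = np.array([370, 410, 430, 480, 520])        # sales price, $1000s

    X = np.column_stack([np.ones(len(price)), house_size, lot_size, attract])
    coef, *_ = np.linalg.lstsq(X, price, rcond=None)         # least-squares coefficients
    residuals = price - X @ coef

    r_sorted = np.sort(residuals)                            # empirical CDF of the residuals
    cum_prob = np.arange(1, len(r_sorted) + 1) / len(r_sorted)
    print(list(zip(r_sorted.round(2), cum_prob)))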
c. The expected Sales Prices for the two properties are
1.
E(Sales Price | House Size = 2700, Lot Size = 1.6, Attractiveness = 75)
= 15.04 + 0.0854 (2700) + 20.82 (1.6) + 2.83 (75)
= 491 ($1000s).
2.
E(Sales Price | House Size = 2000, Lot Size = 2.0, Attractiveness = 80)
= 15.04 + 0.0854 (2000) + 20.82 (2.0) + 2.83 (80)
= 453.7 ($1000s).
d.
In the graph, House 2 is the leftmost (blue) line. From the graph, it is easy to see that House #2 is
reasonably priced, according to the model. At $480K, its list price falls just below the 0.80 fractile of the
distribution. Presuming that the owners have built in some negotiating room, it looks as though Sandy may
be able to make a reasonable offer and obtain the property for a price close to the price suggested by the
model.
House #1 is a different story. Its list price falls well above the probability distribution given by the model. It
is either way overpriced (which suggests that Sandy may have to offer a very low price), or there is
something about the house that increases its value but is not reflected in the model.
10.14. a. Calculating the regression coefficients gives the expression
E(Sales ($1000s) | Index, Price, Invest, Ad) = 3275.89 + 56.95 (Index) - 15.18 (Price)
+ 1.55 (Invest) + 7.57 (Ad)
The interpretations are as follows:
• A one-point increase in the Spending Index leads to an expected increase of $56,950 in Sales when
holding constant the System Price, Capital Investment, and Advertising and Marketing variables.
• A $1,000 increase in System Price leads to an expected decrease of $15,180 in Sales when holding
constant the Spending Index, Capital Investment, and Advertising and Marketing variables.
• A $1,000 increase in Capital Investment leads to an expected increase of $1,550 in Sales when
holding constant the Spending Index, System Price, and Advertising and Marketing variables.
• A $1,000 increase in Advertising and Marketing leads to an expected increase of $7,570 in Sales
when holding constant the Spending Index, System Price, and Capital Investment variables.
b. E(Sales ($1000s) | Index = 45.2, Price = 70, Invest = 145, Ad = 90)
= 3275.89 + 56.95 (45.2) - 15.18 (70) + 1.55 (145) + 7.57 (90)
= $5,695 ($1000s)
The residuals from the regression can be used to estimate the probability that Sales will exceed $6 million.
This can be done by creating the CDF graph for sales and reading off P(Sales > $6 Million | Index, Price,
Invest, and Ad). Or we can go straight to the calculations themselves. To exceed $6 million, the error
would have to be greater than $305,000. This amount falls at about the 0.90 fractile of the error
distribution. Thus, we estimate a probability of about 10% that Sales will exceed $6 million.
c. Using the conditions and estimate from part b, the 0.10 fractile (in $1,000s) is about $5,390. If we
blithely use the model as it is, we can estimate how far the price must be dropped for the 0.10 fractile to be
6000. The difference is 610. Every $1,000 decrease in price leads to an expected decrease of $15,180 in
Sales, and the same incremental change applies to the fractiles as well. Thus, we divide 610 by 15.18 to
arrive at about 40. That is, the price must be dropped by about $40,000 to obtain a 90% probability that
Sales will exceed $6 million. This implies a System Price in the neighborhood of $30,000. However, it is
important to realize that the model was never meant to be used under these conditions! The lowest System
Price in the data set is $56.2, and that was with a much lower Spending Index, Capital Investment, and
Advertising. So the model is not applicable when talking about a system price of $30,000.
The best advice to give Thomas is that the goal of $6 million in sales in the first half of next year will be
difficult to achieve. Although he might drop the price further or spend more in advertising, achieving the
target will take a substantial amount of luck!
Case Study: Taco Shells
1. Cost per unbroken shell:
New supplier: $25.00/case; number of unbroken shells X; cost per unbroken shell = $25.00 / X.
Current supplier: $23.75/case; number of unbroken shells Y; cost per unbroken shell = $23.75 / Y.
Discrete chance nodes really would be correct, because only integer numbers of shells will be unbroken.
However, the chance node would have to have 500 branches, so a continuous fan is more practical.
2. The new supplier certainly has the higher expected number of usable shells per case. The two CDFs are
about the same in terms of riskiness. The data and a description how to create these CDFs in @RISK are in
the file “Taco Shells Case.xlsx.”
3. The statistics in the figure above allow us to easily construct either the ES-M or EP-T discrete approximations discussed in Chapter 8. For the EP-T approximation, the median is assigned a probability of 0.63, and the 0.05 and 0.95 fractiles are assigned probability values of 0.185.
EP-T decision tree (cost per usable shell):
New Supplier ($25.00/case): 463 usable shells (p = 0.185, $0.0540); 470 usable shells (p = 0.63, $0.0532); 482.8 usable shells (p = 0.185, $0.0518); expected cost per usable shell = $0.0531.
Current Supplier ($23.75/case): 427 usable shells (p = 0.185, $0.0556); 441 usable shells (p = 0.63, $0.0539); 450.2 usable shells (p = 0.185, $0.0528); expected cost per usable shell = $0.0540.
Expected cost per usable shell for new supplier = $0.0531
Expected cost per usable shell for current supplier = $0.0540
This decision tree is modeled in the Excel file "Taco Shells Case.xlsx." In the decision tree, the consequences are determined using a branch payoff formula that divides the cost per case by the number of usable shells. Click on the settings for the decision tree to view the default formula, or on the outcome nodes to view the node-specific formulas. A linked tree was also created.
4. Average number of unbroken shells for new supplier = 473 (add up all 12 observations and divide by 12). Thus, the average cost per usable shell for the new supplier = $25.00 / 473 = $0.0529. Similarly, the average number of unbroken shells for the current supplier = 441, and the average cost per usable shell = $23.75 / 441 = $0.0539.
Note that these results are very close to those found in question 3. Two advantages of using the CDFs:
a. The variability in the distributions is clear from the CDFs, but not when using the sample
average directly.
b. Question 4 requires us to approximate E($25.00 / X) ≈ $25.00 / E(X) and E($23.75 / Y) ≈ $23.75 / E(Y). However, this is not generally appropriate. That is, E(1/X) ≠ 1/E(X) in general. The use of the CDF and the costs of the shells directly is more appropriate.
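The point that E(1/X) ≠ 1/E(X) is easy to demonstrate numerically. The sketch below (not part of the original solution) uses hypothetical shell counts, not the case data:

    # Sketch: averaging per-case costs versus dividing by the average shell count.
    import numpy as np

    usable = np.array([430, 450, 460, 470, 475, 480, 485, 490, 300, 495, 465, 478])  # hypothetical
    print((25.0 / usable).mean())   # E(25/X): average of the per-case costs
    print(25.0 / usable.mean())     # 25/E(X): slightly smaller, because 1/x is convex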
5. Ortiz should certainly consider taste, reliability of the supplier, reputation, whether he can obtain other
products from the same supplier, etc.
Chapter 10 Online Supplement: Solutions to Problems and Case Studies
10S.1. a. The natural conjugate prior distribution for µ is normal with mean m0 = 9.4 grams and σ0 = 0.8/1.96 = 0.4082. Note that m0 must be halfway between 9.0 and 9.8, and halfway between 8.6 and 10.2. The standard deviation σ0 is found by realizing that the distance 10.2 - 9.4 must be equal to 1.96 σ0. Thus,
PN(µ ≥ 10 grams | m0 = 9.4, σ0 = 0.4082) = P(Z ≥ (10 - 9.4)/0.4082) = P(Z ≥ 1.47) = 0.0708.
b. The posterior distribution for µ is normal with
m* = [9.4(1.5²/15) + 9.76(0.4082²)] / (1.5²/15 + 0.4082²) = 9.59
and
σ* = √[ (1.5²/15)(0.4082²) / (1.5²/15 + 0.4082²) ] = 0.2810.
Now we have
PN(µ ≥ 10 grams | m* = 9.59, σ* = 0.2810) = P(Z ≥ (10 - 9.59)/0.2810) = P(Z ≥ 1.46) = 0.0721.
Note that the probability has changed only slightly. The mean has shifted up from 9.4 to 9.59 grams, but the
standard deviation is less. The net effect is that the probability that µ is greater than 10 grams is still about
the same.
10S.2. a. The predictive probability that a single animal weighs more than 11 grams is equal to
P(X ≥ 11 | m0 = 9.4, σ0 = 0.4082) = PN(X ≥ 11 grams | m0 = 9.4, σp = √(1.5² + 0.4082²) = 1.5546)
= P(Z ≥ (11 - 9.4)/1.5546) = P(Z ≥ 1.03) = 0.1515.
b. P(X ≥ 11 | m* = 9.59, σ* = 0.2810) = PN(X ≥ 11 grams | m* = 9.59, σp = √(1.5² + 0.2810²) = 1.5261)
= P(Z ≥ (11 - 9.59)/1.5261) = P(Z ≥ 0.92) = 0.1788.
10S.3. a. The graph for fβ(q | r0 = 1, n0 = 20): [Figure: beta density fβ(q | r0 = 1, n0 = 20) plotted for q from 0 to 0.5.]
b. The posterior distribution is fβ(q | r* = 40, n* = 1220), which is essentially a spike at q = 40/1220 =
0.0328.
10S.4. a. The prior distribution is fβ(c | r0 = 3, n0 = 6): [Figure: beta density fβ(c | r0 = 3, n0 = 6) plotted for c from 0 to 1.]
b. The predictive probability is P(R = 3 | n = 4, r0 = 3, n0 = 6) = (5! 3! 4! 5!) / (3! 2! 1! 2! 9!) = 0.2381.
Likewise, P(R = 4 | n = 4, r0 = 3, n0 = 6) = 0.1190, and so
P(R > 2 | n = 4, r0 = 3, n0 = 6) = 0.2381 + 0.1190 = 0.3571.
c. The posterior distribution is fβ(c | r* = 3 + 3 = 6, n* = 6 + 4 = 10): [Figure: prior and posterior beta densities plotted for c from 0 to 1.]
d. The predictive probabilities are:
r     P(R = r | n = 10, r* = 6, n* = 10)
6     0.1750
7     0.1715
8     0.1393
9     0.0867
10    0.0325
Thus, P(R > 5 | n = 10, r* = 6, n* = 10) is equal to the sum of these five probabilities, or 0.6050.
e. Now the posterior distribution is fβ(c | r** = 6 + 6 = 12, n** = 10 + 10 = 20). His probability that the
measure will pass must be the expected value of this distribution, or E(C) = 12/20 = 0.60.
10S.5. a. For the comptroller,
PN(µ > 11,000 | m0 = 10,000, σ0 = 800) = P(Z > (11,000 - 10,000)/800) = P(Z > 1.25) = 0.1056.
For her friend,
PN(µ > 11,000 | m0 = 12,000, σ0 = 750) = P(Z > (11,000 - 12,000)/750) = P(Z > -1.33) = 0.9082.
b. The posterior distribution for the comptroller is normal with
m* = [10,000(1500²/9) + 11,003(800²)] / (1500²/9 + 800²) = 10,721
and
σ* = √[ (1500²/9)(800²) / (1500²/9 + 800²) ] = 424.
Thus,
PN(µ > 11,000 | m* = 10,721, σ* = 424) = P(Z > (11,000 - 10,721)/424) = P(Z > 0.66) = 0.2546.
For her friend, the posterior distribution is normal with
m* = [12,000(1500²/9) + 11,003(750²)] / (1500²/9 + 750²) = 11,310
and
σ* = √[ (1500²/9)(750²) / (1500²/9 + 750²) ] = 416.
Thus,
PN(µ > 11,000 | m* = 11,310, σ* = 416) = P(Z > (11,000 - 11,310)/416) = P(Z > -0.75) = 0.7734.
c. For the comptroller, the posterior distribution is normal with
m** = [10,000(1500²/144) + 11,254(800²)] / (1500²/144 + 800²) = 11,224
and
σ** = √[ (1500²/144)(800²) / (1500²/144 + 800²) ] = 123.5.
Thus,
PN(µ > 11,000 | m** = 11,224, σ** = 123.5) = P(Z > (11,000 - 11,224)/123.5) = P(Z > -1.81) = 0.9649.
For her friend, the posterior distribution is normal with
m** = [12,000(1500²/144) + 11,254(750²)] / (1500²/144 + 750²) = 11,274
and
σ** = √[ (1500²/144)(750²) / (1500²/144 + 750²) ] = 123.3.
Thus,
PN(µ > 11,000 | m** = 11,274, σ** = 123.3) = P(Z > (11,000 - 11,274)/123.3) = P(Z > -2.22) = 0.9868.
d. Eventually the data overwhelm any prior information. In the limit, as more data are collected, the
comptroller and her friend will end up with the same posterior distribution.
Case Study: Forecasting Sales
1. Average error = 2416. Standard deviation = 3555. Bill Maught’s opinion is reasonable; Morley does
appear to underestimate, and by more than Maught suspected. These data and calculations are shown in the
first worksheet of the Excel file “Forecasting Sales Case.xlsx.”
2.
P(Forecast too low by 1700 or more) = P(Error ≥ 1700) = 0.5. Note that we have defined Error as (Sales - Forecast), so that Error is what must be added to the forecast in order to get sales. If the forecast is too low by 1700 (or more), we would need to add 1700 (or more) to the forecast to obtain actual sales. Thus, we want the probability that Error is 1700 or more.
3. Maught says, “... I’d bet even money that his average forecast error is above 1700 units.” This means that
1700 is Maught’s median, which is also the mean m0 for a normal prior distribution.
“...About a 95% chance that on average he underforecasts by 1000 units or more.” This statement indicates
that 1000 is the 0.05 fractile of the distribution.
We know that P(Z ≤ -1.645) = 0.05. Thus,
(1000 - 1700)/σ0 = -1.645.
Solving for σ0:
σ0 = (1000 - 1700)/(-1.645) = 426.
Thus, Maught's prior distribution for µ, Morley's average error, is a normal distribution with parameters m0 = 1700 and σ0 = 426.
Given the data with X̄ = 2416 and n = 14, Maught's posterior distribution is normal with
m* = [1700(3000²/14) + 2416(426²)] / (3000²/14 + 426²) = 1857
and
σ* = √[ (3000²/14)(426²) / (3000²/14 + 426²) ] = 376.
Thus, P(µ > 1700 | m* = 1857, σ* = 376) = P(Z > (1700 - 1857)/376) = P(Z > -0.42) = 0.6628.
You can also use @RISK to model the normal distribution. To determine the desired probability that µ > 1700, set the left delimiter to 1700 and the right delimiter to 8300. The graph will then show the desired probability: 66.19%.
4. The probability distribution for Sales will be based on a predictive distribution of the upcoming year's error. The predictive distribution for the error is normal with m* = 1857 and σp = √(3000² + 376²) = 3023. Because Sales = Forecast + Error, and we know that Forecast = 187,000, the distribution for Sales is normal with mean E(Sales) = 187,000 + 1857 = 188,857 and standard deviation 3023. Thus,
P(Sales > 190,000) = P(Z > (190,000 - 188,857)/3023) = P(Z > 0.38) = 0.3520.
You can use @RISK to determine the probability that Sales > 190,000 by setting the left delimiter to 190,000 and the right delimiter to 210,000. The graph shows that the desired probability is 35.27%.
Case Study: Overbooking: Bayesian Analysis
1, 2. See the Excel file “Overbooking Bayesian.xlsx” for the model and analysis.
In the original Overbooking case study at the end of Chapter 9, we used a binomial distribution to model
the uncertainty about how many people arrive. In this case study, we include prior information about the
no-show rate in the form of a beta distribution. Question 1 asks for the predictive distribution for the
number of arrivals, given that Mockingbird sells 17 reservations. Let r denote the number of no-shows and
n the number of reservations sold. The predictive distribution for the number of no-shows is a beta-binomial distribution; substituting r0 = 1 and n0 = 15 into the beta-binomial formula and simplifying gives
P(No-shows = r | Reservations sold = n, r0 = 1, n0 = 15) = [(n - r + 13)! n! 14!] / [(n - r)! 13! (n + 14)!].
The following table shows the predictive probability distribution for no-shows given that Mockingbird sells
16, 17, 18, and 19 reservations. Note that the number of arrivals equals (n - no-shows). For example,
P(Arrivals = 17 | Reservations sold = 17) = P(No-shows = 0 | n = 17) = 0.452.
The table also includes calculations of expected values for revenue, costs, and profits, allowing us to
determine the number of reservations n that maximizes expected profit. The formulas for calculating these
quantities are given in the original case in Chapter 9. You can see that the optimum number of reservations
to sell is 17.
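The predictive probabilities in the table below can be computed directly from the formula above; this is a sketch, not part of the original spreadsheet:

    # Sketch: beta-binomial predictive probability of r no-shows out of n reservations.
    from math import comb
    from scipy.special import beta as beta_fn

    def pred_no_shows(r, n, r0=1, n0=15):
        return comb(n, r) * beta_fn(r0 + r, (n0 - r0) + (n - r)) / beta_fn(r0, n0 - r0)

    print([round(pred_no_shows(r, 17), 3) for r in range(5)])   # 0.452, 0.256, 0.141, 0.076, 0.039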
Parameters: r0 = 1, n0 = 15; n = 16, 17, 18, or 19 reservations sold.

              Predictive Probability
No-shows    n = 16    n = 17    n = 18    n = 19
0           0.467     0.452     0.438     0.424
1           0.257     0.256     0.254     0.252
2           0.138     0.141     0.144     0.146
3           0.072     0.076     0.079     0.083
4           0.036     0.039     0.043     0.046
5           0.017     0.020     0.022     0.024
6           0.008     0.009     0.011     0.013
7           0.003     0.004     0.005     0.006
8           0.001     0.002     0.002     0.003
9           0.001     0.001     0.001     0.001
10          0.000     0.000     0.000     0.001
11+         0.000     0.000     0.000     0.000

Expected values:
n           16        17        18        19
Revenue     $3,600    $3,825    $4,050    $4,275
E(C1)       $2,393    $2,442    $2,467    $2,481
E(C2)       $0        $147      $367      $625
E(Profit)   $1,207    $1,237    $1,216    $1,169
3. If all 17 passengers arrive, the manager’s posterior distribution for the no-show rate becomes a beta
distribution with parameters
r* = r0 + r = 1 + 0 = 1
n* = n0 + n = 15 + 17 = 32.
4. The following table shows the predictive distributions for no-shows as in question 1, but now with r* = 1
and n* = 32. The expected value calculations indicate that the optimum action is not to overbook, but to sell
exactly 16 reservations.
Parameters: r* = 1, n* = 32; n = 16, 17, 18, or 19 reservations sold.

              Predictive Probability
No-shows    n = 16    n = 17    n = 18    n = 19
0           0.660     0.646     0.633     0.620
1           0.229     0.234     0.237     0.240
2           0.076     0.081     0.086     0.090
3           0.024     0.027     0.030     0.033
4           0.007     0.009     0.010     0.011
5           0.002     0.003     0.003     0.004
6           0.001     0.001     0.001     0.001
7+          0.000     0.000     0.000     0.000

Expected values:
n           16        17        18        19
Revenue     $3,600    $3,825    $4,050    $4,275
E(C1)       $2,450    $2,482    $2,493    $2,498
E(C2)       $0        $210      $488      $790
E(Profit)   $1,150    $1,133    $1,068    $987
CHAPTER 11
Monte Carlo Simulation
Notes
Simulation, sometimes referred to as “Risk Analysis” when applied in decision-analysis settings, is a very
important and powerful tool. Many of my students have not been exposed to the idea of simulation modeling before reading about it in Making Hard Decisions with DecisionTools, 3rd ed., and the opportunity to work on relatively complex decision problems proves to be enlightening and engaging. Students who have little experience using a spreadsheet, however, may find it very challenging. Students typically come away from Chapter 11 feeling good about the tools they have learned.
The first portion of the chapter discusses the general procedure for simulating random variables, including
the general CDF-inversion method. Following this, we turn to the implementation of simulation using
@RISK with detailed step-by-step instructions and several illustrations.
When entering the probability distributions in a spreadsheet model, it is often helpful at first to use the
Define Distribution window to better understand how to assign values to function arguments. Then, once
you better understand the syntax of the distribution function arguments, you can enter the arguments
yourself directly in Excel, bypassing the Define Distribution window. Students should have used the Define
Distribution window in Chapters 8, 9 & 10, and hence should be comfortable with it.
Topical cross-reference for problems
@RISK: 11.8 – 11.16, Manufacturing Process, La Hacienda Musa, Overbooking Part III
Beta distribution: 11.8 – 11.10
Binomial distribution: Overbooking, Part II
Decision trees vs simulation: 11.5
Decision variables: 11.7, La Hacienda Musa
Discrete distribution: 11.16
EP-T vs ES-M: 11.11, 11.12
Cumulative risk profiles: Choosing a Manufacturing Process
Normal distribution: 11.5, 11.6, Choosing a Manufacturing Process, La Hacienda Musa
Monty Hall: 11.15
Objectives: 11.9
Opportunity cost: 11.8
Poisson distribution: Choosing a Manufacturing Process
PrecisionTree: 11.11, 11.12
Requisite model: 11.1, 11.10
Sensitivity analysis: 11.13, 11.14, Overbooking, Part II
Sequential simulations: 11.8 – 11.10, La Hacienda Musa
Stochastic dominance: Choosing a Manufacturing Process
Subjective judgments: 11.3, 11.4
Triangular distribution: 11.13
Uniform distribution: 11.13, 11.14
Solutions
11.1. Constructing a simulation model can help a decision maker specify clearly what is uncertain in the
environment and the nature of the uncertainty. This is part of the structuring phase of decision analysis, and
insight can be gained in the process of iterative modeling and analysis with the goal of reaching a requisite
decision model. The analysis itself (running the simulation) can produce simulation-generated probability
distributions for those uncertain quantities in which the decision maker is interested (e.g., NPV, profit,
payoff, cost).
11.2. (Note: the difference between this question and 11.1 is that this one is meant to focus on the nuts and
bolts of the simulation process.) After constructing a model of the uncertainty, the computer generates
random occurrences (e.g., revenue and cost) according to the specified distributions, and then aggregates
these occurrences according to the model (e.g., profit = revenue - cost). Finally, these results are tracked for
all iterations of the model to produce a simulation-generated distribution of the variable (profit) in which
the decision maker is interested.
11.3. Answers will depend on a student’s attitudes toward risk. In particular, how much additional risk is
the student willing to take for an increase in expected profit? Clearly, 700 calendars is the maximum almost
every student will order because the expected profit drops after that. See Figure 11.10. A risk-neutral
student will order 700 calendars and the more risk averse the student is, the fewer calendars they will order.
Figure 11.11 illustrates the middle 90% of profit values, showing an increase in the range as the order size
increases.
11.4. Her subjective judgments regarding important sources of uncertainty and the nature of that uncertainty are required for building the model. Her input may also be needed to specify the relationships among the variables and how they impact the consequence measure. She may also be interested in more than one consequence, and thus her input is critical as to what should be measured. For example, Leah may be interested in profit as well as the probability of selling out.
11.5. Simulation works by brute force, in that it samples the input distributions thousands of times.
Decision trees do not sample the distributions at all. Rather, they use a discrete distribution to approximate
a continuous distribution. It is the judicious choice of the discrete distribution that results in the two
methods producing similar results. The ES-M and EP-T approximations were engineered so that they
closely match the mean and the standard deviation of the distribution they are replacing. Generally, we are
mainly concerned with the mean and the standard deviation of the consequence measure. If our
approximations match the mean and standard deviation of the inputs, then the decision tree model should
closely match the mean and standard deviation of the consequence measure.
11.6. The problem with saying that the simulation models choose values randomly is that it sounds as if
simulation models are capricious or erratic in their operation. A better statement would be along the lines of
saying that the values chosen by a simulation model for an uncertainty are governed by the properties of the
uncertainty and controlled by the probability distribution being used. This is why we stress that the
probability distribution should have the same properties of the uncertainty it is modeling. Much of the text
has been devoted to eliciting the most accurate probability distribution possible. In developing decision
trees, a considerable amount of time is spent to get the distribution just right.
11.7. Many students believe that using a discrete distribution to model a decision variable is appropriate
and have trouble answering this question.
There are two problems with modeling a decision variable as an uncertainty. First, a decision variable
represents a value that is under the control of the decision maker and an uncertainty represents a value that
is not.
Second, analyzing a model with a decision variable modeled as an uncertainty is difficult at best. Suppose Leah models her quantity-ordered decision using a discrete distribution, say a 30% chance of ordering 680 calendars and a 70% chance of ordering 700 calendars. After running the model, how would Leah analyze the results
to determine the best order quantity? Because she must order one and only one amount, she would have to
sort all the simulation results by order quantity. She would need to gather all the 680 calendar orders into
one spreadsheet and determine the expected profit, the risk, etc. She would also need to repeat all this for
the 700 calendar order. Much simpler would be to use RiskSimTable(680, 700), in which case the program
does all the sorting and analysis for us.
11.8. See file “Problem 11.8.xlsx” for the simulation model. The new consequence measure is labeled
“Profit*” and includes the opportunity cost of missing sales due to running out of calendars.
Because Profit* includes an extra cost, expected Profit* is less than expected Profit. Also, the order quantity that maximizes expected Profit* is larger than the order quantity that maximizes expected Profit. The results of running 9 simulations at 10,000 iterations each are given below. The two largest EMVs are highlighted, showing that ordering 710 calendars maximizes expected Profit*.
Order Quantity    Expected Profit*    Expected Profit
650               $5,653.21           $5,763.23
660               $5,712.56           $5,803.96
670               $5,758.32           $5,833.63
680               $5,791.47           $5,853.07
690               $5,813.22           $5,863.25
700               $5,824.86           $5,865.20
710               $5,827.65           $5,859.97
720               $5,822.83           $5,848.55
730               $5,811.53           $5,831.87
The graph below shows the expected Profit and expected Profit* for the different order quantities. We see
that for order quantities less than 710, expected Profit* drops more quickly than does expected Profit. This
tells us that Profit* is more affected by running out of stock than is Profit. For order quantities above 710,
expected Profit* is less affected by running out of stock than is Profit. Thus, the statement that Leah should
err on the high side when she orders holds even more for Profit*.
We labeled the new consequence Profit* because it is similar to profit, but not equal to profit. It is a proxy
measure for long-run profit that is designed to capture the disappointment of shoppers when they discover
there are no more calendars. Being a proxy measure, it does not exactly capture long-run profit and should
be treated with care. In other words, Leah needs to be careful when interpreting the simulation results on
Profit*.
[Figure: expected Profit and expected Profit* ($5,600 to $5,900) plotted against order quantity (640 to 740).]
11.9 This problem shows that Leah may have other objectives, and by changing the consequence measure
the simulation model can provide insights into her decision problem. The file “Problem 11.9.xlsx” has the
simulation model with the new consequence measures. The table below reports the results.
If Leah wants to minimize leftovers, then she should order no calendars. That would guarantee no unsold
calendars, but clearly ordering 0 calendars is not what she wants. In this case, the objective of minimizing
leftovers is not very well thought out. A little more helpful is for Leah to order the minimum number of calendars, which according to her assessments is 600 calendars. However, ordering the minimum that one expects to sell is not likely to grow the business.
The simulation model shows, via the table below, that the expected number of unsold calendars increases
as does her order quantity. If she were to order 680 calendars (expected demand), then she can expect to
have 21 unsold calendars. If she were to order 700 calendars (maximizes expected profit), then she can
expect to have 33 leftovers. We used the RiskMean function to calculate the values in the Expected
Leftovers column.
If her objective is to maximize the probability P(Profit > $5,200), then, according to the model, she should
order no more than 660 calendars. This may sound counterintuitive, but as her order size increases, so does
the standard deviation of profit. The larger the standard deviation is, the more weight there is in the tails,
and thus, there is a higher probability of seeing extreme profits. Specifically, as the order size increases,
P(Profit < $5,200) also increases. The @RISK output window below shows the profit distributions when
ordering 650 and 710 calendars. The standard deviation of profit when ordering 710 calendars is nearly
300% higher than when ordering 650 calendars, pushing the tails of the distribution further out. We used
the function RiskTarget to calculate the cumulative probability of Profit being less than $5,200.
For the third objective, Leah wants to meet the hurdle of the 5th percentile being at least $5,200. The table
below shows that this hurdle is met for order quantities of 690 or less. The reason the 5th percentile decreases as the order size increases is again that the standard deviation is increasing. For this objective, Leah should consider ordering 690 calendars or less. We used the function RiskPercentile to calculate the 5th percentile values.
Order Quantity    Expected Leftovers    P(Profit > $5,200)    5th Percentile
650               7                     100%                  $5,397.88
660               10                    100%                  $5,357.88
670               15                    99%                   $5,317.88
680               21                    98%                   $5,277.88
690               27                    97%                   $5,237.88
700               33                    95%                   $5,197.88
710               41                    93%                   $5,157.88
720               49                    91%                   $5,117.88
730               57                    89%                   $5,077.88
11.10 Students often question how much detail to include and how to determine whether they have a requisite model. This question shows that going through the work to guarantee that only whole calendars are sold is not worth the effort. The maximum difference in expected profit between the model that allows fractional calendar sales and one that allows only a whole number of calendars to be sold is one penny. See "Problem 11.10.xlsx." Clearly, fractional parts are not realistic, but including them does not materially affect the results of the analysis.
11.11 We now need the 10th and 90th percentiles of the demand distribution. As we did in Chapter 9, we can
use the Define Distribution feature to pull up the demand distribution and read off the desired percentiles.
The 10th percentile is 623 calendars, the 90th percentile is 752 calendars, and the median is still 670 calendars.
The file “Problem 11.11.xlsx” contains the simulation model and the decision tree models with all 6
alternatives. Both the ES-M and EP-T trees are in the spreadsheet and they are linked to the profit
simulation model. The results of both the EP-T and ES-M trees are displayed in the table below.
The EP-T approximation did a better job in this problem, with its estimates never more than $16.57 from the
simulation results. Three of the six ES-M estimates were more than $28 away from the simulation results.
Order Quantity   Sim Expected Profit   ES-M Expected Profit   Difference
600              $5,400.00             $5,400.00              $0.00
650              $5,763.23             $5,791.50              -$28.27
700              $5,865.20             $5,873.17              -$7.97
750              $5,785.92             $5,781.50              $4.42
800              $5,625.71             $5,585.83              $39.87
850              $5,436.79             $5,385.83              $50.95

Order Quantity   Sim Expected Profit   EP-T Expected Profit   Difference
600              $5,400.00             $5,400.00              $0.00
650              $5,763.23             $5,765.83              -$2.59
700              $5,865.20             $5,849.88              $15.32
750              $5,785.92             $5,770.13              $15.79
800              $5,625.71             $5,642.28              -$16.57
850              $5,436.79             $5,442.28              -$5.49
11.12. Again, we see that the EP-T approximation estimated the simulation model more closely than the
ES-M approximation. See table below and file “Problem 11.12.xlsx.” The file contains the simulation
model and the decision tree models with all 6 alternatives. Both the ES-M and EP-T trees are in the
spreadsheet and they are linked to the profit simulation model.
Order Quantity   Sim St. Dev. Profit   ES-M St. Dev. Profit   Difference
600              $0.00                 $0.00                  $0.00
650              $154.06               $130.81                $23.25
700              $393.66               $293.62                $100.04
750              $557.32               $487.21                $70.11
800              $637.60               $495.84                $141.76
850              $668.52               $495.84                $172.68

Order Quantity   Sim St. Dev. Profit   EP-T St. Dev. Profit   Difference
600              $0.00                 $0.00                  $0.00
650              $154.06               $176.68                -$22.61
700              $393.66               $345.11                $48.55
750              $557.32               $539.50                $17.83
800              $637.60               $674.82                -$37.22
850              $668.52               $674.82                -$6.30
In the second part of the problem, we compare the EP-T approximation using 780 as the 95th
percentile to the EP-T approximation using 781 as the 95th percentile. When comparing the results of the
simulation model to those of the decision tree, we should use 781, as it is the 95th percentile of the input
distribution. In the text, we used 780 because it was Leah's assessed value. For comparison purposes, 781 is a
slightly more accurate value.
The table below shows that there is at most a $2.40 difference between the two approximations.
However, the second table shows that using the slightly more accurate value actually produced slightly
higher errors.
Order Quantity   EP-T Expected Profit (781)   EP-T Expected Profit (780)   Difference
600              $5,400.00                    $5,400.00                    $0.00
650              $5,765.83                    $5,765.83                    $0.00
700              $5,849.88                    $5,849.88                    $0.00
750              $5,770.13                    $5,770.13                    $0.00
800              $5,644.68                    $5,642.28                    $2.40
850              $5,444.68                    $5,442.28                    $2.40

Order Quantity   Difference: Sim - EP-T(781)   Difference: Sim - EP-T(780)
600              $0.00                         $0.00
650              -$2.59                        -$2.59
700              $15.32                        $15.32
750              $15.80                        $15.79
800              -$18.97                       -$16.57
850              -$5.49                        -$5.49
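A minimal Python sketch of how the two three-point approximations are applied, assuming the standard weights (ES-M: 0.3/0.4/0.3 at the 10th/50th/90th percentiles; EP-T: 0.185/0.63/0.185 at the 5th/50th/95th percentiles). The profit function and the 5th-percentile demand value are illustrative placeholders, not Leah's model.

# Sketch of the ES-M and EP-T three-point approximations to expected profit.
def profit(demand, q, margin=9.0, loss_per_leftover=3.0):   # placeholder economics
    sold = min(q, demand)
    return margin * sold - loss_per_leftover * (q - sold)

# Demand percentiles; 10th/50th/90th/95th are from the problem, 5th is a placeholder.
pctiles = {5: 580, 10: 623, 50: 670, 90: 752, 95: 781}

def esm(f):   # extended Swanson-Megill
    return 0.3 * f(pctiles[10]) + 0.4 * f(pctiles[50]) + 0.3 * f(pctiles[90])

def ept(f):   # extended Pearson-Tukey
    return 0.185 * f(pctiles[5]) + 0.63 * f(pctiles[50]) + 0.185 * f(pctiles[95])

for q in (600, 650, 700, 750, 800, 850):
    print(q, round(esm(lambda d: profit(d, q)), 2), round(ept(lambda d: profit(d, q)), 2))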
11.13. a. Each probability can be a random variable itself. For example, for Project 1, first generate p1, its
probability of success, from a uniform distribution that ranges from 0.45 to 0.55. Then, once p1 is
determined, determine whether Project 1 succeeds using p1 as the probability of success. In influence
diagram form:
[Influence diagram, no uncertainty or vagueness about the probabilities: chance nodes Project 1 through Project 5 feed directly into the Payoff node.]

[Influence diagram, with uncertainty or vagueness about the probabilities: nodes p1 through p5 feed into Project 1 through Project 5, respectively, which feed into the Payoff node.]
Although there are many ways to solve this problem, any solution should comply with the above influence
diagram. In particular, students could choose almost any distribution for each p_i, i = 1, ..., 5, as long as the
values are constrained between p_i - 0.05 and p_i + 0.05. Possible distribution choices are the
beta, the triangular, and the uniform distributions. The file “Problem 11.13a.xlsx” shows that we chose the
uniform. We also supplied some monetary values for each project to illustrate the monetary impact on the
portfolio of projects when the probability of success of each project is itself uncertain.
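A minimal Python sketch of part a (not the spreadsheet model). The base probabilities for Projects 2 through 5 and the project payoffs are illustrative placeholders; only Project 1's 0.45-to-0.55 range comes from the problem.

# Each p_i is itself drawn from U(p_i - 0.05, p_i + 0.05) before the
# success/failure of the project is simulated.
import numpy as np

rng = np.random.default_rng(1)
base_p = np.array([0.50, 0.60, 0.70, 0.40, 0.55])   # placeholder p_i (Project 1 = 0.50)
payoff = np.array([100, 80, 60, 120, 90])           # placeholder $ values per project

N = 10_000
p = rng.uniform(base_p - 0.05, base_p + 0.05, size=(N, 5))   # uncertain probabilities
success = rng.random((N, 5)) < p                              # project outcomes
portfolio = success.astype(float) @ payoff                    # total payoff per iteration
print(portfolio.mean(), portfolio.std())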
b. In part b, the probabilities are dependent. We can link them through an “optimism” node to represent the
decision maker’s unknown level of optimism/pessimism regarding the chance of success of the projects:
[Influence diagram: an Optimism level node feeds into p1 through p5, which feed into Project 1 through Project 5, respectively, which feed into the Payoff node.]
How could such a model be implemented? The file “Problem 11.13b.xlsx” has two possible solutions. Both
solutions are based on choosing an “optimism parameter” (OPT) according to a distribution. In the solution
file, OPT is based on a uniform distribution between 0 and 1. Of course, this distribution should be assessed
by the decision maker.
For Model 1 in the spreadsheet, the OPT value determines a point on the line segment that goes from
(0, p_i - 0.05) to (1, p_i + 0.05). Specifically, given a value for OPT, the probability of success is
determined by the equation for the line: 2 × (0.05) × OPT + p_i - 0.05.

[Figure: the line segment from (0, p_i - 0.05) at OPT = 0 to (1, p_i + 0.05) at OPT = 1.]
For Model 2 in the spreadsheet, the OPT value determines the modal (most likely) value for a triangular
distribution with minimum set to p_i - 0.05 and maximum set to p_i + 0.05.

[Figure: density function for p_1, a triangular distribution on (0.45, 0.55) with Mode = 0.45 + OPT × (0.55 - 0.45).]
For either model, OPT values close to zero imply the boss is pessimistic about the probability of success for
all the projects; that is, each probability of success is close to p_i - 0.05. For OPT values close to one half, the
boss is neither pessimistic nor optimistic; that is, each probability of success is close to p_i. For OPT values close
to one, the boss is optimistic about the probability of success for all the projects; that is, each probability of
success is close to p_i + 0.05.
The difference between the models is that in Model 1 we use a uniform distribution between zero and one
to choose the OPT value, and this directly determines the probability of success via 2 × (0.05) ×
OPT + p_i - 0.05. In Model 2, we again use the same uniform distribution to choose the OPT value, but the
probability is then drawn from a triangular distribution whose mode is 2 × (0.05) × OPT + p_i - 0.05.
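A minimal Python sketch of the two OPT models; the base probabilities are again placeholders.

# A single OPT draw is shared by all five projects, which makes the
# probabilities of success dependent.
import numpy as np

rng = np.random.default_rng(2)
base_p = np.array([0.50, 0.60, 0.70, 0.40, 0.55])   # placeholder p_i
N = 10_000
opt = rng.uniform(0, 1, N)                           # shared optimism parameter

# Model 1: OPT maps linearly onto [p_i - 0.05, p_i + 0.05].
p_model1 = 2 * 0.05 * opt[:, None] + base_p - 0.05

# Model 2: OPT sets the mode of a triangular distribution on the same range.
mode = 2 * 0.05 * opt[:, None] + base_p - 0.05
p_model2 = rng.triangular(base_p - 0.05, mode, base_p + 0.05)

success1 = rng.random((N, 5)) < p_model1
success2 = rng.random((N, 5)) < p_model2
print(success1.mean(axis=0), success2.mean(axis=0))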
11.14. a. A straightforward sensitivity analysis would calculate expected values with the probabilities set at
various values. The results would be a triangular grid that could be plotted in terms of p1 = P($10,000) and
p2 = P($5000):
[Figure: axes p1 and p2, each marked at 0.5, showing the triangular grid of probability combinations.]
To use simulation to incorporate the uncertainty about the probabilities, it is possible to sample the
probabilities from uniform distributions. Care must be taken when sampling the values, as each probability
p1, p2, and p3 can only range from 0 to 0.5. To do this, we sample p1 from a uniform distribution between 0
and 0.5, and we sample p2 from a uniform distribution between (0.5 - p1) and 0.5. To ensure the
probabilities sum to one, p3 is not sampled but is computed as 1 - (p1 + p2). If we had sampled p2 from a
uniform distribution between 0 and 0.5, then p3 could be larger than 0.5.
The Excel solution can be found in “Problem 11.14.xlsx.” The expected payoff when each investment is
equally likely is $5,333. The expected payoff when incorporating the uncertainty is $5,876, based on
10,000 iterations.
b. No, it would not be possible to have each of the three probabilities chosen from a uniform distribution
between zero and one, because the three probabilities would never sum to one. Some kind of building-up
process, like that described in part a, must be followed.
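A minimal Python sketch of the building-up process described in part a:

# p1 ~ U(0, 0.5), p2 ~ U(0.5 - p1, 0.5), and p3 = 1 - p1 - p2, so each
# probability stays between 0 and 0.5 and the three always sum to one.
import numpy as np

rng = np.random.default_rng(3)
N = 10_000
p1 = rng.uniform(0.0, 0.5, N)
p2 = rng.uniform(0.5 - p1, 0.5)
p3 = 1.0 - p1 - p2

print(p3.min(), p3.max())                 # p3 stays within [0, 0.5]
print(p1.mean(), p2.mean(), p3.mean())    # roughly 0.25, 0.375, 0.375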
11.15 This is a logic problem and the key to solving it correctly is that Monty always reveals the
unattractive prize, which we said was a donkey. Thus, if you pick Curtain #1 and the prize is behind
Curtain #2, then Monty must open Curtain #3. Monty never opens the curtain you choose; if he did, you
would see that you had chosen a donkey and would obviously switch. If you choose Curtain #1 and the
prize is behind it, then Monty can open either of the other curtains, as they both contain donkeys.
The solution file “Problem 11.15.xlsx” contains one possible solution. See the file and formulas for the
logic. The answer is that you double your chances of winning by switching.
Many people believe that, regardless of which curtain Monty opens, it is still equally likely that the prize is
behind either of the remaining two curtains. This would be true only if Monty randomly opened one of the two other
curtains. However, he always reveals the donkey, and this strategic move essentially provides information
you can use. It is possible to use Bayes' theorem to work out the posterior probability of winning given
switch vs don’t switch, but we will leave that to the reader. Many different solutions can be found online,
including decision tree solutions that clarify the logic behind which curtain Monty opens.
11.16. Although this problem may appear difficult, you need only replace one discrete distribution with
another in the example model from the text. Specifically, replace the discrete distribution that chooses a
day from 1 to 365 with equal probability with a discrete distribution that chooses a day according to
observed births. In the file “Problem 11.16.xlsx,” we placed the data (observed births) in the AK and AL
columns. The discrete distribution we used to choose a birthday is then:
=RiskDiscrete($AK$5:$AK$370,$AL$5:$AL$370)
This discrete distribution is repeated 30 times in cells B4 to B33.
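A minimal Python sketch of the same idea; the birth counts are random placeholders standing in for the data in columns AK and AL.

# Draw 30 birthdays from a discrete distribution weighted by observed birth
# counts and estimate P(at least two birthdays match).
import numpy as np

rng = np.random.default_rng(4)
days = np.arange(366)
births = rng.integers(900, 1100, size=366)      # placeholder for the observed counts
weights = births / births.sum()

N = 10_000
samples = rng.choice(days, size=(N, 30), p=weights)
match = np.array([len(np.unique(row)) < 30 for row in samples])
print(match.mean())                              # estimated P(shared birthday)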
Case Study: Choosing a Manufacturing Process
1. [Influence diagram: for each year i = 0, 1, 2, nodes Di, Vi, and Zi feed into a profit node Pi, and P0, P1, and P2 feed into NPV.]
This looks workable as an influence diagram. However, to do it all in one fell swoop like this, the problem
must be simplified substantially. Even if Z0, Z1, and Z2 each had only three outcomes, and V0, V1, and V2
each were modeled with three-point discrete approximations, then there would be 27 outcomes each for P0,
P1, and P2. These would combine to produce 27³ = 19,683 outcomes under NPV.
If all that is required is the expected value E(NPV) and standard deviation of NPV, one could find E(P0),
E(P1), and E(P2), along with the standard deviations, and combine these to get E(NPV) and the standard
deviation. If the entire distribution is needed, though, some other method would be required.
2. Assume the profit for each year arrives at the end of the next year, and thus must be discounted. Let P1i
denote the profit for Process 1 in year i. Then

P1i = $8 Di - Di Vi - $8000 Zi - $12,000,

and

NPV1 = P10/1.10 + P11/1.10² + P12/1.10³.

For Process 2,

P2i = $8 Di - Di Vi - $6000 Zi - $12,000,

and

NPV2 = -$60,000 + P20/1.10 + P21/1.10² + P22/1.10³.
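A minimal Python sketch of how these NPV formulas could be simulated; the distributions for Di, Vi, and Zi are placeholders, since the case inputs are not reproduced here.

# Sample yearly profits and discount them per the formulas above.
import numpy as np

rng = np.random.default_rng(5)
N = 10_000
disc = np.array([1.10, 1.10**2, 1.10**3])

D = rng.normal(16_000, 2_000, size=(N, 3))   # placeholder yearly demand
V = rng.uniform(3.5, 4.5, size=(N, 3))       # placeholder variable cost per unit
Z = rng.uniform(1.0, 2.0, size=(N, 3))       # placeholder Z factor

P1 = 8 * D - D * V - 8_000 * Z - 12_000      # yearly profit, Process 1
P2 = 8 * D - D * V - 6_000 * Z - 12_000      # yearly profit, Process 2

NPV1 = (P1 / disc).sum(axis=1)
NPV2 = -60_000 + (P2 / disc).sum(axis=1)
print(NPV1.mean(), NPV2.mean(), (NPV1 < 0).mean(), (NPV2 < 0).mean())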
3, 4. This problem is modeled in the Excel file “Manufacturing Case.xlsx.” For one run of 10,000
iterations, we obtained the following results:

Process 1: E(NPV1) = $92,084. Standard deviation = $48,200. P(NPV1 < 0) = 0.031.
Process 2: E(NPV2) = $114,337. Standard deviation = $72,808. P(NPV2 < 0) ≈ 0.049.
5. Process 2’s expected NPV is $22,261 larger than Process 1’s expected NPV. The standard deviation of
Process 2 is also larger, by nearly $25,000. While Process 2 does not quite stochastically dominate Process
1, as shown by the crossing CDFs below, Process 2 does provide the opportunity to make a lot more money
than Process 1 with approximately the same downside risk.
Case Study: La Hacienda Musa
For the model and full analysis, see “La Hacienda Musa.xlsx.” The results below were obtained from
10,000 iterations.
1a. The graph below shows Maria's profit (C26) risk profile if 100 hectares are planted organic. Expected
profit = $83,080, and P(Profit < 0) = 15.4%
1b. The graph below shows the risk profile for the difference between conventional and organic profit
(B19 in the Excel file). When the difference is negative, organic has the higher profit. You can see that
P(Organic more profitable than conventional) = 77.3%.
1c. This graph shows the profit risk profile when 50 hectares are organic and 50 conventional. Expected
profit = $60,976; 10th percentile = -$10,581; 50th percentile (median) = $59,613; 90th percentile = $135,257.
1d. This overlay shows that, as more organic is planted, the distribution moves higher, having a higher
mean (see table below). Standard deviation also increases. In particular, though, the lower tails stay about
the same, but the longer upper tails indicate greater chance of high profit.
[Overlay chart: profit distributions for Sim #1 through Sim #5, corresponding to 100, 75, 50, 25, and 0 hectares planted organic.]
2a. NOTE: See Chapter 14 for information about risk tolerance and the exponential utility function.
Looking only at risk profiles and statistics for total profit, it looks as though Maria would prefer to go with
100 hectares organic. There is, however, a bit of a risk-return trade-off. To incorporate this trade-off, we
use her risk tolerance and calculate her utility (C40 in the Excel file). Her expected utility (EU) is the mean
of this output cell after the simulation completes. The table below reports the EUs for each of the five
levels of organic, showing that 100 hectares organic does indeed have the highest EU = 0.359.
[Overlay chart: simulation results for Sim #1 through Sim #5, corresponding to 100, 75, 50, 25, and 0 hectares planted organic.]

Hectares organic   EU
100                0.359
75                 0.333
50                 0.298
25                 0.255
0                  0.202
(Results from @RISK Excel report)
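A minimal Python sketch of the expected-utility step, assuming the exponential utility form U(x) = 1 - exp(-x/R) from Chapter 14; the risk tolerance R and the profit draws are placeholders for the values in "La Hacienda Musa.xlsx".

# Apply an exponential utility to each simulated profit, then average.
import numpy as np

rng = np.random.default_rng(6)
R = 100_000                                    # placeholder risk tolerance ($)
profit = rng.normal(83_080, 90_000, 10_000)    # placeholder profit draws

expected_utility = (1 - np.exp(-profit / R)).mean()
print(round(expected_utility, 3))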
2bi. The figure below shows that the expected incremental cost to the company is $4,763. Note, though,
that there is about a 76% chance that the incremental cost is zero. This occurs when the organic price is
above the GMP, in which case the company pays nothing more than the market price.
2bii. This is a relatively easy question to answer. Keller did not need a GMP at all in order to decide to
plant all 100 hectares in organic bananas. Any GMP will be an improvement for her.
2biii. Assuming there are other farmers out there that could increase their organic area in response to the
company's incentive, there are plenty of approaches the company could take. Perhaps the most
straightforward would be to sign a fixed-price contract whereby the farmer agrees to sell the bananas for a
specified price. (Essentially, this amounts to a futures contract.) This would remove all price uncertainty
from both sides. The company's challenge would be to find a fixed price such that, in terms of expected
utility, the farmer is just indifferent between the fixed-price arrangement and the uncertain profit associated
with waiting to sell the bananas at the current market price.
Case Study: Overbooking, Part II
1. The results for 1000 iterations of the simulation model are shown below. The simulation model is saved
in the Excel file: “Overbooking II part 1.xls”.
Reservations sold:   16       17       18       19
Expected profit:     $1,164   $1,180   $1,126   $1,045
Std Deviation:       $78      $186     $247     $272
# Iterations:        1000     1000     1000     1000
2. Uncertainty about the no-show rate and the cost can be included in the spreadsheet by introducing
random variables for these quantities. For 1000 iterations, the results are shown below. The simulation
model is saved in the Excel file: “Overbooking II part 2.xls”.
Reservations sold:   16       17       18       19
Expected profit:     $1,165   $1,192   $1,155   $1,101
Std Deviation:       $80      $177     $237     $274
# Iterations:        1000     1000     1000     1000
In both questions 1 and 2, selling 17 reservations — overbooking by one seat — comes out slightly
better than selling 16. However, it is important to realize that most of the time all 17 ticket holders will
show up. It may be important to consider the cost of lost goodwill.
3. The uncertainty about the no-show rate and costs could be addressed in a straight sensitivity analysis of
the problem as solved in Chapter 9.
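A minimal Python sketch of this kind of overbooking simulation; the ticket price, bump cost, capacity, and no-show probability are placeholders for the Chapter 9 case inputs.

# Simulate profit for several numbers of reservations sold.
import numpy as np

rng = np.random.default_rng(7)
N = 1_000
capacity, price, bump_cost, p_no_show = 16, 225, 100, 0.04   # placeholders

for sold in (16, 17, 18, 19):
    shows = rng.binomial(sold, 1 - p_no_show, N)
    bumped = np.maximum(shows - capacity, 0)
    profit = price * sold - bump_cost * bumped
    print(sold, round(profit.mean()), round(profit.std()))

# For question 2, p_no_show and bump_cost would themselves be drawn from
# distributions inside the loop rather than held fixed.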
CHAPTER 12
Value of Information
Notes
Chapter 12 is where influence diagrams meet decision trees head to head. And influence diagrams
definitely come out on top! Teaching value of information using influence diagrams is a snap. On the other
hand, using decision trees to solve value-of-information problems is complicated at best. At worst, they
require painstaking flipping of probabilities via Bayes’ theorem (a step handled automatically by the
influence-diagram programs). And the trees become large and bushy.
The chapter discusses both the Expected Value of Perfect Information and the Expected Value of Imperfect
Information (the latter referred to as the EVSI, expected value of sample information, in a previous edition and in
Raiffa and Schlaifer’s (1961) Applied Statistical Decision Theory). We provide instructions for calculating
both the EVPI and EVII with PrecisionTree by adding additional arcs to influence diagrams and by
rearranging nodes of the decision tree. The student version of PrecisionTree may limit the model size to 50
nodes; therefore, some of the value-of-information trees are too large to develop completely. We have either
summarized parts of the tree or referenced separate sub-trees to overcome this limitation.
Topical cross-reference for problems
Bayes’ theorem              12.7, 12.14, DuMond International
Deterministic dominance     12.3, 12.10
Oil wildcatting             12.7
PrecisionTree               12.2, 12.4, 12.5, 12.7, 12.9 - 12.14
Texaco-Pennzoil             12.12, 12.13, Texaco-Pennzoil Revisited
Wizardry                    Texaco-Pennzoil Revisited
Solutions
12.1. The issue typically is whether to obtain information about some uncertain factor, and this decision
must be made up front: Should we hire the expert? Should we conduct a survey? The decision must be
made in anticipation of what the information will be and its possible impact and value with regard to the
decision itself. As a result, the focus is on what the value of the information is expected to be.
Because there is uncertainty as to what the value of the information is going to be, we need a summary
measure of its worth. Sometimes the information will have no value, sometimes it will have some value,
and at times it will have great value; the expected value tells us on average its worth. The expected value
averages the value across all the possible information outcomes.
12.2.
[Decision tree: alternative A pays 20 (0.1), 10 (0.2), 5 (0.6), or 0 (0.1), so EMV(A) = 7.00; alternative B pays 6 for certain, EMV(B) = 6.00. With perfect information, the decision maker chooses the better of A's known payoff and B's 6 in each branch, giving EMV(Info) = 8.20.]
EVPI = EMV(Info) - EMV(A) = 8.20 - 7.00 = 1.20. This decision tree model is saved in the Excel file
“Problem 12.2.xlsx.”
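Because this problem is small and fully specified, the EVPI can be checked with a few lines of arithmetic; a minimal Python sketch:

# EVPI for problem 12.2: with perfect information, pick the better of A's
# known payoff and B's sure 6 in each branch.
probs    = [0.1, 0.2, 0.6, 0.1]
a_payoff = [20, 10, 5, 0]
b_payoff = 6

emv_a = sum(p * x for p, x in zip(probs, a_payoff))                     # 7.00
emv_info = sum(p * max(x, b_payoff) for p, x in zip(probs, a_payoff))   # 8.20
evpi = emv_info - max(emv_a, b_payoff)                                  # 1.20
print(emv_a, emv_info, evpi)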
12.3. EVPI = 0 because one would still choose A regardless of knowing which outcome for the chance
node were to occur. To state it slightly differently, no matter what you found out about the outcome for the
chance node following A, your decision would still be the same: Choose A. Because the information never
changes the optimal alternative, the expected value of the information equals zero.
12.4. a. The following decision trees are saved in the Excel file “Problem 12.4.xlsx.” Each part is shown in
a separate worksheet.
[Decision tree: alternative A (event E) pays 20 (0.1), 10 (0.2), 0 (0.6), or -10 (0.1), EMV(A) = 3.00; alternative B (event F) pays 5 (0.7) or -1 (0.3), EMV(B) = 3.20. With perfect information about E, the decision maker chooses A whenever its known payoff exceeds EMV(B) = 3.20 and chooses B otherwise, giving EMV(Info about E) = 6.24.]

EVPI(E) = EMV(Info about E) - EMV(B) = 6.24 - 3.20 = 3.04
b. [Decision tree: with perfect information about F, the decision maker chooses B when F = 5 (payoff 5) and A when F = -1 (EMV 3.0), giving EMV(Info about F) = 4.4.]

EVPI(F) = EMV(Info about F) - EMV(B) = 4.4 - 3.2 = 1.2
c. [Decision tree: with perfect information about both E and F, the decision maker chooses the larger of the two known payoffs in each combination of outcomes, giving EMV(Info about E and F) = 6.42.]

EVPI(E and F) = EMV(Info) - EMV(B) = 6.42 - 3.2 = 3.22.

12.5. The basic influence diagram is:

[Influence diagram: the decision node "Decision A or B" and the chance nodes "Chance E" and "Chance F" all feed into the Payoff node.]
For 12.5a, b, and c, add arrows appropriately representing the information obtained:
[Influence diagrams: for 12.5a, an arc from Chance E into the decision node gives EV = 6.24; for 12.5b, an arc from Chance F into the decision node gives EV = 4.4; for 12.5c, arcs from both Chance E and Chance F into the decision node give EV = 6.42.]
These models are saved in separate worksheets in the Excel file “Problem 12.5.xlsx.” To find the value of
the information, you need to subtract the EV of the model without information from the EV of the model
with information.
12.6. a. Of course, different people will have different feelings on this one. Personally, I would prefer that
the doctor wait to inform me until after the other tests have been performed. (This may not be possible if
the further tests require additional blood samples or other interventions; I would certainly know that
something was going on.) Why wait? I would worry about the outcome of the other tests. I would just as
soon not know that they were even being performed.
b. Suppose I know of no such defects. In this state of information, I can legitimately give my house a clean
bill of health. Now, suppose that I learn from the engineering report that the
house has a defect and also that my buyer withdraws from the agreement to purchase. Now I would have to
reveal the defect to any subsequent purchaser, and it would most likely result in a lower negotiated price.
Under the circumstances, even though future buyers may also request an inspection that reveals the defect,
I would rather not know the engineer’s report; this state of knowledge gives me a better chance at a better
sales price.
c. The answer to this question really depends on your negotiation skill. If the seller knows that you have
had the building appraised, he knows that you have a very clear bottom line, and he can be very tough in
the negotiations to try to get you to make concessions until the price is right at the bottom line. If you have
the appraisal in hand, you will have to do your best not to reveal anything about the appraised value
through your behavior and sequence of counteroffers. Without the appraisal, you would not have such a
clear view. (But then, you might end up purchasing the building for more than it is worth).
Perhaps the best situation is for your boss to have the appraisal but not to reveal it to you, the agent doing
the negotiating. This way, you can negotiate as well as you can, and if the agreed-upon price is too high,
the boss can disapprove.
12.7. This decision model is saved in the Excel file “Problem 12.7.xlsx.” The decision tree for the decision
whether to drill or not is shown in the first worksheet. The decision tree for parts a and c:
[Decision tree: Drill pays $190K if oil is struck (0.1) and -$10K for a dry hole (0.9), so EMV(Drill) = $10K; Don't drill pays $0. Consulting the clairvoyant (perfect information) means drilling only when told there is oil, giving EMV = $19K.]
a. The expected value of drilling is $10 K, versus $0 for not drilling, so choose to drill.
b. The influence diagram representation is shown in the second worksheet. With the arc between the
uncertainty node “Strike oil” and the decision node “Drill?” the influence diagram evaluates the expected
value of the decision assuming perfect information. To see the expected value without information, delete
the arc. The EVPI is the difference between these two EV's, or $19,000 - $10,000 = $9,000.
[Influence diagrams: in the basic model, "Strike oil?" and "Drill?" feed into Payoff; the perfect-information model adds an arc from "Strike oil?" into "Drill?".]
c. See the decision tree above or the decision tree model saved in the third worksheet.
EVPI = EMV(Clairvoyant) - EMV(Drill) = $19 K - $10 K = $9 K.
d. We have:

P(“good” | oil) = 0.95, P(oil) = 0.1
P(“poor” | dry) = 0.85, P(dry) = 0.9

We can find P(“good”) and P(“poor”) with the law of total probability:

P(“good”) = P(“good” | oil) P(oil) + P(“good” | dry) P(dry) = 0.95 (0.1) + 0.15 (0.9) = 0.23
P(“poor”) = 1 - P(“good”) = 1 - 0.23 = 0.77.

Now we can find

P(oil | “good”) = P(“good” | oil) P(oil) / [P(“good” | oil) P(oil) + P(“good” | dry) P(dry)]
               = 0.95 (0.1) / [0.95 (0.1) + 0.15 (0.9)] = 0.41
P(dry | “good”) = 1 - P(oil | “good”) = 0.59

P(oil | “poor”) = P(“poor” | oil) P(oil) / [P(“poor” | oil) P(oil) + P(“poor” | dry) P(dry)]
               = 0.05 (0.1) / [0.05 (0.1) + 0.85 (0.9)] = 0.0065
P(dry | “poor”) = 0.9935.
Now the decision tree is:

[Decision tree: Drill has EMV = $10K as before. Consulting the geologist gives EMV = $16.56K: with a "good" report (0.23), drill, since P(oil | "good") = 0.41; with a "poor" report (0.77), don't drill, since P(oil | "poor") = 0.0065.]

The influence diagram solution:

[Influence diagrams: the basic model has "Oil?" and "Drill?" feeding into Payoff; the with-geologist model adds a "Consult Geologist" node whose report informs the "Drill?" decision.]
EVII = EMV(Consult Geologist) - EMV(Drill) = $16.56 K - $10 K = $6.56 K.
Because EVII is less than $7000, which the geologist would charge, this is a case where the expected value
of the geologist’s information is less than what it would cost. Don’t consult her.
The corresponding influence diagram is shown in the fourth worksheet and the decision tree is in the fifth
worksheet. Note, the values in the spreadsheet have slightly different values due to round-off error.
Bayesian probability calculations are very sensitive to the significant digits carried through the calculations.
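A minimal Python sketch of part d carried at full precision. (It gives roughly $16.70K for the consult branch rather than the $16.56K obtained from the rounded tree probabilities, which illustrates the round-off sensitivity just noted; the don't-consult conclusion is unchanged.)

# Flip the probabilities with Bayes' theorem and compute EVII ($K).
p_oil, p_dry = 0.1, 0.9
p_good_oil, p_good_dry = 0.95, 0.15      # P("good" | oil), P("good" | dry)
payoff_oil, payoff_dry = 190, -10        # drill payoffs in $K

p_good = p_good_oil * p_oil + p_good_dry * p_dry          # 0.23
p_oil_good = p_good_oil * p_oil / p_good                   # about 0.413
p_oil_poor = (1 - p_good_oil) * p_oil / (1 - p_good)       # about 0.0065

def emv_drill(p):
    # EMV of the drill/don't-drill decision given P(oil) = p
    return max(p * payoff_oil + (1 - p) * payoff_dry, 0)

emv_no_info = emv_drill(p_oil)                                               # 10
emv_consult = p_good * emv_drill(p_oil_good) + (1 - p_good) * emv_drill(p_oil_poor)
print(emv_no_info, round(emv_consult, 2), round(emv_consult - emv_no_info, 2))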
12.8 a. Setting P(Strike oil) equal to 1.0 results in choosing to drill with an EMV equal to $190,000.
b. Setting P(Strike oil) equal to 0.0 results in choosing not to drill with an EMV equal to $0.
c. 0.9*(190000) + 0.1*(0) = $171,000.
d. Subtracting EMV(Drill) = $170,000 from the $171,000 in part c gives $1,000, which equals the EVPI. This
alternative way to think about or compute EVPI can be helpful to students. Knowing that we will strike oil
results in an EMV of $190,000 when we drill. Knowing that we will not find oil means we can avoid the
$10,000 drilling cost by not drilling, so our EMV is $0 in this case. From the model, 90% of the time we will
strike oil (EMV = $190,000) and 10% of the time we will not (EMV = $0). Thus, knowing whether there is
oil tells us whether to drill, and on average this information is worth $1,000.
12.9. a.
[Decision tree: the cost to make has outcomes $35.00 (0.25), $42.50 (0.25), $45.00 (0.37), and $49.00 (0.13), so EC(Make) = $42.395; EC(Buy) = $44.70. With perfect information about the cost to make the processor, the firm makes whenever the known make cost is below EC(Buy) = $44.70 and buys otherwise, giving EC(Information) = $41.725.]
[Influence diagrams: in the basic diagram, "Cost to make" and "Cost to buy" feed into the Cost node along with the "Make or Buy" decision; for information about making the processor, an arc is added from "Cost to make" into the "Make or Buy" decision.]
Note that because we are minimizing cost in this problem, we need to find the expected cost savings due to
the information. For that reason, EVPI = EC(Make processor) - EC(Information) = $42.395 - $41.725 =
$0.670 per unit.
This decision tree is shown in the first worksheet of the Excel file “Problem 12.9.xlsx.” Reference nodes
are used to simplify the representation of the tree by referring to the cost uncertainty node associated with
the “Buy” decision.
b.
[Decision tree: the cost to buy has outcomes $37.00 (0.10), $43.00 (0.40), $46.00 (0.30), and $50.00 (0.20), so EC(Buy) = $44.70. With perfect information about the cost to buy, the firm buys whenever the known buy cost is below EC(Make) = $42.395 and makes otherwise, giving EC(Information) = $41.8555.]
[Influence diagram: for information about the cost to buy the processor, an arc is added from "Cost to buy" into the "Make or Buy" decision.]
EVPI = EC(Make processor) - EC(Information) = $42.395 - $41.8555 = $0.5355 per unit.
This decision tree is shown in the second worksheet of the Excel file “Problem 12.9.xlsx.” Reference nodes
are used to simplify the representation by referring to the Cost uncertainty associated with the “Make”
decision.
c. [Influence diagram: for information about both costs, arcs are added from both "Cost to make" and "Cost to buy" into the "Make or Buy" decision.]
EVPI = EC(Make processor) - EC(Information) = $42.395 - $41.0805 = $1.3145 per unit
This decision tree is shown in the third worksheet of the Excel file “Problem 12.9.xlsx.” The decision tree,
however, in its complete form may have too many nodes for the student version (limit 50). The tree was
manually trimmed near the bottom to satisfy this constraint.
[Decision tree: perfect information about both the make and buy costs; for each of the 16 cost combinations the firm chooses the cheaper alternative, giving EC(Information) = $41.0805.]
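Since the make and buy cost distributions are given explicitly, all three EVPIs in this problem can be verified with a short calculation; a minimal Python sketch (treating the two costs as independent, as in the trees above):

# EVPI for information about the make cost, the buy cost, and both.
make = [(0.25, 35.00), (0.25, 42.50), (0.37, 45.00), (0.13, 49.00)]
buy  = [(0.10, 37.00), (0.40, 43.00), (0.30, 46.00), (0.20, 50.00)]

ec_make = sum(p * c for p, c in make)                      # 42.395
ec_buy  = sum(p * c for p, c in buy)                       # 44.700
best = min(ec_make, ec_buy)

ec_info_make = sum(p * min(c, ec_buy) for p, c in make)    # 41.725  (part a)
ec_info_buy  = sum(p * min(ec_make, c) for p, c in buy)    # 41.8555 (part b)
ec_info_both = sum(pm * pb * min(cm, cb)                   # 41.0805 (part c)
                   for pm, cm in make for pb, cb in buy)

print(best - ec_info_make, best - ec_info_buy, best - ec_info_both)
# 0.670, 0.5355, 1.3145 ($ per unit)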
12.10. The answer of course depends on the specific GPA and the distribution for the course score. Let’s
assume GPA = 3.0, and the student has the following 3-point discrete approximation for the course score:
Score   Probability   Salary
67      0.185         $35,756
82      0.630         $36,776
90      0.185         $37,320
The file “Problem 12.10.xlsx” contains the solution and allows for different assumptions. The decision tree,
including obtaining information, would be:
[PrecisionTree model: the score uncertainty is represented with the EP-T three-point approximation (scores 67, 82, and 90 with probabilities 0.185, 0.63, and 0.185). With information about the score, the student drops the course (worth $36,000) when the score would be 67 and takes it otherwise; without information, EMV(Take Course) = $36,688.]
[Influence diagrams: in the basic problem, "Score" and "Take Course?" feed into Payoff; for information about the score, an arc is added from "Score" into "Take Course?".]
EVPI = EMV(Information) - EMV(Take Course) = $36,773 - $36,688 = $85.
Note that in many cases (for example the illustrative solution to problem 8.11 in this manual), EVPI = 0
because taking the course deterministically dominates dropping the course.
12.11. a. Using the information about probabilities and costs from Problem 9.34, we have the following
decision tree:
[PrecisionTree model: P(0 or 1 machines broken) = 0.736, P(2 or more broken) = 0.264. Leaving the plant open costs $0 if 0 or 1 machines are broken and $15,000 if 2 or more are broken, an expected cost of $3,960; closing the plant costs $10,000. With perfect information, the manager leaves the plant open when 0 or 1 machines will break and closes it when 2 or more will, an expected cost of $2,640.]

EVPI = E(Cost of leaving plant open) - E(Cost of information) = $3,960 - $2,640 = $1,320
Note that because we are minimizing cost in this problem, we need to find the expected cost savings due to
the information. For that reason, the formula for EVPI appears reversed:
EVPI = E(Cost for leaving plant open) - E(Cost for information) = $3,960 - $2,640 = $1,320. This decision
tree is shown in the first worksheet in the Excel file “Problem 12.11.xlsx.”
b. EVPI = E(Cost for leaving plant open) - E(Cost for information) = $7,920 - $2,640 = $5,280. This
decision tree is shown in the second worksheet in the Excel file “Problem 12.11.xlsx.”
[PrecisionTree model: leaving the plant open now costs $30,000 if 2 or more machines are broken, so its expected cost is $7,920; closing the plant still costs $10,000, and the expected cost with perfect information is still $2,640.]
c. Now EVPI = 0 because leaving the plant open deterministically dominates closing the plant. The
manager would leave the plant open regardless of the number of broken machines. This decision tree is
shown in the third worksheet of the Excel file “Problem 12.11.xlsx.”
[PrecisionTree model: closing the plant now costs $20,000, while leaving it open costs $0 or $15,000 (expected cost $3,960). The manager leaves the plant open regardless of the number of broken machines, so the expected cost with information is also $3,960 and EVPI = 0.]
12.12. a. Liedtke’s EVPI regarding Texaco’s response would be zero because Liedtke would still
counteroffer $5 billion regardless of Texaco’s response.
[Influence diagram: Texaco reaction, Final Court Decision, the "Accept 2 billion?" decision, and the Pennzoil reaction decision feed into Payoff; for the value of information, an arc is added from Texaco reaction into the "Accept 2 billion?" decision.]

[Decision tree: accept $2 billion, or counteroffer $5 billion with EMV = 4.63 (court EMV = 4.56). With perfect information about Texaco's reaction (accepts, 0.17; refuses, 0.50; counteroffers $3 billion, 0.33), counteroffering $5 billion remains optimal in every branch, so EMV(Information) = 4.63.]
EVPI = EMV(Information) - EMV(Counteroffer $5 billion) = 4.63 - 4.63 = 0. Whenever the EVPI equals
zero, then knowing the outcome does not change which alternative is optimal. Counteroffering $5 billion is
optimal not knowing Texaco’s response and is optimal for each of Texaco’s 3 responses. This decision tree
is shown in the first worksheet in the Excel file “Problem 12.12.xlsx.”
b.i.
[Decision tree: Liedtke learns the court award before deciding. If the award will be $10.3 billion (0.2), counteroffer $5 billion (EMV = 9.4); if $5 billion (0.5), counteroffer $5 billion (EMV = 5); if $0 (0.3), accept the $2 billion on the table (counteroffering is worth only 1.84). EMV(Information) = $4.98 billion.]

EVPI = EMV(Information) - EMV(Counteroffer 5) = $4.98 billion - $4.63 billion = $0.35 billion.

This decision tree is shown in the second worksheet of the Excel file “Problem 12.12.xlsx.” Note that this assumes Liedtke is the only one who receives the information regarding the court award. See below.
b.ii.
[Decision tree: Liedtke counteroffers $5 billion first and learns the court award only after Texaco's response; the information can then be used only when Texaco counteroffers $3 billion (accept the $3 billion when the award would be $0). EMV = $4.93 billion.]
EVPI = EMV(Information) - EMV(Counteroffer $5 billion) = 4.93 - 4.63 = $0.30 billion. This decision tree
is shown in the third worksheet of the Excel file “Problem 12.12.xlsx.”
c. The earlier the information, the more valuable it is. In fact, we saw in part b(i) that, if the award turned
out to be zero, Liedtke would accept the $2 billion that is on the table. Obtaining the information later does
not allow him to use the information in this way.
NOTE: The solution to part b(i) above is based on the assumption that only Liedtke obtains information
about the court award, and not Kinnear. If Kinnear also finds out what the court award is to be, then
Texaco’s reaction would change. The decision tree is shown below:
[Decision tree: if Liedtke and Kinnear both obtain the court information, Texaco accepts the $5 billion counteroffer only when the award would be $10.3 billion or $5 billion; when the award would be $0, Liedtke's best choice is to accept the $2 billion. EMV = $4.10 billion.]
In this case, EVPI is actually negative (4.10 - 4.63 = -$0.53 billion). The fact that information has negative
value here appears to be at odds with the text, which indicates that information can never have negative
value. In fact, that statement implicitly assumes that the decision maker is not playing a strategic game
against some other decision maker who could also take advantage of the information. If such is the case,
then, as indicated by the solution to this problem, it is indeed possible for information to have negative
value. In this case, it is conceivable that Liedtke might be willing to take some (costly) action to prevent the
information about the court award from being revealed!
12.13.
[Decision tree: Liedtke learns both Texaco's reaction and the court award before deciding. Counteroffering $5 billion is optimal except when the award would be $0, in which case Liedtke accepts the $2 billion (if Texaco would refuse) or the $3 billion counteroffer (if Texaco would counteroffer). EMV(Information) = $5.23 billion.]
EVPI = EMV(Information) - EMV(Counteroffer $5 billion) = 5.23 - 4.63 = $0.60 billion.
Note that 0.60 > 0 + 0.35, the sum of the two EVPIs for the individual pieces of information in 12.12a and
12.12b(i). The information about Texaco's reaction alone was worth nothing. However, this
information along with the court information helps Liedtke refine his strategy: now he knows exactly what
to do in each case, especially when the court award is 0.
This decision tree is saved in the Excel file “Problem 12.13.xlsx.”
12.14.a.
[Decision tree (losses in $K, P(Freeze) = P(No Freeze) = 0.5): Do nothing loses 50 or 0, E(Loss) = 25; Burners lose 22.5 or 5, E(Loss) = 13.75; Sprinklers lose 29.5 or 2, E(Loss) = 15.75. With perfect weather information, the farmer uses burners if a freeze is coming and does nothing otherwise, E(Loss) = 11.25.]

[Influence diagrams: Weather, Burner Loss, and Sprinkler Loss feed into Loss along with the Action decision; for information about the weather, an arc is added from Weather into Action.]
EVPI = E(Loss | Burner) - E(Loss | Information) = 13.75 - 11.25 = $2.5 K = $2500.
This decision tree is shown in the first worksheet of the Excel file “Problem 12.14.xlsx.”
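A minimal Python sketch of the part a calculation, using the losses (in $K) from the tree above:

# EVPI for the frost-protection decision: with perfect weather information,
# the farmer picks the lowest-loss action for each weather outcome.
p_freeze = 0.5
losses = {                       # (loss if freeze, loss if no freeze), $K
    "do nothing": (50.0, 0.0),
    "burners":    (22.5, 5.0),
    "sprinklers": (29.5, 2.0),
}

expected = {a: p_freeze * f + (1 - p_freeze) * nf for a, (f, nf) in losses.items()}
best_no_info = min(expected.values())                                    # 13.75 (burners)
e_loss_info = (p_freeze * min(f for f, _ in losses.values())
               + (1 - p_freeze) * min(nf for _, nf in losses.values()))  # 11.25
print(best_no_info, e_loss_info, best_no_info - e_loss_info)             # EVPI = 2.5 ($2,500)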
b. The uniform distributions in this part of the problem are approximated as discrete distributions using the
extended Pearson-Tukey method. Influence diagram:
[Influence diagram: for information about the losses, arcs are added from Burner Loss and Sprinkler Loss into the Action decision.]
EVPI = E(Loss | Burners) - E(Loss | Information) = 13.75K - 13.741K = $9
This decision tree to find the value of information is very large and exceeds the student version limit of 50
nodes. To represent it, we have split the decision tree into 2 separate trees: without information (in the
second worksheet) and with information (in the third worksheet). Also, the tree with information still had
too many nodes, so the uncertainty regarding the freeze is included in a formula in the consequence cell of
each branch.
c. Better weather forecasts appear to be more important than knowing more about the specific losses from
burners and sprinklers. In fact, the only reason that EVPI in part b is positive is that the farmer would set
sprinklers out instead of burners if he learned that the sprinkler loss is low and the burner loss is high.
(However, this may actually be an unlikely scenario, given that the levels of losses in the two cases may be
caused by the same factors. See the discussion for problem 5.9.)
12.15. Of course, the only available alternative is that the events are dependent. In this case, depending on
which chance node is placed before the decision node in the tree, the branches for the subsequent one must
display the appropriate conditional probabilities. Bayes’ theorem may be used to calculate the required
conditional probabilities.
Having dependent events is an interesting possibility, because knowing the outcome of one event perfectly
will reveal some (imperfect) information about the other event. That information is manifested as the
conditional probabilities in the decision tree.
Case Study: Texaco—Pennzoil Revisited
1. Clearly, the best thing that could happen would be for Texaco to accept a $5 billion counteroffer. This
would give a sure consequence of $5 billion, much better than an expected value of $4.63 billion and some
uncertainty.
2, 3. Influence diagram:
[Influence diagram: Texaco Reaction, the court award, the "Accept $2 billion?" decision, and the Pennzoil reaction decision feed into Payoff.]

Decision tree:

[Decision tree: the counteroffer-$5-billion branch has EMV = 4.63 as before (court EMV = 4.56). If Liedtke could control Kinnear's reaction so that Texaco accepts the $5 billion counteroffer, the counteroffer would yield a sure $5 billion, EMV = 5.]
The expected value of wizardry is
EMV(Control) - EMV(Counteroffer $5 billion)
= $5 billion - $4.63 billion
= $0.37 billion.
Liedtke could afford to spend up to $370,000,000 to control Kinnear! That’s a lot of money to spend on an
activity that may be difficult to justify on ethical grounds. This decision tree is saved in the Excel file
“Texaco Case Revisited.xlsx.”
Case Study: Medical Tests
Would you be willing to pay for these tests? A good decision-analysis approach would suggest not. Why
pay for information that will not affect your treatment? The doctor’s motivation, however, is likely to be
that of protecting himself. Many courts, in considering malpractice cases, consider whether the doctor
followed “standard medical practice” during treatment. If obtaining a certain test is considered to be
“standard”, then the doctor might order it as protection in the event of a future lawsuit. You, on the other
hand, are primarily interested in your own health and the current treatment. If you are also interested in
continuing as a patient for your physician, you may want to pay for the “unnecessary” tests.
Case Study: DuMond International, Part II
The base case decision tree is shown in the first worksheet of the Excel file “Dumond Case Part II.xlsx.”
The required Bayesian calculations for Sales and Delay are discussed in the second worksheet. Then the
third worksheet has the following decision tree to determine the value of information for each event
individually.
It turns out that EVPIs for each of the three uncertain events, considered individually, are zero. Thus, there
would appear to be no incentive to learn more about any of these uncertainties. (Note that it is necessary to
flip the probabilities P(Sales | Delay) and P(Delay) for the analysis where information is obtained about
sales but not about delay.)
[Decision trees: introducing the new product has EMV = 1.1046, versus EMV = 0.76 for the old product. The trees with perfect information about the delay, about the ban, and about sales each also have EMV = 1.1046, so each individual EVPI is zero. The conditional probabilities shown in these trees come from flipping the assessed probabilities with Bayes' theorem, as discussed in the second worksheet.]
[Influence diagrams: the basic diagram has Sales level, Delay, Ban old product?, and the New Product? decision feeding into Value; each information diagram adds an arc from the relevant uncertainty (ban, delay, or sales) into the New Product? decision.]
However, it is also possible to look at combinations of the information. These influence diagrams are
shown in the fourth and fifth worksheets of the Excel file “Dumond Case Part II.xlsx.” To calculate the
EVPIs, add influence arcs from the uncertainties into the decision node. Considered jointly, obtaining
information about both the delay and sales is worth $1,720. Likewise, obtaining information about both
sales and the ban is worth $52,780. The EVPI for all three is also $52,780, indicating that, given
information about both sales and ban, information about the delay is worthless.
CHAPTER 13
Real Options
Notes
The inclusion of this chapter in the 3rd edition reflects changes in the way the world now looks at strategic
opportunities. Mounting turbulence in markets and economies, the breakneck pace of change, and the
extent to which old approaches are blown to bits and businesses reinvented are responsible for real options
being a hot topic. Valuations of business opportunities now must account for not just where the business is
now, but where the business is going to go and the optionality it possesses. The extent to which a
management team can create optionality and be ready to capture the value that stems from it is key to its
value now and in the future.
The study of real options is the marriage of strategic intuition and analytical rigor, and the chapter intends
to develop both. On the intuition side, a most important skill is to understand options thinking--one can
only gain from options if one can imagine what the options might be and how to set up ownership of the
options. The analysis is based on a methodical framing of the underlying uncertainties and how the option
choice can serve to reduce the uncertainty and add value.
The relation of the evaluation of real options to decision analysis is clear. Options are downstream
decisions. A decision tree can be used to describe and evaluate an option. The downstream alternatives that
follow uncertainty must be included in the tree. The tree facilitates evaluation of how the downstream
decisions would be made, and how much one would gain from owning the opportunity to make those
choices. The analysis may be based on a Monte Carlo simulation, which can reflect a richer model than a
simple tree of the dynamic progression of the underlying variables that influence the downstream decisions
and the value obtained in the aftermath.
Several ideas come from real options thinking:
• Uncertainty only hurts you if you have no options
• Greater uncertainty  higher value of options
• Create the option—rather than hope it will appear
• Option value is higher when
– Chances are higher that it will be exercised when:
• There is more uncertainty.
• The investment required to exercise the option is lower.
• There is a longer window in which to exercise.
– Cash flows (if exercised) are higher.
The chapter builds on the previous chapters. It reinforces that flexibility is valuable, but that it comes at a
cost which may never be recovered. Without dynamic analysis, managers tend either to assign zero value to
flexibility (by ignoring it) or to assign "infinite" value to it by choosing more flexible projects for "strategic"
reasons. Options thinking (with dynamic analysis) enables managers to develop insights about the value of
flexibility relative to its costs. Discounted cash flow valuation with no options will undervalue companies,
sometimes significantly. Options valuation, if one is not careful, may overvalue companies, particularly if it
includes the value of options that are not exclusively yours, e.g., if anyone may be able to exercise the
option ahead of you. So the appropriate analysis will reflect that you may have to earn ownership of the real
options, and that you must recognize how the competitive moves of others affect your valuation.
Topical cross-reference for problems
Double booking              13.1
Delay/learn option          13.4, 13.7, 13.8, 13.10
Exercise window             13.1
Exit option                 13.1, 13.7, 13.13
Expand option               13.1, 13.7, 13.9, 13.11
Lease versus buy            13.1
Option value, properties    13.2, 13.12
Staged investment           13.5
Switch option               13.1, 13.3, 13.4, 13.7
Trigger strategy            13.1, 13.8
Solutions
13.1.a. Double Booking
How does the option value arise? The value comes from the ability to take the train if the plane does
not fly, rather than missing a trip that is worth a lot of money to you.
What constitutes the option cost? This option costs you nothing (except for the 15 minutes of your
time it takes to make the reservation) because the train reservation is fully refundable.
What constitutes the exercise window? You must use the train reservation on the same day as the
plane reservation.
What constitutes the trigger strategy? The trigger in this case is simply the weather and how it
affects the plane flying. If the plane flight is cancelled this triggers use of the option.
How can you estimate the value it gives you? The price you receive for exercising this option is the
amount the trip is worth minus the train ticket. But it will cost you the extra time it takes the train to
get to the destination versus how long it would have taken the plane to fly there. Therefore the final
value would be the value of the trip minus the price of the train ticket and minus the value of the extra
time to take the train and the extra time it takes to reserve the train ticket.
EXAMPLE:

[Decision tree without a train reservation: Reserve Plane Ticket (-$200); the plane flies (0.8) and the $500 trip nets $300, or weather cancels the flight (0.2) and the $150 refund leaves -$50. EMV = $230. Doing nothing pays $0.]

[Decision tree with a train reservation: as above, except that if the flight is cancelled you take the train, netting $350, rather than staying home at -$50. EMV = $310.]
Assumptions in the above decision trees: It costs $200 to make the plane reservation, and if the plane
doesn't take off you get $150 back. The trip is "worth" $500 to you; this includes all related values,
such as the value of work it could create in the future. The train ticket costs $100. There is a 20%
chance that the plane will not fly. You must have a train reservation because trains sell out weeks
before they leave.
Therefore the value of the train reservation is $80: the difference between the first decision tree,
where you have no train reservation and stay home if the plane does not fly, and the second decision
tree, where you have a reservation and take the train. This does not account for the time it takes to
reserve the ticket or the extra time the train takes over flying. If the value of the time spent making the
reservation, plus 20% of the value of the extra travel time on the train (it is only incurred when the
flight is cancelled), totals less than $80 to you, then purchasing the train reservation is worthwhile.
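A minimal Python sketch of the two trees above, using the stated assumptions:

# EMV with and without the (fully refundable) train reservation.
trip_value, plane_cost, refund, train_cost, p_cancel = 500, 200, 150, 100, 0.2

# Without a train reservation: fly and earn the trip, or stay home with the refund.
emv_no_train = (1 - p_cancel) * (trip_value - plane_cost) + p_cancel * (refund - plane_cost)

# With a train reservation: if the flight is cancelled, take the train instead.
take_train = trip_value - plane_cost + refund - train_cost
emv_train = ((1 - p_cancel) * (trip_value - plane_cost)
             + p_cancel * max(take_train, refund - plane_cost))

print(emv_no_train, emv_train, emv_train - emv_no_train)   # 230, 310, 80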
b. Lease Then Buy
How does the option value arise? Renting on a month-to-month basis gives you the option every
month to move to a different location. The value arises from the fact that once you move to the new
location, and you’ve lived there a while, you’ll have a better understanding of the best places to live.
Also, you could probably find a bargain on a better house if you have longer to hunt for a house to buy.
You could also save a significant amount of money if you wait until the mortgage rates go down.
What constitutes the option cost? The cost of this option is the total amount of rent money you spend
before buying a house. The rent money will not be put toward the mortgage of a house, and you can't
write off any of the rent for tax purposes the way you can write off the interest on a mortgage.
What constitutes the exercise window? The exercise window is virtually any time in the future. You
can keep renting forever or you can stop any month you like.
What constitutes the trigger strategy? The trigger strategy would be both financial and emotional.
Buying a house is about emotions as well as financial consequences, so you might use a rule such as
"when you find a house you love, buy it," or "when your preferences have stabilized and you see a
house as good as any you have seen, buy it." You could also wait until market conditions (house prices
and mortgage rates) were more in your favor, but you don't know how long you would have to wait for this.
How can you estimate the value it gives you? You would have to compare the house that you could
have purchased before moving to the house you actually purchase. You would need to value any
financial savings from any reductions in mortgage rates or housing prices. You would also need to put
a value on intangibles such as buying in a better location. Then subtract the total cost of rent money
spent.
c. Contingency
How does the option value arise? The option gives you value via the opportunity to go to areas in
which you could catch malaria, while protecting you from getting sick. It avoids unnecessary side
effects of the medicine if you exercise the option only after you know the prevalence of malaria where
you end up going.
What constitutes the option cost? The cost of this option is the price of the pills plus the cost of
visiting the doctor, including the value of your time it takes to go to the doctor and purchase the pills.
What constitutes the exercise window? The exercise window is the three months that you are on
vacation in South East Asia. This could be even longer if the pills or prescription have an expiration
date far into the future, such that you could go to a malaria-exposed area again and could use the same
pills and/or prescription.
What constitutes the trigger strategy? When you decide to go into a malaria-plagued area, you will
then begin taking the pills; this is your trigger strategy.
How can you estimate the value it gives you? You would need to put a value on visiting the areas
where you might get malaria. If these areas are places that you really would like to visit, it could be
worth a lot to you. You would also need to ask how much it is worth not to get malaria, which is
probably a great deal to a healthy person.
d. Capacity Planning
How does the option value arise? By buying 3 extra tickets, this gives you the option of inviting your
partner and a couple of friends or family members. This gives you value when you are able to invite
those people to go with you.
What constitutes the option cost? The cost is the price you pay for the extra 3 tickets. You will
probably get some or all of this money back if your partner, family, and/or friends value the tickets at
more than their price. You may even get a premium for the tickets since the concert is sold out: if no
one you invite wants to go, you can sell the tickets. So your final cost would be simply your time and
the cost of tying up your money for a limited amount of time.
What constitutes the exercise window? These options can be exercised at any time on or before the
date of the concert. The option will be exercised when any person accepts the ticket and possibly pays
you back for the ticket.
What constitutes the trigger strategy? You may have a price in mind that you would like to recoup
on the tickets. Your trigger strategy may be to sell the ticket if you get at least that price. Otherwise,
you offer to friends and family who may like to go.
How can you estimate the value it gives you? You would have to put a value on pleasing your
partner, family and friends. You may also get value by selling the tickets to someone else at more than
face value (only if it is legal, of course.)
e. Sequencing
How does the option value arise? The option value arises from being able to work on the video if your classmate does not show up on time. It also frees up the time you would otherwise have scheduled separately for the video, because you do not waste any time waiting for your classmate to arrive.
What constitutes the option cost? The cost is the value of your time plus the rental cost of the equipment if the video components do not arrive in time, because in that case you would have to rent the equipment again.
What constitutes the exercise window? You will exercise this option during the scheduled four hours
for the two projects.
What constitutes the trigger strategy? You will trigger the option when or if your classmate doesn’t
show up on time and if you receive the video components on time.
How can you estimate the value it gives you? The value you receive is the value of your time that
you will save by doubling up both projects into the same elapsed time. More specifically it will be the
time that you can still work while waiting for your classmate to arrive, instead of waiting around doing
nothing.
13.2. Always a Non-Negative Incremental Value. Yes, an option will always have a non-negative incremental value. If exercising the option would not add value, you simply would not exercise it, in which case its incremental value is zero.
13.3. The Hybrid
a. Two engines make for a much more efficient automobile by giving the vehicle the option of which engine to use. When the vehicle needs power, it switches on the gas engine; when the car is idle or moving slowly, the electric motor is used. This saves gasoline while idling and moving slowly. The vehicle also automatically recharges the batteries when braking or going downhill.
b. To make the car worth it, you must be able to save enough in gas money to pay for the additional price
of having 2 engines and the batteries. Also note that there may be tax incentives for buying hybrid cars. In
areas where there is a High Occupancy Vehicle lane on the highway, there is often a law that allows hybrid
owners to use that lane even when only 1 person is in the car.
13.4. Ruined Mound Properties
If the first tract of land turns out to be profitable, then we will exercise the option on the second tract of
land. You can determine the profitability with the triggering function: (Volume of Ore in Tons * lbs of
Copper in each Ton of Ore * Price Per lb of Copper) – Cost of Processing the Ore.
However to be more precise in your triggering strategy you might consider:
1. Any synergies you will obtain with the first project. These would reduce the cost of processing the
ore.
2. Trends in the price of copper. If the project is profitable now, but prices of copper are falling, this
would probably ultimately cause the project to be unprofitable. It would be helpful to do a
regression on the price of copper against time to find the trend line to determine the future price.
3. The variability of the amounts in the triggering function. For example, even if you know the exact
amount of ore in the first tract of land, this will not give you the exact amount in the second tract.
It might have a range of, say, plus or minus 15%.
Taken together, these considerations should give you a good trigger value for the formula above. Note that the profit level associated with the trigger could even be negative, because of synergies that will play out in the future and fluctuations in copper prices.
13.5. Staged Investment
Very simply, staged payments offer a huge advantage to the acquirer: if at any point the drug fails to gain FDA approval and has to be scrapped, the acquirer will have paid only the earlier stage payments and is not required to pay any more. This can add tremendous value to the contract from the acquirer's perspective. We could even determine that value by drawing a decision tree, estimating the probability of approval at each stage, and then discounting the possible savings back to the beginning of the contract, giving us the incremental amount of money the acquirer would be willing to pay for such a contract.
13.6. Nan and Dan vs. the Monika Worm
The value to Pierre for the sequel with $10K down and a $25K profit in two years is $3K (rounded). You
can see this in the first decision tree that follows.
The value to Pierre when reducing the option price by 20% (from $10K to $8K) is $5K. This is the second
decision tree that follows.
The value to Pierre when reducing the advance by 20% (from $50K to $40K) is $7K. This is the third
decision tree that follows.
Therefore Pierre should try to reduce the advance price if possible because he would gain the most money
from it.
Pierre can increase the amount he pays for these options by at most the amounts calculated for the value of
each option listed above. So, if he is able to reduce the advance price by 20%, then he can raise the price he pays for this option by at most $7K (from $10K to $17K). He probably wouldn't want to raise it much, though, since he wants to keep as much potential profit as he can.
(Extra Credit)
To analyze the value of a series of novels, you would just add each potential sequel into your decision tree.
You could assume that a 3rd novel will only be written if a 2nd novel was completed so you could add the
node for the 3rd novel after the “sequel written” line, and so on for as many novels as you think might be
written. You could also assume that the chance of completing each successive novel would be reduced. The
farther you go into the future the more things that could occur that would stop the author from completing
that book.
Value of Book without Option (cash flows discounted at 10% per year):
                        Present      Year 1     Year 2
Advance to Author       ($50,000)    $0         $0
Rights to Sequel        $0           $0         $0
Paid from Publisher     $70,000      $0         $0
Cash Flow               $20,000      $0         $0
Present Value           $20,000.00

Value of Book with Option for Sequel:
                        Present      Year 1     Year 2
Advance to Author       ($50,000)    $0         ($50,000)
Rights to Sequel        ($10,000)
Paid from Publisher     $70,000                 $75,000
Cash Flow               $10,000      $0         $25,000
Present Value           $30,661.16
Potential profit of the option: $20,661.16 before the $10,000 payment; $10,661.16 after.

Value of Book with Option for Sequel (20% Reduction in Option Price):
                        Present      Year 1     Year 2
Advance to Author       ($50,000)    $0         ($50,000)
Rights to Sequel        ($8,000)
Paid from Publisher     $70,000                 $75,000
Cash Flow               $12,000      $0         $25,000
Present Value           $32,661.16
Potential profit of the option: $20,661.16 before the $8,000 payment; $12,661.16 after.

Value of Book with Option for Sequel (20% less for Advance for Sequel):
                        Present      Year 1     Year 2
Advance to Author       ($50,000)    $0         ($40,000)
Rights to Sequel        ($10,000)
Paid from Publisher     $70,000                 $75,000
Cash Flow               $10,000      $0         $35,000
Present Value           $38,925.62
Potential profit of the option: $28,925.62 before the $10,000 payment; $18,925.62 after.

Decision trees for the option of purchasing the sequel (values rounded to $1,000s):
First tree (option price $10K): Book Written (0.6) pays $21K, net $11K; Book Cancelled (0.4) pays $0, net -$10K; EV about $3K. Just purchasing the 1st book is worth $0.
Second tree (option price reduced 20% to $8K): Book Written (0.6) nets $13K; Book Cancelled (0.4) nets -$8K; EV about $5K.
Third tree (advance reduced 20%, option price $10K): Book Written (0.6) pays $29K, net $19K; Book Cancelled (0.4) nets -$10K; EV about $7K.
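For instructors who want to check these figures outside the spreadsheet, here is a minimal Python sketch. It assumes the 10% annual discount rate implied by the present values above and uses the rounded payoffs shown in the decision trees; it is an illustration, not the original model.

# Sketch of the Problem 13.6 numbers (assumes a 10% annual discount rate).
def pv(cash_flows, rate=0.10):
    # cash_flows[t] occurs at the end of year t; t = 0 is the present
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

book_alone  = pv([-50_000 + 70_000, 0, 0])                          # 20,000
with_option = pv([-50_000 - 10_000 + 70_000, 0, -50_000 + 75_000])  # about 30,661

profit_after_payment  = with_option - book_alone                    # about 10,661
profit_before_payment = profit_after_payment + 10_000               # about 20,661

# Decision-tree values, using the rounded payoffs (in $1,000s) from the trees above
def option_ev(payoff_if_written, option_price, p_written=0.6):
    return p_written * (payoff_if_written - option_price) + (1 - p_written) * (-option_price)

print(option_ev(21, 10))   # about 2.6, i.e. roughly $3K
print(option_ev(21, 8))    # about 4.6, i.e. roughly $5K
print(option_ev(29, 10))   # about 7.4, i.e. roughly $7K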
13.7. Server Capacity
The following spreadsheet provides some initial calculations, which are applied in the tree that follows.
Calculations:
Price per Acct:      $30.00 per month
Var Cost per Acct:   $13.50 per month
Overhead (Fixed):    $9,000.00 per month

# of Accounts    Yearly Revenue    Yearly Var Costs    Yearly Overhead    Yearly Profit (Loss)
400              $144,000.00       $64,800.00          $108,000.00        ($28,800.00)
800              $288,000.00       $129,600.00         $108,000.00        $50,400.00
1200             $432,000.00       $194,400.00         $108,000.00        $129,600.00
a. Which server? The GZ1450 has a higher expected value than the GZ1000 by $7,800 ($28,200 versus $20,400), so Shu Mei should purchase the GZ1450. The GZ1000 has a capacity of only 80 connections, so Shu Mei can have at most 800 accounts with that server. Therefore, if she buys the GZ1000 and has 800 accounts in the first year, she would not be able to add any more accounts in the second year.
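A minimal Python sketch of the part a calculation is below. The yearly profits come from the table above; the server costs ($21,000 for the GZ1000, $33,000 for the GZ1450) and the account-growth probabilities are read off the decision tree that follows, so treat the sketch as illustrative.

# Sketch of the part a EMVs (yearly profits from the table above, server costs from the tree).
profit = {400: -28_800, 800: 50_400, 1200: 129_600}

def emv(server_cost, capacity_accounts):
    # First-year accounts are 400 or 800 with probability 0.5 each; second-year
    # accounts either stay the same or grow by 400, limited by server capacity.
    total = 0.0
    for first_year in (400, 800):
        for second_year in (first_year, first_year + 400):
            accounts = min(second_year, capacity_accounts)
            total += 0.25 * (profit[first_year] + profit[accounts] - server_cost)
    return total

print(emv(21_000, 800))     # GZ1000: 20,400
print(emv(33_000, 1200))    # GZ1450: 28,200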
a. GZ1000 or GZ1450
(Decision tree for part a. First-year accounts are 400 or 800 with probability 0.5 each; second-year accounts either stay the same or grow by 400, with probability 0.5 each, limited by server capacity. Two-year values equal the two yearly profits less the server cost, $21,000 for the GZ1000 and $33,000 for the GZ1450. The GZ1000 branch has an EMV of $20,400; the GZ1450 branch has an EMV of $28,200.)
b. Expand Option. Yes, this changes her decision. She would want to purchase the GZ1000 for the first year and then expand by purchasing another GZ1000 in the second year if the first year brings 800 accounts. This gives her an expected value of $36,000, which is $7,800 greater than purchasing the GZ1450. This would work very well for Shu Mei because she would also have more total capacity for year 2: 160 connections rather than 140.
b. Expand Option
(Decision tree for part b. If the first year brings 800 accounts, purchasing another GZ1000 for $8,400 gives two-year values of $150,600 with 1,200 second-year accounts or $71,400 with 800, an expected value of $111,000, versus $79,800 for not expanding. The GZ1000-with-expansion strategy has an EMV of $36,000; the GZ1450 branch remains at $28,200.)
c. Switch Option. This is an even better option for Shu Mei. The expected value is now $37,800, which is $9,600 more than just purchasing a GZ1450 and $1,800 more than the expand option.
c. Switch Option
(Decision tree for part c. If the first year brings 800 accounts, trading the GZ1000 in for a GZ1450 at a net cost of $4,800 gives two-year values of $154,200 with 1,200 second-year accounts or $75,000 with 800, an expected value of $114,600, versus $79,800 for not trading in. The GZ1000-with-switch strategy has an EMV of $37,800; the GZ1450 branch remains at $28,200.)
d. Exit Option. No, this wouldn't change her original decision, because the expected value of the GZ1450 ($29,400) is greater than the expected value of the GZ1000 ($20,400), so she would still want to purchase the GZ1450. However, the expected value of the GZ1450 does increase with the Exit Option, from $28,200 to $29,400, an increase of $1,200.
Note: This analysis assumes that the Exit option was not combined with one of the previous options. In
some cases when an Exit option is combined with one or more of the other options, an even better value
could be obtained. For example, you can see that when the Exit option is combined with the previous
Switch option the expected value remains the same as the Switch option.
d. Exit Option
(Decision tree for part d. After a 400-account first year, exiting is worth -$41,400 with the GZ1000 (salvage $8,400) versus an expected -$39,000 for continuing, so she would continue and the GZ1000 EMV stays at $20,400. With the GZ1450, exiting is worth -$48,600 (salvage $13,200) versus an expected -$51,000 for continuing, so she would exit and the GZ1450 EMV rises to $29,400.)
Note: Exit and Switch Options Together
(Decision tree combining the Exit and Switch options. On the GZ1000 path the exit branch is never used, because continuing after a 400-account first year is worth an expected -$39,000 versus -$41,400 for exiting, so the EMV remains $37,800, the same as the Switch option alone. On the GZ1450 path, exiting after a 400-account first year (-$48,600) again gives an EMV of $29,400.)
e. Delay Option. Yes, this is an attractive option. The expected value of $39,600 is better than that of all the other options. You could also add the alternative of not going into business at each decision node; however, this would not change anything, since you make money by staying in business in each instance.
Note: When any of the options are combined, you could come up with different values, but most combinations will not change the decision.
e. Delay Option
(Decision tree for part e. Shu Mei observes first-year demand before buying a server. After a 400-account first year, buying the GZ1000 (at the $8,400 cost shown in the tree) has an expected value of $2,400, versus -$2,400 for the GZ1450; after an 800-account first year, buying the GZ1450 (at the $13,200 cost shown) has an expected value of $76,800, versus $42,000 for the GZ1000. The overall expected value is 0.5($2,400) + 0.5($76,800) = $39,600.)
13.8. ConExoco Petroleum
The variables needed to value the option of putting a deposit down on another oil platform could be:
• The prices of oil and gas
• The amount of oil and gas found in nearby wells
• The total cost of drilling nearby wells
• The Hurdle Rate for ConExoco
• The value of the option would be based on the after-tax cash flows from the wells, including the
costs of drilling
The triggering formula might be:
• Price of Oil and Gas * Anticipated Volume of Oil and Gas – Cost of Drilling the New Well
If the level of the triggering formula is not high enough, then cancel the platform and forfeit the deposit.
13.9. Hopfer Pharmaceutical
You can treat this option as a financial call option, which is the right to buy the larger project in the future
at the exercise price.
If you plug the variables into the Black-Scholes model, you will come up with a call value of $25.5 million. Since this option value is not enough to offset the first project's negative NPV of -$28 million, you should not take this project.
European Option: Basic (No Dividend) Model
NPV of Project 1:                              ($28)
S: underlying asset price (Project 2):         $270.000
X: exercise price (future payment in 4 years): $750.000
Present value of future payment:               $512.26
Rf: risk-free rate:                            10.00%
σ: annualized volatility (Project 2):          35.00%
t: years to expiration:                        4.000

Call Value:   $25.5034     Call Delta (hedge ratio):  0.295
Using put-call parity:
Put Value:    $258.2434    Put Delta (hedge ratio):  -0.705
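A short Python sketch of the Black-Scholes call value is below. It assumes the 10% risk-free rate is continuously compounded, which is consistent with the call value reported above; it is an illustration rather than the spreadsheet used for the solution.

# Black-Scholes call value for Problem 13.9 (rate treated as continuously compounded).
from math import log, sqrt, exp, erf

def norm_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def bs_call(S, X, r, sigma, t):
    d1 = (log(S / X) + (r + 0.5 * sigma ** 2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    call = S * norm_cdf(d1) - X * exp(-r * t) * norm_cdf(d2)
    return call, norm_cdf(d1)          # call value and delta (hedge ratio)

call, delta = bs_call(S=270, X=750, r=0.10, sigma=0.35, t=4)
print(round(call, 1), round(delta, 3))  # roughly 25.5 and 0.295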
13.10. Underwater Partnership
You could set up a model that would determine the expected value of this property. In the model you could
include revenue and cost variables and their respective uncertainties.
The model should show that there is great potential for profit for this property since we are currently in a
down cycle in the economy and indicators are showing the potential for improvement. Experts could be
used to assess the likelihood of a turn-around in the local economy, which could be built into the model.
13.11. Animal Testing for Coated Stent
This company should set up a decision tree that includes the likelihood of approval at each step in the FDA process and each decision, such as whether to split the technology into two and seek separate approvals. The decision tree will then give the company the expected value of each available option, and the company should choose the option with the greatest expected value.
13.12. Switch and Defer/Learn Together
When the switch and defer/learn options are considered together, the decision tree is as follows. Note that
the switch option by itself does as well as both options on an EMV basis and on the basis of the resulting
risk profile. In other words, once you have incorporated the switch option, the defer/learn option, which is
addressing the same risk, does not add incremental value.
(Decision tree for 13.12. Apply for license (cost 10): if the license is awarded (0.6), launching the service pays 50, for a net of 40 (not launching nets -10); if no license (0.4), switching technology pays 20, for a net of 10. EMV = 0.6(40) + 0.4(10) = 28. Doing nothing is worth 0.)
13.13. Acquisition
EBITDA model for Honest Communications Inc. (HCI). Assumptions:
Increase (Decrease) in Customers per Quarter:        -0.6%      Quarterly Volatility of Customers:     0.07
Inc (Decr) in Mins Used per Customer per Quarter:    -1.4%      Quarterly Volatility of Mins Used:     0.10
Inc (Decr) in $ per Minute Used per Quarter:         -1.2%      Quarterly Volatility of $ per Minute:  0.09
COGS as a % of Total Revenue:                         56%       EBITDA Multiple:                       7.45
SG&A as a % of Total Revenue:                         27%

Q4 2004 baseline ($ in millions except customers, minutes, and price): # of Customers 17.2, Total Mins used per Customer 259.0, $ per Minute 0.132, Total Revenue 588.0, COGS 329.3, SG&A 158.8, Quarterly EBITDA 100.0, Effective Yearly EBITDA (Q x 4) 399.9, HCI Value 2,979, Synergies 616, Deal Value 3,595.

One sampled path for 2005 (quarterly EBITDA, $ millions): Q1 97.8, Q2 89.3, Q3 98.1, Q4 91.5. For this path the Q4 2005 figures are Effective Yearly EBITDA 366.0, HCI Value 2,726, Deal Value 3,342; Exit Clause threshold 80, Exit (1 = yes, 0 = no) = 0, Value of Deal w/ Option (Q4-05) 3,342.4, HCI Deal Value w/ Option 3,000.0. (The full quarterly detail for Customers, Minutes, Price, Revenue, COGS, and SG&A appears in the original spreadsheet.)

Formulas, Q1 2005:
=CB.Triangular(16.5,17.1,18.3)
=CB.Triangular(250,259,269)
=CB.Triangular(0.127,0.131,0.139)
=#_Custs*Tot_Mins*$_per_Min
=Tot_Rev*COGS%_of_Tot_Revenue
=Tot_Rev*SG&A%_of_Tot_Revenue
=Tot_Rev-COGS-SG&A
=EBITDA*4
=Effective_Yearly_EBITDA*EBITDA_Multiple
=616
=HCI_Value+Synergies

Formulas, Q4 2005 (Q2 and Q3 are analogous):
=Q3+Q3*(Incr_Custs+Volatility_Custs*CB.Normal(0,1))
=Q3+Q3*(Inc_Min_Used+Volatility_Mins*CB.Normal(0,1))
=Q3+Q3*(Inc_$/Min+Volatility_$/Min*CB.Normal(0,1))
=#_Custs*Tot_Mins*$_per_Min
=Tot_Rev*COGS%_of_Tot_Revenue
=Tot_Rev*SG&A%_of_Tot_Revenue
=Tot_Rev-COGS-SG&A
=EBITDA*4
=Effective_Yearly_EBITDA*EBITDA_Multiple
=616
=HCI_Value+Synergies
=80
=IF(EBITDA<Exit_Clause,1,0)
=IF(Exit=1,3000,Deal_Value)
Simulation forecasts (Crystal Ball, 200,000 trials; values in $ millions):
Quickercom Value of Deal without Exit Option: mean 3,346, median 3,260, standard deviation 742, range 1,396 to 9,366.
Quickercom Value of Deal with Exit Option: mean 3,481.5, median 3,260.4, mode 3,000.0, standard deviation 598.2, range 3,000.0 to 9,365.7.
HCI Value Alone, no Deal: mean 2,730, median 2,644, standard deviation 742, range 780 to 8,750.
HCI Value of Deal with Exit Option: mean 2,648.8, median 3,000.0, mode 3,000.0, standard deviation 506.3, range 780.0 to 3,000.0.
Q4 EBITDA: mean 91.6, median 88.7, standard deviation 24.9, range 26.2 to 293.6.

Quickercom's Expected Value: Value of Deal w/o Option 3,346.0; Value of Deal w/ Option 3,481.5; Value of Option 135.5.
HCI's Expected Value: HCI Expected Value (no deal) 2,730.0; HCI Value w/ Deal 2,646.8; Value of deal -83.2.
Chance of Exit: with a Q4 EBITDA mean of 91.6 and standard deviation of 24.9, the chance that EBITDA falls below the exit threshold of 80 is
=NORMDIST(X_Value,Mean,St_Dev,1) = NORMDIST(80, 91.6, 24.9, 1) = 32.1%.
The examples above were created in Excel using Crystal Ball rather than @RISK; however, the basic procedure for building the model is the same.
1. Using the data given in the problem, you first determine the initial quarter (first quarter 2005) by using triangular distributions for Customers, Minutes, and Price. You then calculate all the remaining quarters using a normal distribution with the means and volatilities given in the problem. Then you can forecast, using @RISK or Crystal Ball, the effective annual EBITDA for 2005 from the fourth-quarter EBITDA. You should come up with approximately $366 million (four times the mean fourth-quarter EBITDA of $91.6 million), giving HCI a value of $2,730 million; when synergies are included, the deal value is $3,346 million.
2. The expected value of the deal for Quickercom, including the Exit Option, comes out to approximately $3,481.5 million. You calculate this by forecasting a cell that calculates the deal value but sets it to $3,000 million when the fourth-quarter EBITDA is less than $80 million; this represents Quickercom keeping its money.
3. To calculate the value of the option, find the expected value of the deal for Quickercom with the option and without the option. The deal with the option, as stated above, is worth $3,481.5 million and the deal without the option approximately $3,346 million, giving the Exit Option a value of $135.5 million.
4. To calculate the value of the deal for HCI, find the expected value of the deal with the option and the expected value of HCI without the deal. The deal with the option has an approximate value of $2,646.8 million to HCI, while the expected value of HCI without a deal is approximately $2,730 million, giving the deal a value to HCI of -$83.2 million. Therefore, it is in HCI's best interest not to take this deal.
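The original model was built with Crystal Ball; the rough Python sketch below follows the same formulas (triangular first quarter, then a three-quarter random walk) and should reproduce the Quickercom expected values above approximately. Variable names are illustrative.

# Rough Monte Carlo sketch of the 13.13 acquisition model.
import random

COGS_PCT, SGA_PCT, MULTIPLE, SYNERGIES = 0.56, 0.27, 7.45, 616
DRIFT = {"cust": -0.006, "mins": -0.014, "price": -0.012}
VOL   = {"cust":  0.07,  "mins":  0.10,  "price":  0.09}
EXIT_THRESHOLD, PURCHASE_PRICE = 80, 3000          # $ millions

def one_trial(rng):
    # Q1 2005 from triangular distributions (low, high, mode), then Q2-Q4 random walk.
    cust  = rng.triangular(16.5, 18.3, 17.1)
    mins  = rng.triangular(250, 269, 259)
    price = rng.triangular(0.127, 0.139, 0.131)
    for _ in range(3):
        cust  *= 1 + DRIFT["cust"]  + VOL["cust"]  * rng.gauss(0, 1)
        mins  *= 1 + DRIFT["mins"]  + VOL["mins"]  * rng.gauss(0, 1)
        price *= 1 + DRIFT["price"] + VOL["price"] * rng.gauss(0, 1)
    ebitda_q4 = cust * mins * price * (1 - COGS_PCT - SGA_PCT)
    deal_value = ebitda_q4 * 4 * MULTIPLE + SYNERGIES
    # With the exit option, Quickercom keeps its $3,000M if Q4 EBITDA is below $80M.
    deal_with_option = PURCHASE_PRICE if ebitda_q4 < EXIT_THRESHOLD else deal_value
    return deal_value, deal_with_option

rng = random.Random(0)
trials = [one_trial(rng) for _ in range(200_000)]
without_option = sum(t[0] for t in trials) / len(trials)
with_option    = sum(t[1] for t in trials) / len(trials)
print(without_option, with_option, with_option - without_option)
# roughly 3,346 and 3,481, so the exit option is worth about 135 ($ millions)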
CHAPTER 14
Risk Attitudes
Notes
Chapter 14 looks at the nuts and bolts of utility functions and assessment in decision analysis. In general,
the chapter is straightforward and presents few pedagogical difficulties. Perhaps the most difficult part of
the chapter is the assessment procedure. It is important that students understand how the certainty-equivalent and probability-equivalent assessment methods differ, and it is also important that they
distinguish utility assessment from probability assessment as it was discussed in Chapter 8.
Considerable space is devoted to the exponential utility function and the idea of risk tolerance; however, it
is relevant to note that in Dr. Clemen’s view, the exponential utility function, along with its implied
constant risk aversion, is a poor model for preferences. Students seem to look askance at the simple gamble
used to assess risk tolerance, stating plainly that assessed risk tolerance depends on how much money they
currently have. The best defense for the exponential utility function is that it provides a good first-cut
approximation if one wants to incorporate risk aversion into a decision analysis. A quick sensitivity
analysis can determine a critical risk tolerance, and the decision maker can be asked, via a simple
assessment question, whether his or her risk tolerance exceeds the critical value. If the choice is clear, then
there is no need for further preference modeling. If the choice is not so clear, then it may be worthwhile to
assess a utility function more carefully.
The section on Modeling Preferences Using PrecisionTree has been added. To include risk preferences, you
need only choose the utility curve and PrecisionTree does the rest.
Topical cross-reference for problems
Certainty equivalents               14.3, 14.6, 14.7, 14.12
Diversification                     14.8
Linear-plus-exponential utility     14.25, 14.26
Multiple objectives                 14.31
PrecisionTree                       14.7, 14.18, 14.20, 14.22-14.25, 14.30, Interplants, Inc., Strenlar Part III
Probability equivalents             14.12
Probabilistic insurance             14.22
Risk premium                        14.6, 14.29
Risk tolerance                      14.5, 14.7, 14.14, 14.18, 14.32, 14.33, Interplants, Inc., Strenlar Part III
Sensitivity analysis                14.18, Strenlar Part III
Simulation                          14.30
St. Petersburg paradox              14.11
Stochastic dominance                14.19, 14.20
Texaco-Pennzoil                     14.14-14.18
Trade-off equivalents               14.12, 14.34
Utility assessment                  14.12, 14.14
Solutions
14.1. We face decisions where a fundamental trade-off is whether to accept a sure consequence or to take a
chance at a better one, possibly winding up with something worse. Individuals who are very anxious about
uncertain prospects may make different choices than those who are less anxious. In other words, one’s
attitudes toward risk can be key when deciding which alternative is most preferred.
14.2. Here are some possible definitions of risk.
Greater variance in the distribution = greater risk: compare a 50-50 gamble between $100 and $101 with a 50-50 gamble between $200 and $0.50; the two have similar expected values, but the second is far more spread out.
Greater probability of a loss = greater risk: compare a 50-50 gamble between $1 and $0 with a 50-50 gamble between $200 and -$10; only the second can result in a loss.
Greater probability of injury or fatality = greater risk. The second of the two alternatives below might represent a community's choice to accept a nuclear power plant: a 0.0001 chance of an injury or fatality and a 0.9999 chance of none, versus a 0.002 chance of an injury or fatality plus $200,000 and a 0.998 chance of no injury or fatality plus $200,000. The benefits are considerably more revenue for the community, but this is offset by the higher chance of injury or fatality.
14.3. Answers will vary, but all should contain the idea that a certainty equivalent is an amount of money
(or some other commodity) that a decision maker can have risk free, and the decision maker views this risk-free amount (or consequence) as being equivalent to a risky prospect (or lottery or gamble). If the decision
maker is indifferent between a risky prospect A and a sure consequence B, then B is the certainty
equivalent for A.
14.4. A risk premium is the difference between a certainty equivalent for a lottery and the lottery’s
expected value. I think of the risk premium as what the decision maker is willing to give up (in expected
value) to remove the uncertainty. (It is no coincidence that our car insurance payment is called a premium.)
14.5. Risk tolerance is just what it sounds like. It is a measure of the degree of risk that an individual is
willing to accept. The more risk tolerant a person is, the less concave is his or her utility function.
Risk tolerance is an idea that can be applied to any utility function. At any point on the utility function (or
for any amount of wealth), a decision maker will have a certain level of risk tolerance. Furthermore, this
degree of risk tolerance may change as the amount of wealth changes. For the exponential utility function,
risk tolerance happens to be constant regardless of the wealth level. For the logarithmic utility function, risk
tolerance increases as wealth increases.
14.6. With EU = 0.93, the certainty equivalent must be $1000 because U($1000) = 0.93. The risk premium
is RP = $1236 - $1000 = $236.
14.7. We have U(x) = 1 - e^(-x/1210) for all x values.
a. U(1000) = 1 - e^(-1000/1210) = 0.56.
Likewise,
U(800) = 0.48
U(0) = 0.00
U(-1250) = -1.81
b. EU = 0.33(0.56) + 0.21 (0.48) + 0.33(0) + 0.14 (-1.81) = 0.052.
The decision tree is shown in the Excel file “Problem 14.7.xlsx.” The decision tree currently shows the
expected utility 0.052. To find the Certainty Equivalent (part c), click on the tree's name and, in the Utility
Function tab, choose Certainty Equivalent for the Display. The CE is $64.52. The risk premium is the EMV
- CE. To find the EMV, click on the tree's name and, in the Utility Function tab, choose Expected Value for
the Display. The EMV = $335.50. Therefore, the risk premium is $270.98.
c. To find CE, set up the equation
1 - e^(-CE/1210) = 0.052
and solve for CE:
1 - 0.052 = e^(-CE/1210)
ln(1 - 0.052) = -CE/1210
-1210 [ln(1 - 0.052)] = CE
$64.52 = CE.
d. µ = EMV = $335.50, and σ² = 554,964.75
CE ≈ 335.5 - 0.5(554,964.75)/1210 = $106.18.
The approximation is poor because the distribution is skewed.
e. CE ≈ 2400 - 0.5(300²)/1210 = $2362.81.
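A small Python sketch of these calculations is below; it simply inverts the exponential utility function, so the printed values should match parts c, d, and e up to rounding.

# Certainty equivalents and risk premiums for 14.7 with U(x) = 1 - exp(-x/1210).
from math import exp, log

R = 1210

def utility(x):
    return 1 - exp(-x / R)

def certainty_equivalent(expected_utility):
    return -R * log(1 - expected_utility)    # inverse of the utility function

print(utility(1000))                          # about 0.56
print(certainty_equivalent(0.052))            # about 64.6, close to the $64.52 in part c
print(335.50 - certainty_equivalent(0.052))   # risk premium of roughly $271

# Parts d and e: CE approximation = EMV - 0.5 * variance / R
print(335.5 - 0.5 * 554_964.75 / R)           # about 106.18
print(2400 - 0.5 * 300 ** 2 / R)              # about 2362.81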
14.8. On one hand, it does not make sense to evaluate projects both in terms of expected value (implying
risk neutrality) and to maintain diversified holdings in order to reduce risk (implying risk aversion). If we
really were risk neutral, the best strategy would be to put all of our money into the project with the highest
expected return. We would end up with no diversification at all!
In spite of this, there are good reasons why a firm might use EMV to choose from among projects but still
diversify:
a)
It is not possible to put all of the firm’s assets into any one project. Each project is of a limited size
or is subject to capacity limitations.
b) As a heuristic approach, diversification and evaluation based on EMVs makes some sense. If a
firm can evaluate projects separately (which is possible if an exponential utility function is
appropriate), and if for relatively small projects the firm is “almost” risk neutral, then individual
projects might be evaluated on the basis of EMV. As long as a project is limited in size, the result
again will be diversification, reflecting an overall risk-averse attitude.
14.9. One possibility is that the decision maker could have an S-shaped utility function, indicating
that he is risk-seeking for very small amounts but risk averse for large ones. Alternatively, he could just get
a lot of entertainment value from playing the slot machines. If the latter is the case, this could perhaps be
modeled with another dimension in his utility function.
14.10. The answer is entirely a matter of subjective judgment. For example, an individual might pay $5.00
to play Game 1, but have to be paid $1400 to play Game 2.
14.11. This is the famous St. Petersburg paradox, explored by Daniel Bernoulli in 1738, “Exposition of a
new theory on the measurement of risk,” an English translation of which is available in Econometrica, 22
(1954), 23-36. Virtually everyone would pay some amount, typically in the neighborhood of $2 to $20, to
play the game. We can calculate the expected value of the random variable (X):
E(X) = (1/2)(2) + (1/4)(4) + (1/8)(8) + ... + (1/2^n)(2^n) + ...
     = 1 + 1 + 1 + ...
     = ∞.
Thus, the expected value of the gamble is infinite. No one would be willing to pay an infinite amount of
money to play the game.
For Daniel Bernoulli, this posed a problem. In his 1738 paper, he resolved the problem by proposing that
people must have a utility function that is logarithmic or approximately so: U(x) = log(x + c) for an
appropriate constant c. With this utility function, one can calculate the expected utility for the gamble and
find that it must have a finite certainty equivalent.
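As an illustration of Bernoulli's resolution, the sketch below computes the certainty equivalent of the gamble under a logarithmic utility applied to the prize alone (that is, with c = 0); the sum converges quickly and gives a certainty equivalent of only $4.

# Certainty equivalent of the St. Petersburg gamble under U(x) = ln(x), with c = 0.
from math import log, exp

expected_utility = sum((0.5 ** n) * log(2 ** n) for n in range(1, 200))
print(exp(expected_utility))   # 4.0, so the certainty equivalent is about $4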
14.12. Answers will vary depending on students’ subjective judgments. Here is an example.
a. Certainty Equivalents:
U(100) = 0
U(20,000) = 1
U(1000) = 0.5 U(100) + 0.5 U(20,000) = 0.5
U(2500) = 0.5 U(1000) + 0.5 U(20,000) = 0.75
U(5000) = 0.5 U(2500) + 0.5 U(20,000) = 0.875
U(9000) = 0.5 U(5000) + 0.5 U(20,000) = 0.9375
U(15000) = 0.5 U(9000) + 0.5 U(20,000) = 0.969
b. Probability Equivalents:
U(1500) = p U(20,000) + (1 - p) U(100) ==> p = 0.55
U(5600) = p U(20,000) + (1 - p) U(100) ==> p = 0.65
U(9050) = p U(20,000) + (1 - p) U(100) ==> p = 0.75
U(14,700) = p U(20,000) + (1 - p) U(100) ==> p = 0.90
c. Trade-Offs. Call the two reference outcomes r and R, and let r = 2500 and R = 30,000. Let the “minimal” outcome be x0 = 100. Now, following the pattern described in Chapter 14, suppose we assess x1 = 600, x2 = 1000, x3 = 2,500, x4 = 4,500, and x5 = 20,000.
Let Ui denote U(xi). Assuming that U0 = 0 and doing a little algebra, we have the following four equations
in five unknowns:
U2 = 2U1
U3 = 3U1
U4 = 4U1
U5 = 5U1
At this point, we can set U1 to any arbitrary positive number and then calculate U2, U3,U4, and U5. If we set
U1 = 0.20, then we have
U0 = U(100) = 0
U1 = U(600) = 0.20
U2 = U(1000) = 0.40
U3 = U(2500) = 0.60
U4 = U(4500) = 0.80
U5 = U(20,000) = 1.00
(Figure: the utility functions assessed with the CE, PE, and TO approaches plotted together; utility runs from 0 to 1 on the vertical axis and dollar amounts from $0 to $25,000 on the horizontal axis.)
14.13. Again, answers will vary. Here is an example, following up on the assessments in problem 14.12.
For the gamble: Win $y with probability 0.5, Lose $y/2 with probability 0.5, the maximum Y that would be
acceptable is $3000. Thus, the risk tolerance R for the exponential utility function is R = $3000, and the
utility function is:
U(x) = 1 - e^(-x/3000)
Thus, U($100) = 0.0328, and U($20,000) = 0.999
To rescale the utility function, find constants a and b so that
a + b U($100) = 0
and
a + b U($20,000) = 1
Solving for a and b gives a = -0.034 and b = 1.035. Thus, the rescaled utility function is
U'(x) = -0.034 + 1.035 [1 - e^(-x/3000)]
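A quick numerical check of the rescaling constants, for instructors who want to verify them (a sketch, not part of the original solution):

# Solve a + b*U(100) = 0 and a + b*U(20000) = 1 for the exponential utility with R = 3000.
from math import exp

U = lambda x: 1 - exp(-x / 3000)
b = 1 / (U(20_000) - U(100))
a = -b * U(100)
print(round(a, 3), round(b, 3))   # approximately -0.034 and 1.035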
Plotting this utility function with those from problem 14.12:
(Figure: the rescaled exponential utility function plotted with the CE, PE, and TO utility functions from problem 14.12; utility from 0 to 1, dollar amounts from $0 to $25,000.)
14.14. a. Liedtke is clearly risk averse as the curve is concave.
(Figure: Liedtke's assessed utility plotted against the settlement amount, $0 to $12 billion.)
b. Accept $2 billion: U = 0.45. Counteroffer $5 billion: EU = 0.613, computed as follows. If Texaco accepts the $5 billion (probability 0.17), U = 0.75. If Texaco refuses and the case goes to court (0.50), the final court decision gives utility 1.00 with probability 0.2, 0.75 with probability 0.5, and 0.00 with probability 0.3, for EU = 0.575. If Texaco counteroffers $3 billion (0.33), accepting the $3 billion (U = 0.60) is better than refusing and going to court (EU = 0.575), so that branch is worth 0.60. Overall, EU(counteroffer) = 0.17(0.75) + 0.50(0.575) + 0.33(0.60) = 0.613.
Liedtke should counteroffer $5 billion, because the expected utility of doing so (0.613) far exceeds the
utility of $2 billion (0.45). However, if Texaco counteroffers $3 billion, Liedtke should accept it. Thus, he
is being slightly more risk averse than before.
c. From the graph in part a, Liedtke’s certainty equivalent for a utility value of 0.575 appears to be about
$2.8 billion. Note that his CE must be less than $3 billion, because U($3 billion) = 0.60. Thus, he should
not make a counteroffer for less than this amount. (Nor should he accept a settlement for less than $2.8
billion.)
14.15. Graphs for the three directors’ utility functions:
Director A
Director B
Utility
3
Utility
100
2.6
80
2.2
60
1.8
40
1.4
20
Settlement amount
Settlement amount
1
0
0
2
4
6
8
10
42
Utility
0
2
4
6
8
Director C
36
30
24
18
12
Settlement amount
6
0
2
4
6
8
10
Director A is very risk averse, B is slightly risk seeking, and C appears to be risk neutral. To find the
strategies for each, we have to solve the decision tree three times:
Director A: Accept $2 billion: U = 2.6. Counteroffer $5 billion: EU = 2.592. (Texaco accepts, 0.17: U = 2.9; court, 0.50: EU = 0.2(3.0) + 0.5(2.9) + 0.3(1.0) = 2.35; Texaco counteroffers $3 billion, 0.33: accept at U = 2.8, which beats the court's EU of 2.35.)
Director B: Accept $2 billion: U = 8. Counteroffer $5 billion: EU = 34.15. (Texaco accepts, 0.17: U = 30; court, 0.50: EU = 0.2(100) + 0.5(30) + 0.3(0) = 35; Texaco counteroffers $3 billion, 0.33: refusing and going to court, EU = 35, beats accepting $3 billion at U = 15.)
Director C: Accept $2 billion: U = 13.00. Counteroffer $5 billion: EU = 22.22. (Texaco accepts, 0.17: U = 23.50; court, 0.50: EU = 0.2(42.05) + 0.5(23.50) + 0.3(6.00) = 21.96; Texaco counteroffers $3 billion, 0.33: refusing and going to court, EU = 21.96, beats accepting $3 billion at U = 16.50.)
Director A is so risk averse that he will accept the $2 billion. Director B, though, is risk seeking and will
refuse the $2 billion, as well as $3 billion. In fact, because EU(Court) = 35 > U($5 billion), Director B
would not be happy about counteroffering as little as $5 billion. Director C, being risk neutral, has exactly
the same preferences as implied by the analysis in Chapter 4. Thus, C would counteroffer $5 billion and
turn down a Texaco offer of $3 billion.
14.16. The purpose of this question is for students to realize that careful assessment of preferences does not
necessarily mean that people will automatically agree on the best course of action. In fact, no method for
reconciling differences exists that is entirely without controversy. The directors might try to reach a
consensus through discussion, let Liedtke choose, put the question to the shareholders, or they might vote.
Notice that if they vote, the choice would be to counteroffer $5 billion and turn down a Texaco offer of $3
billion, because B and C agree on this strategy.
14.17. To rescale Director A’s utility function, let U'(x) = 0.5 U(x) - 0.5.
Thus, a = b = 0.5. Using this transformation we obtain:
Outcome    Director A U(x)    U'(x)
10.3       3.0                1.00
 5.0       2.9                0.95
 3.0       2.8                0.90
 2.0       2.6                0.80
 0.0       1.0                0.00
(Figure: Director A's rescaled utility U'(x) plotted against the settlement amount x from $0 to $10 billion.)
Director A (rescaled): Accept $2 billion: U' = 0.80. Counteroffer $5 billion: EU = 0.796. (Texaco accepts, 0.17: U' = 0.95; court, 0.50: EU = 0.2(1.00) + 0.5(0.95) + 0.3(0.00) = 0.675; Texaco counteroffers $3 billion, 0.33: accept at U' = 0.90, which beats the court's EU of 0.675.)
The decision tree shows the same optimal strategy. In fact, note that 0.796 = 0.5 (2.592) - 0.5.
14.18. If R = $1.171595 Billion (or $1,171,595,000), then Liedtke would be just indifferent between the
two alternatives. The only practical way to solve this problem is to use a computer to search for the critical
value. The decision-tree is modeled in the Excel file “Problem 14.18.xlsx.” Different values of R can be
entered until the critical value is found. The second worksheet in the file shows a sampling of R-values
between $1 billion and $5 billion. Excel’s Goal Seek can be used to find the critical R value as described in
the file.
14.19. A risk averse decision maker would choose the least risky option (by whatever definition of risk) if
the options have the same expected value. As in question 14.2, risk may be measured in any of a variety of
ways, including variance or standard deviation of payoffs, probability of loss, and so on.
14.20. a. Using U(x) = 1 - e^(-x/100) for all values of x, we have:
x      U(x)
50     0.393
100    0.632
150    0.777
EU(C) = (1/3)(0.393) + (1/3)(0.632) + (1/3)(0.777) = 0.601
EU(D) = (1/4)(0.393) + (1/2)(0.632) + (1/4)(0.777) = 0.609
Investment D has the greater expected utility and thus would be the preferred choice. This expected utility
decision tree is shown in the Excel file “Problem 14.20.xlsx.”
b.
In the CDF graph of the two investments (shown in the second worksheet of the Excel file noted below), Area I = Area II. Thus, these two “balance” each other out, with the result that the
expected values (EMVs) for the two investments are the same. However, it is clear that C is more “spread
out” (riskier) than D. In particular, imagine calculating expected utilities for these two investments when
the utility function is concave or risk averse. The concave shape will tend to devalue the larger payoff (150)
relative to the smaller one (50). As a result, investment C, with the larger probability of the more extreme
payoffs, will end up having a smaller expected utility than D. This graph (as generated by PrecisionTree) is
shown in the second worksheet in the Excel file “Problem 14.20.xlsx.”
This phenomenon is known as “second-order” stochastic dominance. If one gamble displays second-order
stochastic dominance over another, then the former will be preferred to the latter under any risk-averse
utility function (as D is preferred to C here). The rule for detecting second-order stochastic dominance
relies on examination of the areas separating the two CDFs. The definition is in terms of integrals, and is as
follows. Let Fi(y) denote the CDF for option i. Then option j dominates i if the condition
∫ [Fi(y) − Fj(y)] dy ≥ 0, integrating from −∞ to z,
holds for all z. Essentially, this condition looks at the total area separating the two CDFs at every point z,
and ensures that the difference (up to the point z) is non-negative. For more on second-order stochastic
dominance, consult Bunn (1984) or Whitmore and Findlay (1978). Complete references are given in
Making Hard Decisions with DecisionTools, 3rd ed.
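For instructors who want a numerical check, the sketch below applies this condition to investments C and D on a dollar grid (payoffs and probabilities from problem 14.20) and confirms that D second-order stochastically dominates C.

# Numerical second-order stochastic dominance check for problem 14.20.
def cdf(dist, y):
    return sum(p for x, p in dist if x <= y)

C = [(50, 1/3), (100, 1/3), (150, 1/3)]
D = [(50, 1/4), (100, 1/2), (150, 1/4)]

# D dominates C if the running integral of F_C - F_D is non-negative everywhere;
# approximate the integral with unit steps on a dollar grid.
running, dominates = 0.0, True
for y in range(0, 151):
    running += cdf(C, y) - cdf(D, y)
    dominates = dominates and running >= -1e-9
print(dominates)   # True: D second-order stochastically dominates C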
14.21. The key to this problem (and many in this chapter) is that the indifference judgments permit us to set
up equations indicating that expected utilities are equal. First, because A and E are best and worst,
respectively, we have U(A) = 1, and U(E) = 0. To determine U(C), we use the first assessment to set up the
equation
U(C) = 0.5 U(A) + 0.5 U(E) = 0.5.
Likewise, using the second assessment we have
U(B) = 0.4 U(A) + 0.6 U(C)
= 0.4 (1) + 0.6 (0.5)
= 0.7.
Finally, from the third assessment, we have
0.5 U(B) + 0.5 U(D) = 0.5 U(A) + 0.5 U(E)
0.5 (0.7) + 0.5 U(D) = 0.5
U(D) = (0.5 - 0.35)/0.5 = 0.30
14.22. The decision tree is:
No insurance: No loss (p): 0; Loss (1 - p): -L.
Conventional insurance: -Premium.
Probabilistic insurance: No claim (p): -Premium/2; Claim (1 - p): Covered (0.5): -Premium; Not covered (0.5): -L.
The best consequence is no loss, so U(0) = 1. Likewise, the worst is -L, so U(-L) = 0. We also know that
0 < U(-Premium) < U(-Premium/2) < 1.
Indifference between conventional insurance and no insurance means that
U(-Premium) = p U(0) + (1-p) U(-L) = p.
Expected utility for probabilistic insurance is
EU( Prob. Insurance) = p U(-Premium/2) + (1-p) [0.5 U(-Premium) + 0.5 U(-L)]
= p U(-Premium/2) + (1-p) 0.5 U(-Premium).
Because of risk aversion, the utility function U(x) is concave. This implies that
U(-Premium/2) > U(-Premium) + 0.5 [1 - U(-Premium)].
This inequality basically means that U(-Premium/2) is more than halfway between U(-Premium) and 1. (It is closer to 1 than to U(-Premium).)
(Figure: a concave utility curve with the points U(-Premium) and U(-Premium/2) marked; U(-Premium/2) lies closer to 1, the utility of no loss, than to U(-Premium).)
Thus, we can substitute:
EU(Prob. Insurance) = p U(-Premium/2) + (1-p) 0.5 U(-Premium)
> p {U(-Premium) + 0.5 [1 - U(-Premium)]} + (1-p) [0.5 U(-Premium)]
= pU(-Premium) + 0.5p - 0.5 p U(-Premium) + 0.5 (1-p) U(-Premium)
= pU(-Premium) - 0.5 p U(-Premium) + 0.5 (1-p) U(-Premium) + 0.5p
= 0.5 U(-Premium) [p + (1-p)] + 0.5p
= 0.5 U(-Premium) + 0.5 U(-Premium) because U(-Premium) = p
= U(-Premium).
Thus, EU(Prob. Insurance) > U(-Premium) = U(Conventional insurance). Probabilistic insurance would be
preferred to conventional insurance by any risk-averse individual.
Here is a numerical example. Suppose the decision maker has a logarithmic utility function for total assets,
U(x) = ln(x). She currently holds assets worth $15,000, including a $3000 computer which she can insure.
The premium for insurance is $66.79, which means she just barely prefers the insurance:
U(Insurance) = U($15,000 - $66.79) = ln($14,933.21) = 9.6114429
EU(No insurance) = 0.98 U($15,000) + 0.02 U($12,000)
= 0.98 (9.6158055) + 0.02 (9.3926619)
= 9.6114426
However, considering the probabilistic insurance,
EU(Probabilistic insurance)
= 0.98 U($15,000 - $33.40) + 0.02 [0.5 U($14,933.21) + 0.5 U($12,000)]
= 0.98 (9.6145763) + 0.02 [ 0.5 (9.6114429) + 0.5 (9.3926619)]
= 9.6114448
> 9.6114429 = U(Insurance).
The Excel file “Problem 14.22.xlsx” shows this numerical example. The example uses the logarithmic
utility function U(x) = ln(x + R), where R represents current assets worth $15,000. We also assume that a
$3000 computer is being insured, the premium for insurance is $66.79, and there is a 2% probability that
the computer is lost.
14.23. a. Using the logarithmic utility function U(x) = ln(x + $10000), we have
EU(venture) = 0.5 ln($20,000) + 0.5 ln($5000) = 9.2103
U($10,000) = ln($10,000) = 9.2103
Thus, the investor is indifferent between these two choices.
b. If she wins the coin toss, she has $1000 more. Thus, if she wins, she has $11,000 and a choice between
investing or not:
EU(venture) = 0.5 ln($21,000) + 0.5 ln($6000) = 9.3259
U($11,000) = ln($11,000) = 9.3057.
Thus, if she wins the coin toss, she should invest in the venture. However, if she loses the coin toss, she
only has $9000 and the investment option:
EU(venture) = 0.5 ln($19,000) + 0.5 ln($4000) = 9.0731
U($9,000) = ln($9,000) = 9.1050.
Thus, in this case, she should definitely not invest in the venture. Finally, considering the initial problem of
whether to toss the coin in the first place, her expected utility is
EU(Coin toss) = 0.5 EU(venture | wealth = $11,000) + 0.5 U($9000)
= 0.5 (9.326) + 0.5 (9.105)
= 9.2155
Because EU(Coin toss) is greater than U($10,000) = 9.21034, she should go for the coin toss. This looks a
bit paradoxical, because it seems as though the investor is behaving in a risk-seeking way despite having
the risk-averse logarithmic utility function. The risky choice is preferred, however, because of the
investment options that follow it. This points out the importance of considering subsequent investment
opportunities in modeling decisions.
The decision situation is modeled with a decision tree in the Excel file “Problem 14.23.xlsx.”
14.24. a. If wealth = $2500, then
EU(A) = 0.2 ln(12,500) + 0.8 ln (3500) = 8.415
EU(B) = 0.9 ln(5500) + 0.1 ln(500) = 8.373
Choose A.
b. If wealth = $5000, then
EU(A) = 0.2 ln(15,000) + 0.8 ln (6000) = 8.883
EU(B) = 0.9 ln(8000) + 0.1 ln(3000) = 8.889
Choose B.
c. If wealth = $10,000, then
EU(A) = 0.2 ln(20,000) + 0.8 ln (11,000) = 9.425
EU(B) = 0.9 ln(13,000) + 0.1 ln(8000) = 9.424
Choose A.
d. This seems very strange. One might think that as wealth level increases, a decision maker might change
from one gamble to another, but never back to the first. (This is Bell’s one-switch rule.) Only certain utility
functions have this property, and the logarithmic utility function is not one of them.
This decision situation is modeled in the Excel file “Problem 14.24.xlsx.” The user can vary the wealth
(cell $B$5) and see that as the wealth increases from $2.5K to $5K, the investors switches from A to B, but
as wealth increases to $10K, the investor switches back to A.
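A small Python sketch of the 14.24 comparison is below; it simply evaluates the two expected-utility expressions above (gamble A pays +$10,000 or +$1,000, gamble B pays +$3,000 or -$2,000, added to the wealth level).

# Log-utility comparison for 14.24 at three wealth levels.
from math import log

def eu_A(w):
    return 0.2 * log(w + 10_000) + 0.8 * log(w + 1_000)

def eu_B(w):
    return 0.9 * log(w + 3_000) + 0.1 * log(w - 2_000)

for w in (2_500, 5_000, 10_000):
    a, b = eu_A(w), eu_B(w)
    print(w, round(a, 3), round(b, 3), "choose A" if a > b else "choose B")
# 2,500: 8.415 vs 8.373 (A);  5,000: 8.883 vs 8.889 (B);  10,000: 9.425 vs 9.424 (A)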
14.25. a. If wealth = $2500, then
EU(A) = 0.2 [0.0003(12,500) - 8.48 e^(-12,500/2775)] + 0.8 [0.0003(3500) - 8.48 e^(-3500/2775)] = -0.35
EU(B) = 0.9 [0.0003(5500) - 8.48 e^(-5500/2775)] + 0.1 [0.0003(500) - 8.48 e^(-500/2775)] = -0.26
Choose B.
b. If wealth = $5000, then
EU(A) = 0.2 [0.0003(15,000) - 8.48 e^(-15,000/2775)] + 0.8 [0.0003(6000) - 8.48 e^(-6000/2775)] = 1.55
EU(B) = 0.9 [0.0003(8000) - 8.48 e^(-8000/2775)] + 0.1 [0.0003(3000) - 8.48 e^(-3000/2775)] = 1.54
Choose A.
c. If wealth = $10,000, then
EU(A) = 0.2 [0.0003(20,000) - 8.48 e^(-20,000/2775)] + 0.8 [0.0003(11,000) - 8.48 e^(-11,000/2775)] = 3.71
EU(B) = 0.9 [0.0003(14,000) - 8.48 e^(-14,000/2775)] + 0.1 [0.0003(8000) - 8.48 e^(-8000/2775)] = 3.63
Choose A.
d. As indicated in the problem, the linear-plus-exponential utility function can only switch once as wealth
increases.
This decision situation is modeled in the Excel file “Problem 14.25.xlsx.” Since the linear-plus-exponential
utility function is not one of the pre-defined utility functions, we use the utility function as a formula for a
linked decision-tree model. As in Problem 14.24, the user can vary the wealth in cell $B$5; however, with this utility function the optimal alternative switches from B to A as wealth increases from $2.5K to $5K, and stays with A as wealth continues to increase.
14.26. We can use gamble A from problem 14.25, using the three levels of wealth that we had in that
problem. For each level of wealth, we must find a CE, a certain amount of money such that the utility of
CE equals the expected utility of the gamble. Solving for the CE with the linear-plus-exponential is best
done using a search technique. Here are the results:
Wealth level    EV(A)      EU(A)    CE         RP
$2,500          $5,300     -0.35    $4,472     $828
$5,000          $7,800      1.55    $7,247     $553
$10,000         $12,800     3.71    $12,661    $149
As the wealth level increases, the risk premium decreases, showing that the linear-plus-exponential utility
function has decreasing risk aversion.
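Because the linear-plus-exponential utility cannot be inverted in closed form, the CE search mentioned above can be done with simple bisection; the sketch below is one way to do it and should reproduce the table approximately.

# CE search for gamble A of 14.25/14.26 under u(x) = 0.0003x - 8.48*exp(-x/2775).
from math import exp

def u(x):
    return 0.0003 * x - 8.48 * exp(-x / 2775)

def eu_A(wealth):
    return 0.2 * u(wealth + 10_000) + 0.8 * u(wealth + 1_000)

def certainty_equivalent(target_utility, lo=0.0, hi=50_000.0):
    for _ in range(100):              # bisection; u(x) is strictly increasing
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if u(mid) < target_utility else (lo, mid)
    return (lo + hi) / 2

for w in (2_500, 5_000, 10_000):
    ce = certainty_equivalent(eu_A(w))
    ev = w + 0.2 * 10_000 + 0.8 * 1_000
    print(w, round(ce), round(ev - ce))   # CE and risk premium, matching the table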
14.27. a. The task is to find the value x that makes Brown indifferent between keeping and selling the
business deal. In the following decision tree, he must determine x that makes him indifferent between the
two alternatives:
Keep
Deal Succeeds
(0.5)
1100
Deal f ails
(0.5)
1000
Sell
1000 + x
Indifference permits us to construct the equation
U(1000 + x) = 0.5 U(1100) + 0.5 U(1000)
ln(1000 + x) = 0.5 ln(1100) + 0.5 ln(1000)
ln(1000 + x) = 6.9554
1000 + x = e6.9554 = 1048.81
x = $48.81
b. Here we must find the value y that makes Brown indifferent between buying and not buying the business
deal. Now his decision tree looks like:
Buy
Deal Succeeds
(0.5)
1100 - y
Deal f ails
(0.5)
1000 - y
Don't buy
1000
Again, indifference allows us to specify the equation
U(1000) = 0.5 U(1100 - y) + 0.5 U(1000 - y).
The solution to this equation, as indicated in the problem, is y = $48.75.
c. The difference between the two answers in parts a and b results from different “starting points.” In part a
Brown has $1000 as well as the business deal. In part b, he only has $1000. As a result, he looks at the
business deal slightly differently because he is using slightly different portions of his utility function.
d. Solve U(1000) = 0.5 U(1100 - y) + 0.5 U(1000 - y):
ln(1000) = 0.5 ln(1100 - y) + 0.5 ln(1000 - y)
2 ln(1000) = ln[(1100 - y)(1000 - y)]
e^(2 ln(1000)) = e^(ln[(1100 - y)(1000 - y)])
1000² = (1100 - y)(1000 - y)
1,000,000 = 1,100,000 - 2100y + y²
0 = 100,000 - 2100y + y²
Now use the quadratic formula to solve this equation for y:
y = [2100 ± sqrt(2100² - 4(100,000))]/2
  = [2100 ± sqrt(4,410,000 - 400,000)]/2
  = [2100 ± sqrt(4,010,000)]/2
  = 1050 ± 1,001.25
Take the negative square root: y = 1050 - 1,001.25 = $48.75. (The positive root gives a nonsensical answer to the problem.)
14.28. a.
(Figure: the quadratic utility function plotted against assets A from 0 to 100; utility runs from 0 to 10,000.)
The utility function is concave, indicating risk aversion.
b. Compare U($10,000) with the expected utility of the gamble:
U($10,000) = 200 (10) - 102 = 1900
EU(Gamble) = 0.4 U(0) + 0.6 U($20,000)
= 0.4 [200 (0) - 02] + 0.6 [200 (20) - 202]
= 2160
Thus, the individual should accept the gamble.
c. Now compare U($90,000) with the expected utility of the gamble:
U($90,000) = 200 (90) - 902 = 9900
EU(Gamble) = 0.4 U($80,000) + 0.6 U($100,000)
= 0.4 [200 (80) - 802] + 0.6 [200 (100) - 1002]
= 0.4 [9600] + 0.6 [10,000]
= 9840
Now the optimal choice is not to gamble.
d. The quadratic utility function used here displays increasing risk aversion. For many people, such
behavior seems unreasonable. How can we explain the notion that someone would be more nervous about
gambling if he or she had more wealth? Perhaps a very miserly person would become more and more
cautious as wealth increased.
14.29.
Gamble    EMV    CE       RP
10, 40    25     23.30    1.70
20, 50    35     33.00    2.00
30, 60    45     42.57    2.43
40, 70    55     51.93    3.07
The risk premium actually increases as the stakes increase, indicating increasing risk aversion. Is this a
reasonable model of choice behavior? Do people generally become more anxious about risky situations
when they own more wealth?
Note that solving for CE requires the use of the quadratic formula. For example, in the first gamble we have
U(CE) = 0.5 U(10) + 0.5 U(40)
      = 0.5 (0.000025) + 0.5 (0.609775)
      = 0.3049.
Thus, the equation to solve is
-0.000156 CE² + 0.028125 CE - 0.265625 = 0.3049
-0.000156 CE² + 0.028125 CE - 0.570525 = 0
Using the quadratic formula,
CE = [-0.028125 ± sqrt(0.028125² - 4(-0.000156)(-0.570525))] / [2(-0.000156)]
   = [-0.028125 ± 0.020856846] / [2(-0.000156)]
Take the positive square root:
CE = [-0.028125 + 0.020856846] / [2(-0.000156)] = $23.30.
14.30. The simulation is straightforward and is saved in the Excel file “Problem 14.30.xlsx.” With 10,000
iterations, the result was EU(New process) = 0.00315, as compared to U(Do nothing) = 0. Thus, the CEO is
just barely in favor of the new process, but the two are so close that indifference is a reasonable conclusion.
When students run this simulation, many will have a negative EU for the new process. The discussion can
focus on sampling variability in simulations.
Should the CEO be concerned that his utility can vary with the new process? No; the utility function has
already captured the risk attitude. All that is required is to compare EU(New process) with U(Do nothing).
Indifference indicates that the higher expected return on the new process just makes up for the increase in
risk.
14.31. We have the following:
Best machine: $5M, 800 gps. Thus, U(5, 800) = 1.0.
Worst machine: $6M, 150 gps. Thus, U(6, 150) = 0.
From Assessment II (Figure 14.19a), we know that
U(5, 350) = 0.35 U(5, 800) + 0.65 U(6, 150) = 0.35.
Likewise, from Assessment III (Figure 14.19b), we have
U(8, 500) = 0.70 U(5, 800) + 0.30 U(6, 150) = 0.70.
So the expected utilities for the two projects can now be calculated:
EU(A) = 0.5 U(5, 800) + 0.5 U(6, 150) = 0.50.
EU(B) = 0.4 U(8, 500) + 0.6 U(5, 350) = 0.49.
Therefore, the choice would be project A, even though it appears slightly riskier than B.
b. Why does A appear riskier than B? It is a lottery between the best and worst possible consequences
rather than two intermediate consequences. Because the utility function has already accounted for risk
attitude, though, A should still be chosen. Even taking risk into account via the utility function, A comes
out ahead.
14.32. The decision tree: the decision maker chooses between a gamble that pays X with probability 0.5 and -X/2 with probability 0.5, and doing nothing, which pays 0.
We construct the following equation based on indifference between the two alternatives:
U(0) = 0.5 U(x) + 0.5 U(-x/2)
0 = 0.5 (1 - e^(-x/R)) + 0.5 (1 - e^(x/2R))
= 1 - 0.5 [e^(-x/R) + e^(x/2R)]
Manipulating this equation algebraically yields
2 = e^(-x/R) + e^(x/2R)
This equation can be solved numerically for x/R. Doing so gives
x/R = 0.9624
or x = 96.24% of R. Thus, x is not quite 4% less than R.
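If students want to verify the numerical root, a minimal bisection sketch in Python (using only the equation 2 = e^(-t) + e^(t/2), where t = x/R) will do:

import math

def f(t):
    # We seek the positive root of f(t) = 0, where t = x/R
    return math.exp(-t) + math.exp(t / 2) - 2

lo, hi = 0.5, 1.5        # f(0.5) < 0 and f(1.5) > 0, so the positive root is bracketed
for _ in range(60):      # simple bisection
    mid = (lo + hi) / 2
    lo, hi = (lo, mid) if f(mid) > 0 else (mid, hi)
print(round((lo + hi) / 2, 4))   # 0.9624

(Note that t = 0 is also a root; the bracket deliberately excludes it.)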
14.33. We wish to show that EU(Gamble) ≈ U($0) when U(x) = 1 - e^(-x/Y), where Y is assessed as stated in the problem. Because U($0) = 1 - e^(0/Y) = 0, we need to show EU(Gamble) ≈ 0.

EU(Gamble) = (1/2) U(Y) + (1/2) U(-Y/2)
= (1/2) [(1 - e^(-Y/Y)) + (1 - e^((Y/2)/Y))]
= (1/2) [2 - (e^(-1) + e^(1/2))]
= (1/2) [2 - 2.0166]
= -0.0083,

which is approximately zero.
14.34. We wish to show that U(X2) = 2U(X1) when you are indifferent between A1 and A2 and indifferent between B1 and B2. The four lotteries are:

A1: Win $50 with probability 0.5; win X1 with probability 0.5.
A2: Win Maximum with probability 0.5; win Minimum with probability 0.5.
B1: Win $50 with probability 0.5; win X2 with probability 0.5.
B2: Win Maximum with probability 0.5; win X1 with probability 0.5.

Note that we are assuming $50 falls between the maximum and minimum. Indifference between A1 and A2 implies

EU(A1) = EU(A2)
(1/2) U($50) + (1/2) U(X1) = (1/2) U(Max) + (1/2) U(Min)

Setting U(Min) = 0 gives

U($50) + U(X1) = U(Max).     (1)

The indifference between B1 and B2 implies

EU(B1) = EU(B2)
(1/2) U($50) + (1/2) U(X2) = (1/2) U(Max) + (1/2) U(X1)
U($50) + U(X2) = U(Max) + U(X1).     (2)

Subtracting equation (1) from equation (2) gives

U(X2) - U(X1) = U(X1), or
U(X2) = 2 U(X1).

Similarly, it is possible to show that U(X3) = 3U(X1), U(X4) = 4U(X1), and so on.
Case Study: Interplants, Inc.
1. The decision is whether to sell the company for 20 (billion credits) or keep it. If the company is kept, three uncertainties matter: the production cost (Inexpensive, 0.185; Moderate, 0.63; Costly, 0.185), the settlement policy (Agreement, 0.68; Dispute, 0.32), and the ion engine development (Success, 0.85; Failure, 0.15). The payoff is positive only if the settlement policy is agreed and the engine succeeds; otherwise the payoff is the loss for that production-cost scenario:

Production cost        Agreement and Success   Any other outcome
Inexpensive (0.185)    125                     -15
Moderate (0.63)        100                     -18
Costly (0.185)          75                     -23

EMV(keep the company) = 50.05.
An alternative decision tree would collapse the Settlement policy and Ion Engine nodes, because in each
case agreement on the settlement policy and success in the development of the ion engine are required for
success of the business:
Sell: 20.

Keep the company, with the Settlement policy and Ion Engine nodes collapsed into a single chance node (P(Success) = 0.68 × 0.85 ≈ 0.58 on every branch):

Production cost        Success (0.58)   Failure (0.42)
Inexpensive (0.185)    125              -15
Moderate (0.63)        100              -18
Costly (0.185)          75              -23
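As a check on the EMV reported in the first tree, the following Python sketch (assuming the probabilities and payoffs shown above, in billions of credits) computes the expected value of keeping the company:

# (probability, payoff if agreement and engine success, payoff otherwise) for each cost scenario
scenarios = [
    (0.185, 125, -15),   # Inexpensive
    (0.63,  100, -18),   # Moderate
    (0.185,  75, -23),   # Costly
]
p_success = 0.68 * 0.85   # settlement agreement AND ion-engine success

emv_keep = sum(p * (p_success * win + (1 - p_success) * lose) for p, win, lose in scenarios)
print(round(emv_keep, 2))   # about 50.05, compared with 20 for selling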
2. Don’s certainty equivalent for the risky prospect of keeping the company is 20 billion credits.
3. There are two ways to solve this problem. One way is to use the approximation

CE ≈ µ - 0.5 σ²/R.

Rearranging this expression, we obtain

R ≈ 0.5 σ²/(µ - CE).

For our problem, we have µ = EMV = 50.28 and σ² = 3553.88. Thus, for CE = 15, we have

R ≈ 0.5 (3553.88)/(50.28 - 15) = 50.37 billion credits.

For CE = 20, we have

R ≈ 0.5 (3553.88)/(50.28 - 20) = 58.68 billion credits.

Unfortunately, the approximation overstates R substantially in each case.

The alternative is to model the decision tree in PrecisionTree and vary the risk tolerance to find the points of indifference: a risk tolerance of 43.26 billion credits gives Don a CE of 15 billion credits, and a risk tolerance of 53.55 gives a CE of 20 billion credits. The decision model is saved in the Excel file “Interplants Case.xlsx.”
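The approximation step is easy to script. A short Python sketch, assuming the µ and σ² values given above:

mu = 50.28       # expected value of keeping the company (billions of credits)
var = 3553.88    # variance of the payoff

def approx_risk_tolerance(ce):
    # From CE = mu - 0.5 * var / R (approximately), rearranged to R = 0.5 * var / (mu - CE)
    return 0.5 * var / (mu - ce)

for ce in (15, 20):
    print(ce, round(approx_risk_tolerance(ce), 2))
# Roughly 50.37 and 58.68, versus the exact PrecisionTree values of 43.26 and 53.55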
Case study: Strenlar, Part III
This case requires students to return to their analyses of Strenlar in Chapter 4, and to consider the
possibility of Fred being risk averse. Of course, the precise value obtained will depend on the specific
model used. Because different students may have used slightly different assumptions in modeling Fred’s
decision, the critical Rs will vary somewhat.
The decision tree as given in Chapter 4 of this manual is modeled in the Excel file “Strenlar Case Part III.xlsx.” The spreadsheet model is constructed so the user can vary R to find where Fred becomes indifferent among his choices. Fred should never choose the lump sum, because the expected utility of accepting the job is always higher. For R < $6,780,000, he should accept the job; at R = $6,780,000 he is indifferent between accepting the job and going to court; and for R > $6,780,000, he should go to court. In other words, if Fred would be just indifferent between doing nothing and accepting a 50-50 gamble between winning $6.8 million and losing $3.4 million, then he should certainly refuse PI and press on with Strenlar. However, if he would turn down such a gamble (and the original case suggests that he might), then he should consider accepting the job offer.
CHAPTER 15
Utility Axioms, Paradoxes, and Implications
Notes
Chapter 15 is very much an unfinished story. Beginning with the Allais paradox in the early 1950s and
continuing to the present day, the axioms of expected utility and “paradoxical” behavior relative to the
axioms have generated much debate. Much of the behavioral research has focused on the paradoxes, those
situations in which reasonable and thoughtful people behave in ways inconsistent with the axioms. The
paradox exists because careful explanation of the inconsistency often does not lead such people to modify
their choices. Some of these paradoxes are discussed and explained in the chapter, and still more are
explored in the problems.
On the other hand, considerable work has progressed in the area of utility theory. In this research, the
axiomatic foundations of decision analysis are studied. Certain axioms can be modified, relaxed, or
eliminated altogether, in which case decision rules (variations on choosing the alternative with the greatest
expected utility) can be derived that more accurately describe how people actually do make decisions.
What is the purpose for these streams of research? The motivation for the behavioral research is fairly
clear. It seems reasonable to learn how people actually do make judgments and decisions. Such
understanding may provide guidance for teaching people how they can improve their decision making by
avoiding commonly observed pitfalls. In a general sense, this has been a guiding light for Making Hard
Decisions with DecisionTools all along. The decision-analysis approach provides a systematic framework
and toolkit for making judgments and decisions. Indeed, some of the known pitfalls occur within decision
analysis. However, understanding of those pitfalls can help individuals identify and avoid them, and in
some cases the tools can be designed in such a way as to lead decision makers away from the pitfalls.
In addition, understanding how people behave can be quite useful to a decision maker who must act jointly
with others. For example, an advertising manager might like to know how people make judgments and
purchase decisions. Casinos understand that their customers will stop gambling after they hit their loss
limit. By offering free meals and even small amounts of cash when a player is close to his or her limit, the
casino reframes the situation into one of gains and patrons are more likely to stay longer. By using
customer-loyalty cards, they monitor customer winnings and know exactly when to intervene. The text
explores some of these implications. Two specific examples are included in problem 15.16 and the “Life
Insurance Game” case study.
The motivation for research on the axioms, or the development of generalized utility models, seems less
clear. At a very fundamental level, we would like to choose a set of axioms that are compelling; they make
sense as guiding principles for decision making. On the basis of these axioms, then, we derive a decision
rule. This decision rule then provides the basis for addressing complex decisions, the choice for which may
not be obvious. Expected utility theory, based on the axioms as described in the chapter, has been the
standard for over forty years. In fact, the axioms of expected utility implicitly provide the basis for
decomposing hard decisions into a structure that consists of decisions, uncertain events, and consequences
that can be valued independently of the “gambles” in which they may occur. Thus, our entire decision
analysis approach has been dictated, at least implicitly, by the axioms.
At first glance, the axioms of expected utility do seem compelling. Deeper inspection of the axioms,
though, has led a number of scholars to question whether the axioms are as compelling as they might seem.
The sure-thing principle in particular has been called into question, as has the transitivity axiom. In spite of
these rumblings at the foundations of decision analysis, though, no compelling set of axioms, accompanied
by a decision rule and implicit procedure for decomposing large problems, has emerged. In fact, a
fundamental question still exists. Should we change the axioms to make our decision rule consistent with
the way people actually do behave? Or do we leave the axioms and decision rule as they are because we
believe that in their current form they provide the best possible guidance for addressing hard decisions?
Lacking answers to these basic questions, axiomatic research and the development of generalized utility
models may continue to generate interesting results but without a clear notion of how the results relate to
practical decision-analysis applications.
As one might suspect, considerable reading material is available on developments in behavioral decision
theory. Along with the specific references provided in the chapter, we recommend von Winterfeldt and
Edwards (1986) Decision Analysis and Behavioral Research. These authors provide excellent discussion of
behavioral paradoxes, which they term “cognitive illusions.” For generalized utility, the literature is much
less accessible, being dominated by highly technical journal articles. The best available compendium, still
quite technical, is Fishburn (1988) Nonlinear Preference and Utility Theory, Baltimore: The Johns Hopkins
University Press. A view of the role that generalized utility can play in decision analysis is presented in
Edwards, W. (1992) Utility Theories: Measurements and Applications. Boston, MA: Kluwer.
Topical cross-reference for problems
Constructionist view: 15.3
Framing effects: 15.7-15.9, 15.16, The Life Insurance Game
Sunk costs: 15.10
Endowment effect: 15.6
Theater ticket problem: 15.7
Utility assessment: 15.4, 15.12-15.15
Risk perception: Nuclear Power Paranoia, The Manager’s Perspective
Solutions
15.1. In a very general sense, decision analysis depends on the idea of maximizing expected utility. Thus, it
is important to understand what underlies expected utility. If the underlying axioms are not satisfactory,
maybe the decision-analysis approach isn’t either.
15.2. It is useful to know what people do “wrong” relative to the axiomatic foundation of decision analysis.
We might help them do it “right” (in accord with the axioms, which they may find compelling).
Alternatively, we might think about whether the theory, including the underlying axioms, adequately
captures everything considered to be important in decision making.
15.3. The discussion on pages 586-587 of Making Hard Decisions with DecisionTools presents a
“constructionist” view of decision analysis, whereby a decision maker constructs a requisite model,
including preferences and beliefs for issues that may not have been previously considered. The point of this
question is to emphasize that as the environment changes and we become exposed to new problems, then
we should reexamine our preferences, beliefs, and even the structure of problems we face. (Recall the
DuPont case from Chapter 1. The decision to ban CFCs was based on information relating to the ozone
layer in the atmosphere, issues not understood when CFCs were first introduced.) Continual monitoring and
adapting of preferences, beliefs, and problem structure is important. In making long-range decisions, we
might even want to think about how our preferences and beliefs could change over time.
15.4. a, b. Answers are based on subjective judgments and will no doubt vary considerably from person to
person.
c. Finding p to reach indifference between Lotteries 1 and 2 means that
p U(1000) + (1 - p) U(0) = U(400).
But because we can set U(1000) = 1 and U(0) = 0, we have U(400) = p. On the other hand, indifference in
part b means that
0.5 U(400) + 0.5 U(0) = q U(1000) + (1 - q) U(0).
Again using U(1000) = 1 and U(0) = 0, we obtain U(400) = 2q. Thus, perfect consistency in the two
assessments requires that p = 2q.
d. Most students will have some feelings about which of the assessments was easier to make. Some may
argue that the assessment in a was easier because it compared a lottery with a sure consequence. Others
may claim that the assessment in a was more difficult for the same reason; that it required comparison
of an uncertain event with a sure consequence, and that the two lotteries in b are more easily compared. In
any event, it is likely that p ≠ 2q and that the student has more confidence in one assessment than the other.
15.5. This is a topic for discussion. One could think of many different “rules for clear thinking.” The
challenge is to make them very general but at the same time useful in specific situations. For example, one
possible rule is to make decisions on the basis of what you believe the consequences to you will be. This
means that the decision maker must be very careful to understand clearly what those consequences really
are: how he or she will feel about the actual experience at the end of each branch of the tree. Another rule
might be to understand all crucial aspects of the decision situation. In this book, we have used the notion of
requisite model to capture this idea; the decision model should include everything that has a meaningful
impact on the consequences. A third possible rule might be to understand and use trade-offs in making
choices. For a more complete discussion, see Frisch and Clemen (1994).
15.6. This problem is sometimes known as the “endowment” problem or paradox. Most people will not sell
the wine in part a, and would not buy it in part b. But if the two bottles of wine have equal value, then such
a pattern of behavior is not consistent because it is a matter of trading cash for wine or vice versa. One
argument is that there is utility to be gained from the “process” of originally collecting a bottle of wine that
has since become valuable. (If nothing else, a valuable bottle of wine collected years ago may be useful for
conversations at parties or indicate one’s prowess in identifying promising wines. No such value accrues
from a valuable wine recently purchased!)
15.7. In an experiment, Tversky and Kahneman (1981) found that only 46% of respondents in part a would
buy a replacement ticket, but that 88% of respondents in part b would still attend the show. The effect is
explained by “psychological accounting.” Once an individual has decided to see a show, a “mental
account” is set up with a budget. In part a, the money has already been spent from the existing account, and
little or no money remains in the account to purchase a replacement ticket. In part b, though, the missing
$20.00 may be from an altogether different “mental account” that contains funds not necessarily earmarked
for the show. Reference: Tversky, A., and D. Kahneman (1981), “The Framing of Decisions and the
Psychology of Choice,” Science, 211, 453-458.
15.8. This question also is discussed by Kahneman and Tversky (1981). In an experiment, 68% of
respondents were willing to make the trip to save money on the smaller purchase, but only 29% were
willing to do so for the larger purchase. Clearly both effort spent and benefit gained are the same in the two
scenarios. One explanation is that we tend to think in terms of percentage saved. The $9.96 savings on the
popcorn popper amounts to almost a 50% savings. However, the same amount of money saved on the stereo
is only 0.91%.
15.9. This is an actual advertisement that appeared in the local newspaper in Eugene, Oregon. I was struck
first by the impertinence of the dealer who thought that I would be attracted by a $500 discount from
$37,998 to $37,498. Given the price of the BMW, $500 seems like a pittance; it is, after all, only a 1.3%
discount. Of course, I wasn’t in the market for the BMW, anyway. Had I been, the discount would indeed
have been meaningful; $500 is still $500, and I would much rather save this amount than not save it.
In comparison with the markdown on the computer system, the BMW offer seems puny, and the
reason appears to be that we really do tend to use percentages as the relevant frame for valuing “deals” that
are offered to us. The discount on the computer system is 15%, a hefty markdown more consistent with
promotional offers that we encounter than the discount on the BMW. But here is the question: Suppose you
were in the market for both the computer and the car, and you only have time to purchase one or the other,
but not both, before the discounts are revoked at the end of the day. (And you will be leaving the country
tomorrow, so you cannot wait for the next sale, and so on.) Now suppose the car dealer drops the price
another $20. Which one would you buy?
15.10. This is the “sunk cost” paradox. Many people would go on to the coast in part a because “they
already spent the money,” but in part b they would stay at the new resort even though it costs more. In
either case it “costs” $50 to take the preferred action. If you spend $50, would you rather be where you
want to be, or somewhere else? (For an excellent discussion, see Robyn Dawes (1988) Rational Choice in
an Uncertain World, New York: Harcourt Brace Jovanovich, Inc., pp 22-31.)
15.11. If the two agree on the bet, then for both EU(Bet) > U(0).
a. If P(Rain tomorrow) = 0.10, and they agree on this probability,
For A, 0.10 U(40) + 0.90 U(-10) > U(0), and E(Payoff) = -5.
For B, 0.10 U(-40) + 0.90 U(10) > U(0), and E(Payoff) = 5.
For each individual, the CE for the bet must be greater than U(0). With an upward-sloping utility function,
this implies CE > 0.
Because E(Payoff) = -5 for A, the only way for CE > 0 is for the utility function to be convex. In fact, it
must be convex enough for the inequality to hold. A must be a risk seeker:
[Graphs: sketches of utility functions for A and B, each showing EU(Bet) = U(CE). A’s curve must be convex (risk seeking), with CE > 0 even though the expected payoff is -5; B’s curve may be concave, with CE between 0 and the expected payoff of 5.]
For B, the utility function might be concave, but not so much that CE < 0. B could be risk-averse.
b. If they agree that P(Rain tomorrow) = 0.30, then E(Payoff for A) = 5 and E(Payoff for B) =
-5. Exactly the same arguments hold now as before, except that A and B have traded places. Now B must
be risk-seeking and A could be risk-averse, risk-neutral, or risk-seeking.
c. If we know nothing about their probabilities, it is possible that their utility functions are identical. For
example, they could both be risk seeking. Also, they could have different assessments of the probability of
rain such that EU(bet) for each would be greater than U(0), given the same (possibly concave) utility
function for each. For example, A could think that P(Rain tomorrow) = 0.999 while B assesses P(Rain
tomorrow) = 0.001. In this case, each one could be very risk-averse yet still accept the bet.
d. If they agree that P(Rain tomorrow) = 0.20, then the expected payoff for each is 0. However, if they have
agreed to bet, then their CEs must each be greater than 0. Thus, in this case each one must be risk-seeking;
only for a convex (risk-seeking) utility function is the CE for a bet greater than its expected value.
15.12. Answers are based on individual judgment and hence will vary. However, the end product should be
a graph that looks something like the following:
[Graph: utility plotted against days in the shop.]
15.13. Again, answers are based on subjective judgments. Students’ graphs should show an upward-sloping utility curve over miles, from 40,000 to 200,000 miles.
15.14. Answers will vary due to the subjective judgments that must be made. A graph of the utility function
most likely will be downward sloping:
[Graph: utility plotted against homework hours.]
15.15. a. Again, this question will be answered on the basis of subjective judgment. Here are some possible
“types”:
[Graphs: utility plotted against the proportion of coffee (from 0 to 1) for three types: the coffee hater, the black-coffee drinker, and the coffee-with-a-little-milk lover.]
It is important to note that there may indeed be cases where the utility function is not monotonic; that is,
that the high point on the utility curve is somewhere in the middle, like the coffee-with-a-little-milk lover.
b. Take the coffee-with-a-little-milk lover, and let the peak be at c*. The decision tree, with c = c*, would
be:
Cup with c* cof f ee and (1 - c*) mil
A
c*
Black cof f ee (c = 1)
B
(1-c*)
Milk (c = 0)
Clearly, the expected proportions of coffee are the same:
E(c | A) = c*
E(c | B) = c*(1) + (1-c*) 0 = c*
However, the cup in A is preferred to either of the outcomes in B because of the shape of the utility
function. Thus, comparing the cups of coffee on the basis of the expected proportion of coffee is not a good
preference model in this case.
15.16. a, b. Kahneman and Tversky’s (1979, 1981) results suggest that people tend to be risk-averse for
gains and risk-seeking for losses. In this situation, the plaintiff is (most likely) thinking in terms of potential
gains and thus may behave in a risk-averse fashion. Being risk-averse, she would settle for a payoff that is
less than her expected gain of $1 million. The defendant, on the other hand, viewing the situation in terms
of potential losses, may behave in a risk-seeking manner. If so, he would only be willing to settle for an
amount less than $1 million. The following graph shows possible utility functions for each individual:
If there is to be a settlement out of court, it will only occur if -CE(defendant) > CE(Plaintiff). Moreover,
this means that the settlement must be less than $1 million. Is the plaintiff being exploited?
c. In real world court cases, one might expect to observe patterns of settlements like this, and may observe
defendants making riskier choices than those made by plaintiffs. To my knowledge, however, this remains
an hypothesis; I know of no empirical work demonstrating this effect.
If the defendant’s expected loss were less than the plaintiff’s, as it might be if each party is optimistic
about his or her chances of winning the lawsuit, then CE(defendant) would be even lower. Thus, a
settlement, should it occur, would have to be for even less money.
Case Study: The Life Insurance Game
This case explores some implications of the psychological results showing that people are risk-averse in
considering gains but risk-seeking for losses. The point of purchasing life insurance is that, for a relatively
small premium (a sure amount) the decision maker can avoid an uncertain situation. That uncertain
situation includes both losses of income as well as an individual’s eventual amount of savings.
Buying life insurance is essentially a risk-averse act. By leading his clients to think in terms of losses,
Tom is inadvertently asking them to consider situations in which they may act in a risk-seeking way (i.e.,
not purchase insurance), especially those with relatively little to lose in the first place. Peggy suspects that
by framing the life insurance “gamble” as one that can reduce the uncertainty about the client’s future
savings, the client may be more likely to take the risk-averse act of buying the insurance.
Case Studies: Nuclear Power Paranoia and The Manager’s Perspective
These two case studies go together. Both are concerned with the way the public perceives and acts with
regard to risky or hazardous situations. First, consider Ray Kaplan. Given his concern with the nuclear
power plant, his behavior seems paradoxical because he engages in a lot of very risky activities: motorcycle
riding (occasionally without a helmet), eating beef (high cholesterol, possibility of colon cancer, and the
introduction of carcinogens from the charcoal itself), eating ice cream (cholesterol), lawn mowing (one of
the most dangerous household activities), breathing traffic fumes and exhaust, and sun-tanning (source of
skin cancer). Ed Freeman, on the other hand, is falling into the trap of believing that risk can be understood
adequately in terms of potential fatalities and injuries.
A large body of literature now exists demonstrating that individuals do not view all risks in the same
way. In particular, people tend to focus on two general issues. The first is commonly called “dread,” and
activities ranking high in this dimension are characterized by such things as lack of personal control,
involuntary exposure, and potential for catastrophe affecting many people over a large area. The second
dimension relates to the degree to which effects are unknown, uncertain, or delayed. For example,
operation of nuclear power plants ranks high in terms of dread, much more so than mountain climbing,
exposure to lightning strikes, or motorcycle riding. Likewise, not all potential effects due to a nuclear
power plant accident are known or understood, and the effects (cancers in particular) may be delayed over a
long time. Thus, in the public perception, nuclear power plants present very different kinds of risks than
those that we accept in our lives with little worry.
There is insufficient space in this manual to discuss the many aspects of risk to life and limb and the
wide array of research relating to risk perception. These two cases can provide a springboard for further
discussion. Two good sources for students to read as a basis for class discussion are Chapter 6 from Derek
Bunn (1984) Applied Decision Analysis (New York: McGraw-Hill), and Paul Slovic (1987) “Perception of
Risk,” Science, 236, pp 280-285.
CHAPTER 16
Conflicting Objectives I: Fundamental Objectives and
the Additive Utility Function
Notes
Chapter 16 is composed of four parts. First we revisit the notion of fundamental objectives, structuring
them into hierarchies, and measuring achievement of those objectives. The second part of the chapter walks
the reader through the basics in the context of a simple automobile example. The emphasis here is on
intuition and conceptual understanding, and we introduce the additive utility function. Achievement of the
individual objectives is measured in a risk-neutral way, and trade-offs are assessed by a “pricing out”
procedure. By the end of this part, the student should have a good basic understanding of trade-offs and the
additive utility function.
In the third part, we look at some different methods for assessing utility functions and weights for the
individual attributes. Of greatest interest here are the swing-weighting and lottery techniques for assessing
trade-offs. Swing weights in particular are very useful and relatively easy to think about, but this kind of
assessment will most likely be new for the student.
The last part shows how this approach is used in the context of the Eugene Public Library decision.
Because so many attributes are used in the library case, it may be difficult for some students to see how this
more complicated problem is just a grown-up version of the two- and three-attribute automobile example.
The text does go to some lengths to develop specific points (how to deal with a multi-level objectives
hierarchy, how to assess a trade-off for dollars versus an aggregate of other attributes), but the instructor
may wish to devote some class time to full development of the library example or one similar.
A few problems are of special note. Problem 16.25 introduces equity issues, and shows how an additive
model may be inappropriate. Problems 16.22 - 16.24 present Net Present Value as a version of the additive
value function and show that, if one must consider risk, then NPV may not work very well. These problems
can serve as the basis for an interesting class discussion, especially if some of the students have been
exposed to the idea of a risk-adjusted discount rate. For a follow-up problem see problem 16.8, and for
more reading consult Chapter 9 in Keeney and Raiffa (1976).
If you are familiar with the first edition of the textbook, Chapter 16 has been completely rewritten. Part of
the reason was the incorporation of Value-Focused Thinking in the first section of the book. But the main
motivation was simply to improve the presentation and organization of the material. The chapter now
introduces and focuses squarely on the additive value function, rather than dancing around it as was done in
the first edition. The discussion of indifference curves and the marginal rate of substitution has been
expanded. The presentation of swing weights has been improved. Regarding software and DecisionTools,
students are referred to Chapter 4 where we explained how to incorporate multiple objectives into
PrecisionTree.
Topical cross-reference for problems
Assessment lottery for trade-offs: 16.10, 16.11
Constructionist view: 16.1
Equity: 16.25
Fundamental objectives hierarchy: The Satanic Verses, Dilemmas in Medicine, A Matter of Ethics
Indifference curve: 16.3, 16.12, 16.14
Influence diagram: The Satanic Verses
Interaction between attributes: 16.28
Linked decision trees: 16.23-16.26
Multiattribute dominance: 16.4, 16.14, 16.16
Net present value: 16.22-16.24
Personal tradeoff assessment: 16.12, 16.17-16.21
PrecisionTree: 16.23-16.26
Pricing out: 16.12, 16.14, A Matter of Ethics
Proportional scores: 16.6, 16.12
Risk to life and limb: 16.5, 16.25, FDA
Sensitivity analysis: 16.26, 16.27
Swing weights: 16.8, 16.9
Solutions
16.1. The existence of multiple objectives is one of the factors that can complicate decisions as discussed in
Chapter 1. Anyone who has had to make such a decision knows how difficult it can be to think about the
issues; as we ruminate on the issues, and as people bring up different points, the importance of the various
objectives may appear to change. This fluidity of our perceptions is a good reason to address decisions with
multiple objectives carefully and in a systematic way.
16.2. The first phase includes identification of the objectives and development of a set of attributes which
can be used to measure accomplishment toward the objectives. This initial phase generally results in the
specification of a fundamental-objectives hierarchy. The second phase requires the decision maker to assess
weights for the attributes and utilities on the attributes for the different alternatives. This phase culminates
in the calculation of overall utility for the alternatives.
16.3. An indifference curve is a set of points on a graph that represent alternatives (or consequences) that
are equivalent in terms of overall utility. Because all such alternatives have the same overall utility, the
decision maker should be indifferent among them.
16.4. Alternative A dominates alternative B if A performs better than B on all attributes.
16.5. a. “Twice as important” has little meaning without some indication of real measurements being used.
For example, the phrase could be used to mean that the dollars being spent currently on safety are
accomplishing twice as much, in terms of employee satisfaction overall, as the dollars being spent on other
benefits. Alternatively, the phrase could mean that another dollar spent on safety will be worth (in some
sense) two dollars spent on other benefits. Or, the committee member making the statement could just be
emphasizing the importance of safety and really cannot justify the quantification “twice.”
In a decision-analysis approach, it is important to realize that “twice as important” has a very specific
meaning. That is, it usually refers to one weight being twice as large as another. Thus, an attribute that is
“twice as important” is one in which going from worst to best yields twice the increase in satisfaction as
going from worst to best on the less important attribute. Thus, relative importance is tied intimately to the
best and worst available alternatives or consequences.
b. One could analyze safety costs and determine the (rough) cost of reducing the risk of fatality or injury by
a certain amount. Now every policy can be viewed in terms of an expenditure of dollars on either safety
measures or insurance benefits. “Twice as important” could mean that a dollar spent on safety measures
yields twice the overall benefit to the employees as a dollar spent on insurance benefits.
A more complete decision-analysis approach would look at the available alternatives for improving safety
on one hand or insurance on the other. The increase in employee satisfaction from worst to best on safety
should be twice the increase in satisfaction resulting from moving from worst to best on insurance benefits.
16.6. Proportional scores are just linear transformations of original values. That is, they involve taking the
original score and multiplying by a constant and adding a constant:
U(x) = [1/(best value - worst value)] x - [worst value/(best value - worst value)]
= (x - worst value)/(best value - worst value).
Plotting the proportional score against the original score will always produce a line. When used to model
preferences, lines always represent risk-neutral preferences.
16.7. Let S = Sunny, C = Cloudy, and R = Rainy. We have S = 2C and C = 3R, so we might let R=1, C=3,
and S=6. Scaling these, we need to find a and b so that
a + b(1) = 0
and
a + b(6) = 1.
Solving these two equations for a and b gives a = -0.2 and b = 0.2. Thus,
S(1) = -0.2 + 0.2(1) = 0.0
S(3) = -0.2 + 0.2(3) = 0.4
S(6) = -0.2 + 0.2(6) = 1.0.
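A few lines of Python sketch this rescaling (using the ratio values R = 1, C = 3, S = 6 from above):

raw = {"Rainy": 1, "Cloudy": 3, "Sunny": 6}      # ratio-scale values from the assessments
lo, hi = min(raw.values()), max(raw.values())
b = 1 / (hi - lo)                                # solve a + b*lo = 0 and a + b*hi = 1
a = -b * lo
print({k: a + b * v for k, v in raw.items()})    # Rainy 0.0, Cloudy 0.4, Sunny 1.0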
16.8. Swing weights are based on the relative value of moving from worst to best on each separate attribute.
By making these assessments, the weights can be calculated. Moreover, the ratio of two weights can be
interpreted as the relative value of moving from worst to best on the two attributes.
16.9. We have
kA = 0.70 kB
kA + kB = 1.
By substituting,
kA = 0.70 (1 - kA)
Solving now for kA and kB, we obtain kA = 0.41 and kB = 0.59.
16.10. Lottery weights are assessed by considering lotteries where one obtains either the best possible consequence (best on all attributes) or the worst possible (worst on all attributes). The alternative to the lottery is a sure consequence that is worst on all attributes except one — call it attribute A. Because the utility function is scaled so that U(best possible) = 1 and U(worst possible) = 0, if p is the probability of winning the best possible outcome that makes the decision maker indifferent between the lottery and the sure consequence, then U(best on A, worst on everything else) = p. But the additive utility function also implies that U(best on A, worst on everything else) = kA. Thus, we have kA = p.
16.11. From the section titled “Assessing Weights: Lotteries,” the first two assessments give k1 = 0.25 and
k2 = 0.34. Thus, k3 = 1- 0.25 - 0.34 = 0.41.
16.12. a, b. Answers to this question will vary somewhat because the question asks for subjective
judgments. However, for most people, the answer to part a will be greater than the answer to part b. For
example, someone might pay $50 to move from one mile away to the apartment next to campus, but only
$10 to move from the 4-mile-distant apartment to the one that is 3 miles away.
c. The utilities typically are not “linear” or proportional. That is, each mile closer is not worth exactly the
same incremental amount of rent. Thus, the proportional scoring technique would not be appropriate
because it assumes that one mile closer is always worth the same amount.
d.
[Graph: rent (from $220 to $300) plotted against miles from campus (1 to 5).]
16.13. a. Think about the trade-offs. How much should your friend be willing to pay for another unit of
reliability?
b. Many definitions are possible. One might be the number of days in the shop per year. This is an
uncertain quantity, and so it may be important to take risk into account in assessing the trade-off between
price and reliability.
c. The lottery method will allow your friend to incorporate risk attitude into the tradeoff. Swing weights
and pricing out could also be used.
16.14. a. Machine C is dominated by Machine B. Thus, C may be eliminated from the analysis.
b. An extra day of reliability is worth $180. Assuming proportionality for the utilities, and that the
computer will last for two years, Machine A, with price $1000 and expected down time of 4 days per year
(or 8 for the two years combined) would be equivalent to hypothetical Machine E for $1360 and expected
down time of 3 days per year (6 total for the two years). Because A is best on price but worst on reliability,
UA = kP(1) + kR(0) = kP
Proportional scores can be calculated for Machine E:
UP(E) = (1360 - 1750)/(1000 - 1750) = 0.52
UR(E) = (3 - 4)/(0.5 - 4) = 0.2857
Now, because Machines A and E are equivalent, we can set up the equation
kP = kP (0.52) + kR (0.2857).
With the condition that kP + kR = 1, we can solve these two equations for kP and kR to get
kP = 0.3731 and kR = 0.6269.
c. Using kP = 0.3731 and kR = 0.6269, we can calculate overall utility for Machines A, B, and D:

U(A) = 0.3731 (1) + 0.6269 (0) = 0.3731
U(B) = 0.3731 [(1300 - 1750)/(1000 - 1750)] + 0.6269 [(2 - 4)/(0.5 - 4)] = 0.3731 (0.60) + 0.6269 (0.57) = 0.5812
U(D) = 0.3731 (0) + 0.6269 (1) = 0.6269.
Machine D is the choice. D provides 3 more expected trouble-free days over the upcoming two years than does B, but at an incremental cost of only $400, less than the equivalent $540 implied by the assessed trade-off of $180 per extra day.
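The weight assessment and utility calculations can be reproduced with the Python sketch below. It assumes the machine data implied by the worked numbers (prices from $1000 to $1750, expected down time from 0.5 to 4 days per year, Machine B at $1300 and 2 days, and the hypothetical Machine E at $1360 and 3 days):

P_BEST, P_WORST = 1000, 1750      # price range ($)
D_BEST, D_WORST = 0.5, 4          # expected days in the shop per year

def u_price(p):
    return (p - P_WORST) / (P_BEST - P_WORST)

def u_days(d):
    return (d - D_WORST) / (D_BEST - D_WORST)

# Machine A (best price, worst reliability) is judged equivalent to hypothetical Machine E
uE_p, uE_d = u_price(1360), u_days(3)
kP = uE_d / (1 - uE_p + uE_d)     # from kP = kP*uE_p + kR*uE_d with kP + kR = 1
kR = 1 - kP
print(round(kP, 4), round(kR, 4))                 # about 0.3731 and 0.6269

for name, (price, days) in {"A": (1000, 4), "B": (1300, 2), "D": (1750, 0.5)}.items():
    print(name, round(kP * u_price(price) + kR * u_days(days), 4))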
d. [Graph: expected days per year in the shop (0.5 to 4) plotted against price ($1000 to $2000) for Machines A, B, C, and D.]
e. Other considerations might include service quality, cost of service, warranty, probability of a
“catastrophic” failure (e.g., loss of important files, destruction of machine through electrical failure, etc.).
16.15. When the utilities range from zero to one, the weights, assessed by pricing out, swing weights, or
lotteries, have very precise meanings and hence are easier to assess and interpret with confidence. For
example, with the utilities ranging from zero to one, the weight on an attribute can be interpreted as:
a. The indifference probability if lotteries are used (Figure 16.11),
b. The “swing” proportion if swing weights are used,
c. The “worth” of the attribute relative to other attributes if pricing-out is used.
16.16. Here are the possibilities:
Contractor   Cost   Time
A            100     20
B             80     25
B'           100     18
C             79     28
C'            83     26
D             82     26
Note that the developer is indifferent between B and B' on one hand and between C and C' on the other,
based on the information in the problem.
Note also that D is better than C' because D is cheaper for the same time. Since the developer is indifferent between C and C', D is also better than C. But B is better than D, because B is both cheaper and quicker. That leaves A. B' is better than A (B' is quicker for the same cost), and because B is equivalent to B', B is also better than A. B gets the job.
Some students try to work this problem by calculating trade-off weights. The problem with this approach is
that the C-C' judgment gives different weights than the calculations based on B-B'. The trade-off rates are
not constant.
16.17 - 16.20. These four problems are obviously matters of subjective assessment. In each one, students
are required to go through the entire process of identifying objectives and their corresponding attributes,
assessing weights and individual utilities, and finally comparing alternatives. The problems themselves
provide considerable guidance. Careful thought is necessary in these problems. Having students work one
or more of these four problems as a “project” for the course is a worthwhile and engaging assignment.
16.21. No, but it couldn’t hurt as long as the decision maker is careful to consider switching from the
Portalo to the Norushi. The advantage of moving systematically along the curve created by the points
representing the alternatives is that the curve typically has a convex shape, indicating diminishing marginal
returns. That is, if you want a still longer life span, you have to fork over even more money per year (of life
span). Thus, as we move along the curve, increasingly higher tradeoff rates are necessary to switch to the
next alternative. At some point, switching to the next alternative is no longer preferred. Because of the
convex shape of the curve, it is also clear that the decision maker would not switch to the subsequent
alternatives, as they require still higher tradeoff rates. Thus, considering the switches systematically is an
efficient way to think through the alternatives, providing an appropriate rule for stopping the series of
comparisons.
16.22. a. The attributes are cash flows xi, and the weights are discount factors 1/(1 + r)^i. Note that the weights and attributes are combined linearly.
b. Assuming that the cash flows occur at the end of the period, we have the following:
Riskless alternative:

NPV = -$20,000 + $10,000/1.09 + $10,000/1.09² + $10,000/1.09³ = $5313.

For the risky alternative:

NPV1 = -$20,000 + $15,000/1.09 + $15,000/1.09² + $15,000/1.09³ = $17,969
NPV2 = -$20,000 + $5,000/1.09 + $5,000/1.09² + $5,000/1.09³ = -$7344
Thus, E(NPV) = 0.5 ($17,969) + 0.5 (-$7344) = $5313, which is the same as for the riskless alternative.
Thus, NPV appears to be a risk-neutral decision criterion. See file “Problem 16.22.xlsx.”
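A brief Python sketch of the comparison (end-of-period cash flows, r = 9%, figures as given in the problem):

def npv(cash_flows, rate=0.09, initial=-20_000):
    # NPV with end-of-period cash flows following an up-front outlay
    return initial + sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

riskless = npv([10_000] * 3)
e_npv_risky = 0.5 * npv([15_000] * 3) + 0.5 * npv([5_000] * 3)
print(round(riskless), round(e_npv_risky))    # both are about 5313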
c. We might assess a utility function for wealth and use it instead of cash flow. It would also be reasonable
to use a higher interest rate for the risky project (a risk-adjusted discount rate), which would decrease the
NPV for the risky alternative. However, assessing the risk-adjusted discount rate is not a simple matter.
Even though it is theoretically possible to match the risky project up with specific securities in a market and
deduce from those securities an appropriate interest rate, actually doing so may be quite complicated.
Moreover, identifying a matching security or portfolio requires substantial subjective judgments regarding
the cash flows and uncertainty about both the risky project and the matching security.
16.23. a. The answer requires a subjective assessment. Most students will prefer B on the grounds that it is
less risky. It is certain that the total payoff will be $10,100, but unclear whether the $10,000 will come in
the first year or the second.
b. Assuming end-of-period cash flows, E(NPV) is equal to $8883.51 for both A and B:
Project A:

NPVA1 = $10,000/1.09 + $10,000/1.09² = $17,591.11
NPVA2 = $100/1.09 + $100/1.09² = $175.91
E(NPVA) = 0.5 ($17,591.11) + 0.5 ($175.91) = $8883.51

Project B:

NPVB1 = $10,000/1.09 + $100/1.09² = $9258.48
NPVB2 = $100/1.09 + $10,000/1.09² = $8508.54
E(NPVB) = 0.5 ($9258.48) + 0.5 ($8508.54) = $8883.51
Because they both have the same E(NPV), a decision maker using E(NPV) as a decision criterion would be
indifferent between them. This decision tree is shown in the first worksheet of the Excel file “Problem
16.23.xlsx.” The decision tree is a linked tree where the outcome values are linked to the NPV calculations
in the spreadsheet.
c. Using the logarithmic utility function, we have
U($10,000) = ln(10,000) = 9.21
U($100) = ln(100) = 4.61

Again assuming end-of-period cash flows,

Project A:

NPUA1 = 9.21/1.09 + 9.21/1.09² = 16.20
NPUA2 = 4.61/1.09 + 4.61/1.09² = 8.11
E(NPUA) = 0.5 (16.20) + 0.5 (8.11) = 12.16

Project B:

NPUB1 = 9.21/1.09 + 4.61/1.09² = 12.33
NPUB2 = 4.61/1.09 + 9.21/1.09² = 11.98
E(NPUB) = 0.5 (12.33) + 0.5 (11.98) = 12.16
Again, both projects have the same net present utility, so the decision maker using this decision criterion
would be indifferent. This decision tree is shown in the second worksheet of the Excel file “Problem
16.23.xlsx.” The tree is a linked tree where the outcome values are linked to the NPU calculations in the
spreadsheet.
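Both the E(NPV) and net-present-utility calculations can be checked with a short Python sketch (end-of-period cash flows, r = 9%):

import math

def discounted_sum(values, rate=0.09):
    # Discount a sequence of end-of-period values (cash flows or utilities)
    return sum(v / (1 + rate) ** t for t, v in enumerate(values, start=1))

# Each project is a 50-50 gamble over two possible (year 1, year 2) cash-flow streams
projects = {"A": ([10_000, 10_000], [100, 100]), "B": ([10_000, 100], [100, 10_000])}

for name, (stream1, stream2) in projects.items():
    e_npv = 0.5 * discounted_sum(stream1) + 0.5 * discounted_sum(stream2)
    e_npu = 0.5 * discounted_sum([math.log(x) for x in stream1]) \
          + 0.5 * discounted_sum([math.log(x) for x in stream2])
    print(name, round(e_npv, 2), round(e_npu, 2))   # both projects: about 8883.51 and 12.16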
d. No, these calculations are not consistent with preferences in which either A or B is preferred. The utility
function is not capturing the interactions among the annual cash flows. In particular, it is not recognizing
that the cash flows in different periods can in some way act as substitutes for each other.
16.24.

Alternative A:

Year 1     Year 2     NPV        U
$10,000    $10,000    $17,591    0.97
$100       $100       $176       0.03
EU(A) = 0.50

Alternative B:

Year 1     Year 2     NPV        U
$10,000    $100       $9,258     0.84
$100       $10,000    $8,509     0.82
EU(B) = 0.83
Now it is clear that Alternative B is preferred to A. Thus, by using the exponential utility function, we have
been able to incorporate a risk attitude. However, this model assumes that all we care about is the NPV
(and the riskiness of NPV) of any given project, and not the actual pattern of the cash flows.
This spreadsheet model is shown in the Excel file “Problem 16.24.xlsx.” The tree is a linked tree where the
outcome values are linked to the NPV calculations in the spreadsheet, and the calculations are then based
on the expected utility defined by an exponential utility function with an R-value of 5000.
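A minimal Python sketch of the exponential-utility version (R = 5000, NPVs carried over from problem 16.23):

import math

R = 5000    # risk tolerance used in the spreadsheet model

def u(npv):
    # Exponential utility applied to the project's NPV
    return 1 - math.exp(-npv / R)

eu_A = 0.5 * u(17_591.11) + 0.5 * u(175.91)
eu_B = 0.5 * u(9_258.48) + 0.5 * u(8_508.54)
print(round(eu_A, 2), round(eu_B, 2))    # about 0.50 and 0.83, so B is preferred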
16.25. a. The expected number of deaths is 1.0 in each case. This decision tree is shown in the first
worksheet of the Excel file “Problem 16.25.xlsx.” The tree is a linked tree where the outcome values are
linked to the spreadsheet model for total expected deaths.
b. A decision maker may not weight the groups equally. For example, Group 1 may be wealthy
philanthropists, while another may be drug traffickers.
c. The decision tree: each chemical leads to a 50-50 chance node, with single-attribute utilities scaled so that Ui(one death in group i) = 0 and Ui(no deaths in group i) = 1.

Chemical A: with probability 0.5 the death occurs in Group 1 [U1(1) = 0, U2(0) = 1]; with probability 0.5 it occurs in Group 2 [U1(0) = 1, U2(1) = 0].
Chemical B: with probability 0.5 a death occurs in both groups [U1(1) = 0, U2(1) = 0]; with probability 0.5 no deaths occur [U1(0) = 1, U2(0) = 1].
For Chemical A, EA(U) = 0.5[k1 (0) + (1-k1) (1)] + 0.5[k1 (1) + (1-k1) (0)]
= 0.5[k1 (1) + (1-k1) (1)]
= 0.5
For Chemical B, EB(U) = 0.5[k1 (0) + (1-k1) (0)] + 0.5[k1 (1) + (1-k1) (1)]
= 0.5[k1 (1) + (1-k1) (1)]
= 0.5.
Note that the value of k1 makes no difference at all. This decision situation is modeled in the second
worksheet of the Excel file “Problem 16.25.xlsx.” The worksheet is structured so the user can vary the k1
values and see that the value makes no difference at all.
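A tiny Python sketch (using the utilities in the tree above) makes the same point, that the weight k1 is irrelevant here:

def eu_A(k1):
    # 50-50 chance that the death falls in Group 1 or in Group 2
    return 0.5 * (k1 * 0 + (1 - k1) * 1) + 0.5 * (k1 * 1 + (1 - k1) * 0)

def eu_B(k1):
    # 50-50 chance of a death in both groups or in neither group
    return 0.5 * (k1 * 0 + (1 - k1) * 0) + 0.5 * (k1 * 1 + (1 - k1) * 1)

for k1 in (0.1, 0.5, 0.9):
    print(k1, eu_A(k1), eu_B(k1))    # always 0.5 and 0.5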
d. There is an issue of equity in the treatment of the two groups. That is, if each group represents a
substantial proportion of the population or the policy maker’s constituency, they may not appreciate
Chemical A because, after the fact, it will appear that one group was abused while the other was not. Thus,
because of the equity concern, the policy maker may prefer Chemical B.
16.26. a.
UPortalo = 0.45 (1) + 0.55 (0) = 0.45
UNorushi = 0.45 (0.75) + 0.55 (0.50) = 0.613
UStandard = 0.45 (0) + 0.55 (1) = 0.55
Thus, the Norushi would be chosen.
b. The Norushi would be chosen over the Portalo as long as
kL(0.75) + (1 - kL) (0.50) > kL
or
kL < 0.67
Likewise, the Norushi will be chosen over the Standard as long as
kL(0.75) + (1 - kL) (0.50) > (1 - kL)
or
kL > 0.40.
Thus, the choice is the Norushi as long as 0.40 < kL < 0.67. For larger kL, the choice is the Portalo, and for
smaller kL, the choice is the Standard. This decision problem is modeled in the Excel spreadsheet “Problem
16.26.xlsx.” The spreadsheet model allows the user to vary the weights and see the impact on the preferred
decision.
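The same sensitivity analysis can be sketched in a few lines of Python (with the life-span and price utilities taken from part a):

cars = {"Portalo": (1.0, 0.0), "Norushi": (0.75, 0.50), "Standard": (0.0, 1.0)}

def best_choice(k_life):
    # Additive utility with weight k_life on life span and (1 - k_life) on price
    scores = {name: k_life * u_life + (1 - k_life) * u_price
              for name, (u_life, u_price) in cars.items()}
    return max(scores, key=scores.get)

for k in (0.1, 0.3, 0.5, 0.6, 0.7, 0.9):
    print(k, best_choice(k))
# Standard for small k_L, Norushi for roughly 0.40 < k_L < 0.67, Portalo for larger k_L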
16.27. The data from Table 16.6 is saved in the Excel file “Problem 16.27.xlsx.” In the spreadsheet, for
example, we can use sensitivity analysis to vary the weight on "Related costs" where the remaining weight
is distributed to the other objectives according to the original relative ratios. The data from the sensitivity
analysis was aggregated and graphed in the third worksheet. We can see that the optimal choice of site 3 is
not sensitive to the weight placed on "Related costs". This exercise can be repeated for the other weights.
For example, we can see how much change in the numbers is required to make Site 2 preferred over Site 3.
It is virtually impossible to get this result by wiggling the primary weights (those on site size, access,
parking, etc.). It is possible to make Site 2 preferred by wiggling the weights on the individual attributes.
Doing so requires increasing the weight on those attributes on which Site 2 performs better than Site 3 and
lowering weights where Site 3 scores better. However, accomplishing the change in ranking requires
several fairly radical changes. It is unlikely that the library committee members would credit such extensive
and radical changes in the table.
[Graph: expected utility of Sites 1 through 4 plotted against the weight on cost, from 0 to 1.]
16.28. From problem 16.11 we had k1 = 0.25, k2 = 0.34, and k3 = 1 - 0.25 - 0.34 = 0.41. However, the
assessment in this problem indicates that k3 = 0.18, which is quite different from 0.41. This would suggest
that the overall utility (or additive value function) that we have developed in Chapter 16 is not appropriate
for modeling preferences in this case. A multiplicative utility function might be used, though, as discussed
in Chapter 16.
Case Study: The Satanic Verses
1. A possible hierarchy for a bookstore owner:
- Maximize profit (short term; long term)
- Maximize safety (self; employees; customers; property)
- Do not submit to threats
2. The available alternatives include (but are not limited to):
• Sell the book openly.
• Do not sell the book at all.
• Stop selling the book for a period of time.
• Sell the book but do not display it.
• Sell the book out of the back room.
3. The risks associated with selling the book include the risk of retaliation by Moslem activists. This risk is
slightly different if an attempt is made to sell the book covertly. While the probability of retaliation is
lower, a covert sales program, if discovered, might actually lead to harsher retaliation. On the other hand, if
the choice is made not to sell the book, there is uncertainty associated with the lost revenues. To some
degree, this risk exists with any covert sales program as well.
A simple influence diagram for this problem would include the decision node Action, chance nodes Demand and Retaliation, and consequence nodes Revenue, Damage, and Profit.
Case Study: Dilemmas in Medicine
1. The four principles described in the case can be stated as objectives which a physician should work
toward. Clearly, though, these objectives sometimes conflict. Here is a possible fundamental-objectives
hierarchy:
- Maximize beneficence (maximize long-term health; attribute: health)
- Maximize nonmaleficence (minimize short-term pain; attribute: pain)
- Maximize justice (access to treatment; use of scarce resources)
- Maximize autonomy (patient; relatives)
2. The primary conflicts are:
a. Nonmaleficence versus beneficence. In many cases, the extraordinary measures taken to save a
struggling infant may be extremely painful to the patient. Some commentators have described
treatment of such infants as torture. Of course, the patient cannot articulate any feelings.
b. Beneficence versus autonomy. Doctors may feel an urge to attempt to save an infant’s life while the
parents, acting on the child’s behalf, may prefer to withhold extraordinary treatment on the grounds
that the child is likely to have severe impairments and perhaps a short life.
Of course, other conflicts are also possible. For example, nonmaleficence may conflict with autonomy if
parents insist on a highly risky or painful treatment for an infant.
3. The two objectives hierarchies are likely to differ considerably. The most important difference is that
you must consider possible legal ramifications of your involvement.
For the patient:
- Maximize personal welfare (pain; fear; medical complications)
- Maximize others’ welfare (short-term grief; long-term emotional involvement; wealth or estate)

For yourself:
- Minimize legal risk
- Maximize personal welfare (grief)
- Maximize the patient’s welfare (respect the patient’s wishes; make the last time as comfortable as possible)
Case Study: A Matter of Ethics
1.
- Minimize overcharging
- Maximize income/wealth (income; savings)
- Maximize R & D potential
The hierarchy shows the fundamental objectives in which Paul appears to be interested.
2. Apparently, Paul’s current situation is not so bad that he would be willing to reduce his savings by
$4800. Note, though, that staying at his job has opposite effects on two objectives, minimizing
overcharging and maximizing R & D. Thus, it is both the reduction of overcharging along with the
reduction in R & D that together are worth less than $4800 to Paul.
Case Study: FDA and the Testing of Experimental Drugs
Note: Students may wish to do some research or reading on this topic before writing answers or class
discussion.
1, 2. If we keep a drug off the market for extra testing, the risk of subsequent users developing
unanticipated side effects is reduced. But benefits from the drug will also be reduced due to later release.
Some lives may be lost, some patients may suffer more, or more work days may be lost. From still another
perspective, fewer or shorter tests may mean a savings of tax dollars.
3, 4. These questions are meant to lead students to compare their feelings about lives lost (AIDS victims)
versus suffering from pain (arthritis victims). In fact, rheumatoid arthritis rarely, if ever, is directly
responsible for a victim’s death. However, far more people suffer from arthritis than from AIDS. Is the
aggregate suffering due to arthritis worth more or less to you than the lives of 200 AIDS victims? Why? If
these seem like callous questions, remember that the federal government allocates millions of dollars to
medical research and must make choices about how much to allocate to research on each of a wide variety
of diseases. Implicit trade-offs such as the ones in this problem must be made constantly.
CHAPTER 17
Conflicting Objectives II: Multiattribute Utility Models
with Interactions
Notes
This final chapter deals with multiattribute utility models in a way that is fully consistent with Keeney and
Raiffa (1976). Compared to other chapters in the book, the material is relatively dense, both conceptually
and technically. If a reader has made it through the rest of the book, though, he or she should be well
prepared to tackle multiattribute utility.
The key concepts are the ideas of independence: preferential independence, utility independence, and
additive independence. It is the application of these notions that makes the construction of a multiattribute
utility function a feasible judgmental task. Thus, these topics are introduced and thoroughly discussed
before the blood bank example. Furthermore, instructors are encouraged to discuss these concepts
thoroughly during class time. If students have been exposed to polynomial regression analysis including
interaction terms, useful parallels can be drawn to the multiattribute utility model.
For those familiar with the first edition, instructors should note a notational change; I now use x+ and x- instead of x1 and x0 for the best and worst values of attribute X. Also, the BC Hydro case has been added at the end of the chapter as an example that draws together all of the multiattribute material in the text.
Topical cross-reference for problems
Interaction between attributes: 17.1
Preferential independence: 17.3, 17.4
Utility independence: 17.4, 17.5, 17.10, 17.11
Additive independence: 17.5, 17.7, 17.12
Substitutes and Complements: 17.6-17.9, Mining Investment
Time value of money: 17.8
Equity: 17.9
Personal multiattribute assessment: 17.10, 17.11
Multiplicative utility function: 17.13, 17.15
Sensitivity analysis: 17.14
Stochastic dominance: Mining Investment
Solutions
17.1. Attributes interact when preferences for outcomes on one attribute depend in some way on the level
of another attribute. In particular, because we often are concerned about decision making under uncertainty,
interaction among attributes in a utility sense means that preferences for lotteries over outcomes of one
attribute may differ when lotteries over other attributes change. Another way to say this is that your
certainty equivalent for a specific lottery in attribute A may depend on exactly what uncertainty you face
regarding attribute B.
For example, consider a job choice. The risks you are willing to take regarding salary may depend on the
chances associated with advancement. Suppose that two jobs offer the same prospects in terms of future
raises in salary. One job, though, is with a smaller firm, and gives potential for both more control in the
firm as well as possibly losing the job altogether. The other job is a relatively stable one with a much more
gradual promotion path. An individual might find the salary uncertainty more acceptable (accord it a higher
CE) when it is accompanied with the riskier promotion path, because the combination of a higher salary
and a lot of control in the company is a much preferred alternative as compared to either a lot of control
alone or a high salary alone. In this sense, the two attributes interact in a complementary way.
The additive utility function from Chapter 15 does not permit interaction. To see this, refer back to
problems 16.23 and 16.25. In both of these examples, two attributes were identified and studied in the
context of problems where the two attributes were both in lotteries. In each case, the argument was made
that the attributes might interact in a certain way, but the weighted scoring technique did not allow one to
model the interaction.
17.2. The main advantage is that the assessments are very straightforward and require the same kinds of
judgments that were used in single-attribute utility assessment. However, the disadvantages are that many
assessments may be needed, the judgmental task of thinking about multiple attributes at once may be
difficult, and, if indifference curves are drawn by hand, the “eyeballing” process may introduce some
imprecision.
17.3. Preferential independence means that preferences over sure consequences in one attribute do not
depend on sure consequences in another. For most of us, preferences over cost and quality of goods that we
purchase are preferentially independent. Imagine a group of products that all have the same price but differ
in specific and known ways on quality. Your rankings of preferences in terms of quality probably would
not depend on the common price; better quality is preferred. Likewise, if quality were fixed and price were
varied, your rankings of preference over prices would not vary; lower prices would be better.
Examples of attributes that are not preferentially independent are more difficult to find. A rather contrived
but simple example involves attributes of your experience at a football game. The attributes in question
would be beverages and weather. On a cold day you might prefer a hot drink, but on a warm day you might
prefer something cold. Regardless of the beverage on hand, though, you might prefer a warm day to a cold
one.
17.4. Consider two attributes, A and B. Preferential independence means that preferences over sure
consequences in A do not depend on sure consequences of B; no matter where B is set, the preferences
(rankings) over sure consequences in A are the same. In contrast, utility independence means that we are
looking at lotteries over A. If A and B are utility independent, then preferences over lotteries (or CEs) in A
do not change with different sure consequences in B.
17.5. This question follows up on 17.4. With additive independence, preferences for lotteries in Attribute A
do not depend on lotteries in attribute B. For utility independence, the requirement is that preferences over
lotteries in A not change with different sure consequences in B.
17.6. a. The assessments indicate that kX = 0.48 and kY = 0.67. The two-attribute utility function is U(x, y)
= 0.48UX(x) + 0.67UY(y) - 0.15UX(x)UY(y).
b. The two attributes would appear to be substitutes. That is, one would most likely prefer a lottery with an
even chance of (best support, worst reliability) and (worst support, best reliability) to a lottery with an even
chance of (best support, best reliability) and (worst support, worst reliability). This is consistent with the
negative value (-0.15) of the interaction coefficient.
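For instructors who want a quick numerical check of this interpretation, the Python sketch below (not part of the text's solution; the 0-to-1 scaling of the single-attribute utilities is an assumption for illustration) evaluates the multilinear utility function from part (a) and compares the two 50/50 lotteries just described.

```python
# Minimal sketch (not from the text): the multilinear utility function assessed
# in part (a), with single-attribute utilities scaled so 0 = worst and 1 = best.
k_x, k_y = 0.48, 0.67
k_int = 1 - k_x - k_y                       # interaction coefficient = -0.15

def u(u_support, u_reliability):
    return k_x * u_support + k_y * u_reliability + k_int * u_support * u_reliability

# "Mixed" lottery: 50/50 between (best support, worst reliability)
# and (worst support, best reliability).
eu_mixed = 0.5 * u(1, 0) + 0.5 * u(0, 1)    # 0.575

# "Extreme" lottery: 50/50 between (best, best) and (worst, worst).
eu_extreme = 0.5 * u(1, 1) + 0.5 * u(0, 0)  # 0.5

print(eu_mixed > eu_extreme)                # True: consistent with substitutes
```

Because the mixed lottery has the higher expected utility, the negative interaction coefficient does indeed correspond to substitutes.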
17.7. From Assessment 1, EU(A) = EU(B), or
kR (0.5) + kP (0.5) + (1 - kR - kP) (0.5) (0.5)
= 0.65 [kR (1) + kP (0) + (1 - kR - kP) (1) (0)]
+ 0.35 [kR (0) + kP (1) + (1 - kR - kP) (0) (1)]
= 0.65 kR + 0.35 kP
This equation reduces to
0.25 = 0.40 kR + 0.10 kP
Assessment 2 is a standard assessment lottery and implies that kP = 0.46. Substituting this into the equation
above and solving gives kR = 0.51. Thus, 1 - kR - kP = 0.03. The positive sign for the interaction term
implies that these two attributes are complements. However, the interaction effect is quite small. An
additive model would probably work reasonably well. (As long as we have the full model, though, we
might as well use it!)
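If instructors want to show the arithmetic explicitly, a minimal Python sketch (an assumed illustration, not the text's procedure) recovers kR and the interaction term from the reduced equation and Assessment 2:

```python
# Minimal sketch (not the text's procedure): combine Assessment 1's reduced
# equation 0.25 = 0.40*kR + 0.10*kP with kP = 0.46 from Assessment 2.
k_p = 0.46
k_r = (0.25 - 0.10 * k_p) / 0.40            # 0.51
k_interaction = 1 - k_r - k_p               # 0.03, a small positive (complementary) term
print(round(k_r, 2), round(k_interaction, 2))
```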
17.8. a. The first thing to do is calculate utilities for the various consequences:
UX(5000) = 0
UX(7500) = 0.412
UX(18,000) = 0.972
UX(20,000) = 1.00
UY(5000) = 0
UY(7500) = 0.289
UY(20,000) = 1.00
The first assessment is that $7500 for each of the two years would be equivalent to a 50/50 gamble between
$20,000 each year and $5000 each year. This is a standard assessment of some intermediate consequence
against a gamble involving the worst ($5000; $5000) and the best ($20,000; $20,000). Thus, U(7500, 7500)
= 0.50, or
kX UX(7500) + kY UY(7500) + (1 - kX - kY) UX(7500) UY(7500) = 0.50.    (1)
Assessment 2 indicates that ($18,000; $5000) would be just as good as ($5000; $20,000). Therefore, we can
write
kX UX(18,000) + kY UY(5000) + (1 - kX - kY) UX(18,000) UY(5000)
= kX UX(5000) + kY UY(20,000) + (1 - kX - kY) UX(5000) UY(20,000).
This reduces to
kX UX(18,000) = kY.    (2)
Equations (1) and (2) are now two linear equations in two unknowns, kX and kY. Substituting in for the
utilities, these two equations become
kX (0.412) + kY (0.289) + (1 - kX - kY) (0.412) (0.289) = 0.50    (1')
kX (0.972) = kY    (2')
Solving these two equations simultaneously gives kX = 0.832 and kY = 0.809. These constants, coupled
with the individual exponential utility functions, specify the full two-attribute utility function:
U(x, y) = 0.832 [1.05 - 2.86 (e-x/5000)] + 0.809 [1.29 - 2.12 (e-y/10,000)]
- 0.641 [1.05 - 2.86 (e-x/5000)] [1.29 - 2.12 (e-y/10,000)].
Because (1 - kX - kY) < 0, the two attributes can be viewed as substitutes for each other. This makes sense,
because the cash flows in different years can, to a great extent, act as substitutes.
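The two linear equations can also be solved mechanically. The sketch below (an assumed NumPy illustration, not part of the text) rearranges (1') and (2') into matrix form and solves for kX and kY; small rounding differences from the values reported above are possible.

```python
import numpy as np

# Minimal sketch (assumed illustration): equations (1') and (2') written as a
# 2x2 linear system and solved numerically.
ux, uy = 0.412, 0.289                       # UX(7500) and UY(7500)
c = ux * uy                                 # constant generated by the (1 - kX - kY) term

A = np.array([[ux - c, uy - c],             # (1'): kX(ux - c) + kY(uy - c) = 0.50 - c
              [0.972, -1.0]])               # (2'): 0.972*kX - kY = 0
b = np.array([0.50 - c, 0.0])

k_x, k_y = np.linalg.solve(A, b)
print(round(k_x, 3), round(k_y, 3), round(1 - k_x - k_y, 3))
# ≈ 0.832, 0.808, -0.640 (matching the text's 0.832 and 0.809 up to rounding)
```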
b.
Alternative A:
  Year 1     Year 2     Probability    U(x, y)
  $10,000    $10,000    0.50            0.75
  $100       $100       0.50           -3.02
  EU(A) = -1.14

Alternative B:
  Year 1     Year 2     Probability    U(x, y)
  $10,000    $100       0.50            0.24
  $100       $10,000    0.50           -0.47
  EU(B) = -0.12
Thus, B is clearly the preferred alternative with the higher expected utility.
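The expected utilities above follow directly from the utility function assessed in part (a). A minimal Python sketch (an assumed illustration, not from the text) reproduces the table:

```python
from math import exp

# Minimal sketch (assumed illustration): the assessed two-attribute utility
# function applied to the four consequence pairs in the table above.
def u_x(x):
    return 1.05 - 2.86 * exp(-x / 5000)

def u_y(y):
    return 1.29 - 2.12 * exp(-y / 10000)

def u(x, y):
    return 0.832 * u_x(x) + 0.809 * u_y(y) - 0.641 * u_x(x) * u_y(y)

eu_a = 0.5 * u(10000, 10000) + 0.5 * u(100, 100)   # ≈ -1.14
eu_b = 0.5 * u(10000, 100) + 0.5 * u(100, 10000)   # ≈ -0.12
print(round(eu_a, 2), round(eu_b, 2))              # B has the higher expected utility
```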
c. [Figure: indifference curves for U(x, y) = 0.25, 0.50, and 0.75, with Year 1 cash flow (approximately $4,000 to $18,000) on the horizontal axis and Year 2 cash flow ($0 to $10,000) on the vertical axis.]
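Instructors who want to reproduce the plot can exploit the fact that, for fixed x, U(x, y) is linear in UY(y), so each indifference curve can be traced in closed form. The sketch below (an assumed illustration, not from the text) finds points on the U(x, y) = 0.50 curve:

```python
from math import exp, log

# Minimal sketch (assumed illustration): for fixed x, U(x, y) is linear in UY(y),
# so a point on the U(x, y) = level indifference curve has a closed form.
def u_x(x):
    return 1.05 - 2.86 * exp(-x / 5000)

def y_on_curve(x, level):
    a = 0.832 * u_x(x)                        # part of U(x, y) that does not involve y
    b = 0.809 - 0.641 * u_x(x)                # coefficient multiplying UY(y)
    uy = (level - a) / b                      # required value of UY(y)
    return -10000 * log((1.29 - uy) / 2.12)   # invert UY(y) = 1.29 - 2.12*exp(-y/10000)

for x in (6000, 8000, 10000, 12000):
    print(x, round(y_on_curve(x, 0.50)))      # points on the U = 0.50 curve
```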
17.9. The value of 1 - kX - kY should be positive. Attributes X and Y appear to be complements because the
decision maker would prefer that X and Y be similar.
17.10. This problem requires students to work through a full utility assessment problem for choosing
among computers. The problem itself provides guidance.
17.11. This problem requires students to assess a two-attribute utility function for salary and community
population size for making a choice from among job offers. Answers will of course vary.
a. Determine whether preferences are mutually utility independent by following the procedures in the text.
See also problem 17.10a.
b. Assess the two individual utility functions and the weights kX and kY . Do this using the standard
procedures described in Chapter 17. Students should eventually produce a graph showing indifference
curves. The slope of the indifference curves will depend on whether individuals prefer small towns, large
cities, or something in between. For individuals who prefer a medium-size community, indifference curves
may actually be U-shaped:
[Figure: U-shaped indifference curves with Salary on the vertical axis, Community size on the horizontal axis, and an arrow indicating increasing preference.]
c. Other attributes that may be important: promotion potential, kind of work, distance from relatives,
geographic location, and so on.
17.12. a. This problem requires students to check their assessments for additive independence. Follow the
instructions in the text.
b. The basic idea would be to consider subsets of the attributes. Suppose that there are n attributes, and we
divide them into subsets A with nA attributes and B with nB attributes. Now set up two lotteries. One lottery
is a 50-50 chance between 1) best on everything in A and B, and 2) worst on everything in A and B. The
other lottery is a 50-50 chance between 1) best on everything in A and worst on everything in B, and 2)
worst on everything in A and best on everything in B.
Additive independence holds if mutual utility independence holds and the decision maker is indifferent
between these two lotteries no matter how the attributes are split up into sets A and B.
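The key fact behind this check is that an additive utility function responds only to the marginal distribution of each attribute, so it cannot distinguish the two lotteries. The Python sketch below (an assumed illustration; the weights and the A/B split are hypothetical) makes the point concrete:

```python
# Minimal sketch (assumed illustration): an additive utility function cannot
# distinguish the two 50/50 lotteries, whatever the split into subsets A and B.
weights = [0.4, 0.35, 0.25]                   # hypothetical attribute weights (sum to 1)
A = {0}                                       # indices of the attributes placed in subset A
n = len(weights)

def u_add(levels):
    # levels[i] is the scaled single-attribute utility: 0 = worst, 1 = best
    return sum(w * lvl for w, lvl in zip(weights, levels))

# Lottery 1: 50/50 between best-on-everything and worst-on-everything.
eu1 = 0.5 * u_add([1] * n) + 0.5 * u_add([0] * n)

# Lottery 2: 50/50 between (best on A, worst on B) and (worst on A, best on B).
best_A = [1 if i in A else 0 for i in range(n)]
best_B = [0 if i in A else 1 for i in range(n)]
eu2 = 0.5 * u_add(best_A) + 0.5 * u_add(best_B)

print(eu1, eu2)                               # both 0.5: the additive model is indifferent
```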
17.13. This kind of problem is one that is often at the heart of major disputes. The first step is to structure
objective hierarchies for each side; that is, find out what is important to each side. To resolve a dispute, it
is important for each side to acknowledge the validity of objectives that the other side may have. The
output of this first and critical phase should be a hierarchy that captures everyone’s objectives.
The second step is to start exploring the values of the weights on the attributes. It is important at this point
to realize that the different groups may have very different weights. However, if specific alternatives have
been identified, then the different weights can be used to analyze these alternatives. The best possible
situation is one in which the preferred alternative is the same regardless of whose weights are used. If this
is not the case, then the assessed weights can provide guidance in the development of compromises among
the groups. That is, each group may be able to give in a little in areas that are less meaningful (have smaller
weights) while gaining more in areas that are more important.
Case Study: A Mining Investment Decision
1. Based on the expected utilities in Table 17.2, “Bid high with partner” is the optimal strategy. In contrast,
the risk profiles in Figure 17.8 suggest that “Develop own property with partner” stochastically dominates
the other options. The risk profiles, though, only consider NPV. When product output (PO) is also
considered, the ranking changes.
2. Apparently the firm would prefer to have a chance at obtaining both the best of NPV and the best PO,
versus worst on each, as opposed to being sure that one of the two would be maximized. High NPV
certainly means high profits. High PO may mean a higher market share and a stronger competitive position.
With both high PO and high NPV, the firm has a very strong competitive position.
The mining exploration business is indeed risky, characterized by large potential losses and potentially
very high gains. It is not surprising to find that the firm might be somewhat risk-prone and want to go for
the ultimate win: maximizing both NPV and PO.
****************
Chapter 17 Online Supplement: Solutions to Problems
17S.1. When n = 2, k in the multiplicative utility function satisfies
1 + k = (1 + k k1) (1 + k k2) = 1 + k (k1 + k2) + k^2 k1 k2,
so that
k = k [k1 + k2 + k k1 k2]
1 = k1 + k2 + k k1 k2
k = (1 - k1 - k2) / (k1 k2).
Now substitute this into
U(x1, x2) = (k k1 U1(x1) + 1) (k k2 U2(x2) + 1)
= k^2 k1 U1(x1) k2 U2(x2) + k [k1 U1(x1) + k2 U2(x2)] + 1
= [(1 - k1 - k2)/(k1 k2)]^2 k1 U1(x1) k2 U2(x2) + [(1 - k1 - k2)/(k1 k2)] [k1 U1(x1) + k2 U2(x2)] + 1
= [(1 - k1 - k2)/(k1 k2)] (1 - k1 - k2) U1(x1) U2(x2) + [(1 - k1 - k2)/(k1 k2)] [k1 U1(x1) + k2 U2(x2)] + 1
= 1 + [(1 - k1 - k2)/(k1 k2)] [k1 U1(x1) + k2 U2(x2) + (1 - k1 - k2) U1(x1) U2(x2)]
= 1 + k [k1 U1(x1) + k2 U2(x2) + (1 - k1 - k2) U1(x1) U2(x2)].
The large term in square brackets on the right-hand side is the two-attribute multilinear utility function
introduced in Chapter 17. It is multiplied by a constant (k) and has a constant added (1). Thus, as long as
k > 0, the multiplicative utility function is simply a positive linear transformation of the multilinear utility
function, and thus the two must be equivalent.
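A numerical spot-check of this identity may be useful in class. The sketch below (an assumed illustration; the values of k1, k2 and the utility levels are arbitrary, chosen so that k > 0) confirms that the multiplicative form equals 1 + k times the multilinear form:

```python
from math import isclose

# Minimal sketch (assumed illustration): spot-check the identity derived above
# with arbitrary scaling constants k1, k2 (chosen so that k > 0).
k1, k2 = 0.3, 0.5
k = (1 - k1 - k2) / (k1 * k2)                 # scaling constant for n = 2

def multiplicative(u1, u2):
    return (k * k1 * u1 + 1) * (k * k2 * u2 + 1)

def multilinear(u1, u2):
    return k1 * u1 + k2 * u2 + (1 - k1 - k2) * u1 * u2

for u1, u2 in [(0.0, 0.0), (1.0, 0.0), (0.4, 0.7), (1.0, 1.0)]:
    assert isclose(multiplicative(u1, u2), 1 + k * multilinear(u1, u2))
print("The multiplicative form equals 1 + k * (multilinear form) at all test points.")
```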
17S.2. Substitute values for k, kecon, kenv, and kfirm into
(1 + k) = (1 + k kecon) (1 + k kenv) (1 + k kfirm)
(1 + 1.303) = [1 + 1.303 (0.36)] [1 + 1.303 (0.25)] [1 + 1.303 (0.14)]
2.303 = (1.469) (1.326) (1.182) = 2.303.
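The same equation can also be solved for k from scratch rather than merely verified. The sketch below (an assumed illustration; the bisection search is not the text's procedure) checks the reported value and then recovers k ≈ 1.303 numerically:

```python
# Minimal sketch (assumed illustration): verify, then solve for, the scaling
# constant k in (1 + k) = (1 + k*kecon)(1 + k*kenv)(1 + k*kfirm).
ks = [0.36, 0.25, 0.14]                       # kecon, kenv, kfirm

def residual(k):
    prod = 1.0
    for ki in ks:
        prod *= 1 + k * ki
    return prod - (1 + k)

print(round(residual(1.303), 4))              # ≈ 0: the reported k satisfies the equation

# Bisection for the nonzero root (residual < 0 just above 0, > 0 for large k).
lo, hi = 1e-6, 10.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if residual(lo) * residual(mid) <= 0:
        hi = mid
    else:
        lo = mid
print(round(0.5 * (lo + hi), 3))              # ≈ 1.303
```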