Topic 2: Risk, Utility and Decision
Presented by Tony Sit
Term 2, 2024
RMSC2001
Reference:
Autor, D. (2014) Lecture Note 14: Uncertainty, Expected Utility Theory and the Market for Risk –
http://ocw.mit.edu/courses/economics/14-03-microeconomic-theory-and-public-policy-fall2010/lecturenotes/MIT14_03F10_lec14.pdf
Overview
2.1 Introduction
Example – doctor in a village, decision theories, expected value, a basic decision theory
criterion, Example – The St. Petersburg Paradox
2.2 Expected Utility Theory - Axioms
Bernoulli’s solution to the St. Petersburg Paradox, the 4 VNM axioms: completeness;
transitivity; continuity; independence, the VNM Utility Theorem
2.3 Expected Utility Theory – Utility Functions
Risk attitudes, the shape of u(x), characterization via u′(x) and u″(x), certainty equivalent, risk
premium
2.4 Appendix
Example – another doctor in a village, framing, prospect theory, a good book
-[ Introduction ]-
On making decisions…
• In Chapter 1, we discussed some key risk management concepts. At the top of the
list is the concept of risk itself, which we decided has two key components:
uncertainty and the possibility of loss.
• In this Chapter, we will think about how people/companies
• Actually make decisions in the presence of risk
• Should make decisions in the presence of risk
• The second of these is easier than the first – we make some assumptions about
how “rational” people should act, and prove some conclusions using maths
• The first of these is complex, counter-intuitive and fascinating. We will just touch
upon it a little
-[ Introduction ]-
On making decisions…
• Example
• Imagine you are a doctor working in a remote village. Six hundred people
there have suddenly come down with a life-threatening disease. You have
two treatments at your disposal, treatments A and B
• Choice A: You will save exactly 200 people.
• Choice B: There is a 1/3 chance that you will save the whole village, and a 2/3
chance that you will save no one.
• Have a think – which treatment will you choose?
• The doctor has a decision to make in the presence of risk – there is a
chance he can save lives, there is a chance of loss
-[ Introduction ]-
On making decisions…
• Two scientists, Daniel Kahneman and Amos Tversky,
used (several) questions like these to understand
more about how people think and make decisions in
reality
• Their revolutionary work resulted in Daniel
Kahneman winning the Nobel Prize for Economics in
2002 (sadly Tversky died in 1996 – Nobel prizes
cannot be awarded posthumously)
• We will briefly return to them at the end of this chapter
(Photos: Prof. Daniel Kahneman and Prof. Amos Tversky)
-[ Introduction ]-
On making decisions…
• We make decisions under uncertainty all the time
• Uncertainty about product quality – do you trust the seller of a secondhand car? The food in an empty restaurant?
• Uncertainty in dealing with others – will they pay me back? Outcomes
can depend on what others do
• Uncertainty when buying financial assets – the return will depend on
which state is realised in the future
• Scientists (often a mix of psychologists, mathematicians and economists) have
devised some theories to describe how people make decisions under
uncertainty (so-called decision theories)
• Expected Utility Theory
• Rank Dependent Utility Theory
• (Cumulative) Prospect Theory
• We will look at the oldest (and simplest) of these: Expected Utility Theory
Expectation is the root of all heartache
William Shakespeare
All’s Well That Ends Well, Act II.i,
What I can do can do no hurt to try,
Since you set up your rest 'gainst remedy.
He that of greatest works is finisher
Oft does them by the weakest minister:
So holy writ in babes hath judgment shown,
When judges have been babes; great floods have flown
From simple sources, and great seas have dried
When miracles have by the greatest been denied.
Oft expectation fails and most oft there
Where most it promises, and oft it hits
Where hope is coldest and despair most fits.
-[ Introduction ]-
Expected utility theory (EUT)
• We will discuss Expected Utility Theory (EUT) in the context of
games/gambles/lotteries
• Let the game/gamble/lottery X have possible outcomes (usually monetary)
x_1, x_2, …, x_n. These are mutually exclusive and exhaustive
• Mutually exclusive: if x_i happens, then none of the remaining x_j, j ≠ i, can
happen
• Exhaustive: together, x_1, x_2, …, x_n cover all eventualities
• Let x_1, x_2, …, x_n have corresponding probabilities p_1, p_2, …, p_n
• We can then calculate the expected value of the game/gamble/lottery
E(X) = p_1 x_1 + p_2 x_2 + … + p_n x_n = Σ_{i=1}^{n} p_i x_i
-[ Introduction ]-
Expected utility theory (EUT)
• Example
• It is often helpful to draw X as a tree diagram. Here the game is
as follows:
• Draw one card from a standard deck of playing cards.
• If the card is a spade, I will pay you $100.
• If the card is the ace of spades, I will pay you $1000.
• If the card is not a spade, I will pay you nothing
• A natural question to ask yourself is, “how much would I be
happy to pay to play this game?”
• In this tree, you have paid $20 to play.
(Tree diagram: Dr John Wright's handwriting)
-[ Introduction ]-
Expected utility theory (EUT)
• A natural answer to this question is this
• Work out the expected value of the game, E(X)
• I would be prepared to pay less than E(X) to play
• Example
• The probabilities are
• Pr(draw the ace of spades) = 1/52, payoff is $1000
• Pr(draw a spade that isn't an ace) = 12/52, payoff is $100
• Pr(do not draw a spade) = 39/52, payoff is $0
• ⇒ E(X) = $1000×1/52 + $100×12/52 + $0×39/52 = $42.31
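• A minimal Python sketch of this calculation, using the payoffs and probabilities above and the $20 cost from the tree diagram:

```python
from fractions import Fraction

# The card game: (payoff in $, probability)
game = [
    (1000, Fraction(1, 52)),    # the ace of spades
    (100, Fraction(12, 52)),    # a spade that isn't the ace
    (0, Fraction(39, 52)),      # not a spade
]

expected_value = sum(payoff * prob for payoff, prob in game)
print(float(expected_value))    # 42.3076... ≈ $42.31

# Pay to play only if the cost is below E(X)
cost_to_play = 20
print("play" if cost_to_play < expected_value else "don't play")   # play
```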
-[ Introduction ]-
Expected utility theory (EUT)
• If the cost of playing the game/gamble/lottery is exactly equal to the
expected value of the game/gamble/lottery, then we say the
game/gamble/lottery is fair
• The idea is that if you played the game a large number of times, you would break
even – neither losing money nor making any
• We propose the following, very basic decision theory criterion
• If the cost of playing is below the expected value, play. If above,
don’t. If equal, be indifferent.
• Now we have a theory, how does it match with empirical findings (i.e. data
from the real world)?
• People play Mark Six
• Let’s play a game: heads you win $1 million, tails you lose $1 million
-[ Introduction ]-
St. Petersburg Paradox
• Consider a game in which one player (Player 1) flips a fair coin:
If the coin reveals a head, Player 1 pays the other player (Player 2)
$1 and the game is over.
• If the coin lands on tails, it is tossed again. If the second
toss lands heads, Player 1 pays $2 and the game is over.
• If the second toss lands tails, the coin is flipped again.
• Thus, the game continues until the first head appears, and Player 1
then pays $2^{n-1}, where n is the number of the toss on which the first head appears.
The paradox takes its name from its analysis by Daniel
Bernoulli, one-time resident of the eponymous Russian
city, who published his arguments in the Commentaries
of the Imperial Academy of Science of Saint
Petersburg. However, the problem was invented by
Daniel's cousin, Nicolas Bernoulli, who first stated it in a
letter to Pierre Raymond de Montmort on September 9,
1713.
https://en.wikipedia.org/wiki/St._Petersburg_paradox
(Photo: the Spas Na Krovi, or Church of the Savior on Spilled Blood, St. Petersburg. Photograph by Robert Harding Picture Library, Nat Geo Image Collection)
-[ Introduction ]-
St. Petersburg Paradox
• How much should Player 2 be prepared to pay to play this game?
• Let us work out the expected value. Player 2 wins
$1 with probability 1/2
$2 with probability 1/4
$4 with probability 1/8 …
• Generally, $2^n with probability 2^{-(n+1)}, for n = 0, 1, 2, …
• ⇒ E(X) = Σ_{n=0}^{∞} 2^n · 2^{-(n+1)} = Σ_{n=0}^{∞} 1/2 = ∞.
• Therefore, according to our decision theory, Player 2 should be willing to pay
anything to play this game
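• A minimal sketch of the divergence: truncating the game after N possible payoffs gives an expected value of N/2, which grows without bound as N increases:

```python
# St. Petersburg game: payoff 2^n with probability 2^-(n+1), n = 0, 1, 2, ...
def truncated_expected_value(terms):
    """Expected value when only the first `terms` payoffs are counted."""
    return sum((2 ** n) * (2 ** -(n + 1)) for n in range(terms))

for terms in (10, 100, 1000):
    print(terms, truncated_expected_value(terms))   # 5.0, 50.0, 500.0
# Every extra term adds exactly 1/2, so the full sum is infinite.
```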
-[ Introduction ]-
St. Petersburg Paradox
• This paradox – the fact that in theory Player 2 should be willing to pay anything
but in reality people are only prepared to pay a small amount of money – was
noted and resolved in 1738 by Daniel Bernoulli
• “The determination of the value of an item must not be based on the price, but
rather on the utility it yields… There is no doubt that a gain of one thousand
ducats* is more significant to the pauper than to a rich man though both gain
the same amount.”
• His point was this – we shouldn’t consider the raw value of the outcomes, but
how useful the outcomes are to the player
• Thus the idea of (mathematical) utility was born
Daniel Bernoulli
(1700-1782)
*An old unit of currency used in Europe at the time. The word “ducat” is related to “duke”.
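• To illustrate Bernoulli's point numerically, here is a minimal sketch that applies u(x) = ln x directly to the payoffs of the St. Petersburg game (a simplifying assumption; Bernoulli's own analysis worked in terms of the player's wealth). The expected utility converges to ln 2, so a log-utility player values the game at only exp(ln 2) = $2:

```python
import math

# Expected log-utility of the St. Petersburg payoffs, truncated after many
# terms (the remaining tail is negligible).
expected_utility = sum(math.log(2 ** n) * 2 ** -(n + 1) for n in range(200))

print(expected_utility)              # ≈ 0.6931 = ln(2)
print(math.exp(expected_utility))    # ≈ 2.0 -> worth about $2 to this player
```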
-[ Exp utility theory ]-
Axioms
• Bernoulli's log(x) was the first example of a utility function
• About 200 years later, mathematical foundations for the existence of utility
functions were provided by John von Neumann and Oskar Morgenstern
• They proved that
• If a person’s preferences satisfied 4 fundamental assumptions, then that
person has a utility function
• And that person will always prefer choices that maximize their expected
utility
John von Neumann
(1903-1957)
• In maths, fundamental assumptions are called axioms. Let’s go through the 4 von
Neumann-Morgenstern (VNM) axioms
-[ Exp utility theory ]-
Axioms
• These axioms are meant to describe the behaviour of a “rational” decision-maker.
They are
i. Completeness
ii. Transitivity
iii. Continuity
iv. Independence
• To explain them, we need some notation. For gambles/outcomes A and B
i. A ≻ B means the decision-maker prefers A to B
ii. A ≺ B means the decision-maker prefers B to A
iii. A ≽ B means the decision-maker prefers A to B or is indifferent between A
and B
iv. A ≼ B means the decision-maker prefers B to A or is indifferent between A
and B
v. A ∼ B means the decision-maker is indifferent between A and B
-[ Exp utility theory ]-
Axioms
• Completeness
• In maths: for every A and B, either A ≽ B or A ≼ B
• In words: for any choice of outcomes, the decision-maker has well-defined
preferences. For any two outcomes A and B, she either prefers A to B, or
prefers B to A, or is indifferent between them
• Transitivity
• In maths: for every A, B and C with A ≽ B and B ≽ C, we must have A ≽ C
• In words: the decision-maker is consistent in her preferences
-[ Exp utility theory ]-
Axioms
• Continuity
• In maths: for A, B and C such that A ≽ B ≽ C, there exists some probability p such that B ∼
pA + (1−p)C
• In words: given three outcomes A, B and C such that the decision-maker prefers A to B, and B
to C, there exists a lottery combination of A and C which appears equally as good to her as
outcome B
• The lottery combination pA + (1−p)C means
• the decision-maker tosses a biased coin which shows "Heads" with probability p and
"Tails" with probability 1 − p
• if the coin shows "Heads" she plays gamble/outcome A
• if the coin shows "Tails", she plays gamble/outcome C
• Independence
• In maths: for all A, B and C and probability p, we have A ≽ B if and only if pA + (1−p)C ≽
pB + (1−p)C
• In words: if the decision-maker prefers A to B, she'll also prefer the possibility of A to the
possibility of B, given that in both cases the other possibility is C
• If the decision-maker is comparing pA + (1−p)C to pB + (1−p)C, she should focus on the
difference between A and B and hold that preference independent of C and p
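• To make the lottery-combination notation concrete, here is a minimal sketch (the gambles A, B, C and the utility used are illustrative assumptions, not from the notes) that represents a gamble as (outcome, probability) pairs and builds the mixture pA + (1−p)C used in the continuity and independence axioms:

```python
# A gamble/lottery is a list of (outcome, probability) pairs.
A = [(100, 1.0)]    # $100 for sure
C = [(0, 1.0)]      # $0 for sure

def mix(p, X, Y):
    """The compound lottery pX + (1-p)Y: play X with probability p, otherwise Y."""
    return [(x, p * q) for x, q in X] + [(y, (1 - p) * q) for y, q in Y]

def expected_utility(lottery, u):
    return sum(q * u(x) for x, q in lottery)

# Continuity: for B in between A and C there is some p with B ~ pA + (1-p)C.
# With a risk-neutral utility u(x) = x and B = $40 for sure, p = 0.4 works:
B = [(40, 1.0)]
u = lambda x: x
print(expected_utility(B, u), expected_utility(mix(0.4, A, C), u))   # 40.0 40.0
```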
-[ Exp utility theory ]-
Axioms
• The VNM Utility Theorem was proved in the 1940s
• The four axioms seem reasonable and the conclusion (“rational” people should
maximize their expected utility) is natural
• However, as Kahneman and Tversky (and others) would go on to show, people often do
not behave as the VNM theory says they should
• Does that mean
• People are irrational?
• VNM theory’s idea of rationality is not realistic?
• Either way, much research is spent on finding better ways to describe how people make
decisions e.g. Prospect Theory
-[ Utility functions ]-
Utility functions
• The VNM Utility Theorem proof actually tells us how to
construct u(·), the decision-maker's utility function
• In this section we will spend some time characterizing
u(·)
• Example
• X is a simple gamble: you have a 10% chance of
winning $100; a 90% chance of winning $0. Thus
E(X) = 0.1×$100 + 0.9×$0 = $10.
• Would you prefer
• To take the gamble X, or
• Be given E(X) = $10 with certainty?
Adapted from Harris and Wu (2014; Journal of Business Economics)
-[ Utility functions ]-
Utility functions
• Let’s generalise slightly
• X involves winning a with probability p or b with probability (1−p)
• The utility of taking E(X) with certainty is simply
u(E(X)) = u(pa + (1−p)b)
• The expected utility of playing X is p·u(a) + (1−p)·u(b)
• The rational decision-maker compares u(pa + (1−p)b) with p·u(a) + (1−p)·u(b) and chooses the
larger
• This depends completely on what u(·) is – it is personal to the decision-maker
• There are three possibilities, each describing the decision-maker's risk preference
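• As a minimal numerical sketch, take the 10%/$100 gamble from the previous slide and assume (purely for illustration) the concave utility u(x) = √x:

```python
import math

p, a, b = 0.10, 100, 0      # win $100 with probability 10%, otherwise $0
u = math.sqrt               # an assumed concave (risk-averse) utility

u_of_expected = u(p * a + (1 - p) * b)      # u(E(X)) = sqrt(10) ≈ 3.16
expected_u = p * u(a) + (1 - p) * u(b)      # E(u(X)) = 0.1*10 + 0.9*0 = 1.0

print(u_of_expected, expected_u)
print("take E(X) with certainty" if u_of_expected > expected_u else "take the gamble")
```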
-[ Utility functions ]-
Utility functions
• If u(pa + (1−p)b) > p·u(a) + (1−p)·u(b)
• Decision-maker opts to receive E(X) with certainty
• We say they are risk-averse
• Their utility function u(·) is concave
• If u(pa + (1−p)b) < p·u(a) + (1−p)·u(b)
• Decision-maker opts to gamble
• We say they are risk-seeking
• Their utility function u(·) is convex
• If u(pa + (1−p)b) = p·u(a) + (1−p)·u(b)
• Decision-maker is indifferent between the choices
• We say they are risk-neutral
• Their utility function is linear
(Graphs of concave, linear and convex utility functions; adapted from Policonomics.com)
-[ Utility functions ]-
Utility functions
• Derivatives can help us find the risk preference according to the utility function
• Note that in each of the three graphs, u(x) always has a positive slope, i.e.
u′(x) > 0
• This makes sense: the extra (marginal) utility you receive from an extra
dollar should be positive
• The second derivative helps us characterize the risk preference
• u″(x) = 0 ⇒ slope remains constant, i.e. a straight line. Risk-neutral
• u″(x) > 0 ⇒ slope increases with x, i.e. convex. Risk-seeking
• u″(x) < 0 ⇒ slope decreases as x increases, i.e. concave. Risk-averse
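• A minimal numerical sketch of this check, estimating u″(x) with a central finite difference (the three utility functions are illustrative):

```python
import math

def second_derivative(u, x, h=1e-3):
    """Central finite-difference estimate of u''(x)."""
    return (u(x + h) - 2 * u(x) + u(x - h)) / h ** 2

def risk_attitude(u, x=10.0, tol=1e-6):
    d2 = second_derivative(u, x)
    if abs(d2) < tol:
        return "risk-neutral"
    return "risk-averse" if d2 < 0 else "risk-seeking"

print(risk_attitude(math.log))           # risk-averse  (u'' = -1/x^2 < 0)
print(risk_attitude(lambda x: 3 * x))    # risk-neutral (u'' = 0)
print(risk_attitude(lambda x: x ** 2))   # risk-seeking (u'' = 2 > 0)
```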
-[ Utility functions ]-
Utility functions, Examples of
• Some commonly used utility functions are
• u(x) = ln x (Bernoulli's choice)
• u(x) = (1/a)(1 − e^{−ax}), a > 0
• u(x) = x^{1−γ}/(1−γ), γ > 0
• Derivatives tell us the risk preference
• u(x) = ln x ⇒ u′(x) = 1/x ⇒ u″(x) = −1/x² < 0 ⇒ risk-averse
• u(x) = (1/a)(1 − e^{−ax}) ⇒ u′(x) = e^{−ax} ⇒ u″(x) = −a·e^{−ax} < 0 ⇒ risk-averse
• u(x) = x^{1−γ}/(1−γ) ⇒ ⋯?
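• A minimal sympy sketch to check these derivative calculations, including the power utility left as an exercise above:

```python
import sympy as sp

x = sp.symbols("x", positive=True)
a, g = sp.symbols("a g", positive=True)

utilities = {
    "ln(x)": sp.ln(x),
    "(1/a)(1 - exp(-a*x))": (1 / a) * (1 - sp.exp(-a * x)),
    "x**(1-g)/(1-g)": x ** (1 - g) / (1 - g),
}

for name, u in utilities.items():
    u1 = sp.simplify(sp.diff(u, x))     # u'(x)
    u2 = sp.simplify(sp.diff(u, x, 2))  # u''(x)
    print(f"{name}: u' = {u1}, u'' = {u2}")

# ln(x):                u' = 1/x,        u'' = -1/x**2         < 0 -> risk-averse
# (1/a)(1 - exp(-a*x)): u' = exp(-a*x),  u'' = -a*exp(-a*x)    < 0 -> risk-averse
# x**(1-g)/(1-g):       u' = x**(-g),    u'' = -g*x**(-g - 1)  < 0 -> risk-averse (g > 0, g != 1)
```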
-[ Utility functions ]-
An example
(Tree diagram: 80% chance of $5, 20% chance of $30)
• Gamble X has an 80% chance of winning $5, and a 20% chance of winning $30
• The fair price of X is E(X) = 0.8×$5 + 0.2×$30 = $10
• Utility function is u(x) = ln x
• The expected utility from taking the gamble is E(u(X)) = 0.8×ln 5 + 0.2×ln 30 ≈ 1.97
• The expected utility from taking the fair price with certainty is u(10) ≈ 2.30
• Thus the rational decision-maker would prefer taking $10 with certainty rather than taking the
gamble
-[ Utility functions ]-
An example
• The difference between the expected value $10 and the certainty
equivalent ≈ $7.15 (a difference of ≈ $2.85) reflects the risk involved in the gamble
• We call it the risk premium
(Tree diagram: 80% chance of $5, 20% chance of $30)
• How to interpret the risk premium?
• Imagine you are risk-neutral. I offer you two choices: $9.99 or
gamble X. Applying the maximum expected utility criterion, you
would choose X.
• You are not prepared to sacrifice any expected return from the
gamble, even though it involves uncertainty
• Now imagine you are risk-averse with u(x) = ln x. If I offer
you two choices: $7.16 or gamble X, you would choose the
$7.16
• You are prepared to sacrifice some expected return from the
gamble, to avoid the uncertainty
• In summary: u(E(W)) = u(10) ≈ 2.30 and E(u(W)) = 0.8·u(5) + 0.2·u(30) ≈ 1.97,
so u(E(W)) > E(u(W)). Uncertainty reduces utility.
Certainty equivalent = exp(E(u(W))) ≈ 7.15
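• A minimal sketch computing the certainty equivalent and risk premium for this gamble under u(x) = ln x (the certainty equivalent is the sure amount whose utility equals the expected utility, so here it is exp(E(u(W)))):

```python
import math

gamble = [(5, 0.8), (30, 0.2)]        # 80% -> $5, 20% -> $30
u, u_inverse = math.log, math.exp     # log utility and its inverse

expected_value = sum(w * p for w, p in gamble)            # 10.0
expected_utility = sum(u(w) * p for w, p in gamble)       # ≈ 1.97
certainty_equivalent = u_inverse(expected_utility)        # ≈ 7.15
risk_premium = expected_value - certainty_equivalent      # ≈ 2.85

print(round(expected_utility, 2), round(certainty_equivalent, 2), round(risk_premium, 2))
# 1.97 7.15 2.85
```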
-[ Utility functions ]-
An example
• An individual’s risk premium for a gamble is
• Their expected wealth given they take the gamble minus their level of wealth that
corresponds to certainty equivalence
• It is a dollar amount – i.e. an amount of money; not an amount of utility
• The sign of the risk premium depends on the risk-attitude
• If risk-averse, then E(u(W)) < u(E(W)) hence risk premium > 0
• If risk-neutral, then E(u(W)) = u(E(W)) hence risk premium = 0
• If risk-seeking, then E(u(W)) > u(E(W)) hence risk premium < 0
• From a risk-averse point of view, the risk premium is a price of insurance – how much we
are prepared to pay to avoid risk
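• A minimal sketch checking the sign of the risk premium under the three attitudes, using illustrative utilities (√w, w and w²) and the same 80%/$5, 20%/$30 gamble; each certainty equivalent is found by inverting the utility directly:

```python
import math

gamble = [(5, 0.8), (30, 0.2)]    # E(W) = 10

cases = {
    "risk-averse (u = sqrt(w))": (math.sqrt, lambda y: y ** 2),
    "risk-neutral (u = w)": (lambda w: w, lambda y: y),
    "risk-seeking (u = w^2)": (lambda w: w ** 2, math.sqrt),
}

expected_wealth = sum(w * p for w, p in gamble)
for name, (u, u_inverse) in cases.items():
    ce = u_inverse(sum(u(w) * p for w, p in gamble))
    print(name, "risk premium =", round(expected_wealth - ce, 2))
# risk-averse  -> premium ≈ +1.68 (> 0)
# risk-neutral -> premium = 0
# risk-seeking -> premium ≈ -4.14 (< 0)
```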
-[ Summary ]-
To conclude…
• In this chapter we discussed models for how people make decisions, focussing on Expected Utility
Theory and its conclusions – rational people have utility functions and their criterion for choosing
options is to maximize their expected utility.
• Incorporating utility functions into the fabric of a company can be part of the risk manager’s job. First
they need to estimate what the utility function of the company actually is.
• However, EUT is far from the final answer. More general theories, designed to better reflect how
people make decisions in real life have since been developed (see Appendix).
• Later in your studies you will encounter risk-neutral measures/pricing for derivatives. To perform
financial risk management, you must become familiar and comfortable with numerous financial
products. We start this process in the next chapter.
-[ Appendix ]-
Another choice…
• Example
• Imagine you are a doctor working in a remote village. Six hundred people there have
suddenly come down with a life-threatening disease. You have two treatments at your
disposal, treatments C and D
• Choice C: Exactly 400 people will die.
• Choice D: There is a 1/3 chance that no one will die, and a 2/3 chance that everyone will
die.
• Have a think – which treatment will you choose?
-[ Appendix ]-
Recall our first question in this chapter…
• You saw this problem before, at the beginning of the chapter
• When Kahneman and Tversky asked these questions they found
• Given the choice between A and B, 72% chose A
• Given the choice between C and D, 78% chose D
• But these problems are exactly the same! The only difference is how the options were presented
(or framed) to the respondents
• Treatment A was framed as a gain (you save 200 people)
• Treatment C was framed as a loss (400 people will die)
• Kahneman and Tversky discovered that people are more likely to take risks when it comes to
losses than gains
• People prefer a “sure thing” when it comes to a potential gain, but are willing to take a
chance if it involves avoiding a loss
-[ Appendix ]-
Recall our first question in this chapter…
• Kahneman and Tversky called this aspect of human psychology “framing”. It is one of several “cognitive
biases” they found evidence for. Some others include
• Reference dependence – when evaluating possible outcomes, people compare them to a reference
level. Outcomes above the level become “gains”, those below become “losses”
• Loss aversion
• Non-linear probability weighting – people overweight small probabilities and underweight large
probabilities (see the sketch at the end of this slide)
• Kahneman and Tversky used their findings to create (cumulative) Prospect Theory, a model for how people
make decisions under risk
• (cumulative) Prospect Theory generalizes Expected Utility Theory
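• As an illustration of non-linear probability weighting, a minimal sketch of one commonly used weighting function (the one-parameter form from Tversky and Kahneman's 1992 cumulative prospect theory paper; the parameter value 0.61 is their estimate for gains):

```python
def weight(p, gamma=0.61):
    """Tversky-Kahneman (1992) probability weighting function."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

for p in (0.01, 0.10, 0.50, 0.90, 0.99):
    print(p, round(weight(p), 3))
# Small probabilities come out overweighted (w(0.01) ≈ 0.055 > 0.01),
# large probabilities underweighted (w(0.99) ≈ 0.91 < 0.99).
```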
-[ Appendix ]-
Recall our first question in this chapter…
• It is well worth reading “Thinking, Fast and Slow” by
Daniel Kahneman
• He writes about the two systems in our brains that
drive the way we think
• System 1: fast, intuitive and emotional
• System 2: slow, deliberative, logical
The lecture notes are intended for the personal use of registered students only and must not be published or re-used in any form without the express consent of the author.
Copyright belongs to the author. The lecture notes are intended as a summary of a given area only and should not be a substitute for reference texts.
All the icons are designed by Freepik at www.flaticon.com. Slide template is designed by Zeen, inspired by Mirokit. The author acknowledges Dr Philip Lee, Dr John Wright and Prof. Chun Yip Yau for their valuable contributions
Department of Statistics, Faculty of Science, The Chinese University of Hong Kong Spring 2024.