Review of Decision Criteria

Notes for Mitchell and Hutchison (2008)
AAE 575
Fall 2012
Section 4.4: Representing Risky Situations
Need three things for Risk Management
1) Actions to choose
2) Events (with probabilities)
3) Outcomes (to the decision maker)
Represent in either a matrix or a decision tree: the two are equivalent (proved in game theory)
Go through Table 4.1 to understand it: simple/hokey example
Action: Treat or do not treat
Event: Rain (p = 0.1) or does not rain (p=0.9)
Outcomes: four possible $/ha depending on action and event
Which should a person choose? $50 with treatment no matter what, or no treatment with the chance of $100 but possibly only $10? Hokey example
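A quick expected-value check of this example, assuming the payoffs read as above (treatment pays $50 in either event; no treatment pays $10 with rain and $100 without):

```python
# Expected $/ha for the Table 4.1 example. Payoff mapping is assumed:
# treat pays $50 in both events; no-treat pays $10 if it rains, $100 if not.
p_rain = 0.1

treat = {"rain": 50, "no_rain": 50}
no_treat = {"rain": 10, "no_rain": 100}

def expected_value(payoffs, p_rain):
    """Probability-weighted average over the two events."""
    return p_rain * payoffs["rain"] + (1 - p_rain) * payoffs["no_rain"]

print(expected_value(treat, p_rain))     # 50.0
print(expected_value(no_treat, p_rain))  # 91.0
```

So by expected value alone, not treating looks better; the rest of the chapter is about why a decision maker might still prefer the certain $50.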
Key point: Actions have to affect events/probabilities and/or outcomes.
Otherwise there is nothing to manage
Table 4.1: Action affects the outcomes
Could have action affect probabilities:
Action: put or don’t put in sweet corn patch
Event: raccoons attack/don’t attack sweet corn patch
Outcome: loss due to raccoons eating sweet corn
Affect outcomes = insurance. Affect probabilities = protection
[not a broad/widely used distinction, but some do use it]
Table 4.2: more realistic example [subjective probabilities]
Action: Use IPM or conventional system
Event: Three levels of pest pressure: low, moderate, high, with p = 0.7, 0.2, and 0.1.
Outcomes: six possible $/ha net returns
Figure 4.1: Decision Tree for same example
Figure 4.2: plots the pdf and cdf of the outcomes (net returns) for the two actions
Which action/distribution of $/ha outcomes do you choose?
First: how do we talk about the “risk” in outcomes/net returns?
• Central tendency: mean, median, mode (seen these already)
• Dispersion
o [symmetric]: variance/st. dev., CV, risk-return (Sharpe) ratio (seen these already)
o [asymmetric]: Value at Risk (VaR): choose a key probability and find the corresponding outcome; probability of a key outcome (break even or profit target)
o Both need the cdf
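These measures can all be computed directly from a discrete outcome distribution. The numbers below are hypothetical (not from Table 4.2), just to show the mechanics, including reading the VaR off the cdf:

```python
import math

# Hypothetical discrete net-return distribution ($/ha), outcomes sorted ascending.
outcomes = [-50, 100, 250, 400]
probs    = [0.1, 0.3, 0.4, 0.2]

mean = sum(p * x for p, x in zip(probs, outcomes))
var  = sum(p * (x - mean) ** 2 for p, x in zip(probs, outcomes))
sd   = math.sqrt(var)
cv   = sd / mean                      # coefficient of variation

def var_at(alpha, outcomes, probs):
    """Value at Risk: pick a key probability alpha and read the outcome
    off the cdf (first outcome whose cumulative probability reaches alpha)."""
    cum = 0.0
    for x, p in zip(outcomes, probs):
        cum += p
        if cum >= alpha:
            return x

# Probability of a key outcome: chance of at least breaking even.
p_breakeven = sum(p for x, p in zip(outcomes, probs) if x >= 0)

print(mean, sd, cv, var_at(0.10, outcomes, probs), p_breakeven)
```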
Figure 4.4: Collect experimental data to “smooth” the pdf/cdf and replace subjective
probabilities with objective probabilities.
This does not change the overall details, just makes them smoother: how do we estimate a smooth pdf/cdf once we have data like those used for Figure 4.4?
Section 4.5: Decision Making Criteria and Tools
Define/describe various criteria to use to choose the "best" action = how to "optimize" when one must make decisions under risk.
1) Not using probabilities (game theory strategies)
2) Using probabilities (“risk management”)
No Probabilities
• Maximin = minimax: choose the action that gives the best outcome among the worst-case scenarios: maximize the minimum outcome (maximin) or minimize the maximum loss (minimax)
• Maximax: choose the action with the best outcome among the best-case scenarios (impractical, but a useful overly optimistic base case for comparison)
• Simple Average: ignore probabilities = give all outcomes equal weight (1/n)
Table 4.1: maximin/minimax = Treat, Maximax and Simple Average = Do not treat
Table 4.2: all three = IPM
Table 4.3 (object. probs.): maximin/minimax, Simple Average = IPM, Maximax = Conventional
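The three probability-free criteria are easy to mechanize on a payoff matrix. A sketch, using the Table 4.1 payoffs as read above (rows are actions, columns are events):

```python
# Payoff matrix: action -> [$/ha if rain, $/ha if no rain]. Payoff mapping
# assumed: treat pays $50 either way, no-treat pays $10 or $100.
payoffs = {
    "treat":        [50, 50],
    "do not treat": [10, 100],
}

maximin = max(payoffs, key=lambda a: min(payoffs[a]))   # best worst case
maximax = max(payoffs, key=lambda a: max(payoffs[a]))   # best best case
simple_avg = max(payoffs, key=lambda a: sum(payoffs[a]) / len(payoffs[a]))

print(maximin, maximax, simple_avg)  # treat do not treat do not treat
```

This reproduces the Table 4.1 result above: maximin picks Treat, while maximax and the simple average pick Do not treat.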
With Probabilities
• Safety First Criteria (common in developing nation and human health contexts)
o Minimize the probability that returns fall below zero (or another target)
o Maximize the mean outcome, subject to a constraint: the mean outcome must exceed a set level
• Maximize Mean Outcome: choose the action that gives the highest mean outcome
o Called "risk neutral"; ignores variability/dispersion
o E.g., mean = $1000: action A with st dev = $100 is treated the same as action B with st dev = $500
o Most people are willing to trade off between mean and variability: give up some mean return in exchange for lower variability (e.g., insurance)
Positive (Descriptive): How do people actually trade off between mean and variability (or asymmetry)? Try to find a theory that describes what people actually do.
Normative (Prescriptive): How should people trade off between mean and variability/asymmetry? Define rules/BMPs for managing risky situations (finance, EPA, FDA, etc.)
Risk Preferences: risk neutral, risk loving, risk averse
Simple choice:
a) random returns with mean μ and variability/dispersion/spread σa
b) random returns with mean μ and variability/dispersion/spread σb > σa
• Risk neutral: indifferent between a and b
• Risk averse: choose a: same mean, lower variability/dispersion/spread
• Risk loving: choose b: same mean, more variability/dispersion/spread
Certainty Equivalent: Certain (non-random) return that makes the decision maker indifferent
between the random choice and the certain outcome.
How much money would you need to be as well off as choosing the random payoff?
If the Mean Return = μ, then
If CE < μ, then risk averse
If CE = μ, then risk neutral
If CE > μ, then risk loving
Risk Premium: difference between mean return and CE return: RP = μ − CE, or CE + RP = μ
If RP > 0, then risk averse
If RP = 0, then risk neutral
If RP < 0, then risk loving
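A small sketch of the CE and RP rules, using CARA utility U(π) = −exp(−rπ) with an assumed risk-aversion coefficient r and a hypothetical 50/50 gamble:

```python
import math

# A 50/50 gamble paying $500 or $1500 (mean $1000), evaluated under CARA
# utility U(pi) = -exp(-r*pi); r = 0.001 is an assumed coefficient.
outcomes = [500.0, 1500.0]
probs = [0.5, 0.5]
r = 0.001

mean = sum(p * x for p, x in zip(probs, outcomes))
eu = sum(p * -math.exp(-r * x) for p, x in zip(probs, outcomes))
ce = -math.log(-eu) / r        # solve U(CE) = EU for CE
rp = mean - ce                 # risk premium: RP = mean - CE

print(mean, ce, rp)  # CE < mean and RP > 0: this utility is risk averse
```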
Preference/Utility Function: function that quantifies how people weight outcomes for risk
Standard Graphical Presentation
Mean-Variance Utility: U(π) = μ − λσ²
Value of λ defines the person's risk preferences and degree of risk aversion
Decision Rule: choose the action that gives the highest value for U(π)
Example: Suppose fertilizer rate affects both the mean and variance of corn yield, then the
economic problem for choosing the optimal N rate assuming mean variance preferences is
maxN U(π(N)) = E[PY(N) − rN − K] − λ var[PY(N) − rN − K]
maxN U(π(N)) = PμY(N) − rN − K − λP²σY²(N)
Just need the equations for how mean and variance of yield are affected by N rate (estimate with
field data), plus the risk preference parameter λ (and prices and costs).
1) What risk preference parameter λ to use?
2) Are we sure farmers have mean-variance preferences?
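A sketch of this N-rate problem with made-up response functions (the real μY(N) and σY²(N) would be estimated from field data, and all parameter values below are illustrative):

```python
# Mean-variance utility over N rate: U = P*muY(N) - r_n*N - K - lam*P^2*sigY2(N).
# P = output price, r_n = N price, K = fixed cost, lam = risk-aversion weight.
# muY and sigY2 are hypothetical response functions, not estimated ones.
P, r_n, K, lam = 5.0, 0.5, 100.0, 0.01

def muY(N):            # hypothetical mean yield response
    return 120 + 0.8 * N - 0.002 * N ** 2

def sigY2(N):          # hypothetical yield variance, rising with N
    return 400 + 2.0 * N

def U(N):
    return P * muY(N) - r_n * N - K - lam * P ** 2 * sigY2(N)

# Grid search for the N rate maximizing mean-variance utility.
best_N = max(range(0, 301), key=U)
print(best_N, U(best_N))  # optimum at N = 150
```

With these numbers U(N) = 400 + 3N − 0.01N², so the first-order condition 3 − 0.02N = 0 gives the same N = 150 analytically.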
Similar Alternative: Mean-St Dev Utility: U(π) = μ − λσ
Note: remember that CE = Mean − RP, so technically, mean-variance and mean-st dev preferences are actually CE = Mean − RP. The U(π) is actually the CE and the RP is the second term, λσ² or λσ.
Implementation of these preferences: choose input x to maximize the CE
Expected Utility Theory
John von Neumann and Oskar Morgenstern
People choose actions to maximize their expected utility, i.e., expected value of U()
Back to our example: maxN E[U(π(N))]
Question: what utility function to use?
Many types of utility functions proposed/used:
Constant Absolute Risk Aversion (CARA): U(π) = −exp(−RAπ)
Constant Relative Risk Aversion (CRRA): U(π) = π^(1−RR)/(1−RR) for RR ≠ 1; U(π) = ln(π) for RR = 1
Decreasing Absolute Risk Aversion (DARA) Utility: U(π) = −exp(−λπ^α)
Exponential-Power Utility: U(π) = θ − exp(−βπ^α)
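The CARA and CRRA forms are straightforward to code (RA and RR are the absolute and relative risk-aversion coefficients; CRRA needs π > 0):

```python
import math

def cara(pi, R_A):
    """Constant Absolute Risk Aversion: U(pi) = -exp(-R_A * pi)."""
    return -math.exp(-R_A * pi)

def crra(pi, R_R):
    """Constant Relative Risk Aversion (requires pi > 0):
    U(pi) = pi^(1-R_R)/(1-R_R) for R_R != 1, ln(pi) for R_R = 1."""
    if R_R == 1:
        return math.log(pi)
    return pi ** (1 - R_R) / (1 - R_R)

print(cara(1000, 0.001), crra(1000, 0.5), crra(1000, 1))
```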
Lots of research has gone into "eliciting risk preferences": experiments and data collection with estimation to determine which utility function(s) and parameter values are most consistent with what people actually do.
Technical Issue: for these utility functions (which have the desired theoretical properties) and the profit distributions we commonly observe, no closed-form expression for E[U(π(x))] exists, so we must use numerical methods to find the economically optimal x.
Note: there are other theories of human decision making under risk (non-expected utility theories), in which probabilities enter the utility function nonlinearly rather than linearly as in EU theory: rank-dependent expected utility, ambiguity aversion, loss aversion/prospect theory, etc.
Here: we will use only von Neumann-Morganstern Expected Utility Hypothesis
Case Study: Cabbage IPM
Small plot research, IPM and conventional
• Number of sprays and yield imply net returns (πi) based on cabbage price and spray cost
• Convert frequencies to probabilities by dividing each frequency by 24 to get pi
• For each treatment
o Calculate mean as μ = Σi pi πi, where i indexes outcomes
o Calculate variance as σ² = Σi pi (πi − μ)²
o Calculate standard deviation as σ = √σ²
o Calculate utility for each outcome as Ui = U(πi) = −exp(−rπi)
o Calculate expected utility as EU = E[U(πi)] = Σi pi Ui = Σi pi (−exp(−rπi))
o Calculate CE as CE = −ln(−EU)/r
o Calculate CE for Mean-Variance Utility as CE = E[π] − λσ² = μ − λσ²
Derive CE using calculated EU for CARA utility by solving U(CE) = EU for CE
–exp(–rCE) = EU
exp(–rCE) = –EU
–rCE = ln(–EU)
CE = –ln(–EU)/r
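These per-treatment steps and the CE derivation can be checked numerically. The outcomes, frequencies, and parameters below are hypothetical, not the actual cabbage data:

```python
import math

# Hypothetical net-return outcomes pi_i with frequencies out of 24 plots.
pis   = [200.0, 400.0, 600.0]
freqs = [6, 12, 6]
p = [f / 24 for f in freqs]        # frequencies -> probabilities

r, lam = 0.002, 0.0005             # assumed CARA and mean-variance parameters

mean = sum(pi * pk for pi, pk in zip(pis, p))
var  = sum(pk * (pi - mean) ** 2 for pi, pk in zip(pis, p))
eu   = sum(pk * -math.exp(-r * pi) for pi, pk in zip(pis, p))

ce_cara = -math.log(-eu) / r       # CE from CARA EU: solve U(CE) = EU
ce_mv   = mean - lam * var         # CE under mean-variance utility

print(mean, var, ce_cara, ce_mv)
```

Both CEs come out below the mean, as they should for risk-averse parameters.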
Decision Criteria
Choose treatment (IPM or Conventional) with
a) Greatest expected profit
b) Greatest CE under mean-variance utility
c) Greatest CE with CARA utility
Next Step
• Use small plot data to estimate the pdf of net returns for each case
• Use simulation to estimate E[π] and EU for IPM and Conventional
• Same Decision Criteria
Monte Carlo Simulation
Suppose we have a variable x with pdf f(x) and want to know E[x] = ∫ x f(x) dx
Suppose you cannot solve the integral
However, you can obtain many random draws from the pdf f(x)
Monte Carlo Approximation: E[x] ≈ (1/K) Σk=1..K xk, where xk is the kth random draw from the pdf f(x)
More Common: know x ~ f(x), but want to know E[g(x)] = ∫ g(x) f(x) dx
Monte Carlo Approximation: E[g(x)] ≈ (1/K) Σk=1..K g(xk), where xk is the kth random draw from the pdf f(x)
• Suppose we have an input x ~ lognormal with mean μ and st dev σ
• Yield is a negative exponential function of this random input: y = Ymax[1 − exp(−β0 − β1x)]
Net returns are π = pYmax[1 − exp(−β0 − β1x)] − rx − K
Utility is U(π) = −exp(−rπ) = −exp(−r{pYmax[1 − exp(−β0 − β1x)] − rx − K})
What is expected yield, expected profit and expected utility?
1) Draw "many" x's from the lognormal pdf with mean μ and st dev σ
2) Calculate yield, net returns and utility for each draw
3) The averages of yield, net returns and utility are the Monte Carlo integral estimates of expected yield, expected profit and expected utility
Key: how do you obtain draws of x and how many to draw?
Apply the decision criteria: set x at the value that maximizes E[π], CE, or …
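The three steps above, coded with all parameter values made up for illustration (log-scale lognormal parameters, hypothetical yield response, prices, and CARA coefficient):

```python
import math
import random
import statistics

# Draw x, compute yield / net returns / utility per draw, then average.
random.seed(4)
mu, sigma = 1.0, 0.25            # log-scale parameters of the lognormal input
Ymax, b0, b1 = 150.0, 0.2, 0.6   # negative-exponential yield response
p, r_in, K_cost = 4.0, 2.0, 50.0 # output price, input price, fixed cost
r = 0.005                        # CARA risk-aversion coefficient
K = 50_000                       # number of Monte Carlo draws

def yield_fn(x):
    return Ymax * (1 - math.exp(-b0 - b1 * x))

def profit(x):
    return p * yield_fn(x) - r_in * x - K_cost

xs = [random.lognormvariate(mu, sigma) for _ in range(K)]
E_yield  = statistics.fmean(yield_fn(x) for x in xs)
E_profit = statistics.fmean(profit(x) for x in xs)
EU       = statistics.fmean(-math.exp(-r * profit(x)) for x in xs)

print(E_yield, E_profit, EU)
```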
What if there are multiple random variables?
a) Uncorrelated
b) Correlated
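Uncorrelated variables can simply be drawn independently. For the correlated case, a standard trick is to draw independent normals and combine them using the Cholesky factor of the correlation matrix; for two standard normals with correlation rho this reduces to one line:

```python
import math
import random

# Correlated draws: x1 = z1, x2 = rho*z1 + sqrt(1-rho^2)*z2, where z1, z2
# are independent standard normals (2x2 Cholesky factor of the correlation
# matrix). rho = 0 recovers the uncorrelated case.
random.seed(99)
rho, K = 0.8, 200_000

pairs = []
for _ in range(K):
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    pairs.append((z1, rho * z1 + math.sqrt(1 - rho ** 2) * z2))

# Check the sample correlation against the target.
mx = sum(a for a, _ in pairs) / K
my = sum(b for _, b in pairs) / K
cov = sum((a - mx) * (b - my) for a, b in pairs) / K
vx  = sum((a - mx) ** 2 for a, _ in pairs) / K
vy  = sum((b - my) ** 2 for _, b in pairs) / K
corr = cov / math.sqrt(vx * vy)
print(corr)  # close to 0.8
```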