Expected Value of Information

Omkar Apahle
EXPECTED VALUE OF INFORMATION (EVI)
• EVI is required to negate the effects of overconfidence, underestimation of risk, and surprise
• EVI is often required in:
  - Risk analysis
  - Sensitivity analysis
  - Decision problems
DEFINITION
• Expected Value of Information (EVI) is the integral, over all possible posterior distributions, of the opportunity loss prevented by improved information, weighted by the probability of that information.
CLASSIFICATION
Expected Value of Information (EVI)
• Expected Value of Perfect Information (EVPI)
• Expected Value of Imperfect Information (EVII)
• Expected Value of Including Uncertainty (EVIU)
• Expected Value of Ignoring Uncertainty (EVEU)
• Example: weather conditions and a camping activity
• EVPI = the highest price the decision maker is willing to pay to know the “Weather Condition” before making the camping decision
• EVII = the highest price the decision maker is willing to pay to know the “Weather Forecast” before making the camping decision
CHARACTERISTICS OF EVI
• Expected Value of Information (EVI) can never
be less than zero.
• No other information-gathering or information-sharing activity can be more valuable than that quantified by the value of perfect information.
SOURCES
• Sample data
• Expert judgments
EVI & BAYES RULE
• Bayesian analysis relies on both sample
information and prior information about
uncertain prospects.
• Bayesian analysis provides a formal representation of human learning: an individual updates his or her “subjective beliefs” after receiving new information.
EVI & Bayes rule continued ….
• Investment in the stock market
• An expert will provide perfect information
• Perfect information = always correct
• P( Expert says “Market Up” | Market really goes up ) = 1
EVI & Bayes rule continued ….
• Applying Bayes’ theorem:

  P( Market Up | Exp says “Up” ) =
    P( Exp “Up” | Market Up ) P( Market Up )
    / [ P( Exp “Up” | Market Up ) P( Market Up ) + P( Exp “Up” | Market Down ) P( Market Down ) ]
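This update can be checked numerically. A minimal sketch, assuming an illustrative prior P(Market Up) = 0.6 and an 80% expert accuracy for the imperfect case (neither number is from the slides):

```python
# Bayes' theorem for the stock-market example.
# Prior and expert-accuracy numbers are illustrative assumptions.

def posterior_up(prior_up, p_up_given_up, p_up_given_down):
    """P(Market Up | Expert says "Up") via Bayes' rule."""
    prior_down = 1.0 - prior_up
    evidence = p_up_given_up * prior_up + p_up_given_down * prior_down
    return p_up_given_up * prior_up / evidence

# Perfect information: the expert is always correct, so the posterior is 1.
print(posterior_up(0.6, 1.0, 0.0))   # -> 1.0

# Imperfect information: the expert is right 80% of the time.
print(posterior_up(0.6, 0.8, 0.2))   # -> ~0.857
```

With perfect information the posterior collapses to certainty; with imperfect information it moves only part of the way, which is why EVII is bounded above by EVPI.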
EVI & PRIOR DISTRIBUTION
• EVI depends on the prior distribution used to
represent current information.
• Subject experts and lay people often produce
distributions that are far too tight.
• New or more precise measurements are often
found to be outside the reported error bars of
old measurements.
• The expected posterior probability of a region, before the results are known, is exactly the prior probability of that region.
• The expected value of the posterior mean is equal to the prior mean.
• Prior variance = expected posterior variance + variance of the posterior mean
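Both identities can be verified on a small example. A sketch using an assumed discrete joint distribution p(x, y) (the numbers are illustrative):

```python
# Check: E[posterior mean] = prior mean, and
# prior variance = E[posterior variance] + variance of the posterior mean,
# on an assumed discrete joint distribution p(x, y).

from collections import defaultdict

p = {(0, 'a'): 0.2, (1, 'a'): 0.1, (0, 'b'): 0.1, (2, 'b'): 0.3, (3, 'b'): 0.3}

def mean_var(dist):
    m = sum(x * w for x, w in dist.items())
    v = sum((x - m) ** 2 * w for x, w in dist.items())
    return m, v

prior = defaultdict(float)          # marginal distribution of x
p_y = defaultdict(float)            # marginal distribution of the observation y
for (x, y), w in p.items():
    prior[x] += w
    p_y[y] += w
prior_mean, prior_var = mean_var(prior)

post_mean, post_var = {}, {}
for y0 in p_y:                      # posterior of x for each possible y
    post = {x: w / p_y[y0] for (x, y), w in p.items() if y == y0}
    post_mean[y0], post_var[y0] = mean_var(post)

exp_post_mean = sum(p_y[y] * post_mean[y] for y in p_y)
exp_post_var = sum(p_y[y] * post_var[y] for y in p_y)
var_post_mean = sum(p_y[y] * (post_mean[y] - exp_post_mean) ** 2 for y in p_y)

assert abs(exp_post_mean - prior_mean) < 1e-9
assert abs(prior_var - (exp_post_var + var_post_mean)) < 1e-9
print(prior_mean, prior_var)        # -> 1.6 1.44
```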
UNCERTAINTY
• A random variable y is more uncertain than another random variable z if:
  - y = z + random noise;
  - every risk averter prefers a gamble with payoffs equal to z to one with payoffs equal to y;
  - the density of y can be obtained from the density of z by shifting weight to the tails through a series of mean-preserving spreads.
Uncertainty continued…..
• Whether or not to include uncertainty depends purely on the decision maker
• Expected Value of Including Uncertainty
(EVIU)
• Expected Value of Ignoring (Excluding)
Uncertainty (EVEU)
NOTATION
• d ∈ D is a decision chosen from decision space D
• x ∈ X is an uncertain variable in space X
• L(d, x) is the loss function of d and x
• f(x) is the prior subjective probability density on x
• x_iu = E(x) is the value assumed for x when uncertainty is ignored
• E[ L(d, x) ] = ∫ L(d, x) f(x) dx is the prior expected loss over x for decision d
Notation continued …
• Bayes’ decision:
  d_y = argmin_d E[ L(d, x) ]
• Deterministic optimum decision ignoring uncertainty:
  d_iu = argmin_d L(d, x_iu)
EVIU
• Expected Value of Including Uncertainty (EVIU) is the expectation of the difference in loss between the optimal decision ignoring uncertainty and the Bayes’ decision.
  EVIU = E[ L(d_iu , x) ] − E[ L(d_y , x) ]
EVPI
• Expected Value of Perfect Information (EVPI) is the expectation of the difference in loss between the Bayes’ decision and the decision made after the uncertainty is removed by obtaining perfect information on x.
  EVPI = E[ L(d_y , x) ] − E[ L(d_pi(x) , x) ]
SOCRATIC RATIO
• A dimensionless index of the relative severity of EVIU and EVPI:
  S_iu = EVIU / EVPI
LOSS FUNCTIONS
• When does ignoring uncertainty not matter?
• Classes of common loss functions:
1. Linear
2. Quadratic
3. Cubic
LINEAR LOSS FUNCTION
• Assume x_iu = E(x) = x̄
• d ∈ { d_1, d_2, …, d_n }
• Loss function:
  L(d, x) = a_i + b_i x   if d = d_i ,  i = 1, …, n
Linear loss function continued….
• Since the loss is linear in x: E[ L(d, x) ] = L(d, x̄) = L(d, x_iu)
• Bayes’ decision:
  d_y = argmin_d E[ L(d, x) ] = argmin_d L(d, x_iu) = d_iu
• EVIU = E[ L(d_iu , x) ] − E[ L(d_y , x) ] = 0
• Considering uncertainty makes no difference to the decision, and hence to the outcome.
QUADRATIC LOSS FUNCTION
• Let the loss function be:
  L(d, x) = k ( d − x )²
• E[ L(d, x) ] = k ( d² − 2 d E(x) + E(x²) )
• Differentiating with respect to d and setting to zero:
  2d − 2 E(x) = 0
  d_y = E(x) = d_iu
• Uncertainty makes no difference to the decision.
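This can be confirmed by brute force: minimizing the expected quadratic loss over a grid of decisions recovers d_y = E(x). The three-point distribution and the value of k below are illustrative assumptions:

```python
# For quadratic loss L(d, x) = k (d - x)^2, the Bayes decision is d_y = E(x),
# identical to the decision ignoring uncertainty, so EVIU = 0.
# The distribution and k are illustrative assumptions.

import numpy as np

xs = np.array([1.0, 2.0, 5.0])          # possible values of x
fx = np.array([0.5, 0.3, 0.2])          # prior probabilities
k = 2.0

def expected_loss(d):
    return np.sum(fx * k * (d - xs) ** 2)

grid = np.linspace(0.0, 6.0, 6001)      # candidate decisions, step 0.001
d_y = grid[np.argmin([expected_loss(d) for d in grid])]
d_iu = np.sum(fx * xs)                  # E(x) = 2.1

eviu = expected_loss(d_iu) - expected_loss(d_y)
print(d_y, d_iu)                        # both -> 2.1
print(eviu)                             # ~ 0
```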
CUBIC ERROR LOSS FUNCTION
• Decisions involving uncertain future demand
• L(d, x) = r ( d − x )² + s ( d − x )³ ,  r, s > 0
• Henrion (1989) showed that:
  S_iu = EVIU / EVPI < 1/3
• Obtaining better information about x is better than including uncertainty (in all cases).
BILINEAR LOSS FUNCTION
• Newsboy problem
• How many newspapers to order ?
• d = number of newspapers to order
  x = uncertain demand
  a = $ lost per excess paper (ordered too many)
  b = $ forgone per unmet paper (ordered too few)
Newsboy problem continued ….
• Loss function, with a, b > 0:
  L(d, x) = a ( d − x )   if d > x
            b ( x − d )   if d < x
Newsboy problem continued ….
• Uniform prior on x, with mean x̄ and width w > 0:
  f(x) = 1/w   if x̄ − w/2 < x < x̄ + w/2
         0     otherwise
• d_iu = x̄
Newsboy problem continued ….
[Figure: uniform probability density f(x) of width w over paper demanded (x), with the decision d marked]
Newsboy problem continued ….
[Figure: bilinear loss versus error (d − x): loss b(x − d) when too few papers are ordered, a(d − x) when too many, zero at d = x]
Newsboy problem continued ….
• Results:
  EVPI = w a b / [ 2 ( a + b ) ]
  EVIU = w ( a − b )² / [ 8 ( a + b ) ]
  S_iu = EVIU / EVPI = ( a − b )² / ( 4 a b )
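These results can be checked by numerical integration of the bilinear loss against the uniform prior. A sketch; a, b, w, and x̄ are illustrative values (with a = 3b, the Socratic ratio comes out to exactly 1/3):

```python
# Numerical check of the newsboy results against the closed forms.
# a, b, w, x_bar are illustrative; the uniform prior and bilinear loss
# follow the slides.

import numpy as np

a, b, w, x_bar = 3.0, 1.0, 10.0, 50.0
lo, hi = x_bar - w / 2, x_bar + w / 2

N = 400000
xs = lo + (np.arange(N) + 0.5) * (w / N)     # midpoint sample of Uniform(lo, hi)

def expected_loss(d):
    loss = np.where(d > xs, a * (d - xs), b * (xs - d))
    return loss.mean()                        # E[L(d, x)] under the uniform prior

d_y = lo + w * b / (a + b)      # Bayes decision: critical fractile F(d_y) = b/(a+b)
d_iu = x_bar                    # decision ignoring uncertainty

evpi = expected_loss(d_y)       # perfect information removes all loss
eviu = expected_loss(d_iu) - expected_loss(d_y)

print(evpi, w * a * b / (2 * (a + b)))           # both -> 3.75
print(eviu, w * (a - b) ** 2 / (8 * (a + b)))    # both -> 1.25
print(eviu / evpi, (a - b) ** 2 / (4 * a * b))   # both -> 1/3
```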
Newsboy problem continued ….
• The Socratic ratio is independent of the width of the uncertainty
• EVIU does not grow with the amount of uncertainty relative to EVPI
• When a and b differ greatly (S_iu > 1), considering uncertainty is more important than getting better information
CATASTROPHIC LOSS PROBLEM
• Plane-catching problem
• How long to allow for the trip to the airport ?
• d = time allowed for the trip (decision)
  x = actual travel time (uncertain)
  k = marginal cost per minute of leaving earlier
  M = loss due to missing the plane
Plane-catching problem continued ….
• Loss function:
  L(d, x) = k ( d − x ) + { 0   if d > x
                            M   if d < x }
• k and M are positive
• M > k ( d − x ) for all d, x
Plane-catching problem continued ….
[Figure: loss L(d, x = 35) in minutes as a function of the departure decision d (minutes before the plane, −60 to 0): wasted time k(d − x) grows for earlier departures; missing the plane incurs the loss M ≈ 300]
Plane-catching problem continued ….
• x is uncertain and the decision is subjective
• f(x) = subjective probability density function
• The Bayes’ decision d_y satisfies:
  f(d_y) = k / M
Plane-catching problem continued ….
• If we ignore uncertainty:
  d_iu = x_0.5 , the median of f(x)
• d_iu will lead us to miss the plane half the time
• EVIU = E[ L(d_iu , x) ] − E[ L(d_y , x) ]
• EVPI = E[ L(d_y , x) ]  (perfect information removes all loss)
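The whole analysis can be sketched under an assumed Normal prior for the travel time; k, M, and the prior parameters below are illustrative numbers, not from the slides:

```python
# Plane-catching example with an assumed Normal(mu, sigma) prior on travel
# time x. Expected loss: E[L(d, x)] = k (d - mu) + M P(x > d).

import math

k, M = 1.0, 300.0         # cost per minute of leaving early; loss if plane missed
mu, sigma = 35.0, 5.0     # assumed prior on travel time (minutes)

def f(x):                 # prior density
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def F(x):                 # prior CDF
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

def expected_loss(d):
    return k * (d - mu) + M * (1 - F(d))

# Bayes decision: the upper root of f(d_y) = k / M (closed form for the Normal)
d_y = mu + sigma * math.sqrt(-2 * math.log(k * sigma * math.sqrt(2 * math.pi) / M))

d_iu = mu                 # median: with it we miss the plane half the time
eviu = expected_loss(d_iu) - expected_loss(d_y)
evpi = expected_loss(d_y)     # perfect information would remove all loss

print(d_y, expected_loss(d_iu), eviu, evpi)   # E[L(d_iu, x)] = M/2 = 150
```

With these numbers the Bayes decision leaves well before the median travel time, and EVIU far exceeds EVPI: for catastrophic losses, including uncertainty matters much more than refining it.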
HEURISTIC FACTORS
• Four heuristic factors contribute to understanding the EVI:
1.Uncertainty
(about parameter value)
2. Informativeness
(the extent to which the current uncertainty
may be reduced)
Heuristic factors continued …
3. Promise
(the probability that improved information
will result in a different decision and the
magnitude of the resulting gain)
4. Relevance
(the extent to which uncertainty about the parameter contributes to overall uncertainty)
Heuristic factors continued …
Example: whether to permit or prohibit the use of a food additive
• Expected social cost = number of life-years lost
• θ = risk from consuming the additive = excess cancer risk
• K = expected social cost of using the substitute if the additive is prohibited
Heuristic factors continued …
• f_0 = probability distribution representing current information about θ
• f_+ = posterior distribution if research shows θ is hazardous
• f_− = posterior distribution if research shows θ is safe
• L_0 = expected social cost if the additive is permitted
• L_1 = expected social cost of the decision (permit or prohibit) made after research
Heuristic factors continued …
• EVI = L0 – L1
• Condition: the additive is permitted if and only if L_0 < K
• There is a substantial chance that the additive is riskier than the alternative and should be prohibited
[Figure: expected social loss versus θ, with prior f_0 and possible posteriors f_− and f_+; the additive is permitted where expected loss is below K and prohibited above, with L_0 and L_1 marked]
Effect of greater prior uncertainty
[Figure: expected social loss versus θ under a wider prior f_0, with K, L_0, L_1, f_−, and f_+ marked]
Effect of greater informativeness
[Figure: expected social loss versus θ with more informative research (tighter posteriors f_− and f_+), with K, L_0, and L_1 marked]
Effect of greater promise
[Figure: expected social loss versus θ when improved information is more likely to change the permit/prohibit decision, with K, L_0, L_1, f_0, f_−, and f_+ marked]
Effect of relevance
[Figure: permit and prohibit regions in the (θ_1, θ_2) plane, showing how uncertainty about each parameter bears on the decision]
RISK PREMIUM
• How do we measure the monetary value of risk?
• Let:
  a = uncertain monetary reward
  w = initial wealth
  w + a = terminal wealth
  U(w + a) = utility of terminal wealth
Selling price of risk
• R_s = selling price of risk = the sure amount of money a decision maker would be willing to receive to sell the risk a
• { w + R_s } ~ { w + a }   (~ denotes indifference)
• Under expected utility:
  U( w + R_s ) = E U( w + a )
Bid price of risk
• R_b = bid price of risk = the sure amount of money a decision maker would be willing to pay to buy the risk a
• { w } ~ { w + a − R_b }
• Under expected utility:
  U( w ) = E U( w + a − R_b )
Risk premium
• R = risk premium = the sure amount of money one would forgo so as to be indifferent between receiving the risky return a and receiving the sure amount E(a) − R
• { w + a } ~ { w + E(a) − R }
• E U( w + a ) = U( w + E(a) − R )
Risk premium continued …
• v = w + E(a) − R is the certainty equivalent
• R = E[ w + a ] − v , the expected terminal wealth minus its certainty equivalent
• R measures the willingness to insure
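The certainty equivalent and risk premium can be made concrete with a small numeric sketch, assuming an exponential utility U(v) = −exp(−c v); the wealth, gamble, and risk-aversion numbers are illustrative:

```python
# Certainty equivalent and risk premium under an assumed exponential utility.
# All numbers are illustrative.

import math

w = 1000.0                                  # initial wealth
c = 0.005                                   # coefficient of absolute risk aversion
outcomes = [(0.5, 100.0), (0.5, -100.0)]    # the risk a: +/-100 with equal odds

def U(v):
    return -math.exp(-c * v)

def U_inv(u):
    return -math.log(-u) / c

EU = sum(p * U(w + a) for p, a in outcomes)     # expected utility of w + a
v = U_inv(EU)                                   # certainty equivalent of w + a
E_a = sum(p * a for p, a in outcomes)           # E(a) = 0 here
R = (w + E_a) - v                               # risk premium

# Check the defining indifference: EU{w + a} = U{w + E(a) - R}
assert abs(U(w + E_a - R) - EU) < 1e-12
print(v, R)                                     # R is positive (~24 here)
```

A risk averter's R is positive: he or she would pay up to R to replace the gamble with its expected value, which is exactly the willingness to insure.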
COMBINING PRIORS
• A single expert reports an idiosyncratic perception of a consensus
• It is more useful to combine judgments from several experts
• Aggregation procedures:
  - Weighted average
  - Bayes’ rule
  - Copula method
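The weighted average (linear opinion pool) is the simplest of the three procedures. A minimal sketch with two assumed discrete expert distributions and equal weights:

```python
# Linear opinion pool: weighted average of expert priors.
# The two discrete expert distributions and the equal weights are assumptions.

expert1 = {1.5: 0.2, 2.5: 0.6, 4.5: 0.2}   # expert's P(parameter = value)
expert2 = {2.5: 0.3, 4.5: 0.4, 6.0: 0.3}
weights = [0.5, 0.5]                        # treat the experts as equal

combined = {}
for wgt, dist in zip(weights, [expert1, expert2]):
    for x, prob in dist.items():
        combined[x] = combined.get(x, 0.0) + wgt * prob

print(combined)                             # still sums to 1
```

Because the pooled weights sum to one, the combined distribution is itself a valid prior; Bayes' rule or copula pooling would instead multiply or couple the densities.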
Example
• Climate sensitivity (Morgan and Keith, 1995)
• ΔT_2x = equilibrium increase in global annual mean surface temperature resulting from a doubling of atmospheric CO2 from its pre-industrial concentration
Example continued …
• Estimates gathered from different experts
• All experts are treated equally
• Range given by the IPCC: 1.5 to 4.5 °C
• Most likely value: 2.5 °C
• Tails extended to account for underestimation
• Sensitivity analysis: include / exclude expert 5
Example continued …
[Figure: ΔT_2x estimates (roughly −5 to 20 °C) elicited from experts 1–16, including 4a and 4b]
Example continued …
[Figure: combined probability density functions of ΔT_2x: all experts, all experts excluding expert 5, and all experts with exponential tails]
BEST ESTIMATE
• x_iu = mean of the subjective probability distribution
• What should we choose: mean, median, or mode?
• If Mean ≈ Median:
  A decision based on the central value loses little by ignoring uncertainty
• If Mean >> Median:
  The tail dominates the expectation; make the decision considering the possibility of extreme scenarios
IN CONCLUSION
• “As for me, all I know is I know nothing.”
Socrates
• Expected Value of Information depends upon
the expected benefits of Socratic wisdom (i.e.
admitting one’s limits of knowledge) relative
to the expected benefits of perfect wisdom
(i.e. knowing the truth).
THANK YOU!