*** THE MEASUREMENT OF RISK

Note: risk and uncertainty are treated as equivalent throughout the discussion.

* DEFINITION: A risky event is any event that is not known for sure ahead of time. This means:
. uncertainty about the occurrence of events;
. a temporal dimension of risk (the uncertainty is resolved only with the passage of time).

According to scientific belief, any event can be explained in a cause-effect framework. This suggests that "risk" does not exist. If so, why are there risky events?
1/ because of a limited ability to control and/or measure precisely some of the causal factors (e.g., chaotic systems, flipping a coin, etc.);
2/ because of a limited ability to process information (bounded rationality: playing chess, etc.);
3/ because of information cost (the economics of information: obtaining and processing information may not be "worth it" if it is too costly).

Probability theory was proposed by the scientific community as a rationalization for the existence of risky events. In that context, a risky event is defined to be any event A for which Pr(A) < 1.

Ex: The outcome of flipping a coin is not a random event; it is the outcome of a deterministic process which behaves as if it were a random event.

Thus, a particular event may or may not be risky depending on:
1/ the quality of measurements;
2/ the ability to control it;
3/ the ability to obtain and process information; and
4/ the cost of information.

As a result, there is much disagreement about what a probability is supposed to be. In general, a probability can be interpreted as a measure of anything we do not know. But knowledge can be subjective...

Intuition: A probability is a measure of the "relative frequency" of an event. But what if:
- the event is not repeatable?
- individuals disagree about the magnitude of a probability?

Subjective interpretation of a probability: A probability is a subjective and personal evaluation of the relative likelihood of an event, reflecting the individual's own information and beliefs. But is it reasonable to assume that subjective probabilities exist?

* THE EXISTENCE OF PROBABILITY DISTRIBUTIONS

Based on the concept of (subjective) relative likelihood.

. Notation: Given some sample space S:
A <L B: event B is "more likely" than event A.
A ≤L B: event B is at least as likely as event A.
A ~L B: events A and B are equally likely.

. Assumptions:
As1: For any two events A and B, exactly one of the following holds: A <L B; A ~L B; B <L A.
As2: If A1 ∩ A2 = ∅ = B1 ∩ B2 and Ai ≤L Bi, i = 1, 2, then (A1 ∪ A2) ≤L (B1 ∪ B2). If in addition either A1 <L B1 or A2 <L B2, then (A1 ∪ A2) <L (B1 ∪ B2).
As3: ∅ <L S, and for any event A, ∅ ≤L A.
As4: If A1 ⊃ A2 ⊃ ... is a decreasing sequence of events and if B ≤L Ai, i = 1, 2, ..., then B ≤L ∩i≥1 Ai.
As5: There exists a random variable x uniformly distributed on the interval [0, 1], where x satisfies [x ∈ (a1, b1)] ≤L [x ∈ (a2, b2)] iff (b1 - a1) ≤ (b2 - a2), for any sub-intervals (ai, bi) with 0 ≤ ai ≤ bi ≤ 1, i = 1, 2.

. Theorem: Under assumptions As1-As5, for any event A there exists a unique probability function Pr(·) satisfying A ~L G[0, Pr(A)], where G[a, b] is the event that the uniformly distributed random variable x lies in the interval (a, b). Also, Pr(A) ≤ Pr(B) iff A ≤L B, for any two events A and B.
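The theorem is constructive: Pr(A) is the point of indifference between A and the reference events G[0, p]. Below is a minimal sketch of how such comparisons pin down Pr(A) by bisection; the comparison oracle is hypothetical (simulated here by a known true probability, not part of the original notes):

    def elicit_probability(more_likely_than_reference, tol=1e-6):
        """Binary-search for Pr(A) using judgments of the form
        "A is more likely than G[0, p]" from the theorem above."""
        lo, hi = 0.0, 1.0
        while hi - lo > tol:
            p = (lo + hi) / 2
            if more_likely_than_reference(p):   # A judged more likely than G[0, p]
                lo = p
            else:
                hi = p
        return (lo + hi) / 2

    # Simulated judgments behaving as if Pr(A) = 0.3 (hypothetical oracle).
    estimate = elicit_probability(lambda p: 0.3 > p)
    print(round(estimate, 4))   # -> 0.3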
* ELICITATION OF PROBABILITIES: How can one estimate a distribution function or a probability function (or its parameters)?

A- Case of repeatable events:
A probability distribution can be estimated from repeated observations of an event (= the sample information). This is the "classical" approach to statistical analysis.
Ex: . flipping a coin
. the distribution of price changes (if it is stable over time)
Note: This is similar to a Bayesian approach under an "uninformative prior". In this case, the sample information represents all the available information, and Bayesian and classical statistics are fairly similar.

1/ If the form of the probability function for the sample likelihood is known (e.g., normal), the parameters of the probability function can be estimated by maximizing the sample likelihood function. This is called the maximum likelihood estimation method.
2/ The moments of the sample distribution can be estimated directly (assuming that they exist): sample mean, sample variance, sample skewness, sample kurtosis, etc. Note that this does not require knowing the form of the probability function (e.g., the least squares method).
3/ Plot the sample data: {% of sample observations ≤ t} as a function of t. Then draw a curve through the points. This gives an estimate of the distribution function. (A sketch of methods 1/-3/ appears at the end of this section.)

B- Case of non-repeatable events:
In this case, we need subjective probability judgments (e.g., obtained by administering a questionnaire to an individual).

1/ Using reference lotteries (a sketch appears at the end of this section): Partition the sample space into distinct events A1, A2, ... For each event Ai, i = 1, 2, ..., follow the steps:
. step 1: Guess some pij as a rough estimate of Pr(Ai), j = 0.
. step 2: Consider the game: receive $Y > 0 if Ai occurs, $0 otherwise. Ask for the individual's willingness to pay to play the game: $Wj.
. step 3: Consider the game: receive $Y > 0 with probability pij. Ask for the individual's willingness to pay to play the game: $Zj.
. step 4: If Wj < Zj, then choose pi,j+1 smaller than pij. Let j = j+1. Go to step 3.
  If Wj > Zj, then choose pi,j+1 larger than pij. Let j = j+1. Go to step 3.
  If Wj = Zj, then pij = Pr(Ai).

2/ The fractile method (a sketch appears at the end of this section): Find zi such that Pr(x ≤ zi) = i, 0 ≤ i ≤ 1, for selected values of i:
. Step 1: Find the value z.5 such that (x ≤ z.5) ~L (x ≥ z.5).
. Step 2: Find the value z.25 such that (x ≤ z.25) ~L (z.25 ≤ x ≤ z.5). Find the value z.75 such that (x ≥ z.75) ~L (z.5 ≤ x ≤ z.75).
. Step 3: Do the same for z.125, z.375, z.625, z.875.
. etc.
. Plot the points {Pr(x ≤ zi)} as a function of zi, and draw a curve through them. This gives an estimate of the distribution function for x.

C- Bayesian analysis:
Bayes' theorem allows the use of both subjective prior information and sample information (a sketch appears at the end of this section).
- If the parametric form of the posterior probability function is known, then parameter estimates can be obtained by maximizing the posterior probability function.
- Or posterior moments can be estimated directly (posterior mean, variance, etc.; e.g., the Kalman filter).
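A minimal sketch of methods 1/-3/ for repeatable events, assuming a normal sample likelihood in method 1/ (whose maximum likelihood estimates have closed forms); the data-generating parameters below are made up for illustration:

    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.normal(loc=5.0, scale=2.0, size=500)   # repeated observations

    # 1/ Maximum likelihood under a normal likelihood: closed-form MLEs.
    mu_hat = data.mean()                          # MLE of the mean
    sigma2_hat = ((data - mu_hat) ** 2).mean()    # MLE of the variance (divides by n)

    # 2/ Sample moments, requiring no distributional form.
    m = data.mean()
    v = data.var(ddof=1)                              # sample variance
    skewness = ((data - m) ** 3).mean() / v ** 1.5    # sample skewness
    kurtosis = ((data - m) ** 4).mean() / v ** 2      # sample kurtosis

    # 3/ Empirical distribution function: % of observations <= t.
    t = np.sort(data)
    ecdf = np.arange(1, t.size + 1) / t.size
    # Plotting ecdf against t and drawing a curve through the points
    # gives an estimate of the distribution function.

    print(mu_hat, sigma2_hat, skewness, kurtosis)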
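A minimal sketch of the reference-lottery loop in B-1/. The two willingness-to-pay questions are simulated by a hypothetical risk-neutral respondent whose subjective Pr(Ai) = 0.3 (an assumption for illustration), and step 4 is implemented as a bisection on pij:

    def elicit_with_reference_lottery(wtp_event, wtp_reference,
                                      p0=0.5, tol=0.01, max_iter=50):
        """Adjust the reference probability p until the stated
        willingness to pay for the two games coincides (steps 1-4)."""
        lo, hi = 0.0, 1.0
        p = p0                     # step 1: rough initial guess
        w = wtp_event()            # step 2: WTP for "receive $Y if Ai occurs"
        for _ in range(max_iter):
            z = wtp_reference(p)   # step 3: WTP for "receive $Y with probability p"
            if abs(w - z) <= tol:  # step 4: indifference => p estimates Pr(Ai)
                break
            if w < z:
                hi = p             # event game worth less: next guess smaller
            else:
                lo = p             # event game worth more: next guess larger
            p = (lo + hi) / 2
        return p

    # Hypothetical risk-neutral respondent: subjective Pr(Ai) = 0.3, Y = $100.
    Y, true_p = 100.0, 0.3
    p_hat = elicit_with_reference_lottery(lambda: true_p * Y, lambda p: p * Y)
    print(round(p_hat, 3))   # -> ~0.3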
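A minimal sketch of the fractile method in B-2/, assuming a hypothetical respondent whose judgments are consistent with an exponential distribution (chosen only because its quantile function has a closed form):

    import math

    def respondent_fractile(p, lam=1.0):
        # Hypothetical respondent consistent with an exponential(lam)
        # distribution: z_p solves Pr(x <= z_p) = p, i.e. the point that
        # makes the two conditional events in each step equally likely.
        return -math.log(1.0 - p) / lam

    def fractile_method(depth=3):
        """Elicit z.5, then z.25 and z.75, then z.125, ..., z.875 (steps 1-3).
        Plotting p against z_p traces out the distribution function."""
        fractiles = {}
        for d in range(1, depth + 1):
            for k in range(1, 2 ** d, 2):   # odd k: the new dyadic levels
                p = k / 2 ** d              # 1/2; 1/4, 3/4; 1/8, 3/8, ...
                fractiles[p] = respondent_fractile(p)
        return dict(sorted(fractiles.items()))

    for p, z in fractile_method().items():
        print(f"Pr(x <= {z:.3f}) = {p}")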
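A minimal sketch of Bayesian updating for C-, using the standard beta-binomial conjugate pair so that the posterior has a closed form; the prior parameters and sample counts are made up for illustration:

    # Beta(a, b) prior on an unknown event probability theta,
    # combined with s successes in n Bernoulli trials.
    a, b = 2.0, 2.0     # subjective prior (mean 0.5, mildly informative)
    s, n = 7, 10        # sample information

    # Bayes' theorem in conjugate form: the posterior is Beta(a + s, b + n - s).
    a_post, b_post = a + s, b + (n - s)

    post_mean = a_post / (a_post + b_post)             # posterior mean
    post_var = (a_post * b_post) / ((a_post + b_post) ** 2
                                    * (a_post + b_post + 1))
    post_mode = (a_post - 1) / (a_post + b_post - 2)   # maximizes the posterior

    print(post_mean, post_var, post_mode)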
* Do people actually follow Bayes' rule in learning? In general, they do not. Sometimes people fail to update their prior ("conservatism"), and sometimes people neglect their prior ("memory loss"). (A sketch of these two deviations appears at the end of this section.)

* The learning process is complicated. The brain has:
. a short-term memory with limited capacity and quick decay;
. a long-term memory with nearly limitless capacity and slow decay, but which is highly selective.
If the information stored by the brain decays, then memory loss suggests that new information (sample information) may tend to carry more weight than old information (prior information). But actions can be taken (e.g., reviewing) to slow down the decay of the information stock. This would imply that "remembering" is costly...

* The learning process is costly. Obtaining and processing information is typically costly (in terms of time, money, resources, etc.). In general, education and experience can reduce learning costs, thus stressing the role of "human capital" in economic decisions and resource allocation. Under costly information, some information may not be worth obtaining, processing or remembering. And under bounded rationality, people may not be able to obtain or process some information, or to revise prior probability judgments in the light of additional evidence.

* Information is carried by signals (e.g., written words, language). These signals are not perfect (e.g., they may have different meanings for different people). The nature of the signals can influence the way information is processed by individuals. This is called "framing bias" in subjective elicitation.

* As a result of the complexities of the learning process, any model of behavior under risk is likely to be a somewhat unsatisfactory representation of the real world.

Note: Some people argue that Bayesian analysis should be used in normative economic analysis even if it is not consistent with human behavior. The reason is that Bayesian learning may represent a "better" learning process than human learning (under bounded rationality, memory loss and framing bias...).

Note: Ambiguity is the situation where individuals assign non-unique probabilities to some uncertain events.
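A minimal sketch of "conservatism" and "memory loss" as deviations from Bayes' rule. The parameterization below (raising the prior and the likelihood to powers before renormalizing) is an illustrative assumption for exposition, not a model from these notes; exponents of 1 on both recover Bayes' rule exactly:

    def distorted_update(prior, likelihoods, c_prior=1.0, c_like=1.0):
        """Posterior over competing hypotheses with distortion exponents.
        c_prior = c_like = 1 reproduces Bayes' rule exactly;
        c_like < 1 under-weights the data ("conservatism");
        c_prior < 1 under-weights the prior ("memory loss").
        Illustrative parameterization, assumed for exposition."""
        weights = [(p ** c_prior) * (l ** c_like)
                   for p, l in zip(prior, likelihoods)]
        total = sum(weights)
        return [w / total for w in weights]

    prior = [0.9, 0.1]         # strong prior belief in hypothesis H1
    likelihoods = [0.2, 0.8]   # the observed signal favors H2

    print(distorted_update(prior, likelihoods))               # Bayes' rule
    print(distorted_update(prior, likelihoods, c_like=0.3))   # conservatism
    print(distorted_update(prior, likelihoods, c_prior=0.0))  # prior neglected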