Economics 765 – Assignment 1

1.3 In the one-period binomial model (as considered in the first class), suppose we want to determine the price at time zero of the derivative security with payoff V1 = S1. This means that the derivative security pays the stock price, in either state of the world. It can also be interpreted as a European call option with strike price K = 0. (Why? Be sure you understand this.) Use the risk-neutral pricing formula
\[ V_0 = \frac{1}{1+r}\bigl[\tilde p\,V_1(H) + \tilde q\,V_1(T)\bigr] \]
to compute the time-zero price V0 of this option.

The value of a European call with strike price K is (S1 − K)+. We can safely assume that S1 ≥ 0, and so, if K = 0, (S1 − K)+ = S1. Recall that the risk-neutral probabilities are given by
\[ \tilde p = \frac{1+r-d}{u-d} \quad\text{and}\quad \tilde q = \frac{u-1-r}{u-d}. \]
We set V1(H) = S1(H) = uS0 and V1(T) = S1(T) = dS0, and the pricing formula then gives
\[ V_0 = \frac{S_0}{1+r}\left(\frac{(1+r-d)u}{u-d} + \frac{(u-1-r)d}{u-d}\right) = \frac{S_0}{1+r}\cdot\frac{(u-d)(1+r)}{u-d} = S_0. \]
Thus the unsurprising result is that you pay the market price of the security in order to obtain a derivative security whose value is identical to that of the security itself. The hedging portfolio is trivial: it consists of one unit of the underlying asset, and it is unnecessary to hold any of the risk-free asset.

1.1 The axioms for a probability measure P defined on a measurable space (Ω, F) can be stated as follows: (i) P(Ω) = 1, and (ii) whenever A1, A2, . . . is a sequence of disjoint sets in F,
\[ P\Bigl(\bigcup_{n=1}^{\infty} A_n\Bigr) = \sum_{n=1}^{\infty} P(A_n). \]
Use these axioms to show the following.

(i) If A ∈ F, B ∈ F, and A ⊂ B, then P(A) ≤ P(B).

(ii) If A ∈ F and {A_n} is a sequence of sets in F with lim_{n→∞} P(A_n) = 0 and A ⊂ A_n for every n, then P(A) = 0.

(i) The set B \ A is defined as the set of points of B that are not in A: B \ A = {ω ∈ Ω | ω ∈ B, ω ∉ A}. It follows that A and B \ A are disjoint and that their union is B. Then, using axiom (ii) with A1 = A, A2 = B \ A, and An = ∅ for n > 2, we see that
\[ P(B) = P(A) + P(B \setminus A). \]
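The additivity identity just derived, and the monotonicity it implies, can be illustrated on a small finite probability space. A minimal sketch (the sample space and the weights are illustrative choices, not part of the exercise):

```python
from fractions import Fraction

# A toy probability measure on a finite sample space:
# each outcome gets a weight, and the weights sum to 1.
P_weights = {
    "a": Fraction(1, 2),
    "b": Fraction(1, 4),
    "c": Fraction(1, 8),
    "d": Fraction(1, 8),
}

def P(event):
    """Probability of an event (a set of outcomes) under the toy measure."""
    return sum(P_weights[omega] for omega in event)

A = {"a"}
B = {"a", "b", "c"}            # A is a subset of B

# Finite additivity: P(B) = P(A) + P(B \ A)
assert P(B) == P(A) + P(B - A)

# Monotonicity follows, since P(B \ A) >= 0:
assert P(A) <= P(B)
```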
Since P(B \ A) ≥ 0, the identity P(B) = P(A) + P(B \ A) implies P(A) ≤ P(B), as required.

(ii) From part (i), since A ⊂ A_n for every n, it follows that P(A) ≤ P(A_n). Letting n → ∞, we get
\[ P(A) \le \lim_{n\to\infty} P(A_n) = 0. \]
Since also P(A) ≥ 0, we obtain the desired result that P(A) = 0.

1.5 When dealing with double Lebesgue integrals, just as with double Riemann integrals, the order of integration can be reversed. The only assumption required is that the function being integrated should be either nonnegative or integrable. Here is an application of this fact. Let X be a nonnegative random variable with cumulative distribution function F(x) = P(X ≤ x). Show that
\[ EX = \int_0^\infty \bigl(1 - F(x)\bigr)\,dx \]
by showing that
\[ \int_\Omega \int_0^\infty I_{[0,X(\omega)[}(x)\,dx\,dP(\omega) \]
is equal to both EX and ∫_0^∞ (1 − F(x)) dx.

The inner integral in the double integral above is
\[ \int_0^\infty I_{[0,X(\omega)[}(x)\,dx = \int_0^{X(\omega)} dx = X(\omega). \]
Thus the double integral itself is
\[ \int_\Omega X(\omega)\,dP(\omega) = EX, \]
by definition of the expectation. When we reverse the order of integration, the double integral becomes
\[ \int_0^\infty \int_\Omega I_{[0,X(\omega)[}(x)\,dP(\omega)\,dx. \]
The inner integral is now to be thought of as a function of x. The indicator function in the integrand equals 1 if and only if x < X(ω), which is equivalent to X(ω) > x. Thus we see that
\[ \int_\Omega I_{[0,X(\omega)[}(x)\,dP(\omega) = \int_{\{X > x\}} dP(\omega) = P(X > x) = 1 - P(X \le x) = 1 - F(x). \]
Thus the double integral is
\[ \int_0^\infty \bigl(1 - F(x)\bigr)\,dx. \]
This gives the desired conclusion.

1.9 Suppose that X is a random variable defined on a probability space (Ω, F, P), that A ∈ F, and that, for every Borel subset B of R, we have
\[ \int_A I_B(X(\omega))\,dP(\omega) = P(A)\cdot P\{X \in B\}. \tag{S.01} \]
Then we say that X is independent of the event A. Show that, if X is independent of an event A, then
\[ \int_A g(X(\omega))\,dP(\omega) = P(A)\cdot Eg(X) \tag{S.02} \]
for every nonnegative, Borel-measurable function g.

This is just the standard machine. If g = I_B, then the left-hand side of (S.02) is the same as that of (S.01).
Then the result follows on noting that
\[ Eg(X) = EI_B(X) = \int_\Omega I_B(X(\omega))\,dP(\omega) = \int_{\{X \in B\}} dP(\omega) = P(X \in B), \]
so that the right-hand side of (S.02) is also equal to that of (S.01).

The next step uses linearity. We consider simple functions of the form
\[ g(x) = \sum_{k=1}^n a_k I_{B_k}(x). \]
We now see that
\[ \begin{aligned} \int_A \sum_{k=1}^n a_k I_{B_k}(X(\omega))\,dP(\omega) &= \sum_{k=1}^n a_k \int_A I_{B_k}(X(\omega))\,dP(\omega) \\ &= \sum_{k=1}^n a_k\, P(A)\cdot P(X \in B_k) \\ &= P(A) \sum_{k=1}^n a_k\, P(X \in B_k) \\ &= P(A) \sum_{k=1}^n a_k\, EI_{B_k}(X) \\ &= P(A)\, E\sum_{k=1}^n a_k I_{B_k}(X). \end{aligned} \]
The first equality follows from the linearity of the integral; the second from the first step of the machine, just proved; the third is trivial; the fourth follows from the definition of the expectation of a random variable, here I_{B_k}(X), that can take on only finitely many values; and the fifth is again linearity. This shows that the result is true for simple functions. The third step of the machine uses monotone convergence as usual, and the fourth step, which separates the positive and negative parts of a function, is not even needed here, since g is assumed nonnegative.

1.10 Let P be the uniform Lebesgue measure on Ω = [0, 1]. Define
\[ Z(\omega) = \begin{cases} 0 & \text{if } 0 \le \omega < \tfrac12, \\ 2 & \text{if } \tfrac12 \le \omega \le 1. \end{cases} \]
For A ∈ B[0, 1], define
\[ \tilde P(A) = \int_A Z(\omega)\,dP(\omega). \]
(i) Show that P̃ is a probability measure.

(ii) Show that, if P(A) = 0, then P̃(A) = 0. We say that P̃ is absolutely continuous with respect to P.

(iii) Show that there is a set A for which P̃(A) = 0 but P(A) > 0. In other words, P and P̃ are not equivalent.

(i) We know that all we need for P̃ to be a probability measure is that Z should be measurable, nonnegative, and of expectation 1. It is obviously measurable, since the inverse images of its two values are {0 ≤ ω < 1/2} and {1/2 ≤ ω ≤ 1}, which, being intervals, are Borel sets. Nonnegativity is trivial. The expectation is that of a random variable that can take on only finitely many values, and so is calculated as follows:
\[ EZ = \int_{[0,1]} Z(\omega)\,d\omega = 0\cdot P\bigl(0 \le \omega < \tfrac12\bigr) + 2\cdot P\bigl(\tfrac12 \le \omega \le 1\bigr) = 2\cdot\tfrac12 = 1. \]
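Parts (i) and (iii) of 1.10 can be spot-checked numerically. A minimal sketch approximating the defining integral by a midpoint Riemann sum (the grid size is an arbitrary choice):

```python
# Z from exercise 1.10: the density of P-tilde with respect to P
def Z(omega):
    return 0.0 if omega < 0.5 else 2.0

def P_tilde(indicator, n=100_000):
    """Midpoint Riemann sum of Z(omega) * indicator(omega) over [0, 1],
    approximating the integral that defines P-tilde(A)."""
    h = 1.0 / n
    return sum(Z((i + 0.5) * h) * indicator((i + 0.5) * h) for i in range(n)) * h

# (i) P-tilde has total mass E Z = 1
assert abs(P_tilde(lambda w: 1.0) - 1.0) < 1e-9

# (iii) a subset of [0, 1/2[ gets P-tilde measure 0 despite positive P measure
assert P_tilde(lambda w: 1.0 if w < 0.25 else 0.0) == 0.0
```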
(ii) We calculate as follows, noting that Z = 2 I_{[1/2,1]}:
\[ \tilde P(A) = \int_A Z(\omega)\,dP(\omega) = 2\int_{[0,1]} I_A\, I_{[1/2,1]}\,dP = 2\int_{[0,1]} I_{A\cap[1/2,1]}\,dP = 2P\bigl(A\cap[\tfrac12,1]\bigr) \le 2P(A) = 0. \]
The inequality at the second-last step is a consequence of the result of exercise 1.1(i), since A ∩ [1/2, 1] ⊂ A.

(iii) Any set A contained in the interval [0, 1/2[ is such that P̃(A) = 0, because, as we just showed, this expression is twice the P-probability of A ∩ [1/2, 1], which here is the empty set. Of course, any interval of positive length in [0, 1/2[ has positive P-measure.

1.14 Let X be a nonnegative random variable defined on a probability space (Ω, F, P) with the exponential distribution, which is
\[ P\{X \le a\} = 1 - e^{-\lambda a}, \qquad a \ge 0, \]
where λ is a positive constant. Let λ̃ be another positive constant, and define
\[ Z = \frac{\tilde\lambda}{\lambda}\, e^{-(\tilde\lambda - \lambda)X}. \]
Define P̃ by
\[ \tilde P(A) = \int_A Z\,dP \quad \text{for all } A \in \mathcal F. \]
(i) Show that P̃(Ω) = 1.

(ii) Compute the distribution function P̃{X ≤ a}, a ≥ 0, of the random variable X under the probability measure P̃.

(i) By definition,
\[ \tilde P(\Omega) = \int_\Omega Z\,dP = \frac{\tilde\lambda}{\lambda}\int_\Omega \exp\bigl(-(\tilde\lambda-\lambda)X(\omega)\bigr)\,dP(\omega) = \frac{\tilde\lambda}{\lambda}\, E\exp\bigl(-(\tilde\lambda-\lambda)X\bigr), \tag{S.03} \]
where the last equality is just the definition of the expectation. Recall the measure μ_X defined on (R, B) by the requirement that, for B ∈ B, μ_X(B) = P(X ∈ B). Thus
\[ \mu_X([0,x]) = P(X \in [0,x]) = P(0 \le X \le x) = 1 - e^{-\lambda x}, \]
since it is clear from the CDF that X takes on only nonnegative values. Recall also that, for a nonnegative Borel-measurable function g,
\[ Eg(X) = \int_{\mathbb R} g(x)\,d\mu_X(x). \]
Thus
\[ E\exp\bigl(-(\tilde\lambda-\lambda)X\bigr) = \int_0^\infty \exp\bigl(-(\tilde\lambda-\lambda)x\bigr)\,d\bigl(1 - e^{-\lambda x}\bigr) = \int_0^\infty \exp\bigl(-(\tilde\lambda-\lambda)x\bigr)\,\lambda e^{-\lambda x}\,dx = \lambda\int_0^\infty e^{-\tilde\lambda x}\,dx = \Bigl[-\frac{\lambda}{\tilde\lambda}\, e^{-\tilde\lambda x}\Bigr]_0^\infty = \frac{\lambda}{\tilde\lambda}. \]
Finally, from (S.03), we see that P̃(Ω) = (λ̃/λ) · (λ/λ̃) = 1.

(ii) This is a straightforward computation:
\[ \begin{aligned} \tilde P\{X \le a\} &= \int_{\{X \le a\}} Z\,dP = \int_\Omega I_{[0,a]}(X)\, Z\,dP = E\bigl(I_{[0,a]}(X)\, Z\bigr) = \int_0^\infty I_{[0,a]}(x)\,\frac{\tilde\lambda}{\lambda}\, e^{-(\tilde\lambda-\lambda)x}\,d\mu_X(x) \\ &= \frac{\tilde\lambda}{\lambda}\int_0^a e^{-(\tilde\lambda-\lambda)x}\,\lambda e^{-\lambda x}\,dx = \tilde\lambda \int_0^a e^{-\tilde\lambda x}\,dx = \Bigl[-e^{-\tilde\lambda x}\Bigr]_0^a = 1 - e^{-\tilde\lambda a}. \end{aligned} \]
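The computation in (ii) can be spot-checked numerically. A minimal sketch using a midpoint Riemann sum (the values λ = 1, λ̃ = 2, a = 1.5 are illustrative):

```python
import math

lam, lam_tilde, a = 1.0, 2.0, 1.5   # illustrative parameter values

def integrand(x):
    # Z(x) times the Exp(lam) density:
    # (lam_tilde/lam) e^{-(lam_tilde - lam) x} * lam e^{-lam x}
    Z = (lam_tilde / lam) * math.exp(-(lam_tilde - lam) * x)
    return Z * lam * math.exp(-lam * x)

# Midpoint rule for the integral from 0 to a
n = 100_000
h = a / n
approx = sum(integrand((i + 0.5) * h) for i in range(n)) * h

closed_form = 1.0 - math.exp(-lam_tilde * a)   # the value derived above
assert abs(approx - closed_form) < 1e-6
```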
Thus the change of measure leaves us with an exponentially distributed random variable, but with parameter λ̃ instead of λ.
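The same conclusion can be seen by simulation: draws of X under P, reweighted by Z, behave like draws from an Exp(λ̃) distribution. A minimal sketch (the rates, sample size, seed, and tolerances are arbitrary choices):

```python
import math
import random

random.seed(0)
lam, lam_tilde = 1.0, 2.0      # illustrative rates
n = 100_000
a = 1.0

# Sample X ~ Exp(lam) under P and weight each draw by Z(X)
total_weight = 0.0             # estimates E Z, which should be 1 (part (i))
weighted_cdf = 0.0             # estimates P-tilde(X <= a) (part (ii))
for _ in range(n):
    x = random.expovariate(lam)
    z = (lam_tilde / lam) * math.exp(-(lam_tilde - lam) * x)
    total_weight += z
    if x <= a:
        weighted_cdf += z

# E Z = 1, and the reweighted CDF matches the Exp(lam_tilde) CDF at a
assert abs(total_weight / n - 1.0) < 0.05
assert abs(weighted_cdf / n - (1.0 - math.exp(-lam_tilde * a))) < 0.05
```

This is exactly the importance-sampling reading of the Radon–Nikodym derivative Z: expectations under P̃ are computed as Z-weighted expectations under P.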