M2S1 Lecture Notes
G. A. Young
http://www2.imperial.ac.uk/~ayoung
September 2011

Contents

1 DEFINITIONS, TERMINOLOGY, NOTATION
  1.1 EVENTS AND THE SAMPLE SPACE
    1.1.1 OPERATIONS IN SET THEORY
    1.1.2 MUTUALLY EXCLUSIVE EVENTS AND PARTITIONS
  1.2 THE σ-FIELD
  1.3 THE PROBABILITY FUNCTION
  1.4 PROPERTIES OF P(.): THE AXIOMS OF PROBABILITY
  1.5 CONDITIONAL PROBABILITY
  1.6 THE THEOREM OF TOTAL PROBABILITY
  1.7 BAYES' THEOREM
  1.8 COUNTING TECHNIQUES
    1.8.1 THE MULTIPLICATION PRINCIPLE
    1.8.2 SAMPLING FROM A FINITE POPULATION
    1.8.3 PERMUTATIONS AND COMBINATIONS
    1.8.4 PROBABILITY CALCULATIONS

2 RANDOM VARIABLES & PROBABILITY DISTRIBUTIONS
  2.1 RANDOM VARIABLES & PROBABILITY MODELS
  2.2 DISCRETE RANDOM VARIABLES
    2.2.1 PROPERTIES OF MASS FUNCTION fX
    2.2.2 CONNECTION BETWEEN FX AND fX
    2.2.3 PROPERTIES OF DISCRETE CDF FX
  2.3 CONTINUOUS RANDOM VARIABLES
    2.3.1 PROPERTIES OF CONTINUOUS FX AND fX
  2.4 EXPECTATIONS AND THEIR PROPERTIES
  2.5 INDICATOR VARIABLES
  2.6 TRANSFORMATIONS OF RANDOM VARIABLES
    2.6.1 GENERAL TRANSFORMATIONS
    2.6.2 1-1 TRANSFORMATIONS
  2.7 GENERATING FUNCTIONS
    2.7.1 MOMENT GENERATING FUNCTIONS
    2.7.2 KEY PROPERTIES OF MGFS
    2.7.3 OTHER GENERATING FUNCTIONS
  2.8 JOINT PROBABILITY DISTRIBUTIONS
    2.8.1 THE CHAIN RULE FOR RANDOM VARIABLES
    2.8.2 CONDITIONAL EXPECTATION AND ITERATED EXPECTATION
  2.9 MULTIVARIATE TRANSFORMATIONS
  2.10 MULTIVARIATE EXPECTATIONS AND COVARIANCE
    2.10.1 EXPECTATION WITH RESPECT TO JOINT DISTRIBUTIONS
    2.10.2 COVARIANCE AND CORRELATION
    2.10.3 JOINT MOMENT GENERATING FUNCTION
    2.10.4 FURTHER RESULTS ON INDEPENDENCE
  2.11 ORDER STATISTICS

3 DISCRETE PROBABILITY DISTRIBUTIONS

4 CONTINUOUS PROBABILITY DISTRIBUTIONS

5 MULTIVARIATE PROBABILITY DISTRIBUTIONS
  5.1 THE MULTINOMIAL DISTRIBUTION
  5.2 THE DIRICHLET DISTRIBUTION
  5.3 THE MULTIVARIATE NORMAL DISTRIBUTION

6 PROBABILITY RESULTS & LIMIT THEOREMS
  6.1 BOUNDS ON PROBABILITIES BASED ON MOMENTS
  6.2 THE CENTRAL LIMIT THEOREM
  6.3 MODES OF STOCHASTIC CONVERGENCE
    6.3.1 CONVERGENCE IN DISTRIBUTION
    6.3.2 CONVERGENCE IN PROBABILITY
    6.3.3 CONVERGENCE IN QUADRATIC MEAN

7 STATISTICAL ANALYSIS
  7.1 STATISTICAL SUMMARIES
  7.2 SAMPLING DISTRIBUTIONS
  7.3 HYPOTHESIS TESTING
    7.3.1 TESTING FOR NORMAL SAMPLES - THE Z-TEST
    7.3.2 HYPOTHESIS TESTING TERMINOLOGY
    7.3.3 THE t-TEST
    7.3.4 TEST FOR σ
    7.3.5 TWO SAMPLE TESTS
  7.4 POINT ESTIMATION
    7.4.1 ESTIMATION TECHNIQUES I: METHOD OF MOMENTS
    7.4.2 ESTIMATION TECHNIQUES II: MAXIMUM LIKELIHOOD
  7.5 INTERVAL ESTIMATION
    7.5.1 PIVOTAL QUANTITY
    7.5.2 INVERTING A TEST STATISTIC

CHAPTER 1
DEFINITIONS, TERMINOLOGY, NOTATION

1.1 EVENTS AND THE SAMPLE SPACE

Definition 1.1.1 An experiment is a one-off or repeatable process or procedure for which (a) there is a well-defined set of possible outcomes, and (b) the actual outcome is not known with certainty.

Definition 1.1.2 A sample outcome, ω, is precisely one of the possible outcomes of an experiment.
Definition 1.1.3 The sample space, Ω, of an experiment is the set of all possible outcomes.

NOTE: Ω is a set in the mathematical sense, so set theory notation can be used. For example, if the sample outcomes are denoted ω1, ..., ωk, say, then

Ω = {ω1, ..., ωk} = {ωi : i = 1, ..., k},

and ωi ∈ Ω for i = 1, ..., k. The sample space of an experiment can be

- a FINITE list of sample outcomes, {ω1, ..., ωk};
- a (countably) INFINITE list of sample outcomes, {ω1, ω2, ...};
- an INTERVAL or REGION of a real space, {ω : ω ∈ A ⊆ R^d}.

Definition 1.1.4 An event, E, is a designated collection of sample outcomes. Event E occurs if the actual outcome of the experiment is one of this collection. An event is, therefore, a subset of the sample space Ω.

Special Cases of Events

The event corresponding to the collection of all sample outcomes is Ω. The event corresponding to a collection of none of the sample outcomes is denoted ∅. That is, the sets ∅ and Ω are also events, termed the impossible and the certain event respectively, and for any event E, E ⊆ Ω.

1.1.1 OPERATIONS IN SET THEORY

Since events are subsets of Ω, set theory operations are used to manipulate events in probability theory. Consider events E, F ⊆ Ω. Then we can reasonably concern ourselves also with events obtained from the three basic set operations:

UNION          E ∪ F    "E or F or both occur"
INTERSECTION   E ∩ F    "both E and F occur"
COMPLEMENT     E′       "E does not occur"

Properties of Union/Intersection operators

Consider events E, F, G ⊆ Ω.

COMMUTATIVITY     E ∪ F = F ∪ E
                  E ∩ F = F ∩ E

ASSOCIATIVITY     E ∪ (F ∪ G) = (E ∪ F) ∪ G
                  E ∩ (F ∩ G) = (E ∩ F) ∩ G

DISTRIBUTIVITY    E ∪ (F ∩ G) = (E ∪ F) ∩ (E ∪ G)
                  E ∩ (F ∪ G) = (E ∩ F) ∪ (E ∩ G)

DE MORGAN'S LAWS  (E ∪ F)′ = E′ ∩ F′
                  (E ∩ F)′ = E′ ∪ F′

1.1.2 MUTUALLY EXCLUSIVE EVENTS AND PARTITIONS

Definition 1.1.5 Events E and F are mutually exclusive if E ∩ F = ∅, that is, if events E and F cannot both occur.
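These set identities are easy to check numerically. Below is a small sketch (the finite sample space Ω and the events E, F, G are invented purely for illustration) using Python's built-in set type, with complements taken relative to Ω:

```python
# Hypothetical finite sample space and events, for illustration only.
omega = set(range(1, 11))        # Ω = {1, ..., 10}
E = {1, 2, 3, 4}
F = {3, 4, 5, 6}
G = {2, 6, 9}

def complement(A, omega=omega):
    """A′: the outcomes in Ω that are not in A."""
    return omega - A

# De Morgan's laws: (E ∪ F)′ = E′ ∩ F′ and (E ∩ F)′ = E′ ∪ F′
assert complement(E | F) == complement(E) & complement(F)
assert complement(E & F) == complement(E) | complement(F)

# Distributivity: E ∩ (F ∪ G) = (E ∩ F) ∪ (E ∩ G)
assert E & (F | G) == (E & F) | (E & G)

print("all identities hold")
```

The identities hold for any choice of Ω and subsets, so changing the sets above should never trigger an assertion.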
If the sets of sample outcomes represented by E and F are disjoint (have no common element), then E and F are mutually exclusive.

Definition 1.1.6 Events E1, ..., Ek ⊆ Ω form a partition of event F ⊆ Ω if

(a) Ei ∩ Ej = ∅ for i ≠ j, i, j = 1, ..., k;
(b) E1 ∪ E2 ∪ ... ∪ Ek = F,

so that each element of the collection of sample outcomes corresponding to event F is in one and only one of the collections corresponding to events E1, ..., Ek.

Figure 1.1: Partition of Ω. In Figure 1.1, we have Ω = ∪_{i=1}^{6} Ei.

Figure 1.2: Partition of F ⊂ Ω. In Figure 1.2, we have F = ∪_{i=1}^{6} (F ∩ Ei), but, for example, F ∩ E6 = ∅.

1.2 THE σ-FIELD

Events are subsets of Ω, but need all subsets of Ω be events? The answer is negative. But it suffices to think of the collection of events as a subcollection A of the set of all subsets of Ω. This subcollection should have the following properties:

(a) if A, B ∈ A then A ∪ B ∈ A and A ∩ B ∈ A;
(b) if A ∈ A then A′ ∈ A;
(c) ∅ ∈ A.

A collection A of subsets of Ω which satisfies these three conditions is called a field. It follows from the properties of a field that if A1, A2, ..., Ak ∈ A, then

A1 ∪ A2 ∪ ... ∪ Ak ∈ A.

So A is closed under finite unions, and hence under finite intersections also. To see this, note that if A1, A2 ∈ A, then

A1′, A2′ ∈ A  =⇒  A1′ ∪ A2′ ∈ A  =⇒  (A1′ ∪ A2′)′ ∈ A  =⇒  A1 ∩ A2 ∈ A.

This is fine when Ω is a finite set, but we require slightly more to deal with the common situation when Ω is infinite. We require the collection of events to be closed under the operation of taking countable unions, not just finite unions.

Definition 1.2.1 A collection A of subsets of Ω is called a σ-field if it satisfies the following conditions:

(I) ∅ ∈ A;
(II) if A1, A2, ... ∈ A then ∪_{i=1}^{∞} Ai ∈ A;
(III) if A ∈ A then A′ ∈ A.

To recap, with any experiment we may associate a pair (Ω, A), where Ω is the set of all possible outcomes (or elementary events) and A is a σ-field of subsets of Ω, which contains all the events in whose occurrences we may be interested. So, from now on, to call a set A an event is equivalent to asserting that A belongs to the σ-field in question.

1.3 THE PROBABILITY FUNCTION

Definition 1.3.1 For an event E ⊆ Ω, the probability that E occurs will be written P(E).

Interpretation: P(.) is a set-function that assigns "weight" to collections of possible outcomes of an experiment. There are many ways to think about precisely how this assignment is achieved:

CLASSICAL: "Consider equally likely sample outcomes ..."
FREQUENTIST: "Consider long-run relative frequencies ..."
SUBJECTIVE: "Consider personal degree of belief ..."

or merely think of P(.) as a set-function. Formally, we have the following definition.

Definition 1.3.2 A probability function P(.) on (Ω, A) is a function P : A → [0, 1] satisfying:

(a) P(∅) = 0, P(Ω) = 1;
(b) if A1, A2, ... is a collection of disjoint members of A, so that Ai ∩ Aj = ∅ for all pairs i, j with i ≠ j, then

P(∪_{i=1}^{∞} Ai) = Σ_{i=1}^{∞} P(Ai).

The triple (Ω, A, P(.)), consisting of a set Ω, a σ-field A of subsets of Ω and a probability function P(.) on (Ω, A), is called a probability space.

1.4 PROPERTIES OF P(.): THE AXIOMS OF PROBABILITY

For events E, F ⊆ Ω:

1. P(E′) = 1 − P(E).
2. If E ⊆ F, then P(E) ≤ P(F).
3. In general, P(E ∪ F) = P(E) + P(F) − P(E ∩ F).
4. P(E ∩ F′) = P(E) − P(E ∩ F).
5. P(E ∪ F) ≤ P(E) + P(F).
6. P(E ∩ F) ≥ P(E) + P(F) − 1.

NOTE: The general addition rule 3 for probabilities and Boole's Inequalities 5 and 6 extend to more than two events. Let E1, ..., En be events in Ω. Then

P(∪_{i=1}^{n} Ei) = Σ_i P(Ei) − Σ_{i<j} P(Ei ∩ Ej) + Σ_{i<j<k} P(Ei ∩ Ej ∩ Ek) − ... + (−1)^{n+1} P(∩_{i=1}^{n} Ei)

and

P(∪_{i=1}^{n} Ei) ≤ Σ_{i=1}^{n} P(Ei).

To prove these results, construct the events F1 = E1 and

Fi = Ei ∩ (∪_{k=1}^{i−1} Ek)′,  for i = 2, 3, ..., n.

Then F1, F2, ..., Fn are disjoint, and ∪_{i=1}^{n} Ei = ∪_{i=1}^{n} Fi, so

P(∪_{i=1}^{n} Ei) = P(∪_{i=1}^{n} Fi) = Σ_{i=1}^{n} P(Fi).

Now, by property 4 above,

P(Fi) = P(Ei) − P(Ei ∩ ∪_{k=1}^{i−1} Ek) = P(Ei) − P(∪_{k=1}^{i−1} (Ei ∩ Ek)),  i = 2, 3, ..., n,

and the result follows by recursive expansion of the second term for i = 2, 3, ..., n.

NOTE: We will often deal with both probabilities of single events, and also probabilities for intersection events. For convenience, and to reflect connections with distribution theory that will be presented in Chapter 2, we will use the following terminology: for events E and F,

P(E) is the marginal probability of E;
P(E ∩ F) is the joint probability of E and F.

1.5 CONDITIONAL PROBABILITY

Definition 1.5.1 For events E, F ⊆ Ω, the conditional probability that F occurs given that E occurs is written P(F|E), and is defined by

P(F|E) = P(E ∩ F) / P(E),  if P(E) > 0.

NOTE: P(E ∩ F) = P(E)P(F|E), and in general, for events E1, ..., Ek,

P(∩_{i=1}^{k} Ei) = P(E1)P(E2|E1)P(E3|E1 ∩ E2)...P(Ek|E1 ∩ E2 ∩ ... ∩ Ek−1).

This result is known as the CHAIN or MULTIPLICATION RULE.

Definition 1.5.2 Events E and F are independent if P(E|F) = P(E), so that P(E ∩ F) = P(E)P(F).

Extension: Events E1, ..., Ek are independent if, for every subset of events of size l ≤ k, indexed by {i1, ..., il}, say,

P(∩_{j=1}^{l} Ei_j) = Π_{j=1}^{l} P(Ei_j).

1.6 THE THEOREM OF TOTAL PROBABILITY

THEOREM Let E1, ..., Ek be a (finite) partition of Ω, and let F ⊆ Ω. Then

P(F) = Σ_{i=1}^{k} P(F|Ei)P(Ei).

PROOF E1, ..., Ek form a partition of Ω, and F ⊆ Ω, so

F = (F ∩ E1) ∪ ... ∪ (F ∩ Ek)  =⇒  P(F) = Σ_{i=1}^{k} P(F ∩ Ei) = Σ_{i=1}^{k} P(F|Ei)P(Ei),

writing F as a disjoint union and using the definition of a probability function.
Extension: The theorem still holds if E1, E2, ... is a (countably) infinite partition of Ω, and F ⊆ Ω, so that

P(F) = Σ_{i=1}^{∞} P(F ∩ Ei) = Σ_{i=1}^{∞} P(F|Ei)P(Ei),

if P(Ei) > 0 for all i.

1.7 BAYES' THEOREM

THEOREM Suppose E, F ⊆ Ω, with P(E), P(F) > 0. Then

P(E|F) = P(F|E)P(E) / P(F).

PROOF

P(E|F)P(F) = P(E ∩ F) = P(F|E)P(E),

and the result follows on dividing by P(F).

Extension: If E1, ..., Ek are disjoint, with P(Ei) > 0 for i = 1, ..., k, and form a partition of F ⊆ Ω, then

P(Ei|F) = P(F|Ei)P(Ei) / Σ_{j=1}^{k} P(F|Ej)P(Ej).

NOTE: in general, P(E|F) ≠ P(F|E).

1.8 COUNTING TECHNIQUES

This section is included for completeness, but is not examinable.

Suppose that an experiment has N equally likely sample outcomes. If event E corresponds to a collection of sample outcomes of size n(E), then

P(E) = n(E) / N,

so it is necessary to be able to evaluate n(E) and N in practice.

1.8.1 THE MULTIPLICATION PRINCIPLE

If operations labelled 1, ..., r can be carried out in n1, ..., nr ways respectively, then there are

Π_{i=1}^{r} ni = n1 ... nr

ways of carrying out the r operations in total.

Example 1.1 If each of r trials of an experiment has N possible outcomes, then there are N^r possible sequences of outcomes in total. For example:

(i) If a multiple choice exam has 20 questions, each of which has 5 possible answers, then there are 5^20 different ways of completing the exam.

(ii) A set of m elements has 2^m subsets (as each element is either in the subset, or not in the subset, which is equivalent to m trials each with two outcomes).

1.8.2 SAMPLING FROM A FINITE POPULATION

Consider a collection of N items, and a sequence of operations labelled 1, ..., r such that the ith operation involves selecting one of the items remaining after the first i − 1 operations have been carried out. Let ni denote the number of ways of carrying out the ith operation, for i = 1, ..., r.
Then there are two distinct cases:

(a) Sampling with replacement: an item is returned to the collection after selection. Then ni = N for all i, and there are N^r ways of carrying out the r operations.

(b) Sampling without replacement: an item is not returned to the collection after selection. Then ni = N − i + 1, and there are N(N − 1)...(N − r + 1) ways of carrying out the r operations.

e.g. Consider selecting 5 cards from 52. Then (a) leads to 52^5 possible selections, whereas (b) leads to 52 · 51 · 50 · 49 · 48 possible selections.

NOTE: The order in which the operations are carried out may be important. e.g. in a raffle with three prizes and 100 tickets, the draw (45, 19, 76) is different from the draw (19, 76, 45).

NOTE: The items may be distinct (unique in the collection), or indistinct (of a unique type in the collection, but not unique individually). e.g. The numbered balls in the National Lottery, or individual playing cards, are distinct. However, when balls in the lottery are regarded as "WINNING" or "NOT WINNING", or playing cards are regarded in terms of their suit only, they are indistinct.

1.8.3 PERMUTATIONS AND COMBINATIONS

Definition 1.8.1 A permutation is an ordered arrangement of a set of items. A combination is an unordered arrangement of a set of items.

RESULT 1 The number of permutations of n distinct items is n! = n(n − 1)...1.

RESULT 2 The number of permutations of r from n distinct items is

P(n, r) = n! / (n − r)! = n(n − 1)...(n − r + 1)

(by the Multiplication Principle).

If the order in which items are selected is not important, then

RESULT 3 The number of combinations of r from n distinct items is the binomial coefficient

C(n, r) = (n choose r) = n! / (r!(n − r)!)   (as P(n, r) = r! C(n, r)).

Recall the Binomial Theorem, namely

(a + b)^n = Σ_{i=0}^{n} (n choose i) a^i b^{n−i}.

Then the number of subsets of m items can be calculated as follows: for each 0 ≤ j ≤ m, choose a subset of j items from m. Then

Total number of subsets = Σ_{j=0}^{m} (m choose j) = (1 + 1)^m = 2^m.

If the items are indistinct, but each is of a unique type, say Type I, ..., Type κ (the so-called Urn Model), then

RESULT 4 The number of distinguishable permutations of n indistinct objects, comprising ni items of type i for i = 1, ..., κ, is

n! / (n1! n2! ... nκ!).

Special Case: if κ = 2, then the number of distinguishable permutations of the n1 objects of type I and n2 = n − n1 objects of type II is

C(n, n1) = C(n, n2) = n! / (n1!(n − n1)!).

RESULT 5 There are C(n, r) ways of partitioning n distinct items into two "cells", with r in one cell and n − r in the other.

1.8.4 PROBABILITY CALCULATIONS

Recall that if an experiment has N equally likely sample outcomes, and event E corresponds to a collection of sample outcomes of size n(E), then

P(E) = n(E) / N.

Example 1.2 A True/False exam has 20 questions. Let E = "16 answers correct at random". Then

P(E) = (Number of ways of getting 16 out of 20 correct) / (Total number of ways of answering 20 questions) = C(20, 16) / 2^20 = 0.0046.

Example 1.3 Sampling without replacement. Consider an Urn Model with 10 Type I objects and 20 Type II objects, and an experiment involving sampling five objects without replacement. Let E = "precisely 2 Type I objects selected". We need to calculate N and n(E) in order to calculate P(E). In this case N is the number of ways of choosing 5 from 30 items, and hence

N = C(30, 5).

To calculate n(E), we think of E occurring by first choosing 2 Type I objects from 10, and then choosing 3 Type II objects from 20, and hence, by the multiplication principle,

n(E) = C(10, 2) C(20, 3).

Therefore

P(E) = C(10, 2) C(20, 3) / C(30, 5) = 0.360.

This result can be checked using a conditional probability argument; consider event F ⊆ E, where F = "sequence of objects 11222 obtained". Then F = ∩_{i=1}^{5} Fij, where Fij = "type j object obtained on draw i", i = 1, ..., 5, j = 1, 2. Then

P(F) = P(F11)P(F21|F11)...P(F52|F11, F21, F32, F42) = (10/30)(9/29)(20/28)(19/27)(18/26).

Now consider event G where G = "sequence of objects 12122 obtained". Then

P(G) = (10/30)(20/29)(9/28)(19/27)(18/26),

i.e. P(G) = P(F). In fact, any sequence containing two Type I and three Type II objects has this probability, and there are C(5, 2) such sequences. Thus, as all such sequences are mutually exclusive,

P(E) = C(5, 2) × (10 · 9 · 20 · 19 · 18)/(30 · 29 · 28 · 27 · 26) = C(10, 2) C(20, 3) / C(30, 5)

as before.

Example 1.4 Sampling with replacement. Consider an Urn Model with 10 Type I objects and 20 Type II objects, and an experiment involving sampling five objects with replacement. Let E = "precisely 2 Type I objects selected". Again, we need to calculate N and n(E) in order to calculate P(E). In this case N is the number of ways of choosing 5 from 30 items with replacement, and hence N = 30^5. To calculate n(E), we think of E occurring by choosing 2 Type I objects from 10 and 3 Type II objects from 20, in any order. Consider such sequences of selection:

Sequence   Number of ways
11222      10 · 10 · 20 · 20 · 20
12122      10 · 20 · 10 · 20 · 20

etc., and thus a sequence with 2 Type I objects and 3 Type II objects can be obtained in 10^2 × 20^3 ways. As before there are C(5, 2) such sequences, and thus

P(E) = C(5, 2) × 10^2 · 20^3 / 30^5 = 0.329.

Again, this result can be verified using a conditional probability argument; consider event F ⊆ E, where F = "sequence of objects 11222 obtained". Then

P(F) = (10/30)^2 (20/30)^3,

as the results of the draws are independent. This result is true for any sequence containing two Type I and three Type II objects, and there are C(5, 2) such sequences, all mutually exclusive, so

P(E) = C(5, 2) (10/30)^2 (20/30)^3.

CHAPTER 2
RANDOM VARIABLES & PROBABILITY DISTRIBUTIONS

This chapter contains the introduction of random variables as a technical device to enable the general specification of probability distributions in one and many dimensions to be made.
The key topics and techniques introduced in this chapter include the following:

• EXPECTATION
• TRANSFORMATION
• STANDARDIZATION
• GENERATING FUNCTIONS
• JOINT MODELLING
• MARGINALIZATION
• MULTIVARIATE TRANSFORMATION
• MULTIVARIATE EXPECTATION & COVARIANCE
• SUMS OF VARIABLES

Of key importance is the moment generating function, which is a standard device for identification of probability distributions. Transformations are often used to transform a random variable or statistic of interest to one of simpler form, whose probability distribution is more convenient to work with. Standardization is often a key part of such simplification.

2.1 RANDOM VARIABLES & PROBABILITY MODELS

We are not always interested in an experiment itself, but rather in some consequence of its random outcome. Such consequences, when real valued, may be thought of as functions which map Ω to R, and these functions are called random variables.

Definition 2.1.1 A random variable (r.v.) X is a function X : Ω → R with the property that {ω ∈ Ω : X(ω) ≤ x} ∈ A for each x ∈ R.

The point is that we defined the probability function P(.) on the σ-field A, so if A(x) = {ω ∈ Ω : X(ω) ≤ x}, we cannot discuss P(A(x)) unless A(x) belongs to A. We generally pay no attention to the technical condition in the definition, and just think of random variables as functions mapping Ω to R.

So, we regard a set B ⊆ R as an event, associated with event A ⊆ Ω if A = {ω : X(ω) = x for some x ∈ B}. A and B are events in different spaces, but are equivalent in the sense that

P(X ∈ B) = P(A),

where, formally, it is the latter quantity that is defined by the probability function. Attention switches to assigning the probability P(X ∈ B) for appropriate sets B ⊆ R. If Ω is a list of discrete elements Ω = {ω1, ω2, ...}, then the definition indicates that the events of interest will be of the form [X = b], or equivalently of the form [X ≤ b], for b ∈ R.
For more general sample spaces, we will concentrate on events of the form [X ≤ b] for b ∈ R.

2.2 DISCRETE RANDOM VARIABLES

Definition 2.2.1 A random variable X is discrete if the set of all possible values of X (that is, the range of the function represented by X), denoted X, is countable, that is,

X = {x1, x2, ..., xn} [FINITE] or X = {x1, x2, ...} [INFINITE].

Definition 2.2.2 PROBABILITY MASS FUNCTION The function fX defined on X by

fX(x) = P[X = x],  x ∈ X,

that assigns probability to each x ∈ X, is the (discrete) probability mass function, or pmf.

NOTE: For completeness, we define

fX(x) = 0,  x ∉ X,

so that fX is defined for all x ∈ R. Furthermore, we will refer to X as the support of random variable X, that is, the set of x ∈ R such that fX(x) > 0.

2.2.1 PROPERTIES OF MASS FUNCTION fX

Elementary properties of the mass function are straightforward to establish using properties of the probability function. A function fX is a probability mass function for discrete random variable X with range X of the form {x1, x2, ...} if and only if

(i) fX(xi) ≥ 0;  (ii) Σ_i fX(xi) = 1.

These results follow as the events [X = x1], [X = x2], etc. are equivalent to events that partition Ω; that is, [X = xi] is equivalent to an event Ai, hence P[X = xi] = P(Ai), and the two parts of the theorem follow immediately.

Definition 2.2.3 DISCRETE CUMULATIVE DISTRIBUTION FUNCTION The cumulative distribution function, or cdf, FX of a discrete r.v. X is defined by

FX(x) = P[X ≤ x],  x ∈ R.

2.2.2 CONNECTION BETWEEN FX AND fX

Let X be a discrete random variable with range X = {x1, x2, ...}, where x1 < x2 < ..., and probability mass function fX and cdf FX. Then for any real value x, if x < x1, then FX(x) = 0, and for x ≥ x1,

FX(x) = Σ_{xi ≤ x} fX(xi)  ⇐⇒  fX(xi) = FX(xi) − FX(xi−1), i = 2, 3, ...,

with, for completeness, fX(x1) = FX(x1).
These relationships follow as events of the form [X ≤ xi] can be represented as countable unions of the events Ai. The first result therefore follows from properties of the probability function. The second result follows immediately.

2.2.3 PROPERTIES OF DISCRETE CDF FX

(i) In the limiting cases,

lim_{x→−∞} FX(x) = 0,  lim_{x→∞} FX(x) = 1.

(ii) FX is continuous from the right (but not continuous) on R; that is, for x ∈ R,

lim_{h→0+} FX(x + h) = FX(x).

(iii) FX is non-decreasing; that is, a < b =⇒ FX(a) ≤ FX(b).

(iv) For a < b, P[a < X ≤ b] = FX(b) − FX(a).

The key idea is that the functions fX and/or FX can be used to describe the probability distribution of the random variable X. A graph of the function fX is non-zero only at the elements of X. A graph of the function FX is a step-function which takes the value zero at minus infinity, the value one at infinity, and is non-decreasing, with points of discontinuity at the elements of X.

2.3 CONTINUOUS RANDOM VARIABLES

Definition 2.3.1 A random variable X is continuous if the function FX defined on R by FX(x) = P[X ≤ x] for x ∈ R is a continuous function on R; that is, for x ∈ R,

lim_{h→0} FX(x + h) = FX(x).

Definition 2.3.2 CONTINUOUS CUMULATIVE DISTRIBUTION FUNCTION The cumulative distribution function, or cdf, FX of a continuous r.v. X is defined by

FX(x) = P[X ≤ x],  x ∈ R.

Definition 2.3.3 PROBABILITY DENSITY FUNCTION A random variable is absolutely continuous if the cumulative distribution function FX can be written

FX(x) = ∫_{−∞}^{x} fX(t) dt

for some function fX, termed the probability density function, or pdf, of X. From now on when we speak of a continuous random variable, we will implicitly assume the absolutely continuous case, where a pdf exists.

2.3.1 PROPERTIES OF CONTINUOUS FX AND fX

By analogy with the discrete case, let X be the range of X, so that X = {x : fX(x) > 0}.
(i) The pdf fX need not exist, but as indicated above, continuous r.v.'s where a pdf fX cannot be defined in this way will be ignored. The function fX can be defined piecewise on intervals of R.

(ii) For the cdf of a continuous r.v.,

lim_{x→−∞} FX(x) = 0,  lim_{x→∞} FX(x) = 1.

(iii) Directly from the definition, at values of x where FX is differentiable,

fX(x) = d/dt {FX(t)}_{t=x}.

(iv) If X is continuous, fX(x) ≠ P[X = x], since

P[X = x] = lim_{h→0+} [P(X ≤ x) − P(X ≤ x − h)] = lim_{h→0+} [FX(x) − FX(x − h)] = 0.

(v) For a < b,

P[a < X ≤ b] = P[a ≤ X < b] = P[a ≤ X ≤ b] = P[a < X < b] = FX(b) − FX(a).

It follows that a function fX is a pdf for a continuous random variable X if and only if

(i) fX(x) ≥ 0;  (ii) ∫_{−∞}^{∞} fX(x) dx = 1.

This result follows directly from the definitions and properties of FX.

Example 2.1 Consider a coin tossing experiment where a fair coin is tossed repeatedly under identical experimental conditions, with the sequence of tosses independent, until a Head is obtained. For this experiment, the sample space Ω is then the set of sequences ({H}, {TH}, {TTH}, {TTTH}, ...) with associated probabilities 1/2, 1/4, 1/8, 1/16, ... . Define discrete random variable X : Ω → R by

X(ω) = x ⇐⇒ first H on toss x.

Then

fX(x) = P[X = x] = (1/2)^x,  x = 1, 2, 3, ...,

and zero otherwise. For x ≥ 1, let k(x) be the largest integer not greater than x; then

FX(x) = Σ_{xi ≤ x} fX(xi) = Σ_{i=1}^{k(x)} fX(i) = 1 − (1/2)^{k(x)},

and FX(x) = 0 for x < 1.

Graphs of the probability mass function (left) and cumulative distribution function (right) are shown in Figure 2.1. Note that the mass function is only non-zero at points that are elements of X, and that the cdf is defined for all real values of x, but is only continuous from the right. FX is therefore a step-function.

Figure 2.1: PMF fX(x) = (1/2)^x, x = 1, 2, ..., and CDF FX(x) = 1 − (1/2)^{k(x)}.

Example 2.2 Consider an experiment to measure the length of time that an electrical component functions before failure. The sample space of outcomes of the experiment, Ω, is R+, and if Ax is the event that the component functions for longer than x > 0 time units, suppose that P(Ax) = exp{−x²}. Define continuous random variable X : Ω → R+ by

X(ω) = x ⇐⇒ component fails at time x.

Then, if x > 0,

FX(x) = P[X ≤ x] = 1 − P(Ax) = 1 − exp{−x²},

and FX(x) = 0 if x ≤ 0. Hence, if x > 0,

fX(x) = d/dt {FX(t)}_{t=x} = 2x exp{−x²},

and zero otherwise.

Graphs of the probability density function (left) and cumulative distribution function (right) are shown in Figure 2.2. Note that both the pdf and cdf are defined for all real values of x, and that both are continuous functions.

Figure 2.2: PDF fX(x) = 2x exp{−x²}, x > 0, and CDF FX(x) = 1 − exp{−x²}, x > 0.

Note that here

FX(x) = ∫_{−∞}^{x} fX(t) dt = ∫_{0}^{x} fX(t) dt,

as fX(x) = 0 for x ≤ 0, and also that

∫_{−∞}^{∞} fX(x) dx = ∫_{0}^{∞} fX(x) dx = 1.

2.4 EXPECTATIONS AND THEIR PROPERTIES

Definition 2.4.1 For a discrete random variable X with range X and with probability mass function fX, the expectation or expected value of X with respect to fX is defined by

E_fX[X] = Σ_{x=−∞}^{∞} x fX(x) = Σ_{x∈X} x fX(x).

For a continuous random variable X with range X and pdf fX, the expectation or expected value of X with respect to fX is defined by

E_fX[X] = ∫_{−∞}^{∞} x fX(x) dx = ∫_X x fX(x) dx.

NOTE: The sum/integral may not be convergent, and hence the expected value may be infinite.
It is important always to check that the sum/integral is finite: a sufficient condition is the absolute convergence of the sum/integral, that is,

Σ_x |x| fX(x) < ∞  =⇒  Σ_x x fX(x) = E_fX[X] < ∞,

or, in the continuous case,

∫_{−∞}^{∞} |x| fX(x) dx < ∞  =⇒  ∫_{−∞}^{∞} x fX(x) dx = E_fX[X] < ∞.

Extension: Let g be a real-valued function whose domain includes X. Then

E_fX[g(X)] = Σ_{x∈X} g(x) fX(x),  if X is discrete,
E_fX[g(X)] = ∫_X g(x) fX(x) dx,  if X is continuous.

PROPERTIES OF EXPECTATIONS

Let X be a random variable with mass function/pdf fX. Let g and h be real-valued functions whose domains include X, and let a and b be constants. Then

E_fX[a g(X) + b h(X)] = a E_fX[g(X)] + b E_fX[h(X)],

as (in the continuous case)

E_fX[a g(X) + b h(X)] = ∫_X [a g(x) + b h(x)] fX(x) dx = a ∫_X g(x) fX(x) dx + b ∫_X h(x) fX(x) dx = a E_fX[g(X)] + b E_fX[h(X)].

SPECIAL CASES:

(i) For a simple linear function,

E_fX[aX + b] = a E_fX[X] + b.

(ii) Consider g(x) = (x − E_fX[X])². Write μ = E_fX[X] (a constant that does not depend on x). Then, expanding the integrand,

E_fX[g(X)] = ∫ (x − μ)² fX(x) dx = ∫ x² fX(x) dx − 2μ ∫ x fX(x) dx + μ² ∫ fX(x) dx
           = ∫ x² fX(x) dx − 2μ² + μ² = ∫ x² fX(x) dx − μ² = E_fX[X²] − {E_fX[X]}².

Then

Var_fX[X] = E_fX[X²] − {E_fX[X]}²

is the variance of the distribution, and √(Var_fX[X]) is the standard deviation of the distribution.

(iii) Consider g(x) = x^k for k = 1, 2, .... Then, in the continuous case,

E_fX[g(X)] = E_fX[X^k] = ∫_X x^k fX(x) dx,

and E_fX[X^k] is the kth moment of the distribution.

(iv) Consider g(x) = (x − μ)^k for k = 1, 2, .... Then

E_fX[g(X)] = E_fX[(X − μ)^k] = ∫_X (x − μ)^k fX(x) dx,

and E_fX[(X − μ)^k] is the kth central moment of the distribution.

(v) Consider g(x) = ax + b. Then Var_fX[aX + b] = a² Var_fX[X], since

Var_fX[g(X)] = E_fX[(aX + b − E_fX[aX + b])²] = E_fX[(aX + b − a E_fX[X] − b)²] = E_fX[a²(X − E_fX[X])²] = a² Var_fX[X].
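As a numerical sketch of these definitions, the expectation and variance of the geometric mass function of Example 2.1, fX(x) = (1/2)^x for x = 1, 2, 3, ..., can be approximated by truncating the infinite sums (the truncation point 200 is an arbitrary choice; the neglected tail is far below floating-point precision). For this distribution E[X] = 2 and Var[X] = E[X²] − {E[X]}² = 6 − 4 = 2:

```python
# Approximate E[X] and Var[X] for the pmf of Example 2.1, f(x) = (1/2)**x,
# by truncating the infinite sums at x = 200 (the tail is negligible).
def f(x):
    return 0.5 ** x                         # P[X = x], x = 1, 2, 3, ...

xs = range(1, 201)
mean = sum(x * f(x) for x in xs)            # E[X]
second = sum(x * x * f(x) for x in xs)      # E[X^2]
var = second - mean ** 2                    # Var[X] = E[X^2] - {E[X]}^2

assert abs(mean - 2.0) < 1e-9
assert abs(var - 2.0) < 1e-9
print(mean, var)   # both close to 2
```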
2.5 INDICATOR VARIABLES

A particular class of random variables, the indicator variables, is especially useful. Let $A$ be an event and let $I_A : \Omega \to \mathbb{R}$ be the indicator function of $A$, so that

$$I_A(\omega) = \begin{cases} 1, & \text{if } \omega \in A, \\ 0, & \text{if } \omega \in A'. \end{cases}$$

Then $I_A$ is a random variable taking values 1 and 0 with probabilities $P(A)$ and $P(A')$ respectively. Also, $I_A$ has expectation $P(A)$ and variance $P(A)\{1 - P(A)\}$. The usefulness lies in the fact that any discrete random variable $X$ can be written as a linear combination of indicator random variables:

$$X = \sum_{i} a_i I_{A_i},$$

for some collection of events $(A_i, i \geq 1)$ and real numbers $(a_i, i \geq 1)$. Sometimes we can obtain the expectation and variance of a random variable $X$ easily by expressing it in this way, then using knowledge of the expectation and variance of the indicator variables $I_{A_i}$, rather than by direct calculation.

2.6 TRANSFORMATIONS OF RANDOM VARIABLES

2.6.1 GENERAL TRANSFORMATIONS

Consider a discrete/continuous r.v. $X$ with range $\mathbb{X}$ and probability distribution described by mass/pdf $f_X$, or cdf $F_X$. Suppose $g$ is a real-valued function defined on $\mathbb{X}$. Then $Y = g(X)$ is also an r.v. ($Y$ is also a function from $\Omega$ to $\mathbb{R}$). Denote the range of $Y$ by $\mathbb{Y}$. For $A \subseteq \mathbb{R}$, the event $[Y \in A]$ is an event in terms of the transformed variable $Y$. If $f_Y$ is the mass/density function for $Y$, then

$$P[Y \in A] = \begin{cases} \displaystyle\sum_{y \in A} f_Y(y), & Y \text{ discrete}, \\[2ex] \displaystyle\int_{A} f_Y(y)\,dy, & Y \text{ continuous}. \end{cases}$$

We wish to derive the probability distribution of the random variable $Y$; in order to do this, we first consider the inverse transformation $g^{-1}$ from $\mathbb{Y}$ to $\mathbb{X}$, defined for a set $A \subseteq \mathbb{Y}$ (and for $y \in \mathbb{Y}$) by

$$g^{-1}(A) = \{x \in \mathbb{X} : g(x) \in A\}, \qquad g^{-1}(y) = \{x \in \mathbb{X} : g(x) = y\},$$

that is, $g^{-1}(A)$ is the set of points in $\mathbb{X}$ that map into $A$, and $g^{-1}(y)$ is the set of points in $\mathbb{X}$ that map to $y$, under the transformation $g$. By construction, we have

$$P[Y \in A] = P[X \in g^{-1}(A)].$$
Then, for $y \in \mathbb{R}$, we have

$$F_Y(y) = P[Y \leq y] = P[g(X) \leq y] = \begin{cases} \displaystyle\sum_{x \in A_y} f_X(x), & X \text{ discrete}, \\[2ex] \displaystyle\int_{A_y} f_X(x)\,dx, & X \text{ continuous}, \end{cases}$$

where $A_y = \{x \in \mathbb{X} : g(x) \leq y\}$. This result gives the "first principles" approach to computing the distribution of the new variable. The approach can be summarized as follows:

• consider the range $\mathbb{Y}$ of the new variable;

• consider the cdf $F_Y(y)$, stepping through the argument

$$F_Y(y) = P[Y \leq y] = P[g(X) \leq y] = P[X \in A_y].$$

Note that it is usually a good idea to start with the cdf, not the pmf or pdf. Our main objective is therefore to identify the set

$$A_y = \{x \in \mathbb{X} : g(x) \leq y\}.$$

Example 2.3 Suppose that $X$ is a continuous r.v. with range $\mathbb{X} \equiv (0, 2\pi)$ whose pdf $f_X$ is constant,

$$f_X(x) = \frac{1}{2\pi}, \quad 0 < x < 2\pi,$$

and zero otherwise. This pdf has corresponding continuous cdf

$$F_X(x) = \frac{x}{2\pi}, \quad 0 < x < 2\pi.$$

Consider the transformed r.v. $Y = \sin X$. Then the range of $Y$, $\mathbb{Y}$, is $[-1, 1]$, but the transformation is not 1-1. However, from first principles, we have

$$F_Y(y) = P[Y \leq y] = P[\sin X \leq y].$$

[Figure 2.3: Computation of $A_y$ for $Y = \sin X$; the horizontal line at height $y$ cuts the curve over $(0, 2\pi)$ at $x_1$ and $x_2$.]

Now, by inspection of Figure 2.3, we can easily identify the required set $A_y$ for $y > 0$: it is the union of two disjoint intervals,

$$A_y = [0, x_1] \cup [x_2, 2\pi] = \left[0, \sin^{-1} y\right] \cup \left[\pi - \sin^{-1} y,\ 2\pi\right].$$

Hence

$$F_Y(y) = P[\sin X \leq y] = P[X \leq x_1] + P[X \geq x_2] = P[X \leq x_1] + \{1 - P[X < x_2]\} = \frac{1}{2\pi}\sin^{-1} y + \left\{1 - \frac{1}{2\pi}\left(\pi - \sin^{-1} y\right)\right\} = \frac{1}{2} + \frac{1}{\pi}\sin^{-1} y,$$

and hence, by differentiation,

$$f_Y(y) = \frac{1}{\pi}\frac{1}{\sqrt{1 - y^2}}.$$

[A symmetry argument verifies this for $y < 0$.]

Example 2.4 Consider the transformed r.v. $T = \tan X$. Then the range of $T$, $\mathbb{T}$, is $\mathbb{R}$, but the transformation is not 1-1.

[Figure 2.4: Computation of $A_t$ for $T = \tan X$; the horizontal line at height $t$ cuts the curve at $x_1$ and $x_2$.]
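Before moving on, the cdf derived in Example 2.3 can be checked by simulation; a hedged sketch, where the test point, sample size, and seed are illustrative choices:

```python
import math
import random

# Monte Carlo check of Example 2.3: for X ~ Uniform(0, 2*pi) and Y = sin X,
# F_Y(y) = 1/2 + arcsin(y)/pi. The seed and sample size are assumptions.
random.seed(1)

n = 200_000
y0 = 0.3  # illustrative test point in (-1, 1)
hits = sum(math.sin(random.uniform(0.0, 2.0 * math.pi)) <= y0 for _ in range(n))
empirical = hits / n
exact = 0.5 + math.asin(y0) / math.pi  # ~0.597
```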
However, from first principles for Example 2.4 we have, for $t > 0$,

$$F_T(t) = P[T \leq t] = P[\tan X \leq t].$$

Figure 2.4 helps identify the required set $A_t$: in this case it is the union of three disjoint intervals,

$$A_t = [0, x_1] \cup \left(\frac{\pi}{2}, x_2\right] \cup \left(\frac{3\pi}{2}, 2\pi\right] = \left[0, \tan^{-1} t\right] \cup \left(\frac{\pi}{2},\ \pi + \tan^{-1} t\right] \cup \left(\frac{3\pi}{2}, 2\pi\right]$$

(note, for values of $t < 0$, the union is of only two intervals, but the calculation proceeds identically). Therefore,

$$F_T(t) = P[\tan X \leq t] = P[X \leq x_1] + P\left[\frac{\pi}{2} < X \leq x_2\right] + P\left[\frac{3\pi}{2} < X \leq 2\pi\right] = \frac{1}{2\pi}\tan^{-1} t + \frac{1}{2\pi}\left\{\pi + \tan^{-1} t - \frac{\pi}{2}\right\} + \frac{1}{2\pi}\left\{2\pi - \frac{3\pi}{2}\right\} = \frac{1}{\pi}\tan^{-1} t + \frac{1}{2},$$

and hence, by differentiation,

$$f_T(t) = \frac{1}{\pi}\frac{1}{1 + t^2}.$$

2.6.2 1-1 TRANSFORMATIONS

The mapping $g(X)$, a function of $X$ from $\mathbb{X}$, is 1-1 and onto $\mathbb{Y}$ if for each $y \in \mathbb{Y}$ there exists one and only one $x \in \mathbb{X}$ such that $y = g(x)$. The following theorem gives the distribution for the random variable $Y = g(X)$ when $g$ is 1-1.

Theorem 2.6.1 THE UNIVARIATE TRANSFORMATION THEOREM

Let $X$ be a random variable with mass/density function $f_X$ and support $\mathbb{X}$. Let $g$ be a 1-1 function from $\mathbb{X}$ onto $\mathbb{Y}$ with inverse $g^{-1}$. Then $Y = g(X)$ is a random variable with support $\mathbb{Y}$, and:

Discrete Case: The mass function of the random variable $Y$ is given by

$$f_Y(y) = f_X(g^{-1}(y)), \quad y \in \mathbb{Y} = \{y \mid f_Y(y) > 0\},$$

where $x$ is the unique solution of $y = g(x)$ (so that $x = g^{-1}(y)$).

Continuous Case: The pdf of the random variable $Y$ is given by

$$f_Y(y) = f_X(g^{-1}(y)) \left| \frac{d}{dt}\{g^{-1}(t)\}_{t=y} \right|, \quad y \in \mathbb{Y} = \{y \mid f_Y(y) > 0\},$$

where $y = g(x)$, provided that the derivative $\dfrac{d}{dt} g^{-1}(t)$ is continuous and non-zero on $\mathbb{Y}$.

Proof. Discrete case: by direct calculation,

$$f_Y(y) = P[Y = y] = P[g(X) = y] = P[X = g^{-1}(y)] = f_X(x),$$

where $x = g^{-1}(y)$, and hence $f_Y(y) > 0 \iff f_X(x) > 0$.

Continuous case: the function $g$ is either (I) monotonic increasing, or (II) monotonic decreasing.

Case (I): If $g$ is increasing, then for $x \in \mathbb{X}$ and $y \in \mathbb{Y}$, we have that

$$g(x) \leq y \iff x \leq g^{-1}(y).$$
Therefore, for $y \in \mathbb{Y}$,

$$F_Y(y) = P[Y \leq y] = P[g(X) \leq y] = P[X \leq g^{-1}(y)] = F_X(g^{-1}(y)),$$

and, by differentiation, because $g$ is monotonic increasing,

$$f_Y(y) = f_X(g^{-1}(y)) \frac{d}{dt}\{g^{-1}(t)\}_{t=y} = f_X(g^{-1}(y)) \left| \frac{d}{dt}\{g^{-1}(t)\}_{t=y} \right|,$$

as $\dfrac{d}{dt} g^{-1}(t) > 0$.

Case (II): If $g$ is decreasing, then for $x \in \mathbb{X}$ and $y \in \mathbb{Y}$ we have

$$g(x) \leq y \iff x \geq g^{-1}(y).$$

Therefore, for $y \in \mathbb{Y}$,

$$F_Y(y) = P[Y \leq y] = P[g(X) \leq y] = P[X \geq g^{-1}(y)] = 1 - F_X(g^{-1}(y)),$$

so

$$f_Y(y) = -f_X(g^{-1}(y)) \frac{d}{dt}\{g^{-1}(t)\}_{t=y} = f_X(g^{-1}(y)) \left| \frac{d}{dt}\{g^{-1}(t)\}_{t=y} \right|,$$

as $\dfrac{d}{dt} g^{-1}(t) < 0$.

Definition 2.6.1 Suppose the transformation $g : \mathbb{X} \longrightarrow \mathbb{Y}$ is 1-1, and is defined by $g(x) = y$ for $x \in \mathbb{X}$. Then the Jacobian of the transformation, denoted $J(y)$, is given by

$$J(y) = \frac{d}{dt}\{g^{-1}(t)\}_{t=y},$$

that is, the first derivative of $g^{-1}$ evaluated at $y = g(x)$. Note that the inverse transformation $g^{-1} : \mathbb{Y} \longrightarrow \mathbb{X}$ has Jacobian $1/J(x)$.

NOTE:

(i) The Jacobian is precisely the same term that appears as a change-of-variable term in an integration.

(ii) In the Univariate Transformation Theorem, in the continuous case, we take the modulus of the Jacobian.

(iii) To compute the expectation of $Y = g(X)$, we now have two alternative methods of computation: we either compute the expectation of $g(X)$ with respect to the distribution of $X$, or compute the distribution of $Y$, and then its expectation. It is straightforward to demonstrate that the two methods are equivalent, that is,

$$E_{f_X}[g(X)] = E_{f_Y}[Y].$$

This result is sometimes known as the Law of the Unconscious Statistician.

IMPORTANT NOTE: The apparently appealing "plug-in" approach that sets $f_Y(y) = f_X(g^{-1}(y))$ will almost always fail, as the Jacobian term must be included. For example, if $Y = e^X$, so that $X = \log Y$, then merely setting $f_Y(y) = f_X(\log y)$ is insufficient; you must have

$$f_Y(y) = f_X(\log y) \times \frac{1}{y}.$$
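The $Y = e^X$ warning above can be checked by simulation; a hedged sketch, where the specific choice $X \sim Uniform(0,1)$, the seed, and the test point are illustrative assumptions. With that choice the transformation theorem gives $f_Y(y) = 1 \times (1/y) = 1/y$ on $(1, e)$, hence $F_Y(y) = \log y$ there.

```python
import math
import random

# Monte Carlo check of the transformation theorem for Y = e^X with
# X ~ Uniform(0,1) (an illustrative choice): F_Y(y) = log(y) on (1, e).
random.seed(2)

n = 200_000
y0 = 2.0  # illustrative test point in (1, e)
hits = sum(math.exp(random.random()) <= y0 for _ in range(n))
empirical = hits / n
exact = math.log(y0)  # ~0.693
```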
2.7 GENERATING FUNCTIONS

2.7.1 MOMENT GENERATING FUNCTIONS

Definition 2.7.1 For a random variable $X$ with mass/density function $f_X$, the moment generating function, or mgf, of $X$, $M_X$, is defined by

$$M_X(t) = E_{f_X}[e^{tX}],$$

if this expectation exists for all values of $t \in (-h, h)$ for some $h > 0$; that is,

DISCRETE CASE: $M_X(t) = \displaystyle\sum e^{tx} f_X(x)$; CONTINUOUS CASE: $M_X(t) = \displaystyle\int e^{tx} f_X(x)\,dx$,

where the sum/integral is over $\mathbb{X}$.

NOTE: It can be shown that if $X_1$ and $X_2$ are random variables taking values on $\mathbb{X}$ with mass/density functions $f_{X_1}$ and $f_{X_2}$, and mgfs $M_{X_1}$ and $M_{X_2}$ respectively, then

$$f_{X_1}(x) \equiv f_{X_2}(x),\ x \in \mathbb{X} \iff M_{X_1}(t) \equiv M_{X_2}(t),\ t \in (-h, h).$$

Hence there is a 1-1 correspondence between generating functions and distributions: this provides a key technique for the identification of probability distributions.

2.7.2 KEY PROPERTIES OF MGFS

(i) If $X$ is a discrete random variable, the $r$th derivative of $M_X$ evaluated at $t$, $M_X^{(r)}(t)$, is given by

$$M_X^{(r)}(t) = \frac{d^r}{ds^r}\{M_X(s)\}_{s=t} = \frac{d^r}{ds^r}\left\{\sum e^{sx} f_X(x)\right\}_{s=t} = \sum x^r e^{tx} f_X(x),$$

and hence

$$M_X^{(r)}(0) = \sum x^r f_X(x) = E_{f_X}[X^r].$$

If $X$ is a continuous random variable, the $r$th derivative of $M_X$ is given by

$$M_X^{(r)}(t) = \frac{d^r}{ds^r}\left\{\int e^{sx} f_X(x)\,dx\right\}_{s=t} = \int x^r e^{tx} f_X(x)\,dx,$$

and hence

$$M_X^{(r)}(0) = \int x^r f_X(x)\,dx = E_{f_X}[X^r].$$

(ii) If $X$ is a discrete random variable, then

$$M_X(t) = \sum e^{tx} f_X(x) = \sum \left\{\sum_{r=0}^{\infty} \frac{(tx)^r}{r!}\right\} f_X(x) = 1 + \sum_{r=1}^{\infty} \frac{t^r}{r!}\left\{\sum x^r f_X(x)\right\} = 1 + \sum_{r=1}^{\infty} \frac{t^r}{r!} E_{f_X}[X^r].$$

The identical result holds for the continuous case.

(iii) From the general result for expectations of functions of random variables, if $Y = aX + b$,

$$M_Y(t) = E_{f_X}[e^{t(aX+b)}] = e^{bt} E_{f_X}[e^{atX}] = e^{bt} M_X(at).$$

Theorem 2.7.1 Let $X_1, \ldots, X_k$ be independent random variables with mgfs $M_{X_1}, \ldots, M_{X_k}$ respectively. Then if the random variable $Y$ is defined by $Y = X_1 + \ldots + X_k$,

$$M_Y(t) = \prod_{i=1}^{k} M_{X_i}(t).$$
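Property (i) can be illustrated numerically before proving the theorem; a hedged sketch in which the Bernoulli mgf, the parameter value, and the finite-difference step sizes are illustrative assumptions:

```python
import math

# Property (i) of mgfs: M_X^(r)(0) = E[X^r]. Illustrated for a Bernoulli(theta)
# mgf, M_X(t) = 1 - theta + theta*e^t, for which E[X] = E[X^2] = theta.
theta = 0.3  # illustrative parameter

def mgf(t):
    return 1.0 - theta + theta * math.exp(t)

# Numerical derivatives at t = 0 (central differences; step sizes are assumptions)
h1 = 1e-5
first_moment = (mgf(h1) - mgf(-h1)) / (2.0 * h1)             # ~ E[X] = theta
h2 = 1e-4
second_moment = (mgf(h2) - 2.0 * mgf(0.0) + mgf(-h2)) / h2**2  # ~ E[X^2] = theta
```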
Proof. For $k = 2$: if $X_1$ and $X_2$ are independent, integer-valued, discrete r.v.s, then if $Y = X_1 + X_2$, by the Theorem of Total Probability,

$$f_Y(y) = P[Y = y] = \sum_{x_1} P[Y = y \mid X_1 = x_1]\, P[X_1 = x_1] = \sum_{x_1} f_{X_2}(y - x_1)\, f_{X_1}(x_1).$$

Hence

$$M_Y(t) = E_{f_Y}[e^{tY}] = \sum_{y} e^{ty} f_Y(y) = \sum_{y} e^{ty} \left\{\sum_{x_1} f_{X_2}(y - x_1)\, f_{X_1}(x_1)\right\} = \sum_{x_1} \sum_{x_2} e^{t(x_1 + x_2)} f_{X_2}(x_2)\, f_{X_1}(x_1)$$

(changing variables in the summation, $x_2 = y - x_1$)

$$= \left\{\sum_{x_1} e^{tx_1} f_{X_1}(x_1)\right\}\left\{\sum_{x_2} e^{tx_2} f_{X_2}(x_2)\right\} = M_{X_1}(t)\, M_{X_2}(t),$$

and the result follows for general $k$ by recursion. The result for continuous random variables follows in the obvious way.

Special Case: If $X_1, \ldots, X_k$ are identically distributed, then $M_{X_i}(t) \equiv M_X(t)$, say, for all $i$, so

$$M_Y(t) = \prod_{i=1}^{k} M_X(t) = \{M_X(t)\}^k.$$

2.7.3 OTHER GENERATING FUNCTIONS

Definition 2.7.2 For a random variable $X$, with mass/density function $f_X$, the factorial moment or probability generating function, fmgf or pgf, of $X$, $G_X$, is defined by

$$G_X(t) = E_{f_X}[t^X] = E_{f_X}[e^{X \log t}] = M_X(\log t),$$

if this expectation exists for all values of $t \in (1 - h, 1 + h)$ for some $h > 0$.

Properties:

(i) Using similar techniques to those used for the mgf, it can be shown that

$$G_X^{(r)}(t) = \frac{d^r}{ds^r}\{G_X(s)\}_{s=t} = E_{f_X}\left[X(X-1)\cdots(X-r+1)\, t^{X-r}\right] \implies G_X^{(r)}(1) = E_{f_X}[X(X-1)\cdots(X-r+1)],$$

where $E_{f_X}[X(X-1)\cdots(X-r+1)]$ is the $r$th factorial moment.

(ii) For discrete random variables, it can be shown by using a Taylor series expansion of $G_X$ that, for $r = 1, 2, \ldots$,

$$\frac{G_X^{(r)}(0)}{r!} = P[X = r].$$

Definition 2.7.3 For a random variable $X$ with mass/density function $f_X$, the cumulant generating function of $X$, $K_X$, is defined by

$$K_X(t) = \log[M_X(t)],$$

for $t \in (-h, h)$ for some $h > 0$.

Moment generating functions provide a very useful technique for identifying distributions, but suffer from the disadvantage that the integrals which define them may not always be finite.
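The factorization in Theorem 2.7.1, and the convolution sum used in its proof, can be checked exactly on a small discrete case; a sketch in which the choice of two fair dice and the test point are illustrative assumptions:

```python
import math
from itertools import product

# Check M_{X1+X2}(t) = M_{X1}(t) * M_{X2}(t) for independent X1, X2
# (two fair dice, an illustrative choice), with f_Y built by the
# convolution / Theorem of Total Probability argument from the proof.
die = {x: 1.0 / 6.0 for x in range(1, 7)}

def mgf(pmf, t):
    return sum(math.exp(t * x) * p for x, p in pmf.items())

# pmf of Y = X1 + X2 via f_Y(y) = sum_x1 f_X2(y - x1) f_X1(x1)
conv = {}
for x1, x2 in product(die, die):
    conv[x1 + x2] = conv.get(x1 + x2, 0.0) + die[x1] * die[x2]

t = 0.4  # illustrative test point
lhs = mgf(conv, t)
rhs = mgf(die, t) * mgf(die, t)
```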
A class of functions that is equally useful, and whose finiteness is guaranteed, is described next.

Definition 2.7.4 The characteristic function, or cf, of $X$, $C_X$, is defined by

$$C_X(t) = E_{f_X}\left[e^{itX}\right].$$

By definition,

$$C_X(t) = \int_{x \in \mathbb{X}} e^{itx} f_X(x)\,dx = \int_{x \in \mathbb{X}} [\cos tx + i \sin tx]\, f_X(x)\,dx = \int_{x \in \mathbb{X}} \cos tx\, f_X(x)\,dx + i \int_{x \in \mathbb{X}} \sin tx\, f_X(x)\,dx = E_{f_X}[\cos tX] + i E_{f_X}[\sin tX].$$

We will be concerned primarily with cases where the moment generating function exists, and the use of moment generating functions will be a key tool for the identification of distributions.

2.8 JOINT PROBABILITY DISTRIBUTIONS

Suppose $X$ and $Y$ are random variables on the probability space $(\Omega, \mathcal{A}, P(\cdot))$. Their distribution functions $F_X$ and $F_Y$ contain information about their associated probabilities. But how do we describe information about their properties relative to each other? We think of $X$ and $Y$ as components of a random vector $(X, Y)$ taking values in $\mathbb{R}^2$, rather than as unrelated random variables each taking values in $\mathbb{R}$.

Example 2.5 Toss a coin $n$ times and let $X_i = 0$ or $1$, depending on whether the $i$th toss is a tail or a head. The random vector $X = (X_1, \ldots, X_n)$ describes the whole experiment. The total number of heads is $\sum_{i=1}^{n} X_i$.

The joint distribution function of a random vector $(X_1, \ldots, X_n)$ is $P(X_1 \leq x_1, \ldots, X_n \leq x_n)$, a function of $n$ real variables $x_1, \ldots, x_n$. For vectors $x = (x_1, \ldots, x_n)$ and $y = (y_1, \ldots, y_n)$, write $x \leq y$ if $x_i \leq y_i$ for each $i = 1, \ldots, n$.

Definition 2.8.1 The joint distribution function of a random vector $X = (X_1, \ldots, X_n)$ on $(\Omega, \mathcal{A}, P(\cdot))$ is given by $F_X : \mathbb{R}^n \longrightarrow [0, 1]$, defined by

$$F_X(x) = P(X \leq x), \quad x \in \mathbb{R}^n.$$

[Remember, formally, $\{X \leq x\}$ means $\{\omega \in \Omega : X(\omega) \leq x\}$.]

We will consider, for simplicity, the case $n = 2$, without any loss of generality: the case $n > 2$ is just notationally more cumbersome.

Properties of the joint distribution function.
The joint distribution function $F_{X,Y}$ of the random vector $(X, Y)$ satisfies:

(i) $\displaystyle\lim_{x,y \to -\infty} F_{X,Y}(x, y) = 0, \qquad \lim_{x,y \to \infty} F_{X,Y}(x, y) = 1.$

(ii) If $(x_1, y_1) \leq (x_2, y_2)$, then $F_{X,Y}(x_1, y_1) \leq F_{X,Y}(x_2, y_2)$.

(iii) $F_{X,Y}$ is continuous from above:

$$F_{X,Y}(x + u, y + v) \longrightarrow F_{X,Y}(x, y), \quad \text{as } u, v \longrightarrow 0^+.$$

(iv) $\displaystyle\lim_{y \to \infty} F_{X,Y}(x, y) = F_X(x) \equiv P(X \leq x), \qquad \lim_{x \to \infty} F_{X,Y}(x, y) = F_Y(y) \equiv P(Y \leq y).$

$F_X$ and $F_Y$ are the marginal distribution functions of the joint distribution $F_{X,Y}$.

Definition 2.8.2 The random variables $X$ and $Y$ on $(\Omega, \mathcal{A}, P(\cdot))$ are (jointly) discrete if $(X, Y)$ takes values in a countable subset of $\mathbb{R}^2$ only.

Definition 2.8.3 Discrete variables $X$ and $Y$ are independent if the events $\{X = x\}$ and $\{Y = y\}$ are independent for all $x$ and $y$.

Definition 2.8.4 The joint probability mass function $f_{X,Y} : \mathbb{R}^2 \longrightarrow [0, 1]$ of $X$ and $Y$ is given by

$$f_{X,Y}(x, y) = P(X = x, Y = y).$$

The marginal pmf of $X$, $f_X(x)$, is found from

$$f_X(x) = P(X = x) = \sum_{y} P(X = x, Y = y) = \sum_{y} f_{X,Y}(x, y),$$

and similarly for $f_Y(y)$. The definition of independence can be reformulated as: $X$ and $Y$ are independent iff

$$f_{X,Y}(x, y) = f_X(x) f_Y(y), \quad \text{for all } x, y \in \mathbb{R}.$$

More generally, $X$ and $Y$ are independent iff $f_{X,Y}(x, y)$ can be factorized as the product $g(x)h(y)$ of a function of $x$ alone and a function of $y$ alone.

Let $\mathbb{X}$ be the support of $X$ and $\mathbb{Y}$ be the support of $Y$. Then $Z = (X, Y)$ has support $\mathbb{Z} = \{(x, y) : f_{X,Y}(x, y) > 0\}$. In nice cases $\mathbb{Z} = \mathbb{X} \times \mathbb{Y}$, but we need to be alert to cases with $\mathbb{Z} \subset \mathbb{X} \times \mathbb{Y}$. In general, given a random vector $(X_1, \ldots, X_k)$, we will denote its range or support by $\mathbb{X}^{(k)}$.

Definition 2.8.5 The conditional distribution function of $Y$ given $X = x$, $F_{Y|X}(y|x)$, is defined by

$$F_{Y|X}(y|x) = P(Y \leq y \mid X = x),$$

for any $x$ such that $P(X = x) > 0$. The conditional probability mass function of $Y$ given $X = x$, $f_{Y|X}(y|x)$, is defined by

$$f_{Y|X}(y|x) = P(Y = y \mid X = x),$$

for any $x$ such that $P(X = x) > 0$.
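The marginal-pmf and independence-factorization statements above can be checked exactly on a small table; a sketch in which the marginal pmfs are illustrative assumptions (the joint table is built as a product, so it is independent by construction):

```python
# Marginals from a joint pmf, and the independence factorization
# f_{X,Y}(x,y) = f_X(x) f_Y(y). The marginal values are illustrative.
fx = {0: 0.4, 1: 0.6}
fy = {0: 0.25, 1: 0.75}
joint = {(x, y): fx[x] * fy[y] for x in fx for y in fy}

# marginal pmfs recovered by summing the joint over the other variable
marg_x = {x: sum(p for (xx, y), p in joint.items() if xx == x) for x in fx}
marg_y = {y: sum(p for (x, yy), p in joint.items() if yy == y) for y in fy}

independent = all(abs(joint[(x, y)] - marg_x[x] * marg_y[y]) < 1e-12
                  for x in fx for y in fy)
```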
Turning now to the continuous case, we define:

Definition 2.8.6 The random variables $X$ and $Y$ on $(\Omega, \mathcal{A}, P(\cdot))$ are called jointly continuous if their joint distribution function can be expressed as

$$F_{X,Y}(x, y) = \int_{u=-\infty}^{x} \int_{v=-\infty}^{y} f_{X,Y}(u, v)\,dv\,du, \quad x, y \in \mathbb{R},$$

for some $f_{X,Y} : \mathbb{R}^2 \longrightarrow [0, \infty)$. Then $f_{X,Y}$ is the joint probability density function of $X, Y$.

If $F_{X,Y}$ is 'sufficiently differentiable' at $(x, y)$, we have

$$f_{X,Y}(x, y) = \frac{\partial^2}{\partial x \partial y} F_{X,Y}(x, y).$$

This is the usual case, which we will assume from now on. Then:

(i)

$$P(a \leq X \leq b,\ c \leq Y \leq d) = F_{X,Y}(b, d) - F_{X,Y}(a, d) - F_{X,Y}(b, c) + F_{X,Y}(a, c) = \int_{c}^{d} \int_{a}^{b} f_{X,Y}(x, y)\,dx\,dy.$$

If $B$ is a 'nice' subset of $\mathbb{R}^2$, such as a union of rectangles,

$$P((X, Y) \in B) = \iint_{B} f_{X,Y}(x, y)\,dx\,dy.$$

(ii) The marginal distribution functions of $X$ and $Y$ are

$$F_X(x) = P(X \leq x) = F_{X,Y}(x, \infty), \qquad F_Y(y) = P(Y \leq y) = F_{X,Y}(\infty, y).$$

Since

$$F_X(x) = \int_{-\infty}^{x} \left\{\int_{-\infty}^{\infty} f_{X,Y}(u, y)\,dy\right\} du,$$

we see, differentiating with respect to $x$, that the marginal pdf of $X$ is

$$f_X(x) = \int_{-\infty}^{\infty} f_{X,Y}(x, y)\,dy.$$

Similarly, the marginal pdf of $Y$ is

$$f_Y(y) = \int_{-\infty}^{\infty} f_{X,Y}(x, y)\,dx.$$

(iii) We cannot, as we did in the discrete case, define independence of $X$ and $Y$ in terms of the events $\{X = x\}$ and $\{Y = y\}$, as these have zero probability and are trivially independent. So:

Definition 2.8.7 $X$ and $Y$ are independent if $\{X \leq x\}$ and $\{Y \leq y\}$ are independent events, for all $x, y \in \mathbb{R}$.

So, $X$ and $Y$ are independent iff $F_{X,Y}(x, y) = F_X(x) F_Y(y)$, $\forall x, y \in \mathbb{R}$, or (equivalently) iff $f_{X,Y}(x, y) = f_X(x) f_Y(y)$, whenever $F_{X,Y}$ is differentiable at $(x, y)$.

(iv)

Definition 2.8.8 The conditional distribution function of $Y$ given $X = x$, $F_{Y|X}(y|x)$ or $P(Y \leq y \mid X = x)$, is defined as

$$F_{Y|X}(y|x) = \int_{v=-\infty}^{y} \frac{f_{X,Y}(x, v)}{f_X(x)}\,dv,$$

for any $x$ such that $f_X(x) > 0$.

Definition 2.8.9 The conditional density function of $Y$, given $X = x$, $f_{Y|X}(y|x)$, is defined by

$$f_{Y|X}(y|x) = \frac{f_{X,Y}(x, y)}{f_X(x)},$$

for any $x$ such that $f_X(x) > 0$.
This is an appropriate point to remark that not all random variables are either continuous or discrete, and not all distribution functions are either absolutely continuous or discrete. Many practical examples exist of distribution functions that are partly discrete and partly continuous.

Example 2.6 We record the delay that a motorist encounters at a one-way traffic stop sign. Let $X$ be the random variable representing the delay the motorist experiences. There is a certain probability that there will be no opposing traffic, so she will be able to proceed without delay. However, if she has to wait, she could (in principle) have to wait for any positive amount of time. The experiment could be described by assuming that $X$ has distribution function

$$F_X(x) = (1 - p e^{-\lambda x})\, I_{[0,\infty)}(x).$$

This has a jump of $1 - p$ at $x = 0$, but is continuous for $x > 0$: there is a probability $1 - p$ of no wait at all.

We shall see later cases of random vectors, $(X, Y)$ say, where one component is discrete and the other continuous: there is no essential complication in the manipulation of the marginal distributions etc. for such a case.

2.8.1 THE CHAIN RULE FOR RANDOM VARIABLES

As with the chain rule for manipulation of probabilities, there is an explicit relationship between joint, marginal, and conditional mass/density functions. For example, consider three continuous random variables $X_1, X_2, X_3$, with joint pdf $f_{X_1,X_2,X_3}$. Then

$$f_{X_1,X_2,X_3}(x_1, x_2, x_3) = f_{X_1}(x_1)\, f_{X_2|X_1}(x_2|x_1)\, f_{X_3|X_1,X_2}(x_3|x_1, x_2),$$

so that, for example,

$$f_{X_1}(x_1) = \int_{\mathbb{X}_2} \int_{\mathbb{X}_3} f_{X_1,X_2,X_3}(x_1, x_2, x_3)\,dx_2\,dx_3 = \int_{\mathbb{X}_2} \int_{\mathbb{X}_3} f_{X_1|X_2,X_3}(x_1|x_2, x_3)\, f_{X_2,X_3}(x_2, x_3)\,dx_2\,dx_3 = \int_{\mathbb{X}_2} \int_{\mathbb{X}_3} f_{X_1|X_2,X_3}(x_1|x_2, x_3)\, f_{X_2|X_3}(x_2|x_3)\, f_{X_3}(x_3)\,dx_2\,dx_3.$$

Equivalent relationships hold in the discrete case, and can be extended to determine the explicit relationship between joint, marginal, and conditional mass/density functions for any number of random variables.
NOTE: the discrete equivalent of this result is a DIRECT consequence of the Theorem of Total Probability; the event $[X_1 = x_1]$ is partitioned into sub-events $[(X_1 = x_1) \cap (X_2 = x_2) \cap (X_3 = x_3)]$ for all possible values of the pair $(x_2, x_3)$.

2.8.2 CONDITIONAL EXPECTATION AND ITERATED EXPECTATION

Consider two discrete/continuous random variables $X_1$ and $X_2$ with joint mass function/pdf $f_{X_1,X_2}$, and the conditional mass function/pdf of $X_1$ given $X_2 = x_2$, defined in the usual way by

$$f_{X_1|X_2}(x_1|x_2) = \frac{f_{X_1,X_2}(x_1, x_2)}{f_{X_2}(x_2)}.$$

Then the conditional expectation of $g(X_1)$ given $X_2 = x_2$ is defined by

$$E_{f_{X_1|X_2}}[g(X_1) \mid X_2 = x_2] = \begin{cases} \displaystyle\sum_{x_1 \in \mathbb{X}_1} g(x_1)\, f_{X_1|X_2}(x_1|x_2), & X_1 \text{ DISCRETE}, \\[2ex] \displaystyle\int_{\mathbb{X}_1} g(x_1)\, f_{X_1|X_2}(x_1|x_2)\,dx_1, & X_1 \text{ CONTINUOUS}, \end{cases}$$

i.e. the expectation of $g(X_1)$ with respect to the conditional density of $X_1$ given $X_2 = x_2$ (possibly giving a function of $x_2$). The case $g(x) \equiv x$ is a particular case.

Theorem 2.8.1 THE LAW OF ITERATED EXPECTATION

For two continuous random variables $X_1$ and $X_2$ with joint pdf $f_{X_1,X_2}$,

$$E_{f_{X_1}}[g(X_1)] = E_{f_{X_2}}\left[E_{f_{X_1|X_2}}[g(X_1) \mid X_2 = x_2]\right].$$

Proof.

$$E_{f_{X_1}}[g(X_1)] = \int_{\mathbb{X}_1} g(x_1)\, f_{X_1}(x_1)\,dx_1 = \int_{\mathbb{X}_1} g(x_1) \left\{\int_{\mathbb{X}_2} f_{X_1,X_2}(x_1, x_2)\,dx_2\right\} dx_1 = \int_{\mathbb{X}_1} g(x_1) \left\{\int_{\mathbb{X}_2} f_{X_1|X_2}(x_1|x_2)\, f_{X_2}(x_2)\,dx_2\right\} dx_1$$
$$= \int_{\mathbb{X}_2} \left\{\int_{\mathbb{X}_1} g(x_1)\, f_{X_1|X_2}(x_1|x_2)\,dx_1\right\} f_{X_2}(x_2)\,dx_2 = \int_{\mathbb{X}_2} \left\{E_{f_{X_1|X_2}}[g(X_1) \mid X_2 = x_2]\right\} f_{X_2}(x_2)\,dx_2 = E_{f_{X_2}}\left[E_{f_{X_1|X_2}}[g(X_1) \mid X_2 = x_2]\right],$$

so the expectation of $g(X_1)$ can be calculated by finding the conditional expectation of $g(X_1)$ given $X_2 = x_2$, giving a function of $x_2$, and then taking the expectation of this function with respect to the marginal density for $X_2$. Note that this proof only works if the conditional expectation and the marginal expectation are finite. This result extends naturally to $k$ variables.
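The law of iterated expectation can be verified exactly in the discrete case; a sketch on a small joint pmf whose values are an illustrative assumption:

```python
# Law of iterated expectation, E[X1] = E[E[X1|X2]], on an illustrative
# discrete joint pmf keyed by (x1, x2).
joint = {(0, 0): 0.1, (0, 1): 0.2, (1, 0): 0.3, (1, 1): 0.4}

# marginal pmf of X2
f2 = {}
for (x1, x2), p in joint.items():
    f2[x2] = f2.get(x2, 0.0) + p

def cond_exp_x1(x2):
    # E[X1 | X2 = x2] = sum_x1 x1 * f(x1, x2) / f2(x2)
    return sum(x1 * p for (x1, xx2), p in joint.items() if xx2 == x2) / f2[x2]

direct = sum(x1 * p for (x1, x2), p in joint.items())           # E[X1]
iterated = sum(cond_exp_x1(x2) * p2 for x2, p2 in f2.items())   # E[E[X1|X2]]
```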
2.9 MULTIVARIATE TRANSFORMATIONS

Theorem 2.9.1 THE MULTIVARIATE TRANSFORMATION THEOREM

Let $X = (X_1, \ldots, X_k)$ be a vector of random variables with joint mass/density function $f_{X_1,\ldots,X_k}$. Let $Y = (Y_1, \ldots, Y_k)$ be a vector of random variables defined by $Y_i = g_i(X_1, \ldots, X_k)$ for some functions $g_i$, $i = 1, \ldots, k$, where the vector function $g$ mapping $(X_1, \ldots, X_k)$ to $(Y_1, \ldots, Y_k)$ is a 1-1 transformation. Then the joint mass/density function of $(Y_1, \ldots, Y_k)$ is given by

DISCRETE: $\quad f_{Y_1,\ldots,Y_k}(y_1, \ldots, y_k) = f_{X_1,\ldots,X_k}(x_1, \ldots, x_k);$

CONTINUOUS: $\quad f_{Y_1,\ldots,Y_k}(y_1, \ldots, y_k) = f_{X_1,\ldots,X_k}(x_1, \ldots, x_k)\, |J(y_1, \ldots, y_k)|,$

where $x = (x_1, \ldots, x_k)$ is the unique solution of the system $y = g(x)$, so that $x = g^{-1}(y)$, and where $J(y_1, \ldots, y_k)$ is the Jacobian of the transformation, that is, the determinant of the $k \times k$ matrix whose $(i, j)$th element is

$$\frac{\partial}{\partial t_j}\left\{g_i^{-1}(t)\right\}_{t_1 = y_1, \ldots, t_k = y_k},$$

where $g_i^{-1}$ is the inverse function uniquely defined by $X_i = g_i^{-1}(Y_1, \ldots, Y_k)$. Note again the modulus.

Proof. The discrete-case proof follows the univariate case precisely. For the continuous case, consider the equivalent events $[X \in C]$ and $[Y \in D]$, where $D$ is the image of $C$ under $g$. Clearly, $P[X \in C] = P[Y \in D]$. Now $P[X \in C]$ is the $k$-dimensional integral of the joint density $f_{X_1,\ldots,X_k}$ over the set $C$, and $P[Y \in D]$ is the $k$-dimensional integral of the joint density $f_{Y_1,\ldots,Y_k}$ over the set $D$. The result follows by changing variables in the first integral from $x$ to $y = g(x)$, and equating the two integrands.

Note: As for single-variable transformations, the ranges of the transformed variables must be considered carefully.

Example 2.7 The multivariate transformation theorem provides a simple proof of the convolution formula: if $X$ and $Y$ are independent continuous random variables with pdfs $f_X(x)$ and $f_Y(y)$, then the pdf of $Z = X + Y$ is

$$f_Z(z) = \int_{-\infty}^{\infty} f_X(w)\, f_Y(z - w)\,dw.$$

Let $W = X$.
The Jacobian of the transformation from $(X, Y)$ to $(Z, W)$ is 1. So, the joint pdf of $(Z, W)$ is

$$f_{Z,W}(z, w) = f_{X,Y}(w, z - w) = f_X(w)\, f_Y(z - w).$$

Then integrate out $W$ to obtain the marginal pdf of $Z$.

Example 2.8 Consider the case $k = 2$, and suppose that $X_1$ and $X_2$ are independent continuous random variables with ranges $\mathbb{X}_1 = \mathbb{X}_2 = [0, 1]$ and pdfs given respectively by

$$f_{X_1}(x_1) = 6x_1(1 - x_1), \quad 0 \leq x_1 \leq 1, \qquad f_{X_2}(x_2) = 3x_2^2, \quad 0 \leq x_2 \leq 1,$$

and zero elsewhere. In order to calculate the pdf of the random variable $Y_1$ defined by $Y_1 = X_1 X_2$ using the transformation result, consider the additional random variable $Y_2 = X_1$ (note, as $X_1$ and $X_2$ take values on $[0, 1]$, $X_1 \geq X_1 X_2$, so $Y_1 \leq Y_2$). The transformation $Y = g(X)$ is then specified by the two functions

$$g_1(t_1, t_2) = t_1 t_2, \qquad g_2(t_1, t_2) = t_1,$$

and the inverse transformation $X = g^{-1}(Y)$ (i.e. $X$ in terms of $Y$) is $X_1 = Y_2$, $X_2 = Y_1/Y_2$, giving

$$g_1^{-1}(t_1, t_2) = t_2, \qquad g_2^{-1}(t_1, t_2) = t_1/t_2.$$

Hence

$$\frac{\partial g_1^{-1}}{\partial t_1} = 0, \quad \frac{\partial g_1^{-1}}{\partial t_2} = 1, \quad \frac{\partial g_2^{-1}}{\partial t_1} = \frac{1}{t_2}, \quad \frac{\partial g_2^{-1}}{\partial t_2} = -\frac{t_1}{t_2^2},$$

and so the Jacobian $J(y_1, y_2)$ of the transformation is given by the determinant

$$\begin{vmatrix} 0 & 1 \\ 1/y_2 & -y_1/y_2^2 \end{vmatrix} = -\frac{1}{y_2},$$

so that $J(y_1, y_2) = -1/y_2$. Hence, using the theorem,

$$f_{Y_1,Y_2}(y_1, y_2) = f_{X_1,X_2}(y_2, y_1/y_2) \times |J(y_1, y_2)| = 6y_2(1 - y_2) \times 3(y_1/y_2)^2 \times \frac{1}{y_2} = \frac{18 y_1^2 (1 - y_2)}{y_2^2},$$

on the set $\mathbb{Y}^{(2)} = \{(y_1, y_2) : 0 \leq y_1 \leq y_2 \leq 1\}$, and zero otherwise. Hence

$$f_{Y_1}(y_1) = \int_{y_1}^{1} \frac{18 y_1^2 (1 - y_2)}{y_2^2}\,dy_2 = 18 y_1^2 \left[-\frac{1}{y_2} - \log y_2\right]_{y_1}^{1} = 18 y_1^2 \left(-1 + \frac{1}{y_1} + \log y_1\right) = 18 y_1 (1 - y_1 + y_1 \log y_1),$$

for $0 \leq y_1 \leq 1$, and zero otherwise.
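The derived density in Example 2.8 can be checked by simulation; a hedged sketch resting on two facts not stated in the notes: a pdf $6x(1-x)$ is Beta(2,2), obtainable as the median of three uniforms, and a pdf $3x^2$ is Beta(3,1), obtainable as the maximum of three uniforms. The seed, sample size, and test point are illustrative.

```python
import math
import random

# Monte Carlo check of Example 2.8: Y1 = X1*X2 with X1 ~ pdf 6x(1-x)
# (median of 3 uniforms) and X2 ~ pdf 3x^2 (max of 3 uniforms).
random.seed(3)

def sample_y1():
    x1 = sorted(random.random() for _ in range(3))[1]  # Beta(2,2)
    x2 = max(random.random() for _ in range(3))        # Beta(3,1)
    return x1 * x2

def cdf_y1(y, n=2000):
    # cdf from the derived pdf f(t) = 18t(1 - t + t*log t), by midpoint rule
    h = y / n
    return sum(18.0 * t * (1.0 - t + t * math.log(t))
               for t in (h * (i + 0.5) for i in range(n))) * h

trials = 100_000
y0 = 0.4  # illustrative test point
empirical = sum(sample_y1() <= y0 for _ in range(trials)) / trials
```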
2.10 MULTIVARIATE EXPECTATIONS AND COVARIANCE

2.10.1 EXPECTATION WITH RESPECT TO JOINT DISTRIBUTIONS

Definition 2.10.1 For random variables $X_1, \ldots, X_k$ with range $\mathbb{X}^{(k)}$ and mass/density function $f_{X_1,\ldots,X_k}$, the expectation of $g(X_1, \ldots, X_k)$ is defined in the discrete and continuous cases by

$$E_{f_{X_1,\ldots,X_k}}[g(X_1, \ldots, X_k)] = \begin{cases} \displaystyle\sum_{\mathbb{X}_1} \cdots \sum_{\mathbb{X}_k} g(x_1, \ldots, x_k)\, f_{X_1,\ldots,X_k}(x_1, \ldots, x_k), \\[2ex] \displaystyle\int_{\mathbb{X}_1} \cdots \int_{\mathbb{X}_k} g(x_1, \ldots, x_k)\, f_{X_1,\ldots,X_k}(x_1, \ldots, x_k)\,dx_1 \cdots dx_k. \end{cases}$$

PROPERTIES

(i) Let $g$ and $h$ be real-valued functions and let $a$ and $b$ be constants. Then, writing $f_X \equiv f_{X_1,\ldots,X_k}$,

$$E_{f_X}[ag(X_1, \ldots, X_k) + bh(X_1, \ldots, X_k)] = aE_{f_X}[g(X_1, \ldots, X_k)] + bE_{f_X}[h(X_1, \ldots, X_k)].$$

(ii) Let $X_1, \ldots, X_k$ be independent random variables with mass functions/pdfs $f_{X_1}, \ldots, f_{X_k}$ respectively. Let $g_1, \ldots, g_k$ be scalar functions of $X_1, \ldots, X_k$ respectively (that is, $g_i$ is a function of $X_i$ only, for $i = 1, \ldots, k$). If $g(X_1, \ldots, X_k) = g_1(X_1) \cdots g_k(X_k)$, then

$$E_{f_X}[g(X_1, \ldots, X_k)] = \prod_{i=1}^{k} E_{f_{X_i}}[g_i(X_i)],$$

where $E_{f_{X_i}}[g_i(X_i)]$ is the marginal expectation of $g_i(X_i)$ with respect to $f_{X_i}$.

(iii) Generally, $E_{f_X}[g(X_1)] \equiv E_{f_{X_1}}[g(X_1)]$, so that the expectation over the joint distribution is the same as the expectation over the marginal distribution. The proof is an immediate consequence of the fact that the marginal pdf $f_{X_1}$ is obtained by integrating the joint density with respect to $x_2, \ldots, x_k$. So, whenever we wish, it is reasonable to denote the expectation as, say, $E[g(X_1)]$, rather than $E_{f_{X_1}}[g(X_1)]$ or $E_{f_X}[g(X_1)]$: we can 'drop subscripts'.
2.10.2 COVARIANCE AND CORRELATION

Definition 2.10.2 The covariance of two random variables $X_1$ and $X_2$ is denoted $Cov_{f_{X_1,X_2}}[X_1, X_2]$, and is defined by

$$Cov_{f_{X_1,X_2}}[X_1, X_2] = E_{f_{X_1,X_2}}[(X_1 - \mu_1)(X_2 - \mu_2)] = E_{f_{X_1,X_2}}[X_1 X_2] - \mu_1 \mu_2,$$

where $\mu_i = E_{f_{X_i}}[X_i]$ is the marginal expectation of $X_i$, for $i = 1, 2$, and where

$$E_{f_{X_1,X_2}}[X_1 X_2] = \int\!\!\int x_1 x_2\, f_{X_1,X_2}(x_1, x_2)\,dx_1\,dx_2,$$

that is, the expectation of the function $g(x_1, x_2) = x_1 x_2$ with respect to the joint distribution $f_{X_1,X_2}$.

Definition 2.10.3 The correlation of $X_1$ and $X_2$ is denoted $Corr_{f_{X_1,X_2}}[X_1, X_2]$, and is defined by

$$Corr_{f_{X_1,X_2}}[X_1, X_2] = \frac{Cov_{f_{X_1,X_2}}[X_1, X_2]}{\sqrt{Var_{f_{X_1}}[X_1]\, Var_{f_{X_2}}[X_2]}}.$$

If $Cov_{f_{X_1,X_2}}[X_1, X_2] = Corr_{f_{X_1,X_2}}[X_1, X_2] = 0$, then the variables $X_1$ and $X_2$ are uncorrelated. Note that if the random variables $X_1$ and $X_2$ are independent, then

$$Cov_{f_{X_1,X_2}}[X_1, X_2] = E_{f_{X_1,X_2}}[X_1 X_2] - E_{f_{X_1}}[X_1] E_{f_{X_2}}[X_2] = E_{f_{X_1}}[X_1] E_{f_{X_2}}[X_2] - E_{f_{X_1}}[X_1] E_{f_{X_2}}[X_2] = 0,$$

and so $X_1$ and $X_2$ are also uncorrelated (the converse does not hold).

NOTES:

(i) For random variables $X_1$ and $X_2$, with (marginal) expectations $\mu_1$ and $\mu_2$ respectively, and (marginal) variances $\sigma_1^2$ and $\sigma_2^2$ respectively, if random variables $Z_1$ and $Z_2$ are defined by $Z_1 = (X_1 - \mu_1)/\sigma_1$ and $Z_2 = (X_2 - \mu_2)/\sigma_2$, then $Z_1$ and $Z_2$ are standardized variables. Then $E_{f_{Z_i}}[Z_i] = 0$, $Var_{f_{Z_i}}[Z_i] = 1$, and

$$Corr_{f_{X_1,X_2}}[X_1, X_2] = Cov_{f_{Z_1,Z_2}}[Z_1, Z_2].$$

(ii) Extension to $k$ variables: covariances can only be calculated for pairs of random variables, but if $k$ variables have a joint probability structure, it is possible to construct a $k \times k$ matrix, $C$ say, of covariance values, whose $(i, j)$th element is $Cov_{f_{X_i,X_j}}[X_i, X_j]$, for $i, j = 1, \ldots, k$, that captures the complete covariance structure of the joint distribution. If $i \neq j$, then $Cov_{f_{X_j,X_i}}[X_j, X_i] = Cov_{f_{X_i,X_j}}[X_i, X_j]$, so $C$ is symmetric; and if $i = j$, $Cov_{f_{X_i,X_i}}[X_i, X_i] \equiv Var_{f_{X_i}}[X_i]$.
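Definition 2.10.2 can be computed exactly on a small joint pmf; a sketch in which the (dependent) joint table is an illustrative assumption:

```python
# Covariance and correlation from a joint pmf, via
# Cov[X1, X2] = E[X1 X2] - mu1*mu2. The table is illustrative and dependent.
joint = {(0, 0): 0.3, (0, 1): 0.2, (1, 0): 0.1, (1, 1): 0.4}

def e(g):
    # expectation of g(x1, x2) with respect to the joint pmf
    return sum(g(x1, x2) * p for (x1, x2), p in joint.items())

mu1 = e(lambda x1, x2: x1)                       # 0.5
mu2 = e(lambda x1, x2: x2)                       # 0.6
cov = e(lambda x1, x2: x1 * x2) - mu1 * mu2      # 0.4 - 0.3 = 0.1
var1 = e(lambda x1, x2: (x1 - mu1) ** 2)
var2 = e(lambda x1, x2: (x2 - mu2) ** 2)
corr = cov / (var1 * var2) ** 0.5
```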
The matrix $C$ is referred to as the variance-covariance matrix.

(iii) If the random variable $X$ is defined by $X = a_1 X_1 + a_2 X_2 + \ldots + a_k X_k$, for random variables $X_1, \ldots, X_k$ and constants $a_1, \ldots, a_k$, then

$$E_{f_X}[X] = \sum_{i=1}^{k} a_i E_{f_{X_i}}[X_i],$$

$$Var_{f_X}[X] = \sum_{i=1}^{k} a_i^2 Var_{f_{X_i}}[X_i] + 2\sum_{i=1}^{k}\sum_{j=1}^{i-1} a_i a_j Cov_{f_{X_i,X_j}}[X_i, X_j] = a^T C a, \qquad a = (a_1, \ldots, a_k)^T.$$

(iv) Combining (i) and (iii) when $k = 2$, and defining standardized variables $Z_1$ and $Z_2$,

$$0 \leq Var_{f_{Z_1,Z_2}}[Z_1 \pm Z_2] = Var_{f_{Z_1}}[Z_1] + Var_{f_{Z_2}}[Z_2] \pm 2Cov_{f_{Z_1,Z_2}}[Z_1, Z_2] = 1 + 1 \pm 2Corr_{f_{X_1,X_2}}[X_1, X_2] = 2(1 \pm Corr_{f_{X_1,X_2}}[X_1, X_2]),$$

and hence $-1 \leq Corr_{f_{X_1,X_2}}[X_1, X_2] \leq 1$.

2.10.3 JOINT MOMENT GENERATING FUNCTION

Definition 2.10.4 Let $X$ and $Y$ be jointly distributed. The joint moment generating function of $X$ and $Y$ is

$$M_{X,Y}(s, t) = E(e^{sX + tY}).$$

If this exists in a neighbourhood of the origin $(0, 0)$, then it has the same attractive properties as the ordinary moment generating function. It determines the joint distribution of $X$ and $Y$ uniquely, and it also yields the moments:

$$\left.\frac{\partial^{m+n}}{\partial s^m \partial t^n} M_{X,Y}(s, t)\right|_{s=t=0} = E(X^m Y^n).$$

Joint moment generating functions factorize for independent random variables: we have $M_{X,Y}(s, t) = M_X(s) M_Y(t)$ if and only if $X$ and $Y$ are independent. Note, $M_X(s) = M_{X,Y}(s, 0)$, etc.

The definition of the joint moment generating function extends in an obvious way to three or more random variables, with the corresponding result for independence. For instance, the random variable $X$ is independent of $(Y, Z)$ if and only if the joint moment generating function $M_{X,Y,Z}(s, t, u) = E(e^{sX + tY + uZ})$ factorizes as $M_{X,Y,Z}(s, t, u) = M_X(s) M_{Y,Z}(t, u)$.

2.10.4 FURTHER RESULTS ON INDEPENDENCE

The following results are useful:

(I) Let $X$ and $Y$ be independent random variables. Let $g(x)$ be a function only of $x$ and $h(y)$ a function only of $y$. Then the random variables $U = g(X)$ and $V = h(Y)$ are independent.
(II) Let $X_1, \ldots, X_n$ be independent random vectors. Let $g_i(x_i)$ be a function only of $x_i$, $i = 1, \ldots, n$. Then the random variables $U_i = g_i(X_i)$, $i = 1, \ldots, n$, are independent.

2.11 ORDER STATISTICS

Order statistics, like sample moments, play a very important role in statistical inference. Let $X_1, \ldots, X_n$ be independent, identically distributed continuous random variables, with cdf $F_X$ and pdf $f_X$. Then order the $X_i$: let $Y_1$ be the smallest of $\{X_1, \ldots, X_n\}$, $Y_2$ the second smallest, $\ldots$, and $Y_n$ the largest. Note that, since we assume continuity, the chance of ties is zero. It is customary to use the notation $Y_k = X_{(k)}$, and then $X_{(1)}, X_{(2)}, \ldots, X_{(n)}$ are known as the order statistics of $X_1, \ldots, X_n$. Two key results are:

Result A The order statistics have joint density

$$n! \prod_{i=1}^{n} f_X(y_i), \quad y_1 < y_2 < \ldots < y_n.$$

Result B The order statistic $X_{(k)}$ has density

$$f_{(k)}(y) = k\binom{n}{k} f_X(y)\, \{F_X(y)\}^{k-1}\, \{1 - F_X(y)\}^{n-k}.$$

An informal proof of Result A is straightforward, using a symmetry argument based on the independent, identically distributed nature of $X_1, \ldots, X_n$. Result B follows on noting that the event $X_{(k)} \leq y$ occurs if and only if at least $k$ of the $X_i$ lie in $(-\infty, y]$. Recalling the Binomial distribution, this means that $X_{(k)}$ has distribution function

$$F_{(k)}(y) = \sum_{j=k}^{n} \binom{n}{j} \{F_X(y)\}^j\, \{1 - F_X(y)\}^{n-j}.$$

The pdf follows on differentiating this cdf.

CHAPTER 3 DISCRETE PROBABILITY DISTRIBUTIONS

Definition 3.1.1 DISCRETE UNIFORM DISTRIBUTION $X \sim Uniform(n)$

$$f_X(x) = \frac{1}{n}, \quad x \in \mathbb{X} = \{1, 2, \ldots, n\},$$

and zero otherwise.

Definition 3.1.2 BERNOULLI DISTRIBUTION $X \sim Bernoulli(\theta)$

$$f_X(x) = \theta^x (1 - \theta)^{1-x}, \quad x \in \mathbb{X} = \{0, 1\},$$

and zero otherwise.

NOTE: The Bernoulli distribution is used for modelling when the outcome of an experiment is either a "success" or a "failure", where the probability of a success is equal to $\theta$. Such an experiment is a 'Bernoulli trial'.
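Returning briefly to order statistics: the binomial-tail cdf in Result B of §2.11 can be checked by simulating order statistics of uniform samples; a hedged sketch in which $n = 5$, $k = 2$, the seed, and the test point are illustrative choices.

```python
import math
import random

# Simulation check of Result B (Section 2.11): for n iid Uniform(0,1) draws,
# P[X_(k) <= y] = sum_{j=k}^{n} C(n,j) y^j (1-y)^(n-j).
random.seed(4)
n, k, y0 = 5, 2, 0.3  # illustrative parameters

def cdf_kth(y):
    return sum(math.comb(n, j) * y**j * (1.0 - y)**(n - j)
               for j in range(k, n + 1))

trials = 100_000
hits = sum(sorted(random.random() for _ in range(n))[k - 1] <= y0
           for _ in range(trials))
empirical = hits / trials
```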
The mgf is MX(t) = (1 − θ) + θe^t.

Definition 3.1.3 BINOMIAL DISTRIBUTION X ∼ Bin(n, θ)

fX(x) = (n choose x) θ^x (1 − θ)^{n−x}, x ∈ X = {0, 1, 2, ..., n}, n ≥ 1, 0 ≤ θ ≤ 1, and zero otherwise.

NOTES

1. If X1, ..., Xk are independent and identically distributed (IID) Bernoulli(θ) random variables, and Y = X1 + ... + Xk, then by the standard result for mgfs,

MY(t) = {MX(t)}^k = (1 − θ + θe^t)^k,

so Y ∼ Bin(k, θ), by the uniqueness of mgfs. Thus the binomial distribution is used to model the total number of successes in a series of independent and identical experiments.

2. Alternatively, consider sampling without replacement from an infinite collection, or sampling with replacement from a finite collection, of objects, a proportion θ of which are of Type I, the remainder being of Type II. If X is the number of Type I objects in a sample of n, then X ∼ Bin(n, θ).

Definition 3.1.4 POISSON DISTRIBUTION X ∼ Poisson(λ)

fX(x) = e^{−λ} λ^x / x!, x ∈ X = {0, 1, 2, ...}, λ > 0, and zero otherwise.

NOTES

1. If X ∼ Bin(n, θ), let λ = nθ. Then

MX(t) = (1 − θ + θe^t)^n = (1 + λ(e^t − 1)/n)^n −→ exp{λ(e^t − 1)}, as n −→ ∞,

which is the mgf of a Poisson random variable. Therefore the Poisson distribution arises as the limiting case of the binomial distribution as n −→ ∞ and θ −→ 0 with nθ = λ constant (that is, for "large" n and "small" θ). So, if n is large and θ is small, we can reasonably approximate Bin(n, θ) by Poisson(nθ).

2. Suppose that X1 and X2 are independent, with X1 ∼ Poisson(λ1) and X2 ∼ Poisson(λ2). If Y = X1 + X2, then, using the general mgf result for independent random variables,

MY(t) = MX1(t) MX2(t) = exp{λ1(e^t − 1)} exp{λ2(e^t − 1)} = exp{(λ1 + λ2)(e^t − 1)},

so that Y ∼ Poisson(λ1 + λ2). Therefore the sum of two independent Poisson random variables also has a Poisson distribution.
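Note 2 can be verified numerically: convolving the two Poisson mass functions reproduces the Poisson(λ1 + λ2) mass function exactly, up to floating-point error. A small sketch (the values λ1 = 1.5, λ2 = 2.5 and the grid size are arbitrary choices):

```python
from math import exp, factorial

# Check that the convolution of Poisson(lam1) and Poisson(lam2) pmfs
# equals the Poisson(lam1 + lam2) pmf, as the mgf argument shows.
lam1, lam2 = 1.5, 2.5

def poisson_pmf(lam, x):
    return exp(-lam) * lam**x / factorial(x)

def conv_pmf(y):
    """P[X1 + X2 = y], summing over the ways to split y between X1 and X2."""
    return sum(poisson_pmf(lam1, j) * poisson_pmf(lam2, y - j)
               for j in range(y + 1))

gap = max(abs(conv_pmf(y) - poisson_pmf(lam1 + lam2, y)) for y in range(25))
print(gap)   # essentially zero (floating-point error only)
```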
This result can be extended easily: if X1, ..., Xk are independent random variables with Xi ∼ Poisson(λi) for i = 1, ..., k, then

Y = Σ_{i=1}^k Xi =⇒ Y ∼ Poisson(Σ_{i=1}^k λi).

3. THE POISSON PROCESS (This material is not examinable.)

The Poisson distribution arises as part of a larger modelling framework. Consider an experiment involving events (such as radioactive emissions) that occur repeatedly and randomly in time. Let X(t) be the random variable representing the number of events that occur in the interval [0, t], so that X(t) takes values 0, 1, 2, .... Informally, in a Poisson process we have the following properties: in the interval (t, t + h) there may or may not be events; if h is small, the probability of an event in (t, t + h) is roughly proportional to h; and it is not very likely that two or more events occur in a small interval.

Formally, a Poisson process with intensity λ is a process X(t), t ≥ 0, taking values in {0, 1, 2, ...} such that

(a) X(0) = 0, and if s < t then X(s) ≤ X(t);

(b) P(X(t + h) = n + m | X(t) = n) = λh + o(h) if m = 1; o(h) if m > 1; 1 − λh + o(h) if m = 0;

(c) if s < t, the number of events X(t) − X(s) in (s, t] is independent of the number of events in [0, s].

Here o(h) denotes [by definition] any function such that

lim_{h−→0} o(h)/h = 0.

Then, if Pn(t) = P[X(t) = n] = P[n events occur in [0, t]], it can be shown that

Pn(t) = e^{−λt}(λt)^n / n!

(that is, the random variable corresponding to the number of events that occur in the interval [0, t] has a Poisson distribution with parameter λt).

Definition 3.1.5 GEOMETRIC DISTRIBUTION X ∼ Geometric(θ)

fX(x) = (1 − θ)^{x−1} θ, x ∈ X = {1, 2, ...}, 0 ≤ θ ≤ 1.

NOTES

1. The cdf is available analytically as FX(x) = 1 − (1 − θ)^x, x = 1, 2, 3, ....

2. If X ∼ Geometric(θ), then for x, j ≥ 1,

P[X = x + j | X > j] = P[X = x + j, X > j]/P[X > j] = P[X = x + j]/P[X > j] = (1 − θ)^{x+j−1} θ / (1 − θ)^j = (1 − θ)^{x−1} θ = P[X = x].

So P[X = x + j | X > j] = P[X = x].
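The identity just derived can be checked numerically from the mass function and cdf. A minimal sketch (the values of θ, x and j are arbitrary illustrative choices):

```python
# Exact check of the geometric lack-of-memory property of Note 2:
# P[X = x + j | X > j] = P[X = x].
theta, x, j = 0.3, 4, 6

def pmf(k):                 # f_X(k) = (1 - theta)^(k-1) * theta, k = 1, 2, ...
    return (1 - theta)**(k - 1) * theta

p_gt_j = (1 - theta)**j     # P[X > j] = (1 - theta)^j, from the cdf in Note 1
conditional = pmf(x + j) / p_gt_j
print(conditional, pmf(x))  # the two values coincide
```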
This property is unique (among discrete distributions) to the geometric distribution, and is called the lack of memory property.

3. Alternative representations are sometimes useful:

fX(x) = φ^{x−1}(1 − φ), x = 1, 2, 3, ... (that is, φ = 1 − θ);

fX(x) = φ^x(1 − φ), x = 0, 1, 2, ....

4. The geometric distribution is used to model the number, X, of independent, identical Bernoulli trials until the first success is obtained. It is a discrete waiting time distribution.

Definition 3.1.6 NEGATIVE BINOMIAL DISTRIBUTION X ∼ NegBin(n, θ)

fX(x) = ((x−1) choose (n−1)) θ^n (1 − θ)^{x−n}, x ∈ X = {n, n + 1, ...}, n ≥ 1, 0 ≤ θ ≤ 1.

NOTES

1. If X ∼ Bin(n, θ) and Y ∼ NegBin(r, θ), then for r ≤ n, P[X ≥ r] = P[Y ≤ n].

2. The Negative Binomial distribution is used to model the number, X, of independent, identical Bernoulli trials needed to obtain exactly n successes [the number of trials up to and including the nth success].

3. Alternative representation: let Y be the number of failures in a sequence of independent, identical Bernoulli trials that contains exactly n successes. Then Y = X − n, and hence

fY(y) = ((n+y−1) choose (n−1)) θ^n (1 − θ)^y, y ∈ {0, 1, ...}.

4. If Xi ∼ Geometric(θ), for i = 1, ..., n, are i.i.d. random variables, and Y = X1 + ... + Xn, then Y ∼ NegBin(n, θ) (the result follows immediately using mgfs).

5. If X ∼ NegBin(n, θ), let n(1 − θ)/θ = λ and Y = X − n. Then

MY(t) = e^{−nt} MX(t) = {θ/(1 − e^t(1 − θ))}^n = {1 − (λ/n)(e^t − 1)}^{−n} −→ exp{λ(e^t − 1)},

as n −→ ∞; hence the alternative form of the negative binomial distribution tends to the Poisson distribution as n −→ ∞ with n(1 − θ)/θ = λ constant.

Definition 3.1.7 HYPERGEOMETRIC DISTRIBUTION X ∼ HypGeom(N, R, n), for N ≥ R ≥ n:

fX(x) = (R choose x)((N−R) choose (n−x)) / (N choose n), x ∈ X = {max(0, n − N + R), ..., min(n, R)},

and zero otherwise.

NOTES

1. The hypergeometric distribution is used as a model for experiments involving sampling without replacement from a finite population.
Specifically, consider a finite population of size N, consisting of R items of Type I and N − R of Type II; take a sample of size n without replacement, and let X be the number of Type I objects in the sample. The mass function for the hypergeometric distribution can be obtained by using combinatorial counting techniques. However, the form of the mass function does not lend itself readily to calculation of moments etc.

2. As N, R −→ ∞ with R/N = θ (constant),

P[X = x] −→ (n choose x) θ^x (1 − θ)^{n−x},

so the distribution tends to a Binomial distribution.

CHAPTER 4 CONTINUOUS PROBABILITY DISTRIBUTIONS

Definition 4.1.1 CONTINUOUS UNIFORM DISTRIBUTION X ∼ Uniform(a, b)

fX(x) = 1/(b − a), a ≤ x ≤ b.

NOTES

1. The cdf is FX(x) = (x − a)/(b − a), a ≤ x ≤ b.

2. The case a = 0 and b = 1 gives the Standard Uniform distribution.

Definition 4.1.2 EXPONENTIAL DISTRIBUTION X ∼ Exp(λ)

fX(x) = λe^{−λx}, x > 0, λ > 0.

NOTES

1. The cdf is FX(x) = 1 − e^{−λx}, x > 0.

2. An alternative representation uses θ = 1/λ as the parameter of the distribution. This is sometimes used because the expectation and variance of the Exponential distribution are

EfX [X] = 1/λ = θ, VarfX [X] = 1/λ².

3. If X ∼ Exp(λ), then, for all x, t > 0,

P[X > x + t | X > t] = P[X > x + t, X > t]/P[X > t] = P[X > x + t]/P[X > t] = e^{−λ(x+t)}/e^{−λt} = e^{−λx} = P[X > x].

Thus, for all x, t > 0, P[X > x + t | X > t] = P[X > x]; this is known as the Lack of Memory Property, and is unique to the exponential distribution amongst continuous distributions.

4. Suppose that X(t) is a Poisson process with rate parameter λ > 0, so that

P[X(t) = n] = e^{−λt}(λt)^n / n!.

Let X1, ..., Xn be random variables defined by X1 = "time at which the first event occurs" and, for i = 2, ..., n, Xi = "time interval between the occurrence of the (i − 1)st and ith events". Then X1, ..., Xn are IID because of the assumptions underlying the Poisson process.
So consider the distribution of X1; in particular, consider the probability P[X1 > x] for x > 0. The event [X1 > x] is equivalent to the event "no events occur in the interval (0, x]", which has probability e^{−λx}. But then

FX1(x) = P[X1 ≤ x] = 1 − P[X1 > x] = 1 − e^{−λx} =⇒ X1 ∼ Exp(λ).

5. The exponential distribution is used to model failure times in continuous time. It is a continuous waiting time distribution, the continuous analogue of the geometric distribution.

6. If X ∼ Uniform(0, 1), and

Y = −(1/λ) log(1 − X),

then Y ∼ Exp(λ).

7. If X ∼ Exp(λ), and Y = X^{1/α} for α > 0, then Y has a (two-parameter) Weibull distribution, with

fY(y) = αλ y^{α−1} e^{−λy^α}, y > 0.

Definition 4.1.3 GAMMA DISTRIBUTION X ∼ Ga(α, β)

fX(x) = (β^α/Γ(α)) x^{α−1} e^{−βx}, x > 0, α, β > 0,

where, for any real number α > 0, the gamma function Γ(·) is defined by

Γ(α) = ∫_0^∞ t^{α−1} e^{−t} dt.

NOTES

1. If X1 ∼ Ga(α1, β) and X2 ∼ Ga(α2, β) are independent random variables, and Y = X1 + X2, then Y ∼ Ga(α1 + α2, β) (directly from properties of mgfs).

2. Ga(1, β) ≡ Exp(β).

3. If X1, ..., Xn ∼ Exp(λ) are independent random variables, and Y = X1 + ... + Xn, then Y ∼ Ga(n, λ) (directly from 1. and 2.).

4. For α > 1, integrating by parts, we have that Γ(α) = (α − 1)Γ(α − 1), and hence if α = 1, 2, ..., then Γ(α) = (α − 1)!. A useful fact is that Γ(1/2) = √π.

5. Special Case: if α = 1, 2, ..., the Ga(α/2, 1/2) distribution is also known as the chi-squared distribution with α degrees of freedom, denoted χ²_α.

6. If X1 ∼ χ²_{n1} and X2 ∼ χ²_{n2} are independent chi-squared random variables with n1 and n2 degrees of freedom respectively, then the random variable F defined as the ratio

F = (X1/n1)/(X2/n2)

has an F-distribution with (n1, n2) degrees of freedom.

Definition 4.1.4 BETA DISTRIBUTION X ∼ Be(α, β)

fX(x) = (Γ(α + β)/(Γ(α)Γ(β))) x^{α−1}(1 − x)^{β−1}, 0 < x < 1, α, β > 0.

NOTES

1. If α = β = 1, Be(α, β) ≡ Uniform(0, 1).

2.
If X1 ∼ Ga(α1, β) and X2 ∼ Ga(α2, β) are independent random variables, and

Y = X1/(X1 + X2),

then Y ∼ Be(α1, α2) (using standard multivariate transformation techniques).

3. Suppose that random variables X and Y have a joint probability distribution such that the conditional distribution of X, given Y = y for 0 < y < 1, is binomial, Bin(n, y), and the marginal distribution of Y is beta, Be(α, β), so that

fX|Y(x|y) = (n choose x) y^x (1 − y)^{n−x}, x = 0, 1, ..., n,

fY(y) = (Γ(α + β)/(Γ(α)Γ(β))) y^{α−1}(1 − y)^{β−1}, 0 < y < 1.

Then the marginal distribution of X is given by

fX(x) = ∫_0^1 fX|Y(x|y) fY(y) dy = (n choose x) (Γ(α + β)/(Γ(α)Γ(β))) Γ(x + α)Γ(n − x + β)/Γ(n + α + β), x = 0, 1, 2, ..., n.

Note that this provides an example of a joint distribution of continuous Y and discrete X.

Definition 4.1.5 NORMAL DISTRIBUTION X ∼ N(μ, σ²)

fX(x) = (1/√(2πσ²)) exp{−(x − μ)²/(2σ²)}, x ∈ R, μ ∈ R, σ > 0.

NOTES

1. Special Case: if μ = 0 and σ² = 1, then X has a standard or unit normal distribution. Usually, the pdf of the standard normal is written φ(x), and the cdf is written Φ(x).

2. If X ∼ N(0, 1), and Y = σX + μ, then Y ∼ N(μ, σ²). Re-expressing this result, if X ∼ N(μ, σ²), and Y = (X − μ)/σ, then Y ∼ N(0, 1) (using transformation or mgf techniques).

3. The Central Limit Theorem. Suppose X1, ..., Xn are IID random variables with some mgf MX, with EfX [Xi] = μ and VarfX [Xi] = σ²; that is, the mgf and the expectation and variance of the Xi's are specified, but the pdf is not. Let the standardized random variable Zn be defined by

Zn = (Σ_{i=1}^n Xi − nμ)/√(nσ²),

and let Zn have mgf MZn. Then, as n −→ ∞,

MZn(t) −→ exp{t²/2},

irrespective of the distribution of the Xi's; that is, the distribution of Zn tends to a standard normal distribution as n tends to infinity. This theorem will be proved and explained in Chapter 6.

4.
If X ∼ N(0, 1), and Y = X², then Y ∼ χ²_1, so that the square of a unit normal random variable has a chi-squared distribution with 1 degree of freedom.

5. If X ∼ N(0, 1) and Y ∼ N(0, 1) are independent random variables, and Z is defined by Z = X/Y, then Z has a Cauchy distribution,

fZ(z) = (1/π)(1/(1 + z²)), z ∈ R.

6. If X ∼ N(0, 1), and Y ∼ Ga(n/2, 1/2) for n = 1, 2, ... (so that Y ∼ χ²_n), are independent random variables, and T is defined by

T = X/√(Y/n),

then T has a Student-t distribution with n degrees of freedom, T ∼ St(n):

fT(t) = (Γ((n+1)/2)/Γ(n/2)) (1/(nπ))^{1/2} (1 + t²/n)^{−(n+1)/2}, t ∈ R.

Limiting and special cases of the Student-t distribution: as n −→ ∞, St(n) −→ N(0, 1); for n = 1, St(1) is the Cauchy distribution.

7. If X1 ∼ N(μ1, σ²1) and X2 ∼ N(μ2, σ²2) are independent and a, b are constants, then

T = aX1 + bX2 ∼ N(aμ1 + bμ2, a²σ²1 + b²σ²2).

CHAPTER 5 MULTIVARIATE PROBABILITY DISTRIBUTIONS

For purely notational reasons, it is convenient in this chapter to consider a random vector X as a column vector, X = (X1, ..., Xk)ᵀ, say.

5.1 THE MULTINOMIAL DISTRIBUTION

The multinomial distribution is a multivariate generalization of the binomial distribution. Recall that the binomial distribution arose from an infinite urn model with two types of objects being sampled with replacement. Suppose that the proportion of "Type 1" objects in the urn is θ (so 0 ≤ θ ≤ 1) and hence the proportion of "Type 2" objects is 1 − θ. Suppose that n objects are sampled, and X is the random variable corresponding to the number of "Type 1" objects in the sample. Then X ∼ Bin(n, θ), and

fX(x) = (n choose x) θ^x (1 − θ)^{n−x}, x ∈ {0, 1, 2, ..., n}.

Now consider a generalization: suppose that the urn contains k + 1 types of objects (k = 1, 2, ...), with θi being the proportion of Type i objects, for i = 1, ..., k + 1. Let Xi be the random variable corresponding to the number of Type i objects in a sample of size n, for i = 1, ..., k.
Then the joint distribution of the vector X = (X1, ..., Xk)ᵀ is given by

fX1,...,Xk (x1, ..., xk) = (n!/(x1! ... xk! xk+1!)) θ1^{x1} ... θk^{xk} θk+1^{xk+1} = (n!/(x1! ... xk! xk+1!)) Π_{i=1}^{k+1} θi^{xi},

where 0 ≤ θi ≤ 1 for all i, with θ1 + ... + θk + θk+1 = 1, and where xk+1 is defined by xk+1 = n − (x1 + ... + xk). This is the mass function for the multinomial distribution, which reduces to the binomial if k = 1. It can also be shown that the marginal distribution of Xi is Bin(n, θi).

5.2 THE DIRICHLET DISTRIBUTION

The Dirichlet distribution is a multivariate generalization of the beta distribution. Recall that the beta distribution arose as follows: suppose that V1 and V2 are independent Gamma random variables with V1 ∼ Ga(α1, β), V2 ∼ Ga(α2, β). Then if X is defined by X = V1/(V1 + V2), we have that X ∼ Be(α1, α2). Now consider a generalization: suppose that V1, ..., Vk+1 are independent Gamma random variables with Vi ∼ Ga(αi, β), for i = 1, ..., k + 1. Define

Xi = Vi/(V1 + ... + Vk+1)

for i = 1, ..., k. Then the joint distribution of the vector X = (X1, ..., Xk)ᵀ is given by the density

fX1,...,Xk (x1, ..., xk) = (Γ(α)/(Γ(α1) ... Γ(αk)Γ(αk+1))) x1^{α1−1} ... xk^{αk−1} xk+1^{αk+1−1},

for 0 ≤ xi ≤ 1 for all i such that x1 + ... + xk + xk+1 = 1, where α = α1 + ... + αk+1 and where xk+1 is defined by xk+1 = 1 − (x1 + ... + xk). This is the density function which reduces to the beta distribution if k = 1. It can also be shown that the marginal distribution of Xi is Be(αi, α − αi).

5.3 THE MULTIVARIATE NORMAL DISTRIBUTION

The random vector X = (X1, ..., Xk)ᵀ has a multivariate normal distribution if the joint pdf is of the form

fX(x1, ..., xk) = (1/(2π))^{k/2} (1/|Σ|^{1/2}) exp{−(1/2)(x − μ)ᵀΣ^{−1}(x − μ)}.

Here x is the (column) vector of length k formed by x1, ..., xk, μ is a (column) vector of length k, Σ is a k × k symmetric, positive definite matrix [Σ = Σᵀ, xᵀΣx > 0 for all x ≠ 0], and |Σ| denotes the determinant of Σ. We write X ∼ Nk(μ, Σ).

Properties

1. E[X] = μ: μ is the mean vector of X. If μ = (μ1, ..., μk)ᵀ, we have E[Xi] = μi. Further, Σ is the variance-covariance matrix of X, Σ = [σij], where σij = cov[Xi, Xj].

2. Since Σ is symmetric and positive definite, there exists a matrix Σ^{1/2} [the 'square root of Σ'] such that: (i) Σ^{1/2} is symmetric; (ii) Σ = Σ^{1/2}Σ^{1/2}; (iii) Σ^{1/2}Σ^{−1/2} = Σ^{−1/2}Σ^{1/2} = I, the k × k identity matrix, where Σ^{−1/2} = (Σ^{1/2})^{−1}. Then, if Z ∼ Nk(0, I), so that Z1, ..., Zk are IID N(0, 1), and X = μ + Σ^{1/2}Z, then X ∼ Nk(μ, Σ). Conversely, if X ∼ Nk(μ, Σ), then Σ^{−1/2}(X − μ) ∼ Nk(0, I).

3. A useful result is the following: if X ∼ Nk(μ, Σ) and D is an m × k matrix of rank m ≤ k, then Y ≡ DX ∼ Nm(Dμ, DΣDᵀ). A special case is where X ∼ Nk(0, I) and D is a k × k matrix of full rank k, so that D is invertible: then Y = DX ∼ Nk(0, DDᵀ).

4. Suppose we partition X as X = (Xa, Xb)ᵀ, with Xa = (X1, ..., Xm)ᵀ and Xb = (Xm+1, ..., Xk)ᵀ. We can similarly partition μ and Σ:

μ = (μa, μb)ᵀ, Σ = [Σaa Σab; Σba Σbb].

Then if X ∼ Nk(μ, Σ) we have:

(I) The marginal distribution of Xa is Nm(μa, Σaa).

(II) The conditional distribution of Xb, given Xa = xa, is

Xb | Xa = xa ∼ Nk−m(μb + ΣbaΣaa^{−1}(xa − μa), Σbb − ΣbaΣaa^{−1}Σab).

(III) If a = (a1, ..., ak)ᵀ, then aᵀX ≡ Σ_{i=1}^k ai Xi ∼ N(aᵀμ, aᵀΣa).

(IV) V = (X − μ)ᵀΣ^{−1}(X − μ) ∼ χ²_k.

CHAPTER 6 PROBABILITY RESULTS & LIMIT THEOREMS

6.1 BOUNDS ON PROBABILITIES BASED ON MOMENTS

Theorem 6.1.1 If X is a random variable, then for a non-negative function h and c > 0,

P[h(X) ≥ c] ≤ EfX [h(X)]/c.

Proof. (continuous case): Suppose that X has density function fX which is positive for x ∈ X. Let A = {x ∈ X : h(x) ≥ c} ⊆ X.
Then, as h(x) ≥ c on A,

EfX [h(X)] = ∫ h(x)fX(x)dx = ∫_A h(x)fX(x)dx + ∫_{A′} h(x)fX(x)dx ≥ ∫_A h(x)fX(x)dx ≥ ∫_A c fX(x)dx = cP[X ∈ A] = cP[h(X) ≥ c].

SPECIAL CASE I - THE MARKOV INEQUALITY: if h(x) = |x|^r for r > 0, then

P[|X|^r ≥ c] ≤ (1/c) EfX [|X|^r].

SPECIAL CASE II - THE CHEBYCHEV INEQUALITY: suppose that X is a random variable with expectation μ and variance σ². Then taking h(x) = (x − μ)² and c = k²σ², for k > 0, gives

P[|X − μ| ≥ kσ] ≤ 1/k²,

and setting ε = kσ gives

P[|X − μ| ≥ ε] ≤ σ²/ε², P[|X − μ| < ε] ≥ 1 − σ²/ε².

Theorem 6.1.2 JENSEN'S INEQUALITY

Suppose that X is a random variable, and the function g is convex, so that (d²/dt²){g(t)}_{t=x} = g″(x) > 0 for all x, with Taylor expansion around the expectation μ of the form

g(x) = g(μ) + (x − μ)g′(μ) + (1/2)(x − μ)²g″(x0), (6.1)

for some x0 between x and μ. Then EfX [g(X)] ≥ g(EfX [X]).

Proof. Taking expectations in (6.1), and noting that EfX [(X − μ)] = 0, EfX [(X − μ)²] = σ² and g″(x0) ≥ 0, we have that

EfX [g(X)] = g(μ) + 0 × g′(μ) + (1/2)σ² × g″(x0) ≥ g(μ) = g(EfX [X]).

6.2 THE CENTRAL LIMIT THEOREM

Theorem 6.2.1 Suppose X1, ..., Xn are i.i.d. random variables with mgf MX, with EfX [Xi] = μ and VarfX [Xi] = σ², both finite. Let the random variable Zn be defined by

Zn = (Σ_{i=1}^n Xi − nμ)/√(nσ²),

and let Zn have mgf MZn. Then, as n −→ ∞,

MZn(t) −→ exp{t²/2},

irrespective of the form of MX.

Proof. First, let Yi = (Xi − μ)/σ for i = 1, ..., n. Then Y1, ..., Yn are i.i.d. with mgf MY, say, and by the elementary properties of expectation, EfY [Yi] = 0 and VarfY [Yi] = 1 for each i. Using the power series expansion result for mgfs, we have that

MY(t) = 1 + tEfY [Y] + (t²/2!)EfY [Y²] + (t³/3!)EfY [Y³] + (t⁴/4!)EfY [Y⁴] + ... = 1 + t²/2! + (t³/3!)EfY [Y³] + (t⁴/4!)EfY [Y⁴] + ...
Now, the random variable Zn can be rewritten

Zn = (1/√n) Σ_{i=1}^n (Xi − μ)/σ,

and thus, again by a standard mgf result, as Y1, ..., Yn are independent, we have that

MZn(t) = Π_{i=1}^n MY(t/√n) = {MY(t/√n)}^n = {1 + t²/(2n) + (t³/(6n^{3/2}))EfY [Y³] + ...}^n.

Thus, as n −→ ∞, using the property of the exponential function that if an → a then

(1 + an/n)^n → e^a,

we have

MZn(t) −→ exp{t²/2}.

INTERPRETATION: sums of independent and identically distributed random variables have a limiting distribution that is Normal, irrespective of the distribution of the variables.

6.3 MODES OF STOCHASTIC CONVERGENCE

6.3.1 CONVERGENCE IN DISTRIBUTION

Definition 6.3.1 Consider a sequence {Xn}, n = 1, 2, ..., of random variables and a corresponding sequence of cdfs, FX1, FX2, ..., so that for n = 1, 2, ..., FXn(x) = P[Xn ≤ x]. Suppose that there exists a cdf, FX, such that for all x at which FX is continuous,

lim_{n−→∞} FXn(x) = FX(x).

Then the sequence {Xn} converges in distribution to the random variable X with cdf FX. This is denoted Xn →d X, and FX is the limiting distribution.

Convergence of a sequence of mgfs also indicates convergence in distribution: if, for all t at which MX(t) is defined, we have MXn(t) −→ MX(t) as n −→ ∞, then Xn →d X.

Definition 6.3.2 The sequence {Xn} of random variables converges in distribution to the constant c if the limiting distribution of Xn is degenerate at c; that is, Xn →d X with P[X = c] = 1, so that

FX(x) = 0 for x < c, and FX(x) = 1 for x ≥ c.

This special type of convergence in distribution occurs when the limiting distribution is discrete, with the probability mass function non-zero only at a single value: if the limiting random variable is X, then fX(x) = 1 for x = c, and zero otherwise.

Theorem 6.3.1 The sequence of random variables {Xn} converges in distribution to c if and only if, for all ε > 0,

lim_{n−→∞} P[|Xn − c| < ε] = 1.

This theorem indicates that convergence in distribution to a constant c occurs if and only if the probability becomes increasingly concentrated around c as n −→ ∞.

6.3.2 CONVERGENCE IN PROBABILITY

Definition 6.3.3 CONVERGENCE IN PROBABILITY TO A CONSTANT

The sequence of random variables {Xn} converges in probability to the constant c, denoted Xn →P c, if for all ε > 0

lim_{n−→∞} P[|Xn − c| < ε] = 1, or, equivalently, lim_{n−→∞} P[|Xn − c| ≥ ε] = 0,

that is, if the limiting distribution of Xn is degenerate at c.

Interpretation: convergence in probability to a constant is precisely equivalent to convergence in distribution to a constant.

A very useful result is Slutsky's Theorem, which states that if Xn →d X and Yn →P c, where c is a finite constant, then: (i) Xn + Yn →d X + c; (ii) XnYn →d cX; (iii) Xn/Yn →d X/c, if c ≠ 0.

Theorem 6.3.2 WEAK LAW OF LARGE NUMBERS

Suppose that {Xn} is a sequence of i.i.d. random variables with expectation μ and variance σ². Let Yn be defined by

Yn = (1/n) Σ_{i=1}^n Xi.

Then, for all ε > 0,

lim_{n−→∞} P[|Yn − μ| < ε] = 1,

that is, Yn →P μ, and thus the mean of X1, ..., Xn converges in probability to μ.

Proof. Using the properties of expectation, it can be shown that Yn has expectation μ and variance σ²/n, and hence by the Chebychev Inequality,

P[|Yn − μ| ≥ ε] ≤ σ²/(nε²) −→ 0,

as n −→ ∞, for all ε > 0. Hence P[|Yn − μ| < ε] −→ 1 as n −→ ∞, and Yn →P μ.

Definition 6.3.4 CONVERGENCE TO A RANDOM VARIABLE

The sequence of random variables {Xn} converges in probability to the random variable X, denoted Xn →P X, if, for all ε > 0,

lim_{n−→∞} P[|Xn − X| < ε] = 1, or, equivalently, lim_{n−→∞} P[|Xn − X| ≥ ε] = 0.
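The Weak Law of Large Numbers above can be illustrated by simulation: for i.i.d. Uniform(0, 1) variables, μ = 1/2, and the exceedance probability P[|Yn − μ| ≥ ε] shrinks as n grows. A sketch (the values of n, ε and the replication count are arbitrary illustrative choices):

```python
import random

# Estimate P[|Y_n - mu| >= eps] for the sample mean Y_n of n i.i.d.
# Uniform(0,1) variables, at a small and a large n.
random.seed(3)
mu, eps, reps = 0.5, 0.05, 2000

def exceed_prob(n):
    """Monte Carlo estimate of P[|Y_n - mu| >= eps]."""
    count = 0
    for _ in range(reps):
        y_n = sum(random.random() for _ in range(n)) / n
        if abs(y_n - mu) >= eps:
            count += 1
    return count / reps

p_small_n, p_large_n = exceed_prob(10), exceed_prob(1000)
print(p_small_n, p_large_n)   # the exceedance probability decreases with n
```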
6.3.3 CONVERGENCE IN QUADRATIC MEAN

Definition 6.3.5 CONVERGENCE IN QUADRATIC MEAN

The sequence of random variables {Xn} converges in quadratic mean (also called L2 convergence) to the random variable X, denoted Xn →qm X, if

E(Xn − X)² → 0, as n −→ ∞.

Theorem 6.3.3 For the sequence {Xn} of random variables,

(a) convergence in quadratic mean to a random variable implies convergence in probability: Xn →qm X =⇒ Xn →P X;

(b) convergence in probability to a random variable implies convergence in distribution: Xn →P X =⇒ Xn →d X.

Proof. The proof of (a) is simple. Suppose that Xn →qm X. Fix ε > 0. Then, by Markov's inequality,

P(|Xn − X| > ε) = P((Xn − X)² > ε²) ≤ E(Xn − X)²/ε² −→ 0.

Proof of (b). Fix ε > 0 and let x be a continuity point of FX. Then

FXn(x) = P(Xn ≤ x, X ≤ x + ε) + P(Xn ≤ x, X > x + ε) ≤ P(X ≤ x + ε) + P(|Xn − X| > ε) = FX(x + ε) + P(|Xn − X| > ε).

Also,

FX(x − ε) = P(X ≤ x − ε) = P(X ≤ x − ε, Xn ≤ x) + P(X ≤ x − ε, Xn > x) ≤ FXn(x) + P(|Xn − X| > ε).

Hence

FX(x − ε) − P(|Xn − X| > ε) ≤ FXn(x) ≤ FX(x + ε) + P(|Xn − X| > ε).

Take the limit as n −→ ∞ to conclude that

FX(x − ε) ≤ lim inf_{n−→∞} FXn(x) ≤ lim sup_{n−→∞} FXn(x) ≤ FX(x + ε).

This holds for all ε > 0. Take the limit as ε −→ 0 and use the fact that FX is continuous at x to conclude that lim_{n−→∞} FXn(x) = FX(x).

Note that the reverse implications do not hold: convergence in probability does not imply convergence in quadratic mean, and convergence in distribution does not imply convergence in probability.

CHAPTER 7 STATISTICAL ANALYSIS

7.1 STATISTICAL SUMMARIES

Definition 7.1.1 A collection of independent, identically distributed random variables X1, ..., Xn, each of which has distribution defined by cdf FX (or mass/density function fX), is a random sample of size n from FX (or fX).
Definition 7.1.2 A function, T, of a random sample X1, ..., Xn, that is, T = t(X1, ..., Xn), that depends only on X1, ..., Xn, is a statistic. A statistic is a random variable. For example, the sample mean

X̄ = (X1 + X2 + ... + Xn)/n

is a statistic. A statistic T need not necessarily be constructed from a random sample (the random variables need not be independent, identically distributed), but that is the case encountered most often. In many circumstances it is necessary to consider statistics constructed from a collection of independent, but not identically distributed, random variables.

7.2 SAMPLING DISTRIBUTIONS

Definition 7.2.1 If X1, ..., Xn is a random sample from FX, say, and T = t(X1, ..., Xn) is a statistic, then FT (or fT), the cdf (or mass/density function) of the random variable T, is the sampling distribution of T. This notion extends immediately to the case of a statistic T constructed from a general collection of random variables X1, ..., Xn.

EXAMPLE: If X1, ..., Xn are independent random variables, with Xi ∼ N(μi, σ²i) for i = 1, ..., n, and a1, ..., an are constants, consider the distribution of the random variable Y defined by

Y = Σ_{i=1}^n ai Xi.

Using standard mgf results, the distribution of Y is derived to be normal with parameters

μY = Σ_{i=1}^n ai μi, σ²Y = Σ_{i=1}^n a²i σ²i.

Now consider the special case of this result when X1, ..., Xn are independent, identically distributed with μi = μ and σ²i = σ², and where ai = 1/n for i = 1, ..., n. Then

Y = (1/n) Σ_{i=1}^n Xi = X̄ ∼ N(μ, σ²/n).

Definition 7.2.2 For a random sample X1, ..., Xn from a probability distribution, the sample variance, S², is the statistic defined by

S² = (1/(n − 1)) Σ_{i=1}^n (Xi − X̄)².

Theorem 7.2.1 SAMPLING DISTRIBUTIONS FOR NORMAL SAMPLES

If X1, ..., Xn is a random sample from a normal distribution, say Xi ∼ N(μ, σ²), then:

(a) X̄ is independent of Xi − X̄, i = 1, ..., n;

(b) X̄ and S² are independent random variables;

(c) the random variable

(n − 1)S²/σ² = Σ_{i=1}^n (Xi − X̄)²/σ²

has a chi-squared distribution with n − 1 degrees of freedom.

Proof. Omitted here.

Theorem 7.2.2 Suppose that X1, ..., Xn is a random sample from a normal distribution, say Xi ∼ N(μ, σ²). Then the random variable

T = (X̄ − μ)/(S/√n)

has a Student-t distribution with n − 1 degrees of freedom.

Proof. Consider the random variables

Z = √n(X̄ − μ)/σ ∼ N(0, 1), V = (n − 1)S²/σ² ∼ χ²_{n−1},

and

T = Z/√(V/(n − 1)),

and use the properties of the normal distribution and related random variables (NOTE 6, following Definition 4.1.5). Also, see EXERCISES 5, Q4 (b).

7.3 HYPOTHESIS TESTING

7.3.1 TESTING FOR NORMAL SAMPLES - THE Z-TEST

We concentrate initially on random data samples that we can assume to have a normal distribution, and utilize the theorems from the previous section. We will look at two situations, namely one sample and two sample experiments. So, we suppose that X1, ..., Xn ∼ N(μ, σ²) (one sample), and X1, ..., Xn ∼ N(μX, σ²X), Y1, ..., Yn ∼ N(μY, σ²Y) (two sample); in the latter case we assume also independence of the two samples.

• ONE SAMPLE Possible tests of interest are: μ = μ0, σ = σ0, for some specified constants μ0 and σ0.

• TWO SAMPLE Possible tests of interest are: μX = μY, σX = σY.

Recall from Theorem 7.2.1 that, if X1, ..., Xn ∼ N(μ, σ²) are the i.i.d. outcome random variables of n experimental trials, then

X̄ ∼ N(μ, σ²/n) and (n − 1)S²/σ² ∼ χ²_{n−1},

with X̄ and S² statistically independent. Suppose we want to test the hypothesis that μ = μ0, for some specified constant μ0 (for example, μ0 = 20.0), is a plausible model.
More specifically, we want to test H0: μ = μ0, the NULL hypothesis, against H1: μ ≠ μ0, the ALTERNATIVE hypothesis. So, we want to test whether H0 is true, or whether H1 is true. In the case of a Normal sample, the distribution of X̄ is Normal, and

X̄ ∼ N(μ, σ²/n) =⇒ Z = (X̄ − μ)/(σ/√n) ∼ N(0, 1),

where Z is a random variable. Now, when we have observed the data sample, we can calculate x̄, and therefore we have a way of testing whether μ = μ0 is a plausible model: we calculate x̄ from x1, ..., xn, and then calculate

z = (x̄ − μ0)/(σ/√n).

If H0 is true, and μ = μ0, then the observed z should be an observation from an N(0, 1) distribution (as Z ∼ N(0, 1)); that is, it should be near zero with high probability. In fact, z should lie between −1.96 and 1.96 with probability 1 − α = 0.95, say, as

P[−1.96 ≤ Z < 1.96] = Φ(1.96) − Φ(−1.96) = 0.975 − 0.025 = 0.95.

If we observe z to be outside this range, then there is evidence that H0 is not true. So, basically, if we observe an extreme value of z, either H0 is true but we have observed a rare event, or we prefer to disbelieve H0 and conclude that the data contain evidence against H0.

Notice the asymmetry between H0 and H1. The null hypothesis is 'conservative', reflecting perhaps a current state of belief, and we are testing whether the data are consistent with that hypothesis, only rejecting H0 in favour of the alternative H1 if the evidence is clear, i.e. when the data represent a rare event under H0.

Figure 7.1: CRITICAL REGION IN A Z-TEST (taken from Schaum's ELEMENTS OF STATISTICS II, Bernstein & Bernstein).

As an alternative approach, we could calculate the probability p of observing a z value that is more extreme than the z we did observe; this probability is given by

p = 2Φ(z) if z < 0; p = 2(1 − Φ(z)) if z ≥ 0.

If this p is very small, say p ≤ α = 0.05, then again there is evidence that H0 is not true. This approach is called significance testing.
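The z statistic and two-sided p-value above can be computed directly, using Φ(x) = (1 + erf(x/√2))/2. A minimal sketch; the sample summary values (x̄, μ0, σ, n) are made-up illustrative numbers, not data from the notes:

```python
from math import erf, sqrt

# One-sample Z-test: z = (xbar - mu0)/(sigma/sqrt(n)) and the
# two-sided p-value from the piecewise formula above.
def phi(x):
    """Standard normal cdf via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

xbar, mu0, sigma, n = 20.9, 20.0, 2.0, 25   # assumed example values
z = (xbar - mu0) / (sigma / sqrt(n))          # observed test statistic
p = 2 * phi(z) if z < 0 else 2 * (1 - phi(z)) # two-sided p-value

reject = abs(z) > 1.96                        # alpha = 0.05 critical region
print(z, p, reject)
```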
In summary, we need to assess whether z is a surprising observation from an N(0, 1) distribution; if it is, then we reject H0. Figure 7.1 depicts the critical region in a Z-test.

7.3.2 HYPOTHESIS TESTING TERMINOLOGY

There are five crucial components to a hypothesis test, namely:

• the TEST STATISTIC;

• the NULL DISTRIBUTION of the statistic;

• the SIGNIFICANCE LEVEL of the test, usually denoted by α;

• the P-VALUE, denoted p;

• the CRITICAL VALUE(S) of the test.

In the Normal example given above, we have that:

• z is the test statistic;

• the distribution of the random variable Z if H0 is true is the null distribution;

• α = 0.05 is the significance level of the test (choosing α = 0.01 gives a "stronger" test);

• p is the p-value of the test statistic under the null distribution;

• the solution CR of Φ(CR) = 1 − α/2 gives the critical values of the test, ±CR. These critical values define the boundary of a critical region: if the value z is in the critical region, we reject H0.

7.3.3 THE t-TEST

In practice, we will often want to test hypotheses about μ when σ is unknown. We cannot perform the Z-test, as this requires knowledge of σ to calculate the z statistic. Recall that we know the sampling distributions of X̄ and S², and that the two estimators are statistically independent. Now, from the properties of the Normal distribution, if we have independent random variables Z ∼ N(0, 1) and Y ∼ χ²_ν, then we know that the random variable T defined by

T = Z/√(Y/ν)

has a Student-t distribution with ν degrees of freedom. Using this result, and recalling the sampling distributions of X̄ and S², we see that

T = [(X̄ − μ)/(σ/√n)] / √[((n − 1)S²/σ²)/(n − 1)] = (X̄ − μ)/(S/√n) ∼ t_{n−1}:

T has a Student-t distribution with n − 1 degrees of freedom, denoted St(n − 1), which does not depend on σ².
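The t statistic just derived is straightforward to compute from a sample. A sketch with a small made-up data set (the data values and μ0 are illustrative assumptions, not from the notes):

```python
from math import sqrt
from statistics import mean, stdev

# One-sample t statistic: t = (xbar - mu0)/(s/sqrt(n)), where s^2 is the
# sample variance with divisor n - 1, matching Definition 7.2.2.
data = [19.8, 21.2, 20.5, 22.1, 20.9, 21.7, 20.2, 21.0]   # assumed sample
mu0 = 20.0                                                 # assumed null value

n = len(data)
xbar = mean(data)    # sample mean
s = stdev(data)      # sample standard deviation, sqrt(S^2)
t = (xbar - mu0) / (s / sqrt(n))
print(t)   # compare against St(n-1) critical values, e.g. +/-2.365 for n = 8
```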
Thus we can repeat the procedure used in the σ known case, but use the sampling distribution of T rather than that of Z to assess whether the value of the test statistic is "surprising" or not. Specifically, we calculate the observed value

t = (x̄ − μ0)/(s/√n)

and find the critical values for an α = 0.05 test by finding the ordinates corresponding to the 0.025 and 0.975 quantiles of a Student-t distribution, St(n − 1) (rather than an N(0, 1) distribution).

7.3.4 TEST FOR σ

The Z-test and t-test are both tests for the parameter μ. To perform a test about σ, say

H0 : σ = σ0,    H1 : σ ≠ σ0,

we construct a test based on the estimator of the variance, S². In particular, we saw from Theorem 7.2.1 that the random variable Q, defined by

Q = (n − 1)S²/σ² ∼ χ²_{n−1},

if the data have an N(μ, σ²) distribution. Hence if we define the test statistic value q by

q = (n − 1)s²/σ0²

then we can compare q with the critical values derived from a χ²_{n−1} distribution; we look for the 0.025 and 0.975 quantiles. Note that the chi-squared distribution is not symmetric, so we need two distinct critical values.

7.3.5 TWO SAMPLE TESTS

It is straightforward to extend the ideas from the previous sections to two sample situations, where we wish to compare the distributions underlying two data samples. Typically, we consider sample one, x1, ..., x_{nX}, from a N(μX, σX²) distribution, and sample two, y1, ..., y_{nY}, independently from a N(μY, σY²) distribution, and test the equality of the parameters in the two models. Suppose that the sample mean and sample variance for samples one and two are denoted (x̄, sX²) and (ȳ, sY²) respectively.

1. First, consider the hypothesis testing problem defined by

H0 : μX = μY,    H1 : μX ≠ μY,

when σX = σY = σ is known, so the two samples come from normal distributions with the same, known, variance.
Now, from the sampling distributions theorem we have, under H0,

X̄ ∼ N(μX, σ²/nX),    Ȳ ∼ N(μY, σ²/nY)    =⇒    X̄ − Ȳ ∼ N(0, σ²/nX + σ²/nY),

since X̄ and Ȳ are independent. Hence, by the properties of normal random variables,

Z = (X̄ − Ȳ) / [σ √(1/nX + 1/nY)] ∼ N(0, 1)

if H0 is true, giving us a test statistic z defined by

z = (x̄ − ȳ) / [σ √(1/nX + 1/nY)],

which we can compare with the standard normal distribution. If z is a surprising observation from N(0, 1), and lies in the critical region, then we reject H0. This procedure is the Two Sample Z-Test.

2. If we can assume that σX = σY, but the common value, σ, say, is unknown, we parallel the one sample t-test by replacing σ by an estimate in the two sample Z-test. First, we obtain an estimate of σ² by "pooling" the two samples; our estimate is the pooled estimate, sP², defined by

sP² = [(nX − 1)sX² + (nY − 1)sY²] / (nX + nY − 2),

which we then use to form the test statistic t defined by

t = (x̄ − ȳ) / [sP √(1/nX + 1/nY)].

It can be shown that if H0 is true then t should be an observation from a Student-t distribution with nX + nY − 2 degrees of freedom. Hence we can derive the critical values from the tables of the Student-t distribution.

3. If σX ≠ σY, but both parameters are known, we can use a similar approach to the one above to derive a test statistic z defined by

z = (x̄ − ȳ) / √(σX²/nX + σY²/nY),

which has an N(0, 1) distribution if H0 is true.

4. If σX ≠ σY, but both parameters are unknown, we can use a similar approach to the one above to derive a test statistic t defined by

t = (x̄ − ȳ) / √(sX²/nX + sY²/nY).

The distribution of this statistic when H0 is true is not analytically available, but can be adequately approximated by a Student(m) distribution, where

m = (wX + wY)² / [wX²/(nX − 1) + wY²/(nY − 1)],

with wX = sX²/nX and wY = sY²/nY.

Clearly, the choice of test we use depends on whether σX = σY or not.
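Case 4 above involves only arithmetic on the sample summaries; a minimal sketch, using hypothetical summary statistics:

```python
from math import sqrt

def welch_statistic(xbar, s2x, nx, ybar, s2y, ny):
    """Two-sample t statistic and approximate degrees of freedom m when
    sigma_X != sigma_Y and both are unknown (case 4 above)."""
    wx, wy = s2x / nx, s2y / ny
    t = (xbar - ybar) / sqrt(wx + wy)
    m = (wx + wy) ** 2 / (wx ** 2 / (nx - 1) + wy ** 2 / (ny - 1))
    return t, m

# Hypothetical summaries: xbar = 5.0, s2x = 4.0, nx = 10;
# ybar = 4.0, s2y = 9.0, ny = 20.
t, m = welch_statistic(5.0, 4.0, 10, 4.0, 9.0, 20)
# t is then compared with the quantiles of Student(m), with m about 25.4 here.
```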
We may test this hypothesis formally; to test

H0 : σX = σY,    H1 : σX ≠ σY,

we compute the test statistic

Q = SX² / SY²,

which has as its null distribution the F distribution with (nX − 1, nY − 1) degrees of freedom. This distribution can be denoted F(nX − 1, nY − 1), and its quantiles are tabulated. Hence we can look up the 0.025 and 0.975 quantiles of this distribution (the F distribution is not symmetric), and hence define the critical region. Informally, if the test statistic value q is very small or very large, then it is a surprising observation from the F distribution, and hence we reject the hypothesis of equal variances.

HYPOTHESIS TESTING SUMMARY

In general, to test a hypothesis H0, we consider a statistic calculated from the sample data. We derive mathematically the probability distribution of the statistic, considered as a random variable, when the hypothesis H0 is true, and compare the actual observed value of the statistic computed from the data sample with this hypothetical probability distribution. We ask the question "Is the value a likely observation from this probability distribution?" If the answer is "No", then we reject the hypothesis; otherwise we accept it.

7.4 POINT ESTIMATION

Definition 7.4.1 Let X1, ..., Xn be a random sample from a distribution with mass/density function fX that depends on a (possibly vector) parameter θ. Then fX1(x1) = fX(x1; θ), so that

fX1,...,Xn(x1, ..., xn) = ∏_{i=1}^{n} fX(xi; θ).

A statistic T = t(X1, ..., Xn) that is used to represent or estimate a function τ(θ) of θ, based on an observed sample x1, ..., xn of the random variables, is an estimator, and t = t(x1, ..., xn) is an estimate, τ̂(θ), of τ(θ). The estimator T = t(X1, ..., Xn) is said to be unbiased for τ(θ) if E(T) = τ(θ); otherwise T is biased.
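The role of the n − 1 divisor in S² illustrates this definition. The following sketch is a toy example, not from the notes: it enumerates every equally likely sample of size 2 from the uniform distribution on {0, 1, 2}, for which σ² = 2/3, and checks that the n − 1 divisor gives E(S²) = σ² exactly, while dividing by n does not.

```python
from itertools import product

population = [0, 1, 2]      # uniform, so mu = 1 and sigma^2 = 2/3
n = 2

def s2(sample, divisor):
    """Sum of squared deviations about the sample mean, over a given divisor."""
    m = sum(sample) / len(sample)
    return sum((x - m) ** 2 for x in sample) / divisor

samples = list(product(population, repeat=n))   # all 9 equally likely samples
e_unbiased = sum(s2(s, n - 1) for s in samples) / len(samples)
e_biased = sum(s2(s, n) for s in samples) / len(samples)
# e_unbiased equals 2/3 = sigma^2 exactly; e_biased equals 1/3, so the
# divisor-n estimator is biased.
```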
7.4.1 ESTIMATION TECHNIQUES I: METHOD OF MOMENTS

Suppose that X1, ..., Xn is a random sample from a probability distribution with mass/density function fX that depends on a vector parameter θ of dimension k, and suppose that a sample x1, ..., xn has been observed. Let the rth moment of fX be denoted μr, and let the rth sample moment, denoted mr, be defined for r = 1, 2, ... by

mr = (1/n) ∑_{i=1}^{n} xi^r.

Then mr is an estimate of μr, and

Mr = (1/n) ∑_{i=1}^{n} Xi^r

is an estimator of μr.

PROCEDURE: The method of moments technique of estimation involves matching the theoretical moments μr ≡ μr(θ) to the sample moments mr, r = 1, 2, ..., l, for suitable l, and solving for θ. In most situations taking l = k, the dimension of θ, suffices: we obtain k equations in the k elements of vector θ, which may be solved simultaneously to find the parameter estimates. We may, however, need l > k. Intuitively, and recalling the Weak Law of Large Numbers, it is reasonable to suppose that there is a close relationship between the theoretical properties of a probability distribution and estimates derived from a large sample. For example, we know that, for large n, the sample mean converges in probability to the theoretical expectation.

7.4.2 ESTIMATION TECHNIQUES II: MAXIMUM LIKELIHOOD

Definition 7.4.2 Let random variables X1, ..., Xn have joint mass or density function, denoted fX1,...,Xn, that depends on a vector parameter θ = (θ1, ..., θk). Then the joint mass/density function, considered as a function of θ for the (fixed) observed values x1, ..., xn of the variables, is the likelihood function, L(θ):

L(θ) = fX1,...,Xn(x1, ..., xn; θ).

If X1, ..., Xn represents a random sample from a distribution with mass/density function fX, then

L(θ) = ∏_{i=1}^{n} fX(xi; θ).

Definition 7.4.3 Let L(θ) be the likelihood function derived from the joint mass/density function of random variables X1, ..., Xn, where θ ∈ Θ ⊆ R^k, say, and Θ is termed the parameter space.
Then, for a fixed set of observed values x1, ..., xn of the variables, the estimate of θ termed the maximum likelihood estimate (MLE) of θ, θ̂, is defined by

θ̂ = arg max_{θ∈Θ} L(θ).

That is, the maximum likelihood estimate is the value of θ for which L(θ) is maximized over the parameter space Θ.

DISCUSSION: The method of estimation involves finding the value of θ for which L(θ) is maximized. This is generally done by setting the first partial derivatives of L(θ) with respect to θj equal to zero, for j = 1, ..., k, and solving the resulting k simultaneous equations. But we must be alert to cases where the likelihood function L(θ) is not differentiable, or where the maximum occurs on the boundary of Θ! Typically, it is easier to obtain the MLE by maximizing the (natural) logarithm of L(θ): we maximize l(θ) = log L(θ), the log-likelihood.

THE FOUR STEP ESTIMATION PROCEDURE: Suppose a sample x1, ..., xn has been obtained from a probability model specified by mass or density function fX(x; θ), depending on parameter(s) θ lying in parameter space Θ. The maximum likelihood estimate is produced as follows:

1. Write down the likelihood function, L(θ).

2. Take the natural log of the likelihood, and collect terms involving θ.

3. Find the value of θ ∈ Θ, θ̂, for which log L(θ) is maximized, for example by differentiation. Note that, if the parameter space Θ is a bounded interval, then the maximum likelihood estimate may lie on the boundary of Θ. If the parameter is a k-vector, the maximization involves evaluation of partial derivatives.

4. Check that the estimate θ̂ obtained in STEP 3 truly corresponds to a maximum in the (log-)likelihood function by inspecting the second derivative of log L(θ) with respect to θ. In the single parameter case, if the second derivative of the log-likelihood is negative at θ = θ̂, then θ̂ is confirmed as the MLE of θ (other techniques may be used to verify that the likelihood is maximized at θ̂).
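As an illustration of the four steps, consider an Exponential(θ) sample, for which l(θ) = n log θ − θ ∑xi. The sketch below, with a hypothetical sample, carries out Steps 3 and 4 analytically and checks the answer numerically:

```python
from math import log

def exp_loglik(theta, xs):
    """Log-likelihood l(theta) = n log(theta) - theta * sum(xs) for an
    Exponential(theta) sample (Step 2 of the procedure)."""
    return len(xs) * log(theta) - theta * sum(xs)

def exp_mle(xs):
    """Step 3: dl/dtheta = n/theta - sum(xs) = 0 gives theta_hat = n/sum(xs).
    Step 4: d2l/dtheta2 = -n/theta^2 < 0 confirms a maximum."""
    return len(xs) / sum(xs)

xs = [0.5, 1.0, 1.5, 2.0]        # hypothetical observed sample
theta_hat = exp_mle(xs)          # 4 / 5 = 0.8
# the log-likelihood is indeed lower just either side of theta_hat:
assert exp_loglik(theta_hat, xs) > exp_loglik(0.7, xs)
assert exp_loglik(theta_hat, xs) > exp_loglik(0.9, xs)
```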
7.5 INTERVAL ESTIMATION

The techniques of the previous section provide just a single point estimate of the value of an unknown parameter. Instead, we can attempt to provide a set or interval of values which expresses our uncertainty over the unknown value.

Definition 7.5.1 Let X = (X1, ..., Xn) be a random sample from a distribution depending on an unknown scalar parameter θ. Let T1 = l1(X1, ..., Xn) and T2 = l2(X1, ..., Xn) be two statistics satisfying T1 ≤ T2 for which

P(T1 < θ < T2) = 1 − α,

where P denotes probability when X has the distribution specified by parameter value θ, whatever the true value of θ, and where 1 − α does not depend on θ. Then the random interval (T1, T2) is called a 1 − α confidence interval for θ, and T1 and T2 are called the lower and upper confidence limits, respectively. Given data x = (x1, ..., xn), the realised value of X, we calculate the interval (t1, t2), where t1 = l1(x1, ..., xn) and t2 = l2(x1, ..., xn).

NOTE: θ here is fixed (it has some true, fixed but unknown value in Θ), and it is T1, T2 that are random. The calculated interval (t1, t2) either does, or does not, contain the true value of θ. Under repeated sampling of X, the random interval (T1, T2) contains the true value a proportion 1 − α of the time. Typically, we will take α = 0.05 or α = 0.01, corresponding to a 95% or 99% confidence interval. If θ is a vector, then we use a confidence set, such as a sphere or ellipse, instead of an interval. So, C(X) is a 1 − α confidence set for θ if

P(θ ∈ C(X)) = 1 − α, for all possible θ.

The key problem is to develop methods for constructing a confidence interval. There are two general procedures.

7.5.1 PIVOTAL QUANTITY

Definition 7.5.2 Let X = (X1, ..., Xn) be a random sample from a distribution specified by (scalar) parameter θ. Let Q = q(X; θ) be a function of X and θ. If Q has a distribution that does not depend on θ, then Q is defined to be a pivotal quantity.
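For a Normal sample with σ known, (X̄ − μ)/(σ/√n) ∼ N(0, 1) is such a pivotal quantity (its distribution is free of μ); a sketch of the interval it yields, using hypothetical summary values:

```python
from math import sqrt
from statistics import NormalDist

def normal_ci(xbar, sigma, n, alpha=0.05):
    """1 - alpha confidence interval for mu based on the pivotal quantity
    (Xbar - mu) / (sigma / sqrt(n)) ~ N(0, 1), with sigma known."""
    z = NormalDist().inv_cdf(1 - alpha / 2)   # about 1.96 for alpha = 0.05
    half = z * sigma / sqrt(n)
    return xbar - half, xbar + half

# Hypothetical summaries: xbar = 10.0, sigma = 2.0, n = 25,
# giving a 95% interval of roughly (9.216, 10.784).
lo, hi = normal_ci(10.0, 2.0, 25)
```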
If Q = q(X; θ) is a pivotal quantity with a probability density function, then for any fixed 0 < α < 1 there will exist q1 and q2, depending on α, such that

P(q1 < Q < q2) = 1 − α.

Then, if for each possible sample value x = (x1, ..., xn),

q1 < q(x1, ..., xn; θ) < q2 if and only if l1(x1, ..., xn) < θ < l2(x1, ..., xn),

for functions l1 and l2 not depending on θ, then (T1, T2) is a 1 − α confidence interval for θ, where Ti = li(X1, ..., Xn), i = 1, 2.

7.5.2 INVERTING A TEST STATISTIC

A second method utilises a correspondence between hypothesis testing and interval estimation.

Definition 7.5.3 For each possible value θ0, let A(θ0) be the acceptance region of a test of H0 : θ = θ0 of significance level α, so that A(θ0) is the set of data values x such that H0 is accepted in a test of significance level α. For each x, define a set C(x) by

C(x) = {θ0 : x ∈ A(θ0)}.

Then the random set C(X) is a 1 − α confidence set.

Example. Let X = (X1, ..., Xn), with the Xi IID N(μ, σ²), with σ² known. Both the above procedures yield a 1 − α confidence interval for μ of the form

(X̄ − z_{α/2} σ/√n, X̄ + z_{α/2} σ/√n),

where Φ(z_{α/2}) = 1 − α/2. We know that

(X̄ − μ)/(σ/√n) ∼ N(0, 1),

and so is a pivotal quantity. We have

P(−z_{α/2} < (X̄ − μ)/(σ/√n) < z_{α/2}) = 1 − α.

Rearranging gives

P(X̄ − z_{α/2} σ/√n < μ < X̄ + z_{α/2} σ/√n) = 1 − α,

which defines a 1 − α confidence interval. If we test H0 : μ = μ0 against H1 : μ ≠ μ0, a reasonable test has rejection region of the form

{x : |x̄ − μ0| > z_{α/2} σ/√n}.

So, H0 is accepted for sample points with |x̄ − μ0| < z_{α/2} σ/√n, or,

x̄ − z_{α/2} σ/√n < μ0 < x̄ + z_{α/2} σ/√n.

The test is constructed to have size (significance level) α, so P(H0 is accepted | μ = μ0) = 1 − α. So we can write

P(X̄ − z_{α/2} σ/√n < μ0 < X̄ + z_{α/2} σ/√n | μ = μ0) = 1 − α.

This is true for every μ0, so

P(X̄ − z_{α/2} σ/√n < μ < X̄ + z_{α/2} σ/√n) = 1 − α

is true for every μ, so the confidence interval obtained by inverting the test is the same as that derived by the pivotal quantity method.
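The equivalence can be checked numerically; a small sketch (with hypothetical values x̄ = 10, σ = 1, n = 16) confirming that the level-0.05 Z-test accepts μ0 exactly when μ0 lies in the 95% interval:

```python
from math import sqrt
from statistics import NormalDist

Z = NormalDist().inv_cdf(0.975)   # z_{alpha/2} for alpha = 0.05

def accepts(xbar, mu0, sigma, n):
    """True when the level-0.05 Z-test of H0: mu = mu0 accepts."""
    return abs(xbar - mu0) < Z * sigma / sqrt(n)

def in_ci(xbar, mu0, sigma, n):
    """True when mu0 lies in the 95% interval xbar -/+ z * sigma/sqrt(n)."""
    half = Z * sigma / sqrt(n)
    return xbar - half < mu0 < xbar + half

# The two conditions agree for every mu0, illustrating the duality.
for mu0 in [9.0, 9.5, 10.0, 10.5, 11.0]:
    assert accepts(10.0, mu0, sigma=1.0, n=16) == in_ci(10.0, mu0, sigma=1.0, n=16)
```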