Multidimensional Normal Distribution
Definition: The p-dimensional random vector Z′ = (Z1, . . . , Zp) is called standard normal if the coordinates of Z are independent standard normal one-dimensional random variables, i.e.
Zi ∼ N(0, 1), i = 1, . . . , p.
Theorem 1. If Z ∈ Rp is a random vector with standard normal distribution, then the following statements are equivalent:
(i) The density of Z is given by
f(Z1, . . . , Zp) = (2π)^{−p/2} exp{−(1/2) Z′Z}.
(ii) Let Rp(u) = E(exp{u′Z}) be the moment generating function of Z; then
Rp(u) = exp{(1/2) u′u}.
(iii) For an arbitrary constant vector c ∈ Rp, c′Z is a one-dimensional random variable with normal distribution; E(c′Z) = 0, Var(c′Z) = c′c, i.e. c′Z ∼ N(0, c′c).
(iv) Let U be a p × p orthogonal matrix; then the distributions of Z and UZ are the same, and Z′Z has a χ2 distribution with p degrees of freedom.
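Statements (iii) and (iv) can be checked numerically. The sketch below is a Monte Carlo illustration, not part of the notes; the vector c, the sample size and the seed are arbitrary choices.

```python
import numpy as np

# Monte Carlo sketch of Theorem 1 (iii) and (iv).
rng = np.random.default_rng(0)
p, n = 3, 200_000
Z = rng.standard_normal((n, p))   # n independent draws of a standard normal vector in R^p

c = np.array([1.0, -2.0, 0.5])    # an arbitrary constant vector c, with c'c = 5.25
cZ = Z @ c                        # c'Z for each draw
print(cZ.mean(), cZ.var())        # close to E(c'Z) = 0 and Var(c'Z) = c'c = 5.25

q = (Z ** 2).sum(axis=1)          # Z'Z for each draw
print(q.mean())                   # chi-square with p degrees of freedom has mean p = 3
```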
Definition: The random vector X ∈ Rp has a normal distribution if it can be expressed as
X = AZ + µ,
where A is a p × ℓ constant matrix, Z ∈ Rℓ (ℓ ≤ p) is a random vector with standard normal distribution, and µ ∈ Rp is a constant vector.
Remarks:
1. EX = µ.
2. ΣX = E((X − E(X))(X − E(X))′) = E((AZ)(AZ)′) = E(AZZ′A′) = A Iℓ A′ = AA′,
where Iℓ denotes the ℓ × ℓ identity matrix, with 1-s in the main diagonal and 0-s elsewhere.
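These two remarks can be verified by simulation. The following is a minimal sketch; the matrix A, the vector µ, the sample size and the seed are arbitrary illustrative choices.

```python
import numpy as np

# Numerical sketch of the remarks: for X = AZ + mu, EX = mu and Sigma_X = AA'.
rng = np.random.default_rng(1)
p, l, n = 3, 2, 300_000
A = np.array([[1.0, 0.0],
              [2.0, 1.0],
              [0.0, -1.0]])       # an arbitrary p x l constant matrix
mu = np.array([1.0, -1.0, 0.5])

Z = rng.standard_normal((n, l))
X = Z @ A.T + mu                  # n draws of X = AZ + mu

print(X.mean(axis=0))             # approximately mu
print(np.cov(X, rowvar=False))    # approximately AA'
```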
Theorem 2. The multidimensional normal distribution is uniquely defined by its expected
value vector and its covariance matrix.
Proof. There is a one-to-one correspondence between the moment generating function and the distribution, and
E(exp(u′X)) = E(exp(u′AZ + u′µ)) = e^{u′µ} e^{(1/2) u′AA′u}.
The last equality holds since, by definition, E(exp(u′AZ)) is the moment generating function of Z evaluated at A′u.
Notation: A random vector X ∈ Rp with normal distribution, expected value EX = µ and covariance matrix ΣX is denoted
X ∼ Np(µ, ΣX).
Remark: The moment generating function of X is
Rp(u) = E(exp{u′X}) = e^{u′µ} e^{(1/2) u′ΣX u}.
Definition: Let X ∼ Np (µ, ΣX ), then
X = AZ + µ
is the canonical form of X if A has the following properties:
(i) the column vectors of A are orthogonal with positive norms,
(ii) the norms of the column vectors of A are non-increasing.
Theorem 3. Every normally distributed random vector X ∈ Rp has a canonical form.
Proof. Let X ∼ Np(µ, ΣX); then by definition there exists a p × ℓ matrix A such that X = AZ + µ, where Z ∈ Rℓ is a standard normal random vector. Consider the singular value decomposition of A: A = VDU′, where V and U are p × p and ℓ × ℓ orthogonal matrices and D is a p × ℓ diagonal matrix with non-increasing values s1 ≥ s2 ≥ . . . ≥ 0 in the diagonal. Thus
X = VD(U′Z) + µ.
By Theorem 1 (iv), U′Z has the same distribution as Z. It follows that
X = (VD)Z + µ = ÃZ + µ, where Ã = VD,
is a canonical representation of X: the columns of VD are orthogonal, with non-increasing norms s1 ≥ s2 ≥ . . .
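The construction in the proof can be carried out with numpy's SVD. The sketch below uses an arbitrary factor A; it checks that Ã = VD has orthogonal columns of non-increasing norm and leaves the covariance AA′ unchanged.

```python
import numpy as np

# Sketch of the proof of Theorem 3: canonical factor A_can = V D from the SVD of A.
A = np.array([[3.0, 1.0],
              [1.0, 2.0],
              [0.0, 1.0]])        # an arbitrary p x l factor (assumption)

# numpy.linalg.svd returns A = V @ diag(s) @ Ut with s non-increasing
V, s, Ut = np.linalg.svd(A, full_matrices=False)
A_can = V * s                     # column i is s_i * V[:, i]: orthogonal, norm s_i

# Same covariance, so X = A_can Z + mu has the same distribution as X = AZ + mu.
print(np.allclose(A_can @ A_can.T, A @ A.T))       # True
print(np.allclose(A_can.T @ A_can, np.diag(s**2))) # True: orthogonal columns
```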
Theorem 4. Let X ∼ Np(µX, ΣX) be a random vector. Then any linear transformation of X has a normal distribution.
Proof. Let Y = BX + µY. Then
Y = B(AZ + µX) + µY = (BA)Z + (BµX + µY),
which has the form required by the definition of the normal distribution.
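In particular, Y has factor BA, expected value BµX + µY and covariance (BA)(BA)′ = BΣXB′. A minimal sketch, with arbitrary choices of A, B and the mean vectors:

```python
import numpy as np

# Sketch of Theorem 4: Y = BX + mu_Y is normal with factor BA,
# mean B mu_X + mu_Y and covariance (BA)(BA)' = B Sigma_X B'.
A = np.array([[1.0, 0.0],
              [1.0, 1.0]])        # X = AZ + mu_X, so Sigma_X = AA'
mu_X = np.array([0.5, -0.5])
B = np.array([[2.0, 1.0],
              [0.0, 3.0],
              [1.0, -1.0]])       # a linear map from R^2 to R^3
mu_Y = np.array([0.0, 1.0, 2.0])

Sigma_X = A @ A.T
Sigma_Y = (B @ A) @ (B @ A).T     # covariance of Y via its factor BA
print(np.allclose(Sigma_Y, B @ Sigma_X @ B.T))  # True
print(B @ mu_X + mu_Y)            # expected value of Y
```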