Linear Algebra & Properties of the Covariance Matrix

October 3rd, 2012
Estimation of r̄ and C
Let $r_n^1, r_n^2, \dots, r_n^T$ be the historical return rates on the $n$-th asset,
\[
r_n = \begin{pmatrix} r_n^1 \\ r_n^2 \\ \vdots \\ r_n^T \end{pmatrix},
\qquad n = 1, 2, \dots, N\,.
\]
The expected return is approximated by
\[
\hat{\bar r}_n = \frac{1}{T} \sum_{t=1}^{T} r_n^t
\]
and the covariance is approximated by
\[
\hat c_{mn} = \frac{1}{T-1} \sum_{t=1}^{T} (r_n^t - \hat{\bar r}_n)(r_m^t - \hat{\bar r}_m)\,,
\]
or in matrix/vector notation
\[
\hat C = \frac{1}{T-1} \sum_{t=1}^{T} (r^t - \hat{\bar r})(r^t - \hat{\bar r})^\top\,,
\]
where each summand is an outer product.
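A minimal numpy sketch of these estimators (the simulated return data, the sizes T and N, and the variable names below are illustrative assumptions, not part of the lecture):

```python
# Sketch: estimating r-bar-hat and C-hat from a T x N matrix of historical returns,
# using the formulas above. The data here are simulated purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
T, N = 250, 4                              # T observations of N assets (hypothetical sizes)
R = rng.normal(0.001, 0.02, size=(T, N))   # R[t, n] = r_n^t, simulated returns

r_bar_hat = R.mean(axis=0)                 # (1/T) * sum_t r^t, one entry per asset

# C-hat = 1/(T-1) * sum_t (r^t - r_bar_hat)(r^t - r_bar_hat)^T, a sum of outer products
D = R - r_bar_hat                          # de-meaned returns, row t is (r^t - r_bar_hat)^T
C_hat = D.T @ D / (T - 1)

# Same estimator via numpy's built-in (rows = observations when rowvar=False)
assert np.allclose(C_hat, np.cov(R, rowvar=False))
```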
Importance of C
The expected return r̄ is often estimated exogenously (i.e., not statistically estimated). But the statistical estimation of C is an important topic, is used in practice, and is the backbone of many finance strategies.
A major assumption of Markowitz theory is that, for C to be a covariance matrix, it must be symmetric positive definite (SPD).
Real Matrices/Vectors
Let's focus on real matrices, and only use complex numbers if necessary. A vector $x \in \mathbb{R}^N$ and a matrix $A \in \mathbb{R}^{M \times N}$ are
\[
x = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_N \end{pmatrix}
\qquad\text{and}\qquad
A = \begin{pmatrix}
a_{11} & a_{12} & \dots & a_{1N} \\
a_{21} & a_{22} & \dots & a_{2N} \\
\vdots &        & \ddots & \vdots \\
a_{M1} & a_{M2} & \dots & a_{MN}
\end{pmatrix}.
\]
Matrix/Vector Multiplication
The matrix A times a vector x is a new vector
\[
Ax = \begin{pmatrix}
a_{11} & a_{12} & \dots & a_{1N} \\
a_{21} & a_{22} & \dots & a_{2N} \\
\vdots &        & \ddots & \vdots \\
a_{M1} & a_{M2} & \dots & a_{MN}
\end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_N \end{pmatrix}
=
\begin{pmatrix}
a_{11}x_1 + a_{12}x_2 + \dots + a_{1N}x_N \\
a_{21}x_1 + a_{22}x_2 + \dots + a_{2N}x_N \\
\vdots \\
a_{M1}x_1 + a_{M2}x_2 + \dots + a_{MN}x_N
\end{pmatrix}
\in \mathbb{R}^M.
\]
Matrix/matrix multiplication is an extension of this.
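For instance, a quick numpy illustration of this (the matrix A and vector x are arbitrary examples):

```python
# A small illustration of Ax and of matrix/matrix multiplication.
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])          # A in R^{2x3}
x = np.array([1.0, 0.0, -1.0])           # x in R^3

Ax = A @ x                               # entries are a_i1*x_1 + ... + a_iN*x_N
print(Ax)                                # [-2. -2.], a vector in R^2

B = np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 1.0]])
print(A @ B)                             # matrix/matrix multiplication extends this
```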
Matrix/Vector transpose
Their transposes are
\[
x^\top = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_N \end{pmatrix}^{\!\top} = (x_1, x_2, \dots, x_N)
\]
and
\[
A^\top = \begin{pmatrix}
a_{11} & a_{12} & \dots & a_{1N} \\
a_{21} & a_{22} & \dots & a_{2N} \\
\vdots &        & \ddots & \vdots \\
a_{M1} & a_{M2} & \dots & a_{MN}
\end{pmatrix}^{\!\top}
=
\begin{pmatrix}
a_{11} & a_{21} & \dots & a_{M1} \\
a_{12} & a_{22} & \dots & a_{M2} \\
\vdots &        & \ddots & \vdots \\
a_{1N} & a_{2N} & \dots & a_{MN}
\end{pmatrix}.
\]
Note that $(Ax)^\top = x^\top A^\top$.
Vector Inner/outer Product
The inner product of a vector $x \in \mathbb{R}^N$ with another vector $y \in \mathbb{R}^N$ is
\[
x^\top y = y^\top x = x_1 y_1 + x_2 y_2 + \dots + x_N y_N\,.
\]
There is also the outer product,
\[
xy^\top = \begin{pmatrix}
x_1 y_1 & x_1 y_2 & \dots & x_1 y_N \\
x_2 y_1 & x_2 y_2 & \dots & x_2 y_N \\
\vdots  &         & \ddots & \vdots \\
x_N y_1 & x_N y_2 & \dots & x_N y_N
\end{pmatrix}
\neq yx^\top.
\]
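A short numpy check of both products (the vectors are arbitrary examples):

```python
# Inner product (a scalar) versus outer product (an N x N matrix).
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])

inner = x @ y                    # x^T y = 1*4 + 2*5 + 3*6 = 32, a scalar
outer = np.outer(x, y)           # xy^T, with (i, j) entry x_i * y_j

print(inner)
print(outer)
print(np.allclose(outer, np.outer(y, x)))   # False: xy^T != yx^T in general
```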
Matrix/Vector Norm
The norm on the vector space $\mathbb{R}^N$ is $\|\cdot\|$, and for any $x \in \mathbb{R}^N$ we have
\[
\|x\| = \sqrt{x^\top x}\,.
\]
Properties of the norm:
$\|x\| \ge 0$ for any x,
$\|x\| = 0$ iff $x = 0$,
$\|x + y\| \le \|x\| + \|y\|$ (triangle inequality),
$|x^\top y| \le \|x\|\,\|y\|$ with equality iff $x = ay$ for some $a \in \mathbb{R}$ (Cauchy-Schwarz),
$\|x + y\|^2 = \|x\|^2 + \|y\|^2$ if $x^\top y = 0$ (Pythagorean theorem).
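These properties can be spot-checked numerically; a small sketch with arbitrary random vectors:

```python
# Numerically spot-checking the norm properties above.
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=5)
y = rng.normal(size=5)

norm = lambda v: np.sqrt(v @ v)                    # ||v|| = sqrt(v^T v)

print(norm(x + y) <= norm(x) + norm(y))            # triangle inequality
print(abs(x @ y) <= norm(x) * norm(y))             # Cauchy-Schwarz

# Pythagorean theorem: make y orthogonal to x, then ||x + y||^2 = ||x||^2 + ||y||^2
y_perp = y - (x @ y) / (x @ x) * x                 # project out the x-component
print(np.isclose(norm(x + y_perp)**2, norm(x)**2 + norm(y_perp)**2))
```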
Linear Independence & Orthogonality
Two vectors x, y are linearly independent if there is no scalar constant $a \in \mathbb{R}$ s.t.
\[
y = ax\,.
\]
These two vectors are orthogonal if
\[
x^\top y = 0\,.
\]
Clearly, orthogonality ⇒ linear independence.
Span
A set of vectors $x_1, x_2, \dots, x_n$ is said to span a subspace $V \subset \mathbb{R}^N$ if for any $y \in V$ there are constants $a_1, a_2, \dots, a_n$ such that
\[
y = a_1 x_1 + a_2 x_2 + \dots + a_n x_n\,.
\]
We write $\mathrm{span}(x_1, \dots, x_n) = V$. In particular, a set $x_1, x_2, \dots, x_N$ will span $\mathbb{R}^N$ iff they are linearly independent:
\[
\mathrm{span}(x_1, x_2, \dots, x_N) = \mathbb{R}^N \iff x_1, x_2, \dots, x_N \text{ are linearly independent.}
\]
If so, then $x_1, x_2, \dots, x_N$ are a basis for $\mathbb{R}^N$.
Orthogonal Basis
Definition
A basis of $\mathbb{R}^N$ is orthogonal if all its elements are mutually orthogonal,
\[
\mathrm{span}(x_1, x_2, \dots, x_N) = \mathbb{R}^N
\quad\text{and}\quad
x_i^\top x_j = 0 \ \ \forall\, i \neq j\,.
\]
Invertibility
Definition
A square matrix $A \in \mathbb{R}^{N \times N}$ is invertible if for any $b \in \mathbb{R}^N$ there exists x s.t.
\[
Ax = b\,.
\]
If so, then $x = A^{-1} b$.
A is invertible if any of the following hold:
A has rows that are linearly independent,
A has columns that are linearly independent,
$\det(A) \neq 0$,
there is no non-zero vector x s.t. $Ax = 0$.
In fact, these 4 statements are equivalent.
Eigenvalues
Let A be an $N \times N$ matrix. A scalar $\lambda \in \mathbb{C}$ is an eigenvalue of A if
\[
Ax = \lambda x
\]
for some nonzero vector $x = \mathrm{re}(x) + \sqrt{-1}\,\mathrm{im}(x) \in \mathbb{C}^N$. If so, x is an eigenvector.
By the 4th statement of the previous slide, $A^{-1}$ exists ⇔ zero is not an eigenvalue.
Eigenvalue Diagonalization
Let A be an $N \times N$ matrix with N linearly independent eigenvectors. Then there is an $N \times N$ matrix X and an $N \times N$ diagonal matrix Λ such that
\[
A = X \Lambda X^{-1}\,,
\]
where $\Lambda_{ii}$ is an eigenvalue of A, and the columns of $X = [x_1, x_2, \dots, x_N]$ are eigenvectors,
\[
A x_i = \Lambda_{ii} x_i\,.
\]
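A numpy sketch of this diagonalization for an arbitrary example matrix with distinct eigenvalues (so its eigenvectors are linearly independent):

```python
# Eigendecomposition A = X Lambda X^{-1} of a (non-symmetric) matrix.
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 5.0]])

eigvals, X = np.linalg.eig(A)        # columns of X are eigenvectors x_i
Lam = np.diag(eigvals)               # Lambda with Lambda_ii = i-th eigenvalue

print(np.allclose(A, X @ Lam @ np.linalg.inv(X)))          # A = X Lambda X^{-1}
print(np.allclose(A @ X[:, 0], eigvals[0] * X[:, 0]))      # A x_i = Lambda_ii x_i
```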
Real Symmetric Matrices
A matrix is symmetric if $A^\top = A$.
Proposition
A symmetric matrix has real eigenvalues.
pf: If $Ax = \lambda x$ then
\[
\lambda^2 \|x\|^2 = \lambda^2 x^* x = x^* A^2 x = x^* A^\top A x = \|Ax\|^2\,,
\]
which is real and nonnegative ($x^* = \mathrm{re}(x)^\top - \sqrt{-1}\,\mathrm{im}(x)^\top$, i.e. the conjugate transpose). Hence, λ cannot be imaginary or complex.
Proposition
Let $A = A^\top$ be a real matrix. A's eigenvectors form an orthogonal basis of $\mathbb{R}^N$, and the diagonalization is simplified:
\[
A = X \Lambda X^\top\,,
\]
i.e. $X^{-1} = X^\top$.
Basic Idea of Proof: Suppose that $Ax = \lambda x$ and $Ay = \alpha y$ with $\alpha \neq \lambda$. Then
\[
\lambda x^\top y = x^\top A^\top y = x^\top A y = \alpha x^\top y\,,
\]
and so it must be that $x^\top y = 0$.
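For a real symmetric matrix, numpy's eigh routine returns exactly this factorization; a small sketch with an arbitrary symmetrized matrix:

```python
# Orthogonal diagonalization of a symmetric matrix: A = X Lambda X^T with X^{-1} = X^T.
import numpy as np

B = np.array([[2.0, 1.0], [0.0, 3.0]])
A = B + B.T                                   # symmetrize an arbitrary matrix

eigvals, X = np.linalg.eigh(A)                # real eigenvalues, orthonormal eigenvectors
print(np.allclose(X.T @ X, np.eye(2)))        # X^{-1} = X^T
print(np.allclose(A, X @ np.diag(eigvals) @ X.T))
```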
Skew Symmetric Matrices
A matrix is skew symmetric if $A^\top = -A$.
Proposition
A skew symmetric matrix has purely imaginary eigenvalues.
pf: If $Ax = \lambda x$ then
\[
\lambda^2 \|x\|^2 = \lambda^2 x^* x = x^* A^2 x = -x^* A^\top A x = -\|Ax\|^2\,,
\]
which is less than or equal to zero. Hence $\lambda^2 \le 0$, and so λ must be purely imaginary.
Positive Definiteness
Definition
A matrix $A \in \mathbb{R}^{N \times N}$ is positive definite iff
\[
x^\top A x > 0 \qquad \forall x \in \mathbb{R}^N \setminus \{0\}\,.
\]
It is only positive semi-definite iff
\[
x^\top A x \ge 0 \qquad \forall x \in \mathbb{R}^N\,.
\]
Positive Definiteness ⇒ Symmetric
Proposition
If $A \in \mathbb{R}^{N \times N}$ is positive definite, then $A^\top = A$, i.e. it is symmetric positive definite (SPD).
pf: For any $x \in \mathbb{R}^N$ we have
\[
0 < x^\top A x = \frac{1}{2}\Big( x^\top \underbrace{(A + A^\top)}_{\text{sym.}}\, x \;+\; x^\top \underbrace{(A - A^\top)}_{\text{skew-sym.}}\, x \Big)\,,
\]
but if $A - A^\top \neq 0$ then there exists an eigenvector x s.t. $(A - A^\top)x = \alpha x$ where $\alpha \in \mathbb{C} \setminus \mathbb{R}$, which contradicts the inequality.
Eigenvalue of an SPD Matrix
Proposition
If A is SPD, then it has all positive eigenvalues.
pf: By definition, for any $y \in \mathbb{R}^N \setminus \{0\}$ it holds that
\[
y^\top A y > 0\,.
\]
In particular, for any λ and $x \neq 0$ s.t. $Ax = \lambda x$, we have
\[
\lambda \|x\|^2 = x^\top A x > 0
\]
and so $\lambda > 0$.
Consequence: for A SPD there is no nonzero x s.t. $Ax = 0$, and so $A^{-1}$ exists.
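In practice this suggests two numerical checks for SPD-ness: all eigenvalues positive, or a Cholesky factorization exists. A sketch (the matrix A below is built to be SPD by construction and is only an example):

```python
# Checking SPD-ness numerically and using SPD => invertible.
import numpy as np

rng = np.random.default_rng(2)
B = rng.normal(size=(4, 4))
A = B.T @ B + np.eye(4)                     # SPD by construction

print(np.all(np.linalg.eigvalsh(A) > 0))    # all eigenvalues positive
np.linalg.cholesky(A)                       # succeeds only for (numerically) SPD matrices

A_inv = np.linalg.inv(A)                    # SPD => A^{-1} exists
print(np.allclose(A @ A_inv, np.eye(4)))
```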
So Can/How Do We Invert C?
The invertibility of the covariance matrix is very important:
we've seen a need for it in the efficient frontiers and the Markowitz problem,
if it's not invertible then there are redundant assets,
we'll need it to be invertible when we discuss multivariate Gaussian distributions.
If there is no risk-free asset, our covariance matrix is practically invertible by definition:
\[
0 < \mathrm{var}\!\left( \sum_i w_i r_i \right) = \sum_{i,j} w_i w_j C_{ij} = w^\top C w\,,
\]
where $w \in \mathbb{R}^N \setminus \{0\}$ is any allocation vector (w need not sum to 1). Hence, C is SPD ⇒ $C^{-1}$ exists.
So How Do We Know Ĉ is a Covariance Matrix?
Recall, $\hat C = \frac{1}{T-1}\sum_{t=1}^{T} (r^t - \hat{\bar r})(r^t - \hat{\bar r})^\top$. Each summand
\[
(r^t - \hat{\bar r})(r^t - \hat{\bar r})^\top
\]
is symmetric, and a sum of symmetric matrices is symmetric. But each summand is not invertible, because there exists a vector y s.t. $y^\top (r^t - \hat{\bar r}) = 0$, which means that zero is an eigenvalue,
\[
(r^t - \hat{\bar r}) \underbrace{(r^t - \hat{\bar r})^\top y}_{=0} = 0\,.
\]
If
\[
\mathrm{span}(r^1 - \hat{\bar r},\, r^2 - \hat{\bar r},\, \dots,\, r^T - \hat{\bar r}) = \mathbb{R}^N
\]
then $\hat C$ is invertible. So we need $T \ge N$. In order for $\hat C$ to be close to C we will need $T \gg N$.
$\hat C$ is also SPD:
\[
y^\top \hat C y
= y^\top \left( \frac{1}{T-1} \sum_{t=1}^{T} (r^t - \hat{\bar r})(r^t - \hat{\bar r})^\top \right) y
= \frac{1}{T-1} \sum_{t=1}^{T} y^\top (r^t - \hat{\bar r})(r^t - \hat{\bar r})^\top y
= \frac{1}{T-1} \sum_{t=1}^{T} \big(y^\top (r^t - \hat{\bar r})\big)^2 > 0\,,
\]
because there is at least one t s.t. $y^\top (r^t - \hat{\bar r}) \neq 0$.
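A small simulation illustrating the T versus N requirement (the sizes and the Gaussian returns below are arbitrary assumptions for illustration):

```python
# With fewer observations than assets the sample covariance is rank deficient,
# hence singular; with T > N (and generic data) it has full rank.
import numpy as np

rng = np.random.default_rng(3)
N = 10

for T in (5, 50):
    R = rng.normal(size=(T, N))                 # T observations of N asset returns
    D = R - R.mean(axis=0)
    C_hat = D.T @ D / (T - 1)
    rank = np.linalg.matrix_rank(C_hat)
    print(f"T={T:3d}: rank(C_hat)={rank}, invertible={rank == N}")
```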
A Simple Covariance Matrix
Let $C = (1-\rho)I + \rho U$, where I is the identity and $U_{ij} = 1$ for all i, j. Then
\[
U\mathbf{1} = N\mathbf{1}
\quad\text{and}\quad
Ux = 0 \ \text{ if } x^\top \mathbf{1} = 0\,,
\]
where $\mathbf{1} = (1, 1, \dots, 1)^\top$. Hence, for any x we have
\[
Cx = (1-\rho)Ix + \rho Ux = (1-\rho)x + \rho(\mathbf{1}^\top x)\mathbf{1}\,,
\]
which means x is an eigenvector iff $\mathbf{1}^\top x = 0$ or $x = a\mathbf{1}$. The eigenvalues of C are $1 - \rho + N\rho$ and $1 - \rho$, and the eigenvectors are $\mathbf{1}$ and the $N-1$ vectors orthogonal to $\mathbf{1}$, respectively.
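The stated eigenvalues can be verified numerically; a quick sketch with arbitrary choices of N and ρ:

```python
# Eigenvalues of the equicorrelation matrix C = (1 - rho) I + rho U.
import numpy as np

N, rho = 6, 0.3
U = np.ones((N, N))                               # U_ij = 1 for all i, j
C = (1 - rho) * np.eye(N) + rho * U

eigvals = np.sort(np.linalg.eigvalsh(C))
print(eigvals)                                    # N-1 copies of 1-rho, then 1-rho+N*rho
print(np.isclose(eigvals[-1], 1 - rho + N * rho),
      np.allclose(eigvals[:-1], 1 - rho))
```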
Positive Definite (Complex Hermitian Matrices)
Let C be a square matrix, i.e. $C \in \mathbb{C}^{N \times N}$.
C is positive definite if
\[
z^* C z > 0 \qquad \forall z \in \mathbb{C}^N \setminus \{0\}\,,
\]
where $\mathbb{C}^N$ is the space of N-dimensional complex vectors and $z^*$ is the conjugate transpose,
\[
z^* = (z_r + i z_i)^* = z_r^\top - i z_i^\top\,.
\]
C is positive semi-definite if
\[
z^* C z \ge 0 \qquad \forall z \in \mathbb{C}^N \setminus \{0\}\,.
\]
If C is positive definite and real, then it must also be symmetric, i.e. $C_{nm} = C_{mn}$ for all m, n.
Gershgorin Circles
Proposition
All eigenvalues of the matrix A lie in one of the Gershgorin circles,
\[
|\lambda - A_{ii}| \le \sum_{j \neq i} |A_{ij}| \qquad \text{for at least one } i \le N\,,
\]
where λ is an eigenvalue of A.
pf: Let λ be an eigenvalue and let x be an eigenvector. Choose i so that $|x_i| = \max_j |x_j|$. We have $\sum_j A_{ij} x_j = \lambda x_i$. W.L.O.G. we can take $x_i = 1$, so that $\sum_{j \neq i} A_{ij} x_j = \lambda - A_{ii}$, and taking absolute values yields the result,
\[
|\lambda - A_{ii}| = \Big| \sum_{j \neq i} A_{ij} x_j \Big| \le \sum_{j \neq i} |A_{ij}| \cdot |x_j| \le \sum_{j \neq i} |A_{ij}|\,.
\]
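A short numpy sketch that computes the Gershgorin circles of an arbitrary example matrix and checks that each eigenvalue lies in at least one of them:

```python
# Gershgorin circles: centers A_ii and radii sum_{j != i} |A_ij|.
import numpy as np

A = np.array([[ 4.0, 1.0, 0.5],
              [ 0.2, 3.0, 0.3],
              [-0.1, 0.4, 1.0]])

centers = np.diag(A)                                  # A_ii
radii = np.sum(np.abs(A), axis=1) - np.abs(centers)   # sum over j != i of |A_ij|

for lam in np.linalg.eigvals(A):
    in_some_circle = np.any(np.abs(lam - centers) <= radii)
    print(lam, "lies in a Gershgorin circle:", bool(in_some_circle))
```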