PowerPoint for Chapter 3

Financial Analysis, Planning and
Forecasting
Theory and Application
Chapter 3
Discriminant Analysis and Factor Analysis:
Theory and Method
By
Alice C. Lee
San Francisco State University
John C. Lee
J.P. Morgan Chase
Cheng F. Lee
Rutgers University
Outline

3.1 Introduction
3.2 Important concepts of linear algebra
    • Linear combination and its distribution
    • Vectors, matrices, and their operations
    • Linear-equation system and its solution
3.3 Two-group discriminant analysis
3.4 k-group discriminant analysis
3.5 Factor analysis and principal-component analysis
    • Factor score
    • Factor loadings
3.6 Summary
Appendix 3A. Discriminant analysis and dummy regression analysis
Appendix 3B. Principal-component analysis
3.2 Important concepts of linear algebra
• Linear combination and its distribution
• Vectors, matrices, and their operations
• Linear-equation system and its solution
3.2 Important concepts of linear algebra

Y = a_1 x_1 + a_2 x_2 + \cdots + a_n x_n    (3.1)

\tilde{z} = a_1 \tilde{x}_1 + a_2 \tilde{x}_2 + \cdots + a_m \tilde{x}_m,    (3.1′)

\tilde{z} = \sum_{i=1}^{m} a_i \tilde{x}_i    (3.2a)

\sigma_z^2 = \sum_{i=1}^{m} a_i^2 \sigma_i^2 + 2 \sum_{i<j} a_i a_j \rho_{ij} \sigma_i \sigma_j    (3.2b)
3.2 Important concepts of linear algebra

\sigma_z^2 = a_1^2 \sigma_1^2 + a_2^2 \sigma_2^2 + 2 a_1 a_2 \rho_{12} \sigma_1 \sigma_2 \qquad (\rho_{12}\sigma_1\sigma_2 = \sigma_{12})    (3.2b′)

\sigma_z^2 = a_1^2 \sigma_1^2 + a_2^2 \sigma_2^2 + a_3^2 \sigma_3^2 + 2 a_1 a_2 \rho_{12} \sigma_1 \sigma_2 + 2 a_1 a_3 \rho_{13} \sigma_1 \sigma_3 + 2 a_2 a_3 \rho_{23} \sigma_2 \sigma_3    (3.2b′′)

X_1 = \begin{bmatrix} X_{11} \\ X_{12} \\ \vdots \\ X_{1n} \end{bmatrix} \quad \text{and} \quad X_2 = \begin{bmatrix} X_{21} \\ X_{22} \\ \vdots \\ X_{2n} \end{bmatrix}
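A minimal numpy sketch (with assumed, purely illustrative weights, standard deviations, and correlation) showing that the scalar formula (3.2b′) agrees with the matrix quadratic form a′Σa used in the following slides:

import numpy as np

a1, a2 = 0.6, 0.4                  # assumed weights of the linear combination
s1, s2, rho12 = 0.20, 0.30, 0.5    # assumed standard deviations and correlation

# Eq. (3.2b'): variance of z = a1*x1 + a2*x2
var_formula = a1**2 * s1**2 + a2**2 * s2**2 + 2 * a1 * a2 * rho12 * s1 * s2

# The same quantity as a quadratic form a' Sigma a
Sigma = np.array([[s1**2,           rho12 * s1 * s2],
                  [rho12 * s1 * s2, s2**2          ]])
a = np.array([a1, a2])
var_matrix = a @ Sigma @ a

print(var_formula, np.isclose(var_formula, var_matrix))   # 0.0432 True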
3.2 Important concepts of linear algebra

Observations on several variables can be arranged as matrices; for example, an n × 2 matrix for two variables, and an n × n matrix in general:

\begin{bmatrix} X_{11} & X_{12} \\ X_{21} & X_{22} \\ \vdots & \vdots \\ X_{n1} & X_{n2} \end{bmatrix}, \qquad X = \begin{bmatrix} X_{11} & X_{12} & \cdots & X_{1n} \\ X_{21} & X_{22} & \cdots & X_{2n} \\ \vdots & \vdots & & \vdots \\ X_{n1} & X_{n2} & \cdots & X_{nn} \end{bmatrix}    (3.3)
3.2
Important concepts of linear algebra
A'
B
 12  12  13 


2
2
  [a1 a 2 a3 ]  21  2  23 
13
 31  32  32 


3 3
3
B  5 ,
6
A
 a1 
a  ,
 2
 a3 
3 1
(3.2b′′′)
 a1 
a  a 2 ,
 a3 
7
3.2 Important concepts of linear algebra

A = [2 \;\; 1 \;\; 0] \;(1 \times 3), \qquad B = \begin{bmatrix} 2 & 0 \\ 0 & 3 \\ 1 & 0 \end{bmatrix} \;(3 \times 2),

AB = [2 \;\; 1 \;\; 0] \begin{bmatrix} 2 & 0 \\ 0 & 3 \\ 1 & 0 \end{bmatrix} = [4 \;\; 3].
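The same product can be checked with a short numpy sketch:

import numpy as np

A = np.array([[2, 1, 0]])          # 1 x 3
B = np.array([[2, 0],
              [0, 3],
              [1, 0]])             # 3 x 2
print(A @ B)                       # [[4 3]], a 1 x 2 matrix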
3.2
Important concepts of linear algebra
Step 1: Multiply A’ by B
C  [(a1 12  a2 21  a3 31 ), (a1 12  a2 22  a3 32 ), (a1 31  a2 23  a3 32 )],
Step 2: Multiply C by A
 2  a1 (a1 12  a2 21  a3 31 )  a2 (a1 12  a2 22  a3 32 )  a3 (a1 31  a2 23  a3 32 )
 a12 12  a 22 22  a32 32  2(a1 a 2 12  a1 a3 13  a 2 a3 23 ),
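A short numpy sketch of the same two-step computation (the coefficient vector and covariance matrix below are assumed, illustrative numbers), confirming that it reproduces the expanded expression for σ_z²:

import numpy as np

a = np.array([0.2, 0.5, 0.3])               # assumed coefficient vector A
B = np.array([[0.04, 0.01, 0.02],
              [0.01, 0.09, 0.03],
              [0.02, 0.03, 0.16]])          # assumed 3 x 3 covariance matrix

C = a @ B                                   # Step 1: A' times B (a 1 x 3 row)
var_z = C @ a                               # Step 2: C times A (a scalar)

# Expanded form: sum of a_i^2 * sigma_i^2 plus twice the cross-product terms
expanded = (a**2 * np.diag(B)).sum() + 2 * (a[0]*a[1]*B[0, 1] + a[0]*a[2]*B[0, 2] + a[1]*a[2]*B[1, 2])
print(np.isclose(var_z, expanded))          # True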
Linear Equation System and its Solution

a_{11}X_1 + \cdots + a_{1m}X_m = b_1
\qquad \vdots    (3.4)
a_{n1}X_1 + \cdots + a_{nm}X_m = b_n.
3.2 Important concepts of linear algebra

In matrix form, Eq. (3.4) is

\begin{bmatrix} a_{11} & \cdots & a_{1m} \\ \vdots & & \vdots \\ a_{n1} & \cdots & a_{nm} \end{bmatrix} \begin{bmatrix} X_1 \\ \vdots \\ X_m \end{bmatrix} = \begin{bmatrix} b_1 \\ \vdots \\ b_n \end{bmatrix}.

The identity matrix is

I = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix}.
3.2 Important concepts of linear algebra

X = A^{-1} b,    (3.5)

X_i = \frac{|\hat{A}_i|}{|A|},    (3.6)

where \hat{A}_i is A with its ith column replaced by b (Cramer's rule). For a 2 × 2 matrix,

A = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}, \qquad |A| = a_{11}a_{22} - a_{12}a_{21}.

For example,

A = \begin{bmatrix} 2 & 8 \\ 7 & 6 \end{bmatrix}.
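Applying the 2 × 2 determinant formula to this example gives

|A| = (2)(6) - (8)(7) = 12 - 56 = -44.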
3.2 Important concepts of linear algebra

(A - \lambda I)X = 0,    (3.7)

|A - \lambda I| = 0,    (3.8)

A = \begin{bmatrix} 3 & 2 & 4 \\ 2 & 0 & 2 \\ 4 & 2 & 3 \end{bmatrix}, \qquad \begin{vmatrix} 3-\lambda & 2 & 4 \\ 2 & -\lambda & 2 \\ 4 & 2 & 3-\lambda \end{vmatrix} = 0.
3.2 Important concepts of linear algebra

For the characteristic root \lambda = 8,

A - 8I = \begin{bmatrix} 3-8 & 2 & 4 \\ 2 & -8 & 2 \\ 4 & 2 & 3-8 \end{bmatrix} = \begin{bmatrix} -5 & 2 & 4 \\ 2 & -8 & 2 \\ 4 & 2 & -5 \end{bmatrix}.

Its cofactor matrix (which here equals its adjoint, by symmetry) is

\begin{bmatrix} (-1)^{1+1}(36) & (-1)^{1+2}(-18) & (-1)^{1+3}(36) \\ (-1)^{2+1}(-18) & (-1)^{2+2}(9) & (-1)^{2+3}(-18) \\ (-1)^{3+1}(36) & (-1)^{3+2}(-18) & (-1)^{3+3}(36) \end{bmatrix} = \begin{bmatrix} 36 & 18 & 36 \\ 18 & 9 & 18 \\ 36 & 18 & 36 \end{bmatrix}.
3.2 Important concepts of linear algebra

(A - 8I)\begin{bmatrix} 36 \\ 18 \\ 36 \end{bmatrix} = 0 \quad \text{or} \quad A\begin{bmatrix} 36 \\ 18 \\ 36 \end{bmatrix} = 8I\begin{bmatrix} 36 \\ 18 \\ 36 \end{bmatrix}.

Normalizing to unit length (the vector's length is \sqrt{36^2 + 18^2 + 36^2} = 54),

x' = \left[\frac{36}{54} \;\; \frac{18}{54} \;\; \frac{36}{54}\right] = [0.6667 \;\; 0.3333 \;\; 0.6667].

Equivalently, the components of the characteristic vector satisfy

-5x_1 + 2x_2 + 4x_3 = 0,
2x_1 - 8x_2 + 2x_3 = 0,
4x_1 + 2x_2 - 5x_3 = 0.
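A short numpy check of this characteristic root and vector (the eigenvector is returned only up to sign and scale):

import numpy as np

A = np.array([[3, 2, 4],
              [2, 0, 2],
              [4, 2, 3]])

eigvals, eigvecs = np.linalg.eig(A)
print(np.round(eigvals, 4))                 # 8 is one of the characteristic roots

v = eigvecs[:, np.argmax(np.isclose(eigvals, 8))]
print(np.round(v / np.linalg.norm(v), 4))   # proportional to +/- [0.6667, 0.3333, 0.6667]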
3.2 Important concepts of linear algebra

A^{-1} = \frac{1}{|A|}(\text{Adjoint of } A)

|A| = \begin{vmatrix} 3 & 2 & 4 \\ 2 & 0 & 2 \\ 4 & 2 & 3 \end{vmatrix} = (0 + 16 + 16) - (0 + 12 + 12) = 8.

\text{Cofactor of } A = \begin{bmatrix} (-1)^{1+1}(-4) & (-1)^{1+2}(-2) & (-1)^{1+3}(4) \\ (-1)^{2+1}(-2) & (-1)^{2+2}(-7) & (-1)^{2+3}(-2) \\ (-1)^{3+1}(4) & (-1)^{3+2}(-2) & (-1)^{3+3}(-4) \end{bmatrix} = \begin{bmatrix} -4 & 2 & 4 \\ 2 & -7 & 2 \\ 4 & 2 & -4 \end{bmatrix}.
3.2 Important concepts of linear algebra

\text{Adjoint of } A = \begin{bmatrix} -4 & 2 & 4 \\ 2 & -7 & 2 \\ 4 & 2 & -4 \end{bmatrix},

A^{-1} = \frac{1}{8}\begin{bmatrix} -4 & 2 & 4 \\ 2 & -7 & 2 \\ 4 & 2 & -4 \end{bmatrix} = \begin{bmatrix} -\frac{1}{2} & \frac{1}{4} & \frac{1}{2} \\ \frac{1}{4} & -\frac{7}{8} & \frac{1}{4} \\ \frac{1}{2} & \frac{1}{4} & -\frac{1}{2} \end{bmatrix}.

Note to instructor: Some of the numbers here (circled in red on the original slide) differ from those in the text.
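The inverse can be verified with numpy:

import numpy as np

A = np.array([[3, 2, 4],
              [2, 0, 2],
              [4, 2, 3]])

A_inv = np.linalg.inv(A)
print(A_inv)                                  # rows: [-0.5, 0.25, 0.5], [0.25, -0.875, 0.25], [0.5, 0.25, -0.5]
print(np.allclose(A @ A_inv, np.eye(3)))      # True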
Extra example to show how a simultaneous equation system can be solved by the matrix inversion method

3x_1 + 2x_2 + 4x_3 = 32
2x_1 + 3x_3 = 19    (a)
4x_1 + 2x_2 + 3x_3 = 29

The simultaneous equations (a) can be written in matrix form as equation (b):

\underset{A}{\begin{bmatrix} 3 & 2 & 4 \\ 2 & 0 & 3 \\ 4 & 2 & 3 \end{bmatrix}} \; \underset{X}{\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}} = \underset{K}{\begin{bmatrix} 32 \\ 19 \\ 29 \end{bmatrix}}    (b)

Then we can solve this equation system by matrix inversion.
A^{-1} = \frac{1}{|A|}(\text{Adjoint of } A)

|A| = \begin{vmatrix} 3 & 2 & 4 \\ 2 & 0 & 3 \\ 4 & 2 & 3 \end{vmatrix} = (0 + 24 + 16) - (0 + 18 + 12) = 10.

\text{Cofactor of } A = \begin{bmatrix} (-1)^{1+1}(-6) & (-1)^{1+2}(-6) & (-1)^{1+3}(4) \\ (-1)^{2+1}(-2) & (-1)^{2+2}(-7) & (-1)^{2+3}(-2) \\ (-1)^{3+1}(6) & (-1)^{3+2}(1) & (-1)^{3+3}(-4) \end{bmatrix} = \begin{bmatrix} -6 & 6 & 4 \\ 2 & -7 & 2 \\ 6 & -1 & -4 \end{bmatrix}.
\text{Adjoint of } A = \text{Transpose of Cofactor of } A = \begin{bmatrix} -6 & 2 & 6 \\ 6 & -7 & -1 \\ 4 & 2 & -4 \end{bmatrix},

A^{-1} = \frac{1}{|A|}(\text{Adjoint of } A) = \frac{1}{10}\begin{bmatrix} -6 & 2 & 6 \\ 6 & -7 & -1 \\ 4 & 2 & -4 \end{bmatrix} = \begin{bmatrix} -\frac{6}{10} & \frac{2}{10} & \frac{6}{10} \\ \frac{6}{10} & -\frac{7}{10} & -\frac{1}{10} \\ \frac{4}{10} & \frac{2}{10} & -\frac{4}{10} \end{bmatrix}.
We know

AX = K
A^{-1}AX = A^{-1}K
X = A^{-1}K

\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} -\frac{6}{10} & \frac{2}{10} & \frac{6}{10} \\ \frac{6}{10} & -\frac{7}{10} & -\frac{1}{10} \\ \frac{4}{10} & \frac{2}{10} & -\frac{4}{10} \end{bmatrix} \begin{bmatrix} 32 \\ 19 \\ 29 \end{bmatrix} = \begin{bmatrix} -\frac{6}{10}(32) + \frac{2}{10}(19) + \frac{6}{10}(29) \\ \frac{6}{10}(32) - \frac{7}{10}(19) - \frac{1}{10}(29) \\ \frac{4}{10}(32) + \frac{2}{10}(19) - \frac{4}{10}(29) \end{bmatrix} = \begin{bmatrix} 2 \\ 3 \\ 5 \end{bmatrix}

Please note that this is only one of three methods that can be used to solve a simultaneous equation system; the other two are the substitution method and Cramer's rule, both taught in high-school algebra. In practice, the matrix method is the best way to solve large equation systems, such as those arising in portfolio analysis (see Chapter 7).
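The same solution can be obtained with a minimal numpy sketch, either by inverting A explicitly or by calling the linear solver directly:

import numpy as np

A = np.array([[3, 2, 4],
              [2, 0, 3],
              [4, 2, 3]], dtype=float)
K = np.array([32, 19, 29], dtype=float)

print(np.linalg.inv(A) @ K)      # [2. 3. 5.]  (matrix inversion method)
print(np.linalg.solve(A, K))     # [2. 3. 5.]  (direct solver, numerically preferable)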
Cramer's Rule

x_1 = \frac{\begin{vmatrix} 32 & 2 & 4 \\ 19 & 0 & 3 \\ 29 & 2 & 3 \end{vmatrix}}{\begin{vmatrix} 3 & 2 & 4 \\ 2 & 0 & 3 \\ 4 & 2 & 3 \end{vmatrix}} = \frac{(0 + 174 + 152) - (0 + 192 + 114)}{10} = \frac{326 - 306}{10} = \frac{20}{10} = 2

x_2 = \frac{\begin{vmatrix} 3 & 32 & 4 \\ 2 & 19 & 3 \\ 4 & 29 & 3 \end{vmatrix}}{\begin{vmatrix} 3 & 2 & 4 \\ 2 & 0 & 3 \\ 4 & 2 & 3 \end{vmatrix}} = \frac{30}{10} = 3

x_3 = \frac{\begin{vmatrix} 3 & 2 & 32 \\ 2 & 0 & 19 \\ 4 & 2 & 29 \end{vmatrix}}{\begin{vmatrix} 3 & 2 & 4 \\ 2 & 0 & 3 \\ 4 & 2 & 3 \end{vmatrix}} = \frac{50}{10} = 5
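A small Python sketch of Cramer's rule (replacing one column of A with K at a time), reproducing the same solution:

import numpy as np

A = np.array([[3, 2, 4],
              [2, 0, 3],
              [4, 2, 3]], dtype=float)
K = np.array([32, 19, 29], dtype=float)

def cramer(A, b):
    """Solve Ax = b by Cramer's rule: x_i = |A_i| / |A|."""
    det_A = np.linalg.det(A)
    x = np.empty(len(b))
    for i in range(len(b)):
        A_i = A.copy()
        A_i[:, i] = b          # replace the i-th column with b
        x[i] = np.linalg.det(A_i) / det_A
    return x

print(np.round(cramer(A, K), 6))   # [2. 3. 5.]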
3.3 Two-group discriminant analysis

(B - EC)A = 0, \qquad D' = [\bar{X}_{1,1} - \bar{X}_{1,2}, \ldots, \bar{X}_{m,1} - \bar{X}_{m,2}]    (3.11)

where
B = DD′, the between-group variance;
C = the within-group variance;
A = the coefficient vector representing the coefficients of Eq. (3.8);
E = the ratio of the weighted between-group variance to the pooled within-group variance.

(C^{-1}B - EI)A = 0    (3.12)
3.3 Two-group discriminant analysis

TABLE 3.1 Roster of liquidity and leverage ratios for two groups, with two predictors and a "dummy" criterion variable Y

Group 1 (N1 = 6)
  X1i:  2.0   1.8   2.3   3.1   1.9   2.5
  X2i:  0.50  0.48  0.49  0.41  0.43  0.44
  Yi:   1     1     1     1     1     1
  ΣX1i = 13.6,  ΣX1i² = 32,  ΣX1iX2i = 6.179,  ΣX2i = 2.75,  ΣX2i² = 1.2671

Group 2 (N2 = 8)
  X1i:  1.8   1.9   1.7   1.5   2.2   2.8   1.6   1.4
  X2i:  0.35  0.34  0.42  0.49  0.36  0.38  0.55  0.56
  Yi:   0     0     0     0     0     0     0     0
  ΣX1i = 14.9,  ΣX1i² = 29.19,  ΣX1iX2i = 6.245,  ΣX2i = 3.45,  ΣX2i² = 1.5447
3.3 Two-group discriminant analysis

Y_i = a_1 X_{1i} + a_2 X_{2i}    (3.13)

y_i = a_1 x_{1i} + a_2 x_{2i}    (3.14)

Var(x_{1i}) a_1 + Cov(x_{1i}, x_{2i}) a_2 = Cov(x_{1i}, y_i)    (3.15a)

Cov(x_{1i}, x_{2i}) a_1 + Var(x_{2i}) a_2 = Cov(x_{2i}, y_i)    (3.15b)
3.3 Two-group discriminant analysis

Var(x_{1i}) = \frac{\sum X_{1i}^2}{n} - \left(\frac{\sum X_{1i}}{n}\right)^2 = \frac{32 + 29.19}{14} - \left(\frac{13.6 + 14.9}{14}\right)^2 = \frac{61.19}{14} - \left(\frac{28.5}{14}\right)^2 = 4.3707 - 4.144 = 0.2267;

Var(x_{2i}) = \frac{\sum X_{2i}^2}{n} - \left(\frac{\sum X_{2i}}{n}\right)^2 = \frac{1.2671 + 1.5447}{14} - \left(\frac{2.75 + 3.45}{14}\right)^2 = \frac{2.8118}{14} - \left(\frac{6.2}{14}\right)^2 = 0.2008 - 0.196 = 0.0048;

Cov(x_1, x_2) = \frac{12.424}{14} - (2.0357)(0.4428) = 0.8874 - 0.9014 = -0.014;
3.3 Two-group discriminant analysis

Cov(x_1, y) = \frac{\sum X_{1i}Y_i}{n} - (\bar{X}_1)(\bar{Y}) = \frac{13.6}{14} - (2.0357)(0.4286) = 0.9714 - 0.8722 = 0.0992;

Cov(x_2, y) = \frac{\sum X_{2i}Y_i}{n} - (\bar{X}_2)(\bar{Y}) = \frac{2.75}{14} - (0.4428)(0.4286) = 0.1964 - 0.1897 = 0.0066.
3.3 Two-group discriminant analysis

Solving Eqs. (3.15a) and (3.15b) by Cramer's rule,

a_1 = \frac{\begin{vmatrix} 0.0992 & -0.0140 \\ 0.0066 & 0.0048 \end{vmatrix}}{\begin{vmatrix} 0.2267 & -0.0140 \\ -0.0140 & 0.0048 \end{vmatrix}} = \frac{0.00047616 + 0.0000924}{0.0010886 - 0.000196} = \frac{0.00056886}{0.0008926} = 0.63697;

a_2 = \frac{\begin{vmatrix} 0.2267 & 0.0992 \\ -0.0140 & 0.0066 \end{vmatrix}}{\begin{vmatrix} 0.2267 & -0.0140 \\ -0.0140 & 0.0048 \end{vmatrix}} = \frac{0.00149 + 0.00138}{0.00108 - 0.00019} = \frac{0.00288}{0.00089} = 3.2359;

\begin{bmatrix} a_1 \\ a_2 \end{bmatrix} = \begin{bmatrix} 0.63697 \\ 3.2359 \end{bmatrix} \;\propto\; \begin{bmatrix} 0.1968 \\ 1 \end{bmatrix}.
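A numpy sketch that reproduces these coefficients directly from the Table 3.1 data (moments are computed with the population convention, dividing by n = 14, as in the slides; small differences from the slide values arise because the slides round the moments to four decimals):

import numpy as np

# Table 3.1 data: group 1 (Y = 1) followed by group 2 (Y = 0)
x1 = np.array([2.0, 1.8, 2.3, 3.1, 1.9, 2.5,  1.8, 1.9, 1.7, 1.5, 2.2, 2.8, 1.6, 1.4])
x2 = np.array([0.50, 0.48, 0.49, 0.41, 0.43, 0.44,  0.35, 0.34, 0.42, 0.49, 0.36, 0.38, 0.55, 0.56])
y  = np.array([1]*6 + [0]*8)

var1  = (x1**2).mean() - x1.mean()**2          # approx  0.2266
var2  = (x2**2).mean() - x2.mean()**2          # approx  0.0047
cov12 = (x1*x2).mean() - x1.mean()*x2.mean()   # approx -0.0141
cov1y = (x1*y).mean()  - x1.mean()*y.mean()    # approx  0.0990
cov2y = (x2*y).mean()  - x2.mean()*y.mean()    # approx  0.0066

S = np.array([[var1,  cov12],
              [cov12, var2 ]])
c = np.array([cov1y, cov2y])

a = np.linalg.solve(S, c)          # Eqs. (3.15a)-(3.15b)
print(np.round(a, 3))              # approx [0.64, 3.30]; the slides obtain 0.637 and 3.236 from rounded moments
print(np.round(a / a[1], 3))       # normalized coefficients, approx [0.19, 1.0]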
3.3 Two-group discriminant analysis

C_{11} = 32 - \frac{(13.6)^2}{6} + 29.19 - \frac{(14.9)^2}{8} = 2.612;

C_{22} = 1.2671 - \frac{(2.75)^2}{6} + 1.5447 - \frac{(3.45)^2}{8} = 0.0636;

C_{12} = 6.179 - \frac{(13.6)(2.75)}{6} + 6.245 - \frac{(14.9)(3.45)}{8} = -0.2350;
3.3 Two-group discriminant analysis

B_{11} = 6\left(\frac{13.6}{6} - 2.0357\right)^2 + 8\left(\frac{14.9}{8} - 2.0357\right)^2 = 0.5601;

B_{22} = 6\left(\frac{2.75}{6} - 0.4428\right)^2 + 8\left(\frac{3.45}{8} - 0.4428\right)^2 = 0.0025;

B_{12} = 6\left(\frac{13.6}{6} - 2.0357\right)\left(\frac{2.75}{6} - 0.4428\right) + 8\left(\frac{14.9}{8} - 2.0357\right)\left(\frac{3.45}{8} - 0.4428\right) = 0.03753;
3.3 Two-group discriminant analysis

C = \begin{bmatrix} 2.612 & -0.2350 \\ -0.2350 & 0.0636 \end{bmatrix} \quad \text{and} \quad B = \begin{bmatrix} 0.5601 & 0.03753 \\ 0.03753 & 0.0025 \end{bmatrix}.

C^{-1}B = \begin{bmatrix} 0.4007 & 0.0268 \\ 2.071 & 0.1384 \end{bmatrix}.

Because B = DD′ has rank one, the smaller root of |C^{-1}B - EI| = 0 is approximately zero and the larger root is E_1 = 0.4007 + 0.1384 = 0.5391, so

\text{Adjoint of } (C^{-1}B - E_1 I) = \begin{bmatrix} -0.4007 & -0.0268 \\ -2.071 & -0.1384 \end{bmatrix}.
3.3 Two-group discriminant analysis

\text{Cofactor of } (C^{-1}B - E_1 I) = \begin{bmatrix} (-1)^{1+1}(-0.4007) & (-1)^{1+2}(2.071) \\ (-1)^{2+1}(0.0268) & (-1)^{2+2}(-0.1384) \end{bmatrix} = \begin{bmatrix} -0.4007 & -2.071 \\ -0.0268 & -0.1384 \end{bmatrix},

\text{Adjoint of } (C^{-1}B - E_1 I) = \begin{bmatrix} -0.4007 & -0.0268 \\ -2.071 & -0.1384 \end{bmatrix}.
3.3 Two-group discriminant analysis

Normalizing a column of the adjoint gives the first characteristic vector

A_1 = \begin{bmatrix} -0.4007 / -2.071 \\ 1 \end{bmatrix} = \begin{bmatrix} 0.1935 \\ 1 \end{bmatrix},

which is close to the normalized dummy-regression coefficients

a = \begin{bmatrix} 0.63697 / 3.2359 \\ 1 \end{bmatrix} = \begin{bmatrix} 0.1968 \\ 1 \end{bmatrix}.
3.3 Two-group discriminant analysis

\begin{bmatrix} 0.4007 - 0.5391 & 0.0268 \\ 2.071 & 0.1384 - 0.5391 \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}    (3.16)

-0.1384 a_1 + 0.0268 a_2 = 0
2.071 a_1 - 0.4007 a_2 = 0

a = \begin{bmatrix} 0.1907 / 0.9818 \\ 1 \end{bmatrix} \approx \begin{bmatrix} 0.1935 \\ 1 \end{bmatrix}.
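A numpy check of the characteristic-root approach, using the rounded C and B matrices from the slides:

import numpy as np

C = np.array([[ 2.612, -0.235 ],
              [-0.235,  0.0636]])
B = np.array([[0.5601,  0.03753],
              [0.03753, 0.0025 ]])

M = np.linalg.inv(C) @ B
print(np.round(M, 4))                    # approx [[0.4007, 0.0268], [2.0709, 0.1384]]

evals, evecs = np.linalg.eig(M)
E1 = evals.max()                         # largest root, approx 0.539 (slide: 0.5391)
A1 = evecs[:, evals.argmax()]
print(np.round(E1, 4), np.round(A1 / A1[1], 4))   # eigenvector normalized on a2, approx [0.1936, 1] (slide: [0.1935, 1])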
3.4 k-group discriminant analysis

Y_j = a_{j1}X_1 + a_{j2}X_2 + \cdots + a_{jm}X_m \qquad (j = 1, 2, \ldots, k)    (3.17)

E_j = \frac{A'BA}{A'CA}    (3.18)

A' = [A_1, A_2, \ldots, A_m],

(C^{-1}B - EI)A = 0
3.4 k-group discriminant analysis

Y_1 = a_{11}X_1 + a_{12}X_2 + \cdots + a_{1m}X_m    (3.20a)
Y_2 = a_{21}X_1 + a_{22}X_2 + \cdots + a_{2m}X_m    (3.20b)
Y_3 = a_{31}X_1 + a_{32}X_2 + \cdots + a_{3m}X_m    (3.20c)
\qquad \vdots
Y_r = a_{r1}X_1 + a_{r2}X_2 + \cdots + a_{rm}X_m,    (3.20r)
3.4 k-group discriminant analysis

R = q_1 p(1 \mid 2) + q_2 p(2 \mid 1)    (3.21a)

R = q_1 p(1 \mid 2) C_{12} + q_2 p(2 \mid 1) C_{21}    (3.21b)

where
q_1 = prior probability of being classified as bankrupt,
q_2 = prior probability of being classified as non-bankrupt,
p(2 | 1) = conditional probability of being classified as non-bankrupt when, in fact, the firm is bankrupt,
p(1 | 2) = conditional probability of being classified as bankrupt when, in fact, the firm is non-bankrupt,
C_{12} = cost of classifying a bankrupt firm as non-bankrupt,
C_{21} = cost of classifying a non-bankrupt firm as bankrupt.
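A minimal Python sketch of Eqs. (3.21a) and (3.21b); all of the priors, misclassification probabilities, and costs below are assumed, purely illustrative numbers, not values from the text:

# Assumed, illustrative inputs
q1, q2 = 0.05, 0.95        # prior probabilities
p12, p21 = 0.10, 0.15      # p(1|2) and p(2|1), misclassification probabilities
C12, C21 = 10.0, 1.0       # misclassification costs

R_prob = q1 * p12 + q2 * p21              # Eq. (3.21a)
R_cost = q1 * p12 * C12 + q2 * p21 * C21  # Eq. (3.21b): expected cost of misclassification
print(R_prob, R_cost)                     # 0.1475 0.1925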
3.5 Factor analysis and principal-component analysis
• Factor score
• Factor loadings
3.5 Factor analysis and principal-component analysis

f_i = b_1 Y_1 + b_2 Y_2 + \cdots + b_j Y_j + \cdots + b_p Y_p    (3.22)

Y = \mu + \lambda f + U,    (3.23)

E[(Y - \mu)(Y - \mu)'] = M = \lambda\lambda' + \Psi    (3.24)
3.6 Summary

In this chapter, the methods and theory of discriminant analysis and factor analysis needed for determining useful financial ratios, predicting corporate bankruptcy, determining bond ratings, and analyzing the relationship between bankruptcy avoidance and merger are discussed in detail. The important concepts of linear algebra, namely linear combinations and matrix operations, that are required to understand both discriminant and factor analysis are also discussed.
Appendix 3A. Discriminant analysis and dummy regression analysis

Y_i = a_1 X_{1i} + a_2 X_{2i} + \cdots + a_m X_{mi}    (3.A.1)

E = \frac{A'DD'A}{A'CA} = \frac{A'BA}{A'CA}    (3.A.2)

where
A = [a_1, a_2, \ldots, a_m];
D = [\bar{X}_{1,1} - \bar{X}_{1,2}, \; \bar{X}_{2,1} - \bar{X}_{2,2}, \ldots, \; \bar{X}_{m,1} - \bar{X}_{m,2}];
C = within-group variance matrix;
DD′ = between-group variance matrix.
Appendix 3A. Discriminant analysis and dummy regression analysis

\frac{\partial E}{\partial A} = \frac{2[(BA)(A'CA) - (A'BA)(CA)]}{(A'CA)^2} = 0,

\frac{2[BA - ECA]}{A'CA} = 0,

(B - EC)A = 0.    (3.A.3)

For two predictors,

E = \frac{b_{11}a_1^2 + b_{22}a_2^2 + 2b_{12}a_1a_2}{c_{11}a_1^2 + c_{22}a_2^2 + 2c_{12}a_1a_2}.    (3.A.2a)
Appendix 3A. Discriminant analysis and dummy regression analysis

\frac{\partial E}{\partial a_1} = [(2b_{11}a_1 + 2b_{12}a_2)(C_{11}a_1^2 + C_{22}a_2^2 + 2C_{12}a_1a_2) - (b_{11}a_1^2 + b_{22}a_2^2 + 2b_{12}a_1a_2)(2C_{11}a_1 + 2C_{12}a_2)](C_{11}a_1^2 + C_{22}a_2^2 + 2C_{12}a_1a_2)^{-2}

= 2[(b_{11}a_1 + b_{12}a_2) - E(C_{11}a_1 + C_{12}a_2)](C_{11}a_1^2 + C_{22}a_2^2 + 2C_{12}a_1a_2)^{-1}.

Setting this derivative equal to zero gives

b_{11}a_1 + b_{12}a_2 = E(C_{11}a_1 + C_{12}a_2).
Appendix 3A. Discriminant analysis and dummy regression analysis

[b_{11}, b_{12}]A = E[C_{11}, C_{12}]A.    (3.A.4)

[b_{21}, b_{22}]A = E[C_{21}, C_{22}]A.    (3.A.5)

\begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{bmatrix} A = E \begin{bmatrix} C_{11} & C_{12} \\ C_{21} & C_{22} \end{bmatrix} A.
Appendix 3A. Discriminant analysis and dummy regression analysis

F = A'DD'A - \lambda[A'CA - L].    (3.A.6)

\frac{\partial F}{\partial A} = 0 = 2DD'A - 2\lambda CA,    (3.A.7)

A = \left(\frac{H}{\lambda}\right) C^{-1}D \equiv A_1,    (3.A.8)

where H is the scalar D'A, so that [DD' - \lambda C]A_1 = [DD' - \lambda C]A = 0.
Appendix 3A. Discriminant analysis and dummy regression analysis

BA = ECA.    (3.A.9)

(1 + E)BA = E(B + C)A,

or

[B - E'S]A = 0,    (3.A.10)

where S = B + C and

E' = \frac{A'DD'A}{A'SA}.
Appendix 3A. Discriminant analysis and dummy regression analysis

A = \left(\frac{H}{\lambda}\right) S^{-1}D \equiv A_2.    (3.A.11)

A = S^{-1}MD,    (3.A.12)

Y_i = a_1 X_{1i} + a_2 X_{2i}.    (3.A.1′)

y_i = a_1 x_{1i} + a_2 x_{2i}.    (3.A.13)
Appendix 3A. Discriminant analysis and dummy regression analysis

\left(\sum x_{1i}^2\right) a_1 + \left(\sum x_{1i}x_{2i}\right) a_2 = \sum x_{1i}y_i,    (3.A.14a)

\left(\sum x_{1i}x_{2i}\right) a_1 + \left(\sum x_{2i}^2\right) a_2 = \sum x_{2i}y_i,    (3.A.14b)

\sum x_1 y = \sum X_1 Y - N\bar{X}_1\bar{Y} = N_1\bar{X}_{11} - \left(N_1\bar{X}_{11} + N_2\bar{X}_{12}\right)\frac{N_1}{N} = \left(\frac{N_1 N_2}{N}\right)\left(\bar{X}_{11} - \bar{X}_{12}\right) = \left(\frac{N_1 N_2}{N}\right) D_1    (3.A.15)
Appendix 3A. Discriminant analysis and dummy regression analysis

\sum x_2 y = \left(\frac{N_1 N_2}{N}\right)\left(\bar{X}_{21} - \bar{X}_{22}\right) = \left(\frac{N_1 N_2}{N}\right) D_2    (3.A.16)

A = S^{-1}MD, \qquad \text{where} \qquad S = \begin{bmatrix} \sum x_{1i}^2 & \sum x_{1i}x_{2i} \\ \sum x_{2i}x_{1i} & \sum x_{2i}^2 \end{bmatrix}.
Appendix 3B. Principal-component analysis

Y = \begin{bmatrix} Y_{11} & \cdots & Y_{1n} \\ Y_{21} & \cdots & Y_{2n} \\ \vdots & & \vdots \\ Y_{p1} & \cdots & Y_{pn} \end{bmatrix}

f_{1t} = b_{11}y_{1t} + b_{21}y_{2t} + \cdots + b_{p1}y_{pt} \qquad (t = 1, \ldots, n)

(n = 1, 2, \ldots, 306; \; p = 1, 2, \ldots, 61)
Appendix 3B. Principal-component analysis

f_1 = Yb_1,    (3.B.1)

f_1'f_1 = b_1'Y'Yb_1.    (3.B.2)

b_1'b_1 = 1.    (3.B.3)

\phi = b_1'Y'Yb_1 - \lambda_1(b_1'b_1 - 1),
Appendix 3B. Principal-component analysis

\frac{\partial\phi}{\partial b_1} = 2Y'Yb_1 - 2\lambda_1 b_1 = 0.

(Y'Y)b_1 = \lambda_1 b_1.    (3.B.4)

f_1'f_1 = \lambda_1 b_1'b_1 = \lambda_1.
Appendix 3B. Principal-component analysis

b_1'Y'Yb_2 = \lambda_1 b_1'b_2 = 0 \quad \text{if and only if} \quad b_1'b_2 = 0.

\phi = b_2'Y'Yb_2 - \lambda_2(b_2'b_2 - 1) - \mu(b_1'b_2),

\frac{\partial\phi}{\partial b_2} = 2Y'Yb_2 - 2\lambda_2 b_2 - \mu b_1 = 0.

2b_1'Y'Yb_2 - \mu = 0.
Appendix 3B. Principal-component analysis

b_2'Y'Yb_1 = \lambda_1 b_2'b_1 = 0, \quad \text{so} \quad \mu = 0.

(Y'Y)b_2 = \lambda_2 b_2,    (3.B.5)

B = [b_1 \; b_2 \; \cdots \; b_p].    (3.B.6)
Appendix 3B. Principal-component analysis

F = YB    (3.B.7)

F'F = B'Y'YB = \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_p \end{bmatrix},    (3.B.8)

f_i'f_i = \lambda_i, \qquad f_i'f_k = 0 \;\; (i \neq k), \qquad i = 1, \ldots, p.    (3.B.9)
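A short numpy sketch of Eqs. (3.B.4) through (3.B.8): the columns of B are the eigenvectors of Y'Y, and the resulting component scores F = YB are mutually orthogonal with f_i'f_i = λ_i. The data matrix below is randomly generated and purely illustrative:

import numpy as np

rng = np.random.default_rng(0)
Y = rng.standard_normal((100, 3)) @ np.array([[2.0, 0.5, 0.0],
                                              [0.0, 1.0, 0.3],
                                              [0.0, 0.0, 0.2]])   # 100 observations on 3 correlated variables (assumed data)

lam, B = np.linalg.eigh(Y.T @ Y)        # eigenvalues and orthonormal eigenvectors of Y'Y, Eqs. (3.B.4)-(3.B.6)
F = Y @ B                               # principal-component scores, Eq. (3.B.7)

print(np.round(F.T @ F, 6))             # diagonal matrix, Eq. (3.B.8)
print(np.round(lam, 6))                 # matches the diagonal: f_i'f_i = lambda_i, Eq. (3.B.9)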