Topics in Linear Algebra
Econ 106 - Mathematical Economics
School of Economics
University of the Philippines-Diliman
Matrices, Vectors, and Scalars
Definition
A matrix is a rectangular array of numbers (or symbols representing numbers) and is written in the form
\[
A = \begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{m1} & a_{m2} & \cdots & a_{mn}
\end{bmatrix}.
\]
The number a_{ij} in the array is called an entry of the matrix A.
Each entry has double subscripts which indicate the address of the entry, e.g., a_{21} corresponds to the entry located in the 2nd row, 1st column.
Alternatively, we can denote the above matrix A as A = [a_{ij}]_{m×n}.
Matrices, Vectors, and Scalars
Remarks:
A matrix with m rows and n columns is referred to as an m × n matrix, and m and n are called the dimensions of the matrix.
If m = n, the matrix is called a square matrix of order n.
A matrix with only one row is called a row vector. Similarly, a matrix with only one column is called a column vector. Either one is called a vector. A vector with n entries is called an n-vector.
Matrices, Vectors, and Scalars
Definition
A scalar α is a single number which may be real (i.e., α ∈ R) or complex (i.e., α ∈ C).
On notation:
Matrices will be denoted by capital Latin letters (A, B, C).
Vectors will be denoted by lowercase Latin letters (x, y, z).
Scalars will be denoted by lowercase Greek letters (α, β, γ).
Matrices, Vectors, and Scalars
Definition (Transpose of a Matrix)
Let A = [a_{ij}] be an m × n matrix with real entries (possibly complex). Then the transpose of A is the n × m matrix A^T obtained by interchanging the rows and columns of A, so that the (i, j)th entry of A^T is a_{ji}.
If x is an n × 1 matrix (or a column vector), we write
\[
x = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}.
\]
Matrices, Vectors, and Scalars
If we wish to write x as a row vector, we use the transpose notation: x^T = [x_1, x_2, ..., x_n].
Remarks:
We adopt the convention that, unless specified otherwise, a vector is a column vector.
The set of real numbers is denoted by R, and the set of n-vectors with real entries is denoted by R^n.
The matrix with all entries equal to zero is called the zero matrix or null matrix and is denoted by 0.
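For readers who want to experiment, here is a minimal sketch (not part of the original slides) of how these objects look in NumPy; the library calls are standard NumPy, and the specific entries are made up for illustration.

```python
import numpy as np

# An m x n matrix A = [a_ij] with m = 2, n = 3 (illustrative entries)
A = np.array([[1.0, 0.0, 2.0],
              [3.0, 1.0, 5.0]])

# A column 3-vector x and its transpose x^T (a row vector)
x = np.array([[1.0], [2.0], [3.0]])    # shape (3, 1)
x_T = x.T                              # shape (1, 3)

# The zero (null) matrix with the same dimensions as A
Z = np.zeros_like(A)

print(A.shape, x.shape, x_T.shape)     # (2, 3) (3, 1) (1, 3)
print(Z)
```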
Operations on Matrices, Vectors, and Scalars
Definition (Equality of Matrices)
Two matrices A and B are said to be equal if and only if A and B have the same number of rows and the same number of columns and a_{ij} = b_{ij} for all i and all j.
Recall from Math 30: Addition and Subtraction
Let A and B be m × n matrices. The sum of A and B, denoted by A + B, is the m × n matrix obtained by adding the entries of A and B with the same 'address' (that is, adding b_{ij} to a_{ij}).
The difference A − B is obtained by subtracting b_{ij} from a_{ij}.
Addition of Matrices
Theorem (Recall some basic properties)
Let A, B, and C be m × n matrices. Then
1. A + B = B + A   (Commutative Property)
2. A + (B + C) = (A + B) + C   (Associative Property)
3. A + 0 = A = 0 + A   (Identity for Addition)
Scalar Multiplication
Definition
Let A be an m × n matrix and let α be a scalar. The scalar multiple of α and A, denoted by αA, is the m × n matrix obtained by multiplying each entry of A by α, that is, αA = [αa_{ij}]. This operation is called scalar multiplication.
Example. Let α = 3 and
\[
B = \begin{bmatrix} 0 & -2 & 3 \\ 2 & 1 & 8 \end{bmatrix}.
\]
Then
\[
\alpha B = \begin{bmatrix} 3(0) & 3(-2) & 3(3) \\ 3(2) & 3(1) & 3(8) \end{bmatrix}
= \begin{bmatrix} 0 & -6 & 9 \\ 6 & 3 & 24 \end{bmatrix}.
\]
Scalar Multiplication
Theorem (Recall some basic properties)
Let A = [a_{ij}], B = [b_{ij}], and α, β be scalars. Then
1. (α + β)A = αA + βA
2. α(βA) = (αβ)A
3. α(A + B) = αA + αB
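A quick sketch (NumPy assumed, entries made up) showing that entrywise addition and scalar multiplication behave as the two theorems above state.

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, -2], [2, 1]])
alpha, beta = 3, -1

# Entrywise sum, difference, and scalar multiple
print(A + B)
print(A - B)
print(alpha * B)

# Spot checks of the stated properties
assert np.array_equal(A + B, B + A)                              # commutativity
assert np.array_equal((alpha + beta) * A, alpha * A + beta * A)  # distributivity over scalars
```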
Inner Product
Definition (Inner Product)
Let x and y ∈ R^n. The inner product (dot product) of x and y is defined as
\[
x^T y = x_1 y_1 + x_2 y_2 + \cdots + x_n y_n = \sum_{i=1}^{n} x_i y_i.
\]
N.B. The inner product of two vectors is a scalar.
Inner Product
Theorem.
Let x, y, z ∈ R^n. Then
1. x^T y = y^T x
2. x^T (cy) = c x^T y, for all scalars c
3. (x + y)^T z = x^T z + y^T z
4. x^T x ≥ 0
5. x^T x = 0 if and only if x = 0.
Inner Product
Example.
Let x^T = [x_1, x_2, ..., x_n] be a consumption vector, where x_j denotes consumption of the jth good.
Let p^T = [p_1, p_2, ..., p_n] be the price vector, where p_j denotes the price of good j.
Then
\[
p^T x = p_1 x_1 + p_2 x_2 + \cdots + p_n x_n = \sum_{i=1}^{n} p_i x_i
\]
is the total consumption expenditure.
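As a small illustration of the expenditure formula (the prices and quantities below are invented), the inner product p^T x is a single NumPy call.

```python
import numpy as np

# Hypothetical prices and consumption bundle for n = 3 goods
p = np.array([2.0, 5.0, 1.5])    # price vector p
x = np.array([4.0, 1.0, 10.0])   # consumption vector x

# Total expenditure p^T x; the inner product of two vectors is a scalar
expenditure = p @ x              # equivalently np.dot(p, x)
print(expenditure)               # 2*4 + 5*1 + 1.5*10 = 28.0
```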
Matrix Multiplication
Definition (Matrix Multiplication)
Let A be an m × n matrix and B an n × p matrix. The product AB is defined as the m × p matrix C = [c_{ij}] with
\[
c_{ij} = (A_{i\cdot})(B_{\cdot j}),
\]
i.e., the (i, j)-entry is the inner product of the ith row of A and the jth column of B.
Example. Let
\[
A = \begin{bmatrix} 1 & 0 \\ 3 & 1 \\ 2 & 5 \end{bmatrix}
\quad\text{and}\quad
B = \begin{bmatrix} 6 & 2 \\ 0 & 1 \end{bmatrix}.
\]
Then
\[
AB = \begin{bmatrix} 1(6)+0(0) & 1(2)+0(1) \\ 3(6)+1(0) & 3(2)+1(1) \\ 2(6)+5(0) & 2(2)+5(1) \end{bmatrix}
= \begin{bmatrix} 6 & 2 \\ 18 & 7 \\ 12 & 9 \end{bmatrix}.
\]
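A short sketch (assuming NumPy) reproducing the product above; the @ operator implements exactly the row-by-column inner products of the definition.

```python
import numpy as np

A = np.array([[1, 0],
              [3, 1],
              [2, 5]])        # 3 x 2
B = np.array([[6, 2],
              [0, 1]])        # 2 x 2

# (3 x 2)(2 x 2) -> 3 x 2; entry (i, j) is the inner product of row i of A and column j of B
C = A @ B
print(C)                      # [[ 6  2] [18  7] [12  9]]
```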
Matrix Multiplication
Remark: A system of linear equations can be written in matrix form.
For example,
\[
\begin{cases} 3x + 5y = 15 \\ 4x - 7y = 28 \end{cases}
\]
may be written as
\[
\begin{bmatrix} 3 & 5 \\ 4 & -7 \end{bmatrix}
\begin{bmatrix} x \\ y \end{bmatrix}
= \begin{bmatrix} 15 \\ 28 \end{bmatrix}.
\]
Matrix Multiplication
Remarks:
1. The matrix product AB is defined only if the number of columns of A is equal to the number of rows of B. Thus, AB may be defined while BA is not.
Take, for example, a matrix A = [a_{ij}]_{2×3} and another matrix B = [b_{ij}]_{3×4}; then AB is defined but BA is not.
2. Matrix multiplication is not commutative.
Example.
\[
A = \begin{bmatrix} 3 & 2 \\ 1 & 0 \end{bmatrix}, \qquad
B = \begin{bmatrix} 1 & 2 \\ 0 & 3 \end{bmatrix}
\]
\[
AB = \begin{bmatrix} 3 & 12 \\ 1 & 2 \end{bmatrix}, \qquad
BA = \begin{bmatrix} 5 & 2 \\ 3 & 0 \end{bmatrix}
\]
Clearly, AB ≠ BA.
The Identity Matrix
Definition (Identity Matrix)
Let A be a square (n × n) matrix. Then the entries a_{11}, a_{22}, ..., a_{nn} constitute the main diagonal or, simply, the diagonal of A.
An identity matrix is a square matrix in which every entry on the main diagonal is 1 and every entry off the main diagonal is 0.
The identity matrix defined above is denoted by I_n, or by I when the context is clear.
Examples.
\[
I_2 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \qquad
I_4 = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}.
\]
The Identity Matrix
Multiplication by an identity matrix. Verify!
\[
\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}
\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}
= \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}.
\]
In general, AI = A = IA.
Matrix Operations
Theorem (Recall some properties)
1. A(BC) = (AB)C
2. A(B ± C) = AB ± AC and (A ± B)C = AC ± BC
3. AI = A = IA
4. c(AB) = (cA)B = A(cB)
5. A0 = 0 = 0A.
Transpose of a Matrix
Definition (Transpose of a Matrix)
Let A = [a_{ij}] be an m × n matrix with real entries (possibly complex). Then the transpose of A is the n × m matrix A^T obtained by interchanging the rows and columns of A, so that the (i, j)th entry of A^T is a_{ji}.
Example.
\[
A = \begin{bmatrix} 3 & 0 & -1 \\ 5 & 4 & 6 \end{bmatrix}
\implies
A^T = \begin{bmatrix} 3 & 5 \\ 0 & 4 \\ -1 & 6 \end{bmatrix}.
\]
Transpose of a Matrix
Some Properties:
1. (A^T)^T = A
2. (A ± B)^T = A^T ± B^T
3. (cA)^T = cA^T
4. (AB)^T = B^T A^T.
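A minimal numerical check (NumPy assumed, random integer matrices) of the reversal rule (AB)^T = B^T A^T.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-3, 4, size=(2, 3))
B = rng.integers(-3, 4, size=(3, 2))

# The transpose of a product is the product of the transposes in reverse order
assert np.array_equal((A @ B).T, B.T @ A.T)
print((A @ B).T)
```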
Symmetric Matrices
Definition (Symmetric Matrix)
A square matrix A is said to be symmetric if and only if A^T = A.
Example.
\[
A = \begin{bmatrix} 3 & 2 & 5 \\ 2 & 0 & 1 \\ 5 & 1 & 7 \end{bmatrix} = A^T
\]
Determinant of a Matrix
Recall:
Given a 2 × 2 matrix
\[
A = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix},
\]
the determinant is given by
\[
\det A = a_{11} a_{22} - a_{21} a_{12}.
\]
Determinant of a Matrix
The determinants of matrices of order greater than 2 are defined by induction; i.e., the determinant of a 3 × 3 matrix can be obtained from determinants of 2 × 2 matrices, the determinant of a 4 × 4 matrix can be obtained from 3 × 3 determinants, and so on.
Determinant of a Matrix
Definition
Let A = [a_{ij}] be a square matrix.
1. The minor of a_{ij}, denoted by m_{ij}, is the determinant of the submatrix obtained by deleting the ith row and the jth column of A.
2. The cofactor of a_{ij}, denoted by c_{ij}, is defined as
\[
c_{ij} = (-1)^{i+j} m_{ij}.
\]
Determinant of a Matrix
Example.
\[
A = \begin{bmatrix} 2 & 1 & 3 \\ 4 & 5 & 0 \\ 6 & 2 & 1 \end{bmatrix}.
\]
Find the minor m_{21} and the cofactor c_{21} of a_{21}.
Solution.
To find m_{21} using the definition provided above, we delete the 2nd row and the 1st column. Thus, the minor of a_{21} is
\[
m_{21} = \det \begin{bmatrix} 1 & 3 \\ 2 & 1 \end{bmatrix} = 1(1) - 2(3) = -5,
\]
while its cofactor is
\[
c_{21} = (-1)^{2+1} m_{21} = -(-5) = 5.
\]
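One way to compute a minor and a cofactor numerically is sketched below (NumPy assumed; the helper name `cofactor` is ours, not from the slides). Indices are 0-based, so the slide's a_21 is entry (1, 0).

```python
import numpy as np

def cofactor(A, i, j):
    """Cofactor c_ij = (-1)^(i+j) * m_ij, where the minor m_ij is the determinant
    of A with row i and column j deleted (0-based indices)."""
    sub = np.delete(np.delete(A, i, axis=0), j, axis=1)
    return (-1) ** (i + j) * np.linalg.det(sub)

A = np.array([[2, 1, 3],
              [4, 5, 0],
              [6, 2, 1]], dtype=float)

print(cofactor(A, 1, 0))   # 5.0, matching c_21 = 5 in the slide
```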
Determinant of a Matrix
Let A = [a_{ij}]_{3×3} and consider the following array of numbers:
\[
\begin{array}{ccc}
a_{11} c_{11} & a_{12} c_{12} & a_{13} c_{13} \\
a_{21} c_{21} & a_{22} c_{22} & a_{23} c_{23} \\
a_{31} c_{31} & a_{32} c_{32} & a_{33} c_{33}
\end{array}
\]
It can be shown that the row sums and column sums of the above array are all equal to one another.
This common sum is defined as the determinant of the 3 × 3 matrix A.
Determinant of a Matrix
Definition (Cofactor expansion along row k)
Let A = [a_{ij}]_{n×n}. Then the determinant of A in terms of the row-k elements and their respective cofactors c_{kj} is
\[
\det A = \sum_{j=1}^{n} a_{kj} c_{kj}.
\]
N.B. The cofactor expansion along a column j is defined similarly.
N.B. The determinant of a matrix A is unique, which means that regardless of the choice of row or column for cofactor expansion, the resulting quantity will be the same.
Life Hack: When performing cofactor expansion, choose the row or column with the most 0 entries.
Determinant of a Matrix
Example. Let
\[
A = \begin{bmatrix} 2 & 1 & 3 \\ 4 & 5 & 0 \\ 1 & 2 & 1 \end{bmatrix}.
\]
Find the determinant using cofactor expansion.
Solution.
Using the first row to evaluate the determinant of A, we have
\[
\det A = a_{11} c_{11} + a_{12} c_{12} + a_{13} c_{13}.
\]
Determinant of a Matrix
(Solution continued)
\[
A = \begin{bmatrix} 2 & 1 & 3 \\ 4 & 5 & 0 \\ 1 & 2 & 1 \end{bmatrix}.
\]
\[
c_{11} = (-1)^{1+1} \det \begin{bmatrix} 5 & 0 \\ 2 & 1 \end{bmatrix} = 5
\qquad
c_{12} = (-1)^{1+2} \det \begin{bmatrix} 4 & 0 \\ 1 & 1 \end{bmatrix} = -4
\qquad
c_{13} = (-1)^{1+3} \det \begin{bmatrix} 4 & 5 \\ 1 & 2 \end{bmatrix} = 3.
\]
\[
\det A = a_{11} c_{11} + a_{12} c_{12} + a_{13} c_{13} = 2(5) + 1(-4) + 3(3) = 15.
\]
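The cofactor expansion can be coded directly; the recursive sketch below (NumPy assumed; the helper name `det_cofactor` is ours) is only meant to mirror the definition, since it runs in factorial time.

```python
import numpy as np

def det_cofactor(A):
    """Determinant by cofactor expansion along the first row (illustrative only)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)
        total += (-1) ** j * A[0, j] * det_cofactor(minor)
    return total

A = [[2, 1, 3], [4, 5, 0], [1, 2, 1]]
print(det_cofactor(A), np.linalg.det(A))   # both evaluate to 15 (up to rounding)
```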
Determinant of a Matrix
Using the same matrix A as in the previous example, let us find the determinant using cofactor expansion along the 3rd column:
\[
\det A = a_{13} c_{13} + a_{23} c_{23} + a_{33} c_{33}.
\]
\[
c_{13} = (-1)^{1+3} \det \begin{bmatrix} 4 & 5 \\ 1 & 2 \end{bmatrix} = 3
\qquad
c_{23} = (-1)^{2+3} \det \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix} = -3
\qquad
c_{33} = (-1)^{3+3} \det \begin{bmatrix} 2 & 1 \\ 4 & 5 \end{bmatrix} = 6.
\]
Thus,
\[
\det A = 3(3) + 0(-3) + 1(6) = 15.
\]
Determinant of a Matrix
Example. Let
\[
A = \begin{bmatrix}
1 & 0 & 4 & 2 \\
-1 & 8 & 3 & 5 \\
2 & 1 & 1 & 4 \\
0 & 4 & -3 & 0
\end{bmatrix}.
\]
Evaluate the determinant using cofactor expansion.
Solution.
We use the 4th row to evaluate det A (since it has the most 0 entries; we noted this earlier!). This means that we only need to find two cofactors.
\[
\det A = a_{41} c_{41} + a_{42} c_{42} + a_{43} c_{43} + a_{44} c_{44}
= 0(c_{41}) + a_{42} c_{42} + a_{43} c_{43} + 0(c_{44})
\]
Determinant of a Matrix
(Solution continued)
\[
c_{42} = (-1)^{4+2} \det \begin{bmatrix} 1 & 4 & 2 \\ -1 & 3 & 5 \\ 2 & 1 & 4 \end{bmatrix} = 49
\qquad
c_{43} = (-1)^{4+3} \det \begin{bmatrix} 1 & 0 & 2 \\ -1 & 8 & 5 \\ 2 & 1 & 4 \end{bmatrix} = 7
\]
Hence,
\[
\det A = 4(49) + (-3)(7) = 175.
\]
Determinant of a Matrix
Theorem
Let A and B be square matrices of order n. Then
1. det A^T = det A.
2. det(AB) = (det A)(det B).
Example.
\[
A = \begin{bmatrix} 3 & 1 \\ 2 & 4 \end{bmatrix}
\implies
A^T = \begin{bmatrix} 3 & 2 \\ 1 & 4 \end{bmatrix}
\]
det A = 10 = det A^T.
Determinant of a Matrix
Example.
\[
A = \begin{bmatrix} 3 & 1 \\ 2 & 4 \end{bmatrix}, \qquad
B = \begin{bmatrix} 1 & 0 \\ -3 & 2 \end{bmatrix}.
\]
det A = 10, det B = 2.
\[
AB = \begin{bmatrix} 0 & 2 \\ -10 & 8 \end{bmatrix}
\]
det AB = 20 = (det A)(det B).
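Both determinant identities can be spot-checked in NumPy with the matrices from the example (a sketch; small floating-point rounding is expected).

```python
import numpy as np

A = np.array([[3, 1], [2, 4]], dtype=float)
B = np.array([[1, 0], [-3, 2]], dtype=float)

print(np.linalg.det(A), np.linalg.det(A.T))                        # det A^T = det A = 10
print(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))   # det(AB) = det A * det B = 20
```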
Triangular Matrices
Definition
1. A square matrix is said to be upper triangular if and only if its entries below the main diagonal are zeros.
2. A square matrix is said to be lower triangular if and only if its entries above the main diagonal are zeros.
3. A square matrix is said to be a diagonal matrix if and only if all its entries above and below the main diagonal are zeros.
These matrices are called triangular matrices.
Triangular Matrices
Examples. Let
\[
A = \begin{bmatrix} 1 & 0 & 2 \\ 0 & 8 & 5 \\ 0 & 0 & 4 \end{bmatrix}, \quad
B = \begin{bmatrix} 1 & 0 & 0 \\ 2 & 5 & 0 \\ 3 & 4 & 6 \end{bmatrix}, \quad
C = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 5 & 0 \\ 0 & 0 & 6 \end{bmatrix}.
\]
From our definition, we know that
A is an upper triangular matrix,
B is a lower triangular matrix,
C is a diagonal matrix.
Determinant of a Triangular Matrix
Theorem
Let A = [a_{ij}] be a triangular matrix. Then
\[
\det A = \prod_{i=1}^{n} a_{ii}.
\]
In other words, the determinant is the product of the diagonal entries of A.
Let us look at an illustration of the proof for an upper triangular matrix A with n = 3,
\[
A = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ 0 & a_{22} & a_{23} \\ 0 & 0 & a_{33} \end{bmatrix}.
\]
Determinant of a Triangular Matrix
Using cofactor expansion along the first column, we have
\[
\det A = a_{11} (-1)^{2} \det \begin{bmatrix} a_{22} & a_{23} \\ 0 & a_{33} \end{bmatrix} + 0 + 0
= a_{11}(a_{22} a_{33} - 0)
= a_{11} a_{22} a_{33}.
\]
Exercise. Verify the same result using cofactor expansion along the 3rd row.
Inverse of a Matrix
Definition (Inverse of a Matrix)
Let A be a square matrix of order n. The inverse of A is a square matrix B that satisfies AB = I = BA.
Example. Let
\[
A = \begin{bmatrix} 1 & -1 \\ 2 & 0 \end{bmatrix}, \qquad
B = \begin{bmatrix} 0 & 1/2 \\ -1 & 1/2 \end{bmatrix}.
\]
\[
AB = \begin{bmatrix} 1 & -1 \\ 2 & 0 \end{bmatrix}
\begin{bmatrix} 0 & 1/2 \\ -1 & 1/2 \end{bmatrix}
= \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = I.
\]
\[
BA = \begin{bmatrix} 0 & 1/2 \\ -1 & 1/2 \end{bmatrix}
\begin{bmatrix} 1 & -1 \\ 2 & 0 \end{bmatrix}
= \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = I.
\]
Hence, B is the inverse of A.
Inverse of a Matrix
Remarks:
1. From the definition, it is clear that if B is an inverse of A, then A is an inverse of B.
2. The inverse of a matrix is unique. For if B and C are both inverses of A, then
\[
AB = I = BA, \qquad AC = I = CA.
\]
Hence,
\[
B = BI = B(AC) = (BA)C = IC = C.
\]
Inverse of a Matrix
Question: Do all square matrices have inverses? Let us take
\[
A = \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}.
\]
Suppose that
\[
AB = \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}
\begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{bmatrix}
= \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}.
\]
This implies
\[
\begin{cases} b_{11} + b_{21} = 1 \\ b_{11} + b_{21} = 0, \end{cases}
\]
which is clearly a contradiction.
So there are (square) matrices that do not have inverses!
Inverse of a Matrix
Notation: From now on we will denote the inverse of A by A^{-1}.
Definition
Let A be a square matrix. A is said to be nonsingular if and only if A has an inverse. Otherwise, A is said to be singular.
Theorem
Let A and B be nonsingular square matrices of the same order. Then
1. (A^{-1})^{-1} = A
2. (AB)^{-1} = B^{-1} A^{-1}
3. (A^T)^{-1} = (A^{-1})^T
Inverse of a Matrix
Proof of (1) is immediate from the definition.
Proof of (2): Observe that
\[
(AB)(B^{-1} A^{-1}) = A(BB^{-1})A^{-1} = AIA^{-1} = AA^{-1} = I.
\]
On the other hand,
\[
(B^{-1} A^{-1})(AB) = B^{-1}(A^{-1} A)B = B^{-1} I B = B^{-1} B = I.
\]
Inverse of a Matrix
Proof of (3): By definition of the inverse of a matrix A,
\[
A^{-1} A = I, \qquad A A^{-1} = I.
\]
Taking the transpose of both sides of both equations, and noting that the transpose of a product is the product of the transposes in reverse order (see the properties of the transpose in previous slides) and that the transpose of I is itself, we get
\[
A^T (A^{-1})^T = I, \qquad (A^{-1})^T A^T = I.
\]
Hence,
\[
(A^T)^{-1} = (A^{-1})^T.
\]
Inverse of a Matrix
Theorem
A square matrix is nonsingular if and only if its determinant is
nonzero.
Inverse of a Matrix
Definition (Cofactor matrix)
Let A = [a_{ij}]_{n×n}. The cofactor matrix of A is the matrix
\[
\operatorname{cof} A = \begin{bmatrix}
c_{11} & c_{12} & \cdots & c_{1n} \\
c_{21} & c_{22} & \cdots & c_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
c_{n1} & c_{n2} & \cdots & c_{nn}
\end{bmatrix},
\]
where c_{ij} is the cofactor of a_{ij}.
The adjoint matrix of A, denoted by adj A, is defined as
\[
\operatorname{adj} A = (\operatorname{cof} A)^T.
\]
Inverse of a Matrix
Theorem
Let A be a nonsingular matrix. Then
\[
A^{-1} = \frac{1}{\det A} \operatorname{adj} A.
\]
Inverse of a Matrix
Example. Find the inverse of the matrix
\[
A = \begin{bmatrix} 3 & 0 & 1 \\ 2 & 1 & 4 \\ 1 & 0 & 1 \end{bmatrix}.
\]
Solution.
Note that det A = 2; hence, A is nonsingular.
Computing the cofactors of A,
\[
c_{11} = (-1)^{2} \det \begin{bmatrix} 1 & 4 \\ 0 & 1 \end{bmatrix} = 1
\qquad
c_{12} = (-1)^{3} \det \begin{bmatrix} 2 & 4 \\ 1 & 1 \end{bmatrix} = 2
\]
Inverse of a Matrix
\[
c_{13} = (-1)^{4} \det \begin{bmatrix} 2 & 1 \\ 1 & 0 \end{bmatrix} = -1
\qquad
c_{21} = (-1)^{3} \det \begin{bmatrix} 0 & 1 \\ 0 & 1 \end{bmatrix} = 0
\qquad
c_{22} = (-1)^{4} \det \begin{bmatrix} 3 & 1 \\ 1 & 1 \end{bmatrix} = 2
\]
\[
c_{23} = (-1)^{5} \det \begin{bmatrix} 3 & 0 \\ 1 & 0 \end{bmatrix} = 0
\qquad
c_{31} = (-1)^{4} \det \begin{bmatrix} 0 & 1 \\ 1 & 4 \end{bmatrix} = -1
\qquad
c_{32} = (-1)^{5} \det \begin{bmatrix} 3 & 1 \\ 2 & 4 \end{bmatrix} = -10
\]
Inverse of a Matrix
\[
c_{33} = (-1)^{6} \det \begin{bmatrix} 3 & 0 \\ 2 & 1 \end{bmatrix} = 3
\]
Hence,
\[
\operatorname{cof} A = \begin{bmatrix} 1 & 2 & -1 \\ 0 & 2 & 0 \\ -1 & -10 & 3 \end{bmatrix},
\qquad
\operatorname{adj} A = \begin{bmatrix} 1 & 0 & -1 \\ 2 & 2 & -10 \\ -1 & 0 & 3 \end{bmatrix}.
\]
Therefore, using the theorem above, we have
\[
A^{-1} = \frac{1}{2} \begin{bmatrix} 1 & 0 & -1 \\ 2 & 2 & -10 \\ -1 & 0 & 3 \end{bmatrix}
= \begin{bmatrix} 1/2 & 0 & -1/2 \\ 1 & 1 & -5 \\ -1/2 & 0 & 3/2 \end{bmatrix}.
\]
Exercise. Verify that AA^{-1} = A^{-1}A.
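The adjoint formula for the inverse can be translated almost literally into code. The sketch below (NumPy assumed; `inverse_via_adjoint` is our own helper name) reproduces the inverse computed above; in practice np.linalg.inv would be used instead.

```python
import numpy as np

def inverse_via_adjoint(A):
    """A^{-1} = adj(A) / det(A), where adj(A) is the transpose of the cofactor matrix."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    cof = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            sub = np.delete(np.delete(A, i, axis=0), j, axis=1)
            cof[i, j] = (-1) ** (i + j) * np.linalg.det(sub)
    d = np.linalg.det(A)
    if np.isclose(d, 0.0):
        raise ValueError("A is singular; no inverse exists")
    return cof.T / d

A = [[3, 0, 1], [2, 1, 4], [1, 0, 1]]
print(inverse_via_adjoint(A))   # [[ 0.5  0.  -0.5] [ 1.   1.  -5. ] [-0.5  0.   1.5]]
```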
Solving Linear Systems
Recall: A system of linear equations can be written in matrix form.
For example,
\[
\begin{cases} 3x + 5y = 15 \\ 4x - 7y = 28 \end{cases}
\]
may be written as
\[
\begin{bmatrix} 3 & 5 \\ 4 & -7 \end{bmatrix}
\begin{bmatrix} x \\ y \end{bmatrix}
= \begin{bmatrix} 15 \\ 28 \end{bmatrix}.
\]
Solving Linear Systems
Remark: In general, a linear system is given by
\[
\begin{cases}
a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n = b_1 \\
a_{21} x_1 + a_{22} x_2 + \cdots + a_{2n} x_n = b_2 \\
\quad\vdots \\
a_{m1} x_1 + a_{m2} x_2 + \cdots + a_{mn} x_n = b_m.
\end{cases}
\]
What happens if we:
1. collect all the coefficients a_{ij} and store them as a matrix A;
2. collect all the unknowns x_j and store them as a column vector x; and
3. collect all the scalar quantities b_i on the right-hand side of the system and store them as a column vector b?
Solving Linear Systems
Then we have
\[
Ax = b,
\]
which is usually referred to as a matrix equation.
In a more familiar form (the less compact form),
\[
\begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{m1} & a_{m2} & \cdots & a_{mn}
\end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}
= \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{bmatrix}.
\]
Clearly, A is an m × n matrix, x is an n-vector, and b is an m-vector.
Solving Linear Systems
Goal: We want to solve the matrix equation for x.
Setting: We will deal with the case where A is a square matrix of order n.
Notice that the matrix equation looks similar to the familiar single-variable case of solving for an unknown x,
\[
\alpha x = \beta,
\]
for some constants α and β.
Now the obvious approach to find x is to multiply both sides of the equation by 1/α. This gives
\[
x = \frac{1}{\alpha} \beta.
\]
N.B. 1/α is the multiplicative inverse of α.
Solution by Matrix Inversion
Recall:
Let A be a square matrix of order n. If A is nonsingular, then A^{-1} exists.
We can now go back to our matrix equation Ax = b. Suppose A is nonsingular. Then
\[
A^{-1} A x = A^{-1} b
\quad\Longrightarrow\quad
I x = A^{-1} b
\quad\Longrightarrow\quad
x = A^{-1} b.
\]
Remark: The matrix inverse A^{-1} acts like the multiplicative inverse in the single-variable case.
We shall call this method of solving the matrix equation solution by matrix inversion.
Solution by Matrix Inversion
Remark: Since A^{-1} is unique, the solution to our matrix equation is also unique.
In fact, our next theorem states that the uniqueness of the solution also implies that the matrix A is nonsingular.
Theorem
Let A be a square matrix. Then
1. Ax = b has a unique solution if and only if A is nonsingular.
2. The only solution to Ax = 0 is x = 0 if and only if A is nonsingular.
Solution by Matrix Inversion
Example. Solve the linear system
\[
\begin{cases}
3x_1 + x_3 = 4 \\
2x_1 + x_2 + 4x_3 = 6 \\
x_1 + x_3 = 8
\end{cases}
\]
In matrix form, the above system is equivalent to
\[
\begin{bmatrix} 3 & 0 & 1 \\ 2 & 1 & 4 \\ 1 & 0 & 1 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}
= \begin{bmatrix} 4 \\ 6 \\ 8 \end{bmatrix}.
\]
From our previous computation,
\[
A^{-1} = \begin{bmatrix} 1/2 & 0 & -1/2 \\ 1 & 1 & -5 \\ -1/2 & 0 & 3/2 \end{bmatrix}.
\]
Solution by Matrix Inversion
Solve for x: x = A^{-1} b.
\[
x = \begin{bmatrix} 1/2 & 0 & -1/2 \\ 1 & 1 & -5 \\ -1/2 & 0 & 3/2 \end{bmatrix}
\begin{bmatrix} 4 \\ 6 \\ 8 \end{bmatrix}
= \begin{bmatrix} (1/2)(4) + 0(6) + (-1/2)(8) \\ 1(4) + 1(6) + (-5)(8) \\ (-1/2)(4) + 0(6) + (3/2)(8) \end{bmatrix}
= \begin{bmatrix} -2 \\ -30 \\ 10 \end{bmatrix}.
\]
Thus, x = [-2, -30, 10]^T.
Another way of presenting the solution: x_1 = -2, x_2 = -30, x_3 = 10.
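A sketch of the same computation in NumPy (using the slide's A and b): forming A^{-1}b explicitly, and the numerically preferred np.linalg.solve.

```python
import numpy as np

A = np.array([[3, 0, 1],
              [2, 1, 4],
              [1, 0, 1]], dtype=float)
b = np.array([4, 6, 8], dtype=float)

# Solution by matrix inversion, x = A^{-1} b
x_inv = np.linalg.inv(A) @ b

# Equivalent result, but without forming the inverse explicitly
x_solve = np.linalg.solve(A, b)

print(x_inv, x_solve)   # both give [-2. -30. 10.]
```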
Solution by Matrix Inversion
Example. Solve the system of equations
\[
\begin{cases} 2x_1 - x_2 = 4 \\ 3x_1 + 4x_2 = 6 \end{cases}
\]
In matrix form,
\[
\begin{bmatrix} 2 & -1 \\ 3 & 4 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}
= \begin{bmatrix} 4 \\ 6 \end{bmatrix}.
\]
Since det A = 11, A^{-1} exists.
The cofactors of A are
\[
c_{11} = (-1)^{2}(4) = 4, \quad
c_{12} = (-1)^{3}(3) = -3, \quad
c_{21} = (-1)^{3}(-1) = 1, \quad
c_{22} = (-1)^{4}(2) = 2.
\]
Solution by Matrix Inversion
Hence,
\[
\operatorname{cof} A = \begin{bmatrix} 4 & -3 \\ 1 & 2 \end{bmatrix}
\implies
\operatorname{adj} A = \begin{bmatrix} 4 & 1 \\ -3 & 2 \end{bmatrix}.
\]
Thus,
\[
A^{-1} = \frac{1}{11} \begin{bmatrix} 4 & 1 \\ -3 & 2 \end{bmatrix}
= \begin{bmatrix} 4/11 & 1/11 \\ -3/11 & 2/11 \end{bmatrix}.
\]
Therefore,
\[
x = \begin{bmatrix} 4/11 & 1/11 \\ -3/11 & 2/11 \end{bmatrix}
\begin{bmatrix} 4 \\ 6 \end{bmatrix}
= \begin{bmatrix} 2 \\ 0 \end{bmatrix}.
\]
Solution by Cramer's Rule
Let us have another look at the matrix equation Ax = b.
From our previous result, the solution is given by
\[
x = A^{-1} b = \left( \frac{1}{\det A} \operatorname{adj} A \right) b.
\]
To illustrate our next method for solving linear systems, let us consider a nonsingular matrix A of order 3:
\[
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}
= \frac{1}{\det A}
\begin{bmatrix} c_{11} & c_{21} & c_{31} \\ c_{12} & c_{22} & c_{32} \\ c_{13} & c_{23} & c_{33} \end{bmatrix}
\begin{bmatrix} b_1 \\ b_2 \\ b_3 \end{bmatrix}.
\]
Solution by Cramer's Rule
Observe that
\[
x_1 = \frac{1}{\det A} (b_1 c_{11} + b_2 c_{21} + b_3 c_{31})
\]
\[
x_2 = \frac{1}{\det A} (b_1 c_{12} + b_2 c_{22} + b_3 c_{32})
\]
\[
x_3 = \frac{1}{\det A} (b_1 c_{13} + b_2 c_{23} + b_3 c_{33}).
\]
Let A_k be the matrix obtained by replacing the kth column of A by b. If k = 1, we have
\[
A_1 = \begin{bmatrix} b_1 & a_{12} & a_{13} \\ b_2 & a_{22} & a_{23} \\ b_3 & a_{32} & a_{33} \end{bmatrix}.
\]
Solution by Cramer's Rule
Computing det A_1 using cofactor expansion along column 1 yields
\[
\det A_1 = b_1 c_{11} + b_2 c_{21} + b_3 c_{31}.
\]
Thus,
\[
x_1 = \frac{1}{\det A} (b_1 c_{11} + b_2 c_{21} + b_3 c_{31}) = \frac{\det A_1}{\det A}.
\]
Similarly,
\[
x_2 = \frac{1}{\det A} (b_1 c_{12} + b_2 c_{22} + b_3 c_{32}) = \frac{\det A_2}{\det A}.
\]
Exercise. Derive the same result for A_3.
Solution by Cramer's Rule
Theorem (Cramer's Rule)
Let A be a nonsingular matrix of order n and b be an n-vector. Then the jth component of the unknown vector x in the matrix equation Ax = b is given by
\[
x_j = \frac{\det A_j}{\det A},
\]
where A_j is the matrix obtained by replacing the jth column of A by b.
Solution by Cramer's Rule
Example. We return to one of our previous examples and solve it using Cramer's Rule.
\[
\begin{bmatrix} 3 & 0 & 1 \\ 2 & 1 & 4 \\ 1 & 0 & 1 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}
= \begin{bmatrix} 4 \\ 6 \\ 8 \end{bmatrix}.
\]
\[
A_1 = \begin{bmatrix} 4 & 0 & 1 \\ 6 & 1 & 4 \\ 8 & 0 & 1 \end{bmatrix}, \quad \det A_1 = -4, \quad x_1 = \frac{\det A_1}{\det A} = -2
\]
\[
A_2 = \begin{bmatrix} 3 & 4 & 1 \\ 2 & 6 & 4 \\ 1 & 8 & 1 \end{bmatrix}, \quad \det A_2 = -60, \quad x_2 = \frac{\det A_2}{\det A} = -30
\]
\[
A_3 = \begin{bmatrix} 3 & 0 & 4 \\ 2 & 1 & 6 \\ 1 & 0 & 8 \end{bmatrix}, \quad \det A_3 = 20, \quad x_3 = \frac{\det A_3}{\det A} = 10
\]
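Cramer's Rule is easy to code as stated (a sketch assuming NumPy; `cramer_solve` is our own name). It reproduces the example above, though for large systems it is far slower than Gaussian elimination.

```python
import numpy as np

def cramer_solve(A, b):
    """Solve Ax = b by Cramer's rule: x_j = det(A_j) / det(A)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    d = np.linalg.det(A)
    if np.isclose(d, 0.0):
        raise ValueError("A is singular; Cramer's rule does not apply")
    x = np.empty(A.shape[1])
    for j in range(A.shape[1]):
        Aj = A.copy()
        Aj[:, j] = b                       # replace the jth column of A by b
        x[j] = np.linalg.det(Aj) / d
    return x

print(cramer_solve([[3, 0, 1], [2, 1, 4], [1, 0, 1]], [4, 6, 8]))   # [-2. -30. 10.]
```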
Eigenvalues and Eigenvectors
Motivation. Let
\[
A = \begin{bmatrix} 3 & -2 \\ 1 & 0 \end{bmatrix}, \quad
u = \begin{bmatrix} -1 \\ 1 \end{bmatrix}, \quad
v = \begin{bmatrix} 2 \\ 1 \end{bmatrix}.
\]
What happens to u and v when pre-multiplied by A?
\[
Av = \begin{bmatrix} 4 \\ 2 \end{bmatrix} = 2v,
\qquad
Au = \begin{bmatrix} -5 \\ -1 \end{bmatrix}.
\]
So Av is just 2v. Thus, A "stretches", or dilates, v.
Figure: Effect of multiplication by A
Eigenvalues and Eigenvectors
Recall: Let f(x) = 2x. We know that the action of f on x is to dilate by a factor of 2. In other words, f transforms x and turns it into 2x.
Going back to our motivation above, in a sense, we can think of a matrix A as a "function" that transforms the vectors u and v in our previous illustration.
In this section, we shall be concerned with 'special' vectors on which the action of A is quite simple.
We will study equations that look like this:
\[
Ax = 2x \qquad\text{or}\qquad Ax = -4x,
\]
or, in general,
\[
Ax = \lambda x,
\]
for some scalar λ.
Eigenvalues and Eigenvectors
Definition
Let A be a square matrix with real entries. A scalar λ is called an eigenvalue (or characteristic value) of A if and only if there exists a vector x ≠ 0 such that
\[
Ax = \lambda x.
\]
The vector x is called an eigenvector (or characteristic vector) associated with λ.
Eigenvalues and Eigenvectors
Example. Let
\[
A = \begin{bmatrix} 1 & -1 \\ 2 & 4 \end{bmatrix}.
\]
The scalar λ_1 = 2 is an eigenvalue of A since x = [1, -1]^T and λ_1 satisfy
\[
Ax = \begin{bmatrix} 1 & -1 \\ 2 & 4 \end{bmatrix}
\begin{bmatrix} 1 \\ -1 \end{bmatrix}
= \begin{bmatrix} 2 \\ -2 \end{bmatrix}
= 2 \begin{bmatrix} 1 \\ -1 \end{bmatrix}
= 2x.
\]
The scalar λ_2 = 3 is an eigenvalue of A since x = [1, -2]^T and λ_2 satisfy
\[
Ax = \begin{bmatrix} 1 & -1 \\ 2 & 4 \end{bmatrix}
\begin{bmatrix} 1 \\ -2 \end{bmatrix}
= \begin{bmatrix} 3 \\ -6 \end{bmatrix}
= 3 \begin{bmatrix} 1 \\ -2 \end{bmatrix}
= 3x.
\]
Eigenvalues and Eigenvectors
Remark: An eigenvector associated with an eigenvalue is not unique.
To illustrate, suppose λ is an eigenvalue of A and x is an eigenvector associated with λ, i.e., Ax = λx.
If β is a nonzero scalar, then y = βx is also an eigenvector associated with λ, since
\[
Ay = A(\beta x) = \beta A x = \beta(\lambda x) = \lambda(\beta x) = \lambda y.
\]
Question: How do we find eigenvalues and eigenvectors of a square matrix?
Eigenvalues and Eigenvectors
Theorem
1. B is nonsingular if and only if x = 0 is the only solution to the equation Bx = 0.
2. B is singular if and only if there exists x ≠ 0 satisfying the equation Bx = 0.
From the definition of an eigenvalue λ of A, we have
\[
Ax = \lambda x, \ x \neq 0
\iff
(A - \lambda I)x = 0, \ x \neq 0.
\]
Thus, the equation (A − λI)x = 0 has a solution x ≠ 0.
Therefore, the matrix A − λI is singular and so det(A − λI) = 0.
Eigenvalues and Eigenvectors
Theorem
A scalar λ is an eigenvalue of A if and only if det(A − λI) = 0.
Definition (Characteristic Polynomial)
The polynomial p(λ) = det(A − λI) is called the characteristic polynomial of the matrix A.
To answer our initial question, it is now clear that finding the eigenvalues of a given matrix A reduces to a root-finding problem, that is, finding the roots of the characteristic polynomial p(λ).
Recall: To find the roots of a polynomial p, we set p = 0.
Eigenvalues and Eigenvectors
Example. Let
\[
A = \begin{bmatrix} 1 & -1 \\ 2 & 4 \end{bmatrix}.
\]
\[
A - \lambda I = \begin{bmatrix} 1 & -1 \\ 2 & 4 \end{bmatrix}
- \lambda \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}
= \begin{bmatrix} 1-\lambda & -1 \\ 2 & 4-\lambda \end{bmatrix}.
\]
Setting p(λ) = 0 and solving for the roots of p,
\[
0 = p(\lambda) = \det(A - \lambda I) = (1-\lambda)(4-\lambda) + 2 = \lambda^2 - 5\lambda + 6 = (\lambda - 2)(\lambda - 3).
\]
Thus, the eigenvalues of A are λ_1 = 2, λ_2 = 3.
Eigenvalues and Eigenvectors
To obtain an eigenvector x associated with an eigenvalue λ, we solve the equation
\[
(A - \lambda I)x = 0.
\]
For λ_1 = 2, we have
\[
(A - 2I)x = \begin{bmatrix} 1-2 & -1 \\ 2 & 4-2 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}
= \begin{bmatrix} 0 \\ 0 \end{bmatrix}.
\]
Thus,
\[
\begin{cases} -x_1 - x_2 = 0 \\ 2x_1 + 2x_2 = 0 \end{cases}
\]
Hence, x_2 = -x_1.
Eigenvalues and Eigenvectors
Arbitrarily choose a value for x_1, say x_1 = 1. Thus,
\[
x = \begin{bmatrix} 1 \\ -1 \end{bmatrix}
\]
is an eigenvector associated with λ_1 = 2.
Remark: Do not choose x_1 = 0, since then x_1 and x_2 will both be zero and x cannot be an eigenvector (from our definition).
Eigenvalues and Eigenvectors
For λ_2 = 3, we have
\[
(A - 3I)x = \begin{bmatrix} 1-3 & -1 \\ 2 & 4-3 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}
= \begin{bmatrix} 0 \\ 0 \end{bmatrix}.
\]
Thus,
\[
\begin{cases} -2x_1 - x_2 = 0 \\ 2x_1 + x_2 = 0 \end{cases}
\]
Hence, x_2 = -2x_1.
Again, by arbitrarily choosing x_1 = 1, we get an eigenvector
\[
x = \begin{bmatrix} 1 \\ -2 \end{bmatrix}
\]
associated with λ_2 = 3.
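The whole eigenvalue/eigenvector computation is a single NumPy call; a sketch with the slide's matrix is below. NumPy normalizes eigenvectors to unit length, so they differ from [1, -1]^T and [1, -2]^T only by a nonzero scaling, which is allowed by the remark above.

```python
import numpy as np

A = np.array([[1, -1],
              [2, 4]], dtype=float)

eigvals, V = np.linalg.eig(A)   # eigenvalues and eigenvectors (columns of V)
print(eigvals)                  # [2. 3.]

# Check Av = lambda * v for each eigenpair
for lam, v in zip(eigvals, V.T):
    assert np.allclose(A @ v, lam * v)
```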
Eigenvalues of a Triangular Matrix
Consider the upper triangular matrix
\[
A = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ 0 & a_{22} & a_{23} \\ 0 & 0 & a_{33} \end{bmatrix}.
\]
From our previous results, the eigenvalues of A are obtained from the equation
\[
p(\lambda) = \det(A - \lambda I)
= \det \begin{bmatrix} a_{11}-\lambda & a_{12} & a_{13} \\ 0 & a_{22}-\lambda & a_{23} \\ 0 & 0 & a_{33}-\lambda \end{bmatrix} = 0.
\]
Notice that the new matrix A − λI is still upper triangular. From one of our previous theorems, we have
\[
\det(A - \lambda I) = (a_{11}-\lambda)(a_{22}-\lambda)(a_{33}-\lambda) = 0.
\]
Thus, the eigenvalues of A are λ_1 = a_{11}, λ_2 = a_{22}, λ_3 = a_{33}.
Eigenvalues of a Triangular Matrix
Theorem
Let A be a triangular matrix of order n. Then the eigenvalues of A are its diagonal entries, that is,
\[
\lambda_i = a_{ii}, \quad \text{for } i = 1, 2, \ldots, n.
\]
Eigenvalues of a Real Symmetric Matrix
Theorem
The eigenvalues of a real symmetric matrix are real.
Example. Consider the real symmetric matrix
\[
A = \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}.
\]
\[
p(\lambda) = \det(A - \lambda I) = \det \begin{bmatrix} 2-\lambda & 1 \\ 1 & 2-\lambda \end{bmatrix} = \lambda^2 - 4\lambda + 3.
\]
Solving for the roots of p(λ), we get λ_1 = 1, λ_2 = 3.
Quadratic Forms
Definition (Quadratic Forms in R^n)
Let A be a square matrix of order n. The quadratic form generated by A is
\[
Q(x) = x^T A x,
\]
where x ∈ R^n.
We illustrate quadratic forms in R^2 and derive some nice properties that can be generalized in R^n.
Quadratic Forms
Consider a square matrix A of order 2; the quadratic form generated by A is given by the expression Q(x) = x^T A x, x ∈ R^2. The quadratic form in R^2 is written explicitly as
\[
\begin{aligned}
Q(x) = x^T A x
&= \begin{bmatrix} x_1 & x_2 \end{bmatrix}
\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \\
&= \begin{bmatrix} x_1 & x_2 \end{bmatrix}
\begin{bmatrix} a_{11} x_1 + a_{12} x_2 \\ a_{21} x_1 + a_{22} x_2 \end{bmatrix} \\
&= a_{11} x_1^2 + a_{12} x_1 x_2 + a_{21} x_2 x_1 + a_{22} x_2^2 \\
&= a_{11} x_1^2 + (a_{12} + a_{21}) x_1 x_2 + a_{22} x_2^2.
\end{aligned}
\]
Quadratic Forms
Example. Let
\[
A = \begin{bmatrix} 2 & 1 \\ 3 & -1 \end{bmatrix}.
\]
The quadratic form generated by A is
\[
Q(x) = \begin{bmatrix} x_1 & x_2 \end{bmatrix}
\begin{bmatrix} 2 & 1 \\ 3 & -1 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}
= 2x_1^2 + 4x_1 x_2 - x_2^2.
\]
Quadratic Forms
Remark: A quadratic form written explicitly can be written in matrix form in many ways.
Example.
\[
\begin{aligned}
Q(x) = 3x_1^2 + 5x_1 x_2 + 2x_2^2
&= \begin{bmatrix} x_1 & x_2 \end{bmatrix}
\begin{bmatrix} 3 & 4 \\ 1 & 2 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \\
&= \begin{bmatrix} x_1 & x_2 \end{bmatrix}
\begin{bmatrix} 3 & 3 \\ 2 & 2 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \\
&= \begin{bmatrix} x_1 & x_2 \end{bmatrix}
\begin{bmatrix} 3 & 5/2 \\ 5/2 & 2 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}.
\end{aligned}
\]
N.B. Exactly one of the matrices generating a given quadratic form is symmetric.
Quadratic Forms
Question: How do we find the symmetric matrix that generates a given quadratic form?
Consider the quadratic form Q(x) = x^T A x. Define the matrix B as
\[
B = \frac{1}{2}\left( A + A^T \right).
\]
We show that B is a symmetric matrix that generates the same quadratic form generated by A. In particular,
\[
B^T = \frac{1}{2}\left( A + A^T \right)^T = \frac{1}{2}\left( A^T + A \right) = B.
\]
Quadratic Forms
Note that
\[
x^T B x = \frac{1}{2} x^T (A + A^T) x = \frac{1}{2} x^T A x + \frac{1}{2} x^T A^T x.
\]
Recall that x^T A x is a scalar, which means its transpose is itself, i.e.,
\[
x^T A x = (x^T A x)^T = x^T A^T x.
\]
Therefore,
\[
x^T B x = x^T A x.
\]
Quadratic Forms
Example. Let
\[
A = \begin{bmatrix} 1 & 0 \\ 4 & 3 \end{bmatrix}.
\]
\[
Q(x) = x_1^2 + 4x_1 x_2 + 3x_2^2
= \begin{bmatrix} x_1 & x_2 \end{bmatrix}
\begin{bmatrix} 1 & 0 \\ 4 & 3 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}.
\]
Using the method we have seen earlier, we have
\[
B = \frac{1}{2}\left( A + A^T \right)
= \frac{1}{2}\left( \begin{bmatrix} 1 & 0 \\ 4 & 3 \end{bmatrix} + \begin{bmatrix} 1 & 4 \\ 0 & 3 \end{bmatrix} \right)
= \begin{bmatrix} 1 & 2 \\ 2 & 3 \end{bmatrix}.
\]
\[
x^T B x = \begin{bmatrix} x_1 & x_2 \end{bmatrix}
\begin{bmatrix} 1 & 2 \\ 2 & 3 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}
= x_1^2 + 4x_1 x_2 + 3x_2^2.
\]
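A sketch (NumPy assumed) of the symmetrization trick: A and B = (A + A^T)/2 generate the same quadratic form, checked at an arbitrary point.

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [4.0, 3.0]])
B = 0.5 * (A + A.T)          # the symmetric matrix generating the same quadratic form

def Q(M, x):
    """Quadratic form x^T M x."""
    return x @ M @ x

x = np.array([2.0, -1.0])    # an arbitrary test point
print(Q(A, x), Q(B, x))      # both equal 4 - 8 + 3 = -1.0
```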
Definiteness of Quadratic Forms
Definition
Let A be a square matrix. A quadratic form Q(x) = x^T A x (and the matrix A) is said to be
1. positive definite if and only if Q(x) > 0, ∀x ≠ 0;
2. positive semidefinite if and only if Q(x) ≥ 0, ∀x;
3. negative definite if and only if Q(x) < 0, ∀x ≠ 0;
4. negative semidefinite if and only if Q(x) ≤ 0, ∀x;
5. indefinite if and only if there are vectors x, y such that Q(x) < 0 and Q(y) > 0.
Definiteness of Quadratic Forms
Example.
\[
A = \begin{bmatrix} 2 & 0 \\ 0 & 3 \end{bmatrix}.
\]
Let x ∈ R^2. Then
\[
Q(x) = \begin{bmatrix} x_1 & x_2 \end{bmatrix}
\begin{bmatrix} 2 & 0 \\ 0 & 3 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}
= 2x_1^2 + 3x_2^2 \ge 0.
\]
Thus, Q(x) and A are positive semidefinite. If x ≠ 0, then either x_1 ≠ 0 or x_2 ≠ 0; hence, Q(x) > 0. It follows that Q(x) and A are positive definite.
Definiteness of Quadratic Forms
Example.
\[
A = \begin{bmatrix} 0 & 0 \\ 0 & 5 \end{bmatrix}.
\]
Let x ∈ R^2. Then
\[
Q(x) = \begin{bmatrix} x_1 & x_2 \end{bmatrix}
\begin{bmatrix} 0 & 0 \\ 0 & 5 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}
= 5x_2^2 \ge 0.
\]
Thus, Q(x) and A are positive semidefinite.
Definiteness of Quadratic Forms
Example.
\[
A = \begin{bmatrix} 0 & 0 \\ 0 & 5 \end{bmatrix}.
\]
But there are nonzero vectors x for which Q(x) = 0. Take, for example, x = [2, 0]^T:
\[
Q(x) = \begin{bmatrix} 2 & 0 \end{bmatrix}
\begin{bmatrix} 0 & 0 \\ 0 & 5 \end{bmatrix}
\begin{bmatrix} 2 \\ 0 \end{bmatrix}
= 0.
\]
Thus, Q(x) and A are not positive definite.
Remark: The set of positive definite matrices is a proper subset of the set of positive semidefinite matrices.
Definiteness of Quadratic Forms
Example.
\[
A = \begin{bmatrix} 1 & 6 & 3 \\ 0 & -2 & 5 \\ 0 & 0 & 0 \end{bmatrix}.
\]
Let x ∈ R^3. Then
\[
Q(x) = \begin{bmatrix} x_1 & x_2 & x_3 \end{bmatrix}
\begin{bmatrix} 1 & 6 & 3 \\ 0 & -2 & 5 \\ 0 & 0 & 0 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}
= x_1^2 + 6x_1 x_2 + 3x_1 x_3 + 5x_2 x_3 - 2x_2^2.
\]
Taking x = [1, 0, 0]^T, Q(x) = 1. Taking x = [0, 1, 0]^T, Q(x) = -2.
Therefore, Q(x) and A are indefinite.
Definiteness and Eigenvalues
Theorem
Let A be a real symmetric matrix.
1. A is positive semidefinite (negative semidefinite) if and only if its eigenvalues are nonnegative (nonpositive).
2. A is positive definite (negative definite) if and only if its eigenvalues are positive (negative).
3. A is indefinite if and only if it has both positive and negative eigenvalues.
Definiteness and Eigenvalues
Example.
\[
A = \begin{bmatrix} 3 & 0 & 4 \\ 0 & 1 & 2 \\ 4 & 2 & 5 \end{bmatrix}.
\]
The characteristic polynomial p(λ) is given by
\[
p(\lambda) = \det(A - \lambda I)
= \det \begin{bmatrix} 3-\lambda & 0 & 4 \\ 0 & 1-\lambda & 2 \\ 4 & 2 & 5-\lambda \end{bmatrix}
= -\lambda^3 + 9\lambda^2 - 3\lambda - 13.
\]
Setting p(λ) = 0 and solving for λ, we get the eigenvalues of A:
\[
\lambda_1 = -1, \quad \lambda_2 = 5 - 2\sqrt{3}, \quad \lambda_3 = 5 + 2\sqrt{3}.
\]
Therefore, A is an indefinite matrix.
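The eigenvalue test for definiteness is straightforward to run (a sketch, NumPy assumed); np.linalg.eigvalsh is the routine for symmetric matrices and returns real eigenvalues.

```python
import numpy as np

A = np.array([[3, 0, 4],
              [0, 1, 2],
              [4, 2, 5]], dtype=float)

eigvals = np.linalg.eigvalsh(A)   # real eigenvalues of a symmetric matrix, ascending
print(eigvals)                    # approx [-1.0, 1.536, 8.464], i.e. -1, 5 - 2*sqrt(3), 5 + 2*sqrt(3)

# Both signs are present, so A is indefinite
print(np.any(eigvals > 0) and np.any(eigvals < 0))   # True
```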
Principal Minors
Definition
Let A be a square matrix of order n.
1. The submatrix obtained by deleting any (n − r) rows and the same-numbered columns of A is called a principal submatrix of order r, and its determinant is called a principal minor of order r.
2. The principal submatrix of order r obtained by deleting the last (n − r) rows and (n − r) columns of A is called a leading principal submatrix of order r, and its determinant is called a leading principal minor of order r.
Principal Minors
Example.
\[
A = \begin{bmatrix} 3 & 0 & 4 \\ 0 & 1 & 2 \\ 4 & 2 & 5 \end{bmatrix}
\]
Deleting the first row and the first column of A, we obtain a principal submatrix of order 2:
\[
\begin{bmatrix} 1 & 2 \\ 2 & 5 \end{bmatrix}.
\]
Deleting the second row and the second column of A, we obtain another principal submatrix of order 2:
\[
\begin{bmatrix} 3 & 4 \\ 4 & 5 \end{bmatrix}.
\]
Deleting the third row and the third column of A, we obtain another principal submatrix of order 2:
\[
\begin{bmatrix} 3 & 0 \\ 0 & 1 \end{bmatrix}.
\]
Principal Minors
Example.
\[
A = \begin{bmatrix} 3 & 0 & 4 \\ 0 & 1 & 2 \\ 4 & 2 & 5 \end{bmatrix}
\]
The leading principal submatrices are
\[
\begin{bmatrix} 3 \end{bmatrix}, \qquad
\begin{bmatrix} 3 & 0 \\ 0 & 1 \end{bmatrix}, \qquad
\begin{bmatrix} 3 & 0 & 4 \\ 0 & 1 & 2 \\ 4 & 2 & 5 \end{bmatrix}.
\]
Definiteness and Principal Minors
Theorem
Let A be a real symmetric matrix.
1. A is positive semidefinite if and only if all its principal minors are nonnegative.
2. A is negative semidefinite if and only if all its principal minors of order 1 are nonpositive, those of order 2 are nonnegative, those of order 3 are nonpositive, and so on.
3. A is positive definite if and only if its leading principal minors are positive.
4. A is negative definite if and only if its leading principal minor of order 1 is negative, that of order 2 is positive, that of order 3 is negative, and so on.
Definiteness and Principal Minors
Example.
\[
A = \begin{bmatrix} 3 & -2 & 0 \\ -2 & 3 & 0 \\ 0 & 0 & 5 \end{bmatrix}.
\]
The leading principal minors:
\[
\det \begin{bmatrix} 3 \end{bmatrix} = 3, \qquad
\det \begin{bmatrix} 3 & -2 \\ -2 & 3 \end{bmatrix} = 5, \qquad
\det \begin{bmatrix} 3 & -2 & 0 \\ -2 & 3 & 0 \\ 0 & 0 & 5 \end{bmatrix} = 25.
\]
Therefore, A is positive definite.
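The leading-principal-minor (Sylvester) test in item 3 of the theorem can be coded in a few lines (a sketch, NumPy assumed; the helper names are ours). It confirms the conclusion above.

```python
import numpy as np

def leading_principal_minors(A):
    """det(A[:r, :r]) for r = 1, ..., n."""
    A = np.asarray(A, dtype=float)
    return [np.linalg.det(A[:r, :r]) for r in range(1, A.shape[0] + 1)]

def is_positive_definite(A):
    # Positive definite iff every leading principal minor is strictly positive
    return all(m > 0 for m in leading_principal_minors(A))

A = np.array([[3, -2, 0],
              [-2, 3, 0],
              [0,  0, 5]], dtype=float)
print(leading_principal_minors(A))   # [3.0, 5.0, 25.0]
print(is_positive_definite(A))       # True
```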
Definiteness and Principal Minors
Example.
\[
A = \begin{bmatrix} 1 & -1 & -1 \\ -1 & 2 & 4 \\ -1 & 4 & 10 \end{bmatrix}.
\]
The leading principal minors:
\[
\det \begin{bmatrix} 1 \end{bmatrix} = 1, \qquad
\det \begin{bmatrix} 1 & -1 \\ -1 & 2 \end{bmatrix} = 1, \qquad
\det \begin{bmatrix} 1 & -1 & -1 \\ -1 & 2 & 4 \\ -1 & 4 & 10 \end{bmatrix} = 0.
\]
Therefore, A is not positive definite.
Definiteness and Principal Minors
Question: Is it positive semidefinite?
To answer this question, we need to check if all the principal minors are nonnegative.
The principal minors of order 1 are: 1, 2, 10.
The principal minors of order 2 are: 1, 9, 4.
The principal minor of order 3 is: 0.
Therefore, A is positive semidefinite, since it is real symmetric and its principal minors are all nonnegative.