Chap-5 Lecture 11 (sec.5.3 & 5.4)

Linear transformations in sec.3.3
Theorem 10’.
a. Let T : ℝ2 → ℝ2 : the linear transformation determined by
a 2×2 matrix A . If S : a region in ℝ2 with finite area, then
the area of 𝑇(𝑆) = |det 𝐴| ∙ the area of 𝑆 .
b. If T is determined by a 3×3 matrix A , and
if S : a solid in ℝ3 with finite volume, then
the volume of 𝑇(𝑆) = |det 𝐴| ∙ the volume of 𝑆 .
Pf. (a) : First check the unit square: its image under T is the
parallelogram determined by the columns of A , whose area is |det 𝐴| .
A general region S is then approximated by a grid of small squares,
each of which is scaled by the same factor |det 𝐴| .
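The scaling claim in (a) can also be checked numerically. A minimal sketch, with an arbitrary sample matrix A (not one from the lecture): map the corners of the unit square through x ⟼ Ax and measure the image area with the shoelace formula.

```python
# Numerical check of Theorem 10'(a): area of T(S) = |det A| * area of S.
# A = [[a, b], [c, d]] is an arbitrary sample matrix; S is the unit square.
a, b, c, d = 3.0, 1.0, 1.0, 2.0
det_A = a * d - b * c                     # det A = 5

# Corners of the unit square S, listed counterclockwise (area 1).
square = [(0, 0), (1, 0), (1, 1), (0, 1)]

# Image of each corner under x -> Ax; T(S) is a parallelogram.
image = [(a * x + b * y, c * x + d * y) for (x, y) in square]

def shoelace_area(pts):
    """Area of a simple polygon from its vertex list (shoelace formula)."""
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
            for i in range(n))
    return abs(s) / 2

print(shoelace_area(image))  # 5.0 = |det A| * (area of S)
```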
Quiz.
Q. Find the eigenvalues of the given matrix A and
a basis for the corresponding eigenspace.
𝐴 = [  2   4   3 ]
    [ −4  −6  −3 ]
    [  3   3   1 ]
Sol.
Section 5.3 Diagonalization(대각화)
 If A is similar to a diagonal matrix D , that is, if 𝑨 = 𝑷𝑫𝑷−𝟏
for some invertible matrix P and some diagonal matrix D ,
def!
⟺ A square matrix A is diagonalizable(대각화 가능하다) :
- Diagonalization enables us to compute 𝐴^𝑘 quickly
for large values of 𝑘 :
𝐴^𝑘 = (𝑃𝐷𝑃^−1)(𝑃𝐷𝑃^−1) ⋯ (𝑃𝐷𝑃^−1) = 𝑃𝐷^𝑘𝑃^−1 .
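A quick numerical illustration (the 2×2 matrix below is an arbitrary diagonalizable example, not one from these slides): A^k computed through the eigendecomposition agrees with direct repeated multiplication.

```python
import numpy as np

# A^k = P D^k P^{-1}: numerical check on a sample diagonalizable matrix.
A = np.array([[5.0, -3.0],
              [6.0, -4.0]])          # eigenvalues 2 and -1 (distinct)

eigvals, P = np.linalg.eig(A)        # columns of P are eigenvectors
D = np.diag(eigvals)

k = 6
A_k = P @ np.linalg.matrix_power(D, k) @ np.linalg.inv(P)

# Same result as multiplying A by itself k times.
assert np.allclose(A_k, np.linalg.matrix_power(A, k))
```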
The diagonalization theorem
Theorem 5( The Diagonalization Theorem: 대각화 정리 ).
An 𝑛 × 𝑛 matrix A is diagonalizable
⟺ A has 𝒏 linearly independent eigenvectors 𝐯1 , … , 𝐯𝑛
⟺ Span{𝐯1 , … , 𝐯𝑛 } = ℝ^𝑛 . → {𝐯1 , … , 𝐯𝑛 } : an eigenvector basis of ℝ^𝑛 .
In fact, 𝐴 = 𝑃𝐷𝑃^−1 with a diagonal matrix D
⟺ 𝑃 = [ 𝐯1 ⋯ 𝐯𝑛 ] , 𝐷 = diag( 𝜆1 , … , 𝜆𝑛 ) , and
each 𝐯𝑖 : an eigenvector of A corresponding to the eigenvalue 𝜆𝑖 .
Pf. (⟹) If P is any 𝑛 × 𝑛 matrix with columns 𝐯1 , … , 𝐯𝑛 and
D is any diagonal matrix with diagonal entries 𝜆1 , … , 𝜆𝑛 , then
𝐴𝑃 = 𝐴[ 𝐯1 ⋯ 𝐯𝑛 ] = [ 𝐴𝐯1 ⋯ 𝐴𝐯𝑛 ]     (1)
and
𝑃𝐷 = [ 𝜆1𝐯1 ⋯ 𝜆𝑛𝐯𝑛 ] .     (2)
Suppose A is diagonalizable, i.e., 𝐴 = 𝑃𝐷𝑃^−1 .
⟶ 𝐴𝑃 = 𝑃𝐷 ⟶ (1)=(2) :
[ 𝐴𝐯1 ⋯ 𝐴𝐯𝑛 ] = [ 𝜆1𝐯1 ⋯ 𝜆𝑛𝐯𝑛 ] , i.e., 𝐴𝐯𝑖 = 𝜆𝑖𝐯𝑖 for each 𝑖 .     (4)
⟶ Since P : invertible, its columns 𝐯1 , … , 𝐯𝑛 : linearly independent.
Since these columns are nonzero, by (4),
𝜆1 , … , 𝜆𝑛 : eigenvalues and 𝐯1 , … , 𝐯𝑛 : corresponding eigenvectors.
(⟸) Suppose 𝐯1 , … , 𝐯𝑛 : linearly independent eigenvectors
corresponding to eigenvalues 𝜆1 , … , 𝜆𝑛 .
⟶ By (1)&(2), 𝐴𝑃 = 𝑃𝐷 .
Since the eigenvectors 𝐯1 , … , 𝐯𝑛 : linearly independent,
P : invertible by Thm.8 in sec.2.3.
⟶ 𝐴 = 𝑃𝐷𝑃^−1 . ∴ 𝐴 : diagonalizable.
Diagonalizing matrices(행렬의 대각화)
Example 3’. (a) Diagonalize the following matrix, if possible.
That is, find an invertible matrix P and a diagonal matrix D
such that 𝐴 = 𝑃𝐷𝑃^−1 :
𝐴 = [  1   3   3 ]
    [ −3  −5  −3 ]
    [  3   3   1 ]
Sol. Step 1. : Find the eigenvalues of A : 𝜆 = 1 and 𝜆 = −2 .
Step 2. : Find three linearly independent eigenvectors of A .
An eigenvector corresponding to 𝜆 = 1 is 𝐯1 = ( 1, −1, 1 ) , and
eigenvectors corresponding to 𝜆 = −2 are 𝐯2 = ( −1, 1, 0 ) , 𝐯3 = ( −1, 0, 1 ) .
Since 𝑃 = [ 𝐯1 𝐯2 𝐯3 ] with det 𝑃 ≠ 0 ,
by Thm.8. in sec.2.3, {𝐯1 , 𝐯2 , 𝐯3 } : linearly independent.
Step 3. : Construct P from the vectors in step 2 :
𝑃 = [  1  −1  −1 ]
    [ −1   1   0 ]
    [  1   0   1 ]
Step 4. : Construct D from the corresponding eigenvalues :
𝐷 = [ 1   0   0 ]
    [ 0  −2   0 ]
    [ 0   0  −2 ]
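Steps 1–4 can be cross-checked with numpy (a sketch; numpy.linalg.eig normalizes its eigenvectors, so they agree with v1, v2, v3 above only up to nonzero scalar multiples):

```python
import numpy as np

# Example 3'(a), checked numerically.
A = np.array([[ 1,  3,  3],
              [-3, -5, -3],
              [ 3,  3,  1]], dtype=float)

# Steps 1-2: eigenvalues 1, -2, -2 with three independent eigenvectors.
eigvals, eigvecs = np.linalg.eig(A)
assert np.allclose(sorted(eigvals.real), [-2.0, -2.0, 1.0])

# Steps 3-4: P from the hand-computed v1, v2, v3 and D from 1, -2, -2.
P = np.array([[ 1, -1, -1],
              [-1,  1,  0],
              [ 1,  0,  1]], dtype=float)
D = np.diag([1.0, -2.0, -2.0])

assert np.allclose(A, P @ D @ np.linalg.inv(P))   # A = P D P^-1
```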
(b) Use (a) to compute 𝐴^4 .
Sol. First,
𝑃^−1 = [  1   1   1 ]
       [  1   2   1 ]
       [ −1  −1   0 ]
so that
𝐴 = 𝑃𝐷𝑃^−1 = [  1  −1  −1 ] [ 1   0   0 ] [  1   1   1 ]
             [ −1   1   0 ] [ 0  −2   0 ] [  1   2   1 ]
             [  1   0   1 ] [ 0   0  −2 ] [ −1  −1   0 ] .
Then, since 𝐷^4 = diag( 1, 16, 16 ) ,
𝐴^4 = 𝑃𝐷^4𝑃^−1 = [  1  −1  −1 ] [ 1   0   0 ] [  1   1   1 ]
                [ −1   1   0 ] [ 0  16   0 ] [  1   2   1 ]
                [  1   0   1 ] [ 0   0  16 ] [ −1  −1   0 ]
    = [   1  −15  −15 ]
      [  15   31   15 ]
      [ −15  −15    1 ] .
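With D^4 = diag(1, 16, 16), the product above is easy to verify numerically:

```python
import numpy as np

# Check (b): A^4 = P D^4 P^{-1} for the matrices of Example 3'.
P = np.array([[ 1, -1, -1],
              [-1,  1,  0],
              [ 1,  0,  1]], dtype=float)
P_inv = np.array([[ 1,  1,  1],
                  [ 1,  2,  1],
                  [-1, -1,  0]], dtype=float)
D4 = np.diag([1.0, 16.0, 16.0])                 # D^4

A4 = P @ D4 @ P_inv
expected = np.array([[  1, -15, -15],
                     [ 15,  31,  15],
                     [-15, -15,   1]], dtype=float)

assert np.allclose(A4, expected)
assert np.allclose(P @ P_inv, np.eye(3))        # P_inv really is P^{-1}
```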
Theorem 6.
An 𝑛 × 𝑛 matrix with 𝒏 distinct eigenvalues is diagonalizable.
Pf. Let 𝐯1 , … , 𝐯𝑛 : eigenvectors corresponding to the 𝑛 distinct
eigenvalues of a matrix A.
Then, by Thm.2 in sec.5.1, { 𝐯1 , … , 𝐯𝑛 } : linearly independent.
So, by Thm.5, A : diagonalizable.
Example 5. Determine whether the matrix A is diagonalizable.
𝐴 = [ 5  −8   1 ]
    [ 0   0   7 ]
    [ 0   0  −2 ]
Sol. Since A : triangular matrix, the eigenvalues are 5, 0, and −2 .
Since these three eigenvalues are distinct, by Thm.6, A is diagonalizable.
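Example 5 also illustrates Theorem 6 numerically: the eigenvalues of a triangular matrix sit on its diagonal, and because they are distinct, numpy returns three independent eigenvectors.

```python
import numpy as np

# Example 5: triangular A has its eigenvalues 5, 0, -2 on the diagonal.
A = np.array([[5, -8,  1],
              [0,  0,  7],
              [0,  0, -2]], dtype=float)

eigvals, eigvecs = np.linalg.eig(A)
assert np.allclose(sorted(eigvals.real), [-2.0, 0.0, 5.0])  # distinct

# n distinct eigenvalues -> n independent eigenvectors -> diagonalizable.
assert np.linalg.matrix_rank(eigvecs) == 3
```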
Section 5.3 : Exercises
1, 7, 9, 11, 12, 17, 27, 31, 32
Section 4.1 Vector spaces(벡터 공간)
Definition. Let V(≠ ∅) be a set of elements, called vectors,
on which vector addition and scalar multiplication are defined.
def!
Then V ⟺ a vector space if the following axioms are satisfied :
For all vectors u, v, and w in V and for all scalars c and d ,
1. 𝐮 + 𝐯 ∈ 𝑉 ,
2. 𝐮 + 𝐯 = 𝐯 + 𝐮 ,
3. (𝐮 + 𝐯) + 𝐰 = 𝐮 + (𝐯 + 𝐰) ,
4. ∃ zero vector 𝟎 ∈ 𝑉 such that 𝐮 + 𝟎 = 𝐮 ,
5. For each 𝐮 ∈ 𝑉, ∃ −𝐮 ∈ 𝑉 such that 𝐮 + (−𝐮) = 𝟎 ,
6. 𝑐𝐮 ∈ 𝑉 ,
7. 𝑐(𝐮 + 𝐯) = 𝑐𝐮 + 𝑐𝐯 ,
8. (𝑐 + 𝑑)𝐮 = 𝑐𝐮 + 𝑑𝐮 ,
9. 𝑐(𝑑𝐮) = (𝑐𝑑)𝐮 ,
10. 1𝐮 = 𝐮 .
Example. (a) ℝ^𝑛 , 𝑛 ≥ 1 : vector space,
(b) ℙ𝑛 = { 𝐩(𝑡) = 𝑎0 + 𝑎1𝑡 + ⋯ + 𝑎𝑛𝑡^𝑛 : 𝑎𝑖 ∈ ℝ, 𝑖 = 0, 1, … , 𝑛 }
: the set of all polynomials of degree at most 𝑛(≥ 0)
(차수가 𝑛 보다 같거나 작은 모든 다항식들의 집합)
⟶ vector space. (∵) See Ex.4 in sec.4.1.
(c-d) If each of the following sets is a vector space,
write “V ” in the blank. Otherwise, write “S ” .
(c) 𝐴 = { (𝑥, 𝑦) : 𝑥 ≥ 0 , 𝑦 ≥ 0 } :
(d) 𝐵 = { (𝑥, 𝑦) : 𝑥 + 𝑦 = 0 } :
Definition. For a vector space V ,
if a set B is linearly independent and Span{B } = V :
def!
⟺ B : a basis(기저) for V and
the number of vectors in a basis B for V
def!
⟺ the dimension(차원) of V ; dim V .
Coordinate system(좌표 시스템) in sec.4.4
Definition. Suppose B ={ b1 , ... , 𝐛𝑛 } is a basis for V .
For each x ∈ V ,
the weights 𝒄𝟏 , … , 𝒄𝒏 such that 𝐱 = 𝑐1𝐛1 + ⋯ + 𝑐𝑛𝐛𝑛
def!
⟺ the coordinates of x relative to the basis B .
(기저 B 에 대한 x 의 좌표 : the B–coordinates of x , x의 B 좌표)
− If 𝑐1 , … , 𝑐𝑛 : the B–coordinates of x , then the vector in ℝ^𝑛
written as a column,
[𝐱]B = ( 𝑐1 , … , 𝑐𝑛 )
def!
⟺ the coordinate vector of x relative to B .
(기저 B 에 대한 x 의 좌표 벡터 or
the B–coordinate vector of x : x의 B 좌표 벡터)
def!
− The mapping 𝐱 ⟼ [𝐱]B ⟺ the coordinate mapping(좌표 사상) .
The coordinate mapping(좌표 사상) in sec.4.4
Theorem 8. Let B = { 𝐛1 , … , 𝐛𝑛 } : a basis for a vector space V .
Then the coordinate mapping 𝐱 ⟼ [𝐱]B is a one-to-one linear
transformation from V onto ℝ^𝑛 .
Pf. Take u = 𝑐1𝐛1 + ⋯ + 𝑐𝑛𝐛𝑛 and w = 𝑑1𝐛1 + ⋯ + 𝑑𝑛𝐛𝑛 in V .
Then, u + w = (𝑐1 + 𝑑1)𝐛1 + ⋯ + (𝑐𝑛 + 𝑑𝑛)𝐛𝑛 .
⟶ [𝐮 + 𝐰]B = ( 𝑐1 + 𝑑1 , … , 𝑐𝑛 + 𝑑𝑛 )
           = ( 𝑐1 , … , 𝑐𝑛 ) + ( 𝑑1 , … , 𝑑𝑛 ) = [𝐮]B + [𝐰]B .
So the coordinate mapping preserves addition.
If 𝑟 is any scalar, then
𝑟𝐮 = 𝑟(𝑐1𝐛1 + ⋯ + 𝑐𝑛𝐛𝑛) = (𝑟𝑐1)𝐛1 + ⋯ + (𝑟𝑐𝑛)𝐛𝑛 .
⟶ [𝑟𝐮]B = ( 𝑟𝑐1 , … , 𝑟𝑐𝑛 ) = 𝑟[𝐮]B .
So the coordinate mapping preserves scalar multiplication
and hence is a linear transformation.
Suppose that [𝐮]B = [𝐰]B = ( 𝑐1 , … , 𝑐𝑛 ) .
Then, u = w = 𝑐1𝐛1 + ⋯ + 𝑐𝑛𝐛𝑛 in V . ∴ one-to-one .
For ∀ 𝐲 = ( 𝑦1 , … , 𝑦𝑛 ) ∈ ℝ^𝑛 , let u = 𝑦1𝐛1 + ⋯ + 𝑦𝑛𝐛𝑛 in V . ⟶ [𝐮]B = 𝐲 .
∴ The coordinate mapping is onto ℝ^𝑛 .
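For V = ℝ^n the coordinate mapping is easy to compute: [x]_B solves Pc = x, where the columns of P are the basis vectors. A sketch (the basis below is an arbitrary illustration):

```python
import numpy as np

# Coordinate mapping x -> [x]_B in R^2: solve P c = x, where the
# columns of P are the basis vectors (an arbitrary sample basis).
b1, b2 = np.array([1.0, -1.0]), np.array([1.0, -2.0])
P = np.column_stack([b1, b2])

def coord(x):
    """Return [x]_B, the B-coordinate vector of x."""
    return np.linalg.solve(P, x)

u, w, r = np.array([2.0, 1.0]), np.array([0.0, 3.0]), 4.0

# Theorem 8: the coordinate mapping is linear.
assert np.allclose(coord(u + w), coord(u) + coord(w))
assert np.allclose(coord(r * u), r * coord(u))
```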
Section 5.4 Eigenvectors and linear transformations
 Let 𝑉 : 𝑛-dimensional vector space and 𝑊 : 𝑚-dimensional
vector space.
Consider a linear transformation 𝑇 : 𝑉 → 𝑊 .
Choose a basis B for V and a basis C for W .
For ∀ 𝐱 ∈ 𝑉 , the coordinate vector [𝐱]B is in ℝ^𝑛 and
the coordinate vector of its image, [T(x)]C , is in ℝ^𝑚 :
𝐱 ∈ 𝑉    ⟼ (via T)    𝑇(𝐱) ∈ 𝑊
[𝐱]B ∈ ℝ^𝑛    ⟼ (via M)    [𝑇(𝐱)]C ∈ ℝ^𝑚
The matrix of a linear transformation
 Let { b1 , ... , 𝐛𝑛 } : the basis B for V .
If x = 𝑟1𝐛1 + ⋯ + 𝑟𝑛𝐛𝑛 , then [𝐱]B = ( 𝑟1 , … , 𝑟𝑛 ) .     (1)
Then, 𝑇(𝐱) = 𝑇(𝑟1𝐛1 + ⋯ + 𝑟𝑛𝐛𝑛) = 𝑟1𝑇(𝐛1) + ⋯ + 𝑟𝑛𝑇(𝐛𝑛) .     (2)
⟶ [𝑇(𝐱)]C = 𝑟1[𝑇(𝐛1)]C + ⋯ + 𝑟𝑛[𝑇(𝐛𝑛)]C
          = [ [𝑇(𝐛1)]C [𝑇(𝐛2)]C ⋯ [𝑇(𝐛𝑛)]C ] ( 𝑟1 , 𝑟2 , … , 𝑟𝑛 ) = 𝑀[𝐱]B ,     (4)
def!
where 𝑀 := [ [𝑇(𝐛1)]C ⋯ [𝑇(𝐛𝑛)]C ] ⟺ the 𝑚 × 𝑛 matrix for 𝑇
relative to the bases B and C .
∴ [𝑇(𝐱)]C = 𝑀[𝐱]B .
Example 1. Suppose B = {b1, b2} : a basis for V and
C = {c1, c2, c3} : a basis for W .
Let T : V → W : a linear transformation with
𝑇(𝐛1) = 3𝐜1 − 2𝐜2 + 5𝐜3 and 𝑇(𝐛2) = 4𝐜1 + 7𝐜2 − 𝐜3 .
Find the matrix M for T relative to B and C .
Sol. Since [𝑇(𝐛1)]C = ( 3, −2, 5 ) and [𝑇(𝐛2)]C = ( 4, 7, −1 ) ,
𝑀 = [  3   4 ]
    [ −2   7 ]
    [  5  −1 ] .
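Once M is assembled, [T(x)]_C = M[x]_B for any x. A quick sketch of Example 1, taking an arbitrary coordinate vector [x]_B = (2, −1) for illustration:

```python
import numpy as np

# Example 1: the columns of M are [T(b1)]_C and [T(b2)]_C.
M = np.array([[ 3,  4],
              [-2,  7],
              [ 5, -1]], dtype=float)

# For x = 2*b1 - b2, [x]_B = (2, -1) and [T(x)]_C = M [x]_B.
x_B = np.array([2.0, -1.0])          # arbitrary sample weights
Tx_C = M @ x_B

# By hand: T(x) = 2*T(b1) - T(b2)
#               = 2(3c1 - 2c2 + 5c3) - (4c1 + 7c2 - c3) = 2c1 - 11c2 + 11c3.
assert np.allclose(Tx_C, [2.0, -11.0, 11.0])
```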
 If W = V and the basis C is the same as B ,
def!
then the matrix M in (4) ⟺ the matrix for T relative to B ,
or the B–matrix for T ; denoted [𝑇]B .
i.e., the B–matrix for T : 𝑉 → 𝑉 satisfies
[𝑇(𝐱)]B = [𝑇]B[𝐱]B for ∀ 𝐱 ∈ 𝑉 .
Example 2. The mapping T : ℙ2 → ℙ2 defined by
𝑇(𝑎0 + 𝑎1𝑡 + 𝑎2𝑡^2) = 𝑎1 + 2𝑎2𝑡
is a linear transformation.
a. Find the B–matrix for T , when B is the basis {1, t , 𝑡^2 } .
b. Verify that [T(p)]B = [T]B[p]B .
Sol. (a) : Since T(1) = 0, T(t) = 1, and T(𝑡^2) = 2t ,
[T(1)]B = ( 0, 0, 0 ) , [T(t)]B = ( 1, 0, 0 ) , and [T(𝑡^2)]B = ( 0, 2, 0 ) .
∴ [𝑇]B = [ 0  1  0 ]
          [ 0  0  2 ]
          [ 0  0  0 ] .
(b) : Let p(t) = 𝑎0 + 𝑎1𝑡 + 𝑎2𝑡^2 .
Since T(p(t)) = 𝑎1 + 2𝑎2𝑡 , [T(p)]B = [𝑎1 + 2𝑎2𝑡]B = ( 𝑎1 , 2𝑎2 , 0 ) .
Since [𝑇]B[𝐩]B = [ 0  1  0 ] ( 𝑎0 , 𝑎1 , 𝑎2 ) = ( 𝑎1 , 2𝑎2 , 0 ) ,
                 [ 0  0  2 ]
                 [ 0  0  0 ]
we get [T(p)]B = [T]B[p]B .
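With polynomials in ℙ2 stored as coefficient vectors (a0, a1, a2) relative to B = {1, t, t^2}, the identity in (b) becomes a plain matrix-vector product:

```python
import numpy as np

# Example 2: T(a0 + a1*t + a2*t^2) = a1 + 2*a2*t on P_2,
# with coordinates taken relative to B = {1, t, t^2}.
T_B = np.array([[0, 1, 0],
                [0, 0, 2],
                [0, 0, 0]], dtype=float)   # the B-matrix found in (a)

def T(p):
    """Apply T directly to a coefficient vector p = (a0, a1, a2)."""
    a0, a1, a2 = p
    return np.array([a1, 2 * a2, 0.0])

p = np.array([7.0, -3.0, 5.0])             # arbitrary sample polynomial
assert np.allclose(T(p), T_B @ p)          # [T(p)]_B = [T]_B [p]_B
```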
Linear transformations on ℝ^𝑛
Theorem 8. Suppose A = PD𝑃^−1 , D : a diagonal 𝑛 × 𝑛 matrix.
If B is the basis for ℝ^𝑛 formed from the columns of P ,
then D is the B–matrix for the transformation x ⟼ 𝐴𝐱 .
Pf. Let P = [ 𝐛1 ⋯ 𝐛𝑛 ] . Then, B = { 𝐛1 , ⋯ , 𝐛𝑛 } .
If 𝐱 = 𝑐1𝐛1 + 𝑐2𝐛2 + ⋯ + 𝑐𝑛𝐛𝑛 , then
𝐱 = [ 𝐛1 ⋯ 𝐛𝑛 ] ( 𝑐1 , … , 𝑐𝑛 ) = 𝑃[𝐱]B . ⟶ [𝐱]B = 𝑃^−1𝐱 .     (∗)
If T(x) = Ax for x in ℝ^𝑛 , then, by (4),
[𝑇]B = [ [T(𝐛1)]B ⋯ [T(𝐛𝑛)]B ] = [ [A𝐛1]B ⋯ [A𝐛𝑛]B ]
     = [ 𝑃^−1A𝐛1 ⋯ 𝑃^−1A𝐛𝑛 ]     (by (∗))
     = 𝑃^−1A[ 𝐛1 ⋯ 𝐛𝑛 ] = 𝑃^−1AP = D .
Example 3. Define T : ℝ2 → ℝ2 by T(x) = Ax , where
𝐴 = [  7  2 ]
    [ −4  1 ] .
Find a basis B for ℝ2 s.t. the B–matrix for T is a diagonal matrix.
Sol. In Ex.2 in sec.5.3,
A = PD𝑃^−1 , 𝑃 = [  1   1 ] and 𝐷 = [ 5  0 ]
                 [ −1  −2 ]         [ 0  3 ] ,
where b1, b2 : the columns of P .
By Thm.8,
a diagonal matrix D is the B–matrix for T when B = {b1, b2} .
∴ B = { ( 1, −1 ) , ( 1, −2 ) } .
Note. If A is similar to a matrix C with A = PC𝑃^−1 ,
then C = 𝑃^−1AP : the B–matrix for the transformation x ⟼ 𝐴𝐱
when the basis B is formed from the columns of P .
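The claim of Theorem 8 can be confirmed for Example 3 by computing P⁻¹AP directly:

```python
import numpy as np

# Example 3: T(x) = Ax, with the basis B built from the columns of P.
A = np.array([[ 7, 2],
              [-4, 1]], dtype=float)
P = np.array([[ 1,  1],
              [-1, -2]], dtype=float)      # columns b1, b2

# By Theorem 8 (and the Note), the B-matrix of T is P^{-1} A P,
# which here is the diagonal matrix D = diag(5, 3).
B_matrix = np.linalg.inv(P) @ A @ P
assert np.allclose(B_matrix, np.diag([5.0, 3.0]))
```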
Section 5.4 : Exercises
1, 3, 4, 5, 7, 8, 9, 11, 13, 14