LINEAR ALGEBRA 1

Definition 1. A vector space is a set V with the operations of scalar multiplication and addition defined (so λv1 + v2 makes sense when v1, v2 ∈ V, λ ∈ R) and so that they obey the usual rules (e.g. λ(v1 + v2) = λv1 + λv2). An independent set {v1, . . . , vn} is a set so that α1 v1 + . . . + αn vn = 0 ⇒ α1 = α2 = . . . = αn = 0. A spanning set is a set {v1, v2, . . . , vn} so that V = {α1 v1 + . . . + αn vn : α1, α2, . . . , αn ∈ R}. A basis is a set which is both spanning and independent. To be formal at this point, one has to check that bases exist and so on. This is done properly with the replacement theorem, but we will skip that. Instead we have two useful corollaries of the replacement theorem. In practice these are all you need.
Proposition 2. [Useful Corollaries of the Replacement Theorem] All bases of a vector space V are the same size, which we call the dimension of V, written "dim(V)". Every independent set has at most dim(V) elements, and can be augmented to a basis. Every spanning set has at least dim(V) elements, and can be refined to a basis.
Definition 3. A matrix is a box of numbers that obeys certain rules about multiplication and so on.

Definition 4. A linear transformation is a linear map A : V → W where V, W are vector spaces. Linear means A(λv1 + v2) = λA(v1) + A(v2). Linearity is important because it means we can know exactly what A does everywhere, just by knowing what A does to a basis (Why?). A choice of a basis {v1, v2, . . . , vm} for V and {w1, w2, . . . , wn} for W gives rise to a matrix by Aij = the component of wi in Avj. One could also say it is defined by: Avj = A1j w1 + A2j w2 + . . . + Anj wn. Multiplication of matrices is composition of the linear transformations.
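To make this concrete, here is a small sketch in Python/numpy (the map and the vectors are made up for illustration): the matrix of a linear map has the images of the basis vectors as its columns, and composing maps multiplies their matrices.

    import numpy as np

    # A made-up linear map on R^2 for illustration: rotation by 90 degrees.
    def A(v):
        x, y = v
        return np.array([-y, x])

    # The matrix of A: column j is A applied to the j-th standard basis vector.
    e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
    M = np.column_stack([A(e1), A(e2)])

    # Linearity: knowing A on a basis determines A everywhere.
    v = np.array([3.0, 5.0])
    assert np.allclose(M @ v, A(v))

    # Multiplication of matrices is composition of the transformations.
    assert np.allclose((M @ M) @ v, A(A(v)))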
Theorem 5. [Rank-Nullity Theorem] Given a linear transformation A : V → W, let Null(A) = {v : A(v) = 0} be the subspace of V that A sends to zero. Let Range(A) = A(V) = {w : ∃v s.t. A(v) = w} be the subspace of W that is the image of A. Then:

dim(Null(A)) + dim(Range(A)) = dim(V)

Proof. (Proof using bases) Say dim(V) = n. Take {v1, . . . , vl} to be a basis for Null(A), so that l = dim(Null(A)). Extend this to a basis for all of V (replacement theorem), so that {v1, . . . , vl, ṽ1, . . . , ṽn−l} is a basis for V. Check that {A(ṽ1), . . . , A(ṽn−l)} is a basis for Range(A). (Why is it independent? Why is it spanning?) Hence dim(Range(A)) = n − l and the result follows.
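As a quick sanity check of the theorem, here is a sympy sketch on a made-up matrix; nullspace() and columnspace() return bases for Null(A) and Range(A).

    from sympy import Matrix

    # A made-up 3x4 matrix: columns 3 and 4 are combinations of columns 1 and 2.
    A = Matrix([[1, 0, 1, 2],
                [0, 1, 1, 3],
                [1, 1, 2, 5]])

    null_basis = A.nullspace()     # basis of Null(A)
    range_basis = A.columnspace()  # basis of Range(A)

    # dim(Null(A)) + dim(Range(A)) = dim(V) = number of columns.
    assert len(null_basis) + len(range_basis) == A.cols   # 2 + 2 == 4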
Remark 6. [How to deal with matrices] Say v1, v2, . . . , vn are a basis for V. A linear transformation is defined by where it sends v1, . . . , vn. (Why?) Say we know Av1, . . . , Avn (of course these have to be written in the basis w1, . . . , wn). Then the matrix for A is the matrix whose columns are Av1, . . . , Avn:

A = [ Av1 Av2 . . . Avn ]   (as columns)

In particular, it is useful to remember that Ae1 = first column of A.
From this you can see how matrix multiplication on the left works on columns. If B is a matrix whose columns are b1, b2, . . . , bn, i.e. B = [ b1 b2 . . . bn ], then:

A [ b1 b2 . . . bn ] = [ Ab1 Ab2 . . . Abn ]
(How do we see this from the definition/ideas above?) We can also see how matrix multiplication on the right works on rows: if a1, a2, . . . , am are the rows of A, then the rows of AB are a1 B, a2 B, . . . , am B.
This gives rise to two little formulas that are sometimes useful. Multiplying by a diagonal matrix on the right scales the columns:

[ b1 b2 . . . bn ] diag(λ1, λ2, . . . , λn) = [ λ1 b1 λ2 b2 . . . λn bn ]

and multiplying by a diagonal matrix on the left scales the rows: the rows of diag(λ1, . . . , λm) A are λ1 a1, λ2 a2, . . . , λm am.
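A numpy sketch of these facts on made-up random matrices: columns of AB are A times the columns of B, rows of AB are the rows of A times B, and diagonal matrices scale columns (from the right) or rows (from the left).

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((3, 3))
    B = rng.standard_normal((3, 3))

    # Column j of A @ B is A applied to column j of B.
    assert np.allclose((A @ B)[:, 1], A @ B[:, 1])

    # Row i of A @ B is row i of A times B.
    assert np.allclose((A @ B)[1, :], A[1, :] @ B)

    # A diagonal matrix scales columns on the right, rows on the left.
    lam = np.array([2.0, 3.0, 5.0])
    D = np.diag(lam)
    assert np.allclose(B @ D, B * lam)           # column j scaled by lam[j]
    assert np.allclose(D @ B, B * lam[:, None])  # row i scaled by lam[i]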
Theorem 7. [Change of Basis Formula] Let T be a linear transformation from Rn to Rn. By using the ordinary basis for Rn, we can think of T as a matrix too. Let B = {v1, . . . , vn} be a basis of Rn. Let TB be the matrix that represents the same transformation as T, but in the basis B. Then: (Why is the inverse ok in the below formula?)

TB = [ v1 v2 . . . vn ]^(−1) T [ v1 v2 . . . vn ]

Example 8. The change of basis formula is useful, because if there is a basis of Rn where the action of A is known, then we can recover the matrix for A. As an example, suppose we know that A does this:

(1, 1) → (1, 1),   (1, −1) → (−1, 1) = −(1, −1)

Then in the basis {(1, 1), (1, −1)} we know that A looks like [Av1 Av2] = [1 0 ; 0 −1] (rows separated by semicolons). So then, in the standard basis we have:

A = [1 1 ; 1 −1] [1 0 ; 0 −1] [1 1 ; 1 −1]^(−1) = [0 1 ; 1 0]

(How could we have seen that differently?) You can use these ideas to easily construct the matrices for rotations, reflections etc.
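Here is Example 8 carried out numerically (a numpy sketch): P holds the basis vectors as columns, A_B is the matrix in that basis, and A = P A_B P^(−1) recovers the reflection across the line y = x.

    import numpy as np

    # Basis where the action is known: v1 = (1,1) is fixed, v2 = (1,-1) is negated.
    P = np.array([[1.0, 1.0],
                  [1.0, -1.0]])    # columns are v1, v2
    A_B = np.diag([1.0, -1.0])     # the matrix of A in the basis {v1, v2}

    # Back to the standard basis: A = P A_B P^(-1).
    A = P @ A_B @ np.linalg.inv(P)
    print(A)                       # [[0. 1.], [1. 0.]] -- reflection across y = x

    # Sanity check: A fixes v1 and negates v2.
    assert np.allclose(A @ P[:, 0], P[:, 0])
    assert np.allclose(A @ P[:, 1], -P[:, 1])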
Definition 9. An eigenvector of eigenvalue λ is a vector v so that Av = λv. All the possible eigenvalues are given by the roots of the n-th degree polynomial Det(A − λI) (this is called the characteristic polynomial). The roots of this give the values of λ where Null(A − λI) is non-trivial. The spaces Null(A − λk I) are called eigenspaces. If we have a basis of eigenvectors, then we can write:

A = [ v1 v2 . . . vn ] diag(λ1, λ2, . . . , λn) [ v1 v2 . . . vn ]^(−1)

Remark 10. Under what circumstances can we find a basis of eigenvectors? One condition is that all the eigenvalues are distinct (no repeated roots of the characteristic polynomial). The lemma below shows us why:
Lemma 11. For λ1 ≠ λ2 we have Null(A − λ1 I) ∩ Null(A − λ2 I) = {0}. As a corollary, we see that if v1 ∈ Null(A − λ1 I), v2 ∈ Null(A − λ2 I), . . . , vk ∈ Null(A − λk I) with λ1, . . . , λk distinct, then {v1, . . . , vk} are independent.
Proof. If v ∈ Null(A − λ1 I) ∩ Null(A − λ2 I) then λ1 v = Av = λ2 v, so (λ1 − λ2)v = 0 and hence v = 0, since λ1 ≠ λ2. The corollary follows by induction on k. It is clear when k = 1. Suppose it holds for k. If αk+1 vk+1 + α1 v1 + . . . + αk vk = 0, then apply A − λk+1 I to both sides to get αk+1 · 0 + α1 (λ1 − λk+1) v1 + . . . + αk (λk − λk+1) vk = 0. By the induction hypothesis now, αi (λi − λk+1) = 0 for 1 ≤ i ≤ k, so each αi = 0, and then αk+1 = 0 too; the result follows.
Proposition 12. If A has n distinct eigenvalues, then A is diagonalizable.

Proof. Let λ1, . . . , λn be the eigenvalues. By the definition of these, we can find non-trivial v1 ∈ Null(A − λ1 I), v2 ∈ Null(A − λ2 I), . . . , vn ∈ Null(A − λn I). By the lemma, this is an independent set. Since dim(V) = n here, this is a basis! Hence we have a basis of eigenvectors and we are done.
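A numpy illustration of the proposition on a made-up matrix with distinct eigenvalues: the eigenvector matrix V is invertible, so A = V diag(λ) V^(−1).

    import numpy as np

    # A made-up matrix with distinct eigenvalues 2 and 3, hence diagonalizable.
    A = np.array([[2.0, 1.0],
                  [0.0, 3.0]])

    lam, V = np.linalg.eig(A)      # columns of V are eigenvectors

    # Distinct eigenvalues => the eigenvectors are independent, V is invertible,
    # and A factors through the diagonal matrix of eigenvalues.
    assert np.allclose(A, V @ np.diag(lam) @ np.linalg.inv(V))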
Remark 13. If there are repeated roots of the characteristic polynomial, then A might not be diagonalizable. In this case, define the generalized eigenspaces as

Gλ = {v : ∃k s.t. (A − λI)^k v = 0} = Null(A − λI) + Null((A − λI)^2) + . . .
In this case, a lemma almost identical to the above shows that the generalized eigenspaces don't overlap. We then take a basis for each generalized eigenspace individually, so that in this basis we have A decomposed into blocks:

A = diag(Mλ1, Mλ2, . . . , Mλk)
Now, restricting our attention to the generalized eigenspaces, we have (Why does the chain end eventually?):

Null(A − λI) ⊂ Null((A − λI)^2) ⊂ . . . ⊂ Null((A − λI)^r)

Take any v ∈ Null((A − λI)^r) s.t. v ∉ Null((A − λI)^(r−1)). Then (A − λI)^s v ∈ Null((A − λI)^(r−s)), and one can check that {v1, v2, . . . , vr} = {(A − λI)^(r−1) v, (A − λI)^(r−2) v, . . . , (A − λI)v, v} is an independent set from the definitions (hint: induction). This setup is called a Jordan chain. Notice that:

Avs = vs−1 + λvs (for s > 1),   Av1 = λv1

So on this set of vectors, A looks like a matrix with λ down the diagonal and 1's on the superdiagonal:

[ λ 1 ; λ 1 ; λ 1 ; λ ]   (rows separated by semicolons, blanks zero)

This is called a Jordan Block. If we start off with another independent vector v ∈ Null((A − λI)^r), we can make another Jordan chain and get another Jordan block. In general every matrix can be written in Jordan Canonical Form with only Jordan blocks.
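A sympy sketch of this on a made-up example: we conjugate a known Jordan matrix by an invertible P0, then recover the blocks with jordan_form (which returns (P, J) with M = P J P^(−1)).

    from sympy import Matrix

    # A made-up Jordan matrix: one 2x2 block for lambda = 2, one 1x1 block for 3.
    J0 = Matrix([[2, 1, 0],
                 [0, 2, 0],
                 [0, 0, 3]])
    P0 = Matrix([[1, 1, 0],
                 [0, 1, 1],
                 [0, 0, 1]])       # a made-up invertible change of basis

    M = P0 * J0 * P0.inv()         # hide the Jordan structure

    P, J = M.jordan_form()         # M == P * J * P**(-1)
    print(J)                       # recovers the Jordan blocks (up to ordering)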
Definition 14. The minimal polynomial is the smallest polynomial p(x) = an x^n + . . . + a0 so that:

pA(A) = an A^n + . . . + a1 A + a0 I = 0
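A small sympy check on made-up 2x2 examples: for 2I the minimal polynomial is (x − 2) even though the characteristic polynomial is (x − 2)^2, while for a 2x2 Jordan block it really is (x − 2)^2, matching the largest-block remark below.

    from sympy import Matrix, eye, zeros

    # For A = 2I, (A - 2I) = 0 already, so the minimal polynomial is (x - 2).
    A = 2 * eye(2)
    assert A - 2 * eye(2) == zeros(2, 2)

    # For the Jordan block [[2, 1], [0, 2]], (B - 2I) != 0 but (B - 2I)^2 = 0,
    # so the minimal polynomial is (x - 2)^2.
    B = Matrix([[2, 1],
                [0, 2]])
    assert (B - 2 * eye(2)) != zeros(2, 2)
    assert (B - 2 * eye(2))**2 == zeros(2, 2)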
Remark 15. Factor pA = ∏ (x − λs)^(ks). By our work with the Jordan form, we know that the only possible roots of pA are eigenvalues. (Otherwise each Jordan block cannot vanish.) Moreover, the exponent ks corresponds to the size of the largest Jordan block for λs.

Problem 16. Categories to look at from Written's Wiki: Change of Basis, Projections, Rotations, Reflections, Rank-Nullity Theorem, Jordan Canonical Form.