Linear Algebra II
Robert Carlson

Contents

Preface
1 Vector Spaces
  1.1 Fields
  1.2 Vector Spaces
  1.3 Some Properties of Vector Spaces
  1.4 Subspaces
  1.5 Problems
  1.6 Solutions
2 Finite Dimensional Vector Spaces
  2.1 Problems
  2.2 Solutions
3 Linear Maps
  3.1 Problems
  3.2 Solutions
4 Polynomials
  4.1 Problems
  4.2 Solutions
5 Eigenvalues and Eigenvectors
  5.1 Problems
  5.2 Solutions
6 Inner Product Spaces
  6.1 Problems
  6.2 Solutions
7 Operators on Inner Product Spaces
  7.1 Problems
  7.2 Solutions
8 Operators on Complex Vector Spaces
  8.1 Problems
  8.2 Solutions
9 Operators on Real Vector Spaces
  9.1 Problems
  9.2 Solutions
10 Trace and Determinant
  10.1 Problems
  10.2 Solutions
Preface
These notes for a second course in linear algebra will start as a companion
to the text Linear Algebra Done Right, by S. Axler.
Linear Algebra is one of the cornerstones of mathematics, both pure and
applied. Its roots, in the study of systems of linear equations, can be found
in Babylonian and Chinese manuscripts from several hundred years BCE.
Determinants are clearly present in the work of Leibniz and Seki (Japan)
from about 1693. However, matrices themselves don’t make an appearance
until the work of Sylvester (1850) and Cayley (1855). The book of Kline
[8, pp. 795–812] has extensive information. The formulation of an abstract
vector space appears in a book by G. Peano in 1888.
Chapter 1
Vector Spaces
1.1 Fields
Linear algebra grows out of the study of the problem of solving a system of
equations of the form
a11 x1 + a12 x2 + . . . + a1N xN = y1
  ..
   .
aM1 x1 + aM2 x2 + . . . + aMN xN = yM          (1.1.1)
In the simplest case the coefficients amn and the right hand sides y1 , . . . , yM ,
are real numbers. The problem is to find real numbers x1 , . . . , xN so that all
equations are satisfied.
Using the Matlab notation, the vectors

X = [x1 , . . . , xN ],   Y = [y1 , . . . , yM ],

give us the prototype for a vector space. Let RN denote the set of ordered
N-tuples of real numbers X = [x1 , . . . , xN ] (which is a column vector, written in
a form more friendly to typesetting). This set enjoys some simple arithmetic
properties. It has an addition defined by
X + Z = [x1 + z1 , . . . , xN + zN ].
It also has scalar multiplication,

αX = [αx1 , . . . , αxN ],   α ∈ R.
Although the real numbers are comfortably familiar, they have some algebraic deficiencies. If p(x) = ax2 + bx + c is a polynomial with real coefficients
and a ≠ 0, then the solutions of p(x) = 0 are given by the quadratic formula

x = (−b ± √(b2 − 4ac)) / (2a).
We can easily construct examples when b2 − 4ac < 0, e.g. p(x) = x2 + 1,
in which case the solutions should be square roots of negative numbers. If
x is real, then x2 ≥ 0, so the roots can’t be real numbers. To find numbers
which include the square roots of negative numbers, the complex numbers
are needed.
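As a small illustration (not from the text), the quadratic formula can be evaluated over the complex numbers with Python’s standard cmath module, whose square root returns a complex result for negative inputs; the function name quadratic_roots is our own choice.

```python
import cmath  # complex math: sqrt of a negative number gives a complex result

def quadratic_roots(a, b, c):
    """Roots of a*x**2 + b*x + c = 0 via the quadratic formula (a != 0)."""
    disc = cmath.sqrt(b * b - 4 * a * c)
    return (-b + disc) / (2 * a), (-b - disc) / (2 * a)

# p(x) = x**2 + 1 has b**2 - 4ac = -4 < 0, so the roots are not real.
r1, r2 = quadratic_roots(1, 0, 1)
print(r1, r2)  # 1j -1j, i.e. the square roots of -1
```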
A complex number z is defined to be an ordered pair of real numbers,
z = (x, y).
The set of complex numbers, denoted by C, is equipped with addition and
multiplication. The addition is just the vector addition of R2 . The product
of complex numbers w = (u, v) and z = (x, y) is defined by
w ∗ z = (ux − vy, uy + vx).
Both addition and multiplication are commutative. The complex number
w = (1, 0) is a multiplicative identity. We can find a multiplicative inverse
for z by solving w ∗ z = (1, 0), or
ux − vy = 1,
The solution is

u = x / (x2 + y2 ),   v = −y / (x2 + y2 ),

which exists whenever z ≠ (0, 0). That is,

z −1 = ( x / (x2 + y2 ), −y / (x2 + y2 ) ).
It is helpful to write complex numbers in the form z = x + iy, meaning
z = (x, y). We think of the real numbers as those complex numbers having
the form (x, 0). A simple computation then shows that (0, 1)∗(0, 1) = (−1, 0),
so the complex number i = (0, 1) satisfies i2 = −1.
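The ordered-pair arithmetic above can be checked directly. The following Python sketch (the names cadd, cmul, and cinv are ours, chosen for illustration) implements the addition, the product formula, and the inverse formula, and verifies that i ∗ i = (−1, 0).

```python
def cadd(w, z):
    """Vector addition of R^2: (u, v) + (x, y) = (u + x, v + y)."""
    return (w[0] + z[0], w[1] + z[1])

def cmul(w, z):
    """Product formula: (u, v) * (x, y) = (ux - vy, uy + vx)."""
    u, v = w
    x, y = z
    return (u * x - v * y, u * y + v * x)

def cinv(z):
    """Multiplicative inverse of z = (x, y) != (0, 0)."""
    x, y = z
    d = x * x + y * y
    return (x / d, -y / d)

i = (0, 1)
print(cmul(i, i))         # (-1, 0): the pair i = (0, 1) squares to -1
z = (3, 4)
print(cmul(z, cinv(z)))   # approximately the identity (1.0, 0.0)
```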
Since the complex numbers C are less familiar than the real numbers, it
is a good idea to write down their arithmetic properties.
1. Addition is commutative:
w + z = z + w
2. Addition is associative:
r + (w + z) = (r + w) + z
3. There is a unique additive identity (zero),
z + (0, 0) = z
4. Every element has an additive inverse
z + (−z) = (0, 0)
5. Multiplication is commutative
w ∗ z = z ∗ w
6. Multiplication is associative
r ∗ (w ∗ z) = (r ∗ w) ∗ z
7. There is a unique non-zero multiplicative identity
z ∗ (1, 0) = z
8. Each non-zero element z has a multiplicative inverse z −1 so that
z ∗ z −1 = (1, 0)
9. Multiplication distributes over addition
r ∗ (w + z) = r ∗ w + r ∗ z
When discussing these axioms, it is standard to denote the additive identity by 0 rather than (0, 0), and the multiplicative identity by 1 rather than
(1, 0).
If a set F comes equipped with two operations, + and ∗, such that properties 1-9 hold, we say F is a field. In addition to the real numbers R and the
complex numbers C, there are many other fields. For instance, the rational
numbers Q are a field with the usual addition and multiplication. Notice
that the integers are not a field with the usual arithmetic operations, since, for example, 2 has no integer multiplicative inverse.
The real (or complex) rational functions, that is, functions of the form
p(x)/q(x) where p and q ≠ 0 are polynomials with real (or complex) coefficients, form a field with the usual addition and multiplication.
If p is a prime number, then Zp = Z/pZ is a field with finitely many
elements.
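As an aside not in the text, the multiplicative inverse in Zp can be computed with Fermat’s little theorem, which gives a^(p−2) ≡ a^(−1) (mod p) for nonzero a; the helper name inv_mod below is our own.

```python
def inv_mod(a, p):
    """Inverse of a nonzero element a of Z_p, p prime, via Fermat's little theorem.

    pow(a, p - 2, p) computes a**(p-2) mod p by fast modular exponentiation.
    """
    return pow(a, p - 2, p)

p = 7
a = 3
b = inv_mod(a, p)
print(b)            # 5, since 3 * 5 = 15 = 2*7 + 1
print((a * b) % p)  # 1
```

Because p is prime, every nonzero a in Zp has such an inverse, which is exactly field axiom 8.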
Although our text will only consider the cases F = R and F = C, it is important
to know that much of linear algebra can be developed for any field. This
point of view is systematically developed in [7].
1.2 Vector Spaces
By analogy with RN , we can consider the set FN of ordered N-tuples from
F, regardless of whether F = R, F = C, or F is any field. Addition and
scalar multiplication will be defined componentwise using addition from the
field. These sets FN , together with their arithmetic operations, satisfy the
following properties.
1. Addition is commutative

U + V = V + U

2. Addition is associative

(U + V ) + W = U + (V + W )

3. There is an additive identity (given by (0, . . . , 0) for FN ), usually denoted by 0

U + 0 = U

4. Every element has an additive inverse

U + (−U) = 0

5. Multiplication by 1 in F acts as a multiplicative identity

1 ∗ U = U
6. Distributive laws

α(U + V ) = αU + αV,   α ∈ F,

(α + β)U = αU + βU,   α, β ∈ F.
More generally, suppose F is a field, V is a set, and operations of addition
and scalar multiplication are defined such that 1-6 hold. Then we say V
(with these operations) is a vector space over F.
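The componentwise operations on FN can be sketched in a few lines of Python; the names vadd and smul are illustrative, and Python numbers stand in for elements of the field F.

```python
def vadd(u, v):
    """Componentwise addition in F^N."""
    return [x + y for x, y in zip(u, v)]

def smul(alpha, u):
    """Scalar multiplication alpha * u, componentwise."""
    return [alpha * x for x in u]

u = [1, 2, 3]
v = [4, 5, 6]
print(vadd(u, v))   # [5, 7, 9]
print(smul(2, u))   # [2, 4, 6]

# A spot check of the first distributive law: alpha(U + V) = alpha*U + alpha*V.
print(smul(2, vadd(u, v)) == vadd(smul(2, u), smul(2, v)))  # True
```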
Here are some additional examples of vector spaces. Let P denote the set
of all polynomials p : R → R with real coefficients. Using the usual addition
and scalar multiplication
(p + q)(x) = p(x) + q(x),
(αp)(x) = αp(x),
P is a vector space over R. The analogous definition using C works to give a
vector space over C. If we restrict to the set of polynomials PN with degree
at most N, we obtain more vector spaces over R or C. Other examples can be
obtained by restricting to the even polynomials, which satisfy p(−x) = p(x),
or the odd polynomials, which satisfy p(−x) = −p(x).
We can define a larger vector space than P by using the set C(R) of
continuous functions f : R → R (or f : C → C). Addition and scalar
multiplication are defined as for polynomial functions. To verify that C(R)
is a vector space, we need to know that the sum of continuous functions is
continuous, and the product of a scalar α ∈ R with a continuous function f
is continuous.
Vector spaces also arise in elementary differential equations. Recall that
if p(x) is a given continuous function, then the set of solutions to the equation
y ′′ + p(x)y = 0
is a vector space. The main point here is that sums and scalar multiples of
solutions are solutions.
1.3 Some Properties of Vector Spaces
Proposition 1.3.1. A vector space has a unique additive identity.
Proof. Suppose 0 and 0′ are additive identities. Using axiom 3,
0′ = 0′ + 0 = 0.
Proposition 1.3.2. Every element in a vector space has a unique additive
inverse.
Proof. Suppose w and w ′ are additive inverses of v. Then
w = w + 0 = w + (v + w ′) = (w + v) + w ′ = 0 + w ′ = w ′.
Proposition 1.3.3. If v ∈ V, then 0F v = 0V .
Proof.
0F v = (0F + 0F )v = 0F v + 0F v,
so adding −0F v to both sides gives 0F v = 0V .
Proposition 1.3.4. For every α ∈ F, α0V = 0V
Proof.

α0V = α(0V + 0V ) = α0V + α0V ,

and adding −α0V to both sides does the job.
Proposition 1.3.5. (−1)v = −v
Proof.
0V = 0F v = (1 − 1)v = v + (−1)v,
so (−1)v is the (unique) additive inverse of v.
1.4 Subspaces
Notice that condition 1 in the text is redundant.
What are subspaces of R2 ?
1) The zero subspace {0}.
2) Lines through the origin, which are the scalar multiples of a single
nonzero vector.
3) All of R2 .
We often want to represent vectors as sums of simple pieces. For instance,
any vector in RN is a sum

X = x1 E1 + · · · + xN EN ,

where En is the n-th standard basis vector.
A related idea is to focus on the subspaces Un = {αEn : α ∈ R}, and consider
writing each vector as a sum of vectors from these subspaces. This idea leads
to two notions.
Suppose U1 , . . . , UM are subspaces of V. The sum of subspaces U1 + · · · +
UM is the set of all vectors u1 + · · · + uM with um ∈ Um . It’s easy to see that
this is a subspace of V.
While this idea is useful, it does not give us uniqueness of representation.
As an example in R4 , let
U1 = {(0, x2 , x3 , 0)},
U2 = {(0, x2 , 0, x4 )}.
Then U1 + U2 is the set of vectors with first component 0, but a vector
X of the form X = (0, x2 , 0, 0) can be written in many ways as a sum of
vectors from U1 and U2 . This is not the case if Um = {αEm : α ∈ R}.
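The non-uniqueness can be checked concretely; the sketch below picks X = (0, 1, 0, 0) and exhibits two different decompositions, using hypothetical variable names of our choosing.

```python
def vsum(u, v):
    """Componentwise sum of two 4-tuples."""
    return tuple(x + y for x, y in zip(u, v))

# Two decompositions of X = (0, 1, 0, 0) as u1 + u2 with
# u1 in U1 = {(0, x2, x3, 0)} and u2 in U2 = {(0, x2, 0, x4)}.
u1a, u2a = (0, 1, 0, 0), (0, 0, 0, 0)
u1b, u2b = (0, 0, 0, 0), (0, 1, 0, 0)

print(vsum(u1a, u2a) == vsum(u1b, u2b))  # True: same X, different summands
```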
We say V is the direct sum of subspaces U1 , . . . , UM if each element of V
can be written in a unique way as a sum u1 + · · · + uM with um ∈ Um . In
this case we write
V = U1 ⊕ · · · ⊕ UM .
Proposition 1.4.1. Suppose that U1 , . . . , UM are subspaces of V . Then
V = U1 ⊕ · · · ⊕ UM
if and only if the following two conditions hold:
(1) V = U1 + · · · + UM ,
and
(2) the only way to write 0 = u1 + · · · + uM with um ∈ Um is to take
um = 0 for all m.
Proof. First let’s check sufficiency. Since (1) holds, we only have to check
uniqueness of the representation. Suppose X ∈ V and
X = u1 + · · · + uM ,
um ∈ Um ,
X = v1 + · · · + vM ,
vm ∈ Um .
and
Subtraction gives

0 = (u1 − v1 ) + · · · + (uM − vM ),   um − vm ∈ Um .

By (2), um − vm = 0, that is um = vm , and the representation is unique.
Now suppose V = U1 ⊕ · · · ⊕ UM . Then (1) holds. Also, 0 ∈ V has a
unique representation, and one such is 0 = 01 + · · · + 0M , with 0m ∈ Um . (Of
course these are the same 0 ∈ V.) Thus (2) holds as well.
The next result does not have an obvious generalization to more than 2
subspaces.
Proposition 1.4.2. Suppose that U and W are two subspaces of V. Then
V = U ⊕ W if and only if (1) V = U + W and (2) U ∩ W = 0.
Proof. Suppose V = U ⊕ W . Then (1) holds. Let v ∈ U ∩ W . Then
0 = v + (−v) with v ∈ U and −v ∈ W . This forces v = 0, so (2) holds.
Suppose (1) and (2) hold, and we try to write 0 = u + w with u ∈ U and
w ∈ W . Then w = −u. Since U and W are subspaces, w = −u lies in U as
well as in W , so w ∈ U ∩ W = 0 and u = w = 0. Thus the representation
of 0 is unique, and the sum is direct.
1.5 Problems

1.6 Solutions