Math 2250-1
Wed Oct 3 - filled in
4.2-4.4 vector spaces, subspaces, bases and dimension
Exercise 1) Vocabulary review:
A linear combination of the vectors v1 , v2 , ... vn is
any sum of scalar multiples, i.e. any vector v that can be expressed as
v = c1 v1 + c2 v2 + ... + cn vn
The span of v1 , v2 , ... vn is
the collection of all possible linear combinations
The vectors v1 , v2 , ... vn are linearly independent iff
the only linear combination that adds up to the zero vector is the "trivial" one for which all the linear
combination coefficients are zero. i.e.
c1 v1 + c2 v2 + ... + cn vn = 0   ⇒   c1 = c2 = ... = cn = 0
* another way of saying this is that none of the vectors vi can be expressed as a linear combination of the
others.
* another way of saying this is that for each vector v in the span of v1 , v2 , ... vn the linear combination
coefficients in the expression
v = c1 v1 + c2 v2 + ... + cn vn
are always unique.
The vectors v1 , v2 , ... vn are linearly dependent iff
they are not linearly independent, i.e. some linear combination with not all of the coefficients equal to zero
adds up to the zero vector, i.e. at least one of the vectors in the collection can be expressed as a linear
combination of the others, i.e. is "linearly dependent" on the others.
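A quick computational restatement of these definitions (a sketch in Python with numpy, not part of the original notes; the example vectors are made up): the vectors are linearly independent exactly when the matrix having them as its columns has rank equal to the number of vectors, since a rank deficiency means some non-trivial combination of the columns adds up to zero.

import numpy as np

# Hypothetical example vectors in R^3 (not from the notes).
v1 = np.array([1.0, 2.0, 0.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = np.array([1.0, 4.0, 2.0])   # equals v1 + 2*v2, so {v1, v2, v3} is dependent

def independent(*vectors):
    # Independent <=> only the trivial combination gives the zero vector
    # <=> the matrix with these columns has rank equal to the number of columns.
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) == len(vectors)

print(independent(v1, v2))       # True
print(independent(v1, v2, v3))   # False, since v3 = v1 + 2*v2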
V is a vector space means
a collection of objects, for which there are two operations, addition and scalar multiplication...so that
when you do these operations you stay inside the collection, and the usual algebra rules hold ....
W is a subspace of a vector space V means
a special subset which is a vector space in its own right, with the same operations...one only has to check that
this subset is closed under addition and scalar multiplication, and then the subset inherits all the other
algebra properties....for example, the zero vector will automatically be in any subspace, because it is zero
times any (other) vector in that subspace.
New concepts for today:
Definition: The vectors v1 , v2 , ... vn are called a basis for a subspace W iff they span W and are linearly
independent. (We talked about a basis for the span of a collection of vectors before, which this definition
generalizes.)
Definition: The dimension of a subspace W is the number of vectors in a basis for W . (It turns out that all
bases for a subspace always have the same number of vectors.)
Yesterday we showed in an example that the span of two particular vectors was a subspace. Exactly the
same idea shows a much more general fact:
Theorem 1) The span of any collection of vectors v1, v2, ... vn in ℝ^m (or actually the span of any
f1, f2, ... fn in some vector space V) is closed under addition and scalar multiplication, and so is a subspace
of ℝ^m (or V).
why: We need to show W := span{v1, v2, ... vn} is (a) closed under addition and (b) closed under
scalar multiplication. Let
    v = c1 v1 + c2 v2 + ... + cn vn ∈ W
    w = d1 v1 + d2 v2 + ... + dn vn ∈ W .
(a) Then, using the algebra properties of addition and scalar multiplication to rearrange and collect terms,
    v + w = (c1 + d1) v1 + (c2 + d2) v2 + ... + (cn + dn) vn .
Thus v + w is also a linear combination of v1, v2, ... vn, so v + w ∈ W is true.
(b) Using algebra to rearrange terms we also compute
    c v = (c c1) v1 + (c c2) v2 + ... + (c cn) vn ∈ W .
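A symbolic spot-check of this closure computation (a sketch in Python with sympy rather than the course's Maple; the particular spanning vectors v1, v2 below are made up for illustration):

import sympy as sp

c1, c2, d1, d2, k = sp.symbols('c1 c2 d1 d2 k')
v1 = sp.Matrix([1, 2, 0])      # hypothetical vectors spanning W
v2 = sp.Matrix([0, 1, 1])

v = c1*v1 + c2*v2              # a generic element of W = span{v1, v2}
w = d1*v1 + d2*v2              # another generic element of W

# (a) v + w collects into a combination of v1, v2 with coefficients c_i + d_i:
print((v + w - ((c1 + d1)*v1 + (c2 + d2)*v2)).expand())    # zero vector
# (b) k*v collects into a combination with coefficients k*c_i:
print((k*v - ((k*c1)*v1 + (k*c2)*v2)).expand())            # zero vector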
But most subsets are not subspaces (this was Exercise 5 yesterday):
Exercise 1) Show that
1a) W = { [x, y]^T ∈ ℝ^2 s.t. x^2 + y^2 = 1 } is not a subspace of ℝ^2.
1b) W = { [x, y]^T ∈ ℝ^2 s.t. y = 3 x + 1 } is not a subspace of ℝ^2.
1c) W = { [x, y]^T ∈ ℝ^2 s.t. y = 3 x } is a subspace of ℝ^2. Then find a basis for this subspace. What is
the dimension of this subspace?
1a) not a subspace. One reason: the zero vector is not in this set.
Another reason: this set is not closed under addition... for example, [1, 0]^T, [0, 1]^T are in the set, but
their sum [1, 1]^T is not... because 1^2 + 1^2 = 2 ≠ 1. Another reason... this set is not closed under
scalar multiplication... e.g. 2 [1, 0]^T = [2, 0]^T is not in the set... because 2^2 + 0^2 = 4 ≠ 1.
1b) not a subspace: [0, 0]^T is not in W. As in 1a), you could also show directly that W is not closed under
addition or scalar multiplication. You only need to give one reason, of course.
1c) W is a subspace.
a) closure under addition: Let [x1, y1]^T, [x2, y2]^T ∈ W. Thus we know that
    y1 = 3 x1        (E1)
    y2 = 3 x2 .      (E2)
We need to show that [x1, y1]^T + [x2, y2]^T = [x1 + x2, y1 + y2]^T ∈ W. In other words, is it true that
    y1 + y2 = 3 (x1 + x2) ?
But this equation IS true, because it is equivalent to the result of adding the two true equations E1, E2.
b) closure under scalar multiplication: We need to show that c [x1, y1]^T = [c x1, c y1]^T ∈ W. In other
words, is it true that
    c y1 = 3 (c x1) ?
But this is a true equation, because it is the result of multiplying both sides of E1 by the constant c.
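These checks are easy to mirror numerically (a quick sketch in Python with numpy, not part of the original notes):

import numpy as np

# 1a) The unit circle x^2 + y^2 = 1 is not closed under addition:
p, q = np.array([1.0, 0.0]), np.array([0.0, 1.0])
s = p + q
print(s, s[0]**2 + s[1]**2)           # [1. 1.] 2.0, and 2 != 1, so p + q leaves the set

# 1c) The line y = 3x is closed under addition and scalar multiplication:
def on_line(v):
    return np.isclose(v[1], 3*v[0])

a, b = np.array([1.0, 3.0]), np.array([-2.0, -6.0])
print(on_line(a + b), on_line(5*a))   # True True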
The geometry of subspaces in ℝ^m is pretty special:
Exercise 2) Use matrix theory to show that the only subspaces of ℝ^2 are
(0) The single vector [0, 0]^T, or
(1) A line through the origin, i.e. span{u} for some non-zero vector u, or
(2) All of ℝ^2.
Exercise 3) Can you use matrix theory to show that the only subspaces of ℝ^3 are
(0) The single vector [0, 0, 0]^T, or
(1) A line through the origin, i.e. span{u} for some non-zero vector u, or
(2) A plane through the origin, i.e. span{u, v} where u, v are linearly independent, or
(3) All of ℝ^3?
Exercise 4) What are the dimensions of the subspaces in Exercise 2 and Exercise 3?
Solution to Exercises 3,4: Let W be a subspace of ℝ^3. As we discussed above, W must contain the zero
vector [0, 0, 0]^T.
(0) Notice that W = { [0, 0, 0]^T } is one possible subspace. It doesn't take any non-zero vectors to span
this subspace, so we call it 0-dimensional.
(1) If W contains more elements than the zero vector, let u ≠ 0 be such an element. Then since W is
closed under scalar multiplication it contains
    span{u} = { c1 u, with c1 ∈ ℝ } .
The span of a single vector is always a subspace (as a special case of Theorem 1), and this could be all of
W. In this case the vector u is a basis for this 1-dimensional subspace of ℝ^3, which is a line through
the origin. (And here we're more used to writing the parameter c1 as t, as we think of particle motion
with velocity vector u.)
(2) If W contains more vectors than span{u}, then let v ∈ W, but v ∉ span{u}. Then, because W is
closed under addition and scalar multiplication, it must contain
    span{u, v} = { c1 u + c2 v, with each of c1, c2 ∈ ℝ } .
This is a subspace (Theorem 1) and could be all of W. In fact, this span is a plane through the origin,
and the vectors u, v are a basis for this 2-dimensional subspace. That's because if we compute the reduced
row echelon form of the matrix having u, v as its columns, the result must be
    [ u1  v1 ]        [ 1  0 ]
    [ u2  v2 ]   →    [ 0  1 ]
    [ u3  v3 ]        [ 0  0 ] .
And that computation implies u, v are linearly independent. (Why?)
(The reason the rref had to work out that way is because the only other possible reduced row echelon
form would have been
    [ u1  v1 ]        [ 1  c ]
    [ u2  v2 ]   →    [ 0  0 ]
    [ u3  v3 ]        [ 0  0 ] ,
which would have implied that v = c u, which we assumed was not the case when we found a v ∉ span{u}.)
(3) If W contains more vectors than the span of u, v, let w ∈ W be such a vector. Then we claim that W
is actually all of ℝ^3 and that u, v, w are a basis. This is because the only possible reduced row echelon
form computation, based on what we've already worked out, is
    [ u1  v1  w1 ]        [ 1  0  0 ]
    [ u2  v2  w2 ]   →    [ 0  1  0 ]
    [ u3  v3  w3 ]        [ 0  0  1 ] ,
which shows that u, v, w span ℝ^3 and are linearly independent, i.e. a basis for 3-dimensional ℝ^3.
(Since we already know how the first two columns reduce, the only other possible reduction would have
been
    [ u1  v1  w1 ]        [ 1  0  c1 ]
    [ u2  v2  w2 ]   →    [ 0  1  c2 ]
    [ u3  v3  w3 ]        [ 0  0  0  ] ,
which would say that w = c1 u + c2 v, which contradicts the fact that we chose w not in the span of u, v.)
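The row reduction in case (2) is easy to try out by machine (a sketch in Python with sympy, with made-up independent vectors u, v; the course worksheets would use Maple for this):

import sympy as sp

# Hypothetical linearly independent vectors in R^3 (not from the notes).
u = sp.Matrix([1, 2, -1])
v = sp.Matrix([0, 1, 4])

M = u.row_join(v)        # the 3x2 matrix with u, v as its columns
R, pivots = M.rref()
print(R)                 # Matrix([[1, 0], [0, 1], [0, 0]]), as claimed above
print(pivots)            # (0, 1): a pivot in each column, so u, v are independent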
Exercise 5) Yesterday we showed that the plane spanned by
    [1, 1, -1]^T , [-1, 1, -5]^T
has implicit equation
    -2 x + 3 y + z = 0 .
Suppose you were just given the implicit equation. How could you find a basis for this plane?
We can backsolve (or front solve or ...), letting two of the variables be free parameters and expressing
the third one in terms of them. For example, since the z-coefficient is 1 in this problem it is convenient to
let
    x = t
    y = s
    ⇒ z = 2 t - 3 s .
In linear combination form this reads
    [x]       [1]       [ 0]
    [y]  =  t [0]  +  s [ 1]  .
    [z]       [2]       [-3]
Thus the vectors [1, 0, 2]^T, [0, 1, -3]^T span the solution space. They are also linearly independent, since
      [1]       [ 0]     [ t         ]     [0]
    t [0]  +  s [ 1]  =  [ s         ]  =  [0]   ⇒   t = s = 0 .
      [2]       [-3]     [ 2 t - 3 s ]     [0]
Thus [1, 0, 2]^T, [0, 1, -3]^T are a basis for this plane, which has dimension 2.
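One can also get a basis for this plane by machine (a sketch in Python with sympy; the notes do this by hand backsolving): the plane is the solution space of a single homogeneous equation, i.e. the nullspace of the 1 × 3 matrix [-2 3 1].

import sympy as sp

A = sp.Matrix([[-2, 3, 1]])      # encodes the implicit equation -2x + 3y + z = 0
print(A.nullspace())
# sympy solves for the pivot variable x and returns [3/2, 1, 0]^T, [1/2, 0, 1]^T;
# this basis differs from the hand-computed one, but both span the same plane.

# Check the hand-computed basis from above:
for v in (sp.Matrix([1, 0, 2]), sp.Matrix([0, 1, -3])):
    print(A * v)                 # Matrix([[0]]) each time, so both lie in the plane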
This discussion leads to
Theorem 2: Let A be an m × n matrix. Consider the solution space W of all solutions to the homogeneous
matrix equation
    A x = 0 ,
i.e.
    W = { x ∈ ℝ^n s.t. A x = 0 } .
Then W ⊆ ℝ^n is a subspace. Furthermore, you can find a basis for this subspace by backsolving from the
reduced row echelon form of A: once you write the solution in linear combination form
    x = c1 v1 + c2 v2 + ... + ck vk
(with the free variables as the coefficients cj), the vectors v1, v2, ... vk will always be linearly independent,
since each vj has a 1 in the slot of its own free variable and 0 in the slots of the other free variables. Thus,
since these vectors span W by construction, v1, v2, ... vk are a basis for W.
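Here is the Theorem 2 pattern on a small made-up example (a sketch in Python with sympy, not from the notes): each free variable contributes one basis vector, with a 1 in its own slot and 0 in the other free slots.

import sympy as sp

# A hypothetical homogeneous system A x = 0, with A already in reduced row echelon form.
A = sp.Matrix([[1, 3, 0, 2],
               [0, 0, 1, 5]])
x2, x4 = sp.symbols('x2 x4')       # the free variables (second and fourth columns)

# Backsolving: x1 = -3*x2 - 2*x4 and x3 = -5*x4.
x = sp.Matrix([-3*x2 - 2*x4, x2, -5*x4, x4])

# The linear combination form x = x2*v1 + x4*v2 reads off the basis vectors:
v1, v2 = x.diff(x2), x.diff(x4)
print(v1.T, v2.T)                  # [-3, 1, 0, 0] and [-2, 0, -5, 1]
print((A*v1).T, (A*v2).T)          # both zero, so v1 and v2 really solve A x = 0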
Exercise 6) Check that (a), (b) hold for the solution space W to A x = 0, i.e. to verify that
    W = { x ∈ ℝ^n s.t. A x = 0 }
is indeed a subspace.
Illustrate how to find a basis for the solution space to a homogeneous matrix equation by completing this
large example:
Exercise 7) Consider the matrix equation A x = 0 , with the matrix A (and its reduced row echelon form)
shown below:
A :=
    [  1   2   0   1   1   2 ]        [ 1   2   0   1   1   2 ]
    [  2   4   1   4   1   7 ]   →    [ 0   0   1   2  -1   3 ]
    [ -1  -2   1   1  -2   1 ]        [ 0   0   0   0   0   0 ]
    [ -2  -4   0  -2  -2  -4 ]        [ 0   0   0   0   0   0 ]
7a) Find a basis for the solution space W = x 2 =6 s.t. A x = 0 by backsolving, writing your explicit
solutions in linear combination form, and extracting a basis. Explain why these vectors span the solution
space and verify that they're linearly independent.
7b) What is the dimension of W ? How is the dimension related to the shape of the reduced row echelon
form of A?
7c) Since solutions to homogeneous matrix equations are exactly linear dependencies between the
columns of the matrix, your basis in 7a can be thought of as a "basis" of the key column dependencies.
Check this.
(By the way, this connection between homogeneous solutions and column dependencies is the reason why
any matrix has only one reduced row echelon form - the four conditions for rref (zero rows at bottom,
non-zero rows start with leading ones, leading ones in successive rows move to the right, all entries in a
column containing a leading "1" are zero except for that "1"), together with specified column
dependencies uniquely determine the rref matrix.)
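After working Exercise 7 by hand, one way to check the answers to 7a-7c by machine (a sketch in Python with sympy; the course itself would use Maple or hand computation):

import sympy as sp

A = sp.Matrix([
    [ 1,  2, 0,  1,  1,  2],
    [ 2,  4, 1,  4,  1,  7],
    [-1, -2, 1,  1, -2,  1],
    [-2, -4, 0, -2, -2, -4],
])

R, pivots = A.rref()
print(R)                  # matches the reduced row echelon form shown above
print(pivots)             # the pivot columns; the remaining columns give the free variables

basis = A.nullspace()     # one basis vector per free variable
print(len(basis))         # the dimension of W -- compare with your answer to 7b
for v in basis:
    print((A*v).T)        # zero each time: each basis vector is a column dependency (7c)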