9.2 Coordinate Systems and Bases
In this section we look at generalized coordinate systems for planes and higher-dimensional subspaces. In a Cartesian coordinate system the coordinate axes are usually perpendicular. As we have seen earlier, it is often convenient to use the lines through the eigenvectors of a matrix as the coordinate axes. In this case the axes are not perpendicular to each other and we have what might be called an "oblique" coordinate system. We consider such coordinate systems both for subspaces and for the space of all vectors with a given number of components.
Let's take another look at these oblique coordinate systems and review the connection between the geometric and algebraic perspectives. For definiteness, suppose we are working with a plane S through the origin in three dimensions. Thus S is a subspace. From the geometric point of view, to define a coordinate system for S we pick two lines L1 and L2 in S. We choose a positive direction and a unit of length on each line and mark off a coordinate system on each line. Let u1 and u2 be the vectors from the origin to the unit points on each line. Note that specifying the two lines, their positive directions and their units of length is equivalent to giving two vectors u1 and u2 in S that don't lie along the same line, i.e. that are linearly independent. To get the coordinates of a point $w = \begin{bmatrix} x \\ y \\ z \end{bmatrix}$ in this new coordinate system we draw parallels to L2 and L1 through w. These parallels hit L1 and L2 at points v1 and v2 whose coordinates on L1 and L2 are c1 and c2. Note that v1 = c1u1 and v2 = c2u2, so w = v1 + v2 = c1u1 + c2u2. Then c1 and c2 are the coordinates of w in the new coordinate system. If we look at this process of defining the new coordinate system for S from the algebraic point of view, we see that we pick two linearly independent vectors u1 and u2 in S, and the new coordinates of a point w are the coefficients of u1 and u2 when we write w as a linear combination of u1 and u2. The two vectors u1 and u2 are called a basis for S. Generalizing this to higher-dimensional subspaces leads to the following definition.
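To make the algebraic point of view concrete, here is a minimal sketch in Python with sympy. The basis vectors u1, u2 below are made up for illustration (they are not from the text); finding the new coordinates c1, c2 of a point w of S amounts to solving the vector equation w = c1u1 + c2u2.

```python
from sympy import Matrix, symbols, solve

# A hypothetical basis for a plane S through the origin in R^3
# (illustrative vectors, not taken from the text).
u1 = Matrix([1, 0, 1])
u2 = Matrix([0, 1, 1])

# A point built to lie in S: w = 2*u1 + 3*u2 = (2, 3, 5).
w = Matrix([2, 3, 5])

# The new coordinates (c1, c2) of w are the coefficients in
# w = c1*u1 + c2*u2; solve that vector equation component-wise.
c1, c2 = symbols('c1 c2')
print(solve(list(c1*u1 + c2*u2 - w), [c1, c2]))  # {c1: 2, c2: 3}
```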
Definition 1. A collection u1, …, un of vectors is a basis for a subspace S if it has the following three properties.
1. u1, …, un are all in S.
2. Every vector in S can be written as a linear combination of u1, …, un.
3. u1, …, un are linearly independent.
Example 1 (a continuation of Example 2 in section 9.1). Let S be the subspace consisting of vectors $v = \begin{bmatrix} x \\ y \\ z \\ w \end{bmatrix}$ such that

(1)    x + 2y + 3z + 5w = 0
       2x + 4y + 8z + 4w = 0
       3x + 6y + 12z + 6w = 0
Find a basis for S.
1 2 3 5 | 0
Let's review what we did in section 9.1. The augmented matrix is M =  2 4 8 4 | 0 .
 3 6 12 6 | 0 
We use elementary row operations to bring it to reduced row echelon form.
 1 2 3 5 | 0  R2 - 2R1  R2  1 2 3 5 | 0  R2/2  R3  1 2 3 5 | 0 
 2 4 8 4 | 0  R3 - 3R1  R3  0 0 2 - 6 | 0  R2/3  R3  0 0 1 - 3 | 0 
 3 6 12 6 | 0 
0 0 3 -9 | 0
0 0 1 -3 | 0


R1 - 3R2  R1  1 2 0 4 | 0 
R3 - R2  R3  0 0 1 - 3 | 0 
0 0 0 0 | 0

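As a quick machine check of this reduction, here is a sympy sketch (supplementary to the hand computation above, not part of the text's method):

```python
from sympy import Matrix

# Coefficient matrix of system (1); the augmented zero column
# never changes under row operations, so we can omit it.
A = Matrix([[1, 2, 3, 5],
            [2, 4, 8, 4],
            [3, 6, 12, 6]])

# rref() returns the reduced row echelon form together with
# the indices of the pivot columns.
R, pivots = A.rref()
print(R)       # [1, 2, 0, 14; 0, 0, 1, -3; 0, 0, 0, 0]
print(pivots)  # (0, 2)
```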
So a set of equations equivalent to (1) is

(2)    x + 2y + 14w = 0
       z - 3w = 0
       0 = 0

or, solving for the leading variables,

       x = -2y - 14w
       z = 3w
So a solution v to (1) can be written as

(3)    $v = \begin{bmatrix} x \\ y \\ z \\ w \end{bmatrix} = \begin{bmatrix} -2y - 14w \\ y \\ 3w \\ w \end{bmatrix} = y \begin{bmatrix} -2 \\ 1 \\ 0 \\ 0 \end{bmatrix} + w \begin{bmatrix} -14 \\ 0 \\ 3 \\ 1 \end{bmatrix} = y u_1 + w u_2$

where

$u_1 = \begin{bmatrix} -2 \\ 1 \\ 0 \\ 0 \end{bmatrix}, \qquad u_2 = \begin{bmatrix} -14 \\ 0 \\ 3 \\ 1 \end{bmatrix}$
So S consists of all linear combinations of u1 and u2. We shall show that u1 and u2 form a basis for S by showing that they are linearly independent. Suppose yu1 + wu2 = 0. Looking at (3) we see that

$\begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \end{bmatrix} = y u_1 + w u_2 = y \begin{bmatrix} -2 \\ 1 \\ 0 \\ 0 \end{bmatrix} + w \begin{bmatrix} -14 \\ 0 \\ 3 \\ 1 \end{bmatrix} = \begin{bmatrix} -2y - 14w \\ y \\ 3w \\ w \end{bmatrix}$
Looking at the second and fourth components we see that this implies y = 0 and w = 0.
So u1 and u2 are linearly independent and hence are a basis of S.
We see that the method we used in this example can be used to find a basis for the null
space of any matrix A.
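sympy packages this whole computation as nullspace(); a small sketch repeating the Example 1 calculation:

```python
from sympy import Matrix

A = Matrix([[1, 2, 3, 5],
            [2, 4, 8, 4],
            [3, 6, 12, 6]])

# nullspace() returns one basis vector per free variable, built
# exactly as in (3): set one free variable to 1 and the rest to 0.
for u in A.nullspace():
    print(u.T)   # (-2, 1, 0, 0) and (-14, 0, 3, 1)
```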
Now let's consider the question of finding a basis for the column space of a matrix, i.e. a basis for the subspace of all linear combinations of a fixed set of vectors.
1
2
 3
5
Example 2. Let u1 =  2 , u2 =  4 , u3 =  8  and u4 =  4 . Let S be the set of all linear
3
6
 12 
6
combinations of u1, u2, u3 and u4. Find a basis for S.
An initial candidate for a basis would be u1, u2, u3 and u4. We need to check whether they are linearly independent or dependent. So we consider the equation xu1 + yu2 + zu3 + wu4 = 0. This is equivalent to the equations (1), which in turn are equivalent to (2). We see that there is a non-zero solution. So u1, u2, u3 and u4 are not a basis for S. However, we can use the solutions of xu1 + yu2 + zu3 + wu4 = 0 to reduce u1, u2, u3 and u4 to a basis. If we pick y = 1 and w = 0 then we get x = -2 and z = 0. So -2u1 + u2 = 0, i.e. u2 = 2u1. On the other hand, if we pick y = 0 and w = 1 then we get x = -14 and z = 3. So -14u1 + 3u3 + u4 = 0, i.e. u4 = 14u1 - 3u3. Note that any vector w that can be written as a linear combination w = c1u1 + c2u2 + c3u3 + c4u4 of u1, u2, u3 and u4 can be written as a linear combination of u1 and u3 alone. To see this, just substitute u2 = 2u1 and u4 = 14u1 - 3u3 into w = c1u1 + c2u2 + c3u3 + c4u4 to get w = c1u1 + c2(2u1) + c3u3 + c4(14u1 - 3u3) = (c1 + 2c2 + 14c4)u1 + (c3 - 3c4)u3. So S consists of all linear combinations of u1 and u3. Now we need to check whether u1 and u3 are independent. So we consider the equation xu1 + zu3 = 0. This is equivalent to the equations (1) with the addition of y = 0 and w = 0, which in turn is equivalent to (2) with the addition of y = 0 and w = 0. But this implies x = 0 and z = 0. So u1 and u3 are independent and hence form a basis for S.
To summarize: to find a basis for the subspace S consisting of all linear combinations of vectors u1, u2, …, un, we form the matrix A whose columns are the uj and reduce it to a row echelon form matrix M. The vectors among u1, u2, …, un that correspond to the columns of M with a leading one (the pivot columns) form a basis for S.
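In sympy this recipe corresponds to reading off the pivot columns from rref(), or to calling columnspace() directly; a sketch using the matrix of Example 2:

```python
from sympy import Matrix

# Columns are u1, u2, u3, u4 from Example 2.
A = Matrix([[1, 2, 3, 5],
            [2, 4, 8, 4],
            [3, 6, 12, 6]])

# The pivot columns of the row echelon form tell us which
# of the original columns to keep.
R, pivots = A.rref()
print(pivots)                        # (0, 2): keep u1 and u3
basis = [A.col(j) for j in pivots]   # same vectors as A.columnspace()
print([b.T for b in basis])          # (1, 2, 3) and (3, 8, 12)
```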
If S is a subspace then the number of elements in a basis for S corresponds to the geometric dimension of S. For example, if S has a basis with two elements then S is a two-dimensional plane.
Definition 2. If S is a subspace, then the dimension of S is the number of elements in any
basis.
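In computational terms, the dimension is found by counting basis vectors; a sympy sketch with the matrix from the examples above:

```python
from sympy import Matrix

A = Matrix([[1, 2, 3, 5],
            [2, 4, 8, 4],
            [3, 6, 12, 6]])

# Dimension of the column space = number of pivot columns (the rank).
print(A.rank())            # 2
# Dimension of the null space = number of free variables.
print(len(A.nullspace()))  # 2, matching the basis u1, u2 of Example 1
```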
The following Theorem assures us that the dimension of a subspace is well defined.
Theorem 1. If S is a subspace then the number of elements in any two bases for S is the same.
This theorem is an immediate consequence of the following Proposition.
Proposition 2. Let S be the subspace of all linear combinations of u1, u2, …, un. If
v1, …, vm are in S and m > n then v1, …, vm are linearly dependent.
Proof. We can write vj = a1ju1 + a2ju2 + … + anjun. Let U be the matrix whose columns are u1, u2, …, un, let V be the matrix whose columns are v1, …, vm, and let

$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1m} \\ a_{21} & a_{22} & \cdots & a_{2m} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nm} \end{bmatrix}$

be the matrix whose elements are the aij. Then the jth column of V is $v_j = U \begin{bmatrix} a_{1j} \\ a_{2j} \\ \vdots \\ a_{nj} \end{bmatrix}$, so V = UA. Since A has more columns than rows, there is a non-zero vector $c = \begin{bmatrix} c_1 \\ c_2 \\ \vdots \\ c_m \end{bmatrix}$ such that Ac = 0. This follows from the material on uniqueness of solutions to linear equations in section 3.5. Therefore Vc = UAc = U0 = 0. So c1v1 + c2v2 + … + cmvm = 0. So v1, …, vm are linearly dependent. //
1
-1
3
3
2
Example 3. Let u1 = -1, u2 =  1  and v1 = 2u1 – u2 = -3, v2 = u1 – 2u2 = -3, v3 = u1 - u2 = -2.
1
1
1
-1
0
So v1, v2 and v3 are in the subspace S of all linear combinations of v1 and v2. In this case n = 2 and m =
3 so by the previous proposition v1, v2 and v3 should be dependent. Let's use the argument in the proof
 1 -1
of the proposition to show that they are. Let U = -1 1  be the matrix whose columns are u1 and u2
1 1
3
3
2


2 1 1
and V = -3 -3 -2 be the matrix whose columns are v1, v2 and v3 and A = -1 -2 -1 be the matrix
 1 -1 0 
whose elements are coefficients of the ui in the formulas of the vj. Then V = UA. We want to find c =
 c1 
 c2  such that Ac = 0. We reduce A to reduced row echelon form.
 c3 
 2 1 1 0 R1  R2 -1 -2 -1 0 -R1  R1 1 2 1 0 R2-2R1  R2 1 2 1 0
-1 -2 -1 0   2 1 1 0
2 1 1 0
0 -3 -1 0


c1
+ (1/3)c3 = 0
-R2/3  R2 1 2 1 0 R1-2R2  R1 1 0 1/3 0
0 1 1/3 0
0 1 1/3 0 
c2 + (1/3)c3 = 0


 c1   1 
c1 = - (1/3)c3
 c = - (1/3)c  Taking c3 = -3  c =  c2  =  1 
2
3
 c3   -3 
1
1
1
 
 
 
So A 1  = 0. So V 1  = UA 1  = 0. So (1)v1 + (1)v2 – 3v3 = 0. So v1, v2 and v3 are dependent.
 -3 
 -3 
 -3 