Math 317: Linear Algebra Homework 7 Solutions

The following problems are for additional practice and are not to be turned in. (All problems come from Linear Algebra: A Geometric Approach, 2nd Edition by Shifrin and Adams.)
Exercises: Section 3.3: 1–6, 8, 18, 19
Section 3.4: 1, 3, 19
Turn in the following problems.
1. Section 3.3, Problem 9
Proof: Suppose that u, v, w ∈ Rⁿ form a linearly independent set. We prove that u + v, v + 2w, and −u + v + w form a linearly independent set. To this end, suppose that c1(u + v) + c2(v + 2w) + c3(−u + v + w) = 0. We show that c1 = c2 = c3 = 0. Collecting like terms in the linear combination above, we have (c1 − c3)u + (c1 + c2 + c3)v + (2c2 + c3)w = 0. Since {u, v, w} forms a linearly independent set, we know that c1 − c3 = 0, c1 + c2 + c3 = 0, and 2c2 + c3 = 0. From the first and third equations we find that c1 = c3 and c2 = −c3/2, so c1 + c2 + c3 = c3 − c3/2 + c3 = (3/2)c3 = 0 =⇒ c3 = 0 =⇒ c1 = c2 = 0. Thus u + v, v + 2w, and −u + v + w form a linearly independent set.
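As a numerical sanity check (not part of the written proof), one can verify the conclusion for a sample independent triple; the choice of u, v, w below is illustrative.

```python
import numpy as np

# Sample linearly independent vectors u, v, w in R^3 (the standard basis).
u, v, w = np.eye(3)

# Form the three combinations from the problem as rows of a matrix.
M = np.array([u + v, v + 2 * w, -u + v + w])

# Rank 3 means the three combinations are linearly independent.
rank = np.linalg.matrix_rank(M)
print(rank)  # 3
```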
2. Section 3.3, Problem 10
Proof: Suppose that v1, v2, . . . , vk are nonzero vectors with the property that vi · vj = 0 whenever i ≠ j. We prove that {v1, . . . , vk} forms a linearly independent set. Suppose that c1v1 + c2v2 + . . . + ckvk = 0. We show that c1 = . . . = ck = 0. We take the dot product of this equation with vi to obtain
c1(v1 · vi) + c2(v2 · vi) + . . . + ci(vi · vi) + . . . + ck(vk · vi) = 0 · vi = 0.
Using the fact that vi · vj = 0 whenever i ≠ j, the above equation gives ci(vi · vi) = 0 =⇒ ci = 0, since vi ≠ 0 and vi · vi = ‖vi‖² > 0. Since this is true for all i = 1, 2, . . . , k, we have that c1 = c2 = . . . = ck = 0 and hence {v1, . . . , vk} forms a linearly independent set.
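This argument can be checked numerically on a sample orthogonal family (the three vectors below are illustrative, not from the text):

```python
import numpy as np

# Nonzero mutually orthogonal vectors in R^3 (illustrative choice).
v1 = np.array([1.0, 1.0, 0.0])
v2 = np.array([1.0, -1.0, 0.0])
v3 = np.array([0.0, 0.0, 2.0])
V = np.array([v1, v2, v3])

# All pairwise dot products vanish: the Gram matrix V V^T is diagonal.
gram = V @ V.T
off_diagonal = gram - np.diag(np.diag(gram))

# As the proof predicts, the set is linearly independent (full rank).
r = np.linalg.matrix_rank(V)
print(np.allclose(off_diagonal, 0), r)  # True 3
```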
3. Section 3.3, Problem 11
(a) Proof: Suppose that v1, v2, . . . , vn are nonzero mutually orthogonal vectors in Rⁿ. In the previous problem, we proved that {v1, . . . , vn} forms a linearly independent set. Let V = span{v1, . . . , vn}. Then dim V = n and V ⊆ Rⁿ, and since dim Rⁿ = n, it must be the case that V = Rⁿ. Thus v1, v2, . . . , vn form a basis for Rⁿ.
(b) Given any x ∈ Rn , we give an explicit formula for the coordinates of x
with respect to the basis {v1 , . . . , vn }. Since v1 , v2 , . . . , vn forms a basis
for Rn , we can write
x = c1 v 1 + c2 v 2 + . . . + cn v n ,
for some c1, c2, . . . , cn ∈ R. To find these ci's, we take the dot product of both sides of the equation above with vi to obtain:
x · vi = c1 (v1 · vi ) + c2 (v2 · vi ) + . . . + ci (vi · vi ) + . . . + cn (vn · vi ).
Once again, we use the fact that vi · vj = 0 whenever i ≠ j to obtain
x · vi = ci(vi · vi) =⇒ ci = (x · vi)/(vi · vi) = (x · vi)/‖vi‖².
(c) Recalling that proj_vi x = ((x · vi)/‖vi‖²) vi, from part (b) we see that
x = c1v1 + c2v2 + . . . + cnvn
= ((x · v1)/‖v1‖²) v1 + ((x · v2)/‖v2‖²) v2 + . . . + ((x · vn)/‖vn‖²) vn
= ∑ᵢ₌₁ⁿ proj_vi x.
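A quick numerical check of this expansion (the orthogonal basis and the vector x below are illustrative): the projections onto a mutually orthogonal basis of Rⁿ sum back to x.

```python
import numpy as np

# Mutually orthogonal basis of R^3 (illustrative) and an arbitrary x.
basis = [np.array([1.0, 1.0, 0.0]),
         np.array([1.0, -1.0, 0.0]),
         np.array([0.0, 0.0, 2.0])]
x = np.array([3.0, -2.0, 5.0])

# Sum of projections proj_{v_i} x = ((x . v_i) / ||v_i||^2) v_i.
recon = sum((x @ vi) / (vi @ vi) * vi for vi in basis)
print(np.allclose(recon, x))  # True
```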
4. Section 3.3, Problem 15
Proof: Suppose that k > n. We prove that any k vectors in Rⁿ must form a linearly dependent set. Let A be the n × k matrix whose columns are the given k vectors. We recall that the columns of A form a linearly dependent set if N(A) ≠ {0}; this is equivalent to Ax = 0 having a nontrivial solution, which is the same as A not being a full rank matrix. Since rank(A) ≤ n < k (because the maximum number of nonzero rows in any echelon form of A is n), A is not a full rank matrix, and hence there are free variables that ensure Ax = 0 has infinitely many solutions, which in turn implies that N(A) ≠ {0}.
5. Section 3.3, Problem 21
Proof : To show that {Av1 , Av2 , . . . , Avk } is linearly independent, we start
with c1 (Av1 ) + c2 (Av2 ) + . . . + ck (Avk ) = 0, and show that c1 = c2 = . . . =
ck = 0. By linearity of A we have that c1 (Av1 ) + c2 (Av2 ) + . . . + ck (Avk ) =
A(c1 v1 + . . . + ck vk ) = 0. Since rank(A) = n for an m × n matrix, we know
that Ax = 0 has only the trivial solution, i.e. x = 0. Thus, A(c1 v1 + . . . +
ck vk ) = 0 =⇒ c1 v1 + . . . + ck vk = 0. Since {v1 , v2 , . . . , vk } is linearly
independent, then c1 = c2 = . . . = ck = 0. Thus, {Av1 , Av2 , . . . , Avk } is
linearly independent.
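A numerical illustration of this fact (the specific full-column-rank A and independent vectors below are illustrative choices, not from the text):

```python
import numpy as np

# An m x n matrix with rank n (m = 4, n = 3; illustrative choice).
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 1.0, 1.0]])
assert np.linalg.matrix_rank(A) == 3

# Linearly independent v1, v2, v3 in R^3, stacked as columns.
V = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])
assert np.linalg.matrix_rank(V) == 3

# The images A v_i remain linearly independent, as the proof shows.
r = np.linalg.matrix_rank(A @ V)
print(r)  # 3
```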
6. Section 3.4, Problem 3d
Let
A = [ 1  −1   1   1   0 ]
    [ 1   0   2   1   1 ]
    [ 0   2   2   2   0 ]
    [−1   1  −1   0  −1 ].
Then a row echelon form of A augmented by b = (b1, b2, b3, b4) is given by
[U | b̂] = [ 1  0  2  0   2 | b2 − b1 − b4    ]
          [ 0  1  1  0   1 | b2 − b1         ]
          [ 0  0  0  1  −1 | b1 + b4         ]
          [ 0  0  0  0   0 | −2b2 + b3 − 2b4 ].
A basis for C(A) is given by the pivot columns of A. Consulting the row echelon form of A, we find that columns 1, 2, and 4 all correspond to pivot columns. Thus, the 1st, 2nd, and 4th columns of A form a basis for C(A). That is, C(A) = span{(1, 1, 0, −1), (−1, 0, 2, 1), (1, 1, 2, 0)} and dim C(A) = 3.
A basis for R(A) is given by the nonzero rows in any echelon form of A. Thus, R(A) = span{(1, 0, 2, 0, 2), (0, 1, 1, 0, 1), (0, 0, 0, 1, −1)} and dim R(A) = 3.
To find a basis for N(A), we solve Ax = 0. From the row echelon form of A, we see that x3 and x5 are free variables. We also obtain the following reduced system of equations to solve:
x1 + 2x3 + 2x5 = 0 =⇒ x1 = −2x3 − 2x5
x2 + x3 + x5 = 0 =⇒ x2 = −x3 − x5
x4 − x5 = 0 =⇒ x4 = x5.
In standard form we may write the solution as
(x1, x2, x3, x4, x5) = (−2x3 − 2x5, −x3 − x5, x3, x5, x5) = x3(−2, −1, 1, 0, 0) + x5(−2, −1, 0, 1, 1).
Thus a basis for N(A) is given by N(A) = span{(−2, −1, 1, 0, 0), (−2, −1, 0, 1, 1)} and dim N(A) = 2.
A basis for N(Aᵀ) is made up of the coefficients in the constraint equations for b so that Ax = b is consistent. Since we only have one zero row in our echelon form, we have one constraint equation, −2b2 + b3 − 2b4 = 0, and thus a basis for N(Aᵀ) is given by N(Aᵀ) = span{(0, −2, 1, −2)} and dim N(Aᵀ) = 1.
To check the orthogonality, take the dot product of any vector in N(A) with any vector in R(A) to see that it is 0. Similarly, take the dot product of any vector in C(A) with any vector in N(Aᵀ) to see that it is 0.
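These claims about the four subspaces can be verified numerically for the matrix A of this problem (reconstructed here from the basis vectors given in the solution):

```python
import numpy as np

# The matrix A from this problem.
A = np.array([[ 1, -1,  1,  1,  0],
              [ 1,  0,  2,  1,  1],
              [ 0,  2,  2,  2,  0],
              [-1,  1, -1,  0, -1]], dtype=float)

# dim C(A) = dim R(A) = rank(A) = 3, dim N(A) = 5 - 3 = 2,
# and dim N(A^T) = 4 - 3 = 1.
r = np.linalg.matrix_rank(A)
print(r, 5 - r, 4 - r)  # 3 2 1

# The claimed null-space basis vectors really satisfy A x = 0 ...
n1 = np.array([-2, -1, 1, 0, 0], dtype=float)
n2 = np.array([-2, -1, 0, 1, 1], dtype=float)
assert np.allclose(A @ n1, 0) and np.allclose(A @ n2, 0)

# ... and the left null-space vector satisfies A^T y = 0,
# i.e. it is orthogonal to every column of A.
y = np.array([0, -2, 1, -2], dtype=float)
print(np.allclose(A.T @ y, 0))  # True
```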
7. Section 3.4, Problem 22
(a) Proof: In this problem, we refer back to the properties proved in exercise 3.2.10. To prove that rank(AB) ≤ rank(A), we recall from exercise 3.2.10 that C(AB) ⊂ C(A). Thus, dim C(AB) ≤ dim C(A). However, rank(A) = dim C(A) and thus rank(AB) ≤ rank(A).
(b) Proof: If n = p and B is nonsingular, then from exercise 3.2.10, we have
that C(AB) = C(A) =⇒ dim C(AB) = dim C(A) =⇒ rank(AB) =
rank(A).
(c) Proof : From exercise 3.2.10, we know that N (B) ⊂ N (AB) and hence
dim N (B) ≤ dim N (AB). Now dim N (B) = p−rank(B) and dim N (AB) =
p − rank(AB) and thus p − rank(B) ≤ p − rank(AB) =⇒ rank(AB) ≤
rank(B).
(d) Proof: If m = n and A is nonsingular, then from exercise 3.2.10, we
have that N (B) = N (AB) which implies dim N (B) = dim N (AB) =⇒
p − rank(B) = p − rank(AB) =⇒ rank(B) = rank(AB).
(e) Proof: Suppose that rank(AB) = n. We show that rank(A) = rank(B) = n. From part (c) we have that n = rank(AB) ≤ rank(B) ≤ n (the second inequality because B has n rows), so rank(B) = n. Similarly, from part (a), n = rank(AB) ≤ rank(A) ≤ n (because A has n columns), so rank(A) = n.
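The inequalities from parts (a) and (c) combine into rank(AB) ≤ min(rank(A), rank(B)), which is easy to observe numerically; the matrices below are randomly generated and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Random A (4 x 3) and B (3 x 5).
A = rng.standard_normal((4, 3))
B = rng.standard_normal((3, 5))

# Parts (a) and (c) give rank(AB) <= min(rank(A), rank(B)).
rA = np.linalg.matrix_rank(A)
rB = np.linalg.matrix_rank(B)
rAB = np.linalg.matrix_rank(A @ B)
print(rAB <= min(rA, rB))  # True
```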