HOMEWORK 2 SOLUTIONS
Levandosky, Linear Algebra
2.11 There are many possible parameterizations. Following the notation in the text, we can first let $x_0 = \begin{pmatrix} 3 \\ 1 \end{pmatrix}$, and obtain a direction vector using the vector whose tail is at $(2, -3)$ and head is at $(3, 1)$: $v = \begin{pmatrix} 3 - 2 \\ 1 - (-3) \end{pmatrix} = \begin{pmatrix} 1 \\ 4 \end{pmatrix}$. Thus, the line can be written parametrically as
$$L = \left\{ \begin{pmatrix} 3 \\ 1 \end{pmatrix} + t \begin{pmatrix} 1 \\ 4 \end{pmatrix} \;\middle|\; t \in \mathbb{R} \right\}.$$
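As a quick numerical check (not part of the original solution), we can verify that both given points lie on this line; the helper `point_on_line` and the parameter values $t = 0$ and $t = -1$ are ours, not from the text:

```python
# Sanity check: both (3, 1) and (2, -3) should lie on
# L = { (3, 1) + t*(1, 4) : t in R }.
def point_on_line(t, x0=(3, 1), v=(1, 4)):
    """Return the point x0 + t*v on the line L."""
    return (x0[0] + t * v[0], x0[1] + t * v[1])

print(point_on_line(0))    # (3, 1), the head point
print(point_on_line(-1))   # (2, -3), the tail point
```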

2.13 As above, one such parameterization is given by $x_0 = \begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix}$ and $v = \begin{pmatrix} 1 - (-2) \\ 2 - 1 \\ 3 - 2 \end{pmatrix} = \begin{pmatrix} 3 \\ 1 \\ 1 \end{pmatrix}$; i.e.,
$$L = \left\{ \begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix} + t \begin{pmatrix} 3 \\ 1 \\ 1 \end{pmatrix} \;\middle|\; t \in \mathbb{R} \right\}.$$
2.15 We can first let $x_0 = \begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix}$. To obtain two direction vectors, we can use the vectors whose tails are at $(1, 2, 3)$ and whose heads are at $(2, 3, 4)$ and $(2, 1, 5)$. This gives $v_1 = \begin{pmatrix} 2 - 1 \\ 3 - 2 \\ 4 - 3 \end{pmatrix} = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}$ and $v_2 = \begin{pmatrix} 2 - 1 \\ 1 - 2 \\ 5 - 3 \end{pmatrix} = \begin{pmatrix} 1 \\ -1 \\ 2 \end{pmatrix}$. Thus, the plane can be written parametrically as
$$P = \left\{ \begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix} + s \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} + t \begin{pmatrix} 1 \\ -1 \\ 2 \end{pmatrix} \;\middle|\; s, t \in \mathbb{R} \right\}.$$
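As a quick check (not part of the original solution), the three given points are recovered at the parameter values $(s, t) = (0, 0)$, $(1, 0)$, and $(0, 1)$; the helper `point_on_plane` is ours:

```python
# Check that the three given points lie in P = { x0 + s*v1 + t*v2 }:
# (0, 0) gives (1, 2, 3); (1, 0) gives (2, 3, 4); (0, 1) gives (2, 1, 5).
def point_on_plane(s, t, x0=(1, 2, 3), v1=(1, 1, 1), v2=(1, -1, 2)):
    """Return the point x0 + s*v1 + t*v2 in the plane P."""
    return tuple(x + s * a + t * b for x, a, b in zip(x0, v1, v2))

print(point_on_plane(0, 0))  # (1, 2, 3)
print(point_on_plane(1, 0))  # (2, 3, 4)
print(point_on_plane(0, 1))  # (2, 1, 5)
```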
2.17 We can first let $x_0 = \begin{pmatrix} 2 \\ 3 \\ 4 \\ 5 \end{pmatrix}$. To obtain three direction vectors, we can use the vectors whose tails are at $(2, 3, 4, 5)$ and whose heads are at $(1, 1, 1, 1)$, $(0, 1, 0, 1)$, and $(-1, -2, 3, 1)$. This gives
$$v_1 = \begin{pmatrix} 1 - 2 \\ 1 - 3 \\ 1 - 4 \\ 1 - 5 \end{pmatrix} = \begin{pmatrix} -1 \\ -2 \\ -3 \\ -4 \end{pmatrix}, \quad v_2 = \begin{pmatrix} 0 - 2 \\ 1 - 3 \\ 0 - 4 \\ 1 - 5 \end{pmatrix} = \begin{pmatrix} -2 \\ -2 \\ -4 \\ -4 \end{pmatrix}, \quad \text{and} \quad v_3 = \begin{pmatrix} -1 - 2 \\ -2 - 3 \\ 3 - 4 \\ 1 - 5 \end{pmatrix} = \begin{pmatrix} -3 \\ -5 \\ -1 \\ -4 \end{pmatrix}.$$
Thus, the hyperplane can be written parametrically as
$$P = \left\{ \begin{pmatrix} 2 \\ 3 \\ 4 \\ 5 \end{pmatrix} + r \begin{pmatrix} -1 \\ -2 \\ -3 \\ -4 \end{pmatrix} + s \begin{pmatrix} -2 \\ -2 \\ -4 \\ -4 \end{pmatrix} + t \begin{pmatrix} -3 \\ -5 \\ -1 \\ -4 \end{pmatrix} \;\middle|\; r, s, t \in \mathbb{R} \right\}.$$
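The head-minus-tail arithmetic above can be checked mechanically (a sketch, not part of the original solution):

```python
# Direction vectors for the hyperplane: head minus tail, all tails at (2, 3, 4, 5).
tail = (2, 3, 4, 5)
heads = [(1, 1, 1, 1), (0, 1, 0, 1), (-1, -2, 3, 1)]

dirs = [tuple(h - t for h, t in zip(head, tail)) for head in heads]
print(dirs)  # [(-1, -2, -3, -4), (-2, -2, -4, -4), (-3, -5, -1, -4)]
```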

 

4.1 (a) $a \cdot b = \begin{pmatrix} 2 \\ 1 \\ 1 \end{pmatrix} \cdot \begin{pmatrix} 1 \\ -2 \\ 0 \end{pmatrix} = 2 \cdot 1 + 1 \cdot (-2) + 1 \cdot 0 = 0.$
(b)
$$a \cdot (b + c) = \begin{pmatrix} 2 \\ 1 \\ 1 \end{pmatrix} \cdot \left( \begin{pmatrix} 1 \\ -2 \\ 0 \end{pmatrix} + \begin{pmatrix} 4 \\ 2 \\ -2 \end{pmatrix} \right) = \begin{pmatrix} 2 \\ 1 \\ 1 \end{pmatrix} \cdot \begin{pmatrix} 5 \\ 0 \\ -2 \end{pmatrix} = 2 \cdot 5 + 1 \cdot 0 + 1 \cdot (-2) = 8.$$
(c) $x \cdot y = \begin{pmatrix} 2 \\ 3 \end{pmatrix} \cdot \begin{pmatrix} -3 \\ 2 \end{pmatrix} = 2 \cdot (-3) + 3 \cdot 2 = 0.$
(d) $\|x - y\| = \left\| \begin{pmatrix} 2 \\ 3 \end{pmatrix} - \begin{pmatrix} -3 \\ 2 \end{pmatrix} \right\| = \left\| \begin{pmatrix} 5 \\ 1 \end{pmatrix} \right\| = \sqrt{5^2 + 1^2} = \sqrt{26}.$
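These four computations can be replayed numerically (a sketch, not part of the original solution; the helper `dot` is ours):

```python
import math

# Verify the dot-product computations in 4.1 with a small helper.
def dot(u, v):
    """Dot product of two equal-length vectors."""
    return sum(ui * vi for ui, vi in zip(u, v))

a, b, c = (2, 1, 1), (1, -2, 0), (4, 2, -2)
x, y = (2, 3), (-3, 2)

print(dot(a, b))                                       # (a): 0
print(dot(a, tuple(bi + ci for bi, ci in zip(b, c))))  # (b): 8
print(dot(x, y))                                       # (c): 0
diff = tuple(xi - yi for xi, yi in zip(x, y))          # x - y = (5, 1)
print(math.sqrt(dot(diff, diff)))                      # (d): sqrt(26)
```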
4.4 Note that $\left\| \begin{pmatrix} 1 \\ -3 \end{pmatrix} \right\| = \sqrt{1^2 + (-3)^2} = \sqrt{10}$. So, to get a vector pointing in the same direction, but with length 3, we simply scale the given vector by $\frac{3}{\sqrt{10}}$. So, the desired vector is $\frac{3}{\sqrt{10}} \begin{pmatrix} 1 \\ -3 \end{pmatrix} = \begin{pmatrix} 3/\sqrt{10} \\ -9/\sqrt{10} \end{pmatrix}$.
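A quick numerical confirmation that the rescaled vector really has length 3 (our own check, not part of the original solution):

```python
import math

# Scale (1, -3) to length 3: multiply by 3 / ||(1, -3)|| = 3 / sqrt(10).
v = (1, -3)
length = math.sqrt(sum(x * x for x in v))   # sqrt(10)
w = tuple(3 / length * x for x in v)        # (3/sqrt(10), -9/sqrt(10))
print(math.isclose(math.sqrt(sum(x * x for x in w)), 3.0))  # True: new length is 3
```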
4.10 Suppose that
c1 u + c2 v + c3 w = 0
for some ci ∈ R. We must show that c1 = c2 = c3 = 0. To show this, first take the dot product of both sides of the above equation with u. This gives
c1 (u · u) + c2 (u · v) + c3 (u · w) = u · 0.
By hypothesis, u · v = 0 and u · w = 0. Since we also know u · u = ‖u‖² and u · 0 = 0, our equation above gives
c1 ‖u‖² = 0.
By hypothesis, u is a nonzero vector, so ‖u‖² ≠ 0. The above equation therefore implies c1 = 0.
If we repeat this process by dotting the original equation with v, we will
similarly find c2 = 0. Dotting the original equation with w gives c3 =0.
Thus, c1 = c2 = c3 = 0, and hence the set of vectors {u, v, w} is linearly
independent.
4.11 We simply expand the product and use the given data:
$$\begin{aligned} (u - v + 2w) \cdot (x + y - z) &= u \cdot x + u \cdot y - u \cdot z - v \cdot x - v \cdot y + v \cdot z + 2w \cdot x + 2w \cdot y - 2w \cdot z \\ &= 1 + 2 - 0 - (-1) - 2 + 3 + 2(1) + 2(-2) - 2(-1) \\ &= 5. \end{aligned}$$

 
 

7.1 $\begin{pmatrix} 2 & 0 & 1 \\ 3 & -4 & 2 \\ 5 & 1 & -3 \end{pmatrix} \begin{pmatrix} 3 \\ -4 \\ 1 \end{pmatrix} = \begin{pmatrix} 2 \cdot 3 + 0 \cdot (-4) + 1 \cdot 1 \\ 3 \cdot 3 + (-4) \cdot (-4) + 2 \cdot 1 \\ 5 \cdot 3 + 1 \cdot (-4) + (-3) \cdot 1 \end{pmatrix} = \begin{pmatrix} 7 \\ 27 \\ 8 \end{pmatrix}.$
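The row-by-row computation in 7.1 can be sketched as follows (the helper `matvec` is ours, not from the text):

```python
# Row-by-row matrix-vector product, matching the computation in 7.1.
def matvec(A, x):
    """Multiply matrix A (given as a list of rows) by vector x."""
    return [sum(aij * xj for aij, xj in zip(row, x)) for row in A]

A = [[2, 0, 1],
     [3, -4, 2],
     [5, 1, -3]]
print(matvec(A, [3, -4, 1]))  # [7, 27, 8]
```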
7.3 This product is not defined, since the matrix A has three columns but the
vector v only has two rows.
7.4 Again this product is not defined, since the matrix A has four columns but the vector v only has two rows. Remember: the matrix-vector product is only defined when the number of columns of the matrix A equals the number of rows of the vector v.
10.16 The first quadrant is not a subspace of $\mathbb{R}^2$, since it is not closed under scalar multiplication. For example, notice that $\begin{pmatrix} 1 \\ 2 \end{pmatrix}$ is in the first quadrant, but $-\begin{pmatrix} 1 \\ 2 \end{pmatrix} = \begin{pmatrix} -1 \\ -2 \end{pmatrix}$ is not in the first quadrant.
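The counterexample can be illustrated directly (our own sketch; the helper `in_first_quadrant` is hypothetical, and we take the first quadrant to include the axes, which does not affect the counterexample):

```python
# (1, 2) is in the first quadrant, but (-1)*(1, 2) = (-1, -2) is not,
# so the first quadrant is not closed under scalar multiplication.
def in_first_quadrant(p):
    """Membership test for the (closed) first quadrant of R^2."""
    return p[0] >= 0 and p[1] >= 0

p = (1, 2)
print(in_first_quadrant(p))               # True
print(in_first_quadrant((-p[0], -p[1])))  # False
```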
10.21 We must check the three properties of a subspace. In each case, we use the
fact that V and W are subspaces.
(1) Since V and W are subspaces, we know 0 ∈ V and 0 ∈ W . Thus,
0 ∈ V ∩ W.
(2) Suppose x, y ∈ V ∩ W . Then x, y ∈ V , and V is a subspace, so
x + y ∈ V . Similarly, x, y ∈ W , and W is a subspace, so x + y ∈ W .
Thus, x + y ∈ V ∩ W .
(3) Suppose x ∈ V ∩ W , and c ∈ R. Then x ∈ V , and V is a subspace,
so cx ∈ V . Similarly, x ∈ W and W is a subspace, so cx ∈ W . Thus,
cx ∈ V ∩ W .
10.22 Again, we must check the three properties of a subspace.
(1) Since V and W are subspaces, we know 0 ∈ V and 0 ∈ W . Therefore,
0 + 0 ∈ V + W , and hence 0 ∈ V + W .
(2) Suppose x, y ∈ V +W . Then we can write x = v1 +w1 and y = v2 +w2
for some v1 , v2 ∈ V , w1 , w2 ∈ W . But then x + y = (v1 + v2 ) + (w1 +
w2 ). Since V is a subspace, v1 + v2 ∈ V ; since W is a subspace,
w1 + w2 ∈ W . So, if we write v3 = v1 + v2 and w3 = w1 + w2 , then
x + y = v3 + w3 ∈ V + W .
(3) Suppose x ∈ V + W , and c ∈ R. Write x = v + w for some v ∈ V , w ∈
W . Then v ∈ V , and V is a subspace, so cv ∈ V . Similarly, w ∈ W
and W is a subspace, so cw ∈ W . Thus, cx = (cv) + (cw) ∈ V + W .
11.14 By hypothesis, {v1 , v2 , v3 } is a basis for V . By our handy theorem (Proposition 12.3), to check that another set of three vectors is also a basis for V , we only have to check either: 1) the set spans V ; or 2) the set is linearly independent. So, to prove that {2v1 − v2 , v1 + v2 + 2v3 , v1 + v3 } is a basis, we'll simply check that the set is linearly independent. Suppose
c1 (2v1 − v2 ) + c2 (v1 + v2 + 2v3 ) + c3 (v1 + v3 ) = 0
for some ci ∈ R. We must show that c1 = c2 = c3 = 0. Rewrite the above
equation as
(2c1 + c2 + c3 )v1 + (−c1 + c2 )v2 + (2c2 + c3 )v3 = 0.
Since {v1 , v2 , v3 } is linearly independent, the above must be the trivial
combination; i.e. we must have
2c1 + c2 + c3 = 0
−c1 + c2 = 0
2c2 + c3 = 0
Solving the last two equations gives c1 = c2 and c3 = −2c2 . Plugging these
into the first equation gives c2 = 0, and hence c1 = 0 and c3 = 0.
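As an alternative check (not the substitution argument used above), the coefficient matrix of the system has nonzero determinant, so the only solution is the trivial one; the helper `det3` is our own sketch:

```python
# 3x3 coefficient matrix of the system 2c1+c2+c3=0, -c1+c2=0, 2c2+c3=0.
# A nonzero determinant means only c1 = c2 = c3 = 0 solves it.
def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

M = [[2, 1, 1],
     [-1, 1, 0],
     [0, 2, 1]]
print(det3(M))  # 1, nonzero, so only the trivial combination exists
```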
     
12.12 (a) Sometimes true. The set $\left\{ \begin{pmatrix} 1 \\ 0 \\ 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \\ 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 0 \\ 1 \\ 0 \end{pmatrix} \right\}$ is linearly independent, but the set $\left\{ \begin{pmatrix} 1 \\ 0 \\ 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \\ 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 1 \\ 1 \\ 0 \\ 0 \end{pmatrix} \right\}$ is linearly dependent.
(b) Never true. If a set of three vectors spanned $\mathbb{R}^4$, then by our handy fact (Proposition 12.1), any set of more than three vectors in $\mathbb{R}^4$ would be linearly dependent. But the standard basis $\left\{ \begin{pmatrix} 1 \\ 0 \\ 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \\ 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 0 \\ 1 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 0 \\ 0 \\ 1 \end{pmatrix} \right\}$ is a set of four linearly independent vectors.
(c) Never true. Since the standard basis for R3 is a set of three vectors
spanning R3 , by our handy fact (Prop. 12.1), any set of more than
three vectors in R3 must be linearly dependent.

       
(d) Sometimes true. The set $\left\{ \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}, \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} \right\}$ spans $\mathbb{R}^3$, whereas the set $\left\{ \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} \right\}$ clearly does not.
(e) Always true, by Proposition 12.3 (since dim(R4 ) = 4).
(f) Always true, again by Proposition 12.3.
13.1 We check the two properties required for a function to be a linear transformation:
(1) Suppose $\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}, \begin{pmatrix} y_1 \\ y_2 \end{pmatrix} \in \mathbb{R}^2$. Then
$$\begin{aligned} f\left( \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + \begin{pmatrix} y_1 \\ y_2 \end{pmatrix} \right) &= f \begin{pmatrix} x_1 + y_1 \\ x_2 + y_2 \end{pmatrix} = \begin{pmatrix} 3(x_1 + y_1) \\ -2(x_1 + y_1) + 5(x_2 + y_2) \end{pmatrix} \\ &= \begin{pmatrix} 3x_1 + 3y_1 \\ (-2x_1 + 5x_2) + (-2y_1 + 5y_2) \end{pmatrix} = \begin{pmatrix} 3x_1 \\ -2x_1 + 5x_2 \end{pmatrix} + \begin{pmatrix} 3y_1 \\ -2y_1 + 5y_2 \end{pmatrix} \\ &= f \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + f \begin{pmatrix} y_1 \\ y_2 \end{pmatrix}. \end{aligned}$$
(2) Suppose $\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \in \mathbb{R}^2$ and $c \in \mathbb{R}$. Then
$$f\left( c \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \right) = f \begin{pmatrix} cx_1 \\ cx_2 \end{pmatrix} = \begin{pmatrix} 3cx_1 \\ -2cx_1 + 5cx_2 \end{pmatrix} = c \begin{pmatrix} 3x_1 \\ -2x_1 + 5x_2 \end{pmatrix} = c f \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}.$$
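The algebraic proof above can be spot-checked numerically at a few sample inputs (a spot check, not a proof; the sample points and helpers are ours):

```python
# Spot-check the two linearity properties of f(x1, x2) = (3*x1, -2*x1 + 5*x2).
def f(x):
    x1, x2 = x
    return (3 * x1, -2 * x1 + 5 * x2)

def add(u, v):
    return (u[0] + v[0], u[1] + v[1])

def scale(c, u):
    return (c * u[0], c * u[1])

x, y, c = (1, 2), (-3, 4), 7
print(f(add(x, y)) == add(f(x), f(y)))   # True: additivity holds here
print(f(scale(c, x)) == scale(c, f(x)))  # True: homogeneity holds here
```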
13.3 This function is not a linear transformation. For example, observe that
$$f \begin{pmatrix} 2 \\ 1 \end{pmatrix} = \begin{pmatrix} 7 \\ -8 \end{pmatrix},$$
but
$$f\left( 2 \begin{pmatrix} 2 \\ 1 \end{pmatrix} \right) = f \begin{pmatrix} 4 \\ 2 \end{pmatrix} = \begin{pmatrix} 22 \\ -16 \end{pmatrix} \neq 2 \begin{pmatrix} 7 \\ -8 \end{pmatrix}.$$
13.20 Take any vector w ∈ T(V ). Then w = T(v) for some v ∈ V . Since the set
{v1 , . . . , vk } spans V , we can write
v = c1 v1 + · · · + ck vk
for some ci ∈ R. But then observe that
w = T(v) = T(c1 v1 + · · · + ck vk ) = c1 T(v1 ) + · · · + ck T(vk ),
and so w ∈ span(T(v1 ), . . . , T(vk )). Thus, the set {T(v1 ), . . . , T(vk )}
spans T(V ).
13.22 Since the set {v1 , . . . , vk } is linearly dependent, there exist some ci ∈ R, not all zero, such that
c1 v1 + · · · + ck vk = 0.
Applying the linear transformation T to both sides yields
T(c1 v1 + · · · + vk ) = T(0),
which we can rewrite as
c1 T(v1 ) + · · · + ck T(vk ) = 0.
Since the ci are not all zero, this implies the set {T(v1 ), . . . , T(vk )} is
linearly dependent.