Linear Algebra Textbook Answers

Chapter 1 Solutions
Section 1.1
A Practice Problems
A1 $\begin{bmatrix}1\\4\end{bmatrix}+\begin{bmatrix}2\\3\end{bmatrix}=\begin{bmatrix}1+2\\4+3\end{bmatrix}=\begin{bmatrix}3\\7\end{bmatrix}$
A2 $\begin{bmatrix}3\\2\end{bmatrix}-\begin{bmatrix}4\\1\end{bmatrix}=\begin{bmatrix}3-4\\2-1\end{bmatrix}=\begin{bmatrix}-1\\1\end{bmatrix}$
A3 $3\begin{bmatrix}-1\\4\end{bmatrix}=\begin{bmatrix}3(-1)\\3(4)\end{bmatrix}=\begin{bmatrix}-3\\12\end{bmatrix}$
A5 $\begin{bmatrix}4\\-2\end{bmatrix}+\begin{bmatrix}-1\\3\end{bmatrix}=\begin{bmatrix}4+(-1)\\-2+3\end{bmatrix}=\begin{bmatrix}3\\1\end{bmatrix}$
A7 $-2\begin{bmatrix}3\\-2\end{bmatrix}=\begin{bmatrix}(-2)3\\(-2)(-2)\end{bmatrix}=\begin{bmatrix}-6\\4\end{bmatrix}$
A4 $2\begin{bmatrix}2\\1\end{bmatrix}-2\begin{bmatrix}3\\-1\end{bmatrix}=\begin{bmatrix}4\\2\end{bmatrix}-\begin{bmatrix}6\\-2\end{bmatrix}=\begin{bmatrix}-2\\4\end{bmatrix}$
A6 $\begin{bmatrix}-3\\-4\end{bmatrix}-\begin{bmatrix}-2\\5\end{bmatrix}=\begin{bmatrix}-3-(-2)\\-4-5\end{bmatrix}=\begin{bmatrix}-1\\-9\end{bmatrix}$
A8 $\frac12\begin{bmatrix}2\\6\end{bmatrix}+\frac13\begin{bmatrix}4\\3\end{bmatrix}=\begin{bmatrix}1\\3\end{bmatrix}+\begin{bmatrix}4/3\\1\end{bmatrix}=\begin{bmatrix}7/3\\4\end{bmatrix}$
Copyright © 2020 Pearson Canada Inc.
A9 $\frac23\begin{bmatrix}3\\1\end{bmatrix}-2\begin{bmatrix}1/4\\1/3\end{bmatrix}=\begin{bmatrix}2\\2/3\end{bmatrix}-\begin{bmatrix}1/2\\2/3\end{bmatrix}=\begin{bmatrix}3/2\\0\end{bmatrix}$

A10 $\sqrt2\begin{bmatrix}\sqrt2\\\sqrt3\end{bmatrix}+3\begin{bmatrix}1\\\sqrt6\end{bmatrix}=\begin{bmatrix}2\\\sqrt6\end{bmatrix}+\begin{bmatrix}3\\3\sqrt6\end{bmatrix}=\begin{bmatrix}5\\4\sqrt6\end{bmatrix}$

A11 $\begin{bmatrix}2\\3\\4\end{bmatrix}-\begin{bmatrix}5\\1\\-2\end{bmatrix}=\begin{bmatrix}2-5\\3-1\\4-(-2)\end{bmatrix}=\begin{bmatrix}-3\\2\\6\end{bmatrix}$

A12 $\begin{bmatrix}2\\1\\-6\end{bmatrix}+\begin{bmatrix}-3\\1\\-4\end{bmatrix}=\begin{bmatrix}2+(-3)\\1+1\\-6+(-4)\end{bmatrix}=\begin{bmatrix}-1\\2\\-10\end{bmatrix}$

A13 $-6\begin{bmatrix}4\\-5\\-6\end{bmatrix}=\begin{bmatrix}(-6)4\\(-6)(-5)\\(-6)(-6)\end{bmatrix}=\begin{bmatrix}-24\\30\\36\end{bmatrix}$

A14 $-2\begin{bmatrix}-5\\1\\1\end{bmatrix}+3\begin{bmatrix}-1\\0\\-1\end{bmatrix}=\begin{bmatrix}10\\-2\\-2\end{bmatrix}+\begin{bmatrix}-3\\0\\-3\end{bmatrix}=\begin{bmatrix}7\\-2\\-5\end{bmatrix}$

A15 $2\begin{bmatrix}2/3\\-1/3\\2\end{bmatrix}+\frac13\begin{bmatrix}3\\-2\\1\end{bmatrix}=\begin{bmatrix}4/3\\-2/3\\4\end{bmatrix}+\begin{bmatrix}1\\-2/3\\1/3\end{bmatrix}=\begin{bmatrix}7/3\\-4/3\\13/3\end{bmatrix}$

A16 $\sqrt2\begin{bmatrix}1\\1\\1\end{bmatrix}+\pi\begin{bmatrix}-1\\0\\1\end{bmatrix}=\begin{bmatrix}\sqrt2\\\sqrt2\\\sqrt2\end{bmatrix}+\begin{bmatrix}-\pi\\0\\\pi\end{bmatrix}=\begin{bmatrix}\sqrt2-\pi\\\sqrt2\\\sqrt2+\pi\end{bmatrix}$
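The componentwise vector arithmetic in A1–A16 is easy to check numerically. A minimal sketch with NumPy (the array names are ours, not the text's):

```python
import numpy as np

# A11: subtraction is componentwise
a11 = np.array([2, 3, 4]) - np.array([5, 1, -2])

# A13: a scalar multiplies every component
a13 = -6 * np.array([4, -5, -6])

# A14: a linear combination of two vectors
a14 = -2 * np.array([-5, 1, 1]) + 3 * np.array([-1, 0, -1])

print(a11, a13, a14)
```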
    

 2  6  −4
    

w =  4 − −3 =  7
A17 (a) 2"v − 3"
    

−4
9
−13
    

  
 
 
 

 1  4  5
5  5 −15  5 −10
    

  
 
 
 

(b) −3("v + 2"
w) + 5"v = −3  2 + −2 +  10 = −3 0 +  10 =  0 +  10 =  10
    

  
 
 
 

−2
6
−10
4
−10
−12
−10
−22
" − 2"u = 3"v , so 2"u = w
" − 3"v or "u = 12 ("
(c) We have w
w − 3"v ). This gives
   
  

 2  3
−1 −1/2
1     1   

"u = −1 −  6 = −7 = −7/2

2     2   
3
−6
9
9/2
 
−3
 
(d) We have "u − 3"v = 2"u , so "u = −3"v = −6.
 
6
A18 (a) $\frac12\vec v+\frac12\vec w=\begin{bmatrix}3/2\\1/2\\1/2\end{bmatrix}+\begin{bmatrix}5/2\\-1/2\\-1\end{bmatrix}=\begin{bmatrix}4\\0\\-1/2\end{bmatrix}$
(b) $2(\vec v+\vec w)-(2\vec v-3\vec w)=2\begin{bmatrix}8\\0\\-1\end{bmatrix}-\left(\begin{bmatrix}6\\2\\2\end{bmatrix}-\begin{bmatrix}15\\-3\\-6\end{bmatrix}\right)=\begin{bmatrix}16\\0\\-2\end{bmatrix}-\begin{bmatrix}-9\\5\\8\end{bmatrix}=\begin{bmatrix}25\\-5\\-10\end{bmatrix}$
(c) We have $\vec w-\vec u=2\vec v$, so $\vec u=\vec w-2\vec v$. This gives $\vec u=\begin{bmatrix}5\\-1\\-2\end{bmatrix}-\begin{bmatrix}6\\2\\2\end{bmatrix}=\begin{bmatrix}-1\\-3\\-4\end{bmatrix}$.
(d) We have $\frac12\vec u+\frac13\vec v=\vec w$, so $\frac12\vec u=\vec w-\frac13\vec v$, or $\vec u=2\vec w-\frac23\vec v=\begin{bmatrix}10\\-2\\-4\end{bmatrix}-\begin{bmatrix}2\\2/3\\2/3\end{bmatrix}=\begin{bmatrix}8\\-8/3\\-14/3\end{bmatrix}$.
A19 $\vec{PQ}=\vec{OQ}-\vec{OP}=\begin{bmatrix}3\\1\\-2\end{bmatrix}-\begin{bmatrix}2\\3\\1\end{bmatrix}=\begin{bmatrix}1\\-2\\-3\end{bmatrix}$
$\vec{PS}=\vec{OS}-\vec{OP}=\begin{bmatrix}-5\\1\\5\end{bmatrix}-\begin{bmatrix}2\\3\\1\end{bmatrix}=\begin{bmatrix}-7\\-2\\4\end{bmatrix}$
$\vec{SR}=\vec{OR}-\vec{OS}=\begin{bmatrix}1\\4\\0\end{bmatrix}-\begin{bmatrix}-5\\1\\5\end{bmatrix}=\begin{bmatrix}6\\3\\-5\end{bmatrix}$
$\vec{PR}=\vec{OR}-\vec{OP}=\begin{bmatrix}1\\4\\0\end{bmatrix}-\begin{bmatrix}2\\3\\1\end{bmatrix}=\begin{bmatrix}-1\\1\\-1\end{bmatrix}$
$\vec{QR}=\vec{OR}-\vec{OQ}=\begin{bmatrix}1\\4\\0\end{bmatrix}-\begin{bmatrix}3\\1\\-2\end{bmatrix}=\begin{bmatrix}-2\\3\\2\end{bmatrix}$
Thus,
$\vec{PQ}+\vec{QR}=\begin{bmatrix}1\\-2\\-3\end{bmatrix}+\begin{bmatrix}-2\\3\\2\end{bmatrix}=\begin{bmatrix}-1\\1\\-1\end{bmatrix}=\begin{bmatrix}-7\\-2\\4\end{bmatrix}+\begin{bmatrix}6\\3\\-5\end{bmatrix}=\vec{PS}+\vec{SR}$
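The directed-line-segment computations above follow one rule: subtract the tail's position vector from the head's. A sketch of the A19 check in NumPy (point coordinates as reconstructed above):

```python
import numpy as np

# Points from A19
P, Q, R, S = (np.array(p) for p in [(2, 3, 1), (3, 1, -2), (1, 4, 0), (-5, 1, 5)])

PQ = Q - P   # directed line segment: head minus tail
QR = R - Q
PR = R - P
PS = S - P
SR = R - S

# Both paths P -> Q -> R and P -> S -> R end at R, so the sums agree with PR
assert (PQ + QR == PR).all() and (PS + SR == PR).all()
```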
A20 The equation of the line is $\vec x=\begin{bmatrix}3\\4\end{bmatrix}+t\begin{bmatrix}-5\\1\end{bmatrix}$, $t\in\mathbb R$
A21 The equation of the line is $\vec x=\begin{bmatrix}2\\3\end{bmatrix}+t\begin{bmatrix}-4\\-6\end{bmatrix}$, $t\in\mathbb R$
2
 4
 


A22 The equation of the line is "x = 0 + t  −2, t ∈ R
 


5
−11
 
 
4
−2
 
 
A23 The equation of the line is "x = 1 + t  1, t ∈ R
 
 
5
2
For Problems A24 - A28, alternative correct answers are possible.
A24 The direction vector $\vec d$ of the line is given by the directed line segment joining the two points: $\vec d=\begin{bmatrix}2\\-3\end{bmatrix}-\begin{bmatrix}-1\\2\end{bmatrix}=\begin{bmatrix}3\\-5\end{bmatrix}$. This, along with one of the points, may be used to obtain an equation for the line
$\vec x=\begin{bmatrix}-1\\2\end{bmatrix}+t\begin{bmatrix}3\\-5\end{bmatrix}$, $t\in\mathbb R$
A25 The direction vector $\vec d$ of the line is given by the directed line segment joining the two points: $\vec d=\begin{bmatrix}-2\\-1\end{bmatrix}-\begin{bmatrix}4\\1\end{bmatrix}=\begin{bmatrix}-6\\-2\end{bmatrix}$. This, along with one of the points, may be used to obtain an equation for the line
$\vec x=\begin{bmatrix}4\\1\end{bmatrix}+t\begin{bmatrix}-6\\-2\end{bmatrix}$, $t\in\mathbb R$
A26 The direction vector $\vec d$ of the line is given by the directed line segment joining the two points: $\vec d=\begin{bmatrix}-2\\1\\0\end{bmatrix}-\begin{bmatrix}1\\3\\-5\end{bmatrix}=\begin{bmatrix}-3\\-2\\5\end{bmatrix}$. This, along with one of the points, may be used to obtain an equation for the line
$\vec x=\begin{bmatrix}1\\3\\-5\end{bmatrix}+t\begin{bmatrix}-3\\-2\\5\end{bmatrix}$, $t\in\mathbb R$
A27 The direction vector $\vec d$ of the line is given by the directed line segment joining the two points: $\vec d=\begin{bmatrix}4\\2\\2\end{bmatrix}-\begin{bmatrix}-2\\1\\1\end{bmatrix}=\begin{bmatrix}6\\1\\1\end{bmatrix}$. This, along with one of the points, may be used to obtain an equation for the line
$\vec x=\begin{bmatrix}-2\\1\\1\end{bmatrix}+t\begin{bmatrix}6\\1\\1\end{bmatrix}$, $t\in\mathbb R$
A28 The direction vector $\vec d$ of the line is given by the directed line segment joining the two points: $\vec d=\begin{bmatrix}-1\\1\\1/3\end{bmatrix}-\begin{bmatrix}1/2\\1/4\\1\end{bmatrix}=\begin{bmatrix}-3/2\\3/4\\-2/3\end{bmatrix}$. This, along with one of the points, may be used to obtain an equation for the line
$\vec x=\begin{bmatrix}1/2\\1/4\\1\end{bmatrix}+t\begin{bmatrix}-3/2\\3/4\\-2/3\end{bmatrix}$, $t\in\mathbb R$
A29 The direction vector $\vec d$ of the line is given by the directed line segment joining the two points:
$\vec d=\begin{bmatrix}2\\-3\end{bmatrix}-\begin{bmatrix}-1\\2\end{bmatrix}=\begin{bmatrix}3\\-5\end{bmatrix}$.
Hence, the parametric equations of the line are $x_1=-1+3t$, $x_2=2-5t$, $t\in\mathbb R$.
A scalar equation is $x_2=2+\frac{-5}{3}(x_1-(-1))=-\frac53x_1+\frac13$.
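A quick numerical sanity check of the vector-to-scalar conversion in A29: every point generated by the parametric form should satisfy the scalar equation. A minimal sketch:

```python
import numpy as np

p = np.array([-1.0, 2.0])   # point on the line (A29)
d = np.array([3.0, -5.0])   # direction vector

for t in np.linspace(-2, 2, 9):
    x1, x2 = p + t * d                        # parametric form
    assert np.isclose(x2, -5/3 * x1 + 1/3)    # scalar form x2 = -(5/3)x1 + 1/3
```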
A30 The direction vector is $\vec d=\begin{bmatrix}2\\2\end{bmatrix}-\begin{bmatrix}1\\1\end{bmatrix}=\begin{bmatrix}1\\1\end{bmatrix}$.
Hence, the parametric equations of the line are $x_1=1+t$, $x_2=1+t$, $t\in\mathbb R$.
A scalar equation is $x_2=1+(x_1-1)=x_1$.
A31 The direction vector is $\vec d=\begin{bmatrix}3\\0\end{bmatrix}-\begin{bmatrix}1\\0\end{bmatrix}=\begin{bmatrix}2\\0\end{bmatrix}$.
Hence, the parametric equations of the line are $x_1=1+2t$, $x_2=0+0t$, $t\in\mathbb R$.
A scalar equation is $x_2=0+0(x_1-1)=0$.
A32 The direction vector is $\vec d=\begin{bmatrix}-1\\5\end{bmatrix}-\begin{bmatrix}1\\3\end{bmatrix}=\begin{bmatrix}-2\\2\end{bmatrix}$.
Hence, the parametric equations of the line are $x_1=1-2t$, $x_2=3+2t$, $t\in\mathbb R$.
A scalar equation is $x_2=3+(-1)(x_1-1)=-x_1+4$.
A33 (a) Let P, Q, and R be three points in $\mathbb R^n$, with corresponding vectors $\vec p$, $\vec q$, and $\vec r$. If P, Q, and R are collinear, then the directed line segments $\vec{PQ}$ and $\vec{PR}$ should define the same line. That is, the direction vector of one should be a non-zero scalar multiple of the direction vector of the other. Therefore, $\vec{PQ}=t\,\vec{PR}$ for some $t\in\mathbb R$.
(b) We have $\vec{PQ}=\begin{bmatrix}4\\1\end{bmatrix}-\begin{bmatrix}1\\2\end{bmatrix}=\begin{bmatrix}3\\-1\end{bmatrix}$ and $\vec{PR}=\begin{bmatrix}-5\\4\end{bmatrix}-\begin{bmatrix}1\\2\end{bmatrix}=\begin{bmatrix}-6\\2\end{bmatrix}=-2\vec{PQ}$, so they are collinear.
(c) We have $\vec{ST}=\begin{bmatrix}3\\-2\\3\end{bmatrix}-\begin{bmatrix}1\\0\\1\end{bmatrix}=\begin{bmatrix}2\\-2\\2\end{bmatrix}$ and $\vec{SU}=\begin{bmatrix}-3\\4\\-1\end{bmatrix}-\begin{bmatrix}1\\0\\1\end{bmatrix}=\begin{bmatrix}-4\\4\\-2\end{bmatrix}$. Therefore, the points S, T, and U are not collinear because $\vec{SU}\ne t\,\vec{ST}$ for any real number $t$.
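The collinearity test in A33 can be automated for points in the plane: P, Q, R are collinear exactly when $\vec{PQ}$ and $\vec{PR}$ are parallel, i.e. the $2\times2$ determinant of their components vanishes. A sketch under that assumption (2-D points only; the function name is ours):

```python
import numpy as np

def collinear(p, q, r):
    """True if the 2-D points P, Q, R lie on one line: PQ and PR are
    parallel, i.e. the determinant of their components is zero."""
    pq = np.asarray(q, dtype=float) - np.asarray(p, dtype=float)
    pr = np.asarray(r, dtype=float) - np.asarray(p, dtype=float)
    return bool(np.isclose(pq[0] * pr[1] - pq[1] * pr[0], 0.0))

print(collinear((1, 2), (4, 1), (-5, 4)))   # the points of A33(b)
```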
A34 For V2:
$\vec x+\vec y=\begin{bmatrix}x_1\\x_2\end{bmatrix}+\begin{bmatrix}y_1\\y_2\end{bmatrix}=\begin{bmatrix}x_1+y_1\\x_2+y_2\end{bmatrix}=\begin{bmatrix}y_1+x_1\\y_2+x_2\end{bmatrix}=\begin{bmatrix}y_1\\y_2\end{bmatrix}+\begin{bmatrix}x_1\\x_2\end{bmatrix}=\vec y+\vec x$
For V8:
$(s+t)\vec x=(s+t)\begin{bmatrix}x_1\\x_2\end{bmatrix}=\begin{bmatrix}(s+t)x_1\\(s+t)x_2\end{bmatrix}=\begin{bmatrix}sx_1+tx_1\\sx_2+tx_2\end{bmatrix}=\begin{bmatrix}sx_1\\sx_2\end{bmatrix}+\begin{bmatrix}tx_1\\tx_2\end{bmatrix}=s\begin{bmatrix}x_1\\x_2\end{bmatrix}+t\begin{bmatrix}x_1\\x_2\end{bmatrix}=s\vec x+t\vec x$
A35 We get that $\vec F_1=\begin{bmatrix}450\\0\end{bmatrix}$ and $\vec F_2=\begin{bmatrix}25\\25\sqrt3\end{bmatrix}$. Thus, the net force is $\vec F=\begin{bmatrix}475\\25\sqrt3\end{bmatrix}$.
B Homework Problems
B1 $\begin{bmatrix}0\\4\end{bmatrix}$  B2 $\begin{bmatrix}-5\\3\end{bmatrix}$  B3 $\begin{bmatrix}2\\6\end{bmatrix}$  B4 $\begin{bmatrix}5\\16\end{bmatrix}$
B5 $\begin{bmatrix}5\\15\end{bmatrix}$  B6 $\begin{bmatrix}-1\\-2\end{bmatrix}$  B7 $\begin{bmatrix}15\\-10\end{bmatrix}$  B8 $\begin{bmatrix}3/4\\19/4\end{bmatrix}$
B9 $\begin{bmatrix}2\sqrt2\\\sqrt2-\sqrt{18}\end{bmatrix}$  B10 $\begin{bmatrix}0\\-3\\-9\end{bmatrix}$  B11 $\begin{bmatrix}1\\4\\-1\end{bmatrix}$  B12 $\begin{bmatrix}4\\4\\2\end{bmatrix}$
B13 $\begin{bmatrix}3\\6\\15\end{bmatrix}$  B14 $\begin{bmatrix}0\\0\\0\end{bmatrix}$  B15 $\begin{bmatrix}0\\0\\0\end{bmatrix}$  B16 $\begin{bmatrix}\sqrt3+\sqrt2\\0\\-1\end{bmatrix}$
B17 (a) $\begin{bmatrix}2\\16\\11\end{bmatrix}$ (b) $\begin{bmatrix}10\\-22\\-13\end{bmatrix}$ (c) $\begin{bmatrix}2\\10\\7\end{bmatrix}$ (d) $\begin{bmatrix}-4\\1\\0\end{bmatrix}$
B18 (a) $\begin{bmatrix}11\\25\\9\end{bmatrix}$ (b) $\begin{bmatrix}1\\3/2\\11/4\end{bmatrix}$ (c) $\vec u=\begin{bmatrix}-1\\-3\\1\end{bmatrix}$ (d) $\vec u=\begin{bmatrix}5/3\\11/3\\5/3\end{bmatrix}$
B19 $\vec{PQ}=\begin{bmatrix}3\\1\\-1\end{bmatrix}$, $\vec{PR}=\begin{bmatrix}-4\\0\\-3\end{bmatrix}$, $\vec{PS}=\begin{bmatrix}5\\-6\\2\end{bmatrix}$, $\vec{QR}=\begin{bmatrix}-7\\-1\\-2\end{bmatrix}$, $\vec{SR}=\begin{bmatrix}-9\\6\\-5\end{bmatrix}$
B20 $\vec{PQ}=\begin{bmatrix}6\\2\\2\end{bmatrix}$, $\vec{PR}=\begin{bmatrix}0\\-4\\1\end{bmatrix}$, $\vec{PS}=\begin{bmatrix}5\\-2\\0\end{bmatrix}$, $\vec{QR}=\begin{bmatrix}-6\\-6\\-1\end{bmatrix}$, $\vec{SR}=\begin{bmatrix}-5\\-2\\1\end{bmatrix}$
 
B21 $\vec x=\begin{bmatrix}2\\-1\end{bmatrix}+t\begin{bmatrix}2\\2\end{bmatrix}$, $t\in\mathbb R$  B22 $\vec x=t\begin{bmatrix}-3\\2\\1\end{bmatrix}$, $t\in\mathbb R$
 
 
B23 $\vec x=\begin{bmatrix}3\\1\end{bmatrix}+t\begin{bmatrix}1\\2\end{bmatrix}$, $t\in\mathbb R$  B24 $\vec x=\begin{bmatrix}1\\-1\\2\end{bmatrix}+t\begin{bmatrix}1\\1\\0\end{bmatrix}$, $t\in\mathbb R$
 
 
 
 
1
1
−2
2
 
 
 
 
"x = 1 + t 0, t ∈ R
B26 "x =  3 + t 3, t ∈ R
 
 
 
 
1
1
1
1
B27 $\vec x=\begin{bmatrix}2\\4\end{bmatrix}+t\begin{bmatrix}-1\\-2\end{bmatrix}$, $t\in\mathbb R$  B28 $\vec x=\begin{bmatrix}-2\\5\end{bmatrix}+t\begin{bmatrix}1\\-6\end{bmatrix}$, $t\in\mathbb R$
 
 
 
1
0
−1
 
 
 
"x = t 3 , t ∈ R
B30 "x = 1 + t  1 , t ∈ R
 
 
 
2
4
−2
 
 
 


−2
 0
 1
−1/2
 
 
 


"x =  6 + t −1 , t ∈ R
B32 "x =  2 + t −5/3 , t ∈ R
 
 
 


1
0
1/2
−1/2






B33 $x_1=2+t$, $x_2=5-2t$, $t\in\mathbb R$; $x_2=5-2(x_1-2)$.  B34 $x_1=3+3t$, $x_2=-1+2t$, $t\in\mathbb R$; $x_2=-1+\frac23(x_1-3)$.






B35 $x_1=t$, $x_2=3-8t$, $t\in\mathbb R$; $x_2=3-8x_1$.  B36 $x_1=-3+7t$, $x_2=1$, $t\in\mathbb R$; $x_2=1$.






B37 $x_1=2-2t$, $x_2=-3t$, $t\in\mathbb R$; $x_2=\frac32(x_1-2)$.  B38 $x_1=5+t$, $x_2=-2+5t$, $t\in\mathbb R$; $x_2=-2+5(x_1-5)$.
B39 collinear
B40 not collinear
B41 collinear
C Conceptual Problems
C1 (a) We need to find $t_1$ and $t_2$ such that
$\begin{bmatrix}3\\-2\end{bmatrix}=t_1\begin{bmatrix}1\\1\end{bmatrix}+t_2\begin{bmatrix}1\\-1\end{bmatrix}=\begin{bmatrix}t_1+t_2\\t_1-t_2\end{bmatrix}$
That is, we need to solve the two equations in two unknowns $t_1+t_2=3$ and $t_1-t_2=-2$. Using substitution and/or elimination we find that $t_1=\frac12$ and $t_2=\frac52$.
(b) We use the same approach as in part (a). We need to find $t_1$ and $t_2$ such that
$\begin{bmatrix}x_1\\x_2\end{bmatrix}=t_1\begin{bmatrix}1\\1\end{bmatrix}+t_2\begin{bmatrix}1\\-1\end{bmatrix}=\begin{bmatrix}t_1+t_2\\t_1-t_2\end{bmatrix}$
Solving $t_1+t_2=x_1$ and $t_1-t_2=x_2$ by substitution and/or elimination gives $t_1=\frac12(x_1+x_2)$ and $t_2=\frac12(x_1-x_2)$.
(c) We have $x_1=\sqrt2$ and $x_2=\pi$, so we get $t_1=\frac12(\sqrt2+\pi)$ and $t_2=\frac12(\sqrt2-\pi)$.
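The $2\times2$ system in C1(a) can be solved by putting the two given vectors into the columns of a matrix. A minimal sketch (variable names are ours):

```python
import numpy as np

# Columns are the two vectors [1,1] and [1,-1] from C1
A = np.array([[1.0, 1.0],
              [1.0, -1.0]])

# Part (a): weights t1, t2 with t1*[1,1] + t2*[1,-1] = [3,-2]
t = np.linalg.solve(A, np.array([3.0, -2.0]))
print(t)
# Part (b)'s general formula: t1 = (x1+x2)/2, t2 = (x1-x2)/2
```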
C2 (a) $\vec{PQ}+\vec{QR}+\vec{RP}$ can be described informally as "start at P and move to Q, then move from Q to R, then from R to P; the net result is a zero change in position."
(b) We have $\vec{PQ}=\vec q-\vec p$, $\vec{QR}=\vec r-\vec q$, and $\vec{RP}=\vec p-\vec r$. Thus,
$\vec{PQ}+\vec{QR}+\vec{RP}=\vec q-\vec p+\vec r-\vec q+\vec p-\vec r=\vec 0$
 
C3 Let $\vec x=\begin{bmatrix}x_1\\x_2\\x_3\end{bmatrix}$. Then
$s(t\vec x)=s\begin{bmatrix}tx_1\\tx_2\\tx_3\end{bmatrix}=\begin{bmatrix}s(tx_1)\\s(tx_2)\\s(tx_3)\end{bmatrix}=\begin{bmatrix}(st)x_1\\(st)x_2\\(st)x_3\end{bmatrix}=(st)\begin{bmatrix}x_1\\x_2\\x_3\end{bmatrix}=(st)\vec x$
 
 
C4 Let $\vec x=\begin{bmatrix}x_1\\x_2\\x_3\end{bmatrix}$ and $\vec y=\begin{bmatrix}y_1\\y_2\\y_3\end{bmatrix}$. Then,
$s(\vec x+\vec y)=s\begin{bmatrix}x_1+y_1\\x_2+y_2\\x_3+y_3\end{bmatrix}=\begin{bmatrix}s(x_1+y_1)\\s(x_2+y_2)\\s(x_3+y_3)\end{bmatrix}=\begin{bmatrix}sx_1+sy_1\\sx_2+sy_2\\sx_3+sy_3\end{bmatrix}=s\begin{bmatrix}x_1\\x_2\\x_3\end{bmatrix}+s\begin{bmatrix}y_1\\y_2\\y_3\end{bmatrix}=s\vec x+s\vec y$
C5 Assume that $\vec x=\vec p+t\vec d$, $t\in\mathbb R$, is a line in $\mathbb R^2$ passing through the origin. Then, there exists a real number $t_1$ such that $\vec 0=\vec p+t_1\vec d$. Hence, $\vec p=-t_1\vec d$ and so $\vec p$ is a scalar multiple of $\vec d$. On the other hand, assume that $\vec p$ is a scalar multiple of $\vec d$. Then, there exists a real number $t_1$ such that $\vec p=t_1\vec d$. Hence, if we take $t=-t_1$, we get that the line with vector equation $\vec x=\vec p+t\vec d$ passes through the point $\vec p+(-t_1)\vec d=t_1\vec d-t_1\vec d=\vec 0$, as required.
C6 If the plane passes through the origin, then there exist $s,t\in\mathbb R$ such that
$\vec 0=\vec p+s\vec u+t\vec v$
Hence,
$\vec p=-s\vec u-t\vec v$
and so $\vec p$ is a linear combination of $\vec u$ and $\vec v$.
On the other hand, if $\vec p=a\vec u+b\vec v$, then taking $s=-a$ and $t=-b$ gives
$\vec x=\vec p+s\vec u+t\vec v=\vec p-a\vec u-b\vec v=\vec 0$
and hence the plane passes through the origin.
C7 A vector equation for the line segment from O to R is $\vec x=s\,\vec{OR}$, $0\le s\le1$. Similarly, a vector equation for the line segment from P to Q is $\vec x=\vec p+t\,\vec{PQ}$, $0\le t\le1$. The two lines intersect when
$s\,\vec{OR}=\vec p+t\,\vec{PQ}$
Since O, P, Q, R form a parallelogram, we know that $\vec r=\vec p+\vec q$. Hence, we get
$s(\vec r-\vec 0)=\vec p+t(\vec q-\vec p)$
$s(\vec p+\vec q)=\vec p+t\vec q-t\vec p$
$(s+t-1)\vec p=(-s+t)\vec q$
$\vec p$ and $\vec q$ cannot be scalar multiples of each other, as otherwise we would not have a parallelogram. Thus, for this equation to hold, we must have $s+t-1=0$ and $-s+t=0$. Solving, we find that $s=t=\frac12$ as required.
C8 The line segment from A to B is $\vec x=\begin{bmatrix}a_1\\a_2\end{bmatrix}+t\begin{bmatrix}b_1-a_1\\b_2-a_2\end{bmatrix}$, $0\le t\le1$. Thus, the point 1/3 of the way from A to B is
$\vec x=\begin{bmatrix}a_1\\a_2\end{bmatrix}+\frac13\begin{bmatrix}b_1-a_1\\b_2-a_2\end{bmatrix}=\begin{bmatrix}\frac23a_1+\frac13b_1\\\frac23a_2+\frac13b_2\end{bmatrix}$
Hence, the coordinates are $\left(\frac23a_1+\frac13b_1,\ \frac23a_2+\frac13b_2\right)$.



C9 (a) Parametric equations for the plane are $x_1=2+s+t$, $x_2=1+2s+t$, $x_3=3s+2t$, $s,t\in\mathbb R$.
(b) Subtracting the second equation from the first equation gives x1 − x2 = 1 − s, so s = 1 − x1 + x2 .
Then, the second equation gives
t = x2 − 1 − 2s = x2 − 1 − 2(1 − x1 + x2 ) = −3 + 2x1 − x2
The third equation now gives
x3 = 3(1 − x1 + x2 ) + 2(−3 + 2x1 − x2 ) = −3 + x1 + x2
Hence, a scalar equation for the plane is x1 + x2 − x3 = 3.
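The elimination in C9(b) can be spot-checked numerically: every point produced by the parametric form must satisfy the derived scalar equation. A minimal sketch:

```python
import numpy as np

p = np.array([2.0, 1.0, 0.0])   # point on the plane (C9)
u = np.array([1.0, 2.0, 3.0])   # direction vector for parameter s
v = np.array([1.0, 1.0, 2.0])   # direction vector for parameter t

rng = np.random.default_rng(0)
for s, t in rng.random((5, 2)):
    x = p + s * u + t * v
    # every such point satisfies x1 + x2 - x3 = 3
    assert np.isclose(x[0] + x[1] - x[2], 3.0)
```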
C10 (a) Setting $x_2=t$, we solve $ax_1+bt=c$ for $x_1$ to get $x_1=\frac ca-\frac bat$. Thus, parametric equations for the line are $x_1=\frac ca-\frac bat$, $x_2=t$, $t\in\mathbb R$.
(b) We have
$\begin{bmatrix}x_1\\x_2\end{bmatrix}=\begin{bmatrix}\frac ca-\frac bat\\t\end{bmatrix}=\begin{bmatrix}c/a\\0\end{bmatrix}+t\begin{bmatrix}-b/a\\1\end{bmatrix}$, $t\in\mathbb R$
(c) From our work in (b), a vector equation for the line is
$\vec x=\begin{bmatrix}5/2\\0\end{bmatrix}+t\begin{bmatrix}-3/2\\1\end{bmatrix}$, $t\in\mathbb R$
(d) Parametric equations would be $x_1=3$, $x_2=t$, $t\in\mathbb R$. Thus, we have
$\begin{bmatrix}x_1\\x_2\end{bmatrix}=\begin{bmatrix}3\\t\end{bmatrix}=\begin{bmatrix}3\\0\end{bmatrix}+t\begin{bmatrix}0\\1\end{bmatrix}$, $t\in\mathbb R$
C11 If $P(p_1,p_2)$ is on the line, then there exists $t_1\in\mathbb R$ such that
$\begin{bmatrix}p_1\\p_2\end{bmatrix}=t_1\begin{bmatrix}d_1\\d_2\end{bmatrix}=\begin{bmatrix}t_1d_1\\t_1d_2\end{bmatrix}$
Thus, $p_1=t_1d_1$ and $p_2=t_1d_2$. If $d_1=0$, then $p_1=0$ and hence we have $p_1d_2=0=p_2d_1$. If $d_1\ne0$, then $t_1=\frac{p_1}{d_1}$ and hence
$p_2=d_2\frac{p_1}{d_1}\ \Rightarrow\ p_2d_1=p_1d_2$
On the other hand, assume $p_1d_2=p_2d_1$. If $d_1=0$, then $p_1=0$ (if $d_2=0$ as well, then L would not be a line). Hence, taking $t_2=\frac{p_2}{d_2}$ gives
$t_2\begin{bmatrix}d_1\\d_2\end{bmatrix}=\begin{bmatrix}0\\t_2d_2\end{bmatrix}=\begin{bmatrix}p_1\\p_2\end{bmatrix}$
If $d_1\ne0$, then we take $t_3=\frac{p_1}{d_1}$ to get
$t_3\begin{bmatrix}d_1\\d_2\end{bmatrix}=\begin{bmatrix}t_3d_1\\t_3d_2\end{bmatrix}=\begin{bmatrix}p_1\\\frac{p_1}{d_1}d_2\end{bmatrix}=\begin{bmatrix}p_1\\p_2\end{bmatrix}$
C12 Let the two lines be $\vec x=\vec a+s\vec b$, $s\in\mathbb R$, and $\vec x=\vec c+t\vec d$, $t\in\mathbb R$. Since the lines are not parallel, we have $\vec d\ne k\vec b$ for any $k$. To determine whether there is a point of intersection, we try to solve $\vec a+s\vec b=\vec c+t\vec d$ for $s$ and $t$. The components of this vector equation are
$b_1s-d_1t=c_1-a_1$
$b_2s-d_2t=c_2-a_2$
Multiply the first equation by $d_2$ and the second equation by $d_1$ and subtract the second from the first to get
$(b_1d_2-b_2d_1)s=d_2(c_1-a_1)-d_1(c_2-a_2)$
Now $b_1d_2-b_2d_1\ne0$ since $\vec d\ne k\vec b$ for any $k$. Thus, we can solve this equation for $s$ and then solve for $t$. Thus, there is a point of intersection.
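The elimination argument of C12 is exactly what a linear solver does. A sketch of the intersection computation for two non-parallel lines in $\mathbb R^2$ (the function name is ours):

```python
import numpy as np

def intersect(a, b, c, d):
    """Intersection of x = a + s*b and x = c + t*d in R^2,
    assuming the lines are not parallel (b1*d2 - b2*d1 != 0)."""
    a, b, c, d = (np.asarray(v, dtype=float) for v in (a, b, c, d))
    # components of a + s*b = c + t*d, written as a 2x2 system in (s, t)
    M = np.column_stack((b, -d))
    s, t = np.linalg.solve(M, c - a)
    return a + s * b

print(intersect((0, 0), (1, 1), (0, 2), (1, -1)))
```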
Section 1.2
A Practice Problems
A1 Consider $\begin{bmatrix}3\\1\end{bmatrix}=c_1\begin{bmatrix}1\\3\end{bmatrix}+c_2\begin{bmatrix}1\\-1\end{bmatrix}=\begin{bmatrix}c_1+c_2\\3c_1-c_2\end{bmatrix}$. This gives
$3=c_1+c_2$
$1=3c_1-c_2$
Solving we find that $c_1=1$ and $c_2=2$. Thus, $\vec x\in\operatorname{Span}\mathcal B$.
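Span-membership questions like A1 reduce to solving a linear system whose coefficient matrix has the spanning vectors as columns. A minimal sketch:

```python
import numpy as np

# A1: is [3,1] in Span{[1,3], [1,-1]}?  Solve B @ c = x for the weights c.
B = np.array([[1.0, 1.0],
              [3.0, -1.0]])     # spanning vectors as columns
x = np.array([3.0, 1.0])

c = np.linalg.solve(B, x)
print(c)
```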
A2 Consider $\begin{bmatrix}8\\-4\end{bmatrix}=c_1\begin{bmatrix}-2\\1\end{bmatrix}=\begin{bmatrix}-2c_1\\c_1\end{bmatrix}$. Taking $c_1=-4$ satisfies the equation. Thus, $\vec x\in\operatorname{Span}\mathcal B$.
A3 Consider $\begin{bmatrix}6\\3\end{bmatrix}=c_1\begin{bmatrix}-2\\1\end{bmatrix}=\begin{bmatrix}-2c_1\\c_1\end{bmatrix}$. For the first component, we require that $c_1=-3$, but this does not satisfy the second component. Thus, $\vec x\notin\operatorname{Span}\mathcal B$.
A4 Consider $\begin{bmatrix}2\\5\end{bmatrix}=c_1\begin{bmatrix}2\\-1\end{bmatrix}+c_2\begin{bmatrix}1\\2\end{bmatrix}=\begin{bmatrix}2c_1+c_2\\-c_1+2c_2\end{bmatrix}$. This gives
$2=2c_1+c_2$
$5=-c_1+2c_2$
Solving we find that $c_1=-1/5$ and $c_2=12/5$. Thus, $\vec x\in\operatorname{Span}\mathcal B$.
 
 
 
  

 1
1
0
1 c1 + c3 
 
 
 
  

A5 Consider  2 = c1 1 + c2 1 + c3 0 = c1 + c2 . This gives
 
 
 
  

−1
0
1
1
c2 + c3
1 = c1 + c3
2 = c1 + c2
−1 = c2 + c3
Solving we find that c1 = 2, c2 = 0, and c3 = −1. Thus, "x ∈ Span B.
 
 
 
  

0
1
 0
1  c1 + c3

 
 
 
  

A6 Consider 1 = c1 2 + c2 −1 + c3 0 =  2c1 − c2 . This gives
 
 
 
  

3
2
1
4
2c1 + c2 + 4c3
0 = c1 + c3
1 = 2c1 − c2
3 = 2c1 + c2 + 4c3
Adding the second and third equations gives 4 = 4c1 + 4c3 . Thus, c1 + c3 = 1 which contradicts the
first equation. Hence, "x " Span B.
A7 Consider
$\begin{bmatrix}0\\0\end{bmatrix}=c_1\begin{bmatrix}1\\2\end{bmatrix}+c_2\begin{bmatrix}1\\3\end{bmatrix}+c_3\begin{bmatrix}1\\4\end{bmatrix}=\begin{bmatrix}c_1+c_2+c_3\\2c_1+3c_2+4c_3\end{bmatrix}$
This gives
$c_1+c_2+c_3=0$
$2c_1+3c_2+4c_3=0$
Subtracting two times the first equation from the second equation gives $c_2+2c_3=0$. Thus, if we take $c_3=1$, we get $c_2=-2$ and hence $c_1=1$. Therefore, by definition, the set is linearly dependent.
A8 Consider
$\begin{bmatrix}0\\0\end{bmatrix}=c_1\begin{bmatrix}3\\1\end{bmatrix}+c_2\begin{bmatrix}-1\\3\end{bmatrix}=\begin{bmatrix}3c_1-c_2\\c_1+3c_2\end{bmatrix}$
This gives
$3c_1-c_2=0$
$c_1+3c_2=0$
Solving we find that the only solution is $c_1=c_2=0$, so the set is linearly independent.
A9 Consider
$\begin{bmatrix}0\\0\end{bmatrix}=c_1\begin{bmatrix}1\\1\end{bmatrix}+c_2\begin{bmatrix}1\\0\end{bmatrix}=\begin{bmatrix}c_1+c_2\\c_1\end{bmatrix}$
This gives
$c_1+c_2=0$
$c_1=0$
Solving we find that the only solution is $c_1=c_2=0$, so the set is linearly independent.
A10 Observe that $2\begin{bmatrix}2\\3\end{bmatrix}+\begin{bmatrix}-4\\-6\end{bmatrix}=\begin{bmatrix}0\\0\end{bmatrix}$, so the set is linearly dependent.
A11 Consider
$\begin{bmatrix}0\\0\\0\end{bmatrix}=c_1\begin{bmatrix}1\\2\\1\end{bmatrix}=\begin{bmatrix}c_1\\2c_1\\c_1\end{bmatrix}$
This gives $c_1=0$, so the set is linearly independent.
A12 Observe that
$0\begin{bmatrix}1\\-3\\-2\end{bmatrix}+0\begin{bmatrix}4\\6\\1\end{bmatrix}+1\begin{bmatrix}0\\0\\0\end{bmatrix}=\begin{bmatrix}0\\0\\0\end{bmatrix}$
so the set is linearly dependent.
A13 Observe that
$0\begin{bmatrix}1\\1\\0\end{bmatrix}+2\begin{bmatrix}1\\2\\-1\end{bmatrix}+1\begin{bmatrix}-2\\-4\\2\end{bmatrix}=\begin{bmatrix}0\\0\\0\end{bmatrix}$
so the set is linearly dependent.
A14 Consider
$\begin{bmatrix}0\\0\\0\end{bmatrix}=c_1\begin{bmatrix}1\\-2\\1\end{bmatrix}+c_2\begin{bmatrix}2\\3\\4\end{bmatrix}+c_3\begin{bmatrix}0\\-1\\-2\end{bmatrix}=\begin{bmatrix}c_1+2c_2\\-2c_1+3c_2-c_3\\c_1+4c_2-2c_3\end{bmatrix}$
This gives
$c_1+2c_2=0$
$-2c_1+3c_2-c_3=0$
$c_1+4c_2-2c_3=0$
Subtracting the first equation from the third equation gives $2c_2-2c_3=0$. Hence, $c_2=c_3$. The second equation then gives $0=-2c_1+3c_2-c_2=-2c_1+2c_2$. Thus, $c_1=c_2$. Therefore, the first equation gives $c_1=c_2=0$ and hence $c_3=c_2=0$. So, the set is linearly independent.
A15 Since the spanning set cannot be reduced, it is a line with vector equation $\vec x=s\begin{bmatrix}1\\0\end{bmatrix}$, $s\in\mathbb R$.
A16 Since $\begin{bmatrix}2\\-2\end{bmatrix}=-2\begin{bmatrix}-1\\1\end{bmatrix}$, we have $\operatorname{Span}\left\{\begin{bmatrix}-1\\1\end{bmatrix},\begin{bmatrix}2\\-2\end{bmatrix}\right\}=\operatorname{Span}\left\{\begin{bmatrix}-1\\1\end{bmatrix}\right\}$. Since the spanning set cannot be reduced, it is a line with vector equation $\vec x=s\begin{bmatrix}-1\\1\end{bmatrix}$, $s\in\mathbb R$.
 
 
   
 


−2
 1
 1 −2






 1



 
 
−3 ,  6 = Span 
−3
Since  6 = −2 −3, we have Span 
. Since the spanning set cannot be














 
 
 1 −2

 1

−2
1
 
 1
 
reduced, it is a line with vector equation "x = s −3, s ∈ R.
 
1
 
 
 1
−2
 
 
This is just two points in R3 . A vector equation would be "x = −3 or "x =  6.
 
 
1
−2
A19 Since neither vector is a scalar multiple of the other, the set cannot be reduced. Thus, it is a plane with vector equation $\vec x=s\begin{bmatrix}1\\0\\-2\end{bmatrix}+t\begin{bmatrix}2\\1\\-1\end{bmatrix}$, $s,t\in\mathbb R$.
A20 It is just the origin, with vector equation $\vec x=\vec 0$.
A21 $\mathcal B$ does not form a basis for $\mathbb R^2$ since it does not span $\mathbb R^2$. For example, the vector $\begin{bmatrix}1\\0\end{bmatrix}$ is not in $\operatorname{Span}\mathcal B$.
A22 We will prove $\mathcal B$ is a basis. Consider
$\begin{bmatrix}x_1\\x_2\end{bmatrix}=c_1\begin{bmatrix}2\\3\end{bmatrix}+c_2\begin{bmatrix}1\\0\end{bmatrix}=\begin{bmatrix}2c_1+c_2\\3c_1\end{bmatrix}$
This gives
$2c_1+c_2=x_1$
$3c_1=x_2$
Solving, we get $c_1=\frac13x_2$ and $c_2=x_1-\frac23x_2$. Hence, $\mathcal B$ spans $\mathbb R^2$. Moreover, taking $x_1=x_2=0$ gives the unique solution $c_1=c_2=0$, so $\mathcal B$ is also linearly independent, and hence is a basis for $\mathbb R^2$.
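Coordinates with respect to a basis, as in A22, are the solution of $B\vec c=\vec x$ with the basis vectors as the columns of $B$. A sketch (the sample vector is our choice):

```python
import numpy as np

# A22: B = {[2,3], [1,0]} is a basis; coordinates of x solve B @ c = x.
B = np.array([[2.0, 1.0],
              [3.0, 0.0]])
x = np.array([4.0, 6.0])        # sample vector (our choice)

c = np.linalg.solve(B, x)
# matches the formulas c1 = x2/3, c2 = x1 - (2/3)x2 derived above
assert np.allclose(c, [x[1] / 3, x[0] - 2 * x[1] / 3])
```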
A23 Since $0\begin{bmatrix}2\\1\end{bmatrix}+1\begin{bmatrix}0\\0\end{bmatrix}=\begin{bmatrix}0\\0\end{bmatrix}$, $\mathcal B$ is linearly dependent and hence is not a basis.
A24 $\mathcal B$ does not form a basis for $\mathbb R^2$ since the vector $\begin{bmatrix}1\\0\end{bmatrix}$ is not in $\operatorname{Span}\mathcal B$.
A25 We will prove $\mathcal B$ is a basis. Consider
$\begin{bmatrix}x_1\\x_2\end{bmatrix}=c_1\begin{bmatrix}-1\\1\end{bmatrix}+c_2\begin{bmatrix}1\\3\end{bmatrix}=\begin{bmatrix}-c_1+c_2\\c_1+3c_2\end{bmatrix}$
This gives
$-c_1+c_2=x_1$
$c_1+3c_2=x_2$
Solving, we get $c_1=-\frac34x_1+\frac14x_2$ and $c_2=\frac14x_1+\frac14x_2$. Hence, $\mathcal B$ spans $\mathbb R^2$. Moreover, taking $x_1=x_2=0$ gives the unique solution $c_1=c_2=0$, so $\mathcal B$ is also linearly independent, and hence is a basis for $\mathbb R^2$.
A26 Since $1\begin{bmatrix}-1\\1\end{bmatrix}+1\begin{bmatrix}1\\3\end{bmatrix}-2\begin{bmatrix}0\\2\end{bmatrix}=\begin{bmatrix}0\\0\end{bmatrix}$, $\mathcal B$ is linearly dependent and hence is not a basis.
 
 
   
1
0
1 0
 
 
   
A27 Since 0 2 + 1 0 + 0 4 = 0, B is linearly dependent and hence is not a basis.
 
 
   
1
0
3
0
A28 Consider
$\begin{bmatrix}x_1\\x_2\\x_3\end{bmatrix}=c_1\begin{bmatrix}-1\\2\\-1\end{bmatrix}+c_2\begin{bmatrix}1\\1\\2\end{bmatrix}=\begin{bmatrix}-c_1+c_2\\2c_1+c_2\\-c_1+2c_2\end{bmatrix}$
This gives
$-c_1+c_2=x_1$
$2c_1+c_2=x_2$
$-c_1+2c_2=x_3$
Subtracting the first equation from the second equation gives $3c_1=x_2-x_1$. Subtracting 2 times the first equation from the third gives $c_1=x_3-2x_1$. Hence, for $\begin{bmatrix}x_1\\x_2\\x_3\end{bmatrix}$ to be in the span, we must have $\frac13(x_2-x_1)=x_3-2x_1$. Since the vector $\begin{bmatrix}1\\1\\1\end{bmatrix}$ does not satisfy this condition, it is not in $\operatorname{Span}\mathcal B$. Therefore, $\mathcal B$ does not span $\mathbb R^3$ and hence is not a basis for $\mathbb R^3$.
A29 We will prove it is a basis. Consider
$\begin{bmatrix}x_1\\x_2\\x_3\end{bmatrix}=c_1\begin{bmatrix}1\\0\\1\end{bmatrix}+c_2\begin{bmatrix}1\\0\\0\end{bmatrix}+c_3\begin{bmatrix}0\\1\\2\end{bmatrix}=\begin{bmatrix}c_1+c_2\\c_3\\c_1+2c_3\end{bmatrix}$
This gives
$c_1+c_2=x_1$
$c_3=x_2$
$c_1+2c_3=x_3$
Solving we get $c_3=x_2$, $c_1=-2x_2+x_3$, and $c_2=x_1+2x_2-x_3$. Hence, $\mathcal B$ spans $\mathbb R^3$. Moreover, taking $x_1=x_2=x_3=0$ gives the unique solution $c_1=c_2=c_3=0$, so $\mathcal B$ is also linearly independent, and hence is a basis for $\mathbb R^3$.
A30 We will prove it is a basis. Consider
$\begin{bmatrix}x_1\\x_2\\x_3\end{bmatrix}=c_1\begin{bmatrix}1\\0\\1\end{bmatrix}+c_2\begin{bmatrix}1\\1\\1\end{bmatrix}+c_3\begin{bmatrix}1\\1\\0\end{bmatrix}=\begin{bmatrix}c_1+c_2+c_3\\c_2+c_3\\c_1+c_2\end{bmatrix}$
This gives
$c_1+c_2+c_3=x_1$
$c_2+c_3=x_2$
$c_1+c_2=x_3$
Solving we get $c_3=x_1-x_3$, $c_2=-x_1+x_2+x_3$, and $c_1=x_1-x_2$. Hence, $\mathcal B$ spans $\mathbb R^3$. Moreover, taking $x_1=x_2=x_3=0$ gives the unique solution $c_1=c_2=c_3=0$, so $\mathcal B$ is also linearly independent, and hence is a basis for $\mathbb R^3$.
A31 (a) Consider
$\begin{bmatrix}x_1\\x_2\end{bmatrix}=c_1\begin{bmatrix}1\\0\end{bmatrix}+c_2\begin{bmatrix}1\\1\end{bmatrix}=\begin{bmatrix}c_1+c_2\\c_2\end{bmatrix}$
This gives
$c_1+c_2=x_1$
$c_2=x_2$
Solving, we get $c_2=x_2$ and $c_1=x_1-x_2$. Hence, $\mathcal B$ spans $\mathbb R^2$. Moreover, taking $x_1=x_2=0$ gives the unique solution $c_1=c_2=0$, so $\mathcal B$ is also linearly independent, and hence is a basis for $\mathbb R^2$.
(b) Taking $x_1=1$ and $x_2=0$ we find that the coordinates of $\vec e_1$ with respect to $\mathcal B$ are $c_1=1$ and $c_2=0$.
Taking $x_1=0$ and $x_2=1$ we find that the coordinates of $\vec e_2$ with respect to $\mathcal B$ are $c_1=-1$ and $c_2=1$.
Taking $x_1=1$ and $x_2=3$ we find that the coordinates of $\vec x$ with respect to $\mathcal B$ are $c_1=-2$ and $c_2=3$.
A32 (a) Consider
$\begin{bmatrix}x_1\\x_2\end{bmatrix}=c_1\begin{bmatrix}1\\1\end{bmatrix}+c_2\begin{bmatrix}1\\-1\end{bmatrix}=\begin{bmatrix}c_1+c_2\\c_1-c_2\end{bmatrix}$
This gives
$c_1+c_2=x_1$
$c_1-c_2=x_2$
Solving, we get $c_1=\frac12x_1+\frac12x_2$ and $c_2=\frac12x_1-\frac12x_2$. Hence, $\mathcal B$ spans $\mathbb R^2$. Moreover, taking $x_1=x_2=0$ gives the unique solution $c_1=c_2=0$, so $\mathcal B$ is also linearly independent, and hence is a basis for $\mathbb R^2$.
(b) Taking $x_1=1$ and $x_2=0$ we find that the coordinates of $\vec e_1$ with respect to $\mathcal B$ are $c_1=1/2$ and $c_2=1/2$.
Taking $x_1=0$ and $x_2=1$ we find that the coordinates of $\vec e_2$ with respect to $\mathcal B$ are $c_1=1/2$ and $c_2=-1/2$.
Taking $x_1=1$ and $x_2=3$ we find that the coordinates of $\vec x$ with respect to $\mathcal B$ are $c_1=2$ and $c_2=-1$.
A33 (a) Consider
$\begin{bmatrix}x_1\\x_2\end{bmatrix}=c_1\begin{bmatrix}1\\2\end{bmatrix}+c_2\begin{bmatrix}-1\\-1\end{bmatrix}=\begin{bmatrix}c_1-c_2\\2c_1-c_2\end{bmatrix}$
This gives
$c_1-c_2=x_1$
$2c_1-c_2=x_2$
Solving, we get $c_1=-x_1+x_2$ and $c_2=-2x_1+x_2$. Hence, $\mathcal B$ spans $\mathbb R^2$. Moreover, taking $x_1=x_2=0$ gives the unique solution $c_1=c_2=0$, so $\mathcal B$ is also linearly independent, and hence is a basis for $\mathbb R^2$.
(b) Taking $x_1=1$ and $x_2=0$ we find that the coordinates of $\vec e_1$ with respect to $\mathcal B$ are $c_1=-1$ and $c_2=-2$.
Taking $x_1=0$ and $x_2=1$ we find that the coordinates of $\vec e_2$ with respect to $\mathcal B$ are $c_1=1$ and $c_2=1$.
Taking $x_1=1$ and $x_2=3$ we find that the coordinates of $\vec x$ with respect to $\mathcal B$ are $c_1=2$ and $c_2=1$.
A34 Assume that $\{\vec v_1,\vec v_2\}$ is linearly independent. For a contradiction, assume without loss of generality that $\vec v_1$ is a scalar multiple of $\vec v_2$. Then $\vec v_1=t\vec v_2$ and hence $\vec v_1-t\vec v_2=\vec 0$. This contradicts the fact that $\{\vec v_1,\vec v_2\}$ is linearly independent since the coefficient of $\vec v_1$ is non-zero.
On the other hand, assume that $\{\vec v_1,\vec v_2\}$ is linearly dependent. Then there exist $c_1,c_2\in\mathbb R$, not both zero, such that $c_1\vec v_1+c_2\vec v_2=\vec 0$. Without loss of generality assume that $c_1\ne0$. Then $\vec v_1=-\frac{c_2}{c_1}\vec v_2$ and hence $\vec v_1$ is a scalar multiple of $\vec v_2$.
A35 To prove this, we will prove that each set is a subset of the other.
Let $\vec x\in\operatorname{Span}\{\vec v_1,\vec v_2\}$. Then there exist $c_1,c_2\in\mathbb R$ such that $\vec x=c_1\vec v_1+c_2\vec v_2$. Since $t\ne0$ we get
$\vec x=c_1\vec v_1+\frac{c_2}{t}(t\vec v_2)$
so $\vec x\in\operatorname{Span}\{\vec v_1,t\vec v_2\}$. Thus, $\operatorname{Span}\{\vec v_1,\vec v_2\}\subseteq\operatorname{Span}\{\vec v_1,t\vec v_2\}$.
If $\vec y\in\operatorname{Span}\{\vec v_1,t\vec v_2\}$, then there exist $d_1,d_2\in\mathbb R$ such that
$\vec y=d_1\vec v_1+d_2(t\vec v_2)=d_1\vec v_1+(d_2t)\vec v_2\in\operatorname{Span}\{\vec v_1,\vec v_2\}$
Hence, we also have $\operatorname{Span}\{\vec v_1,t\vec v_2\}\subseteq\operatorname{Span}\{\vec v_1,\vec v_2\}$. Therefore, $\operatorname{Span}\{\vec v_1,\vec v_2\}=\operatorname{Span}\{\vec v_1,t\vec v_2\}$.
B Homework Problems
B1 $\begin{bmatrix}3\\2\end{bmatrix}=\frac52\begin{bmatrix}1\\1\end{bmatrix}+\frac12\begin{bmatrix}1\\-1\end{bmatrix}$
B2 $\vec x\notin\operatorname{Span}\mathcal B$
B3 $\begin{bmatrix}2\\-2\end{bmatrix}=-\frac23\begin{bmatrix}-3\\3\end{bmatrix}$
B4 $\begin{bmatrix}1\\0\end{bmatrix}=\frac25\begin{bmatrix}2\\-1\end{bmatrix}+\frac15\begin{bmatrix}1\\2\end{bmatrix}$
B5 $\begin{bmatrix}3\\1\\4\end{bmatrix}=0\begin{bmatrix}1\\1\\0\end{bmatrix}+1\begin{bmatrix}0\\1\\1\end{bmatrix}+3\begin{bmatrix}1\\0\\1\end{bmatrix}$
B6 $\vec x\notin\operatorname{Span}\mathcal B$
B7 $\begin{bmatrix}2\\1\end{bmatrix}=\frac72\begin{bmatrix}1\\1\end{bmatrix}-\frac12\begin{bmatrix}3\\5\end{bmatrix}$
B8 $\begin{bmatrix}0\\0\end{bmatrix}=0\begin{bmatrix}8\\3\end{bmatrix}$
B9 Linearly independent
B10 Since $-2\begin{bmatrix}-2\\5\end{bmatrix}=\begin{bmatrix}4\\-10\end{bmatrix}$, the set is linearly dependent.
B11 Linearly independent
B12 Since $0\begin{bmatrix}1\\0\\2\end{bmatrix}-3\begin{bmatrix}0\\0\\0\end{bmatrix}=\begin{bmatrix}0\\0\\0\end{bmatrix}$, the set is linearly dependent.
B13 $2\begin{bmatrix}2\\-1\\1\end{bmatrix}+0\begin{bmatrix}1\\5\\3\end{bmatrix}=\begin{bmatrix}4\\-2\\2\end{bmatrix}$
B14 $\frac12\begin{bmatrix}4\\2\\1\end{bmatrix}+\frac12\begin{bmatrix}2\\6\\3\end{bmatrix}=\begin{bmatrix}3\\4\\2\end{bmatrix}$
B15 A line. $\vec x=s\begin{bmatrix}1\\0\end{bmatrix}$, $s\in\mathbb R$
B16 All of $\mathbb R^2$. $\vec x=s\begin{bmatrix}1\\1\end{bmatrix}+t\begin{bmatrix}2\\3\end{bmatrix}$, $s,t\in\mathbb R$
B17 A line. $\vec x=s\begin{bmatrix}1\\2\end{bmatrix}$, $s\in\mathbb R$
B18 The origin. $\vec x=\vec 0$.
B19 A line. $\vec x=s\begin{bmatrix}3\\1\\1\end{bmatrix}$, $s\in\mathbb R$  B20 A line. $\vec x=s\begin{bmatrix}2\\1\\3\end{bmatrix}$, $s\in\mathbb R$
B21 A basis  B22 Not a basis
B23 A basis  B24 A basis
B25 Not a basis  B26 A basis
B27 Not a basis  B28 A basis
B29 (a) Show $\mathcal B$ is a linearly independent spanning set.
(b) The coordinates of $\vec e_1$ with respect to $\mathcal B$ are $c_1=1$, $c_2=1$.
The coordinates of $\vec e_2$ with respect to $\mathcal B$ are $c_1=0$, $c_2=1$.
The coordinates of $\vec x$ with respect to $\mathcal B$ are $c_1=2$, $c_2=3$.
B30 (a) Show $\mathcal B$ is a linearly independent spanning set.
(b) The coordinates of $\vec e_1$ with respect to $\mathcal B$ are $c_1=3/5$, $c_2=-1/5$.
The coordinates of $\vec e_2$ with respect to $\mathcal B$ are $c_1=-1/5$, $c_2=2/5$.
The coordinates of $\vec x$ with respect to $\mathcal B$ are $c_1=0$, $c_2=1$.
B31 (a) Show $\mathcal B$ is a linearly independent spanning set.
(b) The coordinates of $\vec e_1$ with respect to $\mathcal B$ are $c_1=1/2$, $c_2=0$.
The coordinates of $\vec e_2$ with respect to $\mathcal B$ are $c_1=-1/6$, $c_2=1/3$.
The coordinates of $\vec x$ with respect to $\mathcal B$ are $c_1=0$, $c_2=1$.
B32 (a) Show $\mathcal B$ is a linearly independent spanning set.
(b) The coordinates of $\vec e_1$ with respect to $\mathcal B$ are $c_1=1/5$, $c_2=2/5$.
The coordinates of $\vec e_2$ with respect to $\mathcal B$ are $c_1=-2/5$, $c_2=1/5$.
The coordinates of $\vec x$ with respect to $\mathcal B$ are $c_1=-1$, $c_2=1$.
B33 (a) Show $\mathcal B$ is a linearly independent spanning set.
(b) The coordinates of $\vec e_1$ with respect to $\mathcal B$ are $c_1=-5/13$, $c_2=-1/13$.
The coordinates of $\vec e_2$ with respect to $\mathcal B$ are $c_1=-3/13$, $c_2=2/13$.
The coordinates of $\vec x$ with respect to $\mathcal B$ are $c_1=-14/13$, $c_2=5/13$.
Section 1.3
A Practice Problems
A1 $\left\|\begin{bmatrix}2\\-5\end{bmatrix}\right\|=\sqrt{2^2+(-5)^2}=\sqrt{29}$
A2 $\left\|\begin{bmatrix}2/\sqrt{29}\\-5/\sqrt{29}\end{bmatrix}\right\|=\sqrt{(2/\sqrt{29})^2+(-5/\sqrt{29})^2}=\sqrt{4/29+25/29}=1$
A3 $\left\|\begin{bmatrix}1\\0\\-1\end{bmatrix}\right\|=\sqrt{1^2+0^2+(-1)^2}=\sqrt2$
A4 $\left\|\begin{bmatrix}2\\3\\-2\end{bmatrix}\right\|=\sqrt{2^2+3^2+(-2)^2}=\sqrt{17}$
A5 $\left\|\begin{bmatrix}1\\1/5\\-3\end{bmatrix}\right\|=\sqrt{1^2+(1/5)^2+(-3)^2}=\sqrt{251}/5$
A6 $\left\|\begin{bmatrix}1/\sqrt3\\1/\sqrt3\\-1/\sqrt3\end{bmatrix}\right\|=\sqrt{(1/\sqrt3)^2+(1/\sqrt3)^2+(-1/\sqrt3)^2}=1$
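The norm computations in A1–A6 correspond directly to `np.linalg.norm`, and A2/A6 illustrate that dividing a vector by its norm always produces a unit vector. A minimal sketch:

```python
import numpy as np

v = np.array([2.0, -5.0])                     # the vector of A1
assert np.isclose(np.linalg.norm(v), np.sqrt(29))

# A2: normalizing v gives a vector of length 1
u = v / np.linalg.norm(v)
assert np.isclose(np.linalg.norm(u), 1.0)
```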
A7 The distance between P and Q is $\|\vec{PQ}\|=\left\|\begin{bmatrix}-4\\1\end{bmatrix}-\begin{bmatrix}2\\3\end{bmatrix}\right\|=\left\|\begin{bmatrix}-6\\-2\end{bmatrix}\right\|=\sqrt{(-6)^2+(-2)^2}=2\sqrt{10}$.
A8 The distance between P and Q is $\|\vec{PQ}\|=\left\|\begin{bmatrix}-3\\1\\1\end{bmatrix}-\begin{bmatrix}1\\1\\-2\end{bmatrix}\right\|=\left\|\begin{bmatrix}-4\\0\\3\end{bmatrix}\right\|=\sqrt{(-4)^2+0^2+3^2}=5$.
A9 The distance between P and Q is $\|\vec{PQ}\|=\left\|\begin{bmatrix}-3\\5\\1\end{bmatrix}-\begin{bmatrix}4\\-6\\1\end{bmatrix}\right\|=\left\|\begin{bmatrix}-7\\11\\0\end{bmatrix}\right\|=\sqrt{(-7)^2+11^2+0^2}=\sqrt{170}$.
A10 The distance between P and Q is $\|\vec{PQ}\|=\left\|\begin{bmatrix}4\\6\\-2\end{bmatrix}-\begin{bmatrix}2\\1\\1\end{bmatrix}\right\|=\left\|\begin{bmatrix}2\\5\\-3\end{bmatrix}\right\|=\sqrt{2^2+5^2+(-3)^2}=\sqrt{38}$.
   
1  2
   
3 · −2 = 1(2) + 3(−2) + 2(2) = 0. Hence these vectors are orthogonal.
2
2
   
−3  2
   
 1 · −1 = (−3)(2) + 1(−1) + 7(1) = 0. Hence these vectors are orthogonal.
7
1
   
2 −1
   
1 ·  4 = 2(−1) + 1(4) + 1(2) = 4 ! 0. Therefore, these vectors are not orthogonal.
1
2
   
4 −1
   
1 ·  4 = 4(−1) + 1(4) + 0(3) = 0. Hence these vectors are orthogonal.
0
3
   
0  x1 
0 ·  x2  = 0(x1 ) + 0(x2 ) + 0(x3 ) = 0. Hence these vectors are orthogonal.
   
0 x3

 

 1/3  3/2
3 4
3 43 4
 2/3 ·  0  = 1 3 + 2 (0) + − 1 − 3 = 1. Therefore, these vectors are not orthogonal.
3
3
2

 
 3 2
−1/3 −3/2
A17 The vectors are orthogonal when 0 = (3, −1) · (2, k) = 3(2) + (−1)k = 6 − k.
Thus, the vectors are orthogonal only when k = 6.
A18 The vectors are orthogonal when 0 = (3, −1) · (k, k²) = 3(k) + (−1)(k²) = 3k − k² = k(3 − k).
Thus, the vectors are orthogonal only when k = 0 or k = 3.
A19 The vectors are orthogonal when 0 = (1, 2, 3) · (3, −k, k) = 1(3) + 2(−k) + 3(k) = 3 + k.
Thus, the vectors are orthogonal only when k = −3.
A20 The vectors are orthogonal when 0 = (1, 2, 3) · (k, k, −k) = 1(k) + 2(k) + 3(−k) = 0.
Therefore, the vectors are always orthogonal.
A21 The scalar equation of the plane is
0 = n · (x − p) = (2, 4, −1) · (x1 + 1, x2 − 2, x3 + 3)
= 2(x1 + 1) + 4(x2 − 2) + (−1)(x3 + 3)
= 2x1 + 2 + 4x2 − 8 − x3 − 3
which gives 2x1 + 4x2 − x3 = 9.
A22 The scalar equation of the plane is
0 = n · (x − p) = (3, 0, 5) · (x1 − 2, x2 − 5, x3 − 4)
= 3(x1 − 2) + 0(x2 − 5) + 5(x3 − 4)
= 3x1 − 6 + 5x3 − 20
which gives 3x1 + 5x3 = 26.
A23 The scalar equation of the plane is
0 = n · (x − p) = (3, −4, 1) · (x1 − 1, x2 + 1, x3 − 1)
= 3(x1 − 1) + (−4)(x2 + 1) + 1(x3 − 1)
= 3x1 − 3 − 4x2 − 4 + x3 − 1
which gives 3x1 − 4x2 + x3 = 8.
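The expansions in A21–A23 all collapse to n · x = n · p, so the constant on the right-hand side is just a dot product; a sketch using A21's data (normal and point read off from the n · (x − p) form):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# A21: normal n = (2, 4, -1), point p = (-1, 2, -3)
n, p = [2, 4, -1], [-1, 2, -3]
print(dot(n, p))   # 9, giving the plane 2*x1 + 4*x2 - x3 = 9
```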
    
 

 1 −2  (−5)(5) − 1(2)  −27
    
 

A24 −5 ×  1 =  2(−2) − 1(5)  =  −9
    
 

2
5
1(1) − (−5)(−2)
−9
Copyright © 2020 Pearson Canada Inc.
20
    
 

 2  4 (−3)(7) − (−5)(−2) −31
    
 

A25 −3 × −2 =  (−5)(4) − 2(7)  = −34
    
 

−5
7
2(−2) − (−3)(4)
8
    
  
−1 0  0(5) − (−1)(4)   4
    
  
A26  0 × 4 = (−1)(0) − (−1)(5) =  5
    
  
−1
5
(−1)(4) − 0(0)
−4
    
  
1 −1  2(0) − 0(−3)   0
    
  
A27 2 × −3 =  0(−1) − 1(0)  =  0
    
  
0
0
1(−3) − 2(−1)
−1
    
  
 4 −2 (−2)(−3) − 6(1) 0
    
  
A28 −2 ×  1 =  6(−2) − 4(−3)  = 0
    
  
6
−3
4(1) − (−2)(−2)
0
    
  
3 3 1(3) − 3(1) 0
    
  
A29 1 × 1 = 3(3) − 3(3) = 0
    
  
3
3
3(1) − 1(3)
0
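The cross products in A24–A29 use the component formula, which can be sketched directly in Python:

```python
def cross(u, v):
    # component formula for the cross product in R^3
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

print(cross([1, -5, 2], [-2, 1, 5]))   # [-27, -9, -9]  (A24)
print(cross([3, 1, 3], [3, 1, 3]))     # [0, 0, 0]      (A29)
```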

  
 4(2) − 2(4)  0

  
A30 (a) "u × "u = 2(−1) − (−1)(2) = 0

  
(−1)(4) − 4(−1)
0
(b) We have

 

 4(−1) − 2(1)   −6

 

"u × "v = 2(3) − (−1)(−1) =  5

 

(−1)(1) − 4(3)
−13


 
 1(2) − (−1)(4) 
 6


 
−"v × "u = − (−1)(−1) − 3(2) = − −5 = "u × "v


 
3(4) − 1(−1)
13
(c) We have
    
 

−1  6  4(−3) − 2(−9)   6
    
 

"u × 3"
w =  4 × −9 = 2(6) − (−1)(−3) =  9
    
 

2
−3
(−1)(−9) − 4(6)
−15


  

 4(−1) − 2(−3) 
 2  6


  

" ) = 3 2(2) − (−1)(−1) = 3  3 =  9
3("u × w


  

(−1)(−3) − 4(2)
−5
−15
Copyright © 2020 Pearson Canada Inc.
21
(d) We have
   
−1  5
   
"u × ("v + w
" ) =  4 × −2
   
2
−2

 

 4(−2) − 2(−2)   −4

 

= 2(5) − (−1)(−2) =  8

 

(−1)(−2) − 4(5)
−18

   

 −6  2  −4

   

"u × "v + "u × w
" =  5 +  3 =  8

   

−13
−5
−18
(e) We have
  
   

−1 1(−1) − (−1)(−3) −1  −4
  
   

"u · ("v × w
" ) =  4 ·  (−1)(2) − 3(−1)  =  4 ·  1 = −14
  
   

2
3(−3) − 1(2)
2 −11
  

 2  −6
  

" · ("u × "v ) = −3 ·  5 = −14
w
  

−1 −13
" ) = −14. Then
(f) From part (e) we have "u · ("v × w
   
 3  2
   
"v · ("u × w
" ) =  1 ·  3 = 14 = −"u · ("v × w
")
   
−1 −5
    

 2 4  1
    

A31 A normal vector for the plane is "n =  3 × 1 =  −4. Thus, a scalar equation for the plane is
    

−1
0
−10
x1 − 4x2 − 10x3 = 1(1) − 4(4) − 10(7) = −85
     
1 −2  2
     
A32 A normal vector for the plane is "n = 1 ×  1 = −2. Thus, a scalar equation for the plane is
     
0
2
3
2x1 − 2x2 + 3x3 = 2(2) − 2(3) + 3(−1) = −5
     
 2 0 −5
     
A33 A normal vector for the plane is "n = −2 × 3 = −2. Thus, a scalar equation for the plane is
     
1
1
6
−5x1 − 2x2 + 6x3 = −5(1) − 2(−1) + 6(3) = 15
Copyright © 2020 Pearson Canada Inc.
22
    

1 −2 −17
    

A34 A normal vector for the plane is "n = 3 ×  4 =  −1. Thus, a scalar equation for the plane is
    

2
−3
10
−17x1 − x2 + 10x3 = −17(0) − (0) + 10(0) = 0
For Problems A35 - A40, alternate answers are possible.
A35 We can rewrite the equation as x3 = −2x1 + 3x2 . Thus, a vector equation is
(x1, x2, x3) = (x1, x2, −2x1 + 3x2) = x1(1, 0, −2) + x2(0, 1, 3), x1, x2 ∈ R
A36 We can rewrite the equation as x2 = 5 − 4x1 + 2x3 . Thus, a vector equation is
(x1, x2, x3) = (x1, 5 − 4x1 + 2x3, x3) = (0, 5, 0) + x1(1, −4, 0) + x3(0, 2, 1), x1, x3 ∈ R
A37 We can rewrite the equation as x1 = 1 − 2x2 − 2x3 . Thus, a vector equation is
(x1, x2, x3) = (1 − 2x2 − 2x3, x2, x3) = (1, 0, 0) + x2(−2, 1, 0) + x3(−2, 0, 1), x2, x3 ∈ R
A38 We can rewrite the equation as x1 = 7/3 − (5/3)x2 + (4/3)x3 . Thus, a vector equation is
(x1, x2, x3) = (7/3, 0, 0) + x2(−5/3, 1, 0) + x3(4/3, 0, 1), x2, x3 ∈ R
A39 We can rewrite the equation as x2 = 2x1 + 3x3 . Thus, a vector equation is
(x1, x2, x3) = (x1, 2x1 + 3x3, x3) = x1(1, 2, 0) + x3(0, 3, 1), x1, x3 ∈ R
A40 We can rewrite the equation as x2 = 3 − 2x1 − 3x3 . Thus, a vector equation is
(x1, x2, x3) = (x1, 3 − 2x1 − 3x3, x3) = (0, 3, 0) + x1(1, −2, 0) + x3(0, −3, 1), x1, x3 ∈ R
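Each of A35–A40 parametrizes a plane by solving its scalar equation for one variable, and any point built from the parameters should satisfy the original equation. A sketch for A35, whose equation rearranges to 2x1 − 3x2 + x3 = 0:

```python
# A35: x3 = -2*x1 + 3*x2, i.e. points x1*(1, 0, -2) + x2*(0, 1, 3)
def point(x1, x2):
    return [a + b for a, b in zip([x1, 0, -2 * x1], [0, x2, 3 * x2])]

p = point(4, 7)
print(p)                            # [4, 7, 13]
print(2 * p[0] - 3 * p[1] + p[2])   # 0, so p lies on the plane
```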
 
 
 2
 0


 


"
"
A41 We have that the vectors PQ = −4 and PR =  5 are vectors in the plane. Hence, a normal vector
 
 
−3
−6
     
 2  0 39
     
for the plane is "n = −4 ×  5 = 12. Then, since P(2, 1, 5) is a point on the plane we get a scalar
     
−3
−6
10
equation of the plane is
39x1 + 12x2 + 10x3 = 39(2) + 12(1) + 10(5) = 140
Copyright © 2020 Pearson Canada Inc.
23
 
 
−5
−2
" = −1 and PR
" =  3 are vectors in the plane. Hence, a normal vector
A42 We have that the vectors PQ
 
 
−2
−5
    

−5 −2  11
    

for the plane is "n = −1 ×  3 = −21. Then, since P(3, 1, 4) is a point on the plane we get a scalar
    

−2
−5
−17
equation of the plane is
11x1 − 21x2 − 17x3 = 11(3) − 21(1) − 17(4) = −56
 
 
 4
 3
" = −3 and PR
" = −7 are vectors in the plane. Hence, a normal vector
A43 We have that the vectors PQ
 
 
−3
−3
    

 4  3 −12
    

for the plane is "n = −3 × −7 =  3. Then, since P(−1, 4, 2) is a point on the plane we get a
    

−3
−3
−19
scalar equation of the plane is
−12x1 + 3x2 − 19x3 = −12(−1) + 3(4) − 19(2) = −14
 
 
−2
−1


" =  0 and PR
" =  0 are vectors in the plane. Hence, a normal vector
A44 We have that the vectors PQ
 
 
0
−1
     
−2 −1  0
     
for the plane is "n =  0 ×  0 = −2. Then, since R(0, 0, 0) is a point on the plane we get a scalar
     
0
−1
0
equation of the plane is −2x2 = 0 or x2 = 0.
 
 
 3
 1
" = −3 and PR
" =  1 are vectors in the plane. Hence, a normal vector
A45 We have that the vectors PQ
 
 
0
−1
     
 3  1 3
     
for the plane is "n = −3 ×  1 = 3. Then, since P(0, 2, 1) is a point on the plane we get a scalar
     
0
−1
6
equation of the plane is
3x1 + 3x2 + 6x3 = 3(0) + 3(2) + 6(1) = 12 or x1 + x2 + 2x3 = 4
 
 
1
 0
" = 1 and PR
" = −5 are vectors in the plane. Hence, a normal vector
A46 We have that the vectors PQ
 
 
2
4
     
1  0  14
     
for the plane is "n = 1 × −5 = −4. Then, since R(1, 0, 1) is a point on the plane we get a scalar
     
2
4
−5
equation of the plane is
14x1 − 4x2 − 5x3 = 14(1) − 4(0) − 5(1) = 9
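The pattern in A41–A46 (normal from PQ × PR, constant from n · P) can be sketched as below. The coordinates of Q and R are inferred here from A41's difference vectors, so treat them as an assumption:

```python
def cross(u, v):
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

def plane_through(P, Q, R):
    # normal n = PQ x PR; the plane is then n . x = n . P
    PQ = [q - p for p, q in zip(P, Q)]
    PR = [r - p for p, r in zip(P, R)]
    n = cross(PQ, PR)
    d = sum(ni * pi for ni, pi in zip(n, P))
    return n, d

# A41 with P(2, 1, 5); Q and R chosen so that PQ = (2, -4, -3), PR = (0, 5, -6)
print(plane_through([2, 1, 5], [4, -3, 2], [2, 6, -1]))   # ([39, 12, 10], 140)
```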
 
 2
 
A47 A normal vector for the plane is "n = −3. Then, since P(1, −3, −1) is a point on the plane we get a
 
5
scalar equation of the plane is
2x1 − 3x2 + 5x3 = 2(1) − 3(−3) + 5(−1) = 6
 
0
 
A48 A normal vector for the plane is "n = 1. Then, since P(0, −2, −4) is a point on the plane we get a
 
0
scalar equation of the plane is
x2 = −2
 
 1
 
A49 A normal vector for the plane is "n = −1. Then, since P(1, 2, 1) is a point on the plane we get a
 
3
scalar equation of the plane is
x1 − x2 + 3x3 = 1(1) − 1(2) + 3(1) = 2
A50 The line of intersection must lie in both planes and hence it must be orthogonal to both normal vectors. Hence, a direction vector of the line is
d = (1, 3, −1) × (2, −5, 1) = (−2, −3, −11)
To find a point on the line we set x3 = 0 in the equations of both planes to get x1 + 3x2 = 5 and 2x1 − 5x2 = 7. Solving the two equations in two unknowns gives the solution x1 = 46/11 and x2 = 3/11. Thus, an equation of the line is
x = (46/11, 3/11, 0) + t(−2, −3, −11), t ∈ R
A51 A direction vector of the line is
d = (2, 0, −3) × (0, 1, 2) = (3, −4, 2)
To find a point on the line we set x3 = 0 to get 2x1 = 7 and x2 = 4. Thus, an equation of the line is
x = (7/2, 4, 0) + t(3, −4, 2), t ∈ R
A52 A direction vector of the line is
d = (1, −2, 1) × (3, 4, −1) = (−2, 4, 10)
To find a point on the line we set x3 = 0 in the equations of both planes to get x1 − 2x2 = 1 and 3x1 + 4x2 = 5. Solving the two equations in two unknowns gives the solution x1 = 7/5 and x2 = 1/5. Thus, an equation of the line is
x = (7/5, 1/5, 0) + t(−2, 4, 10), t ∈ R
A53 A direction vector of the line is
d = (1, −2, 1) × (3, 4, −1) = (−2, 4, 10)
Clearly (0, 0, 0) is on both planes. Hence, an equation of the line is
x = t(−2, 4, 10), t ∈ R
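A50's two steps (direction from the cross product of the normals, point from the 2×2 system at x3 = 0) can be checked with exact arithmetic; the full plane equations are inferred from the normals and right-hand sides in the solution, so treat them as an assumption:

```python
from fractions import Fraction

def cross(u, v):
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

# A50: normals (1, 3, -1) and (2, -5, 1)
print(cross([1, 3, -1], [2, -5, 1]))   # [-2, -3, -11]

# point with x3 = 0: solve x1 + 3*x2 = 5 and 2*x1 - 5*x2 = 7 by Cramer's rule
det = Fraction(1 * (-5) - 3 * 2)
x1 = Fraction(5 * (-5) - 3 * 7) / det
x2 = Fraction(1 * 7 - 5 * 2) / det
print(x1, x2)   # 46/11 3/11
```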
A54 The area of the parallelogram is ‖(1, 2, 1) × (2, 3, −1)‖ = ‖(−5, 3, −1)‖ = √35
A55 The area of the parallelogram is ‖(1, 0, 1) × (1, 1, 4)‖ = ‖(−1, −3, 1)‖ = √11
A56 As specified in the hint, we write the vectors as (−3, 1, 0) and (4, 3, 0). Hence, the area of the parallelogram is ‖(−3, 1, 0) × (4, 3, 0)‖ = ‖(0, 0, −13)‖ = 13
A57 u · (v × w) = 0 means that u is orthogonal to v × w. Therefore, u lies in the plane through the origin that contains v and w. We can also see this by observing that u · (v × w) = 0 means that the parallelepiped determined by u, v, and w has volume zero; this can happen only if the three vectors lie in a common plane.
A58 We have
(u − v) × (u + v) = u × (u + v) − v × (u + v)
= u × u + u × v − v × u − v × v
= 0 + u × v + u × v − 0
= 2(u × v)
as required.
B Homework Problems
B1 √17   B2 √13   B3 0
B4 1   B5 1   B6 √3/2
B7 √26   B8 √17   B9 √41
B10 √11   B11 √24   B12 √14
B13 √57
B14 Not orthogonal   B15 Not orthogonal   B16 Orthogonal
B17 Not orthogonal   B18 Not orthogonal   B19 Orthogonal
B20 k = 0   B21 k = 0, −3
B22 k = 2/7   B23 k = 0, 5
B24 x1 − x2 + 5x3 = 4   B25 3x1 + 3x2 − 4x3 = 17
B26 −2x2 − x3 = −5   B27 x1 + 3x2 + x3 = 11
B28 5x1 − 6x2 + 3x3 = 0
B29 (3, 6, 0)   B30 (5, 5, 10)
B31 (0, 0, 0)   B32 (1, −11, −16)
B33 (−1, 11, 16)   B34 (−4, 20, −12)
B35 (a) u × u = (0, 0, 0)
(b) u × v = (0, −2, −1) = −v × u
(c) u × 2w = (−6, −8, 2) = 2(u × w)
(d) u × (v + w) = (−3, −6, 0) = u × v + u × w
(e) u · (v × w) = −3 = w · (u × v)
(f) u · (v × w) = −3 = −v · (u × w)
B36 −2x1 − 4x2 + 5x3 = −15   B37 x1 − 7x2 − 5x3 = −35
B38 x1 − x2 − x3 = 1   B39 x1 + 11x2 + 14x3 = 0
B40 x1 + x3 = 0   B41 5x1 + 2x2 − 3x3 = 0
B42 x = (5, 0, 0) + x2(2, 1, 0) + x3(−2, 0, 1), x2, x3 ∈ R
B43 x = (0, 1, 0) + x1(1, 0, 0) + x3(0, −1, 1), x1, x3 ∈ R
B44 x = x2(−1, 1, 0) + x3(−2, 0, 1), x2, x3 ∈ R
B45 x = (6, 0, 0) + x2(−1, 1, 0) + x3(1, 0, 1), x2, x3 ∈ R
B46 x = x1(1, 0, 3) + x2(0, 1, 5), x1, x2 ∈ R
B47 x = (−2, 0, 0) + x2(−1, 1, 0) + x3(3/2, 0, 1), x2, x3 ∈ R
B48 x1 + 11x2 + 2x3 = 43   B49 8x1 − x2 + 2x3 = 25
B50 x1 + 2x2 + 2x3 = 6   B51 7x1 + x2 − 14x3 = −6
B52 −2x1 − 6x2 + x3 = −31   B53 −19x1 + 22x2 − 21x3 = −6
B54 4x1 + x2 + 2x3 = 6   B55 −x1 + 2x2 − 3x3 = −23
B56 2x1 + 3x3 = 12   B57 −x1 − 5x2 + 3x3 = −6
B58 2x1 + 3x2 − 4x3 = 0   B59 4x1 + 2x2 + 2x3 = 0
B60 x = (11/7, −2/7, 0) + t(5, 1, −7), t ∈ R
B61 x = (1/2, 3/4, 0) + t(2, −3, −4), t ∈ R
B62 x = (9/7, 1/7, 0) + t(1, 4, 7), t ∈ R
B63 x = (7/4, 1/2, 0) + t(5, 2, −16), t ∈ R
B64 x = (4/3, −10/3, 0) + t(−5, 11, −3), t ∈ R
B65 √75   B66 √65   B67 √120
B68 19   B69 2   B70 6
C Conceptual Problems
C1 (a) First, we know that d ≠ 0, as otherwise the vector equation would not be a line. Intuitively, if there is no point of intersection, the line is parallel to the plane. Hence, the direction vector of the line must be orthogonal to the normal to the plane. Therefore, we will have that d · n = 0. Since the point P cannot be on the plane, it cannot satisfy the equation of the plane, so p · n ≠ k.
(b) Substitute x = p + td into the equation of the plane to see whether for some t, x satisfies the equation of the plane:
n · (p + td) = k
Isolate the term in t: t(n · d) = k − n · p.
There is one solution for t (and thus, one point of intersection of the line and the plane) exactly when n · d ≠ 0. If n · d = 0, there is no solution for t unless we also have n · p = k. In this case the equation is satisfied for all t and the line lies in the plane. Thus, to have no point of intersection, it is necessary and sufficient that n · d = 0 and n · p ≠ k.
C2 (a) We have x · x = x1² + x2² + x3² ≥ 0.
(b) If x · x = 0, then x1² + x2² + x3² = 0, which implies x1 = x2 = x3 = 0 as required. On the other hand, 0 · 0 = 0² + 0² + 0² = 0.
(c) We have x · y = x1 y1 + x2 y2 + x3 y3 = y1 x1 + y2 x2 + y3 x3 = y · x.
(d) We have
x · (sy + tz) = x1 (sy1 + tz1) + x2 (sy2 + tz2) + x3 (sy3 + tz3)
= s[x1 y1 + x2 y2 + x3 y3] + t[x1 z1 + x2 z2 + x3 z3]
= s(x · y) + t(x · z)
C3 (a) (x1, x2) · (0, 0) = x1(0) + x2(0) = 0
(b) x · 0 = x · 0(y) = 0(x · y) = 0
C4 Let x be any point that is equidistant from P and Q. Then x satisfies ‖x − p‖ = ‖x − q‖, or equivalently, ‖x − p‖² = ‖x − q‖². Hence,
(x − p) · (x − p) = (x − q) · (x − q)
x · x − x · p − p · x + p · p = x · x − x · q − q · x + q · q
−2p · x + 2q · x = q · q − p · p
2(q − p) · x = ‖q‖² − ‖p‖²
This is the equation of a plane with normal vector 2(q − p).
C5 (a) A point x on the plane must satisfy ‖x − (2, 2, 5)‖ = ‖x − (−3, 4, 1)‖. Square both sides and simplify:
(x − (2, 2, 5)) · (x − (2, 2, 5)) = (x − (−3, 4, 1)) · (x − (−3, 4, 1))
x · x − 2(2, 2, 5) · x + 33 = x · x − 2(−3, 4, 1) · x + 26
2((−3, 4, 1) − (2, 2, 5)) · x = 26 − 33
5x1 − 2x2 + 4x3 = 7/2
(b) A point equidistant from the points is (1/2)((2, 2, 5) + (−3, 4, 1)) = (−1/2, 3, 3). The vector joining the two points, n = (2, 2, 5) − (−3, 4, 1) = (5, −2, 4), must be orthogonal to the plane. Thus, the equation of the plane is
0 = n · (x − p) = (5, −2, 4) · (x1 + 1/2, x2 − 3, x3 − 3)
which gives
5x1 − 2x2 + 4x3 = 7/2
C6 (a) The statement is false. If x = 0, y = e1 and z = e2, then x · y = 0 = x · z but y ≠ z.
(b) No, it does not. If x = (1, 1, 0), y = (−1, 1, 2), and z = (1, −1, 3), then x · y = 0 = x · z but y ≠ z.
C7 If X is a point on the line through P and Q, then for some t ∈ R, x = p + t(q − p). Hence,
x × (q − p) = (p + t(q − p)) × (q − p)
= p × q − p × p + t(q − p) × (q − p) = p × q
 
C8 (a) Let n = (n1, n2, n3). We have n · e1 = ‖n‖ ‖e1‖ cos α. But ‖n‖ = 1 and ‖e1‖ = 1, so n · e1 = cos α. But n · e1 = n1, so n1 = cos α. Similarly, n2 = cos β and n3 = cos γ, so n = (cos α, cos β, cos γ).
(b) cos²α + cos²β + cos²γ = ‖n‖² = 1, because n is a unit vector.
(c) In R², the unit vector is n = (cos α, cos β), where α is the angle between n and e1 and β is the angle between n and e2. But in the plane α + β = π/2, so cos β = cos(π/2 − α) = sin α. Now let θ = α, and we have
1 = ‖n‖² = cos²α + cos²β = cos²θ + sin²θ
C9 The statement is false. For any non-zero vector u and any vector v ∈ R³, let w = v + tu for any t ∈ R, t ≠ 0. Then
u × w = u × (v + tu) = u × v
but v ≠ w.
C10 If v × w = 0, then u × (v × w) = 0, which clearly satisfies the equation x = sv + tw. Assume n = v × w ≠ 0. Then n is orthogonal to both v and w, and hence it is a normal vector of the plane through the origin containing v and w. Then u × (v × w) = u × n is orthogonal to n, so it lies in the plane through the origin with normal vector n. That is, it is in the plane containing v and w. Hence, there exist s, t ∈ R such that u × (v × w) = sv + tw.
 
0
 
C11 (a) We have "e1 × ("e2 × "e3 ) = 0 = ("e1 × "e2 ) × "e3 .
 
0
 
 
 
1
0
−2
 
 
 
" = 2. Then "e1 × ("e2 × w
" ) = 1 while ("e1 × "e2 ) × w
" =  1.
(b) Take w
 
 
 
0
0
0
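The failure of associativity in C11(b) is easy to reproduce numerically (a sketch, using the standard component formula for the cross product):

```python
def cross(u, v):
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

e1, e2 = [1, 0, 0], [0, 1, 0]
w = [1, 2, 0]
print(cross(e1, cross(e2, w)))   # [0, 1, 0]
print(cross(cross(e1, e2), w))   # [-2, 1, 0], so the two groupings differ
```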
C12 Consider 0 = c1 x + c2 y. Taking the dot product of both sides with x gives
0 · x = (c1 x + c2 y) · x
0 = c1 (x · x) + c2 (x · y)
0 = c1 ‖x‖² + 0
But ‖x‖ ≠ 0 since x ≠ 0. Thus, c1 = 0. Similarly, taking the dot product of both sides with y gives c2 = 0. Thus, {x, y} is linearly independent.
C13 Consider x = c1 v1 + c2 v2. Taking the dot product of both sides with v1 gives
x · v1 = (c1 v1 + c2 v2) · v1
x · v1 = c1 ‖v1‖² + 0
Since v1 ≠ 0 (as otherwise B would be linearly dependent), we get that c1 = (x · v1)/‖v1‖², as required. The proof for c2 is the same.
Section 1.4
A Practice Problems
 
       
 1
 2  1  4 5
 3
 3  3  6 9


A1   + 2   =   +   =  
 2
−1  2 −2 0
−1
1
−1
2
1
 
 
         
 1
−1
 3  1 −3  6  10
−2
 1
−1 −2  3 −2 −7
A2   − 3   + 2   =   −   +   =  
 5
 1
 4  5  3  8  10
1
2
0
1
6
0
−5
       
 3 5  2 6
−4 2 −2 0
       
A3 −1 + 2 − −3 = 4
       
 2 4  1 5
1
3
1
3
 
 
         
 1
 2
2  2  4 6  0
 2
−2
0  4 −4 0  0
 
 
         
A4 2  1 + 2  1 − 3 1 =  2 +  2 − 3 =  1
 
 
         
 0
 2
1  0  4 3  1
−1
1
1
−2
2
3
−3
A5 The set is a subspace of R2 by Theorem 1.4.2.
A6 Since the condition of the set contains the square of a variable in it, we suspect that it is not a subspace. To prove it is not a subspace we just need to find one example where the set is not closed under linear combinations. Let x = (1, 1, 0) and y = (2, 1, 3). Observe that x and y are in the set since x1² − x2² = 1² − 1² = 0 = x3 and y1² − y2² = 2² − 1² = 3 = y3, but x + y = (3, 2, 3) is not in the set since 3² − 2² = 5 ≠ 3.
A7 Since the condition of the set only contains linear variables, we suspect that this is a subspace. To prove it is a subspace we need to show that it satisfies the definition of a subspace.
Call the set S. First, observe that S is a subset of R³ and is non-empty since the zero vector satisfies the conditions of the set. Pick any vectors x = (x1, x2, x3) and y = (y1, y2, y3) in S. Then they must satisfy the condition of S, so x1 = x3 and y1 = y3. We now need to show that sx + ty = (sx1 + ty1, sx2 + ty2, sx3 + ty3) satisfies the conditions of the set. In particular, we need to show that the first entry of sx + ty equals its third entry. Since x1 = x3 and y1 = y3 we get sx1 + ty1 = sx3 + ty3 as required. Thus, S is a subspace of R³.
A8 Since the condition of the set only contains linear variables, we suspect that this is a subspace. Call the set S. First, observe that S is a subset of R² and is non-empty since the zero vector satisfies the conditions of the set. Pick any vectors x = (x1, x2) and y = (y1, y2) in S. Then they must satisfy the condition of S, so x1 + x2 = 0 and y1 + y2 = 0. Then sx + ty = (sx1 + ty1, sx2 + ty2) satisfies the conditions of the set since (sx1 + ty1) + (sx2 + ty2) = s(x1 + x2) + t(y1 + y2) = s(0) + t(0) = 0. Thus, S is a subspace of R².
A9 The condition of the set involves multiplication of entries, so we suspect that it is not a subspace. Observe that if x = (x1, x2, x3) = (1, 1, 1), then x is in the set since x1 x2 = 1(1) = 1 = x3, but 2x = (2, 2, 2) is not in the set since 2(2) = 4 ≠ 2. Therefore, the set is not a subspace.
 
2
 
A10 At first glance this might not seem like a subspace since we are adding the vector 2. However, the
 
2
 
 
 

2
1
1



 

 
 
1
key observation to make is that 2 = 2 1. Therefore, the set can be written as S = Span 
and



 
 


1

2
1
hence is a subspace by Theorem 1.4.2.
A11 Since the condition of the set only contains linear variables, we suspect that this is a subspace. Call the set S. By definition S is a subset of R⁴ and is non-empty since the zero vector satisfies the conditions of the set. Pick any vectors x = (x1, x2, x3, x4) and y = (y1, y2, y3, y4) in S; then x1 + x2 + x3 + x4 = 0 and y1 + y2 + y3 + y4 = 0. We have sx + ty = (sx1 + ty1, sx2 + ty2, sx3 + ty3, sx4 + ty4), which satisfies the conditions of the set since (sx1 + ty1) + (sx2 + ty2) + (sx3 + ty3) + (sx4 + ty4) = s(x1 + x2 + x3 + x4) + t(y1 + y2 + y3 + y4) = s(0) + t(0) = 0. Thus, S is a subspace of R⁴.
A12 The set clearly does not contain the zero vector and hence cannot be a subspace.
A13 The conditions of the set only contain linear variables, but we notice that the first equation x1 +2x3 = 5
excludes x1 = x3 = 0. Hence the zero vector is not in the set so it is not a subspace.
A14 The conditions of the set involve a multiplication of variables, so we suspect that it is not a subspace. We take x = (1, 1, 1, 1). Then x is in the set since x1 = 1 = 1(1) = x3 x4 and x2 − x4 = 1 − 1 = 0. But 2x = (2, 2, 2, 2) is not in the set since 2 ≠ 2(2).
A15 Since the conditions of the set only contain linear variables, we suspect that this is a subspace. Call the set S. By definition S is a subset of R⁴ and is non-empty since the zero vector satisfies the conditions of the set. Pick any vectors x = (x1, x2, x3, x4) and y = (y1, y2, y3, y4) in S; then 2x1 = 3x4, x2 − 5x3 = 0, 2y1 = 3y4, and y2 − 5y3 = 0. We have sx + ty = (sx1 + ty1, sx2 + ty2, sx3 + ty3, sx4 + ty4), which satisfies the conditions of the set since 2(sx1 + ty1) = 2sx1 + 2ty1 = 3sx4 + 3ty4 = 3(sx4 + ty4) and (sx2 + ty2) − 5(sx3 + ty3) = s(x2 − 5x3) + t(y2 − 5y3) = s(0) + t(0) = 0. Thus, S is a subspace of R⁴.
A16 Since x3 = 2 the zero vector cannot be in the set, so it is not a subspace.
For Problems A17 - A20, alternative correct answers are possible.
 
 
   
0
1
 2 0
0
 1 0
0
A17 1   + 0   + 0   =  
−1 0
0
1
0
2
−3
0
 
 
   
 2
 0
 0 0
−1
 2
 4 0
A18 0   − 2   + 1   =  
 3
 1
 2 0
2
−1
−2
0
 
 
   
1
1
2 0
1
2 0
1
A19 1   + 1   − 1   =  
1 0
0
1
2
1
3
0
A20 It is difficult to determine a linear combination by inspection, so we set up a system of equations. Consider
(0, 0, 0, 0) = c1(1, 1, −2, 3) + c2(1, 2, 1, 3) + c3(1, −1, −8, 3)
= (c1 + c2 + c3, c1 + 2c2 − c3, −2c1 + c2 − 8c3, 3c1 + 3c2 + 3c3)
This gives us the system of equations
c1 + c2 + c3 = 0
c1 + 2c2 − c3 = 0
−2c1 + c2 − 8c3 = 0
3c1 + 3c2 + 3c3 = 0
Adding the first equation and the second equation gives 2c1 + 3c2 = 0. Subtracting the first equation from the second equation gives c2 − 2c3 = 0. Thus, if we take c3 = 1, we get c2 = 2 and hence c1 = −3. Indeed, we find that
(−3)(1, 1, −2, 3) + 2(1, 2, 1, 3) + (1, −1, −8, 3) = (0, 0, 0, 0)
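The coefficients found in A20 can be verified by forming the linear combination directly (a sketch):

```python
# vectors and coefficients from A20
vs = [[1, 1, -2, 3], [1, 2, 1, 3], [1, -1, -8, 3]]
cs = [-3, 2, 1]
combo = [sum(c * v[i] for c, v in zip(cs, vs)) for i in range(4)]
print(combo)   # [0, 0, 0, 0]
```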
 
     
1
 1 −2 0
1
 2 −4 0
A21 Observe that 0   + 2   +   =  , so the set is linearly dependent.
0
−1  2 0
3
1
−2
0
   
   
1 2
1 0
1 2
0 0
A22 Observe that −2   +   + 0   =  , so the set is linearly dependent.
2 4
1 0
1
2
0
0
 
 
  

0
1
0  c1 
0
1
1 c + c 
2
. Comparing entries gives c1 = c2 = 0, so the set is linearly
A23 Consider   = c1   + c2   =  1
0
0
1  c2 
0
1
1
c1 + c2
independent.
 
 
 
  

0
3
 4
 3 3c1 + 4c2 + 3c3 
0
2
 4
 3 2c + 4c + 3c 
2
3
. This gives
A24 Consider   = c1   + c2   + c3   =  1
0
1
−5
−2  c1 − 5c2 − 2c3 
0
2
0
1
2c1 + c3
3c1 + 4c2 + 3c3 = 0
2c1 + 4c2 + 3c3 = 0
c1 − 5c2 − 2c3 = 0
2c1 + c3 = 0
Subtracting the second equation from the first gives c1 = 0. Then the fourth equation gives c3 = 0, and any of the remaining equations gives c2 = 0. Thus, the set is linearly independent.
 
A25 By the definition of P, every x = (x1, x2, x3) ∈ P satisfies 2x1 + x2 + x3 = 0. Solving this for x2 gives x2 = −2x1 − x3. Consider
(x1, −2x1 − x3, x3) = c1(1, 0, −2) + c2(0, 1, −1) = (c1, c2, −2c1 − c2)
Solving we find that c1 = x1, c2 = −2x1 − x3. Observe that −2c1 − c2 = −2x1 − (−2x1 − x3) = x3, so the third equation is also satisfied. Thus, B spans P. Now consider
(0, 0, 0) = c1(1, 0, −2) + c2(0, 1, −1) = (c1, c2, −2c1 − c2)
Comparing entries we get that c1 = c2 = 0. Hence, B is also linearly independent.
Since B is linearly independent and spans P, it is a basis for P.
NOTE: We could have solved the equation for the plane P for x3 instead.
 
A26 By the definition of P, every x = (x1, x2, x3) ∈ P satisfies 3x1 + x2 − 2x3 = 0. Solving this for x2 gives x2 = −3x1 + 2x3. Consider
(x1, −3x1 + 2x3, x3) = c1(1, −3, 0) + c2(0, 2, 1) = (c1, −3c1 + 2c2, c2)
Solving we find that c1 = x1, c2 = x3 (observe that −3c1 + 2c2 = −3x1 + 2x3, so the second equation is also satisfied). Thus, B spans P. Now consider
(0, 0, 0) = c1(1, −3, 0) + c2(0, 2, 1) = (c1, −3c1 + 2c2, c2)
Comparing entries we get that c1 = c2 = 0. Hence, B is also linearly independent.
Since B is linearly independent and spans P, it is a basis for P.
 
A27 By the definition of P, every "x = (x1, x2, x3) ∈ P satisfies 3x1 + x2 − 2x3 = 0. Solving this for x2 gives x2 = −3x1 + 2x3. Consider
(x1, −3x1 + 2x3, x3) = c1 (1, 0, 3/2) + c2 (0, 1, 1/2) = (c1, c2, (3/2)c1 + (1/2)c2)
Solving we find that c1 = x1, c2 = −3x1 + 2x3 (observe that (3/2)c1 + (1/2)c2 = (3/2)x1 + (1/2)(−3x1 + 2x3) = x3, so the third equation is also satisfied). Thus, B spans P. Now consider
 
 
  

(0, 0, 0) = c1 (1, 0, 3/2) + c2 (0, 1, 1/2) = (c1, c2, (3/2)c1 + (1/2)c2)
Comparing entries we get that c1 = c2 = 0. Hence, B is also linearly independent.
Since B is linearly independent and spans P, it is a basis for P.
 
A28 By the definition of P, every "x = (x1, x2, x3, x4) ∈ P satisfies x1 + x2 + x3 − x4 = 0. Solving this for x4 gives x4 = x1 + x2 + x3. Consider
(x1, x2, x3, x1 + x2 + x3) = c1 (1, 0, 0, 1) + c2 (0, 1, 0, 1) + c3 (0, 0, 1, 1) = (c1, c2, c3, c1 + c2 + c3)
Solving we find that c1 = x1 , c2 = x2 , c3 = x3 (observe that c1 + c2 + c3 = x1 + x2 + x3 = x4 so the
fourth equation is also satisfied). Thus, B spans P. Now consider
 
 
 
  

(0, 0, 0, 0) = c1 (1, 0, 0, 1) + c2 (0, 1, 0, 1) + c3 (0, 0, 1, 1) = (c1, c2, c3, c1 + c2 + c3)
Comparing entries we get that c1 = c2 = c3 = 0. Hence, B is also linearly independent.
Since B is linearly independent and spans P, it is a basis for P.
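The two-part argument of A25–A28 (spanning plus independence) can be verified numerically as well. This NumPy sketch is an addition, not part of the printed solution; it uses A28's data: every basis vector must satisfy the hyperplane equation, and the three vectors must be independent.

```python
import numpy as np

# Rows are the basis vectors from A28; n is the normal of x1 + x2 + x3 - x4 = 0.
B = np.array([[1.0, 0.0, 0.0, 1.0],
              [0.0, 1.0, 0.0, 1.0],
              [0.0, 0.0, 1.0, 1.0]])
n = np.array([1.0, 1.0, 1.0, -1.0])

in_hyperplane = np.allclose(B @ n, 0)        # every basis vector lies in P
independent = np.linalg.matrix_rank(B) == 3  # and the set is independent
```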
For Problems A29 - A32, alternative correct answers are possible.
A29 We observe that neither vector is a scalar multiple of the other. Hence, this is a linearly independent set of two vectors in R4. Hence, it is a plane in R4 with basis {(1, 0, 1, 1), (1, 2, 1, 3)}.
      

1 0 0





     



0 1 0

A30 The set 
,
,
is a subset of the standard basis for R4 and hence is a linearly independent set























0
0
0








     


0 0 1

     

1 0 0





     



0 1 0

4
4
of three vectors in R . Hence, the span of this set is a hyperplane in R with basis 
,
,
.














0 0 0














0 0 1

A31 Observe that the second and third vectors are just scalar multiples of the first vector. Hence, by Theorem 1.4.3, we can write
Span{(3, 1, −1, 0), (0, 0, 0, 0), (6, 2, −2, 0)} = Span{(3, 1, −1, 0)}
Therefore, it is a line in R4 with basis {(3, 1, −1, 0)}.

A32 Observe that the third vector is the sum of the first two vectors. Hence, by Theorem 1.4.3 we can write
Span{(1, 1, 0, 2), (1, 0, 0, −1), (2, 1, 0, 1)} = Span{(1, 1, 0, 2), (1, 0, 0, −1)}
Since {(1, 1, 0, 2), (1, 0, 0, −1)} is linearly independent, we get that it spans a plane in R4 with basis {(1, 1, 0, 2), (1, 0, 0, −1)}.
A33 If "x = "p + td" is a subspace of Rn, then it contains the zero vector. Hence, there exists t1 such that "0 = "p + t1 d". Thus, "p = −t1 d" and so "p is a scalar multiple of d". On the other hand, if "p is a scalar multiple of d", say "p = t1 d", then we have "x = "p + td" = t1 d" + td" = (t1 + t)d". Hence, the set is Span{d"} and thus is a subspace.
A34 Assume there is a non-empty subset B1 = {"v 1 , . . . , "v ℓ } of B that is linearly dependent. Then there exist ci , not all zero, such that
"0 = c1"v 1 + · · · + cℓ"v ℓ = c1"v 1 + · · · + cℓ"v ℓ + 0"v ℓ+1 + · · · + 0"v n
which contradicts the fact that B is linearly independent. Hence, B1 must be linearly independent.
A35 (a) Assume that Span{"v 1 , . . . , "v k } = Span{"v 1 , . . . , "v k−1 }.
Since "v k ∈ Span{"v 1 , . . . , "v k } our assumption implies that "v k ∈ Span{"v 1 , . . . , "v k−1 }. Consequently,
there exists b1 , . . . , bk−1 ∈ R such that
"v k = b1"v 1 + · · · + bk−1"v k−1
Therefore, "v k is a linear combination of "v 1 , . . . , "v k−1 as required.
(b) If "v k can be written as a linear combination of "v 1 , . . . , "v k−1 , then, by definition of linear combination, there exist c1 , . . . , ck−1 ∈ R such that
c1"v 1 + · · · + ck−1"v k−1 = "v k
(1)
To prove that Span{"v 1 , . . . , "v k } = Span{"v 1 , . . . , "v k−1 } we will show that the sets are subsets of
each other.
By definition of span, for any "x ∈ Span{"v 1 , . . . , "v k } there exist d1 , . . . , dk ∈ R such that
"x = d1"v 1 + · · · + dk−1"v k−1 + dk"v k
Using equation (1) to substitute in for "v k gives
"x = d1"v 1 + · · · + dk−1"v k−1 + dk (c1"v 1 + · · · + ck−1"v k−1 )
Rearranging using properties from Theorem 1.1.1 gives
"x = (d1 + dk c1 )"v 1 + · · · + (dk−1 + dk ck−1 )"v k−1
Thus, by definition, "x ∈ Span{"v 1 , . . . , "v k−1 } and hence
Span{"v 1 , . . . , "v k } ⊆ Span{"v 1 , . . . , "v k−1 }
Now, if "y ∈ Span{"v 1 , . . . , "v k−1 }, then there exist a1 , . . . , ak−1 ∈ R such that
"y = a1"v 1 + · · · + ak−1"v k−1
= a1"v 1 + · · · + ak−1"v k−1 + 0"v k
Thus, "y ∈ Span{"v 1 , . . . , "v k }. Hence, we also have Span{"v 1 , . . . , "v k−1 } ⊆ Span{"v 1 , . . . , "v k } and so
Span{"v 1 , . . . , "v k } = Span{"v 1 , . . . , "v k−1 }
A36 The linear combination represents how much material is required to produce 100 thingamajiggers and 250 whatchamacallits.
B Homework Problems
 
 9
 4
B1  
16
23
 
 7
 6
 
B2  5
 
−3
−1
B5 It is a subspace of R2 .
B8 It is a subspace of R3 .
B11 It is not a subspace of R3 .
 
 4
−5
 
B3 −5
 
 3
0
B6 It is not a subspace of R2 .
B9 It is not a subspace of R3 .
B12 It is a subspace of R3 .
 
−5
 2
 
B4  12
 
 5
20
B7 It is not a subspace of R2 .
B10 It is a subspace of R3 .
B13 It is not a subspace of R3 .
B14 It is a subspace of R3 .
B15 It is a subspace of R4 .
B17 It is not a subspace of R4 .
     
   
1 0
1  2 3
1 0
2 −1 1
B19   +   −   + 0   =  
5  1 6
1 0
1
1
2
1
0
   
     
 1 2
1 −2 0
−1 2
1 −2 0
B21 0   +   + 0   +   =  
 1 1
1 −1 0
−1
1
1
−1
0
B23 Linearly independent
     
2  1 3
1 −1 0
B25   +   =  
1  0 1
3
4
7
B27 Show B spans P and is linearly independent.
B28 Show B spans P and is linearly independent.
B29 Show B spans P and is linearly independent.
   

1  4





   



3  1

B30 A plane. A basis is 
,
.
   




1
3













2 −2

     

1 −1 3





     



2 −2 1

B32 A hyperplane. A basis is 
,
,






















2
2
0








     




0
0 1
     

1 1 3







0 0 1









B34 A hyperplane. A basis is 
,
,
.






1 2 0





     




1 1 0
B16 It is a subspace of R4 .
B18 It is a subspace of R4 .
 
     
3  3 0
 4
1  0 0
 1
B20 3   − 3   −   =  
−2
1 −9 0
1
2
−3
0
 
 
   
1
3
0 0
1
1
0 0
B22 0   + 0   + 1   =  
2
5
0 0
5
2
0
0
B24 Linearly independent
 
   
−2
2 0
 5
3 4
B26 21   + 12   =  
 1
1 1
4
2
3
   

 1  3





   



 2  2

B31 A plane. A basis is 
,
.
   




−2
4













 1 −1

 

−1





 



−2

B33 A line. A basis is 
.










1




 




 −3 
 

1







2





B35 A line. A basis is 
.


1





 




1
C Conceptual Problems
 
C1 Let "x = (x1 , . . . , xn ) and let s, t ∈ R. Then
(s + t)"x = ((s + t)x1 , . . . , (s + t)xn ) = (sx1 + tx1 , . . . , sxn + txn ) = (sx1 , . . . , sxn ) + (tx1 , . . . , txn ) = s"x + t"x
 
 
C2 Let "x = (x1 , . . . , xn ), "y = (y1 , . . . , yn ), and let t ∈ R. Then
t("x + "y ) = (t(x1 + y1 ), . . . , t(xn + yn )) = (tx1 + ty1 , . . . , txn + tyn ) = (tx1 , . . . , txn ) + (ty1 , . . . , tyn ) = t"x + t"y
C3 By the definition of spanning, every "x ∈ Span B can be written as a linear combination of the vectors
in B. Now, assume that we have "x = s1"v 1 + · · · + sk"v k and "x = t1"v 1 + · · · + tk"v k . Then, we have
s1"v 1 + · · · + sk"v k = t1"v 1 + · · · + tk"v k
(s1"v 1 + · · · + sk"v k ) − (t1"v 1 + · · · + tk"v k ) = "0
(s1 − t1 )"v 1 + · · · + (sk − tk )"v k = "0
Since {"v 1 , . . . , "v k } is linearly independent, this implies that si − ti = 0 for 1 ≤ i ≤ k. That is, si = ti .
Therefore, there is a unique linear combination of the vectors in B which equals "x .
C4 If "v i = "0, then we have that
0"v 1 + · · · + 0"v i−1 + 1"v i + 0"v i+1 + · · · + 0"v k = "0
Hence, by definition, {"v 1 , . . . , "v k } is linearly dependent.
C5 (a) By definition U ∩ V is a subset of Rn , and "0 ∈ U and "0 ∈ V since they are both subspaces. Thus,
"0 ∈ U ∩ V. Let "x , "y ∈ U ∩ V. Then "x , "y ∈ U and "x , "y ∈ V. Since U is a subspace, we have that
s"x + t"y ∈ U for all s, t ∈ R. Similarly, V is a subspace, so s"x + t"y ∈ V for all s, t ∈ R. Hence,
s"x + t"y ∈ U ∩ V. Thus, U ∩ V is a subspace of Rn .
(b) Consider the subspaces U = {(x1 , 0) | x1 ∈ R} and V = {(0, x2 ) | x2 ∈ R} of R2 . Then "x = (1, 0) ∈ U and "y = (0, 1) ∈ V, but "x + "y = (1, 1) is not in U and not in V, so it is not in U ∪ V. Thus, U ∪ V is not a subspace.
(c) Since U and V are subspaces of Rn , "u , "v ∈ Rn for any "u ∈ U and "v ∈ V, so "u + "v ∈ Rn since Rn is closed under addition. Hence, U + V is a subset of Rn . Also, since U and V are subspaces of Rn , we have "0 ∈ U and "0 ∈ V, thus "0 = "0 + "0 ∈ U + V. Pick any vectors "x , "y ∈ U + V. Then, there exist vectors "u 1 , "u 2 ∈ U and "v 1 , "v 2 ∈ V such that "x = "u 1 + "v 1 and "y = "u 2 + "v 2 . We have s"x + t"y = s("u 1 + "v 1 ) + t("u 2 + "v 2 ) = (s"u 1 + t"u 2 ) + (s"v 1 + t"v 2 ) with s"u 1 + t"u 2 ∈ U and s"v 1 + t"v 2 ∈ V since U and V are both subspaces. Hence, s"x + t"y ∈ U + V for all s, t ∈ R. Therefore, U + V is a subspace of Rn .
C6 There are many possible solutions.
(a) Pick "p = (1, 0, 0, 0), "v 1 = (0, 1, 0, 0), "v 2 = (0, 0, 1, 0), and "v 3 = (0, 0, 0, 1).
(b) Pick "p = (0, 0, 0, 0), "v 1 = (1, 0, 0, 0), "v 2 = (0, 1, 0, 0), and "v 3 = (1, 1, 0, 0).
(c) Pick "p = (1, 3, 1, 1), "v 1 = (0, 0, 0, 0), "v 2 = (0, 0, 0, 0), and "v 3 = (0, 0, 0, 0).
(d) Pick "p = (0, 0, 0, 0), "v 1 = (1, 0, 0, 0), "v 2 = (2, 0, 0, 0), and "v 3 = (3, 0, 0, 0).
C7 If "x ∈ Span{"v 1 , s"v 1 + t"v 2 }, then
"x = c1"v 1 + c2 (s"v 1 + t"v 2 ) = (c1 + sc2 )"v 1 + c2 t"v 2 ∈ Span{"v 1 , "v 2 }
Hence, Span{"v 1 , s"v 1 + t"v 2 } ⊆ Span{"v 1 , "v 2 }.
Since t ≠ 0 we get that "v 2 = (−s/t)"v 1 + (1/t)(s"v 1 + t"v 2 ). Hence, if "v ∈ Span{"v 1 , "v 2 }, then
"v = b1"v 1 + b2"v 2 = b1"v 1 + b2 ((−s/t)"v 1 + (1/t)(s"v 1 + t"v 2 )) = (b1 − (b2 s)/t)"v 1 + (b2 /t)(s"v 1 + t"v 2 ) ∈ Span{"v 1 , s"v 1 + t"v 2 }
Thus, Span{"v 1 , "v 2 } ⊆ Span{"v 1 , s"v 1 + t"v 2 }. Hence Span{"v 1 , "v 2 } = Span{"v 1 , s"v 1 + t"v 2 }.
C8 A subspace S of Rn is a subset of Rn that has the additional properties that S is non-empty and that
s"x + t"y ∈ S for all "x , "y ∈ S and s, t ∈ R. That is, every subspace of Rn must be a subset of Rn , but
not every subset of Rn is a subspace of Rn .
C9 TRUE. We can rearrange the equation to get −t"v 1 + "v 2 = "0 with at least one non-zero coefficient (the
coefficient of "v 2 ). Hence {"v 1 , "v 2 } is linearly dependent by definition.
C10 FALSE. If "v 2 = "0 and "v 1 is any non-zero vector, then "v 1 is not a scalar multiple of "v 2 and {"v 1 , "v 2 } is
linearly dependent by Problem C4.
 
 
 
1
1
2
 
 
 
C11 FALSE. If "v 1 = 1, "v 2 = 0, and "v 3 = 0. Then, {"v 1 , "v 2 , "v 3 } is linearly dependent, but "v 1 cannot be
 
 
 
1
0
0
written as a linear combination of "v 1 and "v 2 .
C12 TRUE. If "v 1 = s"v 2 + t"v 3 , then we have "v 1 − s"v 2 − t"v 3 = "0 with at least one non-zero coefficient (the
coefficient of "v 1 ). Hence, by definition, the set is linearly dependent.
C13 FALSE. The set {"0} = Span{"0} is a subspace by Theorem 1.4.2.
C14 TRUE. By Theorem 1.4.2.
Section 1.5
A Practice Problems
A1 (5, 3, −6, 1) · (3, 2, 4, 0) = 5(3) + 3(2) + (−6)(4) + 1(0) = −3
A2 (1, −2, −2, 4) · (2, 1/2, 1/2, −1) = 1(2) + (−2)(1/2) + (−2)(1/2) + 4(−1) = −4
A3 (1, 4, −1, 1) · (2, −1, −1, 1) = 1(2) + 4(−1) + (−1)(−1) + 1(1) = 0
A4 ‖(√2, 1, −√2, −1)‖ = √((√2)^2 + 1^2 + (−√2)^2 + (−1)^2) = √6
A5 ‖(1/2, 1/2, 1/2, 1/2)‖ = √((1/2)^2 + (1/2)^2 + (1/2)^2 + (1/2)^2) = 1
A6 ‖(1, 2, −1, 3)‖ = √(1^2 + 2^2 + (−1)^2 + 3^2) = √15
A7 We have ‖"x ‖ = √(1^2 + 2^2 + 5^2) = √30. Thus, a unit vector in the direction of "x is
x̂ = (1/‖"x ‖)"x = (1/√30)(1, 2, 5)
A8 We have ‖"x ‖ = √(3^2 + (−2)^2 + (−1)^2 + 1^2) = √15. Thus, a unit vector in the direction of "x is
x̂ = (1/‖"x ‖)"x = (1/√15)(3, −2, −1, 1)
A9 We have ‖"x ‖ = √((−2)^2 + 1^2 + 0^2 + 1^2) = √6. Thus, a unit vector in the direction of "x is
x̂ = (1/‖"x ‖)"x = (1/√6)(−2, 1, 0, 1)
A10 We have ‖"x ‖ = √(1^2 + 2^2 + 5^2 + (−3)^2) = √39. Thus, a unit vector in the direction of "x is
x̂ = (1/‖"x ‖)"x = (1/√39)(1, 2, 5, −3)
A11 We have ‖"x ‖ = √((1/2)^2 + (1/2)^2 + (1/2)^2 + (1/2)^2) = 1. Thus, a unit vector in the direction of "x is
x̂ = (1/‖"x ‖)"x = (1/2, 1/2, 1/2, 1/2)
A12 We have ‖"x ‖ = √(1^2 + 0^2 + 1^2 + 0^2 + 1^2) = √3. Thus, a unit vector in the direction of "x is
x̂ = (1/‖"x ‖)"x = (1/√3)(1, 0, 1, 0, 1)
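All of A7–A12 follow one pattern: divide the vector by its norm. A NumPy sketch (an addition to the printed solutions) using A10's vector:

```python
import numpy as np

# Normalizing a vector, as in A7-A12 (A10's vector used as the example).
x = np.array([1.0, 2.0, 5.0, -3.0])
x_hat = x / np.linalg.norm(x)   # unit vector in the direction of x
```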
:: ::
::6:: √
√
√
√
√
 
A13 We have '"x ' = 42 + 32 + 12 = 26, '"y ' = 22 + 12 + 52 = 30, '"x +"y ' = :::4::: = 62 + 42 + 62 =
::6::
√
2 22, and |"x · "y | = 4(2) + 3(1) + 1(5) = 16. The triangle inequality is satisfied since
√
√
√
2 22 ≈ 9.38 ≤ 26 + 30 ≈ 10.58
The Cauchy-Schwarz inequality is also satisfied since 16 ≤
√
26(30) ≈ 27.93.
:: ::
::−2::
;
;
√
√
 
A14 We have '"x ' = 12 + (−1)2 + 22 = 6, '"y ' = (−3)2 + 22 + 42 = 29, '"x + "y ' = ::: 1::: =
:: 6::
;
√
(−2)2 + 12 + 62 = 41, and |"x · "y | = 1(−3) + (−1)(2) + 2(4) = 3. The triangle inequality is satisfied
since
√
√
√
41 ≈ 6.40 ≤ 6 + 29 ≈ 7.83
√
The Cauchy-Schwarz inequality is satisfied since 3 ≤ 6(29) ≈ 13.19.
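Checks like A13 and A14 are mechanical; a NumPy sketch (an addition to the printed solutions) verifying both inequalities for A13's vectors:

```python
import numpy as np

# Verifying the triangle and Cauchy-Schwarz inequalities for A13's vectors.
x = np.array([4.0, 3.0, 1.0])
y = np.array([2.0, 1.0, 5.0])

lhs_triangle = np.linalg.norm(x + y)                   # 2*sqrt(22)
rhs_triangle = np.linalg.norm(x) + np.linalg.norm(y)   # sqrt(26) + sqrt(30)
lhs_cs = abs(x @ y)                                    # 16
rhs_cs = np.linalg.norm(x) * np.linalg.norm(y)         # sqrt(26 * 30)
```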
A15 A scalar equation of the hyperplane is 3x1 + x2 + 4x3 = 3(1) + 1(1) + 4(−1) = 0.
A16 A scalar equation of the hyperplane is x2 + 3x3 + 3x4 = 0(2) + 1(−2) + 3(0) + 3(1) = 1.
A17 A scalar equation of the hyperplane is 3x1 − 2x2 − 5x3 + x4 = 3(2) − 2(1) − 5(1) + 1(5) = 4.
A18 A scalar equation of the hyperplane is 2x1 − 4x2 + x3 − 3x4 = 2(3) − 4(1) + 1(0) − 3(7) = −19.
A19 A scalar equation of the hyperplane is x1 − 4x2 + 5x3 − 2x4 = 1(0) − 4(0) + 5(0) − 2(0) = 0.
A20 A scalar equation of the hyperplane is x2 + 2x3 + x4 + x5 = 0(1) + 1(0) + 2(1) + 1(2) + 1(1) = 5.
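A15–A20 all use the same recipe: with normal "n and point "p on the hyperplane, the scalar equation is "n · "x = "n · "p. A NumPy sketch (an addition to the printed solutions) computing the right-hand side for A15:

```python
import numpy as np

# Scalar equation of a hyperplane through point p with normal n:
# n . x = n . p  (A15's data: n = (3, 1, 4), p = (1, 1, -1)).
n = np.array([3.0, 1.0, 4.0])
p = np.array([1.0, 1.0, -1.0])
rhs = n @ p   # A15: 3x1 + x2 + 4x3 = 0
```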
 
 
 1
 
 
1




 1
! "


 3
−4
−1
 
2
 
 






A21 "n =
A22 "n = −2
A23 "n =  3
A24 "n =  
A25 "n = −1
 
 
1
 2
 2
3
−5
 
−3
−1
A26 We have
proj"v ("u ) = (("v · "u )/‖"v ‖^2)"v = (−5/1)(0, 1) = (0, −5)
perp"v ("u ) = "u − proj"v ("u ) = (3, −5) − (0, −5) = (3, 0)
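The projection and perpendicular parts computed in A26–A37 follow one formula. A NumPy sketch (an addition to the printed solutions), shown on A26's data:

```python
import numpy as np

def proj(v, u):
    """Projection of u onto v: ((v . u) / ||v||^2) v."""
    return (v @ u) / (v @ v) * v

# A26: v = (0, 1), u = (3, -5).
v = np.array([0.0, 1.0])
u = np.array([3.0, -5.0])
p = proj(v, u)   # (0, -5)
q = u - p        # the perpendicular part, always orthogonal to v
```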
A27 We have
proj"v ("u ) = (("v · "u )/‖"v ‖^2)"v = (12/5)(3/5, 4/5) = (36/25, 48/25)
perp"v ("u ) = "u − proj"v ("u ) = (−4, 6) − (36/25, 48/25) = (−136/25, 102/25)
A28 We have
proj"v ("u ) = (("v · "u )/‖"v ‖^2)"v = (5/1)(0, 1, 0) = (0, 5, 0)
perp"v ("u ) = "u − proj"v ("u ) = (−3, 5, 2) − (0, 5, 0) = (−3, 0, 2)
A29 We have
proj"v ("u ) = (("v · "u )/‖"v ‖^2)"v = (−4/3)(1/3, −2/3, 2/3) = (−4/9, 8/9, −8/9)
perp"v ("u ) = "u − proj"v ("u ) = (4, 1, −3) − (−4/9, 8/9, −8/9) = (40/9, 1/9, −19/9)
A30 We have
proj"v ("u ) = (("v · "u )/‖"v ‖^2)"v = (0/6)(1, 1, 0, −2) = (0, 0, 0, 0)
perp"v ("u ) = "u − proj"v ("u ) = (−1, −1, 2, −1) − (0, 0, 0, 0) = (−1, −1, 2, −1)
A31 We have
proj"v ("u ) = (("v · "u )/‖"v ‖^2)"v = (−1/2)(1, 0, 0, 1) = (−1/2, 0, 0, −1/2)
perp"v ("u ) = "u − proj"v ("u ) = (2, 3, 2, −3) − (−1/2, 0, 0, −1/2) = (5/2, 3, 2, −5/2)
A32 We have
proj"v ("u ) = (("v · "u )/‖"v ‖^2)"v = (0/2)(1, 1) = (0, 0)
perp"v ("u ) = "u − proj"v ("u ) = (3, −3) − (0, 0) = (3, −3)
A33 We have
proj"v ("u ) = (("v · "u )/‖"v ‖^2)"v = (−1/17)(2, 3, −2) = (−2/17, −3/17, 2/17)
perp"v ("u ) = "u − proj"v ("u ) = (4, −1, 3) − (−2/17, −3/17, 2/17) = (70/17, −14/17, 49/17)
A34 We have
proj"v ("u ) = (("v · "u )/‖"v ‖^2)"v = (−14/6)(−2, 1, −1) = (14/3, −7/3, 7/3)
perp"v ("u ) = "u − proj"v ("u ) = (5, −1, 3) − (14/3, −7/3, 7/3) = (1/3, 4/3, 2/3)
A35 We have
proj"v ("u ) = (("v · "u )/‖"v ‖^2)"v = (9/6)(1, 1, −2) = (3/2, 3/2, −3)
perp"v ("u ) = "u − proj"v ("u ) = (4, 1, −2) − (3/2, 3/2, −3) = (5/2, −1/2, 1)
A36 We have
proj"v ("u ) = (("v · "u )/‖"v ‖^2)"v = (−5/15)(−1, 2, 1, −3) = (1/3, −2/3, −1/3, 1)
perp"v ("u ) = "u − proj"v ("u ) = (2, −1, 2, 1) − (1/3, −2/3, −1/3, 1) = (5/3, −1/3, 7/3, 0)
A37 We have
proj"v ("u ) = (("v · "u )/‖"v ‖^2)"v = (−1/6)(2, 0, 1, 1) = (−1/3, 0, −1/6, −1/6)
perp"v ("u ) = "u − proj"v ("u ) = (−1, 2, −1, 2) − (−1/3, 0, −1/6, −1/6) = (−2/3, 2, −5/6, 13/6)
A38 (a) A unit vector in the direction of "u is
û = (1/‖"u ‖)"u = (2/7, 6/7, 3/7)
(b) We have
proj"u (F") = ((F" · "u )/‖"u ‖^2)"u = (110/49)(2, 6, 3) = (220/49, 660/49, 330/49)
(c) We get
perp"u (F") = F" − proj"u (F") = (10, 18, −6) − (220/49, 660/49, 330/49) = (270/49, 222/49, −624/49)
A39 (a) A unit vector in the direction of "u is
û = (1/‖"u ‖)"u = (3/√14, 1/√14, −2/√14)
(b) We have
proj"u (F") = ((F" · "u )/‖"u ‖^2)"u = (16/14)(3, 1, −2) = (24/7, 8/7, −16/7)
(c) We get
perp"u (F") = F" − proj"u (F") = (3, 11, 2) − (24/7, 8/7, −16/7) = (−3/7, 69/7, 30/7)
A40 We first pick a point P on the line, say P(1, 4). Then the point R on the line that is closest to Q(0, 0) satisfies PR" = projd"(PQ") where PQ" = (−1, −4) and d" = (−2, 2) is a direction vector of the line. We get
PR" = projd"(PQ") = ((PQ" · d")/‖d"‖^2)d" = (−6/8)(−2, 2) = (3/2, −3/2)
Therefore, we have
OR" = OP" + PR" = (1, 4) + (3/2, −3/2) = (5/2, 5/2)
Hence, the point on the line closest to Q is R(5/2, 5/2). The distance from R to Q is
‖perpd"(PQ")‖ = ‖(−1, −4) − (3/2, −3/2)‖ = ‖(−5/2, −5/2)‖ = 5/√2
A41 We first pick the point P(3, 7) on the line. Then the point R on the line that is closest to Q(2, 5) satisfies PR" = projd"(PQ") where PQ" = (−1, −2) and d" = (1, −4) is a direction vector of the line. We get
PR" = projd"(PQ") = ((PQ" · d")/‖d"‖^2)d" = (7/17)(1, −4) = (7/17, −28/17)
Therefore, we have
OR" = OP" + PR" = (3, 7) + (7/17, −28/17) = (58/17, 91/17)
Hence, the point on the line closest to Q is R(58/17, 91/17). The distance from R to Q is
‖perpd"(PQ")‖ = ‖(−1, −2) − (7/17, −28/17)‖ = ‖(−24/17, −6/17)‖ = 6/√17
A42 We first pick the point P(2, 2, −1) on the line. Then the point R on the line that is closest to Q(1, 0, 1) satisfies PR" = projd"(PQ") where PQ" = (−1, −2, 2) and d" = (1, −2, 1) is a direction vector of the line. We get
PR" = projd"(PQ") = ((PQ" · d")/‖d"‖^2)d" = (5/6)(1, −2, 1) = (5/6, −5/3, 5/6)
Therefore, we have
OR" = OP" + PR" = (2, 2, −1) + (5/6, −5/3, 5/6) = (17/6, 1/3, −1/6)
Hence, the point on the line closest to Q is R(17/6, 1/3, −1/6). The distance from R to Q is
‖perpd"(PQ")‖ = ‖(−1, −2, 2) − (5/6, −5/3, 5/6)‖ = ‖(−11/6, −1/3, 7/6)‖ = √(29/6)
A43 We first pick the point P(1, 1, −1) on the line. Then the point R on the line that is closest to Q(2, 3, 2) satisfies PR" = projd"(PQ") where PQ" = (1, 2, 3) and d" = (1, 4, 1) is a direction vector of the line. We get
PR" = projd"(PQ") = ((PQ" · d")/‖d"‖^2)d" = (12/18)(1, 4, 1) = (2/3, 8/3, 2/3)
Therefore, we have
OR" = OP" + PR" = (1, 1, −1) + (2/3, 8/3, 2/3) = (5/3, 11/3, −1/3)
Hence, the point on the line closest to Q is R(5/3, 11/3, −1/3). The distance from R to Q is
‖perpd"(PQ")‖ = ‖(1, 2, 3) − (2/3, 8/3, 2/3)‖ = ‖(1/3, −2/3, 7/3)‖ = √6
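The closest-point calculations of A40–A43 share one recipe: project PQ" onto the direction vector. A NumPy sketch (an addition to the printed solutions) reproducing A42:

```python
import numpy as np

# A42: closest point R on the line x = P + t d to the point Q,
# and the distance from Q to the line.
P = np.array([2.0, 2.0, -1.0])
d = np.array([1.0, -2.0, 1.0])
Q = np.array([1.0, 0.0, 1.0])

PQ = Q - P
R = P + (PQ @ d) / (d @ d) * d   # foot of the perpendicular from Q
dist = np.linalg.norm(Q - R)     # equals ||perp_d(PQ)||
```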
A44 We first pick any point P on the plane (that is, any point P(x1 , x2 , x3 ) such that 3x1 − x2 + 4x3 = 5). We pick P(0, −5, 0). Then the distance from Q to the plane is the length of the projection of PQ" = (2, 8, 1) onto a normal vector of the plane, say "n = (3, −1, 4). Thus, the distance is
‖proj"n (PQ")‖ = |PQ" · "n |/‖"n ‖ = 2/√26
 
 2
 
A45 We pick the point P(0, 0, −1) on the plane and pick the normal vector for the plane "n = −3. Then
 
−5
the distance from Q to the plane is
@@
@
" · "n @@
@
PQ
@ = √13
"
' proj"n (PQ)'
= @@
@ '"n' @@
38
 
 2
 
A46 We pick the point P(0, 0, −5) on the plane and pick the normal vector for the plane "n =  0. Then
 
−1
the distance from Q to the plane is
@@
@
" · "n @@
@
PQ
@@ = √4
"
' proj"n (PQ)'
= @@
@ '"n' @
5
 
 2
 
A47 We pick the point P(2, 0, 0) on the plane and pick the normal vector for the plane "n = −1. Then the
 
−1
distance from Q to the plane is
@@
@
" · "n @@ √
@
PQ
@
@= 6
"
' proj"n (PQ)'
=@
@ '"n' @@
 
1
 
A48 We pick the point P(2, 2, 1) on the plane and pick the normal vector for the plane "n = 1. Then the
 
3
distance from Q to the plane is
@@
@
" · "n @@
@@ PQ
@@ = √3
"
' proj"n (PQ)' = @
@ '"n' @
11
 
 2
 
A49 We pick the point P(0, 5, 0) on the plane and pick the normal vector for the plane "n =  1. Then the
 
−4
distance from Q to the plane is
@@
@
" · "n @@
@@ PQ
@ = √13
"
' proj"n (PQ)' = @
@ '"n' @@
21
 
 1
 
A50 We pick the point P(6, 0, 0) on the plane and pick the normal vector for the plane "n = −1. Then the
 
−1
distance from Q to the plane is
@@
@
" · "n @@
@@ PQ
@ = √5
"
' proj"n (PQ)' = @
@ '"n' @@
3
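The point-to-plane distances in A44–A50 are all |PQ" · "n|/‖"n‖. A NumPy sketch (an addition to the printed solutions) for A44, where Q = (2, 3, 1) is recovered from P = (0, −5, 0) and PQ" = (2, 8, 1):

```python
import numpy as np

# A44: distance from Q to the plane 3x1 - x2 + 4x3 = 5.
n = np.array([3.0, -1.0, 4.0])   # normal vector of the plane
P = np.array([0.0, -5.0, 0.0])   # a point on the plane
Q = np.array([2.0, 3.0, 1.0])    # Q = P + PQ, with PQ = (2, 8, 1)

dist = abs((Q - P) @ n) / np.linalg.norm(n)   # 2 / sqrt(26)
```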
A51 Pick a point P on the hyperplane, say P(0, 0, 0, 0). Then the point R on the hyperplane that is closest to Q(1, 0, 0, 1) satisfies OR" = OQ" + proj"n (QP") where "n is a normal vector of the hyperplane. We have QP" = (−1, 0, 0, −1) and "n = (2, −1, 1, 1), so
OR" = OQ" + ((QP" · "n )/‖"n ‖^2)"n = (1, 0, 0, 1) + (−3/7)(2, −1, 1, 1) = (1, 0, 0, 1) + (−6/7, 3/7, −3/7, −3/7) = (1/7, 3/7, −3/7, 4/7)
Hence, the point in the hyperplane closest to Q is R(1/7, 3/7, −3/7, 4/7).
 
 1
−2
A52 We pick the point P(1, 0, 0, 0) on the hyperplane and pick the normal vector "n =   for the hyper 3
0
plane. Then the point R in the hyperplane closest to Q satisfies
 

 
    
1
 1 1  1/14  15/14 
"
 

 
    
" = OQ
" + QP · "n "n = 2 + 1 −2 = 2 + −2/14 = 13/7 
OR
1 14  3 1  3/14  17/14 
'"n'2
 

 
    
0
3
3
0
3
Hence, the point in the hyperplane closest to Q is R(15/14, 13/7, 17/14, 3).
 
 3
−1
A53 We pick the point P(0, 0, 0, 0) on the hyperplane and pick the normal vector "n =   for the hyper 4
1
plane. Then the point R in the hyperplane closest to Q satisfies
 
    
 

2
 3 2  −2  0 
"
 
    
 

" = OQ
" + QP · "n "n = 4 + −18 −1 = 4 +  2/3 =  14/3 
OR
3
27  4 3 −8/3  1/3
'"n'2
 
4
1
4
−2/3
10/3
Hence, the point in the hyperplane closest to Q is R(0, 14/3, 1/3, 10/3).
 
 1 
 2 
A54 We pick the point P(4, 0, 0, 0) on the hyperplane and pick the normal vector "n =   for the hyper 1 
−1
plane. Then the point R in the hyperplane closest to Q satisfies
 
    
 

−1
 1 −1  −5/7 −12/7










"
 
    
 
" = OQ
" + QP · "n "n =  3 + −5  2 =  3 + −10/7 =  11/7
OR
 2
7  1  2  −5/7  9/7
'"n'2
 
−2
−1
−2
5/7
−9/7
Hence, the point in the hyperplane closest to Q is R(−12/7, 11/7, 9/7, −9/7).
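A51–A54 all compute OR" = OQ" + proj"n (QP"). A NumPy sketch (an addition to the printed solutions) reproducing A53, whose hyperplane passes through the origin:

```python
import numpy as np

# A53: closest point R in the hyperplane n . x = 0 to the point Q.
n = np.array([3.0, -1.0, 4.0, 1.0])
P = np.zeros(4)                    # the hyperplane passes through the origin
Q = np.array([2.0, 4.0, 3.0, 4.0])

R = Q + ((P - Q) @ n) / (n @ n) * n   # OQ + proj_n(QP)
```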
A55 The volume of the parallelepiped is
$$\left|\begin{bmatrix}1\\0\\1\end{bmatrix}\cdot\left(\begin{bmatrix}0\\1\\1\end{bmatrix}\times\begin{bmatrix}0\\0\\1\end{bmatrix}\right)\right| = \left|\begin{bmatrix}1\\0\\1\end{bmatrix}\cdot\begin{bmatrix}1\\0\\0\end{bmatrix}\right| = 1$$
A56 The volume of the parallelepiped is
$$\left|\begin{bmatrix}4\\1\\-1\end{bmatrix}\cdot\left(\begin{bmatrix}-1\\5\\2\end{bmatrix}\times\begin{bmatrix}1\\1\\6\end{bmatrix}\right)\right| = \left|\begin{bmatrix}4\\1\\-1\end{bmatrix}\cdot\begin{bmatrix}28\\8\\-6\end{bmatrix}\right| = 126$$
A57 The volume of the parallelepiped is
$$\left|\begin{bmatrix}-2\\1\\2\end{bmatrix}\cdot\left(\begin{bmatrix}3\\1\\2\end{bmatrix}\times\begin{bmatrix}0\\2\\5\end{bmatrix}\right)\right| = \left|\begin{bmatrix}-2\\1\\2\end{bmatrix}\cdot\begin{bmatrix}1\\-15\\6\end{bmatrix}\right| = |-5| = 5$$
A58 The volume of the parallelepiped is
$$\left|\begin{bmatrix}1\\5\\-3\end{bmatrix}\cdot\left(\begin{bmatrix}1\\0\\-1\end{bmatrix}\times\begin{bmatrix}3\\0\\4\end{bmatrix}\right)\right| = \left|\begin{bmatrix}1\\5\\-3\end{bmatrix}\cdot\begin{bmatrix}0\\-7\\0\end{bmatrix}\right| = |-35| = 35$$
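The scalar triple product used in A55–A58 is easy to verify in code. A minimal sketch (the function names are ours, not the text's):

```python
# Sketch: volume of a parallelepiped as |u . (v x w)|, checking A56-A58.
def cross(v, w):
    return [v[1]*w[2] - v[2]*w[1],
            v[2]*w[0] - v[0]*w[2],
            v[0]*w[1] - v[1]*w[0]]

def volume(u, v, w):
    # absolute value of the scalar triple product
    return abs(sum(a * b for a, b in zip(u, cross(v, w))))

print(volume([4, 1, -1], [-1, 5, 2], [1, 1, 6]))   # 126 (A56)
print(volume([-2, 1, 2], [3, 1, 2], [0, 2, 5]))    # 5   (A57)
print(volume([1, 5, -3], [1, 0, -1], [3, 0, 4]))   # 35  (A58)
```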
A59 By Hooke's Law, we have that
$$3.0 = 1k,\qquad 6.5 = 2k,\qquad 9.0 = 3k$$
Let $\vec p = \begin{bmatrix}3.0\\6.5\\9.0\end{bmatrix}$ and $\vec d = \begin{bmatrix}1\\2\\3\end{bmatrix}$. We want to find the value of k that makes the vector $k\vec d$ closest to the point P(3, 6.5, 9). We interpret $k\vec d$ as the line L with vector equation
$$\vec x = k\begin{bmatrix}1\\2\\3\end{bmatrix},\quad k\in\mathbb R$$
The vector on L that is closest to P is the projection of P onto L. Moreover, we know that the coefficient k of the projection is
$$k = \frac{\vec p\cdot\vec d}{\|\vec d\|^2} = \frac{43}{14} \approx 3.07$$
Thus, based on the data, the best approximation of k would be k ≈ 3.07.
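The least-squares fit for the spring constant reduces to one projection coefficient, which can be sketched directly (variable names are ours):

```python
# Sketch: best-fit spring constant k = (p . d) / ||d||^2, as in A59.
p = [3.0, 6.5, 9.0]   # measured forces
d = [1, 2, 3]         # corresponding stretches
k = sum(pi * di for pi, di in zip(p, d)) / sum(di * di for di in d)
print(k)  # 43/14 = 3.0714...
```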
B Homework Problems
B1 $-2$ &nbsp; B2 $23$ &nbsp; B3 $-3$ &nbsp; B4 $\sqrt{18}$ &nbsp; B5 $\sqrt{27}$ &nbsp; B6 $\sqrt{10/3}$ &nbsp; B7 $-12$ &nbsp; B8 $\begin{bmatrix}-20\\-40\\0\\20\end{bmatrix}$

B9 $\frac{1}{\sqrt{6}}\begin{bmatrix}1\\1\\-2\end{bmatrix}$ &nbsp; B10 $\frac{1}{\sqrt{26}}\begin{bmatrix}5\\0\\0\\1\end{bmatrix}$ &nbsp; B11 $\frac{1}{\sqrt{14}}\begin{bmatrix}3\\2\\-1\\0\end{bmatrix}$ &nbsp; B12 $\frac{1}{\sqrt{18}}\begin{bmatrix}-2\\1\\-2\\3\end{bmatrix}$ &nbsp; B13 $\frac{1}{\sqrt{7/18}}\begin{bmatrix}1/3\\1/2\\1/6\\0\end{bmatrix}$ &nbsp; B14 $\frac{1}{\sqrt{5}}\begin{bmatrix}1\\1\\1\\1\\1\end{bmatrix}$
B15 We have $\|\vec x\| = \sqrt{21}$, $\|\vec y\| = \sqrt{35}$, $|\vec x\cdot\vec y| = 25$, and $\|\vec x+\vec y\| = \sqrt{106}$. Indeed we have $25 \leq \sqrt{21}\sqrt{35}$ and $\sqrt{106} \leq \sqrt{21} + \sqrt{35}$.
B16 We have $\|\vec x\| = \sqrt{14}$, $\|\vec y\| = \sqrt{12}$, $|\vec x\cdot\vec y| = 4$, and $\|\vec x+\vec y\| = \sqrt{34}$. Indeed we have $4 \leq \sqrt{14}\sqrt{12}$ and $\sqrt{34} \leq \sqrt{14} + \sqrt{12}$.
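Checks like B15 and B16 can be automated. A small sketch, assuming the example pair `x = [1, 2, 4]`, `y = [3, 1, 5]` (our choice; it happens to reproduce the B15 values $\sqrt{21}$, $\sqrt{35}$, $25$, $\sqrt{106}$):

```python
# Sketch: verifying the Cauchy-Schwarz and triangle inequalities for a pair of vectors.
import math

def check(x, y):
    dot = sum(a * b for a, b in zip(x, y))
    nx = math.sqrt(sum(a * a for a in x))
    ny = math.sqrt(sum(b * b for b in y))
    nxy = math.sqrt(sum((a + b) ** 2 for a, b in zip(x, y)))
    # |x . y| <= ||x|| ||y||  and  ||x + y|| <= ||x|| + ||y||
    return abs(dot) <= nx * ny and nxy <= nx + ny

print(check([1, 2, 4], [3, 1, 5]))  # True
```

Both inequalities hold for every pair of vectors, so `check` should always return `True`.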
B17 2x1 + 2x2 + 6x3 − x4 = 19
B18 x1 + 5x2 + 9x3 + 2x4 = 46
B19 2x1 + x2 + 2x3 + x4 = 10
B20 x2 + 2x3 + x4 = 3
 
 
 
 
2
−2
 
 3
 1
0
−1
! "
1
 0
 
 
−5
3
 


B21
B22 2
B23  
B24  
B25 1
B26 −2
 
 
 
1
 1
−3
7
0
 2
−1
9
3
−2
! "
!
"
! "
! "
3/2
−1/2
4
0
B27 proj"v ("u ) =
, perp"v ("u ) =
B28 proj"v ("u ) =
, perp"v ("u ) =
3/2
1/2
−6
0
!
"
!
"
!
"
!
"
16/25
9/25
−92/25
42/25
B29 proj"v ("u ) =
, perp"v ("u ) =
B30 proj"v ("u ) =
, perp"v ("u ) =
12/25
−12/25
69/25
56/25
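The projection/perpendicular decomposition behind B27–B42 can be sketched as follows (helper names are ours; the example pair `u = [1, 2]`, `v = [1, 1]` is an assumption consistent with the B27 answer):

```python
# Sketch: proj_v(u) = ((u . v)/||v||^2) v and perp_v(u) = u - proj_v(u).
from fractions import Fraction as F

def proj(u, v):
    t = F(sum(a * b for a, b in zip(u, v)), sum(b * b for b in v))
    return [t * b for b in v]

def perp(u, v):
    return [a - b for a, b in zip(u, proj(u, v))]

u, v = [F(1), F(2)], [F(1), F(1)]
print(proj(u, v))  # [3/2, 3/2]
print(perp(u, v))  # [-1/2, 1/2]
```

Note that `proj(u, v)` is always a scalar multiple of `v` and `perp(u, v)` is orthogonal to it, which is a quick sanity check on any of the B-answers.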
 






9/2
−5/2
−2/3
−4/3
 






B31 proj"v ("u ) =  0 , perp"v ("u ) =  −4 
B32 proj"v ("u ) =  1/3, perp"v ("u ) =  8/3
 






9/2
5/2
−2/3
8/3
 


 
 
1/3
 8/3 
0
−1
 0 
 3 
0
 

 , perp ("u ) =  3
B33 proj"v ("u ) =  , perp"v ("u ) = 
B34
proj
("
u
)
=
"v
"v
0
 2
1/3
−13/3
 
 
1/3
5/3
0
1
! "
! "
! "
! "
−1
4
5
0
B35 proj"v ("u ) =
, perp"v ("u ) =
B36 proj"v ("u ) =
, perp"v ("u ) =
1
4
0
3
! "
! "
!
"
!
"
1
2
7/5
18/5
B37 proj"v ("u ) =
, perp"v ("u ) =
B38 proj"v ("u ) =
, perp"v ("u ) =
1
−2
21/5
−6/5
 
 
0
 4
 
 
B39 proj"v ("u ) = 0, perp"v ("u ) =  1
 
 
0
−2
 
 
 2
 1
 
 
B41 proj"v ("u ) =  4, perp"v ("u ) = −1
 
 
−2
−1
B43
B44
B45
B47
B49
 
 
 1
3
 
 
B40 proj"v ("u ) =  1, perp"v ("u ) = 0
 
 
−1
3
 


2/9
−11/9
 0 
 2 

B42 proj"v ("u ) =  , perp"v ("u ) = 
1/9
−10/9
2/9
16/9
 
 


2
2/3
−11/3





" = 1/3
" =  14/3
(a) 13 1
(b) proj"u (F)
(c) perp"u (F)
 


 
2
2/3
4/3




 
−2/19
78/19
 1





" = −6/19
" = 63/19
(c) perp"u (F)
(b) proj"u (F)
(a) √119  3




 
6/19
89/19
−3
B45 $(16/5, -28/5)$, $1$ &nbsp; B46 $(16/9, 13/9, 4/9)$, $\sqrt{50}/3$
B47 $(5/3, -1/3, -1/3)$, $\sqrt{14}/3$ &nbsp; B48 $(14/3, 4/3, -2/3)$, $\sqrt{11}/3$
B49 $26/\sqrt{38}$ &nbsp; B50 $7/\sqrt{21}$ &nbsp; B51 $4\sqrt{3}$ &nbsp; B52 $4/\sqrt{6}$
B53 $(32/17, 1, -2/17, -20/17)$ &nbsp; B54 $(10/9, 26/9, 1/18, 7/6)$
B55 $(3/4, 7/4, 11/4, 21/4)$ &nbsp; B56 $(2, 1/2, 1, -3/2)$
B57 $2$ &nbsp; B58 $21$ &nbsp; B59 $40$ &nbsp; B60 $48$
C Conceptual Problems
! " ! "
! " !
"
1 2
1
2
C1 (a) False. One possible counterexample is
·
=2=
·
.
0 2
0 −97
(b) Our counterexample in part (a) has "u ! "0 so the result does not change.
C2 Since $\vec x = \vec x - \vec y + \vec y$,
$$\|\vec x\| = \|\vec x - \vec y + \vec y\| = \|(\vec x - \vec y) + \vec y\| \leq \|\vec x - \vec y\| + \|\vec y\|$$
So, $\|\vec x\| - \|\vec y\| \leq \|\vec x - \vec y\|$. This is almost what we require, but the left-hand side might be negative. So, by a similar argument with $\vec y$, and using the fact that $\|\vec y - \vec x\| = \|\vec x - \vec y\|$, we obtain $\|\vec y\| - \|\vec x\| \leq \|\vec x - \vec y\|$. From this equation and the previous one, we can conclude that
$$\bigl|\,\|\vec x\| - \|\vec y\|\,\bigr| \leq \|\vec x - \vec y\|$$
C3 We have
$$\|\vec v_1 + \vec v_2\|^2 = (\vec v_1 + \vec v_2)\cdot(\vec v_1 + \vec v_2) = \vec v_1\cdot\vec v_1 + \vec v_1\cdot\vec v_2 + \vec v_2\cdot\vec v_1 + \vec v_2\cdot\vec v_2 = \|\vec v_1\|^2 + 0 + 0 + \|\vec v_2\|^2 = \|\vec v_1\|^2 + \|\vec v_2\|^2$$
C4 By Theorem 1.5.2 (2) we have that $\left\|\frac{1}{\|\vec x\|}\vec x\right\| = \left|\frac{1}{\|\vec x\|}\right|\,\|\vec x\| = 1$.
C5 Consider $\vec 0 = c_1\vec v_1 + \cdots + c_k\vec v_k$. Taking the dot product of both sides with respect to $\vec v_i$ gives
$$0 = \vec 0\cdot\vec v_i = (c_1\vec v_1 + \cdots + c_k\vec v_k)\cdot\vec v_i = c_i(\vec v_i\cdot\vec v_i)$$
Since $\vec v_i \neq \vec 0$, we have that $\vec v_i\cdot\vec v_i \neq 0$ by Theorem 1.5.2 (1). Hence, we have $c_i = 0$. Since this applies for all $1 \leq i \leq k$, we have that $\{\vec v_1, \dots, \vec v_k\}$ is linearly independent.
C6 By definition, $S^\perp$ is a subset of $\mathbb R^n$. Moreover, since $\vec 0\cdot\vec v = 0$ for all $\vec v\in S$, we have that $\vec 0\in S^\perp$. Let $\vec w_1, \vec w_2\in S^\perp$. Then, $\vec w_1\cdot\vec v = 0$ and $\vec w_2\cdot\vec v = 0$ for all $\vec v\in S$. Hence, we have that
$$(s\vec w_1 + t\vec w_2)\cdot\vec v = s(\vec w_1\cdot\vec v) + t(\vec w_2\cdot\vec v) = s(0) + t(0) = 0$$
for all $\vec v\in S$ and $s, t\in\mathbb R$. Hence, $S^\perp$ is a subspace of $\mathbb R^n$.
C7 (a) We have
$$C(s\vec x + t\vec y) = \operatorname{proj}_{\vec u}(\operatorname{proj}_{\vec v}(s\vec x + t\vec y)) = \operatorname{proj}_{\vec u}(s\operatorname{proj}_{\vec v}(\vec x) + t\operatorname{proj}_{\vec v}(\vec y)) = s\operatorname{proj}_{\vec u}(\operatorname{proj}_{\vec v}(\vec x)) + t\operatorname{proj}_{\vec u}(\operatorname{proj}_{\vec v}(\vec y)) = sC(\vec x) + tC(\vec y)$$
(b) If $C(\vec x) = \vec 0$ for all $\vec x$, then certainly
$$\vec 0 = C(\vec v) = \operatorname{proj}_{\vec u}(\operatorname{proj}_{\vec v}(\vec v)) = \operatorname{proj}_{\vec u}(\vec v) = \frac{\vec v\cdot\vec u}{\|\vec u\|^2}\vec u$$
Hence, $\vec v\cdot\vec u = 0$, and the vectors $\vec u$ and $\vec v$ are orthogonal to each other.
C8
$$\operatorname{proj}_{-\vec u}(\vec x) = \frac{\vec x\cdot(-\vec u)}{\|-\vec u\|^2}(-\vec u) = \frac{-(\vec x\cdot\vec u)}{\|\vec u\|^2}(-\vec u) = \frac{\vec x\cdot\vec u}{\|\vec u\|^2}\vec u = \operatorname{proj}_{\vec u}(\vec x)$$
Geometrically, $\operatorname{proj}_{-\vec u}(\vec x)$ is a vector along the line through the origin with direction vector $-\vec u$, and this line is the same as the line with direction vector $\vec u$. We have that $\operatorname{proj}_{-\vec u}(\vec x)$ is the point on this line that is closest to $\vec x$, and this is the same as $\operatorname{proj}_{\vec u}(\vec x)$.
C9 (a)
$$\|\vec x + \vec y\|^2 = (\vec x + \vec y)\cdot(\vec x + \vec y) = \vec x\cdot\vec x + \vec x\cdot\vec y + \vec y\cdot\vec x + \vec y\cdot\vec y = \|\vec x\|^2 + 2\,\vec x\cdot\vec y + \|\vec y\|^2$$
Hence, $\|\vec x + \vec y\|^2 = \|\vec x\|^2 + \|\vec y\|^2$ if and only if $\vec x\cdot\vec y = 0$.
(b) Following the hint, we subtract and add $\operatorname{proj}_{\vec d}(\vec p)$:
$$\|\vec p - \vec q\|^2 = \|\vec p - \operatorname{proj}_{\vec d}(\vec p) + \operatorname{proj}_{\vec d}(\vec p) - \vec q\|^2 = \left\|\operatorname{perp}_{\vec d}(\vec p) + \left(\frac{\vec p\cdot\vec d}{\|\vec d\|^2} - t\right)\vec d\right\|^2$$
Since $\vec d\cdot\operatorname{perp}_{\vec d}(\vec p) = 0$, we can apply the result of (a) to get
$$\|\vec p - \vec q\|^2 = \|\operatorname{perp}_{\vec d}(\vec p)\|^2 + \|\operatorname{proj}_{\vec d}(\vec p) - \vec q\|^2$$
Since $\vec p$ and $\vec d$ are given, $\operatorname{perp}_{\vec d}(\vec p)$ is fixed, so to make this expression as small as possible choose $\vec q = \operatorname{proj}_{\vec d}(\vec p)$. Thus, the distance from the point $\vec p$ to a point on the line is minimized by the point $\vec q = \operatorname{proj}_{\vec d}(\vec p)$ on the line.
C10
$$\vec{OP} + \operatorname{perp}_{\vec n}(\vec{PQ}) = \vec{OP} + \left(\vec{PQ} - \operatorname{proj}_{\vec n}(\vec{PQ})\right) = \vec{OP} + \vec{PQ} + \operatorname{proj}_{\vec n}(-\vec{PQ}) = \vec{OQ} + \operatorname{proj}_{\vec n}(\vec{QP})$$
C11 (a)
$$\operatorname{perp}_{\vec u}(\vec x) = \vec x - \frac{\vec x\cdot\vec u}{\|\vec u\|^2}\vec u = \begin{bmatrix}2/3\\11/3\\13/3\end{bmatrix}$$
$$\operatorname{proj}_{\vec u}(\operatorname{perp}_{\vec u}(\vec x)) = \frac{\operatorname{perp}_{\vec u}(\vec x)\cdot\vec u}{\|\vec u\|^2}\vec u = \begin{bmatrix}0\\0\\0\end{bmatrix}$$
(b)
$$\operatorname{proj}_{\vec u}(\operatorname{perp}_{\vec u}(\vec x)) = \frac{\left(\vec x - \frac{\vec x\cdot\vec u}{\|\vec u\|^2}\vec u\right)\cdot\vec u}{\|\vec u\|^2}\vec u = \left(\frac{\vec x\cdot\vec u}{\|\vec u\|^2} - \frac{(\vec x\cdot\vec u)(\vec u\cdot\vec u)}{\|\vec u\|^4}\right)\vec u = \left(\frac{\vec x\cdot\vec u}{\|\vec u\|^2} - \frac{\vec x\cdot\vec u}{\|\vec u\|^2}\right)\vec u = \vec 0$$
(c) $\operatorname{proj}_{\vec u}(\operatorname{perp}_{\vec u}(\vec x)) = \vec 0$ since $\operatorname{perp}_{\vec u}(\vec x)$ is orthogonal to $\vec u$ and $\operatorname{proj}_{\vec u}$ maps vectors orthogonal to $\vec u$ to the zero vector.
C12 (a) We have
$$\|\vec e_1\| = \sqrt{1^2+0^2+0^2} = 1,\qquad \|\vec e_2\| = \sqrt{0^2+1^2+0^2} = 1,\qquad \|\vec e_3\| = \sqrt{0^2+0^2+1^2} = 1$$
Thus, each standard basis vector is a unit vector. We also have
$$\vec e_1\cdot\vec e_2 = 1(0)+0(1)+0(0) = 0,\qquad \vec e_1\cdot\vec e_3 = 1(0)+0(0)+0(1) = 0,\qquad \vec e_2\cdot\vec e_3 = 0(0)+1(0)+0(1) = 0$$
Hence, each vector is orthogonal to every other vector in the set. So, the set $\{\vec e_1, \vec e_2, \vec e_3\}$ is orthonormal.
(b) If each vector is a unit vector, then they are all non-zero. Hence, the result follows from Problem C5.
Chapter 1 Quiz
E1 We have
$$\begin{bmatrix}3\\-2\end{bmatrix} + 2\begin{bmatrix}-1\\2\end{bmatrix} = \begin{bmatrix}1\\2\end{bmatrix}$$
 
 2
;
√
 
E2 A vector orthogonal to "x and "y is "x × "y = −1. The length of "x × "y is 22 + (−1)2 + 72 = 54.
 
7
 
 2
 
1
Thus, a unit vector that is orthogonal to both "x and "y is √54 −1.
 
7
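The cross-product-then-normalize step in E2 can be sketched numerically. This is our own illustration with assumed example vectors `[1, 2, 3]` and `[4, 5, 6]` (not the E2 data):

```python
# Sketch: unit vector orthogonal to two given vectors via the cross product.
import math

def cross(v, w):
    return [v[1]*w[2] - v[2]*w[1],
            v[2]*w[0] - v[0]*w[2],
            v[0]*w[1] - v[1]*w[0]]

def unit(v):
    n = math.sqrt(sum(a * a for a in v))
    return [a / n for a in v]

c = cross([1, 2, 3], [4, 5, 6])
print(c)        # [-3, 6, -3]
print(unit(c))  # a unit vector orthogonal to both inputs
```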


 4/5 
 4/15
.
"u ·"v
E3 proj"u ("v ) = '"u '2 "u = 
 8/15
−4/15


 1/5 
−4/15
.
perp"u ("v ) = "v − proj"u ("v ) = 
 22/15 
49/15
E4 Any direction vector of the line is a non-zero scalar multiple of the directed line segment between P and Q. Thus, we can take
$$\vec d = \vec{PQ} = \begin{bmatrix}5-(-2)\\-2-1\\1-(-4)\end{bmatrix} = \begin{bmatrix}7\\-3\\5\end{bmatrix}$$
Thus, since P(−2, 1, −4) is a point on the line we get that a vector equation of the line is
$$\vec x = \begin{bmatrix}-2\\1\\-4\end{bmatrix} + t\begin{bmatrix}7\\-3\\5\end{bmatrix},\quad t\in\mathbb R$$
E5 Every vector in the plane satisfies $x_1 = 3 + 2x_3$. Hence, they satisfy
$$\begin{bmatrix}x_1\\x_2\\x_3\end{bmatrix} = \begin{bmatrix}3+2x_3\\x_2\\x_3\end{bmatrix} = \begin{bmatrix}3\\0\\0\end{bmatrix} + x_2\begin{bmatrix}0\\1\\0\end{bmatrix} + x_3\begin{bmatrix}2\\0\\1\end{bmatrix}$$
for $x_2, x_3\in\mathbb R$. This is a vector equation for the plane.
 
 
 2
−5


" =  2 and PR
" =  2 are vectors in the plane. Hence, a normal vector
E6 We have that the vectors PQ
 
 
−2
6
     
 2 −5  16
     
for the plane is "n =  2 ×  2 = −2. Then, since P(1, −1, 0) is a point on the plane we get a scalar
     
−2
6
14
equation of the plane is
16x1 − 2x2 + 14x3 = 16(1) − 2(−1) + 14(0) = 18
or 8x1 − x2 + 7x3 = 9.
     
2 1 3
     
E7 Observe that 6 + 3 = 9. Hence, by Theorem 1.4.3, we have that
     
4
3
7
      
   


2 1 3
2 1






     

   

6 , 3 , 9
6 , 3
Span 
= Span 

















4 3 7
4 3

    

2 1



   

6 , 3
Since B = 
cannot be reduced further (it is linearly independent), it is a basis for the



4 3

spanned set which is a plane in R3 .
E8 Consider
$$\begin{bmatrix}0\\0\\0\end{bmatrix} = c_1\begin{bmatrix}1\\2\\1\end{bmatrix} + c_2\begin{bmatrix}1\\-1\\3\end{bmatrix} + c_3\begin{bmatrix}2\\0\\1\end{bmatrix} = \begin{bmatrix}c_1+c_2+2c_3\\2c_1-c_2\\c_1+3c_2+c_3\end{bmatrix}$$
This gives the system
$$c_1 + c_2 + 2c_3 = 0,\qquad 2c_1 - c_2 = 0,\qquad c_1 + 3c_2 + c_3 = 0$$
Adding the first and the second equation gives $3c_1 + 2c_3 = 0$. Hence, we have $c_1 = -\frac23 c_3$. From the second equation we have $c_2 = 2c_1 = -\frac43 c_3$. Thus, the third equation gives
$$0 = -\frac23 c_3 - 4c_3 + c_3 = -\frac{11}{3}c_3$$
Thus, $c_3 = 0$, which implies that $c_1 = c_2 = 0$. Therefore, the set is linearly independent.
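The independence test in E8 amounts to checking that the homogeneous system has only the trivial solution. A sketch under our own naming (`only_trivial_solution` is not from the text), using exact arithmetic:

```python
# Sketch: linear independence test by row-reducing the coefficient matrix [v1 v2 v3].
# The vectors are independent iff every column of the matrix gets a pivot.
from fractions import Fraction as F

def only_trivial_solution(vectors):
    # build the matrix whose columns are the given vectors
    rows = [[F(v[i]) for v in vectors] for i in range(len(vectors[0]))]
    pivots = 0
    for col in range(len(vectors)):
        pivot = next((r for r in range(pivots, len(rows)) if rows[r][col] != 0), None)
        if pivot is None:
            continue
        rows[pivots], rows[pivot] = rows[pivot], rows[pivots]
        for r in range(len(rows)):
            if r != pivots and rows[r][col] != 0:
                f = rows[r][col] / rows[pivots][col]
                rows[r] = [a - f * b for a, b in zip(rows[r], rows[pivots])]
        pivots += 1
    return pivots == len(vectors)

print(only_trivial_solution([[1, 2, 1], [1, -1, 3], [2, 0, 1]]))  # True, matching E8
```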
5! " ! "6
1 −1
E9 (a) To show that
,
is a basis, we need to show that it spans R2 and that it is linearly
2
2
independent.
Consider
! "
! "
! " !
"
x1
1
−1
t −t
= t1
+ t2
= 1 2
x2
2
2
2t1 + 2t2
This gives x1 = t1 − t2 and x2 = 2t1 + 2t2 . Solving using substitution
and elimination we get
! "
x1
1
1
t1 = 4 (2x1 + x2 ) and t2 = 4 (−2x1 + x2 ). Hence, every vector
can be written as
x2
! "
! "
! "
1
1
x1
1
−1
= (2x1 + x2 )
+ (−2x1 + x2 )
x2
2
2
4
4
So, it spans R2 . Moreover, if x1 = x2 = 0, then our calculations above show that t1 = t2 = 0, so
the set is also linearly independent. Therefore, it is a basis for R2 .
(b) Taking $x_1 = 3$ and $x_2 = 5$ in our work above gives $t_1 = \frac14(6+5) = \frac{11}{4}$ and $t_2 = \frac14(-6+5) = -\frac14$. So, these are the coordinates of $\vec x$ with respect to the basis $\mathcal B$. Indeed we have
$$\begin{bmatrix}3\\5\end{bmatrix} = \frac{11}{4}\begin{bmatrix}1\\2\end{bmatrix} - \frac14\begin{bmatrix}-1\\2\end{bmatrix}$$
(c) Since $\vec y = 2\vec x$, the coordinates of $\vec y$ with respect to the basis $\mathcal B$ are $t_1 = \frac{11}{2}$ and $t_2 = -\frac12$. Indeed we have
$$\begin{bmatrix}6\\10\end{bmatrix} = \frac{11}{2}\begin{bmatrix}1\\2\end{bmatrix} - \frac12\begin{bmatrix}-1\\2\end{bmatrix}$$
E10 Observe that $0 \neq 3 - 5(0)$, so $\vec 0\notin S$, so S is not a subspace.
E11 If $d \neq 0$, then $a_1(0) + a_2(0) + a_3(0) = 0 \neq d$, so $\vec 0\notin S$ and thus S is not a subspace of $\mathbb R^3$.
On the other hand, assume $d = 0$. Observe that, by definition, S is a subset of $\mathbb R^3$ and that $\vec 0 = \begin{bmatrix}0\\0\\0\end{bmatrix}\in S$ since taking $x_1 = 0$, $x_2 = 0$, and $x_3 = 0$ satisfies $a_1x_1 + a_2x_2 + a_3x_3 = 0$.
Let $\vec x = \begin{bmatrix}x_1\\x_2\\x_3\end{bmatrix}, \vec y = \begin{bmatrix}y_1\\y_2\\y_3\end{bmatrix}\in S$. Then they must satisfy the condition of the set, so $a_1x_1 + a_2x_2 + a_3x_3 = 0$ and $a_1y_1 + a_2y_2 + a_3y_3 = 0$.
To show that S is a subspace, we must show that $s\vec x + t\vec y$ satisfies the condition of S. We have $s\vec x + t\vec y = \begin{bmatrix}sx_1+ty_1\\sx_2+ty_2\\sx_3+ty_3\end{bmatrix}$ and
$$a_1(sx_1+ty_1) + a_2(sx_2+ty_2) + a_3(sx_3+ty_3) = s(a_1x_1+a_2x_2+a_3x_3) + t(a_1y_1+a_2y_2+a_3y_3) = s(0) + t(0) = 0$$
Therefore, S is a subspace of $\mathbb R^3$.
 
E12 By the definition of P, every $\vec x = \begin{bmatrix}x_1\\x_2\\x_3\end{bmatrix}\in P$ satisfies $x_1 - 3x_2 + x_3 = 0$. Solving this for $x_3$ gives $x_3 = -x_1 + 3x_2$. Consider
$$\begin{bmatrix}x_1\\x_2\\-x_1+3x_2\end{bmatrix} = c_1\begin{bmatrix}1\\0\\-1\end{bmatrix} + c_2\begin{bmatrix}0\\1\\3\end{bmatrix} = \begin{bmatrix}c_1\\c_2\\-c_1+3c_2\end{bmatrix}$$
Solving we find that $c_1 = x_1$, $c_2 = x_2$ (observe that $-c_1 + 3c_2 = -x_1 + 3x_2$, so the third equation is also satisfied). Thus, $\mathcal B$ spans P. Now consider
$$\begin{bmatrix}0\\0\\0\end{bmatrix} = c_1\begin{bmatrix}1\\0\\-1\end{bmatrix} + c_2\begin{bmatrix}0\\1\\3\end{bmatrix} = \begin{bmatrix}c_1\\c_2\\-c_1+3c_2\end{bmatrix}$$
Comparing entries we get that $c_1 = c_2 = 0$. Hence, $\mathcal B$ is also linearly independent.
Since $\mathcal B$ is linearly independent and spans P, it is a basis for P.
E13 Since the origin O(0, 0, 0) is on the line, we get that the point Q on the line closest to P is given by $\vec{OQ} = \operatorname{proj}_{\vec d}\left(\vec{OP}\right)$, where $\vec d = \begin{bmatrix}3\\-2\\3\end{bmatrix}$ is a direction vector of the line. Hence,
$$\vec{OQ} = \frac{\vec{OP}\cdot\vec d}{\|\vec d\|^2}\vec d = \begin{bmatrix}18/11\\-12/11\\18/11\end{bmatrix}$$
and the closest point is Q(18/11, −12/11, 18/11).
 
1
1
E14 Let Q(0, 0, 0, 1) be a point in the hyperplane. We have that a normal vector to the plane is "n =  .
1
1
3 4
"
"
Then, the point R in the hyperplane closest to P satisfies PR = proj"n PQ . Hence,
 
  

 3
1  5/2
3 4 −2 2 1 −5/2

" = OP
" + proj"n PQ
" =   −   = 
OR
 0 4 1 −1/2
 
  

2
1
3/2
"
Then the distance from the point to the line is the length of PR.
::
:
::−1/2:::
:
:
" = ::−1/2:: = 1
'PR'
::−1/2::
:
::
−1/2 :
" is
E15 The volume of the parallelepiped determined by "u + k"v , "v , and w
A
B
" )| = |"u · ("v × w
" ) + k "v · ("v × w
") |
|("u + k"v ) · ("v × w
" ) + k(0)|
= |"u · ("v × w
".
which equals the volume of the parallelepiped determined by "u , "v , and w
E16 FALSE. The points P(0, 0, 0), Q(0, 0, 1), and R(0, 0, 2) lie in every plane of the form $t_1x_1 + t_2x_2 = 0$ with $t_1$ and $t_2$ not both zero.
E17 TRUE. This is the definition of a line reworded in terms of a spanning set.
E18 TRUE. By definition of the plane, $\{\vec v_1, \vec v_2\}$ spans the plane. If $\{\vec v_1, \vec v_2\}$ were linearly dependent, then the set would not satisfy the definition of a plane, so $\{\vec v_1, \vec v_2\}$ must be linearly independent. Hence, $\{\vec v_1, \vec v_2\}$ is a basis for the plane.
E19 FALSE. The dot product of the zero vector with itself is 0.
E20 FALSE. Let $\vec x = \begin{bmatrix}1\\0\end{bmatrix}$ and $\vec y = \begin{bmatrix}1\\1\end{bmatrix}$. Then, $\operatorname{proj}_{\vec x}\vec y = \begin{bmatrix}1\\0\end{bmatrix}$, while $\operatorname{proj}_{\vec y}\vec x = \begin{bmatrix}1/2\\1/2\end{bmatrix}$.
E21 FALSE. If $\vec y = \vec 0$, then $\operatorname{proj}_{\vec x}(\vec y) = \vec 0$. Thus, $\{\operatorname{proj}_{\vec x}(\vec y), \operatorname{perp}_{\vec x}(\vec y)\}$ contains the zero vector, so it is linearly dependent.
E22 TRUE. We have
$$\|\vec u\times(\vec v + 3\vec u)\| = \|\vec u\times\vec v + 3(\vec u\times\vec u)\| = \|\vec u\times\vec v + \vec 0\| = \|\vec u\times\vec v\|$$
so the parallelograms have the same area.
Chapter 1 Further Problems
F1 The statement is true. Rewrite the conditions in the form
$$\vec u\cdot(\vec v - \vec w) = 0,\qquad \vec u\times(\vec v - \vec w) = \vec 0$$
The first condition says that $\vec v - \vec w$ is orthogonal to $\vec u$, so the angle $\theta$ between $\vec u$ and $\vec v - \vec w$ is $\frac\pi2$ radians. Thus, $\sin\theta = 1$, so the second condition tells us that
$$0 = \|\vec u\times(\vec v - \vec w)\| = \|\vec u\|\,\|\vec v - \vec w\|\sin\theta = \|\vec u\|\,\|\vec v - \vec w\|$$
Since $\|\vec u\| \neq 0$, it follows that $\|\vec v - \vec w\| = 0$ and hence $\vec v = \vec w$.
F2 Since $\vec u$ and $\vec v$ are orthogonal unit vectors, $\vec u\times\vec v$ is a unit vector orthogonal to the plane containing $\vec u$ and $\vec v$. Then $\operatorname{perp}_{\vec u\times\vec v}(\vec x)$ is orthogonal to $\vec u\times\vec v$, so it lies in the plane containing $\vec u$ and $\vec v$. Therefore, for some $s, t\in\mathbb R$, $\operatorname{perp}_{\vec u\times\vec v}(\vec x) = s\vec u + t\vec v$. Now since $\vec u\cdot\vec u = 1$, $\vec u\cdot\vec v = 0$, and $\vec u\cdot(\vec u\times\vec v) = 0$,
$$s = \vec u\cdot(s\vec u + t\vec v) = \vec u\cdot\operatorname{perp}_{\vec u\times\vec v}(\vec x) = \vec u\cdot\bigl(\vec x - \operatorname{proj}_{\vec u\times\vec v}(\vec x)\bigr) = \vec u\cdot\vec x - 0$$
Similarly, $t = \vec v\cdot\vec x$. Hence,
$$\operatorname{perp}_{\vec u\times\vec v}(\vec x) = (\vec u\cdot\vec x)\vec u + (\vec v\cdot\vec x)\vec v = \operatorname{proj}_{\vec u}(\vec x) + \operatorname{proj}_{\vec v}(\vec x)$$
F3 (a) We can calculate that both sides of the equation are equal to
$$\begin{bmatrix}u_2v_1w_2 - u_2v_2w_1 + u_3v_1w_3 - u_3v_3w_1\\ -u_1v_1w_2 + u_1v_2w_1 + u_3v_2w_3 - u_3v_3w_2\\ -u_1v_1w_3 + u_1v_3w_1 - u_2v_2w_3 + u_2v_3w_2\end{bmatrix}$$
(b) Using part (a), we get that
$$\vec u\times(\vec v\times\vec w) + \vec v\times(\vec w\times\vec u) + \vec w\times(\vec u\times\vec v) = \bigl[(\vec u\cdot\vec w)\vec v - (\vec u\cdot\vec v)\vec w\bigr] + \bigl[(\vec v\cdot\vec u)\vec w - (\vec v\cdot\vec w)\vec u\bigr] + \bigl[(\vec w\cdot\vec v)\vec u - (\vec w\cdot\vec u)\vec v\bigr] = \vec 0$$
! "
c
F4 If a = b = 0, then Span B = {s
| s ∈ R} ! R2 . Thus, at least one of a or b is non-zero. Assume
d
a ! 0. Consider
! "
! "
! " !
"
x1
a
c
t a + t2 c
= t1
+ t2
= 1
x2
b
d
t1 b + t2 d
Since a ! 0, we get t1 =
x1
a
− t2 ac . Hence,
=
>
bx1
bc
x2 =
− t2
−d
a
a
bx1
2
If bc
a − d = 0, then x2 = a and hence B could not span R . Thus,
ad − bc ! 0. Then, we get that
bc
a
− d ! 0 which we rewrite as
1
(−bx1 + ax2 )
ad − bc
1
t1 =
(dx1 − cx2 )
ad − bc
t2 =
This implies that if ad − bc ! 0, then B spans R2 and is linearly independent.
" = perp"v 1 ("v 2 ) = "v 2 − "v'"v2 1·"v'21 "v 1 . Then,
F5 (a) Let w
" · "v 1 = ("v 2 −
w
"v 2 · "v 1
"v 2 · "v 1
"v 1 ) · "v 1 = "v 2 · "v 1 −
("v 1 · "v 1 ) = 0
2
'"v 1 '
'"v 1 '2
" } is an orthogonal set.
Hence, {"v 1 , w
" ! "0 as otherwise we would have "v 2 =
Observe that w
being linearly independent.
"v 2 ·"v 1
"v
'"v 1 '2 1
which would contradict {"v 1 , "v 2 }
" } is linearly independent.
Hence, by Problem C5 in Section 1.5, we have that {"v 1 , w
" }.
Also, by Problem C7 in Section 1.4, we have that P = Span{"v 1 , "v 2 } = Span{"v 1 , w
" } is also a basis for P.
Thus, {"v 1 , w
" . Then, we have that {"v 1 , w
" , "y } is an orthogonal set. Moreover, we know "y ! "0
(b) Let "y = "v 1 × w
" } is linearly independent. Then, by Problem C5 in Section 1.5, we have that {"v 1 , w
" , "y }
since {"v 1 , w
is linearly independent.
Let "x ∈ R3 . Our work with finding the nearest point "r shows us that "r = "x + proj"y ("x ) where
"r ∈ Span{"v 1 , w
" }. Let "r = c1"v 1 + c2 w
" . Then, we have that
" = "x +
c1"v 1 + c2 w
"x · "y
"y
'"y '2
"x = c1"v 1 c2 w
"−
"x · "y
"y
'"y '2
" , and "y . Thus, {"v 1 , w
" , "y } also spans R3 as
Thus, every "x ∈ R3 is a linear combination of "v 1 , w
required.
Copyright © 2020 Pearson Canada Inc.
61
" , "y } is a basis for R3 , for any "x ∈ R3 , there exists unique d1 , d2 , d3 ∈ R such that
(c) Since {"v 1 , w
"x = d1"v 1 + d2 w
" + d3"y
Taking the dot product of both sides with respect to "v 1 gives
"x · "v 1 = d1 ("v 1 · "v 1 ) + 0
Hence, d1 =
"x ·"v 1
.
'"v 1 '2
Similarly, we get d2 =
"x ·"
w
,
'"
w'2
and d3 =
"x ·"y
.
'"y '2
F6 (a) By definition, $U\oplus W$ is a subset of $\mathbb R^n$. Since U and W are subspaces we have $\vec 0\in U$ and $\vec 0\in W$. Thus, $\vec 0 = \vec 0 + \vec 0\in U\oplus W$, so $U\oplus W$ is non-empty.
Let $\vec x, \vec y\in U\oplus W$ and $s, t\in\mathbb R$. Then, $\vec x = \vec u_1 + \vec w_1$ and $\vec y = \vec u_2 + \vec w_2$ where $\vec u_1, \vec u_2\in U$ and $\vec w_1, \vec w_2\in W$. Since U and W are subspaces we have that
$$s\vec u_1 + t\vec u_2\in U\qquad\text{and}\qquad s\vec w_1 + t\vec w_2\in W$$
Thus,
$$s\vec x + t\vec y = s(\vec u_1 + \vec w_1) + t(\vec u_2 + \vec w_2) = s\vec u_1 + t\vec u_2 + s\vec w_1 + t\vec w_2\in U\oplus W$$
Therefore, $U\oplus W$ is a subspace of $\mathbb R^n$.
(b) Let $\vec x\in U\oplus W$. Then, $\vec x = \vec u + \vec w$ for $\vec u\in U$ and $\vec w\in W$. Then we can write
$$\vec u = a_1\vec u_1 + \cdots + a_k\vec u_k,\qquad \vec w = b_1\vec w_1 + \cdots + b_\ell\vec w_\ell$$
Thus,
$$\vec x = a_1\vec u_1 + \cdots + a_k\vec u_k + b_1\vec w_1 + \cdots + b_\ell\vec w_\ell$$
Hence, $\operatorname{Span}\{\vec u_1, \dots, \vec u_k, \vec w_1, \dots, \vec w_\ell\} = U\oplus W$.
Consider
$$c_1\vec u_1 + \cdots + c_k\vec u_k + c_{k+1}\vec w_1 + \cdots + c_{k+\ell}\vec w_\ell = \vec 0$$
This implies that
$$c_1\vec u_1 + \cdots + c_k\vec u_k = -c_{k+1}\vec w_1 - \cdots - c_{k+\ell}\vec w_\ell$$
The vector on the left is in U and the vector on the right is in W. Hence, both vectors must be the zero vector. Therefore, $c_1 = \cdots = c_{k+\ell} = 0$ since $\{\vec u_1, \dots, \vec u_k\}$ and $\{\vec w_1, \dots, \vec w_\ell\}$ are both linearly independent.
F7 (a) We have
$$\|\vec u + \vec v\|^2 = \|\vec u\|^2 + 2\,\vec u\cdot\vec v + \|\vec v\|^2,\qquad \|\vec u - \vec v\|^2 = \|\vec u\|^2 - 2\,\vec u\cdot\vec v + \|\vec v\|^2$$
By subtraction,
$$\frac14\|\vec u + \vec v\|^2 - \frac14\|\vec u - \vec v\|^2 = \vec u\cdot\vec v$$
(b) By addition of the above expressions,
$$\|\vec u + \vec v\|^2 + \|\vec u - \vec v\|^2 = 2\|\vec u\|^2 + 2\|\vec v\|^2$$
(c) The vectors $\vec u + \vec v$ and $\vec u - \vec v$ are the diagonal vectors of the parallelogram. Take the equation of part (a) and divide by $\|\vec u\|\,\|\vec v\|$ to obtain an expression for the cosine of the angle between $\vec u$ and $\vec v$, in terms of the lengths of $\vec u$, $\vec v$, and the diagonal vectors. The cosine is 0 if and only if the diagonals are of equal length. In this case, the parallelogram is a rectangle.
Part (b) says that the sum of the squares of the two diagonal lengths is the sum of the squares of the lengths of all four sides of the parallelogram. You can also see that this is true by using the cosine law and the fact that if the angle between $\vec u$ and $\vec v$ is $\theta$, then the angle at the next vertex of the parallelogram is $\pi - \theta$.
" = t PR.
" Thus, "q − "p = t("r − "p ), or
F8 P, Q, and R are collinear if and only if for some scalar t, PQ
"q = (1 − t)"p + t"r. Then
A
B A
B
("p × "q ) + ("q × "r) + ("r × "p ) = "p × (1 − t)"p + t"r + (1 − t)"p + t"r × "r + "r × "p
= t"p × "r + "p × "r − t"p × "r + "r × "p = "0
since "p × "r = −"r × "p .
" Then the cross-product of the two
F9 (a) Suppose that the skew lines are "x = "p + s"c and "x = "q + td.
direction vectors "n = "c × d" is perpendicular to both lines, so the plane through P with normal "n
contains the first line, and the plane through Q with normal "n contains the second line. Since the
two planes have the same normal vector, they are parallel planes.
     
2 1 −1
     
(b) We find that "n = 0 × 1 = −5. Thus, the equation of the plane passing through P(1, 4, 2) is
     
1
3
2
−1x1 −√5x2 + 2x3 = −17. Hence, we find that the distance from the point Q(2, −3, 1) to this plane
is 32/ 30 which is the distance between the skew lines.
Copyright © 2020 Pearson Canada Inc.
Chapter 2 Solutions
Section 2.1
A Practice Problems
A1 From the last equation we have $x_2 = 4$. Substituting into the first equation yields $x_1 - 3(4) = 5$, or $x_1 = 17$. Hence, the general solution is $\vec x = \begin{bmatrix}17\\4\end{bmatrix}$.
A2 From the last equation we get $x_3 = 6$. Also, $x_2$ does not appear as the leading variable in any equation, so it is a free variable. Thus, we let $x_2 = t\in\mathbb R$. Then the first equation yields $x_1 = 7 - 2t + 6 = 13 - 2t$. Thus, the general solution is
$$\vec x = \begin{bmatrix}x_1\\x_2\\x_3\end{bmatrix} = \begin{bmatrix}13-2t\\t\\6\end{bmatrix} = \begin{bmatrix}13\\0\\6\end{bmatrix} + t\begin{bmatrix}-2\\1\\0\end{bmatrix},\quad t\in\mathbb R$$
A3 From the last equation we get $x_3 = 2$. Substituting this into the second equation gives $x_2 = 2 - 5(2) = -8$. Now substitute $x_2$ and $x_3$ into the first equation to get $x_1 = 4 - 3(-8) + 2(2) = 32$. Thus, the general solution is $\vec x = \begin{bmatrix}32\\-8\\2\end{bmatrix}$.
A4 Observe that $x_4$ is not the leading variable in any equation, so $x_4$ is a free variable. Let $x_4 = t\in\mathbb R$. Then the third equation gives $x_3 = 2 - t$ and the second equation gives $x_2 = -3 + t$. Substituting these into the first equation yields $x_1 = 7 + 2(-3+t) - (2-t) - 4t = -1 - t$. Thus, the general solution is
$$\vec x = \begin{bmatrix}x_1\\x_2\\x_3\\x_4\end{bmatrix} = \begin{bmatrix}-1-t\\-3+t\\2-t\\t\end{bmatrix} = \begin{bmatrix}-1\\-3\\2\\0\end{bmatrix} + t\begin{bmatrix}-1\\1\\-1\\1\end{bmatrix},\quad t\in\mathbb R$$
A5 A is in row echelon form.
A6 B is in row echelon form.
A7 C is not in row echelon form because there is a non-zero entry beneath the leading entry in the second row.
A8 D is not in row echelon form because the leading entry in the third row is to the left of the leading entry in the second row.
For Problems A9 - A14, there are infinitely many different ways of row reducing each of the following matrices
and infinitely many correct answers. Of course, the idea is to find a sequence of elementary row operations
which takes as few steps as possible. Below is one possible sequence for each matrix. For practice, you should
try to find others. Look for tricks which help reduce the number of steps.
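One systematic (if not always shortest) sequence of operations can be produced mechanically. A bare-bones sketch of such a reduction routine, with our own naming (`ref`) and exact `Fraction` arithmetic; it eliminates below each pivot without first swapping for a leading 1, so its echelon form may differ from the hand computations below while being equally valid:

```python
# Sketch: reduce a matrix to a row echelon form, like the hand work in A9-A14.
from fractions import Fraction as F

def ref(m):
    m = [[F(x) for x in row] for row in m]
    lead = 0
    for col in range(len(m[0])):
        # find a row at or below `lead` with a non-zero entry in this column
        piv = next((r for r in range(lead, len(m)) if m[r][col] != 0), None)
        if piv is None:
            continue
        m[lead], m[piv] = m[piv], m[lead]
        # eliminate the entries below the pivot
        for r in range(lead + 1, len(m)):
            f = m[r][col] / m[lead][col]
            m[r] = [a - f * b for a, b in zip(m[r], m[lead])]
        lead += 1
    return m

for row in ref([[4, 1, 1], [1, -3, 2]]):  # the A9 matrix
    print(row)
```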
A9 Row reducing gives
$$\begin{bmatrix}4&1&1\\1&-3&2\end{bmatrix} \overset{R_1\leftrightarrow R_2}{\sim} \begin{bmatrix}1&-3&2\\4&1&1\end{bmatrix} \overset{R_2-4R_1}{\sim} \begin{bmatrix}1&-3&2\\0&13&-7\end{bmatrix}$$
A10 Row reducing gives
$$\begin{bmatrix}2&-2&5&8\\1&-1&2&3\\-1&1&0&2\end{bmatrix} \overset{R_1\leftrightarrow R_2}{\sim} \begin{bmatrix}1&-1&2&3\\2&-2&5&8\\-1&1&0&2\end{bmatrix} \overset{\substack{R_2-2R_1\\R_3+R_1}}{\sim} \begin{bmatrix}1&-1&2&3\\0&0&1&2\\0&0&2&5\end{bmatrix} \overset{R_3-2R_2}{\sim} \begin{bmatrix}1&-1&2&3\\0&0&1&2\\0&0&0&1\end{bmatrix}$$
A11 Row reducing gives
$$\begin{bmatrix}1&-1&-1\\2&-1&-2\\5&0&0\\3&4&5\end{bmatrix} \overset{\substack{R_2-2R_1\\R_3-5R_1\\R_4-3R_1}}{\sim} \begin{bmatrix}1&-1&-1\\0&1&0\\0&5&5\\0&7&8\end{bmatrix} \overset{\substack{R_3-5R_2\\R_4-7R_2}}{\sim} \begin{bmatrix}1&-1&-1\\0&1&0\\0&0&5\\0&0&8\end{bmatrix} \overset{R_4-\frac85R_3}{\sim} \begin{bmatrix}1&-1&-1\\0&1&0\\0&0&5\\0&0&0\end{bmatrix}$$
A12 Row reducing gives
$$\begin{bmatrix}2&0&2&0\\1&2&3&4\\1&4&9&16\\3&6&13&20\end{bmatrix} \overset{\substack{R_2-\frac12R_1\\R_3-\frac12R_1\\R_4-\frac32R_1}}{\sim} \begin{bmatrix}2&0&2&0\\0&2&2&4\\0&4&8&16\\0&6&10&20\end{bmatrix} \overset{\substack{R_3-2R_2\\R_4-3R_2}}{\sim} \begin{bmatrix}2&0&2&0\\0&2&2&4\\0&0&4&8\\0&0&4&8\end{bmatrix} \overset{R_4-R_3}{\sim} \begin{bmatrix}2&0&2&0\\0&2&2&4\\0&0&4&8\\0&0&0&0\end{bmatrix}$$
A13 Row reducing gives
$$\begin{bmatrix}0&1&2&1\\1&2&1&1\\3&-1&-4&1\\2&1&3&6\end{bmatrix} \overset{R_1\leftrightarrow R_2}{\sim} \begin{bmatrix}1&2&1&1\\0&1&2&1\\3&-1&-4&1\\2&1&3&6\end{bmatrix} \overset{\substack{R_3-3R_1\\R_4-2R_1}}{\sim} \begin{bmatrix}1&2&1&1\\0&1&2&1\\0&-7&-7&-2\\0&-3&1&4\end{bmatrix} \overset{\substack{R_3+7R_2\\R_4+3R_2}}{\sim} \begin{bmatrix}1&2&1&1\\0&1&2&1\\0&0&7&5\\0&0&7&7\end{bmatrix} \overset{R_4-R_3}{\sim} \begin{bmatrix}1&2&1&1\\0&1&2&1\\0&0&7&5\\0&0&0&2\end{bmatrix}$$
A14 Row reducing gives
$$\begin{bmatrix}3&1&8&2&4\\1&0&3&0&1\\0&2&-2&4&3\\-4&1&11&3&8\end{bmatrix} \overset{R_1\leftrightarrow R_2}{\sim} \begin{bmatrix}1&0&3&0&1\\3&1&8&2&4\\0&2&-2&4&3\\-4&1&11&3&8\end{bmatrix} \overset{\substack{R_2-3R_1\\R_4+4R_1}}{\sim} \begin{bmatrix}1&0&3&0&1\\0&1&-1&2&1\\0&2&-2&4&3\\0&1&23&3&12\end{bmatrix} \overset{\substack{R_3-2R_2\\R_4-R_2}}{\sim} \begin{bmatrix}1&0&3&0&1\\0&1&-1&2&1\\0&0&0&0&1\\0&0&24&1&11\end{bmatrix} \overset{R_3\leftrightarrow R_4}{\sim} \begin{bmatrix}1&0&3&0&1\\0&1&-1&2&1\\0&0&24&1&11\\0&0&0&0&1\end{bmatrix}$$
A15 Since the bottom row is of the form 0 = −5, it is inconsistent.
A16 The system is consistent. Rewrite the augmented matrix in equation form to get the system of equations $x_1 = 2$ and $x_3 = 3$. We also have that $x_2$ is a free variable, so we let $x_2 = t\in\mathbb R$. Hence, the general solution is
$$\vec x = \begin{bmatrix}x_1\\x_2\\x_3\end{bmatrix} = \begin{bmatrix}2\\t\\3\end{bmatrix} = \begin{bmatrix}2\\0\\3\end{bmatrix} + t\begin{bmatrix}0\\1\\0\end{bmatrix},\quad t\in\mathbb R$$
A17 The system is consistent. Rewrite the augmented matrix in equation form to get the system of equations
$$x_1 + x_3 = 1,\qquad x_2 + x_3 + x_4 = 2,\qquad x_4 = 3$$
Thus, $x_3$ is a free variable, so we let $x_3 = t\in\mathbb R$. Then, substituting the third equation into the second gives $x_2 = 2 - t - 3 = -1 - t$. Also, the first equation becomes $x_1 = 1 - t$. Hence, the general solution is
$$\vec x = \begin{bmatrix}1-t\\-1-t\\t\\3\end{bmatrix} = \begin{bmatrix}1\\-1\\0\\3\end{bmatrix} + t\begin{bmatrix}-1\\-1\\1\\0\end{bmatrix},\quad t\in\mathbb R$$
A18 The system is consistent. Rewrite the augmented matrix in equation form to get the system of equations
$$x_1 + x_2 - x_3 + 3x_4 = 1,\qquad 2x_3 + x_4 = 3,\qquad x_4 = -2$$
Thus, $x_2$ is a free variable, so we let $x_2 = t\in\mathbb R$. Then, substituting the third equation into the second gives $2x_3 = 3 - (-2) = 5$, or $x_3 = \frac52$. Substituting everything into the first equation yields $x_1 = 1 - t + \frac52 - 3(-2) = \frac{19}{2} - t$. Hence, the general solution is
$$\vec x = \begin{bmatrix}19/2-t\\t\\5/2\\-2\end{bmatrix} = \begin{bmatrix}19/2\\0\\5/2\\-2\end{bmatrix} + t\begin{bmatrix}-1\\1\\0\\0\end{bmatrix},\quad t\in\mathbb R$$
A19 The system is consistent. Rewrite the augmented matrix in equation form to get the system of equations
$$x_1 + x_3 - x_4 = 0,\qquad x_2 = 0$$
We get that $x_3$ and $x_4$ are free variables, so we let $x_3 = s\in\mathbb R$ and $x_4 = t\in\mathbb R$. Then, the general solution is
$$\vec x = \begin{bmatrix}-s+t\\0\\s\\t\end{bmatrix} = s\begin{bmatrix}-1\\0\\1\\0\end{bmatrix} + t\begin{bmatrix}1\\0\\0\\1\end{bmatrix},\quad s, t\in\mathbb R$$
A20 Since the bottom row is of the form 0 = 1, it is inconsistent.
A21 We have
(a) $\left[\begin{array}{cc|c}3&-5&2\\1&2&4\end{array}\right]$
(b) $\left[\begin{array}{cc|c}3&-5&2\\1&2&4\end{array}\right] \overset{R_1\leftrightarrow R_2}{\sim} \left[\begin{array}{cc|c}1&2&4\\3&-5&2\end{array}\right] \overset{R_2-3R_1}{\sim} \left[\begin{array}{cc|c}1&2&4\\0&-11&-10\end{array}\right]$
(c) The system is consistent and has no free variables, so there are no parameters in the general solution.
(d) By back-substitution, $x_2 = \frac{10}{11}$ and $x_1 = 4 - 2\left(\frac{10}{11}\right) = \frac{24}{11}$. Hence, the general solution is $\vec x = \begin{bmatrix}24/11\\10/11\end{bmatrix}$.
A22 We have
(a) $\left[\begin{array}{ccc|c}1&2&1&5\\2&-3&2&6\end{array}\right]$
(b) $\left[\begin{array}{ccc|c}1&2&1&5\\2&-3&2&6\end{array}\right] \overset{R_2-2R_1}{\sim} \left[\begin{array}{ccc|c}1&2&1&5\\0&-7&0&-4\end{array}\right]$
(c) The system is consistent and has one free variable, so there is one parameter in the general solution.
(d) Let $x_3 = t\in\mathbb R$. By back-substitution, $x_2 = \frac47$ and $x_1 = 5 - 2\left(\frac47\right) - t = \frac{27}{7} - t$. Hence, the general solution is $\vec x = \begin{bmatrix}27/7\\4/7\\0\end{bmatrix} + t\begin{bmatrix}-1\\0\\1\end{bmatrix}$, $t\in\mathbb R$.
A23 We have
(a) $\left[\begin{array}{ccc|c}1&2&-3&8\\1&3&-5&11\\2&5&-8&19\end{array}\right]$
(b) $\left[\begin{array}{ccc|c}1&2&-3&8\\1&3&-5&11\\2&5&-8&19\end{array}\right] \overset{\substack{R_2-R_1\\R_3-2R_1}}{\sim} \left[\begin{array}{ccc|c}1&2&-3&8\\0&1&-2&3\\0&1&-2&3\end{array}\right] \overset{R_3-R_2}{\sim} \left[\begin{array}{ccc|c}1&2&-3&8\\0&1&-2&3\\0&0&0&0\end{array}\right]$
(c) The system is consistent and has one free variable, so there is one parameter in the general solution.
(d) Let $x_3 = t\in\mathbb R$. By back-substitution, $x_2 = 3 + 2t$ and $x_1 = 8 - 2(3+2t) + 3t = 2 - t$. Hence, the general solution is $\vec x = \begin{bmatrix}2\\3\\0\end{bmatrix} + t\begin{bmatrix}-1\\2\\1\end{bmatrix}$, $t\in\mathbb R$.
A24 We have
(a) $\left[\begin{array}{ccc|c}-3&6&16&36\\1&-2&-5&-11\\2&-3&-8&-17\end{array}\right]$
(b) $\left[\begin{array}{ccc|c}-3&6&16&36\\1&-2&-5&-11\\2&-3&-8&-17\end{array}\right] \overset{R_1\leftrightarrow R_2}{\sim} \left[\begin{array}{ccc|c}1&-2&-5&-11\\-3&6&16&36\\2&-3&-8&-17\end{array}\right] \overset{\substack{R_2+3R_1\\R_3-2R_1}}{\sim} \left[\begin{array}{ccc|c}1&-2&-5&-11\\0&0&1&3\\0&1&2&5\end{array}\right] \overset{R_2\leftrightarrow R_3}{\sim} \left[\begin{array}{ccc|c}1&-2&-5&-11\\0&1&2&5\\0&0&1&3\end{array}\right]$
(c) The system is consistent and has no free variables, so there are no parameters in the general solution.
(d) By back-substitution, $x_3 = 3$, $x_2 = 5 - 2(3) = -1$, and $x_1 = -11 + 2(-1) + 5(3) = 2$. Hence, the general solution is $\vec x = \begin{bmatrix}2\\-1\\3\end{bmatrix}$.
A25 We have
(a) $\left[\begin{array}{ccc|c}1&2&-1&4\\2&5&1&10\\4&9&-1&19\end{array}\right]$
(b) $\left[\begin{array}{ccc|c}1&2&-1&4\\2&5&1&10\\4&9&-1&19\end{array}\right] \overset{\substack{R_2-2R_1\\R_3-4R_1}}{\sim} \left[\begin{array}{ccc|c}1&2&-1&4\\0&1&3&2\\0&1&3&3\end{array}\right] \overset{R_3-R_2}{\sim} \left[\begin{array}{ccc|c}1&2&-1&4\\0&1&3&2\\0&0&0&1\end{array}\right]$
(c) The system is inconsistent.
A26 We have
(a) $\left[\begin{array}{cccc|c}1&2&-3&0&-5\\2&4&-6&1&-8\\6&13&-17&4&-21\end{array}\right]$
(b) $\left[\begin{array}{cccc|c}1&2&-3&0&-5\\2&4&-6&1&-8\\6&13&-17&4&-21\end{array}\right] \overset{\substack{R_2-2R_1\\R_3-6R_1}}{\sim} \left[\begin{array}{cccc|c}1&2&-3&0&-5\\0&0&0&1&2\\0&1&1&4&9\end{array}\right] \overset{R_2\leftrightarrow R_3}{\sim} \left[\begin{array}{cccc|c}1&2&-3&0&-5\\0&1&1&4&9\\0&0&0&1&2\end{array}\right]$
(c) The system is consistent and has one free variable, so there is one parameter in the general solution.
(d) Let $x_3 = t\in\mathbb R$. By back-substitution, $x_4 = 2$, $x_2 = 9 - t - 4(2) = 1 - t$, and $x_1 = -5 - 2(1-t) + 3t = -7 + 5t$. Hence, the general solution is $\vec x = \begin{bmatrix}-7\\1\\0\\2\end{bmatrix} + t\begin{bmatrix}5\\-1\\1\\0\end{bmatrix}$, $t\in\mathbb R$.
A27 We have
(a) The augmented matrix is
    [ 0 2 -2 0  1 | 2 ]
    [ 1 2 -3 1  4 | 1 ]
    [ 2 4 -5 3  8 | 3 ]
    [ 2 5 -7 3 10 | 5 ]
(b) Row reducing, R1 <-> R2, then R3 - 2R1 and R4 - 2R1 give
    [ 1 2 -3 1 4 | 1 ]
    [ 0 2 -2 0 1 | 2 ]
    [ 0 0  1 1 0 | 1 ]
    [ 0 1 -1 1 2 | 3 ]
    and R4 - (1/2)R2 gives the row echelon form
    [ 1 2 -3 1   4 | 1 ]
    [ 0 2 -2 0   1 | 2 ]
    [ 0 0  1 1   0 | 1 ]
    [ 0 0  0 1 3/2 | 2 ]
(c) The system is consistent and has one free variable, so there is one parameter in the general solution.
(d) Let x5 = t ∈ R. By back-substitution, x4 = 2 - (3/2)t, x3 = 1 - (2 - (3/2)t) = -1 + (3/2)t, 2x2 = 2 + 2(-1 + (3/2)t) - t = 2t so x2 = t, and x1 = 1 - 2t + 3(-1 + (3/2)t) - (2 - (3/2)t) - 4t = -4. Hence, the general solution is x⃗ = (-4, 0, -1, 2, 0) + t(0, 1, 3/2, -3/2, 1), t ∈ R.
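The same substitution check works for this one-parameter family. A short sketch (the coefficient rows below are our reading of A27's system, and the helper name is ours):

```python
# A27: x = (-4, t, -1 + 3t/2, 2 - 3t/2, t) for t in R.
def solution(t):
    return (-4.0, t, -1 + 1.5 * t, 2 - 1.5 * t, t)

# Rows of the augmented matrix as (coefficients, right-hand side).
rows = [
    ([0, 2, -2, 0, 1], 2),
    ([1, 2, -3, 1, 4], 1),
    ([2, 4, -5, 3, 8], 3),
    ([2, 5, -7, 3, 10], 5),
]
for t in (0, 2, -4):
    x = solution(t)
    for coeffs, rhs in rows:
        assert sum(a * xi for a, xi in zip(coeffs, x)) == rhs
```

Note that x1 stays fixed at -4 for every t, exactly as the back-substitution shows.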
A28 Substituting the points into the equation of a parabola gives the system of 3 linear equations in the 3 unknowns a, b, and c:
    3 = a + b(1) + c(1)²
    5 = a + b(2) + c(2)²
    15 = a + b(4) + c(4)²
We row reduce the corresponding augmented matrix. R2 - R1 and R3 - R1 give
    [ 1 1  1 |  3 ]
    [ 0 1  3 |  2 ]
    [ 0 3 15 | 12 ]
and then R3 - 3R2 gives
    [ 1 1 1 | 3 ]
    [ 0 1 3 | 2 ]
    [ 0 0 6 | 6 ]
Using back-substitution, we find that c = 1, b = 2 - 3c = -1, and a = 3 - b - c = 3. Hence, the parabola is y = 3 - x + x².
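The interpolation systems in A28 through A31 can also be solved mechanically. The sketch below runs Gaussian elimination with back-substitution on A28's augmented matrix (the helper name `solve` is ours, not from the text):

```python
def solve(aug):
    """Solve a square system given as an augmented matrix [A | b]."""
    n = len(aug)
    m = [list(map(float, row)) for row in aug]
    for col in range(n):
        # Pivot: bring up a row with a non-zero entry in this column.
        piv = next(r for r in range(col, n) if abs(m[r][col]) > 1e-12)
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            m[r] = [a - f * b for a, b in zip(m[r], m[col])]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):  # back-substitution
        s = sum(m[r][c] * x[c] for c in range(r + 1, n))
        x[r] = (m[r][n] - s) / m[r][r]
    return x

# A28: parabola y = a + bx + cx^2 through (1, 3), (2, 5), (4, 15).
a, b, c = solve([[1, 1, 1, 3], [1, 2, 4, 5], [1, 4, 16, 15]])
assert abs(a - 3) < 1e-9 and abs(b + 1) < 1e-9 and abs(c - 1) < 1e-9
```

Feeding in the other point sets reproduces the parabolas found in A29 through A31.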
A29 Substituting the points into the equation of a parabola gives the system of 3 linear equations in the 3 unknowns a, b, and c:
    2 = a + b(0) + c(0)²
    -1 = a + b(1) + c(1)²
    -10 = a + b(2) + c(2)²
We row reduce the corresponding augmented matrix. R2 - R1 and R3 - R1 give
    [ 1 0 0 |   2 ]
    [ 0 1 1 |  -3 ]
    [ 0 2 4 | -12 ]
and then R3 - 2R2 gives
    [ 1 0 0 |  2 ]
    [ 0 1 1 | -3 ]
    [ 0 0 2 | -6 ]
Using back-substitution, we find that c = -3, b = -3 - c = 0, and a = 2. Hence, the parabola is y = 2 - 3x².
A30 Substituting the points into the equation of a parabola gives the system of 3 linear equations in the 3 unknowns a, b, and c:
    9 = a + b(-2) + c(-2)²
    2 = a + b(-1) + c(-1)²
    17 = a + b(2) + c(2)²
We row reduce the corresponding augmented matrix. R2 - R1 and R3 - R1 give
    [ 1 -2  4 |  9 ]
    [ 0  1 -3 | -7 ]
    [ 0  4  0 |  8 ]
and then R3 - 4R2 gives
    [ 1 -2  4 |  9 ]
    [ 0  1 -3 | -7 ]
    [ 0  0 12 | 36 ]
Using back-substitution, we find that c = 3, b = -7 + 3c = 2, and a = 9 + 2b - 4c = 1. Hence, the parabola is y = 1 + 2x + 3x².
A31 Substituting the points into the equation of a parabola gives the system of 3 linear equations in the 3 unknowns a, b, and c:
    7 = a + b(2) + c(2)²
    1 = a + b(-1) + c(-1)²
    -3 = a + b(0) + c(0)²
We row reduce the corresponding augmented matrix. R1 <-> R3 gives
    [ 1  0 0 | -3 ]
    [ 1 -1 1 |  1 ]
    [ 1  2 4 |  7 ]
then R2 - R1 and R3 - R1 give
    [ 1  0 0 | -3 ]
    [ 0 -1 1 |  4 ]
    [ 0  2 4 | 10 ]
and R3 + 2R2 gives
    [ 1  0 0 | -3 ]
    [ 0 -1 1 |  4 ]
    [ 0  0 6 | 18 ]
Using back-substitution, we find that c = 3, b = -(4 - c) = -1, and a = -3. Hence, the parabola is y = -3 - x + 3x².
A32 If a ≠ 0 and b ≠ 0, then the system is consistent and has no free variables, so the solution is unique. If a = 0, b ≠ 0, the system is consistent, but x3 is a free variable so the solution is not unique. If a ≠ 0, b = 0, then the matrix is
    [ 2 4 -3 | 6 ]
    [ 0 0  7 | 2 ]
    [ 0 0  a | a ]
and performing R3 - (a/7)R2 gives
    [ 2 4 -3 | 6    ]
    [ 0 0  7 | 2    ]
    [ 0 0  0 | 5a/7 ]
Hence, the system is inconsistent. If a = 0, b = 0, the system is consistent and x2 is a free variable, so the solution is not unique.
A33 If c ≠ 0, d ≠ 0, the system is consistent and has no free variables, so the solution is unique. If c = d = 0, then the system is consistent and x3 is a free variable, so there are infinitely many solutions. If c ≠ 0 and d = 0, then the last row is [ 0 0 0 | c ], so the system is inconsistent. If c = 0 and d ≠ 0, then the system is consistent and x4 is a free variable, so there are infinitely many solutions.
A34 Let a be the number of apples, b be the number of bananas, and c be the number of oranges. Then
    a + b + c = 1500
The second piece of information given is the weight of the fruit (in grams):
    120a + 140b + 160c = 208000
Finally, the total selling price (in cents) is:
    25a + 20b + 30c = 38000
Thus, a, b, and c satisfy the system of linear equations
    a + b + c = 1500
    120a + 140b + 160c = 208000
    25a + 20b + 30c = 38000
Row reducing the corresponding augmented matrix for this system gives
    [   1   1   1 |   1500 ]      [ 1  1  1 |  1500 ]
    [ 120 140 160 | 208000 ]  ∼  [ 0 20 40 | 28000 ]
    [  25  20  30 |  38000 ]      [ 0  0 15 |  7500 ]
By back-substitution, 15c = 7500 so c = 500, 20b = 28000 - 40(500) hence b = 400, and a = 1500 - 400 - 500 = 600. Thus the fruit-seller has 600 apples, 400 bananas, and 500 oranges.
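The answer is easy to sanity-check against the three original conditions (a minimal sketch; the variable names are ours):

```python
apples, bananas, oranges = 600, 400, 500

assert apples + bananas + oranges == 1500                      # total number of fruit
assert 120 * apples + 140 * bananas + 160 * oranges == 208000  # total weight in grams
assert 25 * apples + 20 * bananas + 30 * oranges == 38000      # total price in cents
```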
A35 Let A be her mark in algebra, C be her mark in calculus, and P be her mark in physics. Then for a physics prize,
    0.2A + 0.3C + 0.5P = 84
For an applied mathematics prize,
    (1/3)(A + C + P) = 83
For a pure mathematics prize,
    0.5A + 0.5C = 82.5
Thus, A, C, and P satisfy the linear system of equations
    (1/5)A + (3/10)C + (1/2)P = 84
    (1/3)A + (1/3)C + (1/3)P = 83
    (1/2)A + (1/2)C = 82.5
To avoid working with fractions, we can multiply each equation by a non-zero scalar to get
    2A + 3C + 5P = 840
    A + C + P = 249
    A + C = 165
Row reducing the corresponding augmented matrix for this system gives
    [ 2 3 5 | 840 ]      [ 1 1 0 | 165 ]
    [ 1 1 1 | 249 ]  ∼  [ 0 1 5 | 510 ]
    [ 1 1 0 | 165 ]      [ 0 0 1 |  84 ]
By back-substitution P = 84, C = 510 - 5(84) = 90, and A = 165 - 90 = 75. Therefore, the student has 75% in algebra, 90% in calculus, and 84% in physics.
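These marks can be checked directly against the scaled prize conditions (a minimal sketch; variable names ours):

```python
A, C, P = 75, 90, 84  # algebra, calculus, physics marks

# Scaled versions of the three prize conditions (multiplied by 10, 3, and 2).
assert 2 * A + 3 * C + 5 * P == 840   # 0.2A + 0.3C + 0.5P = 84
assert A + C + P == 249               # (A + C + P)/3 = 83
assert A + C == 165                   # 0.5A + 0.5C = 82.5
```

Working with the integer-scaled equations avoids any floating-point rounding in the check.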
B Homework Problems
B1 x⃗ = (-2, 2)
B2 x⃗ = (4, 7)
B3 x⃗ = (3, 0, -2) + t(8, 1, 0), t ∈ R
B4 x⃗ = (-1, 1, 0) + t(-2/3, 0, 1), t ∈ R
B5 x⃗ = (-35, -4, 8)
B6 x⃗ = (7/6, 7, 2)
B7 x⃗ = (25, -1, -3)
B8 x⃗ = (7, 0, 2, 0) + s(-5, 1, 0, 0) + t(3, 0, 2, 1), s, t ∈ R
B9 The matrix is in row echelon form.
B10 The matrix is not in row echelon form as there is a non-zero row below a zero row.
B11 The matrix is not in row echelon form as the leading entry in the third row is not to the right of the leading entry in the second row.
B12 The matrix is in row echelon form.
For Problems B13 - B24, alternate correct answers are possible.
"
1 7
B13
0 15


2
1 1


B16 0 1 −1


0 0
5


3 −2
5
1


2
1
B19 0 −1


0
0
8 −7


4
1 3

0 1
0


B22 

0 0 −7
0 0
0
!
B14
!
4 −3
0
0

1

B17 0

0

1

B20 0

0

3
0
B23 
0
0
"

2
4

4
5

0 −8

3 −2 1

10 −3 4

0
0 0

2 1
3

2 2
0

0 5 −5

0 0
2
B15
B18
B21
B24

1
0

0

2

0
0

1
0

0

0

1
0

0

0
3
−1
0
0
−1
0
2
1
0
0
1
−1
0
0

5

−3

0

4

0

6

−8

−5

0

0

−2
7

13 −27

1 −2

0
0
 
 
 2
−3
 
 
B25 Consistent. The solution is !x = −1 + t  2, t ∈ R.
 
 
0
1


−13/4


B26 Consistent. The solution is !x =  −5/2 .


2
 
 5
−2
B27 Consistent. The solution is !x = t  , t ∈ R.
 1
0
B28 Inconsistent.
 


 
1/2
−1/2
1/2
 0 
 1 
 
 + t  0 , s, t ∈ R.
B29 Consistent. The solution is !x =   + s 
1/2
1/2
 0 
 
0
0
1
B30 Inconsistent.
!
" !
"
! "
!
"
6 3 9
2 1 3
3/2
−1/2
B31
∼
. Consistent with solution !x =
+t
, t ∈ R.
4 2 6
0 0 0
0
1
 
 
!
" !
"
−2
 6
0 2 4 4
1 5 4 8
 
 
B32
∼
. Consistent with solution !x =  2 + t −2, t ∈ R.
 
 
1 5 4 8
0 2 4 4
0
1

 

9 
 5 −2 −1 0   1 1 4

 

1 −1 7  ∼  0 1 3 43/5 . The system is inconsistent.
B33  −4

 

1
1
4 9
0 0 0 76/5

 

 
2   1 3 3
2 
 1 3 3
 2

 

 
1  ∼  0 1 0
1 . Consistent with solution !x =  1.
B34  4 5 12

 

 
−2 7 7 −4
0 0 13 −13
−1

 



 
 −1 −2 1 17   −1 −2 1 17 
−14
−2

 



 
2 5 1  ∼  0
0 1 3 . Consistent with solution !x =  0 + t  1, t ∈ R.
B35  1

 



 
1
2 9 13
0
0 0 0
3
0
 


6 9 1   1 4 6 9 1 
 1 4
 


7 3 2  ∼  0 1 1 3 0 . The system is inconsistent.
B36  2 3
 


−2 1 −3 9 1
0 0 0 0 3

 

6 9
0   1 4 6 9
0 
 1 4

 

7 3
5  ∼  0 1 1 3 −1 .
B37  2 3

 

−2 1 −3 9 −9
0 0 0 0
0
 
 
 
 4
−2
 3
−1
−1
−3
Consistent with solution !x =   + s   + t  , s, t ∈ R.
 0
 1
 0
0
0
1
B38 y = −4 − 5x + 7x2
Copyright © 2020 Pearson Canada Inc.
12
B39 y = 9 − 3x + x2
B40 y = 4 +
11
2 x
+ 12 x2
B41 y = −5 − 2x + 3x2
B42 If b = 0 the system is inconsistent. If b ≠ 0 and a ≠ 0, then the matrix is in REF and so by back-substitution the system is consistent with a unique solution. If b ≠ 0 and a = 0, performing R3 - bR2 gives that the system is consistent only if b = ±1. In either case, the second column corresponds to a free variable, so there are infinitely many solutions.
B43 If c = 0 and d ≠ 0, then the system is inconsistent. If c = 0 and d = 0, then the system is consistent with infinitely many solutions since the second column will correspond to a free variable. If c ≠ 0, then the system is consistent with a unique solution.
B44 The price of an armchair is $300, the price of a sofa bed is $600, and the price of a double bed is $400.
B45 (a) f1 + f2 = 40, f2 + f4 = 60, f3 + f4 = 50, f1 + f3 = 30
(b) f⃗ = (-20, 60, 50, 0) + t(1, -1, -1, 1), t ∈ R.
(c) It means the flow is moving in the opposite direction of the arrow.
B46 The average of the Business students is 78%, the average of the Liberal Arts students is 84%, and the average of the Science students is 93%.
B47 The magnitude of F1 is 8.89 N, the magnitude of F2 is 40 N, and the magnitude of F3 is 31.11 N.
C Conceptual Problems
C1 Write the augmented matrix and row reduce:
    [ 1 1 0 1 | b ]
    [ 2 3 1 5 | 6 ]
    [ 0 0 1 1 | 4 ]
    [ 0 2 2 a | 1 ]
R2 - 2R1 gives
    [ 1 1 0 1 | b      ]
    [ 0 1 1 3 | 6 - 2b ]
    [ 0 0 1 1 | 4      ]
    [ 0 2 2 a | 1      ]
and then R4 - 2R2 gives
    [ 1 1 0 1     | b       ]
    [ 0 1 1 3     | 6 - 2b  ]
    [ 0 0 1 1     | 4       ]
    [ 0 0 0 a - 6 | 4b - 11 ]
(a) To be inconsistent, we must have a row of the form [ 0 0 · · · 0 | c ] where c ≠ 0. The only way this is possible is by taking a = 6 and 4b - 11 ≠ 0. In all other cases the system is consistent.
(b) By the System Rank Theorem (1), to have a consistent system with a unique solution, we cannot have a row of the form [ 0 0 · · · 0 | c ] where c ≠ 0 and we need the number of pivots to equal the number of variables. Since x, y, and z always have a pivot, we just require that w also has a pivot. This will happen whenever a ≠ 6, which also implies that the system is consistent.
(c) By the System Rank Theorem (1), to have a consistent system with infinitely many solutions, we cannot have a row of the form [ 0 0 · · · 0 | c ] where c ≠ 0 and we need the number of pivots to be less than the number of variables. To have the number of pivots less than the number of variables, we need a = 6. But, then for the system to be consistent we must have 4b - 11 = 0.
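The three cases can be cross-checked numerically by comparing the rank of the coefficient matrix with the rank of the augmented matrix (a sketch under our reading of C1's system; the helper names `rank` and `classify` are ours):

```python
def rank(mat, tol=1e-9):
    """Rank via elimination; also works on augmented matrices."""
    m = [list(map(float, row)) for row in mat]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if abs(m[i][c]) > tol), None)
        if piv is None:
            continue  # no pivot in this column
        m[r], m[piv] = m[piv], m[r]
        m[r] = [v / m[r][c] for v in m[r]]
        for i in range(len(m)):
            if i != r and abs(m[i][c]) > tol:
                m[i] = [a - m[i][c] * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def classify(a, b):
    A = [[1, 1, 0, 1], [2, 3, 1, 5], [0, 0, 1, 1], [0, 2, 2, a]]
    aug = [row + [rhs] for row, rhs in zip(A, [b, 6, 4, 1])]
    if rank(A) < rank(aug):
        return "inconsistent"
    return "unique" if rank(A) == 4 else "infinitely many"

assert classify(6, 0) == "inconsistent"          # a = 6 and 4b - 11 != 0
assert classify(7, 0) == "unique"                # a != 6
assert classify(6, 11 / 4) == "infinitely many"  # a = 6 and 4b - 11 = 0
```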
C2 Since the two planes are parallel, m⃗ = t n⃗ for some non-zero real number t. To determine a point (or points) of intersection, we solve the system of equations with augmented matrix
    [ n1 n2 n3 | c ]
    [ m1 m2 m3 | d ]
Add -t times the first row to the second to obtain
    [ n1 n2 n3 | c      ]
    [  0  0  0 | d - tc ]
because m⃗ = t n⃗. If d ≠ tc, the system of equations is inconsistent, and there are no points of intersection. If d = tc, then the second equation is a non-zero multiple of the first. That is, the planes coincide.
C3 (a) It is consistent because x1 = x2 = 0 is a solution.
(b) Observe that we cannot have a = c = 0 as otherwise ad - bc = 0. Since we can just swap the order of the equations (swap rows in the matrix), without loss of generality we can assume that a ≠ 0. Then, making the augmented matrix and row reducing gives
    [ a b | 0 ]
    [ c d | 0 ]
Performing R2 - (c/a)R1 and then aR2 gives
    [ a b       | 0 ]
    [ 0 ad - bc | 0 ]
Since ad - bc ≠ 0, we get that the system is consistent with a unique solution.
(c) If the system is consistent with a unique solution, then we must be able to row reduce it as in part (b). Thus, in the first step, we require that either a or c is non-zero as otherwise the first column will correspond to a free variable and the system will have infinitely many solutions. As in part (b), without loss of generality we assume a ≠ 0. Then, in the last step, we require that ad - bc ≠ 0 for the system to be consistent with a unique solution.
C4 (a) Write the i-th equation of the system as
    a_i1 x1 + · · · + a_in xn = b_i
Substituting x⃗ = s⃗ + c(s⃗ - t⃗), whose j-th component is x_j = s_j + c(s_j - t_j), into the i-th equation gives
    a_i1 (s1 + c(s1 - t1)) + · · · + a_in (sn + c(sn - tn))
      = a_i1 s1 + c a_i1 s1 - c a_i1 t1 + · · · + a_in sn + c a_in sn - c a_in tn
      = (a_i1 s1 + · · · + a_in sn) + c (a_i1 s1 + · · · + a_in sn) - c (a_i1 t1 + · · · + a_in tn)
      = b_i + c b_i - c b_i
      = b_i
where the last steps use that s⃗ and t⃗ are both solutions. Therefore, the i-th equation is satisfied when x⃗ = s⃗ + c(s⃗ - t⃗). Since this is valid for all 1 ≤ i ≤ m, we have shown that x⃗ = s⃗ + c(s⃗ - t⃗) is a solution for each c ∈ R.
(b) Assume c1 ≠ c2. If the two solutions s⃗ + c1(s⃗ - t⃗) and s⃗ + c2(s⃗ - t⃗) are not distinct, then
    s⃗ + c1(s⃗ - t⃗) = s⃗ + c2(s⃗ - t⃗)
    c1(s⃗ - t⃗) = c2(s⃗ - t⃗)
    (c1 - c2)(s⃗ - t⃗) = 0⃗
Since c1 ≠ c2, this implies that s⃗ - t⃗ = 0⃗ and hence s⃗ = t⃗. But, this contradicts our assumption that s⃗ ≠ t⃗. Hence, the two solutions are distinct.
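Part (a) is easy to illustrate numerically: take two distinct solutions s⃗ and t⃗ of a small system and check that s⃗ + c(s⃗ - t⃗) still solves it for several values of c (a sketch; the sample system below is ours, not from the text):

```python
# Sample system: x1 + x2 = 2 and 2x1 + 2x2 = 4, with distinct solutions s and t.
rows = [([1, 1], 2), ([2, 2], 4)]
s, t = (2, 0), (0, 2)

for c in (-3, 0.5, 1, 10):
    x = [si + c * (si - ti) for si, ti in zip(s, t)]
    for coeffs, rhs in rows:
        assert sum(a * xi for a, xi in zip(coeffs, x)) == rhs
```

Each choice of c gives a different solution, matching part (b): a system with two distinct solutions has infinitely many.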
Section 2.2
A Practice Problems
A1 The matrix is not in RREF for two reasons. The first non-zero entry in the first row is not a 1; the first non-zero entry in the second row is not a 1.
A2 The matrix is not in RREF for four reasons. The first non-zero entry in the second row is not a 1; the
leading entry in the second row is to the right of the leading entry in the row beneath it; there is a
non-zero entry in the same column as the leading one in the third row; there is a non-zero entry in
the same column as the leading entry in the second row.
A3 The matrix is in RREF.
A4 The matrix is in RREF.
A5 The matrix is in RREF.
A6 The matrix is not in RREF for three reasons. The leading entry in the first row is to the right of the
leading entry in the row beneath it; there is a non-zero entry in the same column as the leading one
in the third row; there is a non-zero entry in the same column as the leading one in the first row.
A7 The matrix is not in RREF since there is a non-zero entry in the same column as the leading one in
the third row.
A8 We row reduce:
    [ 2  1 ]
    [ 1 -1 ]
    [ 3  2 ]
R1 <-> R2, then R2 - 2R1 and R3 - 3R1 give
    [ 1 -1 ]
    [ 0  3 ]
    [ 0  5 ]
and (1/3)R2 followed by R1 + R2 and R3 - 5R2 gives
    [ 1 0 ]
    [ 0 1 ]
    [ 0 0 ]
Thus, the rank is 2.
A9 We row reduce:
    [ 2 0 1 ]
    [ 0 1 2 ]
    [ 1 1 1 ]
R1 - R3, then R3 - R1 gives
    [ 1 -1 0 ]
    [ 0  1 2 ]
    [ 0  2 1 ]
R1 + R2 and R3 - 2R2 give
    [ 1 0  2 ]
    [ 0 1  2 ]
    [ 0 0 -3 ]
and -(1/3)R3 followed by R1 - 2R3 and R2 - 2R3 gives
    [ 1 0 0 ]
    [ 0 1 0 ]
    [ 0 0 1 ]
Thus, the rank is 3.
A10 We row reduce:
    [ 1 2 3 ]
    [ 2 1 2 ]
    [ 2 3 4 ]
R2 - 2R1 and R3 - 2R1 give
    [ 1  2  3 ]
    [ 0 -3 -4 ]
    [ 0 -1 -2 ]
R2 - 4R3 gives
    [ 1  2  3 ]
    [ 0  1  4 ]
    [ 0 -1 -2 ]
R1 - 2R2 and R3 + R2 give
    [ 1 0 -5 ]
    [ 0 1  4 ]
    [ 0 0  2 ]
and (1/2)R3 followed by R1 + 5R3 and R2 - 4R3 gives the identity matrix. Thus, the rank is 3.
A11 We row reduce:
    [ 1 0 -2 ]
    [ 2 1  2 ]
    [ 2 3  4 ]
R2 - 2R1 and R3 - 2R1 give
    [ 1 0 -2 ]
    [ 0 1  6 ]
    [ 0 3  8 ]
R3 - 3R2 gives
    [ 1 0  -2 ]
    [ 0 1   6 ]
    [ 0 0 -10 ]
and -(1/10)R3 followed by R1 + 2R3 and R2 - 6R3 gives the identity matrix. Thus, the rank is 3.
A12 We row reduce:
    [  1  2 1 ]
    [  1  2 3 ]
    [ -1 -2 3 ]
    [  2  4 3 ]
R2 - R1, R3 + R1, and R4 - 2R1 give
    [ 1 2 1 ]
    [ 0 0 2 ]
    [ 0 0 4 ]
    [ 0 0 1 ]
and (1/2)R2 followed by R1 - R2, R3 - 4R2, and R4 - R2 gives
    [ 1 2 0 ]
    [ 0 0 1 ]
    [ 0 0 0 ]
    [ 0 0 0 ]
Thus, the rank is 2.
A13 We row reduce:
    [ 1 1 1 1 ]
    [ 1 1 1 0 ]
    [ 1 1 0 0 ]
R1 <-> R3, then R2 - R1 and R3 - R1 give
    [ 1 1 0 0 ]
    [ 0 0 1 0 ]
    [ 0 0 1 1 ]
and R3 - R2 gives
    [ 1 1 0 0 ]
    [ 0 0 1 0 ]
    [ 0 0 0 1 ]
Thus, the rank is 3.
A14 We row reduce:
    [ 2 -1 0  2 ]
    [ 1 -1 2  8 ]
    [ 3 -2 3 13 ]
R1 <-> R2, then R2 - 2R1 and R3 - 3R1 give
    [ 1 -1  2   8 ]
    [ 0  1 -4 -14 ]
    [ 0  1 -3 -11 ]
R1 + R2 and R3 - R2 give
    [ 1 0 -2  -6 ]
    [ 0 1 -4 -14 ]
    [ 0 0  1   3 ]
and R1 + 2R3 and R2 + 4R3 give
    [ 1 0 0  0 ]
    [ 0 1 0 -2 ]
    [ 0 0 1  3 ]
Thus, the rank is 3.
A15 We row reduce:
    [ 1 1 0 1 ]
    [ 0 1 1 2 ]
    [ 2 3 1 4 ]
    [ 1 2 3 4 ]
R3 - 2R1 and R4 - R1 give
    [ 1 1 0 1 ]
    [ 0 1 1 2 ]
    [ 0 1 1 2 ]
    [ 0 1 3 3 ]
R1 - R2, R3 - R2, and R4 - R2 give
    [ 1 0 -1 -1 ]
    [ 0 1  1  2 ]
    [ 0 0  0  0 ]
    [ 0 0  2  1 ]
R3 <-> R4 and (1/2)R3 give
    [ 1 0 -1  -1 ]
    [ 0 1  1   2 ]
    [ 0 0  1 1/2 ]
    [ 0 0  0   0 ]
and R1 + R3 and R2 - R3 give
    [ 1 0 0 -1/2 ]
    [ 0 1 0  3/2 ]
    [ 0 0 1  1/2 ]
    [ 0 0 0    0 ]
Thus, the rank is 3.
A16 Row reducing the 4 × 4 matrix to RREF gives the 4 × 4 identity matrix. Thus, the rank is 4.
A17 There is 1 parameter. Let x3 = t ∈ R. Then we have x1 = -2t, x2 = t, and x4 = 0. Thus, the general solution is
    x⃗ = (-2t, t, t, 0) = t(-2, 1, 1, 0), t ∈ R
A18 There are 2 parameters. Let x1 = s ∈ R and x3 = t ∈ R. Then we have x2 = -2t and x4 = 0. Thus, the general solution is
    x⃗ = (s, -2t, t, 0) = s(1, 0, 0, 0) + t(0, -2, 1, 0), s, t ∈ R
A19 There are 2 parameters. Let x2 = s ∈ R and x3 = t ∈ R. Then we have x1 = 3s - 2t and x4 = 0. Thus, the general solution is
    x⃗ = (3s - 2t, s, t, 0) = s(3, 1, 0, 0) + t(-2, 0, 1, 0), s, t ∈ R
A20 There are 2 parameters. Let x3 = s ∈ R and x5 = t ∈ R. Then we have x1 = -2s, x2 = s + 2t, and x4 = -t. Thus, the general solution is
    x⃗ = (-2s, s + 2t, s, -t, t) = s(-2, 1, 1, 0, 0) + t(0, 2, 0, -1, 1), s, t ∈ R
A21 There are 2 parameters. Let x2 = s ∈ R and x4 = t ∈ R. Then we have x1 = -4t, x3 = 5t, and x5 = 0. Thus, the general solution is
    x⃗ = (-4t, s, 5t, t, 0) = s(0, 1, 0, 0, 0) + t(-4, 0, 5, 1, 0), s, t ∈ R
A22 There is 1 parameter. Let x3 = t ∈ R. Then we have x1 = 0, x2 = -t, x4 = 0, and x5 = 0. Thus, the general solution is
    x⃗ = (0, -t, t, 0, 0) = t(0, -1, 1, 0, 0), t ∈ R
A23 We have
    [ 3 -5 | 2 ]
    [ 1  2 | 4 ]
R1 <-> R2, then R2 - 3R1 gives
    [ 1   2 |   4 ]
    [ 0 -11 | -10 ]
and -(1/11)R2 followed by R1 - 2R2 gives
    [ 1 0 | 24/11 ]
    [ 0 1 | 10/11 ]
Hence, the solution is x⃗ = (24/11, 10/11).
A24 We have
    [ 1  2 1 | 5 ]
    [ 2 -3 2 | 6 ]
R2 - 2R1 gives
    [ 1  2 1 |  5 ]
    [ 0 -7 0 | -4 ]
and -(1/7)R2 followed by R1 - 2R2 gives
    [ 1 0 1 | 27/7 ]
    [ 0 1 0 |  4/7 ]
We have x1 + x3 = 27/7 and x2 = 4/7. Then x3 is a free variable, so let x3 = t ∈ R and we get that the general solution is
    x⃗ = (27/7, 4/7, 0) + t(-1, 0, 1), t ∈ R
A25 We have
    [ 1 2 -3 |  8 ]
    [ 1 3 -5 | 11 ]
    [ 2 5 -8 | 19 ]
R2 - R1 and R3 - 2R1 give
    [ 1 2 -3 | 8 ]
    [ 0 1 -2 | 3 ]
    [ 0 1 -2 | 3 ]
and R3 - R2 and R1 - 2R2 give
    [ 1 0  1 | 2 ]
    [ 0 1 -2 | 3 ]
    [ 0 0  0 | 0 ]
We have x1 + x3 = 2 and x2 - 2x3 = 3. Then x3 is a free variable, so let x3 = t ∈ R and we get that the general solution is
    x⃗ = (2, 3, 0) + t(-1, 2, 1), t ∈ R
A26 We have
    [ -3  6 16 |  36 ]
    [  1 -2 -5 | -11 ]
    [  2 -3 -8 | -17 ]
R1 <-> R2, then R2 + 3R1 and R3 - 2R1 give
    [ 1 -2 -5 | -11 ]
    [ 0  0  1 |   3 ]
    [ 0  1  2 |   5 ]
R2 <-> R3, then R1 + 2R2 gives
    [ 1 0 -1 | -1 ]
    [ 0 1  2 |  5 ]
    [ 0 0  1 |  3 ]
and R1 + R3 and R2 - 2R3 give
    [ 1 0 0 |  2 ]
    [ 0 1 0 | -1 ]
    [ 0 0 1 |  3 ]
Thus the only solution is x⃗ = (2, -1, 3).
A27 We have
    [ 1 2 -1 |  4 ]
    [ 2 5  1 | 10 ]
    [ 4 9 -1 | 19 ]
R2 - 2R1 and R3 - 4R1 give
    [ 1 2 -1 | 4 ]
    [ 0 1  3 | 2 ]
    [ 0 1  3 | 3 ]
R1 - 2R2 and R3 - R2 give
    [ 1 0 -7 | 0 ]
    [ 0 1  3 | 2 ]
    [ 0 0  0 | 1 ]
and R2 - 2R3 gives
    [ 1 0 -7 | 0 ]
    [ 0 1  3 | 0 ]
    [ 0 0  0 | 1 ]
Hence, the last equation is 0 = 1 so the system is inconsistent.
A28 We have
    [ 1  2  -3 0 |  -5 ]
    [ 2  4  -6 1 |  -8 ]
    [ 6 13 -17 4 | -21 ]
R2 - 2R1 and R3 - 6R1, then R2 <-> R3, give
    [ 1 2 -3 0 | -5 ]
    [ 0 1  1 4 |  9 ]
    [ 0 0  0 1 |  2 ]
and R2 - 4R3 and R1 - 2R2 give
    [ 1 0 -5 0 | -7 ]
    [ 0 1  1 0 |  1 ]
    [ 0 0  0 1 |  2 ]
In equation form this is x1 - 5x3 = -7, x2 + x3 = 1, and x4 = 2. Thus, x3 is a free variable, so we let x3 = t ∈ R. Then the general solution is
    x⃗ = (-7, 1, 0, 2) + t(5, -1, 1, 0), t ∈ R
A29 We have
    [ 0 2 -2 0  1 | 2 ]
    [ 1 2 -3 1  4 | 1 ]
    [ 2 4 -5 3  8 | 3 ]
    [ 2 5 -7 3 10 | 5 ]
R1 <-> R2, then R3 - 2R1 and R4 - 2R1 give
    [ 1 2 -3 1 4 | 1 ]
    [ 0 2 -2 0 1 | 2 ]
    [ 0 0  1 1 0 | 1 ]
    [ 0 1 -1 1 2 | 3 ]
R2 <-> R4, then R1 - 2R2 and R4 - 2R2 give
    [ 1 0 -1 -1  0 | -5 ]
    [ 0 1 -1  1  2 |  3 ]
    [ 0 0  1  1  0 |  1 ]
    [ 0 0  0 -2 -3 | -4 ]
R1 + R3 and R2 + R3 give
    [ 1 0 0  0  0 | -4 ]
    [ 0 1 0  2  2 |  4 ]
    [ 0 0 1  1  0 |  1 ]
    [ 0 0 0 -2 -3 | -4 ]
and -(1/2)R4 followed by R2 - 2R4 and R3 - R4 gives
    [ 1 0 0 0    0 | -4 ]
    [ 0 1 0 0   -1 |  0 ]
    [ 0 0 1 0 -3/2 | -1 ]
    [ 0 0 0 1  3/2 |  2 ]
In equation form this is x1 = -4, x2 - x5 = 0, x3 - (3/2)x5 = -1, and x4 + (3/2)x5 = 2. Thus, x5 is a free variable, so we let x5 = t ∈ R. Then the general solution is
    x⃗ = (-4, 0, -1, 2, 0) + t(0, 1, 3/2, -3/2, 1), t ∈ R
A30 The coefficient matrix is
    [ 0 2 -5 ]
    [ 1 2  3 ]
    [ 1 4 -3 ]
To find the rank, we row reduce the coefficient matrix to RREF. R1 <-> R2, then R3 - R1 give
    [ 1 2  3 ]
    [ 0 2 -5 ]
    [ 0 2 -6 ]
R1 - R2 and R3 - R2 give
    [ 1 0  8 ]
    [ 0 2 -5 ]
    [ 0 0 -1 ]
and R1 + 8R3, R2 - 5R3, (1/2)R2, and (-1)R3 give
    [ 1 0 0 ]
    [ 0 1 0 ]
    [ 0 0 1 ]
Thus, the rank is 3. The number of parameters in the general solution to the homogeneous system with coefficient matrix A is the number of variables minus the rank, which is 3 - 3 = 0. Therefore, the only solution is x⃗ = 0⃗.
3 1 −9
A31 The coefficient matrix is 1 1 −5. To find the rank, we row reduce the coefficient matrix to RREF.


2 1 −7



 3 1 −9 
 1 1



 1 1 −5  R1 # R2 ∼  3 1
2 1 −7
2 1



1 −5 
1
 1
 1

 1

6  − 2 R2 ∼  0
1
 0 −2


0 −1
3
0 −1
−5
−9
−7




−5
−3
3




R2 − 3R1 ∼
R3 − 2R1


R1 − R2  1 0 −2 


∼ 0 1 −3 


R3 + R2
0 0
0
Thus, the rank is 2. The number of parameters in the general solution to the homogeneous system
with coefficient A is the number of variables minus the rank, which is 3 − 2 = 1. Therefore, there are
infinitely many solutions. To find the general solution, we rewrite the RREF back into equation form
to get
x1 − 2x3 = 0
x2 − 3x3 = 0
Thus, x3 is the free variable. So, we let x3 = t ∈ R. Then, the general solution is
   
 
 x1  2t
2
   
 
!x =  x2  = 3t = t 3 ,
   
 
x3
t
1
t∈R
Copyright © 2020 Pearson Canada Inc.
21


 1 −1 2 −3 
 3 −3 8 −5 
. To find the rank, we row reduce the coefficient matrix
A32 The coefficient matrix is 
 2 −2 5 −4 
3 −3 7 −7
to RREF.




 1 −1 2 −3 
 1 −1 2 −3  R1 − R2
 3 −3 8 −5  R2 − 3R1  0
0 2
4 



∼
∼

 2 −2 5 −4  R3 − 2R1  0
0 1
2  R3 − 21 R2




3 −3 7 −7 R4 − 3R1
0
0 1
2 R4 − 12 R2




 1 −1 0 −7 
 1 −1 0 −7 
 0
 0
0 2
4  12 R2
0 1
2 


∼



 0
 0
0 0
0 
0 0
0 




0
0 0
0
0
0 0
0
Thus, the rank is 2. The number of parameters in the general solution to the homogeneous system
with coefficient A is the number of variables minus the rank, which is 4 − 2 = 2. Therefore, there are
infinitely many solutions. To find the general solution, we rewrite the RREF back into equation form
to get
x1 − x2 − 7x4 = 0
x3 + 2x4 = 0
Thus, x2 and x4 are free variables. So, we let x2 = s ∈ R and x4 = t ∈ R. Then, the general solution is
  

 
 
 x1   s + 7t
1
 7
 x   s 
1
 
 = s   + t  0 , s, t ∈ R
!x =  2  = 
0
−2
 x3   −2t 
 
 
x4
t
0
1

 0 1 2
 1 2 5
A33 The coefficient matrix is 
 2 1 5
1 1 4
to RREF.

0
 0 1 2 2
 1 2 5 3 −1

 2 1 5 1 −3

1 1 4 2 −2

2
5
3 −1
 1
 0
1
2
2
0

 0 −3 −5 −5 −1

0 −1 −1 −1 −1

0
 1 0 0 −2
 0 1 0
0
2

 0 0 1
1
−1

0 0 0
0
0

2
0 

3 −1 
. To find the rank, we row reduce the coefficient matrix
1 −3 

2 −2

 1 2 5
 0 1 2
∼ 
 2 1 5
1 1 4

R1 − 2R2  1 0 1

 0 1 2
∼ 
R3 + 3R2  0 0 1

R4 + R2
0 0 1




 R1 # R2














3 −1 

2
0 

1 −3 

2 −2
R3 − 2R1
R4 − R1
∼

−1 −1  R1 − R3

2
0  R2 − 2R3
∼

1 −1 

1 −1
R4 − R3
Copyright © 2020 Pearson Canada Inc.
22
Thus, the rank is 3. The number of parameters in the general solution to the homogeneous system
with coefficient A is the number of variables minus the rank, which is 5 − 3 = 2. Therefore, there are
infinitely many solutions. To find the general solution, we rewrite the RREF back into equation form
to get
x1 − 2x4 = 0
x2 + 2x5 = 0
x3 + x4 − x5 = 0
Thus, x4 and x5 are free variables. So, we let x4 = s ∈ R and x5 = t ∈ R. Then, the general solution is
  

 x1   2s 
 x   −2t 
 2  

!x =  x3  = −s + t =
  

 x4   s 
x5
t
A34 We have
    [ 2 -1 4 | 1 ]
    [ 1  3 0 | 0 ]
    [ 1  1 2 | 2 ]
R1 <-> R3, then R2 - R1 and R3 - 2R1 give
    [ 1  1  2 |  2 ]
    [ 0  2 -2 | -2 ]
    [ 0 -3  0 | -3 ]
(1/2)R2, then R1 - R2 and R3 + 3R2 give
    [ 1 0  3 |  3 ]
    [ 0 1 -1 | -1 ]
    [ 0 0 -3 | -6 ]
and -(1/3)R3 followed by R1 - 3R3 and R2 + R3 gives
    [ 1 0 0 | -3 ]
    [ 0 1 0 |  1 ]
    [ 0 0 1 |  2 ]
Therefore, the solution to [ A | b⃗ ] is x⃗ = (-3, 1, 2). If we replace b⃗ with 0⃗, we get that the solution to the homogeneous system is x⃗ = (0, 0, 0).
A35 We have
    [  1 7  5 |  5 ]
    [  1 0  5 | -2 ]
    [ -1 2 -5 |  4 ]
R2 - R1 and R3 + R1 give
    [ 1  7 5 |  5 ]
    [ 0 -7 0 | -7 ]
    [ 0  9 0 |  9 ]
-(1/7)R2 and (1/9)R3 give
    [ 1 7 5 | 5 ]
    [ 0 1 0 | 1 ]
    [ 0 1 0 | 1 ]
and R1 - 7R2 and R3 - R2 give
    [ 1 0 5 | -2 ]
    [ 0 1 0 |  1 ]
    [ 0 0 0 |  0 ]
Writing this back into equation form gives x1 + 5x3 = -2 and x2 = 1. Let x3 = t ∈ R. Then the general solution to [ A | b⃗ ] is
    x⃗ = (-2, 1, 0) + t(-5, 0, 1), t ∈ R
Replacing b⃗ with 0⃗, we get the general solution to the homogeneous system is
    x⃗ = t(-5, 0, 1), t ∈ R
A36 We have
    [  0 -1  5 -2 | -1 ]
    [ -1 -1 -4 -1 |  4 ]
R1 <-> R2, then (-1)R1 and (-1)R2 give
    [ 1 1  4  1 | -4 ]
    [ 0 1 -5  2 |  1 ]
and R1 - R2 gives
    [ 1 0  9 -1 | -5 ]
    [ 0 1 -5  2 |  1 ]
Writing this back into equation form gives x1 + 9x3 - x4 = -5 and x2 - 5x3 + 2x4 = 1. Let x3 = s ∈ R and x4 = t ∈ R. Then the general solution to [ A | b⃗ ] is
    x⃗ = (-5, 1, 0, 0) + s(-9, 5, 1, 0) + t(1, -2, 0, 1), s, t ∈ R
Replacing b⃗ with 0⃗, we get the general solution to the homogeneous system is
    x⃗ = s(-9, 5, 1, 0) + t(1, -2, 0, 1), s, t ∈ R
A37 We have
    [  1  0 -1 -1 | 3 ]
    [  4  3  2 -4 | 3 ]
    [ -1 -4 -3  5 | 5 ]
R2 - 4R1 and R3 + R1 give
    [ 1  0 -1 -1 |  3 ]
    [ 0  3  6  0 | -9 ]
    [ 0 -4 -4  4 |  8 ]
(1/3)R2 and -(1/4)R3 give
    [ 1 0 -1 -1 |  3 ]
    [ 0 1  2  0 | -3 ]
    [ 0 1  1 -1 | -2 ]
R3 - R2 and (-1)R3 give
    [ 1 0 -1 -1 |  3 ]
    [ 0 1  2  0 | -3 ]
    [ 0 0  1  1 | -1 ]
and R1 + R3 and R2 - 2R3 give
    [ 1 0 0  0 |  2 ]
    [ 0 1 0 -2 | -1 ]
    [ 0 0 1  1 | -1 ]
Writing this back into equation form gives x1 = 2, x2 - 2x4 = -1, and x3 + x4 = -1. Let x4 = t ∈ R. Then the general solution to [ A | b⃗ ] is
    x⃗ = (2, -1, -1, 0) + t(0, 2, -1, 1), t ∈ R
Replacing b⃗ with 0⃗, we get the general solution to the homogeneous system is
    x⃗ = t(0, 2, -1, 1), t ∈ R
A38 We have

[  1 −1  4 −1 |   4 ]                [ 1 −1   4 −1 |   4 ]
[ −1 −2  5 −2 |   5 ] R2 + R1   ∼    [ 0 −3   9 −3 |   9 ] −(1/3)R2  ∼
[ −4 −1  2  2 |  −4 ] R3 + 4R1       [ 0 −5  18 −2 |  12 ]
[  5  4  1  8 |   5 ] R4 − 5R1       [ 0  9 −19 13 | −15 ]

[ 1 −1   4 −1 |   4 ] R1 + R2     [ 1 0  1 0 |  1 ]
[ 0  1  −3  1 |  −3 ]          ∼  [ 0 1 −3 1 | −3 ]           ∼
[ 0 −5  18 −2 |  12 ] R3 + 5R2    [ 0 0  3 3 | −3 ] (1/3)R3
[ 0  9 −19 13 | −15 ] R4 − 9R2    [ 0 0  8 4 | 12 ]

[ 1 0  1 0 |  1 ] R1 − R3     [ 1 0 0 −1 |  2 ]
[ 0 1 −3 1 | −3 ] R2 + 3R3 ∼  [ 0 1 0  4 | −6 ]            ∼
[ 0 0  1 1 | −1 ]             [ 0 0 1  1 | −1 ]
[ 0 0  8 4 | 12 ] R4 − 8R3    [ 0 0 0 −4 | 20 ] −(1/4)R4

[ 1 0 0 −1 |  2 ] R1 + R4     [ 1 0 0 0 | −3 ]
[ 0 1 0  4 | −6 ] R2 − 4R4 ∼  [ 0 1 0 0 | 14 ]
[ 0 0 1  1 | −1 ] R3 − R4     [ 0 0 1 0 |  4 ]
[ 0 0 0  1 | −5 ]             [ 0 0 0 1 | −5 ]

Thus, the solution to [A | !b] is !x = (−3, 14, 4, −5). The solution to the homogeneous system is
!x = (0, 0, 0, 0).
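As a sanity check on A38, the claimed unique solution can be substituted back into the original system; a short sketch (the matrix is the one shown above, and `matvec` is my own helper):

```python
A = [[1, -1, 4, -1],
     [-1, -2, 5, -2],
     [-4, -1, 2, 2],
     [5, 4, 1, 8]]
b = [4, 5, -4, 5]
x = [-3, 14, 4, -5]

def matvec(M, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(m * vi for m, vi in zip(row, v)) for row in M]

print(matvec(A, x) == b)  # prints: True
```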
A39 We have

[ 1 1 3  1 4 |  2 ]                [ 1 1  3   1   4 |   2 ]
[ 4 4 6 −8 4 | −4 ] R2 − 4R1  ∼    [ 0 0 −6 −12 −12 | −12 ] −(1/6)R2   ∼
[ 1 1 4 −2 1 | −6 ] R3 − R1        [ 0 0  1  −3  −3 |  −8 ]
[ 3 3 2 −4 5 |  6 ] R4 − 3R1       [ 0 0 −7  −7  −7 |   0 ]

[ 1 1  3  1  4 |  2 ] R1 − 3R2    [ 1 1 0 −5 −2 |  −4 ]
[ 0 0  1  2  2 |  2 ]          ∼  [ 0 0 1  2  2 |   2 ]              ∼
[ 0 0  1 −3 −3 | −8 ] R3 − R2     [ 0 0 0 −5 −5 | −10 ] −(1/5)R3
[ 0 0 −7 −7 −7 |  0 ] R4 + 7R2    [ 0 0 0  7  7 |  14 ]

[ 1 1 0 −5 −2 | −4 ] R1 + 5R3    [ 1 1 0 0 3 |  6 ]
[ 0 0 1  2  2 |  2 ] R2 − 2R3 ∼  [ 0 0 1 0 0 | −2 ]
[ 0 0 0  1  1 |  2 ]             [ 0 0 0 1 1 |  2 ]
[ 0 0 0  7  7 | 14 ] R4 − 7R3    [ 0 0 0 0 0 |  0 ]

The solution to [A | !b] is !x = (6, 0, −2, 2, 0) + s(−1, 1, 0, 0, 0) + t(−3, 0, 0, −1, 1), s, t ∈ R. The
solution to the homogeneous system is !x = s(−1, 1, 0, 0, 0) + t(−3, 0, 0, −1, 1), s, t ∈ R.
A40 (a) Row reducing the coefficient matrix gives

[ 1 1 1  0 ]              [ 1 1 1  0 ]              [ 1 0 1  2 ] R1 − R3    [ 1 0 0  3 ]
[ 1 2 1 −2 ] R2 − R1  ∼   [ 0 1 0 −2 ] R1 − R2  ∼   [ 0 1 0 −2 ]         ∼  [ 0 1 0 −2 ]
[ 1 4 2 −7 ] R3 − R1      [ 0 3 1 −7 ] R3 − 3R2     [ 0 0 1 −1 ]            [ 0 0 1 −1 ]

Since there is a leading one in each row, there cannot be a row in the RREF of [A | !b] of the form
[0 · · · 0 1]. Hence, the system will be consistent for any b1, b2, b3 ∈ R.
(b) Since rank(A) = m, there cannot be a row in the RREF of [A | !b] of the form [0 · · · 0 1].
Hence, the system will be consistent for any !b ∈ Rm.
(c) Row reducing the coefficient matrix gives

[ 1 1 ]             [ 1 1 ]            [ 1 1 ]           [ 1 0 ]
[ 2 2 ] R2 − 2R1 ∼  [ 0 0 ]         ∼  [ 0 1 ] R1 − R2 ∼ [ 0 1 ]
[ 2 3 ] R3 − 2R1    [ 0 1 ] R2 ↔ R3    [ 0 0 ]           [ 0 0 ]

To make this inconsistent we would need the augmented column of the RREF to have a non-zero
entry in the third row. For example,

[ 1 0 | 0 ]
[ 0 1 | 0 ]
[ 0 0 | 1 ]
To determine what vector !b this corresponds to, we just reverse the row operations we used to
row reduce the coefficient matrix.

[ 1 0 | 0 ] R1 + R2    [ 1 1 | 0 ]            [ 1 1 | 0 ]             [ 1 1 | 0 ]
[ 0 1 | 0 ]         ∼  [ 0 1 | 0 ] R2 ↔ R3 ∼  [ 0 0 | 1 ] R2 + 2R1 ∼  [ 2 2 | 1 ]
[ 0 0 | 1 ]            [ 0 0 | 1 ]            [ 0 1 | 0 ] R3 + 2R1    [ 2 3 | 0 ]

Thus, taking !b = (0, 1, 0) makes the system inconsistent. NOTE: If we had taken !b = (0, 0, 1), then the
system would be consistent with solution x1 = −1, x2 = 1.
(d) If rank A < m, then the RREF R of A has a row of all zeros. Thus, [R | !em] is inconsistent as it
has a row of the form [0 · · · 0 1]. Since elementary row operations are reversible, we can
apply the reverse of the row operations needed to row reduce A to R on [R | !em] to get [A | !b] for
some !b ∈ Rm. Then this system is inconsistent since elementary row operations do not change the
solution set. Thus, there exists some !b ∈ Rm such that [A | !b] is inconsistent.
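The construction in parts (c) and (d) can be illustrated concretely. The sketch below (the `rref` and `is_consistent` helpers are mine) checks that !b = (0, 1, 0) makes the system from part (c) inconsistent while !b = (0, 0, 1) does not, by looking for a row of the form [0 · · · 0 | nonzero] in the RREF of the augmented matrix:

```python
from fractions import Fraction

def rref(M):
    """Reduce a matrix (list of rows) to reduced row echelon form, exactly."""
    M = [[Fraction(x) for x in row] for row in M]
    nrows, ncols = len(M), len(M[0])
    lead = 0
    for r in range(nrows):
        if lead >= ncols:
            break
        i = r
        while M[i][lead] == 0:
            i += 1
            if i == nrows:
                i = r
                lead += 1
                if lead == ncols:
                    return M
        M[r], M[i] = M[i], M[r]
        M[r] = [x / M[r][lead] for x in M[r]]
        for j in range(nrows):
            if j != r and M[j][lead] != 0:
                M[j] = [a - M[j][lead] * b for a, b in zip(M[j], M[r])]
        lead += 1
    return M

def is_consistent(aug):
    """Inconsistent iff the RREF has a row (0 ... 0 | nonzero)."""
    return not any(all(x == 0 for x in row[:-1]) and row[-1] != 0
                   for row in rref(aug))

A = [[1, 1], [2, 2], [2, 3]]
for b in ([0, 1, 0], [0, 0, 1]):
    aug = [row + [bi] for row, bi in zip(A, b)]
    print(b, is_consistent(aug))  # prints: [0, 1, 0] False, then [0, 0, 1] True
```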
B Homework Problems
B1 The matrix is not in RREF because the leading one in the second row is not the only non-zero element
in its column.
B2 The matrix is in RREF.
B3 The matrix is not in RREF because the leading coefficient of the second row is not to the right of
the leading coefficient of the row above it, and the leading coefficient of the third row is not to the
right of the leading coefficient of the row above it.
B4 The matrix is not in RREF because the second row is all zeros and is above a row that is not all zeros.
B5 The matrix is not in RREF because the leading entries of the second and third rows are not 1s.
B6 The matrix is in RREF.
B7 The matrix is not in RREF because the leading one in the second row is not the only non-zero
element in its column and because the leading one in the third row is not the only non-zero element
in its column.
B8 The matrix is not in RREF because the leading one in the third row is not the only non-zero element
in its column.
B9 The
 matrix
 is in RREF.
1 −2


0; the rank is 1.
B10 0


0
0


1 0 0


B12 0 1 0; the rank is 3.


0 0 1

1

B11 0

0

1

B13 0

0

0
5

1 −4; the rank is 2.

0
0

0
2 0

1 −1 0; the rank is 3.

0
0 1
Copyright © 2020 Pearson Canada Inc.
27

1

B14 0

0

1

B16 0

0

3/2 0
0

0 1 −1; the rank is 2.

0 0
0

0 0
4

1 0
0; the rank is 3.

0 1 −1


1 −2 0 3 0
0
0 1 2 0
B18 
; the rank is 3.
0 0 0 1
0

0
0 0 0 0
B19 There is 1 parameter. The general solution is !x = s(5, −3, 0, 1), s ∈ R.
B20 There are 2 parameters. The general solution is !x = s(0, 1, 0, 0) + t(2, 0, −3, 1), s, t ∈ R.
B21 There are 2 parameters. The general solution is !x = s(1, 3, 1, 0) + t(−4, −5, 0, 1), s, t ∈ R.
B22 There is 1 parameter. The general solution is !x = s(5, 1, 1, 0), s ∈ R.
B23 There are 2 parameters. The general solution is !x = s(1, 0, 0, 0, 0) + t(0, 1, 0, 0, 0), s, t ∈ R.
B24 There are 2 parameters. The general solution is !x = s(−3, −3, 1, 0, 0) + t(−8, 2, 0, 0, 1), s, t ∈ R.
B25 !x = (8, −4)
B26 !x = (−4, 3, 0) + t(14, −10, 1), t ∈ R
B27 !x = (−2, −4, 3)
B28 The system is inconsistent.
B29 !x = (3, −2, 0, 0) + s(3, −5, 1, 0) + t(−5, 4, 0, 1), s, t ∈ R
B30 !x = (2, 0, −5, 0) + s(−1/3, 1, 0, 0) + t(−4/3, 0, 1, 1), s, t ∈ R
B31 !x = (3, 2, 0, −2, 6) + s(−1, 2, 1, 0, 0), s ∈ R
B32 [0 2 1; 5 6 1; 1 3 1] ∼ [1 0 0; 0 1 0; 0 0 1]; the rank is 3; there are no parameters. The general
solution is !x = (0, 0, 0).
B33 [1 2 4; 1 4 2; 1 3 3] ∼ [1 0 6; 0 1 −1; 0 0 0]; the rank is 2; there is 1 parameter. The general
solution is !x = t(−6, 1, 1), t ∈ R.
B34 [3 4 −2 7; 9 5 1 0; −3 1 −3 8; 3 1 1 −2] ∼ [1 0 2/3 −5/3; 0 1 −1 3; 0 0 0 0; 0 0 0 0]; the rank is 2;
there are 2 parameters. The general solution is !x = s(−2/3, 1, 1, 0) + t(5/3, −3, 0, 1), s, t ∈ R.
B35 Row reducing the given matrix gives [1 0 0 7 0; 0 1 0 2 0; 0 0 1 −4 0; 0 0 0 0 1]; the rank is 4;
there is 1 parameter. The general solution is !x = t(−7, −2, 4, 1, 0), t ∈ R.
For Problems B36 - B41, alternative correct answers are possible.
B36 [2 4 0 | −8; 1 4 2 | 6; 2 5 3 | 11] ∼ [1 0 0 | 0; 0 1 0 | −2; 0 0 1 | 7]. The solution to [A | !b] is
!x = (0, −2, 7). The solution to the homogeneous system is !x = (0, 0, 0).
B37 [4 6 3 | 7; 6 9 5 | 3; −4 −6 2 | 3] ∼ [1 3/2 0 | 0; 0 0 1 | 0; 0 0 0 | 1]. The system is inconsistent.
The solution to the homogeneous system is !x = t(−3/2, 1, 0), t ∈ R.
B38 [5 3 12 13 | 14; 2 1 5 6 | 5] ∼ [1 0 3 5 | 1; 0 1 −1 −4 | 3]. The solution to [A | !b] is
!x = (1, 3, 0, 0) + s(−3, 1, 1, 0) + t(−5, 4, 0, 1), s, t ∈ R. The solution to the homogeneous system is
!x = s(−3, 1, 1, 0) + t(−5, 4, 0, 1), s, t ∈ R.
B39 Row reducing the augmented matrix gives [1 0 3 0 | 3; 0 1 −2 0 | 0; 0 0 0 1 | 2]. The solution to
[A | !b] is !x = (3, 0, 0, 2) + s(−3, 2, 1, 0), s ∈ R. The solution to the homogeneous system is
!x = s(−3, 2, 1, 0), s ∈ R.
B40 [1 3 2 1 | 3; 2 10 6 −4 | 4; 1 6 4 −3 | 4; 1 8 3 −8 | −7] ∼ [1 0 0 5 | 2; 0 1 0 −2 | −3; 0 0 1 1 | 5;
0 0 0 0 | 0]. The solution to [A | !b] is !x = (2, −3, 5, 0) + s(−5, 2, −1, 1), s ∈ R. The solution to the
homogeneous system is !x = s(−5, 2, −1, 1), s ∈ R.
B41 Row reducing the augmented matrix gives [1 0 3 0 2 | 4; 0 1 1 0 −3 | 1; 0 0 0 1 1 | 1; 0 0 0 0 0 | 0].
The solution to [A | !b] is !x = (4, 1, 0, 1, 0) + s(−3, −1, 1, 0, 0) + t(−2, 3, 0, −1, 1), s, t ∈ R. The
solution to the homogeneous system is !x = s(−3, −1, 1, 0, 0) + t(−2, 3, 0, −1, 1), s, t ∈ R.
C Conceptual Problems
C1 (a) If !x is orthogonal to !a, !b, and !c, then it must satisfy the equations !a · !x = 0, !b · !x = 0, and !c · !x = 0.
(b) Since a homogeneous system is always consistent, the System-Rank Theorem tells us that a
homogeneous system in 3 variables has non-trivial solutions if and only if the rank of the matrix
is less than 3.
C2 We first note that the solution set of the homogeneous system [A | !0] is a non-empty subset of Rn. Let
!u, !v both be solutions of [A | !0]. Write the i-th equation of the system as
ai1 x1 + · · · + ain xn = 0
Then, we have
ai1 u1 + · · · + ain un = 0
ai1 v1 + · · · + ain vn = 0
Therefore, for any s, t ∈ R we also have
ai1 (su1 + tv1 ) + · · · + ain (sun + tvn ) = s(ai1 u1 + · · · + ain un ) + t(ai1 v1 + · · · + ain vn )
= s(0) + t(0)
=0
Thus, s!u + t!v satisfies all m equations, so s!u + t!v is also a solution of [A | !0]. Therefore, by definition,
the solution set is a subspace of Rn.
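The closure argument in C2 can be illustrated numerically. The system below is my own one-equation example, not taken from the text; it checks that s!u + t!v remains a solution for several choices of s and t:

```python
A = [[1, 2, 3]]           # one homogeneous equation in three variables
u = [-2, 1, 0]            # a solution: 1(-2) + 2(1) + 3(0) = 0
v = [-3, 0, 1]            # another:    1(-3) + 2(0) + 3(1) = 0

def matvec(M, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in M]

for s, t in [(2, 5), (-1, 3), (7, 0)]:
    w = [s * ui + t * vi for ui, vi in zip(u, v)]
    assert matvec(A, w) == [0]    # s*u + t*v is again a solution
print("closure holds")
```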
C3 (a) Let the variables be x1 , x2 , and x3 . From the reduced row echelon form of the coefficient matrix,
we see that x3 is arbitrary, say x3 = t ∈ R. Then x2 = x3 = t and x1 = −2t, so the general solution
is !x = t(−2, 1, 1), t ∈ R. Since it is the span of one non-zero vector, it is a line.
(b) Since the rank of A is 2 and the homogeneous system has 3 variables, there must be 1 parameter in
the general solution of the system. The general solution is of the form !x = t !d, t ∈ R, with !d ≠ !0,
so it represents a line through the origin. If the rank was 1, there would be 2 parameters and the
general solution would be of the form !x = s!u + t!v , s, t ∈ R, with {!u , !v } linearly independent. This
describes a plane through the origin in R3 .
(c) This homogeneous system has the coefficient matrix

C = [ u1 u2 u3 u4 ]
    [ v1 v2 v3 v4 ]
    [ w1 w2 w3 w4 ]
If the rank of C is 3, the set of vectors !x orthogonal to the three given vectors is a line through the
origin in R4 , because the general solution has 1 parameter. If the rank is 2, the set is a 2-plane,
and if the rank is 1, the set is a 3-plane.
C4 We are not told enough to know whether the system is consistent. If the augmented matrix also has
rank 4, the system is consistent and there are 3 parameters in the general solution. If the augmented
matrix has rank 5, the system is inconsistent.
C5 Since the rank of the coefficient matrix equals the number of rows the system is consistent. There are
3 parameters in the general solution.
C6 We need to know the rank of the coefficient matrix too. If the rank is also 4, then the system is
consistent, with a unique solution. If the coefficient matrix has rank 3, then the system is inconsistent.
C7 Row reducing gives
[ 1 a b | 1 ]     [ 1 0       1 | b                   ]
[ 1 1 0 | a ]  ∼  [ 0 1      −1 | a − b               ]
[ 1 0 1 | b ]     [ 0 0 a + b − 1 | 1 − b − a^2 + ab  ]

If a + b ≠ 1, the system is consistent with the unique solution

!x = 1/(a + b − 1) · (a^2 + b^2 − 1, ab + 1 − a − b^2, 1 − b − a^2 + ab)

If a + b = 1, the system is consistent only if 1 − b − a^2 + ab = 0 also. This implies that either a = 0 and
b = 1, or a = 1 and b = 0 is required for consistency. If a = 1 and b = 0, then the reduced augmented
matrix becomes

[ 1 0  1 | 0 ]
[ 0 1 −1 | 1 ]
[ 0 0  0 | 0 ]

and the general solution is

!x = (0, 1, 0) + t(−1, 1, 1),  t ∈ R

If a = 0, b = 1, the matrix becomes

[ 1 0  1 |  1 ]
[ 0 1 −1 | −1 ]
[ 0 0  0 |  0 ]

and the general solution is

!x = (1, −1, 0) + t(−1, 1, 1),  t ∈ R
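The unique-solution formula for a + b ≠ 1 can be spot-checked. A small sketch (exact arithmetic via `Fraction`; the function name is mine) substitutes the formula back into the three equations x1 + a·x2 + b·x3 = 1, x1 + x2 = a, x1 + x3 = b:

```python
from fractions import Fraction

def solve_c7(a, b):
    """The unique solution when a + b != 1, per the formula above."""
    a, b = Fraction(a), Fraction(b)
    d = a + b - 1
    return ((a*a + b*b - 1) / d,
            (a*b + 1 - a - b*b) / d,
            (1 - b - a*a + a*b) / d)

for a, b in [(2, 3), (0, 0), (5, -1)]:
    x1, x2, x3 = solve_c7(a, b)
    assert x1 + a*x2 + b*x3 == 1
    assert x1 + x2 == a
    assert x1 + x3 == b
print("formula verified")
```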
C8 Since there is no point in common between the three planes, the system [A | !b] is inconsistent. Thus,
by the System Rank Theorem (1), rank(A) < rank([A | !b]). Thus, rank(A) < 3. If rank(A) = 1, then
the normal vectors of all the planes would have to be scalar multiples of each other (so that row
reducing the coefficient matrix would give two rows of zeros). But, it is clear that P3 is not parallel
to P1 or P2. It follows that rank A = 2.
C9 The intersection of two planes in R3 is represented by a system of 2 linear equations in 3 unknowns.
This will correspond to a system whose coefficient matrix has 2 rows and 3 columns. Hence, the rank
of the coefficient matrix of the system is either 1 or 2. Since we are given that the system is consistent,
the System Rank Theorem (2) tells us that the solution set has either 1 or 2 free variables. Thus, the
solution set is either a line or a plane in R3 .
C10 The statement is false. The solution set of x1 + x2 = 1 is not a subspace of R2 , since it does not contain
the zero vector.
C11 The statement is false. As a counterexample, consider the system x1 + x2 + x3 = 1, 2x1 + 2x2 + 2x3 = 2.
The system has m = 2 equations and n = 3 variables, but the rank of the corresponding coefficient
matrix A is 1 ≠ m.
C12 If [A | !b] is consistent for every !b ∈ Rm, then rank A = m by the System-Rank Theorem (3). Since the
solution is unique, there are no free variables and so we have that rank A = n by the System-Rank
Theorem (2). Thus, m = rank A = n.
Section 2.3
A Practice Problems
A1 (a) We need to determine if there are values c1, c2, and c3 such that

(−3, 2, 8, 4) = c1(1, 0, 1, 1) + c2(2, 1, 0, 1) + c3(−1, 1, 2, 1) = (c1 + 2c2 − c3, c2 + c3, c1 + 2c3, c1 + c2 + c3)

Vectors are equal if and only if their corresponding entries are equal. Thus, equating corresponding entries we get the system of linear equations

c1 + 2c2 − c3 = −3
c2 + c3 = 2
c1 + 2c3 = 8
c1 + c2 + c3 = 4

Row reducing the corresponding augmented matrix gives

[ 1 2 −1 | −3 ]    [ 1 0 0 |  2 ]
[ 0 1  1 |  2 ] ∼  [ 0 1 0 | −1 ]
[ 1 0  2 |  8 ]    [ 0 0 1 |  3 ]
[ 1 1  1 |  4 ]    [ 0 0 0 |  0 ]

Thus, the solution is c1 = 2, c2 = −1, and c3 = 3. Thus the vector is in the span. In particular,

2(1, 0, 1, 1) + (−1)(2, 1, 0, 1) + 3(−1, 1, 2, 1) = (−3, 2, 8, 4)
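The particular linear combination found in part (a) can be confirmed directly; a minimal sketch:

```python
v1 = [1, 0, 1, 1]
v2 = [2, 1, 0, 1]
v3 = [-1, 1, 2, 1]

# coefficients from the row reduction above: c1 = 2, c2 = -1, c3 = 3
combo = [2*a - b + 3*c for a, b, c in zip(v1, v2, v3)]
print(combo)  # prints: [-3, 2, 8, 4]
```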
(b) We need to determine if there are values c1, c2, and c3 such that

(5, 4, 6, 7) = c1(1, 0, 1, 1) + c2(2, 1, 0, 1) + c3(−1, 1, 2, 1) = (c1 + 2c2 − c3, c2 + c3, c1 + 2c3, c1 + c2 + c3)

Equating corresponding entries we get the system of linear equations

c1 + 2c2 − c3 = 5
c2 + c3 = 4
c1 + 2c3 = 6
c1 + c2 + c3 = 7

Row reducing the corresponding augmented matrix gives

[ 1 2 −1 | 5 ]    [ 1 2 −1 | 5   ]
[ 0 1  1 | 4 ] ∼  [ 0 1  1 | 4   ]
[ 1 0  2 | 6 ]    [ 0 0  5 | 9   ]
[ 1 1  1 | 7 ]    [ 0 0  0 | 3/5 ]

The system is inconsistent. Therefore, the vector is not in the span.
(c) We need to determine if there are values c1, c2, and c3 such that

(2, −2, 1, 1) = c1(1, 0, 1, 1) + c2(2, 1, 0, 1) + c3(−1, 1, 2, 1) = (c1 + 2c2 − c3, c2 + c3, c1 + 2c3, c1 + c2 + c3)

Vectors are equal if and only if their corresponding entries are equal. Thus, equating corresponding entries we get the system of linear equations

c1 + 2c2 − c3 = 2
c2 + c3 = −2
c1 + 2c3 = 1
c1 + c2 + c3 = 1

Row reducing the corresponding augmented matrix gives

[ 1 2 −1 |  2 ]    [ 1 0 0 |  3 ]
[ 0 1  1 | −2 ] ∼  [ 0 1 0 | −1 ]
[ 1 0  2 |  1 ]    [ 0 0 1 | −1 ]
[ 1 1  1 |  1 ]    [ 0 0 0 |  0 ]

Thus, the solution is c1 = 3, c2 = −1, and c3 = −1. Thus the vector is in the span. In particular,

3(1, 0, 1, 1) + (−1)(2, 1, 0, 1) + (−1)(−1, 1, 2, 1) = (2, −2, 1, 1)
A2 (a) We need to determine if there are values c1, c2, and c3 such that

(3, 2, −1, −1) = c1(1, −1, 1, 0) + c2(−1, 1, 0, 2) + c3(1, 1, −1, −1) = (c1 − c2 + c3, −c1 + c2 + c3, c1 − c3, 2c2 − c3)

Equating corresponding entries we get the system of linear equations

c1 − c2 + c3 = 3
−c1 + c2 + c3 = 2
c1 − c3 = −1
2c2 − c3 = −1

Row reducing the corresponding augmented matrix gives

[  1 −1  1 |  3 ]    [ 1 −1  1 |    3 ]
[ −1  1  1 |  2 ] ∼  [ 0  1 −2 |   −4 ]
[  1  0 −1 | −1 ]    [ 0  0  2 |    5 ]
[  0  2 −1 | −1 ]    [ 0  0  0 | −1/2 ]

The system is inconsistent. Therefore, the vector is not in the span.
(b) We need to determine if there are values c1, c2, and c3 such that

(−7, 3, 0, 8) = c1(1, −1, 1, 0) + c2(−1, 1, 0, 2) + c3(1, 1, −1, −1) = (c1 − c2 + c3, −c1 + c2 + c3, c1 − c3, 2c2 − c3)

Equating corresponding entries we get the system of linear equations

c1 − c2 + c3 = −7
−c1 + c2 + c3 = 3
c1 − c3 = 0
2c2 − c3 = 8

Row reducing the corresponding augmented matrix gives

[  1 −1  1 | −7 ]    [ 1 0 0 | −2 ]
[ −1  1  1 |  3 ] ∼  [ 0 1 0 |  3 ]
[  1  0 −1 |  0 ]    [ 0 0 1 | −2 ]
[  0  2 −1 |  8 ]    [ 0 0 0 |  0 ]

Thus, the solution is c1 = −2, c2 = 3, and c3 = −2. Thus the vector is in the span. In particular,

(−2)(1, −1, 1, 0) + 3(−1, 1, 0, 2) + (−2)(1, 1, −1, −1) = (−7, 3, 0, 8)
(c) We need to determine if there are values c1, c2, and c3 such that

(1, 1, 1, 1) = c1(1, −1, 1, 0) + c2(−1, 1, 0, 2) + c3(1, 1, −1, −1) = (c1 − c2 + c3, −c1 + c2 + c3, c1 − c3, 2c2 − c3)

Equating corresponding entries we get the system of linear equations

c1 − c2 + c3 = 1
−c1 + c2 + c3 = 1
c1 − c3 = 1
2c2 − c3 = 1

Row reducing the corresponding augmented matrix gives

[  1 −1  1 | 1 ]    [ 1 −1  1 |  1 ]
[ −1  1  1 | 1 ] ∼  [ 0  1 −2 |  0 ]
[  1  0 −1 | 1 ]    [ 0  0  2 |  2 ]
[  0  2 −1 | 1 ]    [ 0  0  0 | −2 ]

The system is inconsistent. Therefore, the vector is not in the span.
A3 A vector !x is in the set if and only if for some t1, t2 we have

(x1, x2, x3) = t1(1, 0, 0) + t2(0, 1, 0) = (t1, t2, 0)

Equating corresponding entries we get the system x1 = t1, x2 = t2, and x3 = 0. This is consistent if
and only if x3 = 0. Thus, the homogeneous equation x3 = 0 defines the set.
A4 A vector !x is in the set if and only if

(x1, x2, x3) = t1(2, 1, 0) = (2t1, t1, 0)

for some real number t1. Thus, we get x1 = 2t1, x2 = t1, and x3 = 0. Substituting the second
equation into the first we get x1 = 2x2, or x1 − 2x2 = 0. Therefore, the homogeneous system x1 − 2x2 =
0, x3 = 0 defines the set.
A5 A vector !x is in the set if and only if

(x1, x2, x3) = t1(1, 1, 2) + t2(3, −1, 0) = (t1 + 3t2, t1 − t2, 2t1)

for some real numbers t1, t2. Equating corresponding entries we get the system

t1 + 3t2 = x1
t1 − t2 = x2
2t1 = x3

Row reducing the corresponding augmented matrix gives

[ 1  3 | x1 ]    [ 1  3 | x1             ]
[ 1 −1 | x2 ] ∼  [ 0 −4 | x2 − x1        ]
[ 2  0 | x3 ]    [ 0  0 | x1 + 3x2 − 2x3 ]

Therefore, the homogeneous equation x1 + 3x2 − 2x3 = 0 defines the set.
A6 A vector !x is in the set if and only if

(x1, x2, x3) = t1(2, 1, −1) + t2(1, −2, 1) = (2t1 + t2, t1 − 2t2, −t1 + t2)

for some real numbers t1, t2. Equating corresponding entries we get the system

2t1 + t2 = x1
t1 − 2t2 = x2
−t1 + t2 = x3

Row reducing the corresponding augmented matrix gives

[  2  1 | x1 ]    [ 1 −2 | x2             ]
[  1 −2 | x2 ] ∼  [ 0  5 | x1 − 2x2       ]
[ −1  1 | x3 ]    [ 0  0 | x1 + 3x2 + 5x3 ]

Therefore, the homogeneous equation x1 + 3x2 + 5x3 = 0 defines the set.
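Since the defining equation is linear, it suffices that both spanning vectors satisfy it; the sketch below (helper name is mine) checks it on a grid of combinations anyway:

```python
v1 = [2, 1, -1]
v2 = [1, -2, 1]

def on_plane(x):
    """The homogeneous equation found in A6."""
    return x[0] + 3*x[1] + 5*x[2] == 0

for t1 in range(-3, 4):
    for t2 in range(-3, 4):
        x = [t1*a + t2*b for a, b in zip(v1, v2)]
        assert on_plane(x)
print("x1 + 3x2 + 5x3 = 0 holds on the span")
```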
A7 A vector !x is in the set if and only if

(x1, x2, x3, x4) = t1(1, 0, 1, 0) + t2(2, −1, 1, 1) = (t1 + 2t2, −t2, t1 + t2, t2)

for some real numbers t1, t2. Equating corresponding entries we get the system

t1 + 2t2 = x1
−t2 = x2
t1 + t2 = x3
t2 = x4

Row reducing the corresponding augmented matrix gives

[ 1  2 | x1 ]    [ 1  2 | x1             ]
[ 0 −1 | x2 ] ∼  [ 0 −1 | x2             ]
[ 1  1 | x3 ]    [ 0  0 | −x1 − x2 + x3  ]
[ 0  1 | x4 ]    [ 0  0 | x2 + x4        ]

Therefore, the homogeneous system −x1 − x2 + x3 = 0, x2 + x4 = 0 defines the set.
A8 A vector !x is in the set if and only if

(x1, x2, x3, x4) = t1(1, −1, 1, 2) + t2(0, 1, 3, −2) + t3(−2, 0, 4, −3) = (t1 − 2t3, −t1 + t2, t1 + 3t2 + 4t3, 2t1 − 2t2 − 3t3)

for some real numbers t1, t2, t3. Equating corresponding entries we get the system

t1 − 2t3 = x1
−t1 + t2 = x2
t1 + 3t2 + 4t3 = x3
2t1 − 2t2 − 3t3 = x4

Row reducing the corresponding augmented matrix gives

[  1  0 −2 | x1 ]    [ 1 0 −2 | x1                      ]
[ −1  1  0 | x2 ] ∼  [ 0 1 −2 | x1 + x2                 ]
[  1  3  4 | x3 ]    [ 0 0 −3 | 2x2 + x4                ]
[  2 −2 −3 | x4 ]    [ 0 0  0 | −4x1 + 5x2 + x3 + 4x4   ]

Therefore, the homogeneous equation −4x1 + 5x2 + x3 + 4x4 = 0 defines the set.
A9 Row reducing the corresponding coefficient matrix gives

[  1 3 2 ]    [ 1 0 0 ]
[  2 8 6 ] ∼  [ 0 1 0 ]
[ −4 1 3 ]    [ 0 0 1 ]

Thus, the solution space is !x = !0. By definition, a basis for the subspace {!0} is the empty set, and the
dimension is 0.
A10 Row reducing the corresponding coefficient matrix gives

[ 0 1  3 1 ]    [ 1 0 2 −1 ]
[ 1 1  5 0 ] ∼  [ 0 1 3  1 ]
[ 1 3 11 2 ]    [ 0 0 0  0 ]

We find that the general solution is

!x = s(−2, −3, 1, 0) + t(1, −1, 0, 1),  s, t ∈ R

Since B = {(−2, −3, 1, 0), (1, −1, 0, 1)} is also linearly independent, it is a basis for the solution space. Hence, the
dimension is 2.
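The rank and solution-space dimension in A10 can be double-checked; a sketch (the `rref` helper is mine, and by the rank theorem the dimension is #variables − rank = 4 − 2 = 2):

```python
from fractions import Fraction

def rref(M):
    """Reduce a matrix (list of rows) to reduced row echelon form, exactly."""
    M = [[Fraction(x) for x in row] for row in M]
    nrows, ncols = len(M), len(M[0])
    lead = 0
    for r in range(nrows):
        if lead >= ncols:
            break
        i = r
        while M[i][lead] == 0:
            i += 1
            if i == nrows:
                i = r
                lead += 1
                if lead == ncols:
                    return M
        M[r], M[i] = M[i], M[r]
        M[r] = [x / M[r][lead] for x in M[r]]
        for j in range(nrows):
            if j != r and M[j][lead] != 0:
                M[j] = [a - M[j][lead] * b for a, b in zip(M[j], M[r])]
        lead += 1
    return M

A = [[0, 1, 3, 1],
     [1, 1, 5, 0],
     [1, 3, 11, 2]]
R = rref(A)
rank = sum(1 for row in R if any(x != 0 for x in row))
print(rank)  # prints: 2, so the solution space has dimension 4 - 2 = 2

# both basis vectors found above lie in the solution space
for v in ([-2, -3, 1, 0], [1, -1, 0, 1]):
    assert all(sum(a * x for a, x in zip(row, v)) == 0 for row in A)
```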
A11 To show that the set is a basis for the plane, we need to prove that it is both linearly independent
and spans the plane. Since neither vector in the set is a scalar multiple of the other, the set is linearly
independent. For spanning, we need to show that every vector in the plane can be written as a linear
combination of the basis vectors. To do this, we first find the general form of a vector !x in the plane.
We solve the equation of the plane for x1 to get x1 = −x2 + x3 . Hence, every vector in the plane has
the form

!x = (x1, x2, x3) = (−x2 + x3, x2, x3)

Therefore, we now just need to show that the system corresponding to

(−x2 + x3, x2, x3) = t1(1, 1, 2) + t2(1, 0, 1) = (t1 + t2, t1, 2t1 + t2)

is always consistent. Row reducing the corresponding augmented matrix gives

[ 1 1 | −x2 + x3 ]    [ 1 0 | x2       ]
[ 1 0 | x2       ] ∼  [ 0 1 | x3 − 2x2 ]
[ 2 1 | x3       ]    [ 0 0 | 0        ]
So, the system is always consistent, and consequently the set is a basis for the plane.
A12 Observe that neither vector in the set is a scalar multiple of the other, so the set is linearly independent.
Write the equation of the plane in the form x3 = −2x1 + 3x2 . Then every vector in the plane has the
form !x = (x1, x2, −2x1 + 3x2). Consider

(x1, x2, −2x1 + 3x2) = t1(1, 1, 1) + t2(1, 2, −3) = (t1 + t2, t1 + 2t2, t1 − 3t2)

Row reducing the corresponding augmented matrix gives

[ 1  1 | x1          ]    [ 1 1 | x1       ]
[ 1  2 | x2          ] ∼  [ 0 1 | −x1 + x2 ]
[ 1 −3 | −2x1 + 3x2  ]    [ 0 0 | x1 − x2  ]
Therefore, it is not consistent whenever x1 − x2 ≠ 0. Hence, it is not a basis for the plane.
A13 None of the vectors in the set can be written as a linear combination of the other vectors, so the set
is linearly independent. Write the equation of the hyperplane in the form x1 = x2 − 2x3 + 2x4 . Then
every vector in the hyperplane has the form !x = (x2 − 2x3 + 2x4, x2, x3, x4). Consider

(x2 − 2x3 + 2x4, x2, x3, x4) = t1(1, 1, 0, 0) + t2(0, 0, 1, 1) + t3(3, 1, 0, 1) = (t1 + 3t3, t1 + t3, t2, t2 + t3)

Row reducing the corresponding augmented matrix gives

[ 1 0 3 | x2 − 2x3 + 2x4 ]    [ 1 0 1 | x2        ]
[ 1 0 1 | x2             ] ∼  [ 0 1 0 | x3        ]
[ 0 1 0 | x3             ]    [ 0 0 1 | −x3 + x4  ]
[ 0 1 1 | x4             ]    [ 0 0 0 | 0         ]
So, the system is always consistent, and consequently the set is a basis for the hyperplane.
A14 To determine whether a set of vectors is linearly dependent or independent, we find all linear combinations of the vectors which equal the zero vector. If there is only one such linear combination, the
combination where all coefficients are zero, then the set is linearly independent.
Consider
(0, 0, 0, 0) = t1(1, 2, 1, −1) + t2(1, 2, 3, 1) + t3(1, −3, 2, 1) = (t1 + t2 + t3, 2t1 + 2t2 − 3t3, t1 + 3t2 + 2t3, −t1 + t2 + t3)

Row reducing the coefficient matrix of the corresponding homogeneous system gives

[  1  1  1 ]    [ 1 0 0 ]
[  2  2 −3 ] ∼  [ 0 1 0 ]
[  1  3  2 ]    [ 0 0 1 ]
[ −1  1  1 ]    [ 0 0 0 ]
Thus, the only solution is t1 = t2 = t3 = 0. Therefore, the set is linearly independent.
A15 Consider

(0, 0, 0, 0) = t1(1, 0, 1, 0) + t2(0, 1, 1, 1) + t3(0, 0, 1, 1) + t4(3, 2, 6, 3) = (t1 + 3t4, t2 + 2t4, t1 + t2 + t3 + 6t4, t2 + t3 + 3t4)

Row reducing the coefficient matrix of the corresponding homogeneous system gives

[ 1 0 0 3 ]    [ 1 0 0 3 ]
[ 0 1 0 2 ] ∼  [ 0 1 0 2 ]
[ 1 1 1 6 ]    [ 0 0 1 1 ]
[ 0 1 1 3 ]    [ 0 0 0 0 ]

Therefore, t4 is a free variable, so let t4 = s ∈ R. Then, the general solution of the homogeneous
system is

(t1, t2, t3, t4) = (−3s, −2s, −s, s) = s(−3, −2, −1, 1)

Since there are infinitely many solutions, the set is linearly dependent. Moreover, we have that for
any s ∈ R,

−3s(1, 0, 1, 0) − 2s(0, 1, 1, 1) − s(0, 0, 1, 1) + s(3, 2, 6, 3) = (0, 0, 0, 0)
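The dependence relation found in A15 (taking s = 1, it says v4 = 3v1 + 2v2 + v3) can be checked in one line; a minimal sketch:

```python
v1 = [1, 0, 1, 0]
v2 = [0, 1, 1, 1]
v3 = [0, 0, 1, 1]
v4 = [3, 2, 6, 3]

combo = [3*a + 2*b + c for a, b, c in zip(v1, v2, v3)]
print(combo == v4)  # prints: True, so v4 = 3v1 + 2v2 + v3 and the set is dependent
```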
A16 Consider

   [0]      [1]      [2]      [0]   [     t1 + 2t2]
   [0]      [1]      [3]      [1]   [t1 + 3t2 + t3]
   [0] = t1 [0] + t2 [1] + t3 [1] = [      t2 + t3]
   [0]      [1]      [3]      [1]   [t1 + 3t2 + t3]
   [0]      [1]      [3]      [1]   [t1 + 3t2 + t3]
Row reducing the coefficient matrix of the corresponding homogeneous system gives

   [1 2 0]   [1 0 −2]
   [1 3 1]   [0 1  1]
   [0 1 1] ~ [0 0  0]
   [1 3 1]   [0 0  0]
   [1 3 1]   [0 0  0]
Therefore, t3 is a free variable, so let t3 = s ∈ R. Then, the general solution of the homogeneous
system is

   [t1]   [2s]     [ 2]
   [t2] = [−s] = s [−1]
   [t3]   [ s]     [ 1]
Since there are infinitely many solutions, the set is linearly dependent. Moreover, we have that for
any s ∈ R,

      [1]     [2]     [0]   [0]
      [1]     [3]     [1]   [0]
   2s [0] − s [1] + s [1] = [0]
      [1]     [3]     [1]   [0]
      [1]     [3]     [1]   [0]
A17 Consider

   [0]      [0]      [1]      [−4]      [3]   [      t2 − 4t3 + 3t4]
   [0] = t1 [1] + t2 [2] + t3 [ 1] + t4 [1] = [  t1 + 2t2 + t3 + t4]
   [0]      [1]      [3]      [ 2]      [2]   [t1 + 3t2 + 2t3 + 2t4]
   [0]      [0]      [1]      [ 1]      [0]   [             t2 + t3]
Row reducing the coefficient matrix of the corresponding homogeneous system gives

   [0 1 −4 3]   [1 0 0 0]
   [1 2  1 1] ~ [0 1 0 0]
   [1 3  2 2]   [0 0 1 0]
   [0 1  1 0]   [0 0 0 1]
Thus, the only solution is t1 = t2 = t3 = t4 = 0. Therefore, the set is linearly independent.
A18 Consider

   [0]      [1]      [0]      [ 2]   [     t1 + 2t3]
   [0] = t1 [0] + t2 [1] + t3 [−3] = [     t2 − 3t3]
   [0]      [1]      [1]      [−1]   [t1 + t2 − t3 ]
   [0]      [0]      [1]      [ k]   [     t2 + kt3]
Row reducing the coefficient matrix of the corresponding homogeneous system gives

   [1 0  2]   [1 0     2]
   [0 1 −3] ~ [0 1    −3]
   [1 1 −1]   [0 0 k + 3]
   [0 1  k]   [0 0     0]
By Theorem 2.3.3, the set is linearly independent if and only if the rank of the coefficient matrix
equals the number of vectors in the set. Thus, the set is linearly independent if and only if we have 3
leading ones in the matrix. Therefore, we see that the set is linearly independent if and only if k ≠ −3.
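The condition k ≠ −3 can be confirmed numerically by comparing ranks at k = −3 and at another sample value (my sketch, assuming the coefficient matrix above):

```python
import numpy as np

def coeff(k):
    # Coefficient matrix from A18; only the third column depends on k.
    return np.array([[1, 0,  2],
                     [0, 1, -3],
                     [1, 1, -1],
                     [0, 1,  k]], dtype=float)

assert np.linalg.matrix_rank(coeff(-3)) == 2  # dependent exactly when k = -3
assert np.linalg.matrix_rank(coeff(0)) == 3   # independent at a sample k != -3
```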
A19 Consider

   [0]      [1]      [ 1]      [−1]   [ t1 + t2 − t3 ]
   [0] = t1 [1] + t2 [−1] + t3 [ 2] = [t1 − t2 + 2t3 ]
   [0]      [1]      [ 2]      [ k]   [t1 + 2t2 + kt3]
   [0]      [2]      [ 0]      [ 1]   [     2t1 + t3 ]
Row reducing the coefficient matrix of the corresponding homogeneous system gives

   [1  1 −1]   [1 1      −1]
   [1 −1  2] ~ [0 1    −3/2]
   [1  2  k]   [0 0 k + 5/2]
   [2  0  1]   [0 0       0]
By Theorem 2.3.3, the set is linearly independent if and only if the rank of the coefficient matrix
equals the number of vectors in the set. Therefore, the set is linearly independent if and only if
k ≠ −5/2.
A20 To show the set is a basis, we must show it is linearly independent and spans R3. By Theorem 2.3.5,
we can show this by showing that the rank of the coefficient matrix of the system corresponding to

   [v1]      [1]      [ 1]      [2]   [t1 + t2 + 2t3]
   [v2] = t1 [1] + t2 [−1] + t3 [1] = [ t1 − t2 + t3]
   [v3]      [2]      [−1]      [1]   [2t1 − t2 + t3]
is n. Row reducing the coefficient matrix gives

   [1  1 2]   [1 0 0]
   [1 −1 1] ~ [0 1 0]
   [2 −1 1]   [0 0 1]
Therefore, the rank of the coefficient matrix is 3, so the set is a basis for R3 .
A21 To be a basis, the set must span R3 . But, Theorem 2.3.2 says that we cannot span R3 with fewer than
3 vectors. So, since this set has only 2 vectors, it cannot span R3 . Therefore, it is not a basis.
A22 To be a basis, the set must be linearly independent. But, Theorem 2.3.4 says that a set with more
than 3 vectors in R3 cannot be linearly independent. Consequently, the given set is linearly dependent
and so is not a basis.
A23 To show the set is a basis, we must show it is linearly independent and spans R3. By Theorem 2.3.5,
we can show this by showing that the rank of the coefficient matrix of the system corresponding to

   [v1]      [ 1]      [ 1]      [3]   [t1 + t2 + 3t3]
   [v2] = t1 [−1] + t2 [ 2] + t3 [0] = [   −t1 + 2t2 ]
   [v3]      [ 1]      [−1]      [1]   [ t1 − t2 + t3]
is n. Row reducing the coefficient matrix gives

   [ 1  1 3]   [1 0 2]
   [−1  2 0] ~ [0 1 1]
   [ 1 −1 1]   [0 0 0]
Since the rank of the coefficient matrix is 2, the set is not a basis.
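A rank computation confirms this conclusion; the following NumPy check is mine, not part of the printed solution:

```python
import numpy as np

A = np.array([[ 1,  1, 3],
              [-1,  2, 0],
              [ 1, -1, 1]], dtype=float)

# Rank 2 < 3, so the three vectors cannot form a basis for R^3.
assert np.linalg.matrix_rank(A) == 2
```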
B Homework Problems
B1 (b) and (c) have infinitely many possible answers.
   (a) [1, 1, 1]^T is not in the span.
   (b) [7, −9, −1]^T = −3[1, −2, 2]^T + 5[2, −3, 1]^T + 0[−3, 4, 0]^T
   (c) [4, −5, −1]^T = −2[1, −2, 2]^T + 3[2, −3, 1]^T + 0[−3, 4, 0]^T
B2 (a) [1, −5, 2, 1]^T is not in the span.
   (b) [0, −5, −4, 1]^T = 3[1, 1, 0, 1]^T − 5[1, 2, 1, 1]^T + [2, 2, 1, 3]^T
   (c) [5, 4, 3, 9]^T = −2[1, 1, 0, 1]^T − [1, 2, 1, 1]^T + 4[2, 2, 1, 3]^T
B3 (a) [1, 7, 0, 7]^T = 3[0, 2, 0, 2]^T + [3, 3, 6, −3]^T + 2[−1, −1, −3, 2]^T
   (b) [1, 1, 2, 0]^T is not in the span.
   (c) [3, −5, 3, −8]^T = −4[0, 2, 0, 2]^T + 2[3, 3, 6, −3]^T + 3[−1, −1, −3, 2]^T
B4 x1 − x3 = 0
B6 4x1 + x2 − x3 = 0
B8 x1 − x2 = 0, x1 − x3 = 0
B10 5x1 − 7x2 + x3 = 0, −7x1 + 12x2 + x4 = 0
B12 A basis is the empty set. The dimension is 0.
B14 A basis is {[−2, 1, 0, 0]^T}. The dimension is 1.
B5 3x1 + x2 = 0, 2x1 − x3 = 0
B7 x1 − x2 + 2x3 = 0
B9 x1 − 7x2 − 11x3 = 0
B11 2x1 − x3 = 0
B13 A basis is {[4, −1, 1, 0]^T}. The dimension is 1.
B15 A basis is {[4, −3, 1, 0]^T, [7, −4, 0, 1]^T}. The dimension is 2.
B16 It is a basis for the plane.
B17 It is a basis for the plane.
B18 It is not a basis for the plane.
B19 It is not a basis for the plane.
B20 It is not a basis for the hyperplane.
B21 It is a basis for the hyperplane.
B22 Linearly dependent. 5s[1, 0, 2]^T − 4s[1, 1, 3]^T + s[−1, 4, 2]^T = [0, 0, 0]^T
B23 Linearly independent.
B24 Linearly dependent. (3s + t)[5, 3, 1]^T + (−4s − 3t)[2, 4, 1]^T + s[−7, 7, 1]^T + t[1, 9, 2]^T = [0, 0, 0]^T
B25 Linearly dependent. −2s[1, −1, 3, 1]^T − 3s[−1, 1, 2, 1]^T + s[−1, 1, 12, 5]^T = [0, 0, 0, 0]^T
B26 Linearly independent.
B27 Linearly independent.
B28 The set is linearly independent for all k ≠ 1.
B29 The set is linearly independent for all k ≠ 2.
B30 The set only contains 2 vectors, so it cannot span R3 . Thus, it is not a basis.
B31 It is a basis.
B32 The set has 4 vectors in R3 , so it is linearly dependent. Thus, it is not a basis.
B33 It is a basis.
C Conceptual Problems
C1 Let x = [x1, . . . , xn]^T be any vector in Rn. Observe that x = x1 e1 + x2 e2 + · · · + xn en.
Hence, Span B = Rn. Moreover, the only solution to 0 = t1 e1 + · · · + tn en is t1 = · · · = tn = 0, so B is
linearly independent.
C2 The scalar equation of the hyperplane is just a homogeneous system of 1 linear equation. So, we can
use our procedure for finding a basis of a homogeneous system. In particular, we see that x2 , x3 , and
x4 are all free variables. We let x2 = t1 ∈ R, x3 = t2 ∈ R, and x4 = t3 ∈ R. Then, we have
x1 = −at1 − bt2 − ct3
Thus, the general solution is

   [x1]   [−at1 − bt2 − ct3]      [−a]      [−b]      [−c]
   [x2] = [       t1       ] = t1 [ 1] + t2 [ 0] + t3 [ 0],   t1, t2, t3 ∈ R
   [x3]   [       t2       ]      [ 0]      [ 1]      [ 0]
   [x4]   [       t3       ]      [ 0]      [ 0]      [ 1]

Thus, B = {[−a, 1, 0, 0]^T, [−b, 0, 1, 0]^T, [−c, 0, 0, 1]^T} spans the hyperplane. It is easy to show
that B is also linearly independent and hence is a basis for the hyperplane.
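For concrete coefficients the claimed basis can be checked mechanically. The values of a, b, c below are sample assumptions; the identity pattern in the last three coordinates already makes independence clear:

```python
a, b, c = 2, -1, 3  # sample coefficients (any real values work)

# Basis vectors constructed as in the solution, and the hyperplane normal.
basis = [[-a, 1, 0, 0], [-b, 0, 1, 0], [-c, 0, 0, 1]]
normal = [1, a, b, c]   # hyperplane: x1 + a*x2 + b*x3 + c*x4 = 0

for v in basis:
    assert sum(x * n for x, n in zip(v, normal)) == 0  # v lies in the hyperplane
```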
C3 (a) Consider

   c1 w1 + c2 w2 + c3 w3 = 0    (1)

Since w1, w2, w3 ∈ S and B is a basis for S, we can write each wi as a linear combination of the
vectors in B. Say,

   w1 = a11 v1 + a12 v2
   w2 = a21 v1 + a22 v2
   w3 = a31 v1 + a32 v2

Substituting these into equation (1) gives

   0 = c1(a11 v1 + a12 v2) + c2(a21 v1 + a22 v2) + c3(a31 v1 + a32 v2)
     = (c1 a11 + c2 a21 + c3 a31) v1 + (c1 a12 + c2 a22 + c3 a32) v2

Since B is linearly independent, this implies that

   c1 a11 + c2 a21 + c3 a31 = 0
   c1 a12 + c2 a22 + c3 a32 = 0

This is a homogeneous system of 2 equations in 3 unknowns (c1, c2, c3). The rank of the coefficient
matrix is at most 2, so there is at least 3 − 2 = 1 free variable by the System-Rank Theorem (2).
Thus, there are infinitely many solutions, so {w1, w2, w3} is linearly dependent.
(b) Take x1 = v1, x2 = v2, and x3 = 0.
(c) Take y1 = y2 = y3 = 0.
C4 Since dim S = n, it means that S has a basis with n vectors in it. Let B = {v1, . . . , vn} be a basis for
S. By definition of a basis, we know that B is linearly independent. Observe that this implies that the
rank of the coefficient matrix of

   t1 v1 + · · · + tn vn = 0

is n (as otherwise the system would have a free variable, which would make B linearly dependent).
Thus, by Theorem 2.3.5, {v1, . . . , vn} is also a basis for Rn. Thus,

   S = Span{v1, . . . , vn} = Rn
C5 (a) Since dim S = k, it means that S has a basis with k vectors in it. Suppose that B does span S.
Either B is linearly independent, or we can remove vectors using Theorem 1.4.3 until we get a
subset of B that is linearly independent and still spans S. Thus, we can get a basis for S that
contains fewer than k vectors, which contradicts Theorem 2.3.8.
(b) Assume that C is linearly dependent. Then, there is a vector in C that is a linear combination of
the other vectors. Hence, we can remove it from the spanning set using Theorem 1.4.3 without
changing the set it spans. We can continue in this way until we get a subset D of C that is linearly
independent and

   Span D = Span C = S

Then, D is a basis for S containing fewer than k vectors, which contradicts Theorem 2.3.8.
C6 (a) By definition, a line through the origin in R5 has vector equation x = tv, t ∈ R, where v ≠ 0.
Thus, a basis for the line is {v} and so the dimension of the line is 1.
(b) By definition, a plane through the origin in R5 has vector equation x = t1 v1 + t2 v2, t1, t2 ∈ R, where
{v1, v2} is linearly independent. Thus, a basis for the plane is {v1, v2} and so it has dimension 2.
(c) By definition, a hyperplane through the origin in R5 has vector equation

   x = t1 v1 + t2 v2 + t3 v3 + t4 v4,   t1, t2, t3, t4 ∈ R

where {v1, v2, v3, v4} is linearly independent. Thus, a basis for the hyperplane is {v1, v2, v3, v4}
and so it has dimension 4.
Section 2.4
A Practice Problems
A1 For mass m1 the forces are in equilibrium when
10 − 2x1 + (x2 − x1 ) = 0
Rearranging gives
3x1 − x2 = 10
For mass m2 the forces are in equilibrium when
6 − (x2 − x1 ) + 4(x3 − x2 ) = 0
Rearranging gives
−x1 + 5x2 − 4x3 = 6
For mass m3 the forces are in equilibrium when
8 − 4(x3 − x2 ) = 0
Rearranging gives
−4x2 + 4x3 = 8
Row reducing the corresponding augmented matrix gives

   [ 3 −1  0 | 10]   [1 0 0 | 12]
   [−1  5 −4 |  6] ~ [0 1 0 | 26]
   [ 0 −4  4 |  8]   [0 0 1 | 28]
Hence, x1 = 12, x2 = 26, and x3 = 28.
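As a check on the arithmetic (not part of the text's solution), the 3×3 system can be solved directly:

```python
import numpy as np

A = np.array([[ 3, -1,  0],
              [-1,  5, -4],
              [ 0, -4,  4]], dtype=float)
b = np.array([10, 6, 8], dtype=float)

x = np.linalg.solve(A, b)
assert np.allclose(x, [12, 26, 28])
```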




A2 To determine the currents in the loops, it is necessary to write down Kirchhoff's Voltage Law with
Ohm’s Law describing the voltage drop across the resistors. From the diagram, the resulting equations
are:
For the top left loop: R1 i1 + R2 (i1 − i2 ) = E1
For the top middle loop: R2 (i2 − i1 ) + R3 (i2 − i3 ) = 0
For the top right loop: R3 (i3 − i2 ) + R4 i3 + R8 (i3 − i5 ) = 0
For the bottom left loop: R5 i4 + R6 (i4 − i5 ) = 0
For the bottom right loop: R6 (i5 − i4 ) + R8 (i5 − i3 ) + R7 i5 = E2
Multiplying out and collecting terms to display the equations as a system in the variables i1 , i2 , i3 , i4 ,
and i5 yields
(R1 + R2 )i1 − R2 i2 = E1
−R2 i1 + (R2 + R3 )i2 − R3 i3 = 0
−R3 i2 + (R3 + R4 + R8 )i3 − R8 i5 = 0
(R5 + R6 )i4 − R6 i5 = 0
−R8 i3 − R6 i4 + (R6 + R7 + R8 )i5 = E2
A3 We set up a system of linear equations by equating the rate in and the rate out at each intersection.
We get
   Intersection   rate-in     rate-out
   A              30        = x1 + x4
   B              x1        = x2 + 20
   C              x3 + x4   = 60
   D              x2 + 50   = x3
Rearranging gives the system of linear equations
x1 + x4 = 30
x1 − x2 = 20
x3 + x4 = 60
x2 − x3 = −50
Row reducing the corresponding augmented matrix gives

   [1  0  0 1 |  30]   [1 0 0 1 | 30]
   [1 −1  0 0 |  20] ~ [0 1 0 1 | 10]
   [0  0  1 1 |  60]   [0 0 1 1 | 60]
   [0  1 −1 0 | −50]   [0 0 0 0 |  0]

Thus, the flow is given by
x1 = 30 − t
x2 = 10 − t
x3 = 60 − t
x4 = t
for t ∈ R.
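Each member of this one-parameter family balances every intersection; a quick verification for a few values of t (my check, not the text's):

```python
for t in (0, 5, 30):
    x1, x2, x3, x4 = 30 - t, 10 - t, 60 - t, t
    assert 30 == x1 + x4       # intersection A: rate in = rate out
    assert x1 == x2 + 20       # intersection B
    assert x3 + x4 == 60       # intersection C
    assert x2 + 50 == x3       # intersection D
```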
A4 We first multiply both sides of the equation by (x − 1)(x2 + 1)2 to get
4x^4 + x^3 + x^2 + x + 1 = A(x^2 + 1)^2 + (Bx + C)(x^2 + 1)(x − 1) + (Dx + E)(x − 1)
Expanding the right hand side and collecting coefficients of like powers of x gives
4x^4 + x^3 + x^2 + x + 1 = (A + B)x^4 + (−B + C)x^3 + (2A + B − C + D)x^2 + (−B + C − D + E)x + (A − C − E)
Hence, we have the system of linear equations
A+B=4
−B + C = 1
2A + B − C + D = 1
−B + C − D + E = 1
A−C −E = 1
Row reducing the corresponding augmented matrix gives

   [1  1  0  0  0 | 4]   [1 0 0 0 0 |  2]
   [0 −1  1  0  0 | 1]   [0 1 0 0 0 |  2]
   [2  1 −1  1  0 | 1] ~ [0 0 1 0 0 |  3]
   [0 −1  1 −1  1 | 1]   [0 0 0 1 0 | −2]
   [1  0 −1  0 −1 | 1]   [0 0 0 0 1 | −2]

Hence,

   (4x^4 + x^3 + x^2 + x + 1)/((x − 1)(x^2 + 1)^2) = 2/(x − 1) + (2x + 3)/(x^2 + 1) + (−2x − 2)/(x^2 + 1)^2
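The 5×5 system for (A, B, C, D, E) can be re-solved numerically as a check (a sketch of mine, not part of the solution):

```python
import numpy as np

# Rows are the five equations in the unknowns (A, B, C, D, E).
M = np.array([[1,  1,  0,  0,  0],   # A + B = 4
              [0, -1,  1,  0,  0],   # -B + C = 1
              [2,  1, -1,  1,  0],   # 2A + B - C + D = 1
              [0, -1,  1, -1,  1],   # -B + C - D + E = 1
              [1,  0, -1,  0, -1]],  # A - C - E = 1
             dtype=float)
rhs = np.array([4, 1, 1, 1, 1], dtype=float)

assert np.allclose(np.linalg.solve(M, rhs), [2, 2, 3, -2, -2])
```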
A5 We want to find constants x1 , x2 , x3 , x4 such that
x1 Al(OH)3 + x2 H2CO3 → x3 Al2(CO3)3 + x4 H2O

is balanced. Define vectors in R4 (one entry per element) by

   [# of aluminum atoms]
   [# of oxygen atoms  ]
   [# of hydrogen atoms]
   [# of carbon atoms  ]
We get the vector equation

      [1]      [0]      [2]      [0]
   x1 [3] + x2 [3] = x3 [9] + x4 [1]
      [3]      [2]      [0]      [2]
      [0]      [1]      [3]      [0]
Rearranging gives the homogeneous system
x1 − 2x3 = 0
3x1 + 3x2 − 9x3 − x4 = 0
3x1 + 2x2 − 2x4 = 0
x2 − 3x3 = 0
Row reducing the corresponding coefficient matrix gives

   [1 0 −2  0]   [1 0 0 −1/3]
   [3 3 −9 −1] ~ [0 1 0 −1/2]
   [3 2  0 −2]   [0 0 1 −1/6]
   [0 1 −3  0]   [0 0 0    0]

We find that a vector equation for the solution space is

   [x1, x2, x3, x4]^T = t [1/3, 1/2, 1/6, 1]^T,   t ∈ R
To get the smallest positive integer values, we take t = 6. This gives x1 = 2, x2 = 3, x3 = 1, and
x4 = 6. Thus, a balanced chemical equation is
2Al(OH)3 + 3H2CO3 → Al2(CO3)3 + 6H2O
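The balanced equation can be confirmed by totalling atoms (Al, O, H, C) on each side; this check is mine, not part of the text:

```python
# Atom counts (Al, O, H, C) per molecule.
al_oh3, h2co3 = [1, 3, 3, 0], [0, 3, 2, 1]
al2_co3_3, h2o = [2, 9, 0, 3], [0, 1, 2, 0]

lhs = [2*a + 3*b for a, b in zip(al_oh3, h2co3)]
rhs = [c + 6*d for c, d in zip(al2_co3_3, h2o)]
assert lhs == rhs == [2, 15, 12, 3]
```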
A6 Let R(x, y) = x + y be the function to be maximized. The vertices are found to be (0, 0), (100, 0),
(100, 40), (0, 80), (50, 80). Now compare the value of R(x, y) at all the vertices: R(0, 0) = 0, R(100, 0) =
100, R(100, 40) = 140, R(0, 80) = 80, R(50, 80) = 130. So the maximum value is 140 (occurring at
the vertex (100, 40)).
[Figure: the feasible region bounded by y = 80, 4x + 5y = 600, and x = 100, with the level line
x + y = 140 passing through the optimal vertex (100, 40).]
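Since the maximum of a linear function over a polygonal feasible region occurs at a vertex, the whole computation reduces to evaluating R at the five vertices (a quick check of mine):

```python
vertices = [(0, 0), (100, 0), (100, 40), (50, 80), (0, 80)]
values = {v: v[0] + v[1] for v in vertices}   # R(x, y) = x + y at each vertex

best = max(values, key=values.get)
assert best == (100, 40) and values[best] == 140
```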
A7 To simplify writing, let α = 1/√2.
Total horizontal force: R1 + R2 = 0.
Total vertical force: RV − FV = 0.
Total moment about A: R1 s + FV (2√2 s) = 0.
The horizontal and vertical equations at the joints A, B, C, D, E are
αN2 + R2 = 0 and N1 + αN2 + RV = 0;
N3 + αN4 + R1 = 0 and −N1 + αN4 = 0;
−αN2 − N3 + αN6 = 0 and −αN2 + N5 + αN6 = 0;
−αN4 + N7 = 0 and −αN4 − N5 = 0;
−N7 − αN6 = 0 and −αN6 − FV = 0.
B Homework Problems
B1 x1 = 4 m, x2 = 9 m, x3 = 16 m, x4 = 17 m
B2 x1 = 2 m, x2 = 3 m, x3 = 2 m
B3 I1 = −1/7 A, I2 = 16/7 A
B4 I1 = 49/22 A, I2 = 19/22 A, I3 = 12/11 A
B5 (a) The flow is given by x1 = 20 + s, x2 = −15 + s + t, x3 = 40 − s − t, x4 = s, x5 = t, for s, t ∈ R.
(b) x1 = 40, x2 = 25, x3 = 0.
B6 (2x^2 + 3x − 3)/((x^2 + 3)(x^2 + 3x + 3)) = (x + 1)/(x^2 + 3) + (−x − 2)/(x^2 + 3x + 3)
B7 (x^3 + 2x^2 − 1)/((x − 1)(x^2 + 1)(x^2 + 3)) = (1/4)/(x − 1) + (x + 1/2)/(x^2 + 1) + (−(5/4)x + 1/4)/(x^2 + 3)
B8 C3H8 + 5O2 → 3CO2 + 4H2O
B9 Ca3(PO4)2 + 3SiO2 + 5C → 3CaSiO3 + 5CO + 2P
B10 Let α = 1/√2.
Total horizontal force: R1 + R2 = 0.
Total vertical force: RV − FV = 0.
Total moment about A: R1 s + FV s = 0.
The horizontal and vertical equations at the joints A, B, C, D are
αN2 + N3 − R2 = 0
N1 + αN2 = 0
αN5 − R1 = 0
−N1 − αN5 + RV = 0
−αN2 + αN4 − αN5 = 0
−αN2 − αN4 + αN5 = 0
−N3 − αN4 = 0
αN4 − FV = 0
Chapter Quiz

E1 (a)
   [0  1 −2  1 |  2]
   [2 −2  4 −1 | 10]
   [1 −1  1  0 |  2]
   [1  0  1  0 |  9]
(b) Row reducing the augmented matrix gives

   [0  1 −2  1 |  2] R1 ↔ R3    [1 −1  1  0 |  2]
   [2 −2  4 −1 | 10]     ~      [2 −2  4 −1 | 10] R2 − 2R1
   [1 −1  1  0 |  2]            [0  1 −2  1 |  2]
   [1  0  1  0 |  9]            [1  0  1  0 |  9] R4 − R1

     [1 −1  1  0 | 2]             [1 −1  1  0 | 2] R1 + R2
   ~ [0  0  2 −1 | 6] R2 ↔ R4   ~ [0  1  0  0 | 7]
     [0  1 −2  1 | 2]             [0  1 −2  1 | 2] R3 − R2
     [0  1  0  0 | 7]             [0  0  2 −1 | 6]

     [1  0  1  0 |  9] R1 + (1/2)R3    [1 0  0 1/2 | 13/2]
   ~ [0  1  0  0 |  7]               ~ [0 1  0   0 |    7]
     [0  0 −2  1 | −5]                 [0 0 −2   1 |   −5]
     [0  0  2 −1 |  6] R4 + R3         [0 0  0   0 |    1]

Thus, the rank of the augmented matrix is 4.
(c) Since the rank of the augmented matrix is greater than the rank of the coefficient matrix, the
system is inconsistent.

E2 (a)
   [ 2  4  1 −6 |  7]
   [ 4  8 −3  8 | −1]
   [−3 −6  2 −5 |  0]
   [ 1  2  1 −5 |  5]
(b) Row reducing the augmented matrix gives

   [ 2  4  1 −6 |  7] R1 ↔ R4    [ 1  2  1 −5 |  5]
   [ 4  8 −3  8 | −1]     ~      [ 4  8 −3  8 | −1] R2 − 4R1
   [−3 −6  2 −5 |  0]            [−3 −6  2 −5 |  0] R3 + 3R1
   [ 1  2  1 −5 |  5]            [ 2  4  1 −6 |  7] R4 − 2R1

     [1 2  1  −5 |   5]               [1 2 1 −5 | 5] R1 − R2
   ~ [0 0 −7  28 | −21] (−1/7)R2    ~ [0 0 1 −4 | 3]
     [0 0  5 −20 |  15] (1/5)R3       [0 0 1 −4 | 3] R3 − R2
     [0 0 −1   4 |  −3] −R4           [0 0 1 −4 | 3] R4 − R2

     [1 2 0 −1 | 2]
   ~ [0 0 1 −4 | 3]
     [0 0 0  0 | 0]
     [0 0 0  0 | 0]

Thus, the rank of the augmented matrix is 2.
(c) Observe that x2 and x4 are free variables. We have

   x1 + 2x2 − x4 = 2 ⇒ x1 = 2 − 2x2 + x4
   x3 − 4x4 = 3 ⇒ x3 = 3 + 4x4

Hence, the general solution is

   [x1]   [2 − 2x2 + x4]   [2]      [−2]      [1]
   [x2] = [     x2     ] = [0] + x2 [ 1] + x4 [0],   x2, x4 ∈ R
   [x3]   [  3 + 4x4   ]   [3]      [ 0]      [4]
   [x4]   [     x4     ]   [0]      [ 0]      [1]
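One can confirm that every member of this two-parameter family satisfies the original system (a numerical check of mine, not part of the solution):

```python
import numpy as np

A = np.array([[ 2,  4,  1, -6],
              [ 4,  8, -3,  8],
              [-3, -6,  2, -5],
              [ 1,  2,  1, -5]], dtype=float)
b = np.array([7, -1, 0, 5], dtype=float)

p = np.array([2, 0, 3, 0], dtype=float)   # particular solution
u = np.array([-2, 1, 0, 0], dtype=float)  # x2-direction
v = np.array([1, 0, 4, 1], dtype=float)   # x4-direction

for s, t in [(0, 0), (1, -2), (3, 5)]:
    assert np.allclose(A @ (p + s*u + t*v), b)
```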
E3 (a) Row reducing we get

   [ 0  3  3  0 −1] R1 ↔ R2    [1 1 3 3  1]
   [ 1  1  3  3  1]     ~      [0 3 3 0 −1] (1/3)R2
   [ 2  4  9  6  1]            [2 4 9 6  1] R3 − 2R1
   [−2 −4 −6 −3 −1] R4 + R3    [0 0 3 3  0] (1/3)R4

     [1 1 3 3    1] R1 − R2     [1 0 2 3  4/3] R1 − 2R3
   ~ [0 1 1 0 −1/3]           ~ [0 1 1 0 −1/3] R2 − R3
     [0 2 3 0   −1] R3 − 2R2    [0 0 1 0 −1/3]
     [0 0 1 1    0]             [0 0 1 1    0] R4 − R3

     [1 0 0 3    2] R1 − 3R4    [1 0 0 0    1]
   ~ [0 1 0 0    0]           ~ [0 1 0 0    0]
     [0 0 1 0 −1/3]             [0 0 1 0 −1/3]
     [0 0 0 1  1/3]             [0 0 0 1  1/3]

Hence, the rank is 4.
(b) Rewriting the RREF back as a homogeneous system of linear equations gives

   x1 + x5 = 0
   x2 = 0
   x3 − (1/3)x5 = 0
   x4 + (1/3)x5 = 0

Since x5 is a free variable, we let x5 = t ∈ R. Then, we find that the general solution of the
homogeneous system is

   x = [x1, x2, x3, x4, x5]^T = t [−1, 0, 1/3, −1/3, 1]^T

(c) Since the set B = {[−1, 0, 1/3, −1/3, 1]^T} contains a single non-zero vector, it is linearly
independent. We have shown in (b) that B also spans the solution space. Thus, B is a basis for the
solution space. Hence, the dimension of the solution space is 1.
E4 A vector x ∈ R3 is in this set if and only if there exist t1, t2 ∈ R such that

      [ 1]      [1]   [x1]
   t1 [−1] + t2 [2] = [x2]
      [ 2]      [5]   [x3]

We row reduce the corresponding augmented matrix:

   [ 1 1 | x1]           [1 1 |        x1]           [1 1 |             x1]
   [−1 2 | x2] R2 + R1 ~ [0 3 |   x1 + x2]         ~ [0 3 |        x1 + x2]
   [ 2 5 | x3] R3 − 2R1  [0 3 | −2x1 + x3] R3 − R2   [0 0 | −3x1 − x2 + x3]

Thus, −3x1 − x2 + x3 = 0 defines the set.
E5 (a) The system is inconsistent if and only if some row of A is of the form [0 · · · 0 | c] with c ≠ 0.
Thus, the system is inconsistent when either b + 2 = 0 and b ≠ 0, or when c^2 − 1 = 0 and c + 1 ≠ 0;
or equivalently, when b = −2 or c = 1. Thus, the system is inconsistent for all (a, b, c) of the form
(a, b, 1) or (a, −2, c), and is consistent for all (a, b, c) where b ≠ −2 and c ≠ 1.
(b) The system has a unique solution if the number of leading ones in the RREF of A equals the
number of variables in the system. So, to have a unique solution we require b + 2 ≠ 0 and
c^2 − 1 ≠ 0. Hence, the system has a unique solution if and only if b ≠ −2, c ≠ 1, and c ≠ −1.
E6 (a) For x to be orthogonal to v1, v2, v3, we must have

   0 = x · v1 = x1 + x2 + 3x3 + x4 + 4x5
   0 = x · v2 = 2x1 + x2 + 5x3
   0 = x · v3 = 3x1 + 2x2 + 8x3 + 5x4 + 9x5
which is a homogeneous system of three equations in five variables. Row reducing the coefficient
matrix gives

   [1 1 3 1 4]   [1 0 2 0 −11/4]
   [2 1 5 0 0] ~ [0 1 1 0  11/2]
   [3 2 8 5 9]   [0 0 0 1   5/4]

Let x3 = s ∈ R and x5 = t ∈ R; then we get that the general solution is

   x = s [−2, −1, 1, 0, 0]^T + t [11/4, −11/2, 0, −5/4, 1]^T,   s, t ∈ R
These are all vectors in R5 which are orthogonal to the three vectors.
(b) If there exists a vector x ∈ R5 which is orthogonal to u, v, and w, then x · u = 0, x · v = 0, and
x · w = 0, which yields a homogeneous system of three linear equations with five variables. Hence,
the rank of the matrix is at most three, and thus there are at least 2 parameters (# of variables − rank
= 5 − 3 = 2). So, there are in fact infinitely many vectors orthogonal to the three vectors.
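Exact arithmetic with fractions verifies that both spanning directions of the solution set are orthogonal to v1, v2, v3 (my check, not part of the solution):

```python
from fractions import Fraction

v1 = [1, 1, 3, 1, 4]
v2 = [2, 1, 5, 0, 0]
v3 = [3, 2, 8, 5, 9]
u = [-2, -1, 1, 0, 0]                                           # s-direction
w = [Fraction(11, 4), Fraction(-11, 2), 0, Fraction(-5, 4), 1]  # t-direction

dot = lambda a, b: sum(x * y for x, y in zip(a, b))
for d in (u, w):
    assert dot(d, v1) == 0 and dot(d, v2) == 0 and dot(d, v3) == 0
```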
E7 Consider

   [0]      [ 1]      [1]      [2]   [ t1 + t2 + 2t3]
   [0] = t1 [−3] + t2 [1] + t3 [1] = [−3t1 + t2 + t3]
   [0]      [ 2]      [2]      [1]   [2t1 + 2t2 + t3]

We row reduce the coefficient matrix of the corresponding system.

   [ 1 1 2] R2 + 3R1   [1 1  2]             [1 1 2]
   [−3 1 1]     ~      [0 4  7]      ~      [0 4 7]
   [ 2 2 1] R3 − 2R1   [0 0 −3] (−1/3)R3    [0 0 1]
Since the rank of the coefficient matrix equals the number of columns, the set is linearly independent
by Theorem 2.3.3.
E8 We want to determine if

   [b1]      [2]      [3]
   [b2] = t1 [1] + t2 [2]
   [b3]      [1]      [1]
is consistent for all b1 , b2 , b3 ∈ R. Observe that this corresponds to a system of 3 linear equations in
2 unknowns. Hence, the rank is at most 2, so by Theorem 2.3.1, the set does not span R3 .
E9 Consider x = t1 [3, 1, 2]^T + t2 [1, 1, 6]^T + t3 [4, 1, 5]^T. Row reducing the corresponding
coefficient matrix gives

   [3 1 4]   [1 0 0]
   [1 1 1] ~ [0 1 0]
   [2 6 5]   [0 0 1]

Thus, it is a basis for R3 by Theorem 2.3.5.
E10 False. The equation is not linear since it cannot be written in the form ax1 +bx2 = c, where a, b, c ∈ R.
E11 False. The system x1 + x2 = 0 is consistent, but has infinitely many solutions.
E12 False. The system x1 = 1, 2x1 = 2 has more equations than variables, but is consistent.
E13 False. Consider the system x1 = 1, x2 = 1, x3 = 1. It has [1, 1, 1]^T as a solution, but [2, 2, 2]^T is
not a solution.
E14 True. The system x1 = 0 is homogeneous and has a unique solution.
E15 True. If the system is inconsistent, then it doesn’t have any solutions. If the system is consistent
and there are more variables than equations, then by the System-Rank Theorem (2) there must be
parameters since the rank of the matrix could be at most equal to the number of equations. Hence,
the system cannot have a unique solution.
Further Problems
F1 (a) Solve the homogeneous system by back-substitution to get solution

   xH = t [−r13, −r23, 1]^T,   t ∈ R

For the non-homogeneous system we get solution

   xN = [c1, c2, 0]^T + t [−r13, −r23, 1]^T,   t ∈ R
(b) For the homogeneous system we get

   xH = t1 [−r12, 1, 0, 0, 0]^T + t2 [−r15, 0, −r25, −r35, 1]^T,   t1, t2 ∈ R

For the non-homogeneous system we get solution

   xN = [c1, 0, c2, c3, 0]^T + s [−r12, 1, 0, 0, 0]^T + t [−r15, 0, −r25, −r35, 1]^T,   s, t ∈ R
(c) Let L(i) be the column index of the leading 1 in the i-th row of R; then xL(i) is the i-th leading
variable. Since the rank of R is k, L(i) is defined only for i = 1, . . . , k. Let xN(j) denote the j-th
non-leading variable.
In terms of this notation, the fact that R is in RREF implies that entries above and below the
leading 1s are zero, so that (R)iL(i) = 1, but (R)hL(i) = 0 if h ≠ i. It is a little more awkward to
express the fact that the entries in the lower left of R must also be zero:

   (R)hN(j) = 0   if h > [the greatest i with L(i) less than N(j)]

We also need some way of keeping coordinates in the correct slot, and we use the standard basis
vectors ei for this. Now we construct the general solution by back-substitution: let xN(j) = tj,
solve, and collect terms in tj:

   xH = t1 (−(R)1N(1) eL(1) − · · · − (R)kN(1) eL(k) + eN(1))
      + t2 (−(R)1N(2) eL(1) − · · · − (R)kN(2) eL(k) + eN(2)) + · · ·
      + t(n−k) (−(R)1N(n−k) eL(1) − · · · − (R)kN(n−k) eL(k) + eN(n−k))

Recall that some of these (R)ij are equal to zero. Thus, if we let

   vj = −(R)1N(j) eL(1) − · · · − (R)kN(j) eL(k) + eN(j)

we have obtained the required expression for the solution xH of the corresponding homogeneous
problem.
For the non-homogeneous system, note that the entries on the right-hand side of the equations
are associated with leading variables. Thus we proceed as above and find that

   xN = c1 eL(1) + · · · + ck eL(k) + xH

Thus

   p = c1 eL(1) + · · · + ck eL(k)

We refer to p as a particular solution for the inhomogeneous system.
(d) With the stated assumptions, [A | b] is row equivalent to some [R | c], where R is in RREF.
Therefore, we know that the system [A | b] has a solution of the form found in part (c): the general
solution of the inhomogeneous system can be written as a sum of a particular solution p and the
general solution of the corresponding homogeneous system [A | 0].
F2 It will be convenient to have notation for keeping track of multiplications and divisions. We shall
write statements such as M = n2 to indicate the number of multiplications and divisions to this point
in the argument is n2 .
(a) We first calculate a21 /a11 , (M = 1). We know that a21 − (a21 /a11 )a11 = 0, so no calculation
is required in the first column, but we must then calculate a2 j − (a21 /a11 )a1 j for 2 ≤ j ≤ n,
(M = 1 + (n − 1) = n), and also b2 − (a21 /a11 )b1 , M = n + 1.
We now have a zero as the entry in the first column, second row. We must do exactly the same
number of operations to get zeros in the first column in rows 3, . . . , n. Thus we do (n + 1)
operations on each of (n − 1) rows, so M = (n − 1)(n + 1) = n^2 − 1.
Now we ignore the first row and first column and turn our attention to the remaining (n−1)×(n−1)
coefficient matrix and its corresponding augmenting column. Since the n × n case requires n2 − 1
operations, the (n − 1) × (n − 1) case requires (n − 1)2 − 1 operations. Similarly, the (n − i) × (n − i)
case requires (n − i)2 − 1 operations.
When we have done all this, we have the required form [R | c], and

   M = (n^2 − 1) + ((n − 1)^2 − 1) + · · · + (2^2 − 1) + (1^2 − 1)
     = (Σ_{k=1}^{n} k^2) − n = n(n + 1)(2n + 1)/6 − n
     = (1/3)n^3 + (1/2)n^2 − (5/6)n
(b) We use MB for our counter for back-substitution. We begin by solving for the n-th variable by
xn = dn/cnn, (MB = 1). Next we substitute this value into the (n − 1)-st equation:

   x(n−1) = (1/c(n−1)(n−1)) (d(n−1) − c(n−1),n xn)

This requires one multiplication and one division, so MB = 1 + 2. Continuing in this way, we find
that the total for the procedure of back-substitution is

   MB = 1 + 2 + 3 + · · · + n = n(n + 1)/2
(c) For the extra steps in the Gauss-Jordan procedure, we denote our counter by MJ. We again begin
with the matrix [C | !d]. We start by dividing the last row by cnn; we know that the entry in
the n-th row and n-th column becomes 1, so no operation is required. We do calculate dn/cnn,
MJ = 1. Next we obtain zeros in all the entries of the n-th column above the last row: no actual
operations are required for these entries themselves, but we must apply the appropriate operation
in the augmenting column: dj − cjn dn/cnn. We require one operation on each of (n − 1) rows, so
MJ = 1 + (n − 1). Now we turn to the (n − 1) × (n − 1) coefficient matrix consisting of the first (n − 1)
rows and (n − 1) columns with the appropriate augmenting column. By an argument similar to
that just given, we require a further 1 + (n − 2) operations, so now MJ = [1 + (n − 1)] + [1 + (n − 2)].
Continuing in this way, we find that to fully row reduce the matrix we require
  MJ = [1 + (n − 1)] + [1 + (n − 2)] + · · · + [1 + 0] = n(n + 1)/2
operations, the same number as required for back-substitution.
Notice that the total number of operations required by elimination and back-substitution (or by
Gauss-Jordan elimination) is
  (1/3)n³ + n² − (1/3)n
For large n, the important term is (1/3)n³.
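These tallies are easy to sanity-check mechanically. The short Python sketch below (an illustration, not part of the original solution) recomputes M and MB by direct summation and compares them with the closed forms above:

```python
def elimination_ops(n):
    # Stage on a k x k block: (k - 1) rows below the pivot are changed,
    # each needing 1 division and k multiplications, so k^2 - 1 ops in all.
    return sum(k * k - 1 for k in range(1, n + 1))

def back_sub_ops(n):
    # Solving for x_k takes (n - k) multiplications and 1 division.
    return sum((n - k) + 1 for k in range(1, n + 1))

for n in range(1, 25):
    assert elimination_ops(n) == n * (n + 1) * (2 * n + 1) // 6 - n
    assert back_sub_ops(n) == n * (n + 1) // 2
    # Combined total: (1/3)n^3 + n^2 - (1/3)n
    assert 3 * (elimination_ops(n) + back_sub_ops(n)) == n**3 + 3 * n**2 - n
```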
(d) Here our counter is MC. We begin as with standard elimination, so to obtain zeros below the first
entry in the first column, we have MC = (n + 1)(n − 1). Next we must obtain zeros above and
below the entry in the second row and second column. Since the entries have all been changed,
we do not really have a correct notation for the next calculation, but it is of the form: calculate
  aj2/a22   (one division)
then calculate
  ajk − (aj2/a22)a2k   (one multiplication)
We must do this for each entry in (n − 1) rows (all the rows except the second), and n + 1 −
1 = n columns (include the augmenting column, but omit the first column). Thus we require an
additional n(n − 1) operations, so to this point, MC = (n + 1)(n − 1) + n(n − 1).
Continuing in this way, we find that
  MC = (n + 1)(n − 1) + n(n − 1) + · · · + 2(n − 1)
     = (n − 1) Σ_{k=2}^{n+1} k = (n − 1)[(n + 1)(n + 2)/2 − 1]
     = (1/2)n³ + n² − (3/2)n
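The count MC can be checked the same way; the sketch below (illustrative, not part of the original solution) sums the per-stage costs and compares with the closed form just derived:

```python
def gauss_jordan_direct_ops(n):
    # Stage i changes (n - 1) rows; each row costs one division plus
    # (n + 1 - i) multiplications, i.e. (n - 1)(n + 2 - i) ops, so the
    # per-stage factors are (n + 1), n, ..., 2, each times (n - 1).
    return sum((n - 1) * k for k in range(2, n + 2))

for n in range(1, 25):
    # Closed form from the text: (1/2)n^3 + n^2 - (3/2)n
    assert 2 * gauss_jordan_direct_ops(n) == n**3 + 2 * n**2 - 3 * n
```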
Chapter 3 Solutions
Section 3.1
A Practice Problems
A1 [2 −2 3; 4 1 −1] + [−3 −4 1; 2 −5 3] = [−1 −6 4; 6 −4 2]
A2 [6 −6 9; 12 3 −3] + [6 8 −2; −4 10 −6] + [1 2 4; −2 1 −2] = [13 4 11; 6 14 −11]
A3 C!x is not defined since !x does not have the same number of entries as C has columns.
A4 AB is not defined since B does not have the same number of rows as A has columns.
A5 [2 −2 3; 4 1 −1][1 −2; 2 1; 4 −2] = [10 −12; 2 −5]
A6 [1 −2; 2 1; 4 −2][−3 −4 1; 2 −5 3] = [−7 6 −5; −4 −13 5; −16 −6 −2]
A7 [1 −2; 2 1; 4 −2][5 3; −1/2 1/3] = [6 7/3; 19/2 19/3; 21 34/3]
A8 ([5 −1/2; 3 1/3][2 −2 3; 4 1 −1])[3; 1; 0] = [8 −21/2 31/2; 22/3 −17/3 26/3][3; 1; 0] = [27/2; 49/3]
A9 !xT !x = !x · !x = 3(3) + 1(1) + 0(0) = 10
A10 We take A = [3 2 −1; 2 −1 5], !x = [x1; x2; x3], !b = [4; 5]
A11 We take A = [1 −4 1 −2; 1 −1 3 0], !x = [x1; x2; x3; x4], !b = [1; 0]
A12 We take A = [3 −1/4 1/3; 1 0 1; 1 −1 0], !x = [x1; x2; x3], !b = [1; 2/3; 3]
A13 We take A = [1 −1; 3 1; 5 −8], !x = [x1; x2], !b = [3; 4; 17]
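Matrix products such as those in A5 and A6 are easy to spot-check with a short script. The sketch below (illustrative, not part of the text) implements matrix multiplication over lists of rows and verifies both products:

```python
def matmul(A, B):
    """Multiply two matrices stored as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

# A5: [2 -2 3; 4 1 -1] times [1 -2; 2 1; 4 -2]
assert matmul([[2, -2, 3], [4, 1, -1]],
              [[1, -2], [2, 1], [4, -2]]) == [[10, -12], [2, -5]]

# A6: [1 -2; 2 1; 4 -2] times [-3 -4 1; 2 -5 3]
assert matmul([[1, -2], [2, 1], [4, -2]],
              [[-3, -4, 1], [2, -5, 3]]) == [[-7, 6, -5],
                                             [-4, -13, 5],
                                             [-16, -6, -2]]
```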
A14 TRUE. Since A has 2 columns, !x must have 2 entries.
A15 FALSE. Since A has 2 rows, we get that A!x ∈ R2.
A16 FALSE. If B is the identity matrix, then AB = A = BA by Theorem 3.1.6.
A17 TRUE. By definition of matrix-matrix multiplication AT A will have n rows (since AT has n rows) and
will have n columns (since A has n columns). Thus, AT A is n × n.
A18 FALSE. Take A = [0 1; 0 0]. Then A² = AA = [0 0; 0 0]. (Such a matrix A is called a nilpotent matrix.
These matrices have many interesting properties.)
A19 FALSE. As in part (e), we can just take A = B = [0 1; 0 0].
A20 Since A and B have the same size, A + B is defined. The number of columns of A does not equal the
number of rows of B, so AB is not defined. We have
  (A + B)T = [−3 −1; 2 2; 1 3]T = [−3 2 1; −1 2 3]
  AT + BT = [1 1 −2; 2 3 1] + [−4 1 3; −3 −1 2] = [−3 2 1; −1 2 3]
A21 Since A and B are different sizes, A + B is not defined. The number of columns of A equals the number
of rows of B, so AB is defined. We have
  (AB)T = [−21 15; −10 −27]T = [−21 −10; 15 −27]
  BT AT = [−3 5 1; −4 −2 3][2 4; −4 1; 5 −3] = [−21 −10; 15 −27]
A22 AB = [13 31 2; 10 12 10]
A23 BA is not defined since the number of rows of A does not equal the number of columns of B.
A24 DC is not defined since the number of rows of C does not equal the number of columns of D.
A25 CT D = [11 7 3 15; 7 9 11 1]
A26 !xT !y = !x · !y = x1(1) + x2(2) + x3(1) = x1 + 2x2 + x3
A27 !x !x is not defined.
A28 A(BC) = [2 5; −1 3][−14 17; 16 21] = [52 139; 62 46]
A29 (AB)C = [13 31 2; 10 12 10][1 4; 1 3; 4 −3] = [52 139; 62 46]
A30 (AB)T = [13 10; 31 12; 2 10]
A31 (a) A!x = [12; 17; 3], A!y = [8; 4; −4], A!z = [−2; 5; 1]
(b) We have
  A[3 0 1; 1 −1 2; 4 −1 1] = A[!x !y !z] = [A!x A!y A!z] = [12 8 −2; 17 4 5; 3 −4 1]
A32 We have
  [2 3 −4 5; −4 1 2 1][6 3; −2 4; 1 3; −3 2] = [2 3; −4 1][6 3; −2 4] + [−4 5; 2 1][1 3; −3 2]
  = [6 18; −26 −8] + [−19 −2; −1 8] = [−13 16; −27 0]
A33 (a) We need to determine if there exist t1, t2, and t3 such that
  [2 3; 2 −3] = t1[1 2; 1 0] + t2[0 1; −1 2] + t3[1 1; 3 −1] = [t1 + t3, 2t1 + t2 + t3; t1 − t2 + 3t3, 2t2 − t3]
Comparing corresponding entries gives the system of linear equations
  t1 + t3 = 2
  2t1 + t2 + t3 = 3
  t1 − t2 + 3t3 = 2
  2t2 − t3 = −3
Row reducing the corresponding augmented matrix gives
  [1 0 1 | 2; 2 1 1 | 3; 1 −1 3 | 2; 0 2 −1 | −3] ∼ [1 0 0 | 3; 0 1 0 | −2; 0 0 1 | −1; 0 0 0 | 0]
The system is consistent and so A ∈ Span B. In particular, we have t1 = 3, t2 = −2, and t3 = −1,
so
  3[1 2; 1 0] + (−2)[0 1; −1 2] + (−1)[1 1; 3 −1] = [2 3; 2 −3]
(b) Consider
  [0 0; 0 0] = c1[1 2; 1 0] + c2[0 1; −1 2] + c3[1 1; 3 −1] = [c1 + c3, 2c1 + c2 + c3; c1 − c2 + 3c3, 2c2 − c3]
Row reducing the coefficient matrix of the corresponding homogeneous system gives
  [1 0 1; 2 1 1; 1 −1 3; 0 2 −1] ∼ [1 0 0; 0 1 0; 0 0 1; 0 0 0]
Hence, the only solution is the trivial solution, so B is linearly independent.
Observe that we had actually done the necessary calculations in part (a) to determine that the
set was linearly independent. In particular, the systems in part (a) and part (b) have the same
coefficient matrix.
A34 Using the second view of matrix-vector multiplication and the fact that the i-th component of !ei is 1
and all other components are 0, we get
A!ei = 0!a1 + · · · + 0!ai−1 + 1!ai + 0!ai+1 + · · · + 0!an = !ai
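The identity A!ei = !ai in A34 can be checked numerically. In the sketch below (illustrative only), the matrix A is an arbitrary example; applying A to each standard basis vector picks out the corresponding column:

```python
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[1, 2, -3], [4, 0, 5]]          # arbitrary example matrix
for i in range(3):
    e_i = [1 if j == i else 0 for j in range(3)]
    # A e_i equals the i-th column of A
    assert matvec(A, e_i) == [row[i] for row in A]
```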
B Homework Problems
B1 [11 −3; −1 0; 2 2]   B2 [11 10; −6 5; −4 12]   B3 undefined
B4 undefined   B5 [6; −7]   B6 [22 11; 22 −11]
B7 [55 22; 2 −15; 24 −4]   B8 undefined   B9 [−3 34; 13 14]
B10 undefined   B11 [41; −19; −28]   B12 [6 5]
B13 A = [1 −3 1 1; 1 −1 1 2], !x = [x1; x2; x3; x4], !b = [3; 4]
B14 A = [2 1 5; 3 −1 −2], !x = [x1; x2; x3], !b = [0; 0]
B15 A = [1 1 −1; 1 2 1; 2 −3 0], !x = [x1; x2; x3], !b = [1; 9; −3]
B16 A = [3 2; 8 −1; 7 −4], !x = [x1; x2], !b = [1; 1; 1]
B17 FALSE   B18 TRUE   B19 FALSE   B20 FALSE   B21 TRUE   B22 FALSE
B23 Both A + B and AB are defined. (A + B)T = [7 2; 0 7] = AT + BT and (AB)T = [12 2; 4 18] = BT AT.
B24 A + B is not defined. AB is defined. (AB)T = [24 0 6; 42 −6 6; −39 −7 −15] = BT AT.
B25 [18 33; 10 20]   B26 [4 7 9; 7 1 0; 10 25 33]   B27 undefined
B28 [6x1 + 3x2 + 3x3; −x1 + 2x2 + 3x3; 2x1 + 2x2 + x3]   B29 [12 6 6; −9 3 6]   B30 [4; 1; 14]
B32 undefined   B33 9   B34 [3 99; 0 60]   B35 [12 −9; 6 3; 6 6]   B36 [12 −9; 6 3; 6 6]
B37 A!x = [15; 16; 2], A!y = [−40; −6; −13], A!z = [7; 5; 2]   B39 [15 −40 7; 16 −6 5; 2 −13 2]
B40 A!x = [1; 0; 0], A!y = [0; 1; 0], A!z = [0; 0; 1]   B42 [1 0 0; 0 1 0; 0 0 1]
B43 [6 3 −2; 1 2 −1][−4 8; 2 1; 0 6] = [−24 48; −4 8] + [6 −9; 4 −4] = [−18 39; 0 4]
B44 (a) A ∈ Span B. (b) B is linearly independent.
B45 (a) A ∉ Span C. (b) C is linearly independent.
B46 (a) A ∈ Span B. (b) B is linearly independent.
B47 (a) [n_{t+1}; m_{t+1}] = [9/10 1/5; 1/10 4/5][n_t; m_t].
(b) We have
  [n1; m1] = [9/10 1/5; 1/10 4/5][n0; m0] = [9/10 1/5; 1/10 4/5][1000; 2000] = [1300; 1700]
  [n2; m2] = [9/10 1/5; 1/10 4/5][n1; m1] = [9/10 1/5; 1/10 4/5][1300; 1700] = [1510; 1490]
  [n3; m3] = [9/10 1/5; 1/10 4/5][n2; m2] = [9/10 1/5; 1/10 4/5][1510; 1490] = [1657; 1343]
The population will always stay at 3000 since we are not losing or gaining population: each year the
fractions staying and leaving, 9/10 + 1/10 for the population of X and 1/5 + 4/5 for the population
of Y, sum to 1.
(c) A³ = [781/1000 438/1000; 219/1000 562/1000], and A³[1000; 2000] = [1657; 1343]. A¹⁰[1000; 2000]
would represent the population distribution after 10 years.
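The year-by-year computation in (b) can be reproduced exactly with rational arithmetic. A minimal sketch (illustrative; the matrix entries are those of B47):

```python
from fractions import Fraction

# Transition matrix of B47, rows/columns ordered (X, Y).
A = [[Fraction(9, 10), Fraction(1, 5)],
     [Fraction(1, 10), Fraction(4, 5)]]

def step(p):
    return [A[0][0] * p[0] + A[0][1] * p[1],
            A[1][0] * p[0] + A[1][1] * p[1]]

p = [Fraction(1000), Fraction(2000)]
history = []
for _ in range(3):
    p = step(p)
    history.append([int(c) for c in p])

assert history == [[1300, 1700], [1510, 1490], [1657, 1343]]
assert sum(p) == 3000          # the total population is conserved
```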


0 1 1 1 0 0


1 0 0 0 1 1
1 0 0 0 1 1

B48 (a) 
(b) 6
(c) 2
0 0 0 0 0 1


0 0 1 0 0 1
0 0 1 0 1 0


0 1 1 0 0 0


1 0 0 1 0 0
1 0 1 1 1 0

B49 (a) 
(b) 1
(c) 2
0 1 1 0 1 1


0 0 0 0 0 1
0 0 0 1 1 0
B50 (a) We have
  x_{n+2} = y_{n+1} = x_n + y_n = x_n + x_{n+1}
  y_{n+2} = x_{n+1} + y_{n+1} = y_n + y_{n+1}
and
  x0 = 1, x1 = y0 = 1, x2 = 2, x3 = 3, . . .    y0 = 1, y1 = x0 + y0 = 2, y2 = 3, . . .
(b) We have
  [x_{n+1}; y_{n+1}] = [0 1; 1 1][x_n; y_n]
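The recurrence in (b) is easy to iterate directly; a minimal sketch (illustrative, not part of the original solution):

```python
def step(x, y):
    # [x_{n+1}; y_{n+1}] = [0 1; 1 1][x_n; y_n]
    return y, x + y

x, y = 1, 1          # x_0 = 1, y_0 = 1
xs = []
for _ in range(6):
    xs.append(x)
    x, y = step(x, y)

assert xs == [1, 1, 2, 3, 5, 8]   # x_n follows the Fibonacci pattern
```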
C Conceptual Problems
C1 According to the hints, we only need to prove that if (A − B)!x = !0 for every !x ∈ Rn, then A − B
is the zero matrix. We let A − B = C = [!c1 · · · !cn]. If C!x = !0 for every !x ∈ Rn, then taking
!x = !ei, the i-th standard basis vector, we get !0 = C!ei = !ci since !ei has 1 as its i-th component and
0s elsewhere. Thus, every column of C is the zero vector and hence C is the zero matrix. Therefore
A = B as required.
C2 Let A = [!a1 · · · !an]. We have
  AIn = A[!e1 · · · !en] = [A!e1 · · · A!en] = [!a1 · · · !an] = A
Using this result for the n × m matrix AT, we get
  AT = AT Im = AT ImT = (Im A)T
Taking transposes of both sides gives A = Im A.
 
C3 (a) Let !y = [y1; . . . ; ym] ∈ Rm. Then, we can write !y as
  !y = y1!e1 + · · · + ym!em
Let !xi be a solution of A!x = !ei for 1 ≤ i ≤ m. Then, observe that
  A(y1!x1 + · · · + ym!xm) = y1A!x1 + · · · + ymA!xm = y1!e1 + · · · + ym!em = !y
Thus, A!x = !y is consistent for all !y ∈ Rm.
(b) Since A!x = !y is consistent for all !y ∈ Rm, we have that rank A = m by the System-Rank Theorem
(3).
(c) Let B = [!x1 · · · !xm] where the !xi are as defined in part (a). Then, we have that
  AB = A[!x1 · · · !xm] = [A!x1 · · · A!xm] = [!e1 · · · !em] = Im
(d) We just need to solve the two systems of equations which have augmented matrices
  [A | !e1] = [1 2 1 | 1; 0 1 1 | 0],   [A | !e2] = [1 2 1 | 0; 0 1 1 | 1]
Row reducing each augmented matrix gives
  [1 0 −1 | 1; 0 1 1 | 0],   [1 0 −1 | −2; 0 1 1 | 1]
Hence, the general solution of the first system is
  !x = [1; 0; 0] + s[1; −1; 1], s ∈ R
and the general solution of the second system is
  !x = [−2; 1; 0] + t[1; −1; 1], t ∈ R
Thus, taking s = 0 gives !x1 = [1; 0; 0] and taking t = 0 gives !x2 = [−2; 1; 0]. Therefore, one choice
for B is B = [1 −2; 0 1; 0 0].
C4 By definition of matrix multiplication, the i j-th entry of AAT is the dot product of the i-th row of A
and the j-th column of AT . But, the j-th column of AT is just the j-th row of A. Hence, the i j-th entry
of AAT is the dot product of the i-th row and j-th row of A.
Similarly, the i j-th entry of AT A is the dot product of the i-th row of AT and the j-th column of A.
But, the i-th row of AT is just the i-th column of A. Hence, the i j-th entry of AT A is the dot product
of the i-th column and j-th column of A.
In particular, observe that the entries (AAT )ii are given by the dot product of the i-th row of A with
itself. So, if AAT is the zero matrix, then this entry is 0, so the dot product of the i-th row of A with
itself is 0, and hence the i-th row of A is zero. This is true for all 1 ≤ i ≤ n, hence A is the zero matrix.
The argument for AT A is similar.
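The row-dot-product description of AAT in C4 can be confirmed directly; in the sketch below (illustrative), the matrix A is an arbitrary example:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

A = [[1, -2, 0], [3, 1, 4]]      # arbitrary example matrix

# (A A^T)_{ij} is the dot product of row i and row j of A.
AAT = [[dot(ri, rj) for rj in A] for ri in A]
assert AAT == [[5, 1], [1, 26]]

# The diagonal entries are squared row lengths, which is why
# AAT = O forces every row of A to be the zero vector.
assert all(AAT[i][i] == dot(A[i], A[i]) for i in range(2))
```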
C5 We have (BBT )T = (BT )T BT = BBT and (BT B)T = BT (BT )T = BT B. So, they are both symmetric.
C6 (a) There are many possible choices. A couple of possibilities are [0 a; 0 0], [0 0; a 0], or [a −a; a −a],
for any a ∈ R.
(b) A² − AB − BA + B² = (A − B)². To get (A − B)² = O2,2 we can pick any A and B such that A − B
satisfies one of the choices from (a). One possibility is A = [1 2; 3 4] and B = [1 1; 3 4].
C7 Let A = [a b; c d]. Then, we have
  [1 0; 0 1] = [a b; c d][a b; c d] = [a² + bc, ab + bd; ac + cd, bc + d²]
Comparing entries gives
  a² + bc = 1   (1)
  ab + bd = 0   (2)
  ac + cd = 0   (3)
  bc + d² = 1   (4)
Equations (1) and (4) give a² − d² = 0, so a = ±d.
If a = −d, then all equations are satisfied when a² + bc = 1. If b = 0, then we get a = ±1 and c ∈ R,
so we get matrices [1 0; c −1], [−1 0; c 1]. If b ≠ 0, we get [a, b; (1 − a²)/b, −a].
If a = d, then equation (2) and equation (3) give 2ab = 0 and 2ac = 0. If a = 0, then a = 0 = −d and
we have the case above. If a ≠ 0, then b = c = 0 and we get matrices [1 0; 0 1], [−1 0; 0 −1].
C8 By definition we have
tr(A + B) = (a11 + b11 ) + (a22 + b22 ) + · · · + (ann + bnn )
= (a11 + a22 + · · · + ann ) + (b11 + b22 + · · · + bnn ) = tr A + tr B
C9 When k = 1, we have D¹ = diag(λ1, . . . , λn) as required. Assume that the result holds for k − 1. That
is, assume that D^{k−1} = diag(λ1^{k−1}, . . . , λn^{k−1}). Then,
  D^k = DD^{k−1} = D diag(λ1^{k−1}, . . . , λn^{k−1})
      = D[λ1^{k−1}!e1 · · · λn^{k−1}!en]
      = [λ1^{k−1}D!e1 · · · λn^{k−1}D!en]
      = [λ1^{k−1}(λ1!e1) · · · λn^{k−1}(λn!en)]
      = [λ1^k !e1 · · · λn^k !en]
      = diag(λ1^k, . . . , λn^k)
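The induction in C9 can be spot-checked numerically; in the sketch below (illustrative), the list of eigenvalues is an arbitrary example:

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

lambdas = [2, -1, 3]                      # arbitrary example diagonal entries
n = len(lambdas)
D = [[lambdas[i] if i == j else 0 for j in range(n)] for i in range(n)]

P = D
for _ in range(3):                        # P = D^4 after the loop
    P = matmul(P, D)

# D^k = diag(lambda_1^k, ..., lambda_n^k); off-diagonal entries stay 0.
assert P == [[lambdas[i]**4 if i == j else 0 for j in range(n)]
             for i in range(n)]
```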
C10 Let B = [!b1 · · · !bm] be any matrix such that AB = Im. Thus,
  Im = AB
  [!e1 · · · !em] = A[!b1 · · · !bm] = [A!b1 · · · A!bm]
Hence, we have A!bi = !ei, for 1 ≤ i ≤ m. Each of these corresponds to a system of m linear equations in
n unknowns. By the System-Rank Theorem, each system has infinitely many solutions since rank A =
m < n. Therefore, there are infinitely many matrices B such that AB = Im.
C11 Since Span{!v1, . . . , !vk} ⊆ Span{!u1, . . . , !uk}, there exist cij such that
  !vi = ci1!u1 + · · · + cik!uk,  1 ≤ i ≤ k
Let !ci = [ci1; . . . ; cik], 1 ≤ i ≤ k, and C = [!c1 · · · !ck]. Observe that
  B!ci = ci1!u1 + · · · + cik!uk = !vi
Thus,
  BC = B[!c1 · · · !ck] = [B!c1 · · · B!ck] = [!v1 · · · !vk] = A
as required.
 
C12 (a) Since A has a row of zeros, we can write A in block form as A = [A1; !0T; A2]. Thus,
  AB = [A1; !0T; A2]B = [A1B; !0T B; A2B] = [A1B; !0T; A2B]
Hence, AB has a row of zeros.
(b) B could be the zero matrix.
Section 3.2
A Practice Problems
A1 (a) Since A has two columns, A!x is defined only if !x has two rows. Thus, the domain of fA is R2 .
Since A has four rows the product A!x will have four entries, thus the codomain of fA is R4 .
(b) We have
  fA(2, −5) = [−2 3; 3 0; 1 5; 4 −6][2; −5] = [−19; 6; −23; 38]
  fA(−3, 4) = [−2 3; 3 0; 1 5; 4 −6][−3; 4] = [18; −9; 17; −36]
(c) We have
  fA(1, 0) = [−2 3; 3 0; 1 5; 4 −6][1; 0] = [−2; 3; 1; 4]
  fA(0, 1) = [−2 3; 3 0; 1 5; 4 −6][0; 1] = [3; 0; 5; −6]
(d) fA(!x) = [−2 3; 3 0; 1 5; 4 −6][x1; x2] = [−2x1 + 3x2; 3x1 + 0x2; x1 + 5x2; 4x1 − 6x2]
(e) The standard matrix of fA is
  [fA] = [fA(1, 0) fA(0, 1)] = [−2 3; 3 0; 1 5; 4 −6]
A2 (a) Since A has four columns, A!x is defined only if !x has four rows. Thus, the domain of fA is R4 .
Since A has three rows the product A!x will have three entries, thus the codomain of fA is R3 .
(b) We have
  fA(2, −2, 3, 1) = [1 2 −3 0; 2 −1 0 3; 1 0 2 −1][2; −2; 3; 1] = [−11; 9; 7]
  fA(−3, 1, 4, 2) = [1 2 −3 0; 2 −1 0 3; 1 0 2 −1][−3; 1; 4; 2] = [−13; −1; 3]
(c) We have
  fA(!e1) = [1; 2; 1], fA(!e2) = [2; −1; 0], fA(!e3) = [−3; 0; 2], fA(!e4) = [0; 3; −1]
(d) fA(!x) = [1 2 −3 0; 2 −1 0 3; 1 0 2 −1][x1; x2; x3; x4] = [x1 + 2x2 − 3x3; 2x1 − x2 + 3x4; x1 + 2x3 − x4]
(e) The standard matrix of fA is
  [fA] = [fA(!e1) fA(!e2) fA(!e3) fA(!e4)] = [1 2 −3 0; 2 −1 0 3; 1 0 2 −1]
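The construction in (e) — the columns of [fA] are the images of the standard basis vectors — can be automated. The sketch below (illustrative, not part of the text) rebuilds the matrix of A2 from its componentwise formula:

```python
def standard_matrix(f, n):
    """Build [f]: its columns are the images f(e_1), ..., f(e_n)."""
    cols = [f([1 if j == i else 0 for j in range(n)]) for i in range(n)]
    return [list(row) for row in zip(*cols)]   # transpose columns into rows

def fA(x):
    # The mapping of A2, written componentwise.
    x1, x2, x3, x4 = x
    return [x1 + 2 * x2 - 3 * x3, 2 * x1 - x2 + 3 * x4, x1 + 2 * x3 - x4]

assert standard_matrix(fA, 4) == [[1, 2, -3, 0], [2, -1, 0, 3], [1, 0, 2, -1]]
```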
A3 The domain of f is R2 and the codomain of f is R2. Since the components of the definition of f
involve non-linear functions, we suspect that f is not linear. To prove that it is not linear, we just need
to find one example where f does not preserve addition or scalar multiplication. If we take x1 = 0
and x2 = 1, then
  2f(0, 1) = 2[sin 0; e¹] = [0; 2e]
but
  f(2(0, 1)) = f(0, 2) = [sin 0; e²] = [0; e²]
Thus, 2f(0, 1) ≠ f(2(0, 1)). Therefore, f is not closed under scalar multiplication and so is not linear.
A4 The domain of f is R3 and the codomain of f is R2. Since one component of f involves multiplication
of variables, we suspect this is not linear. Let !x = [1; 1; 1]. Then
  2f(!x) = 2[0; 1] = [0; 2]
but
  f(2!x) = f(2, 2, 2) = [0; 8] ≠ 2f(!x)
Therefore, f is not closed under scalar multiplication and so is not linear.
A5 The domain of g is R3 and the codomain of g is R3. Since the components of g are non-zero constants,
we suspect this is not linear. Let !x = [1; 0; 0] and !y = [0; 1; 0]. Then
  g(!x) = [1; 1; 1] = g(!y)
but
  g(!x + !y) = g(1, 1, 0) = [1; 1; 1] ≠ g(!x) + g(!y)
Therefore, g is not closed under addition and so is not linear.
A6 The domain of g is R2 and the codomain of g is R2. The components of g are linear, so we suspect
that g is linear. To prove this we need to show that g preserves addition and scalar multiplication. Let
!x = [x1; x2], !y = [y1; y2], and s, t ∈ R. Then
  g(s!x + t!y) = g(sx1 + ty1, sx2 + ty2)
  = [2(sx1 + ty1) + 3(sx2 + ty2); (sx1 + ty1) − (sx2 + ty2)]
  = [2sx1 + 2ty1 + 3sx2 + 3ty2; sx1 + ty1 − sx2 − ty2]
  = [2sx1 + 3sx2; sx1 − sx2] + [2ty1 + 3ty2; ty1 − ty2]
  = s[2x1 + 3x2; x1 − x2] + t[2y1 + 3y2; y1 − y2] = sg(!x) + tg(!y)
Thus, g is linear.
A7 The domain of h is R2 and the codomain of h is R3. Since the third component of h involves a
multiplication of variables, we suspect that h is not linear. Let !x = [1; 2]. Then
  2h(1, 2) = 2[2(1) + 3(2); 1 − 2; 1(2)] = 2[8; −1; 2] = [16; −2; 4]
but
  h(2(1, 2)) = h(2, 4) = [2(2) + 3(4); 2 − 4; 2(4)] = [16; −2; 8] ≠ 2h(1, 2)
Therefore, h is not closed under scalar multiplication and so is not linear.
A8 The domain of k is R3 and the codomain of k is R3. The components of k are linear, so we suspect
that k is linear. Let !x = [x1; x2; x3], !y = [y1; y2; y3], and s, t ∈ R. Then
  k(s!x + t!y) = k(sx1 + ty1, sx2 + ty2, sx3 + ty3)
  = [sx1 + ty1 + sx2 + ty2; 0; sx2 + ty2 − (sx3 + ty3)]
  = [sx1 + sx2; 0; sx2 − sx3] + [ty1 + ty2; 0; ty2 − ty3]
  = s[x1 + x2; 0; x2 − x3] + t[y1 + y2; 0; y2 − y3]
  = sk(!x) + tk(!y)
Therefore, k is linear.
A9 The domain of ℓ is R3 and the codomain of ℓ is R2. Since one component of ℓ involves absolute
values, we suspect this is not linear. Let !x = [1; 0; 0]. Then
  (−1)ℓ(1, 0, 0) = (−1)[0; 1] = [0; −1]
but
  ℓ((−1)(1, 0, 0)) = ℓ(−1, 0, 0) = [0; |−1|] = [0; 1] ≠ (−1)ℓ(1, 0, 0)
Therefore, ℓ is not closed under scalar multiplication and so is not linear.
A10 The domain of m is R and the codomain of m is R3. Since the middle component of m is 1, m cannot
be linear. For any x1 ∈ R we have
  0(m(x1)) = 0[x1; 1; 0] = !0 ≠ m(0x1)
Therefore, m is not closed under scalar multiplication and so is not linear.
REMARK: In this last problem, we used the property that for any linear mapping L : Rn → Rm,
  L(!0) = L(0!x) = 0L(!x) = !0
Thus, since m(x1) can never be the zero vector in R3, m cannot be a linear mapping.
A11 The domain of L is R3 and the codomain of L is R2. For any !x, !y ∈ R3 and s, t ∈ R we have
  L(s!x + t!y) = L(sx1 + ty1, sx2 + ty2, sx3 + ty3)
  = [2(sx1 + ty1); (sx1 + ty1) − (sx2 + ty2) + 3(sx3 + ty3)]
  = [2sx1; sx1 − sx2 + 3sx3] + [2ty1; ty1 − ty2 + 3ty3]
  = s[2x1; x1 − x2 + 3x3] + t[2y1; y1 − y2 + 3y3]
  = sL(!x) + tL(!y)
Therefore, L is linear.
A12 The domain of L is R2 and the codomain of L is R2. Since one component of L involves squares, we
suspect this is not linear. Let !x = [1; 0]. Then
  2L(!x) = 2[1; 1] = [2; 2]
but
  L(2!x) = L(2, 0) = [4; 2] ≠ 2L(!x)
Therefore, L is not closed under scalar multiplication and so is not linear.
A13 The domain of M is R2 and the codomain of M is R2. Since the components of M involve adding a
constant, we suspect this is not linear. Let !x = [1; 0] and !y = [0; 1]. Then
  M(!x) = [3; 2] and M(!y) = [2; 3]
but
  M(!x + !y) = M(1, 1) = [3; 3] ≠ M(!x) + M(!y)
Therefore, M is not closed under addition and so is not linear.
A14 The domain of M is R3 and the codomain of M is R3. For any !x, !y ∈ R3 and s, t ∈ R we have
  M(s!x + t!y) = M(sx1 + ty1, sx2 + ty2, sx3 + ty3)
  = [(sx1 + ty1) + 3(sx3 + ty3); (sx2 + ty2) − 2(sx3 + ty3); (sx1 + ty1) + (sx2 + ty2)]
  = [sx1 + 3sx3; sx2 − 2sx3; sx1 + sx2] + [ty1 + 3ty3; ty2 − 2ty3; ty1 + ty2]
  = s[x1 + 3x3; x2 − 2x3; x1 + x2] + t[y1 + 3y3; y2 − 2y3; y1 + y2]
  = sM(!x) + tM(!y)
Therefore, M is linear.
A15 The domain of N is R2 and the codomain of N is R3. For any !x, !y ∈ R2 and s, t ∈ R we have
  N(s!x + t!y) = N(sx1 + ty1, sx2 + ty2)
  = [−(sx1 + ty1); 0; sx1 + ty1]
  = [−sx1; 0; sx1] + [−ty1; 0; ty1]
  = s[−x1; 0; x1] + t[−y1; 0; y1]
  = sN(!x) + tN(!y)
Therefore, N is linear.
A16 The domain of N is R3 and the codomain of N is R3. Since the components of N involve multiplication
of variables, we suspect this is not linear. Let !x = [1; 1; 1]. Then
  2N(!x) = 2[1; 1; 1] = [2; 2; 2]
but
  N(2!x) = N(2, 2, 2) = [4; 4; 4] ≠ 2N(!x)
Therefore, N is not closed under scalar multiplication and so is not linear.
A17 To find the standard matrix of the projection, we need to find the image of the standard basis vectors
for R2 under the projection. That is, we need to project the standard basis vectors onto !v. We get
  proj!v(!e1) = (!v · !e1/‖!v‖²)!v = (−2/5)[−2; 1] = [4/5; −2/5]
  proj!v(!e2) = (!v · !e2/‖!v‖²)!v = (1/5)[−2; 1] = [−2/5; 1/5]
Thus,
  [proj!v] = [proj!v(!e1) proj!v(!e2)] = [4/5 −2/5; −2/5 1/5]
A18 To find the standard matrix of the projection, we need to find the image of the standard basis vectors
for R2 under the projection. We get
  proj!v(!e1) = (!v · !e1/‖!v‖²)!v = (4/41)[4; 5] = [16/41; 20/41]
  proj!v(!e2) = (!v · !e2/‖!v‖²)!v = (5/41)[4; 5] = [20/41; 25/41]
Thus,
  [proj!v] = [proj!v(!e1) proj!v(!e2)] = [16/41 20/41; 20/41 25/41]
A19 To find the standard matrix of the projection, we need to find the image of the standard basis vectors
for R3 under the projection. We get
  proj!v(!e1) = (2/9)[2; 2; −1] = [4/9; 4/9; −2/9]
  proj!v(!e2) = (2/9)[2; 2; −1] = [4/9; 4/9; −2/9]
  proj!v(!e3) = (−1/9)[2; 2; −1] = [−2/9; −2/9; 1/9]
Thus,
  [proj!v] = [proj!v(!e1) proj!v(!e2) proj!v(!e3)] = [4/9 4/9 −2/9; 4/9 4/9 −2/9; −2/9 −2/9 1/9]
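The column-by-column construction used in A17–A19 can be written generically: since proj!v(!ej) = (vj/‖!v‖²)!v, entry (i, j) of [proj!v] is vi vj/‖!v‖². A short sketch (illustrative, not part of the text):

```python
from fractions import Fraction

def proj_matrix(v):
    """Standard matrix of proj_v: column j is (v_j / ||v||^2) v,
    so entry (i, j) is v_i * v_j / ||v||^2."""
    norm2 = sum(c * c for c in v)
    return [[Fraction(vi * vj, norm2) for vj in v] for vi in v]

# The vector of A17:
assert proj_matrix([-2, 1]) == [[Fraction(4, 5), Fraction(-2, 5)],
                                [Fraction(-2, 5), Fraction(1, 5)]]
# The last row of the A19 answer:
assert proj_matrix([2, 2, -1])[2] == [Fraction(-2, 9), Fraction(-2, 9),
                                      Fraction(1, 9)]
```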
A20 To find the standard matrix of the perpendicular of the projection, we need to find the image of the
standard basis vectors for R2 under the mapping. We get
  perp!v(!e1) = !e1 − (!v · !e1/‖!v‖²)!v = [1; 0] − (−2/5)[−2; 1] = [1/5; 2/5]
  perp!v(!e2) = !e2 − (!v · !e2/‖!v‖²)!v = [0; 1] − (1/5)[−2; 1] = [2/5; 4/5]
Thus,
  [perp!v] = [perp!v(!e1) perp!v(!e2)] = [1/5 2/5; 2/5 4/5]
A21 To find the standard matrix of the perpendicular of the projection, we need to find the image of the
standard basis vectors for R2 under the mapping. We get
  perp!v(!e1) = !e1 − (!v · !e1/‖!v‖²)!v = [1; 0] − (1/17)[1; 4] = [16/17; −4/17]
  perp!v(!e2) = !e2 − (!v · !e2/‖!v‖²)!v = [0; 1] − (4/17)[1; 4] = [−4/17; 1/17]
Thus,
  [perp!v] = [perp!v(!e1) perp!v(!e2)] = [16/17 −4/17; −4/17 1/17]
A22 To find the standard matrix of the perpendicular of the projection, we need to find the image of the
standard basis vectors for R3 under the mapping. We get
  perp!v(!e1) = [1; 0; 0] − (1/14)[1; 2; 3] = [13/14; −1/7; −3/14]
  perp!v(!e2) = [0; 1; 0] − (2/14)[1; 2; 3] = [−1/7; 5/7; −3/7]
  perp!v(!e3) = [0; 0; 1] − (3/14)[1; 2; 3] = [−3/14; −3/7; 5/14]
Thus,
  [perp!v] = [perp!v(!e1) perp!v(!e2) perp!v(!e3)] = [13/14 −1/7 −3/14; −1/7 5/7 −3/7; −3/14 −3/7 5/14]
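Since perp!v(!x) = !x − proj!v(!x), the standard matrix satisfies [perp!v] = I − [proj!v]. A short sketch (illustrative, not part of the text) checks the A22 answer this way:

```python
from fractions import Fraction

def proj_matrix(v):
    norm2 = sum(c * c for c in v)
    return [[Fraction(vi * vj, norm2) for vj in v] for vi in v]

def perp_matrix(v):
    # perp_v(x) = x - proj_v(x), so [perp_v] = I - [proj_v]
    P = proj_matrix(v)
    n = len(v)
    return [[(1 if i == j else 0) - P[i][j] for j in range(n)]
            for i in range(n)]

Q = perp_matrix([1, 2, 3])           # the vector of A22
assert Q[0] == [Fraction(13, 14), Fraction(-1, 7), Fraction(-3, 14)]
assert Q[1] == [Fraction(-1, 7), Fraction(5, 7), Fraction(-3, 7)]
assert Q[2] == [Fraction(-3, 14), Fraction(-3, 7), Fraction(5, 14)]
```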
A23 The domain is R2 and the codomain is R2. The columns of the standard matrix of L are the images of
the standard basis vectors under L. We have
  [L] = [L(1, 0) L(0, 1)] = [−3 5; −1 −2]
A24 The domain is R2 and the codomain is R3. The columns of the standard matrix of L are the images of
the standard basis vectors under L. We have
  [L] = [L(1, 0) L(0, 1)] = [1 0; 0 1; 1 1]
A25 The domain is R1 and the codomain is R3. The columns of the standard matrix of L are the images of
the standard basis vectors under L. We have
  [L] = [L(1)] = [1; 0; 3]
A26 The domain is R3 and the codomain is R1. The columns of the standard matrix of M are the images
of the standard basis vectors under M. We have
  [M] = [M(1, 0, 0) M(0, 1, 0) M(0, 0, 1)] = [1 −1 √2]
A27 The domain is R3 and the codomain is R2. The columns of the standard matrix of M are the images
of the standard basis vectors under M. We have
  [M] = [M(1, 0, 0) M(0, 1, 0) M(0, 0, 1)] = [2 0 −1; 2 0 −1]
A28 The domain is R3 and the codomain is R4. The columns of the standard matrix of N are the images
of the standard basis vectors under N. We have
  [N] = [N(!e1) N(!e2) N(!e3)] = [0 0 0; 0 0 0; 0 0 0; 0 0 0]
A29 The domain is R3 and the codomain is R2. The columns of the standard matrix of L are the images of
the standard basis vectors under L. We have
  [L] = [L(1, 0, 0) L(0, 1, 0) L(0, 0, 1)] = [2 −3 1; 0 1 −5]
A30 The domain is R4 and the codomain is R2. We have
  [K] = [K(1, 0, 0, 0) K(0, 1, 0, 0) K(0, 0, 1, 0) K(0, 0, 0, 1)] = [5 0 3 −1; 0 1 −7 3]
A31 The domain is R4 and the codomain is R3. We have
  [M] = [M(1, 0, 0, 0) M(0, 1, 0, 0) M(0, 0, 1, 0) M(0, 0, 0, 1)] = [1 1 0 −1; 1 2 0 −3; 0 1 1 0]
A32 We have [L] = [L(!e1) L(!e2)] = [3 1; 5 −2].
A33 We have [L] = [L(!e1) L(!e2)] = [−1 13; 11 −21].
A34 We have [L] = [L(!e1) L(!e2)] = [1 1; 0 0; 1 1].
A35 We have [L] = [L(!e1) L(!e2) L(!e3)] = [1 1 1; 0 0 0; 1 1 1].
A36 We have [L] = [L(!e1) L(!e2) L(!e3)] = [5 0 0; 0 3 0; 0 0 2].
A37 We have [L] = [L(!e1) L(!e2) L(!e3)] = [−1 1/2 −1; √2 0 −1].
A38 We have [L] = [L(!e1) L(!e2) L(!e3)] = [1 0 2; 1 −2 1; 1 1 −2].
A39 Since L is linear, we have that
+
,
L(0, 1) = L (1, 1) − (1, 0) = L(1, 1) − L(1, 0) =
!
" ! " ! "
5
3
2
−
=
−2
5
−7
"
)
* !3
2
Thus, [L] = L(!e1 ) L(!e2 ) =
.
5 −7
A40 Observe that
! "
1
=
0
! "
0
=
1
! "
1 1
+
2 1
! "
1 1
−
2 1
! "
1 1
2 −1
! "
1 1
2 −1
Thus, since L is linear, we have that
 
3
+1
, 1
1
1
1  
L(1, 0) = L (1, 1) + (1, −1) = L(1, 1) + L(1, −1) = 2 +
2
2
2
2
2 
0
 
3
+1
, 1
1
1
1  
L(0, 1) = L (1, 1) − (1, −1) = L(1, 1) − L(1, −1) = 2 −
2
2
2
2
2 
0


*  2 1
Thus, [L] = L(!e1 ) L(!e2 ) =  1 1.


−1 1
)
Copyright © 2020 Pearson Canada Inc.
   
 1  2 
1    
 0 =  1 
2    
−2
−1
   
 1 1
1    
 0 = 1
2    
−2
1
20
 
1
 
A41 We need to write the standard basis vectors as linear combinations of the vectors 0,
 
1
We get three systems of equations.
 
 
 
 
1
1
1
1
 
 
 
 
0 = c1 0 + c2 1 + c3 1
0
1
1
0
 
 
 
 
0
1
1
1
1 = d1 0 + d2 1 + d3 1
 
 
 
 
0
1
1
0
 
 
 
 
0
1
1
1
0 = e1 0 + e2 1 + e3 1
 
 
 
 
1
1
1
0
 
 
1
1
 
 
1, and 1.
1
0
Observe that all three systems have the same coefficient matrix. Thus, we can row reduce one multiple
augmented matrix. We get

 

1 −1
0 
 1 1 1 1 0 0   1 0 0

 

1
1 
 0 1 1 0 1 0  ∼  0 1 0 −1

1 1 0 0 0 1
0 0 1
1
0 −1
Hence, we have that
 
 
 
 
1
1
1
1
0 = 1 0 − 1 1 + 1 1
 
 
 
 
0
1
1
0
 
 
 
 
0
1
1
1
 
 
 
 
1 = −1 0 + 1 1 + 0 1
0
1
1
0
 
 
 
 
0
1
1
1
 
 
 
 
0
0
1
  = 0   + 1   − 1 1
1
1
1
0
Consequently,
! " ! " ! " ! "
2
4
5
3
−
+
=
0
5
6
1
! " ! "
! " ! "
2
4
5
2
L(!e2 ) = −
+
+0
=
0
5
6
5
! " ! " ! " ! "
2
4
5
−1
L(!e3 ) = 0
+
−
=
0
5
6
−1
L(!e1 ) =
"
3 2 −1
Therefore, [L] =
.
1 5 −1
!
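Because the three systems in A41 share one coefficient matrix, they can be solved in a single call; a sketch in NumPy using the data from the solution above:

```python
import numpy as np

# Columns are the vectors (1,0,1), (1,1,1), (1,1,0)
A = np.array([[1.0, 1.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 0.0]])
# Column i of coeffs expresses e_i in terms of those vectors
coeffs = np.linalg.solve(A, np.eye(3))
# Columns are L(1,0,1) = (2,0), L(1,1,1) = (4,5), L(1,1,0) = (5,6)
images = np.array([[2.0, 4.0, 5.0],
                   [0.0, 5.0, 6.0]])
L = images @ coeffs  # standard matrix [L]
print(L)
```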
A42 (a) Since the standard matrix of each mapping has three columns and two rows, the domain is R3 and the codomain is R2 for both mappings.
(b) By Theorem 3.2.5 we get
[S + T] = [2 1 3; −1 0 2] + [1 2 −1; 2 2 3] = [3 3 2; 1 2 5]
[2S − 3T] = 2[2 1 3; −1 0 2] − 3[1 2 −1; 2 2 3] = [1 −4 9; −8 −6 −5]
A43 (a) The standard matrix of S has four columns and two rows, so the domain of S is R4 and the codomain of S is R2. The standard matrix of T has two columns and four rows, so the domain of T is R2 and the codomain of T is R4.
(b) By Theorem 3.2.5 we get
[S ∘ T] = [S][T] = [6 −19; 10 −10]
and [T ∘ S] = [T][S], a 4 × 4 matrix.
A composition of mappings f ◦ g is defined only if the domain of f contains the codomain of g. If it is defined,
then by Theorem 3.2.5, the domain of f ◦ g is the domain of g and the codomain of f ◦ g is the codomain of f .
This, of course, exactly matches the definition of matrix multiplication.
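The remark above can be phrased in terms of matrix shapes: [f][g] exists exactly when the number of columns of [f] equals the number of rows of [g]. A small illustrative sketch (the shapes below are hypothetical):

```python
import numpy as np

def composition_defined(F, G):
    # [F][G] exists iff the domain of f (columns of F)
    # equals the codomain of g (rows of G)
    return F.shape[1] == G.shape[0]

L = np.zeros((3, 2))  # a mapping L : R^2 -> R^3
M = np.zeros((2, 3))  # a mapping M : R^3 -> R^2
N = np.zeros((4, 2))  # a mapping N : R^2 -> R^4

print(composition_defined(L, M))  # True: L o M is defined
print(composition_defined(M, L))  # True: M o L is defined
print(composition_defined(L, N))  # False: L o N is not defined
```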
A44 The domain of L is R2 and the codomain of M is R2. Thus, L ∘ M is defined. The domain and codomain of L ∘ M are both R3. We have
[L ∘ M] = [L][M] = [11 −4 1; 11 −9 −6; 3 −2 −1]
A45 The domain of M is R3 and the codomain of L is R3. Thus, M ∘ L is defined. The domain and codomain of M ∘ L are both R2. We have
[M ∘ L] = [M][L] = [1 9; 8 0]
A46 The domain of L is R2 , but the codomain of N is R4 . Hence, this composition is not defined.
A47 The domain of N is R2 , but the codomain of L is R3 . Hence, this composition is not defined.
A48 The domain of M is R3 , but the codomain of N is R4 . Hence, this composition is not defined.
A49 The domain of N is R2 and the codomain of M is R2. Thus, N ∘ M is defined. The domain of N ∘ M is R3 and the codomain is R4. We have
[N ∘ M] = [N][M] = [5 0 3; 6 1 5; −3 −3 −6; 13 −7 −2]


A50 (a) The standard matrix of such a linear mapping would be [L] = [L(1, 0) L(0, 1)] = [3 1; −1 −5; 4 9]. Hence, a linear mapping would be
L(x) = [L]x = [3x1 + x2; −x1 − 5x2; 4x1 + 9x2]
(b) We could take L(x1, x2) = [3x1² + x2; −x1 − 5x2; 4x1 + 9x2]. Then, L(1, 0) = (3, −1, 4) and L(0, 1) = (1, −5, 9), but L is not linear since L(2, 0) = (12, −2, 8) ≠ 2L(1, 0).
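The failure of linearity in part (b) can be checked directly; a sketch of the map from the solution:

```python
import numpy as np

def L(x1, x2):
    # The nonlinear map from A50(b)
    return np.array([3 * x1**2 + x2, -x1 - 5 * x2, 4 * x1 + 9 * x2])

print(L(1, 0))               # the required value (3, -1, 4)
print(L(0, 1))               # the required value (1, -5, 9)
print(L(2, 0), 2 * L(1, 0))  # (12, -2, 8) versus (6, -2, 8): not equal
```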
B Homework Problems
B1 (a) The domain of fA is R3 and the codomain of fA is R2.
(b) fA(2, 1, −1) = [11; 0], fA(3, −8, 9) = [29; −15]
(c) fA(e1) = [6; −1], fA(e2) = [2; 6], fA(e3) = [3; 4]
(d) fA(x1, x2, x3) = [6x1 + 2x2 + 3x3; −x1 + 6x2 + 4x3]
(e) The standard matrix of fA is
[fA] = [fA(1, 0, 0) fA(0, 1, 0) fA(0, 0, 1)] = [6 2 3; −1 6 4]
B2 (a) The domain of fA is R4 and the codomain is R3.
(b) fA(1, 1, 1, 1) = [2; 5; 4], fA(3, 1, −2, 3) = [12; −2; 14]
(c) fA(e1) = [1; 0; 4], fA(e2) = [2; 1; −1], fA(e3) = [−2; 3; 0], fA(e4) = [1; 1; 1]
(d) fA(x1, x2, x3, x4) = [x1 + 2x2 − 2x3 + x4; x2 + 3x3 + x4; 4x1 − x2 + x4]
(e) The standard matrix of fA is [fA] = [fA(e1) fA(e2) fA(e3) fA(e4)] = [1 2 −2 1; 0 1 3 1; 4 −1 0 1]
B3 f is not linear
B4 g is linear
B5 h is linear
B6 k is linear
B7 ℓ is not linear
B8 m is linear
B9 L is not linear
B10 [proj_v] = (1/13)[4 6; 6 9]
B11 [proj_v] = (1/26)[25 −5; −5 1]
B12 [proj_v] = (1/5)[4 0 2; 0 0 0; 2 0 1]
B13 [perp_v] = (1/2)[1 −1; −1 1]
B14 [perp_v] = (1/5)[4 −2; −2 1]
B15 [perp_v] = (1/5)[1 0 −2; 0 5 0; −2 0 4]
B16 Domain R2, codomain R2, [L] = [7 −2; π 0]
B17 Domain R3, codomain R2
B18 Domain R3, codomain R3
B19 [L] = [8 7; 3 −1]
B20 Domain R4, codomain R3
B21 Domain R2, codomain R4
B22 Domain R3, codomain R3
B23 [L] = [1 6; −5 7]
B24 [L] = (1/5)[1 4; 2 2]
B25 [L] = [7/3 −5/3; 2/3 8/3]
B26 (a) The domain of both S and T is R2 and the codomain of both S and T is R3.
(b) [S + T] = [4 3; −3 9; 6 7], [−S + T] = [−2 −7; −3 9; 4 3]
B27 (a) The domain of S is R2 and the codomain of S is R3. The domain of T is R3 and the codomain of T is R2.
(b) [S ∘ T] = [−2 7 27; 0 4 16; −6 1 1], [T ∘ S] = [10 30; −7 −7]
B28 [L ∘ M] = [7 20; −1 14]
B29 [M ∘ L] = [10 0 8; 5 1 6; −5 7 10]
B30 not defined
B31 [N ∘ L] = [10 −1 6; 35 −11 6]
B32 [M ∘ N] = [18 −6; 16 −7; 40 −25]
B33 not defined
C Conceptual Problems
C1 If L(x + y) = L(x) + L(y) and L(sx) = sL(x), then
L(sx + ty) = L(sx) + L(ty) = sL(x) + tL(y)
On the other hand, if L(sx + ty) = sL(x) + tL(y), then taking s = 1, t = 1 gives L(x + y) = L(x) + L(y). We also have
L(sx) = L(sx + 0) = sL(x) + L(0) = sL(x) + 0 = sL(x)
C2 (a) Since 0x = 0 for any x ∈ Rn, we have that L(0) = L(0x) = 0L(x) = 0.
(b) If M(0) ≠ 0, then M cannot be linear.
C3 We have
(L + M)(sx + ty) = L(sx + ty) + M(sx + ty) = sL(x) + tL(y) + sM(x) + tM(y)
= s(L(x) + M(x)) + t(L(y) + M(y)) = s(L + M)(x) + t(L + M)(y)
Hence, L + M is linear.
For all x ∈ Rn we have
[L + M]x = (L + M)(x) = L(x) + M(x) = [L]x + [M]x = ([L] + [M])x
Thus, [L + M] = [L] + [M] by the Matrices Equal Theorem.
C4 We have
(M ∘ L)(sx + ty) = M(L(sx + ty)) = M(sL(x) + tL(y))
= sM(L(x)) + tM(L(y)) = s(M ∘ L)(x) + t(M ∘ L)(y)
Hence, M ∘ L is linear.
For all x ∈ Rn we have
[M ∘ L]x = (M ∘ L)(x) = M(L(x)) = M([L]x) = [M][L]x
Thus, [M ∘ L] = [M][L] by the Matrices Equal Theorem.
C5 For any x, y ∈ R3 and s, t ∈ R we have
CROSS_v(sx + ty) = v × (sx + ty) = v × (sx) + v × (ty) = s CROSS_v(x) + t CROSS_v(y)
Thus, CROSS_v is linear.
The cross product v × x is defined only if x ∈ R3, so the domain of CROSS_v is R3. The result of v × x is a vector in R3 orthogonal to both v and x, so the codomain is R3.
We have
CROSS_v(1, 0, 0) = [v1; v2; v3] × [1; 0; 0] = [0; v3; −v2]
CROSS_v(0, 1, 0) = [v1; v2; v3] × [0; 1; 0] = [−v3; 0; v1]
CROSS_v(0, 0, 1) = [v1; v2; v3] × [0; 0; 1] = [v2; −v1; 0]
Hence, [CROSS_v] = [0 −v3 v2; v3 0 −v1; −v2 v1 0].
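The matrix found in C5 can be checked against NumPy's cross product; a minimal sketch with arbitrary sample vectors:

```python
import numpy as np

def cross_matrix(v):
    # [CROSS_v]: the standard matrix of x |-> v x x
    v1, v2, v3 = v
    return np.array([[0.0, -v3, v2],
                     [v3, 0.0, -v1],
                     [-v2, v1, 0.0]])

v = np.array([1.0, 2.0, 3.0])   # an arbitrary choice of v
x = np.array([4.0, -1.0, 2.0])  # an arbitrary test vector
print(cross_matrix(v) @ x)
print(np.cross(v, x))  # the same vector
```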
C6 For any x, y ∈ Rn and s, t ∈ R we have
DOT_v(sx + ty) = v · (sx + ty) = v · (sx) + v · (ty) = s(v · x) + t(v · y) = s DOT_v(x) + t DOT_v(y)
so DOT_v is linear.
The input into DOT_v must be a vector in Rn, so the domain is Rn. Since DOT_v(x) is a real number, its codomain is R.
For v = [v1; ...; vn] we have [DOT_v] = [DOT_v(e1) · · · DOT_v(en)] = [v1 · · · vn] = vᵀ.
 
C7 Let u = [u1; ...; un]. Observe that for any x ∈ Rn we have proj_u(x) = ((x · u)/‖u‖²)u = (x · u)u since u is a unit vector. Thus, by definition of [proj_u] we have
[proj_u] = [proj_u(e1) · · · proj_u(en)]
= [(e1 · u)u · · · (en · u)u]
= [u1u u2u · · · unu]
= [u1u1 u1u2 · · · u1un; u2u1 u2u2 · · · u2un; ... ; unu1 unu2 · · · unun]
= [u1; ...; un][u1 · · · un] = u uᵀ
C8 (a) Consider
c1v1 + · · · + ckvk = 0
Then,
L(c1v1 + · · · + ckvk) = L(0)
c1L(v1) + · · · + ckL(vk) = 0
Thus, c1 = · · · = ck = 0 since {L(v1), ..., L(vk)} is linearly independent. Thus, {v1, ..., vk} is linearly independent.
(b) Take L(x) = 0. Then, {L(v1), ..., L(vk)} contains the zero vector, so it is linearly dependent.
C9 By definition T is a subset of Rn. Also, we have 0 ∈ T since L(0) = 0 ∈ S because S is a subspace. Hence, T is non-empty.
Let x, y ∈ T and s, t ∈ R. Then, L(x) ∈ S and L(y) ∈ S. Thus,
L(sx + ty) = sL(x) + tL(y) ∈ S
since S is closed under linear combinations. Therefore, sx + ty ∈ T and so T is a subspace of Rn.
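The identity [proj_u] = u uᵀ from C7, together with the symmetry and idempotence used elsewhere in these problems, is easy to confirm numerically; a sketch with an arbitrary unit vector:

```python
import numpy as np

u = np.array([1.0, 2.0, 2.0])
u = u / np.linalg.norm(u)  # the formula below assumes u is a unit vector
P = np.outer(u, u)         # [proj_u] = u u^T

x = np.array([3.0, -1.0, 4.0])
print(np.allclose(P @ x, (x @ u) * u))  # True: matches (x . u) u
print(np.allclose(P, P.T))              # True: symmetric
print(np.allclose(P @ P, P))            # True: idempotent
```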
Section 3.3
A Practice Problems
A1 [Rπ/2] = [0 −1; 1 0]
A2 [Rπ] = [−1 0; 0 −1]
A3 [R−π/4] = (1/√2)[1 1; −1 1]
A4 [R2π/5] = [0.309 −0.951; 0.951 0.309]
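A1–A4 are all instances of the general rotation matrix; a sketch:

```python
import numpy as np

def rot(theta):
    # Standard matrix of the rotation of R^2 through angle theta
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta), np.cos(theta)]])

print(np.round(rot(np.pi / 2)))         # the matrix [0 -1; 1 0] from A1
print(np.round(rot(2 * np.pi / 5), 3))  # the matrix from A4
```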
A5 The matrix of a vertical shear by amount 3 is [V] = [1 0; 3 1]. The matrix of a stretch by a factor of 5 in the x2 direction is [S] = [1 0; 0 5].
A6 The composition of S followed by V is given by V ∘ S. The matrix of V ∘ S is
[V ∘ S] = [1 0; 3 1][1 0; 0 5] = [1 0; 3 5]
A7 The composition of S following V is given by S ∘ V. The matrix of S ∘ V is
[S ∘ V] = [1 0; 0 5][1 0; 3 1] = [1 0; 15 5]
A8 The composition of S followed by a rotation through angle θ is given by Rθ ∘ S. The matrix of Rθ ∘ S is
[Rθ ∘ S] = [cos θ −sin θ; sin θ cos θ][1 0; 0 5] = [cos θ −5 sin θ; sin θ 5 cos θ]
A9 The composition of S following a rotation through angle θ is given by S ∘ Rθ. The matrix of S ∘ Rθ is
[S ∘ Rθ] = [1 0; 0 5][cos θ −sin θ; sin θ cos θ] = [cos θ −sin θ; 5 sin θ 5 cos θ]
A10 The matrix of a horizontal shear by amount 1 is [H] = [1 1; 0 1]. The matrix of a vertical shear by amount −2 is [V] = [1 0; −2 1].
A11 The composition of H followed by V is given by V ∘ H. The matrix of V ∘ H is
[V ∘ H] = [1 0; −2 1][1 1; 0 1] = [1 1; −2 −1]
A12 The composition of H following V is given by H ∘ V. The matrix of H ∘ V is
[H ∘ V] = [1 1; 0 1][1 0; −2 1] = [−1 1; −2 1]
A13 The composition of H followed by a reflection F over the x1-axis is given by F ∘ H. The matrix of F ∘ H is
[F ∘ H] = [1 0; 0 −1][1 1; 0 1] = [1 1; 0 −1]
A14 The composition of H following a reflection F over the x1-axis is given by H ∘ F. The matrix of H ∘ F is
[H ∘ F] = [1 1; 0 1][1 0; 0 −1] = [1 −1; 0 −1]
A15 A normal vector to the line x1 + 3x2 = 0 is n = [1; 3]. To find the standard matrix of R, we need to reflect the standard basis vectors over the line. We get
refl_n(e1) = e1 − 2((e1 · n)/‖n‖²)n = [1; 0] − (2/10)[1; 3] = [4/5; −3/5]
refl_n(e2) = e2 − 2((e2 · n)/‖n‖²)n = [0; 1] − (6/10)[1; 3] = [−3/5; −4/5]
Thus, [R] = [4/5 −3/5; −3/5 −4/5].
A16 A normal vector to the line 2x1 − x2 = 0 is n = [2; −1]. To find the standard matrix of S, we need to reflect the standard basis vectors over the line. We get
refl_n(e1) = e1 − 2((e1 · n)/‖n‖²)n = [1; 0] − (4/5)[2; −1] = [−3/5; 4/5]
refl_n(e2) = e2 − 2((e2 · n)/‖n‖²)n = [0; 1] + (2/5)[2; −1] = [4/5; 3/5]
Thus, [S] = [−3/5 4/5; 4/5 3/5].
A17 A normal vector to the line −4x1 + x2 = 0 is n = [−4; 1]. To find the standard matrix of R, we need to reflect the standard basis vectors over the line. We get
refl_n(e1) = e1 − 2((e1 · n)/‖n‖²)n = [1; 0] + (8/17)[−4; 1] = [−15/17; 8/17]
refl_n(e2) = e2 − 2((e2 · n)/‖n‖²)n = [0; 1] − (2/17)[−4; 1] = [8/17; 15/17]
Thus, [R] = [−15/17 8/17; 8/17 15/17].
A18 A normal vector to the line 3x1 − 5x2 = 0 is n = [3; −5]. To find the standard matrix of R, we need to reflect the standard basis vectors over the line. We get
refl_n(e1) = e1 − 2((e1 · n)/‖n‖²)n = [1; 0] − (6/34)[3; −5] = [8/17; 15/17]
refl_n(e2) = e2 − 2((e2 · n)/‖n‖²)n = [0; 1] + (10/34)[3; −5] = [15/17; −8/17]
Thus, [R] = [8/17 15/17; 15/17 −8/17].
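A15–A18 all apply the same formula refl_n(x) = x − 2((x · n)/‖n‖²)n, which in matrix form is [refl_n] = I − 2nnᵀ/‖n‖². A sketch, using the normal vector from A15:

```python
import numpy as np

def reflection_matrix(n):
    # [refl_n] = I - 2 n n^T / ||n||^2, for a normal vector n
    n = np.asarray(n, dtype=float)
    return np.eye(len(n)) - 2.0 * np.outer(n, n) / (n @ n)

R = reflection_matrix([1, 3])  # normal to the line x1 + 3x2 = 0 (A15)
print(R)  # the matrix [4/5 -3/5; -3/5 -4/5] found above
```

The same function handles the plane reflections in A19–A22 by passing a normal vector in R3.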
 
1
 
A19 A normal vector to the plane x1 + x2 + x3 = 0 is !n = 1. To find the standard matrix of the reflection,
 
1
3
we need to reflect the standard basis vectors for R over the plane.
 
  

1
1  1/3
!e1 · !n
1
 
  

!n = 0 − 2 1 = −2/3
refl!n (!e1 ) = !e1 − 2
 

3  
(!n(2
0
1
−2/3
 
  

0
1 −2/3
!e2 · !n
1   
 

!n = 1 − 2 1 =  1/3
refl!n (!e2 ) = !e2 − 2
 

3  
(!n(2
0
1
−2/3
 
  

0
1 −2/3
!e3 · !n
1  
 

!n = 0 − 2 1 = −2/3
refl!n (!e3 ) = !e3 − 2
 

3  
(!n(2
1
1
1/3


 1 −2 −2
1 

1 −2.
Thus, [refl!n ] = −2

3
−2 −2
1
 
 2
 
A20 A normal vector to the plane 2x1 − 2x2 − x3 = 0 is !n = −2. To find the standard matrix of the
 
−1
reflection, we need to reflect the standard basis vectors for R3 over the plane.
 
   
1
 2 1/9
!e1 · !n
2
0 − 2 −2 = 8/9
!
refl!n (!e1 ) = !e1 − 2
n
=
 
9    
(!n(2
0
−1
4/9
 
  

0
 2  8/9
!e2 · !n
−2   
 

!n = 1 − 2
refl!n (!e2 ) = !e2 − 2
−2 =  1/9
2


9
(!n(
0
−1
−4/9
 
  

0
 2  4/9
!e3 · !n
−1   
 
−2 = −4/9
!n = 0 − 2
refl!n (!e3 ) = !e3 − 2
 

9   
(!n(2
1
−1
7/9


8
4
1
1

1 −4.
Thus, [refl!n ] = 8

9
4 −4
7
!
 
 1
 
A21 A normal vector to the plane x1 − x3 = 0 is !n =  0. To find the standard matrix of the reflection,
 
−1
3
we need to reflect the standard basis vectors for R over the plane.
 
   
1
 1 0
!e1 · !n
1    
 
!n = 0 − 2  2 = 0
refl!n (!e1 ) = !e1 − 2
 
14    
(!n(2
0
−3
1
 
   
0
 1 0
!e2 · !n
0    
 
 2 = 1
1
!
refl!n (!e2 ) = !e2 − 2
n
=
−
2
 
14    
(!n(2
0
−3
0
 
   
0
 1 1
!e3 · !n
0 − 2 −1  2 = 0
!
refl!n (!e3 ) = !e3 − 2
n
=
 
14    
(!n(2
1
−3
0


0 0 1


Thus, [refl!n ] = 0 1 0.


1 0 0
 
 1
 
A22 A normal vector to the plane x1 + 2x2 − 3x3 = 0 is !n =  2. To find the standard matrix of the
 
−3
reflection, we need to reflect the standard basis vectors for R3 over the plane.
 
  

1
 1  6/7
!e1 · !n
1   
 

!n = 0 − 2  0 = −2/7
refl!n (!e1 ) = !e1 − 2
2






2
(!n(
0
−1
3/7
 
  

0
 1 −2/7
!e2 · !n
0   
 

!n = 1 − 2  0 =  3/7
refl!n (!e2 ) = !e2 − 2
 

2  
(!n(2
0
−1
6/7
 
  

0
 1  3/7
!e3 · !n
−1
 
 0 =  6/7
!n = 0 − 2
refl!n (!e3 ) = !e3 − 2

 
2   
(!n(2
1
−1
−2/7


3/7
 6/7 −2/7


3/7
6/7.
Thus, [refl!n ] = −2/7


3/7
6/7 −2/7
A23 Both the domain and codomain of D are R3, so the standard matrix [D] has three columns and three rows. A dilation by a factor of 5 stretches all vectors in R3 by a factor of 5. Thus,
[D] = [5 0 0; 0 5 0; 0 0 5]
The domain of inj is R3 and the codomain of inj is R4, so the standard matrix [inj] has three columns and four rows. We have
inj(1, 0, 0) = [1; 0; 0; 0], inj(0, 1, 0) = [0; 1; 0; 0], inj(0, 0, 1) = [0; 0; 0; 1]
so [inj] = [1 0 0; 0 1 0; 0 0 0; 0 0 1]. Since the domain of inj is R3 and the codomain of D is R3 we have that inj ∘ D is defined, and, by Theorem 3.2.5, we get
[inj ∘ D] = [inj][D] = [5 0 0; 0 5 0; 0 0 0; 0 0 5]
A24 (a) We have P(1, 0, 0) = [0; 0], P(0, 1, 0) = [1; 0], P(0, 0, 1) = [0; 1]. Thus, [P] = [0 1 0; 0 0 1]. We also have S(1, 0, 0) = [1; 0; 2], S(0, 1, 0) = [0; 1; 0], S(0, 0, 1) = [0; 0; 1]. Hence, [S] = [1 0 0; 0 1 0; 2 0 1]. The domain of P and the codomain of S are both R3, so P ∘ S is defined. By Theorem 3.2.5 we get
[P ∘ S] = [0 1 0; 0 0 1][1 0 0; 0 1 0; 2 0 1] = [0 1 0; 2 0 1]
(b) The matrix of a horizontal shear T in R2 by amount s is given by [T] = [1 s; 0 1]. Assume that T ∘ P = P ∘ S. Then we have [T][P] = [P][S], that is,
[1 s; 0 1][0 1 0; 0 0 1] = [0 1 0; 2 0 1]
[0 1 s; 0 0 1] = [0 1 0; 2 0 1]
But these matrices are not equal, and so we have a contradiction. If T is a vertical shear in R2 by amount s, then we have [T] = [1 0; s 1] and
[1 0; s 1][0 1 0; 0 0 1] = [0 1 0; 2 0 1]
[0 1 0; 0 s 1] = [0 1 0; 2 0 1]
Again the matrices are not equal. Thus our assumption that T ∘ P = P ∘ S must be false, so there is no shear T such that T ∘ P = P ∘ S.
(c) We have Q(1, 0, 0) = [1; 0], Q(0, 1, 0) = [0; 1], Q(0, 0, 1) = [0; 0]. Thus, [Q] = [1 0 0; 0 1 0]. Since the domain of Q and the codomain of S are both R3 we get that Q ∘ S is defined and
[Q ∘ S] = [Q][S] = [1 0 0; 0 1 0][1 0 0; 0 1 0; 2 0 1] = [1 0 0; 0 1 0]
B Homework Problems
B1 [0 1; −1 0]
B2 [1/2 −√3/2; √3/2 1/2]
B3 [−√2/2 √2/2; −√2/2 −√2/2]
B4 [−√3/2 −1/2; 1/2 −√3/2]
B5 [H] = [1 −2; 0 1], [S] = [3 0; 0 1]
B6 [H ∘ S] = [3 −2; 0 1]
B7 [S ∘ H] = [3 −6; 0 1]
B8 [Rπ/3 ∘ S] = [3/2 −√3/2; 3√3/2 1/2]
B9 [S ∘ Rπ/3] = [3/2 −3√3/2; √3/2 1/2]
B10 [V] = [1 0; −1 1], [T] = [1/2 0; 0 1/2]
B11 [V ∘ T] = [1/2 0; −1/2 1/2]
B12 [T ∘ V] = [1/2 0; −1/2 1/2]
B13 [J] = [1 0; 1 1]
B14 [K] = [2 0; 0 2]
B15 [refl_n] = (1/13)[5 −12; −12 −5]
B16 [refl_n] = (1/13)[−12 5; 5 12]
B17 [refl_n] = (1/25)[−7 24; 24 7]
B18 [refl_n] = (1/13)[12 5; 5 −12]

 19
1 
−4
B19
21 
8

 3
1 
B21  0
5
−4

−4
8

13
16

16 −11

0 −4

5
0

0 −3

 0
1/3

B23 (a) [inj ◦C] =  0

 0
0

1/3

(b) [C ◦ S ] =  0

0

1 3

(c) [T ◦ S ] = 0 1

0 0


3
 6 −2
1 

3
6
B20 −2

7
3
6 −2


 9 8 −12
1 

 8 9
12
B22

17 
−12 12 −1

0
0 

0
0 

0
0 

1/3 0 

0 1/3



0
0 
0 
1/3 0



1/3 −2/3, [S ◦ C] =  0 1/3 −2/3.



0
1/3
0
0
1/3



−6
0
1 3



−2, [S ◦ T ] = 0 1 −2



1
0 0
1
C Conceptual Problems
C1 We have
[R ∘ S] = [R][S] = [4/5 −3/5; −3/5 −4/5][−3/5 4/5; 4/5 3/5] = [−24/25 7/25; −7/25 −24/25]
This is the matrix of a rotation by angle θ where cos θ = −24/25 and sin θ = −7/25; hence θ is an angle of about 3.425 radians.
C2 Let R23 denote the reflection in the x2x3-plane. Then R23(e1) = −e1, R23(e2) = e2, and R23(e3) = e3, so
[R23] = [−1 0 0; 0 1 0; 0 0 1]
Similarly, if R12 denotes the reflection in the x1x2-plane, then
[R12] = [1 0 0; 0 1 0; 0 0 −1]
Then
[R12 ∘ R23] = [1 0 0; 0 1 0; 0 0 −1][−1 0 0; 0 1 0; 0 0 1] = [−1 0 0; 0 1 0; 0 0 −1] = [cos π 0 sin π; 0 1 0; −sin π 0 cos π]
This is the matrix of the rotation through angle π about the x2-axis.
C3 (a) Using trigonometric identities we get
[Rα ∘ Rθ] = [Rα][Rθ] = [cos α −sin α; sin α cos α][cos θ −sin θ; sin θ cos θ]
= [cos α cos θ − sin α sin θ   −cos α sin θ − sin α cos θ; sin α cos θ + cos α sin θ   −sin α sin θ + cos α cos θ]
= [cos(α + θ) −sin(α + θ); sin(α + θ) cos(α + θ)] = [Rα+θ]
(b) Since sin(−θ) = −sin θ and cos(−θ) = cos θ, we get
[R−θ] = [cos(−θ) −sin(−θ); sin(−θ) cos(−θ)] = [cos θ sin θ; −sin θ cos θ] = [Rθ]ᵀ
(c) We need to prove that the two columns are orthogonal to each other and that they are both unit vectors. We have
[cos θ; sin θ] · [−sin θ; cos θ] = cos θ(−sin θ) + sin θ(cos θ) = 0
[cos θ; sin θ] · [cos θ; sin θ] = cos²θ + sin²θ = 1
[−sin θ; cos θ] · [−sin θ; cos θ] = (−sin θ)² + cos²θ = 1
as required.
C4 (a) We have that [proj_u] = u uᵀ. Thus,
[proj_u]ᵀ = (u uᵀ)ᵀ = (uᵀ)ᵀ uᵀ = u uᵀ = [proj_u]
So, [proj_u] is symmetric.
(b) We have
refl_n(p) = Id(p) − 2 proj_n(p) = (Id + (−2) proj_n)(p)
Thus,
[refl_n] = [Id − 2 proj_n] = I − 2[proj_n]
(c) We have
[refl_n]ᵀ = (I − 2[proj_n])ᵀ = Iᵀ − 2[proj_n]ᵀ = I − 2[proj_n] = [refl_n]
by part (a). Thus, [refl_n] is symmetric.
(d) The projection property states that proj_n ∘ proj_n = proj_n. Therefore
[proj_n][proj_n] = [proj_n]
It follows that
[refl_n][refl_n] = (I − 2[proj_n])(I − 2[proj_n]) = I² − 2I[proj_n] − 2[proj_n]I + 4[proj_n][proj_n] = I − 4[proj_n] + 4[proj_n] = I
C5 (a) Let A be the matrix of a rotation through angle θ, so that A = [Rθ]. Then we want
[R2π] = I = A³ = [Rθ]³ = [Rθ][Rθ][Rθ] = [R3θ]
Hence, we want 3θ = 2π, or θ = 2π/3. Thus, if A = [R2π/3], we get A³ = I.
(b) Let A be the matrix of the rotation through angle 2π/5. Then A⁵ = I.
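The claims in C5 can be confirmed numerically; a sketch:

```python
import numpy as np

def rot(theta):
    # Standard matrix of the rotation of R^2 through angle theta
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta), np.cos(theta)]])

A = rot(2 * np.pi / 3)
B = rot(2 * np.pi / 5)
print(np.allclose(np.linalg.matrix_power(A, 3), np.eye(2)))  # True: A^3 = I
print(np.allclose(np.linalg.matrix_power(B, 5), np.eye(2)))  # True: B^5 = I
```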
Section 3.4
A Practice Problems
A1 By definition of the matrix of a linear mapping, y_i is in the range of L if and only if the system [L]x = y_i is consistent, and v ∈ Null(L) if and only if [L]v = 0.
(a) Row reducing the augmented matrix for [L]x = y1 gives
[3 5 −1 | 12; 1 2 1 | 3] ~ [1 0 −7 | 9; 0 1 4 | −3]
The system is consistent with solutions x = [9; −3; 0] + t[7; −4; 1], t ∈ R. For example, L(9, −3, 0) = y1.
(b) Row reducing the augmented matrix for [L]x = y2 gives
[3 5 −1 | 1; 1 2 1 | 1] ~ [1 0 −7 | −3; 0 1 4 | 2]
The system is consistent with solutions x = [−3; 2; 0] + t[7; −4; 1], t ∈ R. For example, L(−3, 2, 0) = y2.
(c) [L]v = [0; 0], so v ∈ Null(L).
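Range and nullspace membership questions like A1 reduce to linear systems, which can also be checked numerically; a sketch using the matrix and vectors from A1:

```python
import numpy as np

A = np.array([[3.0, 5.0, -1.0],
              [1.0, 2.0, 1.0]])  # [L] from A1
y1 = np.array([12.0, 3.0])
x, *_ = np.linalg.lstsq(A, y1, rcond=None)
print(np.allclose(A @ x, y1))    # True: the system is consistent, y1 in range
v = np.array([7.0, -4.0, 1.0])   # direction vector of the solution set
print(A @ v)                     # the zero vector: v is in Null(L)
```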
A2 By definition of the matrix of a linear mapping, y_i is in the range of L if and only if the system [L]x = y_i is consistent, and v ∈ Null(L) if and only if [L]v = 0.
(a) Row reducing the augmented matrix for [L]x = y1, we find that the system is inconsistent, so y1 is not in the range of L.
(b) Row reducing the augmented matrix for [L]x = y2 gives
[7 −2 | 8; 7 −8 | −10; 1 1 | 5] ~ [1 0 | 2; 0 1 | 3; 0 0 | 0]
The system is consistent with solution x = [2; 3]. Thus, L(2, 3) = y2.
(c) [L]v = [1; −17; 4], so v ∉ Null(L).
A3 By definition of the matrix of a linear mapping, y_i is in the range of L if and only if the system [L]x = y_i is consistent, and v ∈ Null(L) if and only if [L]v = 0.
(a) Row reducing the augmented matrix for [L]x = y1, we find that the system is inconsistent, so y1 is not in the range of L.
(b) Row reducing the augmented matrix for [L]x = y2 gives
[1 0 −1 | 3; 0 1 3 | −5; 1 2 1 | 1; 2 5 1 | 5] ~ [1 0 0 | 1; 0 1 0 | 1; 0 0 1 | −2; 0 0 0 | 0]
The system is consistent with solution x = [1; 1; −2]. Thus, L(1, 1, −2) = y2.
(c) [L]v = [0; 4; 4; 8], so v ∉ Null(L).
A4 Every vector y ∈ Range(L) has the form
L(x) = [3x1 + 5x2; x1 − x2; x1 − x2] = x1[3; 1; 1] + x2[5; −1; −1]
Hence, C = {[3; 1; 1], [5; −1; −1]} spans Range(L) and is linearly independent, so it is a basis for Range(L).
If x ∈ Null(L), then we have
[0; 0; 0] = L(x1, x2) = [3x1 + 5x2; x1 − x2; x1 − x2]
Row reducing the corresponding coefficient matrix gives
[3 5; 1 −1; 1 −1] ~ [1 0; 0 1; 0 0]
Thus, the only solution is x1 = x2 = 0. Hence, Null(L) = {0} and so a basis for Null(L) is the empty set.
A5 Every vector y ∈ Range(L) has the form
L(x) = [2x1 − x2; 4x1 − 2x2] = x1[2; 4] + x2[−1; −2]
Hence,
Range(L) = Span{[2; 4], [−1; −2]} = Span{[2; 4]}
Thus, C = {[2; 4]} spans Range(L) and is linearly independent, so it is a basis for Range(L).
If x ∈ Null(L), then we have
[0; 0] = L(x1, x2) = [2x1 − x2; 4x1 − 2x2]
Row reducing the corresponding coefficient matrix gives
[2 −1; 4 −2] ~ [1 −1/2; 0 0]
Thus, the solution is x = t[1/2; 1], t ∈ R. Hence, B = {[1/2; 1]} spans Null(L) and is linearly independent, so it is a basis for Null(L).
A6 Every vector y ∈ Range(L) has the form
L(x) = [x1 − 7x2; x1 + x2] = x1[1; 1] + x2[−7; 1]
Hence, C = {[1; 1], [−7; 1]} spans Range(L) and is linearly independent, so it is a basis for Range(L).
If x ∈ Null(L), then we have
[0; 0] = L(x1, x2) = [x1 − 7x2; x1 + x2]
Row reducing the corresponding coefficient matrix gives
[1 −7; 1 1] ~ [1 0; 0 1]
Thus, the only solution is x1 = x2 = 0. Hence, Null(L) = {0} and so a basis for Null(L) is the empty set.
A7 Every vector y ∈ Range(L) has the form
L(x) = [x1; 2x1; 3x1] = x1[1; 2; 3]
Hence, C = {[1; 2; 3]} spans Range(L) and is linearly independent, so it is a basis for Range(L).
If x ∈ Null(L), then we have
[0; 0; 0] = L(x1, x2) = [x1; 2x1; 3x1]
Row reducing the corresponding coefficient matrix gives
[1 0; 2 0; 3 0] ~ [1 0; 0 0; 0 0]
Thus, the solution is x = t[0; 1], t ∈ R. Hence, B = {[0; 1]} spans Null(L) and is linearly independent, so it is a basis for Null(L).
A8 Every vector y ∈ Range(L) has the form
L(x) = [x1 + x2; 0] = (x1 + x2)[1; 0]
Hence, C = {[1; 0]} spans Range(L) and is linearly independent, so it is a basis for Range(L).
If x ∈ Null(L), then we have
[0; 0] = L(x1, x2, x3) = [x1 + x2; 0]
Hence, x2 = −x1. Thus, we have
x = [x1; x2; x3] = [x1; −x1; x3] = x1[1; −1; 0] + x3[0; 0; 1]
Thus, B = {[1; −1; 0], [0; 0; 1]} spans Null(L) and is linearly independent. Hence, B is a basis for Null(L).
A9 Every vector y ∈ Range(L) has the form L(x) = [0; 0; 0]. Hence, Range(L) = {0} and so a basis for Range(L) is the empty set.
Observe that for any x ∈ R3, we have L(x1, x2, x3) = [0; 0; 0]. So, Null(L) = R3. Thus, a basis for Null(L) is the standard basis for R3.
A10 Every vector y ∈ Range(L) has the form
L(x) = [2x1; −x2 + 2x3] = 2x1[1; 0] + (−x2 + 2x3)[0; 1]
Hence, C = {[1; 0], [0; 1]} spans Range(L) and is linearly independent. So, it is a basis for Range(L).
If x ∈ Null(L), then we have
[0; 0] = L(x1, x2, x3) = [2x1; −x2 + 2x3]
Hence, we must have 2x1 = 0 and −x2 + 2x3 = 0. So, x1 = 0 and x2 = 2x3. Therefore, every vector in the nullspace of L has the form
[x1; x2; x3] = [0; 2x3; x3] = x3[0; 2; 1]
Therefore, the set B = {[0; 2; 1]} spans the nullspace and is linearly independent. Thus, it is a basis for Null(L).
A11 Every vector y ∈ Range(L) has the form
L(x) = [x1 + 7x3; x1 + x2 + x3; x1 + x2 + x3] = x1[1; 1; 1] + x2[0; 1; 1] + x3[7; 1; 1]
Hence, {[1; 1; 1], [0; 1; 1], [7; 1; 1]} spans Range(L), but is it linearly independent? We use the definition of linear independence. Consider
[0; 0; 0] = t1[1; 1; 1] + t2[0; 1; 1] + t3[7; 1; 1]
Row reducing the corresponding coefficient matrix gives
[1 0 7; 1 1 1; 1 1 1] ~ [1 0 7; 0 1 −6; 0 0 0]
Hence, the set is linearly dependent. In particular,
7[1; 1; 1] − 6[0; 1; 1] = [7; 1; 1]
Therefore, by Theorem 1.4.3, we have that C = {[1; 1; 1], [0; 1; 1]} also spans Range(L). Since C is also linearly independent, it is a basis for Range(L).
If x ∈ Null(L), then we have
[0; 0; 0] = L(x1, x2, x3) = [x1 + 7x3; x1 + x2 + x3; x1 + x2 + x3]
This gives the same coefficient matrix as above. Hence, we see that the solution is
x = t[−7; 6; 1], t ∈ R
Hence, B = {[−7; 6; 1]} spans Null(L) and is linearly independent, so it is a basis for Null(L).
A12 Every vector in the range of L has the form
L(x) = [−x2 + 2x3; x2 − 4x3; −2x2 + 4x3] = x2[−1; 1; −2] + x3[2; −4; 4]
Therefore, the set C = {[−1; 1; −2], [2; −4; 4]} spans the range of L. Since C is also linearly independent, it is a basis for Range(L).
Let x ∈ Null(L). Then,
[0; 0; 0] = L(x) = [−x2 + 2x3; x2 − 4x3; −2x2 + 4x3]
Row reducing the corresponding coefficient matrix gives
[0 −1 2; 0 1 −4; 0 −2 4] ~ [0 1 0; 0 0 1; 0 0 0]
The solution is
x = s[1; 0; 0], s ∈ R
Thus, B = {[1; 0; 0]} spans Null(L) and is linearly independent. Hence, it is a basis for Null(L).
A13 Every vector in the range of L has the form
L(x) = [x4; x3; 0; x2; x1 + x2 − x3] = x4[1; 0; 0; 0; 0] + x3[0; 1; 0; 0; −1] + x2[0; 0; 0; 1; 1] + x1[0; 0; 0; 0; 1]
Therefore, the set C = {[1; 0; 0; 0; 0], [0; 1; 0; 0; −1], [0; 0; 0; 1; 1], [0; 0; 0; 0; 1]} spans the range of L.
To show it is linearly independent, we use the definition of linear independence. Consider
[0; 0; 0; 0; 0] = t4[1; 0; 0; 0; 0] + t3[0; 1; 0; 0; −1] + t2[0; 0; 0; 1; 1] + t1[0; 0; 0; 0; 1] = [t4; t3; 0; t2; t1 + t2 − t3]
This gives the system t4 = 0, t3 = 0, t2 = 0, and t1 + t2 − t3 = 0, which has the unique solution t1 = t2 = t3 = t4 = 0. Hence, C is also linearly independent and so it is a basis for Range(L).
Let x ∈ Null(L). Then,
[0; 0; 0; 0; 0] = L(x) = [x4; x3; 0; x2; x1 + x2 − x3]
As we just showed above, this has the unique solution x1 = x2 = x3 = x4 = 0, so Null(L) = {0}. Thus,
a basis for Null(L) is the empty set.
A14 Every vector in the range of L has the form

L(x) = (x1 − x2 + 2x3 + x4, x2 − 3x3 + 3x4) = x1(1, 0) + x2(−1, 1) + x3(2, −3) + x4(1, 3)

Therefore, the set {(1, 0), (−1, 1), (2, −3), (1, 3)} spans the range of L. Using Theorem 1.4.3, we get that C = {(1, 0), (−1, 1)} also spans Range(L). Since C is also linearly independent, it is a basis for Range(L).
Let x ∈ Null(L). Then,

(0, 0) = L(x) = (x1 − x2 + 2x3 + x4, x2 − 3x3 + 3x4)

Row reducing the corresponding coefficient matrix gives

  [1 −1  2 1]   [1 0 −1 4]
  [0  1 −3 3] ~ [0 1 −3 3]

The solution is

x = s(1, 3, 1, 0) + t(−4, −3, 0, 1),  s, t ∈ R

Thus, B = {(1, 3, 1, 0), (−4, −3, 0, 1)} spans Null(L) and is linearly independent. Hence, it is a basis for Null(L).
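The nullspace basis found for A14 can be checked by substituting each vector back into the map. A small sketch (not from the original solutions):

```python
# Spot-check for A14: L(x) = (x1 - x2 + 2x3 + x4, x2 - 3x3 + 3x4).
def L(x1, x2, x3, x4):
    return (x1 - x2 + 2*x3 + x4, x2 - 3*x3 + 3*x4)

basis_null = [(1, 3, 1, 0), (-4, -3, 0, 1)]
images = [L(*b) for b in basis_null]
print(images)  # [(0, 0), (0, 0)]
```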
A15 From Problem A4, Theorem 3.4.4, and Theorem 3.4.6, we have that a basis for Col([L]) is {(3, 1, 1), (5, −1, −1)} and a basis for Null([L]) is the empty set.
We have L(e1) = (3, 1, 1) and L(e2) = (5, −1, −1). Thus,

  [L] = [3  5]
        [1 −1]
        [1 −1]

Row reducing [L] gives

  [3  5]   [1 0]
  [1 −1] ~ [0 1]
  [1 −1]   [0 0]

Thus, a basis for Row([L]) is {(1, 0), (0, 1)}.
Row reducing [L]^T gives

  [3  1  1]   [1 0 0]
  [5 −1 −1] ~ [0 1 1]

Thus, a basis for Null([L]^T) is {(0, −1, 1)}.
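By the orthogonality of the column space and the left nullspace, every column of A15's matrix must be orthogonal to the left-nullspace vector. A quick check (not part of the original solutions):

```python
# Every column of [L] from A15 should be orthogonal to the left-nullspace
# basis vector (0, -1, 1).
cols = [(3, 1, 1), (5, -1, -1)]   # columns of [L]
n = (0, -1, 1)                    # basis vector for Null([L]^T)
dots = [sum(c[i] * n[i] for i in range(3)) for c in cols]
print(dots)  # [0, 0]
```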
A16 From Problem A8, Theorem 3.4.4, and Theorem 3.4.6, we have that a basis for Col([L]) is {(1, 0)} and a basis for Null([L]) is {(1, −1, 0), (0, 0, 1)}.
We have L(1, 0, 0) = (1, 0), L(0, 1, 0) = (1, 0), and L(0, 0, 1) = (0, 0). Thus,

  [L] = [1 1 0]
        [0 0 0]

Since [L] is already in RREF, we have that a basis for Row([L]) is {(1, 1, 0)}.
Row reducing [L]^T gives

  [1 0]   [1 0]
  [1 0] ~ [0 0]
  [0 0]   [0 0]

Hence, a basis for Null([L]^T) is {(0, 1)}.
A17 From Problem A9, Theorem 3.4.4, and Theorem 3.4.6, we have that a basis for Col([L]) is the empty set and a basis for Null([L]) is the standard basis for R^3.
We have L(1, 0, 0) = (0, 0, 0), L(0, 1, 0) = (0, 0, 0), and L(0, 0, 1) = (0, 0, 0). Thus, [L] is the 3 × 3 zero matrix.
By definition, Row([L]) = Span{(0, 0, 0)}. Thus, Row([L]) = {0} and so a basis for Row([L]) is the empty set.
To find a basis for the left nullspace of [L] we solve the homogeneous system [L]^T x = 0. Clearly, the solution space is R^3. Hence, the standard basis for R^3 is a basis for Null([L]^T).
A18 From Problem A10, Theorem 3.4.4, and Theorem 3.4.6, we have that a basis for Col([L]) is the standard basis for R^2 and a basis for Null([L]) is {(0, 2, 1)}.
We have L(1, 0, 0) = (2, 0), L(0, 1, 0) = (0, −1), and L(0, 0, 1) = (0, 2). Thus,

  [L] = [2  0 0]
        [0 −1 2]

Row reducing [L] gives

  [2  0 0]   [1 0  0]
  [0 −1 2] ~ [0 1 −2]

Thus, {(1, 0, 0), (0, 1, −2)} is a basis for Row([L]).
Row reducing [L]^T gives

  [2  0]   [1 0]
  [0 −1] ~ [0 1]
  [0  2]   [0 0]

Thus, Null([L]^T) = {0} and so a basis for Null([L]^T) is the empty set.
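Row reductions like the one in A18 can be reproduced with a short exact-arithmetic routine. A minimal sketch (my own helper, not from the textbook) using Python's fractions module:

```python
from fractions import Fraction

def rref(M):
    """Row reduce a matrix (list of rows) to RREF over the rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    nrows, ncols = len(M), len(M[0])
    r = 0
    for c in range(ncols):
        pivot = next((i for i in range(r, nrows) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        M[r] = [x / M[r][c] for x in M[r]]
        for i in range(nrows):
            if i != r and M[i][c] != 0:
                f = M[i][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return M

# The matrix [2 0 0; 0 -1 2] from A18 should reduce to [1 0 0; 0 1 -2].
R = rref([[2, 0, 0], [0, -1, 2]])
print(R == [[1, 0, 0], [0, 1, -2]])  # True
```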
A19 From Problem A13, Theorem 3.4.4, and Theorem 3.4.6, a basis for Col([L]) is {(1, 0, 0, 0, 0), (0, 1, 0, 0, −1), (0, 0, 0, 1, 1), (0, 0, 0, 0, 1)} and a basis for Null([L]) is the empty set.
We have L(e1) = (0, 0, 0, 0, 1), L(e2) = (0, 0, 0, 1, 1), L(e3) = (0, 1, 0, 0, −1), and L(e4) = (1, 0, 0, 0, 0). Thus,

  [L] = [0 0  0 1]
        [0 0  1 0]
        [0 0  0 0]
        [0 1  0 0]
        [1 1 −1 0]

Row reducing [L] gives

  [0 0  0 1]   [1 0 0 0]
  [0 0  1 0]   [0 1 0 0]
  [0 0  0 0] ~ [0 0 1 0]
  [0 1  0 0]   [0 0 0 1]
  [1 1 −1 0]   [0 0 0 0]

Thus, the standard basis for R^4 is a basis for Row([L]).
Row reducing [L]^T gives

  [0 0 0 0  1]   [1 0 0 0 0]
  [0 0 0 1  1] ~ [0 1 0 0 0]
  [0 1 0 0 −1]   [0 0 0 1 0]
  [1 0 0 0  0]   [0 0 0 0 1]

Hence, {(0, 0, 1, 0, 0)} is a basis for Null([L]^T).
A20 proj_v maps a vector in R^3 to a vector in the direction of (1, −2, 2). Hence, its range consists of all scalar multiples of (1, −2, 2). Thus, a basis for the range is {(1, −2, 2)}. The mapping maps all vectors orthogonal to (1, −2, 2) to the zero vector. Hence, its nullspace is the plane through the origin with normal vector n = (1, −2, 2). Any two linearly independent vectors orthogonal to n form a basis for this plane. So, one basis for the nullspace is {(2, 1, 0), (2, 0, −1)}.
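The projection in A20 can be checked directly from the formula proj_v(x) = ((x · v)/(v · v)) v. A sketch (not from the original solutions):

```python
# proj_v(x) = ((x . v) / (v . v)) v with v = (1, -2, 2), as in A20.
from fractions import Fraction

v = (1, -2, 2)

def proj(x):
    t = Fraction(sum(a * b for a, b in zip(x, v)), sum(a * a for a in v))
    return tuple(t * a for a in v)

print(proj((2, 1, 0)) == (0, 0, 0))   # True: a nullspace basis vector
print(proj((2, 0, -1)) == (0, 0, 0))  # True: the other nullspace basis vector
print(proj(v) == v)                   # True: v itself is fixed
```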
A21 perp_v maps a vector in R^3 to a vector in the plane orthogonal to (3, 1, 2). Hence, its range is the plane through the origin with normal n = (3, 1, 2). Any two linearly independent vectors orthogonal to n form a basis for this plane. So, one basis for the range is {(1, −3, 0), (0, −2, 1)}. The mapping maps all scalar multiples of (3, 1, 2) to the zero vector, so a basis for the nullspace is {(3, 1, 2)}.
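Similarly, A21's claims follow from perp_v(x) = x − proj_v(x). A quick check (my own sketch):

```python
# perp_v(x) = x - proj_v(x) with v = (3, 1, 2), as in A21.
from fractions import Fraction

v = (3, 1, 2)

def perp(x):
    t = Fraction(sum(a * b for a, b in zip(x, v)), sum(a * a for a in v))
    return tuple(a - t * b for a, b in zip(x, v))

# The claimed range basis vectors are fixed points of perp_v ...
print(perp((1, -3, 0)) == (1, -3, 0))  # True
print(perp((0, -2, 1)) == (0, -2, 1))  # True
# ... while v itself is sent to the zero vector.
print(perp(v) == (0, 0, 0))            # True
```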
A22 refl_v maps a vector corresponding to a point in R^3 to its mirror image on the opposite side of the plane through the origin with normal vector n = (0, 1, 0). Since every point in R^3 is the image under reflection of some point, the range of refl_v is R^3. Hence, any basis for R^3 is also a basis for the range of the reflection. For example, we could take the standard basis. Since the only point that is reflected to the origin is the origin itself, the nullspace of the mapping is {0}. Hence, a basis for the nullspace is the empty set.
For Problems A23 - A26, there are infinitely many possible solutions.
A23 First observe from the given information that the standard matrix [L] must be 3 × 2. Since Null(L) = {0}, the RREF R of [L] must have the form

  R = [1 0]
      [0 1]
      [0 0]

Additionally, by Theorems 3.4.4 and 3.4.5, a basis for Range(L) is formed by taking the columns from [L] that correspond to the columns in R that contain leading ones. Hence, one choice for [L] is

  [L] = [ 1  2]
        [−1  5]
        [ 1 −3]
A24 First observe from the given information that the standard matrix [L] must be 3 × 2. For B = {(1, 1)} to span Null(L), the RREF R of [L] must have the form

  R = [1 −1]
      [0  0]
      [0  0]

Additionally, by Theorems 3.4.4 and 3.4.5, a basis for Range(L) is formed by taking the columns from [L] that correspond to the columns in R that contain leading ones. Hence, [L] must have the form

  [L] = [1 a]
        [2 b]
        [3 c]

So that this matches with the RREF above, we see that we can take a = −1, b = −2, and c = −3. Hence, one choice is

  [L] = [1 −1]
        [2 −2]
        [3 −3]
A25 First observe from the given information that the standard matrix [L] must be 3 × 2. For B = {(1, −2)} to span Null(L), the RREF R of [L] must have the form

  R = [1 1/2]
      [0  0 ]
      [0  0 ]

Additionally, by Theorems 3.4.4 and 3.4.5, a basis for Range(L) is formed by taking the columns from [L] that correspond to the columns in R that contain leading ones. Hence, [L] must have the form

  [L] = [1 a]
        [1 b]
        [1 c]

So that this matches with the RREF above, we see that we can take a = 1/2, b = 1/2, and c = 1/2. Hence, one choice is

  [L] = [1 1/2]
        [1 1/2]
        [1 1/2]
A26 First observe from the given information that the standard matrix [L] must be 3 × 3. For B = {(3, −2, 1)} to span Null(L), the RREF R of [L] must have the form

  R = [1 0 −3]
      [0 1  2]
      [0 0  0]

Additionally, by Theorems 3.4.4 and 3.4.5, a basis for Range(L) is formed by taking the columns from [L] that correspond to the columns in R that contain leading ones. Hence, [L] must have the form

  [L] = [1 0 a]
        [0 1 b]
        [0 1 c]

So that this matches with the RREF above, we see that we can take a = −3, b = 2, and c = 2. Hence, one choice is

  [L] = [1 0 −3]
        [0 1  2]
        [0 1  2]
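The constructed matrix in A26 can be checked by verifying that it annihilates the prescribed nullspace vector (a quick sketch, not from the original solutions):

```python
# A26's choice of [L] should send the prescribed nullspace vector (3, -2, 1)
# to the zero vector.
L = [[1, 0, -3],
     [0, 1,  2],
     [0, 1,  2]]
x = (3, -2, 1)
image = tuple(sum(row[i] * x[i] for i in range(3)) for row in L)
print(image)  # (0, 0, 0)
```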
A27 Since r0 is a solution of Ax = b, that means Ar0 = b. Then, since n ∈ Null(A), we get

A(r0 + n) = Ar0 + An = b + 0 = b

as required.
A28 Since A has n = 4 columns, the number of variables in the system A!x = !0 is 4. We observe that
rank(A) = 2. Hence, the dimension of the solution space is n − rank(A) = 4 − 2 = 2.
A29 Since A has n = 5 columns, the number of variables in the system A!x = !0 is 5. We observe that
rank(A) = 3. Hence, the dimension of the solution space is n − rank(A) = 5 − 3 = 2.
A30 Row reducing A to RREF gives

  [1 2]   [1 2]
  [2 4] ~ [0 0]

Hence, a basis for the row space is {(1, 2)}. Only the first column of the RREF contains a leading one, so a basis for the column space is {(1, 2)}. Writing Rx = 0 in equation form gives x1 + 2x2 = 0. Thus, x2 = t ∈ R is a free variable, and we get x1 = −2t. The solution space is x = (−2t, t) = t(−2, 1), t ∈ R. Hence, a basis for the nullspace is {(−2, 1)}.
Observe that A = A^T, so a basis for Null(A^T) is also {(−2, 1)}.
We have rank(A) + dim(Null(A)) = 1 + 1 = 2, and rank(A^T) + dim(Null(A^T)) = 1 + 1 = 2.
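The rank-nullity bookkeeping at the end of each of these solutions can be automated with a small exact-arithmetic rank routine (my own sketch, stdlib only):

```python
from fractions import Fraction

def rank(M):
    """Rank via Gaussian elimination over the rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        pivot = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        M[r] = [x / M[r][c] for x in M[r]]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

# A30's matrix: rank 1, so nullity = 2 - 1 = 1.
A = [[1, 2], [2, 4]]
n_cols = 2
print(rank(A), n_cols - rank(A))  # 1 1
```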
A31 Row reducing A to RREF gives

  [−7 3]   [1 0]
  [−3 1] ~ [0 1]

Hence, a basis for the row space is {(1, 0), (0, 1)}, and a basis for the column space is {(−7, −3), (3, 1)}. The only solution to Ax = 0 is the trivial solution. Hence, a basis for the nullspace is the empty set.
Row reducing A^T to RREF gives

  [−7 −3]   [1 0]
  [ 3  1] ~ [0 1]

Thus, the only solution to A^T x = 0 is the trivial solution. Hence, a basis for the left nullspace is the empty set.
We have rank(A) + dim(Null(A)) = 2 + 0 = 2, and rank(A^T) + dim(Null(A^T)) = 2 + 0 = 2.
A32 Row reducing A to RREF gives

  [1 2 1]   [1 0  0 ]
  [0 4 2] ~ [0 1 1/2]

Hence, a basis for the row space is {(1, 0, 0), (0, 1, 1/2)}, and a basis for the column space is {(1, 0), (2, 4)}. Solving Ax = 0 we find that a basis for the nullspace is {(0, −1/2, 1)}.
Row reducing A^T to RREF gives

  [1 0]   [1 0]
  [2 4] ~ [0 1]
  [1 2]   [0 0]

Thus, the only solution to A^T x = 0 is the trivial solution. Hence, a basis for the left nullspace is the empty set.
We have rank(A) + dim(Null(A)) = 2 + 1 = 3, and rank(A^T) + dim(Null(A^T)) = 2 + 0 = 2.
A33 Row reducing A to RREF gives

  [0 −2  4  2]   [0 1 −2 −1]
  [0  3 −6 −3] ~ [0 0  0  0]

Hence, a basis for the row space is {(0, 1, −2, −1)}, and a basis for the column space is {(−2, 3)}. Solving Ax = 0 we find that a basis for the nullspace is {(1, 0, 0, 0), (0, 2, 1, 0), (0, 1, 0, 1)}.
Row reducing A^T to RREF gives

  [ 0  0]   [1 −3/2]
  [−2  3] ~ [0   0 ]
  [ 4 −6]   [0   0 ]
  [ 2 −3]   [0   0 ]

Solving A^T x = 0 we find that a basis for the left nullspace is {(3/2, 1)}.
We have rank(A) + dim(Null(A)) = 1 + 3 = 4, and rank(A^T) + dim(Null(A^T)) = 1 + 1 = 2.
A34 Row reducing A to RREF gives

  [1 1 −3 1]   [1 0 −1 0]
  [2 3 −8 4] ~ [0 1 −2 0]
  [0 1 −2 3]   [0 0  0 1]

Hence, a basis for the row space is {(1, 0, −1, 0), (0, 1, −2, 0), (0, 0, 0, 1)}. The first, second, and fourth columns of the RREF contain leading ones, so a basis for the column space is {(1, 2, 0), (1, 3, 1), (1, 4, 3)}. Writing Rx = 0 in equation form gives

x1 − x3 = 0
x2 − 2x3 = 0
x4 = 0

Thus, x3 = t ∈ R is a free variable, and we get x1 = t, x2 = 2t, and x4 = 0. The solution space is x = (t, 2t, t, 0) = t(1, 2, 1, 0), t ∈ R. Hence, a basis for the nullspace is {(1, 2, 1, 0)}.
Row reducing A^T to RREF gives

  [ 1  2  0]   [1 0 0]
  [ 1  3  1] ~ [0 1 0]
  [−3 −8 −2]   [0 0 1]
  [ 1  4  3]   [0 0 0]

Hence, Null(A^T) = {0} and so a basis for Null(A^T) is the empty set.
We have rank(A) + dim(Null(A)) = 3 + 1 = 4, and rank(A^T) + dim(Null(A^T)) = 3 + 0 = 3.
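A34's nullspace basis vector can be verified by a direct matrix-vector product (a quick sketch, not from the original solutions):

```python
# The claimed nullspace vector (1, 2, 1, 0) for A34 should satisfy Ax = 0.
A = [[1, 1, -3, 1],
     [2, 3, -8, 4],
     [0, 1, -2, 3]]
x = (1, 2, 1, 0)
image = tuple(sum(row[i] * x[i] for i in range(4)) for row in A)
print(image)  # (0, 0, 0)
```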
A35 Row reducing A to RREF gives

  [1 2  8]   [1 0 0]
  [1 1  5] ~ [0 1 0]
  [1 0 −2]   [0 0 1]

Hence, a basis for the row space is {(1, 0, 0), (0, 1, 0), (0, 0, 1)}. Since every column of the RREF has a leading one, a basis for the column space is {(1, 1, 1), (2, 1, 0), (8, 5, −2)}. Finally, it is clear that the only solution to Rx = 0 is 0. Thus, Null(A) = {0}, so a basis for the nullspace is the empty set.
Row reducing A^T to RREF gives

  [1 1  1]   [1 0 0]
  [2 1  0] ~ [0 1 0]
  [8 5 −2]   [0 0 1]

Hence, Null(A^T) = {0} and so a basis for Null(A^T) is the empty set.
We have rank(A) + nullity(A) = 3 + 0 = 3, and rank(A^T) + dim(Null(A^T)) = 3 + 0 = 3.
A36 Row reducing A to RREF gives

  [2  1 3]   [1 0  2]
  [2 −2 6] ~ [0 1 −1]
  [4  3 5]   [0 0  0]

Hence, a basis for the row space is {(1, 0, 2), (0, 1, −1)}. A basis for the column space is {(2, 2, 4), (1, −2, 3)}. A basis for Null(A) is {(−2, 1, 1)}.
Row reducing A^T to RREF gives

  [2  2 4]   [1 0  7/3]
  [1 −2 3] ~ [0 1 −1/3]
  [3  6 5]   [0 0   0 ]

Hence, a basis for Null(A^T) is {(−7/3, 1/3, 1)}.
We have rank(A) + nullity(A) = 2 + 1 = 3, and rank(A^T) + dim(Null(A^T)) = 2 + 1 = 3.
A37 Row reducing A to RREF gives

  [1 2 4]   [1 2 4]
  [1 2 4] ~ [0 0 0]
  [1 2 4]   [0 0 0]

Hence, a basis for the row space is {(1, 2, 4)}. A basis for the column space is {(1, 1, 1)}. A basis for Null(A) is {(−2, 1, 0), (−4, 0, 1)}.
Row reducing A^T to RREF gives

  [1 1 1]   [1 1 1]
  [2 2 2] ~ [0 0 0]
  [4 4 4]   [0 0 0]

Hence, a basis for Null(A^T) is {(−1, 1, 0), (−1, 0, 1)}.
We have rank(A) + nullity(A) = 1 + 2 = 3, and rank(A^T) + dim(Null(A^T)) = 1 + 2 = 3.
A38 Row reducing A to RREF gives

  [ 1  2  9]   [1 0 −5]
  [ 0  1  7]   [0 1  7]
  [−2 −2 −4] ~ [0 0  0]
  [ 1  1  2]   [0 0  0]

Hence, a basis for the row space is {(1, 0, −5), (0, 1, 7)}. A basis for the column space is {(1, 0, −2, 1), (2, 1, −2, 1)}. A basis for Null(A) is {(5, −7, 1)}.
Row reducing A^T to RREF gives

  [1 0 −2 1]   [1 0 −2  1]
  [2 1 −2 1] ~ [0 1  2 −1]
  [9 7 −4 2]   [0 0  0  0]

Hence, a basis for Null(A^T) is {(2, −2, 1, 0), (−1, 1, 0, 1)}.
We have rank(A) + nullity(A) = 2 + 1 = 3, and rank(A^T) + dim(Null(A^T)) = 2 + 2 = 4.
A39 Row reducing A to RREF gives

  [3 −1 6]   [1 0 0]
  [1  2 5] ~ [0 1 0]
  [1  3 3]   [0 0 1]

Hence, a basis for the row space is the standard basis for R^3. A basis for the column space is {(3, 1, 1), (−1, 2, 3), (6, 5, 3)}. A basis for Null(A) is the empty set.
Row reducing A^T to RREF gives

  [ 3 1 1]   [1 0 0]
  [−1 2 3] ~ [0 1 0]
  [ 6 5 3]   [0 0 1]

Hence, a basis for Null(A^T) is the empty set.
We have rank(A) + nullity(A) = 3 + 0 = 3, and rank(A^T) + dim(Null(A^T)) = 3 + 0 = 3.
A40 Row reducing A to RREF gives

  [1 −1 1]   [1 0 0]
  [1  0 0] ~ [0 1 0]
  [1  1 1]   [0 0 1]
  [1  2 4]   [0 0 0]

Hence, a basis for the row space is the standard basis for R^3. A basis for the column space is {(1, 1, 1, 1), (−1, 0, 1, 2), (1, 0, 1, 4)}. A basis for Null(A) is the empty set.
Row reducing A^T to RREF gives

  [ 1 1 1 1]   [1 0 0  1]
  [−1 0 1 2] ~ [0 1 0 −3]
  [ 1 0 1 4]   [0 0 1  3]

Hence, a basis for Null(A^T) is {(−1, 3, −3, 1)}.
We have rank(A) + nullity(A) = 3 + 0 = 3, and rank(A^T) + dim(Null(A^T)) = 3 + 1 = 4.
A41 Row reducing A to RREF gives

  [1 1 0]   [1 1 0]
  [2 2 0] ~ [0 0 1]
  [1 1 2]   [0 0 0]
  [3 3 4]   [0 0 0]

Hence, a basis for the row space is {(1, 1, 0), (0, 0, 1)}. A basis for the column space is {(1, 2, 1, 3), (0, 0, 2, 4)}. A basis for Null(A) is {(−1, 1, 0)}.
Row reducing A^T to RREF gives

  [1 2 1 3]   [1 2 0 1]
  [1 2 1 3] ~ [0 0 1 2]
  [0 0 2 4]   [0 0 0 0]

Hence, a basis for Null(A^T) is {(−2, 1, 0, 0), (−1, 0, −2, 1)}.
We have rank(A) + nullity(A) = 2 + 1 = 3, and rank(A^T) + dim(Null(A^T)) = 2 + 2 = 4.
A42 Row reducing the matrix to RREF gives

  [1 2 0  3 0]   [1 2 0 3 0]
  [1 2 1  7 1]   [0 0 1 4 0]
  [2 4 0  6 1] ~ [0 0 0 0 1]
  [3 6 1 13 2]   [0 0 0 0 0]

Hence, a basis for the row space is {(1, 2, 0, 3, 0), (0, 0, 1, 4, 0), (0, 0, 0, 0, 1)}. The first, third, and fifth columns of the RREF contain leading ones, so a basis for the column space is {(1, 1, 2, 3), (0, 1, 0, 1), (0, 1, 1, 2)}. Writing Rx = 0 in equation form gives

x1 + 2x2 + 3x4 = 0
x3 + 4x4 = 0
x5 = 0

Thus, x2 = s ∈ R and x4 = t ∈ R are free variables, and we get x1 = −2s − 3t, x3 = −4t, and x5 = 0. Hence, the solution space is

x = (−2s − 3t, s, −4t, t, 0) = s(−2, 1, 0, 0, 0) + t(−3, 0, −4, 1, 0),  s, t ∈ R

Hence, a basis for the nullspace is {(−2, 1, 0, 0, 0), (−3, 0, −4, 1, 0)}.
Row reducing A^T to RREF gives

  [1 1 2  3]   [1 0 0 0]
  [2 2 4  6]   [0 1 0 1]
  [0 1 0  1] ~ [0 0 1 1]
  [3 7 6 13]   [0 0 0 0]
  [0 1 1  2]   [0 0 0 0]

Writing Rx = 0 in equation form gives

x1 = 0
x2 + x4 = 0
x3 + x4 = 0

Thus, x4 = s ∈ R is a free variable, and we get x1 = 0, x2 = −s, and x3 = −s. Hence, the solution space is x = (0, −s, −s, s) = s(0, −1, −1, 1), s ∈ R, and a basis for Null(A^T) is {(0, −1, −1, 1)}.
We have rank(A) + dim(Null(A)) = 3 + 2 = 5, and rank(A^T) + dim(Null(AT)) = 3 + 1 = 4.
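A42's dimension counts can be reproduced with an exact-arithmetic rank computation (my own sketch; the matrix below is the 4 × 5 matrix as transcribed in A42 above):

```python
from fractions import Fraction

def rank(M):
    """Rank via Gaussian elimination over the rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        pivot = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        M[r] = [x / M[r][c] for x in M[r]]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

A = [[1, 2, 0,  3, 0],
     [1, 2, 1,  7, 1],
     [2, 4, 0,  6, 1],
     [3, 6, 1, 13, 2]]

# Both claimed nullspace basis vectors should be annihilated by A.
null_basis = [(-2, 1, 0, 0, 0), (-3, 0, -4, 1, 0)]
images = [tuple(sum(row[i] * x[i] for i in range(5)) for row in A)
          for x in null_basis]

print(rank(A))  # 3, so nullity = 5 - 3 = 2
print(images)   # [(0, 0, 0, 0), (0, 0, 0, 0)]
```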
A43 Since A^T u = 0, we get that u ∈ Null(A^T).
A44 Rather than trying to solve Ax = v, we can instead use the Fundamental Theorem of Linear Algebra. In particular, we know that if v is not orthogonal to u, then v cannot be in Col(A) (since every vector in Col(A) must be orthogonal to every vector in Null(A^T)). We see that

u · v = 0(2) + (−1)(3) + 1(3) + 1(1) = 1

Thus, v ∉ Col(A).
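A44's orthogonality test is a one-line computation (a quick sketch, not from the original solutions):

```python
# v can lie in Col(A) only if v is orthogonal to every vector in Null(A^T);
# here u . v != 0, so v is not in Col(A).
u = (0, -1, 1, 1)
v = (2, 3, 3, 1)
dot = sum(a * b for a, b in zip(u, v))
print(dot)  # 1
```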
A45 Observe that Aw ≠ 0, so w ∉ Null(A).
A46 The vector w cannot help us determine whether or not z ∈ Row(A). However, rather than just solving A^T x = z, we realize we only need to see if z can be written as a linear combination of the basis vectors for Row(A). So, we consider

t1(1, 0, 2, 0, 3) + t2(0, 1, −1, 0, 1) + t3(0, 0, 0, 1, 1) = (3, −4, 10, 2, 7)

Row reducing the corresponding matrix is rather easy since the basis for Row(A) is simplified. We get

  [1  0 0 |  3]   [1 0 0 |  3]
  [0  1 0 | −4]   [0 1 0 | −4]
  [2 −1 0 | 10] ~ [0 0 1 |  2]
  [0  0 1 |  2]   [0 0 0 |  0]
  [3  1 1 |  7]   [0 0 0 |  0]

Since the system is consistent, we have that z ∈ Row(A).
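The consistent solution found in A46 (t1 = 3, t2 = −4, t3 = 2) can be substituted back to confirm the membership claim (my own sketch):

```python
# Check: z = 3*v1 - 4*v2 + 2*v3 for the Row(A) basis used in A46.
v1 = (1, 0, 2, 0, 3)
v2 = (0, 1, -1, 0, 1)
v3 = (0, 0, 0, 1, 1)
z = (3, -4, 10, 2, 7)
combo = tuple(3*a - 4*b + 2*c for a, b, c in zip(v1, v2, v3))
print(combo == z)  # True
```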
A47 Using the RREF R, we find that the general solution to Ax = 0 is

x = s(−2, 1, 1, 0, 0) + t(−3, −1, 0, −1, 1),  s, t ∈ R

Thus, {(−2, 1, 1, 0, 0), (−3, −1, 0, −1, 1)} spans Null(A) and is clearly linearly independent, so it is a basis for Null(A).
A48 Observe that !x = 0 is a solution of A!x = !b. Thus, the Fundamental Theorem of Linear Algebra tell
 
0
0
us that the general solution of A!x = !b is
 
 
 
1
−2
−3
 1
−1
0
 
 
 


!x = 0 + s  1 + t  0 ,
 
 
 
0
 0
−1
0
0
1
s, t ∈ R
B Homework Problems
 
 1
 
B1 (a) L(−4, 5, 0) = (1, 6, −9)
(b) y2 is not in the range of L.
(c) L(v) = (0, 1, −2), so v ∉ Null(L).
B2 (a) y1 is not in the range of L.
(b) y2 is not in the range of L.
(c) L(v) = (0, 0, 0), so v ∈ Null(L).
B3 A basis for Range(L) is {(1, 0), (−1, 1)}. A basis for Null(L) is {(1, 1, −1)}.
B4 A basis for Range(L) is {(1, 0), (0, 1)}. A basis for Null(L) is {(0, 1, 0, −1), (0, 0, 1, 0)}.
B5 A basis for Range(L) is {(0, 1)}. A basis for Null(L) is {(1, −1)}.
B6 A basis for Range(L) is {(2, 1, 0), (0, 0, 1)}. A basis for Null(L) is {(−1, 0, 1, 0), (0, −3, 0, 2)}.
B7 A basis for Range(L) is {(1, 2, −1)}. A basis for Null(L) is the empty set.
B8 A basis for Range(L) is {(2, 1, 1), (1, −1, 0)}. A basis for Null(L) is {(0, 1, 1)}.
B9 A basis for Range(L) is {(1, 3, 1), (0, 1, 1)}. A basis for Null(L) is {(−2, 1, 0)}.
B10 A basis for Range(L) is {(1, 0, 1), (0, 1, −1)}. A basis for Null(L) is {(1, 1, 1)}.
B11 A basis for Range(L) is {(1, 2), (0, 1)}. A basis for Null(L) is {(0, 1, 0, 0), (−1, 0, 1, 0)}.
B12 A basis for Col([L]) is {(1, 0), (0, 1)}, a basis for Null([L]) is {(1, 1, −1)}, a basis for Row([L]) is {(1, −1, 0), (0, 1, 1)}, and a basis for Null([L]^T) is the empty set.
B13 A basis for Col([L]) is {(1, 0), (0, 1)}, a basis for Null([L]) is {(0, 1, 0, −1), (0, 0, 1, 0)}, a basis for Row([L]) is {(1, 0, 0, 0), (0, 1, 0, 1)}, and a basis for Null([L]^T) is the empty set.
B14 A basis for Col([L]) is {(0, 1)}, a basis for Null([L]) is {(1, −1)}, a basis for Row([L]) is {(1, 1)}, and a basis for Null([L]^T) is {(1, 0)}.
B15 A basis for Col([L]) is {(2, 1, 0), (0, 0, 1)}, a basis for Null([L]) is {(−1, 0, 1, 0), (0, −3, 0, 2)}, a basis for Row([L]) is {(1, 0, 1, 0), (0, 2, 0, 3)}, and a basis for Null([L]^T) is {(1, −2, 0)}.
B16 A basis for Col([L]) is {(1, 2, −1)}, a basis for Null([L]) is the empty set, a basis for Row([L]) is {1}, and a basis for Null([L]^T) is {(−2, 1, 0), (1, 0, 1)}.
B17 A basis for Col([L]) is {(2, 1, 1), (1, −1, 0)}, a basis for Null([L]) is {(0, 1, 1)}, a basis for Row([L]) is {(1, 1, −1), (1, 0, 0)}, and a basis for Null([L]^T) is {(1, 1, −3)}.
B18 A basis for the nullspace is the empty set; a basis for the range is {e1, e2}.
B19 A basis for the nullspace is {(1, 1, 0), (0, 1, 1)}; a basis for the range is {(−1, 1, 1)}.
B20 A basis for the nullspace is the empty set; a basis for the range is {e1, e2, e3}.
For Problems B21 - B23, there are infinitely many correct answers.
B21 [L] = [2 1]
          [2 1]

B22 [L] = [ 0  0]
          [−2  3]
          [ 2 −3]

B23 [L] = [0 1]
          [0 0]

B24 [L] = [0 0 0 0]
          [0 0 0 0]
          [0 0 0 0]
B25 (a) Number of variables is 4. (b) Rank of A is 2. (c) Dimension of the solution space is 2.
B26 (a) Number of variables is 4. (b) Rank of A is 2. (c) Dimension of the solution space is 2.
B27 (a) Number of variables is 3. (b) Rank of A is 2. (c) Dimension of the solution space is 1.
B28 (a) Number of variables is 5. (b) Rank of A is 3. (c) Dimension of the solution space is 2.
B29 A basis for Col(A) is {(1, 0), (0, 1)}. A basis for Null(A) is the empty set. A basis for Row(A) is {(1, 0), (0, 1)}. A basis for Null(A^T) is the empty set.
rank(A) + dim Null(A) = 2 + 0 = 2, and rank(A^T) + dim Null(A^T) = 2 + 0 = 2.
"5
4! "5
4! "5
3
1
0
. A basis for Null(A) is
. A basis for Row(A) is
. A basis for
−11
0
1
4!
B30 A basis for Col(A) is
4!
"5
11/3
Null(AT )
.
1
rank(A) + dim Null(A) = 1 + 1 = 2, and rank(AT ) + dim Null(AT ) = 1 + 1 = 2.
  
    
4! " ! "5


−2
1  0






1 1
 

   



0 ,  1
B31 A basis for Col(A) is {(1, 0), (1, 1)}. A basis for Null(A) is {(−2, 1, 1)}. A basis for Row(A) is {(1, 0, 2), (0, 1, −1)}. A basis for Null(A^T) is the empty set.
rank(A) + dim Null(A) = 2 + 1 = 3, and rank(A^T) + dim Null(A^T) = 2 + 0 = 2.
B32 A basis for Col(A) is 
,
.
A
basis
for
Null(A)
is
.
A
basis
for
Row(A)
is
.





































 1
 2
 0 1/2 

4
  

1



 

1
A basis for Null(AT ) is 
.





0

rank(A) + dim Null(A) = 2 + 1 = 3, and rank(AT ) + dim Null(AT ) = 2 + 1 = 3.
   

    
−1 −7







1
2




   















   
 1  0





−2
−5
B33 A basis for Col(A) is 
,
.
A
basis
for
Null(A)
is
,
. A basis for Row(A) is
   






















0
3




 −1 −3 









 0  1

   

  
1  0










1  0

−1




T
−1




,
.
A
basis
for
Null(A
)
is
.



























0
1


 1

   



 7 −3 
rank(A) + dim Null(A) = 2 + 2 = 4, and rank(AT ) + dim Null(AT ) = 2 + 1 = 3.
   
 
    



0 1
−2
1  0











   
 
   







0 ,  1
1
0
3
B34 A basis for Col(A) is 
,
.
A
basis
for
Null(A)
is
.
A
basis
for
Row(A)
is
.

























1 2

 1

2 −3

  

−2



 

−1
A basis for Null(AT ) is 
.



 1

rank(A) + dim Null(A) = 2 + 1 = 3, and rank(AT ) + dim Null(AT ) = 2 + 1 = 3.
      




  
−2 0 1
 1 













1
     









 

 3 0 0

 2/3

1
B35 A basis for Col(A) is 
. A basis for Null(A) is 
. A basis for Row(A) is 
.
  ,   ,  















0
1
0
0
     


1











 0 0 3
 −1/3 
   

−1 −1



   

 1 ,  0
A basis for Null(AT ) is 
.



 0  1

rank(A) + dim Null(A) = 1 + 3 = 4, and rank(AT ) + dim Null(AT ) = 1 + 2 = 3.
    
 
   



1  4
1
0 0









   

 

   











2
−4
0
1
0
B36 A basis for Col(A) is 
,
.
A
basis
for
Null(A)
is
.
A
basis
for
Row(A)
is
,
.A














































1
0
0 1

5



−7/6 






 1/12
basis for Null(AT ) is 
.





 1 

rank(A) + dim Null(A) = 2 + 1 = 3, and rank(AT ) + dim Null(AT ) = 2 + 1 = 3.



     
 5/3






2  7  9








     

−1/3







1
14
6
B37 A basis for Col(A) is 
,
,
.
A
basis
for
Null(A)
is
. A basis for Row(A) is



























1


1  8 −7









 0 

    



 1   0  0







 0   1  0

,
,
. A basis for Null(AT ) is the empty set.















−5/3 1/3 0





    



0
0
1
rank(A) + dim Null(A) = 3 + 1 = 4, and rank(AT ) + dim Null(AT ) = 3 + 0 = 3.
B38 A^T !u = !0, so !u ∈ Null(A^T).
B39 Yes, since !v is a linear combination of the basis vectors for Col(A).
B40 A!w = !0, so !w ∈ Null(A).
B41 !z is not a linear combination of the basis vectors for Row(A), so !z ∉ Row(A).
   

−2 −3





   




1
0





























 0  1





B42 
,
















0
2




   





 0  1





 0  0

 
 
 
0
−2
−3
 
 
 
0
1
 
 
 0
0
 0
 1
B43 !x =   + s   + t   , s, t ∈ R
1
 0
 2
 
 
 
0
0
 
 
 1
0
0
0
C Conceptual Problems
C1 Since Range(L) = Col([L]) and Null(L) = Null([L]), we have that by the Rank-Nullity Theorem
n = rank([L]) + nullity([L]) = dim(Col([L])) + dim(Null([L]))
= dim(Range(L)) + dim(Null(L))
C2 Consider
c1 L(!v 1 ) + · · · + ck L(!v k ) = !0
Then,
L(c1!v 1 + · · · + ck!v k ) = !0
Therefore, c1!v 1 + · · · + ck!v k ∈ Null(L). But, Null(L) = {!0}, so c1!v 1 + · · · + ck!v k = !0. Thus, c1 = · · · =
ck = 0 since {!v 1 , . . . , !v k } is linearly independent. Hence, {L(!v 1 ), . . . , L(!v k )} is linearly independent.
C3 Let [L] be the standard matrix of L. By definition of the standard matrix, we have [L] is an n × n
matrix. Thus, we get that Null(L) = {!0} if and only if
!0 = L(!x ) = [L]!x
has a unique solution. By the System Rank Theorem (2), this is true if and only if rank[L] = n.
Moreover, by the System Rank Theorem (3), we have that !b = [L]!x = L(!x ) is consistent for all
!b ∈ Rn if and only if rank[L] = n. Hence, Null(L) = {!0} if and only if rank[L] = n if and only if
Range(L) = Rn .
C4 By definition, Null(L) is a subset of Rn . Also, Null(L) is non-empty since L(!0) = !0. Let !x , !y ∈
Null(L). Then
L(s!x + t!y ) = sL(!x ) + tL(!y ) = s(!0) + t(!0) = !0
so s!x + t!y ∈ Null(L). Therefore Null(L) is a subspace of Rn .
C5 (a) First observe that if !x ∈ Rn , then (M ◦ L)(!x ) = M(L(!x )) = M(!y ) for !y = L(!x ) ∈ Rm . Hence
every vector in the range of M ◦ L is in the range of M. Thus, the range of M ◦ L is a subset of the
range of M. Since (M ◦ L)(!0) = !0, we have that !0 ∈ Range(M ◦ L). Let !y 1 , !y 2 ∈ Range(M ◦ L).
Then there exists !x 1 , !x 2 ∈ Rn such that (M ◦ L)(!x 1 ) = !y 1 and (M ◦ L)(!x 2 ) = !y 2 . Hence
s!y 1 + t!y 2 = s(M ◦ L)(!x 1 ) + t(M ◦ L)(!x 2 ) = (M ◦ L)(s!x 1 + t!x 2 )
with s!x 1 + t!x 2 ∈ Rn . Thus, s!y 1 + t!y 2 ∈ Range(M ◦ L). Thus, Range(M ◦ L) is a subspace of
Range(M).
(b) Let L : R2 → R2 and M : R2 → R2 be defined by L(x1 , x2 ) = (x1 , 0) and M(x1 , x2 ) = (x1 , x2 ). Then,
clearly Range(M) = R2 , but Range(M ◦ L) = Span{[1, 0]}.
(c) Let !x ∈ Null(L). Then, (M ◦ L)(!x ) = M(L(!x )) = M(!0) = !0, so !x ∈ Null(M ◦ L). Thus, the
nullspace of L is a subset of the nullspace of M ◦ L. Since L(!0) = !0, we have !0 ∈ Null(L). Let
!x , !y ∈ Null(L). Then
L(s!x + t!y ) = sL(!x ) + tL(!y ) = s(!0) + t(!0) = !0
so s!x + t!y ∈ Null(L). Therefore Null(L) is a subspace of Null(M ◦ L).
C6 Since rank(A) = 4, we have dim(Col(A)) = 4. Then, by the Rank-Nullity Theorem, nullity(A) =
7 − rank(A) = 7 − 4 = 3.
C7 The nullspace is as large as possible when the nullspace is the entire domain. Therefore, the largest
possible dimension for the nullspace is 4. The largest possible rank of a matrix is the minimum of the
number of rows and columns. Thus, the largest possible rank is 4.
C8 Since dim(Row(A)) = rank(A), we get by the Rank-Nullity Theorem that the dimension of the row
space is 5 − nullity(A) = 5 − 3 = 2.
C9 Let !b be in the column space of A. Then there exists an !x ∈ Rn such that A!x = !b. But then
A!b = A(A!x ) = A2 !x = On,n !x = !0
Hence, !b is in the nullspace of A.
C10 (1): Let A be the matrix with rows !a1^T, . . . , !am^T. If !n ∈ Null(A), then we have
!0 = A!n = (!a1 · !n, . . . , !am · !n)
Thus, !ai · !n = 0 for 1 ≤ i ≤ m. For any !r ∈ Row(A) we can write
!r = c1!a1 + · · · + cm!am
Thus,
!r · !n = (c1!a1 + · · · + cm!am ) · !n = c1 (!a1 · !n) + · · · + cm (!am · !n) = 0
On the other hand, if !n ∈ Rn is a vector such that !r · !n = 0 for all !r ∈ Row(A), then we have !ai · !n = 0
for 1 ≤ i ≤ m as !ai ∈ Row(A) for 1 ≤ i ≤ m. Thus,
A!n = (!a1 · !n, . . . , !am · !n) = !0
So, !n ∈ Null(A).
Consequently, Null(A) = {!n ∈ Rn | !n · !r = 0 for all !r ∈ Row(A)}.
(2): Follows immediately from (1) by substituting AT in for A.
C11 Since A!x = !b is inconsistent, that means !b ! Col(A). By FTLA, every !b ∈ Rm has the form
!b = !u + !y
where !u ∈ Col(A) and !y ∈ Null(AT ). That is, AT !y = !0. Since !b ! Col(A), we must have !y " !0.
Finally, we have that
!b · !y = (!u + !y ) · !y = !u · !y + !y · !y = 0 + ‖!y‖^2 ≠ 0
C12 We are given that A!r0 = !b and A!y = !b. Thus,
A(!y − !r0 ) = A!y − A!r0 = !b − !b = !0
Therefore, !y − !r0 ∈ Null(A). Let !n = !y − !r0 . Then, we have !y = !r0 + !n with !n ∈ Null(A) as required.
Section 3.5
A Practice Problems
A1 Using the result of Example 3.5.4, we find that 3(−8) − 8(−3) = 0, so the matrix is not invertible.
A2 Using the result of Example 3.5.4, we find that the inverse is (1/34)[1 -6; 5 4].
A3 Using the result of Example 3.5.4, we find that the inverse is (1/24)[3 -6; 2 4].
A4 Using the result of Example 3.5.4, we find that the inverse is (1/23)[5 4; -2 3].
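The 2 × 2 formula behind A1 – A4 (Example 3.5.4), [a b; c d]^{-1} = (1/(ad - bc))[d -b; -c a] when ad - bc ≠ 0, can be sketched as follows. The first sample matrix is the one whose inverse appears in A4; the second is an assumed matrix matching A1's determinant computation 3(-8) - 8(-3) = 0.

```python
from fractions import Fraction

def inv2(A):
    # [[a, b], [c, d]]^(-1) = (1/(ad - bc)) * [[d, -b], [-c, a]]
    (a, b), (c, d) = A
    det = a * d - b * c
    if det == 0:
        return None  # not invertible, as in A1 and A7
    return [[Fraction(d, det), Fraction(-b, det)],
            [Fraction(-c, det), Fraction(a, det)]]

print(inv2([[3, -4], [2, 5]]))  # (1/23)[5 4; -2 3], matching A4
print(inv2([[3, 8], [-3, -8]]))  # None: det = 3(-8) - 8(-3) = 0
```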

 

A5 [1 0 1 | 1 0 0; 2 1 3 | 0 1 0; 1 0 2 | 0 0 1] ~ [1 0 0 | 2 0 -1; 0 1 0 | -1 1 -1; 0 0 1 | -1 0 1].
Thus, the matrix is invertible, and its inverse is [2 0 -1; -1 1 -1; -1 0 1].
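The [A | I] ~ [I | A^{-1}] procedure used throughout A5 – A19 can be sketched with exact rational arithmetic; the tests use A5's matrix and A14's singular matrix:

```python
from fractions import Fraction

def invert(A):
    # Gauss-Jordan on the augmented matrix [A | I]; returns A^(-1) or None.
    n = len(A)
    M = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for c in range(n):
        piv = next((r for r in range(c, n) if M[r][c] != 0), None)
        if piv is None:
            return None  # rank < n: not invertible
        M[c], M[piv] = M[piv], M[c]
        M[c] = [x / M[c][c] for x in M[c]]
        for r in range(n):
            if r != c and M[r][c] != 0:
                M[r] = [a - M[r][c] * b for a, b in zip(M[r], M[c])]
    return [row[n:] for row in M]

print(invert([[1, 0, 1], [2, 1, 3], [1, 0, 2]]))  # A5's inverse
print(invert([[1, 2, 3], [4, 5, 6], [7, 8, 9]]))  # None, as in A14
```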

 

A6 [1 0 2 | 1 0 0; 1 1 3 | 0 1 0; 3 1 7 | 0 0 1] ~ [1 0 2 | 1 0 0; 0 1 1 | -1 1 0; 0 0 0 | -2 -1 1]
Since its rank is less than its number of rows, it is not invertible.
A7 Using the result of Example 3.5.4, we find that 1(2) − 1(2) = 0, so the matrix is not invertible.

 

 2 −1 3 1 0 0   1 0 0 1/2 1/4 −7/4 

 

2 1 0 1 0  ∼  0 1 0
0 1/2 −1/2 
A8  0

 

0
0 1 0 0 1
0 0 1
0
0
1


1/2 1/4 −7/4


Thus, the matrix is invertible, and its inverse is  0 1/2 −1/2.


0
0
1
 


A9 [1 -2 1 | 1 0 0; 1 -3 2 | 0 1 0; 1 1 -3 | 0 0 1] ~ [1 0 0 | 7 -5 -1; 0 1 0 | 5 -4 -1; 0 0 1 | 4 -3 -1]
Thus, the matrix is invertible, and its inverse is [7 -5 -1; 5 -4 -1; 4 -3 -1].

 

A10 [0 0 1 | 1 0 0; 0 1 1 | 0 1 0; 1 1 1 | 0 0 1] ~ [1 0 0 | 0 -1 1; 0 1 0 | -1 1 0; 0 0 1 | 1 0 0]
Thus, the matrix is invertible, and its inverse is [0 -1 1; -1 1 0; 1 0 0].

A11 [0 2 3 | 1 0 0; 1 2 3 | 0 1 0; 1 -5 -6 | 0 0 1] ~ [1 0 0 | -1 1 0; 0 1 0 | -3 1 -1; 0 0 1 | 7/3 -2/3 2/3]
Thus, the matrix is invertible, and its inverse is [-1 1 0; -3 1 -1; 7/3 -2/3 2/3].

 

1
0
0 
3 −3
 1 3 −3 1 0 0   1

 

4 0 1 0  ∼  0 −5 10 −2
1
0 
 2 1
 

2 1 −8 0 0 1
0
0
0 4/5 −7/5 1
Since its rank is less than its number of rows, it is not invertible.

 

 0 1 0 1 0 0   1 0 0 0 0 1 

 

 0 0 1 0 1 0  ∼  0 1 0 1 0 0 
1 0 0 0 0 1
0 0 1 0 1 0


0 0 1


Thus, the matrix is invertible, and its inverse is 1 0 0.


0 1 0

 

A14 [1 2 3 | 1 0 0; 4 5 6 | 0 1 0; 7 8 9 | 0 0 1] ~ [1 2 3 | 1 0 0; 0 -3 -6 | -4 1 0; 0 0 0 | 1 -2 1]
Since its rank is less than its number of rows, it is not invertible.

 

A15 [1 1 -1 | 1 0 0; 0 1 4 | 0 1 0; 2 3 2 | 0 0 1] ~ [1 1 -1 | 1 0 0; 0 1 4 | 0 1 0; 0 0 0 | -2 -1 1]
Since its rank is less than its number of rows, it is not invertible.

 

A16 [3 2 4 3 | 1 0 0 0; 0 1 0 1 | 0 1 0 0; 2 2 4 2 | 0 0 1 0; 0 -1 1 -1 | 0 0 0 1] ~ [1 0 0 0 | 0 -3 1/2 -2; 0 1 0 0 | -1 -2 3/2 -2; 0 0 1 0 | 0 1 0 1; 0 0 0 1 | 1 3 -3/2 2]
Thus, the matrix is invertible, and its inverse is [0 -3 1/2 -2; -1 -2 3/2 -2; 0 1 0 1; 1 3 -3/2 2].

 

A17 [1 1 3 1 | 1 0 0 0; 0 2 1 0 | 0 1 0 0; 2 2 7 1 | 0 0 1 0; 0 6 3 1 | 0 0 0 1] ~ [1 0 0 0 | 6 10 -5/2 -7/2; 0 1 0 0 | 1 2 -1/2 -1/2; 0 0 1 0 | -2 -3 1 1; 0 0 0 1 | 0 -3 0 1]
Thus, the matrix is invertible, and its inverse is [6 10 -5/2 -7/2; 1 2 -1/2 -1/2; -2 -3 1 1; 0 -3 0 1].

 

A18 [1 2 1 2 | 1 0 0 0; -1 1 -7 3 | 0 1 0 0; 0 1 -2 2 | 0 0 1 0; -1 2 -9 2 | 0 0 0 1] ~ [1 0 5 -2 | 1 0 -2 0; 0 1 -2 2 | 0 0 1 0; 0 0 0 -1 | 1 1 -3 0; 0 0 0 0 | -3 -4 8 1]
Since its rank is less than its number of rows, it is not invertible.

 
 1 0 1 0 1 1 0 0 0 0   1 0 0 0 0 1
 0 1 0 1 0 0 1 0 0 0   0 1 0 0 0 0

 
A19  0 0 1 1 1 0 0 1 0 0  ∼  0 0 1 0 0 0

 
 0 0 0 1 2 0 0 0 1 0   0 0 0 1 0 0
0 0 0 0 1 0 0 0 0 1
0 0 0 0 1 0

1
1 0 −1
0 1
0
−1

1 −1
Thus, the matrix is invertible, and its inverse is 0 0

0
1
0 0
0 0
0
0


 2 0 −1


A20 From Problem A5 we have that B−1 = −1 1 −1.


−1 0
1
 
1
 
(a) The solution of B!x = 1 is
 
1

0 −1
1 −2 

1
0 −1
2 

0
1 −1
1 

0
0
1 −2 

0
0
0
1

−2

2

1.

−2

1
  
   
1  2 0 −1 1  1



   
!x = B−1 1 = −1 1 −1 1 = −1
  
   
1
−1 0
1 1
0
 
−1
 
(b) The solution of B!x =  0 is
 
1
  
   
−1  2 0 −1 −1 −3
  
   
!x = B−1  0 = −1 1 −1  0 =  0
  
   
1
−1 0
1
1
2
   
1 −1
   
(c) The solution of B!x = 1 +  0 is
   
1
1
   
 
       
1 −1
1
−1  1 −3 −2
   
 
       
!x = B−1 1 +  0 = B−1 1 + B−1  0 = −1 +  0 = −1
   
 
       
1
1
1
1
0
2
2
 
 3
 
(d) The solution of B!x =  1 is
 
−2
  
   
 3  2 0 −1  3  8
  
   
!x = B−1  1 = −1 1 −1  1 =  0
  
   
−2
−1 0
1 −2
−5
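The four parts of A20 reduce to one matrix-vector product each, and part (c) is linearity of x = B^{-1}b. A quick check:

```python
# A5's inverse, used to solve B x = b as x = B^(-1) b.
Binv = [[2, 0, -1], [-1, 1, -1], [-1, 0, 1]]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

xa = matvec(Binv, [1, 1, 1])    # part (a)
xb = matvec(Binv, [-1, 0, 1])   # part (b)
xc = matvec(Binv, [0, 1, 2])    # part (c): b = [1,1,1] + [-1,0,1]
xd = matvec(Binv, [3, 1, -2])   # part (d)
print(xa, xb, xc, xd)
```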
A21 (a) Using the result of Example 3.5.4, we find that A^{-1} = [2 -1; -3 2] and B^{-1} = [-5 2; 3 -1].
(b) We have AB = [5 9; 9 16]. Row reducing [AB | I] gives
[5 9 | 1 0; 9 16 | 0 1] ~ [1 0 | -16 9; 0 1 | 9 -5]
Thus, (AB)^{-1} = [-16 9; 9 -5]. We also have
B^{-1}A^{-1} = [-5 2; 3 -1][2 -1; -3 2] = [-16 9; 9 -5] = (AB)^{-1}
(c) Row reducing [3A | I] gives
[6 3 | 1 0; 9 6 | 0 1] ~ [1 0 | 2/3 -1/3; 0 1 | -1 2/3]
Hence, (3A)^{-1} = [2/3 -1/3; -1 2/3] = (1/3)A^{-1}.
(d) Row reducing [A^T | I] gives
[2 3 | 1 0; 1 2 | 0 1] ~ [1 0 | 2 -3; 0 1 | -1 2]
Hence, (A^T)^{-1} = [2 -3; -1 2]. We have
A^T(A^T)^{-1} = [2 3; 1 2][2 -3; -1 2] = [1 0; 0 1]
as required.
(e) We have A + B = [3 3; 6 7], so (A + B)^{-1} = (1/3)[7 -3; -6 3], but A^{-1} + B^{-1} = [-3 1; 0 1].
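The identities checked in A21 can also be verified numerically. A sketch, assuming A = [2 1; 3 2] and B = [1 2; 3 5] (the matrices consistent with the inverses in part (a)):

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A, Ainv = [[2, 1], [3, 2]], [[2, -1], [-3, 2]]
B, Binv = [[1, 2], [3, 5]], [[-5, 2], [3, -1]]

AB = matmul(A, B)                      # [5 9; 9 16]
print(matmul(Binv, Ainv))              # equals (AB)^(-1) = [-16 9; 9 -5]
print(matmul(AB, matmul(Binv, Ainv)))  # the identity matrix
```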
A22 (a) We know that [Rπ/6] = [√3/2 -1/2; 1/2 √3/2]. The mapping Rπ/6 rotates each vector in the plane about the origin counterclockwise through the angle π/6. The inverse of Rπ/6 maps each image vector back to its original coordinates, so the inverse is a counterclockwise rotation through the angle -π/6. Hence,
[Rπ/6]^{-1} = [R-π/6] = [√3/2 1/2; -1/2 √3/2]
(b) We know that the matrix of a vertical shear S by amount 2 is [S] = [1 0; 2 1]. By either row reducing [S | I] or using geometrical arguments, we find that [S^{-1}] = [1 0; -2 1].
(c) A reflection R in the line x1 - x2 = 0 maps a vector [x1, x2] in the plane to its mirror image [x2, x1] in the line x1 - x2 = 0. Thus, R(!e1) = [0, 1] and R(!e2) = [1, 0], hence [R] = [0 1; 1 0]. The inverse of the mapping is also a reflection in the line x1 - x2 = 0, so [R^{-1}] = [0 1; 1 0] = [R].
(d) A composition of mappings is represented by products of matrices, so
[(R ◦ S)^{-1}] = ([R][S])^{-1} = [S]^{-1}[R]^{-1} = [1 0; -2 1][0 1; 1 0] = [0 1; 1 -2]
[(S ◦ R)^{-1}] = ([S][R])^{-1} = [R]^{-1}[S]^{-1} = [0 1; 1 0][1 0; -2 1] = [-2 1; 1 0]
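A22 (a)'s geometric argument, in code: the inverse of a rotation by θ is the rotation by -θ, so their product is the identity (up to floating-point error).

```python
import math

def R(t):
    # Standard matrix of a counterclockwise rotation through angle t.
    return [[math.cos(t), -math.sin(t)], [math.sin(t), math.cos(t)]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

P = matmul(R(math.pi / 6), R(-math.pi / 6))
print(P)  # approximately the identity matrix
```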
A23 Let "v , "y ∈ Rn and s, t ∈ R. Then there exist "u , "x ∈ Rn such that "x = M("y ) and "u = M("v ). Then
L("x ) = "y and L("u ) = "v . Since L is linear L(s"x + t"u ) = sL("x ) + tL("u ) = s"y + t"v . It follows that
M(s"y + t"v ) = s"x + t"u = sM("y ) + tM("v )
so M is linear.
In each case, we notice that the given matrix is one elementary row operation away from the identity matrix and
the inverse is just the matrix where the inverse elementary row operation has been performed on the identity
matrix.
A24 The matrix represents a horizontal shear by an amount -3. This linear transformation moves each vector [x1, x2] parallel to the x1-axis by an amount -3x2 to the new position [x1 - 3x2, x2]. The inverse transformation maps each image vector from the new position by an amount 3x2 back to the original position. Thus the inverse is a shear in the direction x1 by amount 3. The matrix of the inverse is [1 3; 0 1].
A25 The matrix represents a stretch in the x1 direction by a factor of 5. Thus, the inverse of this transformation must "shrink" the x1 coordinate back to its original value. Hence, the inverse is a stretch in the x1 direction by a factor of 1/5. The matrix of the inverse is [1/5 0; 0 1].
A26 The matrix represents a reflection of a vector in R3 over the x1x3-plane. Thus, the inverse of this transformation must reflect the vector back over the x1x3-plane. Therefore, the transformation is its own inverse. Hence, the matrix of the inverse is [1 0 0; 0 -1 0; 0 0 1].
A27 The matrix represents a vertical shear by an amount 4. This linear transformation moves each vector [x1, x2] parallel to the x2-axis by an amount 4x1 to the new position [x1, 4x1 + x2]. The inverse transformation maps each image vector from the new position by an amount -4x1 back to the original position. Thus the inverse is a vertical shear by amount -4. The matrix of the inverse is [1 0; -4 1].
A28 The matrix represents a shear in R3. As in (a) and (d), we just need to perform the reverse shear. Thus the inverse is [1 0 0; 0 1 -2; 0 0 1].
A29 The matrix represents a reflection of a vector in R3 over the plane x2 = x3. Thus, the inverse of this transformation must reflect the vector back over this plane. Therefore, the transformation is its own inverse. Hence, the matrix of the inverse is [1 0 0; 0 0 1; 0 1 0].
A30 The standard matrix of L is [L] = [2 3; 1 5]. By Theorem 3.5.5, we have that L is invertible since [L] is invertible, and [M] = [L]^{-1} = (1/7)[5 -3; -1 2]. Thus,
M(x1, x2) = ((5/7)x1 - (3/7)x2, -(1/7)x1 + (2/7)x2)


1 1 3


A31 The standard matrix of L is [L] = 0 1 1. We need to find [L]−1 . Row reducing [[L] | I] gives


2 3 6

 
2
 1 1 3 1 0 0   1 0 0 −3 −3

 
0
1
1
0
1
0
0
1
0
−2
0
1
∼

 
2 3 6 0 0 1
0 0 1
2
1 −1
Thus, [M] = [L]−1


2
−3 −3


0
1. Hence,
= −2


2
1 −1




M(x1 , x2 , x3 ) = (−3x1 − 3x2 + 2x3 , −2x1 + x3 , 2x1 + x2 − x3 )
We must be careful to remember that matrix multiplication is not commutative. So, when multiplying both
sides of a matrix equation by a matrix A, we must either multiply it on the right of both sides of the equation
or on the left of both sides of the equation.
A32 Multiplying by [5 7; 2 3]^{-1} on the left of both sides gives
[5 7; 2 3]^{-1}[5 7; 2 3]X = [5 7; 2 3]^{-1}[1 2 -1; 3 3 -1]
[1 0; 0 1]X = [-18 -15 4; 13 11 -3]
Thus, X = [-18 -15 4; 13 11 -3].
A33 Multiplying by [5 7; 2 3]^{-1} on the right of both sides gives
X[5 7; 2 3][5 7; 2 3]^{-1} = [1 2; 3 5; -2 1][5 7; 2 3]^{-1}
X[1 0; 0 1] = [-1 3; -1 4; -8 19]
Thus, X = [-1 3; -1 4; -8 19].
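The pattern of A32/A33 in code: for XA = B with A invertible, right-multiplying by A^{-1} gives X = BA^{-1}. Here A = [5 7; 2 3], whose inverse is [3 -7; -2 5] since det(A) = 1.

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

Ainv = [[3, -7], [-2, 5]]
Bmat = [[1, 2], [3, 5], [-2, 1]]
X = matmul(Bmat, Ainv)
print(X)  # [-1 3; -1 4; -8 19], matching A33
```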
A34 We prove the statement. If A is invertible, then A^T is also invertible by the Invertible Matrix Theorem. Let A^T = [!a1 · · · !an] and consider
!0 = c1!a1 + · · · + cn!an = A^T!c
Since A^T is invertible, this system has the unique solution
!c = (A^T)^{-1}!0 = !0
Thus, c1 = · · · = cn = 0, so the columns of A^T form a linearly independent set.
A35 We disprove the statement by giving a counterexample. Let A = [1 0; 0 1] and B = [-1 0; 0 -1]. Then, A and B are both invertible, but A + B = [0 0; 0 0] is not invertible.
A36 We disprove the statement by finding a counterexample. Let A = [1 1; 1 1], X = [1 1; 0 1]. Then, we find that
AX = [1 2; 1 2]
We can then calculate a suitable matrix C by solving
C = X^{-1}(AX) = [1 -1; 0 1][1 2; 1 2] = [0 0; 1 2]
Checking, we find that indeed
XC = [1 2; 1 2] = AX
but A ≠ C.
B Homework Problems
B1 (1/14)[4 1; -2 3]
B2 (1/5)[-1 -2; 1 -3]
B3 [1 -1 -1; 6 -2 -3; -2 1 1]
B4 The matrix is not invertible.
B5 (1/16)[0 8; 2 -7]
B6 [-3/4 -2 1; -1/4 1 0; -1 -2 1]
B7 The matrix is not invertible.
B8 [0 1/3 -1; 0 -2 -1; 0 1 1]
B9 The matrix is not invertible.




!
"
−1 
−1 0
 3 −6 −15
−8
−2




1
2
6
B11  0 1/2 0 
B12 16  2
2




5
1
1 0 1/2
−3
0 −3




3
4 −2
1 −2
0
 9
 0


 2
 5 −2
2
1 −1
3 −3




The matrix is not invertible. B14 
B15 


3
0
0
1
−10 −3 −5
−1


−4 −1 −2
1
−2
1 −1
1


3
0
0
−3

 6

2
−4
−2
1 


6  12
0
−6
0


0 −6
6
6




 


1/2 3/2 −1/2
 3/2
 1 
−11








6
−3 , (a) "x =  4 , (b) "x =  −1 , (c) "x = −44
B−1 =  1




 


0
−1
1/2
−1/2
1/2
7


 
 


 1 −1/3 1/3
 2 
3
−4/3


 
 


1/4 1/2, (a) "x =  7 , (b) "x = 4, (c) "x =  0 
B−1 = 1/4


 
 


1
0
1
10
7
−1
B19 (a) A^{-1} = [-5 7; 3 -4], B^{-1} = [-2 1; 1 0]
(b) AB = [7 18; 5 13], (AB)^{-1} = [13 -18; -5 7] = B^{-1}A^{-1}
(c) (2A)^{-1} = [-5/2 7/2; 3/2 -2] = (1/2)A^{-1}
(d) (A^T)^{-1} = [-5 3; 7 -4] = (A^{-1})^T
B20 (a) [Rπ/3] = [1/2 -√3/2; √3/2 1/2], [Rπ/3]^{-1} = [1/2 √3/2; -√3/2 1/2]
(b) [T] = [1/2 0; 0 1/2], [T]^{-1} = [2 0; 0 2]
(c) [R] = [1 0; 0 -1], [R]^{-1} = [R]
(d) [(R ◦ T)^{-1}] = [2 0; 0 -2] = [(T ◦ R)^{-1}]
B21 [1 0; 0 -1/2]
B22 [1 0; -2 1]
B24 [1 3; 0 1]
B25 [2 0 0; 0 1 0; 0 0 1]
B27 L^{-1}(x1, x2) = ((1/5)x1 + (2/5)x2, -(1/5)x1 + (1/10)x2)
B23 [0 0 1; 0 1 0; 1 0 0]
B26 [1 0 0; 0 1 0; -1 0 1]
B28 L^{-1}(x1, x2) = (-4x1 - x2, 7x1 + 2x2)
B29 L^{-1}(x1, x2, x3) = (9x1 - 8x2 - 3x3, -x1 + x2, -2x1 + 2x2 + x3)
B30 L^{-1}(x1, x2, x3) = (-x1 - x3, 3x1 - 2x2 - 2x3, -x1 + x2 + x3)
B31 X = [14 3; -5 -1]
B32 X = [1 -1; -2 3]
B33 X = [-5 13; 3 -5]
B34 X = [5/7 -4/7; 10/7 -8/7]
B35 X = [-5 -5; -3 -3; 3 3]
B36 X = [5/2 -3/2 1/2; 15/2 -13/2 5/2]
B37 X = [6 0; 2 0]
B38 X = [0 0; 0 0]
C Conceptual Problems
C1 Using properties of transpose and inverse we get
((AB)T )−1 = (BT AT )−1 = (AT )−1 (BT )−1 = (A−1 )T (B−1 )T
C2 I = A3 = AA2 , so A−1 = A2 .
C3 B(B4 + B2 + I) = I, so B−1 = B4 + B2 + I.
C4 (a) We must not assume that A^{-1} and B^{-1} exist, so we cannot use (AB)^{-1} = B^{-1}A^{-1}. We know that (AB)^{-1} exists. Therefore, I = (AB)^{-1}AB = ((AB)^{-1}A)B and so, by Theorem 3.5.2, B is invertible and B^{-1} = (AB)^{-1}A. Similarly, we have I = AB(AB)^{-1} = A(B(AB)^{-1}), hence A is invertible.
(b) Let C = [1 0 0; 0 1 0] and D = C^T. Then C and D are both not invertible since they are not square, but CD = [1 0; 0 1] is invertible.
C5 We mimic what we did with square matrices. Let B = ["b1 · · · "bm]. Then, we require
["e1 · · · "em] = A["b1 · · · "bm] = [A"b1 · · · A"bm]
Hence, we just need to solve the m systems A"bi = "ei . If rank(A) = m, then all m systems will be
consistent by the System Rank Theorem (3). Hence, we can form the right inverse B from "b1 , . . . , "bm .
If rank(A) < m, then at least one of the systems will be inconsistent and A will not have a right
inverse.
C6 If m < n, then rank(A) < n. Hence, each of the consistent systems A"bi = "ei has n − rank(A) > 0
free variables. Thus, each system has infinitely many solutions. Therefore, there are infinitely many
choices of columns "b1 , . . . , "bm .
C7 If m > n, then rank(A) < m, so at least one of the systems A"bi = "ei will be inconsistent by the System
Rank Theorem (3). Thus, A will not have a right inverse.
C8 Let A ∈ Mm×n (R). If A has a left inverse B, then B ∈ Mn×m (R) and we have BA = I. Thus, B has a
right inverse and hence by Problem C7 we must have m ≥ n. If A has a right inverse, then by Problem
C7 we must have m ≤ n. Thus, A cannot have both a left and a right inverse unless m = n.
C9 We need to find all 3 × 2 matrices B = ["b1 "b2] such that
["e1 "e2] = AB = [A"b1 A"b2]
This gives us two systems A"b1 = "e1 and A"b2 = "e2. We solve both by row reducing a double augmented matrix. We get
[3 1 0 | 1 0; 1 2 0 | 0 1] ~ [1 0 0 | 2/5 -1/5; 0 1 0 | -1/5 3/5]
Thus, the general solution of the first system is [2/5, -1/5, 0] + t[0, 0, 1], t ∈ R, and the general solution of the second system is [-1/5, 3/5, 0] + s[0, 0, 1], s ∈ R. Therefore, every right inverse of A has the form
B = [2/5 -1/5; -1/5 3/5; t s]
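A quick check that every member of the C9 family is a right inverse of A = [3 1 0; 1 2 0], for any parameters t and s:

```python
from fractions import Fraction as F

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[3, 1, 0], [1, 2, 0]]

def right_inverse(t, s):
    # The two-parameter family from C9.
    return [[F(2, 5), F(-1, 5)], [F(-1, 5), F(3, 5)], [t, s]]

for t, s in [(0, 0), (7, -2), (F(1, 3), 5)]:
    assert matmul(A, right_inverse(t, s)) == [[1, 0], [0, 1]]
print("all right inverses check out")
```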
C10 We need to find all 3 × 2 matrices B = ["b1 "b2] such that
["e1 "e2] = AB = [A"b1 A"b2]
This gives us two systems A"b1 = "e1 and A"b2 = "e2. We solve both by row reducing a double augmented matrix. We get
[1 -2 1 | 1 0; 1 -1 1 | 0 1] ~ [1 0 1 | -1 2; 0 1 0 | -1 1]
Thus, the general solution of the first system is [-1, -1, 0] + t[-1, 0, 1], t ∈ R, and the general solution of the second system is [2, 1, 0] + s[-1, 0, 1], s ∈ R. Therefore, every right inverse of A has the form
B = [-1-t 2-s; -1 1; t s]
"T
1 −2 1
. Also, observe that if AB = I2 , then (AB)T = I2 and so BT AT = I2 .
1 −1 1
!
"
−1 − t −1 t
Thus, from our work in Problem C10, we have that all left inverses of B are
.
2−s
1 s
C11 Observe that B =
C12 To find all left inverses of B, we use the same trick we did in Problem C11. We will find all right inverses of B^T = [1 0 3; 2 -1 3]. We get
[1 0 3 | 1 0; 2 -1 3 | 0 1] ~ [1 0 3 | 1 0; 0 1 3 | 2 -1]
Thus, the general solution of the first system is [1, 2, 0] + t[-3, -3, 1], t ∈ R, and the general solution of the second system is [0, -1, 0] + s[-3, -3, 1], s ∈ R. Therefore, every left inverse of B has the form
[1-3t 2-3t t; -3s -1-3s s]
C13 Consider
c1 A"v 1 + · · · + ck A"v k = "0
Then, we have
A(c1"v 1 + · · · + ck"v k ) = "0
Since A is invertible, the only solution to this equation is
c1"v 1 + · · · + ck"v k = A−1"0 = "0
Hence, we get c1 = · · · = ck = 0, because {"v 1 , . . . , "v k } is linearly independent.
" k } spans Rn ,
C14 First observe that by definition, A"
wi ∈ Rn . Let "y ∈ Rn , and let "z = A−1"y . Since {"
w1 , . . . , w
there exists d1 , . . . , dk ∈ R such that
"z = d1 w
" 1 + · · · + dk w
"k
" 1 + · · · + dk w
"k
A−1"y = d1 w
Multiplying on the left of both sides by A gives
" 1 + · · · + dk w
" k)
AA−1"y = A(d1 w
I"y = d1 A"
w1 + · · · + dk A"
wk
"y = d1 A"
w1 + · · · + dk A"
wk
Thus, {A"
w1 , . . . , A"
wk } also spans Rn .
"
!
"
!
"
!
"
!
"
1 1
2 1
2 −1
2 −1
5 3
−1
−1
C15 (a) Let A =
and B =
. Then, A =
and B =
. Then, AB =
1 2
3 2
−1
1
−3
2
8 5
!
"
!
"
5 −3
7 −4
and (AB)−1 =
, but A−1 B−1 =
.
−8
5
−5
3
!
(b) If (AB)−1 = A−1 B−1 = (BA)−1 , then we must have AB = BA.
C16 (a) (1) ⇒ (2): If A is invertible, then by Theorem 3.5.2, rank(A) = n.
(2) ⇒ (1): If rank(A) = n, then A"x i = "ei is consistent for 1 ≤ i ≤ n by the System Rank Theorem (3). Thus, A is invertible with A^{-1} = ["x 1 · · · "x n].
(b) (2) ⇒ (3): Since rank(A) = n, the RREF of A has n leading ones. Thus, there is a leading one in
each row and in each column, so the RREF is I.
(c) (3) ⇒ (4): This follows from the System Rank Theorem (3) and the System Rank Theorem (2).
(d) (4) ⇒ (5): Let "x ∈ Null(A). Then A"x = "0. By (4), this has a unique solution "x = "0. Thus,
Null(A) = {"0}.
(e) (5) ⇒ (6): Since Null(A) = {"0}, we have that A"x = "0 has a unique solution. Thus, rank(A) = n
by Theorem 2.2.2(2). Thus, A"x = "b is consistent for all "b ∈ Rn , and so Col(A) = Rn .
(f) (6) ⇒ (7): Since Col(A) = Rn , we have that rank(A) = dim Col(A) = n. Thus, rank(AT ) = n.
Therefore, AT "x = "b is consistent for all "b ∈ Rn , and hence Row(A) = Rn .
(g) (7) ⇒ (8): Since Row(A) = Rn , we have that rank(AT ) = n. Thus, AT "x = "0 has unique solution
"x = "0, so Null(AT ) = {"0}.
(h) (8) ⇒ (9): Since Null(AT ) = {"0}, we have that rank(AT ) = n. Thus, AT "x i = "ei is consistent for
1 ≤ i ≤ n, so (AT )−1 exists.
(i) (9) ⇒ (1): By Theorem 3.5.3(3).
C17 FALSE. Take P = [1 1; 0 1], A = [1 2; 1 2], and B = [0 0; 1 3].
C18 FALSE. A = [1 0; 0 0] satisfies AA = A, but A is not invertible.
C19 TRUE. Let "b ∈ Rn . Consider the system of equations A"x = "b. Then, we have
AA"x = A"b
"x = (AA)−1 A"b
Thus, A"x = "b is consistent for all "b ∈ Rn and hence A is invertible by the Invertible Matrix Theorem.
C20 FALSE. If A = [0 0; 0 0] and B = [1 0; 0 1], then AB = O2,2 and B is invertible.
C21 TRUE. One such basis is {[1 0; 0 1], [1 0; 0 2], [0 1; 1 0], [0 1; 2 0]}.
C22 TRUE. If A has a column of zeroes, then AT has a row of zeroes and hence rank(AT ) < n. Thus, AT
is not invertible and so A is also not invertible by the Invertible Matrix Theorem.
C23 TRUE. If A"x = "0 has a unique solution, then Null(A) = {"0} and hence Col(A) = Rn by the Invertible
Matrix Theorem.
C24 TRUE. We can rewrite the equation as I = A(−A + 2I). Hence, A is invertible and A−1 = −A + 2I.
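C24's rewriting I = A(-A + 2I) can be checked on any matrix satisfying A^2 - 2A + I = 0; the shear below is an assumed example (not from the textbook) for which the identity holds.

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[1, 1], [0, 1]]  # A^2 = 2A - I holds for this shear
# A^(-1) = 2I - A, computed entrywise:
Ainv = [[2 - A[0][0], -A[0][1]], [-A[1][0], 2 - A[1][1]]]
print(matmul(A, Ainv))  # [[1, 0], [0, 1]]
```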
C25 TRUE. Since A is not invertible, A"x = "0 has a solution "x with "x ≠ "0. Let A = ["a1 · · · "an] and
consider
"0 = x1"a1 + · · · + xn"an = A"x
Since there is a non-zero solution "x , the columns of A form a linearly dependent set.
C26 TRUE. Since the columns of A form a linearly dependent set, there is a non-zero vector "x such that
"0 = x1"a1 + · · · + xn"an = A"x
Thus, Null(A) ! {"0} and hence A is not invertible by the Invertible Matrix Theorem.
Section 3.6
A Practice Problems
To find the elementary matrix E, we perform the specified elementary row operation on the 3×3 identity matrix.


1 −5 0
1 0 and
A1 We have E = 0


0
0 1


 

1 −5 0  1 2 3  6 −13 −17


 

1 0 −1 3 4 = −1
3
4
EA = 0


 

0
0 1
4 2 0
4
2
0
which equals the matrix obtained from A by performing the elementary row operation.
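The check in A1, in code: multiplying by E on the left performs the row operation (here: add -5 times row 2 to row 1) on A.

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

E = [[1, -5, 0], [0, 1, 0], [0, 0, 1]]
A = [[1, 2, 3], [-1, 3, 4], [4, 2, 0]]
EA = matmul(E, A)
print(EA)  # [[6, -13, -17], [-1, 3, 4], [4, 2, 0]]
```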


1 0 0


A2 We have E = 0 0 1 and


0 1 0


 

1 0 0  1 2 3  1 2 3


 

EA = 0 0 1 −1 3 4 =  4 2 0


 

0 1 0
4 2 0
−1 3 4
which equals the matrix obtained from A by performing the elementary row operation.


A3 We have E = [1 0 0; 0 1 0; 0 0 -1] and
EA = [1 0 0; 0 1 0; 0 0 -1][1 2 3; -1 3 4; 4 2 0] = [1 2 3; -1 3 4; -4 -2 0]
which equals the matrix obtained from A by performing the elementary row operation.


1 0 0


A4 We have E = 0 6 0 and


0 0 1


 

1 0 0  1 2 3  1 2 3


 

EA = 0 6 0 −1 3 4 = −6 18 24


 

0 0 1
4 2 0
4 2 0
which equals the matrix obtained from A by performing the elementary row operation.
A5 We have E = [1 0 0; 0 1 0; 4 0 1] and

EA = [1 0 0; 0 1 0; 4 0 1][1 2 3; −1 3 4; 4 2 0] = [1 2 3; −1 3 4; 8 10 12]

which equals the matrix obtained from A by performing the elementary row operation.
To find the elementary matrix E, we perform the specified elementary row operation on the 4×4 identity matrix.








0 0
1 0
1 0 0 0
1 0 0 0
1 0 0 0

0 1
0 0 0 1


0 0 −1 0 1 0 0
, E −1 = 0 0 0 1
A6 E = 
A7 E = 
, E = 

0 0 1 0
1 0
0 0
0 0 1 0
0 0 1 0



0 0 −3 1
0 0 3 1
0 1 0 0
0 1 0 0








0 0
0 0
1 0
1 0
1 0 0 0
 1 0 0 0


0 1
0 1 0 0


0 0 −1 0 1
0 0
, E −1 =  0 1 0 0
A8 E = 
A9 E = 
, E = 

1
−2 0 1 0
0 0 −3 0
0 0 − 3 0
2 0 1 0


0 0
0 1
0 0
0 1
0 0 0 1
0 0 0 1


1





3 0 0 0
 3 0 0 0
0 0 1 0
0 0 1 0
0 1 0 0
 0 1 0 0
0 1 0 0
0 1 0 0
, E −1 = 

, E −1 = 

A10 E = 
A11 E = 
 0 0 1 0

1 0 0 0
1
0
0
0
0 0 1 0







0 0 0 1
0 0 0 1
0 0 0 1
0 0 0 1








1 1 0 0
1 −1 0 0
1 0 0 −3
1 0 0 3
0 1 0 0
0




0 1 0
1 0 0
0 −1 0 1 0 0
, E −1 = 

A12 E = 
A13
E
=


, E = 

0
0 0 1
0 1 0
0
0 0 1 0
0 0 1 0




0 0 0 1
0
0 0 1
0 0 0
1
0 0 0 1
A14 It is elementary. The corresponding elementary row operation is 5R1 .
A15 It is not elementary. There is no single elementary row operation that can row reduce this back to I.
A16 It is elementary. The corresponding elementary row operation is R1 + 2R2 .
A17 It is elementary. The corresponding elementary row operation is R3 + (−4)R2 .
A18 It is not elementary. Both row 1 and row 3 have been multiplied by −1.
A19 It is not elementary. We have multiplied row 1 by 3 and then added row 3 to row 1.
A20 It is elementary. The corresponding elementary row operation is R1 ↔ R3.
A21 It is not elementary. All three rows have been swapped.
A22 It is elementary. A corresponding elementary row operation is 1R1 .
To find a sequence of elementary matrices such that Ek · · · E1 A = I, we row reduce A to I keeping track of
the elementary row operations used. Note that since we obtain one elementary matrix for each elementary row
operation used, it is wise to try to minimize the number of elementary row operations required.
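The bookkeeping described above can be sketched as code. The helper names below are hypothetical; the matrix and the four row operations are the ones from A23 (R2 ↔ R3, (1/2)R3, R1 − 4R3, R1 − 3R2):

```python
import numpy as np

A = np.array([[1.0, 3.0, 4.0],
              [0.0, 0.0, 2.0],
              [0.0, 1.0, 0.0]])

def swap(i, j, n=3):
    E = np.eye(n); E[[i, j]] = E[[j, i]]; return E     # Ri <-> Rj

def scale(i, c, n=3):
    E = np.eye(n); E[i, i] = c; return E               # c*Ri

def add_row(i, j, c, n=3):
    E = np.eye(n); E[i, j] = c; return E               # Ri + c*Rj

E1, E2, E3, E4 = swap(1, 2), scale(2, 0.5), add_row(0, 2, -4.0), add_row(0, 1, -3.0)

# E4 E3 E2 E1 A = I, so the accumulated product is exactly A^(-1).
P = E4 @ E3 @ E2 @ E1
print(np.allclose(P @ A, np.eye(3)))      # True
print(np.allclose(P, np.linalg.inv(A)))   # True
```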
A23 (a) We have

[1 3 4; 0 0 2; 0 1 0] ∼ (R2 ↔ R3) [1 3 4; 0 1 0; 0 0 2] ∼ ((1/2)R3) [1 3 4; 0 1 0; 0 0 1]
∼ (R1 − 4R3) [1 3 0; 0 1 0; 0 0 1] ∼ (R1 − 3R2) [1 0 0; 0 1 0; 0 0 1]

(b) The first elementary row operation performed was R2 ↔ R3. So, E1 is the 3 × 3 elementary matrix associated with this row operation. Hence, E1 = [1 0 0; 0 0 1; 0 1 0]. The second row operation performed was (1/2)R3, so E2 = [1 0 0; 0 1 0; 0 0 1/2]. The third elementary row operation performed was R1 − 4R3, so E3 = [1 0 −4; 0 1 0; 0 0 1], and the fourth elementary row operation performed was R1 − 3R2, so E4 = [1 −3 0; 0 1 0; 0 0 1]. Thus,

A⁻¹ = E4E3E2E1 = [1 −2 −3; 0 0 1; 0 1/2 0]

(c) Since A⁻¹ = E4E3E2E1 we have that A = (E4E3E2E1)⁻¹ = E1⁻¹E2⁻¹E3⁻¹E4⁻¹. Hence,

A = [1 0 0; 0 0 1; 0 1 0][1 0 0; 0 1 0; 0 0 2][1 0 4; 0 1 0; 0 0 1][1 3 0; 0 1 0; 0 0 1]
A24 (a) We have

[1 2 2; 0 1 3; 2 4 5] ∼ (R3 − 2R1) [1 2 2; 0 1 3; 0 0 1] ∼ (R2 − 3R3, R1 − 2R3) [1 2 0; 0 1 0; 0 0 1]
∼ (R1 − 2R2) [1 0 0; 0 1 0; 0 0 1]

(b) Observe that we performed the second and third operations "at the same time". Technically speaking, we actually need to perform one and then the other; however, the order in which we do them does not matter. We will say that we did R2 − 3R3 first and then R1 − 2R3. Therefore, the elementary matrices corresponding to these row operations are E1 = [1 0 0; 0 1 0; −2 0 1], E2 = [1 0 0; 0 1 −3; 0 0 1], E3 = [1 0 −2; 0 1 0; 0 0 1], and E4 = [1 −2 0; 0 1 0; 0 0 1]. Thus,

A⁻¹ = E4E3E2E1 = [−7 −2 4; 6 1 −3; −2 0 1]

(c)

A = E1⁻¹E2⁻¹E3⁻¹E4⁻¹ = [1 0 0; 0 1 0; 2 0 1][1 0 0; 0 1 3; 0 0 1][1 0 2; 0 1 0; 0 0 1][1 2 0; 0 1 0; 0 0 1]
A25 (a) We have

[2 0 −4; −2 1 4; 3 −1 −5] ∼ ((1/2)R1) [1 0 −2; −2 1 4; 3 −1 −5] ∼ (R2 + 2R1, R3 − 3R1) [1 0 −2; 0 1 0; 0 −1 1]
∼ (R3 + R2) [1 0 −2; 0 1 0; 0 0 1] ∼ (R1 + 2R3) [1 0 0; 0 1 0; 0 0 1]

(b) Thus, we get E1 = [1/2 0 0; 0 1 0; 0 0 1], E2 = [1 0 0; 2 1 0; 0 0 1], E3 = [1 0 0; 0 1 0; −3 0 1], E4 = [1 0 0; 0 1 0; 0 1 1], and E5 = [1 0 2; 0 1 0; 0 0 1]. Thus,

A⁻¹ = E5E4E3E2E1 = [−1/2 2 2; 1 1 0; −1/2 1 1]

(c)

A = E1⁻¹E2⁻¹E3⁻¹E4⁻¹E5⁻¹ = [2 0 0; 0 1 0; 0 0 1][1 0 0; −2 1 0; 0 0 1][1 0 0; 0 1 0; 3 0 1][1 0 0; 0 1 0; 0 −1 1][1 0 −2; 0 1 0; 0 0 1]
A26 (a) We have

[0 3 1; 1 6 2; 1 3 2] ∼ (R2 − 2R1) [0 3 1; 1 0 0; 1 3 2] ∼ (R3 − R1) [0 3 1; 1 0 0; 1 0 1]
∼ (R1 ↔ R2) [1 0 0; 0 3 1; 1 0 1] ∼ (R3 − R1) [1 0 0; 0 3 1; 0 0 1] ∼ (R2 − R3) [1 0 0; 0 3 0; 0 0 1]
∼ ((1/3)R2) [1 0 0; 0 1 0; 0 0 1]

(b) Thus, we get E1 = [1 0 0; −2 1 0; 0 0 1], E2 = [1 0 0; 0 1 0; −1 0 1], E3 = [0 1 0; 1 0 0; 0 0 1], E4 = [1 0 0; 0 1 0; −1 0 1], E5 = [1 0 0; 0 1 −1; 0 0 1], and E6 = [1 0 0; 0 1/3 0; 0 0 1]. Thus,

A⁻¹ = E6E5E4E3E2E1 = [−2 1 0; 0 1/3 −1/3; 1 −1 1]

(c)

A = E1⁻¹E2⁻¹E3⁻¹E4⁻¹E5⁻¹E6⁻¹ = [1 0 0; 2 1 0; 0 0 1][1 0 0; 0 1 0; 1 0 1][0 1 0; 1 0 0; 0 0 1][1 0 0; 0 1 0; 1 0 1][1 0 0; 0 1 1; 0 0 1][1 0 0; 0 3 0; 0 0 1]

A27 (a) We have
[1 −2 4; −1 3 −4; 0 1 2] ∼ (R2 + R1) [1 −2 4; 0 1 0; 0 1 2] ∼ (R1 + 2R2, R3 − R2) [1 0 4; 0 1 0; 0 0 2]
∼ ((1/2)R3) [1 0 4; 0 1 0; 0 0 1] ∼ (R1 − 4R3) [1 0 0; 0 1 0; 0 0 1]

(b) Thus, we get E1 = [1 0 0; 1 1 0; 0 0 1], E2 = [1 2 0; 0 1 0; 0 0 1], E3 = [1 0 0; 0 1 0; 0 −1 1], E4 = [1 0 0; 0 1 0; 0 0 1/2], and E5 = [1 0 −4; 0 1 0; 0 0 1]. Thus,

A⁻¹ = E5E4E3E2E1 = [5 4 −2; 1 1 0; −1/2 −1/2 1/2]

(c)

A = E1⁻¹E2⁻¹E3⁻¹E4⁻¹E5⁻¹ = [1 0 0; −1 1 0; 0 0 1][1 −2 0; 0 1 0; 0 0 1][1 0 0; 0 1 0; 0 1 1][1 0 0; 0 1 0; 0 0 2][1 0 4; 0 1 0; 0 0 1]

A28 (a) We have

[1 0 −1; −2 0 −2; −4 1 4] ∼ (R2 + 2R1, R3 + 4R1) [1 0 −1; 0 0 −4; 0 1 0] ∼ (−(1/4)R2) [1 0 −1; 0 0 1; 0 1 0]
∼ (R2 ↔ R3) [1 0 −1; 0 1 0; 0 0 1] ∼ (R1 + R3) [1 0 0; 0 1 0; 0 0 1]

(b) Hence, E1 = [1 0 0; 2 1 0; 0 0 1], E2 = [1 0 0; 0 1 0; 4 0 1], E3 = [1 0 0; 0 −1/4 0; 0 0 1], E4 = [1 0 0; 0 0 1; 0 1 0], and E5 = [1 0 1; 0 1 0; 0 0 1]. Thus,

A⁻¹ = E5E4E3E2E1 = [1/2 −1/4 0; 4 0 1; −1/2 −1/4 0]

(c)

A = E1⁻¹E2⁻¹E3⁻¹E4⁻¹E5⁻¹ = [1 0 0; −2 1 0; 0 0 1][1 0 0; 0 1 0; −4 0 1][1 0 0; 0 −4 0; 0 0 1][1 0 0; 0 0 1; 0 1 0][1 0 −1; 0 1 0; 0 0 1]
B Homework Problems
B1 E
B3 E
B5 E
B7 E

0

= 0

1

1

= 0

0

1

= 0

0

1

= 0

0


0 1
1 3


1 0, EA = 0 2


0 0
1 0


0 0 
 1


1 0 , EA =  0


0 1/2
1/2


0 6
7 18


1 0, EA = 0 2


0 1
1 3


0 3
1 0


1 0, E −1 = 0 1


0 1
0 0

−1

−1

1

0
1 

2
−1 

3/2 −1/2

−5

−1

−1

−3

0

1
B2 E
B4 E
B6 E
B8 E

 1

= −2

0

−1

=  0

0

1

= 0

0

0

= 1

0


0 0
 1


1 0, EA = −2


0 1
1


0 0
−1


1 0, EA =  0


0 1
1


0 0
1 0


1 1, EA = 1 5


0 1
1 3


1 0
0 1


0 0, E −1 = 1 0


0 1
0 0

0
1

2 −3

3 −1

0 −1

2 −1

3 −1

1

−2

−1

0

0

1
B9 E
B11 E
B13 E
B15 E

0 0

= 0 1

1 0

1 0

= 0 1

0 2

−1/3

=  0

0

1 0

= 0 0

0 1



1
0 0 1



0, E −1 = 0 1 0



0
1 0 0



0
0 0
1



0, E −1 = 0
1 0



1
0 −2 1



0 0
−3 0 0



1 0, E −1 =  0 1 0



0 1
0 0 1



0
1 0 0



1, E −1 = 0 0 1



0
0 1 0
B10 E
B12 E
B14 E
B16 E

 1

= −4

0

1

= 0

0

 1

=  0

−1

1

= 0

0


0 0
1


1 0, E −1 = 4


0 1
0


0 0
1


1 0, E −1 = 0


0 2
0


0 0
1


1 0, E −1 = 0


0 1
1


0 0
1


−3 0, E −1 = 0


0 1
0
B17 It is elementary. The corresponding elementary row operation is R1 − (1/2)R3.
B18 It is not elementary.
B19 It is not elementary.
B20 It is elementary. The corresponding elementary row operation is R1 ↔ R2.
B21 It is not elementary.
B22 It is elementary. The corresponding elementary row operation is 3R2 .
!
"
1/2 2
B23
1 1
!
"
!
"
!
"
1 0
1 −1
1 0
B24 (a) E1 =
, E2 =
, E3 =
0 1/3
0
1
−1 1
!
"
1 −1/3
(b) A−1 =
−1
2/3
!
"!
"!
"
1 0 1 1 1 0
(c) A =
0 3 0 1 1 1
!
"
!
"
!
"
0 1
1
0
1 2
B25 (a) E1 =
, E2 =
, E3 =
1 0
0 1/3
0 1
!
"
2/3 1
(b) A−1 =
1/3 0
!
"!
"!
"
0 1 1 0 1 −2
(c) A =
1 0 0 3 0
1
!
"
!
"
!
"
!
"
1 −1
1/2 0
1 0
1 0
B26 (a) E1 =
, E2 =
, E3 =
, E4 =
0
1
0 1
0 1/2
−1 1
!
"
1/2 −1/2
(b) A−1 =
−1/2
1
!
"!
"!
"!
"
1 1 2 0 1 0 1 0
(c) A =
0 1 0 1 0 2 1 1

0 0

1 0

0 1

0 0 

1 0 

0 1/2

0 0

1 0

0 1

0
0

−1/3 0

0
1
B27 (a)
(b)
(c)
B28 (a)
(b)
(c)
B29 (a)
(b)
(c)
B30 (a)
(b)
(c)
B31 (a)
"
!
"
!
"
1 0
1
0
1 −3
E1 =
, E2 =
, E3 =
3 1
0 1/10
0
1
!
"
1/10 −3/10
A−1 =
3/10
1/10
!
"!
"!
"
1 0 1 0 1 3
A=
−3 1 0 10 0 1










0 
 1 0 0
 1 0 0
1 −1 0
1 0
1 0 −4










1 0, E4 = 0 1
0 , E5 = 0 1
0,
E1 = −1 1 0, E2 =  0 1 0, E3 = 0










0 0 1
−1 0 1
0
0 1
0 0 −1/2
0 0
1


1 0 0


E6 = 0 1 1


0 0 1


−1
2 
 0


1 −1/2
A−1 = −1/2


1/2
0 −1/2







0 1 0 4 1 0
0
1 0 0 1 0 0 1 1 0 1 0







0 0 1 0 0 1 −1
A = 1 1 0 0 1 0 0 1 0 0 1







0 0 1 1 0 1 0 0 1 0 0 −2 0 0 1 0 0
1










0 0
1/4 0 0
 1 0 0
 1 0 0
1
1 0 1










1 0, E5 = 0 1 0
E1 =  0 1 0, E2 = −1 1 0, E3 =  0 1 0, E4 = 0










0 0 1
0 0 1
−6 0 1
0 −2 1
0 0 1


−3/4 −2 1


1 0
A−1 = −1/4


−1 −2 1






4 0 0 1 0 0 1 0 0 1 0 0 1 0 −1






0
A = 0 1 0 1 1 0 0 1 0 0 1 0 0 1






0 0 1 0 0 1 6 0 1 0 2 1 0 0
1










0
 1 0 0
1 −2 0
1 0 0
1 0 1
1 0










1 0, E3 = 0 1 0, E4 = 0 1 0, E5 = 0 1 −3
E1 =  0 1 0, E2 = 0










−2 0 1
0
0 1
0 1 1
0 0 1
0 0
1


1
−1 −1


A−1 =  6 −2 −3


−2
1
1






0 0 1 0 −1 1 0 0
1 0 0 1 2 0 1






1 0 0 1
0 0 1 3
A = 0 1 0 0 1 0 0






2 0 1 0 0 1 0 −1 1 0 0
1 0 0 1








0 1 0
1 0 0
1 0 0
1 −4 0








1 0,
E1 = 1 0 0, E2 = 0 1/2 0, E3 = 0 1 0, E4 = 0








0 0 1
0 0 1
1 0 1
0
0 1








0 0
0 
0
1
1 0
1 0 8
1 0








1 0, E6 = 0 1
0 , E7 = 0 1 0, E8 = 0 1 −3
E5 = 0








0 −6 1
0 0 −1/6
0 0 1
0 0
1
!

 2

−1
(b) A =  −1

1/2

0 1

(c) A = 1 0

0 0
−1/3
1/2
−1/6

0 1

0 0

1 0

−4/3

1/2

−1/6







0 0  1 0 0 1 4 0 1 0 0 1 0
0 1 0 −8 1 0 0







2 0  0 1 0 0 1 0 0 1 0 0 1
0 0 1
0 0 1 3







0 1 −1 0 1 0 0 1 0 6 1 0 0 −6 0 0
1 0 0 1
C Conceptual Problems
C1 (a) Row reducing A to I, we find that A = [0 1; 1 0][1 0; 0 −2][1 −4; 0 1]. We see that [0 1; 1 0] is the standard matrix of a reflection R in the line x1 = x2, [1 0; 0 −2] is the standard matrix of a stretch S by factor −2 in the x2-axis, and [1 −4; 0 1] is the standard matrix of a shear H by factor −4 in the x1-direction. Thus, L = R ◦ S ◦ H.
(b) Since the standard matrix of an invertible linear operator is invertible, it can be written as a product of elementary matrices. Moreover, each elementary matrix corresponds to either a shear, a stretch, or a reflection. Hence, since multiplying the matrices of two mappings corresponds to composing the mappings, we can write L as a composition of shears, stretches, and reflections.
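The factorization in part (a) can be checked by multiplying the three standard matrices (a sketch, assuming NumPy):

```python
import numpy as np

# The three geometric factors from part (a): reflection in x1 = x2,
# stretch by -2 in the x2-axis, shear by -4 in the x1-direction.
R = np.array([[0, 1], [1, 0]])
S = np.array([[1, 0], [0, -2]])
H = np.array([[1, -4], [0, 1]])

# Matrix multiplication composes the mappings: [L] = [R][S][H].
A = R @ S @ H
print(A)
```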
C2 (a) We row reduce A to I keeping track of our elementary row operations.

[1 2; 2 6] ∼ (R2 − 2R1) [1 2; 0 2] ∼ (R1 − R2) [1 0; 0 2] ∼ ((1/2)R2) [1 0; 0 1]

Hence,

E1 = [1 0; −2 1], E2 = [1 −1; 0 1], E3 = [1 0; 0 1/2]

and E3E2E1A = I.

(b) Since multiplying on the left is the same as applying the corresponding elementary row operations, we get

x = [1 0; 0 1/2][1 −1; 0 1][3; −1] = [1 0; 0 1/2][4; −1] = [4; −1/2]

(c) We have

[1 2 3; 2 6 5] ∼ (R2 − 2R1) [1 2 3; 0 2 −1] ∼ (R1 − R2) [1 0 4; 0 2 −1] ∼ ((1/2)R2) [1 0 4; 0 1 −1/2]
C3 (a) By Theorem 3.6.3, there exists a sequence of elementary matrices E1, . . . , Ek such that Ek · · · E1A = R. Let E = Ek · · · E1. Then, since elementary matrices are invertible by Theorem 3.6.1 and a product of invertible matrices is invertible by Theorem 3.5.3(2), we have that E is invertible.

(b) E is only unique if A is invertible, in which case E = RA⁻¹. For example, let A = [1 1; 1 1], which has RREF R = [1 1; 0 0]. The matrix E = [1 0; −1 1] satisfies EA = R (we got E by only using R2 − R1). However, the matrix E = [0 1; 1 −1] also gives EA = R (we got E by first using R1 ↔ R2 and then R2 − R1).
C4 Let B = AE where E is an elementary matrix. Taking transposes of both sides gives Bᵀ = EᵀAᵀ. We observe that the transpose of an elementary matrix is still an elementary matrix. Thus, Bᵀ is the matrix obtained from Aᵀ by performing the elementary row operation associated with Eᵀ. But the rows of Aᵀ are just the columns of A. Thus, multiplying on the right by an elementary matrix performs an elementary column operation on A.
C5 We have Ek · · · E1A = B. Since elementary matrices are invertible by Theorem 3.6.1 and a product of invertible matrices is invertible by Theorem 3.5.3(2), we have that Ek · · · E1 is invertible. Multiplying on the left by (Ek · · · E1)⁻¹ gives

(Ek · · · E1)⁻¹Ek · · · E1A = (Ek · · · E1)⁻¹B
A = E1⁻¹E2⁻¹ · · · Ek⁻¹B    by Theorem 3.5.3(2)

Since each Ei⁻¹ is an elementary matrix, we have, by definition, that B is row equivalent to A.
C6 We have Ek · · · E1A = B and Fℓ · · · F1C = B. Thus,

Fℓ · · · F1C = Ek · · · E1A

Using the same steps as in (a), we can get

C = F1⁻¹ · · · Fℓ⁻¹Ek · · · E1A

Thus, A is row equivalent to C.
Section 3.7
A Practice Problems
A1 (a) An elementary matrix is lower triangular if and only if it corresponds to an elementary row operation of the form Ri + aRj where i > j or of the form aRi. In the former case, the inverse elementary matrix corresponds to Ri − aRj and hence will be lower triangular. In the latter case, the inverse elementary matrix corresponds to (1/a)Ri and hence is also lower triangular.
(b) Assume A, B ∈ Mn×n(R) are both lower triangular matrices. Then, by definition, we have aij = 0 and bij = 0 whenever i < j. Hence, for any i < j we have

(AB)ij = ai1b1j + ai2b2j + · · · + aiibij + ai(i+1)b(i+1)j + · · · + ainbnj
       = ai1(0) + ai2(0) + · · · + aii(0) + 0·b(i+1)j + · · · + 0·bnj
       = 0

So, AB is lower triangular.
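The closure property proved in (b) can be spot-checked numerically (a sketch; the random matrices are illustrative only):

```python
import numpy as np

# Two random lower triangular 4x4 matrices.
rng = np.random.default_rng(0)
A = np.tril(rng.integers(-5, 6, (4, 4)))
B = np.tril(rng.integers(-5, 6, (4, 4)))

C = A @ B
# Every entry strictly above the diagonal of AB is zero.
print(np.allclose(np.triu(C, k=1), 0))   # True
```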
A2 Row reducing we get

[−2 −1 5; −4 0 −2; 2 1 3] ∼ (R2 − 2R1, R3 + R1) [−2 −1 5; 0 2 −12; 0 0 8] = U

We get that L21 = 2 and L31 = −1. Since we did not need to perform an elementary row operation to put a 0 in the (3,2)-entry of U, we get that L32 = 0. Hence,

L = [1 0 0; 2 1 0; −1 0 1]

We now have the matrix in row echelon form, so no further row operations are required. Thus, we have

A = LU = [1 0 0; 2 1 0; −1 0 1][−2 −1 5; 0 2 −12; 0 0 8]
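The hand procedure used in A2–A7 can be sketched as code. This is a minimal LU factorization without row interchanges (it assumes every pivot is nonzero), checked against the matrix from A2:

```python
import numpy as np

def lu_no_pivot(A):
    """LU factorization without row interchanges, mirroring the hand
    computation: U is the row echelon form and L stores the multiplier
    used to create each zero below the diagonal."""
    n = A.shape[0]
    L = np.eye(n)
    U = A.astype(float).copy()
    for j in range(n - 1):
        for i in range(j + 1, n):
            m = U[i, j] / U[j, j]     # multiplier for Ri - m*Rj
            L[i, j] = m
            U[i] = U[i] - m * U[j]
    return L, U

# The matrix from A2.
A = np.array([[-2.0, -1.0, 5.0],
              [-4.0, 0.0, -2.0],
              [2.0, 1.0, 3.0]])
L, U = lu_no_pivot(A)
print(L)                         # [[1 0 0], [2 1 0], [-1 0 1]]
print(U)                         # [[-2 -1 5], [0 2 -12], [0 0 8]]
print(np.allclose(L @ U, A))     # True
```

Note that a library routine such as SciPy's LU uses partial pivoting and returns a factorization of a row-permuted matrix, so its L and U need not match the ones computed by hand here.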
A3 Row reducing we get

[1 −2 4; 3 −2 4; 2 2 −5] ∼ (R2 − 3R1, R3 − 2R1) [1 −2 4; 0 4 −8; 0 6 −13] ∼ (R3 − (3/2)R2) [1 −2 4; 0 4 −8; 0 0 −1] = U

We get that L21 = 3, L31 = 2, and L32 = 3/2. Hence,

L = [1 0 0; 3 1 0; 2 3/2 1]

Therefore,

A = LU = [1 0 0; 3 1 0; 2 3/2 1][1 −2 4; 0 4 −8; 0 0 −1]
A4 Row reducing we get

[2 −4 5; 2 5 2; 2 −1 5] ∼ (R2 − R1, R3 − R1) [2 −4 5; 0 9 −3; 0 3 0] ∼ (R3 − (1/3)R2) [2 −4 5; 0 9 −3; 0 0 1] = U

We get that L21 = 1, L31 = 1, and L32 = 1/3. Hence,

L = [1 0 0; 1 1 0; 1 1/3 1]

Therefore,

A = LU = [1 0 0; 1 1 0; 1 1/3 1][2 −4 5; 0 9 −3; 0 0 1]

A5 Row reducing we get

[1 5 3 4; −2 −6 −1 3; 0 2 −1 −1; 0 0 0 0] ∼ (R2 + 2R1) [1 5 3 4; 0 4 5 11; 0 2 −1 −1; 0 0 0 0]
∼ (R3 − (1/2)R2) [1 5 3 4; 0 4 5 11; 0 0 −7/2 −13/2; 0 0 0 0] = U

Hence,

L = [1 0 0 0; −2 1 0 0; 0 1/2 1 0; 0 0 0 1]

Therefore,

A = LU = [1 0 0 0; −2 1 0 0; 0 1/2 1 0; 0 0 0 1][1 5 3 4; 0 4 5 11; 0 0 −7/2 −13/2; 0 0 0 0]
A6 Row reducing we get

[1 −2 1 1; 0 −3 −2 1; 3 −3 2 −1; 0 4 −3 0] ∼ (R3 − 3R1) [1 −2 1 1; 0 −3 −2 1; 0 3 −1 −4; 0 4 −3 0]
∼ (R3 + R2, R4 + (4/3)R2) [1 −2 1 1; 0 −3 −2 1; 0 0 −3 −3; 0 0 −17/3 4/3]
∼ (R4 − (17/9)R3) [1 −2 1 1; 0 −3 −2 1; 0 0 −3 −3; 0 0 0 7] = U

Hence,

L = [1 0 0 0; 0 1 0 0; 3 −1 1 0; 0 −4/3 17/9 1]

Therefore,

A = LU = [1 0 0 0; 0 1 0 0; 3 −1 1 0; 0 −4/3 17/9 1][1 −2 1 1; 0 −3 −2 1; 0 0 −3 −3; 0 0 0 7]

A7 Row reducing we get

[−2 −1 2 0; 4 3 −2 2; 3 3 4 3; 2 −1 2 −4] ∼ (R2 + 2R1, R3 + (3/2)R1, R4 + R1) [−2 −1 2 0; 0 1 2 2; 0 3/2 7 3; 0 −2 4 −4]
∼ (R3 − (3/2)R2, R4 + 2R2) [−2 −1 2 0; 0 1 2 2; 0 0 4 0; 0 0 8 0]
∼ (R4 − 2R3) [−2 −1 2 0; 0 1 2 2; 0 0 4 0; 0 0 0 0] = U

Hence,

L = [1 0 0 0; −2 1 0 0; −3/2 3/2 1 0; −1 −2 2 1]

Therefore,

A = LU = [1 0 0 0; −2 1 0 0; −3/2 3/2 1 0; −1 −2 2 1][−2 −1 2 0; 0 1 2 2; 0 0 4 0; 0 0 0 0]
A8 Row reducing A we get

[3 −2; −1 5] ∼ (R2 + (1/3)R1) [3 −2; 0 13/3] = U

We get L = [1 0; −1/3 1]. Hence,

A = LU = [1 0; −1/3 1][3 −2; 0 13/3]

To solve Ax1 = b1 we let y = Ux1 and solve Ly = b1. This gives

y1 = 3
−(1/3)y1 + y2 = −1

We solve this by forward substitution. We have y1 = 3 and hence y2 = −1 + (1/3)(3) = 0. Then we solve Ux1 = [3; 0] by back substitution. We have

3x1 − 2x2 = 3
(13/3)x2 = 0

which gives x2 = 0 and x1 = (1/3)(3 + 2(0)) = 1. Hence the solution is x1 = [1; 0].

To solve Ax2 = b2 we let y = Ux2 and solve Ly = b2. This gives

y1 = 4
−(1/3)y1 + y2 = 9

We solve this by forward substitution. We have y1 = 4 and hence y2 = 9 + (1/3)(4) = 31/3. Then we solve Ux2 = [4; 31/3] by back substitution. We have

3x1 − 2x2 = 4
(13/3)x2 = 31/3

which gives x2 = 31/13 and x1 = (1/3)(4 + 2(31/13)) = 38/13. Hence the solution is x2 = [38/13; 31/13].
A9 Row reducing A we get

[1 0 3; −2 1 −3; −1 4 5] ∼ (R2 + 2R1, R3 + R1) [1 0 3; 0 1 3; 0 4 8] ∼ (R3 − 4R2) [1 0 3; 0 1 3; 0 0 −4] = U

Hence,

L = [1 0 0; −2 1 0; −1 4 1]

Therefore,

A = LU = [1 0 0; −2 1 0; −1 4 1][1 0 3; 0 1 3; 0 0 −4]

To solve Ax1 = b1 we let y = Ux1 and solve Ly = b1. This gives

y1 = 3
−2y1 + y2 = −4
−y1 + 4y2 + y3 = −3

We solve this by forward substitution. We have y1 = 3, y2 = −4 + 2(3) = 2, and y3 = −3 + 3 − 4(2) = −8. Then we solve Ux1 = [3; 2; −8] by back substitution. We have

x1 + 3x3 = 3
x2 + 3x3 = 2
−4x3 = −8

which gives x3 = 2, x2 = 2 − 3(2) = −4, and x1 = 3 − 3(2) = −3. Hence, the solution is x1 = [−3; −4; 2].

To solve Ax2 = b2 we let y = Ux2 and solve Ly = b2. This gives

y1 = 2
−2y1 + y2 = −5
−y1 + 4y2 + y3 = −2

We solve this by forward substitution. We have y1 = 2, y2 = −5 + 2(2) = −1, and y3 = −2 + 2 − 4(−1) = 4. Then we solve Ux2 = [2; −1; 4] by back substitution. We have

x1 + 3x3 = 2
x2 + 3x3 = −1
−4x3 = 4

which gives x3 = −1, x2 = −1 − 3(−1) = 2, and x1 = 2 − 3(−1) = 5. Hence the solution is x2 = [5; 2; −1].
A10 Row reducing A we get

[1 0 −2; −1 −4 4; 3 −4 −1] ∼ (R2 + R1, R3 − 3R1) [1 0 −2; 0 −4 2; 0 −4 5] ∼ (R3 − R2) [1 0 −2; 0 −4 2; 0 0 3] = U

Hence,

L = [1 0 0; −1 1 0; 3 1 1]

Therefore,

A = LU = [1 0 0; −1 1 0; 3 1 1][1 0 −2; 0 −4 2; 0 0 3]

To solve Ax1 = b1 we let y = Ux1 and solve Ly = b1. This gives

y1 = −1
−y1 + y2 = −7
3y1 + y2 + y3 = −5

We solve this by forward substitution. We have y1 = −1, y2 = −7 + (−1) = −8, and y3 = −5 − 3(−1) − (−8) = 6. Then we solve Ux1 = [−1; −8; 6] by back substitution. We have

x1 − 2x3 = −1
−4x2 + 2x3 = −8
3x3 = 6

which gives x3 = 2 and −4x2 = −8 − 2(2) = −12, so x2 = 3. Then x1 = −1 + 2(2) = 3. Hence the solution is x1 = [3; 3; 2].

To solve Ax2 = b2 we let y = Ux2 and solve Ly = b2. This gives

y1 = 2
−y1 + y2 = 0
3y1 + y2 + y3 = −1

We solve this by forward substitution. We have y1 = 2, y2 = 2, and y3 = −1 − 3(2) − 2 = −9. Then we solve Ux2 = [2; 2; −9] by back substitution. We have

x1 − 2x3 = 2
−4x2 + 2x3 = 2
3x3 = −9

which gives x3 = −3 and −4x2 = 2 − 2(−3) = 8, so x2 = −2. Then x1 = 2 + 2(−3) = −4. Hence the solution is x2 = [−4; −2; −3].
A11 Row reducing A we get

[1 0 1; −3 2 −1; −3 4 2] ∼ (R2 + 3R1, R3 + 3R1) [1 0 1; 0 2 2; 0 4 5] ∼ (R3 − 2R2) [1 0 1; 0 2 2; 0 0 1] = U

Hence,

L = [1 0 0; −3 1 0; −3 2 1]

Therefore,

A = LU = [1 0 0; −3 1 0; −3 2 1][1 0 1; 0 2 2; 0 0 1]

To solve Ax1 = b1 we let y = Ux1 and solve Ly = b1. This gives

y1 = 3
−3y1 + y2 = −5
−3y1 + 2y2 + y3 = −1

We solve this by forward substitution. We have y1 = 3, y2 = −5 + 3(3) = 4, and y3 = −1 + 3(3) − 2(4) = 0. Then we solve Ux1 = [3; 4; 0] by back substitution. We have

x1 + x3 = 3
2x2 + 2x3 = 4
x3 = 0

which gives x3 = 0 and 2x2 = 4 − 2(0) = 4, so x2 = 2. Then x1 = 3 − 0 = 3. Hence the solution is x1 = [3; 2; 0].

To solve Ax2 = b2 we let y = Ux2 and solve Ly = b2. This gives

y1 = −4
−3y1 + y2 = 4
−3y1 + 2y2 + y3 = −5

We solve this by forward substitution. We have y1 = −4, y2 = 4 + 3(−4) = −8, and y3 = −5 + 3(−4) − 2(−8) = −1. Then we solve Ux2 = [−4; −8; −1] by back substitution. We have

x1 + x3 = −4
2x2 + 2x3 = −8
x3 = −1

which gives x3 = −1 and 2x2 = −8 − 2(−1) = −6, so x2 = −3. Then x1 = −4 − (−1) = −3. Hence the solution is x2 = [−3; −3; −1].
A12 Row reducing A we get

[−1 2 −3 0; 0 −1 3 1; 3 −8 3 2; 1 −2 3 1] ∼ (R3 + 3R1, R4 + R1) [−1 2 −3 0; 0 −1 3 1; 0 −2 −6 2; 0 0 0 1]
∼ (R3 − 2R2) [−1 2 −3 0; 0 −1 3 1; 0 0 −12 0; 0 0 0 1] = U

Hence,

L = [1 0 0 0; 0 1 0 0; −3 2 1 0; −1 0 0 1]

Therefore,

A = LU = [1 0 0 0; 0 1 0 0; −3 2 1 0; −1 0 0 1][−1 2 −3 0; 0 −1 3 1; 0 0 −12 0; 0 0 0 1]

To solve Ax1 = b1 we let y = Ux1 and solve Ly = b1. This gives

y1 = −6
y2 = 7
−3y1 + 2y2 + y3 = −4
−y1 + y4 = 5

We solve this by forward substitution. We have y1 = −6, y2 = 7, y3 = −4 + 3y1 − 2y2 = −36, and y4 = 5 + y1 = −1. Then we solve Ux1 = [−6; 7; −36; −1] by back substitution. We have

−x1 + 2x2 − 3x3 = −6
−x2 + 3x3 + x4 = 7
−12x3 = −36
x4 = −1

which gives x4 = −1, x3 = 3, x2 = −(7 − 3x3 − x4) = 1, and x1 = −(−6 − 2x2 + 3x3) = −1. Hence the solution is x1 = [−1; 1; 3; −1].

To solve Ax2 = b2 we let y = Ux2 and solve Ly = b2. This gives

y1 = 5
y2 = −3
−3y1 + 2y2 + y3 = 3
−y1 + y4 = −5

We solve this by forward substitution. We have y1 = 5, y2 = −3, y3 = 3 + 3y1 − 2y2 = 24, and y4 = −5 + y1 = 0. Then we solve Ux2 = [5; −3; 24; 0] by back substitution. We have

−x1 + 2x2 − 3x3 = 5
−x2 + 3x3 + x4 = −3
−12x3 = 24
x4 = 0

which gives x4 = 0, x3 = −2, x2 = −(−3 − 3x3 − x4) = −3, and x1 = −(5 − 2x2 + 3x3) = −5. Hence the solution is x2 = [−5; −3; −2; 0].
B Homework Problems
"!
"
1 0 2
3
B1
B2
2 1 0 −7



 1 0 0 −1 0 −1



B3 −3 1 0  0 3 −2
B4



2 2 1
0 0
6



1
 1 0 0 2 0



B5  2 1 0 0 1 −1
B6



−4 2 1 0 0
3



1 
 1 0 0 5 10



3/5
B7 2/5 1 0 0 1
B8



−1 6 1 0 0 −8/5



 1 0 0 −1 3 6

 

B9 −3 1 0  0 10 18
B10



−5 1 1
0 0 9



1
1
 1 0 0 0 1 −2

−2 1 0 0 0
1
2
9






B11 
B12
 

0 −23 −114
 6 11 1 0 0

3 10 1 1 0
0
0
24
!
"!
"
! "
! "
1 0 2
3
−6
13
B13 LU =
; "x 1 =
, "x 2 =
1/2 1 0 −1/2
5
−7



 
 
3
 1 0 0 1 2
−2
 3



 
 
7; "x 1 =  1, "x 2 = −1
B14 LU = −2 1 0 0 5



 
 
3 0 1 0 0 −8
−3
−1
!



3
 1 0 0 1 2



7
−2 1 0 0 5

3 0 1 0 0 −8



0 0 2
8 5
 1



1 0 0 −1 1
 1


−1 −2 1 0
0 12



0
0 1 2
3
1



1
0 0 2 −2
1


2 −3/2 1 0 0
0



1 0 0 3 1 2



1 1 0 0 0 0
1 0 1 0 0 0



2
7 
1 0 0 1

 

−2 
1 1 0 0 −8

1 3/4 1 0
0 −15/2



0
0
0 2 3 −1
3 
 1



 2
1
0
0 0 2
4 −5 



 −1
2
1
0 0 0 −5 12 



1/2 −1/4 −1/2 1 0 0
0 9/4
B15 LU
B16 LU
B17 LU
B18 LU

1

= 2

1

 1

=  3

−4

 1

= −3

3

 1
−2
= 
 4
3

0 0 1 1

1 0 0 1

0 1 0 0

0 0 1 0

1 0 0 3

2 1 0 0

0 0 −3

1 0  0

−3 1
0

0 0 0 1

1 0 0 0

0 1 0 0

4 0 1 0

 


3
 4
−12

 


−1; "x 1 = −5, "x 2 =  4

 


1
2
4

 
 


 
4
2
−4
 2 
−4

 
 


 
3; "x 1 = 3 + t −1 , t ∈ R, "x 2 = −2/3 + t −1 , t ∈ R

 
 


 
0
0
1
0
1

 
 
2 −5
 1
−2

 
 
1 −7; "x 1 =  1, "x 2 = −3

 
 
0 −1
−2
2

 


−3
1
1
 2
 1/4




 1
 0 
1
0
0
.
; "x 1 =  , "x 2 = 
0 −3 −2
−1
−1/2

0
0 −4
3
1/4
Chapter 3 Quiz

E1 Since the number of columns of A equals the number of rows of B, the product AB is defined and

AB = [2 −5 −3; −3 4 −7][2 −1 4; 3 0 2; 1 −1 5] = [−14 1 −17; −1 10 −39]

E2 The number of columns of B does not equal the number of rows of A, so BA is not defined.

E3 A has 3 columns and 2 rows, so Aᵀ has 2 columns and 3 rows. Thus, the number of columns of B equals the number of rows of Aᵀ, so BAᵀ is defined. We have

BAᵀ = [2 −1 4; 3 0 2; 1 −1 5][2 −3; −5 4; −3 −7] = [−3 −38; 0 −23; −8 −42]

E4 (a) We have

fA(u) = [−3 0 4; 2 −4 −1][1; 1; −2] = [−11; 0]
fA(v) = [−3 0 4; 2 −4 −1][4; −2; −1] = [−16; 17]

(b) We have

A[4 1; −2 1; −1 −2] = [Av Au] = [−16 −11; 17 0]
E5 (a) We have

    [R] = [R_{π/3}] = [ cos π/3  −sin π/3  0 ]   [ 1/2  −√3/2  0 ]
                      [ sin π/3   cos π/3  0 ] = [ √3/2   1/2  0 ]
                      [    0         0     1 ]   [  0      0   1 ]
(b) A normal to the plane −x1 − x2 + 2x3 = 0 is n = [−1; −1; 2]. The columns of [M] are found by
calculating the images of the standard basis vectors under the reflection. Since ‖n‖² = 6, we get

    refl_n(e1) = e1 − 2 (n · e1 / ‖n‖²) n = [1; 0; 0] − 2(−1/6)[−1; −1; 2] = [ 2/3; −1/3; 2/3 ]
    refl_n(e2) = e2 − 2 (n · e2 / ‖n‖²) n = [0; 1; 0] − 2(−1/6)[−1; −1; 2] = [ −1/3; 2/3; 2/3 ]
    refl_n(e3) = e3 − 2 (n · e3 / ‖n‖²) n = [0; 0; 1] − 2(2/6)[−1; −1; 2] = [ 2/3; 2/3; −1/3 ]

Thus,

    [M] = [refl_{(−1,−1,2)}] = (1/3) [  2 −1  2 ]
                                     [ −1  2  2 ]
                                     [  2  2 −1 ]
(c) A composition of mappings is represented by the product of the matrices. Hence,

    [R ∘ M] = [R][M] = [ 1/2  −√3/2  0 ] (1/3) [  2 −1  2 ]
                       [ √3/2   1/2  0 ]       [ −1  2  2 ]
                       [  0      0   1 ]       [  2  2 −1 ]

            = (1/6) [ 2 + √3   −1 − 2√3   2 − 2√3 ]
                    [ 2√3 − 1   −√3 + 2   2√3 + 2 ]
                    [    4         4        −2    ]
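The reflection matrix in E5(b) can be checked against the formula M = I − 2nnᵀ/‖n‖²; a small NumPy sketch (not part of the printed solution):

```python
import numpy as np

# Normal to the plane -x1 - x2 + 2x3 = 0, as in E5(b)
n = np.array([-1.0, -1.0, 2.0])

# Reflection across the plane through the origin with normal n
M = np.eye(3) - 2 * np.outer(n, n) / (n @ n)
print(np.round(3 * M))  # 3M = [[ 2 -1  2], [-1  2  2], [ 2  2 -1]]
```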


E6 The reduced row echelon form of the augmented matrix [A | b] is

    [ 1 0  2 1 0 | 5 ]
    [ 0 1 −1 0 0 | 6 ]
    [ 0 0  0 0 1 | 7 ]

We see that x3 and x4 are free variables, so let x3 = s ∈ R and x4 = t ∈ R. Then, the general solution is

    x = [5; 6; 0; 0; 7] + s[−2; 1; 1; 0; 0] + t[−1; 0; 0; 1; 0],   s, t ∈ R
Replacing b with 0, we see that the solution space of Ax = 0 is

    x = s[−2; 1; 1; 0; 0] + t[−1; 0; 0; 1; 0],   s, t ∈ R

In particular, the solution set is obtained from the solution space of Ax = 0 by translating by the
vector [5; 6; 0; 0; 7].
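The structure of the E6 solution (particular solution plus span of the homogeneous solutions) can be verified directly against the reduced system; a minimal NumPy sketch, with the matrices transcribed from the RREF given in E6:

```python
import numpy as np

# RREF system from E6: R x = b, with x3 = s and x4 = t free
R = np.array([[1, 0, 2, 1, 0],
              [0, 1, -1, 0, 0],
              [0, 0, 0, 0, 1]])
b = np.array([5, 6, 7])

p = np.array([5, 6, 0, 0, 7])    # particular solution
v1 = np.array([-2, 1, 1, 0, 0])  # direction attached to s
v2 = np.array([-1, 0, 0, 1, 0])  # direction attached to t

# Every choice of s and t should satisfy the system
for s, t in [(0, 0), (1, 0), (0, 1), (3, -2)]:
    x = p + s * v1 + t * v2
    assert np.array_equal(R @ x, b)
```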
E7 (a) By definition, u and v are in the column space if and only if the systems Bx = u and Bx = v are
consistent. Since the coefficient matrix is the same for the two systems, we can row reduce the
doubly augmented matrix [B | u | v]. We get

    [  1  2  0 |  4 | −5 ]   [ 1 0 0 | 2 | −1 ]
    [ −1 −1 −1 | −3 |  6 ] ∼ [ 0 1 0 | 1 | −2 ]
    [  1  3  0 |  5 | −7 ]   [ 0 0 1 | 0 | −3 ]
    [  0  2 −1 |  3 | −1 ]   [ 0 0 0 | 1 |  0 ]

We see that the system [B | u] is inconsistent, so u is not in the column space of B. However, the
system [B | v] is consistent, so v is in the column space of B.

(b) From the reduced row echelon form of [B | v] we get that x = [−1; −2; −3] satisfies Bx = v.
E8 We first row reduce A to reduced row echelon form. We get

    [ 1 0  1 1  1 ]   [ 1 0  1 0 −1 ]
    [ 2 1  1 2  5 ] ∼ [ 0 1 −1 0  3 ]
    [ 0 2 −2 1  8 ]   [ 0 0  0 1  2 ]
    [ 3 3  0 4 14 ]   [ 0 0  0 0  0 ]

A basis for the row space of A is the set of non-zero rows from the reduced row echelon form. Hence,
a basis for Row(A) is { [1; 0; 1; 0; −1], [0; 1; −1; 0; 3], [0; 0; 0; 1; 2] }. The columns from A which
correspond to columns with leading ones in the reduced row echelon form of A form a basis for the
column space of A. Hence, a basis for the column space of A is { [1; 2; 0; 3], [0; 1; 2; 3], [1; 2; 1; 4] }.
To find a basis for the nullspace we find the
general solution to Ax = 0. Let x3 = s ∈ R and x5 = t ∈ R. Then the solution space is

    x = [ −s + t; s − 3t; s; −2t; t ] = s[ −1; 1; 1; 0; 0 ] + t[ 1; −3; 0; −2; 1 ]

Therefore, a basis for Null(A) is { [−1; 1; 1; 0; 0], [1; −3; 0; −2; 1] }. To find a basis for Null(Aᵀ) we
row reduce Aᵀ to get

    [ 1 2  0  3 ]   [ 1 0 0 1 ]
    [ 0 1  2  3 ]   [ 0 1 0 1 ]
    [ 1 1 −2  0 ] ∼ [ 0 0 1 1 ]
    [ 1 2  1  4 ]   [ 0 0 0 0 ]
    [ 1 5  8 14 ]   [ 0 0 0 0 ]
Solving Aᵀx = 0, we find that x4 is a free variable, so we let x4 = t ∈ R and get

    x = [ −t; −t; −t; t ] = t[ −1; −1; −1; 1 ]

Thus, a basis for Null(Aᵀ) is { [−1; −1; −1; 1] }.
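The four subspace bases in E8 can be confirmed with a computer algebra system; a short SymPy sketch using the matrix A of E8 (not part of the printed solution):

```python
import sympy as sp

# The matrix A from E8
A = sp.Matrix([[1, 0, 1, 1, 1],
               [2, 1, 1, 2, 5],
               [0, 2, -2, 1, 8],
               [3, 3, 0, 4, 14]])

R, pivots = A.rref()
print(pivots)           # (0, 1, 3): leading ones in columns 1, 2, and 4
print(A.nullspace())    # two vectors, matching the basis for Null(A)
print(A.T.nullspace())  # one vector, matching the basis for Null(A^T)
```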
E9 To find the inverse we row reduce [A | I]. We get

    [ 1 0 0 −1 | 1 0 0 0 ]   [ 1 0 0 0 |  2/3 0  0   1/3 ]
    [ 0 0 1  0 | 0 1 0 0 ] ∼ [ 0 1 0 0 |  1/6 0 1/2 −1/6 ]
    [ 0 2 0  1 | 0 0 1 0 ]   [ 0 0 1 0 |   0  1  0    0  ]
    [ 1 0 0  2 | 0 0 0 1 ]   [ 0 0 0 1 | −1/3 0  0   1/3 ]

Hence,

    A⁻¹ = [  2/3 0  0   1/3 ]
          [  1/6 0 1/2 −1/6 ]
          [   0  1  0    0  ]
          [ −1/3 0  0   1/3 ]
E10 To find the inverse we row reduce [A | I]. We get

    [ 1 0 p | 1 0 0 ]   [ 1 0   p  |  1  0 0 ]
    [ 1 1 0 | 0 1 0 ] ∼ [ 0 1  −p  | −1  1 0 ]
    [ 2 1 1 | 0 0 1 ]   [ 0 0 1−p  | −1 −1 1 ]
A matrix is invertible if and only if its reduced row echelon form is I. We can get a leading one in
the third column only when 1 − p ≠ 0. Hence, the above matrix is invertible only for p ≠ 1. Assume
p ≠ 1; to get the inverse, we must continue row reducing:

    [ 1 0   p  |  1  0 0 ]   [ 1 0 0 |  1/(1−p)      p/(1−p)  −p/(1−p) ]
    [ 0 1  −p  | −1  1 0 ] ∼ [ 0 1 0 | −1/(1−p)  (1−2p)/(1−p)  p/(1−p) ]
    [ 0 0 1−p  | −1 −1 1 ]   [ 0 0 1 | −1/(1−p)     −1/(1−p)   1/(1−p) ]

Thus, when p ≠ 1 we get that the inverse is

    A⁻¹ = 1/(1 − p) [  1    p   −p ]
                    [ −1  1−2p   p ]
                    [ −1   −1    1 ]
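The symbolic inverse in E10 can be reproduced with SymPy; a minimal sketch (not part of the printed solution):

```python
import sympy as sp

p = sp.symbols('p')
A = sp.Matrix([[1, 0, p],
               [1, 1, 0],
               [2, 1, 1]])

print(sp.factor(A.det()))   # 1 - p, so A is invertible exactly when p != 1
Ainv = sp.simplify(A.inv())
print(Ainv)                 # equals 1/(1-p) * [[1, p, -p], [-1, 1-2p, p], [-1, -1, 1]]
```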
E11 By definition, the range of L is a subset of Rᵐ. We have L(0) = 0, so 0 ∈ Range(L). If x, y ∈
Range(L), then there exist u, v ∈ Rⁿ such that L(u) = x and L(v) = y. Hence,

    L(su + tv) = sL(u) + tL(v) = sx + ty

So, sx + ty ∈ Range(L). Thus, Range(L) is a subspace of Rᵐ.
E12 To prove the set is linearly independent, we use the definition. Consider
c1 L("v 1 ) + · · · + ck L("v k ) = "0
Since L is linear we get L(c1"v 1 + · · · + ck"v k ) = "0. Thus c1"v 1 + · · · + ck"v k ∈ Null(L). But, Null(L) = {"0},
so c1"v 1 + · · · + ck"v k = "0. This implies that c1 = · · · = ck = 0 since {"v 1 , . . . , "v k } is linearly independent.
Therefore, {L("v 1 ), . . . , L("v k )} is linearly independent.
E13 (a) To find a sequence of elementary matrices such that Ek · · · E1 A = I, we row reduce A to I keeping
track of the elementary row operations used. We have

    [ 1 0 −2 ]  (1/2)R2   [ 1 0  −2  ]  R1 + 2R3        [ 1 0 0 ]
    [ 0 2 −3 ]     ∼      [ 0 1 −3/2 ]  R2 + (3/2)R3  ∼ [ 0 1 0 ]
    [ 0 0  4 ]  (1/4)R3   [ 0 0   1  ]                  [ 0 0 1 ]

Thus, we have

    E1 = [ 1  0  0 ]  E2 = [ 1 0  0  ]  E3 = [ 1 0 2 ]  E4 = [ 1 0  0  ]
         [ 0 1/2 0 ]       [ 0 1  0  ]       [ 0 1 0 ]       [ 0 1 3/2 ]
         [ 0  0  1 ]       [ 0 0 1/4 ]       [ 0 0 1 ]       [ 0 0  1  ]
(b) We have

    A = E1⁻¹ E2⁻¹ E3⁻¹ E4⁻¹
      = [ 1 0 0 ] [ 1 0 0 ] [ 1 0 −2 ] [ 1 0   0  ]
        [ 0 2 0 ] [ 0 1 0 ] [ 0 1  0 ] [ 0 1 −3/2 ]
        [ 0 0 1 ] [ 0 0 4 ] [ 0 0  1 ] [ 0 0   1  ]
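The elementary-matrix factorization in E13 can be verified by multiplying everything back together; a small NumPy sketch (not part of the printed solution):

```python
import numpy as np

A = np.array([[1., 0., -2.],
              [0., 2., -3.],
              [0., 0., 4.]])

E1 = np.diag([1., 0.5, 1.])    # (1/2)R2
E2 = np.diag([1., 1., 0.25])   # (1/4)R3
E3 = np.array([[1., 0., 2.],   # R1 + 2R3
               [0., 1., 0.],
               [0., 0., 1.]])
E4 = np.array([[1., 0., 0.],   # R2 + (3/2)R3
               [0., 1., 1.5],
               [0., 0., 1.]])

# E4 E3 E2 E1 A = I, and A is the product of the inverses
assert np.allclose(E4 @ E3 @ E2 @ E1 @ A, np.eye(3))
```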
E14 Let K = I3. Then KM = MK for all 3 × 3 matrices M.
E15 For KM to be defined, K must have 3 columns. For MK to be defined, K must have 4 rows. Hence,
if there exists such a K, it must be a 4 × 3 matrix. However, this implies that KM has size 4 × 4 and
that MK has size 3 × 3. Thus, KM cannot equal MK for any matrix K.
E16 The range is a subspace of R³ and hence cannot be spanned by [1; 1], because this vector is not in R³.
E17 Since the range of L consists of all vectors that are multiples of [1; 1; 2], the columns of L must be
multiples of [1; 1; 2]. Since the nullspace of L consists of all vectors that are multiples of [2; 3], we
must have that the rows are multiples of [3, −2]. Hence, the matrix of L is any multiple of

    [ 1 −2/3 ]
    [ 1 −2/3 ]
    [ 2 −4/3 ]
E18 The rank of this linear mapping is 3 and the nullity is 1. But then rank(L) + nullity(L) = 3 + 1 = 4 ≠
3 = dim R³. Hence, this contradicts the Rank Theorem, so there can be no such mapping L.
E19 This contradicts Theorem 3.5.2, so there can be no such matrix.
Chapter 3 Further Problems
F1 (a) Consider
"0 = c1 A"v 1 + · · · + ck A"v k = A(c1"v 1 + · · · + ck"v k )
This implies that c1"v 1 + · · · + ck"v k ∈ Null(A) and hence we can write it as a linear combination
of the basis vectors for Null(A). That is, there exist dk+1 , . . . , dn ∈ R such that
c1"v 1 + · · · + ck"v k = dk+1"v k+1 + · · · + dn"v n
c1"v 1 + · · · + ck"v k − dk+1"v k+1 − · · · − dn"v n = "0
Thus, c1 = · · · = ck = dk+1 = · · · = dn = 0 since {"v 1 , . . . , "v n } is linearly independent. Therefore,
{A"v 1 , . . . , A"v k } is linearly independent.
Let "b ∈ Col(A). Then "b = A"x for some "x ∈ Rn . Since "x ∈ Rn , there exist s1 , . . . , sn ∈ R such that
"b = A(s1"v 1 + · · · + sk"v k + sk+1"v k+1 + · · · + sn"v n )
= s1 A"v 1 + · · · + sk A"v k + sk+1 A"v k+1 + · · · + sn A"v n
But, "v i ∈ Null(A) for k + 1 ≤ i ≤ n, so A"v i = "0. Hence, we have
"b = s1 A"v 1 + · · · + sk A"v k + "0 + · · · + "0 = s1 A"v 1 + · · · + sk A"v k
Thus, Span{A"v 1 , . . . , A"v k } = Col(A). Therefore, we have shown that {A"v 1 , . . . , A"v k } is a basis for
Col(A).
(b) We have shown in part (a) that if dim Null(A) = n − k, then k = dim Col(A) = rank(A). Hence,
rank A + dim Null(A) = k + (n − k) = n
as required.
F2 Since Span{v1, . . . , vk} ⊆ Span{u1, . . . , uk} there exist aij such that

    vi = ai1 u1 + · · · + aik uk,   1 ≤ i ≤ k

Let ai = [ai1; . . . ; aik], 1 ≤ i ≤ k, and let C = [a1 · · · ak]. Observe that

    B ai = ai1 u1 + · · · + aik uk = vi

Thus, BC = B[a1 · · · ak] = [Ba1 · · · Bak] = [v1 · · · vk] = A.
F3 Certainly A(pI + qA) = pA + qA² = (pI + qA)A, so matrices of this form do commute with A. Suppose
that

    [ a b ] [ 3 2 ]   [ 3 2 ] [ a b ]
    [ c d ] [ 0 1 ] = [ 0 1 ] [ c d ]

so that

    [ 3a  2a + b ]   [ 3a + 2c  3b + 2d ]
    [ 3c  2c + d ] = [    c        d    ]

By comparing corresponding entries, we see immediately that for this equation to be satisfied, it is
necessary and sufficient that c = 0 and a = b + d. Note that b and d may be chosen arbitrarily. Thus
the general form of a matrix that commutes with A is

    [ b + d  b ]
    [   0    d ]

We want to write this in the form

    pI + qA = [ p + 3q   2q   ]
              [   0     p + q ]

We equate corresponding entries and solve. Thus, let b = 2q, d = p + q, and we see that the general
matrix that commutes with A is

    [ b + d  b ]   [ 2q + p + q   2q  ]
    [   0    d ] = [     0      p + q ] = pI + qA

as claimed.
F4 Let P, Q ∈ C(A), t ∈ R, so that PA = AP and QA = AQ. Then
(P + Q)A = PA + QA = AP + AQ = A(P + Q)
So, C(A) is closed under addition. Also,
(tP)A = t(PA) = t(AP) = A(tP)
Hence, C(A) is closed under scalar multiplication. Finally,
(PQ)A = P(QA) = P(AQ) = (PA)Q = (AP)Q = A(PQ)
Therefore, PQ ∈ C(A), and C(A) is closed under matrix multiplication.

F5 Let

    A = [ 0 a12 a13 ]
        [ 0  0  a23 ]
        [ 0  0   0  ]

Then

    A² = [ 0 0 a12·a23 ]      A³ = [ 0 0 0 ]
         [ 0 0    0    ]           [ 0 0 0 ]
         [ 0 0    0    ]           [ 0 0 0 ]

We claim that any n × n matrix such that all entries on and below the main diagonal are zero is
nilpotent. The general condition describing this kind of matrix is

    aij = 0   whenever j < i + 1

We now prove that such matrices are nilpotent. We have

    (A²)ij = Σ_{k=1}^{n} aik akj

Since aik = 0 whenever k < i + 1, this sum can be rewritten

    (A²)ij = Σ_{k=i+1}^{n} aik akj

But akj = 0 whenever j < k + 1, and k + 1 ≥ i + 2 for every k in this sum, so (A²)ij = 0 whenever
j < i + 2. Using induction, we can show that (Aᵐ)ij = 0 whenever j < i + m. Thus, (Aⁿ)ij = 0 whenever
j < i + n. Hence, (Aⁿ)ij = 0 for all i, j = 1, . . . , n. Thus, A is nilpotent.
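The claim in F5 is easy to test numerically: for any strictly upper triangular n × n matrix, each power pushes the nonzero band one diagonal higher, so the n-th power vanishes. A small NumPy sketch (not part of the printed solution):

```python
import numpy as np

# A random strictly upper triangular matrix: zeros on and below the diagonal
rng = np.random.default_rng(0)
n = 5
A = np.triu(rng.integers(1, 10, (n, n)), k=1)

# By the argument in F5, A^n = 0
P = np.linalg.matrix_power(A, n)
assert np.count_nonzero(P) == 0
```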
F6 (a) reflθ maps (1, 0) to (cos 2θ, sin 2θ). To see this, think of rotating (1, 0) to lie along the line making
angle θ with the x1-axis, then rotating a further angle θ so that the final image is as far from the
line as the original starting point.

reflθ maps (0, 1) to (cos(−π/2 + 2θ), sin(−π/2 + 2θ)). To see this, note that (0, 1) starts at angle π/2;
to obtain its image under the reflection in the line, rotate it backwards by twice the angle π/2 − θ.
By standard trigonometric identities,

    cos(−π/2 + 2θ) = sin 2θ,    sin(−π/2 + 2θ) = −cos 2θ

Hence,

    [reflθ] = [ cos 2θ   sin 2θ ]
              [ sin 2θ  −cos 2θ ]

(b) It follows that

    [reflα ∘ reflθ] = [ cos 2α   sin 2α ] [ cos 2θ   sin 2θ ]
                      [ sin 2α  −cos 2α ] [ sin 2θ  −cos 2θ ]

                    = [ cos 2α cos 2θ + sin 2α sin 2θ   cos 2α sin 2θ − sin 2α cos 2θ ]
                      [ sin 2α cos 2θ − cos 2α sin 2θ   sin 2α sin 2θ + cos 2α cos 2θ ]

                    = [ cos 2(α − θ)  −sin 2(α − θ) ]
                      [ sin 2(α − θ)   cos 2(α − θ) ]

Thus the composition of these reflections is a rotation through angle 2(α − θ).
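The identity in F6(b) can be checked for sample angles; a minimal NumPy sketch (not part of the printed solution):

```python
import numpy as np

def refl(theta):
    # Matrix of reflection in the line at angle theta, as derived in F6(a)
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return np.array([[c, s], [s, -c]])

def rot(phi):
    # Standard rotation matrix through angle phi
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, -s], [s, c]])

alpha, theta = 0.9, 0.25
# Composition of the two reflections is a rotation through 2(alpha - theta)
assert np.allclose(refl(alpha) @ refl(theta), rot(2 * (alpha - theta)))
```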
F7 (a) For every x ∈ R² we have ‖L(x)‖ = ‖x‖, so L(x) · L(x) = ‖x‖² for every x ∈ R², and ‖L(x + y)‖² =
‖x + y‖² for every x, y ∈ R². Hence,

    L(x + y) · L(x + y) = (x + y) · (x + y)
    (L(x) + L(y)) · (L(x) + L(y)) = x · x + 2x · y + y · y
    L(x) · L(x) + 2L(x) · L(y) + L(y) · L(y) = ‖x‖² + 2x · y + ‖y‖²
    ‖x‖² + 2L(x) · L(y) + ‖y‖² = ‖x‖² + 2x · y + ‖y‖²
    L(x) · L(y) = x · y

as required.

(b) The columns of [L] are L(e1) and L(e2). We know that

    L(e1) · L(e2) = e1 · e2 = 0

so the columns of [L] are orthogonal. Moreover, ‖L(ei)‖ = ‖ei‖ = 1 for i = 1, 2, so the columns
have length 1. We may always write a vector v = [v1; v2] of length 1 as [cos θ; sin θ] for some θ, since we
have 1 = ‖v‖² = v1² + v2². If we take this to be the first column of [L], then by orthogonality the
second column must be either [−sin θ; cos θ] or [sin θ; −cos θ]. In the first case, the matrix of the isometry L
is

    [L] = [ cos θ  −sin θ ]
          [ sin θ   cos θ ]

and the isometry L is a rotation through angle θ. In the second case,

    [L] = [ cos θ   sin θ ]
          [ sin θ  −cos θ ]

and L is a reflection in the line that makes angle θ/2 with the x1-axis, by Problem F6.
F8 (a) Add the two equations: (A + B)X + (A + B)Y = C + D. Since (A + B)⁻¹ exists,

    X + Y = (A + B)⁻¹(C + D)

Similarly, by subtraction, (A − B)X − (A − B)Y = C − D, so

    X − Y = (A − B)⁻¹(C − D)

Hence, by addition,

    X = (1/2)[(A + B)⁻¹(C + D) + (A − B)⁻¹(C − D)]
      = (1/2)[(A + B)⁻¹ + (A − B)⁻¹]C + (1/2)[(A + B)⁻¹ − (A − B)⁻¹]D

Similarly,

    Y = (1/2)[(A + B)⁻¹ − (A − B)⁻¹]C + (1/2)[(A + B)⁻¹ + (A − B)⁻¹]D

Thus we have shown that the required X and Y exist.
(b) Using block multiplication, we can write the original equations as

    [ A B ] [ X ]   [ C ]
    [ B A ] [ Y ] = [ D ]

We have seen that this system has a unique solution for arbitrary C and D. Consider only the first
column of the matrix [C; D] and the first column of the matrix [X; Y]. The system

    [ A B ] x = c
    [ B A ]

must have a unique solution, so the coefficient matrix is invertible. From the solution for X and
Y, we see that

    [ A B ]⁻¹         [ (A + B)⁻¹ + (A − B)⁻¹   (A + B)⁻¹ − (A − B)⁻¹ ]
    [ B A ]   = (1/2) [ (A + B)⁻¹ − (A − B)⁻¹   (A + B)⁻¹ + (A − B)⁻¹ ]
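The block-inverse formula in F8(b) can be tested on random matrices; a minimal NumPy sketch, assuming (as in the problem) that A + B and A − B are invertible, which holds for generic random matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

P = np.linalg.inv(A + B)   # (A + B)^{-1}
M = np.linalg.inv(A - B)   # (A - B)^{-1}

block = np.block([[A, B], [B, A]])
inv = 0.5 * np.block([[P + M, P - M],
                      [P - M, P + M]])

# The formula from F8(b) really is the inverse of the block matrix
assert np.allclose(block @ inv, np.eye(6))
```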
Chapter 4 Solutions
Section 4.1
A Practice Problems
A1 (2 − 2x + 3x2 + 4x3 ) + (−3 − 4x + x2 + 2x3 ) = −1 − 6x + 4x2 + 6x3
A2 (−3)(1 − 2x + 2x2 + x3 + 4x4 ) = −3 + 6x − 6x2 − 3x3 − 12x4
A3 (2 + 3x + x2 − 2x3 ) − 3(1 − 2x + 4x2 + 5x3 ) = −1 + 9x − 11x2 − 17x3
A4 (2 + 3x + 4x2 ) − (5 + x − 2x2 ) = −3 + 2x + 6x2
A5 −2(−5 + x + x2 ) + 3(−1 − x2 ) = 7 − 2x − 5x2
A6 2(2/3 − (1/3)x + 2x²) + (1/3)(3 − 2x + x²) = 7/3 − (4/3)x + (13/3)x²
A7 √2(1 + x + x²) + π(−1 + x²) = √2 − π + √2x + (√2 + π)x²
A7 2(1 + x + x2 ) + π(−1 + x2 ) = 2 − π + 2x + ( 2 + π)x2
A8 Clearly 0 ∈ Span B as 0 = 0(1 + x2 + x3 ) + 0(2 + x + x3 ) + 0(−1 + x + 2x2 + x3 ).
A9 2 + 4x + 3x2 + 4x3 is in the span of B if and only if there exists real numbers t1 , t2 , and t3 , such that
2 + 4x + 3x2 + 4x3 = t1 (1 + x2 + x3 ) + t2 (2 + x + x3 ) + t3 (−1 + x + 2x2 + x3 )
= (t1 + 2t2 − t3 ) + (t2 + t3 )x + (t1 + 2t3 )x2 + (t1 + t2 + t3 )x3
Since two polynomials are equal if and only if the coefficients of like powers of x are equal, this gives
the system of linear equations
t1 + 2t2 − t3 = 2
t2 + t3 = 4
t1 + 2t3 = 3
t1 + t2 + t3 = 4
Row reducing the corresponding augmented matrix gives

    [ 1 2 −1 | 2 ]   [ 1 2 −1 |  2  ]
    [ 0 1  1 | 4 ] ∼ [ 0 1  1 |  4  ]
    [ 1 0  2 | 3 ]   [ 0 0  5 |  9  ]
    [ 1 1  1 | 4 ]   [ 0 0  0 | 3/5 ]

The system is inconsistent, so 2 + 4x + 3x² + 4x³ is not in the span.
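The span test in A9 amounts to a least-squares-style consistency check on the coefficient vectors; a small NumPy sketch (not part of the printed solution):

```python
import numpy as np

# Columns are the coefficient vectors of 1 + x^2 + x^3, 2 + x + x^3,
# and -1 + x + 2x^2 + x^3, listed by powers 1, x, x^2, x^3
M = np.array([[1, 2, -1],
              [0, 1, 1],
              [1, 0, 2],
              [1, 1, 1]])
b = np.array([2, 4, 3, 4])  # coefficients of 2 + 4x + 3x^2 + 4x^3

t, res, rank, _ = np.linalg.lstsq(M, b, rcond=None)
# A nonzero residual means the system M t = b is inconsistent,
# so the polynomial is not in the span
print(res)
```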
A10 Repeating the steps in the solution of A9 we get the system
t1 + 2t2 − t3 = 0
t2 + t3 = −1
t1 + 2t3 = 2
t1 + t2 + t3 = 1
Row reducing the corresponding augmented matrix gives

    [ 1 2 −1 |  0 ]   [ 1 0 0 |  2 ]
    [ 0 1  1 | −1 ] ∼ [ 0 1 0 | −1 ]
    [ 1 0  2 |  2 ]   [ 0 0 1 |  0 ]
    [ 1 1  1 |  1 ]   [ 0 0 0 |  0 ]

The system is consistent with solution (t1, t2, t3) = (2, −1, 0). Thus, −x + 2x² + x³ is in the span and

    −x + 2x² + x³ = 2(1 + x² + x³) + (−1)(2 + x + x³) + 0(−1 + x + 2x² + x³)
A11 Repeating the steps in the solution of A9 we get the system
t1 + 2t2 − t3 = −4
t2 + t3 = −1
t1 + 2t3 = 3
t1 + t2 + t3 = 0
Row reducing the corresponding augmented matrix gives

    [ 1 2 −1 | −4 ]   [ 1 0 0 |  1 ]
    [ 0 1  1 | −1 ] ∼ [ 0 1 0 | −2 ]
    [ 1 0  2 |  3 ]   [ 0 0 1 |  1 ]
    [ 1 1  1 |  0 ]   [ 0 0 0 |  0 ]

The system is consistent with solution (t1, t2, t3) = (1, −2, 1). Thus, −4 − x + 3x² is in the span and

    −4 − x + 3x² = 1(1 + x² + x³) + (−2)(2 + x + x³) + 1(−1 + x + 2x² + x³)
A12 Repeating the steps in the solution of A9 we get the system
t1 + 2t2 − t3 = −1
t2 + t3 = 7
t1 + 2t3 = 5
t1 + t2 + t3 = 4
Row reducing the corresponding augmented matrix gives

    [ 1 2 −1 | −1 ]   [ 1 0 0 | −3 ]
    [ 0 1  1 |  7 ] ∼ [ 0 1 0 |  3 ]
    [ 1 0  2 |  5 ]   [ 0 0 1 |  4 ]
    [ 1 1  1 |  4 ]   [ 0 0 0 |  0 ]

The system is consistent with solution (t1, t2, t3) = (−3, 3, 4). Thus, −1 + 7x + 5x² + 4x³ is in the span and

    −1 + 7x + 5x² + 4x³ = (−3)(1 + x² + x³) + 3(2 + x + x³) + 4(−1 + x + 2x² + x³)
A13 Repeating the steps in the solution of A9 we get the system
t1 + 2t2 − t3 = 2
t2 + t3 = 1
t1 + 2t3 = 0
t1 + t2 + t3 = 5
Row reducing the corresponding augmented matrix gives

    [ 1 2 −1 | 2 ]   [ 1 2 −1 | 2 ]
    [ 0 1  1 | 1 ] ∼ [ 0 1  1 | 1 ]
    [ 1 0  2 | 0 ]   [ 0 0  5 | 0 ]
    [ 1 1  1 | 5 ]   [ 0 0  0 | 4 ]

The system is inconsistent, so 2 + x + 5x³ is not in the span.
A14 Consider
0 = t1 (1 + 2x + x2 − x3 ) + t2 (5x + x2 ) + t3 (1 − 3x + 2x2 + x3 )
= (t1 + t3 ) + (2t1 + 5t2 − 3t3 )x + (t1 + t2 + 2t3 )x2 + (−t1 + t3 )x3
Comparing coefficients of powers of x we get the homogeneous system of equations
t1 + t3 = 0
2t1 + 5t2 − 3t3 = 0
t1 + t2 + 2t3 = 0
−t1 + t3 = 0
Row reducing the coefficient matrix of the system gives

    [  1 0  1 ]   [ 1 0 0 ]
    [  2 5 −3 ] ∼ [ 0 1 0 ]
    [  1 1  2 ]   [ 0 0 1 ]
    [ −1 0  1 ]   [ 0 0 0 ]

Hence, the only solution is t1 = t2 = t3 = 0, so the set is linearly independent.
A15 Consider
0 = t1 (1 + x + x2 ) + t2 x + t3 (x2 + x3 ) + t4 (3 + 2x + 2x2 − x3 )
= (t1 + 3t4 ) + (t1 + t2 + 2t4 )x + (t1 + t3 + 2t4 )x2 + (t3 − t4 )x3
Comparing coefficients of powers of x we get the homogeneous system of equations
t1 + 3t4 = 0
t1 + t2 + 2t4 = 0
t1 + t3 + 2t4 = 0
t3 − t4 = 0
Row reducing the coefficient matrix of the system gives

    [ 1 0 0  3 ]   [ 1 0 0  3 ]
    [ 1 1 0  2 ] ∼ [ 0 1 0 −1 ]
    [ 1 0 1  2 ]   [ 0 0 1 −1 ]
    [ 0 0 1 −1 ]   [ 0 0 0  0 ]

t4 is a free variable, so the system has infinitely many solutions and the set is linearly dependent. The
general solution of the homogeneous system is (t1, t2, t3, t4) = t(−3, 1, 1, 1), so we have

    0 = (−3t)(1 + x + x²) + tx + t(x² + x³) + t(3 + 2x + 2x² − x³),   t ∈ R
A16 Consider
0 = t1 (3 + x + x2 ) + t2 (4 + x − x2 ) + t3 (1 + 2x + x2 + 2x3 ) + t4 (−1 + 5x2 + x3 )
= (3t1 + 4t2 + t3 − t4 ) + (t1 + t2 + 2t3 )x + (t1 − t2 + t3 + 5t4 )x2 + (2t3 + t4 )x3
Comparing coefficients of powers of x we get the homogeneous system of equations
3t1 + 4t2 + t3 − t4 = 0
t1 + t2 + 2t3 = 0
t1 − t2 + t3 + 5t4 = 0
2t3 + t4 = 0
Row reducing the coefficient matrix of the system gives

    [ 3  4 1 −1 ]   [ 1 0 0 0 ]
    [ 1  1 2  0 ] ∼ [ 0 1 0 0 ]
    [ 1 −1 1  5 ]   [ 0 0 1 0 ]
    [ 0  0 2  1 ]   [ 0 0 0 1 ]

Hence, the only solution is t1 = t2 = t3 = t4 = 0, so the set is linearly independent.
A17 Consider
0 = t1 (1 + x + x3 + x4 ) + t2 (2 + x − x2 + x3 + x4 ) + t3 (x + x2 + x3 + x4 )
= (t1 + 2t2 ) + (t1 + t2 + t3 )x + (−t2 + t3 )x2 + (t1 + t2 + t3 )x3 + (t1 + t2 + t3 )x4
Comparing coefficients of powers of x we get the homogeneous system of equations
t1 + 2t2 = 0
t1 + t2 + t3 = 0
−t2 + t3 = 0
t1 + t2 + t3 = 0
t1 + t2 + t3 = 0
Row reducing the coefficient matrix of the system gives

    [ 1  2 0 ]   [ 1 0  2 ]
    [ 1  1 1 ]   [ 0 1 −1 ]
    [ 0 −1 1 ] ∼ [ 0 0  0 ]
    [ 1  1 1 ]   [ 0 0  0 ]
    [ 1  1 1 ]   [ 0 0  0 ]

t3 is a free variable, so the system has infinitely many solutions and the set is linearly dependent. The
general solution of the homogeneous system is (t1, t2, t3) = t(−2, 1, 1), so we have

    0 = (−2t)(1 + x + x³ + x⁴) + t(2 + x − x² + x³ + x⁴) + t(x + x² + x³ + x⁴),   t ∈ R
A18 Consider
a1 + a2 x + a3 x2 = t1 1 + t2 (x − 1) + t3 (x − 1)2
= t1 1 + t2 (x − 1) + t3 (x2 − 2x + 1)
= (t1 − t2 + t3 ) + (t2 − 2t3 )x + t3 x2


The corresponding augmented matrix is

    [ 1 −1  1 | a1 ]
    [ 0  1 −2 | a2 ]
    [ 0  0  1 | a3 ]
Since there is a leading one in each row the system is consistent for all polynomials a1 + a2 x + a3 x2 .
Thus, B spans P2 (R). Moreover, since there is a leading one in each column there is a unique solution
and so B is also linearly independent. Therefore, it is a basis for P2 (R).
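The basis check in A18 can be automated by putting the coefficient vectors of B into a matrix and testing invertibility; a small SymPy sketch (not part of the printed solution):

```python
import sympy as sp

x = sp.symbols('x')
basis = [sp.Integer(1), x - 1, (x - 1)**2]

# M[i, j] = coefficient of x^i in the j-th basis polynomial,
# so the columns of M are the coordinate vectors relative to {1, x, x^2}
M = sp.Matrix(3, 3, lambda i, j: sp.expand(basis[j]).coeff(x, i))
print(M)        # Matrix([[1, -1, 1], [0, 1, -2], [0, 0, 1]])
print(M.det())  # 1: M is invertible, so B is a basis for P2(R)
```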
B Homework Problems
B1 −1 + 2x − 2x2 + 2x3
B3 −1 − 14x + 20x2 − 6x3
B5 0
B2 −2 − 4x + 6x2 − 6x3 − 10x4
B4 −3 − 5x2 + 9x3
B6 2 − (1/3)x + x²
B7 −2x2
B8 2 = 2(1 + x + x3 ) + 0(1 + 2x + x2 ) + (−2)(x + x3 )
B9 1 − x + x3 is not in the span.
B10 2 + 2x − x2 + 4x3 = 3(1 + x + x3 ) + (−1)(1 + 2x + x2 ) + (1)(x + x3 )
B11 6 + 4x + 2x2 = 4(1 + x + x3 ) + 2(1 + 2x + x2 ) + (−4)(x + x3 )
B12 The set is linearly dependent. We have
0 = t(0) + 0(1 + x2 ) + 0(x + x2 − x3 ),
t∈R
B13 The set is linearly independent.
B14 The set is linearly independent.
B15 The set is linearly dependent. We have
0 = (−t)(4x − 3x2 ) + (−2t)(1 − 3x + 2x2 ) + t(2 − 2x + x2 ),
t∈R
B16 The set is linearly independent.
B17 The set is linearly dependent. We have
0 = (3t)(4 + x − 2x3 ) + (−2t)(5 + 2x + x3 ) + t(−2 + x + 8x3 ),
t∈R
B18 The set is linearly independent.
B19 Show that c1 (1) + c2 (1 − x) + c3 (1 − x)2 = 0 + 0x + 0x2 has only the trivial solution c1 = c2 = c3 = 0.
Show that d1 (1) + d2 (1 − x) + d3 (1 − x)2 = a + bx + cx2 is consistent for all a, b, c ∈ R.
C Conceptual Problems
C1 If there does not exist such a polynomial q, then the equation
t1 p1 + · · · + tk pk = q
is consistent for all n-th degree polynomials q. This equation corresponds to a system of n + 1 equations (one for each power of x in the polynomials) in k unknowns (t1 , . . . , tk ). Hence, by the System
Rank Theorem (3), this implies that the rank of the coefficient matrix A corresponding to the system
is n + 1. However, this means that there are n + 1 leading ones in the RREF of A, but A only has
k < n + 1 columns. Since a leading one must appear in its own column, this is impossible. Hence,
there must exist a polynomial q which is not in the span of B.
C2 Consider t1 p1 + · · · + tk pk = 0. As in Problem C1, this corresponds to a system of n + 1 equations in
k unknowns. By the System Rank Theorem (2), the system has at least k − (n + 1) > 0 parameters.
Hence, there is a non-trivial solution to this system. So, the set is linearly dependent.
C3 Let q ∈ Pn (R) and consider t1 p1 + · · · + tn+1 pn+1 = q. As in Problem C1, this corresponds to a
system of n + 1 equations in n + 1 unknowns. Since B is linearly independent, this system must have
a unique solution, so, by the System Rank Theorem (2), the rank of the coefficient matrix equals
n + 1. Therefore, by the System Rank Theorem (3), the system is consistent for all q ∈ Pn (R). Thus,
Span B = Pn (R).
Section 4.2
A Practice Problems
A1 Call the set S . Since the condition of the set is a linear equation, we suspect that S is a subspace. By
definition S is a subset of R⁴ and 0 ∈ S since 0 + 2(0) = 0.
Let x = [x1; x2; x3; x4] and y = [y1; y2; y3; y4] be vectors in S. Then, they satisfy the condition of S, so we have
x1 + 2x2 = 0 and y1 + 2y2 = 0. Then,

    sx + ty = [ sx1 + ty1; sx2 + ty2; sx3 + ty3; sx4 + ty4 ]

satisfies

    (sx1 + ty1) + 2(sx2 + ty2) = s(x1 + 2x2) + t(y1 + 2y2) = s(0) + t(0) = 0

So, sx + ty satisfies the condition of the set and hence sx + ty ∈ S. Therefore, S is a subspace of R⁴.
A2 Call the set S. Since the condition of the set is a linear equation, we suspect that S is a subspace.
By definition S is a subset of M2×2(R) and O2,2 ∈ S since 0 + 2(0) = 0. Let A = [a1 a2; a3 a4] and
B = [b1 b2; b3 b4] be vectors in S. Then, they satisfy the condition of S, so we have a1 + 2a2 = 0 and
b1 + 2b2 = 0. Then,

    sA + tB = [ sa1 + tb1  sa2 + tb2 ]
              [ sa3 + tb3  sa4 + tb4 ]

satisfies

    (sa1 + tb1) + 2(sa2 + tb2) = s(a1 + 2a2) + t(b1 + 2b2) = s(0) + t(0) = 0

So, sA + tB satisfies the condition of the set and hence sA + tB ∈ S. Therefore, S is a subspace of
M2×2(R).
A3 Call the set S . Since the condition of the set is a linear equation, we suspect that S is a subspace.
By definition S is a subset of P3 (R) and 0 = 0 + 0x + 0x2 + 0x3 ∈ S since 0 + 2(0) = 0. Let
p = a0 + a1 x + a2 x2 + a3 x3 and q = b0 + b1 x + b2 x2 + b3 x3 be vectors in S . Then, they satisfy the
condition of S , so we have a0 + 2a1 = 0 and b0 + 2b1 = 0. Then,
sp + tq = (sa0 + tb0 ) + (sa1 + tb1 )x + (sa2 + tb2 )x2 + (sa3 + tb3 )x3
satisfies
(sa0 + tb0 ) + 2(sa1 + tb1 ) = s(a0 + 2a1 ) + t(b0 + 2b1 ) = s(0) + t(0) = 0
So, sp + tq satisfies the condition of the set and hence sp + tq ∈ S . Therefore, S is a subspace of
P3 (R).
A4 At first glance we may be tempted to think that this is a subspace. In fact, if one is not careful, one
could think they have a proof that it is a subspace. The important thing to remember in this problem
is that the scalars in a vector space can be any real number. So, the condition that the entries of the
matrices in the set are integers should be problematic. Indeed, observe that [1 1; 1 1] is in the set, but
the scalar multiple √2 [1 1; 1 1] = [√2 √2; √2 √2] is not in the set since the entries are not integers. Thus,
the set is not closed under scalar multiplication and hence is not a subspace.
A5 Since the condition of the set involves multiplication of entries we expect that it is not a subspace.
Observe that [2 1; 6 3] and [1 2; 2 4] are both in the set since 2(3) − 1(6) = 0 and 1(4) − 2(2) = 0. But,
their sum [3 3; 8 7] is not in the set since 3(7) − 3(8) ≠ 0.
A6 Call the set S. The condition of the set is linear, so we suspect that this is a subspace. By definition
S is a subset of M2×2(R) and O2,2 ∈ S since 0 = 0. Let A = [a1 a2; 0 0] and B = [b1 b2; 0 0] be vectors in
S. Then, they satisfy the condition of S, so we have a1 = a2 and b1 = b2. Then

    sA + tB = [ sa1 + tb1  sa2 + tb2 ]
              [     0          0     ]

satisfies sa1 + tb1 = sa2 + tb2. Thus, sA + tB satisfies the condition of the set and hence sA + tB ∈ S.
Therefore, S is a subspace of M2×2(R).
A7 Observe that the zero polynomial 0 does not satisfy 0(1) = 1, so 0 is not in the set and hence it is not
a subspace.
A8 Call the set S . By definition S is a subset of P3 (R) and 0 ∈ S since 0(2) = 0. Let p, q ∈ S . Then, they
satisfy p(2) = 0 and q(2) = 0. Then,
(sp + tq)(2) = sp(2) + tq(2) = s(0) + t(0) = 0
So, sp + tq satisfies the condition of the set and hence sp + tq ∈ S . Therefore, S is a subspace of
P3 (R).
A9 By definition, the set is a subset of Mn×n(R). Observe that A = [1 0; 0 1] and B = [−1 0; 0 0] are both in
row echelon form, but A + B = [0 0; 0 1] is not in row echelon form. Thus, the set is not a subspace.
A10 By definition, the set is a subset of Mn×n(R). The n × n zero matrix is an upper triangular matrix, so
the set is non-empty. Let

    A = [ a11 · · · a1n ]        B = [ b11 · · · b1n ]
        [  0   ⋱    ⋮  ]            [  0   ⋱    ⋮  ]
        [  0   0   ann ]            [  0   0   bnn ]

be upper triangular matrices. Then, for any s, t ∈ R we have

    sA + tB = [ sa11 + tb11   · · ·   sa1n + tb1n ]
              [      0          ⋱          ⋮     ]
              [      0          0    sann + tbnn ]

and is upper triangular, so the set is a subspace of Mn×n(R).
A11 By definition, the set is a subset of Mn×n (R). The n × n zero matrix On,n satisfies OTn,n = On,n , so the
zero matrix is in the set. Let A and B be matrices such that AT = A and BT = B. Using properties of
the transpose gives
(sA + tB)T = sAT + tBT = sA + tB
Thus, (sA + tB) also satisfies the condition of the set. Thus, the set is a subspace of Mn×n (R).
A12 By definition, the set is a subset of P5 (R). The zero polynomial is an even polynomial, so the set is
non-empty. Let p(x) and q(x) be even polynomials. Then p(−x) = p(x) and q(−x) = q(x). Then, for
any s, t ∈ R we have
(sp + tq)(−x) = sp(−x) + tq(−x) = sp(x) + tq(x) = (sp + tq)(x)
So, sp + tq is an even polynomial. Thus, the set is a subspace of P5 (R).
A13 By definition, the set is a subset of P5 (R). Since P3 (R) is a vector space it contains the zero polynomial. Thus, (1+ x2 )(0) = 0 is in the set so it also contains the zero polynomial and hence is non-empty.
Let q1 and q2 be polynomials in the set. Then there exists polynomials p1 and p2 in P3 (R) such that
q1 = (1 + x2 )p1 and q2 = (1 + x2 )p2 . We have
    sq1 + tq2 = s(1 + x²)p1 + t(1 + x²)p2 = (1 + x²)(sp1 + tp2)
Since P3 (R) is a vector space we get by V1 and V6 that sp1 + tp2 ∈ P3 (R). Thus, sq1 + tq2 is in the
set and hence the set is a subspace of P5 (R).
A14 By definition, the set is a subset of P5 (R). The zero polynomial is in the set since its satisfies the
conditions of the set. Let p(x) = a0 + a1 x + a2 x2 + a3 x3 + a4 x4 and q(x) = b0 + b1 x + b2 x2 + b3 x3 + b4 x4
be polynomials in the set. Then a0 = a4 , a1 = a3 , b0 = b4 , and b1 = b3 . Then,
sp(x) + tq(x) = (sa0 + tb0 ) + (sa1 + tb1 )x + (sa2 + tb2 )x2 + (sa3 + tb3 )x3 + (sa4 + tb4 )x4
satisfies sa0 + tb0 = sa4 + tb4 and sa1 + tb1 = sa3 + tb3 so it is in the set. Therefore, the set is a
subspace of P5 (R).
A15 The zero polynomial does not satisfy the condition of the set, so it cannot be a subspace.
A16 The set is equal to Span{1, x, x2 } and hence is a subspace of P5 (R) by Theorem 4.2.2.
A17 By definition, the set is a subset of F . Call the set S . The zero vector in F is the function which maps
all x to 0. Thus, it certainly maps 3 to 0 and hence is in S . Let f, g ∈ S . Then f (3) = 0 and g(3) = 0.
Therefore, we get
(s f + tg)(3) = s f (3) + tg(3) = s(0) + t(0) = 0
Hence, S is a subspace of F .
A18 The zero vector in F does not satisfy the condition of the set, so it cannot be a subspace.
A19 By definition, the set is a subset of F . Call the set S . The zero vector in F is even so it is in S . Let
f, g ∈ S . Then f (−x) = f (x) and g(−x) = g(x). Thus, we get
(s f + tg)(−x) = s f (−x) + tg(−x) = s f (x) + tg(x) = (s f + tg)(x)
Hence, S is a subspace of F .
A20 Observe that f (x) = x2 + 1 is a function in the set since f (x) ≥ 0 for all x ∈ R. But then (−1) f (x) =
−x2 − 1 < 0 for all x ∈ R and hence is not in the set. Consequently, the set is not a subspace.
B Homework Problems
B1 It is not a subspace.
B2 It is a subspace.
B3 It is a subspace.
B4 It is not a subspace.
B5 It is not a subspace.
B6 It is a subspace.
B7 It is a subspace.
B8 It is not a subspace.
B9 It is a subspace.
B10 It is a subspace.
B11 It is not a subspace.
B12 It is a subspace.
B13 It is a subspace.
B14 It is not a subspace.
B15 It is not a subspace.
B16 It is a subspace.
B17 It is not a subspace.
B18 It is a subspace.
B19 It is a subspace.
B20 It is a subspace.
B21 It is not a subspace.
B22 It is not a subspace.
B23 It is a subspace.
B24 It is a subspace.
B25 It is not a subspace.
B26 It is not a subspace.
B27 It is not a subspace.
C Conceptual Problems
C1 We have

    (−1)x = (−1)x + 0               by V4
          = (−1)x + (x + (−x))      by V5
          = (−1)x + 1x + (−x)       by V10
          = (−1 + 1)x + (−x)        by V8
          = 0x + (−x)               by addition in R
          = 0 + (−x)                by Theorem 4.2.1 (1)
          = (−x) + 0                by V2
          = −x                      by V4
C2 We have

    t0 = t(0 + (−0))        by V5
       = t(0 + (−1)0)       by Theorem 4.2.1 (2)
       = t0 + t[(−1)0]      by V9
       = t0 + (−t)0         by V7
       = (t + (−t))0        by V8
       = 0·0                by addition in R
       = 0                  by Theorem 4.2.1 (1)
C3 Assume that tx = 0. If t = 0, then the result holds. So, assume that t ≠ 0. Then, multiplying both
sides by 1/t gives

    (1/t)(tx) = (1/t)0
    ((1/t)t)x = 0           by V7 and Theorem 4.2.1 (3)
           1x = 0           by operations on R
            x = 0           by V10
C4 Since x ∈ V, −x ∈ V by V5. Hence, adding −x to both sides on the right gives

    (x + y) + (−x) = (z + x) + (−x)
    (y + x) + (−x) = (z + x) + (−x)        by V2
    y + (x + (−x)) = z + (x + (−x))        by V3
             y + 0 = z + 0                 by V5
                 y = z                     by V4
C5 We need to show that all ten axioms of a vector space hold. Let x = (a, b) ∈ V, y = (c, d) ∈ V,
z = (e, f) ∈ V and s, t ∈ R.
V1. x ⊕ y = (ad + bc, bd) and bd > 0 since b > 0 and d > 0, hence x ⊕ y ∈ V.
V2. x ⊕ y = (ad + bc, bd) = (cb + da, db) = y ⊕ x.
V3. (x ⊕ y) ⊕ z = (ad + bc, bd) ⊕ (e, f) = (adf + bcf + bde, bdf) = (a, b) ⊕ (cf + de, df) = x ⊕ (y ⊕ z).
V4. (0, 1) ∈ V and (a, b) ⊕ (0, 1) = (a(1) + b(0), b(1)) = (a, b). Hence 0 = (0, 1) is the zero element.
V5. The additive inverse of x is (−ab^(−2), b^(−1)), which is in V since b^(−1) > 0, and

    (a, b) ⊕ (−ab^(−2), b^(−1)) = (a(b^(−1)) + (b)(−ab^(−2)), b(b^(−1))) = (a/b − a/b, 1) = (0, 1) = 0

V6. t ⊙ x = (tab^(t−1), b^t) ∈ V as b^t > 0 since b > 0.
V7.

    s ⊙ (t ⊙ x) = s ⊙ (tab^(t−1), b^t) = (s(tab^(t−1))(b^t)^(s−1), (b^t)^s)
                = ((st)ab^(st−1), b^(st)) = (st) ⊙ x

V8.

    (s + t) ⊙ x = ((s + t)ab^(s+t−1), b^(s+t)) = ((sab^(s−1))(b^t) + (b^s)(tab^(t−1)), b^s b^t)
                = (sab^(s−1), b^s) ⊕ (tab^(t−1), b^t) = [s ⊙ x] ⊕ [t ⊙ x]

V9.

    t ⊙ (x ⊕ y) = t ⊙ (ad + bc, bd) = (t(ad + bc)(bd)^(t−1), (bd)^t)
                = (tad(bd)^(t−1) + tbc(bd)^(t−1), b^t d^t) = (tab^(t−1) d^t + tcd^(t−1) b^t, b^t d^t)
                = (tab^(t−1), b^t) ⊕ (tcd^(t−1), d^t) = [t ⊙ (a, b)] ⊕ [t ⊙ (c, d)] = [t ⊙ x] ⊕ [t ⊙ y]

V10. 1 ⊙ x = (1ab^(1−1), b^1) = (a, b) = x.
Hence, V is a vector space.
C6 We need to show that all ten axioms of a vector space hold. Let x, y, z ∈ V and s, t ∈ R. Then, x > 0,
y > 0, and z > 0.
V1: Since x, y > 0 we have x ⊕ y = xy > 0, so x ⊕ y ∈ V.
V2: x ⊕ y = xy = yx = y ⊕ x.
V3: x ⊕ (y ⊕ z) = x ⊕ yz = x(yz) = (xy)z = (xy) ⊕ z = (x ⊕ y) ⊕ z.
V4: Observe that 1 ∈ V and x ⊕ 1 = x(1) = x for all x ∈ V. Thus, the zero vector of V is 0 = 1.
V5: We have x ⊕ (1/x) = x(1/x) = 1 = 0, so the additive inverse of x is 1/x. Moreover, 1/x ∈ V since
1/x > 0 whenever x > 0.
V6: t ⊙ x = x^t ∈ V since x^t > 0 whenever x > 0.
V7: s ⊙ (t ⊙ x) = s ⊙ (x^t) = (x^t)^s = x^(ts) = (ts) ⊙ x.
V8: (s + t) ⊙ x = x^(s+t) = x^s x^t = x^s ⊕ x^t = [s ⊙ x] ⊕ [t ⊙ x].
V9: t ⊙ (x ⊕ y) = t ⊙ (xy) = (xy)^t = x^t y^t = x^t ⊕ y^t = [t ⊙ x] ⊕ [t ⊙ y].
V10: 1 ⊙ x = x^1 = x.
Hence, V is a vector space.
C7 We need to show that all ten axioms of a vector space hold. Let a + bi, c + di, e + f i ∈ C and s, t ∈ R.
V1: (a + bi) + (c + di) = (a + c) + (b + d)i ∈ C.
V2: (a + bi) + (c + di) = (a + c) + (b + d)i = (c + a) + (d + b)i = (c + di) + (a + bi).
V3:
(a + bi) + [(c + di) + (e + f i)] = (a + bi) + [(c + e) + (d + f )i]
= (a + (c + e)) + (b + (d + f ))i
= ((a + c) + e) + ((b + d) + f )i
= [(a + c) + (b + d)i] + (e + f i)
= [(a + bi) + (c + di)] + (e + f i)
V4: (a + bi) + (0 + 0i) = a + bi, and 0 + 0i ∈ C, so 0 + 0i = 0
V5: (a + bi) + (−a − bi) = 0 + 0i, and −a − bi ∈ C, so (−a − bi) is the additive inverse of any a + bi ∈ C.
V6: t(a + bi) = (ta) + (tb)i ∈ C.
V7: s(t(a + bi)) = s((ta) + (tb)i) = (st)a + (st)bi = (st)(a + bi)
V8: (s + t)(a + bi) = (s + t)a + (s + t)bi = sa + sbi + ta + tbi = s(a + bi) + t(a + bi)
V9: s[(a + bi) + (c + di)] = s((a + c) + (b + d)i) = sa + sc + sbi + sdi = s(a + bi) + s(c + di)
V10: 1(a + bi) = a + bi
Hence, C is a real vector space.
C8 We need to show that all ten axioms of a vector space hold. Let L, M, N ∈ L and s, t ∈ R. Then, L,
M, and N are linear mappings from Rⁿ to Rⁿ.
V1: By Theorem 3.2.4, L + M is a linear mapping from Rⁿ to Rⁿ, hence L + M ∈ L.
V2: For any x ∈ Rⁿ, we have

    (L + M)(x) = L(x) + M(x) = M(x) + L(x) = (M + L)(x)

V3: For any x ∈ Rⁿ we have

    (L + (M + N))(x) = L(x) + (M + N)(x) = L(x) + (M(x) + N(x))
                     = (L(x) + M(x)) + N(x) = (L + M)(x) + N(x)
                     = ((L + M) + N)(x)

Thus, L + (M + N) = (L + M) + N.
V4: Let Z : Rⁿ → Rⁿ be the linear mapping defined by Z(x) = 0 for all x ∈ Rⁿ. Then, for any x ∈ Rⁿ
we have

    (L + Z)(x) = L(x) + Z(x) = L(x) + 0 = L(x)

Thus, L + Z = L. Since Z is linear, we have Z ∈ L. Hence, Z is the zero vector of L.
V5: For any L ∈ L, define (−L) by (−L)(x) = (−1)L(x). Then, for any x ∈ Rⁿ we have

    (L + (−L))(x) = L(x) + (−L)(x) = L(x) − L(x) = 0 = Z(x)

Thus, L + (−L) = Z. Moreover, it is easy to verify that (−L) is linear, so (−L) ∈ L.
V6: By Theorem 3.2.4, sL is a linear mapping from Rⁿ to Rⁿ, hence sL ∈ L.
V7: For any x ∈ Rⁿ we have

    ((st)L)(x) = (st)L(x) = s(tL(x)) = (s(tL))(x)

V8: For any x ∈ Rⁿ we have

    ((s + t)L)(x) = (s + t)L(x) = sL(x) + tL(x) = (sL)(x) + (tL)(x) = (sL + tL)(x)

V9: For any x ∈ Rⁿ we have

    (s(L + M))(x) = s((L + M)(x)) = s(L(x) + M(x)) = sL(x) + sM(x) = (sL + sM)(x)

V10: For any x ∈ Rⁿ we have

    (1L)(x) = 1L(x) = L(x)
C9 We know that {0} and R² are both subspaces of R². Let S be any other subspace. Since S ≠ {0} (and
cannot be the empty set), there exists a vector x ∈ S with x ≠ 0. Then, since S is a subspace, we
have that tx ∈ S for all t ∈ R. By Theorem 4.2.2, the set {tx | t ∈ R} = Span{x} is a subspace of R².
Hence, if S = Span{x}, then S is a line through the origin.
Assume instead that the subspace contains two vectors x and y such that {x, y} is linearly
independent. Then, by definition of a subspace, it contains all vectors of the form sx + ty for s, t ∈ R.
But then, since two linearly independent vectors in R² form a basis for R², the subspace is all of R².
Thus, all the subspaces of R² are the origin, lines through the origin, or R² itself.
C10 (a) We need to show that all ten axioms of a vector space hold. Let (u1 , v1 ), (u2 , v2 ), (u3 , v3 ) ∈ U × V
and s, t ∈ R. Then u1 , u2 , u3 ∈ U and v1 , v2 , v3 ∈ V.
V1: Observe that u1 + u2 ∈ U and v1 + v2 ∈ V as U and V are both vector spaces. Thus,
(u1 , v1 ) ⊕ (u2 , v2 ) = (u1 + u2 , v1 + v2 ) ∈ U × V
V2:
(u1 , v1 ) ⊕ (u2 , v2 ) = (u1 + u2 , v1 + v2 )
= (u2 + u1 , v2 + v1 )
= (u2 , v2 ) ⊕ (u1 , v1 )
V3:
(u1 , v1 ) ⊕ [(u2 , v2 ) ⊕ (u3 , v3 )] = (u1 , v1 ) ⊕ (u2 + u3 , v2 + v3 )
= (u1 + (u2 + u3 ), v1 + (v2 + v3 ))
= ((u1 + u2 ) + u3 , (v1 + v2 ) + v3 )
= (u1 + u2 , v1 + v2 ) ⊕ (u3 , v3 )
= [(u1 , v1 ) ⊕ (u2 , v2 )] ⊕ (u3 , v3 )
V4: (u1 , v1 ) ⊕ (0U , 0V ) = (u1 + 0U , v1 + 0V ) = (u1 , v1 ). Thus, (0U , 0V ) ∈ U × V is the zero vector
of U × V.
V5: Let (u1, v1) ∈ U × V. Since U and V are vector spaces, (−u1) ∈ U and (−v1) ∈ V. Hence,
((−u1), (−v1)) ∈ U × V. Moreover, we have

    (u1, v1) ⊕ ((−u1), (−v1)) = (u1 + (−u1), v1 + (−v1)) = (0U, 0V)
V6: Observe that tu1 ∈ U and tv1 ∈ V as U and V are both vector spaces. Thus,

    t ⊙ (u1, v1) = (tu1, tv1) ∈ U × V
V7:

    (st) ⊙ (u1, v1) = ((st)u1, (st)v1)
                    = (s(tu1), s(tv1))
                    = s ⊙ (tu1, tv1)
                    = s ⊙ (t ⊙ (u1, v1))
V8:

    (s + t) ⊙ (u1, v1) = ((s + t)u1, (s + t)v1)
                       = (su1 + tu1, sv1 + tv1)
                       = (su1, sv1) ⊕ (tu1, tv1)
                       = s ⊙ (u1, v1) ⊕ t ⊙ (u1, v1)
V9:

    s ⊙ ((u1, v1) ⊕ (u2, v2)) = s ⊙ (u1 + u2, v1 + v2)
                              = (s(u1 + u2), s(v1 + v2))
                              = (su1 + su2, sv1 + sv2)
                              = (su1, sv1) ⊕ (su2, sv2)
                              = s ⊙ (u1, v1) ⊕ s ⊙ (u2, v2)
V10: 1 ⊙ (u1, v1) = (1u1, 1v1) = (u1, v1)
(b) By definition, U × {0V } is a subset of U × V. Also, (0U , 0V ) ∈ U × {0V }, so it is non-empty. For
any u1 , u2 ∈ U and t ∈ R we have
(u1 , 0V ) ⊕ (u2 , 0V ) = (u1 + u2 , 0V ) ∈ U × {0V }
and
    t ⊙ (u1, 0V) = (tu1, 0V) ∈ U × {0V}
Thus, U × {0V } is a subspace of U × V.
(c) With this rule for scalar multiplication, it is not a subspace because

    0 ⊙ (u1, v1) = (0U, v1) ≠ (0U, 0V)

and (0U, 0V) is the zero vector in U × V.
C11 V2, V3, V7, V8, V10 are not satisfied.
C12 V10 is not satisfied.
C13 V8, V9 are not satisfied.
Section 4.3
A Practice Problems
A1 Call the set B. To show that it is a basis for R3 we need to show that Span B = R3 and that B is
linearly independent. To prove that Span B = R3 , we need to show that every vector "x ∈ R3 can be
written as a linear combination of the vectors in B. Consider

    [x1]      [1]      [ 1]      [2]   [t1 + t2 + 2t3]
    [x2] = t1 [1] + t2 [−1] + t3 [1] = [t1 − t2 + t3 ]
    [x3]      [2]      [−1]      [1]   [2t1 − t2 + t3]

Row reducing the corresponding coefficient matrix gives

    [1  1 2]   [1 0 0]
    [1 −1 1] ~ [0 1 0]
    [2 −1 1]   [0 0 1]
Observe that the rank of the coefficient matrix equals the number of rows, so by the System-Rank
Theorem (1), the system is consistent for every "x ∈ R3 . Hence, Span B = R3 . Moreover, by the
System-Rank Theorem (2), there are no parameters in the solution. Therefore, we have a unique
solution when we take "x = "0, so B is also linearly independent. Hence, it is a basis for R3 .
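The row reduction in A1 can be double-checked with a short exact-arithmetic rank computation; a sketch in Python (the `rref_rank` helper is ours, not from the text):

```python
from fractions import Fraction

def rref_rank(rows):
    """Gauss-Jordan elimination over the rationals; returns the rank."""
    m = [[Fraction(x) for x in row] for row in rows]
    rank = 0
    for col in range(len(m[0])):
        piv = next((r for r in range(rank, len(m)) if m[r][col] != 0), None)
        if piv is None:
            continue
        m[rank], m[piv] = m[piv], m[rank]
        m[rank] = [x / m[rank][col] for x in m[rank]]
        for r in range(len(m)):
            if r != rank and m[r][col] != 0:
                m[r] = [a - m[r][col] * b for a, b in zip(m[r], m[rank])]
        rank += 1
    return rank

# Coefficient matrix of A1: columns are the three candidate basis vectors.
A = [[1, 1, 2],
     [1, -1, 1],
     [2, -1, 1]]
assert rref_rank(A) == 3   # full rank: B spans R^3 and is linearly independent
print("rank =", rref_rank(A))
```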
A2 Since it only has two vectors in R3 it cannot span R3 by the Basis Theorem and hence it cannot be a
basis.
A3 Call the set B. To show that it is a basis for R3 we need to show that Span B = R3 and that B is
linearly independent. To check whether B is linearly independent, we consider

    [0]      [−1]      [2]      [1]   [−t1 + 2t2 + t3 ]
    [0] = t1 [ 3] + t2 [4] + t3 [4] = [3t1 + 4t2 + 4t3]
    [0]      [ 5]      [0]      [2]   [5t1 + 2t3      ]

Row reducing the corresponding coefficient matrix gives

    [−1 2 1]   [−1  2 1]
    [ 3 4 4] ~ [ 0 10 7]
    [ 5 0 2]   [ 0  0 0]

Since t3 is a free variable, the homogeneous system has infinitely many solutions and consequently
the set is linearly dependent. Therefore, it is not a basis.
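The conclusion of A3 can be confirmed the same way, by computing the rank exactly; a sketch (the `rref_rank` helper is ours):

```python
from fractions import Fraction

def rref_rank(rows):
    """Gauss-Jordan elimination over the rationals; returns the rank."""
    m = [[Fraction(x) for x in row] for row in rows]
    rank = 0
    for col in range(len(m[0])):
        piv = next((r for r in range(rank, len(m)) if m[r][col] != 0), None)
        if piv is None:
            continue
        m[rank], m[piv] = m[piv], m[rank]
        m[rank] = [x / m[rank][col] for x in m[rank]]
        for r in range(len(m)):
            if r != rank and m[r][col] != 0:
                m[r] = [a - m[r][col] * b for a, b in zip(m[r], m[rank])]
        rank += 1
    return rank

# Columns are the three vectors of A3.
A = [[-1, 2, 1],
     [3, 4, 4],
     [5, 0, 2]]
assert rref_rank(A) == 2   # rank 2 < 3: the vectors are linearly dependent
print("rank =", rref_rank(A))
```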
A4 Since it has four vectors in R3 it is linearly dependent by the Basis Theorem and hence cannot be a
basis.
A5 Consider

    a + bx + cx² = t1(1 + x + 2x²) + t2(1 − x − x²) + t3(2 + x + x²)
                 = (t1 + t2 + 2t3) + (t1 − t2 + t3)x + (2t1 − t2 + t3)x²        (1)

This gives us the system of linear equations

    t1 + t2 + 2t3 = a
    t1 − t2 + t3 = b
    2t1 − t2 + t3 = c

Row reducing the corresponding coefficient matrix gives

    [1  1 2]   [1 0 0]
    [1 −1 1] ~ [0 1 0]
    [2 −1 1]   [0 0 1]

Observe that the rank of the coefficient matrix equals the number of rows, so by the System-Rank
Theorem (1), the system is consistent for every choice of a, b, c ∈ R. Hence, equation (1) has a solution for
all a, b, c ∈ R. Thus, Span B = P2 (R). Moreover, by the System-Rank Theorem (2), there are no
parameters in the solution. Therefore, we have a unique solution when we take a = b = c = 0, so B
is also linearly independent. Hence, it is a basis for P2 (R).
Compare this problem with A1. Thinking carefully about why these are the same will help you predict
what we are going to do in Section 4.4.
A6 Since it only has two vectors in P2 (R) it cannot span P2 (R) by the Basis Theorem (as dim P2 (R) = 3)
and hence it cannot be a basis.
A7 Consider

    a + bx + cx² = t1(1 − x + x²) + t2(1 + 2x − x²) + t3(3 + x²)
                 = (t1 + t2 + 3t3) + (−t1 + 2t2)x + (t1 − t2 + t3)x²        (2)

This gives us the system of linear equations

    t1 + t2 + 3t3 = a
    −t1 + 2t2 = b
    t1 − t2 + t3 = c

Row reducing the corresponding coefficient matrix gives

    [ 1  1 3]   [1 0 2]
    [−1  2 0] ~ [0 1 1]
    [ 1 −1 1]   [0 0 0]

By the System-Rank Theorem (2), the homogeneous system has infinitely many solutions. Thus, the set is linearly
dependent and hence is not a basis.
A8 Consider

    [a b]      [1 1]      [1  2]      [1 3]   [c1 + c2 + c3    c1 + 2c2 + 3c3]
    [0 c] = c1 [0 1] + c2 [0 −1] + c3 [0 2] = [0               c1 − c2 + 2c3 ]        (3)

This gives us the system of linear equations

    c1 + c2 + c3 = a
    c1 + 2c2 + 3c3 = b
    c1 − c2 + 2c3 = c

Row reducing the corresponding coefficient matrix gives

    [1  1 1]   [1 0 0]
    [1  2 3] ~ [0 1 0]
    [1 −1 2]   [0 0 1]

Observe that the rank of the coefficient matrix equals the number of rows, so by the System-Rank
Theorem (1), the system is consistent for all a, b, c ∈ R. Hence, equation (3) has a solution for all
a, b, c ∈ R. Thus, Span B = U. Moreover, by the System-Rank Theorem (2), there are no parameters
in the solution. Therefore, we have a unique solution when we take a = b = c = 0, so B is also
linearly independent. Hence, it is a basis for U.
A9 Since we have a spanning set, we only need to remove linearly dependent vectors until we have a
linearly independent set. Consider

    [0]      [ 1]      [0]      [ 2]      [1]   [t1 + 2t3 + t4        ]
    [0] = t1 [−2] + t2 [1] + t3 [ 0] + t4 [1] = [−2t1 + t2 + t4       ]
    [0]      [ 1]      [2]      [10]      [7]   [t1 + 2t2 + 10t3 + 7t4]

Row reducing the corresponding coefficient matrix gives

    [ 1 0  2 1]   [1 0 2 1]
    [−2 1  0 1] ~ [0 1 4 3]
    [ 1 2 10 7]   [0 0 0 0]

Thus we see that the first two columns of the reduced row echelon form make a linearly independent
set and that the third and fourth columns can be written as linear combinations of the first two
columns. Hence, this is also true about the original matrix. In particular, this tells us that two times
the first vector in B plus four times the second vector in B equals the third vector, and the first vector
in B plus three times the second vector in B equals the fourth vector. Thus, one basis of Span B is

    { [ 1]   [0] }
    { [−2] , [1] }
    { [ 1]   [2] }

Hence, the dimension is 2.
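The dependency relations read off from the RREF in A9 can be checked directly; a minimal sketch (the `lin` helper is ours):

```python
# Verify the dependency relations from A9: the third and fourth vectors of B
# are combinations of the first two, so Span B = Span{v1, v2}.
v1 = (1, -2, 1)
v2 = (0, 1, 2)
v3 = (2, 0, 10)
v4 = (1, 1, 7)

lin = lambda s, u, t, v: tuple(s * a + t * b for a, b in zip(u, v))
assert lin(2, v1, 4, v2) == v3   # third vector = 2(first) + 4(second)
assert lin(1, v1, 3, v2) == v4   # fourth vector = first + 3(second)
print("Span B = Span{v1, v2}, so dim Span B = 2")
```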
A10 Since we have a spanning set, we only need to remove linearly dependent vectors until we have a
linearly independent set. Consider

    [0]      [1]      [−2]      [−1]      [0]      [0]   [t1 − 2t2 − t3             ]
    [0] = t1 [3] + t2 [−6] + t3 [−1] + t4 [4] + t5 [1] = [3t1 − 6t2 − t3 + 4t4 + t5 ]
    [0]      [2]      [−4]      [ 2]      [8]      [1]   [2t1 − 4t2 + 2t3 + 8t4 + t5]

Row reducing the corresponding coefficient matrix gives

    [1 −2 −1 0 0]   [1 −2 0 2 0]
    [3 −6 −1 4 1] ~ [0  0 1 2 0]
    [2 −4  2 8 1]   [0  0 0 0 1]

Therefore, the second and fourth vectors can be written as linear combinations of the others. Thus,

    { [1]   [−1]   [0] }
    { [3] , [−1] , [1] }
    { [2]   [ 2]   [1] }

is a basis for Span B. Hence, the dimension is 3.
A11 Since we have a spanning set, we only need to remove linearly dependent vectors until we have a
linearly independent set. Consider

    [0 0]      [ 1 1]      [0  1]      [1 −1]      [2  1]
    [0 0] = t1 [−1 1] + t2 [3 −1] + t3 [2 −3] + t4 [4 −3]

          = [t1 + t3 + 2t4             t1 + t2 − t3 + t4 ]
            [−t1 + 3t2 + 2t3 + 4t4     t1 − t2 − 3t3 − 3t4]

Row reducing the corresponding coefficient matrix gives

    [ 1  0  1  2]   [1 0 0 1]
    [ 1  1 −1  1] ~ [0 1 0 1]
    [−1  3  2  4]   [0 0 1 1]
    [ 1 −1 −3 −3]   [0 0 0 0]

Therefore, the fourth vector is a linear combination of the first three. Thus,

    { [ 1 1]   [0  1]   [1 −1] }
    { [−1 1] , [3 −1] , [2 −3] }

is a basis for Span B. Hence, the dimension is 3.
A12 Consider

    [0 0]      [1 1]      [2 2]      [0 2]      [2 0]      [3 1]
    [0 0] = t1 [1 1] + t2 [2 2] + t3 [1 1] + t4 [1 1] + t5 [2 2]

          = [t1 + 2t2 + 2t4 + 3t5      t1 + 2t2 + 2t3 + t5     ]
            [t1 + 2t2 + t3 + t4 + 2t5  t1 + 2t2 + t3 + t4 + 2t5]

Row reducing the corresponding coefficient matrix gives

    [1 2 0 2 3]   [1 2 0  2  3]
    [1 2 2 0 1] ~ [0 0 1 −1 −1]
    [1 2 1 1 2]   [0 0 0  0  0]
    [1 2 1 1 2]   [0 0 0  0  0]

Therefore, a basis is

    { [1 1]   [0 2] }
    { [1 1] , [1 1] }

Hence, the dimension is 2.
A13 Consider

    [0 0]      [1 0]      [0 1]      [0  1]      [1 1]   [t1 + t4    t2 + t3 + t4]
    [0 0] = t1 [0 1] + t2 [1 0] + t3 [0 −1] + t4 [1 0] = [t2 + t4    t1 − t3     ]

Row reducing the corresponding coefficient matrix gives

    [1 0  0 1]   [1 0 0 0]
    [0 1  1 1] ~ [0 1 0 0]
    [0 1  0 1]   [0 0 1 0]
    [1 0 −1 0]   [0 0 0 1]

Thus, B is a linearly independent set and hence is a basis for Span B. Hence, the dimension is 4.
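The independence claim in A13 can be confirmed by an exact rank computation; a sketch (the `rref_rank` helper is ours):

```python
from fractions import Fraction

def rref_rank(rows):
    """Gauss-Jordan elimination over the rationals; returns the rank."""
    m = [[Fraction(x) for x in row] for row in rows]
    rank = 0
    for col in range(len(m[0])):
        piv = next((r for r in range(rank, len(m)) if m[r][col] != 0), None)
        if piv is None:
            continue
        m[rank], m[piv] = m[piv], m[rank]
        m[rank] = [x / m[rank][col] for x in m[rank]]
        for r in range(len(m)):
            if r != rank and m[r][col] != 0:
                m[r] = [a - m[r][col] * b for a, b in zip(m[r], m[rank])]
        rank += 1
    return rank

# Columns are the four matrices of A13 flattened entrywise.
A = [[1, 0, 0, 1],
     [0, 1, 1, 1],
     [0, 1, 0, 1],
     [1, 0, -1, 0]]
assert rref_rank(A) == 4   # rank 4: the four matrices are linearly independent
print("rank =", rref_rank(A))
```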
A14 The set B is clearly linearly independent since the highest power of x in each vector is different.
Thus, B is a basis for Span B and so the dimension is 3.
A15 Consider
0 = t1 (1 + x) + t2 (1 − x) + t3 (1 + x3 ) + t4 (1 − x3 ) = (t1 + t2 + t3 + t4 ) + (t1 − t2 )x + (t3 − t4 )x3
Row reducing the corresponding coefficient matrix gives

    [1  1 1  1]   [1 0 0  1]
    [1 −1 0  0] ~ [0 1 0  1]
    [0  0 0  0]   [0 0 1 −1]
    [0  0 1 −1]   [0 0 0  0]
Hence, a basis for Span B is {1 + x, 1 − x, 1 + x3 } and so the dimension is 3.
A16 Consider

    0 = t1(1 + x + x²) + t2(1 − x³) + t3(1 − 2x + 2x² − x³) + t4(1 − x² + 2x³) + t5(x² + x³)
      = (t1 + t2 + t3 + t4) + (t1 − 2t3)x + (t1 + 2t3 − t4 + t5)x² + (−t2 − t3 + 2t4 + t5)x³

Row reducing the corresponding coefficient matrix gives

    [1  1  1  1 0]   [1 0 0 0  4/7]
    [1  0 −2  0 0] ~ [0 1 0 0 −1  ]
    [1  0  2 −1 1]   [0 0 1 0  2/7]
    [0 −1 −1  2 1]   [0 0 0 1  1/7]

Therefore, a basis for Span B is {1 + x + x², 1 − x³, 1 − 2x + 2x² − x³, 1 − x² + 2x³} and hence the
dimension is 4.
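The rank of the coefficient matrix in A16 can likewise be checked exactly; a sketch (the `rref_rank` helper is ours):

```python
from fractions import Fraction

def rref_rank(rows):
    """Gauss-Jordan elimination over the rationals; returns the rank."""
    m = [[Fraction(x) for x in row] for row in rows]
    rank = 0
    for col in range(len(m[0])):
        piv = next((r for r in range(rank, len(m)) if m[r][col] != 0), None)
        if piv is None:
            continue
        m[rank], m[piv] = m[piv], m[rank]
        m[rank] = [x / m[rank][col] for x in m[rank]]
        for r in range(len(m)):
            if r != rank and m[r][col] != 0:
                m[r] = [a - m[r][col] * b for a, b in zip(m[r], m[rank])]
        rank += 1
    return rank

# Rows are the coefficients of 1, x, x^2, x^3; columns correspond to t1..t5.
A = [[1, 1, 1, 1, 0],
     [1, 0, -2, 0, 0],
     [1, 0, 2, -1, 1],
     [0, -1, -1, 2, 1]]
assert rref_rank(A) == 4   # rank 4: four of the five polynomials form a basis
print("rank =", rref_rank(A))
```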
A17 Alternate correct answers are possible.
(a) Since a plane is two-dimensional, by the Basis Theorem, to find a basis for the plane we just need
to find two linearly independent vectors in the plane. Observe that the vectors (1, 2, 0) and (1, 0, 2) satisfy
the equation of the plane and hence are in the plane. Thus,

    { [1]   [1] }
    { [2] , [0] }
    { [0]   [2] }

is a basis for the plane.
(b) To extend our basis for the plane in part (a) to a basis for R³ we need to pick one vector which is
not a linear combination of the basis vectors for the plane. The basis vectors span the plane, so
any vector not in the plane will be linearly independent with the basis vectors. Clearly the normal
vector n = (2, −1, −1) is not in the plane, so

    { [1]   [1]   [ 2] }
    { [2] , [0] , [−1] }
    { [0]   [2]   [−1] }

is a basis for R³.
A18 Alternate correct answers are possible.
(a) Since a hyperplane in R⁴ is three-dimensional, by the Basis Theorem, we just need to pick three
linearly independent vectors which satisfy the equation of the hyperplane. Hence,

    { [1]   [ 1]   [1] }
    { [0]   [ 0]   [1] }
    { [0] , [−1] , [0] }
    { [1]   [ 0]   [0] }

is a basis for the hyperplane.
(b) To extend our basis for the hyperplane in part (a) to a basis for R⁴, we just need to pick one
vector which does not lie in the hyperplane. Observe that (1, 0, 0, 0) does not satisfy the equation of the
hyperplane, and hence

    { [1]   [ 1]   [1]   [1] }
    { [0]   [ 0]   [1]   [0] }
    { [0] , [−1] , [0] , [0] }
    { [1]   [ 0]   [0]   [0] }

is a basis for R⁴.
A19 Alternate correct answers are possible. Rather than use the approach in the text, we will demonstrate
an alternate approach.
Since we know that dim P3 (R) = 4, we need to add two vectors to the set {1 + x + x3 , 1 + x2 } so that
the set is still linearly independent. There are a variety of ways of picking such vectors. We observe
that neither x² nor x can be written as a linear combination of 1 + x + x³ and 1 + x², so we will try to
prove that B = {1 + x + x³, 1 + x², x², x} is a basis for P3 (R). Consider
0 = c1 (1 + x + x3 ) + c2 (1 + x2 ) + c3 (x2 ) + c4 x = (c1 + c2 ) + (c1 + c4 )x + (c2 + c3 )x2 + c1 x3
Solving the corresponding homogeneous system we get c1 = c2 = c3 = c4 = 0, so B is a linearly
independent set of 4 vectors in P3 (R) and hence is a basis for P3 (R) by the Basis Theorem (since
dim P3 (R) = 4).
A20 Alternate correct answers are possible. Rather than use the approach in the text, we will demonstrate
an alternate approach.
Let "e1 , "e2 , "e3 , "e4 denote the standard basis vectors for M2×2 (R). Then, clearly {"v 1 , "v 2 , "e1 , "e2 , "e3 , "e4 }
spans M2×2 (R). Consider
"0 = c1"v 1 + c2"v 2 + c3"e1 + c4"e2 + c5"e3 + c6"e4
Row reducing the corresponding coefficient matrix gives

    [1 1 1 0 0 0]   [1 0 0 0  4/5 −3/5]
    [2 2 0 1 0 0] ~ [0 1 0 0 −1/5  2/5]
    [2 3 0 0 1 0]   [0 0 1 0 −3/5  1/5]
    [1 4 0 0 0 1]   [0 0 0 1 −6/5  2/5]
If we ignore the last two columns, we see that this implies that {"v 1 , "v 2 , "e1 , "e2 } is a linearly independent set of 4 vectors in M2×2 (R) and hence is a basis for M2×2 (R) by the Basis Theorem (since
dim M2×2 (R) = 4).
A21

    { [1 0 0]   [0 1 0]   [0 0 1]   [0 0 0]   [0 0 0]   [0 0 0] }
    { [0 0 0] , [0 0 0] , [0 0 0] , [1 0 0] , [0 1 0] , [0 0 1] }
A22 Since a = −c, every polynomial in S has the form
a + bx + cx2 = a + bx − ax2 = bx + a(1 − x2 )
Thus, B = {x, 1 − x2 } spans S. Moreover, the set is clearly linearly independent and hence is a basis.
Hence, the dimension of S is 2.
A23 Every polynomial in S has the form
a + bx + cx2 + dx3 = a + bx + cx2 + (a − 2b)x3 = a(1 + x3 ) + b(x − 2x3 ) + cx2
Thus, B = {1 + x3 , x − 2x3 , x2 } spans S. Moreover, the set is clearly linearly independent and hence
is a basis. Hence, the dimension of S is 3.
A24 Every matrix in S has the form

    [a b]     [1 0]     [0 1]     [0 0]
    [0 c] = a [0 0] + b [0 0] + c [0 1]

Hence,

    B = { [1 0]   [0 1]   [0 0] }
        { [0 0] , [0 0] , [0 1] }

spans S and is clearly linearly independent, so it is a basis for S. Thus, the dimension of S is 3.
A25 If s ∈ S, then by the condition of S we have x1 + x2 + x3 = 0. Thus, x3 = −x1 − x2, so every vector in
S has the form

    [x1]   [x1       ]      [ 1]      [ 0]
    [x2] = [x2       ] = x1 [ 0] + x2 [ 1]
    [x3]   [−x1 − x2 ]      [−1]      [−1]

Hence,

    B = { [ 1]   [ 0] }
        { [ 0] , [ 1] }
        { [−1]   [−1] }

spans S and is clearly linearly independent, so it is a basis for S. Thus, the dimension of S is 2.
A26 Every polynomial p(x) in S has 2 and 3 as a root and thus has both (x − 2) and (x − 3) as factors.
Since p(x) is also of degree at most 2, by factoring we get that every polynomial in S has the form
a(x − 2)(x − 3) = a(x2 − 5x + 6)
Hence, B = {x² − 5x + 6} spans S and is clearly linearly independent, so it is a basis for S. Thus, the
dimension of S is 1.
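The factorization argument in A26 is easy to sanity-check by evaluating the basis polynomial at the required roots:

```python
# The single basis polynomial of S must vanish at x = 2 and x = 3.
p = lambda x: x**2 - 5 * x + 6

assert p(2) == 0 and p(3) == 0   # both required roots
assert p(0) == 6                 # p is not the zero polynomial, so {p} is independent
print("x^2 - 5x + 6 has roots 2 and 3")
```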
A27 Every matrix in S has the form

    [a b]   [−c −c]     [−1 −1]     [0 0]
    [c d] = [ c  d] = c [ 1  0] + d [0 1]

Hence,

    B = { [−1 −1]   [0 0] }
        { [ 1  0] , [0 1] }

spans S and is clearly linearly independent, so it is a basis for S. Thus, the dimension of S is 2.
B Homework Problems
B1 It is a basis.
B2 It is a basis.
B3 It is not a basis as it is linearly dependent and does not span R3 .
B4 It is not a basis as it does not span R3 .
B5 It is a basis.
B6 It is not a basis as it is linearly dependent.
B7 It is a basis.
B8 It is not a basis as it is linearly dependent and does not span P2 (R).
B9 It is not a basis as it is linearly dependent.
B10 It is not a basis as it does not span P2 (R).
B11 It is a basis.
B12 It is not a basis as it does not span S.
B13 It is a basis.
B14 dim Span B = 3
B15 dim Span B = 2
B16 dim Span B = 2
B17 dim Span B = 3
B18 dim Span B = 3
B19 dim Span B = 4
B20 dim Span B = 2
B21 dim Span B = 3
B22 dim Span B = 3
B23 dim Span B = 2
For Problems B24–B37, alternate correct answers are possible.
B24 (a) {(2, 1, 0), (1, 0, −1)}. (b) {(2, 1, 0), (1, 0, −1), (1, −2, 1)}.
B25 (a) {(1, 0, 3), (0, 1, 5)}. (b) {(1, 0, 3), (0, 1, 5), (3, 5, −1)}.
B26 (a) {(2, −1, 0), (1, 1, −2)}. (b) {(2, −1, 0), (1, 1, −2), (2, 4, 3)}.
B27 (a) {(2, 0, 3), (0, 1, 0)}. (b) {(2, 0, 3), (0, 1, 0), (3, 0, −2)}.
B28 (i) {(1, 0, 1, 0), (0, 1, 3, 0), (0, 0, 2, 1)}. (ii) {(1, 0, 1, 0), (0, 1, 3, 0), (0, 0, 2, 1), (1, 3, −1, 2)}.
B29 (i) {(2, 1, 0, 0), (3, 0, 1, 0), (1, 0, 0, 1)}. (ii) {(2, 1, 0, 0), (3, 0, 1, 0), (1, 0, 0, 1), (−1, 2, 3, 1)}.
B30 {1 − x² + x³, 1 + x + 2x² + x³, 1 + x³, 1 − x + 2x²}.
B31 { [1 −1; 3 2], [2 −3; 3 1], [1 0; 1 0], [1 −2; 0 0] }.
B32 {(−2, 5/3, 1)}. dim S = 1.
B33 {1 + x², x + x²}. dim S = 2.
B34 {1 + x³, x + x³, x²}. dim S = 3.
B35 { [3 1; 0 0], [2 0; 0 1] }. dim S = 2.
B36 {x − x²}. dim S = 1.
B37 { [−2 1; 0 0], [1 0; 1 0], [−1 0; 0 1] }. dim S = 3.
C Conceptual Problems
C1 If S is an n-dimensional subspace of V, then S has a basis B containing n vectors. Therefore, B is
a linearly independent set of n vectors in V and hence is a basis for V by the Basis Theorem. Thus,
S = Span B = V.
C2 First we show that {v1 , v2 + tv1 } is a spanning set for V. Let w ∈ V. Then since {v1 , v2 } is a basis for V
we can write w = a1 v1 + a2 v2 . We now want to write w as a linear combination of v1 and (v2 + tv1 ).
Observe that
w = a1 v1 + a2 v2 = a1 v1 + a2 v2 + ta2 v1 − ta2 v1
= a1 v1 − ta2 v1 + a2 v2 + ta2 v1 = (a1 − ta2 )v1 + a2 (v2 + tv1 )
Thus, {v1 , v2 + tv1 } is a spanning set for V. Hence, by the Basis Theorem, we get that {v1 , v2 + tv1 } is
a basis for V since dim V = 2.
C3 Let w = a1 v1 + a2 v2 + a3 v3 ∈ V. We can write this as
w = (a1 − ta3 )v1 + (a2 − sa3 )v2 + a3 (v3 + tv1 + sv2 )
Thus, {v1 , v2 , v3 + tv1 + sv2 } is a spanning set for V. Hence, by the Basis Theorem, we get that
{v1 , v2 , v3 + tv1 + sv2 } is a basis for V since dim V = 3.
C4 This statement is true as it is the contrapositive of the Basis Theorem.
C5 This is false. Since dim P2 (R) = 3, we have that every basis for P2 (R) has 3 vectors in it.
C6 This is false. Taking a = b = 1 and c = d = 2, gives {"v 1 + "v 2 , 2"v 1 + 2"v 2 } which is clearly linearly
dependent.
C7 This is true; it is the Basis Reduction Theorem.
C8 This is true by the Basis Expansion Theorem.
C9 This is false. The set

    { [1 0]   [2 0]   [3 0]   [4 0] }
    { [0 0] , [0 0] , [0 0] , [0 0] }

is clearly linearly dependent and hence not a basis for M2×2 (R).
C10 This is false. Observe that S = Span{(1, 1)} is a subspace of R², but no subset of the standard basis
for R² is a basis for S.
C11 This is true. Consider
"0 = c1 ("x − "y) + c2 ("x + "y) + c3 (2"z) = (c1 + c2 )"x + (−c1 + c2 )"y + 2c3"z
Since B is linearly independent, we must have
c1 + c2 = 0,
−c1 + c2 = 0,
2c3 = 0
The only solution is c1 = c2 = c3 = 0, so C is linearly independent. Since C is a set of 3 linearly
independent vectors in a vector space with dimension 3, C is a basis for V.
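The homogeneous system in C11 has only the trivial solution because its coefficient matrix is invertible; a small sketch verifying this (the matrix encodes the coefficients derived above):

```python
# Coefficients of x, y, z in c1(x - y) + c2(x + y) + c3(2z);
# columns correspond to c1, c2, c3.
M = [[1, 1, 0],    # coefficient of x
     [-1, 1, 0],   # coefficient of y
     [0, 0, 2]]    # coefficient of z

# 3x3 determinant by cofactor expansion along the first row.
det = (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
       - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
       + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))
assert det != 0   # invertible, so only the trivial solution c1 = c2 = c3 = 0
print("det =", det)
```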
C12 Let B = {v1, v2, v3} be a basis for S and C = {w1, w2, w3} be a basis for T. Consider

    c1 v1 + c2 v2 + c3 v3 + c4 w1 + c5 w2 + c6 w3 = 0

Then, we have

    c1 v1 + c2 v2 + c3 v3 = −c4 w1 − c5 w2 − c6 w3        (4)

Assume for a contradiction that S ∩ T = {0}. Since c1 v1 + c2 v2 + c3 v3 ∈ S and −c4 w1 − c5 w2 − c6 w3 ∈ T,
equation (4) gives

    c1 v1 + c2 v2 + c3 v3 = 0
    −c4 w1 − c5 w2 − c6 w3 = 0

Hence, c1 = c2 = c3 = 0 and c4 = c5 = c6 = 0. Therefore, {v1, v2, v3, w1, w2, w3} is a linearly
independent set of 6 vectors in V, but this contradicts the fact that dim V = 5. Hence, S ∩ T ≠ {0}.
C13 We see that 0 ⊙ (a, b) = (0, 1) is the zero vector and so cannot be in our basis. Thus, we consider the
vector (1, 1). Scalar multiples of (1, 1) give k ⊙ (1, 1) = (k(1)(1)^(k−1), 1^k) = (k, 1). This clearly does not
span V, so we need to pick another vector which will change the second coordinate. We pick (0, 2).
Since (0, 2) is not a scalar multiple of (1, 1), we have that B = {(1, 1), (0, 2)} is linearly independent.
Consider

    (a, b) = c1 ⊙ (1, 1) ⊕ c2 ⊙ (0, 2)
           = (c1, 1) ⊕ (0, 2^c2)
           = (c1 2^c2 + 0, 2^c2)

Thus, we require that b = 2^c2 > 0, hence c2 = log2(b), and a = c1·b, so c1 = a/b since b > 0. This is
valid for all a, b ∈ R with b > 0. Hence, B also spans V and thus is a basis. Therefore, dim V = 2.
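The coordinate formulas in C13 can be verified numerically under the exotic operations; a sketch (the sample vector (3, 8) is our arbitrary choice):

```python
# Coordinates with respect to B = {(1,1), (0,2)} under the operations
# (a,b) (+) (c,d) = (ad + bc, bd) and t (.) (a,b) = (t*a*b**(t-1), b**t).
from math import log2, isclose

oplus = lambda u, v: (u[0] * v[1] + u[1] * v[0], u[1] * v[1])
odot = lambda t, u: (t * u[0] * u[1] ** (t - 1), u[1] ** t)

a, b = 3.0, 8.0                 # sample vector with b > 0
c1, c2 = a / b, log2(b)         # the coordinates derived in C13
result = oplus(odot(c1, (1.0, 1.0)), odot(c2, (0.0, 2.0)))
assert isclose(result[0], a) and isclose(result[1], b)
print("coordinates:", (c1, c2))
```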
C14 We see that 0 ⊙ 1 = 1^0 = 1. Thus, 0 = 1, so 1 cannot be in our basis. Thus, we consider the set {2}.
For any x ∈ V, we see that

    log2(x) ⊙ 2 = 2^(log2(x)) = x

Thus, {2} spans V and is clearly linearly independent (since 2 is not the zero vector). Hence, it is a
basis for V and dim V = 1.
C15 We observe that B = {1, i} spans C since every complex number can be written as a + bi. Moreover, B is linearly independent since there is no real number t such that ti = 1. Thus, B is a basis for C and hence C is a 2-dimensional real vector space. (We will see in Chapter 9 that C is a 1-dimensional complex vector space.)
C16 First observe that for any (a, 1 + a) ∈ V we have 0_V = 0(a, 1 + a) = (0a, 1 + 0a) = (0, 1). Hence, (0, 1) is the zero vector. Now, for any (a, 1 + a) ∈ V we have

(a, 1 + a) = (a1, 1 + a1) = a(1, 1 + 1) = a(1, 2)

So B = {(1, 2)} spans V. Moreover, B is clearly linearly independent (since (1, 2) is not the zero vector) and hence is a basis for V.
Section 4.4
A Practice Problems
A1 (a) It is easy to verify that the two vectors satisfy the equation of the plane and hence are in the plane. Also, they are clearly linearly independent, so we have a set of two linearly independent vectors in a two-dimensional vector space, so the set is a basis.

(b) The vectors (3, 2, 1) and (5, 2, 3) do not satisfy the equation of the plane, so they are not in the plane. (3, 2, 2) satisfies the equation of the plane, so it is in the plane. To find its B-coordinates, we need to find t1 and t2 such that

t1 (1, 2, 0) + t2 (0, 2, −1) = (3, 2, 2)

Row reducing the corresponding augmented matrix gives

[ 1  0 | 3 ]   [ 1 0 |  3 ]
[ 2  2 | 2 ] ~ [ 0 1 | −2 ]
[ 0 −1 | 2 ]   [ 0 0 |  0 ]

Thus, [(3, 2, 2)]_B = (3, −2).
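Coordinate computations like A1(b) are small linear systems, so they can be checked numerically. A sketch using the plane basis that appears in A1(b):

```python
import numpy as np

# Basis vectors of the plane from A1(b) as columns; target vector (3, 2, 2).
A = np.array([[1.0, 0.0],
              [2.0, 2.0],
              [0.0, -1.0]])
b = np.array([3.0, 2.0, 2.0])

# Three equations in two unknowns, but the system is consistent since
# (3, 2, 2) lies in the plane, so least squares returns the exact coordinates.
t, *_ = np.linalg.lstsq(A, b, rcond=None)
print(t)
```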
A2 (a) To find the coordinates, we need to solve the two systems

s1(1 + x²) + s2(1 − x + 2x²) + s3(−1 − x + x²) = 4 − 2x + 7x²
t1(1 + x²) + t2(1 − x + 2x²) + t3(−1 − x + x²) = −2 − 2x + 3x²

Row reducing the doubly augmented matrix gives

[ 1  1 −1 |  4 −2 ]   [ 1 0 0 | 4  2 ]
[ 0 −1 −1 | −2 −2 ] ~ [ 0 1 0 | 1 −1 ]
[ 1  2  1 |  7  3 ]   [ 0 0 1 | 1  3 ]

Hence, [q]_B = (4, 1, 1) and [r]_B = (2, −1, 3).

(b) We need to solve

t1(1 + x²) + t2(1 − x + 2x²) + t3(−1 − x + x²) = 2 − 4x + 10x²

Row reducing the augmented matrix gives

[ 1  1 −1 |  2 ]   [ 1 0 0 | 6 ]
[ 0 −1 −1 | −4 ] ~ [ 0 1 0 | 0 ]
[ 1  2  1 | 10 ]   [ 0 0 1 | 4 ]

Hence, [2 − 4x + 10x²]_B = (6, 0, 4). We then have

[4 − 2x + 7x²]_B + [−2 − 2x + 3x²]_B = (4, 1, 1) + (2, −1, 3) = (6, 0, 4) = [2 − 4x + 10x²]_B = [(4 − 2) + (−2 − 2)x + (7 + 3)x²]_B
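Identifying each polynomial with its coefficient vector turns A2 into a single matrix equation with two right-hand sides, which can be solved in one call. A sketch:

```python
import numpy as np

# Columns of A are the basis polynomials of A2 as coefficient vectors
# (constant, x, x^2): 1 + x^2, 1 - x + 2x^2, -1 - x + x^2.
A = np.array([[1.0, 1.0, -1.0],
              [0.0, -1.0, -1.0],
              [1.0, 2.0, 1.0]])

# Two right-hand sides solved at once: q = 4 - 2x + 7x^2, r = -2 - 2x + 3x^2.
rhs = np.array([[4.0, -2.0],
                [-2.0, -2.0],
                [7.0, 3.0]])
coords = np.linalg.solve(A, rhs)
print(coords[:, 0], coords[:, 1])  # [q]_B and [r]_B
```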
A3 The coordinates of x and y with respect to B are determined by finding s1, s2, t1, and t2 such that

s1 (1, 3) + s2 (2, −5) = (5, −7)    and    t1 (1, 3) + t2 (2, −5) = (−7, 8)

Since both systems have the same coefficient matrix, we row reduce the doubly augmented matrix to get

[ 1  2 |  5 −7 ]   [ 1 0 | 1 −19/11 ]
[ 3 −5 | −7  8 ] ~ [ 0 1 | 2 −29/11 ]

Thus, we see that s1 = 1, s2 = 2, t1 = −19/11, and t2 = −29/11. Therefore, [x]_B = (1, 2) and [y]_B = (−19/11, −29/11).
A4 For x we recognize this as the same problem as in A3 except the order of the vectors in B is different. Hence, we have [x]_B = (2, 1). The coordinates of y with respect to B are determined by finding t1 and t2 such that

t1 (2, −5) + t2 (1, 3) = (9, −5)

We row reduce the corresponding augmented matrix to get

[  2 1 |  9 ]   [ 1 0 | 32/11 ]
[ −5 3 | −5 ] ~ [ 0 1 | 35/11 ]

Thus, t1 = 32/11 and t2 = 35/11. Therefore, [y]_B = (32/11, 35/11).
A5 The coordinates of x and y with respect to B are determined by finding s1, s2, s3, t1, t2, and t3 such that

s1 (1, 0, 1) + s2 (0, 1, 1) + s3 (1, 1, 0) = (8, −7, 3)
t1 (1, 0, 1) + t2 (0, 1, 1) + t3 (1, 1, 0) = (3, −5, 2)

Since both systems have the same coefficient matrix, we row reduce the doubly augmented matrix to get

[ 1 0 1 |  8  3 ]   [ 1 0 0 |  9  5 ]
[ 0 1 1 | −7 −5 ] ~ [ 0 1 0 | −6 −3 ]
[ 1 1 0 |  3  2 ]   [ 0 0 1 | −1 −2 ]

Thus, s1 = 9, s2 = −6, s3 = −1, t1 = 5, t2 = −3, and t3 = −2. Therefore, [x]_B = (9, −6, −1) and [y]_B = (5, −3, −2).
A6 The coordinates of x and y with respect to B are determined by finding s1, s2, t1, and t2 such that

s1 (1, 0, 1, 1) + s2 (−1, 1, −1, 0) = (5, −2, 5, 3)

and

t1 (1, 0, 1, 1) + t2 (−1, 1, −1, 0) = (−1, 3, −1, 2)

Since both systems have the same coefficient matrix, we row reduce the doubly augmented matrix to get

[ 1 −1 |  5 −1 ]   [ 1 0 |  3 2 ]
[ 0  1 | −2  3 ] ~ [ 0 1 | −2 3 ]
[ 1 −1 |  5 −1 ]   [ 0 0 |  0 0 ]
[ 1  0 |  3  2 ]   [ 0 0 |  0 0 ]

Thus, we see that s1 = 3, s2 = −2, t1 = 2, and t2 = 3. Therefore, [x]_B = (3, −2) and [y]_B = (2, 3).
A7 We need to find s1, s2, t1, and t2 such that

s1(1 + 3x) + s2(2 − 5x) = 5 − 7x
t1(1 + 3x) + t2(2 − 5x) = 1

Row reducing the corresponding doubly augmented matrix gives

[ 1  2 |  5 1 ]   [ 1 0 | 1 5/11 ]
[ 3 −5 | −7 0 ] ~ [ 0 1 | 2 3/11 ]

Thus, [p]_B = (1, 2) and [q]_B = (5/11, 3/11).
NOTE: Compare the result of [p]_B to [x]_B in A3.
A8 We need to find s1, s2, s3, t1, t2, and t3 such that

s1(1 + x + x²) + s2(1 + 3x + 2x²) + s3(4 + x²) = −2 + 8x + 5x²

and

t1(1 + x + x²) + t2(1 + 3x + 2x²) + t3(4 + x²) = −4 + 8x + 4x²

Row reducing the corresponding doubly augmented matrix gives

[ 1 1 4 | −2 −4 ]   [ 1 0 0 |  5  2 ]
[ 1 3 0 |  8  8 ] ~ [ 0 1 0 |  1  2 ]
[ 1 2 1 |  5  4 ]   [ 0 0 1 | −2 −2 ]

Thus, [p]_B = (5, 1, −2) and [q]_B = (2, 2, −2).
A9 We need to find s1, s2, s3, t1, t2, and t3 such that

s1(1 + x²) + s2(1 + x + 2x² + x³) + s3(x − x² + x³) = 2 + x − 5x² + x³

and

t1(1 + x²) + t2(1 + x + 2x² + x³) + t3(x − x² + x³) = 1 + x + 4x² + x³

Row reducing the corresponding doubly augmented matrix gives

[ 1 1  0 |  2 1 ]   [ 1 0 0 |  5 −1 ]
[ 0 1  1 |  1 1 ] ~ [ 0 1 0 | −3  2 ]
[ 1 2 −1 | −5 4 ]   [ 0 0 1 |  4 −1 ]
[ 0 1  1 |  1 1 ]   [ 0 0 0 |  0  0 ]

Thus, [p]_B = (5, −3, 4) and [q]_B = (−1, 2, −1).
A10 We need to find s1, s2, s3, t1, t2, and t3 such that

s1 [1 1; 1 0] + s2 [0 1; 1 1] + s3 [2 0; 0 −1] = [0 1; 1 2]

and

t1 [1 1; 1 0] + t2 [0 1; 1 1] + t3 [2 0; 0 −1] = [−4 1; 1 4]

Row reducing the corresponding doubly augmented matrix gives

[ 1 0  2 | 0 −4 ]   [ 1 0 0 | −2 −2 ]
[ 1 1  0 | 1  1 ] ~ [ 0 1 0 |  3  3 ]
[ 1 1  0 | 1  1 ]   [ 0 0 1 |  1 −1 ]
[ 0 1 −1 | 2  4 ]   [ 0 0 0 |  0  0 ]

Thus, [A]_B = (−2, 3, 1) and [B]_B = (−2, 3, −1).
A11 We need to find s1, s2, t1, and t2 such that

s1 [1 1 0; 0 1 1] + s2 [0 2 −1; 1 3 −1] = [1 3 −1; 1 4 0]

and

t1 [1 1 0; 0 1 1] + t2 [0 2 −1; 1 3 −1] = [3 −1 2; −2 −3 5]

Row reducing the corresponding doubly augmented matrix gives

[ 1  0 |  1  3 ]   [ 1 0 | 1  3 ]
[ 1  2 |  3 −1 ]   [ 0 1 | 1 −2 ]
[ 0 −1 | −1  2 ] ~ [ 0 0 | 0  0 ]
[ 0  1 |  1 −2 ]   [ 0 0 | 0  0 ]
[ 1  3 |  4 −3 ]   [ 0 0 | 0  0 ]
[ 1 −1 |  0  5 ]   [ 0 0 | 0  0 ]

Thus, [A]_B = (1, 1) and [B]_B = (3, −2).
A12 We need to determine if there exist s1, s2, s3, t1, t2, t3 such that

s1 [1 2; 1 3] + s2 [2 1; −1 2] + s3 [−2 2; 4 10] = [−1 1; 2 7]

and

t1 [1 2; 1 3] + t2 [2 1; −1 2] + t3 [−2 2; 4 10] = [6 3; −3 −2]

Row reducing the corresponding augmented matrix gives

[ 1  2 −2 | −1  6 ]   [ 1 0 0 | −1/2    1 ]
[ 2  1  2 |  1  3 ] ~ [ 0 1 0 |  1/2    2 ]
[ 1 −1  4 |  2 −3 ]   [ 0 0 1 |  3/4 −1/2 ]
[ 3  2 10 |  7 −2 ]   [ 0 0 0 |    0    0 ]

Hence, [A]_B = (−1/2, 1/2, 3/4) and [B]_B = (1, 2, −1/2).
A13 We need to determine if there exist s1, s2, s3, t1, t2, t3 such that

s1 [1 3; 2 3] + s2 [1 −2; 1 2] + s3 [0 1; −1 1] = [3 −5; 8 3]

and

t1 [1 3; 2 3] + t2 [1 −2; 1 2] + t3 [0 1; −1 1] = [0 3; 3 −1]

Row reducing the corresponding augmented matrix gives

[ 1  1  0 |  3  0 ]   [ 1 0 0 |  1  1 ]
[ 3 −2  1 | −5  3 ] ~ [ 0 1 0 |  2 −1 ]
[ 2  1 −1 |  8  3 ]   [ 0 0 1 | −4 −2 ]
[ 3  2  1 |  3 −1 ]   [ 0 0 0 |  0  0 ]

Hence, [A]_B = (1, 2, −4) and [B]_B = (1, −1, −2).
A14 To find the change of coordinates matrix Q from B-coordinates to S-coordinates we need to find the coordinates of the vectors in B with respect to the standard basis S = {(1, 0), (0, 1)}. We get

Q = [ [(1, 1)]_S  [(0, 2)]_S ] = [ 1 0; 1 2 ]

To find the change of coordinates matrix from S-coordinates to B-coordinates, we need to find the coordinates of the vectors in S with respect to the basis B. Observe that

1(1, 1) − (1/2)(0, 2) = (1, 0)
0(1, 1) + (1/2)(0, 2) = (0, 1)

Thus, the change of coordinates matrix P from S-coordinates to B-coordinates is P = [ 1 0; −1/2 1/2 ].
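Since P converts coordinates in the opposite direction to Q, P is the inverse of Q, so the answer in A14 can be checked by inverting Q directly:

```python
import numpy as np

# Columns of Q are the basis vectors of B = {(1, 1), (0, 2)} from A14.
Q = np.array([[1.0, 0.0],
              [1.0, 2.0]])
P = np.linalg.inv(Q)  # change of coordinates matrix in the other direction
print(P)
```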
A15 To find the change of coordinates matrix Q from B-coordinates to S-coordinates we need to find the coordinates of the vectors in B with respect to the standard basis S = {(1, 0, 0), (0, 1, 0), (0, 0, 1)}. We get

Q = [ [(3, 4, 1)]_S  [(0, 1, 0)]_S  [(−2, −3, 3)]_S ] =
[ 3 0 −2 ]
[ 4 1 −3 ]
[ 1 0  3 ]

To find the change of coordinates matrix from S-coordinates to B-coordinates, we need to find the coordinates of the vectors in S with respect to the basis B. So, we need to solve the systems of equations

a1 (3, 4, 1) + a2 (0, 1, 0) + a3 (−2, −3, 3) = (1, 0, 0)
b1 (3, 4, 1) + b2 (0, 1, 0) + b3 (−2, −3, 3) = (0, 1, 0)
c1 (3, 4, 1) + c2 (0, 1, 0) + c3 (−2, −3, 3) = (0, 0, 1)

Row reducing the triple augmented matrix gives

[ 3 0 −2 | 1 0 0 ]   [ 1 0 0 |   3/11 0 2/11 ]
[ 4 1 −3 | 0 1 0 ] ~ [ 0 1 0 | −15/11 1 1/11 ]
[ 1 0  3 | 0 0 1 ]   [ 0 0 1 |  −1/11 0 3/11 ]

Thus, the change of coordinates matrix P from S-coordinates to B-coordinates is

P = [   3/11 0 2/11 ]
    [ −15/11 1 1/11 ]
    [  −1/11 0 3/11 ]
A16 To find the change of coordinates matrix Q from B-coordinates to S-coordinates we need to find the coordinates of the vectors in B with respect to the standard basis S = {1, x, x²}. We get

Q = [ [1]_S  [1 − 2x]_S  [1 − 4x + 4x²]_S ] =
[ 1  1  1 ]
[ 0 −2 −4 ]
[ 0  0  4 ]

To find the change of coordinates matrix from S-coordinates to B-coordinates, we need to find the coordinates of the vectors in S with respect to the basis B. So, we need to solve the systems of equations

a1(1) + a2(1 − 2x) + a3(1 − 4x + 4x²) = 1
b1(1) + b2(1 − 2x) + b3(1 − 4x + 4x²) = x
c1(1) + c2(1 − 2x) + c3(1 − 4x + 4x²) = x²

Row reducing the triple augmented matrix gives

[ 1  1  1 | 1 0 0 ]   [ 1 0 0 | 1  1/2  1/4 ]
[ 0 −2 −4 | 0 1 0 ] ~ [ 0 1 0 | 0 −1/2 −1/2 ]
[ 0  0  4 | 0 0 1 ]   [ 0 0 1 | 0    0  1/4 ]

Thus, the change of coordinates matrix P from S-coordinates to B-coordinates is

P = [ 1  1/2  1/4 ]
    [ 0 −1/2 −1/2 ]
    [ 0    0  1/4 ]
A17 To find the change of coordinates matrix Q from B-coordinates to S-coordinates we need to find the coordinates of the vectors in B with respect to the standard basis S = {1, x, x²}. We get

Q = [ [1 + 2x + x²]_S  [x + x²]_S  [1 + 3x]_S ] =
[ 1 0 1 ]
[ 2 1 3 ]
[ 1 1 0 ]

To find the change of coordinates matrix from S-coordinates to B-coordinates, we need to find the coordinates of the vectors in S with respect to the basis B. So, we need to solve the systems of equations

a1(1 + 2x + x²) + a2(x + x²) + a3(1 + 3x) = 1
b1(1 + 2x + x²) + b2(x + x²) + b3(1 + 3x) = x
c1(1 + 2x + x²) + c2(x + x²) + c3(1 + 3x) = x²

Row reducing the triple augmented matrix gives

[ 1 0 1 | 1 0 0 ]   [ 1 0 0 |  3/2 −1/2  1/2 ]
[ 2 1 3 | 0 1 0 ] ~ [ 0 1 0 | −3/2  1/2  1/2 ]
[ 1 1 0 | 0 0 1 ]   [ 0 0 1 | −1/2  1/2 −1/2 ]

Thus, the change of coordinates matrix P from S-coordinates to B-coordinates is

P = [  3/2 −1/2  1/2 ]
    [ −3/2  1/2  1/2 ]
    [ −1/2  1/2 −1/2 ]
A18 To find the change of coordinates matrix Q from B-coordinates to S-coordinates we need to find the coordinates of the vectors in B with respect to the standard basis S = {1, x, x²}. We get

Q = [ [1 − 2x + 5x²]_S  [1 − 2x²]_S  [x + x²]_S ] =
[  1  1 0 ]
[ −2  0 1 ]
[  5 −2 1 ]

To find the change of coordinates matrix from S-coordinates to B-coordinates, we need to find the coordinates of the vectors in S with respect to the basis B. So, we need to solve the systems of equations

a1(1 − 2x + 5x²) + a2(1 − 2x²) + a3(x + x²) = 1
b1(1 − 2x + 5x²) + b2(1 − 2x²) + b3(x + x²) = x
c1(1 − 2x + 5x²) + c2(1 − 2x²) + c3(x + x²) = x²

Row reducing the triple augmented matrix gives

[  1  1 0 | 1 0 0 ]   [ 1 0 0 | 2/9 −1/9  1/9 ]
[ −2  0 1 | 0 1 0 ] ~ [ 0 1 0 | 7/9  1/9 −1/9 ]
[  5 −2 1 | 0 0 1 ]   [ 0 0 1 | 4/9  7/9  2/9 ]

Thus, the change of coordinates matrix P from S-coordinates to B-coordinates is

P = [ 2/9 −1/9  1/9 ]
    [ 7/9  1/9 −1/9 ]
    [ 4/9  7/9  2/9 ]
A19 To find the change of coordinates matrix Q from B-coordinates to S-coordinates we need to find the coordinates of the vectors in B with respect to the standard basis S = {1, x, x², x³}. We get

Q = [ [x²]_S  [x³]_S  [x]_S  [1]_S ] =
[ 0 0 0 1 ]
[ 0 0 1 0 ]
[ 1 0 0 0 ]
[ 0 1 0 0 ]

To find the change of coordinates matrix from S-coordinates to B-coordinates, we need to find the coordinates of the vectors in S with respect to the basis B. We get

P = [ [1]_B  [x]_B  [x²]_B  [x³]_B ] =
[ 0 0 1 0 ]
[ 0 0 0 1 ]
[ 0 1 0 0 ]
[ 1 0 0 0 ]
A20 To find the change of coordinates matrix Q from B-coordinates to S-coordinates we need to find the coordinates of the vectors in B with respect to the standard basis S = {[1 0; 0 0], [0 1; 0 0], [0 0; 0 1]}. We have

[1 −1; 0 −1] = 1[1 0; 0 0] + (−1)[0 1; 0 0] + (−1)[0 0; 0 1]
[0 −4; 0 −1] = 0[1 0; 0 0] + (−4)[0 1; 0 0] + (−1)[0 0; 0 1]
[2 1; 0 1] = 2[1 0; 0 0] + 1[0 1; 0 0] + 1[0 0; 0 1]

Hence,

Q = [ [1 −1; 0 −1]_S  [0 −4; 0 −1]_S  [2 1; 0 1]_S ] =
[  1  0 2 ]
[ −1 −4 1 ]
[ −1 −1 1 ]

To find the change of coordinates matrix from S-coordinates to B-coordinates, we need to find the coordinates of the vectors in S with respect to the basis B. So, we need to solve the systems of equations

a1[1 −1; 0 −1] + a2[0 −4; 0 −1] + a3[2 1; 0 1] = [1 0; 0 0]
b1[1 −1; 0 −1] + b2[0 −4; 0 −1] + b3[2 1; 0 1] = [0 1; 0 0]
c1[1 −1; 0 −1] + c2[0 −4; 0 −1] + c3[2 1; 0 1] = [0 0; 0 1]

Row reducing the triple augmented matrix gives

[  1  0 2 | 1 0 0 ]   [ 1 0 0 | 1/3  2/9 −8/9 ]
[ −1 −4 1 | 0 1 0 ] ~ [ 0 1 0 |   0 −1/3  1/3 ]
[ −1 −1 1 | 0 0 1 ]   [ 0 0 1 | 1/3 −1/9  4/9 ]

So, the change of coordinates matrix P from S-coordinates to B-coordinates is

P = [ 1/3  2/9 −8/9 ]
    [   0 −1/3  1/3 ]
    [ 1/3 −1/9  4/9 ]
A21 To find the change of coordinates matrix from B-coordinates to C-coordinates, we need to determine the C-coordinates of the vectors in B. That is, we need to find c1, c2, d1, d2 such that

c1 (2, 1) + c2 (5, 2) = (3, 1),    d1 (2, 1) + d2 (5, 2) = (5, 3)

We row reduce the corresponding doubly augmented matrix to get

[ 2 5 | 3 5 ]   [ 1 0 | −1  5 ]
[ 1 2 | 1 3 ] ~ [ 0 1 |  1 −1 ]

Thus, Q = [ −1 5; 1 −1 ].

To find the change of coordinates matrix from C-coordinates to B-coordinates, we need to determine the B-coordinates of the vectors in C. That is, we need to find c1, c2, d1, d2 such that

c1 (3, 1) + c2 (5, 3) = (2, 1),    d1 (3, 1) + d2 (5, 3) = (5, 2)

We row reduce the corresponding doubly augmented matrix to get

[ 3 5 | 2 5 ]   [ 1 0 | 1/4 5/4 ]
[ 1 3 | 1 2 ] ~ [ 0 1 | 1/4 1/4 ]

Thus, P = [ 1/4 5/4; 1/4 1/4 ]. It is easy to verify that PQ = I.
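The closing check PQ = I in A21 takes one line numerically:

```python
import numpy as np

# Change of coordinates matrices from A21: Q goes B -> C, P goes C -> B.
Q = np.array([[-1.0, 5.0],
              [1.0, -1.0]])
P = np.array([[0.25, 1.25],
              [0.25, 0.25]])
print(P @ Q)  # identity matrix, confirming P and Q invert each other
```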
A22 To find the change of coordinates matrix from B-coordinates to C-coordinates, we need to determine the C-coordinates of the vectors in B. That is, we need to find c1, c2, d1, d2 such that

c1 (−1, 1) + c2 (5, −4) = (1, 3),    d1 (−1, 1) + d2 (5, −4) = (2, 1)

We row reduce the corresponding doubly augmented matrix to get

[ −1  5 | 1 2 ]   [ 1 0 | 19 13 ]
[  1 −4 | 3 1 ] ~ [ 0 1 |  4  3 ]

Hence, Q = [ 19 13; 4 3 ].

To find the change of coordinates matrix from C-coordinates to B-coordinates, we need to determine the B-coordinates of the vectors in C. That is, we need to find c1, c2, d1, d2 such that

c1 (1, 3) + c2 (2, 1) = (−1, 1),    d1 (1, 3) + d2 (2, 1) = (5, −4)

We row reduce the corresponding doubly augmented matrix to get

[ 1 2 | −1  5 ]   [ 1 0 |  3/5 −13/5 ]
[ 3 1 |  1 −4 ] ~ [ 0 1 | −4/5  19/5 ]

Thus, P = [ 3/5 −13/5; −4/5 19/5 ]. It is easy to verify that PQ = I.
A23 To find Q we find the C-coordinates of the vectors in B by row reducing the corresponding triply augmented matrix to get

[ 1  1  1 | 1 −1  1 ]   [ 1 0 0 | 1/2 −1/2    1 ]
[ 1  3 −1 | 0  1 −2 ] ~ [ 0 1 0 |   0  1/4 −3/4 ]
[ 1 −1 −1 | 0  0  1 ]   [ 0 0 1 | 1/2 −3/4  3/4 ]

Thus,

Q = [ 1/2 −1/2    1 ]
    [   0  1/4 −3/4 ]
    [ 1/2 −3/4  3/4 ]

To find P we need to find the B-coordinates of the vectors in C. Thus, we need to solve the systems

b1(1) + b2(−1 + x) + b3(1 − 2x + x²) = 1 + x + x²
c1(1) + c2(−1 + x) + c3(1 − 2x + x²) = 1 + 3x − x²
d1(1) + d2(−1 + x) + d3(1 − 2x + x²) = 1 − x − x²

Row reducing the corresponding triply augmented matrix gives

[ 1 −1  1 | 1  1  1 ]   [ 1 0 0 | 3  3 −1 ]
[ 0  1 −2 | 1  3 −1 ] ~ [ 0 1 0 | 3  1 −3 ]
[ 0  0  1 | 1 −1 −1 ]   [ 0 0 1 | 1 −1 −1 ]

Thus,

P = [ 3  3 −1 ]
    [ 3  1 −3 ]
    [ 1 −1 −1 ]

It is easy to verify that PQ = I.
B Homework Problems
B1 (a) Show that it is linearly independent and spans the plane.
(b) (i) [x1]_B = (−2, 3). (ii) x2 is not in the plane. (iii) [x3]_B = (2, −1).

B2 (a) Show that it is linearly independent and spans the plane.
(b) (i) [x1]_B = (3, −1). (ii) [x2]_B = (3, 2). (iii) [x3]_B = (−1/3, 1).

B3 (a) (i) [p]_B = (−4, 5, 3). (ii) [q]_B = (−13, 17, 7). (iii) [r]_B = (7, −9, −7).
(b) [2 − 4x + 10x²]_B = (−6, 8, 0). We have (−13, 17, 7) + (7, −9, −7) = (−6, 8, 0).

B4 [x]_B = (2, 1), [y]_B = (−3, 5)
B5 [x]_B = (2, 1, −1), [y]_B = (3, 4, −2)
B6 [p]_B = (5, −3, 2), [q]_B = (1, −1, 2)
B7 [p]_B = (1/2, 2, −1/2), [q]_B = (8, −5, 6)
B8 [A]_B = (−9, 3, 3), [B]_B = (5, −3, 7/2)
B9 [A]_B = (2, 1, 2, 2), [B]_B = (2, −3/2, 1, 5/2)
B10 [A]_B = (−4, 3), [B]_B = (3, 1)
B11 [A]_B = (5, −4, −1), [B]_B = (3, −5, 6)
B12 [p]_B = (3/2, 1/2, 1/2), [q]_B = (−1/2, 5/2, 3/2)
B13 [A]_B = (1, 3), [B]_B = (−3, 2)
B14 [A]_B = (1, 2, −3), [B]_B = (−1, 1/2, 1/2)
B15 [A]_B = (2, −1, 3), [B]_B = (3, −1, 2)
B16 [A]_B = (3, 2, −3), [B]_B = (1/2, 2, 1)
B17 The change of coordinates matrix Q from B-coordinates to S-coordinates is Q = [ 1 1; 1 −1 ].
The change of coordinates matrix P from S-coordinates to B-coordinates is P = [ 1/2 1/2; 1/2 −1/2 ].

B18 The change of coordinates matrix Q from B-coordinates to S-coordinates is Q = [ 2 5; 1 3 ].
The change of coordinates matrix P from S-coordinates to B-coordinates is P = [ 3 −5; −1 2 ].

B19 The change of coordinates matrix Q from B-coordinates to S-coordinates is Q = [ 3 4 5; 2 2 −1; 1 1 −1 ].
The change of coordinates matrix P from S-coordinates to B-coordinates is P = [ −1 9 −14; 1 −8 13; 0 1 −2 ].

B20 The change of coordinates matrix Q from B-coordinates to S-coordinates is Q = [ 1 −2 4; 0 1 −4; 0 0 1 ].
The change of coordinates matrix P from S-coordinates to B-coordinates is P = [ 1 2 4; 0 1 4; 0 0 1 ].

B21 The change of coordinates matrix Q from B-coordinates to S-coordinates is Q = [ 1 1 2; 1 0 2; 1 −2 1 ].
The change of coordinates matrix P from S-coordinates to B-coordinates is P = [ 4 −5 2; 1 −1 0; −2 3 −1 ].

B22 The change of coordinates matrix Q from B-coordinates to S-coordinates is Q = [ 2 −4; 3 5 ].
The change of coordinates matrix P from S-coordinates to B-coordinates is P = [ 5/22 2/11; −3/22 1/11 ].

B23 The change of coordinates matrix from B-coordinates to C-coordinates is Q = [ 3 2; 2 1 ].
The change of coordinates matrix from C-coordinates to B-coordinates is P = [ −1 2; 2 −3 ].

B24 The change of coordinates matrix from B-coordinates to C-coordinates is Q = [ 3/4 1/6; 1/4 1/6 ].
The change of coordinates matrix from C-coordinates to B-coordinates is P = [ 2 −2; −3 9 ].

B25 The change of coordinates matrix from B-coordinates to C-coordinates is Q = [ 1 1 −1; 0 1 5; −1 −1 0 ].
The change of coordinates matrix from C-coordinates to B-coordinates is P = [ −5 −1 −6; 5 1 5; −1 0 −1 ].
C Conceptual Problems
C1 Yes, vi = wi for all i. The easiest way to see this is to consider special cases. The B-coordinates of v1 are given by [v1]_B = (1, 0, . . . , 0). We are told that [v1]_B = [v1]_C, hence [v1]_C = (1, 0, . . . , 0). But this means that

v1 = 1w1 + 0w2 + · · · + 0wn = w1

Similarly, vi = wi for all i.
C2 Write the new basis in terms of the old, and make these the columns of the matrix: Let the vectors in B be denoted w1, w2, w3, w4. Then we have

w1 = 0v3 + 0v2 + 0v4 + 1v1
w2 = 0v3 + 1v2 + 0v4 + 0v1
w3 = 1v3 + 0v2 + 0v4 + 0v1
w4 = 0v3 + 0v2 + 1v4 + 0v1

Hence,

P = [ 0 0 1 0 ]
    [ 0 1 0 0 ]
    [ 0 0 0 1 ]
    [ 1 0 0 0 ]
C3 (a) Let x = (3, 5). Then, we have [(3, 5)]_B = [L((3, 5))]_C.
Observe that (3, 5) = 1(2, 3) + 1(1, 2), hence [(3, 5)]_B = (1, 1).
Thus, [L((3, 5))]_C = (1, 1), which implies that L((3, 5)) = 1(2, 1) + 1(1, 1) = (3, 2).

(b) We repeat what we did in (a), but in general. We first need to find c1 and c2 such that

(x1, x2) = c1 (2, 3) + c2 (1, 2) = (2c1 + c2, 3c1 + 2c2)

Row reducing the corresponding augmented matrix gives

[ 2 1 | x1 ]   [ 1 0 |  2x1 − x2 ]
[ 3 2 | x2 ] ~ [ 0 1 | −3x1 + 2x2 ]

Hence,

L(x) = (2x1 − x2)(2, 1) + (−3x1 + 2x2)(1, 1) = (x1, −x1 + x2)
C4 Let Q be the change of coordinates matrix from B-coordinates to C-coordinates. By definition of the
change of coordinates matrix we have that [x]B = P[x]C and [x]C = Q[x]B for any x ∈ V. Hence, we
get
[x]B = P[x]C = PQ[x]B
for all [x]B ∈ Rn . Thus, PQ = I by the Matrices Equal Theorem. Therefore, P is invertible and
Q = P−1 by Theorem 3.5.2.
C5 Consider

c1[v1]_B + · · · + cn[vn]_B = 0

Then, by Theorem 4.4.1, we get

0 = [c1v1 + · · · + cnvn]_B

Hence, by definition of B-coordinates we get that

c1v1 + · · · + cnvn = 0v1 + · · · + 0vn = 0

Thus, since B is linearly independent, this implies that c1 = · · · = cn = 0. Consequently, {[v1]_B, . . . , [vn]_B} is a linearly independent set of n vectors in Rn. Hence, by the Basis Theorem it is a basis for Rn.
C6 This is true by definition of the coordinate vector.
C7 This is false. For example, take B to be the standard basis for R² and take C = {(1, 1), (0, 1)}. Then, for v = (1, 1) we have [v]_B = (1, 1) but [v]_C = (1, 0).
C8 This is true. We have

0 = [v]_B − [w]_B = [v − w]_B    by Theorem 4.4.1

But, the only vector that has coordinates all 0 is the zero vector. Thus, v − w = 0, and so v = w.
 
C9 This is true. Let B = {v1, . . . , vn} and x = (x1, . . . , xn). Then, take v = x1v1 + · · · + xnvn. Hence, by definition, we have [v]_B = x.
C10 (a) By definition, we have

P = [ [v1]_S · · · [vn]_S ] = [ v1 · · · vn ]

(b) By definition, we have x = [x]_S = P[x]_B and that P is invertible by Theorem 4.4.2. Thus, multiplying both sides by P⁻¹ on the left gives

P⁻¹x = P⁻¹P[x]_B = I[x]_B = [x]_B

as required.

(c) B = [ [Av1]_B · · · [Avn]_B ]
      = [ P⁻¹Av1 · · · P⁻¹Avn ]
      = P⁻¹A[ v1 · · · vn ]
      = P⁻¹AP
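Part (c) says the matrix of the map x ↦ Ax with respect to the basis {v1, . . . , vn} is P⁻¹AP. A numerical sketch, with A and the basis chosen arbitrarily for illustration:

```python
import numpy as np

# Illustration of C10(c): the B-matrix of x -> Ax is P^{-1} A P, where the
# columns of P are the basis vectors. A and the basis here are example choices.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
P = np.array([[1.0, 1.0],
              [1.0, 2.0]])  # columns are the basis vectors
B = np.linalg.inv(P) @ A @ P

# Column j of B holds the B-coordinates of A applied to the j-th basis vector.
for j in range(2):
    print(B[:, j], np.linalg.inv(P) @ (A @ P[:, j]))
```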
Section 4.5
A Practice Problems
 
 
A1 For any x = (x1, x2, x3) and y = (y1, y2, y3) in R³, and s, t ∈ R we have

L(sx + ty) = L(sx1 + ty1, sx2 + ty2, sx3 + ty3)
           = (sx1 + ty1 + sx2 + ty2, sx1 + ty1 + sx2 + ty2 + sx3 + ty3)
           = s(x1 + x2, x1 + x2 + x3) + t(y1 + y2, y1 + y2 + y3)
           = sL(x) + tL(y)

Hence, L is linear.
 
 
A2 For any x = (x1, x2, x3) and y = (y1, y2, y3) in R³, and s, t ∈ R we have

L(sx + ty) = L(sx1 + ty1, sx2 + ty2, sx3 + ty3)
           = (sx1 + ty1 + sx2 + ty2) + (sx1 + ty1 + sx2 + ty2 + sx3 + ty3)x
           = s[(x1 + x2) + (x1 + x2 + x3)x] + t[(y1 + y2) + (y1 + y2 + y3)x]
           = sL(x) + tL(y)

Hence, L is linear.
A3 For any A0 = [a0 b0; c0 d0] and A1 = [a1 b1; c1 d1] in M2×2(R), and s, t ∈ R we have

tr(sA0 + tA1) = tr [sa0 + ta1  sb0 + tb1; sc0 + tc1  sd0 + td1]
              = sa0 + ta1 + sd0 + td1
              = s(a0 + d0) + t(a1 + d1)
              = s tr(A0) + t tr(A1)

Hence, tr is linear.
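The algebraic verification in A3 can be spot-checked on random matrices:

```python
import numpy as np

# tr(s*A0 + t*A1) = s*tr(A0) + t*tr(A1), checked numerically on 2x2 matrices.
rng = np.random.default_rng(1)
A0 = rng.standard_normal((2, 2))
A1 = rng.standard_normal((2, 2))
s, t = 3.0, -2.0
lhs = np.trace(s * A0 + t * A1)
rhs = s * np.trace(A0) + t * np.trace(A1)
print(lhs, rhs)
```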
A4 For any p(x) = a0 + b0x + c0x² + d0x³ and q(x) = a1 + b1x + c1x² + d1x³ in P3(R), and s, t ∈ R we have

T(sp + tq) = T(sa0 + ta1 + (sb0 + tb1)x + (sc0 + tc1)x² + (sd0 + td1)x³)
           = [sa0 + ta1  sb0 + tb1; sc0 + tc1  sd0 + td1]
           = s[a0 b0; c0 d0] + t[a1 b1; c1 d1]
           = sT(p) + tT(q)

Hence, T is linear.
A5 Let A = [1 0; 0 1] and B = [−1 0; 0 −1]. Then, we have

det(A + B) = det [0 0; 0 0] = 0

but

det A + det B = 1 + 1 = 2

Hence, det does not preserve addition and so is not a linear mapping.
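The counterexample from A5 in code:

```python
import numpy as np

# A = I and B = -I from A5: det(A + B) = 0 while det(A) + det(B) = 2.
A = np.eye(2)
B = -np.eye(2)
print(np.linalg.det(A + B), np.linalg.det(A) + np.linalg.det(B))
```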
A6 For any p(x) = a0 + b0x + c0x² and q(x) = a1 + b1x + c1x² in P2(R), and s, t ∈ R we have

L(sp + tq) = L(sa0 + ta1 + (sb0 + tb1)x + (sc0 + tc1)x²)
           = [sa0 + ta1 − (sb0 + tb1)] + [(sb0 + tb1) + (sc0 + tc1)]x²
           = s[(a0 − b0) + (b0 + c0)x²] + t[(a1 − b1) + (b1 + c1)x²]
           = sL(p) + tL(q)

Hence, L is linear.
A7 Observe that for any x ∈ R² we have

T(0x) = T((0, 0)) = [0 1; 1 0]

But, 0T(x) = [0 0; 0 0] ≠ [0 1; 1 0]. Hence, T is not linear.

A8 For any A0 = [a0 b0; c0 d0] and A1 = [a1 b1; c1 d1] in M2×2(R), and s, t ∈ R we have

M(sA0 + tA1) = [0 0; 0 0] = s[0 0; 0 0] + t[0 0; 0 0] = sM(A0) + tM(A1)

Hence, M is linear.
 
 
  

A9 y is in the range of L if there exists x = (x1, x2, x3) ∈ R³ such that

(2, 0, 3) = L(x) = (x1 + x3, 0, x2 + x3)

Comparing entries gives the system of linear equations

x1 + x3 = 2
x2 + x3 = 3

Observe that the system is consistent so y is in the range of L. Moreover, x3 is a free variable, so there are infinitely many vectors x such that L(x) = y. In particular, if we let x3 = t ∈ R, then the general solution is

x = (2 − t, 3 − t, t) = (2, 3, 0) + t(−1, −1, 1)

Hence, for any t ∈ R we get a different vector x which is mapped to y by L. For example, taking t = 1 we get L((1, 2, 1)) = y.
A10 We need to determine if there exists a polynomial a + bx + cx² such that

[2 0; 0 3] = L(a + bx + cx²) = [a + c  0; 0  b + c]

This corresponds to exactly the same system of linear equations as in A9. Hence, we know that y is in the range of L and L((2 − t) + (3 − t)x + tx²) = y for any t ∈ R.
A11 We need to determine if there exists a polynomial a + bx + cx2 such that
1 + x = L(a + bx + cx2 ) = (b + c) + (−b − c)x
Comparing coefficients of powers of x we get the system of equations
b+c=1
−b − c = 1
This system is clearly inconsistent, hence y is not in the range of L.
 
A12 We need to determine if there exists x = (x1, x2, x3, x4) ∈ R⁴ such that

[−1 −1; −2 2] = L(x) = [−2x2 − 2x3 − 2x4   x1 + x4; −2x1 − x2 − x4   2x1 − 2x2 − x3 + 2x4]

Comparing entries we get the system

−2x2 − 2x3 − 2x4 = −1
x1 + x4 = −1
−2x1 − x2 − x4 = −2
2x1 − 2x2 − x3 + 2x4 = 2

Row reducing the corresponding augmented matrix gives

[  0 −2 −2 −2 | −1 ]   [ 1  0  0 1 |   −1 ]
[  1  0  0  1 | −1 ] ~ [ 0 −2 −2 −2 |  −1 ]
[ −2 −1  0 −1 | −2 ]   [ 0  0  1 2 | −7/2 ]
[  2 −2 −1  2 |  2 ]   [ 0  0  0 0 | 17/2 ]

Hence, the system is inconsistent, so y is not in the range of L.
For Problems A13 - A18, alternate correct answers are possible.

A13 Every vector in the range of L has the form

(x1 + x2, x1 + x2 + x3) = (x1 + x2)(1, 1) + x3(0, 1)

Therefore, every vector in the range of L can be written as a linear combination of these two vectors. Let B = {(1, 1), (0, 1)}. Then, we have shown that Range(L) = Span B. Additionally, B is clearly linearly independent. Hence, B is a basis for the range of L.

To find a basis for the nullspace of L we need to do a similar procedure. We first need to find the general form of a vector in the nullspace and then write it as a linear combination of vectors. To find the general form, we let x = (x1, x2, x3) be any vector in the nullspace. Then, by definition of the nullspace and of L we get

(0, 0) = L(x) = (x1 + x2, x1 + x2 + x3)

Hence, we must have x1 + x2 = 0 and x1 + x2 + x3 = 0. So, x3 = 0 and x2 = −x1. Therefore, every vector in the nullspace of L has the form

(x1, x2, x3) = (x1, −x1, 0) = x1(1, −1, 0)

Therefore, the set {(1, −1, 0)} spans the nullspace and is clearly linearly independent (it only contains one non-zero vector). Thus, it is a basis for Null(L).

Then we have rank(L) + nullity(L) = 2 + 1 = 3 = dim R³ as predicted by the Rank-Nullity Theorem.
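Since L in A13 is given by a matrix, the rank and nullity found above can be confirmed numerically:

```python
import numpy as np

# Standard matrix of L(x1, x2, x3) = (x1 + x2, x1 + x2 + x3) from A13.
M = np.array([[1.0, 1.0, 0.0],
              [1.0, 1.0, 1.0]])
rank = np.linalg.matrix_rank(M)
nullity = M.shape[1] - rank  # dimension of the domain minus the rank
print(rank, nullity)

# The nullspace basis vector (1, -1, 0) is indeed sent to zero.
print(M @ np.array([1.0, -1.0, 0.0]))
```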
A14 Every vector in the range of L has the form

(a + b) + (a + b + c)x = (a + b)(1 + x) + cx

Therefore, the set {1 + x, x} spans the range of L. It is clearly linearly independent and hence is a basis for the range of L.

Let (a, b, c) ∈ Null(L). Then,

0 + 0x = L((a, b, c)) = (a + b) + (a + b + c)x

Thus, we get a + b = 0 and a + b + c = 0. Therefore, c = 0 and b = −a. So, every vector in the nullspace of L has the form

(a, b, c) = (a, −a, 0) = a(1, −1, 0)

Therefore, the set {(1, −1, 0)} spans the nullspace and is clearly linearly independent. Thus, it is a basis for Null(L).

Then we have rank(L) + nullity(L) = 2 + 1 = 3 = dim R³ as predicted by the Rank-Nullity Theorem.
A15 Every vector in the range of L has the form

a − 2b + (a − 2b)x² = (a − 2b)(1 + x²)

Therefore, the set {1 + x²} spans the range of L. It is clearly linearly independent and hence is a basis for the range of L.

Let a + bx ∈ Null(L). Then,

0 + 0x + 0x² = L(a + bx) = a − 2b + (a − 2b)x²

Thus, we get a − 2b = 0. Therefore, a = 2b. So, every vector in the nullspace of L has the form

a + bx = 2b + bx = b(2 + x)

Therefore, the set {2 + x} spans the nullspace and is clearly linearly independent. Thus, it is a basis for Null(L).

Then we have rank(L) + nullity(L) = 1 + 1 = 2 = dim P1(R) as predicted by the Rank-Nullity Theorem.
A16 Every vector in the range of L has the form

[a − b; 2a + b; a − b + c] = a [1; 2; 1] + b [−1; 1; −1] + c [0; 0; 1]

Therefore, the set B = {[1; 2; 1], [−1; 1; −1], [0; 0; 1]} spans the range of L. Consider

c1 [1; 2; 1] + c2 [−1; 1; −1] + c3 [0; 0; 1] = [0; 0; 0]

Row reducing the coefficient matrix of the corresponding system gives

[1 −1 0; 2 1 0; 1 −1 1] ∼ [1 0 0; 0 1 0; 0 0 1]

Thus, the only solution is c1 = c2 = c3 = 0. Consequently, B is linearly independent and
hence is a basis for the range of L.
Let a + bx + cx^2 ∈ Null(L). Then,

[0; 0; 0] = L(a + bx + cx^2) = [a − b; 2a + b; a − b + c]

This gives us the exact same system as above. Hence, the only solution is a = b = c = 0. Therefore,
Null(L) = {0 + 0x + 0x^2}. Thus, a basis for Null(L) is the empty set.
Then we have rank(L) + nullity(L) = 3 + 0 = dim P2(R) as predicted by the Rank-Nullity Theorem.
A17 The range of tr is R since we can pick a + d to be any real number. Hence, a basis for Range(tr) is {1}.
Let [a b; c d] ∈ Null(tr). Then,

0 = tr([a b; c d]) = a + d

Thus, we get d = −a. So, every vector in the nullspace of tr has the form

[a b; c d] = a [1 0; 0 −1] + b [0 1; 0 0] + c [0 0; 1 0]

Therefore, the set {[1 0; 0 −1], [0 1; 0 0], [0 0; 1 0]} spans the nullspace and is clearly linearly independent.
Thus, it is a basis for Null(tr).
We have rank(tr) + nullity(tr) = 1 + 3 = dim M2×2(R).
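The claim that the three matrices above are trace-free and recoverable from the entries can be spot-checked in code. A minimal sketch (the coefficient values 3, −2, 5 are sample choices of ours):

```python
# The three proposed basis matrices for Null(tr), stored row-major.
B1 = [[1, 0], [0, -1]]
B2 = [[0, 1], [0, 0]]
B3 = [[0, 0], [1, 0]]

def combo(a, b, c):
    """a*B1 + b*B2 + c*B3 as a 2x2 matrix."""
    return [[a * B1[i][j] + b * B2[i][j] + c * B3[i][j] for j in range(2)]
            for i in range(2)]

def trace(M):
    return M[0][0] + M[1][1]

# Every combination is trace-free, and the coefficients a, b, c can be
# read off the entries, so the three matrices are linearly independent.
M = combo(3, -2, 5)
print(M, trace(M))   # [[3, -2], [5, -3]] 0
```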
A18 Observe that the range of T is M2×2(R). Therefore, we can pick any basis for M2×2(R) to be the basis
for the range of T. We pick the standard basis {[1 0; 0 0], [0 1; 0 0], [0 0; 1 0], [0 0; 0 1]}.
Let a + bx + cx^2 + dx^3 ∈ Null(T). Then

[0 0; 0 0] = T(a + bx + cx^2 + dx^3) = [a b; c d]

Hence, a = b = c = d = 0. Thus, Null(T) = {0} and so a basis for Null(T) is the empty set.
We have rank(T) + nullity(T) = 4 + 0 = dim P3(R).
A19 We have
(L ◦ M)(a + bx) = L(M(a + bx)) = L(3a − b, −a + b)
= [(3a − b) + (−a + b)] + [2(3a − b) + 3(−a + b)]x
= 2a + (3a + b)x
Hence, they are not inverses of each other.
A20 We have

(L ◦ M)(y1, y2, y3) = L(M(y1, y2, y3)) = L((y1 + y2 − 3y3) + (y2 − 2y3)x + y3 x^2)
= [(y1 + y2 − 3y3) − (y2 − 2y3) + y3; (y2 − 2y3) + 2y3; y3] = [y1; y2; y3]

(M ◦ L)(a + bx + cx^2) = M(L(a + bx + cx^2))
= M(a − b + c, b + 2c, c)
= ([a − b + c] + [b + 2c] − 3c) + [(b + 2c) − 2c]x + cx^2
= a + bx + cx^2

Thus, L and M are inverses of each other.
A21 Observe that we have

L([a1; a2; a3]) = a1 L([1; 0; 0]) + a2 L([0; 1; 0]) + a3 L([0; 0; 1])
= a1 x^2 + a2 (2x) + a3 (1 + x + x^2)
= a3 + (2a2 + a3)x + (a1 + a3)x^2
A22 Since every vector in the range needs to be a linear combination of the three matrices, we can simply
take

L(a + bx + cx^2) = a [1 0; 0 0] + b [0 1; 0 0] + c [0 0; 0 1] = [a b; 0 c]

It is easy to verify that indeed Null(L) = {0}.
A23 By the Rank-Nullity Theorem, as long as we get rank(L) = 2, we automatically have nullity(L) = 2.
So, we require that the range be spanned by two linearly independent vectors, and we need [0; 1; 1; 0]
to be included in the range.
One possibility is L : M2×2(R) → R^4 defined by

L([a b; c d]) = a [0; 1; 0; 0] + d [0; 0; 1; 0] = [0; a; d; 0]
B Homework Problems
B1 Show that L(sx + ty) = sL(x) + tL(y).
B2 Show that L(sp + tq) = sL(p) + tL(q).
B3 Show that L(sA + tB) = sL(A) + tL(B).
B4 Show that L(sp + tq) = sL(p) + tL(q).
B5 Show that L(sA + tB) = sL(A) + tL(B).
B6 Show that L(sA + tB) = sL(A) + tL(B).
B7 Show that T (sA + tB) = sT (A) + tT (B).
B8 L is not linear.
B9 M is linear.
B10 N is not linear.
B11 L is linear.
B12 T is linear.
 
1/3
 
B13 y is in the range of L. We have L 1/3 = y.
 
1/3
B14 y is in the range of L. We have L(−17 + 10x) = y.
B15 y is not in the range of L.
B16 y is in the range of L. We have L
6'
5/3
2/3
B17 y is in the range of L. We have L
6'
(7
−2 −3/2
= y.
0
0
(7
= y.
B18 y is not in the range of L.
For Problems B19 - B28, alternate correct answers are possible.
B19 A basis for Range(L) is {1, x, x^2}. A basis for Null(L) is the empty set.
We have rank(L) + nullity(L) = 3 + 0 = dim R^3.
B20 A basis for Range(L) is {[1; 1; 1], [−2; 1; −1]}. A basis for Null(L) is the empty set.
We have rank(L) + nullity(L) = 2 + 0 = dim P1(R).
B21 A basis for Range(L) is {1, x}. A basis for Null(L) is {x^2}.
We have rank(L) + nullity(L) = 2 + 1 = dim P2(R).
B22 A basis for Range(L) is the empty set. A basis for Null(L) is the standard basis for M2×2(R).
We have rank(L) + nullity(L) = 0 + 4 = dim M2×2(R).
B23 A basis for Range(L) is {[1 −1; 1 1], [2 0; 2 0]}. A basis for Null(L) is {[2; −1; 1]}.
We have rank(L) + nullity(L) = 2 + 1 = dim R^3.
B24 A basis for Range(L) is {1 + x, x + x^2}. A basis for Null(L) is the empty set.
We have rank(L) + nullity(L) = 2 + 0 = dim D.
B25 A basis for Range(L) is {[1 0; 0 −1], [−1 1; 0 0], [0 −1; 1 0]}. A basis for Null(L) is {[1 1; 1 1]}.
We have rank(L) + nullity(L) = 3 + 1 = dim M2×2(R).
B26 A basis for Range(L) is {1, x^2}. A basis for Null(L) is {[1 −1; 0 0], [0 0; 1 −1]}.
We have rank(L) + nullity(L) = 2 + 2 = dim M2×2(R).
B27 A basis for Range(T) is {[1 0; 0 0], [0 1; 0 0], [0 0; 1 0], [0 0; 0 1]}. A basis for Null(T) is the empty set.
We have rank(T) + nullity(T) = 4 + 0 = dim M2×2(R).
B28 A basis for Range(L) is {[−1 0; −2 0], [0 2; 0 −2], [−2 −1; 2 −1]}. A basis for Null(L) is the empty set.
We have rank(L) + nullity(L) = 3 + 0 = dim P2(R).
B29 They are not inverses.
B30 They are inverses.
B31 They are inverses.
B32 L(a + bx + cx^2) = [a − b + 3c; −a + b + 2c]
B33 L([a b; c d]) = [0; a; 2a]
B34 L(a + bx + cx^2) = (a − b − c) + (b + c)x + bx^2
C Conceptual Problems
C1 We have L(0V) = L(0 · 0V) = 0 · L(0V) = 0W.
C2 By definition Null(L) is a subset of V. By Problem C1, 0V ∈ Null(L), so Null(L) is non-empty. Let
x, y ∈ Null(L) so that L(x) = 0W and L(y) = 0W . Then,
L(sx + ty) = sL(x) + tL(y) = s0W + t0W = 0W
Hence, Null(L) is a subspace of V.
C3 By definition Range(L) is a subset of W. By Problem C1, 0W ∈ Range(L), so Range(L) is non-empty.
For any x, y ∈ Range(L) there exists u, v ∈ V such that L(u) = x and L(v) = y. Then we get,
L(su + tv) = sL(u) + tL(v) = sx + ty
So, sx + ty ∈ Range(L). Hence, Range(L) is a subspace of W.
C4 We have
(M ◦ L)(sx + ty) = M(L(sx + ty)) = M(sL(x) + tL(y))
= s(M(L(x))) + t(M(L(y))) = s(M ◦ L)(x) + t(M ◦ L)(y)
So, M ◦ L is linear.
C5 Assume both L−1 and M are inverses of L. For any x ∈ W, there exists y such that M(x) = y. Then,
L(y) = L(M(x)) = x
since L and M are inverses. Hence,
L−1 (x) = L−1 (L(y)) = y = M(x)
Thus, L−1 (x) = M(x) for all x ∈ W, so L−1 = M. Therefore, the inverse is unique.
C6 (a) Assume that {v1 , . . . , vk } is linearly dependent. Then there exist coefficients t1 , . . . , tk , not all zero,
such that t1 v1 + · · · + tk vk = 0V . But then
0W = L(0V ) = L(t1 v1 + · · · + tk vk ) = t1 L(v1 ) + · · · + tk L(vk )
with t1 , . . . , tk not all zero. This contradicts the fact that L(v1 ), . . . , L(vk ) is linearly independent.
(b) One simple example is to define L : V → W by L(v) = 0W for all v ∈ V. Then, for any
set {v1 , . . . , vk } we have {L(v1 ), . . . , L(vk )} only contains the zero vector and hence is linearly
dependent.
C7 Using the Rank-Nullity Theorem, we get that Range(L) = W if and only if rank(L) = n if and only if
nullity(L) = 0 if and only if Null(L) = {0}.
C8 (a) Every vector v in the range of M ◦ L can be written in the form of v = M(L(x)) for some x ∈ V.
But, L(x) = y for some y ∈ U. Hence, v = M(y), so the range of M ◦ L is a subset of the range of
M. Since ranges are always subspaces, this implies that Range(M ◦ L) is a subspace of Range(M).
Hence, dim Range(M ◦ L) ≤ dim Range(M) which gives rank(M ◦ L) ≤ rank(M).
(b) The nullspace of L is a subspace of the nullspace of M ◦ L, because if L(x) = 0U , then M(L(x)) =
M(0U ) = 0W . Therefore nullity(L) ≤ nullity(M ◦ L). So, by the Rank-Nullity Theorem
rank(M ◦ L) = dim V − nullity(M ◦ L) ≤ dim V − nullity(L) = rank(L)
(c) There are many possible correct answers. One possibility is to define L(x1 , x2 ) = (x1 , 0) and
M(x1 , x2 ) = (0, x2 ). Then, L and M both have rank 1, but M ◦ L = M(L(x1 , x2 )) = M(x1 , 0) =
(0, 0), so rank(M ◦ L) = 0.
C9 Suppose that rank(L) = r. Let {v1 , . . . , vr } be a basis for Range(L). Since vi = L(xi ) for some xi ∈ V,
M(vi ) = M(L(xi )) ∈ Range(M ◦ L). Thus the set B = {M(v1 ), . . . , M(vr )} is in the range of M ◦ L and
we have
t1 M(v1 ) + · · · + tr M(vr ) = 0
M(t1 v1 + · · · + tr vr ) = 0
t1 v1 + · · · + tr vr = 0
since the nullspace of M is {0}. Hence t1 = · · · = tr = 0 is the only solution and so B is linearly
independent. Thus, rank(M ◦ L) ≥ r = rank(L). Also, by Problem C8 (b) we know that rank(M ◦ L) ≤
rank(L), hence rank(M ◦ L) = rank(L) as required.
C10 We define x + y = (x1 + y1 , x2 + y2 , . . .) and tx = (tx1 , tx2 , . . .). We have
(L ◦ R)(x) = L(0, x1 , x2 , . . .) = (x1 , x2 , . . .) = x
but
(R ◦ L)(x) = R(x2 , x3 , . . .) = (0, x2 , x3 , . . .) ≠ x
Thus R is a right inverse for L but not a left inverse. Moreover, observe that there cannot be a left
inverse for L since there is no way of recovering x1 .
Section 4.6
A Practice Problems
A1 By definition, the columns of the matrix of L with respect to the basis B are the B-coordinates of the
images of the basis vectors. In this question, we are given the images of the basis vectors as a linear
combination of the basis vectors, so the coordinates are just the coefficients of the linear combination.
To calculate [L(x)]_B we use the formula

[L(x)]_B = [L]_B [x]_B

We have

L(v1) = 0v1 + 1v2  ⇒  [L(v1)]_B = [0; 1]
L(v2) = 2v1 + (−1)v2  ⇒  [L(v2)]_B = [2; −1]

Hence,

[L]_B = [ [L(v1)]_B  [L(v2)]_B ] = [0 2; 1 −1]

So,

[L(x)]_B = [L]_B [x]_B = [0 2; 1 −1] [4; 3] = [6; 1]
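The computation [L(x)]_B = [L]_B [x]_B is a plain matrix-vector product, which can be sketched as (the `mat_vec` helper is ours):

```python
def mat_vec(A, v):
    """Multiply a matrix (list of rows) by a column vector."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

# Columns of [L]_B are the B-coordinates of L(v1) and L(v2).
L_B = [[0, 2],
       [1, -1]]
x_B = [4, 3]
print(mat_vec(L_B, x_B))   # [6, 1]
```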
A2 We have

L(v1) = 2v1 + 0v2 + (−1)v3  ⇒  [L(v1)]_B = [2; 0; −1]
L(v2) = 2v1 + 0v2 + (−1)v3  ⇒  [L(v2)]_B = [2; 0; −1]
L(v3) = 0v1 + 4v2 + 5v3  ⇒  [L(v3)]_B = [0; 4; 5]

Hence,

[L]_B = [ [L(v1)]_B  [L(v2)]_B  [L(v3)]_B ] = [2 2 0; 0 0 4; −1 −1 5]

So,

[L(x)]_B = [L]_B [x]_B = [2 2 0; 0 0 4; −1 −1 5] [3; 3; −1] = [12; −4; −11]

A3 We have

L(1, 1) = [−3; −3] = (−3) [1; 1] + 0 [−1; 2]  ⇒  [L(1, 1)]_B = [−3; 0]
L(−1, 2) = [−4; 8] = 0 [1; 1] + 4 [−1; 2]  ⇒  [L(−1, 2)]_B = [0; 4]

Hence,

[L]_B = [ [L(1, 1)]_B  [L(−1, 2)]_B ] = [−3 0; 0 4]

A4 We have

L(1, 1) = [−1; 2] = 0 [1; 1] + 1 [−1; 2]  ⇒  [L(1, 1)]_B = [0; 1]
L(−1, 2) = [2; 2] = 2 [1; 1] + 0 [−1; 2]  ⇒  [L(−1, 2)]_B = [2; 0]

Hence,

[L]_B = [ [L(1, 1)]_B  [L(−1, 2)]_B ] = [0 2; 1 0]

A5 We have

L(1, 1) = [3; 3] = 3 [1; 1] + 0 [1; −1]  ⇒  [L(1, 1)]_B = [3; 0]
L(1, −1) = [1; −1] = 0 [1; 1] + 1 [1; −1]  ⇒  [L(1, −1)]_B = [0; 1]

Hence,

[L]_B = [ [L(1, 1)]_B  [L(1, −1)]_B ] = [3 0; 0 1]

Observe that

[2; 0] = 1 [1; 1] + 1 [1; −1]

Hence, [x]_B = [1; 1]. Therefore,

[L(x)]_B = [L]_B [x]_B = [3 0; 0 1] [1; 1] = [3; 1]

A6 We have

L(1, −3) = [−7; −7] = 0 [1; −3] + (−7) [1; 1]  ⇒  [L(1, −3)]_B = [0; −7]
L(1, 1) = [5; 5] = 0 [1; −3] + 5 [1; 1]  ⇒  [L(1, 1)]_B = [0; 5]

Hence,

[L]_B = [ [L(1, −3)]_B  [L(1, 1)]_B ] = [0 0; −7 5]

Observe that

[0; 2] = (−1/2) [1; −3] + (1/2) [1; 1]

Hence, [x]_B = [−1/2; 1/2]. Therefore,

[L(x)]_B = [L]_B [x]_B = [0 0; −7 5] [−1/2; 1/2] = [0; 6]

A7 We have

L(1, 2) = [7; 6] = 11 [1; 2] + (−4) [1; 4]  ⇒  [L(1, 2)]_B = [11; −4]
L(1, 4) = [13; 20] = 16 [1; 2] + (−3) [1; 4]  ⇒  [L(1, 4)]_B = [16; −3]

Hence,

[L]_B = [ [L(1, 2)]_B  [L(1, 4)]_B ] = [11 16; −4 −3]

Observe that

[0; 2] = (−1) [1; 2] + 1 [1; 4]

Hence, [x]_B = [−1; 1]. Therefore,

[L(x)]_B = [L]_B [x]_B = [11 16; −4 −3] [−1; 1] = [5; 1]

A8 We have

L(−1, 1, 0) = [0; 1; 1] = 1 [−1; 1; 0] + 0 [0; 2; 1] + 1 [1; 0; 1]  ⇒  [L(−1, 1, 0)]_B = [1; 0; 1]
L(0, 2, 1) = [3; 2; 3] = (−2) [−1; 1; 0] + 2 [0; 2; 1] + 1 [1; 0; 1]  ⇒  [L(0, 2, 1)]_B = [−2; 2; 1]
L(1, 0, 1) = [2; 0; 1] = (−2) [−1; 1; 0] + 1 [0; 2; 1] + 0 [1; 0; 1]  ⇒  [L(1, 0, 1)]_B = [−2; 1; 0]

Hence,

[L]_B = [ [L(−1, 1, 0)]_B  [L(0, 2, 1)]_B  [L(1, 0, 1)]_B ] = [1 −2 −2; 0 2 1; 1 1 0]

Observe that

[0; 3; 2] = 1 [−1; 1; 0] + 1 [0; 2; 1] + 1 [1; 0; 1]

Hence, [x]_B = [1; 1; 1]. Therefore,

[L(x)]_B = [L]_B [x]_B = [1 −2 −2; 0 2 1; 1 1 0] [1; 1; 1] = [−3; 3; 2]

A9 We have

L(1, 0, 0) = [3; 1; 2] = 4 [1; 0; 0] + (−3) [1; 1; 0] + 2 [1; 2; 1]  ⇒  [L(1, 0, 0)]_B = [4; −3; 2]
L(1, 1, 0) = [3; 2; 3] = 4 [1; 0; 0] + (−4) [1; 1; 0] + 3 [1; 2; 1]  ⇒  [L(1, 1, 0)]_B = [4; −4; 3]
L(1, 2, 1) = [1; 4; 4] = 1 [1; 0; 0] + (−4) [1; 1; 0] + 4 [1; 2; 1]  ⇒  [L(1, 2, 1)]_B = [1; −4; 4]

Hence,

[L]_B = [ [L(1, 0, 0)]_B  [L(1, 1, 0)]_B  [L(1, 2, 1)]_B ] = [4 4 1; −3 −4 −4; 2 3 4]

Observe that

[3; −2; 9] = 14 [1; 0; 0] + (−20) [1; 1; 0] + 9 [1; 2; 1]

Hence, [x]_B = [14; −20; 9]. Therefore,

[L(x)]_B = [L]_B [x]_B = [4 4 1; −3 −4 −4; 2 3 4] [14; −20; 9] = [−15; 2; 4]
A10 Consider the vector v1 = [1; −2], which is the normal to the line of reflection, and the vector v2 = [2; 1],
which is orthogonal to v1. By geometrical arguments, a basis adapted to refl(1,−2) is B = {v1, v2}.
To determine the matrix of refl(1,−2) with respect to the basis B, calculate the B-coordinates of the
images of the basis vectors. We observe that if we reflect the normal vector v1 over the line we will
get −v1. Since the vector v2 is orthogonal to v1 it lies on the line of reflection, so reflecting it will not
change the vector. Hence,

refl(1,−2)(v1) = −v1 = (−1)v1 + 0v2
refl(1,−2)(v2) = v2 = 0v1 + 1v2

Thus,

[refl(1,−2)]_B = [ [refl(1,−2)(v1)]_B  [refl(1,−2)(v2)]_B ] = [−1 0; 0 1]
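The adapted-basis construction can be cross-checked against the standard reflection formula refl(v) = v − 2 (v·n / n·n) n, an identity not used in the geometric argument above. A sketch with exact rationals (the helper names are ours):

```python
from fractions import Fraction as F

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# Adapted basis: columns v1 = normal (1, -2), v2 = (2, 1) on the line.
P = [[F(1), F(2)], [F(-2), F(1)]]
det = P[0][0] * P[1][1] - P[0][1] * P[1][0]
P_inv = [[P[1][1] / det, -P[0][1] / det], [-P[1][0] / det, P[0][0] / det]]
D = [[F(-1), F(0)], [F(0), F(1)]]          # [refl]_B in the adapted basis

# Standard matrix via change of basis: P D P^{-1}
refl_std = matmul(matmul(P, D), P_inv)

# Direct formula: I - 2 n n^T / (n . n), with n = (1, -2)
n = [F(1), F(-2)]
nn = n[0] ** 2 + n[1] ** 2
formula = [[(F(1) if i == j else F(0)) - 2 * n[i] * n[j] / nn
            for j in range(2)] for i in range(2)]

print(refl_std == formula)   # True
```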
A11 Consider the vector v1 = [1; −2], which is a direction vector for the projection, and the vector v2 = [2; 1],
which is orthogonal to v1. By geometrical arguments, a basis adapted to perp(1,−2) is B = {v1, v2}.
To determine the matrix of perp(1,−2) with respect to the basis B, calculate the B-coordinates of the
images of the basis vectors. We observe that the perpendicular of the projection of v1 onto v1 is just
0, and hence the perpendicular of the projection of v2 onto v1 is v2. Hence,

perp(1,−2)(v1) = 0 = 0v1 + 0v2
perp(1,−2)(v2) = v2 = 0v1 + 1v2

Thus,

[perp(1,−2)]_B = [ [perp(1,−2)(v1)]_B  [perp(1,−2)(v2)]_B ] = [0 0; 0 1]

A12 Consider the vector v1 = [2; 1; −1], which is a direction vector for the projection, and the vectors
v2 = [1; 0; 2] and v3 = [0; 1; 1], which are orthogonal to v1. By geometrical arguments, a basis adapted
to proj(2,1,−1) is B = {v1, v2, v3}. To determine the matrix of proj(2,1,−1) with respect to the basis B,
calculate the B-coordinates of the images of the basis vectors. We observe that the projection of v1 onto
v1 is just v1, and the projection of any vector orthogonal to v1 onto v1 is 0. Hence,

proj(2,1,−1)(v1) = v1 = 1v1 + 0v2 + 0v3
proj(2,1,−1)(v2) = 0 = 0v1 + 0v2 + 0v3
proj(2,1,−1)(v3) = 0 = 0v1 + 0v2 + 0v3

Thus,

[proj(2,1,−1)]_B = [ [proj(2,1,−1)(v1)]_B  [proj(2,1,−1)(v2)]_B  [proj(2,1,−1)(v3)]_B ] = [1 0 0; 0 0 0; 0 0 0]

A13 Consider the vector v1 = [−1; −1; 1], which is normal to the plane of reflection, and the vectors
v2 = [1; 0; 1] and v3 = [0; 1; 1], which lie in the plane and are therefore orthogonal to v1. By geometrical
arguments, a basis adapted to refl(−1,−1,1) is B = {v1, v2, v3}. To determine the matrix of refl(−1,−1,1)
with respect to the basis B, calculate the B-coordinates of the images of the basis vectors. We observe
that if we reflect the normal vector v1 over the plane we will get −v1. Since the vectors v2 and v3 are
orthogonal to v1, reflecting them will not change them. Hence,

refl(−1,−1,1)(v1) = −v1 = (−1)v1 + 0v2 + 0v3
refl(−1,−1,1)(v2) = v2 = 0v1 + 1v2 + 0v3
refl(−1,−1,1)(v3) = v3 = 0v1 + 0v2 + 1v3

Thus,

[refl(−1,−1,1)]_B = [ [refl(−1,−1,1)(v1)]_B  [refl(−1,−1,1)(v2)]_B  [refl(−1,−1,1)(v3)]_B ] = [−1 0 0; 0 1 0; 0 0 1]
A14 (a) We need to find t1, t2, and t3 such that

[1; 2; 4] = t1 [1; 0; 1] + t2 [1; −1; 0] + t3 [0; 1; 2] = [t1 + t2; −t2 + t3; t1 + 2t3]

Row reducing the corresponding augmented matrix gives

[1 1 0 | 1; 0 −1 1 | 2; 1 0 2 | 4] ∼ [1 0 0 | 2; 0 1 0 | −1; 0 0 1 | 1]

Thus, [ [1; 2; 4] ]_B = [2; −1; 1].
(b) Using the result in part (a) and by observation we get

L(1, 0, 1) = [1; 2; 4] = 2 [1; 0; 1] + (−1) [1; −1; 0] + 1 [0; 1; 2]
L(1, −1, 0) = [0; 1; 2] = 0 [1; 0; 1] + 0 [1; −1; 0] + 1 [0; 1; 2]
L(0, 1, 2) = [2; −2; 0] = 0 [1; 0; 1] + 2 [1; −1; 0] + 0 [0; 1; 2]

Hence,

[L]_B = [ [L(1, 0, 1)]_B  [L(1, −1, 0)]_B  [L(0, 1, 2)]_B ] = [2 0 0; −1 0 2; 1 1 0]

(c) Using the result of part (a), the result of part (b), and the formula [L(x)]_B = [L]_B [x]_B we get

[L(1, 2, 4)]_B = [L]_B [2; −1; 1] = [2 0 0; −1 0 2; 1 1 0] [2; −1; 1] = [4; 0; 1]

Thus, by definition of B-coordinates, we have

L(1, 2, 4) = 4 [1; 0; 1] + 0 [1; −1; 0] + 1 [0; 1; 2] = [4; 1; 6]
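Parts (a) to (c) chain together as two matrix-vector products: one in B-coordinates, one converting back to standard coordinates. A sketch using the matrices found above (the `mat_vec` helper is ours):

```python
def mat_vec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

B = [[1, 1, 0],     # basis vectors of B as columns: v1, v2, v3
     [0, -1, 1],
     [1, 0, 2]]
L_B = [[2, 0, 0],   # [L]_B from part (b)
       [-1, 0, 2],
       [1, 1, 0]]

x_B = [2, -1, 1]                 # [x]_B from part (a)
Lx_B = mat_vec(L_B, x_B)         # B-coordinates of L(x)
Lx_std = mat_vec(B, Lx_B)        # convert back to standard coordinates
print(Lx_B, Lx_std)              # [4, 0, 1] [4, 1, 6]
```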
A15 (a) We need to find t1, t2, and t3 such that

[−1; 7; 6] = t1 [1; 0; −1] + t2 [1; 2; 0] + t3 [0; 1; 1] = [t1 + t2; 2t2 + t3; −t1 + t3]

Row reducing the corresponding augmented matrix gives

[1 1 0 | −1; 0 2 1 | 7; −1 0 1 | 6] ∼ [1 0 0 | −3; 0 1 0 | 2; 0 0 1 | 3]

Thus, [x]_B = [−3; 2; 3].
(b) We have

L(1, 0, −1) = [0; 1; 1] = 0 [1; 0; −1] + 0 [1; 2; 0] + 1 [0; 1; 1]
L(1, 2, 0) = [−2; 0; 2] = (−2) [1; 0; −1] + 0 [1; 2; 0] + 0 [0; 1; 1]
L(0, 1, 1) = [5; 3; −5] = 2 [1; 0; −1] + 3 [1; 2; 0] + (−3) [0; 1; 1]

Hence,

[L]_B = [ [L(1, 0, −1)]_B  [L(1, 2, 0)]_B  [L(0, 1, 1)]_B ] = [0 −2 2; 0 0 3; 1 0 −3]

(c) Using the result of part (a), the result of part (b), and the formula [L(x)]_B = [L]_B [x]_B we get

[L(−1, 7, 6)]_B = [L]_B [−3; 2; 3] = [0 −2 2; 0 0 3; 1 0 −3] [−3; 2; 3] = [2; 9; −12]

Thus, by definition of B-coordinates, we have

L(−1, 7, 6) = 2 [1; 0; −1] + 9 [1; 2; 0] + (−12) [0; 1; 1] = [11; 6; −14]
A16 The matrix of L with respect to the standard basis is [L]_S = [1 3; −8 7].
The change of coordinates matrix from B to S is

P = [ [1; 2]  [1; 4] ] = [1 1; 2 4]

So, the change of coordinates matrix from S to B is P^(−1) = [2 −1/2; −1 1/2].
Hence, the B-matrix of L is

[L]_B = P^(−1) [L]_S P = [2 −1/2; −1 1/2] [1 3; −8 7] [1 1; 2 4] = [11 16; −4 −3]
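The conjugation [L]_B = P^(−1) [L]_S P can be reproduced with exact rational arithmetic; a sketch (the `matmul` and `inv2` helpers are ours):

```python
from fractions import Fraction as F

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def inv2(M):
    """Inverse of a 2x2 matrix over the rationals."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

L_S = [[F(1), F(3)], [F(-8), F(7)]]   # standard matrix of L
P = [[F(1), F(1)], [F(2), F(4)]]      # columns are the basis vectors of B

L_B = [[int(v) for v in row] for row in matmul(matmul(inv2(P), L_S), P)]
print(L_B)   # [[11, 16], [-4, -3]]
```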
A17 The matrix of L with respect to the standard basis is [L]_S = [1 −6; −4 −1].
The change of coordinates matrix from B to S is

P = [ [3; −2]  [1; 1] ] = [3 1; −2 1]

So, the change of coordinates matrix from S to B is P^(−1) = (1/5) [1 −1; 2 3].
Hence, the B-matrix of L is

[L]_B = P^(−1) [L]_S P = (1/5) [1 −1; 2 3] [1 −6; −4 −1] [3 1; −2 1] = [5 0; 0 −5]

A18 The matrix of L with respect to the standard basis is [L]_S = [4 −6; 2 8].
The change of coordinates matrix from B to S is

P = [ [3; 1]  [7; 3] ] = [3 7; 1 3]

So, the change of coordinates matrix from S to B is P^(−1) = (1/2) [3 −7; −1 3].
Hence, the B-matrix of L is

[L]_B = P^(−1) [L]_S P = (1/2) [3 −7; −1 3] [4 −6; 2 8] [3 7; 1 3] = [−40 −118; 18 52]

A19 The matrix of L with respect to the standard basis is [L]_S = [16 −20; 6 −6].
The change of coordinates matrix from B to S is

P = [ [5; 3]  [4; 2] ] = [5 4; 3 2]

So, the change of coordinates matrix from S to B is P^(−1) = (1/2) [−2 4; 3 −5].
Hence, the B-matrix of L is

[L]_B = P^(−1) [L]_S P = (1/2) [−2 4; 3 −5] [16 −20; 6 −6] [5 4; 3 2] = [4 0; 0 6]

A20 The matrix of L with respect to the standard basis is [L]_S = [3 1 1; 0 4 2; 1 −1 5].
The change of coordinates matrix from B to S is

P = [ [1; 1; 0]  [0; 1; 1]  [1; 0; 1] ] = [1 0 1; 1 1 0; 0 1 1]

So, the change of coordinates matrix from S to B is P^(−1) = (1/2) [1 1 −1; −1 1 1; 1 −1 1].
Hence, the B-matrix of L is

[L]_B = P^(−1) [L]_S P = (1/2) [1 1 −1; −1 1 1; 1 −1 1] [3 1 1; 0 4 2; 1 −1 5] [1 0 1; 1 1 0; 0 1 1] = [4 2 0; 0 4 2; 0 0 4]

A21 The matrix of L with respect to the standard basis is [L]_S = [4 1 −3; 16 4 −18; 6 1 −5].
The change of coordinates matrix from B to S is

P = [ [1; 1; 1]  [0; 3; 1]  [1; 2; 1] ] = [1 0 1; 1 3 2; 1 1 1]

So, the change of coordinates matrix from S to B is P^(−1) = [−1 −1 3; −1 0 1; 2 1 −3].
Hence, the B-matrix of L is

[L]_B = P^(−1) [L]_S P = [−1 −1 3; −1 0 1; 2 1 −3] [4 1 −3; 16 4 −18; 6 1 −5] [1 0 1; 1 3 2; 1 1 1] = [2 0 0; 0 −2 0; 0 0 3]
A22 We need to find the B-coordinates of the images of the basis vectors. So, we need to solve

L(1, 1, 1) = [2; 2; 0] = a1 [1; 1; 1] + a2 [0; 1; 1] + a3 [0; 0; 1]
L(0, 1, 1) = [1; 2; −1] = b1 [1; 1; 1] + b2 [0; 1; 1] + b3 [0; 0; 1]
L(0, 0, 1) = [0; 1; −1] = c1 [1; 1; 1] + c2 [0; 1; 1] + c3 [0; 0; 1]

We get three systems of linear equations with the same coefficient matrix. Hence, we row reduce the
triple augmented matrix to get

[1 0 0 | 2 1 0; 1 1 0 | 2 2 1; 1 1 1 | 0 −1 −1] ∼ [1 0 0 | 2 1 0; 0 1 0 | 0 1 1; 0 0 1 | −2 −3 −2]

The columns in the augmented part are the B-coordinates of the images of the basis vectors, hence

[L]_B = [2 1 0; 0 1 1; −2 −3 −2]
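The triple augmented system can be solved mechanically; the sketch below (our own Gauss-Jordan helper, not from the text) returns the three coordinate columns of [L]_B:

```python
from fractions import Fraction as F

def solve_columns(A, rhs_cols):
    """Gauss-Jordan: solve A t = b for each right-hand side column b."""
    n = len(A)
    M = [[F(A[i][j]) for j in range(n)] + [F(b[i]) for b in rhs_cols]
         for i in range(n)]
    for c in range(n):
        piv = next(i for i in range(c, n) if M[i][c] != 0)
        M[c], M[piv] = M[piv], M[c]
        M[c] = [v / M[c][c] for v in M[c]]
        for i in range(n):
            if i != c and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[c])]
    return [[M[i][n + k] for i in range(n)] for k in range(len(rhs_cols))]

# Basis B as the columns of A, and the images of the basis vectors under L.
A = [[1, 0, 0],
     [1, 1, 0],
     [1, 1, 1]]
images = [[2, 2, 0], [1, 2, -1], [0, 1, -1]]

# Each solved column is one column of [L]_B.
cols = [[int(v) for v in col] for col in solve_columns(A, images)]
print(cols)   # [[2, 0, -2], [1, 1, -3], [0, 1, -2]]
```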
A23 We need to find the B-coordinates of the images of the basis vectors. So, we need to solve

L(1 + x^2) = 1 + x^2 = a1(1 + x^2) + a2(−1 + x) + a3(1 − x + x^2)
L(−1 + x) = −1 + x^2 = b1(1 + x^2) + b2(−1 + x) + b3(1 − x + x^2)
L(1 − x + x^2) = 1 = c1(1 + x^2) + c2(−1 + x) + c3(1 − x + x^2)

We get three systems of linear equations with the same coefficient matrix. Hence, we row reduce the
triple augmented matrix to get

[1 −1 1 | 1 −1 1; 0 1 −1 | 0 0 0; 1 0 1 | 1 1 0] ∼ [1 0 0 | 1 −1 1; 0 1 0 | 0 2 −1; 0 0 1 | 0 2 −1]

The columns in the augmented part are the B-coordinates of the images of the basis vectors, hence

[L]_B = [1 −1 1; 0 2 −1; 0 2 −1]

A24 We need to find the B-coordinates of the images of the basis vectors. We have

D(1) = 0 = 0(1) + 0x + 0x^2
D(x) = 1 = 1(1) + 0x + 0x^2
D(x^2) = 2x = 0(1) + 2x + 0x^2

Hence,

[D]_B = [0 1 0; 0 0 2; 0 0 0]
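Applying [D]_B to the coefficient vector of a polynomial should produce the coefficient vector of its derivative. A sketch with a sample polynomial of our own choosing:

```python
def mat_vec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

# [D]_B for D = d/dx on P2(R) with B = {1, x, x^2}
D_B = [[0, 1, 0],
       [0, 0, 2],
       [0, 0, 0]]

# p(x) = 7 + 5x - 4x^2 has B-coordinates [7, 5, -4];
# its derivative p'(x) = 5 - 8x should have coordinates [5, -8, 0].
p = [7, 5, -4]
print(mat_vec(D_B, p))   # [5, -8, 0]
```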
A25 We need to find the B-coordinates of the images of the basis vectors. So, we need to solve

T([1 1; 0 0]) = [1 1; 0 2] = a1 [1 1; 0 0] + a2 [1 0; 0 1] + a3 [1 1; 0 1]
T([1 0; 0 1]) = [1 1; 0 2] = b1 [1 1; 0 0] + b2 [1 0; 0 1] + b3 [1 1; 0 1]
T([1 1; 0 1]) = [1 2; 0 3] = c1 [1 1; 0 0] + c2 [1 0; 0 1] + c3 [1 1; 0 1]

We get three systems of linear equations with the same coefficient matrix. Hence, we row reduce the
triple augmented matrix to get

[1 1 1 | 1 1 1; 1 0 1 | 1 1 2; 0 1 1 | 2 2 3] ∼ [1 0 0 | −1 −1 −2; 0 1 0 | 0 0 −1; 0 0 1 | 2 2 4]

The columns in the augmented part are the B-coordinates of the images of the basis vectors, hence

[T]_B = [−1 −1 −2; 0 0 −1; 2 2 4]
B Homework Problems
B1 [L]_B = [2 3; 1 4], [L(−v1 + 3v2)]_B = [7; 11]
B2 [L]_B = [2 0 4; 1 2 0; 2 1 3], [L(x)]_B = [0; 0; 0]
B3 [L]_B = [1 0 1; 2 2 0; −1 1 −2], [L(x)]_B = [1; 6; 1]
B4 [L]_B = [0 1; 0 1]
B5 [L]_B = [0 3; −2 0]
B6 [L]_B = [1 1; 0 0]
B7 [L]_B = [1 0; 0 −6], [L(x)]_B = [1; 6]
B8 [L]_B = [−6 −11; 3 8], [L(x)]_B = [−5/2; 5/2]
B9 [L]_B = [−2 5; 3 2], [L(x)]_B = [−3; −5]
B10 [L]_B = [−2 0 0; 0 −2 0; 0 0 4], [L(x)]_B = [−2; 4; 4]
B11 [L]_B = [−1 0 −1; 0 7 5; 0 −10 −7], [L(x)]_B = [−1; −6; 8]
B12 B = {[3; 2], [−2; 3]}, [perp(3,2)]_B = [0 0; 0 1]
B13 B = {[−1; −2], [2; −1]}, [proj(−1,−2)]_B = [1 0; 0 0]
B14 B = {[2; 1; −2], [1; −2; 0], [0; 2; 1]}, [perp(2,1,−2)]_B = [0 0 0; 0 1 0; 0 0 1]
B15 B = {[1; 2; 3], [2; −1; 0], [3; 0; −1]}, [refl(1,2,3)]_B = [−1 0 0; 0 1 0; 0 0 1]
B16 (a) [ [5; 4; 5] ]_B = [2; 2; 3] (b) [L]_B = [0 0 0; 0 1 0; 2 1 1] (c) L(5, 4, 5) = [9; 2; 11]
B17 (a) [ [1; 4; 4] ]_B = [1; 4; 3] (b) [L]_B = [2 0 1; 0 0 1; 0 1 0] (c) L(1 + 4x + 4x^2) = 11 + 9x + 3x^2
B18 [L]_B = [15 8; −19 −10], [L(x)]_B = [38; −48]
B19 [L]_B = [5 0; 0 1], [L(x)]_B = [25; 3]
B20 [L]_B = [0 −5; 0 10], [L(x)]_B = [−20; 40]
B21 [L]_B = [28 −46; 14 −23], [L(x)]_B = [10; 5]
B22 [L]_B = [−1 0 0; 0 5 0; 0 0 −1]
B23 [L]_B = [7 0 −1; 0 2 0; 0 0 14]
B24 [L]_B = [2 0 0; 0 −4 0; 0 0 6]
B25 [L]_B = [12 20 −7; −6 −9 6; 4 7 −2]
B26 [L]_B = [4 1; −1 0]
B27 [L]_B = [3/2 1 1/2; 3/2 1 1/2; 1/2 1 3/2]
B28 [L]_B = [3 0 1; 2 0 1; −1 1 1]
B29 [L]_B = [1 1 0 1; 0 2 1 −1; 1 0 −1 0; 0 0 0 0]
B30 [D]_B = [16 −1 −8; −6 6 −3; 2 10 −18]
B31 [T]_B = [−1 −1; 2 3]
C Conceptual Problems
C1 We have [L]B = P−1 [L]S P so [L]S = P[L]B P−1 . Thus,
[L]C = Q−1 [L]S Q = Q−1 P[L]B P−1 Q
C2 We have

A [v1 v2] = [v1 v2] [d1 0; 0 d2]
[Av1 Av2] = [d1 v1 d2 v2]

Thus, the vectors v1 and v2 must satisfy the special condition

Av1 = d1 v1 and Av2 = d2 v2

We shall see in Chapter 6 that such vectors are called eigenvectors of A.
C3 Assume L is invertible and let B be any basis for V. First observe that [L]B is an n × n matrix. Let
!x ∈ Null([L]B ) and let v ∈ V be the vector such that [v]B = !x . Then,
!0 = [L]B !x = [L]B [v]B = [L(v)]B
But, the only vector whose coordinates are all zeros, is the zero vector. That is,
L(v) = 0
But, by Theorem 4.5.5, we have that Null(L) = {0}. Hence, v = 0. Thus, we must have !x = !0 and so
Null([L]B ) = {!0}. Therefore, by the Invertible Matrix Theorem, [L]B is invertible.
C4 (a) Observe that we have
[L]B [vi ]B = [L(vi )]B = [0]B = !0
Thus, [vi ]B ∈ Null([L]B ) for 1 ≤ i ≤ k.
Consider
!0 = c1 [v1 ]B + · · · + ck [vk ]B = [c1 v1 + · · · + ck vk ]B
Since 0 is the only vector that has coordinates of all zeroes, we have that
c1 v1 + · · · + ck vk = 0
and hence c1 = · · · = ck = 0 since {v1 , . . . , vk } is linearly independent. So, {[v1 ]B , . . . , [vk ]B } is
linearly independent.
Let !x ∈ Null([L]B ). Let v ∈ V be the vector such that [v]B = !x . Then we have
!0 = [L]B !x = [L]B [v]B = [L(v)]B
Thus, L(v) = 0 and hence v ∈ Null(L). Therefore, there exists d1 , . . . , dk such that
v = d1 v1 + · · · + dk vk
Hence,
!x = [v]B = [d1 v1 + · · · + dk vk ]B = d1 [v1 ]B + · · · + dk [vk ]B
Consequently, {[v1 ]B , . . . , [vk ]B } also spans Null([L]B ), and hence is a basis for Null([L]B ).
(b) In (a) we have shown that dim Null([L]B ) = dim Null(L). Therefore, by the Rank-Nullity Theorem (the matrix version Theorem 3.4.9 and the linear mapping version Theorem 4.5.2), we have
that
rank(L) = dim V − dim Null(L) = dim V − dim Null([L]B ) = rank([L]B )
C5 Let B = {v1 , . . . , vn } be a basis for a vector space V and let L : V → V be a linear operator. Then, for
any x ∈ V, we can write x = b1 v1 + · · · + bn vn . Therefore,
L(x) = L(b1 v1 + · · · + bn vn ) = b1 L(v1 ) + · · · + bn L(vn )
Taking B-coordinates of both sides gives
[L(x)]B = [b1 L(v1 ) + · · · + bn L(vn )]B
= b1 [L(v1 )]B + · · · + bn [L(vn )]B
= [ [L(v1 )]B · · · [L(vn )]B ] [b1 ; . . . ; bn ]
C6 Suppose that x = b1 v1 + · · · + bn vn . Then we have
C[L]B [x]B = [ [L(v1 )]C · · · [L(vn )]C ] [b1 ; . . . ; bn ]
= b1 [L(v1 )]C + · · · + bn [L(vn )]C
= [b1 L(v1 ) + · · · + bn L(vn )]C
= [L(b1 v1 + · · · + bn vn )]C
= [L(x)]C
C7 We have
D(1) = 0 = 0(1) + 0x
D(x) = 1 = 1(1) + 0x
D(x2 ) = 2x = 0(1) + 2x
Thus, C[D]B = [0 1 0; 0 0 2].
C8 We have
L(1, −1) = x2 = 0(1 + x2 ) + 1(1 + x) + 1(−1 − x + x2 )
L(1, 2) = 3 + x2 = 3(1 + x2 ) − 2(1 + x) − 2(−1 − x + x2 )
Thus, C[L]B = [0 3; 1 −2; 1 −2].
C9 We have
T (2, −1) = [1 0; 0 3] = −2[1 1; 0 0] + 1[1 0; 0 1] + 2[1 1; 0 1] + 0[0 0; 1 0]
T (1, 2) = [3 0; 0 −1] = 4[1 1; 0 0] + 3[1 0; 0 1] − 4[1 1; 0 1] + 0[0 0; 1 0]
Thus, C[T ]B = [−2 4; 1 3; 2 −4; 0 0].
C10 We have
L(1 + x2 ) = [2; −1] = 3[1; 0] − 1[1; 1]
L(1 + x) = [1; 0] = 1[1; 0] + 0[1; 1]
L(−1 + x + x2 ) = [0; 2] = −2[1; 0] + 2[1; 1]
Thus, C[L]B = [3 1 −2; −1 0 2].
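The columns of C[L]B in C10 can be recovered numerically by solving for the C-coordinates of each image. A sketch with numpy, using the vectors from C10:

```python
import numpy as np

# C-basis vectors (1, 0) and (1, 1) as the columns of a matrix.
C = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# Images under L of the three B-basis polynomials, as columns.
images = np.array([[2.0, 1.0, 0.0],
                   [-1.0, 0.0, 2.0]])

# Each column of C[L]B solves C @ coords = image.
CLB = np.linalg.solve(C, images)
```

The result matches the matrix found by inspection above.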
Section 4.7
A Practice Problems
A1 Let !x ∈ Null(L). Then, we have
[0; 0] = L(x1 , x2 ) = [2x1 − x2 ; 3x1 − 5x2 ]
Row reducing the coefficient matrix of the corresponding homogeneous system gives
[2 −1; 3 −5] ∼ [1 0; 0 1]
Thus, Null(L) = {!0} so L is one-to-one by Theorem 4.7.1. Since the domain and codomain have the
same dimension, we can apply Theorem 4.7.4 to get that L is also onto.
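As a quick numerical cross-check (a sketch, not part of the text's method), the standard matrix of L in A1 has full rank, so its nullspace is trivial:

```python
import numpy as np

# Standard matrix of L(x1, x2) = (2x1 - x2, 3x1 - 5x2).
A = np.array([[2.0, -1.0],
              [3.0, -5.0]])

# Full rank (equivalently, nonzero determinant) means Null(L) = {0}.
rank = np.linalg.matrix_rank(A)
det = np.linalg.det(A)
```

Rank 2 confirms that L is one-to-one, and hence onto by Theorem 4.7.4.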
A2 Let !x ∈ Null(L). Then, we have
[0; 0; 0] = L(x1 , x2 ) = [x1 + 2x2 ; x1 − x2 ; x1 + x2 ]
Row reducing the coefficient matrix of the corresponding homogeneous system gives
[1 2; 1 −1; 1 1] ∼ [1 0; 0 1; 0 0]
Thus, Null(L) = {!0} so L is one-to-one by Theorem 4.7.1. By the Rank-Nullity Theorem, we have
dim Range(L) = dim R2 − nullity(L) = 2 − 0 = 2
Thus, dim Range(L) ≠ dim R3 , so L cannot be onto.
A3 Observe that L(−1, 1, 1) = (0, 0). Thus, Null(L) ≠ {!0} so L is not one-to-one by Theorem 4.7.1. Let !y ∈ R2 and consider
[y1 ; y2 ] = L(x1 , x2 , x3 ) = [x1 + x2 ; x2 − x3 ]
Solving, we find that one solution is x1 = y1 − y2 , x2 = y2 , x3 = 0. That is,
L(y1 − y2 , y2 , 0) = [y1 ; y2 ]
Hence, L is onto.
A4 Let !x ∈ Null(L). Then, we have
[0; 0; 0] = L(x1 , x2 , x3 ) = [x1 + 3x3 ; x1 + x2 + x3 ; −x1 − 2x2 + x3 ]
Row reducing the coefficient matrix of the corresponding homogeneous system gives
[1 0 3; 1 1 1; −1 −2 1] ∼ [1 0 3; 0 1 −2; 0 0 0]
Thus, Null(L) ≠ {!0} so L is not one-to-one by Theorem 4.7.1. Since the domain and codomain have
the same dimension, we can apply Theorem 4.7.4 to get that L is also not onto.
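The rank deficiency in A4 can be cross-checked numerically; a sketch with numpy, using the matrix of A4's system:

```python
import numpy as np

# Standard matrix of L from A4.
A = np.array([[1.0, 0.0, 3.0],
              [1.0, 1.0, 1.0],
              [-1.0, -2.0, 1.0]])

rank = np.linalg.matrix_rank(A)   # 2 < 3, so Null(L) is nontrivial

# From the RREF, x = (-3, 2, 1) is a nonzero vector in Null(L).
v = np.array([-3.0, 2.0, 1.0])
residual = A @ v
```

The zero residual exhibits an explicit nonzero vector in the nullspace.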
A5 Let [a b; c d] ∈ Null(L). Then, we have
[0 0; 0 0] = L([a b; c d]) = [a + c b + c; a + d b]
Row reducing the coefficient matrix of the corresponding homogeneous system gives
[1 0 1 0; 0 1 1 0; 1 0 0 1; 0 1 0 0] ∼ [1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 1]
Thus, Null(L) = {!0} so L is one-to-one by Theorem 4.7.1. Since the domain and codomain have the
same dimension, we can apply Theorem 4.7.4 to get that L is also onto.
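The same conclusion for A5 follows from a numerical rank computation; a sketch with numpy:

```python
import numpy as np

# Coefficient matrix of the homogeneous system in (a, b, c, d) from A5:
# the rows encode a + c, b + c, a + d, and b.
A = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 1.0, 0.0],
              [1.0, 0.0, 0.0, 1.0],
              [0.0, 1.0, 0.0, 0.0]])

rank = np.linalg.matrix_rank(A)   # 4: only the trivial solution
```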
A6 Let [a b; c d] ∈ Null(L). Then, we have
[0; 0; 0] = L([a b; c d]) = [a + c; b − c; a + b]
Row reducing the coefficient matrix of the corresponding homogeneous system gives
[1 0 1; 0 1 −1; 1 1 0] ∼ [1 0 1; 0 1 −1; 0 0 0]
Thus, Null(L) ≠ {0} so L is not one-to-one by Theorem 4.7.1.
Since the rank of [1 0 1; 0 1 −1; 1 1 0] is not equal to its number of rows, by the System-Rank Theorem (3), the system with augmented matrix
[1 0 1 | x1 ; 0 1 −1 | x2 ; 1 1 0 | x3 ]
is not consistent for all !x ∈ R3 . Thus, L is not onto.
A7 Observe that L(1 + 3x) = 0 + 0x. Thus, Null(L) ≠ {0 + 0x} so L is not one-to-one by Theorem 4.7.1.
Since the domain and codomain have the same dimension, we can apply Theorem 4.7.4 to get that L
is also not onto.
A8 Let a + bx + cx2 ∈ Null(L). Then, we have
0 + 0x + 0x2 = L(a + bx + cx2 ) = (a + 4b) + (2b + c)x + (a − 2c)x2
Row reducing the coefficient matrix of the corresponding homogeneous system gives
[1 4 0; 0 2 1; 1 0 −2] ∼ [1 0 −2; 0 1 1/2; 0 0 0]
Thus, Null(L) ≠ {!0} so L is not one-to-one by Theorem 4.7.1. Since the domain and codomain have
the same dimension, we can apply Theorem 4.7.4 to get that L is also not onto.
A9 Let a + bx ∈ Null(L). Then, we have
[0; 0] = L(a + bx) = [a + b(1); a + b(2)]
Row reducing the coefficient matrix of the corresponding homogeneous system gives
[1 1; 1 2] ∼ [1 0; 0 1]
Thus, Null(L) = {!0} so L is one-to-one by Theorem 4.7.1. Since the domain and codomain have the
same dimension, we can apply Theorem 4.7.4 to get that L is also onto.
A10 Let a + bx ∈ Null(L). Then, we have
[0; 0; 0] = L(a + bx) = [a + b; a − b; a + b]
This implies that a = b = 0. Hence, Null(L) = {0}, so L is one-to-one by Theorem 4.7.1. By the
Rank-Nullity Theorem, we have
dim Range(L) = dim P1 (R) − nullity(L) = 2 − 0 = 2
Thus, dim Range(L) ≠ dim R3 , so L cannot be onto.
A11 Let a + bx + cx2 ∈ Null(L). Then, we have
[0 0; 0 0] = L(a + bx + cx2 ) = [a − b b − c; a − c 2a + b − 3c]
Row reducing the coefficient matrix of the corresponding homogeneous system gives
[1 −1 0; 0 1 −1; 1 0 −1; 2 1 −3] ∼ [1 0 −1; 0 1 −1; 0 0 0; 0 0 0]
Thus, Null(L) ≠ {0} so L is not one-to-one by Theorem 4.7.1. By the Rank-Nullity Theorem, we have
dim Range(L) = dim P2 (R) − nullity(L) = 3 − nullity(L) < 3
Thus, dim Range(L) ≠ dim M2×2 (R), so L cannot be onto.
A12 Let !a ∈ Null(L). Then, we have
0 + 0x + 0x2 = L(a, b, c) = (−b + 2c) + (a + 3b − c)x + (2a + 2b)x2
Row reducing the coefficient matrix of the corresponding homogeneous system gives
[0 −1 2; 1 3 −1; 2 2 0] ∼ [1 0 0; 0 1 0; 0 0 1]
Thus, Null(L) = {!0} so L is one-to-one by Theorem 4.7.1. Since the domain and codomain have the
same dimension, we can apply Theorem 4.7.4 to get that L is also onto.
A13 Since dim P2 (R) = 3 and dim R2 = 2, they are not isomorphic by Theorem 4.7.3.
A14 We define L : P3 (R) → R4 by L(a + bx + cx2 + dx3 ) = [a; b; c; d].
To prove that it is an isomorphism, we must prove that it is linear, one-to-one, and onto.
Linear: Let any two elements of P3 (R) be p(x) = a0 +b0 x+c0 x2 +d0 x3 and q(x) = a1 +b1 x+c1 x2 +d1 x3
and let s, t ∈ R. Then
L(sp + tq) = L(s(a0 + b0 x + c0 x2 + d0 x3 ) + t(a1 + b1 x + c1 x2 + d1 x3 ))
= L((sa0 + ta1 ) + (sb0 + tb1 )x + (sc0 + tc1 )x2 + (sd0 + td1 )x3 )
= [sa0 + ta1 ; sb0 + tb1 ; sc0 + tc1 ; sd0 + td1 ] = s[a0 ; b0 ; c0 ; d0 ] + t[a1 ; b1 ; c1 ; d1 ] = sL(p) + tL(q)
Therefore, L is linear.
One-to-one: Let p(x) = a + bx + cx2 + dx3 ∈ Null(L). Then
[0; 0; 0; 0] = L(a + bx + cx2 + dx3 ) = [a; b; c; d]
Hence, a = b = c = d = 0. Thus, the only vector in Null(L) is {0}. Therefore, L is one-to-one by Theorem 4.7.1.
Onto: For any [a; b; c; d] ∈ R4 we have L(a + bx + cx2 + dx3 ) = [a; b; c; d]. Hence L is onto.
Thus, L is a linear, one-to-one, and onto mapping from P3 (R) to R4 and so it is an isomorphism.
A15 We define L : M2×2 (R) → R4 by L([a b; c d]) = [a; b; c; d].
To prove that it is an isomorphism, we must prove that it is linear, one-to-one, and onto.
Linear: Let any two elements of M2×2 (R) be A = [a0 b0 ; c0 d0 ] and B = [a1 b1 ; c1 d1 ] and let s, t ∈ R. Then
L(sA + tB) = L(s[a0 b0 ; c0 d0 ] + t[a1 b1 ; c1 d1 ]) = L([sa0 + ta1 sb0 + tb1 ; sc0 + tc1 sd0 + td1 ])
= [sa0 + ta1 ; sb0 + tb1 ; sc0 + tc1 ; sd0 + td1 ] = s[a0 ; b0 ; c0 ; d0 ] + t[a1 ; b1 ; c1 ; d1 ] = sL(A) + tL(B)
Therefore, L is linear.
One-to-one: Let A = [a b; c d] ∈ Null(L). Then
[0; 0; 0; 0] = L([a b; c d]) = [a; b; c; d]
Hence, a = b = c = d = 0. Thus, the only vector in Null(L) is {0}. Therefore, L is one-to-one by Theorem 4.7.1.
Onto: For any [a; b; c; d] ∈ R4 we have L([a b; c d]) = [a; b; c; d]. Hence L is onto.
Thus, L is a linear, one-to-one, and onto mapping from M2×2 (R) to R4 and so it is an isomorphism.
A16 We define L : P3 (R) → M2×2 (R) by L(a + bx + cx2 + dx3 ) = [a b; c d].
To prove that it is an isomorphism, we must prove that it is linear, one-to-one, and onto.
Linear: Let any two elements of P3 (R) be p(x) = a0 +b0 x+c0 x2 +d0 x3 and q(x) = a1 +b1 x+c1 x2 +d1 x3
and let s, t ∈ R. Then
L(sp + tq) = L(s(a0 + b0 x + c0 x2 + d0 x3 ) + t(a1 + b1 x + c1 x2 + d1 x3 ))
= L((sa0 + ta1 ) + (sb0 + tb1 )x + (sc0 + tc1 )x2 + (sd0 + td1 )x3 )
= [sa0 + ta1 sb0 + tb1 ; sc0 + tc1 sd0 + td1 ]
= s[a0 b0 ; c0 d0 ] + t[a1 b1 ; c1 d1 ]
= sL(p) + tL(q)
Therefore, L is linear.
One-to-one: Let p(x) = a + bx + cx2 + dx3 ∈ Null(L). Then
[0 0; 0 0] = L(a + bx + cx2 + dx3 ) = [a b; c d]
Hence, a = b = c = d = 0. Thus, the only vector in Null(L) is {0}. Therefore, L is one-to-one by
Theorem 4.7.1.
Onto: For any [a b; c d] ∈ M2×2 (R) we have L(a + bx + cx2 + dx3 ) = [a b; c d]. Hence L is onto.
Thus, L is a linear, one-to-one, and onto mapping from P3 (R) to M2×2 (R) and so it is an isomorphism.
A17 By definition a plane P through the origin in R3 has a basis of the form B = {!v 1 , !v 2 }.
We define L : P → R2 by L(a!v 1 + b!v 2 ) = [a; b].
To prove that it is an isomorphism, we must prove that it is linear, one-to-one, and onto.
Linear: Let any two elements of P be !p 1 = a1!v 1 + a2!v 2 and !p 2 = b1!v 1 + b2!v 2 and let s, t ∈ R. Then
L(s!p 1 + t!p 2 ) = L(s(a1!v 1 + a2!v 2 ) + t(b1!v 1 + b2!v 2 ))
= L((sa1 + tb1 )!v 1 + (sa2 + tb2 )!v 2 )
= [sa1 + tb1 ; sa2 + tb2 ]
= s[a1 ; a2 ] + t[b1 ; b2 ]
= sL(!p 1 ) + tL(!p 2 )
Therefore, L is linear.
One-to-one: Let !p = a1!v 1 + a2!v 2 ∈ Null(L). Then
[0; 0] = L(a1!v 1 + a2!v 2 ) = [a1 ; a2 ]
Hence, a1 = a2 = 0. Thus, the only vector in Null(L) is {!0}. Therefore, L is one-to-one by Theorem
4.7.1.
Onto: For any [x1 ; x2 ] ∈ R2 we have L(x1!v 1 + x2!v 2 ) = [x1 ; x2 ]. Hence L is onto.
Thus, L is a linear, one-to-one, and onto mapping from P to R2 and so it is an isomorphism.
A18 Define L : D → P1 (R) by L([a1 0; 0 a2 ]) = a1 + a2 x.
Linear: Let any two elements of D be A = [a1 0; 0 a2 ] and B = [b1 0; 0 b2 ] and let s, t ∈ R. Then
L(sA + tB) = L([sa1 + tb1 0; 0 sa2 + tb2 ])
= (sa1 + tb1 ) + (sa2 + tb2 )x
= s(a1 + a2 x) + t(b1 + b2 x) = sL(A) + tL(B)
Therefore, L is linear.
One-to-one: Let [a1 0; 0 a2 ] ∈ Null(L). Then,
0 + 0x = L([a1 0; 0 a2 ]) = a1 + a2 x
This gives a1 = a2 = 0. Thus, the only vector in Null(L) is {0}. Therefore, L is one-to-one by Theorem
4.7.1.
Onto: For any a1 + a2 x ∈ P1 (R) we can pick A = [a1 0; 0 a2 ] ∈ D so that we have L(A) = a1 + a2 x.
Hence, L is onto.
Hence, L is onto.
Thus, L is an isomorphism from D to P1 (R).
A19 We can find that a basis for T consists of three matrices, so dim T = 3. But, dim M2×2 (R) = 4, so by Theorem 4.7.3, they are not isomorphic.
A20 We can find that a basis for S is {[1 0; 0 1], [0 1; 1 0], [0 0; 0 1]}, and a basis for V is {[1; 0; 1], [0; 1; −1]}. Thus, dim S = 3, but dim V = 2, so by Theorem 4.7.3, they are not isomorphic.
A21 We know that a general vector in P has the form (x − 1)(a2 x2 + a1 x + a0 ). Thus, we define L : P → U by L((x − 1)(a2 x2 + a1 x + a0 )) = [a2 a1 ; 0 a0 ].
Linear: Let any two elements of P be a = (x − 1)(a2 x2 + a1 x + a0 ) and b = (x − 1)(b2 x2 + b1 x + b0 )
and let s, t ∈ R. Then
L(sa + tb) = L(s(x − 1)(a2 x2 + a1 x + a0 ) + t(x − 1)(b2 x2 + b1 x + b0 ))
= L((x − 1)((sa2 + tb2 )x2 + (sa1 + tb1 )x + (sa0 + tb0 )))
= [sa2 + tb2 sa1 + tb1 ; 0 sa0 + tb0 ]
= s[a2 a1 ; 0 a0 ] + t[b2 b1 ; 0 b0 ] = sL(a) + tL(b)
Therefore, L is linear.
One-to-one: Let (x − 1)(a2 x2 + a1 x + a0 ) ∈ Null(L). Then,
[0 0; 0 0] = L((x − 1)(a2 x2 + a1 x + a0 )) = [a2 a1 ; 0 a0 ]
This gives a2 = a1 = a0 = 0. Thus, the only vector in Null(L) is {0}. Therefore, L is one-to-one by
Theorem 4.7.1.
Onto: For any [a b; 0 c] ∈ U we can pick a = (x − 1)(ax2 + bx + c) ∈ P so that we have L(a) = [a b; 0 c]. Hence, L is onto.
Thus, L is an isomorphism from P to U.
B Homework Problems
B1 Neither one-to-one nor onto.
B2 One-to-one, not onto.
B3 Not one-to-one; it is onto.
B4 Neither one-to-one nor onto.
B5 One-to-one and onto.
B6 One-to-one, but not onto.
B7 One-to-one and onto.
B8 Define L(a + bx) = [a; b].
B9 They are not isomorphic.
B10 Define L([a b; c d]) = a + bx + cx2 + dx3 .
B11 Define L(x1 , x2 ) = x1 [1; 0; 1] + x2 [1; 2; 1].
B12 Define L(a[1; 1; 0; 0] + b[0; 1; 0; 1]) = a(1 + x) + bx2 .
B13 They are not isomorphic.
B14 Let B be any basis for V. Define L(!v ) = [!v ]B .
C Conceptual Problems
C1 Suppose that L is one-to-one and let x ∈ Null(L). Then L(x) = 0 = L(0). Hence, by definition of
one-to-one, x = 0. Thus, Null(L) = {0}. On the other hand, assume Null(L) = {0}. If L(u1 ) = L(u2 ),
then
0 = L(u1 ) − L(u2 ) = L(u1 − u2 )
Thus, u1 − u2 ∈ Null(L) and hence, u1 − u2 = 0. Therefore, u1 = u2 as required.
C2 (a) Let x ∈ Null(M ◦ L). Then, 0 = (M ◦ L)(x) = M(L(x)). Hence, L(x) = 0, since Null(M) = {0} by
Theorem 4.7.1. But, L(x) = 0 also implies x = 0 by Theorem 4.7.1. Thus, Null(M ◦ L) = {0} and
so M ◦ L is one-to-one by Theorem 4.7.1.
(b) Let M(x1 , x2 ) = (x1 ) and L(x1 ) = (x1 , x1 ). Then, (M ◦ L)(x1 ) = (x1 ), so M is not one-to-one, but
M ◦ L is one-to-one.
(c) It is not possible. If L is not one-to-one, then the nullspace of L is not trivial, so the nullspace of
M ◦ L cannot be trivial.
C3 Let L : V → W and M : W → U. Let u ∈ U. Since M is onto, there is some w ∈ W such
that M(w) = u. But, L is also onto, so given w ∈ W, there is a v ∈ V such that w = L(v). Thus,
u = (M ◦ L)(v). Hence, M ◦ L is also onto.
C4 Assume L is invertible. Let x ∈ Null(L). Then, L(x) = 0 and
0 = L−1 (0) = L−1 (L(x)) = Id(x) = x
Thus, Null(L) = {0} and hence L is one-to-one by Theorem 4.7.1.
Let y ∈ V. Let x ∈ U be the vector such that L−1 (y) = x. Then, we have
L(x) = L(L−1 (y)) = Id(y) = y
Thus, L is also onto.
On the other hand, assume L is one-to-one and onto. Since L is onto, for any v ∈ V there is a u ∈ U
such that L(u) = v. Since L is one-to-one, there is exactly one such u. Hence, we may define a
mapping L−1 : V → U by L−1 (v) = u if and only if v = L(u). It is easy to verify that this mapping is
linear.
C5 (a) Let C = {L(!u 1 ), . . . , L(!u n )}. Consider
0 = c1 L(!u 1 ) + · · · + cn L(!u n )
Since L is linear we get,
0 = L(c1!u 1 + · · · + cn!u n )
Hence, c1!u 1 +· · ·+cn!u n ∈ Null(L). But, L is one-to-one so Null(L) = {0}. Thus, c1!u 1 +· · ·+cn!u n =
0 and hence c1 = · · · = cn = 0. Therefore, C is linearly independent.
Let v ∈ V. Since L is onto, there exists u ∈ U such that L(u) = v. Since B is a basis for U, we can write u = d1!u 1 + · · · + dn!u n , and hence
v = L(d1!u 1 + · · · + dn!u n ) = d1 L(!u 1 ) + · · · + dn L(!u n )
Thus, C also spans V, and hence is a basis. Thus,
dim V = n = dim U
(b) Assume dim V = n = dim U and let {v1 , . . . , vn } be a basis for V. Define L : U → V by
L(t1 u1 + · · · + tn un ) = t1 v1 + · · · + tn vn
It is easy to verify that L is linear, one-to-one, and onto. Hence, U and V are isomorphic.
C6 Let {u1 , . . . , un } be a basis for U.
Suppose that L is one-to-one. By Exercise 1, {L(u1 ), . . . , L(un )} is linearly independent and hence a
basis for V. Thus, every v ∈ V can be written as
v = t1 L(u1 ) + · · · + tn L(un ) = L(t1 u1 + · · · + tn un )
so L is onto.
If L is onto, then {L(u1 ), . . . , L(un )} is a spanning set for V and hence a basis for V. Let v ∈ Null(L).
Then,
0 = L(v) = L(t1 u1 + · · · + tn un ) = t1 L(u1 ) + · · · + tn L(un )
But, {L(u1 ), . . . , L(un )} is a basis and hence linearly independent. Thus, t1 = · · · = tn = 0, so v = 0.
Thus, Null(L) = {0} and hence L is one-to-one by Theorem 4.7.1.
C7 Assume that C = {L(!v 1 ), . . . , L(!v n )} is a basis for W. Let y ∈ W. Then, we have
y = d1 L(!v 1 ) + · · · + dn L(!v n ) = L(d1!v 1 + · · · + dn!v n )
Therefore, L is onto. Thus, since V and W are isomorphic, L is also one-to-one by Theorem 4.7.4.
Hence, L is an isomorphism.
On the other hand, assume that L is an isomorphism. Following the steps in Problem C5(a), we find
that C = {L(!u 1 ), . . . , L(!u n )} is a basis for W.
C8 Since (u1 , 0) + (u2 , 0) = (u1 + u2 , 0), and t(u1 , 0) = (tu1 , 0); U × {0V } is a subspace of U × V. Define
an isomorphism L : U × {0V } → U by L(u, 0) = u. Check that this is linear, one-to-one, and onto.
8
9
C9 Check that L (u1 , u2 ), v = (u1 , u2 , v) is an isomorphism.
8
9
C10 Check that L (u1 , . . . , un ), (v1 , . . . , vm ) = (u1 , . . . , un , v1 , . . . , vm ) is an isomorphism.
C11 Let x ∈ U so that L(x) ∈ V. Then (M ◦ L)(x) ∈ V, so (L−1 ◦ M ◦ L)(x) ∈ U. Thus, L−1 ◦ M ◦ L is a
linear mapping from U to U (a composition of linear mappings is linear).
Note that since L is invertible, both L and L−1 are one-to-one, and hence have trivial nullspaces.
The range of L−1 ◦ M ◦ L is the “isomorphic image” in U under the mapping L−1 of the range of M ◦ L.
Since L is an isomorphism, L is onto, so the range of L is all of the domain of M; hence the range of
M ◦ L equals the range of M. Thus the range of L−1 ◦ M ◦ L is the subspace in U that is isomorphic
(under L−1 ) to the range of M.
Similarly, the nullspace of L−1 ◦ M ◦ L is isomorphic to the nullspace of M. In particular, it is the
isomorphic image (under L−1 ) of the nullspace of M.
C12 We disprove the statement with a counter example. Let U = R2 and V = R2 and L : U → V be the
linear mapping defined by L(!x ) = !0. Then, clearly dim U = dim V, but L is not an isomorphism since
it isn’t one-to-one nor onto.
C13 By definition, we have that dim(Range(L)) = rank(L). Thus, the Rank-Nullity Theorem gives us that
dim(Range(L)) = dim V − nullity(L)
If dim V < dim W, then dim V − nullity(L) < dim W and hence the range of L cannot be equal to W.
Thus, we have proven the statement is true.
C14 We disprove the statement with a counter example. Let V = R3 and W = R2 and L : V → W be the
linear mapping defined by L(!x ) = !0. Then, clearly dim V > dim W, but L is not onto.
C15 The Rank-Nullity Theorem tells us that
nullity(L) = dim V − rank(L) ≥ dim V − dim W > 0
since rank(L) ≤ dim W. Thus, nullity(L) > 0, so Null(L) ≠ {!0} and hence L is not one-to-one by Theorem 4.7.1.
Therefore, we have proven the statement is true.
Chapter 4 Quiz
E1 The given set is a subset of M4×3 (R) and is non-empty since it clearly contains the zero matrix. Let A
and B be any two vectors in the set. Then, a11 + a12 + a13 = 0 and b11 + b12 + b13 = 0. Then, the first
row of sA + tB satisfies
sa11 + tb11 + sa12 + tb12 + sa13 + tb13 = s(a11 + a12 + a13 ) + t(b11 + b12 + b13 ) = s(0) + t(0) = 0
Thus, it is a subspace of M4×3 (R) and hence a vector space.
E2 The given set is a subset P3 (R) and is non-empty since it clearly contains the zero polynomial. Let
p(x) and q(x) be in the set. Then p(1) = 0, p(2) = 0, q(1) = 0, and q(2) = 0. Hence, sp + tq satisfies
(sp + tq)(1) = sp(1) + tq(1) = 0
and
(sp + tq)(2) = sp(2) + tq(2) = 0
Thus, it is a subspace and hence a vector space.
E3 The set is not a vector space since it is not closed under scalar multiplication. For example, (1/2)[1 1; 1 1] is not in the set since it contains non-integer entries.
E4 The given set is a subset of R3 and is non-empty since it clearly contains !0. Let !x and !y be in the set.
Then, x1 + x2 + x3 = 0 and y1 + y2 + y3 = 0. Then, s!x + t!y satisfies
sx1 + ty1 + sx2 + ty2 + sx3 + ty3 = s(x1 + x2 + x3 ) + t(y1 + y2 + y3 ) = 0 + 0 = 0
Thus, it is a subspace of R3 and hence it is a vector space.
E5 Let p, q ∈ S. Then p(0) = 1 and q(0) = 1. Thus, (p + q)(0) = p(0) + q(0) = 1 + 1 = 2. Thus, p + q " S,
so S is not a subspace.
E6 By definition S is a subset of M2×2 (R). Taking a = b = c = 0 satisfies 0 + 0 = −2(0), so the zero matrix is in S. Let A = [a1 0; a2 a3 ] and B = [b1 0; b2 b3 ] both be in S. Then, a1 + a2 = −2a3 and b1 + b2 = −2b3 .
Observe that for any s, t ∈ R we have
sA + tB = [sa1 + tb1 0; sa2 + tb2 sa3 + tb3 ]
satisfies
(sa1 + tb1 ) + (sa2 + tb2 ) = s(a1 + a2 ) + t(b1 + b2 ) = s(−2a3 ) + t(−2b3 ) = −2(sa3 + tb3 )
Thus, S is a subspace of M2×2 (R).
Any matrix A ∈ S satisfies
[−a2 − 2a3 0; a2 a3 ] = a2 [−1 0; 1 0] + a3 [−2 0; 0 1]
Thus, {[−1 0; 1 0], [−2 0; 0 1]} spans S and is clearly linearly independent. Therefore, it is a basis for S.
E7 By definition S is a subset of P2 (R). Taking a = c = 0 satisfies 0 = 0, so the zero polynomial is in S.
Let p(x) = a1 + c1 x2 and q(x) = a2 + c2 x2 both be in S. Then, a1 = c1 and a2 = c2 . Observe that for
any s, t ∈ R we have
(sp + tq)(x) = (sa1 + ta2 ) + (sc1 + tc2 )x2
satisfies sa1 + ta2 = sc1 + tc2 . Thus, S is a subspace of P2 (R).
Any p(x) = a + cx2 ∈ S has c = a, so it satisfies
a + ax2 = a(1 + x2 )
Thus, {1 + x2 } spans S and is clearly linearly independent. Therefore, it is a basis for S.
E8 By Theorem 4.2.2, S is a subspace of P2 (R). We need to determine whether the spanning set is
linearly independent. Consider
c1 (1 + x2 ) + c2 (2 + x) + c3 (x − 2x2 ) = 0 + 0x + 0x2
This gives the homogeneous system
c1 + 2c2 = 0
c2 + c3 = 0
c1 − 2c3 = 0
Row reducing the corresponding coefficient matrix gives
[1 2 0; 0 1 1; 1 0 −2] ∼ [1 0 −2; 0 1 1; 0 0 0]
Hence, the set is linearly dependent. In particular, we have
(−2)(1 + x2 ) + 1(2 + x) = x − 2x2
Thus, we get that {1 + x2 , 2 + x} also spans S and is clearly linearly independent. Therefore, it is a basis for S.
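The dependency in E8 can be recovered numerically by solving for the coefficients; a sketch with numpy, representing each polynomial by its coefficient vector:

```python
import numpy as np

# Polynomials from E8's spanning set, as coefficient vectors (a0, a1, a2).
p1 = np.array([1.0, 0.0, 1.0])    # 1 + x^2
p2 = np.array([2.0, 1.0, 0.0])    # 2 + x
p3 = np.array([0.0, 1.0, -2.0])   # x - 2x^2

# Solve c1*p1 + c2*p2 = p3; an exact fit confirms the dependency
# found by row reduction.
M = np.column_stack([p1, p2])
c, res, *_ = np.linalg.lstsq(M, p3, rcond=None)
```

The solution c = (-2, 1) reproduces (−2)(1 + x²) + 1(2 + x) = x − 2x².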
E9 A set of five vectors in M2×2 (R) must be linearly dependent by the Basis Theorem, so the set cannot
be a basis.
E10 Consider
t1 [1 1; 2 1] + t2 [0 1; 1 −1] + t3 [0 1; 1 3] + t4 [2 2; 4 −2] = [0 0; 0 0]
Row reducing the coefficient matrix of the corresponding system gives
[1 0 0 2; 1 1 1 2; 2 1 1 4; 1 −1 3 −2] ∼ [1 0 0 2; 0 1 0 1; 0 0 1 −1; 0 0 0 0]
Thus, the system has infinitely many solutions, so the set is linearly dependent and hence it is not a
basis.
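A numerical rank check makes the dependence in E10 immediate; a sketch with numpy, flattening each matrix row-wise into a vector in R4:

```python
import numpy as np

# The four matrices from E10, flattened row-wise.
M1 = np.array([1.0, 1.0, 2.0, 1.0])
M2 = np.array([0.0, 1.0, 1.0, -1.0])
M3 = np.array([0.0, 1.0, 1.0, 3.0])
M4 = np.array([2.0, 2.0, 4.0, -2.0])

A = np.column_stack([M1, M2, M3, M4])
rank = np.linalg.matrix_rank(A)   # 3 < 4, so the set is dependent
```

In fact M4 = 2 M1 + M2 − M3, matching the final column of the RREF.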
E11 A set of three vectors in M2×2 (R) cannot span M2×2 (R) by the Basis Theorem, so the set cannot be a
basis.
E12 (a) Consider
t1!v 1 + t2!v 2 + t3!v 3 + t4!v 4 = !0
Row reducing the coefficient matrix [!v 1 !v 2 !v 3 !v 4 ] of the corresponding system, we find that it has rank 3, with leading ones in the first three columns.
Thus, B = {!v 1 , !v 2 , !v 3 } is a linearly independent set. Moreover, !v 4 can be written as a linear combination of !v 1 , !v 2 , !v 3 , so B also spans S. Hence, it is a basis for S and so dim S = 3.
(b) We need to find constants t1 , t2 , t3 such that
t1!v 1 + t2!v 2 + t3!v 3 = !x
Row reducing the corresponding augmented matrix gives the unique solution t1 = −2, t2 = −1, t3 = 1.
Thus, [!x ]B = [−2; −1; 1].
E13 To find the change of coordinates matrix from C-coordinates to B-coordinates, we need to determine the B-coordinates of the vectors in C. That is, we need to find c1 , c2 , d1 , d2 such that
1 − 3x = c1 (1 + x) + c2 (2 − x)
1 + 2x = d1 (1 + x) + d2 (2 − x)
We row reduce the corresponding doubly augmented matrix to get
[1 2 | 1 1; 1 −1 | −3 2] ∼ [1 0 | −5/3 5/3; 0 1 | 4/3 −1/3]
Thus, Q = [−5/3 5/3; 4/3 −1/3].
To find the change of coordinates matrix from B-coordinates to C-coordinates, we need to determine the C-coordinates of the vectors in B. That is, we need to find c1 , c2 , d1 , d2 such that
1 + x = c1 (1 − 3x) + c2 (1 + 2x)
2 − x = d1 (1 − 3x) + d2 (1 + 2x)
We row reduce the corresponding doubly augmented matrix to get
[1 1 | 1 2; −3 2 | 1 −1] ∼ [1 0 | 1/5 1; 0 1 | 4/5 1]
Thus, P = [1/5 1; 4/5 1].
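The two change of coordinates matrices in E13 must be inverses of each other, which gives a quick sanity check; a sketch with numpy:

```python
import numpy as np

Q = np.array([[-5/3, 5/3],
              [4/3, -1/3]])   # C-coordinates to B-coordinates
P = np.array([[1/5, 1.0],
              [4/5, 1.0]])    # B-coordinates to C-coordinates

# Converting B -> C -> B (or the reverse) must do nothing, i.e. Q = P^{-1}.
round_trip = Q @ P
```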
E14 (a) Let !v 1 = [0; 1; 0] and !v 2 = [1; 0; 1]. Then, {!v 1 , !v 2 } is a basis for the plane since it is a set of two linearly independent vectors in the plane.
(b) Since !v 3 = [1; 0; −1] does not lie in the plane, the set B = {!v 1 , !v 2 , !v 3 } is linearly independent and hence a basis for R3 .
(c) By geometric arguments, we have that
L(!v 1 ) = !v 1 = 1!v 1 + 0!v 2 + 0!v 3
L(!v 2 ) = !v 2 = 0!v 1 + 1!v 2 + 0!v 3
L(!v 3 ) = −!v 3 = 0!v 1 + 0!v 2 + (−1)!v 3
So
[L]B = [1 0 0; 0 1 0; 0 0 −1]
(d) The change of coordinates matrix from B-coordinates to S-coordinates (standard coordinates) is
P = [0 1 1; 1 0 0; 0 1 −1]
It follows that
[L]S = P[L]B P−1 = [0 0 1; 0 1 0; 1 0 0]
E15 The change of coordinates matrix from B-coordinates to S-coordinates is
P = [1 0 1; 1 1 −1; 0 1 1]
Hence,
[L]B = P−1 [L]S P = P−1 [1 −1 2; −1 0 1; −2 1 0] P = [0 2/3 11/3; −1 2/3 −10/3; 0 1/3 1/3]
E16 If t1 L(v1 ) + · · · + tk L(vk ) = 0, then
0 = L(t1 v1 + · · · + tk vk )
and hence t1 v1 + · · · + tk vk ∈ Null(L). Thus,
t1 v1 + · · · + tk vk = 0
and hence t1 = · · · = tk = 0 since {v1 , . . . , vk } is linearly independent. Thus, {L(v1 ), . . . , L(vk )} is
linearly independent.
E17 FALSE. Rn is an n-dimensional subspace of Rn .
E18 TRUE. The dimension of P2 (R) is 3, so a set of 4 polynomials in P2 (R) must be linearly dependent
by the Basis Theorem.
E19 FALSE. The number of components in a coordinate vector is the number of vectors in the basis.
So, if B is a basis for a 4 dimensional subspace, then the B-coordinate vector would have only 4
components.
E20 TRUE. Both ranks are equal to the dimension of the range of L.
E21 FALSE. If L : P2 (R) → P2 (R) is a linear mapping, then the range of L is a subspace of P2 (R), but
the column space of [L]B is a subspace of R3 . Hence, they cannot be equal.
E22 FALSE. The mapping L : R → R2 given by L(x1 ) = (x1 , 0) is one-to-one, but dim R ! dim R2 .
Chapter 4 Further Problems
F1 Let B = {!v 1 , . . . , !v k } be a basis for S. Extend {!v 1 , . . . , !v k } to a basis C = {!v 1 , . . . , !v k , !v k+1 , . . . , !v n } for
V. Define a mapping L by
L(c1!v 1 + · · · + cn!v n ) = ck+1!v k+1 + · · · + cn!v n
For any !x = c1!v 1 + · · · + cn!v n and !y = d1!v 1 + · · · + dn!v n in V, and any t ∈ R, we have
L(t!x + !y ) = L((tc1 + d1 )!v 1 + · · · + (tcn + dn )!v n )
= (tck+1 + dk+1 )!v k+1 + · · · + (tcn + dn )!v n
= t(ck+1!v k+1 + · · · + cn!v n ) + dk+1!v k+1 + · · · + dn!v n
= tL(!x ) + L(!y )
Hence L is linear.
If !x ∈ Null(L), then
!0 = L(!x ) = L(c1!v 1 + · · · + cn!v n ) = ck+1!v k+1 + · · · + cn!v n
Hence, ck+1 = · · · = cn = 0, since {!v k+1 , . . . , !v n } is linearly independent. Thus, !x ∈ Span{!v 1 , . . . , !v k } =
S.
If !x ∈ S, then !x = c1!v 1 + · · · + ck!v k and
L(!x ) = L(c1!v 1 + · · · + ck!v k ) = !0
Therefore, !x ∈ Null(L).
Thus, Null(L) = S as required.
F2 Suppose that the n × n matrix A is row equivalent to both R and S . Denote the columns of R by !r j ,
and the columns of S by !s j . Then R is row equivalent to S . In particular, the two matrices have the
same solution space. So, R!x = !0 if and only if S !x = !0.
First, note that !r j = !0 if and only if R!e j = !0 if and only if S !e j = !0 if and only if !s j = !0. Thus, R and
S must have the same zero columns. Therefore, to simplify the discussion, we assume that R and S
do not have any zero columns.
Then the first column of R and S must both be !e1 . By definition of RREF, the next column of R is
either !r2 = !e2 or !r2 = r12!e1 . In the first case, the first two columns of R are linearly independent,
so the first two columns of S must be linearly independent because R!x = !0 if and only if S !x = !0.
Hence, !s2 = !e2 . In the second case, we have R(−r12!e1 + !e2 ) = !0, so
!0 = S (−r12!e1 + !e2 ) = −r12 !s1 + !s2
Thus, !s2 = r12 !s1 = r12!e1 = !r2 .
Suppose that the first k columns are equal and that in these k columns there are j leading ones.
Thus, by definition of RREF, we know that the columns containing the leading ones are !e1 , . . . , !e j .
As above, either !rk+1 = !e j+1 or !rk+1 = r1(k+1)!e1 + · · · + r j(k+1)!e j . In either case, we can show that
!s j+1 = !r j+1 using the fact that R!x = !0 if and only if S !x = !0. Hence, by induction R = S .
F3 (a) The zero matrix is a magic square of weight zero, so MS 3 is non-empty. Suppose that A is a
magic square of weight k and B is a magic square of weight j. Then
(A + B)11 + (A + B)12 + (A + B)13 = a11 + b11 + a12 + b12 + a13 + b13 = k + j
Similarly, all other row sums, column sums, and diagonal sums are k + j, so A + B is a magic
square of weight k + j.
By a similar calculation, all row sums, column sums, and diagonal sums of tA are tk, so tA is a
magic square of weight tk. Thus MS 3 is closed under addition and scalar multiplication, so it is
a subspace of M3×3 (R).
(b) Suppose that A and B are magic squares of weights a and b respectively. Then
wt(tA + B) = (ta11 + b11 ) + (ta12 + b12 ) + (ta13 + b13 ) = t wt(A) + wt(B)
So wt is a linear mapping.
(c) Since X1 = [1 0 a; b c d; e f g] is in the nullspace of wt, all row sums are equal to zero. From the first row, a = −1. The column sums are also zero, so e = −1 − b, f = −c, g = 1 − d. The diagonal sums are also zero, so 1 + c + 1 − d = 0 and −1 + c − 1 − b = 0. These give d = 2 + c and b = −2 + c. Next, the second row sum is zero, so −2 + c + c + 2 + c = 0 which implies c = 0. Thus, X1 = [1 0 −1; −2 0 2; 1 0 −1].
Now consider X2 = [0 1 h; i j k; l m n]. From the first row, h = −1. From the columns, l = −i, m = −1 − j, n = 1 − k. From the diagonal sums we find that k = 1 + j and i = −1 + j. From the second row sum it then follows that j = 0. Thus, X2 = [0 1 −1; −1 0 1; 1 −1 0].
Finally, consider O = [0 0 p; q r s; t u v]. From the first row, p = 0. From the columns, t = −q, u = −r, and v = −s. From the diagonals, r = −v and r = −t = q. Then the second row gives r = 0, and O = [0 0 0; 0 0 0; 0 0 0].
Does B = {X1 , X2 } form a basis for this nullspace? It is easy to see that sX1 + tX2 = O if and only if s = t = 0, so B is linearly independent. Suppose that A is in Null(wt). Then
0 = wt(A) − a11 wt(X1 ) − a12 wt(X2 ) = wt(A − a11 X1 − a12 X2 )
This implies that A − a11 X1 − a12 X2 ∈ Null(wt). By construction, it is of the form [0 0 ∗; ∗ ∗ ∗; ∗ ∗ ∗]. From our work above, this implies that A − a11 X1 − a12 X2 = O, so
A = a11 X1 + a12 X2
Thus, B is a basis for Null(wt).
Copyright © 2020 Pearson Canada Inc.
83
(d) Clearly all row, column, and diagonal sums of J are equal to 3, so J is a magic square of weight
3. Let A be any magic square of weight k. Then,
6
7
k
k
wt A − J = wt(A) − wt(J) = k − k = 0
3
3
Therefore, A − 3k J is in the nullspace of wt. Thus, for some s, t ∈ R we have A − 3k J = sX 1 + tX 2 .
That is
k
A = J + sX 1 + tX 2
3
2
3
2
3
(e) Since X 1 , X 2 is linearly independent and J " Null(wt), it follows that J, X 1 , X 2 is linearly
independent. Part (d) shows that this set spans MS 3 and hence is a basis for MS 3 .
(f)
(g) A is of weight 6, so k/3 = 2. Hence,






1 −1
1 1 1
 1 0 −1
 0






2 + t −1
0
1
A = 2 1 1 1 + s −2 0






1 1 1
1 0 −1
1 −1
0
 
 2
 
It is easy to check that s = 1 and t = −1, so the coordinates of A with respect to this basis is  1.
 
−1
)' (*
)' (*
1
1
F4 Consider the subspace T = Span
of R2 . Observe that both S1 = Span
and S2 =
1
0
)' (*
−1
Span
are both complements of T.
1
F5 Let f and g be even functions. Then
( f + g)(−x) = f (−x) + g(−x) = f (x) + g(x) = ( f + g)(x)
(t f )(−x) = t f (−x) = t f (x)
so the set of even function is a subspace of continuous functions. The proof that the odd functions
form a subspace is similar.
Observe that if f is both even and odd, then
f (x) = f (−x) = − f (x)
Hence, f (x) = 0 for all x. Thus, the intersection of the subspace of even functions and the subspace
of odd functions only contains {0}.
For an arbitrary function f , write
f (x) =
Let
f+ =
9 18
9
18
f (x) + f (−x) + f (x) − f (−x)
2
2
9
18
f (x) + f (−x) ,
2
f− =
9
18
f (x) − f (−x)
2
Copyright © 2020 Pearson Canada Inc.
84
Then,
9 18
9
18
f (−x) + f (x) = f (x) + f (−x) = f + (x)
2
2
8
9
9
1
18
f − (−x) = f (−x) − f (x) = − f (x) − f (−x) = − f − (x)
2
2
f + (−x) =
So f + is even and f − is odd. Thus every function can be represented as a sum of an even function
and an odd function. Thus, the subspace of even functions and the subspace of odd functions are
complements of each other.
! l } be a basis for some complement T. Since
F6 (a) Let {!v 1 , . . . , !v k } be a basis for S and let {!
w1 , . . . , w
! 1, . . . , w
! l } is a spanning set for Rn . Suppose for some s1 , . . . , sk , t1 , . . . , tl
S + T = Rn , {!v 1 , . . . , !v k , w
! 1 + · · · + tl w
! l = !0
s1!v 1 + · · · + sk!v k + t1 w
Then
! 1 − · · · − tl w
!l
s1!v 1 + · · · + sk!v k = −t1 w
and this vector lies in both S and T. Since S ∩ T = {!0}, and we know that {!v 1 , . . . , !v k } and
! l } are both linearly independent, it follows that s1 = · · · = sk = t1 = · · · = tl = 0. Thus,
{!
w1 , . . . , w
! 1, . . . , w
! l } is also linearly independent and hence a basis for Rn . Therefore, l = n − k.
{!v 1 , . . . , !v k , w
(b) Yes. Suppose that S is neither {!0} nor Rn . Then S is k-dimensional with 0 < k < n. Let
{!v 1 , . . . , !v k } be a basis for S, and let {!v k+1 , . . . , !v n } be a basis for some complement T. Then
Span{!v k + !v k+1 , !v k+2 , . . . , !v n } is an n − k dimensional subspace of Rn , not equal to T. Thus, it is
another complement for S. Therefore, only {!0} and Rn have unique complements.
! ∈ T. Every vector in T can be
F7 Suppose that T = Span{!v , S} and U = Span{!
w, S}. Suppose that w
! = t!v + !s. If w
! " S, t cannot be zero, so
written in the form t!v + !s for some t ∈ R and !s ∈ S. Hence, w
!v =
1
(!
w − !s)
t
for some !s ∈ S. Thus, !v ∈ U.
F8 S ∩ T is a subspace of S, so it is certainly finite dimensional. Suppose that dim S = s, dim T = t, and
dim S ∩ T = k. We consider first the case where k < t, s < t. Then there is a basis {v1 , . . . , vk } for
S ∩ T. This can be extended to a basis for S by adding vectors {w1 , . . . , w s−k }. Since k < t, T contains
vectors not in S. In fact, since T has dimension t, we can extend {v1 , . . . , vk } to a basis for T by adding
a linearly independent set of vectors {z1 , . . . , zt−k } in T. But then
{v1 , . . . , vk , w1 , . . . , w s−k , z1 , . . . , zt−k }
is a minimal spanning set for S + T, and hence a basis, and
dim(S + T) = k + (s − k) + (t − k) = s + t − k
= dim S + dim T − dim S ∩ T
If s = k, then S ∩ T = S. Similarly, if t = k, T = S ∩ T. In either case, the result can be proven by a
slight modification of the argument above.
Copyright © 2020 Pearson Canada Inc.
Chapter 5 Solutions
Section 5.1
A Practice Problems
A1
A2
A3
A4
A5
A6
A7
!!
!!2
!!7
!!
!!−3
!! 2
!!
!!1
!!1
!!
!!2
!!3
!!
!! 7
!!−3
!!
!!−2
!!−5
!
−4!!!
= 2(5) − (−4)7 = 38
5!!
!
1!!!
= (−3)(1) − 1(2) = −5
1!!
!
1!!!
= 1(1) − 1(1) = 0
1!!
!
4!!!
= 2(6) − 4(3) = 0
6!!
!
5!!!
= 7(7) − 5(−3) = 64
7!!
!
−1!!!
= (−2)(−11) − (−1)(−5) = 17
−11!!
!!
!
!!1 3 −4!!!
2!!! = 1(0)(3) = 0.
The matrix is upper triangular, so, !!!0 0
!!0 0
3!!
A8 Since the third column is all zeros we get the determinant is 0.
!!
!
0 0 0!!
!!5
!!3 −4 0 0!!!
! = 5(−4)(1)(1) = −20.
A9 The matrix is lower triangular, so !!
2 1 0!!
!!1
!
!1
2 0 1!
A10 We have
!!
!
!!
!
!!
!
!! 3
4
0!!!
!! 1 −1!!!
!! 2 −1!!!
!! 2
!
1+1
1+2
1 −1!! = 3(−1) !
+ 4(−1) !
+0
!!
!−1
!−4
2!!
2!!
!
!−4 −1
2!
-S
= 3[1(2) − (−1)(−1)] + 4(−1)[(2(2) − (−1)(−4)]
= 3(1) − 4(0) = 3
1
Copyright © 2020 Pearson Canada Inc.
2
A11 We have
!!
!
!!
!!
!!
!!
!!
!!
!! 3 2 1!!!
!!−1 4 5!! = 3(−1)1+1 !!4 5!! + 2(−1)1+2 !!−1 5!! + 1(−1)1+3 !!−1 4!!
!!2 1!!
!! 3 1!!
!! 3 2!!
!!
!
! 3 2 1!!
= 3[4(1) − 5(2)] + 2(−1)[(−1)(1) − 5(3)] + 1[(−1)(2) − 4(3)]
= 3(−6) − 2(−16) + 1(−14) = 0
A12 We have
!!
!!0
!!1
!!
!0
!
!!
!!
5
0!!!
!! = 0 + 5(−1)1+2 !!1 −9!! + 0
8
−9
!!0
!!
√
1!!
2
1!
= 5(−1)[1(1) − (−9)(0)] = −5
A13 We have
!!
!
!!
!!
!! 5 0
0!!!
!! 8 1 −9!! = 5(−1)1+1 !!1 −9!! + 0 + 0
!!0
!! √
!!
1!!
! 2 0
1!
= 5(1)[1(1) − (−9)(0)] = 5
A14 We have
!!
1
!! 2
!! 0
3
!!
0
!!−4
! 3 −5
!
!!
!
!!
!
0 −1!!
!!
!! 3 2
!! 0 2
1!!!
1!!!
2
1!
! =2(−1)1+1 !!! 0 2 −2!!! + 1(−1)1+2 !!!−4 2 −2!!! +
2 −2!!
!!−5 2
!! 3 2
!
1!!
1!!
2
1!
!!
!
!! 0
3 2!!!
0 2!!
0 + (−1)(−1)1+4 !!!−4
!! 3 −5 2!!!
We now need to evaluate each of these 3 × 3 determinants. We have
!!
!
!!
!!
!!
!!
!!
!!
!! 3 2
1!!!
!! 0 2 −2!! = 3(−1)1+1 !!2 −2!! + 2(−1)1+2 !! 0 −2!! + 1(−1)1+3 !! 0 2!!
!!2
!!−5
!!−5 2!!
!!
!
1!!
1!!
!−5 2
1!!
= 3[2(1) − (−2)(2)] − 2[0(1) − (−2)(−5)] + 1[0(2) − 2(−5)]
= 3(6) − 2(−10) + 1(10) = 48
!!
!!
!!
!!
!!
!!
!! 0 2
1!
!!−4 2 −2!!! = 0 + 2(−1)1+2 !!−4 −2!! + 1(−1)1+3 !!−4 2!!
!! 3
!! 3 2!!
!!
!
1!!
! 3 2
1!!
= (−2)[(−4)(1) − (−2)(3)] + 1[(−4)(2) − 2(3)]
= (−2)(2) + 1(−14) = −18
Copyright © 2020 Pearson Canada Inc.
3
!!
!
!!
!!
!!
!
!! 0
3 2!!!
0!!!
!!−4
!
1+2 !!−4 2!!
1+3 !!−4
!
0 2! = 0 + 3(−1) !
+ 2(−1) !
!!
! 3 2!!
! 3 −5!!
! 3 −5 2!!
= (−3)[(−4)(2) − 2(3)] + 2[(−4)(−5) + 0(3)]
= (−3)(−14) + 2(20) = 82
Hence,
!!
1
!! 2
!! 0
3
!!
−4
0
!!
! 3 −5
A15 We have
!!
0
4
!! 1
!! 2 −3
4
!!
3
2
!!−1
! 1
1 −2
!
0 −1!!
!
2
1!!
! = 2(48) + (−1)(−18) + 1(82) = 196
2 −2!!
!
2
1!
!
!!
!
!!
!
0!!
!!
!!−3
!! 2 −3 1!!!
4 1!!!
1!
! = 1(−1)1+1 !!! 3
2 4!!! + 0 + 4(−1)1+3 !!!−1
3 4!!! + 0
4!!
!
!
!
!
! 1 −2 4!
! 1
1 4!!
4!
Next, we evaluate each of the 3 × 3 determinants.
!!
!
!!
!
!!
!
!!
!
!!−3
4 1!!!
!! 2 4!!!
!!3 4!!!
!!3
2!!!
!! 3
!
1+1
1+2
1+3
2 4!! = (−3)(−1) !
+ 4(−1) !
+ 1(−1) !
!!
!−2 4!!
!1 4!!
!1 −2!!
! 1 −2 4!!
= (−3)[(2)(4) − 4(−2)] − 4[(3(4) − 4(1)] + 1[3(−2) − 2(1)]
= (−3)(16) − 4(8) + 1(−8) = −88
!!
!!
!!
!
!!
!
!!
!
!! 2 −3 1!!
!!3 4!!!
!!−1 4!!!
!!−1 3!!!
!!−1
!
1+1
1+2
1+3
3 4!! = 2(−1) !
+ (−3)(−1) !
+ 1(−1) !
!!
!1 4!!
! 1 4!!
! 1 1!!
! 1
1 4!!
= 2[(3(4) − 4(1)] + 3[(−1)(4) − 4(1)] + 1[(−1)(1) − 3(1)]
= 2(8) + 3(−8) + 1(−4) = −12
Hence,
!!
0
4
!! 1
!! 2 −3
4
!!
3
2
!!−1
! 1
1 −2
A16 We get
!
0!!
!
1!!
! = 1(−88) + 4(−12) = −136
4!!
!
4!
!!
!
!!
!
!!
!
!! 1 3 −1!!!
!! 2 0!!!
!! 1 −1!!!
!! 2 1
!
1+2
2+2
0!! = 3(−1) !
+ 1(−1) !
+0
!!
!−1 5!!
!−1
5!!
!−1 0
5!!
= (−3)[2(5) − 0(−1)] + [1(5) − (−1)(−1)] = −26
!!
!!
!!
!
!!
!
!! 1 2 −1!!
!!2 −1!!!
!! 1 −1!!!
!! 3 1
!
2+1
2+2
0!! = 3(−1) !
+ 1(−1) !
+0
!!
!0
!−1
5!!
5!!
!−1 0
5!!
Thus, det A = det AT .
= (−3)[2(5) − 0(−1)] + [1(5) − (−1)(−1)] = −26
Copyright © 2020 Pearson Canada Inc.
4
A17 We get
!!
!! 1
!!−2
!!
!! 3
! 4
2
0
0
5
!
!!
!
!!
!
3
4!!
!!
!!−2 2
!! 1 3 4!!!
5!!!
2
5!
! = 2(−1)1+2 !!! 3 1
4!!! + 0 + 0 + 5(−1)4+2 !!!−2 2 5!!!
1
4!!
!
!! 3 1 4!!
!
! 4 1 −2!!
1 −2!
We now evaluate the 3 × 3 determinants by expanding along the first rows to get
!!
!
!!
!
!!
!
!!
!
!!−2 2
5!!!
!!1
!!3
!!3 1!!!
4!!!
4!!!
!! 3 1
!
1+1
1+2
1+3
4!! = (−2)(−1) !
+ 2(−1) !
+ 5(−1) !
!!
!1 −2!!
!4 −2!!
!4 1!!
! 4 1 −2!!
= (−2)[(1(−2) − 4(1)] − 2[(3(−2) − 4(4)] + 5[3(1) − 1(4)] = 51
!!
!!
!!
!!
!!
!!
!!
!!
!! 1 3 4!!
!!−2 2 5!! = 1(−1)1+1 !!2 5!! + 3(−1)1+2 !!−2 5!! + 4(−1)1+3 !!−2 2!!
!!1 4!!
!! 3 4!!
!! 3 1!!
!!
!
! 3 1 4!!
= [2(4) − 5(1)] − 3[(−2)(4) − 5(3)] + 4[(−2)(1) − 2(3)] = 40
Hence,
For AT we get
!!
!! 1
!!−2
!!
!! 3
! 4
2
0
0
5
!
3
4!!
!
2
5!!
! = (−2)(51) + 5(40) = 98
1
4!!
!
1 −2!
!!
!
!!
!
!!
!
4!!
!!1 −2 3
!!
!!−2 3
!!1 −2 3!!!
4!!!
!!2
0 0
5!
!!
!! = 2(−1)1+2 !!! 2 1
1!!! + 0 + 0 + 5(−1)4+2 !!!3
2 1!!!
3
2
1
1
!!
!!
!! 5 4 −2!!
!!4
5 4!!
!4
5 4 −2!
We now evaluate the 3 × 3 determinants by expanding along the first column to get
!!
!
!!
!
!!
!
!!
!
!!−2 3
4!!!
!!1
!!3
!!3 4!!!
1!!!
4!!!
!! 2 1
!
1+1
1+2
1+3
1!! = (−2)(−1) !
+ 2(−1) !
+ 5(−1) !
!!
!4 −2!!
!4 −2!!
!1 1!!
! 5 4 −2!!
= (−2)[(1(−2) − 4(1)] − 2[(3(−2) − 4(4)] + 5[3(1) − 1(4)] = 51
!!
!!
!!
!
!!
!
!!
!
!!1 −2 3!!
!!2 1!!!
!!−2 3!!!
!!−2 3!!!
!!3
!
1+1
1+2
1+3
2 1!! = 1(−1) !
+ 3(−1) !
+ 4(−1) !
!!
!5 4!!
! 5 4!!
! 2 1!!
!4
5 4!!
= [2(4) − 5(1)] − 3[(−2)(4) − 5(3)] + 4[(−2)(1) − 2(3)] = 40
Hence,
!!
!
4!!
!!1 −2 3
!
!!2
0 0
5!!
!!
! = (−2)(51) + 5(40) = 98 = det A
2 1
1!!
!!3
!
!4
5 4 −2!
Copyright © 2020 Pearson Canada Inc.
5
A18 Since the third column is all zeroes, the determinant is 0.
A19 Since there are no zeros in the matrix, it does not matter which row or column we expand along. We
will expand along the second column.
!!
!
!!
!
!!
!
!!
!
!!−5
2 −4!!!
!! 2
!!−5 −4!!!
!!−5 −4!!!
6!!!
!! 2 −4
!
1+2
2+2
3+2
6!! = 2(−1) !
+ (−4)(−1) !
+ 2(−1) !
!!
!−6 −3!!
!−6 −3!!
! 2
6!!
!−6
2 −3!!
= −2[(2(−3) − 6(−6)] − 4[(−5)(−3) − (−4)(−6)] − 2[6(−5) − 2(−4)]
= (−2)(30) + (−4)(−9) + (−2)(−22) = 20
A20 Since there are two zeros in last row, we expand along the last row.
!!
!
!!
!
!!1 −3 4!!!
!!1 4!!!
!!9
!
3+2
5 0!! = 0 + (−2)(−1) !
+0
!!
!9 0!!
!0 −2 0!!
= (−2)(−1)[1(0) − 4(9)] = −72
A21 Since there are two zeros in the second row, we expand along the second row.
!!
!
!!
!!
!!1 −3 4!!!
!!0 −2 0!! = 0 + (−2)(−1)2+2 !!1 4!! + 0
!!9 0!!
!!
!
!9
5 0!!
= (−2)(1)[1(0) − 4(9)] = 72
A22 Expanding the determinant along the first column gives
!!
!
!!
!!
!!
!
!!2 1
5!!!
5!!!
!!4 3 −1!! = 2(−1)1+1 !!3 −1!! + 4(−1)2+1 !!1
!!1 −2!!
!!1 −2!! + 0
!!
!
!0 1 −2!!
= 2[(3(−2) − (−1)(1)] − 4[1(−2) − 5(1)]
= 2(−5) − 4(−7) = 18
A23 Expanding the determinant along the third column gives
!!
!
!!
!
4 0
1!!
!!−3
!!−3
4
1!!!
!! 4 −1 0 −6!!!
!!
! = 0 + 0 + 0 + 3(−1)4+3 !!! 4 −1 −6!!!
!! 1 −1 0 −3!!!
!! 1 −1 −3!!
! 4 −2 3
6!
Next, we need to evaluate the 3 × 3 determinant. Expanding along the first column gives
!!
!
!!
!!
!!
!
!!
!
!!−3
4
1!!!
1!!!
1!!!
!! 4 −1 −6!! = (−3)(−1)1+1 !!−1 −6!! + 4(−1)2+1 !! 4
3+1 !! 4
!!−1 −3!!
!!−1 −3!! + (−1) !!−1 −6!!
!!
!
! 1 −1 −3!!
= −3[(−1)(−3) − (−6)(−1)] − 4[4(−3) − 1(−1)] + [4(−6) − 1(−1)]
= (−3)(−3) − 4(−11) − 23 = 30
Copyright © 2020 Pearson Canada Inc.
6
Thus,
!!
!
4 0
1!!
!!−3
!! 4 −1 0 −6!!!
!!
! = (−3)(30) = −90
!! 1 −1 0 −3!!!
! 4 −2 3
6!
A24 This matrix is not upper or lower triangular, so we cannot apply Theorem 5.1.3. However, the technique used to prove the theorem can be used to evaluate this determinant.
Expanding the determinant along the bottom row gives
!!
!
!!
!
!!
!
5 −7 8!!
!! 1
!!
!! 5 −7 8!!!
!! 5 −7 8!!!
!! 2 −1
3 0!
!!
! = 1(−1)4+1 !!!−1
3 0!!! + 0 + 0 + 0 = (−1) !!!−1
3 0!!!
2
0 0!!
!!−4
!
!
!
!
! 2
! 2
0 0!
0 0!!
! 1
0
0 0!
We then expand this determinant along the bottom row to get
!!
!
5 −7 8!!
!! 1
!!
!
!
!! 2 −1
!!−7 8!!!
3 0!!
3+1
!!
! = (−1)2(−1) !
! 3 0!!
2
0 0!!
!!−4
!
! 1
0
0 0!
= (−1)(2)[(−7)(0) − (8)(3)] = 48
A25 Expanding along the second row gives
!!
!! 8
!! 0
!!
!! 3
!−1
0
0
1
2
2
2
1
1
!
!!
!
1!!
!!
!! 8 0 1!!!
0!
! = 0 + 0 + 2(−1)2+3 !!! 3 1 1!!! + 0
1!!
!!−1 2 0!!
!
0!
Expanding along the third column now gives
!!
!! 8 0 2
!! 0 0 2
!!
!! 3 1 1
!−1 2 1
!
1!!
!!
!
!!
!
"
#
!
!! 3 1!!!
!! 8 0!!!
0!!
1+3
2+3
!! = (−2) 1(−1) !
+
1(−1)
+
0
!!−1 2!!
!−1 2!!
1!
!!
0
$%
& %
&'
= (−2) 3(2) − 1(−1) − 8(2) − 0(−1)
= (−2)[7 − 16] = 18
A26 Expanding the determinant along the first column gives


!!
!
!!
!
6
1
2
0

!!6
!! 6
1
2!!!
1
2!!!
0
5 −1
1
3+1 !!

1!!! + 5(−1)4+1 !!! 5 −1
1!!!
3 −5 −3 −5 = 0 + 0 + 3(−1) !!5 −1
!!−5 −3 −5!!


!6 −3 −6!!
5
6 −3 −6
Copyright © 2020 Pearson Canada Inc.
7
Next, we need to evaluate the 3 × 3 determinants. Expanding both along the first row gives
!!
!
!!
!
!!
!
!!
!
!!6
1
2!!!
!!−1
!!5
!!5 −1!!!
1!!!
1!!!
!!5 −1
!
1+1
1+2
1+3
1!! = 6(−1) !
+ 1(−1) !
+ 2(−1) !
!!
!−3 −6!!
!6 −6!!
!6 −3!!
!6 −3 −6!!
= 6[(−1)(−6) − 1(−3)] − [5(−6) − 1(6)] + 2[5(−3) − (−1)6]
= 6(9) − (−36) + 2(−9) = 72
!!
!!
!!
!
!!
!
!!
!
!! 6
1
2!!
!!−1
!! 5
!! 5 −1!!!
1!!!
1!!!
!! 5 −1
!
1+1
1+2
1+3
1!! = 6(−1) !
+ 1(−1) !
+ 2(−1) !
!!
!−3 −5!!
!−5 −5!!
!−5 −3!!
!−5 −3 −5!!
= 6[(−1)(−5) − 1(−3)] − [5(−5) − 1(−5)] + 2[5(−3) − (−1)(−5)]
= 6(8) − (−20) + 2(−20) = 28
Hence,


6
1
2
0

0
5 −1
1

3 −5 −3 −5 = 3(72) − 5(28) = 76


5
6 −3 −6
A27 Continually expanding along the first column gives
!!
!
!!
3 4 −5 7!!
!!1
!
!!−3
!!0 −3 1
2 3!!
!
!!
!!
1+1 !! 0
0 4
1 0! = 1(−1) !
!!0
!
!! 0
!!0
0 0 −1 8!!
! 0
!!
!!
0
0 0
4 3
!
1
2 3!!
!
4
1 0!!
!
0 −1 8!!
!
0
4 3!
!!
!
!!4
1 0!!!
= 1(−3)(−1)1+1 !!!0 −1 8!!!
!!0
4 3!!
!!
!!
1+1 !!−1 8!!
= 1(−3)(4)(−1) !
! 4 3!!
= (1)(−3)(4)[(−1)(3) − 8(4)] = 420
A28 Expanding along the first row gives
1+1
det E1 = 1(−1)
!!
!
!!0 1!!!
!!1 0!! + 0 + 0 = 1(0 − 1) = −1
A29 The matrix is upper triangular, hence det E2 = 1(1)(1) = 1.
A30 The matrix is upper triangular, hence det E3 = (−3)(1)(1) = −3.
A31 The matrix is lower triangular, hence det E4 = 1.
Copyright © 2020 Pearson Canada Inc.
8
B Homework Problems
B1 38
B4 −2
B7 5
B2 2
B5 34
B8 0
B3 0
B6 30
B9 15
B10 61
B13 x
B11 −12
B14 2
B12 −6
B15 12
B16
B19
B22
B25
B17
B20
B23
B26
B18
B21
B24
B27
−7
−39/2
30
−2
30
−3 − π
−81
1
0
−48
1
−1
B28 C11 = 4, C12 = −3, C21 = −1, C22 = 2
B29 C11 = 5, C12 = −3, C13 = 1, C21 = 0, C22 = 1, C23 = −2, C31 = 0, C32 = 0, C33 = 5
B30 C11 = −2, C12 = −3, C13 = −1, C21 = −2, C22 = 0, C23 = 2, C31 = −2, C32 = 3, C33 = −1
B31 C11 = 6, C12 = 22, C13 = 4, C21 = 18, C22 = −11, C23 = 1, C31 = 6, C32 = 11, C33 = −7
B32 det(A + B) = 14, det A + det B = 12
B33 (det A)(det B) = −13, det(AB) = −13, det(BA) = −13
B34 det(3A) = 117, 3 det A = 39
B35 det A = 13, det A−1 = 1/13 det AT = 13
C Conceptual Problems
C1 We will prove this for upper triangular matrices by induction on n. The proof for lower triangular
matrices is similar. If A is 2 × 2, then
!!
!
!!a11 a12 !!!
det A = !
= a11 a22
! 0 a22 !!
Assume the result holds for (n − 1) × (n − 1) matrices. Then for an n × n upper triangular matrix A
we can expand along the first column to get
det A = a11 (−1)1+1 det A(1, 1)
But, A11 is the upper triangular matrix obtained from A be deleting the first row and first column. So,
by the inductive hypothesis
det A = a11 a22 · · · ann
as required.
C2 Since a diagonal matrix is upper triangular, the result follows by Theorem 5.1.3.
Copyright © 2020 Pearson Canada Inc.
9
C3
C4
C5
C6
#
"
#
1 −1
1 2
We could take either A or B to be the zero matrix, but instead we take A =
and B =
3
5
1 0
!!
!!
!!2 1!!
so that det(A + B) = !
= 6 = det A + det B.
!4 5!!
"
#
"
#
1 0
−1
0
Take A =
and B =
, then det A + det B = 2, but det(A + B) = 0.
0 1
0 −1
"
#
0 0
Take A =
. Then det(cA) = 0 = c det A.
0 0
!!
!
"
#
!!c 0!!!
1 0
Take A =
, then det cA = !
= c2 ! c det A, for all c ∈ R.
!0 c!!
0 1
"
#
2 5
We could use A = I, but instead we use something more interesting. Take A =
. We find using
3 8
"
#
8 −5
the methods of Chapter 3 that A−1 =
. Hence, det A−1 = 1 = det A.
−3
2
"
#
"
#
1 0
1 0
−1
Take A =
. Then, A =
and hence det A = 2, but det A−1 = 21 .
0 2
0 1/2
"
#
a b
Let n = 2. Then A =
and hence det A = 0. Assume that all (n − 1) × (n − 1) matrices with
a b
identical rows have determinant 0. Let A be an n × n matrix with its i-th row equal to its j-th row.
Expand the determinant along any other row k of A. All the cofactors of row k will be determinants
of (n − 1) × (n − 1) matrices that have two identical rows, hence by the inductive hypothesis are all 0.
Thus, det A = 0 as well.
"
C7 Doing a cofactor expansion along the i-th row gives det E1 = c(−1)i+iCii where Cii is an (n−1)×(n−1)
identity matrix. Thus, det Cii = 1 by Theorem 5.1.3 and hence det E1 = c.
C8 If i > j, then E2 is lower triangular. If i < j, then E3 is upper triangular. In either case, det E2 = 1
since it has all 1s along the main diagonal.
!!
!
!!0 1!!!
C9 We will do this by induction. Let n = 2. Then det E2 = !
= −1. Assume that if E is an
!1 0!!
(n − 1) × (n − 1) elementary matrix corresponding to a row swap, then det E = −1. Consider an
n × n elementary matrix E3 that corresponds to swapping row i and row j. Expanding the determinant
along any other row k gives det E3 = 1(−1)k+k Ckk . Observe that Ckk is now an elementary matrix
corresponding to a row swap. Thus, by the inductive hypothesis, det E3 = 1(−1)k+k (−1) = −1.
C10 (a) Expanding along the first row, we find that the equation is
(a2 − b2 )x1 + (b1 − a1 )x2 + (a1 b2 − a2 b1 ) = 0
so it is certainly a line in R2 unless both a2 − b2 = 0 and b1 − a1 = 0, in which case the equation
says only 0 = 0. If (x1 , x2 ) = (a1 , a2 ), the matrix has two equal rows, and the determinant is zero.
Hence (a1 , a2 ) lies on the line, and similarly (b1 , b2 ) lies on the line.


 x1 x2 x3 1
a a2 a3 1
 = 0.
(b) The equation is det  1
b1 b2 b3 1
c1 c2 c3 1
Copyright © 2020 Pearson Canada Inc.
10
Section 5.2
A Practice Problems
A1 Since performing the row operations R2 − 3R1 and R3 + R1 do not change the determinant, we get
!

 !!
2
4!!!
 1 2 4 !!1


det  3 1 0 = !!!0 −5 −12!!!

 !
!0
−1 3 2
5
6!!
The row operation R3 + R2 does not change the determinant, so
!!
! !
!
!!1
2
4!!! !!!1
2
4!!!
!!0 −5 −12!! = !!0 −5 −12!! = 1(−5)(−6) = 30
!!
! !
!
!0
5
6!! !!0
0 −6!!
Hence, A is invertible since det A ! 0.
A2 Since performing R1 % R3 multiplies the determinant by −1, we get
!!
!


!!1 1 1!!!
3 2 2


det 2 2 1 = (−1) !!!2 2 1!!!


!!3 2 2!!
1 1 1
Performing the row operations R2 − 2R1 and R3 − 3R1 do not change the determinant, we get
!!
!
!!
!
!!1 1 1!!!
!!1
1
1!!!
0 −1!!
(−1) !!!2 2 1!!! = (−1) !!!0
!!3 2 2!!
!!0 −1 −1!!!
Since performing R2 % R3 multiplies the determinant by −1, we get
!!
!
!!
!
!!1
!!1
1
1!!!
1
1!!!
0 −1!!! = (−1)(−1) !!!0 −1 −1!!! = (−1)(−1)(1)(−1)(−1) = 1
(−1) !!!0
!!0 −1 −1!!
!!0
0 −1!!
Hence, A is invertible since det A ! 0.
A3 Since performing the row operations R1 − R2 and R3 − R2 do not change the determinant, we get
!

 !
0 0!!
 5 2 −1 1 !!! 4 0
 1 2 −1 1 ! 1 2 −1 1!!
!!
 = !!
det 
1 4 !! 2 0
2 3!!
 3 2
 !!
!
−2 0
3 5
−2 0
3 5!
Performing 14 R1 has the effect of factoring out the 4. We also perform R4 + R3 which does not change
the determinant. So,
!!
!
!!
!
0 0!!
0 0!!
!! 4 0
!!1 0
!
!! 1 2 −1 1!!
!1 2 −1 1!!!
!!
!! = 4 !!!
!
2 3!
2 0
2 3!!
!! 2 0
!
!
!!
!
!−2 0
3 5!
0 0
5 8!
Copyright © 2020 Pearson Canada Inc.
11
Next, we perform the operations R2 − R1 and R3 − 2R1 which do not change the determinant. Then
!!
!
!!
!
0 0!!
1 0
0 0!!
!!1 0
!
!!
!
!1 2 −1 1!!!
0 2 −1 1!!
!! = 4 !!!
!
4 !!!
2 3!
2 3!!
!!2 0
!!0 0
!
!
!0 0
!0 0
5 8!
5 8!
Finally, we perform the operation R4 − 52 R3 to get
!!
!
!!
!
0 0!!
0 0 !!
!!1 0
!!1 0
!
!0 2 −1 1!!
!0 2 −1 1 !!!
!! = 4 !!!
! = 4(1)(2)(2)(1/2) = 8
4 !!!
2 3!
2 3 !!
!!0 0
!!0 0
!
!
!0 0
!0 0
5 8!
0 1/2!
Hence, A is invertible since det A ! 0.
A4 Since performing the row operations R2 + 2R1 , R3 − 2R1 , and R4 − R1 do not change the determinant,
we get
!

 !
1
3
1 !!1 1 3 1!!
 1

!
−2 −2 −4 −1 !0 0 2 1!!
!!
 = !!
det 
2
8
3 !!0 0 2 1!!
 2
 !!
!
1
1
7
3
0 0 4 2!
There are two identical rows, so det A = 0 by Theorem 5.2.3. Hence, A is not invertible.
A5 We first perform 15 R1 to get
!!


 5 10 5 −5
!! 1
 1 3 5

7
 = 5 !!! 1
det 
!! 1
3
 1 2 6

!!
−1 7 1
1
−1
2
3
2
7
!
1 −1!!
!
5
7!!
!
6
3!!
!
1
1!
The row operations R2 − R1 , R3 − R1 , and R4 + R1 do not change the determinant, so
!!
!
!!
!
!! 1 2 1 −1!!!
!!1 2 1 −1!!!
! 1 3 5
!0 1 4
7!!
8!!
!! = 5 !!!
!
5 !!!
3!
4!!
!! 1 2 6
!!0 0 5
!
!
!−1 7 1
!0 9 2
1!
0!
The row operation R4 − 9R2 does not change the determinant, so
!!
!
!!
!
1 −1!!
!!1 2 1 −1!!!
!!1 2
!
!0 1 4
!0 1
8!!
4
8!!
!! = 5 !!!
!
5 !!!
4!
5
4!!
!!0 0 5
!!0 0
!
!0 9 2
!0 0 −34 −72!!
0!
Finally, the row operation R4 + 34
5 R3 also does not change the determinant, hence
!!
!
!!
!
1 −1!!
−1 !!
!!1 2
!!1 2 1
!
!
!0 1
!0 1 4
4
8!!
8 !!
!! = 5 !!!
! = 5(1)(1)(5)(−224/5) = −1120
5 !!!
5
4!
0 0 5
4 !!
!!0 0
!
!!
!
!0 0 −34 −72!!
0 0 0 −224/5!
Consequently, A is invertible.
Copyright © 2020 Pearson Canada Inc.
12
A6 We first use R2 − R1 , R3 − 2R1 , and R4 − R1 which do not change the determinant. So,
!

 !
2
1
2 !!1
2
1
2!!
1
 !
!
1
5
0
5 !!0
3 −1
3!!
!
det 
 = !!
4
4
6 !0
0
2
2!!
2
 !!
!
1 −1 −4 −5
0 −3 −5 −7!
Using the row operation R4 + R2 and then using the row operation R4 + 3R3 , which do not change the
determinant, we get
!!
! !
! !
!
2
1
2!! !!1 2
1
2!! !!1 2
1 2!!
!!1
! !
! !
!
!!0
3 −1
3!! !!0 3 −1
3!! !!0 3 −1 3!!
!!
!! = !!
!! = !!
! = 1(3)(2)(2) = 12
0
2
2! !0 0
2
2! !0 0
2 2!!
!!0
!
!
!
!
!
!0 −3 −5 −7! !0 0 −6 −4! !0 0
0 2!
Consequently, A is invertible.
A7 The column operation C2 − 2C1 does not change the determinant, so
!!
! !
!
!!1 2 4!!! !!!1 0 4!!!
!!1 2 4!! = !!1 0 4!! = 0
!!
! !
!
!1 2 4!! !!1 0 4!!
A8 The column operation C2 + C1 does not change the determinant, so
!!
! !
!
!! 2
3 5!!! !!! 2 5 5!!!
!!−1
1 0!!! = !!!−1 0 0!!! = 0
!!
! 7 −6 1!! !! 7 1 1!!
since the matrix has two identical columns.
A9 Using the column operations C2 − C1 and C3
cofactor expansion along the first row gives
!!
! !
!! 3 3
3!!! !!! 3
!! 1 2 −2!! = !! 1
!!
! !
!−1 5 −7!! !!−1
− C1 , which do not change the determinant, and a
!
!!
!
0
0!!!
!!1 −3!!!
!
1 −3!! = 3 !
= 36
!6 −6!!
6 −6!!
A10 Using the column operation C2 − C1 and then the row operation R3 − R1 , which do not change the
determinant, followed by a cofactor expansion along the bottom row gives
!!
! !
! !
!
!!
!!
!!2 4 5!!! !!!2 2 5!!! !!!2 2 5!!!
!!1 1 3!! = !!1 0 3!! = !!1 0 3!! = 1 !!2 5!! = 6
!!0 3!!
!!
! !
! !
!
!3 5 5!! !!3 2 5!! !!1 0 0!!
A11 Using the row operations R2 − R1 and R3 − R1 , which do not change the determinant, and a cofactor
expansion along the first column gives
!!
! !
!
!!
!
!!1 −1
2!!! !!!1 −1
2!!!
!!2 −4!!!
!!1
!
!
!
1 −2!! = !!0
2 −4!! = 1 !
= 14
!!
!3
1!!
!1
2
3!! !!0
3
1!!
Copyright © 2020 Pearson Canada Inc.
13
A12 Using the row operations R2 − 2R1 and R3 + R1 , followed by the row operation R3 + R2 , which do not
change the determinant, gives
!!
! !
! !
!
!! 2 4 2!!! !!!2
4
2!!! !!!2
4
2!!!
!! 4 2 1!! = !!0 −6 −3!! = !!0 −6 −3!! = 2(−6)(1) = −12
!!
! !
! !
!
!−2 2 2!! !!0
6
4!! !!0
0
1!!
A13 Using the row operations R2 − 2R1 , R3 − 3R1 , and R4 − R1 , which do not change the determinant,
followed by a cofactor expansion along the first column gives twice gives
!!
!!1
!!2
!!
!!3
!1
2
4
6
3
1
1
5
4
! !
2!! !!1
! !
5!! !!0
!=!
9!! !!0
! !
3 ! !0
!
!!
!
2
1 2!!
!!
!
!!
!!0 −1 1!!!
!!−1 1!!!
0 −1 1!
!
!
! = 1 !!0
2 3!! = 1 !
= −5
! 2 3!!
0
2 3!!
!!1
!!
3 1!!
1
3 1
A14 Using the column operations C3 + C1 and C4 + 3C1 , which do not change the determinant, and then
a cofactor expansion along the first row gives
!!
! !
!
!!
!
0 −2 −6!! !! 2
0
0
0!!
!! 2
!
!
!!
!!−6 −2
5!!!
!! 2 −6 −4 −1!! !! 2 −6 −2
5!
!!
!! = !!
!! = 2 !!!−4
2 −6!!!
−3
−4
5
3
−3
−4
2
−6
!!
!! !!
!!
!!−1 −5 −4!!
!−2 −1 −3
2! !−2 −1 −5 −4!
Next use the row operations R1 − 6R3 and R2 − 4R3 , which do not change the determinant, and a
cofactor expansion along the first column to get
!!
!
!!
!
!!
!
!!−6 −2
!! 0 28 29!!!
5!!!
!!28 29!!!
!
!
!
!
2 −6! = 2 !! 0 22 10!! = 2(−1) !
2 !!−4
= 716
!22 10!!
!!−1 −5 −4!!!
!!−1 −5 −4!!
A15 Using R2 + R1 and R3 − 2R1 , which do not change the determinant, and then a cofactor expansion
along the first column gives
!!
! !
!
!!
!
!! 1
2
3!!! !!!1
2
3!!!
!! 5 −5!!!
!!−1
!
!
!
3 −8!! = !!0
5 −5!! = 1 !
= 5(−4) − (−5)(−9) = −65
!!
!−9 −4!!
! 2 −5
2!! !!0 −9 −4!!
A16 Using C1 + C2 , which does not change the determinant, and a cofactor expansion along the second
column gives
!!
! !
!
!!
!!
!! 2 3 1!!! !!!5 3 1!!!
!!−2 2 0!! = !!0 2 0!! = 2 !!5 1!! = 2[5(4) − 1(4)] = 32
!!4 4!!
!!
! !
!
! 1 3 4!! !!4 3 4!!
A17 Using R2 + R1 and R3 + R1 , which do not change the determinant, and then a cofactor expansion along
the third column gives
!!
! !
!
!!
!
!! 6
!!6 8 −8!!!
8 −8!!! !!! 6 8 −8!!!
!! 7
5
8!!! = !!!13 13
0!!! = 13(4) !!!1 1
0!!! = 0
!!
!
!
!
!
!−2 −4
!1 1
8! ! 4 4
0!
0!!
Copyright © 2020 Pearson Canada Inc.
14
A18 Using 71 R2 , R1 − R2 , R3 − 2R2 , and R4 + 3R2 , which do not change the determinant, gives
!!
!
!!
!! 1 10 7 −9!!!
!! 1 10
!! 7 −7 7
!
! 1 −1
7!
!!
!! = 7 !!!
2!
!! 2 −2 6
!! 2 −2
!
!−3 −3 4
!−3 −3
1!
!
!!
!
7 −9!!
!!0 11 6 −10!!!
!!
!1 −1 1
1
1!
1!!
!! = 7 !!!
!
6
2!
0 4
0!!
!!0
!!
!
!0 −6 7
4
1
4!
Using a cofactor expansion along the third row and then along the first column gives
!!
!
!!
!
!!0 11 −10!!!
!! 11 −10!!!
!
!
1!! = 28(−1) !
= (7)(4) !!1 −1
= (−28)(44 − 60) = 448
!−6
4!!
!!0 −6
!
4!
A19 Using R3 + R1 and R4 + R1 , which do not change the determinant, followed by a cofactor expansion
along the first column gives
!!
! !
!
!!
!
2 6
4!! !!−1 2 6 4!!
!!−1
!
!
!!
!!3 5 6!!!
!! 0
!
!
3 5
6! ! 0 3 5 6 !
!!
!! = !!
! = (−1) !!!1 10 2!!!
1
−1
4
−2
!!
!! !! 0 1 10 2!!!
!!4 7 6!!
! 1
2 1
2! ! 0 4 7 6 !
Using R1 − 3R2 and R3 − 4R2 , which do not change the determinant, followed by a cofactor expansion
along the first row gives
!!
!
!!
!
!!
!
!!3 5 6!!!
!!0 −25
0!!!
!!1
2!!!
!
!
!
!
10
2! = (−1)(−25)(−1) !
(−1) !!1 10 2!! = (−1) !!1
= −25(−2) = 50
!0 −2!!
!!4 7 6!!
!!0 −33 −2!!!
A20 Using R2 − R1 , R3 − R1 , and R4 − R1 and then factoring out the common factors gives
!!
!
!
a
a2
a3 !! !!
!!1
2
2
3
3!
!!0 b − a b2 − a2 b3 − a3 !!! !!!b − a b − a b − a !!!
2
2
c3 − a3 !
det A = !!
2
2
3
3 ! = !c − a c − a
!!0 c − a c − a c − a !!! !!!d − a d2 − a2 d3 − a3 !!!
!0 d − a d2 − a2 d3 − a3 !
!!
!
!!1 b + a b2 + ba + a2 !!!
= (b − a)(c − a)(d − a) !!!1 c + a c2 + ca + a2 !!!
!!1 d + a d2 + da + a2 !!
Now using R2 − R1 and R3 − R1 and then factoring out the common factors gives
!!
!
!!1 b + a
b2 + ba + a2 !!!
det A = (b − a)(c − a)(d − a) !!!0 c − b c2 + ca − b2 − ba !!!
!!0 d − b d2 + da − b2 − ba!!
!!
!
!! c − b c2 + ca − b2 − ba !!!
= (b − a)(c − a)(d − a) !
!d − b d2 + da − b2 − ba!!
!!
!
!!1 a + b + c !!!
= (b − a)(c − a)(d − a)(c − b)(d − b) !
!1 a + b + d!!
= (b − a)(c − a)(d − a)(c − b)(d − b)(d − c)
Copyright © 2020 Pearson Canada Inc.
15
A21 We have that
det A = 1(4) − p(2) = 4 − 2p
A is invertible when 0 ! det A = 4 − 2p. Therefore, A is invertible for all p ! 2.
A22 We have that
det A = p2 + 2 > 0
for all p. Thus, A is invertible for all p.
A23 We have that
det A = p
A is invertible when 0 ! det A = p. Therefore, A is invertible for all p ! 0.
A24 Using C3 − C2 and a cofactor expansion, we get
!!
!
!!
!! 3
2 0!!!
! 3
!
!
p 0! = 2 !!
det A = !!−8
!−8
!! 5 −3 2!!!
!
2!!!
= 6p + 32
p!!
A is invertible when 0 ! det A = 6p + 32. Therefore, A is invertible for all p ! − 16
3 .
A25 Using R1 − 2R2 and R3 − 4R2 and a cofactor expansion, we get
!!
!
!!
!
!!0
1
3!!!
!! 1
3!!!
!
!
1
−1!! = (−1) !
det A = !!1
= −2 + 3(p − 4) = 3p − 14
! p − 4 2!!
!!0 p − 4
2!!
A is invertible when 0 ! det A = 3p − 14. Therefore, A is invertible for all p !
14
3 .
A26 Using R1 − 2R4 and R3 − R2 , and a cofactor expansion, we get
!!
!!0
!0
det A = !!!
!!0
!1
3 −1
1
2
0
5
0
1
!
!!
p!!
!!
!!3 −1
1!
!! = (−1) !!!1
2
5!
!
!
!0
5
0!
!
p!!!
1!!!
5!!
Using R1 − 3R2 and another cofactor expansion gives
!!
!
!!
!
!!0 −7 p − 3!!!
!!−7 p − 3!!!
!
!
2
1 !! = (−1)(−1) !
det A = (−1) !!1
= −35 − 5(p − 3) = −5p − 20
! 5
5 !!
!!0
5
5 !!
Therefore, A is invertible when 0 ! det A = −5p − 20. Hence, A is invertible for all p ! −4.
A27 Using R2 − R1 , R3 − R1 , and R4 − R1 , and a cofactor expansion gives
!!
!!1
!0
det A = !!!
!!0
!0
1 1
1 2
3 8
7 26
!
1 !! !!
! !1 2
3 !! !!
! = !3 8
15 !! !!
! !7 26
p − 1!
Copyright © 2020 Pearson Canada Inc.
!
3 !!!
15 !!!
p − 1!!
16
Next, we use R2 − 3R1 and R3 − 7R1 , and a cofactor expansion to get
!!
!
!
!!1 2
3 !!! !!
!! 2
6 !!!
!
!
!
!
6
det A = !0 2
=!
! = 2(p − 22) − 72 = 2p − 116
!!0 12 p − 22!!! !12 p − 22!
Consequently, A is invertible when 2p − 116 ! 0, hence for all p ! 58.
A28 Using C1 − C4 , R3 + R2 , and R4 − R2 gives

1
0
det A = 
0
0
p
2
0
0
1
p
0
0

1

2
 = 1(2)(0)(1) = 0
2

1
Thus, there is not value of p for which the matrix is invertible.
"
#
7 0
A29 We have det A = 13 and det B = 14. Next, we find that AB =
, so
4 26
det AB = 182 = (13)(14) = det A det B
A30 Using R1 + 2R2 and R3 + 3R2 and a cofactor expansion, we get


!!
!
 0 8 −1
!!8 −1!!!


det A = −1 2 −1 = (−1)(−1) !
= −2
!6 −1!!


0 6 −1
Using R1 + R2 and R3 + 4R2 and a cofactor expansion, we get
!!
!
!!
!
!! 0 3 7!!!
!!3 7!!!
!
!
det B = !!−1 0 5!! = (−1)(−1) !
= 56
!1 21!!
!! 0 1 21!!


7 25
 2


Next, we find that AB = −7 −4 7. To evaluate det AB we use C2 − C1 and C3 + C1 to get


11 11 8
!!
!
!!
!
!!
!
!! 2 5 27!!!
!!5 27!!!
!! 2 5!!!
!
!
det AB = !!−7 3 0!! = 11 !
+ 19 !
!3 0!!
!−7 3!!
!! 11 0 19!!
= −891 + 779 = −112 = (−2)(56) = det A det B
A31 We have
!!
!
!!3 − λ
2 !!!
= (3 − λ)(5 − λ) − 2(4) = λ2 − 8λ + 7 = (λ − 7)(λ − 1)
!! 4
5 − λ!!
Thus, the determinant is 0 when λ = 7 or λ = 1.
Copyright © 2020 Pearson Canada Inc.
17
A32 We have
!!
!
!!4 − λ
−3 !!!
= (4 − λ)(−4 − λ) − (−3)(3) = λ2 − 7
!! 3
−4 − λ!!
√
Thus, the determinant is 0 when λ = ± 7.
A33 Using R3 − R1 and then C1 + C3 gives
!!
! !
! !
!
!!1 − λ
1
1 !!! !!!1 − λ
1
1 !!! !!!2 − λ
1
1 !!!
!! −1 −1 − λ
1 !!! = !!! −1 −1 − λ 1 !!! = !!! 0
−1 − λ 1 !!!
!!
!
!
!
!
! 1
1
1 − λ! ! λ
0
−λ! ! 0
0
−λ!!
= (2 − λ)(−1 − λ)(−λ)
Thus, the determinant is 0 when λ = 2, −1, or 0.
A34 Using R3 − R2 and then C2 + C3 gives
!!
! !
! !
!
!!2 − λ
2
2 !!! !!!2 − λ
2
2 !!! !!!2 − λ
4
2 !!!
!! 2
3−λ
1 !!! = !!! 2
3−λ
1 !!! = !!! 2
4−λ
1 !!!
!!
!
!
!
!
! 2
1
3 − λ! ! 0
−2 + λ 2 − λ! ! 0
0
2 − λ!!
A cofactor expansion along the bottom row now gives
!!
!
!!2 − λ
4 !!!
= (2 − λ) !
= (2 − λ)(λ2 − 6λ)
! 2
4 − λ!!
Thus, the determinant is 0 when λ = 2, 6, or 0.
A35 Using R2 + R3 and then C2 − C3 gives
!!
! !
! !
!
!!−3 − λ
6
−2!!! !!!−3 − λ
6
−2 !!! !!!−3 − λ
8
−2 !!!
!! −1
2 − λ −1!!! = !!! 0
−1 − λ −1 − λ!!! = !!! 0
0
−1 − λ!!!
!!
! 1
−3 −λ!! !! 1
−3
−λ !! !! 1
−3 + λ
−λ !!
A cofactor expansion along the middle row now gives
!!
!
!!−3 − λ
8 !!!
= (−1 − λ)(−1) !
= −(1 + λ)(λ2 − 1)
! 1
−3 + λ!!
Thus, the determinant is 0 when λ = ±1.
A36 Using C2 - C3 and then R3 + R2 gives

    det[4-λ 2 2; 2 4-λ 2; 2 2 4-λ] = det[4-λ 0 2; 2 2-λ 2; 2 -2+λ 4-λ]
                                   = det[4-λ 0 2; 2 2-λ 2; 4 0 6-λ]

Now a cofactor expansion along the second column gives

    (2-λ) det[4-λ 2; 4 6-λ] = (2-λ)(λ^2 - 10λ + 16) = (2-λ)(λ-2)(λ-8)

Thus, the determinant is 0 when λ = 2 or λ = 8.
A37 Since rA is the matrix obtained by multiplying each of the n rows of A by r, we can use Theorem 5.2.1 n times to get det(rA) = r^n det A.

A38 We have AA^{-1} = I, so

    1 = det I = det(AA^{-1}) = (det A)(det A^{-1})

by Theorem 5.2.7. Since det A ≠ 0, we get det A^{-1} = 1/det A.

A39 By Theorem 5.2.7, we have 1 = det I = det A^3 = (det A)^3. Taking cube roots of both sides gives det A = 1.
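The identities in A37–A38 are easy to sanity-check numerically. A sketch with an invertible matrix of our own choosing (the entries below are not from the textbook):

```python
import numpy as np

A = np.array([[3.0, 1.0, 2.0],
              [0.0, 2.0, -1.0],
              [1.0, 0.0, 4.0]])  # det A = 19, so A is invertible
n = A.shape[0]
r = 5.0

# A37: det(rA) = r^n det A
assert np.isclose(np.linalg.det(r * A), r**n * np.linalg.det(A))

# A38: det(A^{-1}) = 1 / det A
assert np.isclose(np.linalg.det(np.linalg.inv(A)), 1.0 / np.linalg.det(A))
```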
B Homework Problems
B1 det A = 1, hence A is invertible.
B2 det A = 0, hence A is not invertible.
B3 det A = 0, hence A is not invertible.
B4 det A = 16, hence A is invertible.
B5 det A = 4, hence A is invertible.
B6 det A = −3, hence A is invertible.
B7 det A = 0, hence A is not invertible.
B8 det A = 0, hence A is not invertible.
B9  -68    B10 72     B11 0
B12 0      B13 -20    B14 -45
B15 8      B16 19     B17 90
B18 0      B19 -240   B20 -12
B21 det A = -5p + 5, A is invertible for p ≠ 1
B22 det A = 7, A is invertible for all p
B23 det A = -12 + 2p, A is invertible for p ≠ 6
B24 det A = -2p, A is invertible for p ≠ 0
B25 det A = 2 - 4p, A is invertible for p ≠ 1/2
"
#
5 −3
B29 det A = 19, det B = −3, det AB = det
= −57
6 −15
"
#
−11
0
B30 det A = −11, det B = −11, det AB = det
= 121
0 −11


3
 7 10


B31 det A = 0, det B = −2, det AB = det  3 10 −3 = 0


16 16 12
Copyright © 2020 Pearson Canada Inc.
19


1 0 0


B32 det A = −1, det B = −1, det AB = det 0 1 0 = 1


0 0 1
B33 λ = 1, 7
B34 λ = 0, 2
B35 λ = 2, 7, 14
B36 λ = −5, 1, 3
C Conceptual Problems
C1 The 2 × 2 case is proven on page 322. Assume the result holds for (n − 1) × (n − 1) matrices. Expand the determinant of B along a row that was not swapped. Say,

    det B = b_i1 C_i1 + · · · + b_in C_in

The cofactors C_ij of B are just the cofactors of A with their rows swapped. Since these cofactors are determinants of (n − 1) × (n − 1) matrices, these will just be negatives of the cofactors of A by the inductive hypothesis. That is, if we let C*_ij denote the cofactors of A, then we have

    C_ij = -C*_ij,  for 1 ≤ j ≤ n

Using this and the fact that the i-th row of B equals the i-th row of A we get

    det B = b_i1 C_i1 + · · · + b_in C_in
          = a_i1 (-C*_i1) + · · · + a_in (-C*_in)
          = -(a_i1 C*_i1 + · · · + a_in C*_in)
          = -det A
C2 The 2 × 2 case is proven on page 323. Assume the result holds for (n − 1) × (n − 1) matrices. Expand the determinant of B along a row i where i ≠ j and i ≠ k. Say,

    det B = b_i1 C_i1 + · · · + b_in C_in

Observe that the cofactors C_iℓ of B are the cofactors C*_iℓ of A with the operation R_k + rR_j applied to them. Hence, by the inductive hypothesis, we have that

    C_iℓ = C*_iℓ,  for 1 ≤ ℓ ≤ n

Using this and the fact that the i-th row of B equals the i-th row of A we get

    det B = b_i1 C_i1 + · · · + b_in C_in = a_i1 C*_i1 + · · · + a_in C*_in = det A
C3 We have det A = det A^T = det(-A). Observe that -A is the matrix obtained from A by multiplying each of the n rows of A by -1, so by Theorem 5.2.1 we have det(-A) = (-1)^n det A. Since n is odd, det(-A) = -det A. Hence, det A = -det A, which implies that det A = 0.
C4 (a) If A is orthogonal, then we have I = AA^T. Thus, by Theorem 5.2.7, we get that

    1 = det I = det(AA^T) = det A det A^T = det A det A = (det A)^2

Thus, taking the square root of both sides we get det A = ±1.

(b) Take A = [-1 0; 0 1]. It is easy to verify that A^T = A^{-1} and det A = -1.
C5 By Theorem 5.2.7, we have

    det(P^{-1}AP) = det P^{-1} det A det P = det A det P^{-1} det P = det A det(P^{-1}P) = det A det I = det A
C6 Assume A = UB where det U = 1. Then,

    det A = det(UB) = det U det B = 1(det B) = det B

On the other hand, assume det A = det B. Taking U = AB^{-1} we get

    UB = (AB^{-1})B = A(B^{-1}B) = AI = A

and

    det U = det(AB^{-1}) = det A det(B^{-1}) = det A (1/det B) = det A (1/det A) = 1
C7 We have AB = -BA, so

    det(AB) = det(-BA)
    det A det B = (-1)^n det B det A
    det A det B = -det A det B

since n is odd. If A is invertible, then det A ≠ 0, so we can divide by det A to get

    det B = -det B

which implies that det B = 0 and so B is not invertible.
C8 True. By the Invertible Matrix Theorem.
"
#
"
#
−1
0
1 0
C9 False. Take A =
and B =
. Then, det A + det B = 1 + 1 = 2, but det(A + B) = 0.
0 −1
0 1
C10 True. We have det(A + BT ) = det(A + BT )T = det(AT + (BT )T ) = det(AT + B).
"
#
" #
0 0
0
$
C11 False. Take A =
and b =
. The system A$x = $b is consistent, but det A = 0.
0 0
0
C12 True. By Theorem 5.2.7, we have 0 = det(AB) = det A det B.
"
#
2
3
C13 False. Take A =
, then A2 = I, but det A = −1.
−1 −2
C14 (a) Expanding along the first row gives

    det[a+p b+q c+r; d e f; g h k]
      = (a+p) det[e f; h k] - (b+q) det[d f; g k] + (c+r) det[d e; g h]
      = a det[e f; h k] - b det[d f; g k] + c det[d e; g h]
          + p det[e f; h k] - q det[d f; g k] + r det[d e; g h]
      = det[a b c; d e f; g h k] + det[p q r; d e f; g h k]

(b) Repeat the steps taken in part (a) on the second row to obtain the answer

    det[a b c; d e f; g h k] + det[p q r; d e f; g h k]
      + det[a b c; x y z; g h k] + det[p q r; x y z; g h k]

C15 We have

    det[a+b p+q u+v; b+c q+r v+w; c+a r+p w+u]
      = det[a+b p+q u+v; c-a r-p w-u; c+a r+p w+u]
      = det[a+b p+q u+v; c-a r-p w-u; 2c 2r 2w]
      = 2 det[a+b p+q u+v; c-a r-p w-u; c r w]
      = 2 det[a+b p+q u+v; -a -p -u; c r w]
      = 2 det[b q v; -a -p -u; c r w]
      = -2 det[b q v; a p u; c r w]
      = 2 det[a p u; b q v; c r w]

C16 We have

    det[1 1 1; 1 1+a 1+2a; 1 (1+a)^2 (1+2a)^2]
      = det[1 1 1; 0 a 2a; 0 2a+a^2 4a+4a^2]
      = det[1 1 1; 0 a 2a; 0 a^2 4a^2]
      = a(4a^2) - 2a(a^2) = 2a^3
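The row-additivity identity proved in C14 (the determinant is linear in one row when the other rows are held fixed) can be verified numerically for particular entries. A sketch with randomly chosen values (our own data, not the textbook's):

```python
import numpy as np

rng = np.random.default_rng(0)
row_a, row_p = rng.random(3), rng.random(3)  # the two summand rows
rows_rest = rng.random((2, 3))               # the fixed remaining rows

M_sum = np.vstack([row_a + row_p, rows_rest])
M_a = np.vstack([row_a, rows_rest])
M_p = np.vstack([row_p, rows_rest])

# det is additive in the first row when the other rows are fixed
assert np.isclose(np.linalg.det(M_sum),
                  np.linalg.det(M_a) + np.linalg.det(M_p))
```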
Section 5.3

A Practice Problems

A1 Let A = [0 1 2 2; 0 2 1 2; 0 2 -1 1; 0 2 -1 -1]. Then, we get that

    x = [C11; C21; C31; C41] = [8; -10; 5; 1]

A2 Let A = [0 2 2 2; 0 2 1 2; 0 2 1 -1; 0 1 2 -1]. Then, we get that

    x = [C11; C21; C31; C41] = [9; -10; 4; -6]

A3 Let A = [0 1 1 2; 0 0 2 4; 0 -1 0 1; 0 1 0 1]. Then, we get that

    x = [C11; C21; C31; C41] = [4; -2; 2; -2]

A4 Let A = [0 3 4 -4; 0 1 1 -1; 0 2 2 -3; 0 -3 1 2]. Then, we get that

    x = [C11; C21; C31; C41] = [4; -9; -3; -1]

A5 (a) We get adj(A) = [1 0 -3; -1 0 1; 1 2 -5].
   (b) A adj(A) = [-2 0 0; 0 -2 0; 0 0 -2]. So det A = -2, and A^{-1} = -(1/2)[1 0 -3; -1 0 1; 1 2 -5].

A6 (a) We get adj(A) = [7 -14 -13; 0 7 5; 0 0 1].
   (b) A adj(A) = [7 0 0; 0 7 0; 0 0 7]. So det A = 7, and A^{-1} = (1/7)[7 -14 -13; 0 7 5; 0 0 1].

A7 (a) We get adj(A) = [ce 0 -dc; 0 ae-bd 0; -cb 0 ac].
   (b) A adj(A) = (ace-bcd)I. So det A = ace - bcd. If ace - bcd ≠ 0, then

       A^{-1} = 1/(ace-bcd) [ce 0 -dc; 0 ae-bd 0; -cb 0 ac]
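The construction behind A1–A4 (a vector built from first-column cofactors is orthogonal to the vectors occupying the remaining columns, by the False Expansion Theorem) is easy to test directly. A sketch using the three vectors we read from A1 — treat the specific entries as an assumption about the garbled original:

```python
import numpy as np

# Columns v1, v2, v3 from A1; the first column of A does not affect C_{i1}.
V = np.array([[1.0, 2.0, 2.0],
              [2.0, 1.0, 2.0],
              [2.0, -1.0, 1.0],
              [2.0, -1.0, -1.0]])

def first_column_cofactors(V):
    """C_{i1} of [* v1 v2 v3]: delete row i of V, take the signed det."""
    n = V.shape[0]
    return np.array([(-1.0) ** i * np.linalg.det(np.delete(V, i, axis=0))
                     for i in range(n)])

x = first_column_cofactors(V)
# x should be orthogonal to every column of V
assert np.allclose(V.T @ x, 0.0)
```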
A8 (a) We get adj(A) = [b -2b -13; 0 ab 5a; 0 0 a].
   (b) A adj(A) = [ab 0 0; 0 ab 0; 0 0 ab]. So det A = ab. If ab ≠ 0, then A^{-1} = (1/ab)[b -2b -13; 0 ab 5a; 0 0 a].

A9 (a) We get adj(A) = [-1 3+t -3; 5 2-3t -2-2t; -2 -11 -6].
   (b) A adj(A) = (-2t-17)I. So det A = -2t-17. If -2t-17 ≠ 0, then

       A^{-1} = 1/(-2t-17) [-1 3+t -3; 5 2-3t -2-2t; -2 -11 -6]

A10 (a) We get adj(A) = [-1 t -t^2+2; 1 -t t-3; -t+1 -1 3t-2].
    (b) A adj(A) = (-t^2+t-1)I. So det A = -t^2+t-1. Since -t^2+t-1 ≠ 0 for any t ∈ R, we get that

        A^{-1} = 1/(-t^2+t-1) [-1 t -t^2+2; 1 -t t-3; -t+1 -1 3t-2]

A11 We have det A = det[1 3; 4 10] = -2 and adj(A) = [10 -3; -4 1]. Hence,

    A^{-1} = (1/det A) adj(A) = -(1/2)[10 -3; -4 1]

A12 We have det A = det[3 -5; 2 -1] = 7 and adj(A) = [-1 5; -2 3]. Hence,

    A^{-1} = (1/det A) adj(A) = (1/7)[-1 5; -2 3]
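The adjugate machinery used in A5–A12 takes only a few lines to implement; a sketch, checked against A11 (A = [1 3; 4 10], det A = -2):

```python
import numpy as np

def adjugate(A):
    """Transpose of the cofactor matrix, so that A @ adj(A) = det(A) * I."""
    n = A.shape[0]
    C = np.empty_like(A, dtype=float)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1.0) ** (i + j) * np.linalg.det(minor)
    return C.T

A = np.array([[1.0, 3.0], [4.0, 10.0]])
adjA = adjugate(A)
detA = np.linalg.det(A)

assert np.allclose(A @ adjA, detA * np.eye(2))   # A adj(A) = (det A) I
assert np.allclose(np.linalg.inv(A), adjA / detA)  # A^{-1} = adj(A)/det(A)
```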
A13 We have

    det A = det[4 1 7; 2 -3 1; -2 6 0] = det[0 13 7; 0 3 1; -2 6 0] = (-2) det[13 7; 3 1] = 16

and adj(A) = [-6 42 22; -2 14 10; 6 -26 -14]. Hence,

    A^{-1} = (1/det A) adj(A) = (1/16)[-6 42 22; -2 14 10; 6 -26 -14] = (1/8)[-3 21 11; -1 7 5; 3 -13 -7]
A14 We have

    det A = det[4 0 -4; 0 -1 1; -2 2 -1] = det[4 0 0; 0 -1 1; -2 2 -3] = 4

and adj(A) = [-1 -8 -4; -2 -12 -4; -2 -8 -4]. Hence,

    A^{-1} = (1/det A) adj(A) = (1/4)[-1 -8 -4; -2 -12 -4; -2 -8 -4]

A15 We have

    det A = det[2 1 1; 1 2 1; 4 1 2] = det[0 -3 -1; 1 2 1; 0 -7 -2] = (-1) det[-3 -1; -7 -2] = 1

and adj(A) = [3 -1 -1; 2 0 -1; -7 2 3]. Hence,

    A^{-1} = (1/det A) adj(A) = [3 -1 -1; 2 0 -1; -7 2 3]

A16 We have

    det A = det[4 0 -2; 0 -1 2; -4 1 -1] = det[4 0 -2; 0 -1 2; 0 1 -3] = 4

and adj(A) = [-1 -2 -2; -8 -12 -8; -4 -4 -4]. Hence,

    A^{-1} = (1/det A) adj(A) = (1/4)[-1 -2 -2; -8 -12 -8; -4 -4 -4]

A17 We have

    det A = det[1 5 3; 3 1 1; -6 -2 2] = det[1 5 3; 0 -14 -8; 0 28 20] = det[1 5 3; 0 -14 -8; 0 0 4] = -56

and adj(A) = [4 -16 2; -12 20 8; 0 -28 -14]. Hence,

    A^{-1} = (1/det A) adj(A) = -(1/56)[4 -16 2; -12 20 8; 0 -28 -14]
"
#
2 −3
A18 The coefficient matrix is A =
, so det A = 19. Hence,
3
5
!
!
1 !!!6 −3!!! 51
x1 =
=
!
5!! 19
19 !7
!!
!!
1 !!2 6!! −4
x2 =
!
!=
19 !3 7! 19
#
51/19
.
−4/19
"
#
3
3
A19 The coefficient matrix is A =
, so det A = −15. Hence,
2 −3
Thus, the solution is $x =
"
!
!
1 !!!2
3!!! −21 7
x1 =
=
!
!=
−15 !5 −3! −15 5
!!
!!
1 !!3 2!!
11
11
x2 =
=−
!!2 5!! =
−15
−15
15
#
7/5
Thus, the solution is $x =
.
−11/15
"


1 −4
 7


1, so
A20 The coefficient matrix is A = −6 −4


4 −1 −2
Hence,
!!
! !
!
!!
!
!!
!
!! 7
1 −4!!! !!! 7 1 −4!!!
!!22 −15!!!
!! 0 −3!!!
!
!
!
!
1! = !22 0 −15! = (−1) !
det A = !!−6 −4
= (−1) !
= −33
!11 −6!!
!11 −6!!
!! 4 −1 −2!!! !!!11 0 −6!!!
!!
!3
1 !!
!0
x1 =
−33 !!!
6
!!
! 7
1 !!
!−6
x2 =
−33 !!!
4
!!
! 7
1 !!
!−6
x3 =
−33 !!!
4
!
!!
!
!
!
!!3
1 −4!!!
1 −4!!!
1
3 !!!−4 1!!! −21 21
!
!
!
!0 −4
−4
1!! =
1!! =
=
!
!=
−33 !!!
−33 !−3 6! −11 11
−1 −2!!
0 −3
6!!
!
!!
!
!
!
!! 7 3 −4!!!
3 −4!!!
1 !
−3 !!! −6 1!!! −26
!
!
! −6 0
0
1!! =
1!! =
!
!=
−33 !!!
−33 !−10 6!
11
6 −2!!
−10 0
6!!
!!
!!
!!
!
!
! 7
1 3!!
1 3!!
1 !!
3 !!! −6 −4!!! −22
!
!
! −6 −4 0!! =
−4 0!! =
=2
!
!=
−33 !!!
−33 !−10 −3! −11
−1 6!!
−10 −3 0!!


 21/11


Thus, the solution is $x = −26/11.


2


A21 The coefficient matrix is A = [2 3 -5; 3 -1 2; 5 4 -6], so (using R1 + 3R2 and R3 + 4R2)

    det A = det[11 0 1; 3 -1 2; 17 0 2] = (-1) det[11 1; 17 2] = -5

Hence,

    x1 = (1/-5) det[2 3 -5; 1 -1 2; 3 4 -6] = -3/-5 = 3/5
    x2 = (1/-5) det[2 2 -5; 3 1 2; 5 3 -6] = 12/-5 = -12/5
    x3 = (1/-5) det[2 3 2; 3 -1 1; 5 4 3] = 8/-5 = -8/5

Thus, the solution is x = [3/5; -12/5; -8/5].


5 3 5


A22 The coefficient matrix is A = 2 4 5, so


7 2 4
Hence,
!
!
!
5 −9!!!
−1 !!!5 −9!!! 3
!
0
0!! =
!
!=
−5 !7 −12! 5
7 −12!!
!
!
!
2 −9!!!
1 !!!−4 −9!!! −12
!
!
1
0! =
!=
!
−5 !−4 −12!
5
3 −12!!
!
!
!
5 2!!!
−1 !!!−4 5!!! −8
!
0 1!! =
!!−4 7!! =
−5
5
!
7 3!
!!
! !
! !
!
!
!!5 3 5!!! !!! 5 3 5!!! !!!14 3 5!!! !!
!!14 5!!!
!
!
!
!
!
!
det A = !!2 4 5!! = !!−3 1 0!! = !! 0 1 0!! = !
! = −9
!!7 2 4!! !! 7 2 4!! !!13 2 4!! !13 4!
!!
!2
1 !!
!1
x1 =
−9 !!!
1
!!
!5
1 !!
!2
x2 =
−9 !!
!7
!!
!5
1 !!
!2
x3 =
−9 !!!
7
!
!!
!0
3 5!!!
1 !!
!
!0
4 5!! =
−9 !!!
2 4!!
1
!!
!!
!−9
2 5!!
1 !!
!−5
1 5!!! =
−9 !!
!
! 7
1 4!
!!
!!
!−9
3 2!!
1 !!
!−5
4 1!!! =
−9 !!!
!
!
2 1
7


−5/9


Thus, the solution is $x = −8/3.


23/9
!
!
!
−1 −3!!!
1 !!!−1 −3!!! −5
!
2
1!! =
=
!
1!!
−9 ! 2
9
2
4!!
!
!!
!
0 −3!!!
!!−9 −3!!! −24 −8
1
0
1!!! =
(−1) !
=
=
!−5
1!!
−9
9
3
1
4!!
!
!
!
−1 0!!!
1 !!!−9 −1!!! 23
!
2 0!! =
=
!
2!!
−9 !−5
9
!
!
2 1


A23 The coefficient matrix is A = [2 9 3; 2 -2 3; 0 3 3], so (using R2 - R1)

    det A = det[2 9 3; 0 -11 0; 0 3 3] = (-11) det[2 3; 0 3] = -66

Hence,

    x1 = (1/-66) det[1 9 3; 1 -2 3; -5 3 3] = -198/-66 = 3
    x2 = (1/-66) det[2 1 3; 2 1 3; 0 -5 3] = 0
    x3 = (1/-66) det[2 9 1; 2 -2 1; 0 3 -5] = 110/-66 = -5/3

Thus, the solution is x = [3; 0; -5/3].
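Cramer's Rule as used in A18–A23 is direct to code: replace column i of A with b and divide the determinant by det A. A sketch, checked against A18 (A = [2 -3; 3 5], b = [6; 7]):

```python
import numpy as np

def cramer(A, b):
    """Solve Ax = b by Cramer's Rule (requires det A != 0)."""
    detA = np.linalg.det(A)
    x = np.empty(len(b))
    for i in range(len(b)):
        Ni = A.copy()
        Ni[:, i] = b  # replace column i with the right-hand side
        x[i] = np.linalg.det(Ni) / detA
    return x

A = np.array([[2.0, -3.0], [3.0, 5.0]])
b = np.array([6.0, 7.0])
x = cramer(A, b)

assert np.allclose(x, [51/19, -4/19])
assert np.allclose(A @ x, b)
```

Cramer's Rule is a hand-computation tool; for numerical work a factorization-based solver such as `np.linalg.solve` is preferred.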
B Homework Problems


 7
−10

B1 $x = 

 11
−2


 30
 10

B4 $x = 

−20
10
 
 1
 1
B2 $x =  
−7
9
 
 1
−2
B5 $x =  
 0
0
 
−5
 5
B3 $x =  
 4
−7
 
43
76
B6 $x =  
 4
16
B7 (a) adj(A) = [c -b; 0 a]
   (b) A adj(A) = [ac 0; 0 ac], det A = ac, A^{-1} = (1/ac)[c -b; 0 a].

B8 (a) adj(A) = [10 0 0; -3 5 0; 81 15 50]
   (b) A adj(A) = [50 0 0; 0 50 0; 0 0 50], det A = 50, A^{-1} = (1/50)[10 0 0; -3 5 0; 81 15 50].
B9 (a) adj(A) = [-3 4 -5; 1 -4 -1; -2 0 2]
   (b) A adj(A) = [-8 0 0; 0 -8 0; 0 0 -8], det A = -8, A^{-1} = (1/-8)[-3 4 -5; 1 -4 -1; -2 0 2].

B10 (a) adj(A) = [0 t-15 -t+15; -6 t+6 t-6; 2 -7 -3]
    (b) A adj(A) = (-30+2t)I, det A = -30+2t, A^{-1} = 1/(-30+2t) [0 t-15 -t+15; -6 t+6 t-6; 2 -7 -3].

B11 (a) adj(A) = [t -1 -3t; -3 7 9; 2t+3 -9 t-12]
    (b) A adj(A) = (-3+7t)I, det A = -3+7t, A^{-1} = 1/(-3+7t) [t -1 -3t; -3 7 9; 2t+3 -9 t-12].

B12 (a) adj(A) = [1 9 -7; -1 3t-3 -2t+3; 0 -t-2 t+2]
    (b) A adj(A) = (2+t)I, det A = 2+t, A^{-1} = 1/(2+t) [1 9 -7; -1 3t-3 -2t+3; 0 -t-2 t+2].

B13 (a) adj(A) = [-t-1 -t^2+3 t+3; 1 2t-3 -2; 1 t-2 -2]
    (b) A adj(A) = (1-t)I, det A = 1-t, A^{-1} = 1/(1-t) [-t-1 -t^2+3 t+3; 1 2t-3 -2; 1 t-2 -2].

B14 (a) adj(A) = [t^2-1 -t+1 -t+1; -t+1 t^2-1 -t+1; -t+1 -t+1 t^2-1]
    (b) A adj(A) = (t^3-3t+2)I, det A = t^3-3t+2,
        A^{-1} = 1/(t^3-3t+2) [t^2-1 -t+1 -t+1; -t+1 t^2-1 -t+1; -t+1 -t+1 t^2-1].

B15 (1/6)[9 5; -3 7]

B16 (1/78)[5 -5; -10 -8]
B17 (1/30)[6 -5 7; 0 10 -20; -6 0 18]
B18 (1/-11)[1 4 -4; -7 10 -4; 8 -13 3]
B19 (1/55)[55 -22 2; 0 11 4; 0 0 5]
B20 (1/-24)[3 -30 21; -12 14 -6; 6 -5 -3]
B21 (1/-58)[8 7 -24; -24 -21 14; 2 9 -6]
B22 (1/15)[0 0 10 0; 0 0 -10 20; 10 0 -5 -5; 2 4 8 -26]
B23 x = [3; -4]
B24 x = [-13/5; -46/15]
B25 x = [107/73; 38/73]
B26 x = [81/41; 11/41]
B27 x = [1/4; 5/4; 0]
B28 x = [1/4; -1; 1/2]
B29 x = [0; 8/7; 31/28]
B30 x = [-10; 7; 11]
C Conceptual Problems
C1 (a) Let v_ij denote the i-th entry of v_j. Then, for any 1 ≤ j ≤ n − 1 we have

    n · v_j = v_1j C_11 + v_2j C_21 + · · · + v_nj C_n1

Now, observe that a_ij = v_i(j-1), so we have

    n · v_j = a_1(j+1) C_11 + a_2(j+1) C_21 + · · · + a_n(j+1) C_n1

But, this is a false expansion where we are using the coefficients from the (j+1)-st column of A (with j ≥ 1), but the cofactors from the 1-st column of A. Hence, by the False Expansion Theorem, we get that n · v_j = 0 for all 1 ≤ j ≤ n − 1.

(b) By definition, n is a normal vector for the hyperplane. Thus, an equation for the hyperplane is

    C_11 x_1 + C_21 x_2 + · · · + C_n1 x_n = 0
C2 We have

    det A = 2 det[-1 3 2; 1 0 0; 2 0 3] = -2 det[3 2; 0 3] = -18

Hence,

    (A^{-1})_23 = (1/det A) adj(A)_23 = (-1/18)(-1)^{3+2} det[2 0 1; 0 3 2; 0 0 3] = 18/18 = 1
    (A^{-1})_42 = (1/det A) adj(A)_42 = (-1/18)(-1)^{2+4} det[2 -1 0; 0 1 0; 0 2 0] = 0

#11

C3 Let L =  0

0

#12 #13 

#22 #23 . Observe that

0 #33
adj(L)21 = C12
adj(L)31 = C13
adj(L)32 = C23
Thus, L−1 =
1
det L
adj(L) is lower triangular.
!!
!0
= (−1) !!
!0
!!
1+3 !!0
= (−1) !
!0
!!
2+3 !!#11
= (−1) !
!0
1+2
!
#23 !!!
=0
#33 !!
!!
#22 !!
=0
0 !!
!
#12 !!!
=0
0 !!
C4 (a) By Cramer's Rule we have x_i = det N_i / det A where N_i is the matrix obtained from A by replacing its i-th column by a_j (its j-th column). Thus if i ≠ j, then N_i has two equal columns, so det N_i = 0. Thus, x_i = 0 for i ≠ j. If i = j, then N_i = A, so x_j = 1. Therefore, x = e_j.

(b) Since A is invertible, Ax = a_j has a unique solution. We know the linear mapping with matrix A maps e_j to a_j, so e_j is the unique solution.
C5 Assume adj A is invertible. Hence, there exists a matrix B such that (adj A)B = I. Then,

    A(adj A)B = AI
    (det A)IB = A

Since A is not invertible, we have det A = 0, so this implies that O_n,n = A. But, if A is the zero matrix, then adj A is also the zero matrix (all the cofactors of the zero matrix are 0). This contradicts the assumption that adj A is invertible. Thus, adj A cannot be invertible.
C6 We have

    A adj A = (det A)I
    A ((1/det A) adj A) = I

Thus, (adj A)^{-1} = (1/det A) A. Since A^{-1} = (1/det A) adj A, we have

    (A^{-1})^{-1} = (1/det A^{-1}) adj A^{-1}
    (det A^{-1}) A = adj A^{-1}
    (1/det A) A = adj A^{-1}
C7 We have A adj A = (det A)I. Thus,

    det(A adj A) = det((det A)I)
    det A det(adj A) = (det A)^n

Since det A ≠ 0, this gives

    det(adj A) = (det A)^{n-1}

C8 We have A adj A = (det A)I, so

    (adj A)(adj(adj A)) = (det(adj A))I
    (det A)A^{-1}(adj(adj A)) = (det A)^{n-1} I    (by C7)

Multiply both sides by (1/det A)A to get

    adj(adj A) = (det A)^{n-2} A

C9 If A and B are both invertible, then AB is invertible, and hence we have

    adj(AB) = det(AB)(AB)^{-1} = (det A det B)B^{-1}A^{-1} = (det B)B^{-1}(det A)A^{-1} = adj(B) adj(A)
C10 Since A is invertible, A^T is invertible by the Invertible Matrix Theorem. We have

    adj A^T = det A^T (A^T)^{-1} = det A (A^{-1})^T = ((det A)A^{-1})^T = (adj A)^T
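All four adjugate identities in C7–C10 can be spot-checked numerically with a cofactor-based adjugate. A sketch with invertible 3 × 3 matrices of our own choosing:

```python
import numpy as np

def adjugate(A):
    """Transpose of the cofactor matrix of A."""
    n = A.shape[0]
    C = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1.0) ** (i + j) * np.linalg.det(minor)
    return C.T

A = np.array([[2.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])  # det A = 8
B = np.array([[1.0, 0.0, 2.0], [0.0, 1.0, 1.0], [1.0, 1.0, 0.0]])  # det B = -3
n, detA = 3, np.linalg.det(A)

assert np.isclose(np.linalg.det(adjugate(A)), detA ** (n - 1))  # C7
assert np.allclose(adjugate(adjugate(A)), detA ** (n - 2) * A)  # C8
assert np.allclose(adjugate(A @ B), adjugate(B) @ adjugate(A))  # C9
assert np.allclose(adjugate(A.T), adjugate(A).T)                # C10
```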
Section 5.4
A Practice Problems
A1 (a) We have Area(u, v) = |det[2 -1; 1 4]| = 9.
   (b) We have Au = [8; 4] and Av = [5; 7]. Thus,

       Area(Au, Av) = |det[8 5; 4 7]| = 36

   We have det A = det[3 2; 1 2] = 4. Thus, using equation (5.4) gives

       Area(Au, Av) = |det A| Area(u, v) = |4|(9) = 36
A2 (a) We have Area(u, v) = |det[3 1; 0 6]| = 18.
   (b) We have Au = [3; 3] and Av = [13; 13]. Thus,

       Area(Au, Av) = |det[3 13; 3 13]| = 0

   We have det A = det[1 2; 1 2] = 0. Thus, using equation (5.4) gives

       Area(Au, Av) = |det A| Area(u, v) = |0|(18) = 0

A3 (a) We have Area(u, v) = |det[3 2; 5 7]| = 11.
   (b) We have Au = [18; 19] and Av = [34; 20]. Thus,

       Area(Au, Av) = |det[18 34; 19 20]| = |-286| = 286

   We have det A = det[-4 6; 3 2] = -26. Thus, using equation (5.4) gives

       Area(Au, Av) = |det A| Area(u, v) = |-26|(11) = 286

A4 We have Au = [3; 1] and Av = [2; -2]. Hence,

    Area(Au, Av) = |det[3 2; 1 -2]| = |-8| = 8
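The pattern in A1–A4 — Area(u, v) = |det[u v]| and Area(Au, Av) = |det A| Area(u, v) — can be confirmed numerically. A sketch with the data from A1:

```python
import numpy as np

u, v = np.array([2.0, 1.0]), np.array([-1.0, 4.0])
A = np.array([[3.0, 2.0], [1.0, 2.0]])

area = abs(np.linalg.det(np.column_stack([u, v])))
area_image = abs(np.linalg.det(np.column_stack([A @ u, A @ v])))

# the linear map scales every parallelogram area by |det A|
assert np.isclose(area_image, abs(np.linalg.det(A)) * area)
```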
A5 (a) The volume determined by u, v, and w is

       Volume(u, v, w) = |det[u v w]| = |det[2 2 1; 3 -1 5; 4 -5 2]| = |63| = 63

   (b) We have

       det A = det[1 -1 3; 4 0 1; 0 2 5] = 42

   Hence, the volume of the image parallelepiped is

       Volume(Au, Av, Aw) = |det A| Volume(u, v, w) = |42|(63) = 2646
A6 (a) The volume determined by u, v, and w is

       Volume(u, v, w) = |det[u v w]| = |det[0 1 2; 2 -5 1; 3 5 6]| = |41| = 41

   (b) We have

       det A = det[3 -1 2; 3 2 -1; 1 4 5] = 78

   Hence, the volume of the image parallelepiped is

       Volume(Au, Av, Aw) = |det A| Volume(u, v, w) = |78|(41) = 3198
A7 We have det[-1 3 1; 2 5 1; 3 7 1] = 4. Thus, the points are not collinear and the area of the triangle is Area = (1/2)|4| = 2.

A8 We have det[-4 -1 1; 2 3 1; 5 5 1] = 0. Thus, the points are collinear.

A9 We have det[0 2 1; 3 3 1; 4 5 1] = 5. Thus, the points are not collinear and the area of the triangle is Area = (1/2)|5| = 5/2.

A10 We have det[1 4 1; 6 7 1; 9 0 1] = -44. Thus, the points are not collinear and the area of the triangle is Area = (1/2)|-44| = 22.

A11 We have det[-3 -26 1; 1 2 1; 3 16 1] = 0. Thus, the points are collinear.
A12 (a) The 4-volume determined by v1, v2, v3, and v4 is

        Volume(v1, v2, v3, v4) = |det[1 0 0 1; 0 1 2 0; 2 1 3 2; 0 3 0 5]| = |5| = 5

    (b) The 4-volume determined by the images Av1, Av2, Av3, and Av4 is

        Volume(Av1, Av2, Av3, Av4) = |det A| Volume(v1, v2, v3, v4) = |-49|(5) = 245
A13 The n-volume of the parallelotope induced by v1, ..., vn is |det[v1 · · · vn]|. Since adding a multiple of one column to another does not change the determinant, we get that

    |det[v1 · · · vn]| = |det[v1 · · · v_{n-1} (vn + t v1)]|

which is the volume of the parallelotope induced by v1, ..., v_{n-1}, vn + t v1.
B Homework Problems
"
#
"
#
−9
−12
B1 (a) Area($u , $v ) = 3, (b) A$u =
, A$v =
, det A = −5, Area(A$u , A$v ) = 15
−4
−7
" #
" #
9
−1
B2 (a) Area($u , $v ) = 7, (b) A$u =
, A$v =
, det A = −9, Area(A$u , A$v ) = 63
0
−7
" #
" #
−7
−6
B3 (a) Area($u , $v ) = 4, (b) A$u =
, A$v =
, det A = 1, Area(A$u , A$v ) = 4
10
8
B4 Area = 1
B5 Area = 3
B6 Area = 4
B7 Area = 1
B8 Area = 1
B9 Area = 4
B10 Area = 2
$ ) = 2, (b) det A = 12, Area(A$u , A$v , A$
B11 (a) Area($u , $v , w
w) = 24.
$ ) = 6, (b) det A = 10, Area(A$u , A$v , A$
B12 (a) Area($u , $v , w
w) = 60.
B13 Area = 5
B14 Area = 7/2
B15 The points are collinear.
B16 Area = 1
B17 Area = 9
B18 The points are collinear.
B19 Area = 5/2
B20 (a) 4-Volume($v 1 , $v 2 , $v 3 , $v 4 ) = 6,
B21 (a) 4-Volume($v 1 , $v 2 , $v 3 , $v 4 ) = 44,
(b) 4-Volume(A$v 1 , A$v 2 , A$v 3 , A$v 4 ) = 24
(b) 4-Volume(A$v 1 , A$v 2 , A$v 3 , A$v 4 ) = 176
C Conceptual Problems

C1 The n-volume of the parallelotope induced by 2v1, v2, ..., vn is |det[2v1 v2 · · · vn]|. Since multiplying a column of a matrix by a factor of 2 multiplies the determinant by a factor of 2, we get that

    |det[2v1 v2 · · · vn]| = 2|det[v1 v2 · · · vn]|

Thus, the volume of the parallelotope induced by v1, ..., vn is half of the volume of the parallelotope induced by 2v1, v2, ..., vn.

C2 We have L(x) = Ax and M(x) = Bx. Then, (M ∘ L)(x) = M(L(x)) = B(Ax) = (BA)x. Hence, for any u, v, w ∈ R^3 we have

    Volume(BAu, BAv, BAw) = |det[BAu BAv BAw]|
                          = |det(BA[u v w])|
                          = |det BA| |det[u v w]|
                          = |det BA| Volume(u, v, w)

Hence, the volume is multiplied by a factor of |det BA|.
Chapter 5 Quiz

E1 det[3 5; 2 -1] = 3(-1) - 5(2) = -13. Thus, the matrix is invertible.

E2 det[3 5 -2; 0 2 √2; 0 0 3] = 3(2)(3) = 18. Thus, the matrix is invertible.

E3 Since the row operations R1 + R2 and R3 + 2R2 do not change the determinant we get

    det[3 2 1; 2 -2 2; 1 4 -1] = det[5 0 3; 2 -2 2; 5 0 3] = 0

Thus, the matrix is not invertible.

E4 Expanding along the third column gives

    det[-2 4 0 0; 1 -2 2 9; -3 6 0 3; 1 -1 0 0] = 2(-1) det[-2 4 0; -3 6 3; 1 -1 0]

Expanding along the third column again we get

    (-2) det[-2 4 0; -3 6 3; 1 -1 0] = (-2)(3)(-1) det[-2 4; 1 -1] = -12

Thus, the matrix is invertible.
E5 Since the row operations R2 + 2R1, R3 − R1, and R4 − R1 do not change the determinant we get

    det[3 2 7 -8; -6 -1 -9 20; 3 8 21 -17; 3 5 12 1] = det[3 2 7 -8; 0 3 5 4; 0 6 14 -9; 0 3 5 9]

Now using the row operations R3 − 2R2 and R4 − R2 gives

    det[3 2 7 -8; 0 3 5 4; 0 6 14 -9; 0 3 5 9] = det[3 2 7 -8; 0 3 5 4; 0 0 4 -17; 0 0 0 5] = 3(3)(4)(5) = 180

Thus, the matrix is invertible.

E6 Expanding along the fifth row and then along the first row gives

    det A = 5(2) det[0 3 0; 0 0 1; 4 0 0] = 5(2)(4) det[3 0; 0 1] = 5(2)(4)(3)(1) = 120

Thus, the matrix is invertible.

E7 We have

    det[k 2 1; 0 3 k; 2 -4 1] = k(3 + 4k) + 2(2k - 3) = 4k^2 + 7k - 6

Hence, the matrix is not invertible whenever 4k^2 + 7k - 6 = 0. Applying the quadratic formula, we get

    k = (-7 ± √(7^2 - 4(4)(-6)))/(2(4)) = -7/8 ± (√145)/8

Hence, the matrix is invertible for all k ≠ -7/8 ± (√145)/8.

E8 Since multiplying a row by a constant multiplies the determinant by the constant, we get det B = 3(7) = 21.

E9 We require four swaps to move the first row to the bottom and all other rows up. In particular, we keep swapping the row that starts in the top row with the row beneath it until it is at the bottom. Since swapping rows multiplies the determinant by −1 we get det C = (−1)^4(7) = 7.

E10 2A multiplies each entry in A (in particular, each entry in each of the 5 rows) by 2. Hence, factoring out the 2 from each of the 5 rows we get det(2A) = 2^5(7) = 224.

E11 By Theorem 5.2.7, det A^{-1} = 1/det A = 1/7.
E12 By Theorem 5.2.7, det(A^T A) = det A^T det A = det A det A = 7(7) = 49.

E13 Using R3 − R2 followed by C2 + C3 gives

    det[2-λ 2 1; 2 1-λ 2; 2 2 1-λ] = det[2-λ 2 1; 2 1-λ 2; 0 1+λ -1-λ]
                                   = det[2-λ 3 1; 2 3-λ 2; 0 0 -1-λ]
                                   = (-1-λ)[(2-λ)(3-λ) - 3(2)]
                                   = -(1+λ)(λ^2 - 5λ) = -(1+λ)λ(λ-5)

Thus, the determinant is 0 when λ = -1, 0, 5.

E14 (a) We have adj(A) = [2 -6 2; -4 6 -1; 2 -6 -1].
    (b) A adj(A) = [-6 0 0; 0 -6 0; 0 0 -6]. Thus, det A = -6.
    (c) We have that (A^{-1})_31 = (1/det A)(adj(A))_31 = (1/-6)(2) = -1/3.

E15 The coefficient matrix is A = [2 3 1; 1 1 -1; -2 0 2]. We get

    det A = det[2 3 3; 1 1 0; -2 0 0] = (-2) det[3 3; 1 0] = 6

Hence,

    x2 = (1/det A) det[2 1 1; 1 -1 -1; -2 1 2] = -3/6 = -1/2

E16 (a) The volume of the parallelepiped is

        Volume(u, v, w) = |det[1 2 0; 1 -1 3; -2 3 4]| = 33

    (b) The volume of the image parallelepiped is

        Volume(Au, Av, Aw) = |det A| Volume(u, v, w) = |-24|(33) = 792

E17 Let A = [0 0 0 0; 3 3 -1 -1; 1 3 5 4; 3 4 2 1]. We find that a vector x that is orthogonal to v1, v2, v3 is

    x = [C11; C12; C13; C14] = [-8; 7; -1; -2]
E18 We find that

    det[-2 -1 1; 1 8 1; 2 11 1] = det[0 15 3; 1 8 1; 0 -5 -1] = 0

Thus, the points are collinear.

E19 We find that

    det[-1 5 1; 2 7 1; 3 3 1] = det[0 0 1; 3 2 1; 4 -2 1] = -14

Thus, the area of the triangle is (1/2)|-14| = 7.

E20 We find that

    det[-3 4 1; 0 3 1; 2 2 1] = det[-3 1 1; 0 0 1; 2 -1 1] = (-1) det[-3 1; 2 -1] = -1

Thus, the area of the triangle is (1/2)|-1| = 1/2.
Chapter 5 Further Problems

F1 Let A = [a1 · · · an]. Since all row sums are equal to zero, we get that

    a1 + · · · + an = 0

Therefore the columns are linearly dependent, so det A = 0.

F2 From the assumption, det A and det A^{-1} are both integers. But det A^{-1} = 1/det A, and 1/det A is an integer if and only if det A = ±1.
F3 Denote the corners of the triangle by PA, PB, and PC according to the angle at the point. First, suppose that all three angles are less than or equal to π/2. Let Q be the point on the line from PA to PB such that PCQ is orthogonal to PAPB. Observe that we then have

    c = ‖PAPB‖ = ‖PAQ‖ + ‖QPB‖ = b cos A + a cos B

Similarly,

    a = c cos B + b cos C
    b = a cos C + c cos A

Rewrite the equations,

    b cos A + a cos B         = c
            c cos B + b cos C = a
    c cos A         + a cos C = b

and solve for cos A by Cramer's Rule:

    cos A = det[c a 0; a c b; b 0 a] / det[b a 0; 0 c b; c 0 a]
          = (c(ca) - a(a^2 - b^2)) / (2abc)
          = (b^2 + c^2 - a^2) / (2bc)

The proof for obtuse triangles is similar.
"
#
b44 b45
F4 (a) Let B =
. Expand the big determinant along its fifth column (which contains the second
b54 b55
column of B), then along the fourth column (which contains the first column of B):
det
"
A
O2,3
O3,2
B
#
= −b45 det
"
A
O1,3
O3,1
b54
#
+ b55
"
A
O1,3
O3,1
b44
#
= (−b45 b54 + b44 b55 ) det A = det A det B
(b) By six row interchanges (each of two rows in B with each of three rows in A), we can transform
this matrix into the matrix of part (a). Hence, the answer is (−1)6 det A det B = det A det B.
F5 (a) If a = b, V3 (a, b, c) = 0 because two rows are equal. Therefore (a − b) divides V3 (a, b, c).
Similarly, (b − c) and (c − a) divide V3 (a, b, c). Hence, for some t,
V3 (a, b, c) = t(c − a)(c − b)(b − a) = t(b − a)c2 + · · ·
The cofactor of c2 in the determinant is (b−a), so t = 1. Therefore, V3 (a, b, c) = (c−a)(c−b)(b−a).
(b) The same argument shows that (d − a), (d − b), (d − c), (c − a), (c − b), (b − a) are factors of V4(a, b, c, d). The cofactor of d³ in the determinant is exactly V3(a, b, c), and this agrees with the coefficient of d³ in (d − a)(d − b)(d − c)V3(a, b, c). Hence,
V4(a, b, c, d) = (d − a)(d − b)(d − c)V3(a, b, c)
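The factorization V3(a, b, c) = (c − a)(c − b)(b − a) can be spot-checked numerically; the helper names below are ours, not the text's.

```python
def det3(M):
    """Determinant of a 3x3 matrix by cofactor expansion along row 0."""
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def V3(a, b, c):
    """The 3x3 Vandermonde determinant with rows (1, x, x^2)."""
    return det3([[1, a, a * a], [1, b, b * b], [1, c, c * c]])

a, b, c = 2, 3, 5
print(V3(a, b, c), (c - a) * (c - b) * (b - a))  # → 6 6
```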
F6 (a) We have
|a11 a12 a13 a14; a21 a22 a23 a24; 0 0 a33 a34; 0 0 a43 a44| = a11 |a22 a23 a24; 0 a33 a34; 0 a43 a44| − a21 |a12 a13 a14; 0 a33 a34; 0 a43 a44|
= a11 a22 |a33 a34; a43 a44| − a12 a21 |a33 a34; a43 a44|
= (a11 a22 − a12 a21) |a33 a34; a43 a44|
= det A1 det A4
(b) Let
A1 = [1 0; 0 2], A2 = [1 0; 0 3], A3 = [1 0; 0 1], A4 = [2 0; 0 5]
Then det A1 det A4 − det A2 det A3 = 2(10) − 3(1) = 17. However,
det A = |1 0 1 0; 0 2 0 3; 1 0 2 0; 0 1 0 5| = |1 0 1 0; 0 2 0 3; 0 0 1 0; 0 1 0 5| = 7
Thus, det A ≠ det A1 det A4 − det A2 det A3.
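The counterexample can be verified numerically. The recursive determinant helper below is ours (a sketch, not the text's method):

```python
def det(M):
    """Determinant by recursive cofactor expansion along row 0."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

A = [[1, 0, 1, 0],
     [0, 2, 0, 3],
     [1, 0, 2, 0],
     [0, 1, 0, 5]]
print(det(A))          # → 7
print(2 * 10 - 3 * 1)  # → 17
```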
Chapter 6 Solutions
Section 6.1
A Practice Problems
A1 We have
"! " ! "
! "
3 4 1
7
1
A!v 1 =
=
=7
2 5 1
7
1
!
"! " ! "
! "
3 4 −2
−2
−2
A!v 2 =
=
=1
2 5
1
1
1
!
Thus, !v 1 is an eigenvector of A corresponding to the eigenvalue λ = 7, and !v 2 is an eigenvector of A
corresponding to the eigenvalue λ = 1.
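The eigenvector checks in A1–A5 can be automated with a small helper (ours, not the text's): v is an eigenvector of A exactly when Av is a scalar multiple of v.

```python
def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def eigenvalue_of(A, v):
    """Return lam with Av = lam*v, or None if v is not an eigenvector."""
    Av = matvec(A, v)
    # Use the first non-zero entry of v to read off the candidate ratio.
    k = next(i for i, x in enumerate(v) if x != 0)
    lam = Av[k] / v[k]
    return lam if all(abs(y - lam * x) < 1e-12 for x, y in zip(v, Av)) else None

A = [[3, 4], [2, 5]]
print(eigenvalue_of(A, [1, 1]))   # → 7.0
print(eigenvalue_of(A, [-2, 1]))  # → 1.0
```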
A2 We have
Av1 = [2 12; −2 −8][1; 1] = [14; −10] ≠ λ[1; 1]
Av2 = [2 12; −2 −8][−2; 1] = [8; −4] = (−4)[−2; 1]
Thus, v1 is not an eigenvector of A, but v2 is an eigenvector of A corresponding to the eigenvalue λ = −4.
A3 We have
Av1 = [−10 9 5; −10 9 5; 2 3 −1][1; 0; 1] = [−5; −5; 1] ≠ λ[1; 0; 1]
Av2 = [−10 9 5; −10 9 5; 2 3 −1][1; 0; 2] = [0; 0; 0] = 0[1; 0; 2]
Thus, v1 is not an eigenvector of A, but v2 is an eigenvector of A corresponding to the eigenvalue λ = 0.
A4 We have
Av1 = [−10 9 5; −10 9 5; 2 3 −1][1; 1; −1] = [−6; −6; 6] = −6[1; 1; −1]
Av2 = [−10 9 5; −10 9 5; 2 3 −1][1; −1; 1] = [−14; −14; −2] ≠ λ[1; −1; 1]
Thus, v1 is an eigenvector of A corresponding to the eigenvalue λ = −6, but v2 is not an eigenvector of A.
A5 We have
Av1 = [4 1 3; 8 6 1; −2 −2 3][1; 3; 3] = [16; 29; 1] ≠ λ[1; 3; 3]
Av2 = [4 1 3; 8 6 1; −2 −2 3][−2; 3; 1] = [−2; 3; 1] = 1[−2; 3; 1]
Thus, v1 is not an eigenvector of A, but v2 is an eigenvector of A corresponding to the eigenvalue λ = 1.
A6 We have
C(λ) = |−λ 1; −6 5−λ| = λ² − 5λ + 6 = (λ − 2)(λ − 3)
Hence, the eigenvalues are λ1 = 2 and λ2 = 3.
To find a basis for the eigenspace of λ1 = 2 we row reduce A − λ1 I. We get
A − 2I = [−2 1; −6 3] ∼ [1 −1/2; 0 0]
Thus, a basis for the eigenspace of λ1 is {[1; 2]}.
To find a basis for the eigenspace of λ2 = 3 we row reduce A − λ2 I. We get
A − 3I = [−3 1; −6 2] ∼ [1 −1/3; 0 0]
Thus, a basis for the eigenspace of λ2 is {[1; 3]}.
A7 We have
C(λ) = |1−λ 3; 0 1−λ| = λ² − 2λ + 1 = (λ − 1)²
Hence, the only eigenvalue is λ1 = 1.
To find a basis for the eigenspace of λ1 = 1 we row reduce A − λ1 I. We get
A − 1I = [0 3; 0 0] ∼ [0 1; 0 0]
Thus, a basis for the eigenspace of λ1 is {[1; 0]}.
A8 We have
C(λ) = |2−λ 0; 0 3−λ| = (λ − 2)(λ − 3)
Hence, the eigenvalues are λ1 = 2 and λ2 = 3.
To find a basis for the eigenspace of λ1 = 2 we row reduce A − λ1 I. We get
A − 2I = [0 0; 0 1] ∼ [0 1; 0 0]
Thus, a basis for the eigenspace of λ1 is {[1; 0]}.
To find a basis for the eigenspace of λ2 = 3 we row reduce A − λ2 I. We get
A − 3I = [−1 0; 0 0] ∼ [1 0; 0 0]
Thus, a basis for the eigenspace of λ2 is {[0; 1]}.
A9 We have
C(λ) = |−26−λ 10; −75 29−λ| = λ² − 3λ − 4 = (λ + 1)(λ − 4)
Hence, the eigenvalues are λ1 = −1 and λ2 = 4.
For λ1 = −1,
A − (−1)I = [−25 10; −75 30] ∼ [1 −2/5; 0 0]
Thus, a basis for the eigenspace of λ1 is {[2; 5]}.
For λ2 = 4,
A − 4I = [−30 10; −75 25] ∼ [1 −1/3; 0 0]
Thus, a basis for the eigenspace of λ2 is {[1; 3]}.
A10 We have
C(λ) = |1−λ 3; 4 2−λ| = λ² − 3λ − 10 = (λ − 5)(λ + 2)
Hence, the eigenvalues are λ1 = 5 and λ2 = −2.
For λ1 = 5,
A − 5I = [−4 3; 4 −3] ∼ [1 −3/4; 0 0]
Thus, a basis for the eigenspace of λ1 is {[3; 4]}.
For λ2 = −2,
A − (−2)I = [3 3; 4 4] ∼ [1 1; 0 0]
Thus, a basis for the eigenspace of λ2 is {[−1; 1]}.
A11 We have
C(λ) = |3−λ −3; 6 −6−λ| = λ² + 3λ = λ(λ + 3)
Hence, the eigenvalues are λ1 = 0 and λ2 = −3.
For λ1 = 0,
A − 0I = [3 −3; 6 −6] ∼ [1 −1; 0 0]
Thus, a basis for the eigenspace of λ1 is {[1; 1]}.
For λ2 = −3,
A − (−3)I = [6 −3; 6 −3] ∼ [1 −1/2; 0 0]
Thus, a basis for the eigenspace of λ2 is {[1; 2]}.
A12 We have
C(λ) = |1−λ 3 5; 0 2−λ 7; 0 0 3−λ| = (1 − λ)(2 − λ)(3 − λ)
Hence, the eigenvalues are λ1 = 1, λ2 = 2, and λ3 = 3.
For λ1 = 1,
A − 1I = [0 3 5; 0 1 7; 0 0 2] ∼ [0 1 0; 0 0 1; 0 0 0]
Thus, a basis for the eigenspace of λ1 is {[1; 0; 0]}.
For λ2 = 2,
A − 2I = [−1 3 5; 0 0 7; 0 0 1] ∼ [1 −3 0; 0 0 1; 0 0 0]
Thus, a basis for the eigenspace of λ2 is {[3; 1; 0]}.
For λ3 = 3,
A − 3I = [−2 3 5; 0 −1 7; 0 0 0] ∼ [1 0 −13; 0 1 −7; 0 0 0]
Thus, a basis for the eigenspace of λ3 is {[13; 7; 1]}.
A13 We have
C(λ) = |−4−λ 6 6; −2 2−λ 4; −1 3 1−λ| = |−4−λ 0 6; −2 −2−λ 4; −1 2+λ 1−λ|
= |−4−λ 0 6; −2 −2−λ 4; −3 0 5−λ| = (−2 − λ)(λ² − λ − 2)
= −(λ + 2)(λ − 2)(λ + 1)
Thus, the eigenvalues are λ1 = −2, λ2 = 2, and λ3 = −1.
For λ1 = −2,
A + 2I = [−2 6 6; −2 4 4; −1 3 3] ∼ [1 0 0; 0 1 1; 0 0 0]
Thus, a basis for the eigenspace of λ1 is {[0; −1; 1]}.
For λ2 = 2,
A − 2I = [−6 6 6; −2 0 4; −1 3 −1] ∼ [1 0 −2; 0 1 −1; 0 0 0]
Thus, a basis for the eigenspace of λ2 is {[2; 1; 1]}.
For λ3 = −1,
A + I = [−3 6 6; −2 3 4; −1 3 2] ∼ [1 0 −2; 0 1 0; 0 0 0]
Thus, a basis for the eigenspace of λ3 is {[2; 0; 1]}.
A14 We have
C(λ) = |3−λ −1 2; −1 3−λ −2; 0 2 −λ| = |2−λ 2−λ 0; −1 3−λ −2; 0 2 −λ|
= |2−λ 0 0; −1 4−λ −2; 0 2 −λ| = (2 − λ)(λ² − 4λ + 4)
= −(λ − 2)³
Thus, the only eigenvalue is λ1 = 2.
For λ1 = 2,
A − 2I = [1 −1 2; −1 1 −2; 0 2 −2] ∼ [1 0 1; 0 1 −1; 0 0 0]
Thus, a basis for the eigenspace of λ1 is {[−1; 1; 1]}.
A15 We have
C(λ) = |−λ 1 2; 0 2−λ 3; 1 −1 −1−λ| = |1−λ 0 1−λ; 0 2−λ 3; 1 −1 −1−λ|
= |1−λ 0 0; 0 2−λ 3; 1 −1 −2−λ| = (1 − λ)(λ² − 1)
= −(λ − 1)²(λ + 1)
Thus, the eigenvalues are λ1 = 1 and λ2 = −1.
For λ1 = 1,
A − I = [−1 1 2; 0 1 3; 1 −1 −2] ∼ [1 0 1; 0 1 3; 0 0 0]
Thus, a basis for the eigenspace of λ1 is {[−1; −3; 1]}.
For λ2 = −1,
A + I = [1 1 2; 0 3 3; 1 −1 0] ∼ [1 0 1; 0 1 1; 0 0 0]
Thus, a basis for the eigenspace of λ2 is {[−1; −1; 1]}.
A16 From what we learned in Problems A7, A8, and A12, we see that the eigenvalues are λ1 = 2 and λ2 = 3. Moreover, each appears as a single root of the characteristic polynomial, so the algebraic multiplicity of each eigenvalue is 1. Verify this result by finding and factoring the characteristic polynomial.
For λ1 = 2,
A − 2I = [0 −2; 0 1] ∼ [0 1; 0 0]
Thus, a basis for the eigenspace of λ1 is {[1; 0]}. Since a basis for the eigenspace contains a single vector, the dimension of the eigenspace is 1. Hence, the geometric multiplicity of λ1 = 2 is 1.
For λ2 = 3,
A − 3I = [−1 −2; 0 0] ∼ [1 2; 0 0]
Thus, a basis for the eigenspace of λ2 is {[−2; 1]}. Since a basis for the eigenspace contains a single vector, the dimension of the eigenspace is 1. Hence, the geometric multiplicity of λ2 = 3 is 1.
A17 We see by inspection that λ1 = 2 is a double root of the characteristic polynomial. Hence, it has
algebraic multiplicity 2. Verify this result by finding and factoring the characteristic polynomial.
For λ1 = 2,
A − 2I = [0 −2; 0 0] ∼ [0 1; 0 0]
Thus, a basis for the eigenspace of λ1 is {[1; 0]}. Since a basis for the eigenspace contains a single vector, the dimension of the eigenspace is 1. Hence, the geometric multiplicity of λ1 = 2 is 1.
A18 We have
C(λ) = |1−λ 1; −1 3−λ| = λ² − 4λ + 4 = (λ − 2)²
Hence, the eigenvalue is λ1 = 2. Since it is a double root of the characteristic polynomial, its algebraic multiplicity is 2.
For λ1 = 2,
A − 2I = [−1 1; −1 1] ∼ [1 −1; 0 0]
Thus, a basis for the eigenspace of λ1 is {[1; 1]}. Since a basis for the eigenspace contains a single vector, the dimension of the eigenspace is 1. Hence, the geometric multiplicity of λ1 = 2 is 1.
A19 We have
C(λ) = |−λ −5 3; −2 −6−λ 6; −2 −7 7−λ| = |−λ −5 3; −2 −6−λ 6; 0 −1+λ 1−λ|
= |−λ −5 −2; −2 −6−λ −λ; 0 −1+λ 0| = −(−1 + λ)(λ² − 4)
= −(λ − 1)(λ − 2)(λ + 2)
Thus, the eigenvalues are λ1 = 1, λ2 = 2, and λ3 = −2, each with algebraic multiplicity 1.
For λ1 = 1,
A − I = [−1 −5 3; −2 −7 6; −2 −7 6] ∼ [1 0 −3; 0 1 0; 0 0 0]
Thus, a basis for the eigenspace of λ1 is {[3; 0; 1]}. Since a basis for the eigenspace contains a single vector, the dimension of the eigenspace is 1. Hence, the geometric multiplicity of λ1 = 1 is 1.
For λ2 = 2,
A − 2I = [−2 −5 3; −2 −8 6; −2 −7 5] ∼ [1 0 1; 0 1 −1; 0 0 0]
Thus, a basis for the eigenspace of λ2 is {[−1; 1; 1]}. Since a basis for the eigenspace contains a single vector, the dimension of the eigenspace is 1. Hence, the geometric multiplicity of λ2 = 2 is 1.
For λ3 = −2,
A − (−2)I = [2 −5 3; −2 −4 6; −2 −7 9] ∼ [1 0 −1; 0 1 −1; 0 0 0]
Thus, a basis for the eigenspace of λ3 is {[1; 1; 1]}. Since a basis for the eigenspace contains a single vector, the dimension of the eigenspace is 1. Hence, the geometric multiplicity of λ3 = −2 is 1.
A20 We have
C(λ) = |2−λ 2 2; 2 2−λ 2; 2 2 2−λ| = |2−λ 2 2; 2 2−λ 2; 0 λ −λ|
= |2−λ 4 2; 2 4−λ 2; 0 0 −λ| = −λ(λ² − 6λ) = −λ²(λ − 6)
Thus, the eigenvalues are λ1 = 0 with algebraic multiplicity 2 and λ2 = 6 with algebraic multiplicity 1.
For λ1 = 0,
A − 0I = [2 2 2; 2 2 2; 2 2 2] ∼ [1 1 1; 0 0 0; 0 0 0]
Thus, a basis for the eigenspace of λ1 is {[−1; 1; 0], [−1; 0; 1]}. The dimension of the eigenspace is 2, so the geometric multiplicity of λ1 = 0 is 2.
For λ2 = 6,
A − 6I = [−4 2 2; 2 −4 2; 2 2 −4] ∼ [1 0 −1; 0 1 −1; 0 0 0]
Thus, a basis for the eigenspace of λ2 is {[1; 1; 1]}. The dimension of the eigenspace is 1, so the geometric multiplicity of λ2 = 6 is 1.
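The geometric multiplicity is dim Null(A − λI) = n − rank(A − λI), so it can also be computed mechanically. A minimal Gaussian-elimination rank (our own helper, a sketch only) reproduces the multiplicities found in A20:

```python
def rank(M):
    """Rank via Gauss-Jordan elimination with partial pivoting (floats)."""
    M = [row[:] for row in M]
    rows, cols, r = len(M), len(M[0]), 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if abs(M[i][c]) > 1e-12), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(rows):
            if i != r and abs(M[i][c]) > 1e-12:
                f = M[i][c] / M[r][c]
                M[i] = [x - f * y for x, y in zip(M[i], M[r])]
        r += 1
    return r

def shift(A, lam):
    """Return A - lam*I for a square matrix A."""
    n = len(A)
    return [[A[i][j] - (lam if i == j else 0) for j in range(n)] for i in range(n)]

A = [[2, 2, 2], [2, 2, 2], [2, 2, 2]]
print(3 - rank(shift(A, 0)))  # geometric multiplicity of lam = 0 → 2
print(3 - rank(shift(A, 6)))  # geometric multiplicity of lam = 6 → 1
```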
A21 We have
C(λ) = |3−λ 1 1; 1 3−λ 1; 1 1 3−λ| = |3−λ 1 1; 1 3−λ 1; 0 −2+λ 2−λ|
= |3−λ 2 1; 1 4−λ 1; 0 0 2−λ| = (2 − λ)(λ² − 7λ + 10) = −(λ − 2)(λ − 2)(λ − 5)
Thus, the eigenvalues are λ1 = 2 with algebraic multiplicity 2 and λ2 = 5 with algebraic multiplicity 1.
For λ1 = 2,
A − 2I = [1 1 1; 1 1 1; 1 1 1] ∼ [1 1 1; 0 0 0; 0 0 0]
Thus, a basis for the eigenspace of λ1 is {[−1; 1; 0], [−1; 0; 1]}. The dimension of the eigenspace is 2, so the geometric multiplicity of λ1 = 2 is 2.
For λ2 = 5,
A − 5I = [−2 1 1; 1 −2 1; 1 1 −2] ∼ [1 0 −1; 0 1 −1; 0 0 0]
Thus, a basis for the eigenspace of λ2 is {[1; 1; 1]}. The dimension of the eigenspace is 1, so the geometric multiplicity of λ2 = 5 is 1.
A22 We have
C(λ) = |1−λ 3 1; 0 1−λ 2; 0 2 1−λ| = (1 − λ)(λ² − 2λ − 3)
= −(λ − 1)(λ − 3)(λ + 1)
Thus, the eigenvalues are λ1 = 1, λ2 = 3, and λ3 = −1, each with algebraic multiplicity 1.
For λ1 = 1,
A − 1I = [0 3 1; 0 0 2; 0 2 0] ∼ [0 1 0; 0 0 1; 0 0 0]
Thus, a basis for the eigenspace of λ1 is {[1; 0; 0]}. The dimension of the eigenspace is 1, so the geometric multiplicity of λ1 = 1 is 1.
For λ2 = 3,
A − 3I = [−2 3 1; 0 −2 2; 0 2 −2] ∼ [1 0 −2; 0 1 −1; 0 0 0]
Thus, a basis for the eigenspace of λ2 is {[2; 1; 1]}. The dimension of the eigenspace is 1, so the geometric multiplicity of λ2 = 3 is 1.
For λ3 = −1,
A + 1I = [2 3 1; 0 2 2; 0 2 2] ∼ [1 0 −1; 0 1 1; 0 0 0]
Thus, a basis for the eigenspace of λ3 is {[1; −1; 1]}. The dimension of the eigenspace is 1, so the geometric multiplicity of λ3 = −1 is 1.
A23 Let A = [27 84; −7 −22]. If x0 = [1; 0], then y0 = [1; 0];
x1 = Ay0 = [27 84; −7 −22][1; 0] = [27; −7]
y1 = x1/‖x1‖ ≈ [0.968; −0.251]
x2 = Ay1 ≈ [5.052; −1.254], y2 ≈ [0.971; −0.241]
x3 = Ay2 ≈ [5.973; −1.495], y3 ≈ [0.9701; −0.2428]
It appears that ym → [0.970; −0.242]. We see that 0.970/(−0.242) ≈ −4, so it appears that v1 = [4; −1] is an eigenvector of A. Indeed we find that Av1 = 6v1, so the dominant eigenvalue is λ = 6.
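The normalized power iteration used in A23–A26 can be sketched in a few lines (function names are ours; the Rayleigh-quotient eigenvalue estimate is a standard addition, not the text's notation):

```python
import math

def power_method(A, x0, steps=20):
    """Normalized power iteration; returns (eigenvalue estimate, unit vector)."""
    y = x0
    for _ in range(steps):
        x = [sum(a * v for a, v in zip(row, y)) for row in A]  # x = A y
        norm = math.sqrt(sum(v * v for v in x))
        y = [v / norm for v in x]
    # Rayleigh quotient y^T (A y) estimates the dominant eigenvalue.
    Ay = [sum(a * v for a, v in zip(row, y)) for row in A]
    lam = sum(u * v for u, v in zip(y, Ay))
    return lam, y

lam, y = power_method([[27, 84], [-7, -22]], [1, 0])
print(round(lam))  # → 6
```

After 20 iterations the error is on the order of (1/6)^20, since the other eigenvalue of this matrix is −1.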
A24 Let A = [5 0; 0 −2]. If x0 = [1; 1], then y0 = [1/√2; 1/√2] ≈ [0.707; 0.707];
x1 = Ay0 ≈ [5 0; 0 −2][0.707; 0.707] ≈ [3.535; −1.414]
y1 = x1/‖x1‖ ≈ [0.929; −0.371]
x2 = Ay1 ≈ [4.645; 0.742], y2 ≈ [0.987; 0.158]
x3 = Ay2 ≈ [4.935; −0.316], y3 ≈ [0.998; −0.064]
It appears that ym → [1; 0], so v1 = [1; 0] is an eigenvector of A. The corresponding dominant eigenvalue is λ = 5.
A25 Let A = [3.5 4.5; 4.5 3.5]. If x0 = [1; 0], then y0 = [1; 0];
0
0
!x 1 =
!y 1 =
!x 2 =
!x 3 =
"
3.5
A!y 0 ≈
4.5
!
"
1
0.614
!x 1 ≈
0.789
$!x 1 $
!
"
5.701
A!y 1 ≈
,
5.525
!
"
5.645
A!y 2 ≈
,
5.667
!
"
0.718
!y 2 =
0.696
!
"
0.706
!y 3 =
0.708
!
"
! "
0.707
1
It appears that ym →
, so it appears that !v 1 =
is an eigenvector of A. Indeed we find that
0.707
1
A!v 1 = 8!v 1 , so the dominant eigenvalue is λ = 8.
A26 Let A = [4 3; 6 7]. If x0 = [1; 1], then y0 = [1/√2; 1/√2] ≈ [0.707; 0.707];
x1 = Ay0 ≈ [4.950; 9.192]
y1 = x1/‖x1‖ ≈ [0.474; 0.880]
x2 = Ay1 ≈ [4.538; 9.001], y2 ≈ [0.450; 0.893]
x3 = Ay2 ≈ [4.479; 8.951], y3 ≈ [0.447; 0.894]
It appears that ym → [0.447; 0.894]. We see that 0.447/0.894 ≈ 1/2, so it appears that v1 = [1; 2] is an eigenvector of A. Indeed we find that Av1 = 10v1, so the dominant eigenvalue is λ = 10.
B Homework Problems
B1 v1 is not an eigenvector. v2 is an eigenvector corresponding to the eigenvalue λ = 2.
B2 v1 is not an eigenvector. v2 is an eigenvector corresponding to the eigenvalue λ = 2.
B3 v1 is an eigenvector corresponding to the eigenvalue λ = 2. v2 is not an eigenvector.
B4 v1 is an eigenvector corresponding to the eigenvalue λ = 7. v2 is an eigenvector corresponding to the eigenvalue λ = 14.
B5 v1 is an eigenvector corresponding to the eigenvalue λ = 5. v2 is not an eigenvector.
B6 The eigenvalues are λ1 = 1 and λ2 = 6. The eigenspace of λ1 is Span{[−2; 1]}. The eigenspace of λ2 is Span{[1; 2]}.
B7 The eigenvalues are λ1 = 1 and λ2 = 6. The eigenspace of λ1 is Span{[−3; 2]}. The eigenspace of λ2 is Span{[1; 1]}.
B8 The eigenvalues are λ1 = 4 and λ2 = −3. The eigenspace of λ1 is Span{[1; 1]}. The eigenspace of λ2 is Span{[−6; 1]}.
B9 The eigenvalues are λ1 = 5 and λ2 = −3. The eigenspace of λ1 is Span{[−4; 1]}. The eigenspace of λ2 is Span{[0; 1]}.
B10 The only eigenvalue is λ1 = 2. The eigenspace of λ1 is Span{[−1; 3]}.
B11 The eigenvalues are λ1 = 1 and λ2 = −3. The eigenspace of λ1 is Span{[2; 1]}. The eigenspace of λ2 is Span{[−2; 1]}.
B12 The eigenvalues are λ1 = 2, λ2 = 4, and λ3 = 1. The eigenspace of λ1 is Span{[1; 4; −7]}. The eigenspace of λ2 is Span{[0; 0; 1]}. The eigenspace of λ3 is Span{[0; −1; 1]}.
B13 The eigenvalues are λ1 = 3 and λ2 = −3. The eigenspace of λ1 is Span{[−1; 0; 1], [1; 1; 0]}. The eigenspace of λ2 is Span{[1; −1; 1]}.
B14 The eigenvalues are λ1 = 2 and λ2 = 1. The eigenspace of λ1 is Span{[−1; 1; 0], [1; 0; 1]}. The eigenspace of λ2 is Span{[−1; 1; 1]}.
B15 The only eigenvalue is λ1 = 1. The eigenspace of λ1 is Span{[−1; 1; 1]}.
B16 The eigenvalues are λ1 = −2 and λ2 = 4. The eigenspace of λ1 is Span{[−1; 1; 0], [−1; 0; 1]}. The eigenspace of λ2 is Span{[1; 1; 1]}.
B17 The only eigenvalue is λ1 = 6. The eigenspace of λ1 is Span{[1; 0; 1], [1; 1; 0]}.
B18 λ1 = 2 has algebraic multiplicity 2; a basis for its eigenspace is {[0; 1]} so it has geometric multiplicity 1.
B19 λ1 = 7 has algebraic multiplicity 1; a basis for its eigenspace is {[2; 1]} so it has geometric multiplicity 1. λ2 = 1 has algebraic multiplicity 1; a basis for its eigenspace is {[−1; 1]} so it has geometric multiplicity 1.
B20 λ1 = 3 has algebraic multiplicity 2; a basis for its eigenspace is {[−1; 2]} so it has geometric multiplicity 1.
B21 λ1 = −5 has algebraic multiplicity 1; a basis for its eigenspace is {[1; 0; 0]} so it has geometric multiplicity 1. λ2 = 3 has algebraic multiplicity 1; a basis for its eigenspace is {[1; 10; 2]} so it has geometric multiplicity 1. λ3 = −3 has algebraic multiplicity 1; a basis for its eigenspace is {[−1; −1; 1]} so it has geometric multiplicity 1.
B22 λ1 = 2 has algebraic multiplicity 1; a basis for its eigenspace is {[−2; −1; 1]} so it has geometric multiplicity 1. λ2 = −1 has algebraic multiplicity 1; a basis for its eigenspace is {[−3; −1; 2]} so it has geometric multiplicity 1. λ3 = 1 has algebraic multiplicity 1; a basis for its eigenspace is {[−1; −1; 1]} so it has geometric multiplicity 1.
B23 λ1 = −1 has algebraic multiplicity 2; a basis for its eigenspace is {[−1; 0; 1], [−1; 1; 0]} so it has geometric multiplicity 2. λ2 = 8 has algebraic multiplicity 1; a basis for its eigenspace is {[1; 1; 1]} so it has geometric multiplicity 1.
B24 λ1 = −1 has algebraic multiplicity 2; a basis for its eigenspace is {[−1; 1; 0], [−2; 0; 1]} so it has geometric multiplicity 2. λ2 = 2 has algebraic multiplicity 1; a basis for its eigenspace is {[−1; 2; 1]} so it has geometric multiplicity 1.
B25 λ1 = 0 has algebraic multiplicity 1; a basis for its eigenspace is {[0; −1; 1]} so it has geometric multiplicity 1. λ2 = 1 has algebraic multiplicity 1; a basis for its eigenspace is {[1; −2; 3]} so it has geometric multiplicity 1. λ3 = −1 has algebraic multiplicity 1; a basis for its eigenspace is {[−1; 0; 1]} so it has geometric multiplicity 1.
B26 λ1 = 2 has algebraic multiplicity 2; a basis for its eigenspace is {[−1; 0; 1]} so it has geometric multiplicity 1. λ2 = 0 has algebraic multiplicity 1; a basis for its eigenspace is {[1; −1; 1]} so it has geometric multiplicity 1.
B27 y3 = [0.196; 0.980] ⇒ v1 = [1; 5] and λ = 20.
B28 y3 = [0.895; 0.446] ⇒ v1 = [2; 1] and λ = 9.
B29 y3 = [0.949; 0.315] ⇒ v1 = [3; 1] and λ = 11.
B30 y3 = [0.832; 0.554] ⇒ v1 = [3; 2] and λ = 6.
C Conceptual Problems
"
a b
C1 Let A =
. We are given that
c d
!
! "
! " ! "
1
1
2
A
=2
=
2
2
4
! "
! " ! "
1
1
3
A
=3
=
3
3
9
Hence,
a + 2b = 2
c + 2d = 4
a + 3b = 3
c + 3d = 9
"
0 1
Solving this system gives a = 0, b = 1, c = −6, d = 5. So, A =
.
−6 5
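The four equations in C1 split into two 2×2 systems with the same coefficient matrix [1 2; 1 3]; a tiny Cramer's-Rule solver (our own sketch) recovers A:

```python
def solve2(m, rhs):
    """Solve a 2x2 linear system m·(x, y) = rhs by Cramer's Rule."""
    (p, q), (r, s) = m
    det = p * s - q * r
    return ((rhs[0] * s - q * rhs[1]) / det, (p * rhs[1] - rhs[0] * r) / det)

a, b = solve2([[1, 2], [1, 3]], (2, 3))  # a + 2b = 2, a + 3b = 3
c, d = solve2([[1, 2], [1, 3]], (4, 9))  # c + 2d = 4, c + 3d = 9
print(a, b, c, d)  # → 0.0 1.0 -6.0 5.0
```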


0 0 1


C2 Let A = 0 0 0. It is easy to show that λ1 = 0 is an eigenvalue with algebraic multiplicity 3. A


0 0 0
   

1 0



   

0 , 1
basis for its eigenspace is 
, and thus its geometric multiplicity is 2 which is less than its



0 0

!
algebraic multiplicity.


1 0 0


C3 Take A = 0 2 0. Then, A has distinct eigenvalues λ1 = 1, λ2 = 2, and λ3 = 3.


0 0 3
C4 (a) We have
(AB)(5u + 3v) = A(5Bu + 3Bv) = A(25u + 9v) = 25Au + 9Av = 150u + 90v = 30(5u + 3v)
So, 5u + 3v is an eigenvector of C with eigenvalue 30.
(b) Since u and v correspond to different eigenvalues, {u, v} is linearly independent and hence a basis for R². Thus, there exist coefficients c1, c2 such that w = c1 u + c2 v. Therefore
ABw = AB(c1 u + c2 v) = c1 (AB)u + c2 (AB)v = c1 (30u) + c2 (30v) = 30(c1 u + c2 v) = 30w = [6; 42]
"
!
"
1 0
1 0
C5 It does not imply that λµ is an eigenvalue of AB. Let A =
and B =
. Then λ = 2 is an
0 2
0 1/2
!
"
1 0
eigenvalue of A and µ = 1 is an eigenvalue of B, but λµ = 2 is not an eigenvalue of AB =
.
0 1
!
C6 If A is not invertible, then det A = 0 and hence 0 = det(A − 0I) and so 0 is an eigenvalue of A.
On the other hand, if 0 is an eigenvalue of A, then 0 = det(A − 0I) = det A and so A is not invertible.
C7 If A is invertible and Av = λv, then
v = A⁻¹Av = A⁻¹(λv) = λA⁻¹v
So A⁻¹v = (1/λ)v. Hence, v is an eigenvector of A⁻¹ with eigenvalue 1/λ.
C8 Since Av = λv and Bv = µv, it follows that
(A + B)v = Av + Bv = λv + µv = (λ + µ)v
(AB)v = A(Bv) = A(µv) = µAv = µλv
Hence, v is an eigenvector of A + B with eigenvalue λ + µ, and an eigenvector of AB with eigenvalue µλ.
C9 (a) Since Av = λv, it follows that
A²v = A(Av) = A(λv) = λAv = λ²v
Then, by induction,
Aᵐv = Aᵐ⁻¹(Av) = Aᵐ⁻¹(λv) = λAᵐ⁻¹v = λᵐv
Hence, λᵐ is an eigenvalue of Aᵐ with eigenvector v.
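A concrete numerical check of C9(a) (the matrix and values are ours, taken from Problem A1 of Section 6.1, not part of C9 itself): with A = [3 4; 2 5] and eigenvector v = [1; 1] for λ = 7, A³v should equal 7³v.

```python
def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

A, v, lam = [[3, 4], [2, 5]], [1, 1], 7
w = v
for _ in range(3):
    w = matvec(A, w)  # after the loop, w = A^3 v
print(w, [lam ** 3 * x for x in v])  # → [343, 343] [343, 343]
```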
(b) Consider the matrix of rotation in R² through angle 2π/3:
A = [−1/2 −√3/2; √3/2 −1/2]
Then A leaves no direction unchanged so it has no real eigenvectors or eigenvalues. But A³ = I has eigenvalue 1, and every non-zero vector in R² is an eigenvector of A³.
C10 (a) If rank(A) < n, then 0 = det A = det(A − 0I), so 0 is an eigenvalue. The geometric multiplicity
of λ = 0 is the dimension of its eigenspace, which is the dimension of the nullspace of A − λI.
Thus, by the Rank-Nullity Theorem, the geometric multiplicity of λ = 0 is n − rank A = n − r.
(b) The matrix A = [0 1 1; 0 0 1; 0 0 0] has rank 2, so the geometric multiplicity of λ = 0 is n − rank A = 3 − 2 = 1. But the characteristic polynomial is C(λ) = −λ³, so the algebraic multiplicity of λ = 0 is 3.
C11 By definition of matrix multiplication, we have
(Av)_i = Σ_{k=1}^n a_ik v_k = Σ_{k=1}^n a_ik (1) = c
for 1 ≤ i ≤ n. Thus,
Av = [c; ...; c] = c[1; ...; 1]
So v is an eigenvector of A with eigenvalue c.
Section 6.2
A Practice Problems
A1 We have
"! " ! "
! "
11
6 2
28
2
=
= 14
9 −4 1
14
1
!
"! " !
"
! "
11
6 −1
7
−1
=
= −7
9 −4
3
−21
3
!
! "
! "
2
−1
Thus,
is an eigenvector of A with eigenvalue 14, and
is an eigenvector of A with eigenvalue
1
3
!
"
1 3 1
−1
−7. Using the methods of Section 3.5 we get that P =
. Hence,
7 −1 2
!
"!
"!
" !
"
1 3 1 11
6 2 −1
14
0
P AP =
=
3
0 −7
7 −1 2 9 −4 1
−1
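The diagonalization in A1 can be checked numerically; the matrix-product helper is our own sketch. P⁻¹AP should come out as diag(14, −7).

```python
def matmul(X, Y):
    """Product of two matrices given as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[11, 6], [9, -4]]
P = [[2, -1], [1, 3]]
Pinv = [[3 / 7, 1 / 7], [-1 / 7, 2 / 7]]
D = matmul(Pinv, matmul(A, P))
expected = [[14, 0], [0, -7]]
print(all(abs(D[i][j] - expected[i][j]) < 1e-9 for i in range(2) for j in range(2)))  # → True
```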
A2 We have
[6 5; 3 −7][1; 1] = [11; −4] ≠ λ[1; 1]
[6 5; 3 −7][2; 1] = [17; −1] ≠ λ[2; 1]
Hence, the columns of P are not eigenvectors of A, so P does not diagonalize A.
A3 We have
[−2 2; 2 0][1; 0] = [−2; 2] ≠ λ[1; 0]
[−2 2; 2 0][−1; 4] = [10; −2] ≠ λ[−1; 4]
Hence, the columns of P are not eigenvectors of A, so P does not diagonalize A.
A4 We have
[5 −8; 4 −7][2; 1] = [2; 1] = 1[2; 1]
[5 −8; 4 −7][1; 1] = [−3; −3] = −3[1; 1]
Thus, [2; 1] is an eigenvector of A with eigenvalue 1, and [1; 1] is an eigenvector of A with eigenvalue −3. Using the methods of Section 3.5 we get that P⁻¹ = [1 −1; −1 2]. Hence,
P⁻¹AP = [1 −1; −1 2][5 −8; 4 −7][2 1; 1 1] = [1 0; 0 −3]
A5 We have
[2 4 4; 4 2 4; 4 4 2][−1; 1; 0] = [2; −2; 0] = −2[−1; 1; 0]
[2 4 4; 4 2 4; 4 4 2][−1; 0; 1] = [2; 0; −2] = −2[−1; 0; 1]
[2 4 4; 4 2 4; 4 4 2][1; 1; 1] = [10; 10; 10] = 10[1; 1; 1]
Thus, [−1; 1; 0] and [−1; 0; 1] are both eigenvectors of A with eigenvalue −2, and [1; 1; 1] is an eigenvector of A with eigenvalue 10. Using the methods of Section 3.5 we get that P⁻¹ = (1/3)[−1 2 −1; −1 −1 2; 1 1 1]. Hence,
P⁻¹AP = (1/3)[−1 2 −1; −1 −1 2; 1 1 1][2 4 4; 4 2 4; 4 4 2][−1 −1 1; 1 0 1; 0 1 1] = [−2 0 0; 0 −2 0; 0 0 10]
A6 We have
[0 1 1; 1 1 1; 2 1 1][3; 4; 5] = [9; 12; 15] = 3[3; 4; 5]
[0 1 1; 1 1 1; 2 1 1][0; −1; 1] = [0; 0; 0] = 0[0; −1; 1]
[0 1 1; 1 1 1; 2 1 1][−1; 0; 1] = [1; 0; −1] = −1[−1; 0; 1]
Thus, [3; 4; 5] is an eigenvector of A with eigenvalue 3, [0; −1; 1] is an eigenvector of A with eigenvalue 0, and [−1; 0; 1] is an eigenvector of A with eigenvalue −1. Using the methods of Section 3.5 we get that P⁻¹ = (1/12)[1 1 1; 4 −8 4; −9 3 3]. Hence,
P⁻¹AP = (1/12)[1 1 1; 4 −8 4; −9 3 3][0 1 1; 1 1 1; 2 1 1][3 0 −1; 4 −1 0; 5 1 1] = [3 0 0; 0 0 0; 0 0 −1]
A7 We have
C(λ) = det(A − λI) = |7−λ 3; 0 −8−λ| = (7 − λ)(−8 − λ)
Hence, the eigenvalues are λ1 = 7 and λ2 = −8 each with algebraic multiplicity 1.
For λ1 = 7 we get
A − 7I = [0 3; 0 −15] ∼ [0 1; 0 0]
Thus, a basis for the eigenspace of λ1 is {[1; 0]} so the geometric multiplicity is 1.
For λ2 = −8 we get
A + 8I = [15 3; 0 0] ∼ [1 1/5; 0 0]
Thus, a basis for the eigenspace of λ2 is {[−1; 5]} so the geometric multiplicity is 1.
So, A is diagonalizable. In particular, {[1; 0], [−1; 5]} forms a basis for R² of eigenvectors of A. We take these basis vectors to be the columns of P. Then P = [1 −1; 0 5] diagonalizes A to P⁻¹AP = [7 0; 0 −8] = D.
A8 We have
C(λ) = |5−λ 2; 0 5−λ| = (5 − λ)²
Hence, the only eigenvalue is λ1 = 5 with algebraic multiplicity 2.
For λ1 = 5 we get
A − 5I = [0 2; 0 0] ∼ [0 1; 0 0]
Thus, a basis for the eigenspace of λ1 is {[1; 0]} so the geometric multiplicity is 1.
Therefore, A is not diagonalizable since the geometric multiplicity of λ1 is less than its algebraic multiplicity.
A9 We have
C(λ) = det(A − λI) = |1−λ 9; 4 −4−λ| = λ² + 3λ − 40 = (λ + 8)(λ − 5)
Hence, the eigenvalues are λ1 = −8 and λ2 = 5 each with algebraic multiplicity 1.
For λ1 = −8 we get
A + 8I = [9 9; 4 4] ∼ [1 1; 0 0]
Thus, a basis for the eigenspace of λ1 is {[−1; 1]} so the geometric multiplicity is 1.
For λ2 = 5 we get
A − 5I = [−4 9; 4 −9] ∼ [1 −9/4; 0 0]
Thus, a basis for the eigenspace of λ2 is {[9; 4]} so the geometric multiplicity is 1.
So, A is diagonalizable. In particular, {[−1; 1], [9; 4]} forms a basis for R² of eigenvectors of A. We take these basis vectors to be the columns of P. Then P = [−1 9; 1 4] diagonalizes A to P⁻¹AP = [−8 0; 0 5] = D.
A10 We have
C(λ) = det(A − λI) = |3−λ 2; 5 6−λ| = λ² − 9λ + 8 = (λ − 8)(λ − 1)
Hence, the eigenvalues are λ1 = 1 and λ2 = 8 each with algebraic multiplicity 1.
For λ1 = 1 we get
A − I = [2 2; 5 5] ∼ [1 1; 0 0]
Thus, a basis for the eigenspace of λ1 is {[−1; 1]} so the geometric multiplicity is 1.
For λ2 = 8 we get
A − 8I = [−5 2; 5 −2] ∼ [1 −2/5; 0 0]
Thus, a basis for the eigenspace of λ2 is {[2; 5]} so the geometric multiplicity is 1.
So, A is diagonalizable. In particular, {[−1; 1], [2; 5]} forms a basis for R² of eigenvectors of A. We take these basis vectors to be the columns of P. Then P = [−1 2; 1 5] diagonalizes A to P⁻¹AP = [1 0; 0 8] = D.
A11 We have
C(λ) = |−2−λ 3; 4 −3−λ| = λ² + 5λ − 6 = (λ + 6)(λ − 1)
Hence, the eigenvalues are λ1 = −6 and λ2 = 1 each with algebraic multiplicity 1.
For λ1 = −6 we get
A + 6I = [4 3; 4 3] ∼ [1 3/4; 0 0]
Thus, a basis for the eigenspace of λ1 is {[−3; 4]} so the geometric multiplicity is 1.
For λ2 = 1 we get
A − I = [−3 3; 4 −4] ∼ [1 −1; 0 0]
Thus, a basis for the eigenspace of λ2 is {[1; 1]} so the geometric multiplicity is 1.
So, A is diagonalizable. In particular, {[−3; 4], [1; 1]} forms a basis for R² of eigenvectors of A. We take these basis vectors to be the columns of P. Then P = [−3 1; 4 1] diagonalizes A to P⁻¹AP = [−6 0; 0 1] = D.
A12 We have

C(λ) = det[3−λ 6; −5 −3−λ] = λ² + 21

Thus, by the quadratic formula, we get that

λ = (0 ± √−84)/2 = ±√21 i

Since A has complex eigenvalues it is not diagonalizable over R.
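A12's conclusion can be checked numerically (numpy; not part of the textbook solution): the eigenvalues come out purely imaginary, so A has no real eigenvalues and cannot be diagonalized over R.

```python
import numpy as np

# The matrix from A12; its characteristic polynomial is λ² + 21.
A = np.array([[3.0, 6.0], [-5.0, -3.0]])
eigs = np.linalg.eigvals(A)

assert np.allclose(eigs.real, 0.0)
assert np.allclose(sorted(eigs.imag), [-np.sqrt(21), np.sqrt(21)])
```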
A13 We have

C(λ) = det[3−λ 0; −3 3−λ] = (3 − λ)²

Hence, the only eigenvalue is λ1 = 3 with algebraic multiplicity 2.
For λ1 = 3 we get

A − 3I = [0 0; −3 0] ~ [1 0; 0 0]

Thus, a basis for the eigenspace of λ1 is {(0, 1)}, so the geometric multiplicity is 1.
Therefore, A is not diagonalizable since the geometric multiplicity of λ1 is less than its algebraic multiplicity.
A14 We have

C(λ) = det[4−λ 4; 4 4−λ] = λ² − 8λ = λ(λ − 8)

Hence, the eigenvalues are λ1 = 0 and λ2 = 8, each with algebraic multiplicity 1.
For λ1 = 0 we get

A − 0I = [4 4; 4 4] ~ [1 1; 0 0]

Thus, a basis for the eigenspace of λ1 is {(−1, 1)}, so the geometric multiplicity is 1.
For λ2 = 8 we get

A − 8I = [−4 4; 4 −4] ~ [1 −1; 0 0]

Thus, a basis for the eigenspace of λ2 is {(1, 1)}, so the geometric multiplicity is 1.
So, A is diagonalizable. In particular, {(−1, 1), (1, 1)} forms a basis for R² of eigenvectors of A. We take these basis vectors to be the columns of P. Then P = [−1 1; 1 1] diagonalizes A to P⁻¹AP = [0 0; 0 8] = D.
A15 We have

C(λ) = det[−2−λ 5; 5 −2−λ] = λ² + 4λ − 21 = (λ + 7)(λ − 3)

Hence, the eigenvalues are λ1 = −7 and λ2 = 3, each with algebraic multiplicity 1.
For λ1 = −7 we get

A + 7I = [5 5; 5 5] ~ [1 1; 0 0]

Thus, a basis for the eigenspace of λ1 is {(−1, 1)}, so the geometric multiplicity is 1.
For λ2 = 3 we get

A − 3I = [−5 5; 5 −5] ~ [1 −1; 0 0]

Thus, a basis for the eigenspace of λ2 is {(1, 1)}, so the geometric multiplicity is 1.
So, A is diagonalizable. In particular, {(−1, 1), (1, 1)} forms a basis for R² of eigenvectors of A. We take these basis vectors to be the columns of P. Then P = [−1 1; 1 1] diagonalizes A to P⁻¹AP = [−7 0; 0 3] = D.
A16 We have

C(λ) = det[−λ 1 0; 1 −λ 1; 1 1 1−λ] = −λ(λ + 1)(λ − 2)

Hence, the eigenvalues are λ1 = −1, λ2 = 2, and λ3 = 0, each with algebraic multiplicity 1.
For λ1 = −1 we get

A + I = [1 1 0; 1 1 1; 1 1 2] ~ [1 1 0; 0 0 1; 0 0 0]

Thus, a basis for the eigenspace of λ1 is {(−1, 1, 0)}, so the geometric multiplicity is 1.
For λ2 = 2 we get

A − 2I = [−2 1 0; 1 −2 1; 1 1 −1] ~ [1 0 −1/3; 0 1 −2/3; 0 0 0]

Thus, a basis for the eigenspace of λ2 is {(1, 2, 3)}, so the geometric multiplicity is 1.
For λ3 = 0 we get

A − 0I = [0 1 0; 1 0 1; 1 1 1] ~ [1 0 1; 0 1 0; 0 0 0]

Thus, a basis for the eigenspace of λ3 is {(−1, 0, 1)}, so the geometric multiplicity is 1.
So, A is diagonalizable. In particular, {(−1, 1, 0), (1, 2, 3), (−1, 0, 1)} forms a basis for R³ of eigenvectors of A. We take these basis vectors to be the columns of P. Then P = [−1 1 −1; 1 2 0; 0 3 1] diagonalizes A to P⁻¹AP = [−1 0 0; 0 2 0; 0 0 0] = D.
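For a 3×3 case like A16, the same numerical cross-check works (numpy; not part of the textbook solution):

```python
import numpy as np

# A and P from the A16 solution; P's columns are the eigenvectors for
# the eigenvalues -1, 2, 0 respectively.
A = np.array([[0., 1., 0.], [1., 0., 1.], [1., 1., 1.]])
P = np.array([[-1., 1., -1.], [1., 2., 0.], [0., 3., 1.]])
D = np.linalg.inv(P) @ A @ P

assert np.allclose(D, np.diag([-1., 2., 0.]))
```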
A17 We have

C(λ) = det[6−λ −9 −5; −4 9−λ 4; 9 −17 −8−λ] = −(λ − 1)²(λ − 5)

So, the eigenvalues are λ1 = 1 with algebraic multiplicity 2 and λ2 = 5 with algebraic multiplicity 1.
For λ1 = 1 we get

A − I = [5 −9 −5; −4 8 4; 9 −17 −9] ~ [1 0 −1; 0 1 0; 0 0 0]

Thus, a basis for the eigenspace of λ1 is {(1, 0, 1)}, so the geometric multiplicity is 1.
Therefore, A is not diagonalizable since the geometric multiplicity of λ1 does not equal its algebraic multiplicity.
A18 We have

C(λ) = det[−2−λ 7 3; −1 2−λ 1; 0 2 1−λ] = −(λ − 1)(λ² + 1)

So, the eigenvalues of A are 1, i, and −i. Hence, A is not diagonalizable over R.
A19 We have

C(λ) = det[−1−λ 6 3; 3 −4−λ −3; −6 12 8−λ] = −(λ − 2)²(λ + 1)

Thus, the eigenvalues are λ1 = 2 with algebraic multiplicity 2 and λ2 = −1 with algebraic multiplicity 1.
For λ1 = 2 we get

A − 2I = [−3 6 3; 3 −6 −3; −6 12 6] ~ [1 −2 −1; 0 0 0; 0 0 0]

So, a basis for the eigenspace of λ1 is {(2, 1, 0), (1, 0, 1)}, so the geometric multiplicity is 2.
For λ2 = −1 we get

A + I = [0 6 3; 3 −3 −3; −6 12 9] ~ [1 0 −1/2; 0 1 1/2; 0 0 0]

A basis for the eigenspace of λ2 is {(1, −1, 2)}, so the geometric multiplicity is 1.
Therefore, A is diagonalizable. In particular, {(2, 1, 0), (1, 0, 1), (1, −1, 2)} forms a basis for R³ of eigenvectors of A. We take these basis vectors to be the columns of P. Then P = [2 1 1; 1 0 −1; 0 1 2] diagonalizes A to P⁻¹AP = [2 0 0; 0 2 0; 0 0 −1] = D.
A20 We have

C(λ) = det[−λ 6 −8; −2 4−λ −4; −2 2 −2−λ] = det[−λ 6 −8; −2 4−λ −4; 0 −2+λ 2−λ]   (R3 → R3 − R2)
= det[−λ −2 −8; −2 −λ −4; 0 0 2−λ]   (C2 → C2 + C3)
= (2 − λ)(λ² − 4) = −(λ − 2)²(λ + 2)

So, the eigenvalues are λ1 = 2 with algebraic multiplicity 2 and λ2 = −2 with algebraic multiplicity 1.
For λ1 = 2 we get

A − 2I = [−2 6 −8; −2 2 −4; −2 2 −4] ~ [1 0 1; 0 1 −1; 0 0 0]

Thus, a basis for the eigenspace of λ1 is {(−1, 1, 1)}, so the geometric multiplicity is 1.
Therefore, A is not diagonalizable since the geometric multiplicity of λ1 does not equal its algebraic multiplicity.
A21 We have

C(λ) = det[2−λ 2 2; 2 2−λ 2; 3 2 1−λ] = det[−1−λ 0 1+λ; 2 2−λ 2; 3 2 1−λ]   (R1 → R1 − R3)
= det[−1−λ 0 0; 2 2−λ 4; 3 2 4−λ]   (C3 → C3 + C1)
= (−1 − λ)(λ² − 6λ) = −(λ + 1)λ(λ − 6)

Hence, the eigenvalues are λ1 = −1, λ2 = 0, and λ3 = 6, each with algebraic multiplicity 1.
For λ1 = −1 we get

A + I = [3 2 2; 2 3 2; 3 2 2] ~ [1 0 2/5; 0 1 2/5; 0 0 0]

Thus, a basis for the eigenspace of λ1 is {(−2, −2, 5)}, so the geometric multiplicity is 1.
For λ2 = 0 we get

A − 0I = [2 2 2; 2 2 2; 3 2 1] ~ [1 0 −1; 0 1 2; 0 0 0]

Thus, a basis for the eigenspace of λ2 is {(1, −2, 1)}, so the geometric multiplicity is 1.
For λ3 = 6 we get

A − 6I = [−4 2 2; 2 −4 2; 3 2 −5] ~ [1 0 −1; 0 1 −1; 0 0 0]

Thus, a basis for the eigenspace of λ3 is {(1, 1, 1)}, so the geometric multiplicity is 1.
So, A is diagonalizable. In particular, {(−2, −2, 5), (1, −2, 1), (1, 1, 1)} forms a basis for R³ of eigenvectors of A. We take these basis vectors to be the columns of P. Then P = [−2 1 1; −2 −2 1; 5 1 1] diagonalizes A to P⁻¹AP = [−1 0 0; 0 0 0; 0 0 6] = D.
A22 We have

C(λ) = det[2−λ 0 0; −1 −λ 1; −1 −2 3−λ] = (2 − λ)(λ² − 3λ + 2) = −(λ − 2)²(λ − 1)

Thus, the eigenvalues are λ1 = 2 with algebraic multiplicity 2 and λ2 = 1 with algebraic multiplicity 1.
For λ1 = 2 we get

A − 2I = [0 0 0; −1 −2 1; −1 −2 1] ~ [1 2 −1; 0 0 0; 0 0 0]

So, a basis for the eigenspace of λ1 is {(−2, 1, 0), (1, 0, 1)}, so the geometric multiplicity is 2.
For λ2 = 1 we get

A − I = [1 0 0; −1 −1 1; −1 −2 2] ~ [1 0 0; 0 1 −1; 0 0 0]

A basis for the eigenspace of λ2 is {(0, 1, 1)}, so the geometric multiplicity is 1.
Therefore, A is diagonalizable. In particular, {(−2, 1, 0), (1, 0, 1), (0, 1, 1)} forms a basis for R³ of eigenvectors of A. We take these basis vectors to be the columns of P. Then P = [−2 1 0; 1 0 1; 0 1 1] diagonalizes A to P⁻¹AP = [2 0 0; 0 2 0; 0 0 1] = D.
A23 We have

C(λ) = det[−3−λ −3 5; 13 10−λ −13; 3 2 −1−λ] = det[2−λ −3 5; 0 10−λ −13; 2−λ 2 −1−λ]   (C1 → C1 + C3)
= det[2−λ −3 5; 0 10−λ −13; 0 5 −6−λ]   (R3 → R3 − R1)
= (2 − λ)(λ² − 4λ + 5)

Using the quadratic formula, we find that the roots of λ² − 4λ + 5 are

λ = (4 ± √(16 − 4(1)(5)))/(2(1)) = 2 ± i

So, the eigenvalues of A are 2, 2 + i, and 2 − i. Hence, A is not diagonalizable over R.
B Homework Problems
! "
! "
2
−1
B1
is an eigenvector with eigenvalue 7 and
is an eigenvector with eigenvalue 0.
3
2
!
"
!
"
1 2 1 −1
7 0
P−1 =
, P AP =
.
0 0
7 −3 2
! "
! "
2
1
B2
is an eigenvector with eigenvalue 4, but
is not an eigenvector of A. So, P does not diagonalize
1
1
A.
B3 The columns of P are not eigenvectors of A, so P does not diagonalize A.
! "
! "
−1
2
B4
is an eigenvector with eigenvalue 2 and
is an eigenvector with eigenvalue 5.
2
−1
!
"
!
"
1 1 2 −1
2 0
P−1 =
, P AP =
.
0 5
3 2 1
 
 
 1
1
 
 
B5 −1 is an eigenvector with eigenvalue 5, 1 is an eigenvector with eigenvalue 3, and
 
 
1
0




1
3
−1
5 0 0




2
3, P−1 AP = 0 3 0.
eigenvector with eigenvalue 1. P−1 = −1




1 −1 −2
0 0 1
B6 The second and third columns of P are not eigenvectors of A, so P does not diagonalize A.
 
 
1
1
 
 
B7 1 is an eigenvector with eigenvalue −1, 0 is an eigenvector with eigenvalue 3, and
 
 
1
1




1
1
−1
−1 0 0




0, P−1 AP =  0 3 0.
eigenvector with eigenvalue 3. P−1 =  1 −1




1
0 −1
0 0 3
Copyright © 2020 Pearson Canada Inc.
 
3
 
0 is an
1
 
1
 
1 is an
0
28
"
!
"
2 0
3 0
B8 It is diagonalizable with P =
and D =
.
1 1
0 1
!
B9 It is not diagonalizable.
"
!
−2 4
−6
and D =
1 1
0
!
"
!
−1 2
1
B11 It is diagonalizable with P =
and D =
1 5
0
!
"
!
1 5
−1
B12 It is diagonalizable with P =
and D =
1 1
0
B10 It is diagonalizable with P =
!
"
0
.
0
"
0
.
8
"
0
.
3
B13 It is not diagonalizable.
B14 It is not diagonalizable.




0
 3 −1 −1
−3 0




0 −6 and D =  0 7
0.
B15 It is diagonalizable with P = −10




7
1
1
0 0 −5




1 0
0
2 0 0




B16 It is diagonalizable with P = 1 −2 0 and D = 0 0 0.




1
3 1
0 0 1




1 1 −1
3 0 0




B17 It is diagonalizable with P = 0 1 −1 and D = 0 3 0.




2 0
1
0 0 2
B18 It is not diagonalizable.

−1

B19 It is diagonalizable with P =  2

0

3

B20 It is diagonalizable with P = 0

1
B21 It is not diagonalizable.


1 −2
−3


0 −1 and D =  0


1
2
0


−1 4
1 0


1 3 and D = 0 1


0 2
0 0

1

B22 It is diagonalizable with P = 0

1

 1

B23 It is diagonalizable with P = −1

1


1 −1
0


−2 −1 and D = 0


3
1
0


1 −1
0


1
1 and D = 0


0
2
0

0 0

−3 0.

0 6

0

0.

2

0
0

−2
0.

0 −4

0 0

1 0.

0 3
Copyright © 2020 Pearson Canada Inc.
29
C Conceptual Problems
C1 Observe that if P⁻¹AP = B for some invertible matrix P, then

det(B − λI) = det(P⁻¹AP − λP⁻¹P) = det(P⁻¹(A − λI)P) = det P⁻¹ det(A − λI) det P = det(A − λI)

Thus, since A and B have the same characteristic polynomial, they have the same eigenvalues.
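C1 can be illustrated numerically (numpy; not part of the textbook solution): for a randomly chosen A and invertible P, the similar matrix B = P⁻¹AP has the same characteristic polynomial.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
P = rng.standard_normal((4, 4))  # a random matrix is invertible with probability 1
B = np.linalg.inv(P) @ A @ P

# np.poly(M) returns the coefficients of det(λI − M)
assert np.allclose(np.poly(A), np.poly(B))
```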
C2 If A and B are similar, then there exists an invertible matrix P such that P⁻¹AP = B. First, observe that if y ∈ Col(B), then there exists x such that y = Bx = P⁻¹APx. Define z = Px. Then we have y = P⁻¹Az, so y ∈ Col(P⁻¹A). Therefore rank(P⁻¹A) ≥ rank(B).
Let x ∈ Null(A), so that Ax = 0. Then P⁻¹Ax = P⁻¹0 = 0, so x ∈ Null(P⁻¹A). Hence dim Null(P⁻¹A) ≥ dim Null(A), which implies that rank(A) ≥ rank(P⁻¹A) by the Rank–Nullity Theorem.
Therefore, we have shown that rank(B) ≤ rank(P⁻¹A) ≤ rank(A). However, we can also write A = Q⁻¹BQ, where Q = P⁻¹. Thus, by the same argument, rank(B) ≥ rank(A), and hence rank(A) = rank(B).
C3 (a)

tr(AB) = Σ_{i=1}^{n} Σ_{k=1}^{n} a_{ik} b_{ki} = Σ_{k=1}^{n} Σ_{i=1}^{n} b_{ki} a_{ik} = tr(BA)

(b)

tr(B) = tr(P⁻¹AP) = tr(P⁻¹(AP)) = tr((AP)P⁻¹) = tr(APP⁻¹) = tr(A)
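Both parts of C3 are easy to sanity-check numerically (numpy; not part of the textbook solution):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
P = rng.standard_normal((3, 3))  # generically invertible

# (a) tr(AB) = tr(BA); (b) similar matrices have equal trace.
assert np.isclose(np.trace(A @ B), np.trace(B @ A))
assert np.isclose(np.trace(np.linalg.inv(P) @ A @ P), np.trace(A))
```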
C4 (a) If P⁻¹AP = D, then multiplying on the left by P gives AP = PD. Now, multiplying on the right by P⁻¹ gives A = PDP⁻¹, as required.
(b) Let P = [1 1; 2 3] and D = [2 0; 0 3]. Then we get A = PDP⁻¹ = [0 1; −6 5].
(c) Let P = [1 1 1; 0 1 −1; 1 −1 2] and D = [2 0 0; 0 −2 0; 0 0 3]. Then we get A = PDP⁻¹ = [−1 2 3; −5 8 5; 6 −8 −4].
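The construction in C4 is exactly how one builds a matrix with prescribed eigenvalues and eigenvectors; the 3×3 example in part (c) can be reproduced with numpy (not part of the textbook solution):

```python
import numpy as np

# P holds the desired eigenvectors as columns; D the desired eigenvalues.
P = np.array([[1., 1., 1.], [0., 1., -1.], [1., -1., 2.]])
D = np.diag([2., -2., 3.])
A = P @ D @ np.linalg.inv(P)

assert np.allclose(A, [[-1., 2., 3.], [-5., 8., 5.], [6., -8., -4.]])
```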
C5 (a) We prove the result by induction on k. If k = 1, then P⁻¹AP = D implies A = PDP⁻¹ and so the result holds. Assume the result is true for some k. We then have

Aᵏ⁺¹ = AᵏA = (PDᵏP⁻¹)(PDP⁻¹) = PDᵏP⁻¹PDP⁻¹ = PDᵏIDP⁻¹ = PDᵏ⁺¹P⁻¹

as required.
(b) The characteristic polynomial of A is

C(λ) = det(A − λI) = det[−1−λ 6 3; 3 −4−λ −3; −6 12 8−λ] = −λ³ + 3λ² − 4 = −(λ − 2)²(λ + 1)

Hence, λ1 = 2 is an eigenvalue with algebraic multiplicity 2 and λ2 = −1 is an eigenvalue with algebraic multiplicity 1.
For λ1 = 2 we get A − λ1I = [−3 6 3; 3 −6 −3; −6 12 6] ~ [1 −2 −1; 0 0 0; 0 0 0]. Thus, a basis for the eigenspace is {(2, 1, 0), (1, 0, 1)}. Hence, the geometric multiplicity of λ1 is 2.
For λ2 = −1 we get A − λ2I = [0 6 3; 3 −3 −3; −6 12 9] ~ [1 0 −1/2; 0 1 1/2; 0 0 0]. Therefore {(1, −1, 2)} is a basis for the eigenspace. Hence, the geometric multiplicity of λ2 is 1.
For every eigenvalue of A the geometric multiplicity equals the algebraic multiplicity, so A is diagonalizable. We can take P = [2 1 1; 1 0 −1; 0 1 2] and get D = [2 0 0; 0 2 0; 0 0 −1]. We now find that P⁻¹ = [1 −1 −1; −2 4 3; 1 −2 −1] and

A⁵ = PD⁵P⁻¹ = [2 1 1; 1 0 −1; 0 1 2][32 0 0; 0 32 0; 0 0 −1][1 −1 −1; −2 4 3; 1 −2 −1] = [−1 66 33; 33 −34 −33; −66 132 98]
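The power computation in C5(b) can be verified with numpy (not part of the textbook solution): A⁵ computed through the diagonalization agrees with direct matrix powering.

```python
import numpy as np

A = np.array([[-1., 6., 3.], [3., -4., -3.], [-6., 12., 8.]])
P = np.array([[2., 1., 1.], [1., 0., -1.], [0., 1., 2.]])
# D^5 for D = diag(2, 2, -1): raise the diagonal entries to the 5th power.
D5 = np.diag([2., 2., -1.]) ** 5
A5 = P @ D5 @ np.linalg.inv(P)

assert np.allclose(A5, np.linalg.matrix_power(A, 5))
assert np.allclose(A5, [[-1., 66., 33.], [33., -34., -33.], [-66., 132., 98.]])
```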
C6 (a) Suppose that A is an n × n matrix and P⁻¹AP = D. Then, the diagonal entries of D are the eigenvalues λ1, …, λn of A. Since A is similar to D, they both have the same trace by Theorem 6.2.1. Thus, tr A = tr D = λ1 + ⋯ + λn.
(b) Observe that

A − bI = [a+b−b a a; a a+b−b a; a a a+b−b] = [a a a; a a a; a a a] ~ [1 1 1; 0 0 0; 0 0 0]

Thus, λ1 = b is an eigenvalue with geometric multiplicity 2. Therefore, the algebraic multiplicity of λ1 is at least 2. From part (a), we have

λ1 + λ1 + λ2 = tr A = (a + b) + (a + b) + (a + b) = 3a + 3b

Thus, λ2 = 3a + b. Hence, if a = 0, we have λ1 = b is an eigenvalue with algebraic multiplicity 3 and geometric multiplicity 3, or if a ≠ 0, we have λ1 = b is an eigenvalue with algebraic and geometric multiplicity 2 and λ2 = 3a + b is an eigenvalue with algebraic and geometric multiplicity 1.
C7 (a) If A is an n × n diagonalizable matrix, then A is similar to a diagonal matrix D = diag(λ1, …, λn). Hence, by Theorem 6.2.1, det A = det D = λ1 ⋯ λn.
(b) For any polynomial p(x), the constant term is p(0). Thus, for the characteristic polynomial of A, the constant term is C(0) = det(A − 0I) = det A.
(c) By the Fundamental Theorem of Algebra, a polynomial with real coefficients can be written as a product of first degree factors (possibly using complex and repeated zeros of the polynomial). In particular,

det(A − λI) = (−1)ⁿ(λ − λ1)(λ − λ2) ⋯ (λ − λn)

where the λi may be complex or repeated. Set λ = 0:

det A = (−1)ⁿ(−λ1)(−λ2) ⋯ (−λn) = λ1 ⋯ λn

(Note that the product of the characteristic roots is real even if some of them are complex.)
C8 By Problem C7, 0 is an eigenvalue of A if and only if det A = 0. But det A = 0 if and only if A is not
invertible.
C9 Let D = diag(λ1 , . . . , λn ) so that P−1 AP = D. Hence,
P−1 (A − λ1 I)P = P−1 AP − λ1 I = D − λ1 I = diag(0, λ2 − λ1 , · · · , λn − λ1 )
Since A − λ1 I is diagonalized to this form, its eigenvalues are 0, λ2 − λ1 , . . . , λn − λ1 .
Section 6.3
A Practice Problems
A1 We have

C(λ) = det[4−λ −1; −2 5−λ] = λ² − 9λ + 18 = (λ − 3)(λ − 6)

Thus, the eigenvalues of A are λ1 = 3 and λ2 = 6.
For λ1 = 3 we get

A − λ1I = [1 −1; −2 2] ~ [1 −1; 0 0]

Hence, a basis for Eλ1 is {(1, 1)}.
For λ2 = 6 we get

A − λ2I = [−2 −1; −2 −1] ~ [1 1/2; 0 0]

Hence, a basis for Eλ2 is {(−1, 2)}.
It follows that A is diagonalized by P = [1 −1; 1 2] to D = [3 0; 0 6]. Thus, we have A = PDP⁻¹ and so

A³ = PD³P⁻¹ = [1 −1; 1 2][27 0; 0 216][2/3 1/3; −1/3 1/3] = [90 −63; −126 153]

It is easy to verify that this does equal A³.
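The verification suggested at the end of A1 takes one line with numpy (not part of the textbook solution):

```python
import numpy as np

A = np.array([[4., -1.], [-2., 5.]])
P = np.array([[1., -1.], [1., 2.]])
# A^3 via the diagonalization: P diag(3^3, 6^3) P^{-1}
A3 = P @ np.diag([27., 216.]) @ np.linalg.inv(P)

assert np.allclose(A3, np.linalg.matrix_power(A, 3))
assert np.allclose(A3, [[90., -63.], [-126., 153.]])
```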
A2 We have

C(λ) = det[−6−λ −10; 4 7−λ] = λ² − λ − 2 = (λ − 2)(λ + 1)

Thus, the eigenvalues of A are λ1 = 2 and λ2 = −1.
For λ1 = 2 we get

A − λ1I = [−8 −10; 4 5] ~ [1 5/4; 0 0]

Hence, a basis for Eλ1 is {(−5, 4)}.
For λ2 = −1 we get

A − λ2I = [−5 −10; 4 8] ~ [1 2; 0 0]

Hence, a basis for Eλ2 is {(−2, 1)}.
It follows that A is diagonalized by P = [−5 −2; 4 1] to D = [2 0; 0 −1]. Thus, we have A = PDP⁻¹ and so

A¹⁰⁰ = PD¹⁰⁰P⁻¹ = [−5 −2; 4 1][2¹⁰⁰ 0; 0 (−1)¹⁰⁰] · (1/3)[1 2; −4 −5]
= (1/3)[−5·2¹⁰⁰ + 8, −5·2¹⁰¹ + 10; 2¹⁰² − 4, 2¹⁰³ − 5]

A3 We have

C(λ) = det[2−λ 2; −3 −5−λ] = λ² + 3λ − 4 = (λ − 1)(λ + 4)

Thus, the eigenvalues of A are λ1 = 1 and λ2 = −4.
For λ1 = 1 we get

A − λ1I = [1 2; −3 −6] ~ [1 2; 0 0]

Hence, a basis for Eλ1 is {(−2, 1)}.
For λ2 = −4 we get

A − λ2I = [6 2; −3 −1] ~ [1 1/3; 0 0]

Hence, a basis for Eλ2 is {(−1, 3)}.
It follows that A is diagonalized by P = [−2 −1; 1 3] to D = [1 0; 0 −4]. Thus, we have A = PDP⁻¹ and so

A¹⁰⁰ = PD¹⁰⁰P⁻¹ = [−2 −1; 1 3][1 0; 0 4¹⁰⁰] · (−1/5)[3 1; −1 −2]
= −(1/5)[−6 + 4¹⁰⁰, −2 + 2·4¹⁰⁰; 3 − 3·4¹⁰⁰, 1 − 6·4¹⁰⁰]
A4 We have

C(λ) = det[−2−λ 2; −3 5−λ] = λ² − 3λ − 4 = (λ + 1)(λ − 4)

Thus, the eigenvalues are λ1 = −1 and λ2 = 4.
For λ1 = −1 we have

A − λ1I = [−1 2; −3 6] ~ [1 −2; 0 0]

Hence, a basis for Eλ1 is {(2, 1)}.
For λ2 = 4 we have

A − λ2I = [−6 2; −3 1] ~ [1 −1/3; 0 0]

Hence, a basis for Eλ2 is {(1, 3)}.
It follows that A is diagonalized by P = [2 1; 1 3] to D = [−1 0; 0 4]. Thus, we have A = PDP⁻¹ and so

A²⁰⁰ = PD²⁰⁰P⁻¹ = [2 1; 1 3][1 0; 0 4²⁰⁰] · (1/5)[3 −1; −1 2]
= (1/5)[6 − 4²⁰⁰, −2 + 2·4²⁰⁰; 3 − 3·4²⁰⁰, −1 + 6·4²⁰⁰]

A5 We have

C(λ) = det[6−λ −6; 2 −1−λ] = λ² − 5λ + 6 = (λ − 2)(λ − 3)

Thus, the eigenvalues are λ1 = 2 and λ2 = 3.
For λ1 = 2 we have

A − λ1I = [4 −6; 2 −3] ~ [1 −3/2; 0 0]

Hence, a basis for Eλ1 is {(3, 2)}.
For λ2 = 3 we have

A − λ2I = [3 −6; 2 −4] ~ [1 −2; 0 0]

Hence, a basis for Eλ2 is {(2, 1)}.
It follows that A is diagonalized by P = [3 2; 2 1] to D = [2 0; 0 3]. Thus, we have A = PDP⁻¹ and so

A²⁰⁰ = PD²⁰⁰P⁻¹ = [3 2; 2 1][2²⁰⁰ 0; 0 3²⁰⁰][−1 2; 2 −3]
= [−3(2²⁰⁰) + 4(3²⁰⁰), 3(2²⁰¹) − 2(3²⁰¹); −2²⁰¹ + 2(3²⁰⁰), 2²⁰² − 3²⁰¹]
A6 We have

C(λ) = det[−2−λ 1 1; −1 −λ 1; −2 2 1−λ] = det[−1−λ 1 1; −1−λ −λ 1; 0 2 1−λ]   (C1 → C1 + C2)
= det[−1−λ 1 1; 0 −1−λ 0; 0 2 1−λ]   (R2 → R2 − R1)
= −(λ + 1)²(λ − 1)

Thus, the eigenvalues are λ1 = −1 and λ2 = 1.
For λ1 = −1 we have

A − λ1I = [−1 1 1; −1 1 1; −2 2 2] ~ [1 −1 −1; 0 0 0; 0 0 0]

Hence, a basis for Eλ1 is {(1, 1, 0), (1, 0, 1)}.
For λ2 = 1 we have

A − λ2I = [−3 1 1; −1 −1 1; −2 2 0] ~ [1 0 −1/2; 0 1 −1/2; 0 0 0]

Hence, a basis for Eλ2 is {(1, 1, 2)}.
It follows that A is diagonalized by P = [1 1 1; 1 0 1; 0 1 2] to D = [−1 0 0; 0 −1 0; 0 0 1]. Thus, we have A = PDP⁻¹ and so

A¹⁰⁰ = PD¹⁰⁰P⁻¹ = PIP⁻¹ = I
A7 We have

C(λ) = det[7−λ −3 2; 8 −4−λ 2; −10 4 −3−λ] = det[−1−λ 1+λ 0; 8 −4−λ 2; −10 4 −3−λ]   (R1 → R1 − R2)
= det[−1−λ 0 0; 8 4−λ 2; −10 −6 −3−λ]   (C2 → C2 + C1)
= (−1 − λ)(λ² − λ) = −(λ + 1)λ(λ − 1)

Thus, the eigenvalues are λ1 = −1, λ2 = 0, and λ3 = 1.
For λ1 = −1 we have

A − λ1I = [8 −3 2; 8 −3 2; −10 4 −2] ~ [1 0 1; 0 1 2; 0 0 0]

Hence, a basis for Eλ1 is {(−1, −2, 1)}.
For λ2 = 0 we have

A − λ2I = [7 −3 2; 8 −4 2; −10 4 −3] ~ [1 0 1/2; 0 1 1/2; 0 0 0]

Hence, a basis for Eλ2 is {(−1, −1, 2)}.
For λ3 = 1 we have

A − λ3I = [6 −3 2; 8 −5 2; −10 4 −4] ~ [1 0 2/3; 0 1 2/3; 0 0 0]

Hence, a basis for Eλ3 is {(−2, −2, 3)}.
It follows that A is diagonalized by P = [−1 −1 −2; −2 −1 −2; 1 2 3] to D = [−1 0 0; 0 0 0; 0 0 1]. Thus, we have A = PDP⁻¹ and so

A¹⁰⁰ = PD¹⁰⁰P⁻¹ = [−1 −1 −2; −2 −1 −2; 1 2 3][1 0 0; 0 0 0; 0 0 1][1 −1 0; 4 −1 2; −3 1 −1]
= [5 −1 2; 4 0 2; −8 2 −3]
A8 We have

C(λ) = det[3−λ −1 0; 2 −λ 0; −2 1 1−λ] = det[3−λ −1 0; 0 1−λ 1−λ; −2 1 1−λ]   (R2 → R2 + R3)
= det[3−λ −1 1; 0 1−λ 0; −2 1 −λ]   (C3 → C3 − C2)
= (1 − λ)(λ² − 3λ + 2) = −(λ − 1)²(λ − 2)

Thus, the eigenvalues are λ1 = 1 and λ2 = 2.
For λ1 = 1 we have

A − λ1I = [2 −1 0; 2 −1 0; −2 1 0] ~ [1 −1/2 0; 0 0 0; 0 0 0]

Hence, a basis for Eλ1 is {(1, 2, 0), (0, 0, 1)}.
For λ2 = 2 we have

A − λ2I = [1 −1 0; 2 −2 0; −2 1 −1] ~ [1 0 1; 0 1 1; 0 0 0]

Hence, a basis for Eλ2 is {(−1, −1, 1)}.
It follows that A is diagonalized by P = [1 0 −1; 2 0 −1; 0 1 1] to D = [1 0 0; 0 1 0; 0 0 2]. Thus, we have A = PDP⁻¹ and so

A¹⁰⁰ = PD¹⁰⁰P⁻¹ = [1 0 −1; 2 0 −1; 0 1 1][1 0 0; 0 1 0; 0 0 2¹⁰⁰][−1 1 0; 2 −1 1; −2 1 0]
= [−1 + 2¹⁰¹, 1 − 2¹⁰⁰, 0; −2 + 2¹⁰¹, 2 − 2¹⁰⁰, 0; 2 − 2¹⁰¹, −1 + 2¹⁰⁰, 1]
A9 Since the second column does not sum to 1, it is not a Markov matrix.
A10 Since the columns sum to 1, it is a Markov matrix. If λ = 1, then

A − λI = [−0.7 0.6; 0.7 −0.6] ~ [1 −6/7; 0 0]

Therefore, an eigenvector corresponding to λ1 is (6/7, 1). The components in the state vector must sum to 1, so the invariant state corresponding to λ1 is

(1/(6/7 + 1)) (6/7, 1) = (6/13, 7/13)
A11 Since the columns sum to 1, it is a Markov matrix. If λ = 1, then

A − λI = [−0.5 0.4; 0.5 −0.4] ~ [1 −4/5; 0 0]

Therefore, an eigenvector corresponding to λ1 is (4/5, 1). The components in the state vector must sum to 1, so the invariant state corresponding to λ1 is

(1/(4/5 + 1)) (4/5, 1) = (4/9, 5/9)
A12 Since the columns sum to 1, it is a Markov matrix. If λ = 1, then

A − λI = [−0.2 0.5; 0.2 −0.5] ~ [1 −5/2; 0 0]

Therefore, an eigenvector corresponding to λ1 is (5/2, 1). The components in the state vector must sum to 1, so the invariant state corresponding to λ1 is

(1/(5/2 + 1)) (5/2, 1) = (5/7, 2/7)
A13 Since the second column does not sum to 1, it is not a Markov matrix.
A14 Since the columns sum to 1, it is a Markov matrix. If λ = 1, then

A − λI = [−0.1 0.1 0; 0 −0.1 0.1; 0.1 0 −0.1] ~ [1 0 −1; 0 1 −1; 0 0 0]

Therefore, an eigenvector corresponding to λ1 is (1, 1, 1). The components in the state vector must sum to 1, so the invariant state corresponding to λ1 is

(1/(1 + 1 + 1)) (1, 1, 1) = (1/3, 1/3, 1/3)
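The invariant state in A14 can be recovered numerically (numpy; not part of the textbook solution) by taking the eigenvector for the eigenvalue 1 and normalizing it to sum to 1:

```python
import numpy as np

# The A14 Markov matrix (A − I is the matrix row-reduced above).
T = np.array([[0.9, 0.1, 0.0],
              [0.0, 0.9, 0.1],
              [0.1, 0.0, 0.9]])
w, V = np.linalg.eig(T)
v = V[:, np.argmin(np.abs(w - 1))].real  # eigenvector for eigenvalue 1
state = v / v.sum()                      # scale so the components sum to 1

assert np.allclose(state, [1/3, 1/3, 1/3])
assert np.allclose(T @ state, state)  # the state is invariant under T
```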
A15 (a) Let Rm be the fraction of people dwelling in rural areas (as a decimal), and Um be the fraction of people dwelling in urban areas. Then Rm + Um = 1, and

R(m+1) = 0.85Rm + 0.05Um
U(m+1) = 0.15Rm + 0.95Um

Or, in vector form,

[R(m+1); U(m+1)] = [0.85 0.05; 0.15 0.95][Rm; Um]

The transition matrix is T = [0.85 0.05; 0.15 0.95]. Since T is a Markov matrix, it necessarily has an eigenvalue λ1 = 1. An eigenvector corresponding to λ1 is v1 = (1, 3). The state vector is then (1/4, 3/4), which is fixed under the transformation with matrix T. It follows that in the long run, 25% of the population will be rural dwellers and 75% will be urban dwellers.
(b) Another eigenvalue of T is λ2 = 0.8, with corresponding eigenvector v2 = (−1, 1). It follows that T is diagonalized by P = [1 −1; 3 1]. We get P⁻¹ = (1/4)[1 1; −3 1]. Since these eigenvectors form a basis for R²,

(R0, U0) = c1 (1, 3) + c2 (−1, 1) = P(c1, c2)

So

(c1, c2) = P⁻¹(R0, U0) = (1/4)(R0 + U0, −3R0 + U0)

By linearity,

Tᵐ(R0, U0) = c1 Tᵐ v1 + c2 Tᵐ v2 = c1 λ1ᵐ v1 + c2 λ2ᵐ v2
= (1/4)(R0 + U0)(1, 3) + (1/4)(−3R0 + U0)(0.8)ᵐ(−1, 1)

After 50 years, or 5 decades,

T⁵(0.5, 0.5) = (1/4)(0.5 + 0.5)(1, 3) + (1/4)(−1)(0.8)⁵(−1, 1)
≈ (1/4)(1 + 0.3277, 3 − 0.3277)
≈ (0.33, 0.67)

Therefore, after 5 decades, approximately 33% of the population will be rural dwellers and 67% will be urban dwellers.
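The A15(b) calculation can be confirmed by simply iterating the transition matrix (numpy; not part of the textbook solution):

```python
import numpy as np

T = np.array([[0.85, 0.05],
              [0.15, 0.95]])
# 5 decades of transitions starting from an even rural/urban split.
x5 = np.linalg.matrix_power(T, 5) @ np.array([0.5, 0.5])

# Closed form from the eigen-decomposition: (1/4)(1, 3) + (0.8^5/4)(1, -1)
assert np.allclose(x5, [0.25 + 0.8**5 / 4, 0.75 - 0.8**5 / 4])
```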
A16 Let Am be the fraction of cars returned to the airport, and let Tm and Cm be the fractions of cars returned to the train station and city centre, respectively. Then Am + Tm + Cm = 1, and

A(m+1) = (8/10)Am + (3/10)Tm + (3/10)Cm
T(m+1) = (1/10)Am + (6/10)Tm + (1/10)Cm
C(m+1) = (1/10)Am + (1/10)Tm + (6/10)Cm

Or, in vector form,

[A(m+1); T(m+1); C(m+1)] = (1/10)[8 3 3; 1 6 1; 1 1 6][Am; Tm; Cm]

The transition matrix is T = (1/10)[8 3 3; 1 6 1; 1 1 6]. Since T is a Markov matrix, it necessarily has an eigenvalue λ1 = 1. An eigenvector corresponding to λ1 is v1 = (3, 1, 1). The state vector is then (3/5, 1/5, 1/5), which is fixed under the transformation with matrix T. It follows that in the long run, 60% of the cars will be at the airport, 20% of the cars will be at the train station, and 20% of the cars will be at the city centre.
A17 (a) Let Jm be the percent of customers that deal with Johnson in month m and let Tm be the percent of customers that deal with Thomson. We are given that

J(m+1) = 0.3Jm + 0.8Tm
T(m+1) = 0.7Jm + 0.2Tm

Or, in vector form,

[J(m+1); T(m+1)] = (1/10)[3 8; 7 2][Jm; Tm]

The transition matrix is T = (1/10)[3 8; 7 2]. Since T is a Markov matrix, it necessarily has an eigenvalue λ1 = 1. An eigenvector corresponding to λ1 is v1 = (8, 7). The state vector is then (8/15, 7/15), which is fixed under the transformation with matrix T. So, in the long run, 8/15 ≈ 53% of the customers will deal with Johnson and 7/15 ≈ 47% will deal with Thomson.
(b) Another eigenvalue of T is λ2 = −1/2, with corresponding eigenvector v2 = (−1, 1). It follows that T is diagonalized by P = [8 −1; 7 1]. We get P⁻¹ = (1/15)[1 1; −7 8]. Since these eigenvectors form a basis for R²,

(J0, T0) = c1 (8, 7) + c2 (−1, 1) = P(c1, c2)

So

(c1, c2) = P⁻¹(J0, T0) = (1/15)(J0 + T0, −7J0 + 8T0)

By linearity,

Tᵐ(J0, T0) = c1 Tᵐ v1 + c2 Tᵐ v2 = c1 λ1ᵐ v1 + c2 λ2ᵐ v2
= (1/15)(J0 + T0)(8, 7) + (1/15)(−7J0 + 8T0)(−1/2)ᵐ(−1, 1)

Hence, if J0 = 0.25 and T0 = 0.75, we get

Tᵐ(0.25, 0.75) = (1/15)(8, 7) + (4.25/15)(−0.5)ᵐ(−1, 1)
! "
! "
y
a
A18 The solution will be of the form
= ceλt , so substitute this into the original system and use the
z
b
! "
! "
d λt a
a
fact that ce
= λceλt
to get
b
b
dt
! " !
"
! "
a
3
2 λt a
λce
=
ce
b
4 −4
b
λt
! "
!
"
a
3
2
Cancelling the common factor ce , we see that
is an eigenvector of
with eigenvalue λ.
b
4 −4
!
"
3
2
Thus, we need to find the eigenvalues of A =
. We have
4 −4
λt
))
)
))3 − λ
2 )))
C(λ) = )
= λ2 + λ − 20 = (λ + 5)(λ − 4)
) 4
−4 − λ))
Hence, the eigenvalues are λ1 = −5 and λ2 = 4. For λ1 = −5
" !
"
8 2
1 1/4
A − (−5)I =
∼
4 1
0 0
!
"
! "
−1
−1
. So, one solution to the system is e−5t
.
4
4
For λ2 = 4,
!
" !
"
−1
2
1 −2
A − 4I =
∼
4 −8
0
0
! "
! "
2
4t 2
Hence, an eigenvector corresponding to λ2 is . So, one solution to the system is e
.
1
1
Since the system is a linear homogeneous system, the general solution will be an arbitrary linear
combination of the two solutions. The general solution is therefore
Hence, an eigenvector corresponding to λ1 is
!
! "
! "
! "
y
−5t −1
4t 2
= ae
+ be
,
z
4
1
a, b ∈ R
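Each term of the A18 general solution satisfies the system precisely because (a, b) is an eigenvector; that eigenvector property is easy to confirm numerically (numpy; not part of the textbook solution):

```python
import numpy as np

A = np.array([[3., 2.], [4., -4.]])
v1, v2 = np.array([-1., 4.]), np.array([2., 1.])

# A v = λ v makes d/dt (e^{λt} v) = λ e^{λt} v = A (e^{λt} v).
assert np.allclose(A @ v1, -5 * v1)
assert np.allclose(A @ v2, 4 * v2)
```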
! "
! "
y
a
A19 The solution will be of the form
= ceλt , so substitute this into the original system and use the
z
b
! "
! "
d λt a
a
fact that ce
= λceλt
to get
b
b
dt
! " !
"
! "
a
0.2
0.7 λt a
λce
=
ce
b
0.1 −0.4
b
λt
Copyright © 2020 Pearson Canada Inc.
41
! "
!
"
a
0.2
0.7
Cancelling the common factor ce , we see that
is an eigenvector of
with eigenvalue
b
0.1 −0.4
!
"
0.2
0.7
λ. Thus, we need to find the eigenvalues of A =
. We have
0.1 −0.4
))
)
))0.2 − λ
0.7 )))
C(λ) = )
= λ2 + 0.2λ − 0.15 = (λ + 0.5)(λ − 0.3)
) 0.1
−0.4 − λ))
λt
Hence, the eigenvalues are λ1 = −0.5 and λ2 = 0.3. For λ1 = −0.5
!
" !
"
0.7 0.7
1 1
A − (−0.5)I =
∼
0.1 0.1
0 0
! "
! "
−1
−0.5t −1
Hence, an eigenvector corresponding to λ1 is
. So, one solution to the system is e
.
1
1
For λ2 = 0.3,
!
" !
"
−0.1
0.7
1 −7
A − 0.3I =
∼
0.1 −0.7
0
0
! "
! "
7
0.3t 7
Hence, an eigenvector corresponding to λ2 is . So, one solution to the system is e
.
1
1
Since the system is a linear homogeneous system, the general solution will be an arbitrary linear
combination of the two solutions. The general solution is therefore
! "
! "
! "
y
−0.5t −1
0.3t 7
= ae
+ be
, a, b ∈ R
z
1
1
A20 We need to find the eigenvalues of A = [−1 4; 8 −5]. We have

C(λ) = det[−1−λ 4; 8 −5−λ] = λ² + 6λ − 27 = (λ + 9)(λ − 3)

Hence, the eigenvalues are λ1 = −9 and λ2 = 3. For λ1 = −9,

A + 9I = [8 4; 8 4] ~ [1 1/2; 0 0]

Hence, an eigenvector corresponding to λ1 is (−1, 2). So, one solution to the system is e^(−9t) (−1, 2).
For λ2 = 3,

A − 3I = [−4 4; 8 −8] ~ [1 −1; 0 0]

Hence, an eigenvector corresponding to λ2 is (1, 1). So, one solution to the system is e^(3t) (1, 1).
Since the system is a linear homogeneous system, the general solution will be an arbitrary linear combination of the two solutions. The general solution is therefore

(y, z) = a e^(−9t) (−1, 2) + b e^(3t) (1, 1),   a, b ∈ R
"
A21 We need to find the eigenvalues of $A = \begin{bmatrix} 0.5 & 0.3 \\ 0.1 & 0.3 \end{bmatrix}$. We have
\[
C(\lambda) = \begin{vmatrix} 0.5-\lambda & 0.3 \\ 0.1 & 0.3-\lambda \end{vmatrix} = \lambda^2 - 0.8\lambda + 0.12 = (\lambda - 0.6)(\lambda - 0.2)
\]
Hence, the eigenvalues are $\lambda_1 = 0.6$ and $\lambda_2 = 0.2$. For $\lambda_1 = 0.6$,
\[
A - 0.6I = \begin{bmatrix} -0.1 & 0.3 \\ 0.1 & -0.3 \end{bmatrix} \sim \begin{bmatrix} 1 & -3 \\ 0 & 0 \end{bmatrix}
\]
Hence, an eigenvector corresponding to $\lambda_1$ is $\begin{bmatrix} 3 \\ 1 \end{bmatrix}$, so one solution to the system is $e^{0.6t}\begin{bmatrix} 3 \\ 1 \end{bmatrix}$.
For $\lambda_2 = 0.2$,
\[
A - 0.2I = \begin{bmatrix} 0.3 & 0.3 \\ 0.1 & 0.1 \end{bmatrix} \sim \begin{bmatrix} 1 & 1 \\ 0 & 0 \end{bmatrix}
\]
Hence, an eigenvector corresponding to $\lambda_2$ is $\begin{bmatrix} -1 \\ 1 \end{bmatrix}$, so one solution to the system is $e^{0.2t}\begin{bmatrix} -1 \\ 1 \end{bmatrix}$.
Since the system is a linear homogeneous system, the general solution is an arbitrary linear combination of the two solutions:
\[
\begin{bmatrix} y \\ z \end{bmatrix} = a e^{0.6t}\begin{bmatrix} 3 \\ 1 \end{bmatrix} + b e^{0.2t}\begin{bmatrix} -1 \\ 1 \end{bmatrix}, \qquad a, b \in \mathbb{R}
\]
A22 We need to find the eigenvalues of $A = \begin{bmatrix} 4 & 2 \\ 2 & 1 \end{bmatrix}$. We have
\[
C(\lambda) = \begin{vmatrix} 4-\lambda & 2 \\ 2 & 1-\lambda \end{vmatrix} = \lambda^2 - 5\lambda = \lambda(\lambda - 5)
\]
Hence, the eigenvalues are $\lambda_1 = 0$ and $\lambda_2 = 5$. For $\lambda_1 = 0$,
\[
A - 0I = \begin{bmatrix} 4 & 2 \\ 2 & 1 \end{bmatrix} \sim \begin{bmatrix} 1 & 1/2 \\ 0 & 0 \end{bmatrix}
\]
Hence, an eigenvector corresponding to $\lambda_1$ is $\begin{bmatrix} -1 \\ 2 \end{bmatrix}$, so one solution to the system is $e^{0t}\begin{bmatrix} -1 \\ 2 \end{bmatrix}$.
For $\lambda_2 = 5$,
\[
A - 5I = \begin{bmatrix} -1 & 2 \\ 2 & -4 \end{bmatrix} \sim \begin{bmatrix} 1 & -2 \\ 0 & 0 \end{bmatrix}
\]
Hence, an eigenvector corresponding to $\lambda_2$ is $\begin{bmatrix} 2 \\ 1 \end{bmatrix}$, so one solution to the system is $e^{5t}\begin{bmatrix} 2 \\ 1 \end{bmatrix}$.
Since the system is a linear homogeneous system, the general solution is an arbitrary linear combination of the two solutions:
\[
\begin{bmatrix} y \\ z \end{bmatrix} = a \begin{bmatrix} -1 \\ 2 \end{bmatrix} + b e^{5t}\begin{bmatrix} 2 \\ 1 \end{bmatrix}, \qquad a, b \in \mathbb{R}
\]
A23 We need to find the eigenvalues of $A = \begin{bmatrix} 3 & 6 \\ 5 & 2 \end{bmatrix}$. We have
\[
C(\lambda) = \begin{vmatrix} 3-\lambda & 6 \\ 5 & 2-\lambda \end{vmatrix} = \lambda^2 - 5\lambda - 24 = (\lambda - 8)(\lambda + 3)
\]
Hence, the eigenvalues are $\lambda_1 = 8$ and $\lambda_2 = -3$. For $\lambda_1 = 8$,
\[
A - 8I = \begin{bmatrix} -5 & 6 \\ 5 & -6 \end{bmatrix} \sim \begin{bmatrix} 1 & -6/5 \\ 0 & 0 \end{bmatrix}
\]
Hence, an eigenvector corresponding to $\lambda_1$ is $\begin{bmatrix} 6 \\ 5 \end{bmatrix}$, so one solution to the system is $e^{8t}\begin{bmatrix} 6 \\ 5 \end{bmatrix}$.
For $\lambda_2 = -3$,
\[
A + 3I = \begin{bmatrix} 6 & 6 \\ 5 & 5 \end{bmatrix} \sim \begin{bmatrix} 1 & 1 \\ 0 & 0 \end{bmatrix}
\]
Hence, an eigenvector corresponding to $\lambda_2$ is $\begin{bmatrix} -1 \\ 1 \end{bmatrix}$, so one solution to the system is $e^{-3t}\begin{bmatrix} -1 \\ 1 \end{bmatrix}$.
Since the system is a linear homogeneous system, the general solution is an arbitrary linear combination of the two solutions:
\[
\begin{bmatrix} y \\ z \end{bmatrix} = a e^{8t}\begin{bmatrix} 6 \\ 5 \end{bmatrix} + b e^{-3t}\begin{bmatrix} -1 \\ 1 \end{bmatrix}, \qquad a, b \in \mathbb{R}
\]
A24 We need to find the eigenvalues of $A = \begin{bmatrix} 7 & 4 \\ 2 & 9 \end{bmatrix}$. We have
\[
C(\lambda) = \begin{vmatrix} 7-\lambda & 4 \\ 2 & 9-\lambda \end{vmatrix} = \lambda^2 - 16\lambda + 55 = (\lambda - 5)(\lambda - 11)
\]
Hence, the eigenvalues are $\lambda_1 = 5$ and $\lambda_2 = 11$. For $\lambda_1 = 5$,
\[
A - 5I = \begin{bmatrix} 2 & 4 \\ 2 & 4 \end{bmatrix} \sim \begin{bmatrix} 1 & 2 \\ 0 & 0 \end{bmatrix}
\]
Hence, an eigenvector corresponding to $\lambda_1$ is $\begin{bmatrix} -2 \\ 1 \end{bmatrix}$, so one solution to the system is $e^{5t}\begin{bmatrix} -2 \\ 1 \end{bmatrix}$.
For $\lambda_2 = 11$,
\[
A - 11I = \begin{bmatrix} -4 & 4 \\ 2 & -2 \end{bmatrix} \sim \begin{bmatrix} 1 & -1 \\ 0 & 0 \end{bmatrix}
\]
Hence, an eigenvector corresponding to $\lambda_2$ is $\begin{bmatrix} 1 \\ 1 \end{bmatrix}$, so one solution to the system is $e^{11t}\begin{bmatrix} 1 \\ 1 \end{bmatrix}$.
Since the system is a linear homogeneous system, the general solution is an arbitrary linear combination of the two solutions:
\[
\begin{bmatrix} y \\ z \end{bmatrix} = a e^{5t}\begin{bmatrix} -2 \\ 1 \end{bmatrix} + b e^{11t}\begin{bmatrix} 1 \\ 1 \end{bmatrix}, \qquad a, b \in \mathbb{R}
\]
A25 We need to find the eigenvalues of $A = \begin{bmatrix} 0.4 & 0.4 \\ 0.6 & 0.6 \end{bmatrix}$. We have
\[
C(\lambda) = \begin{vmatrix} 0.4-\lambda & 0.4 \\ 0.6 & 0.6-\lambda \end{vmatrix} = \lambda^2 - \lambda = \lambda(\lambda - 1)
\]
Hence, the eigenvalues are $\lambda_1 = 0$ and $\lambda_2 = 1$. For $\lambda_1 = 0$,
\[
A - 0I = \begin{bmatrix} 0.4 & 0.4 \\ 0.6 & 0.6 \end{bmatrix} \sim \begin{bmatrix} 1 & 1 \\ 0 & 0 \end{bmatrix}
\]
Hence, an eigenvector corresponding to $\lambda_1$ is $\begin{bmatrix} -1 \\ 1 \end{bmatrix}$, so one solution to the system is $e^{0t}\begin{bmatrix} -1 \\ 1 \end{bmatrix}$.
For $\lambda_2 = 1$,
\[
A - 1I = \begin{bmatrix} -0.6 & 0.4 \\ 0.6 & -0.4 \end{bmatrix} \sim \begin{bmatrix} 1 & -2/3 \\ 0 & 0 \end{bmatrix}
\]
Hence, an eigenvector corresponding to $\lambda_2$ is $\begin{bmatrix} 2 \\ 3 \end{bmatrix}$, so one solution to the system is $e^{t}\begin{bmatrix} 2 \\ 3 \end{bmatrix}$.
Since the system is a linear homogeneous system, the general solution is an arbitrary linear combination of the two solutions:
\[
\begin{bmatrix} y \\ z \end{bmatrix} = a \begin{bmatrix} -1 \\ 1 \end{bmatrix} + b e^{t}\begin{bmatrix} 2 \\ 3 \end{bmatrix}, \qquad a, b \in \mathbb{R}
\]
A26 We need to find the eigenvalues of $A = \begin{bmatrix} -1 & -1 & 0 \\ -13 & 3 & 8 \\ 11 & -5 & -8 \end{bmatrix}$. We have
\[
C(\lambda) = \begin{vmatrix} -1-\lambda & -1 & 0 \\ -13 & 3-\lambda & 8 \\ 11 & -5 & -8-\lambda \end{vmatrix} = -\lambda(\lambda + 2)(\lambda + 4)
\]
Hence, the eigenvalues are $\lambda_1 = 0$, $\lambda_2 = -2$, and $\lambda_3 = -4$. For $\lambda_1 = 0$,
\[
A - 0I = \begin{bmatrix} -1 & -1 & 0 \\ -13 & 3 & 8 \\ 11 & -5 & -8 \end{bmatrix} \sim \begin{bmatrix} 1 & 0 & -1/2 \\ 0 & 1 & 1/2 \\ 0 & 0 & 0 \end{bmatrix}
\]
Hence, an eigenvector corresponding to $\lambda_1$ is $\begin{bmatrix} 1 \\ -1 \\ 2 \end{bmatrix}$, so one solution to the system is $e^{0t}\begin{bmatrix} 1 \\ -1 \\ 2 \end{bmatrix}$.
For $\lambda_2 = -2$,
\[
A + 2I = \begin{bmatrix} 1 & -1 & 0 \\ -13 & 5 & 8 \\ 11 & -5 & -6 \end{bmatrix} \sim \begin{bmatrix} 1 & 0 & -1 \\ 0 & 1 & -1 \\ 0 & 0 & 0 \end{bmatrix}
\]
Hence, an eigenvector corresponding to $\lambda_2$ is $\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}$, so one solution to the system is $e^{-2t}\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}$.
For $\lambda_3 = -4$,
\[
A + 4I = \begin{bmatrix} 3 & -1 & 0 \\ -13 & 7 & 8 \\ 11 & -5 & -4 \end{bmatrix} \sim \begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & 3 \\ 0 & 0 & 0 \end{bmatrix}
\]
Hence, an eigenvector corresponding to $\lambda_3$ is $\begin{bmatrix} -1 \\ -3 \\ 1 \end{bmatrix}$, so one solution to the system is $e^{-4t}\begin{bmatrix} -1 \\ -3 \\ 1 \end{bmatrix}$.
Since the system is a linear homogeneous system, the general solution will be an arbitrary linear combination of the three solutions. The general solution is therefore
\[
\begin{bmatrix} x \\ y \\ z \end{bmatrix} = a \begin{bmatrix} 1 \\ -1 \\ 2 \end{bmatrix} + b e^{-2t}\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} + c e^{-4t}\begin{bmatrix} -1 \\ -3 \\ 1 \end{bmatrix}, \qquad a, b, c \in \mathbb{R}
\]
A27 We need to find the eigenvalues of $A = \begin{bmatrix} 1 & 5 & -5 \\ 6 & -3 & 6 \\ -4 & 1 & 2 \end{bmatrix}$. We have
\[
C(\lambda) = \begin{vmatrix} 1-\lambda & 5 & -5 \\ 6 & -3-\lambda & 6 \\ -4 & 1 & 2-\lambda \end{vmatrix} = -(\lambda - 3)(\lambda - 6)(\lambda + 9)
\]
Hence, the eigenvalues are $\lambda_1 = 3$, $\lambda_2 = 6$, and $\lambda_3 = -9$. For $\lambda_1 = 3$,
\[
A - 3I = \begin{bmatrix} -2 & 5 & -5 \\ 6 & -6 & 6 \\ -4 & 1 & -1 \end{bmatrix} \sim \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & -1 \\ 0 & 0 & 0 \end{bmatrix}
\]
Hence, an eigenvector corresponding to $\lambda_1$ is $\begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix}$, so one solution to the system is $e^{3t}\begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix}$.
For $\lambda_2 = 6$,
\[
A - 6I = \begin{bmatrix} -5 & 5 & -5 \\ 6 & -9 & 6 \\ -4 & 1 & -4 \end{bmatrix} \sim \begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix}
\]
Hence, an eigenvector corresponding to $\lambda_2$ is $\begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix}$, so one solution to the system is $e^{6t}\begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix}$.
For $\lambda_3 = -9$,
\[
A + 9I = \begin{bmatrix} 10 & 5 & -5 \\ 6 & 6 & 6 \\ -4 & 1 & 11 \end{bmatrix} \sim \begin{bmatrix} 1 & 0 & -2 \\ 0 & 1 & 3 \\ 0 & 0 & 0 \end{bmatrix}
\]
Hence, an eigenvector corresponding to $\lambda_3$ is $\begin{bmatrix} 2 \\ -3 \\ 1 \end{bmatrix}$, so one solution to the system is $e^{-9t}\begin{bmatrix} 2 \\ -3 \\ 1 \end{bmatrix}$.
Since the system is a linear homogeneous system, the general solution will be an arbitrary linear combination of the three solutions. The general solution is therefore
\[
\begin{bmatrix} x \\ y \\ z \end{bmatrix} = a e^{3t}\begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix} + b e^{6t}\begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix} + c e^{-9t}\begin{bmatrix} 2 \\ -3 \\ 1 \end{bmatrix}, \qquad a, b, c \in \mathbb{R}
\]
B Homework Problems
B1 $A^{99} = \begin{bmatrix} 3^{99} & 2(3^{99}) \\ 3^{99} & 2(3^{99}) \end{bmatrix}$.

B2 $A^{100} = \begin{bmatrix} -1+2^{101} & 2-2^{101} \\ -1+2^{100} & 2-2^{100} \end{bmatrix}$.
B3 $A^{100} = \dfrac{1}{3}\begin{bmatrix} 2^{100}+2 & 2^{101}-2 \\ 2^{100}-1 & 2^{101}+1 \end{bmatrix}$.
B4 $A^{100} = \begin{bmatrix} -5(-2)^{99} - 3(2^{99}) & -(-2)^{100} + 2^{100} \\ 15(-2)^{98} - 15(2^{98}) & 3(-2)^{99} + 5(2^{99}) \end{bmatrix}$.

B5 $A^{100} = \begin{bmatrix} 2^{100} & -3(2^{100}) & -3(2^{100}) \\ 2^{101} & -6(2^{100}) & -6(2^{100}) \\ -2^{101} & 6(2^{100}) & 6(2^{100}) \end{bmatrix}$.

B6 $A^{100} = \begin{bmatrix} 2-2^{100} & 1-2^{100} & -1+2^{100} \\ 0 & 2^{100} & 0 \\ 2-2^{101} & 1-2^{100} & -1+2^{101} \end{bmatrix}$.

B7 It is a Markov matrix. The invariant state is $\begin{bmatrix} 7/13 \\ 6/13 \end{bmatrix}$.

B8 It is not a Markov matrix.

B9 It is a Markov matrix. The invariant state is $\begin{bmatrix} 2/3 \\ 1/3 \end{bmatrix}$.

B10 It is not a Markov matrix.

B11 It is a Markov matrix. The invariant state is $\begin{bmatrix} 7/13 \\ 2/13 \\ 4/13 \end{bmatrix}$.

B12 It is a Markov matrix. The invariant state is $\begin{bmatrix} 10/27 \\ 14/27 \\ 1/9 \end{bmatrix}$.
B13 (a) Let $x_t$ be the number of RealWest's customers and let $y_t$ be the number of Newservices customers at month $t$. We get
\[
\begin{bmatrix} x_{t+1} \\ y_{t+1} \end{bmatrix} = \begin{bmatrix} 0.7 & 0.1 \\ 0.3 & 0.9 \end{bmatrix} \begin{bmatrix} x_t \\ y_t \end{bmatrix}
\]
(b) $\begin{bmatrix} x_1 \\ y_1 \end{bmatrix} = \begin{bmatrix} 0.4 \\ 0.6 \end{bmatrix}$, $\begin{bmatrix} x_2 \\ y_2 \end{bmatrix} = \begin{bmatrix} 0.34 \\ 0.66 \end{bmatrix}$.
(c) The steady state distribution is $\begin{bmatrix} 1/4 \\ 3/4 \end{bmatrix}$.

B14 The transition matrix is $A = \begin{bmatrix} 0.9 & 0.2 \\ 0.1 & 0.8 \end{bmatrix}$.
(a) We find that $A$ is diagonalized by $P = \begin{bmatrix} 2 & -1 \\ 1 & 1 \end{bmatrix}$ to $D = \operatorname{diag}(1, 0.7)$. Thus,
\[
A^t = P D^t P^{-1} = \begin{bmatrix} 2 & -1 \\ 1 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 0 & (7/10)^t \end{bmatrix} \begin{bmatrix} 1/3 & 1/3 \\ -1/3 & 2/3 \end{bmatrix} = \frac{1}{3}\begin{bmatrix} 2 + \left(\frac{7}{10}\right)^t & 2 - 2\left(\frac{7}{10}\right)^t \\ 1 - \left(\frac{7}{10}\right)^t & 1 + 2\left(\frac{7}{10}\right)^t \end{bmatrix}
\]
(b) $\begin{bmatrix} 2/3 \\ 1/3 \end{bmatrix}$

B15 In the long run, 54% of the bicycles will be at the residence, 29% will be at the library, and 17% will be at the athletic centre.
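The steady state in B13(c) can be reached by simply iterating the transition matrix. A minimal NumPy sketch; the initial state $[0.5, 0.5]$ is an assumption (the starting split is not printed in this answer), though it does reproduce the iterates shown in part (b):

```python
import numpy as np

# Transition matrix from B13
A = np.array([[0.7, 0.1],
              [0.3, 0.9]])

# assumed initial state (not given in the answer key)
x = np.array([0.5, 0.5])
for _ in range(200):
    x = A @ x  # one month of customer movement

# x has converged to the invariant state [1/4, 3/4]
```

Convergence is fast here because the second eigenvalue of $A$ is $0.6$, so the non-steady component shrinks by a factor of $0.6$ each month.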
B16 $\begin{bmatrix} y \\ z \end{bmatrix} = a e^{t}\begin{bmatrix} -1 \\ 1 \end{bmatrix} + b e^{4t}\begin{bmatrix} 1 \\ 2 \end{bmatrix}$, $a, b \in \mathbb{R}$

B17 $\begin{bmatrix} y \\ z \end{bmatrix} = a e^{-3t}\begin{bmatrix} -1 \\ 1 \end{bmatrix} + b e^{5t}\begin{bmatrix} 3 \\ 1 \end{bmatrix}$, $a, b \in \mathbb{R}$

B18 $\begin{bmatrix} y \\ z \end{bmatrix} = a e^{-t}\begin{bmatrix} -2 \\ 1 \end{bmatrix} + b e^{7t}\begin{bmatrix} 2 \\ 1 \end{bmatrix}$, $a, b \in \mathbb{R}$

B19 $\begin{bmatrix} y \\ z \end{bmatrix} = a e^{2t}\begin{bmatrix} 1 \\ 2 \end{bmatrix} + b e^{3t}\begin{bmatrix} 1 \\ 3 \end{bmatrix}$, $a, b \in \mathbb{R}$

B20 $\begin{bmatrix} y \\ z \end{bmatrix} = a \begin{bmatrix} -2 \\ 1 \end{bmatrix} + b e^{7t}\begin{bmatrix} 1 \\ 3 \end{bmatrix}$, $a, b \in \mathbb{R}$

B21 $\begin{bmatrix} y \\ z \end{bmatrix} = a e^{-t}\begin{bmatrix} 3 \\ 2 \end{bmatrix} + b e^{-4t}\begin{bmatrix} 1 \\ 1 \end{bmatrix}$, $a, b \in \mathbb{R}$

B22 $\begin{bmatrix} x \\ y \\ z \end{bmatrix} = a \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix} + b e^{t}\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} + c e^{3t}\begin{bmatrix} 1 \\ 2 \\ 0 \end{bmatrix}$, $a, b, c \in \mathbb{R}$.

B23 $\begin{bmatrix} x \\ y \\ z \end{bmatrix} = a e^{6t}\begin{bmatrix} 1 \\ 1 \\ 2 \end{bmatrix} + b e^{-3t}\begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix} + c e^{-6t}\begin{bmatrix} -1 \\ -1 \\ 1 \end{bmatrix}$, $a, b, c \in \mathbb{R}$.

B24 $\begin{bmatrix} x \\ y \\ z \end{bmatrix} = a e^{2t}\begin{bmatrix} -3 \\ 0 \\ 1 \end{bmatrix} + b e^{4t}\begin{bmatrix} 3 \\ 2 \\ 1 \end{bmatrix} + c e^{6t}\begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix}$, $a, b, c \in \mathbb{R}$.
C Conceptual Problems
C1 Multiplying both sides of $P^{-1}AP = B$ on the left by $P$ and on the right by $P^{-1}$ gives $A = PBP^{-1}$. Thus,
\[
A^3 = (PBP^{-1})(PBP^{-1})(PBP^{-1}) = PB(P^{-1}P)B(P^{-1}P)BP^{-1} = PBIBIBP^{-1} = PB^3P^{-1}
\]

C2 (a) The characteristic equation is
\[
\det \begin{bmatrix} t_{11}-\lambda & t_{12} \\ t_{21} & t_{22}-\lambda \end{bmatrix} = \lambda^2 - (t_{11}+t_{22})\lambda + (t_{11}t_{22} - t_{12}t_{21})
\]
However, $(\lambda-\lambda_1)(\lambda-\lambda_2) = \lambda^2 - (\lambda_1+\lambda_2)\lambda + \lambda_1\lambda_2$. So, we must have $\lambda_1 + \lambda_2 = t_{11} + t_{22}$. Since $\lambda_1 = 1$, $\lambda_2 = t_{11} + t_{22} - 1$.
(b) The transition matrix can be written as $\begin{bmatrix} 1-a & b \\ a & 1-b \end{bmatrix}$. We could easily verify that the given vector is an eigenvector corresponding to $\lambda = 1$, but it is straightforward to determine the eigenvector too. For $\lambda = 1$,
\[
\begin{bmatrix} 1-a & b \\ a & 1-b \end{bmatrix} - I = \begin{bmatrix} -a & b \\ a & -b \end{bmatrix} \sim \begin{bmatrix} -a & b \\ 0 & 0 \end{bmatrix}
\]
So, a corresponding eigenvector is any multiple of $\begin{bmatrix} b \\ a \end{bmatrix}$. Since we require the sum of the components to be 1, we take the eigenvector to be $\begin{bmatrix} b/(a+b) \\ a/(a+b) \end{bmatrix}$.

C3 (a) Since the column sums of $T$ are all 1, we have $\sum_{k=1}^{n} t_{kj} = 1$. Then
\[
\sum_{k=1}^{n} (T\vec{x})_k = \sum_{k=1}^{n} \sum_{j=1}^{n} t_{kj} x_j = \sum_{j=1}^{n} \Bigl( \sum_{k=1}^{n} t_{kj} \Bigr) x_j = \sum_{j=1}^{n} x_j
\]
as required.
(b) If $T\vec{v} = \lambda\vec{v}$, then $\sum_{k=1}^{n} (T\vec{v})_k = \lambda \sum_{k=1}^{n} v_k$. However, by part (a), $\sum_{k=1}^{n} (T\vec{v})_k = \sum_{k=1}^{n} v_k$. Thus,
\[
\lambda \sum_{k=1}^{n} v_k = \sum_{k=1}^{n} v_k
\]
Since $\lambda \neq 1$, we must have $\sum_{k=1}^{n} v_k = 0$.
Chapter 6 Quiz
E1 (a) (i) We have
\[
\begin{bmatrix} 5 & -16 & -4 \\ 2 & -7 & -2 \\ -2 & 8 & 3 \end{bmatrix} \begin{bmatrix} 3 \\ 1 \\ 0 \end{bmatrix} = \begin{bmatrix} -1 \\ -1 \\ 2 \end{bmatrix}
\]
So, $\begin{bmatrix} 3 \\ 1 \\ 0 \end{bmatrix}$ is not an eigenvector of $A$.
(ii) We have
\[
\begin{bmatrix} 5 & -16 & -4 \\ 2 & -7 & -2 \\ -2 & 8 & 3 \end{bmatrix} \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}
\]
So, $\begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}$ is an eigenvector with eigenvalue 1.
(iii) We have
\[
\begin{bmatrix} 5 & -16 & -4 \\ 2 & -7 & -2 \\ -2 & 8 & 3 \end{bmatrix} \begin{bmatrix} 4 \\ 1 \\ 0 \end{bmatrix} = \begin{bmatrix} 4 \\ 1 \\ 0 \end{bmatrix}
\]
So, $\begin{bmatrix} 4 \\ 1 \\ 0 \end{bmatrix}$ is an eigenvector with eigenvalue 1.
(iv) We have
\[
\begin{bmatrix} 5 & -16 & -4 \\ 2 & -7 & -2 \\ -2 & 8 & 3 \end{bmatrix} \begin{bmatrix} 2 \\ 1 \\ -1 \end{bmatrix} = \begin{bmatrix} -2 \\ -1 \\ 1 \end{bmatrix}
\]
So, $\begin{bmatrix} 2 \\ 1 \\ -1 \end{bmatrix}$ is an eigenvector with eigenvalue $-1$.
(b) Part (a) shows that $\lambda_1 = 1$ is an eigenvalue of $A$ with corresponding eigenvectors $\vec{v}_1 = \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}$, $\vec{v}_2 = \begin{bmatrix} 4 \\ 1 \\ 0 \end{bmatrix}$, and that $\lambda_2 = -1$ is an eigenvalue of $A$ with corresponding eigenvector $\vec{v}_3 = \begin{bmatrix} 2 \\ 1 \\ -1 \end{bmatrix}$. Since $\{\vec{v}_1, \vec{v}_2, \vec{v}_3\}$ forms a basis for $\mathbb{R}^3$ of eigenvectors of $A$, we have that $A$ is diagonalized by
\[
P = \begin{bmatrix} 1 & 4 & 2 \\ 0 & 1 & 1 \\ 1 & 0 & -1 \end{bmatrix}
\]
to $D = \operatorname{diag}(1, 1, -1)$.
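Checking whether a vector is an eigenvector takes only one matrix–vector product, as in E1(a). A minimal NumPy sketch (the helper `eigen_check` is just an illustrative name):

```python
import numpy as np

# Matrix from E1
A = np.array([[5, -16, -4],
              [2, -7, -2],
              [-2, 8, 3]])

def eigen_check(A, v):
    """Return the eigenvalue if v is an eigenvector of A, else None."""
    w = A @ v
    # v is an eigenvector iff A v is a scalar multiple of v
    nz = v != 0
    ratios = w[nz] / v[nz]
    if np.allclose(w, ratios[0] * v):
        return ratios[0]
    return None

assert eigen_check(A, np.array([3, 1, 0])) is None   # (i): not an eigenvector
assert eigen_check(A, np.array([1, 0, 1])) == 1      # (ii): eigenvalue 1
assert eigen_check(A, np.array([4, 1, 0])) == 1      # (iii): eigenvalue 1
assert eigen_check(A, np.array([2, 1, -1])) == -1    # (iv): eigenvalue -1
```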
E2 We have
\[
C(\lambda) = \begin{vmatrix} 4-\lambda & 2 & 2 \\ -1 & 1-\lambda & -1 \\ 1 & 1 & 3-\lambda \end{vmatrix} = -(\lambda - 2)^2(\lambda - 4)
\]
The eigenvalues are $\lambda_1 = 2$ with algebraic multiplicity 2 and $\lambda_2 = 4$ with algebraic multiplicity 1.
For $\lambda_1 = 2$ we get
\[
A - 2I = \begin{bmatrix} 2 & 2 & 2 \\ -1 & -1 & -1 \\ 1 & 1 & 1 \end{bmatrix} \sim \begin{bmatrix} 1 & 1 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}
\]
Thus, a basis for the eigenspace of $\lambda_1$ is $\left\{ \begin{bmatrix} -1 \\ 1 \\ 0 \end{bmatrix}, \begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix} \right\}$. So, the geometric multiplicity is 2.
For $\lambda_2 = 4$ we get
\[
A - 4I = \begin{bmatrix} 0 & 2 & 2 \\ -1 & -3 & -1 \\ 1 & 1 & -1 \end{bmatrix} \sim \begin{bmatrix} 1 & 0 & -2 \\ 0 & 1 & 1 \\ 0 & 0 & 0 \end{bmatrix}
\]
A basis for the eigenspace of $\lambda_2$ is $\left\{ \begin{bmatrix} 2 \\ -1 \\ 1 \end{bmatrix} \right\}$. So, the geometric multiplicity is 1.
E3 We have
\[
C(\lambda) = \begin{vmatrix} 2-\lambda & 4 & -5 \\ 1 & 4-\lambda & -4 \\ 1 & 3 & -3-\lambda \end{vmatrix} = -(\lambda - 1)^3
\]
The only distinct eigenvalue is $\lambda_1 = 1$ with algebraic multiplicity 3.
For $\lambda_1 = 1$ we get
\[
A - 1I = \begin{bmatrix} 1 & 4 & -5 \\ 1 & 3 & -4 \\ 1 & 3 & -4 \end{bmatrix} \sim \begin{bmatrix} 1 & 0 & -1 \\ 0 & 1 & -1 \\ 0 & 0 & 0 \end{bmatrix}
\]
Thus, a basis for the eigenspace of $\lambda_1$ is $\left\{ \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} \right\}$. So, the geometric multiplicity is 1.
E4 We first need to find the eigenvalues of $A$ by factoring the characteristic polynomial. We have
\[
C(\lambda) = \begin{vmatrix} -1-\lambda & 4 \\ -2 & 5-\lambda \end{vmatrix} = (\lambda - 1)(\lambda - 3)
\]
Hence, the eigenvalues are $\lambda_1 = 1$ and $\lambda_2 = 3$. Since there are two distinct eigenvalues, the matrix is diagonalizable by Theorem 6.2.4.
For $\lambda_1 = 1$ we get
\[
A - 1I = \begin{bmatrix} -2 & 4 \\ -2 & 4 \end{bmatrix} \sim \begin{bmatrix} 1 & -2 \\ 0 & 0 \end{bmatrix}
\]
Thus, a basis for the eigenspace of $\lambda_1$ is $\left\{ \begin{bmatrix} 2 \\ 1 \end{bmatrix} \right\}$.
For $\lambda_2 = 3$ we get
\[
A - 3I = \begin{bmatrix} -4 & 4 \\ -2 & 2 \end{bmatrix} \sim \begin{bmatrix} 1 & -1 \\ 0 & 0 \end{bmatrix}
\]
Thus, a basis for the eigenspace of $\lambda_2$ is $\left\{ \begin{bmatrix} 1 \\ 1 \end{bmatrix} \right\}$.
We take the columns of $P$ to be a basis for $\mathbb{R}^2$ of eigenvectors of $A$. Hence, we take $P = \begin{bmatrix} 2 & 1 \\ 1 & 1 \end{bmatrix}$. The corresponding matrix $D$ is the diagonal matrix whose diagonal entries are the eigenvalues corresponding columnwise to the eigenvectors in $P$. Hence, $D = \operatorname{diag}(1, 3)$.
E5 We first need to find the eigenvalues of $A$ by factoring the characteristic polynomial. We have
\[
C(\lambda) = \begin{vmatrix} 4-\lambda & -1 \\ 4 & -\lambda \end{vmatrix} = (\lambda - 2)^2
\]
Hence, the only distinct eigenvalue is $\lambda_1 = 2$.
For $\lambda_1 = 2$ we get
\[
A - 2I = \begin{bmatrix} 2 & -1 \\ 4 & -2 \end{bmatrix} \sim \begin{bmatrix} 1 & -1/2 \\ 0 & 0 \end{bmatrix}
\]
Thus, a basis for the eigenspace of $\lambda_1$ is $\left\{ \begin{bmatrix} 1 \\ 2 \end{bmatrix} \right\}$.
Therefore, the geometric multiplicity of $\lambda_1$ is 1. But the algebraic multiplicity of $\lambda_1$ is 2, so $A$ is not diagonalizable by Theorem 6.2.3.
E6 We first need to find the eigenvalues of $A$ by factoring the characteristic polynomial. We have
\[
C(\lambda) = \begin{vmatrix} 5-\lambda & -4 & -2 \\ -4 & 5-\lambda & -2 \\ -2 & -2 & 8-\lambda \end{vmatrix} = -\lambda(\lambda - 9)^2
\]
Hence, the distinct eigenvalues are $\lambda_1 = 9$ and $\lambda_2 = 0$.
For $\lambda_1 = 9$, we get
\[
A - 9I = \begin{bmatrix} -4 & -4 & -2 \\ -4 & -4 & -2 \\ -2 & -2 & -1 \end{bmatrix} \sim \begin{bmatrix} 2 & 2 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}
\]
Thus, a basis for the eigenspace of $\lambda_1$ is $\left\{ \begin{bmatrix} -1 \\ 1 \\ 0 \end{bmatrix}, \begin{bmatrix} -1 \\ 0 \\ 2 \end{bmatrix} \right\}$.
For $\lambda_2 = 0$, we get
\[
A - 0I = \begin{bmatrix} 5 & -4 & -2 \\ -4 & 5 & -2 \\ -2 & -2 & 8 \end{bmatrix} \sim \begin{bmatrix} 1 & 0 & -2 \\ 0 & 1 & -2 \\ 0 & 0 & 0 \end{bmatrix}
\]
Thus, a basis for the eigenspace of $\lambda_2$ is $\left\{ \begin{bmatrix} 2 \\ 2 \\ 1 \end{bmatrix} \right\}$.
We take the columns of $P$ to be a basis for $\mathbb{R}^3$ of eigenvectors of $A$. Hence, we take $P = \begin{bmatrix} -1 & -1 & 2 \\ 1 & 0 & 2 \\ 0 & 2 & 1 \end{bmatrix}$. The corresponding matrix $D$ is the diagonal matrix whose diagonal entries are the eigenvalues corresponding columnwise to the eigenvectors in $P$. Hence, $D = \operatorname{diag}(9, 9, 0)$.
E7 We first need to find the eigenvalues of $A$ by factoring the characteristic polynomial. We have
\[
C(\lambda) = \begin{vmatrix} -3-\lambda & 1 & 0 \\ 13 & -7-\lambda & -8 \\ -11 & 5 & 4-\lambda \end{vmatrix} = -\lambda(\lambda + 4)(\lambda + 2)
\]
Hence, the eigenvalues are $\lambda_1 = 0$, $\lambda_2 = -4$, and $\lambda_3 = -2$. Since there are three distinct eigenvalues, the matrix is diagonalizable by Theorem 6.2.4.
For $\lambda_1 = 0$ we get
\[
A - 0I = \begin{bmatrix} -3 & 1 & 0 \\ 13 & -7 & -8 \\ -11 & 5 & 4 \end{bmatrix} \sim \begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & 3 \\ 0 & 0 & 0 \end{bmatrix}
\]
Thus, a basis for the eigenspace of $\lambda_1$ is $\left\{ \begin{bmatrix} -1 \\ -3 \\ 1 \end{bmatrix} \right\}$.
For $\lambda_2 = -4$ we get
\[
A - (-4)I = \begin{bmatrix} 1 & 1 & 0 \\ 13 & -3 & -8 \\ -11 & 5 & 8 \end{bmatrix} \sim \begin{bmatrix} 1 & 0 & -1/2 \\ 0 & 1 & 1/2 \\ 0 & 0 & 0 \end{bmatrix}
\]
Thus, a basis for the eigenspace of $\lambda_2$ is $\left\{ \begin{bmatrix} 1 \\ -1 \\ 2 \end{bmatrix} \right\}$.
For $\lambda_3 = -2$ we get
\[
A - (-2)I = \begin{bmatrix} -1 & 1 & 0 \\ 13 & -5 & -8 \\ -11 & 5 & 6 \end{bmatrix} \sim \begin{bmatrix} 1 & 0 & -1 \\ 0 & 1 & -1 \\ 0 & 0 & 0 \end{bmatrix}
\]
Thus, a basis for the eigenspace of $\lambda_3$ is $\left\{ \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} \right\}$.
We take the columns of $P$ to be a basis for $\mathbb{R}^3$ of eigenvectors of $A$. Hence, we take $P = \begin{bmatrix} -1 & 1 & 1 \\ -3 & -1 & 1 \\ 1 & 2 & 1 \end{bmatrix}$. The corresponding matrix $D$ is the diagonal matrix whose diagonal entries are the eigenvalues corresponding columnwise to the eigenvectors in $P$. Hence, $D = \operatorname{diag}(0, -4, -2)$.
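A diagonalization like the one in E7 can be verified without inverting $P$, since $P^{-1}AP = D$ is equivalent to $AP = PD$. A minimal NumPy sketch:

```python
import numpy as np

# Matrix, eigenvector matrix, and eigenvalue matrix from E7
A = np.array([[-3, 1, 0],
              [13, -7, -8],
              [-11, 5, 4]])
P = np.array([[-1, 1, 1],
              [-3, -1, 1],
              [1, 2, 1]])
D = np.diag([0, -4, -2])

# Column j of P is an eigenvector for the j-th diagonal entry of D,
# so A P = P D holds column by column
assert np.allclose(A @ P, P @ D)

# P must also be invertible for P^{-1} A P = D to make sense
assert abs(np.linalg.det(P)) > 1e-9
```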
E8 Let $A = \begin{bmatrix} 2 & 9 \\ -4 & 22 \end{bmatrix}$. If $\vec{x}_0 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}$, then $\vec{y}_0 = \begin{bmatrix} 1/\sqrt{2} \\ 1/\sqrt{2} \end{bmatrix} \approx \begin{bmatrix} 0.707 \\ 0.707 \end{bmatrix}$;
\[
\vec{x}_1 = A\vec{y}_0 \approx \begin{bmatrix} 7.778 \\ 12.728 \end{bmatrix}, \quad \vec{y}_1 \approx \begin{bmatrix} 0.521 \\ 0.853 \end{bmatrix}
\]
\[
\vec{x}_2 = A\vec{y}_1 \approx \begin{bmatrix} 8.722 \\ 16.686 \end{bmatrix}, \quad \vec{y}_2 \approx \begin{bmatrix} 0.463 \\ 0.886 \end{bmatrix}
\]
\[
\vec{x}_3 = A\vec{y}_2 \approx \begin{bmatrix} 8.902 \\ 17.644 \end{bmatrix}, \quad \vec{y}_3 \approx \begin{bmatrix} 0.450 \\ 0.893 \end{bmatrix}
\]
It appears that $\vec{y}_m \to \begin{bmatrix} 0.450 \\ 0.893 \end{bmatrix}$. We see that $\frac{0.450}{0.893} \approx \frac{1}{2}$, so it appears that $\vec{v}_1 = \begin{bmatrix} 1 \\ 2 \end{bmatrix}$ is an eigenvector of $A$. Indeed we find that $A\vec{v}_1 = 20\vec{v}_1$, so the dominant eigenvalue is $\lambda = 20$.
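The normalize-and-multiply iteration of E8 is exactly the power method, and it is short to implement. A minimal NumPy sketch (the function name and iteration count are illustrative choices, not from the text):

```python
import numpy as np

def power_method(A, x0, iters=50):
    """Normalized power iteration; returns (dominant eigenvalue, unit eigenvector)."""
    y = x0 / np.linalg.norm(x0)
    for _ in range(iters):
        x = A @ y                   # multiply by A ...
        y = x / np.linalg.norm(x)   # ... and renormalize
    lam = y @ A @ y                 # Rayleigh quotient estimate of the eigenvalue
    return lam, y

# Matrix and starting vector from E8
A = np.array([[2.0, 9.0],
              [-4.0, 22.0]])
lam, v = power_method(A, np.array([1.0, 1.0]))
# lam converges to 20 and v to [1, 2]/sqrt(5) ~ [0.447, 0.894], matching E8
```

Convergence is rapid here because the other eigenvalue is 4, so the error shrinks by roughly a factor of $4/20 = 0.2$ per iteration.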
E9 We have
\[
C(\lambda) = \begin{vmatrix} 5-\lambda & 6 \\ -2 & -2-\lambda \end{vmatrix} = \lambda^2 - 3\lambda + 2 = (\lambda - 1)(\lambda - 2)
\]
Thus, the eigenvalues are $\lambda_1 = 1$ and $\lambda_2 = 2$.
For $\lambda_1 = 1$ we have
\[
A - \lambda_1 I = \begin{bmatrix} 4 & 6 \\ -2 & -3 \end{bmatrix} \sim \begin{bmatrix} 1 & 3/2 \\ 0 & 0 \end{bmatrix}
\]
Hence, a basis for $E_{\lambda_1}$ is $\left\{ \begin{bmatrix} -3 \\ 2 \end{bmatrix} \right\}$.
For $\lambda_2 = 2$ we have
\[
A - \lambda_2 I = \begin{bmatrix} 3 & 6 \\ -2 & -4 \end{bmatrix} \sim \begin{bmatrix} 1 & 2 \\ 0 & 0 \end{bmatrix}
\]
Hence, a basis for $E_{\lambda_2}$ is $\left\{ \begin{bmatrix} -2 \\ 1 \end{bmatrix} \right\}$.
It follows that $A$ is diagonalized by $P = \begin{bmatrix} -3 & -2 \\ 2 & 1 \end{bmatrix}$ to $D = \begin{bmatrix} 1 & 0 \\ 0 & 2 \end{bmatrix}$. Thus, we have $A = PDP^{-1}$ and so
\[
A^{100} = P D^{100} P^{-1} = \begin{bmatrix} -3 & -2 \\ 2 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 0 & 2^{100} \end{bmatrix} \begin{bmatrix} 1 & 2 \\ -2 & -3 \end{bmatrix} = \begin{bmatrix} -3 + 4 \cdot 2^{100} & -6 + 6 \cdot 2^{100} \\ 2 - 2 \cdot 2^{100} & 4 - 3 \cdot 2^{100} \end{bmatrix}
\]
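Because $\det P = 1$ here, $P^{-1}$ is an integer matrix and $A^{100}$ can be computed exactly with Python's arbitrary-precision integers. A minimal sketch checking the closed form from E9:

```python
# Diagonalization data from E9: A = P diag(1, 2) P^{-1}
P    = [[-3, -2], [2, 1]]
Pinv = [[1, 2], [-2, -3]]   # exact, since det P = 1
d    = [1, 2**100]          # D^100 = diag(1^100, 2^100)

# A^100 = P D^100 P^{-1}, computed entrywise with exact integer arithmetic
A100 = [[sum(P[i][k] * d[k] * Pinv[k][j] for k in range(2))
         for j in range(2)] for i in range(2)]

# matches the closed form: A100[0][0] == -3 + 4 * 2**100, and so on
```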
E10 The condition $\det A = 0$ can be rewritten as $\det(A - 0I) = 0$, so it follows that 0 is an eigenvalue of $A$. Similarly, $-2$ and $3$ are eigenvalues, so $A$ has three distinct real eigenvalues. Hence, each eigenvalue has algebraic and geometric multiplicity 1.
(a) The solution space of $A\vec{x} = \vec{0}$ is one-dimensional, since it is the eigenspace corresponding to the eigenvalue 0.
(b) 2 cannot be an eigenvalue, because we already have three eigenvalues for the $3 \times 3$ matrix $A$. Hence, there are no non-zero vectors satisfying $A\vec{x} = 2\vec{x}$, so the solution space is $\{\vec{0}\}$, which is zero-dimensional.
(c) The range of the mapping determined by $A$ is the subspace spanned by the eigenvectors corresponding to the non-zero eigenvalues. Hence, the rank of $A$ is two.
Alternatively, since the solution space of $A\vec{x} = \vec{0}$ is one-dimensional, we could apply the Rank-Nullity Theorem to get
\[
\operatorname{rank}(A) = n - \operatorname{nullity}(A) = 3 - 1 = 2
\]
E11 Since the columns of $A$ sum to 1, it is a Markov matrix. For $\lambda = 1$,
\[
A - \lambda I = \begin{bmatrix} -0.1 & 0.1 & 0 \\ 0 & -0.2 & 0.1 \\ 0.1 & 0.1 & -0.1 \end{bmatrix} \sim \begin{bmatrix} 1 & 0 & -0.5 \\ 0 & 1 & -0.5 \\ 0 & 0 & 0 \end{bmatrix}
\]
Therefore, an eigenvector corresponding to $\lambda_1$ is $\begin{bmatrix} 1 \\ 1 \\ 2 \end{bmatrix}$. The components in the state vector must sum to 1, so the invariant state corresponding to $\lambda_1$ is
\[
\frac{1}{1 + 1 + 2}\begin{bmatrix} 1 \\ 1 \\ 2 \end{bmatrix} = \begin{bmatrix} 1/4 \\ 1/4 \\ 1/2 \end{bmatrix}
\]
which is invariant under the transformation with matrix $A$.
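The invariant state can be checked in exact rational arithmetic. The solution above only prints $A - I$, so reading $A$ off from it by adding $I$ back is an assumption; with that caveat:

```python
from fractions import Fraction as F

# A reconstructed by adding I to the matrix A - I shown in E11
# (an assumption, since the solution only prints A - I)
A = [[F(9, 10), F(1, 10), F(0)],
     [F(0),     F(8, 10), F(1, 10)],
     [F(1, 10), F(1, 10), F(9, 10)]]

# each column sums to 1, confirming A is a Markov matrix
col_sums = [sum(A[i][j] for i in range(3)) for j in range(3)]

# the claimed invariant state
s = [F(1, 4), F(1, 4), F(1, 2)]
As = [sum(A[i][j] * s[j] for j in range(3)) for i in range(3)]
# As == s exactly, so s is indeed invariant
```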
E12 The solution will be of the form $\begin{bmatrix} y \\ z \end{bmatrix} = c e^{\lambda t}\begin{bmatrix} a \\ b \end{bmatrix}$, so substitute this into the original system and use the fact that $\dfrac{d}{dt}\, c e^{\lambda t}\begin{bmatrix} a \\ b \end{bmatrix} = \lambda c e^{\lambda t}\begin{bmatrix} a \\ b \end{bmatrix}$ to get
\[
\lambda c e^{\lambda t}\begin{bmatrix} a \\ b \end{bmatrix} = \begin{bmatrix} 0.1 & 0.2 \\ 0.3 & 0.2 \end{bmatrix} c e^{\lambda t}\begin{bmatrix} a \\ b \end{bmatrix}
\]
Cancelling the common factor $c e^{\lambda t}$, we see that $\begin{bmatrix} a \\ b \end{bmatrix}$ is an eigenvector of $\begin{bmatrix} 0.1 & 0.2 \\ 0.3 & 0.2 \end{bmatrix}$ with eigenvalue $\lambda$. Thus, we need to find the eigenvalues of $A = \begin{bmatrix} 0.1 & 0.2 \\ 0.3 & 0.2 \end{bmatrix}$. We have
\[
C(\lambda) = \begin{vmatrix} 0.1-\lambda & 0.2 \\ 0.3 & 0.2-\lambda \end{vmatrix} = \lambda^2 - 0.3\lambda - 0.04 = (\lambda + 0.1)(\lambda - 0.4)
\]
Hence, the eigenvalues are $\lambda_1 = -0.1$ and $\lambda_2 = 0.4$. For $\lambda_1 = -0.1$,
\[
A - (-0.1)I = \begin{bmatrix} 0.2 & 0.2 \\ 0.3 & 0.3 \end{bmatrix} \sim \begin{bmatrix} 1 & 1 \\ 0 & 0 \end{bmatrix}
\]
Hence, an eigenvector corresponding to $\lambda_1$ is $\begin{bmatrix} -1 \\ 1 \end{bmatrix}$, so one solution to the system is $e^{-0.1t}\begin{bmatrix} -1 \\ 1 \end{bmatrix}$.
For $\lambda_2 = 0.4$,
\[
A - 0.4I = \begin{bmatrix} -0.3 & 0.2 \\ 0.3 & -0.2 \end{bmatrix} \sim \begin{bmatrix} 1 & -2/3 \\ 0 & 0 \end{bmatrix}
\]
Hence, an eigenvector corresponding to $\lambda_2$ is $\begin{bmatrix} 2 \\ 3 \end{bmatrix}$, so one solution to the system is $e^{0.4t}\begin{bmatrix} 2 \\ 3 \end{bmatrix}$.
Since the system is a linear homogeneous system, the general solution is an arbitrary linear combination of the two solutions:
\[
\begin{bmatrix} y \\ z \end{bmatrix} = a e^{-0.1t}\begin{bmatrix} -1 \\ 1 \end{bmatrix} + b e^{0.4t}\begin{bmatrix} 2 \\ 3 \end{bmatrix}, \qquad a, b \in \mathbb{R}
\]
E13 Since $A$ is invertible, 0 is not an eigenvalue of $A$ (see Section 6.2 Problem C8). Then, if $A\vec{x} = \lambda\vec{x}$, we get $\vec{x} = \lambda A^{-1}\vec{x}$, so $A^{-1}\vec{x} = \frac{1}{\lambda}\vec{x}$.

E14 Since $A$ is diagonalizable, there exists an invertible matrix $P$ such that
\[
P^{-1}AP = \operatorname{diag}(\lambda_1, \ldots, \lambda_1) = \lambda_1 I
\]
Multiplying on the left by $P$ and on the right by $P^{-1}$ gives
\[
A = P(\lambda_1 I)P^{-1} = \lambda_1 P P^{-1} = \lambda_1 I
\]

E15 Let $\lambda$ be the eigenvalue corresponding to $\vec{v}$; hence $A\vec{v} = \lambda\vec{v}$. Now observe that for any $t \neq 0$ we have
\[
A(t\vec{v}) = tA\vec{v} = t(\lambda\vec{v}) = \lambda(t\vec{v})
\]
Thus, $t\vec{v}$ is also an eigenvector of $A$.

E16 If $A$ is diagonalizable, then there exists an invertible matrix $P$ such that $P^{-1}AP = D$ where $D$ is diagonal. Since $A$ and $B$ are similar, there exists an invertible matrix $Q$ such that $Q^{-1}BQ = A$. Thus, we have
\[
D = P^{-1}(Q^{-1}BQ)P = (P^{-1}Q^{-1})B(QP) = (QP)^{-1}B(QP)
\]
Thus, by definition, $B$ is also diagonalizable.

E17 This is false. The matrix $A = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}$ is invertible, but not diagonalizable.

E18 This is false. The zero matrix is diagonal and hence diagonalizable, but is not invertible.

E19 This is true. It is Theorem 6.2.4.

E20 This is false. If $\vec{v} = \vec{0}$, then $\vec{v}$ is not an eigenvector.

E21 This is true. If the RREF of $A - \lambda I$ is $I$, then $(A - \lambda I)\vec{v} = \vec{0}$ has the unique solution $\vec{v} = \vec{0}$. Hence, by definition, $\lambda$ is not an eigenvalue.

E22 This is true. If $L(\vec{v}) = \lambda\vec{v}$ where $\vec{v} \neq \vec{0}$, then we have $\lambda\vec{v} = L(\vec{v}) = [L]\vec{v}$, as required.
Chapter 6 Further Problems
F1 We have
\[
C(\lambda) = \det \begin{bmatrix} a-\lambda & b \\ b & c-\lambda \end{bmatrix} = \lambda^2 - (a+c)\lambda + ac - b^2
\]
Thus, by the quadratic formula, the eigenvalues of $A$ are
\[
\lambda_+ = \frac{a + c + \sqrt{(a+c)^2 - 4(ac - b^2)}}{2} = \frac{a + c + \sqrt{(a-c)^2 + 4b^2}}{2}
\]
\[
\lambda_- = \frac{a + c - \sqrt{(a+c)^2 - 4(ac - b^2)}}{2} = \frac{a + c - \sqrt{(a-c)^2 + 4b^2}}{2}
\]
If $a = c$ and $b = 0$, then $A$ is already diagonal and thus can be diagonalized by $P = I$. Otherwise, $A$ has two distinct real eigenvalues. We have
\[
A - \lambda_+ I = \begin{bmatrix} \frac{a-c-\sqrt{(a-c)^2+4b^2}}{2} & b \\ b & \frac{c-a-\sqrt{(a-c)^2+4b^2}}{2} \end{bmatrix} \sim \begin{bmatrix} 1 & \frac{2b}{a-c-\sqrt{(a-c)^2+4b^2}} \\ 0 & 0 \end{bmatrix}
\]
Hence, an eigenvector for $\lambda_+$ is $\vec{v}_1 = \begin{bmatrix} \frac{-2b}{a-c-\sqrt{(a-c)^2+4b^2}} \\ 1 \end{bmatrix}$. Also,
\[
A - \lambda_- I = \begin{bmatrix} \frac{a-c+\sqrt{(a-c)^2+4b^2}}{2} & b \\ b & \frac{c-a+\sqrt{(a-c)^2+4b^2}}{2} \end{bmatrix} \sim \begin{bmatrix} 1 & \frac{2b}{a-c+\sqrt{(a-c)^2+4b^2}} \\ 0 & 0 \end{bmatrix}
\]
so an eigenvector for $\lambda_-$ is $\vec{v}_2 = \begin{bmatrix} \frac{-2b}{a-c+\sqrt{(a-c)^2+4b^2}} \\ 1 \end{bmatrix}$. Hence, $A$ is diagonalized by $P = \begin{bmatrix} \vec{v}_1 & \vec{v}_2 \end{bmatrix}$ to $D = \operatorname{diag}(\lambda_+, \lambda_-)$.
F2 (a) Suppose that $\vec{v}$ is any eigenvector of $A$ with eigenvalue $\lambda$. Since the algebraic multiplicity of $\lambda$ is 1, the geometric multiplicity is also 1, and hence the eigenspace of $\lambda$ is $\operatorname{Span}\{\vec{v}\}$. Since $AB = BA$,
\[
AB\vec{v} = BA\vec{v} = B(\lambda\vec{v}) = \lambda B\vec{v}
\]
Thus, $B\vec{v}$ is an eigenvector of $A$ with eigenvalue $\lambda$ (or is $\vec{0}$). Thus, $B\vec{v} \in \operatorname{Span}\{\vec{v}\}$, so $B\vec{v} = \mu\vec{v}$ for some $\mu$. Therefore, $\vec{v}$ is an eigenvector of $B$.
(b) Let $A = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$ and $B = \begin{bmatrix} 1 & 0 \\ 0 & 2 \end{bmatrix}$. Then $AB = BA$ and every non-zero vector in $\mathbb{R}^2$ is an eigenvector of $A$ with eigenvalue 1, but $\begin{bmatrix} 1 \\ 1 \end{bmatrix}$, for example, is not an eigenvector of $B$.
F3 We have
\[
B(AB - \lambda I) = BAB - \lambda B = (BA - \lambda I)B
\]
so
\[
\det\bigl(B(AB - \lambda I)\bigr) = \det\bigl((BA - \lambda I)B\bigr)
\]
But the determinant of a product is the product of the determinants, so
\[
\det B \, \det(AB - \lambda I) = \det(BA - \lambda I) \, \det B
\]
Since $\det B \neq 0$, we get $\det(AB - \lambda I) = \det(BA - \lambda I)$. Thus $AB$ and $BA$ have the same characteristic polynomial, hence the same eigenvalues.
F4 We will prove this by induction. If $k = 1$, then the result is trivial, since by definition of an eigenvector $\vec{v}_1 \neq \vec{0}$. Assume that the result is true for some $k \geq 1$. To show $\{\vec{v}_1, \ldots, \vec{v}_k, \vec{v}_{k+1}\}$ is linearly independent, we consider
\[
c_1\vec{v}_1 + \cdots + c_k\vec{v}_k + c_{k+1}\vec{v}_{k+1} = \vec{0} \qquad (1)
\]
Multiplying both sides of equation (1) by $A - \lambda_{k+1}I$ gives
\[
\vec{0} = c_1(\lambda_1 - \lambda_{k+1})\vec{v}_1 + \cdots + c_k(\lambda_k - \lambda_{k+1})\vec{v}_k + c_{k+1}(\lambda_{k+1} - \lambda_{k+1})\vec{v}_{k+1} = c_1(\lambda_1 - \lambda_{k+1})\vec{v}_1 + \cdots + c_k(\lambda_k - \lambda_{k+1})\vec{v}_k
\]
By our induction hypothesis, $\{\vec{v}_1, \ldots, \vec{v}_k\}$ is linearly independent and hence all the coefficients must be 0. But $\lambda_i \neq \lambda_{k+1}$ for all $1 \leq i \leq k$, so we must have $c_1 = \cdots = c_k = 0$. Thus, equation (1) becomes $c_{k+1}\vec{v}_{k+1} = \vec{0}$. But $\vec{v}_{k+1} \neq \vec{0}$ since it is an eigenvector, hence $c_{k+1} = 0$. Therefore, the set is linearly independent.
F5 Since the eigenvalues are distinct, the corresponding eigenvectors form a basis, so $\vec{x} = x_1\vec{v}_1 + \cdots + x_n\vec{v}_n$. Note that
\[
(A - \lambda_i I)(A - \lambda_j I) = A^2 - \lambda_i A - \lambda_j A + \lambda_i\lambda_j I = (A - \lambda_j I)(A - \lambda_i I)
\]
so these factors commute. Note also that $(A - \lambda_i I)\vec{v}_i = \vec{0}$ for every $i$. Hence,
\[
(A - \lambda_1 I)(A - \lambda_2 I)\cdots(A - \lambda_n I)\vec{x} = \vec{0}
\]
Since this is true for every $\vec{x} \in \mathbb{R}^n$,
\[
(A - \lambda_1 I)(A - \lambda_2 I)\cdots(A - \lambda_n I) = O_{n,n}
\]
The characteristic polynomial of $A$ is
\[
(-1)^n(\lambda - \lambda_1)(\lambda - \lambda_2)\cdots(\lambda - \lambda_n) = (-1)^n\lambda^n + c_{n-1}\lambda^{n-1} + \cdots + c_1\lambda + c_0
\]
Thus, if we define
\[
C(X) = (-1)^n(X - \lambda_1 I)(X - \lambda_2 I)\cdots(X - \lambda_n I) = (-1)^n X^n + c_{n-1}X^{n-1} + \cdots + c_1 X + c_0 I
\]
for any $n \times n$ matrix $X$, we get that
\[
(-1)^n A^n + c_{n-1}A^{n-1} + \cdots + c_1 A + c_0 I = O_{n,n}
\]
F6 By Problem F5, we have
\[
O_{n,n} = (-1)^n A^n + c_{n-1}A^{n-1} + \cdots + c_1 A + c_0 I
\]
\[
-c_0 I = A\bigl((-1)^n A^{n-1} + c_{n-1}A^{n-2} + \cdots + c_1 I\bigr)
\]
Since $A$ is invertible, $\det A \neq 0$ and thus, by Section 6.2 Problem C7, the product of the eigenvalues is non-zero, so $c_0 \neq 0$. Hence,
\[
A^{-1} = -\frac{1}{c_0}\bigl((-1)^n A^{n-1} + c_{n-1}A^{n-2} + \cdots + c_1 I\bigr)
\]
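The Cayley-Hamilton identity of F5 and the inverse formula of F6 can be checked numerically. A minimal NumPy sketch, using the 2×2 matrix from E9 as a test case; note `np.poly` returns the coefficients of the monic polynomial $\det(\lambda I - A)$, which differs from the $(-1)^n \det(A - \lambda I)$ convention above only by overall sign bookkeeping:

```python
import numpy as np

A = np.array([[5.0, 6.0],
              [-2.0, -2.0]])   # test matrix from E9; C(lambda) = lambda^2 - 3 lambda + 2

# coefficients of det(lambda*I - A), highest power first: here [1, -3, 2]
c = np.poly(A)
n = A.shape[0]

# Cayley-Hamilton: evaluating the characteristic polynomial at A gives the zero matrix
CA = sum(c[i] * np.linalg.matrix_power(A, n - i) for i in range(n + 1))
assert np.allclose(CA, 0)

# As in F6, the inverse falls out of the same identity:
# A^{-1} = -(1/c_0) * (A^{n-1} + c_{n-1} A^{n-2} + ... + c_1 I)
Ainv = -(1.0 / c[-1]) * sum(c[i] * np.linalg.matrix_power(A, n - 1 - i)
                            for i in range(n))
assert np.allclose(A @ Ainv, np.eye(n))
```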
Chapter 7 Solutions
Section 7.1
A Practice Problems
A1 The set is orthogonal since $\begin{bmatrix} 1 \\ 2 \end{bmatrix} \cdot \begin{bmatrix} 2 \\ -1 \end{bmatrix} = 1(2) + 2(-1) = 0$. The norm of each vector is
\[
\left\| \begin{bmatrix} 1 \\ 2 \end{bmatrix} \right\| = \sqrt{1^2 + 2^2} = \sqrt{5} \quad \text{and} \quad \left\| \begin{bmatrix} 2 \\ -1 \end{bmatrix} \right\| = \sqrt{2^2 + (-1)^2} = \sqrt{5}
\]
The orthonormal set is $\left\{ \begin{bmatrix} 1/\sqrt{5} \\ 2/\sqrt{5} \end{bmatrix}, \begin{bmatrix} 2/\sqrt{5} \\ -1/\sqrt{5} \end{bmatrix} \right\}$.

A2 The set is not orthogonal since $\begin{bmatrix} 1 \\ -3 \end{bmatrix} \cdot \begin{bmatrix} 1 \\ 3 \end{bmatrix} = 1(1) + (-3)(3) = -8$.
A3 Calculating the dot products of each pair of vectors, we find
\[
\begin{bmatrix} 4 \\ -1 \\ 1 \end{bmatrix} \cdot \begin{bmatrix} -1 \\ 0 \\ 4 \end{bmatrix} = 4(-1) + (-1)(0) + 1(4) = 0
\]
\[
\begin{bmatrix} 4 \\ -1 \\ 1 \end{bmatrix} \cdot \begin{bmatrix} 4 \\ 17 \\ 1 \end{bmatrix} = 4(4) + (-1)(17) + 1(1) = 0
\]
\[
\begin{bmatrix} -1 \\ 0 \\ 4 \end{bmatrix} \cdot \begin{bmatrix} 4 \\ 17 \\ 1 \end{bmatrix} = (-1)(4) + 0(17) + 4(1) = 0
\]
Hence, the set is orthogonal. The norm of each vector is
\[
\left\| \begin{bmatrix} 4 \\ -1 \\ 1 \end{bmatrix} \right\| = \sqrt{4^2 + (-1)^2 + 1^2} = \sqrt{18}, \quad
\left\| \begin{bmatrix} -1 \\ 0 \\ 4 \end{bmatrix} \right\| = \sqrt{(-1)^2 + 0^2 + 4^2} = \sqrt{17}, \quad
\left\| \begin{bmatrix} 4 \\ 17 \\ 1 \end{bmatrix} \right\| = \sqrt{4^2 + 17^2 + 1^2} = \sqrt{306}
\]
The orthonormal set is $\left\{ \begin{bmatrix} 4/\sqrt{18} \\ -1/\sqrt{18} \\ 1/\sqrt{18} \end{bmatrix}, \begin{bmatrix} -1/\sqrt{17} \\ 0 \\ 4/\sqrt{17} \end{bmatrix}, \begin{bmatrix} 4/\sqrt{306} \\ 17/\sqrt{306} \\ 1/\sqrt{306} \end{bmatrix} \right\}$.
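An orthogonality check plus normalization, as in A3, is mechanical. A minimal NumPy sketch:

```python
import numpy as np

# The set from A3
vs = np.array([[4.0, -1.0, 1.0],
               [-1.0, 0.0, 4.0],
               [4.0, 17.0, 1.0]])

# all pairwise dot products vanish, so the set is orthogonal
for i in range(3):
    for j in range(i + 1, 3):
        assert vs[i] @ vs[j] == 0

# normalizing each vector yields an orthonormal set; placing them as
# columns of Q, this is equivalent to Q^T Q = I
Q = np.column_stack([v / np.linalg.norm(v) for v in vs])
assert np.allclose(Q.T @ Q, np.eye(3))
```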
A4 Observe that $\begin{bmatrix} 1 \\ -1 \\ 1 \end{bmatrix} \cdot \begin{bmatrix} 2 \\ -1 \\ 1 \end{bmatrix} = 2(1) + (-1)(-1) + 1(1) = 4 \neq 0$. These two vectors are not orthogonal, so the set is not orthogonal.
A5 Calculating the dot products of each pair of vectors, we find
\[
\begin{bmatrix} 1 \\ 3 \\ -1 \end{bmatrix} \cdot \begin{bmatrix} 2 \\ 0 \\ 2 \end{bmatrix} = 1(2) + 3(0) + (-1)(2) = 0
\]
\[
\begin{bmatrix} 1 \\ 3 \\ -1 \end{bmatrix} \cdot \begin{bmatrix} 3 \\ -2 \\ -3 \end{bmatrix} = 1(3) + 3(-2) + (-1)(-3) = 0
\]
\[
\begin{bmatrix} 2 \\ 0 \\ 2 \end{bmatrix} \cdot \begin{bmatrix} 3 \\ -2 \\ -3 \end{bmatrix} = 2(3) + 0(-2) + 2(-3) = 0
\]
Hence, the set is orthogonal. The norm of each vector is
\[
\left\| \begin{bmatrix} 1 \\ 3 \\ -1 \end{bmatrix} \right\| = \sqrt{1^2 + 3^2 + (-1)^2} = \sqrt{11}, \quad
\left\| \begin{bmatrix} 2 \\ 0 \\ 2 \end{bmatrix} \right\| = \sqrt{2^2 + 0^2 + 2^2} = \sqrt{8} = 2\sqrt{2}, \quad
\left\| \begin{bmatrix} 3 \\ -2 \\ -3 \end{bmatrix} \right\| = \sqrt{3^2 + (-2)^2 + (-3)^2} = \sqrt{22}
\]
The orthonormal set is $\left\{ \begin{bmatrix} 1/\sqrt{11} \\ 3/\sqrt{11} \\ -1/\sqrt{11} \end{bmatrix}, \begin{bmatrix} 1/\sqrt{2} \\ 0 \\ 1/\sqrt{2} \end{bmatrix}, \begin{bmatrix} 3/\sqrt{22} \\ -2/\sqrt{22} \\ -3/\sqrt{22} \end{bmatrix} \right\}$.
A6 Calculating the dot products of each pair of vectors, we find
\[
\begin{bmatrix} 1 \\ -2 \\ 2 \end{bmatrix} \cdot \begin{bmatrix} -2 \\ 1 \\ 2 \end{bmatrix} = 1(-2) + (-2)(1) + 2(2) = 0
\]
\[
\begin{bmatrix} 1 \\ -2 \\ 2 \end{bmatrix} \cdot \begin{bmatrix} 2 \\ 2 \\ 1 \end{bmatrix} = 1(2) + (-2)(2) + 2(1) = 0
\]
\[
\begin{bmatrix} -2 \\ 1 \\ 2 \end{bmatrix} \cdot \begin{bmatrix} 2 \\ 2 \\ 1 \end{bmatrix} = (-2)(2) + 1(2) + 2(1) = 0
\]
Hence, the set is orthogonal. The norm of each vector is
\[
\left\| \begin{bmatrix} 1 \\ -2 \\ 2 \end{bmatrix} \right\| = \sqrt{1^2 + (-2)^2 + 2^2} = \sqrt{9} = 3, \quad
\left\| \begin{bmatrix} -2 \\ 1 \\ 2 \end{bmatrix} \right\| = \sqrt{(-2)^2 + 1^2 + 2^2} = \sqrt{9} = 3, \quad
\left\| \begin{bmatrix} 2 \\ 2 \\ 1 \end{bmatrix} \right\| = \sqrt{2^2 + 2^2 + 1^2} = \sqrt{9} = 3
\]
The orthonormal set is $\left\{ \begin{bmatrix} 1/3 \\ -2/3 \\ 2/3 \end{bmatrix}, \begin{bmatrix} -2/3 \\ 1/3 \\ 2/3 \end{bmatrix}, \begin{bmatrix} 2/3 \\ 2/3 \\ 1/3 \end{bmatrix} \right\}$.
A7 Calculating the dot products of each pair of vectors, we find
\[
\begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \end{bmatrix} \cdot \begin{bmatrix} -2 \\ 0 \\ 1 \\ 1 \end{bmatrix} = 1(-2) + 1(0) + 1(1) + 1(1) = 0
\]
\[
\begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \end{bmatrix} \cdot \begin{bmatrix} 0 \\ 0 \\ 1 \\ -1 \end{bmatrix} = 1(0) + 1(0) + 1(1) + 1(-1) = 0
\]
\[
\begin{bmatrix} -2 \\ 0 \\ 1 \\ 1 \end{bmatrix} \cdot \begin{bmatrix} 0 \\ 0 \\ 1 \\ -1 \end{bmatrix} = (-2)(0) + 0(0) + 1(1) + 1(-1) = 0
\]
Hence, the set is orthogonal. The norm of each vector is
\[
\left\| \begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \end{bmatrix} \right\| = \sqrt{1^2 + 1^2 + 1^2 + 1^2} = \sqrt{4} = 2, \quad
\left\| \begin{bmatrix} -2 \\ 0 \\ 1 \\ 1 \end{bmatrix} \right\| = \sqrt{(-2)^2 + 0^2 + 1^2 + 1^2} = \sqrt{6}, \quad
\left\| \begin{bmatrix} 0 \\ 0 \\ 1 \\ -1 \end{bmatrix} \right\| = \sqrt{0^2 + 0^2 + 1^2 + (-1)^2} = \sqrt{2}
\]
The orthonormal set is $\left\{ \begin{bmatrix} 1/2 \\ 1/2 \\ 1/2 \\ 1/2 \end{bmatrix}, \begin{bmatrix} -2/\sqrt{6} \\ 0 \\ 1/\sqrt{6} \\ 1/\sqrt{6} \end{bmatrix}, \begin{bmatrix} 0 \\ 0 \\ 1/\sqrt{2} \\ -1/\sqrt{2} \end{bmatrix} \right\}$.

A8 Observe that $\begin{bmatrix} 1 \\ 0 \\ 1 \\ 1 \end{bmatrix} \cdot \begin{bmatrix} -2 \\ 1 \\ 1 \\ 0 \end{bmatrix} = 1(-2) + 0(1) + 1(1) + 1(0) = -1 \neq 0$. These two vectors are not orthogonal, so the set is not orthogonal.
A9 Let us denote the vectors of the basis B by !v 1 = −2, !v 2 = 2, !v 3 =  1.
 
 
 
2
1
2
(a) Using Theorem 7.1.2, we get
!v 1 · w
!
1(4) + (−2)(3) + 2(5)
=
=
2
9
#!v 1 #
!v 2 · w
!
2(4) + 2(3) + 1(5) 19
c2 =
=
=
2
9
9
#!v 2 #
!v 3 · w
!
(−2)(4) + 1(3) + 2(5)
c3 =
=
=
2
9
#!v 3 #
c1 =
! = 89 !v 1 +
So, w
19
v2
9!
8
9
5
9
+ 59 !v 3 .
(b) Using Theorem 7.1.2, we get
!v 1 · !x
1(1) + (−2)(0) + 2(0)
=
=
2
9
#!v 1 #
!v 2 · !x
2(1) + 2(0) + 1(0) 2
c2 =
=
=
2
9
9
#!v 2 #
!v 3 · !x
(−2)(1) + 1(0) + 2(0)
c3 =
=
=
9
#!v 3 #2
c1 =
1
9
−2
9
So, !x = 19 !v 1 + 29 !v 2 − 29 !v 3 .
(c) Using Theorem 7.1.2, we get
!v 1 · !y
1(−2) + (−2)(2) + 2(−1) −8
=
=
2
9
9
#!v 1 #
!v 2 · !y
2(−2) + 2(2) + 1(−1) −1
c2 =
=
=
9
9
#!v 2 #2
!v 3 · !y
(−2)(−2) + 1(2) + 2(−1) 4
c3 =
=
=
9
9
#!v 3 #2
c1 =
So, !y = − 89 !v 1 − 19 !v 2 + 49 !v 3 .


 


 1/3
2/3
−2/3


 


A10 Let us denote the vectors of the basis B by !v 1 = −2/3, !v 2 = 2/3, !v 3 =  1/3. Observe that


 


2/3
1/3
2/3
#!v 1 # = #!v 2 # = #!v 3 # = 1.
Copyright © 2020 Pearson Canada Inc.
5
(a) Using Theorem 7.1.2, we get
1
−2
2
8
(4) +
(3) + (5) =
3
3
3
3
2
2
1
19
! = (4) + (3) + (5) =
c2 = !v 2 · w
3
3
3
3
−2
1
2
5
!=
c3 = !v 3 · w
(4) + (3) + (5) =
3
3
3
3
!=
c1 = !v 1 · w
! = 83 !v 1 +
So, w
19
v2
3!
+ 53 !v 3 .
(b) Using Theorem 7.1.2, we get
1
−2
2
(3) +
(−7) + (2) = 7
3
3
3
2
2
1
c2 = !v 2 · !x = (3) + (−7) + (2) = −2
3
3
3
−2
1
2
c3 = !v 3 · !x =
(3) + (−7) + (2) = −3
3
3
3
c1 = !v 1 · !x =
So, !x = 7!v 1 − 2!v 2 − 3!v 3 .
(c) Using Theorem 7.1.2, we get
c1 = v1 · y = (1/3)(2) + (−2/3)(−4) + (2/3)(6) = 22/3
c2 = v2 · y = (2/3)(2) + (2/3)(−4) + (1/3)(6) = 2/3
c3 = v3 · y = (−2/3)(2) + (1/3)(−4) + (2/3)(6) = 4/3
So, y = (22/3)v1 + (2/3)v2 + (4/3)v3.
A11 Let us denote the vectors of the basis B by v1 = (1, 1, 1, 1), v2 = (1, −1, 1, −1), v3 = (−1, 0, 1, 0), v4 = (0, 1, 0, −1).
(a) Using Theorem 7.1.2, we get
c1 = (v1 · x)/‖v1‖² = (1(2) + 1(4) + 1(−3) + 1(5))/4 = 2
c2 = (v2 · x)/‖v2‖² = (1(2) + (−1)(4) + 1(−3) + (−1)(5))/4 = −5/2
c3 = (v3 · x)/‖v3‖² = ((−1)(2) + 0(4) + 1(−3) + 0(5))/2 = −5/2
c4 = (v4 · x)/‖v4‖² = (0(2) + 1(4) + 0(−3) + (−1)(5))/2 = −1/2
So, x = 2v1 − (5/2)v2 − (5/2)v3 − (1/2)v4.
(b) Using Theorem 7.1.2, we get
c1 = (1(−4) + 1(1) + 1(3) + 1(−5))/4 = −5/4
c2 = (1(−4) + (−1)(1) + 1(3) + (−1)(−5))/4 = 3/4
c3 = ((−1)(−4) + 0(1) + 1(3) + 0(−5))/2 = 7/2
c4 = (0(−4) + 1(1) + 0(3) + (−1)(−5))/2 = 3
So, y = −(5/4)v1 + (3/4)v2 + (7/2)v3 + 3v4.
(c) Using Theorem 7.1.2, we get
c1 = (1(3) + 1(1) + 1(0) + 1(1))/4 = 5/4
c2 = (1(3) + (−1)(1) + 1(0) + (−1)(1))/4 = 1/4
c3 = ((−1)(3) + 0(1) + 1(0) + 0(1))/2 = −3/2
c4 = (0(3) + 1(1) + 0(0) + (−1)(1))/2 = 0
So, w = (5/4)v1 + (1/4)v2 − (3/2)v3 + 0v4.
A12 AᵀA = [5/13 12/13; 12/13 −5/13][5/13 12/13; 12/13 −5/13] = [1 0; 0 1]. So, it is orthogonal.
A13 AᵀA = [3/5 −4/5; 4/5 −3/5][3/5 4/5; −4/5 −3/5] = [1 24/25; 24/25 1]. Therefore, it is not orthogonal since the columns of the matrix are not orthogonal.
A14 AᵀA = [2/5 1/5; −1/5 2/5][2/5 −1/5; 1/5 2/5] = [1/5 0; 0 1/5]. It is not orthogonal as the columns are not unit vectors.
A15 AᵀA = [1/√2 1/√2; −1/√2 1/√2][1/√2 −1/√2; 1/√2 1/√2] = [1 0; 0 1]. So, it is orthogonal.
A16 AᵀA = [1 0 4/9; 0 1 −4/9; 4/9 −4/9 1]. Therefore, it is not orthogonal since the third column is not orthogonal to the first or second column.
A17 AᵀA = [1 0 0; 0 1 0; 0 0 1]. Hence, it is orthogonal.
A18 Since B is orthonormal, the matrix
P = [1/√6 1/√3 1/√2; −2/√6 1/√3 0; 1/√6 1/√3 −1/√2]
is orthogonal. Hence, the rows of P are orthonormal. Thus, we can pick another orthonormal basis to be {(1/√6, −2/√6, 1/√6), (1/√3, 1/√3, 1/√3), (1/√2, 0, −1/√2)}.
A19 By Theorem 7.1.1, B = {v1, . . . , vn} is linearly independent. Thus, it is a basis for Rⁿ by Theorem 2.3.6.
B Homework Problems
B1 The set is orthogonal. {(2/√13, 3/√13), (3/√13, −2/√13)}
B2 The set is orthogonal. {(2/√5, −1/√5), (1/√5, 2/√5)}
B3 The set is orthogonal. {(1/√6, 2/√6, −1/√6), (−1/√3, 1/√3, 1/√3), (1/√2, 0, 1/√2)}
B4 The set is not orthogonal.
B5 The set is not orthogonal.
B6 The set is orthogonal. {(−3/√14, 0, 2/√14, 1/√14), (0, 1/√6, −1/√6, 2/√6), (1/2, −1/2, 1/2, 1/2)}
B7 (a) [w]B = (1/10, −3/10)  (b) [x]B = (23/10, 1/10)  (c) [y]B = (9/10, 13/10)
B8 (a) [w]B = (−1/25, 7/25)  (b) [x]B = (−18/25, 1/25)  (c) [y]B = (−6/25, 17/25)
B9 (a) [w]B = (−1/√5, 2/√5)  (b) [x]B = (9/√5, −3/√5)  (c) [y]B = (−1/√5, −8/√5)
B10 (a) [w]B = (5/6, −4/11, 5/66)  (b) [x]B = (−1/6, −4/11, 5/66)  (c) [y]B = (−2/3, −7/11, −8/33)
B11 (a) [x]B = (3, −1, −1)  (b) [y]B = (2, −2, 1)  (c) [w]B = (33/7, 0, −1/7)
B12 The matrix is not orthogonal since the columns are not orthogonal to each other.
B13 The matrix is orthogonal.
B14 The matrix is orthogonal.
B15 The matrix is not orthogonal since the columns are not unit vectors.
B16 The matrix is orthogonal.
C Conceptual Problems
C1 Suppose that P is orthogonal, so that PᵀP = I. Then for every x ∈ Rⁿ we have
‖Px‖² = (Px) · (Px) = xᵀPᵀPx = xᵀx = ‖x‖²
So ‖Px‖ = ‖x‖. (The converse is also true; see Problem F2(b).)
C2 (a) We have 1 = det I = det(PᵀP) = det Pᵀ det P = (det P)². Hence, det P = ±1.
(b) Let A = [2 0; 2 1/2]. Then det A = 1, but A is certainly not orthogonal.
C3 Suppose that λ is a real eigenvalue of the orthogonal matrix P with eigenvector v. Then Pv = λv. Thus, by part (a), ‖v‖ = ‖Pv‖ = |λ|‖v‖. So |λ| = 1, and λ = ±1.
C4 Let P and Q be orthogonal. Then PᵀP = I and QᵀQ = I. Hence,
(PQ)ᵀ(PQ) = QᵀPᵀPQ = QᵀIQ = QᵀQ = I
Thus, PQ is orthogonal.
C5 Let v1ᵀ, . . . , vnᵀ denote the rows of P. Then, the columns of Pᵀ are v1, . . . , vn. By the usual rule for matrix multiplication,
(PPᵀ)ij = viᵀvj = vi · vj
Hence, PPᵀ = I if and only if vi · vi = 1 and vi · vj = 0 for all i ≠ j. But this is true if and only if the rows of P form an orthonormal set.
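C1's claim that multiplication by an orthogonal matrix preserves length can be spot-checked numerically. A short sketch (ours, using the rotation-type matrix from A15 as an example):

```python
import math

# The orthogonal matrix from A15; any orthogonal matrix would do.
P = [[1 / math.sqrt(2), -1 / math.sqrt(2)],
     [1 / math.sqrt(2),  1 / math.sqrt(2)]]
x = [3.0, -4.0]

# Compute Px and compare Euclidean norms.
Px = [sum(P[i][j] * x[j] for j in range(2)) for i in range(2)]
norm = lambda v: math.sqrt(sum(t * t for t in v))
```

Here ‖x‖ = 5 and ‖Px‖ agrees with it up to floating-point roundoff, as C1 predicts.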
Section 7.2
A Practice Problems
A1 We have
projS(x) = (v1 · x)/‖v1‖² v1 + (v2 · x)/‖v2‖² v2 = (4/13)(2, 3) + (7/13)(−3, 2) = (−1, 2)
perpS(x) = x − projS(x) = (−1, 2) − (−1, 2) = (0, 0)
A2 We have
projS(x) = (v1 · x)/‖v1‖² v1 + (v2 · x)/‖v2‖² v2 = (11/2)(1, 0, 1) + 9(0, 1, 0) = (11/2, 9, 11/2)
perpS(x) = x − projS(x) = (6, 9, 5) − (11/2, 9, 11/2) = (1/2, 0, −1/2)
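The projection formula used throughout this section only needs dot products once an orthogonal spanning set is in hand. A Python sketch (our illustration; proj and perp are our names) reproducing A2:

```python
from fractions import Fraction as F

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def proj(x, basis):
    # basis must be an orthogonal set; each term is ((v.x)/||v||^2) v
    out = [F(0)] * len(x)
    for v in basis:
        c = F(dot(v, x), dot(v, v))
        out = [o + c * vi for o, vi in zip(out, v)]
    return out

# A2: S = Span{(1,0,1), (0,1,0)} and x = (6, 9, 5)
x = [6, 9, 5]
p = proj(x, [[1, 0, 1], [0, 1, 0]])
perp = [a - b for a, b in zip(x, p)]
```

This recovers projS(x) = (11/2, 9, 11/2) and perpS(x) = (1/2, 0, −1/2).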
A3 We have
projS(x) = (v1 · x)/‖v1‖² v1 + (v2 · x)/‖v2‖² v2 = (11/10)(1, 0, 3) + (−3/10)(−3, 0, 1) = (2, 0, 3)
perpS(x) = x − projS(x) = (2, 5, 3) − (2, 0, 3) = (0, 5, 0)
A4 We have
projS(x) = (v1 · x)/‖v1‖² v1 + (v2 · x)/‖v2‖² v2 = (0/6)(1, −2, 1) + (12/3)(1, 1, 1) = (4, 4, 4)
perpS(x) = x − projS(x) = (7, 4, 1) − (4, 4, 4) = (3, 0, −3)
A5 We have
projS(x) = (v1 · x)/‖v1‖² v1 + (v2 · x)/‖v2‖² v2 + (v3 · x)/‖v3‖² v3 = (9/6)(1, 1, 2) + (−1/2)(1, −1, 0) + (0/3)(1, 1, −1) = (1, 2, 3)
perpS(x) = x − projS(x) = (0, 0, 0)
A6 We have
projS(x) = (v1 · x)/‖v1‖² v1 + (v2 · x)/‖v2‖² v2 = (11/14)(1, 3, 2) + (−9/6)(1, 1, −2) = (−5/7, 6/7, 32/7)
perpS(x) = x − projS(x) = (1, 0, 5) − (−5/7, 6/7, 32/7) = (12/7, −6/7, 3/7)
A7 We have
projS(x) = (v1 · x)/‖v1‖² v1 + (v2 · x)/‖v2‖² v2 + (v3 · x)/‖v3‖² v3 = (7/2)(1, 0, 1, 0) + (9/2)(0, 1, 0, 1) + (−3/2)(1, 0, −1, 0) = (2, 9/2, 5, 9/2)
perpS(x) = x − projS(x) = (2, 3, 5, 6) − (2, 9/2, 5, 9/2) = (0, −3/2, 0, 3/2)
A8 We have
projS(x) = (v1 · x)/‖v1‖² v1 + (v2 · x)/‖v2‖² v2 = 0v1 + (25/10)(1, 2, 1, 2) = (5/2, 5, 5/2, 5)
perpS(x) = x − projS(x) = (2, 3, 5, 6) − (5/2, 5, 5/2, 5) = (−1/2, −2, 5/2, 1)
A9 We need to find all vectors (v1, v2, v3) ∈ R³ such that
0 = (v1, v2, v3) · (1, 2, −1) = v1 + 2v2 − v3
We find the general solution of this homogeneous system is
(v1, v2, v3) = s(−2, 1, 0) + t(1, 0, 1), s, t ∈ R
Hence, a basis for S⊥ is {(1, 0, 1), (−2, 1, 0)}.
A10 We need to find all vectors (v1, v2, v3) ∈ R³ such that
0 = (v1, v2, v3) · (1, 1, 1) = v1 + v2 + v3
0 = (v1, v2, v3) · (−1, 1, 3) = −v1 + v2 + 3v3
Row reducing the coefficient matrix of the corresponding homogeneous system gives
[1 1 1; −1 1 3] ~ [1 0 −1; 0 1 2]
Thus, the general solution is (v1, v2, v3) = t(1, −2, 1), t ∈ R.
Hence, a basis for S⊥ is {(1, −2, 1)}.
A11 We need to find all vectors (v1, v2, v3) ∈ R³ such that
0 = (v1, v2, v3) · (1, 0, 1) = v1 + v3
0 = (v1, v2, v3) · (2, −1, 1) = 2v1 − v2 + v3
Row reducing the coefficient matrix of the corresponding homogeneous system gives
[1 0 1; 2 −1 1] ~ [1 0 1; 0 1 1]
Thus, the general solution is (v1, v2, v3) = s(−1, −1, 1), s ∈ R.
Hence, a basis for S⊥ is {(−1, −1, 1)}.
A12 We need to find all vectors (v1, v2, v3, v4) ∈ R⁴ such that
0 = (v1, v2, v3, v4) · (2, 1, −1, 0) = 2v1 + v2 − v3
0 = (v1, v2, v3, v4) · (1, 2, 1, 1) = v1 + 2v2 + v3 + v4
0 = (v1, v2, v3, v4) · (0, −1, 3, 1) = −v2 + 3v3 + v4
Row reducing the coefficient matrix of the corresponding homogeneous system gives
[2 1 −1 0; 1 2 1 1; 0 −1 3 1] ~ [1 0 0 1/12; 0 1 0 1/4; 0 0 1 5/12]
Thus, the general solution is (v1, v2, v3, v4) = s(−1, −3, −5, 12), s ∈ R.
Hence, a basis for S⊥ is {(−1, −3, −5, 12)}.
A13 We need to find all vectors (v1, v2, v3, v4) ∈ R⁴ such that
0 = (v1, v2, v3, v4) · (1, 1, 1, 2) = v1 + v2 + v3 + 2v4
0 = (v1, v2, v3, v4) · (2, 1, 2, 1) = 2v1 + v2 + 2v3 + v4
0 = (v1, v2, v3, v4) · (−1, 1, −1, 4) = −v1 + v2 − v3 + 4v4
Row reducing the coefficient matrix of the corresponding homogeneous system gives
[1 1 1 2; 2 1 2 1; −1 1 −1 4] ~ [1 0 1 −1; 0 1 0 3; 0 0 0 0]
Thus, the general solution is
(v1, v2, v3, v4) = s(−1, 0, 1, 0) + t(1, −3, 0, 1), s, t ∈ R
Hence, a basis for S⊥ is {(−1, 0, 1, 0), (1, −3, 0, 1)}.
A14 Denote the given basis by B = {w1, w2} = {(1, 0, 0), (1, 1, 0)}.
First Step: Let v1 = w1 and S1 = Span{v1}.
Second Step: Let
v2 = perpS1(w2) = (1, 1, 0) − (1/1)(1, 0, 0) = (0, 1, 0)
Hence, an orthogonal basis for the subspace spanned by B is {(1, 0, 0), (0, 1, 0)}.



   

1 , 0
! 2} = 
A15 Denote the given basis by B = {w1, w2} = {(1, 1, 0), (1, 0, 0)}.
First Step: Let v1 = w1 and S1 = Span{v1}.
Second Step: We have
perpS1(w2) = (1, 0, 0) − (1/2)(1, 1, 0) = (1/2, −1/2, 0)
Hence, an orthogonal basis for the subspace spanned by B is {(1, 1, 0), (1/2, −1/2, 0)}.



   

3 ,  2
! 2} = 
A16 Denote the given basis by B = {w1, w2} = {(1, 3, 2), (1, 2, −1)}.
First Step: Let v1 = w1 and S1 = Span{v1}.
Second Step: We have
perpS1(w2) = (1, 2, −1) − (5/14)(1, 3, 2) = (9/14, 13/14, −12/7)
Hence, an orthogonal basis for the subspace spanned by B is {(1, 3, 2), (9/14, 13/14, −12/7)}.


     

1 , 1 , 1
! 2, w
! 3} = 
A17 Denote the given basis by B = {w1, w2, w3} = {(2, 1, 2), (3, 1, 1), (1, 1, 1)}.
First Step: Let v1 = w1 and S1 = Span{v1}.
Second Step: Let
v2 = perpS1(w2) = (3, 1, 1) − (9/9)(2, 1, 2) = (1, 0, −1)
and define S2 = Span{v1, v2}.
Third Step: We have
perpS2(w3) = (1, 1, 1) − (5/9)(2, 1, 2) − (0/2)(1, 0, −1) = (−1/9, 4/9, −1/9)
We take v3 = (−1, 4, −1).
Hence, an orthogonal basis for the subspace spanned by B is {(2, 1, 2), (1, 0, −1), (−1, 4, −1)}.
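The three-step procedure used in A14–A17 is the Gram-Schmidt Procedure: subtract from each new vector its projections onto the vectors already kept. A compact Python sketch (our illustration; the function name and exact-arithmetic choice are ours) reproducing A17:

```python
from fractions import Fraction as F

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(vectors):
    basis = []
    for w in vectors:
        w = [F(t) for t in w]
        for v in basis:
            c = F(dot(v, w)) / F(dot(v, v))   # projection coefficient (v.w)/||v||^2
            w = [wi - c * vi for wi, vi in zip(w, v)]
        if any(w):                            # a zero result means w was already in the span
            basis.append(w)
    return basis

# A17: B = {(2,1,2), (3,1,1), (1,1,1)}
ortho = gram_schmidt([[2, 1, 2], [3, 1, 1], [1, 1, 1]])
```

The third output vector is (−1/9, 4/9, −1/9); the written solution rescales it to (−1, 4, −1), which is always allowed since scaling preserves orthogonality.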



     

1 ,  3 ,  1
! 2, w
! 3} = 
A18 Denote the given basis by B = {w1, w2, w3} = {(4, 1, 2), (2, 3, −4), (1, 1, −1)}.
First Step: Let v1 = w1 and S1 = Span{v1}.
Second Step: We have
perpS1(w2) = (2, 3, −4) − (3/21)(4, 1, 2) = (10/7, 20/7, −30/7)
So, we take v2 = (1, 2, −3) and define S2 = Span{v1, v2}.
Third Step: We have
perpS2(w3) = (1, 1, −1) − (3/21)(4, 1, 2) − (6/14)(1, 2, −3) = (0, 0, 0)
Thus, w3 ∈ S2, so we exclude it.
Hence, an orthogonal basis for the subspace spanned by B is {(4, 1, 2), (1, 2, −3)}.



0 −1  1

! 2, w
! 3} = 
A19 Denote the given basis by B = {w1, w2, w3} = {(1, 0, 0, 1), (2, −1, 1, 2), (−1, 1, 1, 1)}.
First Step: Let v1 = w1 and S1 = Span{v1}.
Second Step: Let
v2 = perpS1(w2) = (2, −1, 1, 2) − (4/2)(1, 0, 0, 1) = (0, −1, 1, 0)
and define S2 = Span{v1, v2}.
Third Step: Observe that v1 · w3 = 0 = v2 · w3, so w3 is already orthogonal to v1 and v2. Hence, we can simply take v3 = w3.
Hence, an orthogonal basis for the subspace spanned by B is {(1, 0, 0, 1), (0, −1, 1, 0), (−1, 1, 1, 1)}.


1 0  3 0

! 2, w
! 3, w
! 4} = 
A20 Denote the given set by B = {w1, w2, w3, w4} = {(1, 1, 0, 1), (1, 0, 1, 1), (1, 3, −2, 1), (1, 0, 0, 1)}.
First Step: Let v1 = w1 and S1 = Span{v1}.
Second Step: We have
perpS1(w2) = (1, 0, 1, 1) − (2/3)(1, 1, 0, 1) = (1/3, −2/3, 1, 1/3)
Since we can take any scalar multiple of this vector, to make future calculations easier, we pick v2 = (1, −2, 3, 1) and define S2 = Span{v1, v2}.
Third Step: We have
perpS2(w3) = (1, 3, −2, 1) − (5/3)(1, 1, 0, 1) − (−10/15)(1, −2, 3, 1) = (0, 0, 0, 0)
Hence, w3 ∈ Span{w1, w2}, so we can omit it.
Fourth Step: We have
perpS2(w4) = (1, 0, 0, 1) − (2/3)(1, 1, 0, 1) − (2/15)(1, −2, 3, 1) = (1/5, −2/5, −2/5, 1/5)
Hence, we take v3 = (1, −2, −2, 1).
Thus, an orthogonal basis for the subspace spanned by B is {(1, 1, 0, 1), (1, −2, 3, 1), (1, −2, −2, 1)}.


   

1 , 1
! 2, w
! 3} = 
A21 Denote the given basis by B = {w1, w2} = {(1, 1, 0), (1, 1, 1)}.
First Step: Let v1 = w1 and S1 = Span{v1}.
Second Step: Let
v2 = perpS1(w2) = (1, 1, 1) − (2/2)(1, 1, 0) = (0, 0, 1)
Hence, an orthogonal basis for the subspace spanned by B is {(1, 1, 0), (0, 0, 1)}.
Normalizing each vector, we get that an orthonormal basis for the subspace spanned by B is {(1/√2, 1/√2, 0), (0, 0, 1)}.
     

 1 , 1 , −1
! 2, w
! 3} = 
w1 , w
.
A22 Denote the given basis by B = {w1, w2, w3} = {(2, 1, −2), (2, 1, 1), (−1, −1, 1)}.
First Step: Let v1 = w1 and S1 = Span{v1}.
Second Step: We have
perpS1(w2) = (2, 1, 1) − (3/9)(2, 1, −2) = (4/3, 2/3, 5/3)
Thus, we take v2 = (4, 2, 5) and define S2 = Span{v1, v2}.
Third Step: We have
perpS2(w3) = (−1, −1, 1) − (−5/9)(2, 1, −2) − (−1/45)(4, 2, 5) = (1/5, −2/5, 0)
So, we take v3 = (1, −2, 0).
Hence, an orthogonal basis for the subspace spanned by B is {(2, 1, −2), (4, 2, 5), (1, −2, 0)}.
Normalizing each vector, we get that an orthonormal basis for the subspace spanned by B is {(2/3, 1/3, −2/3), (4/√45, 2/√45, 5/√45), (1/√5, −2/√5, 0)}.





! 2, w
! 3} = 
A23 Denote the given basis by B = {w1, w2, w3} = {(1, 1, 0, 1), (0, 1, 1, 1), (1, −1, −1, −1)}.
First Step: Let v1 = w1 and S1 = Span{v1}.
Second Step: We have
perpS1(w2) = (0, 1, 1, 1) − (2/3)(1, 1, 0, 1) = (−2/3, 1/3, 1, 1/3)
Thus, we take v2 = (−2, 1, 3, 1) and define S2 = Span{v1, v2}.
Third Step: We have
perpS2(w3) = (1, −1, −1, −1) − (−1/3)(1, 1, 0, 1) − (−7/15)(−2, 1, 3, 1) = (2/5, −1/5, 2/5, −1/5)
So, we take v3 = (2, −1, 2, −1).
Hence, an orthogonal basis for the subspace spanned by B is {(1, 1, 0, 1), (−2, 1, 3, 1), (2, −1, 2, −1)}.
Normalizing each vector, we get that an orthonormal basis for the subspace spanned by B is {(1/√3, 1/√3, 0, 1/√3), (−2/√15, 1/√15, 3/√15, 1/√15), (2/√10, −1/√10, 2/√10, −1/√10)}.
1
! 2, w
! 3} = 
w1 , w
,
,
.
A24 Denote the given basis by B = {w1, w2, w3} = {(1, 0, 1, 0, 1), (1, 0, −1, 1, 0), (1, 1, 1, 1, 1)}.
First Step: Let v1 = w1 and S1 = Span{v1}.
Second Step: We have
perpS1(w2) = (1, 0, −1, 1, 0) − (0/3)(1, 0, 1, 0, 1) = (1, 0, −1, 1, 0)
So, we take v2 = (1, 0, −1, 1, 0) and define S2 = Span{v1, v2}.
Third Step: We have
perpS2(w3) = (1, 1, 1, 1, 1) − (3/3)(1, 0, 1, 0, 1) − (1/3)(1, 0, −1, 1, 0) = (−1/3, 1, 1/3, 2/3, 0)
So, we take v3 = (−1, 3, 1, 2, 0).
Hence, an orthogonal basis for the subspace spanned by B is {(1, 0, 1, 0, 1), (1, 0, −1, 1, 0), (−1, 3, 1, 2, 0)}.
Normalizing each vector, we get that an orthonormal basis for the subspace spanned by B is {(1/√3, 0, 1/√3, 0, 1/√3), (1/√3, 0, −1/√3, 1/√3, 0), (−1/√15, 3/√15, 1/√15, 2/√15, 0)}.
A25 The closest vector is
projS(x) = (v1 · x)/‖v1‖² v1 = (4/13)(2, 3) = (8/13, 12/13)
A26 The spanning set is an orthogonal set, so we have that the closest vector is
projS(x) = (v1 · x)/‖v1‖² v1 + (v2 · x)/‖v2‖² v2 = (13/26)(3, −4, −1) + (3/3)(1, 1, −1) = (5/2, −1, −3/2)
A27 Observe that the spanning set for S is not an orthogonal set. Thus, we must apply the Gram-Schmidt Procedure first to get an orthogonal basis. Let w1 = (1, 1) and S1 = Span{w1}. We find that
perpS1((3, 1)) = (3, 1) − (4/2)(1, 1) = (1, −1)
So, we take w2 = (1, −1). Then {w1, w2} is an orthogonal basis for the spanned set. We can now use the projection formula. Thus, the closest vector is
projS(x) = (w1 · x)/‖w1‖² w1 + (w2 · x)/‖w2‖² w2 = (1/2)(1, 1) + (−3/2)(1, −1) = (−1, 2)
A28 Observe that the spanning set is not an orthogonal set. Thus, we must apply the Gram-Schmidt Procedure first to get an orthogonal basis. Let w1 = (1, 1, 2) and S1 = Span{w1}. We find that
perpS1((1, −1, −1)) = (1, −1, −1) − (−2/6)(1, 1, 2) = (4/3, −2/3, −1/3)
So, we take w2 = (4, −2, −1) and S2 = Span{w1, w2}. Then, we have that
perpS2((6, 0, 3)) = (6, 0, 3) − (12/6)(1, 1, 2) − (21/21)(4, −2, −1) = (0, 0, 0)
Thus, (6, 0, 3) ∈ Span{w1, w2}. Since there are no remaining vectors, we have that {w1, w2} is an orthogonal basis for the spanned set. We can now use the projection formula. We have
projS(x) = (w1 · x)/‖w1‖² w1 + (w2 · x)/‖w2‖² w2 = (19/6)(1, 1, 2) + (7/21)(4, −2, −1) = (9/2, 5/2, 6)
A29 Observe that the spanning set is not an orthogonal set. Thus, we must apply the Gram-Schmidt Procedure first to get an orthogonal basis. Let w1 = (1, 1, 1, 0) and S1 = Span{w1}. We find that
perpS1((1, 1, 2, 1)) = (1, 1, 2, 1) − (4/3)(1, 1, 1, 0) = (−1/3, −1/3, 2/3, 1)
So, we take w2 = (−1, −1, 2, 3). Then {w1, w2} is an orthogonal basis for the spanned set. We can now use the projection formula. We have
projS(x) = (w1 · x)/‖w1‖² w1 + (w2 · x)/‖w2‖² w2 = (0/3)(1, 1, 1, 0) + (3/15)(−1, −1, 2, 3) = (−1/5, −1/5, 2/5, 3/5)
A30 The spanning set is an orthogonal set, so we have that the closest vector is
projS(x) = (v1 · x)/‖v1‖² v1 + (v2 · x)/‖v2‖² v2 = (3/3)(1, 0, −1, 1) + (3/7)(1, 1, −1, −2) = (10/7, 3/7, −10/7, 1/7)
B Homework Problems
B1 projS(x) = (2, 2, 2), perpS(x) = (1, −1, 0)
B3 projS(x) = (2/5, 1/5, 6), perpS(x) = (8/5, −16/5, 0)
B5 projS(x) = (3, 5, 7), perpS(x) = (0, 0, 0)
B7
B9
B12
B15
 


5/2
−1/2
−1/2
3/2

projS (!x ) =  , perpS (!x ) = 

3/2
 1/2
5/2
1/2
   
  


−3 −3





−2

   


−5




1
0
,
B10



























 0
 3

1
   
    






−1 −1
 7  12



















 2  5
−5 −7

,
B13
,
   
   










3
0
1
0














   

   




0
3
0
1
   
    


 2  6
−2  2










   
   






 3 −5

 1  0

B16 

12 ,  0
 0 , −3










   
   






 0  4

 0  7

B2 projS(x) = (12/5, 4, −1/5), perpS(x) = (−7/5, 1, 16/5)
B4 projS(x) = (3/2, 3/2, 3), perpS(x) = (3/2, −3/2, 0)
B6 projS(x) = (1/2, 1, 2, 1/2), perpS(x) = (1/2, 0, 0, −1/2)
B8 projS(x) = (2, 2, 4, 7), perpS(x) = (1, −1, 0, 0)
 



 5


−7
B11 








 8

 



−2







 4

B14 
 




−8





 


1

  

 2  2 



  





1
5/2
B17 
,
















 −1 13/2 

B18
B21
B24
B27
B29
B31
B34
   

1  1



   

1 , −1
B19









1
0
     

 1 3  1





     



 2 0 −5

,
,
B22
























1
2
6







     




 −1 5 −3 
  
√ 



2/3 −1/ √2





  



2/3
,
B25


1/
2




















 1/3

0

√ 
√  




−2/
1/
7










√   √ 21








−1/ √7  2/ √21 



,















1/
7
−2/
21






√   √ 




 2/ 7
3/ 21 
  
√ 
√  



 0   1/√ 3  6/ √54





1/3  4/ 27 −4/ 54




  







√
√


,
,
























2/3
−1/
27
1/
54







  





√
√





 2/3 −1/ 27
1/ 54 


−1/3


B32
 5/3
5/3
 
−2
 0
B35
 
−1
    
   


2  5
 1 5






   

   









1
−2
0
4
,
B20
,



































 2 −4 
 −1 5 

     
     


−1  4  2
1  0  2










 2 −1  2
     







     
2  1  4

,
,
B23
,
,















































−1
−3
1
0
−3
3
















     
     







 1


3 −1
2 −1 −5 
√ 
√ 
 √  
 √  


1/ 6 −2/ 21
1/ 3  1/ 14











√
√
√










 ,  4/ 21
 , −3/ √14
B26
1/
3


1/
6



 √  
 √  



√ 
√ 








 2/ 6 −1/ 21 
 1/ 3
2/ 14 

  
 

1/2 −1/2  0√ 







  





1/2  1/2 −1/ 2


B28 
,
,
  
 









0
1/2
−1/2





 


√ 

  

1/2
1/2
1/ 2 
 √  
√  
√  




1/ √3 −1/ √15 −3/ √15  0√ 






 
  1/ 15 −1/ 3







2/
15
1/ 3 











√
√
√




B30 
,
,
,





























0
1/
3
3/
15
−1/
15











 
 





√
√
√
√






 1/ 3 −1/ 15

1/ 3 
2/ 15
 
! "
−3
2
 
B33  3
 
4
0
 
 
5
5/3
3
 
7/3
B36
1
 
 
4/3
1
C Conceptual Problems
C1 Let {v1, v2, . . . , vk} be a basis for S. By Theorem 7.2.1, S⊥ is the solution space of the system v1 · x = 0, v2 · x = 0, . . . , vk · x = 0. Since {v1, v2, . . . , vk} is linearly independent, the matrix A whose rows are v1ᵀ, . . . , vkᵀ has rank k. Therefore, by the Rank-Nullity Theorem, the solution space (the nullspace of A) has dimension n − k. Thus, dim S⊥ = n − k.
C2 If x ∈ S ∩ S⊥, then x ∈ S and x ∈ S⊥. Since every vector in S⊥ is orthogonal to every vector in S, x · x = 0. Hence, x = 0 by Theorem 1.5.1(2).
C3 Since every vector in S⊥ is orthogonal to every vector in S, B = {v1, v2, . . . , vk, vk+1, . . . , vn} is an orthogonal set in Rⁿ. Moreover, none of the vectors can be the zero vector, since they come from bases. Hence, by Theorem 7.1.1, B is linearly independent. But a linearly independent set of n vectors in Rⁿ is a basis for Rⁿ by Theorem 2.3.6, so B is a basis for Rⁿ.
C4 If s ∈ S, then s · x = 0 for all x ∈ S⊥ by definition of S⊥. But the definition of (S⊥)⊥ is (S⊥)⊥ = {x ∈ Rⁿ | x · s = 0 for all s ∈ S⊥}. Hence s ∈ (S⊥)⊥, so S ⊆ (S⊥)⊥. Also, by Problem C1, dim(S⊥)⊥ = n − dim S⊥ = n − (n − dim S) = dim S. Hence S = (S⊥)⊥.
C5 Let $\{\vec v_1,\ldots,\vec v_k\}$ be an orthonormal basis for $S$, and let $\{\vec v_{k+1},\ldots,\vec v_n\}$ be an orthonormal basis for $S^\perp$. Then $\{\vec v_1,\ldots,\vec v_n\}$ is an orthonormal basis for $\mathbb R^n$. Thus, any $\vec x\in\mathbb R^n$ can be written
$$\vec x=(\vec v_1\cdot\vec x)\vec v_1+\cdots+(\vec v_n\cdot\vec x)\vec v_n$$
Then
$$\operatorname{perp}_S(\vec x)=\vec x-\operatorname{proj}_S(\vec x)=\vec x-\big[(\vec v_1\cdot\vec x)\vec v_1+\cdots+(\vec v_k\cdot\vec x)\vec v_k\big]=(\vec v_{k+1}\cdot\vec x)\vec v_{k+1}+\cdots+(\vec v_n\cdot\vec x)\vec v_n=\operatorname{proj}_{S^\perp}(\vec x)$$
C6 We have $\vec q_1=\frac{1}{\|\vec a_1\|}\vec a_1$. Observe that $\vec a_1\ne\vec 0$ since $B$ is linearly independent. Therefore, $\vec a_1\cdot\vec q_1=\|\vec a_1\|\ne 0$. Because $\{\vec q_1,\vec q_2\}$ is an orthonormal basis for $\operatorname{Span}B$, we have that
$$\vec a_2=(\vec a_2\cdot\vec q_1)\vec q_1+(\vec a_2\cdot\vec q_2)\vec q_2$$
Thus, $\vec a_2\cdot\vec q_2\ne 0$, since otherwise $\vec a_2$ would be a scalar multiple of $\vec a_1$. Therefore, $\det R=(\vec a_1\cdot\vec q_1)(\vec a_2\cdot\vec q_2)\ne 0$ and hence $R$ is invertible.
C7 If $\vec x\in\operatorname{Span}\{\vec v_1,\ldots,\vec v_{k-1},\,\vec v_k+t_1\vec v_1+\cdots+t_{k-1}\vec v_{k-1}\}$, then
$$\vec x=c_1\vec v_1+\cdots+c_{k-1}\vec v_{k-1}+c_k(\vec v_k+t_1\vec v_1+\cdots+t_{k-1}\vec v_{k-1})=(c_1+c_kt_1)\vec v_1+\cdots+(c_{k-1}+c_kt_{k-1})\vec v_{k-1}+c_k\vec v_k\in\operatorname{Span}\{\vec v_1,\ldots,\vec v_k\}$$
On the other hand, if $\vec x\in\operatorname{Span}\{\vec v_1,\ldots,\vec v_k\}$, then
$$\vec x=c_1\vec v_1+\cdots+c_{k-1}\vec v_{k-1}+c_k\vec v_k=(c_1-c_kt_1)\vec v_1+\cdots+(c_{k-1}-c_kt_{k-1})\vec v_{k-1}+c_k(\vec v_k+t_1\vec v_1+\cdots+t_{k-1}\vec v_{k-1})$$
Hence, $\vec x\in\operatorname{Span}\{\vec v_1,\ldots,\vec v_{k-1},\,\vec v_k+t_1\vec v_1+\cdots+t_{k-1}\vec v_{k-1}\}$.
C8 Using the fact that $\vec x^{\,T}\vec y=\vec x\cdot\vec y$, we get that
$$\big[(\vec v_1\vec v_1^{\,T}+\cdots+\vec v_k\vec v_k^{\,T})\vec x\big]^T=\vec x^{\,T}\big(\vec v_1\vec v_1^{\,T}+\cdots+\vec v_k\vec v_k^{\,T}\big)=\vec x^{\,T}\vec v_1\vec v_1^{\,T}+\cdots+\vec x^{\,T}\vec v_k\vec v_k^{\,T}=(\vec x\cdot\vec v_1)\vec v_1^{\,T}+\cdots+(\vec x\cdot\vec v_k)\vec v_k^{\,T}$$
$$=\big[(\vec x\cdot\vec v_1)\vec v_1+\cdots+(\vec x\cdot\vec v_k)\vec v_k\big]^T=\big[\operatorname{proj}_S(\vec x)\big]^T$$
Thus,
$$\big(\vec v_1\vec v_1^{\,T}+\cdots+\vec v_k\vec v_k^{\,T}\big)\vec x=\operatorname{proj}_S(\vec x)=[\operatorname{proj}_S]\vec x$$
for all $\vec x\in\mathbb R^n$. Thus, by the Matrices Equal Theorem, $[\operatorname{proj}_S]=\vec v_1\vec v_1^{\,T}+\cdots+\vec v_k\vec v_k^{\,T}$.
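The identity $[\operatorname{proj}_S]=\vec v_1\vec v_1^{\,T}+\cdots+\vec v_k\vec v_k^{\,T}$ is easy to confirm numerically. A minimal sketch, assuming NumPy; the orthonormal pair below is an illustrative choice, not taken from any exercise:

```python
import numpy as np

# An orthonormal basis for a plane S in R^3 (illustrative choice)
v1 = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)
v2 = np.array([0.0, 0.0, 1.0])

# Projection matrix as a sum of outer products v_i v_i^T
P = np.outer(v1, v1) + np.outer(v2, v2)

x = np.array([3.0, -1.0, 2.0])
# Compare with the coefficient formula proj_S(x) = (x.v1)v1 + (x.v2)v2
proj = (x @ v1) * v1 + (x @ v2) * v2
assert np.allclose(P @ x, proj)
assert np.allclose(P @ P, P)   # projections are idempotent
assert np.allclose(P, P.T)     # and symmetric
```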
Section 7.3
A Practice Problems
 


9
1 1
1 2
6
! "


 
a
A1 Let !y = 5 and !a =
. Since we want an equation of the form y = a + bt we let X = 1 3 and
 


b
3
1 4
1
1 5
y
The normal system gives
!
X T X!a = X T !y
"! " ! "
5 15 a
24
=
15 55 b
53
y = 10.5 − 1.9t
Row reducing the corresponding
augmented matrix gives
!
5 15 24
15 55 53
"
∼
!
1 0
21/2
0 1 −19/10
Thus, the line of best fit is y =
"
21 19
− t.
2
10
t
O
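The normal-equation computation in A1 can be checked numerically. A quick sketch, assuming NumPy, with the same five data points:

```python
import numpy as np

t = np.array([1, 2, 3, 4, 5], dtype=float)
y = np.array([9, 6, 5, 3, 1], dtype=float)

X = np.column_stack([np.ones_like(t), t])   # design matrix for y = a + bt
a_vec = np.linalg.solve(X.T @ X, X.T @ y)   # normal equations X^T X a = X^T y

assert np.allclose(a_vec, [21/2, -19/10])   # matches y = 21/2 - (19/10)t
```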
 


2
1 −2
2

1 −1
! "
 


a
0.
A2 Let !y = 4 and !a =
. Since we want an equation of the form y = a + bt we let X = 1
 


b
1
4
1

5
1
2
The normal system gives
X T X!a = X T !y
!
"! " ! "
5 0 a
17
=
0 10 b
8
Since it is easy to calculate (X T X)−1 we solve this as
T
−1
!a = (X X)
!
y
y = 3.4 + 0.8t
" !
"
17
17/5
=
8
4/5
Thus, the line of best fit is y =
17 4
+ t.
5
5
Copyright © 2020 Pearson Canada Inc.
O
t
24
 
3
 
2
a
 
 
A3 Let !y = 0 and !a = b. Since we want an equation of the form y = a + bt + ct2 we let
 
 
c
2
8




1 −2 (−2)2 
1 −2 4
1 −1 (−1)2 
1
y
1 1




2

0
0  = 1
0 0. The normal system
X = 1




2 

1
1
1
1
1 1




1
2
22
1
2 4
gives
X T X!a = X T !y
Row reducing the corresponding augmented matrix gives

 

 5 0 10 15   1 0 0 3/7 

 

 0 10 0 10  ∼  0 1 0 1 
10 0 34 48
0 0 1 9/7
Thus, the equation of best fit is y =
3
7
+t+
y=
9 2
7t .
3
7
+ t + 97 t2
t
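The quadratic fit in A3 uses the same normal equations with an extra $t^2$ column; a numeric sketch, assuming NumPy:

```python
import numpy as np

t = np.array([-2, -1, 0, 1, 2], dtype=float)
y = np.array([3, 2, 0, 2, 8], dtype=float)

# Design matrix with columns 1, t, t^2 for y = a + bt + ct^2
X = np.column_stack([np.ones_like(t), t, t**2])
coef = np.linalg.solve(X.T @ X, X.T @ y)

assert np.allclose(coef, [3/7, 1, 9/7])  # matches y = 3/7 + t + (9/7)t^2
```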
 


A4 Let $\vec y=\begin{bmatrix}4\\1\\1\end{bmatrix}$ and $\vec a=\begin{bmatrix}a\\b\end{bmatrix}$. Since we want an equation of the form $y=a+bt$, we let $X=\begin{bmatrix}1&-1\\1&0\\1&1\end{bmatrix}$. The normal system gives
$$X^TX\vec a=X^T\vec y\qquad\begin{bmatrix}3&0\\0&2\end{bmatrix}\vec a=\begin{bmatrix}6\\-3\end{bmatrix}$$
Since it is easy to calculate $(X^TX)^{-1}$, we solve this as $\vec a=(X^TX)^{-1}X^T\vec y=\begin{bmatrix}2\\-3/2\end{bmatrix}$. Thus, the equation of best fit is $y=2-\dfrac{3}{2}t$.
 


A5 Let $\vec y=\begin{bmatrix}2\\-1\\-2\end{bmatrix}$ and $\vec a=\begin{bmatrix}a\\b\end{bmatrix}$. Since we want an equation of the form $y=a+bt$, we let $X=\begin{bmatrix}1&-2\\1&1\\1&1\end{bmatrix}$. The normal system gives
$$X^TX\vec a=X^T\vec y\qquad\begin{bmatrix}3&0\\0&6\end{bmatrix}\vec a=\begin{bmatrix}-1\\-7\end{bmatrix}$$
Since it is easy to calculate $(X^TX)^{-1}$, we solve this as $\vec a=(X^TX)^{-1}X^T\vec y=\begin{bmatrix}-1/3\\-7/6\end{bmatrix}$. Thus, the equation of best fit is $y=-\dfrac{1}{3}-\dfrac{7}{6}t$.
 


1
1 −2
! "
1

1 −1
a
. The
A6 Let !y =   and !a =
. Since we want an equation of the form y = a + bt we let X = 
b
0
2
1

3
1
1
normal system gives
X T X!a = X T !y
!
"
! "
4 −2
7
!a =
−2
6
0
!
" !
"
4 −2 7
1 0 21/10
∼
−2
6 0
0 1 7/10
21
7
Thus, the equation of best fit is y =
+ t.
10 10
 


1
1 −1
! "
2

1 0 
a
. The
A7 Let !y =   and !a =
. Since we want an equation of the form y = a + bt we let X = 
b
2
1 1 
3
1 2
normal system gives
X T X!a = X T !y
!
"
! "
4 2
8
!a =
2 6
7
!
" !
"
4 2 8
1 0 17/10
∼
2 6 7
0 1 3/5
17 3
Thus, the equation of best fit is y =
+ t.
10 5
 
 
4
a
 
 
A8 Let !y = 1 and !a = b. Since we want an equation of the form y = a + bt + ct2 we let
 
 
1
c




1 −1 (−1)2  1 −1 1

 

0
02  = 1
0 0. The normal system gives
X = 1




1
1
12
1
1 1
X T X!a = X T !y


 
 6
3 0 2
0 2 0 !a = −3


 
2 0 2
5
Row reducing the corresponding augmented matrix gives

 

6   1 0 0
1 
 3 0 2

 

 0 2 0 −3  ∼  0 1 0 −3/2 
2 0 2
5
0 0 1
3/2
3
3
Thus, the equation of best fit is y = 1 − t + t2 .
2
2
Copyright © 2020 Pearson Canada Inc.
26
 
 
 2
a
 
 
A9 Let !y = −2 and !a = b. Since we want an equation of the form y = a + bt + ct2 we let
 
 
3
c




2
1 −2 (−2)  1 −2 4

 

0
02  = 1
0 0. The normal system gives
X = 1



2 
1
2
2
1
2 4
X T X!a = X T !y


 
3 0 8
 3


 
0 8 0 !a =  2
8 0 32
20
Row reducing the corresponding augmented matrix gives

 3 0 8 3

 0 8 0 2
8 0 32 20
 

  1 0 0 −2 
 

 ∼  0 1 0 1/4 
0 0 1 9/8
9
1
Thus, the equation of best fit is y = −2 + t + t2 .
4
8
 
 
 3
a
 
 
A10 Let !y =  0 and !a = b. Since we want an equation of the form y = a + bt + ct2 we let
 
 
−2
c




2
1 −1 (−1)  1 −1 1

 

1
12  = 1
1 1. The normal system gives
X = 1



2 
1
2
2
1
2 4
X T X!a = X T !y


 
3 2 6 
 1


 
2 6 8  !a = −7
6 8 18
−5
Row reducing the corresponding augmented matrix gives

 

1   1 0 0
5/3 
 3 2 6

 

 2 6 8 −7  ∼  0 1 0 −3/2 
6 8 18 −5
0 0 1 −1/6
Thus, the equation of best fit is y =
5 3
1
− t − t2 .
3 2
6
 
 
2
a
3
 


A11 Let !y =   and !a = b. Since we want an equation of the form y = a + bt + ct2 we let
 
1
 
c
2
Copyright © 2020 Pearson Canada Inc.
27

 

1 −1 (−1)2  1 −1 1
1 −1 (−1)2  1 −1 1
 = 
. The normal system gives
X = 
1
12  1
1 1
1
 

1
1
12
1
1 1
X T X!a = X T !y


 
4 0 4
 8


 
0 4 0 !a = −2
4 0 4
8
Row reducing the corresponding augmented matrix gives
 


8   1 0 1
2 
 4 0 4
 0 4 0 −2  ∼  0 1 0 −1/2 
 


4 0 4
8
0 0 0
0
Since the system has infinitely many solutions, we take the solution with smallest length. We find
1
that it is y = 1 − t + t2 .
2
 
A12 Let $\vec y=\begin{bmatrix}4\\1\\1\end{bmatrix}$ and $\vec a=\begin{bmatrix}b\\c\end{bmatrix}$. Since we want an equation of the form $y=bt+ct^2$, we let
$$X=\begin{bmatrix}-1&(-1)^2\\0&0^2\\1&1^2\end{bmatrix}=\begin{bmatrix}-1&1\\0&0\\1&1\end{bmatrix}$$
The normal system gives
$$X^TX\vec a=X^T\vec y\qquad\begin{bmatrix}2&0\\0&2\end{bmatrix}\vec a=\begin{bmatrix}-3\\5\end{bmatrix}$$
Since it is easy to calculate $(X^TX)^{-1}$, we solve this as $\vec a=(X^TX)^{-1}X^T\vec y=\begin{bmatrix}-3/2\\5/2\end{bmatrix}$. Thus, the equation of best fit is $y=-\dfrac{3}{2}t+\dfrac{5}{2}t^2$.


 1
! "
 0


 and !a = b . Since we want an equation of the form y = bt + dt3 we let
A13 Let !y = 
d
 −1
−20

 

−1 (−1)3  −1 −1
 0


03   0
0
X = 
=


. The normal system gives
13   1
1
 1



2
23
2
8
X T X!a = X T !y
!
"
!
"
6 18
−42
!a =
18 66
−162
Copyright © 2020 Pearson Canada Inc.
28
Row reducing the corresponding augmented matrix gives
!
" !
"
6 18 −42
1 0
2
∼
18 66 −162
0 1 −3
Thus, the equation of best fit is y = 2t − 3t3 .
 
 1
 1
! "
 
a




A14 Let !y =  2 and !a =
. Since we want an equation of the form y = a + ct2 we let
 
c
 3
−2



2
1 (−2)  1 4
1 (−1)2  1 1

 

02  = 1 0. The normal system gives
X = 1

 

12  1 1
1



1
22
1 4
!
X T X!a = X T !y
"
! "
5 10
5
!a =
10 34
0
Row reducing the corresponding augmented matrix gives
!
" !
"
5 10 5
1 0 17/7
∼
10 34 0
0 1 −5/7
17 5 2
Thus, the equation of best fit is y =
− t .
7
7
 
 
2
a
3
 
A15 Let !y =   and !a = b. Since we want an equation of the form y = a + bt + c2t we let
 
1
c
2



−1 
1 −1 2  1 −1 1/2
1 −1 2−1  1 −1 1/2
 = 
. The normal system gives
X = 
1 21  1
1 2 
1



1
2 22
1
2 4
X T X!a = X T !y




7 
4 1
 8 




9  !a =  0 
1 7



7 9 41/2
25/2
Row reducing the corresponding augmented matrix gives

 

7
8   1 0 0
0 
 4 1
 

 1 7
9
0  ∼  0 1 0 −9/5 

 

7 9 41/2 25/2
0 0 1
7/5
7
9
Thus, the equation of best fit is y = − t + 2t .
5
5
Copyright © 2020 Pearson Canada Inc.
29
 


A16 We have $\vec b=\begin{bmatrix}3\\5\\-4\end{bmatrix}$, $\vec x=\begin{bmatrix}x_1\\x_2\end{bmatrix}$, and $A=\begin{bmatrix}1&2\\2&-3\\1&-5\end{bmatrix}$. Write the augmented matrix $\big[A\mid\vec b\,\big]$ and row reduce:
$$\left[\begin{array}{cc|c}1&2&3\\2&-3&5\\1&-5&-4\end{array}\right]\sim\left[\begin{array}{cc|c}1&2&3\\0&-7&-1\\0&0&-6\end{array}\right]$$
Thus, the system is inconsistent. The $\vec x$ that minimizes $\|A\vec x-\vec b\|$ must satisfy $A^TA\vec x=A^T\vec b$. Row reducing the corresponding augmented matrix gives
$$\left[\begin{array}{cc|c}6&-9&9\\-9&38&11\end{array}\right]\sim\left[\begin{array}{cc|c}1&0&3\\0&1&1\end{array}\right]$$
Therefore, $\vec x=\begin{bmatrix}3\\1\end{bmatrix}$ minimizes $\|A\vec x-\vec b\|$.
 


A17 We have $\vec b=\begin{bmatrix}2\\1\\8\end{bmatrix}$, $\vec x=\begin{bmatrix}x_1\\x_2\end{bmatrix}$, and $A=\begin{bmatrix}2&-2\\2&-3\\1&1\end{bmatrix}$. Write the augmented matrix $\big[A\mid\vec b\,\big]$ and row reduce:
$$\left[\begin{array}{cc|c}2&-2&2\\2&-3&1\\1&1&8\end{array}\right]\sim\left[\begin{array}{cc|c}2&-2&2\\0&-1&-1\\0&0&5\end{array}\right]$$
Thus, the system is inconsistent. The $\vec x$ that minimizes $\|A\vec x-\vec b\|$ must satisfy $A^TA\vec x=A^T\vec b$. Row reducing the corresponding augmented matrix gives
$$\left[\begin{array}{cc|c}9&-9&14\\-9&14&1\end{array}\right]\sim\left[\begin{array}{cc|c}1&0&41/9\\0&1&3\end{array}\right]$$
Therefore, $\vec x=\begin{bmatrix}41/9\\3\end{bmatrix}$ minimizes $\|A\vec x-\vec b\|$.
 


A18 We have $\vec b=\begin{bmatrix}5\\-1\\3\end{bmatrix}$, $\vec x=\begin{bmatrix}x_1\\x_2\end{bmatrix}$, and $A=\begin{bmatrix}2&1\\2&-1\\3&2\end{bmatrix}$. Write the augmented matrix $\big[A\mid\vec b\,\big]$ and row reduce:
$$\left[\begin{array}{cc|c}2&1&5\\2&-1&-1\\3&2&3\end{array}\right]\sim\left[\begin{array}{cc|c}1&1&-2\\0&-1&9\\0&0&-24\end{array}\right]$$
Thus, the system is inconsistent. The $\vec x$ that minimizes $\|A\vec x-\vec b\|$ must satisfy $A^TA\vec x=A^T\vec b$. Row reducing the corresponding augmented matrix gives
$$\left[\begin{array}{cc|c}17&6&17\\6&6&12\end{array}\right]\sim\left[\begin{array}{cc|c}1&0&5/11\\0&1&17/11\end{array}\right]$$
Therefore, $\vec x=\begin{bmatrix}5/11\\17/11\end{bmatrix}$ minimizes $\|A\vec x-\vec b\|$.
 


A19 We have $\vec b=\begin{bmatrix}3\\3\\2\end{bmatrix}$, $\vec x=\begin{bmatrix}x_1\\x_2\end{bmatrix}$, and $A=\begin{bmatrix}1&2\\1&0\\-1&1\end{bmatrix}$. Write the augmented matrix $\big[A\mid\vec b\,\big]$ and row reduce:
$$\left[\begin{array}{cc|c}1&2&3\\1&0&3\\-1&1&2\end{array}\right]\sim\left[\begin{array}{cc|c}1&0&3\\0&1&0\\0&0&5\end{array}\right]$$
Thus, the system is inconsistent. The $\vec x$ that minimizes $\|A\vec x-\vec b\|$ must satisfy $A^TA\vec x=A^T\vec b$. Row reducing the corresponding augmented matrix gives
$$\left[\begin{array}{cc|c}3&1&4\\1&5&8\end{array}\right]\sim\left[\begin{array}{cc|c}1&0&6/7\\0&1&10/7\end{array}\right]$$
Therefore, $\vec x=\begin{bmatrix}6/7\\10/7\end{bmatrix}$ minimizes $\|A\vec x-\vec b\|$.
 

 
2
 1
 x1 
1
 0
 
A20 We have !b =  , !x =  x2 , and A = 
 
1
 1
x3
2
−1
reduce:

0
1
 1
 0
1
2

 1
2 −1

−1 −1 −1
"
∼
!
1 0 6/7
0 1 10/7
"
!

0
1

B
C
1
2
. Write the augmented matrix A | !b and row
2 −1

−1 −1
 

2   1 0
2 
1
 

1   0 1
2
1 
∼

1   0 0 −6 −3 
 

2
0 0
0
4
Thus, the system is inconsistent. The !x that minimizes #A!x − !b# must satisfy AT A!x = AT !b. Row
reducing the corresponding augmented matrix gives
 


 3 3 1 1   1 0 0 3/10 
 


0 
 3 6 1 1  ∼  0 1 0

1 1 7 1
0 0 1 1/10


3/10


Therefore, !x =  0  minimizes #A!x − !b#.


1/10
 


A21 We have $\vec b=\begin{bmatrix}1\\7\\6\end{bmatrix}$, $\vec x=\begin{bmatrix}x_1\\x_2\end{bmatrix}$, and $A=\begin{bmatrix}1&3\\-1&1\\-2&1\end{bmatrix}$. Write the augmented matrix $\big[A\mid\vec b\,\big]$ and row reduce:
$$\left[\begin{array}{cc|c}1&3&1\\-1&1&7\\-2&1&6\end{array}\right]\sim\left[\begin{array}{cc|c}1&3&1\\0&1&2\\0&0&-6\end{array}\right]$$
Thus, the system is inconsistent. The $\vec x$ that minimizes $\|A\vec x-\vec b\|$ must satisfy $A^TA\vec x=A^T\vec b$. Row reducing the corresponding augmented matrix gives
$$\left[\begin{array}{cc|c}6&0&-18\\0&11&16\end{array}\right]\sim\left[\begin{array}{cc|c}1&0&-3\\0&1&16/11\end{array}\right]$$
Therefore, $\vec x=\begin{bmatrix}-3\\16/11\end{bmatrix}$ minimizes $\|A\vec x-\vec b\|$.
 


 
A22 We have $\vec b=\begin{bmatrix}1\\2\\0\\-4\end{bmatrix}$, $\vec x=\begin{bmatrix}x_1\\x_2\\x_3\end{bmatrix}$, and $A=\begin{bmatrix}0&2&1\\1&1&2\\1&-1&3\\0&1&0\end{bmatrix}$. Write the augmented matrix $\big[A\mid\vec b\,\big]$ and row reduce:
$$\left[\begin{array}{ccc|c}0&2&1&1\\1&1&2&2\\1&-1&3&0\\0&1&0&-4\end{array}\right]\sim\left[\begin{array}{ccc|c}1&-1&3&0\\0&1&0&-4\\0&0&-1&10\\0&0&0&19\end{array}\right]$$
Thus, the system is inconsistent. The $\vec x$ that minimizes $\|A\vec x-\vec b\|$ must satisfy $A^TA\vec x=A^T\vec b$. Row reducing the corresponding augmented matrix gives
$$\left[\begin{array}{ccc|c}2&0&5&2\\0&7&1&0\\5&1&14&5\end{array}\right]\sim\left[\begin{array}{ccc|c}1&0&0&1\\0&1&0&0\\0&0&1&0\end{array}\right]$$
Therefore, $\vec x=\begin{bmatrix}1\\0\\0\end{bmatrix}$ minimizes $\|A\vec x-\vec b\|$.
 


2
1 2
!
"
3
1 2
B
C
x
. Write the augmented matrix A | !b and row reduce:
A23 We have !b =  , !x = 1 , and A = 


x2
2
2 3
3
2 3

 
 1 2 2  
 1 2 3  

 
 1 3 2  ∼ 

 
1 3 3
1
0
0
0
2
0
1
0
2
1
0
1






Thus, the system is inconsistent. The !x that minimizes #A!x − !b# must satisfy AT A!x = AT !b. Row
reducing the corresponding augmented matrix gives
!
Therefore, !x =
!
4 10 10
10 26 25
"
∼
!
1 0 5/2
0 1 0
"
5/2
minimizes #A!x − !b#.
0
Copyright © 2020 Pearson Canada Inc.
"
32
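For the inconsistent systems in A16–A23, the minimizer of $\|A\vec x-\vec b\|$ can be checked numerically. A sketch using the system from A16, assuming NumPy (`np.linalg.lstsq` is a built-in shortcut for the same least-squares problem):

```python
import numpy as np

A = np.array([[1.0, 2.0], [2.0, -3.0], [1.0, -5.0]])
b = np.array([3.0, 5.0, -4.0])

# The minimizer of ||Ax - b|| solves the normal equations A^T A x = A^T b
x = np.linalg.solve(A.T @ A, A.T @ b)
assert np.allclose(x, [3.0, 1.0])

# np.linalg.lstsq finds the same minimizer directly
x2, *_ = np.linalg.lstsq(A, b, rcond=None)
assert np.allclose(x, x2)
```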
B Homework Problems
B1 $y=3+\frac{2}{5}t$ (the answer includes a sketch of the data and this line)
B2 [a sketch of the data and the line of best fit $y=a+bt$; the coefficients were lost in extraction]
B3 $y=1-2t$
B4–B15 [the two-column answer layout was scrambled in extraction, so these twelve answers cannot be reliably matched to their problem numbers; the recoverable formulas are $y=\frac{34}{7}+\frac{9}{7}t$, $y=\frac{12}{7}-\frac{13}{7}t$, $y=\frac{24}{13}-\frac{6}{13}t$, $y=3+\frac{1}{2}t+0t^2$, $y=3+\frac{3}{2}t+\frac{1}{2}t^2$, $y=\frac{1}{4}+\frac{7}{4}t-\frac{1}{4}t^2$, $y=-\frac{4}{3}-\frac{3}{5}t-\frac{2}{3}t^2$, $y=3+\frac{1}{2}t^2$, $y=\frac{9}{2}-\frac{1}{3}t^2$, $y=-\frac{1}{2}t+\frac{5}{2}t^2$, $y=-\frac{7}{2}t+\frac{5}{2}t^2$, $y=\frac{7}{6}t-\frac{1}{6}t^3$]
B16 $y=2-t-3t^3$
B17 $y=3-2^{t+1}$
B18 $y=\frac{7}{2}\sin(t)+5\cos(t)$
B19 $y=\frac{3}{2}\sin(t)-\cos(t)$
B20 $\vec x=\begin{bmatrix}-1\\3\end{bmatrix}$
B21 $\vec x=\begin{bmatrix}-13/6\\11/3\end{bmatrix}$
B22 $\vec x=\begin{bmatrix}2\\1/2\end{bmatrix}$
B23 $\vec x=\begin{bmatrix}7/2\\-2\end{bmatrix}$
B24 $\vec x=\begin{bmatrix}1\\-7/10\end{bmatrix}$
B25 $\vec x=\begin{bmatrix}-4/3\\7/3\end{bmatrix}$
C Conceptual Problems
C1 Since $A\vec x=\vec b$ is consistent, we have that $\vec b\in\operatorname{Col}(A)$. Also, observe that
$$A^TA\vec y=A^T\vec b\;\Rightarrow\;A^T(A\vec y-\vec b)=\vec 0$$
Thus, $(A\vec y-\vec b)\in\operatorname{Null}(A^T)$. But $A\vec y\in\operatorname{Col}(A)$ and $\vec b\in\operatorname{Col}(A)$, so $(A\vec y-\vec b)\in\operatorname{Col}(A)$. Then, by the FTLA, $A\vec y-\vec b=\vec 0$ and hence $A\vec y=\vec b$.
C2 Since $\vec b$ is orthogonal to the columns of $A$, we have that
$$A^T\vec b=\begin{bmatrix}\vec a_1^{\,T}\\ \vdots\\ \vec a_n^{\,T}\end{bmatrix}\vec b=\begin{bmatrix}\vec a_1\cdot\vec b\\ \vdots\\ \vec a_n\cdot\vec b\end{bmatrix}=\begin{bmatrix}0\\ \vdots\\ 0\end{bmatrix}$$
Thus, the normal system $A^TA\vec x=A^T\vec b$ is homogeneous and hence has $\vec x=\vec 0$ as a solution.
C3 Equation (7.1) says $A\vec a=\operatorname{proj}_{\operatorname{Col}(A)}(\vec y)$. The method of least squares tells us that $\vec a=(A^TA)^{-1}A^T\vec y$. Thus,
$$\operatorname{proj}_{\operatorname{Col}(A)}(\vec y)=A(A^TA)^{-1}A^T\vec y$$
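The formula in C3 gives the projection matrix onto $\operatorname{Col}(A)$ explicitly. A numeric sketch, assuming NumPy; the matrix below is an illustrative choice with independent columns:

```python
import numpy as np

A = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])  # independent columns (illustrative)
P = A @ np.linalg.inv(A.T @ A) @ A.T                # projection onto Col(A), per C3

y = np.array([1.0, 4.0, -2.0])
p = P @ y
# p lies in Col(A) and the residual y - p is orthogonal to every column of A
assert np.allclose(A.T @ (y - p), 0)
assert np.allclose(P @ P, P)  # projecting twice changes nothing
```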
C4 (a) $AA^+A=A[(A^TA)^{-1}A^T]A=A[(A^TA)^{-1}A^TA]=A(I)=A$
(b) $A^+AA^+=[(A^TA)^{-1}A^T]A[(A^TA)^{-1}A^T]=[(A^TA)^{-1}A^TA][(A^TA)^{-1}A^T]=IA^+=A^+$
(c) Observe that
$$(A^+)^T=\big((A^TA)^{-1}A^T\big)^T=(A^T)^T\big((A^TA)^{-1}\big)^T=A\big((A^TA)^T\big)^{-1}=A(A^TA)^{-1}$$
Thus,
$$(AA^+)^T=\big[A(A^TA)^{-1}\big]A^T=A\big[(A^TA)^{-1}A^T\big]=AA^+$$
(d) $(A^+A)^T=A^TA(A^TA)^{-1}=I=(A^TA)^{-1}A^TA=[(A^TA)^{-1}A^T]A=A^+A$
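The four identities in C4 are the Penrose conditions, so $A^+=(A^TA)^{-1}A^T$ is the Moore–Penrose pseudoinverse when $A$ has independent columns. A quick check, assuming NumPy, on an illustrative matrix:

```python
import numpy as np

A = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
A_plus = np.linalg.inv(A.T @ A) @ A.T  # A^+ = (A^T A)^{-1} A^T (full column rank)

# The four conditions proved in C4(a)-(d)
assert np.allclose(A @ A_plus @ A, A)
assert np.allclose(A_plus @ A @ A_plus, A_plus)
assert np.allclose((A @ A_plus).T, A @ A_plus)
assert np.allclose((A_plus @ A).T, A_plus @ A)
# Agrees with NumPy's built-in pseudoinverse
assert np.allclose(A_plus, np.linalg.pinv(A))
```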
C5 By direct calculation we have
$$X^TX=\begin{bmatrix}\vec 1^{\,T}\\ \vec t^{\,T}\\ (\vec t^{\,2})^T\end{bmatrix}\begin{bmatrix}\vec 1&\vec t&\vec t^{\,2}\end{bmatrix}=\begin{bmatrix}n&\sum_{i=1}^n t_i&\sum_{i=1}^n t_i^2\\[2pt] \sum_{i=1}^n t_i&\sum_{i=1}^n t_i^2&\sum_{i=1}^n t_i^3\\[2pt] \sum_{i=1}^n t_i^2&\sum_{i=1}^n t_i^3&\sum_{i=1}^n t_i^4\end{bmatrix}$$
C6 (a) Suppose that $m+1$ of the $t_i$ are distinct and consider a linear combination of the columns of $X$:
$$c_0\begin{bmatrix}1\\1\\ \vdots\\1\end{bmatrix}+c_1\begin{bmatrix}t_1\\t_2\\ \vdots\\t_n\end{bmatrix}+\cdots+c_m\begin{bmatrix}t_1^m\\t_2^m\\ \vdots\\t_n^m\end{bmatrix}=\begin{bmatrix}0\\0\\ \vdots\\0\end{bmatrix}$$
Now, let $p(t)=c_0+c_1t+\cdots+c_mt^m$. Equating entries we see that $t_1,t_2,\ldots,t_n$ are all roots of $p(t)$. Then $p(t)$ is a polynomial of degree at most $m$ with at least $m+1$ distinct roots, hence $p(t)$ must be the zero polynomial. Consequently, $c_i=0$ for $0\le i\le m$. Thus, the columns of $X$ are linearly independent.
(b) To show this implies that $X^TX$ is invertible, we use the Invertible Matrix Theorem. In particular, we show that the only solution to $X^TX\vec v=\vec 0$ is $\vec v=\vec 0$. Assume $X^TX\vec v=\vec 0$. Then
$$\|X\vec v\|^2=(X\vec v)^TX\vec v=\vec v^{\,T}X^TX\vec v=\vec v^{\,T}\vec 0=0$$
Hence, $X\vec v=\vec 0$. This implies that $\vec v=\vec 0$ since the columns of $X$ are linearly independent. Thus $X^TX$ is invertible, as required.
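The key identity of the proof, $\|X\vec v\|^2=\vec v^{\,T}X^TX\vec v$, and the resulting invertibility are easy to confirm numerically. A sketch, assuming NumPy, with illustrative data:

```python
import numpy as np

t = np.array([0.0, 1.0, 2.0, 3.0])    # four distinct t_i, so at least m+1 = 3 are distinct
X = np.column_stack([t**0, t, t**2])  # design matrix with m = 2

# Independent columns => X^T X invertible (nonzero determinant)
assert abs(np.linalg.det(X.T @ X)) > 1e-9

v = np.array([1.0, -2.0, 0.5])
# The identity ||Xv||^2 = v^T (X^T X) v used in part (b)
assert np.isclose(np.linalg.norm(X @ v) ** 2, v @ (X.T @ X) @ v)
```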
Section 7.4
A Practice Problems
For Problems A1 - A5, we use what was learned in Exercise 7.4.1.
A1 $\left\langle\begin{bmatrix}1&3\\-2&1\end{bmatrix},\begin{bmatrix}2&5\\1&1\end{bmatrix}\right\rangle=1(2)+3(5)+(-2)1+1(1)=16$
A2 $\left\langle\begin{bmatrix}3&6\\4&-2\end{bmatrix},\begin{bmatrix}3&0\\-3&-2\end{bmatrix}\right\rangle=3(3)+6(0)+4(-3)+(-2)(-2)=1$
A3 $\left\|\begin{bmatrix}1&0\\0&1\end{bmatrix}\right\|=\sqrt{1^2+0^2+0^2+1^2}=\sqrt2$
A4 $\left\|\begin{bmatrix}1&2\\2&0\end{bmatrix}\right\|=\sqrt{1^2+2^2+2^2+0^2}=\sqrt9=3$
A5 $\left\|\begin{bmatrix}3&-1\\-2&1\end{bmatrix}\right\|=\sqrt{3^2+(-1)^2+(-2)^2+1^2}=\sqrt{15}$
A6 $\langle x-2x^2,\,1+3x\rangle=0(1)+(-1)(4)+(-6)(7)=-46$
A7 $\langle 2-x+3x^2,\,4-3x^2\rangle=2(4)+4(1)+12(-8)=-84$
A8 $\|3-2x+x^2\|=\sqrt{3^2+2^2+3^2}=\sqrt{22}$
A9 $\|9+9x+9x^2\|=|9|\,\|1+x+x^2\|=9\sqrt{1^2+3^2+7^2}=9\sqrt{59}$
A10 We know that $\dim P_2(\mathbb R)=3$, so the fact that $\langle\,,\,\rangle$ only has two terms in it makes us think this is not an inner product. To prove this we observe that
$$\langle a+bx+cx^2,\,a+bx+cx^2\rangle=a^2+(a+b+c)^2$$
Hence, we see that we can make this expression equal $0$ by taking $a=0$ and $b=-c$. For example, $\langle x-x^2,\,x-x^2\rangle=0$. Thus, this function is not positive definite, so it is not an inner product.
A11 The fact that all the terms are in absolute values means that this function can never take negative values. So, we suspect that this function cannot be bilinear. Indeed, $\langle -p,q\rangle\ne-\langle p,q\rangle$, so it is not an inner product.
A12 This function has three terms and looks quite a bit like the inner products for $P_2(\mathbb R)$ we have seen before. So, we suspect that it is an inner product. We have
$$\langle a+bx+cx^2,\,a+bx+cx^2\rangle=(a-b+c)^2+2a^2+(a+b+c)^2\ge 0$$
and $\langle a+bx+cx^2,\,a+bx+cx^2\rangle=0$ if and only if $a-b+c=0$, $a=0$, and $a+b+c=0$. Solving this homogeneous system we find that $a=b=c=0$. Hence, the function is positive definite. Also, for any $p,q\in P_2(\mathbb R)$ we have
$$\langle p,q\rangle=p(-1)q(-1)+2p(0)q(0)+p(1)q(1)=q(-1)p(-1)+2q(0)p(0)+q(1)p(1)=\langle q,p\rangle$$
Hence, it is symmetric. Finally, for any $p,q,r\in P_2(\mathbb R)$ and $s,t\in\mathbb R$,
$$\langle p,\,tq+sr\rangle=p(-1)[tq(-1)+sr(-1)]+2p(0)[tq(0)+sr(0)]+p(1)[tq(1)+sr(1)]=t\langle p,q\rangle+s\langle p,r\rangle$$
So, it is also bilinear. Therefore, it is an inner product on $P_2(\mathbb R)$.
A13 Observe that the terms being multiplied by each other do not have to be the same, so we suspect this is not positive definite. In particular, we have
$$\langle x,x\rangle=(-1)(1)+2(0)(0)+1(-1)=-2$$
So, it is not an inner product.
A14 The definition involves squares, so we suspect that this function cannot be bilinear. Let $p(x)=1$ and $q(x)=x$. Then, we have
$$\langle 2p,q\rangle=\langle 2,x\rangle=2^2+2^2+0^2+1^2=9$$
But,
$$2\langle p,q\rangle=2\langle 1,x\rangle=2(1^2+1^2+0^2+1^2)=6$$
Hence, it is not bilinear. Therefore, it is not an inner product.
For Problems A15 - A18, we use what was learned in Exercise 7.4.1.
A15 (a) Denote the given basis by $B=\{\vec w_1,\vec w_2,\vec w_3\}=\left\{\begin{bmatrix}1&0\\-1&1\end{bmatrix},\begin{bmatrix}1&1\\0&1\end{bmatrix},\begin{bmatrix}2&0\\1&-1\end{bmatrix}\right\}$.
First Step: Let $\vec v_1=\vec w_1$ and $S_1=\operatorname{Span}\{\vec v_1\}$.
Second Step: We have
$$\operatorname{perp}_{S_1}(\vec w_2)=\begin{bmatrix}1&1\\0&1\end{bmatrix}-\frac{2}{3}\begin{bmatrix}1&0\\-1&1\end{bmatrix}=\begin{bmatrix}1/3&1\\2/3&1/3\end{bmatrix}$$
Thus, we take $\vec v_2=\begin{bmatrix}1&3\\2&1\end{bmatrix}$ and define $S_2=\operatorname{Span}\{\vec v_1,\vec v_2\}$.
Third Step: We have
$$\operatorname{perp}_{S_2}(\vec w_3)=\begin{bmatrix}2&0\\1&-1\end{bmatrix}-\frac{0}{3}\begin{bmatrix}1&0\\-1&1\end{bmatrix}-\frac{3}{15}\begin{bmatrix}1&3\\2&1\end{bmatrix}=\begin{bmatrix}9/5&-3/5\\3/5&-6/5\end{bmatrix}$$
So, we take $\vec v_3=\begin{bmatrix}3&-1\\1&-2\end{bmatrix}$.
Hence, an orthogonal basis for the subspace spanned by $B$ is $\left\{\begin{bmatrix}1&0\\-1&1\end{bmatrix},\begin{bmatrix}1&3\\2&1\end{bmatrix},\begin{bmatrix}3&-1\\1&-2\end{bmatrix}\right\}$.
(b) We have
$$\operatorname{proj}_S\!\left(\begin{bmatrix}4&3\\-2&1\end{bmatrix}\right)=\frac{7}{3}\begin{bmatrix}1&0\\-1&1\end{bmatrix}+\frac{10}{15}\begin{bmatrix}1&3\\2&1\end{bmatrix}+\frac{5}{15}\begin{bmatrix}3&-1\\1&-2\end{bmatrix}=\begin{bmatrix}4&5/3\\-2/3&7/3\end{bmatrix}$$
A16 (a) Denote the given basis by $B=\{\vec w_1,\vec w_2,\vec w_3\}=\left\{\begin{bmatrix}2&0\\1&-1\end{bmatrix},\begin{bmatrix}1&0\\-1&1\end{bmatrix},\begin{bmatrix}1&1\\0&1\end{bmatrix}\right\}$.
First Step: Let $\vec v_1=\vec w_1$ and $S_1=\operatorname{Span}\{\vec v_1\}$.
Second Step: We have
$$\operatorname{perp}_{S_1}(\vec w_2)=\begin{bmatrix}1&0\\-1&1\end{bmatrix}-\frac{0}{6}\begin{bmatrix}2&0\\1&-1\end{bmatrix}=\begin{bmatrix}1&0\\-1&1\end{bmatrix}$$
Thus, we take $\vec v_2=\begin{bmatrix}1&0\\-1&1\end{bmatrix}$ and define $S_2=\operatorname{Span}\{\vec v_1,\vec v_2\}$.
Third Step: We have
$$\operatorname{perp}_{S_2}(\vec w_3)=\begin{bmatrix}1&1\\0&1\end{bmatrix}-\frac{1}{6}\begin{bmatrix}2&0\\1&-1\end{bmatrix}-\frac{2}{3}\begin{bmatrix}1&0\\-1&1\end{bmatrix}=\begin{bmatrix}0&1\\1/2&1/2\end{bmatrix}$$
So, we take $\vec v_3=\begin{bmatrix}0&2\\1&1\end{bmatrix}$.
Hence, an orthogonal basis for the subspace spanned by $B$ is $\left\{\begin{bmatrix}2&0\\1&-1\end{bmatrix},\begin{bmatrix}1&0\\-1&1\end{bmatrix},\begin{bmatrix}0&2\\1&1\end{bmatrix}\right\}$.
(b) We have
$$\operatorname{proj}_S\!\left(\begin{bmatrix}4&3\\-2&1\end{bmatrix}\right)=\frac{5}{6}\begin{bmatrix}2&0\\1&-1\end{bmatrix}+\frac{7}{3}\begin{bmatrix}1&0\\-1&1\end{bmatrix}+\frac{5}{6}\begin{bmatrix}0&2\\1&1\end{bmatrix}=\begin{bmatrix}4&5/3\\-2/3&7/3\end{bmatrix}$$
A17 (a) Denote the given basis by $B=\{\vec w_1,\vec w_2,\vec w_3\}=\left\{\begin{bmatrix}1&0\\2&2\end{bmatrix},\begin{bmatrix}1&1\\3&1\end{bmatrix},\begin{bmatrix}3&-1\\2&1\end{bmatrix}\right\}$.
First Step: Let $\vec v_1=\vec w_1$ and $S_1=\operatorname{Span}\{\vec v_1\}$.
Second Step: We have
$$\operatorname{perp}_{S_1}(\vec w_2)=\begin{bmatrix}1&1\\3&1\end{bmatrix}-\frac{9}{9}\begin{bmatrix}1&0\\2&2\end{bmatrix}=\begin{bmatrix}0&1\\1&-1\end{bmatrix}$$
Thus, we take $\vec v_2=\begin{bmatrix}0&1\\1&-1\end{bmatrix}$ and define $S_2=\operatorname{Span}\{\vec v_1,\vec v_2\}$.
Third Step: Let
$$\vec v_3=\operatorname{perp}_{S_2}(\vec w_3)=\begin{bmatrix}3&-1\\2&1\end{bmatrix}-\frac{9}{9}\begin{bmatrix}1&0\\2&2\end{bmatrix}-\frac{0}{3}\begin{bmatrix}0&1\\1&-1\end{bmatrix}=\begin{bmatrix}2&-1\\0&-1\end{bmatrix}$$
Hence, an orthogonal basis for the subspace spanned by $B$ is $\left\{\begin{bmatrix}1&0\\2&2\end{bmatrix},\begin{bmatrix}0&1\\1&-1\end{bmatrix},\begin{bmatrix}2&-1\\0&-1\end{bmatrix}\right\}$.
(b) We have
$$\operatorname{proj}_S\!\left(\begin{bmatrix}4&3\\-2&1\end{bmatrix}\right)=\frac{2}{9}\begin{bmatrix}1&0\\2&2\end{bmatrix}+\frac{0}{3}\begin{bmatrix}0&1\\1&-1\end{bmatrix}+\frac{4}{6}\begin{bmatrix}2&-1\\0&-1\end{bmatrix}=\begin{bmatrix}14/9&-2/3\\4/9&-2/9\end{bmatrix}$$
A18 (a) Denote the given basis by $B=\{\vec w_1,\vec w_2,\vec w_3,\vec w_4\}=\left\{\begin{bmatrix}3&1\\-1&-2\end{bmatrix},\begin{bmatrix}3&1\\0&-1\end{bmatrix},\begin{bmatrix}3&1\\2&1\end{bmatrix},\begin{bmatrix}1&1\\1&0\end{bmatrix}\right\}$.
First Step: Let $\vec v_1=\vec w_1$ and $S_1=\operatorname{Span}\{\vec v_1\}$.
Second Step: We have
$$\operatorname{perp}_{S_1}(\vec w_2)=\begin{bmatrix}3&1\\0&-1\end{bmatrix}-\frac{12}{15}\begin{bmatrix}3&1\\-1&-2\end{bmatrix}=\begin{bmatrix}3/5&1/5\\4/5&3/5\end{bmatrix}$$
Thus, we take $\vec v_2=\begin{bmatrix}3&1\\4&3\end{bmatrix}$ and define $S_2=\operatorname{Span}\{\vec v_1,\vec v_2\}$.
Third Step: Let
$$\operatorname{perp}_{S_2}(\vec w_3)=\begin{bmatrix}3&1\\2&1\end{bmatrix}-\frac{6}{15}\begin{bmatrix}3&1\\-1&-2\end{bmatrix}-\frac{21}{35}\begin{bmatrix}3&1\\4&3\end{bmatrix}=\begin{bmatrix}0&0\\0&0\end{bmatrix}$$
Thus, $\vec w_3\in S_2$. So, we ignore $\vec w_3$ and calculate $\operatorname{perp}_{S_2}(\vec w_4)$ instead:
$$\operatorname{perp}_{S_2}(\vec w_4)=\begin{bmatrix}1&1\\1&0\end{bmatrix}-\frac{3}{15}\begin{bmatrix}3&1\\-1&-2\end{bmatrix}-\frac{8}{35}\begin{bmatrix}3&1\\4&3\end{bmatrix}=\begin{bmatrix}-2/7&4/7\\2/7&-2/7\end{bmatrix}$$
Thus, we take $\vec v_3=\begin{bmatrix}-1&2\\1&-1\end{bmatrix}$.
Hence, an orthogonal basis for the subspace spanned by $B$ is $\left\{\begin{bmatrix}3&1\\-1&-2\end{bmatrix},\begin{bmatrix}3&1\\4&3\end{bmatrix},\begin{bmatrix}-1&2\\1&-1\end{bmatrix}\right\}$.
(b) We have
$$\operatorname{proj}_S\!\left(\begin{bmatrix}4&3\\-2&1\end{bmatrix}\right)=\frac{15}{15}\begin{bmatrix}3&1\\-1&-2\end{bmatrix}+\frac{10}{35}\begin{bmatrix}3&1\\4&3\end{bmatrix}+\frac{-1}{7}\begin{bmatrix}-1&2\\1&-1\end{bmatrix}=\begin{bmatrix}4&1\\0&-1\end{bmatrix}$$
A19 (a) Denote the given basis by $B=\{\vec w_1,\vec w_2\}=\left\{\begin{bmatrix}1\\1\\0\end{bmatrix},\begin{bmatrix}-1\\1\\0\end{bmatrix}\right\}$.
First Step: Let $\vec v_1=\vec w_1$ and $S_1=\operatorname{Span}\{\vec v_1\}$.
Second Step: We have
$$\operatorname{perp}_{S_1}(\vec w_2)=\begin{bmatrix}-1\\1\\0\end{bmatrix}-\frac{-1}{3}\begin{bmatrix}1\\1\\0\end{bmatrix}=\begin{bmatrix}-2/3\\4/3\\0\end{bmatrix}$$
Thus, we take $\vec v_2=\begin{bmatrix}-1\\2\\0\end{bmatrix}$. Hence, an orthogonal basis for $S$ is $B=\left\{\begin{bmatrix}1\\1\\0\end{bmatrix},\begin{bmatrix}-1\\2\\0\end{bmatrix}\right\}$.
(b) We have
$$\operatorname{proj}_S(\vec e_1)=\frac{\langle\vec v_1,\vec e_1\rangle}{\|\vec v_1\|^2}\vec v_1+\frac{\langle\vec v_2,\vec e_1\rangle}{\|\vec v_2\|^2}\vec v_2=\frac{2}{3}\begin{bmatrix}1\\1\\0\end{bmatrix}+\frac{-2}{6}\begin{bmatrix}-1\\2\\0\end{bmatrix}=\begin{bmatrix}1\\0\\0\end{bmatrix}$$
A20 (a) Denote the given basis by $B=\{\vec w_1,\vec w_2,\vec w_3\}=\left\{\begin{bmatrix}1\\0\\0\end{bmatrix},\begin{bmatrix}1\\3\\2\end{bmatrix},\begin{bmatrix}0\\4\\1\end{bmatrix}\right\}$.
First Step: Let $\vec v_1=\vec w_1$ and $S_1=\operatorname{Span}\{\vec v_1\}$.
Second Step: We have
$$\operatorname{perp}_{S_1}(\vec w_2)=\begin{bmatrix}1\\3\\2\end{bmatrix}-\frac{2}{2}\begin{bmatrix}1\\0\\0\end{bmatrix}=\begin{bmatrix}0\\3\\2\end{bmatrix}$$
Thus, we take $\vec v_2=\begin{bmatrix}0\\3\\2\end{bmatrix}$ and define $S_2=\operatorname{Span}\{\vec v_1,\vec v_2\}$.
Third Step: Let
$$\vec v_3=\operatorname{perp}_{S_2}(\vec w_3)=\begin{bmatrix}0\\4\\1\end{bmatrix}-\frac{0}{2}\begin{bmatrix}1\\0\\0\end{bmatrix}-\frac{18}{21}\begin{bmatrix}0\\3\\2\end{bmatrix}=\begin{bmatrix}0\\10/7\\-5/7\end{bmatrix}$$
Thus, we take $\vec v_3=\begin{bmatrix}0\\2\\-1\end{bmatrix}$. Hence, an orthogonal basis for $S$ is $B=\left\{\begin{bmatrix}1\\0\\0\end{bmatrix},\begin{bmatrix}0\\3\\2\end{bmatrix},\begin{bmatrix}0\\2\\-1\end{bmatrix}\right\}$.
(b) We have
$$\operatorname{proj}_S(\vec e_1)=\frac{\langle\vec v_1,\vec e_1\rangle}{\|\vec v_1\|^2}\vec v_1+\frac{\langle\vec v_2,\vec e_1\rangle}{\|\vec v_2\|^2}\vec v_2+\frac{\langle\vec v_3,\vec e_1\rangle}{\|\vec v_3\|^2}\vec v_3=\frac{2}{2}\begin{bmatrix}1\\0\\0\end{bmatrix}+\frac{0}{21}\begin{bmatrix}0\\3\\2\end{bmatrix}+\frac{0}{7}\begin{bmatrix}0\\2\\-1\end{bmatrix}=\begin{bmatrix}1\\0\\0\end{bmatrix}$$
A21 (a) Denote the given basis by $B=\{\vec w_1,\vec w_2\}=\left\{\begin{bmatrix}1\\1\\1\end{bmatrix},\begin{bmatrix}2\\2\\1\end{bmatrix}\right\}$.
First Step: Let $\vec v_1=\vec w_1$ and $S_1=\operatorname{Span}\{\vec v_1\}$.
Second Step: We have
$$\operatorname{perp}_{S_1}(\vec w_2)=\begin{bmatrix}2\\2\\1\end{bmatrix}-\frac{9}{6}\begin{bmatrix}1\\1\\1\end{bmatrix}=\begin{bmatrix}1/2\\1/2\\-1/2\end{bmatrix}$$
Thus, we take $\vec v_2=\begin{bmatrix}1\\1\\-1\end{bmatrix}$. Hence, an orthogonal basis for $S$ is $B=\left\{\begin{bmatrix}1\\1\\1\end{bmatrix},\begin{bmatrix}1\\1\\-1\end{bmatrix}\right\}$.
(b) We have
$$\operatorname{proj}_S(\vec e_1)=\frac{\langle\vec v_1,\vec e_1\rangle}{\|\vec v_1\|^2}\vec v_1+\frac{\langle\vec v_2,\vec e_1\rangle}{\|\vec v_2\|^2}\vec v_2=\frac{2}{6}\begin{bmatrix}1\\1\\1\end{bmatrix}+\frac{2}{6}\begin{bmatrix}1\\1\\-1\end{bmatrix}=\begin{bmatrix}2/3\\2/3\\0\end{bmatrix}$$
A22 (a) Denote the given basis by $B=\{\vec w_1,\vec w_2,\vec w_3\}=\left\{\begin{bmatrix}2\\1\\1\end{bmatrix},\begin{bmatrix}1\\2\\1\end{bmatrix},\begin{bmatrix}1\\5\\2\end{bmatrix}\right\}$.
First Step: Let $\vec v_1=\vec w_1$ and $S_1=\operatorname{Span}\{\vec v_1\}$.
Second Step: We have
$$\operatorname{perp}_{S_1}(\vec w_2)=\begin{bmatrix}1\\2\\1\end{bmatrix}-\frac{9}{12}\begin{bmatrix}2\\1\\1\end{bmatrix}=\begin{bmatrix}-1/2\\5/4\\1/4\end{bmatrix}$$
Thus, we take $\vec v_2=\begin{bmatrix}-2\\5\\1\end{bmatrix}$ and define $S_2=\operatorname{Span}\{\vec v_1,\vec v_2\}$.
Third Step: Let
$$\vec v_3=\operatorname{perp}_{S_2}(\vec w_3)=\begin{bmatrix}1\\5\\2\end{bmatrix}-\frac{15}{12}\begin{bmatrix}2\\1\\1\end{bmatrix}-\frac{27}{36}\begin{bmatrix}-2\\5\\1\end{bmatrix}=\begin{bmatrix}0\\0\\0\end{bmatrix}$$
Thus, $\vec w_3\in S_2$. So, we ignore $\vec w_3$. Hence, an orthogonal basis for $S$ is $B=\left\{\begin{bmatrix}2\\1\\1\end{bmatrix},\begin{bmatrix}-2\\5\\1\end{bmatrix}\right\}$.
(b) We have
$$\operatorname{proj}_S(\vec e_1)=\frac{\langle\vec v_1,\vec e_1\rangle}{\|\vec v_1\|^2}\vec v_1+\frac{\langle\vec v_2,\vec e_1\rangle}{\|\vec v_2\|^2}\vec v_2=\frac{4}{12}\begin{bmatrix}2\\1\\1\end{bmatrix}+\frac{-4}{36}\begin{bmatrix}-2\\5\\1\end{bmatrix}=\begin{bmatrix}8/9\\-2/9\\2/9\end{bmatrix}$$
A23 We have that
$$\operatorname{proj}_S(p)=\frac{\langle 1+2x-x^2,\,x^2\rangle}{\|1+2x-x^2\|^2}\,(1+2x-x^2)=\frac{0}{9}(1+2x-x^2)=0$$
$$\operatorname{proj}_S(q)=\frac{\langle 1+2x-x^2,\,3+2x-x^2\rangle}{\|1+2x-x^2\|^2}\,(1+2x-x^2)=\frac{11}{9}(1+2x-x^2)$$
A24 We have that
$$\operatorname{proj}_S(p)=\frac{\langle 1+3x,\,x\rangle}{\|1+3x\|^2}(1+3x)=\frac{6}{21}(1+3x)=\frac{2}{7}(1+3x)$$
$$\operatorname{proj}_S(q)=\frac{\langle 1+3x,\,1-3x\rangle}{\|1+3x\|^2}(1+3x)=\frac{-15}{21}(1+3x)$$
A25 Observe that $\langle 1,x\rangle=1(-1)+1(0)+1(1)=0$, so $\{1,x\}$ is an orthogonal basis for $S$. Thus,
$$\operatorname{proj}_S(p)=\frac{\langle 1,\,1+x+x^2\rangle}{\|1\|^2}\,1+\frac{\langle x,\,1+x+x^2\rangle}{\|x\|^2}\,x=\frac{5}{3}\cdot 1+\frac{2}{2}\,x=\frac{5}{3}+x$$
$$\operatorname{proj}_S(q)=\frac{\langle 1,\,1-x^2\rangle}{\|1\|^2}\,1+\frac{\langle x,\,1-x^2\rangle}{\|x\|^2}\,x=\frac{1}{3}\cdot 1+\frac{0}{2}\,x=\frac{1}{3}$$
A26 Observe that $\langle 1+x,\,1-x^2\rangle=0(0)+1(1)+2(0)=1\ne 0$, so $\{1+x,\,1-x^2\}$ is not an orthogonal basis for $S$. Thus, we must apply the Gram–Schmidt Procedure.
First Step: Let $r_1=1+x$ and $S_1=\operatorname{Span}\{r_1\}$.
Second Step: We have
$$\operatorname{perp}_{S_1}(1-x^2)=(1-x^2)-\frac{1}{5}(1+x)=\frac{4}{5}-\frac{1}{5}x-x^2$$
Thus, we take $r_2=4-x-5x^2$. Hence, an orthogonal basis for $S$ is $B=\{1+x,\;4-x-5x^2\}$. Thus,
$$\operatorname{proj}_S(p)=\frac{\langle 1+x,\,1\rangle}{\|1+x\|^2}(1+x)+\frac{\langle 4-x-5x^2,\,1\rangle}{\|4-x-5x^2\|^2}(4-x-5x^2)=\frac{3}{5}(1+x)+\frac{2}{20}(4-x-5x^2)=1+\frac{1}{2}x-\frac{1}{2}x^2$$
$$\operatorname{proj}_S(q)=\frac{\langle 1+x,\,2x+3x^2\rangle}{\|1+x\|^2}(1+x)+\frac{\langle 4-x-5x^2,\,2x+3x^2\rangle}{\|4-x-5x^2\|^2}(4-x-5x^2)=\frac{10}{5}(1+x)+\frac{-10}{20}(4-x-5x^2)=\frac{5}{2}x+\frac{5}{2}x^2$$
A27 We have
$$\langle v_1+\cdots+v_k,\;v_1+\cdots+v_k\rangle=\langle v_1,v_1\rangle+\cdots+\langle v_1,v_k\rangle+\langle v_2,v_1\rangle+\langle v_2,v_2\rangle+\cdots+\langle v_2,v_k\rangle+\cdots+\langle v_k,v_1\rangle+\cdots+\langle v_k,v_k\rangle$$
Since $\{v_1,\ldots,v_k\}$ is orthogonal, we have $\langle v_i,v_j\rangle=0$ if $i\ne j$. Hence,
$$\|v_1+\cdots+v_k\|^2=\langle v_1,v_1\rangle+\cdots+\langle v_k,v_k\rangle=\|v_1\|^2+\cdots+\|v_k\|^2$$
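The Gram–Schmidt step and both projections in A26 can be checked numerically: under the evaluation inner product at $-1,0,1$, a polynomial in $P_2(\mathbb R)$ behaves exactly like its vector of values at those points. A sketch, assuming NumPy:

```python
import numpy as np

pts = np.array([-1.0, 0.0, 1.0])

def vals(c):
    """Value vector of p(x) = c0 + c1 x + c2 x^2 at the evaluation points."""
    return c[0] + c[1] * pts + c[2] * pts**2

# <p, q> = p(-1)q(-1) + p(0)q(0) + p(1)q(1) is a dot product of value vectors
r1 = vals([1, 1, 0])           # 1 + x
w = vals([1, 0, -1])           # 1 - x^2
r2 = w - (w @ r1) / (r1 @ r1) * r1
assert np.allclose(5 * r2, vals([4, -1, -5]))  # r2 is parallel to 4 - x - 5x^2

def proj(p):
    return (p @ r1) / (r1 @ r1) * r1 + (p @ r2) / (r2 @ r2) * r2

assert np.allclose(proj(vals([1, 0, 0])), vals([1, 0.5, -0.5]))  # proj_S(1)
assert np.allclose(proj(vals([0, 2, 3])), vals([0, 2.5, 2.5]))   # proj_S(2x + 3x^2)
```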
B Homework Problems
B1 $12$
B2 $-19$
B3 $\sqrt{18}$
B4 $1$
B5 $\sqrt{54}$
B6 $0$
B7 $-1$
B8 $-14$
B9 $3$
B10 $\sqrt{17}$
B11 $\sqrt{3}$
B12 It is an inner product.
B13 It is an inner product.
B14 It is not an inner product as it is not positive definite.
B15 It is not an inner product as it is not positive definite.
B16 (a) [orthogonal basis garbled in extraction] (b) $\operatorname{proj}_S\!\left(\begin{bmatrix}1&-1\\0&2\end{bmatrix}\right)=\begin{bmatrix}8/5&-7/10\\3/10&7/5\end{bmatrix}$
B17 (a) [orthogonal basis garbled in extraction] (b) $\operatorname{proj}_S\!\left(\begin{bmatrix}1&-1\\0&2\end{bmatrix}\right)=\begin{bmatrix}-1/9&-13/18\\5/18&2\end{bmatrix}$
B18 (a) $B=\left\{\begin{bmatrix}1\\1\\0\end{bmatrix},\begin{bmatrix}-2\\3\\0\end{bmatrix}\right\}$ (b) $\operatorname{proj}_S\begin{bmatrix}1\\0\\1\end{bmatrix}=\begin{bmatrix}1\\0\\0\end{bmatrix}$
B19 (a) $B=\left\{\begin{bmatrix}1\\1\\1\end{bmatrix},\begin{bmatrix}1\\1\\-5\end{bmatrix}\right\}$ (b) $\operatorname{proj}_S\begin{bmatrix}1\\0\\1\end{bmatrix}=\begin{bmatrix}3/5\\3/5\\1\end{bmatrix}$
B20 (a) $B=\left\{\begin{bmatrix}3\\2\\1\end{bmatrix},\begin{bmatrix}0\\-1\\4\end{bmatrix}\right\}$ (b) $\operatorname{proj}_S\begin{bmatrix}1\\0\\1\end{bmatrix}=\begin{bmatrix}5/6\\1/3\\7/6\end{bmatrix}$
B21 (a) $B=\left\{\begin{bmatrix}0\\2\\1\end{bmatrix},\begin{bmatrix}1\\-1\\4\end{bmatrix}\right\}$ (b) $\operatorname{proj}_S\begin{bmatrix}1\\0\\1\end{bmatrix}=\begin{bmatrix}1/3\\-1/9\\13/9\end{bmatrix}$
B22 $\operatorname{proj}_S(p)=\frac{5}{11}+\frac{5}{11}x+\frac{5}{11}x^2$, $\operatorname{proj}_S(q)=\frac{13}{11}+\frac{13}{11}x+\frac{13}{11}x^2$
B23 $\operatorname{proj}_S(p)=-\frac{2}{5}+\frac{2}{5}x$, $\operatorname{proj}_S(q)=\frac{2}{5}-\frac{2}{5}x$
B24 $\operatorname{proj}_S(p)=1-x-x^2$, $\operatorname{proj}_S(q)=1+x-x^2$
B25 $\operatorname{proj}_S(p)=0$, $\operatorname{proj}_S(q)=2+x^2$
B26 $\operatorname{proj}_S(p)=-\frac{3}{2}+\frac{2}{3}x+\frac{2}{3}x^2$, $\operatorname{proj}_S(q)=3x^2$
B27 $\operatorname{proj}_S(p)=\frac{1}{5}+\frac{1}{5}x$, $\operatorname{proj}_S(q)=\frac{2}{5}-\frac{1}{10}x+\frac{1}{2}x^2$
C Conceptual Problems
C1 The statement is false. Consider the standard inner product on $\mathbb R^2$ and take $\vec x=\begin{bmatrix}1\\0\end{bmatrix}$ and $\vec y=\begin{bmatrix}-1\\0\end{bmatrix}$. Then, we get $\vec x\cdot\vec y=-1$.
C2 The statement is true. We have
$$\langle x,0\rangle=\langle x,0x\rangle=0\langle x,x\rangle=0$$
C3 The statement is true. Since $\langle x,y\rangle=0$, we have
$$\langle sx,ty\rangle=st\langle x,y\rangle=0$$
C4 We have $d(x,y)=\|x-y\|\ge 0$ by Theorem 7.4.1(1).
C5 By Theorem 7.4.1(1), we have that $\|x-y\|=0$ if and only if $x-y=0$. Therefore, $d(x,y)=0$ if and only if $x=y$.
C6 We have
$$d(x,y)=\|x-y\|=\sqrt{\langle x-y,\,x-y\rangle}=\sqrt{\langle(-1)(y-x),\,(-1)(y-x)\rangle}=\sqrt{(-1)(-1)\langle y-x,\,y-x\rangle}=\|y-x\|=d(y,x)$$
C7 Using Theorem 7.4.1(4) we get
$$d(x,z)=\|x-z\|=\|x-y+y-z\|\le\|x-y\|+\|y-z\|=d(x,y)+d(y,z)$$
C8 We have
$$d(x,y)=\|x-y\|=\|x+z-z-y\|=\|x+z-(y+z)\|=d(x+z,\,y+z)$$
C9 Using Theorem 7.4.1(2) we get
$$d(tx,ty)=\|tx-ty\|=\|t(x-y)\|=|t|\,\|x-y\|=|t|\,d(x,y)$$
C10 (a) Because the inner product is symmetric,
(G)ji = ⟨vj, vi⟩ = ⟨vi, vj⟩ = (G)ij
(b) By the bilinearity of ⟨ , ⟩,
⟨x, y⟩ = ⟨x1v1 + x2v2 + x3v3, y1v1 + y2v2 + y3v3⟩
= x1y1⟨v1, v1⟩ + x1y2⟨v1, v2⟩ + · · · + x3y3⟨v3, v3⟩
= [x1(G)11 + x2(G)21 + x3(G)31] y1 + [x1(G)12 + x2(G)22 + x3(G)32] y2 + [x1(G)13 + x2(G)23 + x3(G)33] y3
= [x1 x2 x3] G [y1; y2; y3]
(c) We have
G = [⟨1, 1⟩ ⟨1, x⟩ ⟨1, x²⟩; ⟨x, 1⟩ ⟨x, x⟩ ⟨x, x²⟩; ⟨x², 1⟩ ⟨x², x⟩ ⟨x², x²⟩] = [3 3 5; 3 5 9; 5 9 17]
C11 (a) By the bilinearity of ⟨ , ⟩,
⟨x, y⟩ = ⟨x1e1 + x2e2, y1e1 + y2e2⟩
= x1⟨e1, y1e1 + y2e2⟩ + x2⟨e2, y1e1 + y2e2⟩
= x1y1⟨e1, e1⟩ + x1y2⟨e1, e2⟩ + x2y1⟨e2, e1⟩ + x2y2⟨e2, e2⟩
(b) Since the inner product is symmetric,
gji = ⟨ej, ei⟩ = ⟨ei, ej⟩ = gij
Thus,
⟨x, y⟩ = g11x1y1 + g12x1y2 + g21x2y1 + g22x2y2 = [x1 x2] [g11 g12; g21 g22] [y1; y2]
(c) Let v1 = (1/√g11) e1. Then ⟨v1, v1⟩ = 1. Let
w2 = perp_v1(e2) = e2 − (g12/g11) e1
Finally, let v2 = w2/‖w2‖.
(d) By construction ⟨v1, v1⟩ = 1 = ⟨v2, v2⟩ and ⟨v1, v2⟩ = 0. So,
G̃ = [1 0; 0 1]
Then
⟨x, y⟩ = ⟨x̃1v1 + x̃2v2, ỹ1v1 + ỹ2v2⟩
= x̃1ỹ1⟨v1, v1⟩ + x̃1ỹ2⟨v1, v2⟩ + x̃2ỹ1⟨v2, v1⟩ + x̃2ỹ2⟨v2, v2⟩
= x̃1ỹ1 + x̃2ỹ2
Section 7.5
A Practice Problems
A1 [Graph for −π ≤ t ≤ π: y = f(x) together with y = CP2π,3(f), y = CP2π,7(f), and y = CP2π,11(f); the y-axis is marked at π².]
A2 [Graph: y = f(x) together with y = CP2π,3(f), y = CP2π,7(f), and y = CP2π,11(f); the y-axis is marked at e^π.]
A3 [Graph: y = f(x) together with y = CP2π,3(f), y = CP2π,7(f), and y = CP2π,11(f); the y-axis is marked at 1.]
A4 CP2π,3(f) = 1/2 − (1/2) cos 2x
A5 CP2π,3(f) = 1 + (2π − 14) sin x + (−π + 5/2) sin 2x + (2π/3 − 10/9) sin 3x
A6 CP2π,3(f) = π²/3 + 1 − 4 cos x + cos 2x − (4/9) cos 3x
A7 CP2π,3(f) = π − 2 sin x + sin 2x − (2/3) sin 3x
A8 CP2π,3(f) = π/4 − (2/π) cos x + sin x − (1/2) sin 2x − (2/(9π)) cos 3x + (1/3) sin 3x
A9 CP2π,3(f) = (4/π) sin x + (4/(3π)) sin 3x
A10 CP2π,3(f) = 1/π + ((2 sin 1)/π) cos x + ((sin 2)/π) cos 2x + ((2 sin 3)/(3π)) cos 3x
B Homework Problems
B1 CP2π,2(f) = 1/2 + (1/2) cos 2x
B2 CP2π,2(f) = −2 sin x + sin 2x
B3 CP2π,2(f) = (3/4) sin x
B4 CP2π,2(f) = π/4 − (2/π) cos x + sin x − (1/2) sin 2x
Chapter 7 Quiz
     
 1   0
 0   1
E1 Observe that (1/3)(1, 0, 1, 1) · (1/2)(0, 1, −1, 0) = −1/6. Hence, the first and third vectors are not orthogonal, so the set is not orthogonal or orthonormal.
E2 Observe that (1/√3)(1, 0, 1, 1) · (1/√5)(0, 0, 1, −2) = −1/√15. So, the vectors are not orthogonal and hence the set is not orthogonal or orthonormal.
E3 We have
(1/√3)(1, 0, 1, 1) · (1/√3)(1, 1, −1, 0) = 0
(1/√3)(1, 0, 1, 1) · (1/√3)(0, −1, −1, 1) = 0
(1/√3)(1, 1, −1, 0) · (1/√3)(0, −1, −1, 1) = 0
Hence, the set is orthogonal. But, we also have
(1/√3)(1, 0, 1, 1) · (1/√3)(1, 0, 1, 1) = 1
(1/√3)(1, 1, −1, 0) · (1/√3)(1, 1, −1, 0) = 1
(1/√3)(0, −1, −1, 1) · (1/√3)(0, −1, −1, 1) = 1
Consequently, the set is orthonormal.
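The orthonormality claim in E3 can be re-checked numerically. The four-component vectors below are as reconstructed from the original, so treat them as an assumption:

```python
from math import isclose, sqrt

s = 1 / sqrt(3)
vecs = [
    [s * c for c in (1, 0, 1, 1)],
    [s * c for c in (1, 1, -1, 0)],
    [s * c for c in (0, -1, -1, 1)],
]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Pairwise dot products should be 0; each self dot product should be 1.
for i in range(3):
    for j in range(3):
        expected = 1.0 if i == j else 0.0
        assert isclose(dot(vecs[i], vecs[j]), expected, abs_tol=1e-12)
print("orthonormal")
```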
 
E4 We need to find all vectors (v1, v2, v3) ∈ R³ such that
0 = (v1, v2, v3) · (2, 1, 3) = 2v1 + v2 + 3v3
Row reducing the coefficient matrix of the corresponding homogeneous system gives
[2 1 3] ~ [1 1/2 3/2]
Thus, the general solution is
(v1, v2, v3) = s(−1, 2, 0) + t(−3, 0, 2),   s, t ∈ R
    

−1 −3



   

⊥
 2 ,  0
Hence, a basis for S is 
.








 0
2
To turn this into an orthogonal basis, we apply the Gram-Schmidt Procedure.
 
−1
 
First Step: Let !v 1 =  2 and S1 = Span{!v 1 }.
 
0
Second Step: We have
 

 

−3
[r] − 1 −12/5
3
 

 

2 =  −6/5 
perpS1 (!
w2 ) =  0 − 
  5
 

2
0
2


−12


Thus, we take !v 2 =  −6. Hence, an orthogonal basis for S⊥ is {!v 1 , !v 2 }.


10
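The Gram-Schmidt step in E4 can be re-checked with exact rational arithmetic via Python's `fractions` module (the vectors are as given in the solution):

```python
from fractions import Fraction as F

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

v1 = [F(-1), F(2), F(0)]
w2 = [F(-3), F(0), F(2)]
c = dot(w2, v1) / dot(v1, v1)               # 3/5
perp = [w - c * v for w, v in zip(w2, v1)]  # (-12/5, -6/5, 2)
v2 = [5 * x for x in perp]                  # scale to (-12, -6, 10)
assert v2 == [F(-12), F(-6), F(10)]
assert dot(v2, v1) == 0          # orthogonal to v1
assert dot(v2, [2, 1, 3]) == 0   # still lies in S-perp
```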
E5 Let v1, v2, v3 denote the vectors in B. We have
(v1 · x)/(v1 · v1) = 2/3,   (v2 · x)/(v2 · v2) = 5/3,   (v3 · x)/(v3 · v3) = 4/3
Hence, x = (2/3)v1 + (5/3)v2 + (4/3)v3.
E6 Let w1, w2, w3, w4 denote the vectors in B. Since the vectors are not orthogonal, we need to apply the Gram-Schmidt Procedure.
First Step: Let v1 = w1 = (1, 1, 3) and S1 = Span{v1}.
Second Step: We have
perpS1(w2) = (2, 2, −3) − (−5/11)(1, 1, 3) = (27/11, 27/11, −18/11)
We take v2 = (3, 3, −2) and define S2 = Span{v1, v2}.
Third Step: We have
perpS2(w3) = (3, 3, −4) − (−6/11)(1, 1, 3) − (26/22)(3, 3, −2) = (0, 0, 0)
Hence, w3 ∈ Span{w1, w2}. So, we ignore w3 and use w4. We have
perpS2(w4) = (3, 3, 5) − (21/11)(1, 1, 3) − (8/22)(3, 3, −2) = (0, 0, 0)
Thus, w4 ∈ Span{w1, w2}.
Hence, an orthogonal basis for the subspace spanned by B is {(1, 1, 3), (3, 3, −2)}.
Therefore,
projS(x) = ((v1 · x)/‖v1‖²) v1 + ((v2 · x)/‖v2‖²) v2 = (2, 2, −1)
perpS(x) = x − projS(x) = (1, 3, −1) − (2, 2, −1) = (−1, 1, 0)
"
1 1
E7 Let A =
. Since det A = 0, it follows that
2 2
!
+A, A, = det(AA) = (det A)(det A) = 0
Hence, it is not an inner product.
E8 We verify that ⟨ , ⟩ satisfies the three properties of an inner product.
⟨A, A⟩ = a11² + 2a12² + 2a21² + a22² ≥ 0
and equals zero if and only if A = O2,2.
⟨A, B⟩ = a11b11 + 2a12b12 + 2a21b21 + a22b22 = b11a11 + 2b12a12 + 2b21a21 + b22a22 = ⟨B, A⟩
⟨A, sB + tC⟩ = a11(sb11 + tc11) + 2a12(sb12 + tc12) + 2a21(sb21 + tc21) + a22(sb22 + tc22)
= s(a11b11 + 2a12b12 + 2a21b21 + a22b22) + t(a11c11 + 2a12c12 + 2a21c21 + a22c22)
= s⟨A, B⟩ + t⟨A, C⟩
Thus, it is an inner product.
E9 (a) Denote the vectors in the spanning set for S by w1, w2, w3.
First Step: Let v1 = w1 = (1, 0, 1, 0) and S1 = Span{v1}.
Second Step: Let
v2 = perpS1(w2) = (1, 1, 1, 1) − (2/2)(1, 0, 1, 0) = (0, 1, 0, 1)
Define S2 = Span{v1, v2}.
Third Step: Let
v3 = perpS2(w3) = (1, 3, 3, 1) − (4/2)(1, 0, 1, 0) − (4/2)(0, 1, 0, 1) = (−1, 1, 1, −1)
Hence, an orthogonal basis for S is {v1, v2, v3}.
(b) By the Approximation Theorem the closest point in S to x is projS(x). We find that
projS(x) = (0/2)(1, 0, 1, 0) + (−1/2)(0, 1, 0, 1) + (−5/4)(−1, 1, 1, −1) = (5/4, −7/4, −5/4, 3/4)
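The Gram-Schmidt computation in E9 (a) can be re-checked exactly. The spanning vectors w1, w2, w3 below are as reconstructed from the solution, so treat them as assumptions:

```python
from fractions import Fraction as F

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def perp(w, basis):
    # Subtract from w its projection onto each vector of an orthogonal basis.
    out = [F(c) for c in w]
    for v in basis:
        c = F(dot(w, v), dot(v, v))
        out = [o - c * vi for o, vi in zip(out, v)]
    return out

w1, w2, w3 = (1, 0, 1, 0), (1, 1, 1, 1), (1, 3, 3, 1)
v1 = w1
v2 = perp(w2, [v1])
v3 = perp(w3, [v1, v2])
assert v2 == [0, 1, 0, 1]
assert v3 == [-1, 1, 1, -1]
assert dot(v2, v1) == 0 and dot(v3, v1) == 0 and dot(v3, v2) == 0
```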
E10 One way of solving this is to first make a spanning set for R³ that contains w1 = (1, 2, 2) and then apply the Gram-Schmidt Procedure. We take w2 = (0, 1, −1) and w3 = (1, 0, 0).
First Step: Let v1 = w1 and S1 = Span{v1}.
Second Step: Let
v2 = perpS1(w2) = (0, 1, −1) − (0/9)(1, 2, 2) = (0, 1, −1)
Define S2 = Span{v1, v2}.
Third Step: We have
perpS2(w3) = (1, 0, 0) − (1/9)(1, 2, 2) − (0/2)(0, 1, −1) = (8/9, −2/9, −2/9)
So, we take v3 = (4, −1, −1).
Hence, an orthogonal basis for R³ that contains the vector w1 is {v1, v2, v3}.


 
1 −1
2
1

 


0
. We also let !y = 3 and
E11 Since we want an equation of the form y = a + bt, we let
X = [1 −1; 1 0; 1 1; 1 1],   y = (2, 3, 5, 6),   a = (a, b)
The normal system XᵀXa = Xᵀy gives
[4 1; 1 3] a = (16, 9)
Row reducing the corresponding augmented matrix gives
[4 1 | 16; 1 3 | 9] ~ [1 0 | 39/11; 0 1 | 20/11]
Thus, the equation of best fit is y = 39/11 + (20/11) t.
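The least-squares answer in E11 can be re-derived from the data points. The points (t, y) = (−1, 2), (0, 3), (1, 5), (1, 6) are inferred from the reconstructed X and y, so treat them as an assumption:

```python
from fractions import Fraction as F

ts = [-1, 0, 1, 1]
ys = [2, 3, 5, 6]
n = len(ts)
# Normal system for y = a + b t:
# [n      sum t ] [a]   [sum y  ]
# [sum t  sum t²] [b] = [sum t*y]
St, St2 = sum(ts), sum(t * t for t in ts)
Sy, Sty = sum(ys), sum(t * y for t, y in zip(ts, ys))
assert (n, St, St2) == (4, 1, 3)
assert (Sy, Sty) == (16, 9)
# Solve the 2x2 system by Cramer's rule.
det = F(n * St2 - St * St)
a = F(Sy * St2 - St * Sty) / det
b = F(n * Sty - St * Sy) / det
assert (a, b) == (F(39, 11), F(20, 11))
```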


 
1 (−1)2 
4


 
2 
2



0 . We also let !y = 1 and
E12 Since we want an equation of the form y = a + ct², we let
X = [1 (−1)²; 1 0²; 1 1²] = [1 1; 1 0; 1 1],   y = (4, 1, 5),   a = (a, c)
The normal system XᵀXa = Xᵀy gives
[3 2; 2 2] a = (10, 9)
Row reducing the corresponding augmented matrix gives
[3 2 | 10; 2 2 | 9] ~ [1 0 | 1; 0 1 | 7/2]
Thus, the equation of best fit is y = 1 + (7/2) t².
 


!
"
 3
1 2
x




E13 We have b = (3, 1, −1), x = (x1, x2), and A = [1 2; 1 1; 0 1]. The x that minimizes ‖Ax − b‖ must satisfy AᵀAx = Aᵀb. Row reducing the corresponding augmented matrix gives
[2 3 | 4; 3 6 | 6] ~ [1 0 | 2; 0 1 | 0]
Therefore, x = (2, 0) minimizes ‖Ax − b‖.
 


! "
1
1 −2
x




1
1. The !x that minimizes #A!x − !b# must satisfy AT A!x =
E14 We have b = (1, 2, 2), x = (x1, x2), and A = [1 −2; 2 1; 3 2]. The x that minimizes ‖Ax − b‖ must satisfy AᵀAx = Aᵀb. Row reducing the corresponding augmented matrix gives
[14 6 | 11; 6 9 | 4] ~ [1 0 | 5/6; 0 1 | −1/9]
Therefore, x = (5/6, −1/9) minimizes ‖Ax − b‖.
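The normal-equation data in E14 can be re-checked directly. The matrix A and vector b below are as reconstructed from the solution, so treat them as assumptions:

```python
from fractions import Fraction as F

A = [(1, -2), (2, 1), (3, 2)]
b = (1, 2, 2)
# Form A^T A and A^T b entrywise.
AtA = [[sum(r[i] * r[j] for r in A) for j in range(2)] for i in range(2)]
Atb = [sum(r[i] * bi for r, bi in zip(A, b)) for i in range(2)]
assert AtA == [[14, 6], [6, 9]]
assert Atb == [11, 4]
# The claimed minimizer solves the normal equations exactly.
x = (F(5, 6), F(-1, 9))
assert [sum(F(AtA[i][j]) * x[j] for j in range(2)) for i in range(2)] == Atb
```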
E15 We have PᵀP = I and RᵀR = I. Hence,
(PR)ᵀ(PR) = RᵀPᵀPR = RᵀIR = RᵀR = I
Thus, PR is orthogonal.


1 0


E16 Observe that the matrix P = [1 0; 0 1; 0 0] has orthonormal columns, but is not orthogonal since it is not square. Therefore, the statement is false.
E17 Let P = [v1 · · · vn]. First, observe that PᵀP is defined and will be an n × n matrix. Since P has orthonormal columns, we have vi · vj = 0 for any i ≠ j. Hence, for i ≠ j we have
(PᵀP)ij = vi · vj = 0
Since P has orthonormal columns, each vi is a unit vector. Thus,
(PᵀP)ii = vi · vi = 1
Therefore, PᵀP = I.
E18 The statement may be false when the basis {v1, . . . , vk} is not an orthogonal basis for S.
E19 We proved that perpS(x) ∈ S⊥. By definition, projS(x) ∈ S and so, since x ∈ S, we also have that x − projS(x) ∈ S. Hence, perpS(x) is in both S and S⊥. So, by Theorem 7.2.2(2), perpS(x) = 0.
E20 Assume that z is another vector in S such that
‖x − z‖ < ‖x − v‖
for all v ∈ S, v ≠ z. But y ∈ S, so that would imply that
‖x − z‖ < ‖x − y‖
But, since z ∈ S we must also have that
‖x − y‖ < ‖x − z‖
Hence, z cannot exist.
E21 (a) First Step: Let r1 = 1 and S1 = Span{r1}.
Second Step: We have
perpS1(x − x²) = (x − x²) − (−2/3)(1) = 2/3 + x − x²
Thus, we take r2 = 2 + 3x − 3x². Hence, an orthogonal basis for S is B = {r1, r2}.
(b) We have
projS(1 + x + x²) = (5/3)(1) + (4/24)(2 + 3x − 3x²) = 2 + (1/2)x − (1/2)x²
perpS(1 + x + x²) = 1 + x + x² − projS(1 + x + x²) = −1 + (1/2)x + (3/2)x²
Chapter 7 Further Problems
F1 Assume that for some orthogonal basis B for W we have projW(v) = a and perpW(v) = b. Also assume that for a different orthogonal basis C for W we have projW(v) = x and perpW(v) = y. Let c = a − x. Observe that c ∈ W since a, x ∈ W.
By definition, we have a + b = v = x + y. Since a = c + x, we can rewrite this as
c + x + b = x + y
Simplifying, we find that c = y − b ∈ W⊥ since b, y ∈ W⊥.
Thus, c ∈ W and c ∈ W⊥, hence c = 0. Thus, a = x.
F2 (a) For every x ∈ R³ we have ‖L(x)‖ = ‖x‖, so L(x) · L(x) = ‖x‖² for every x ∈ R³ and ‖L(x + y)‖² = ‖x + y‖² for every x + y ∈ R³. Hence,
L(x + y) · L(x + y) = (x + y) · (x + y)
(L(x) + L(y)) · (L(x) + L(y)) = x · x + 2x · y + y · y
L(x) · L(x) + 2L(x) · L(y) + L(y) · L(y) = ‖x‖² + 2x · y + ‖y‖²
‖x‖² + 2L(x) · L(y) + ‖y‖² = ‖x‖² + 2x · y + ‖y‖²
L(x) · L(y) = x · y
as required.
(b) If [L] is orthogonal, then [L]ᵀ[L] = I. As in Section 7.1 Problem C3, it follows that
([L]x) · ([L]x) = xᵀ[L]ᵀ[L]x = x · x
so L is an isometry.
Conversely, suppose that L is an isometry. Let {e1, e2, e3} be the standard basis for R³. Since L preserves dot products, {L(e1), L(e2), L(e3)} is an orthonormal set. Thus, P = [L(e1) L(e2) L(e3)] is orthogonal.
(c) Let L be an isometry of R³. The characteristic polynomial of [L] is of degree three, so it has either 1 or 3 real roots (since complex roots come in complex conjugate pairs).
(d) We first note that we must not assume that an eigenvalue of algebraic multiplicity greater than 1 has geometric multiplicity greater than 1.
For simplicity, denote the standard matrix of L by A, and suppose that 1 is an eigenvalue with eigenvector u. Let v and w be vectors such that {u, v, w} is an orthonormal basis for R³. Then P = [u v w] is an orthogonal matrix. We calculate
PᵀAP = [uᵀ; vᵀ; wᵀ] A [u v w]
= [uᵀAu uᵀAv uᵀAw; vᵀAu vᵀAv vᵀAw; wᵀAu wᵀAv wᵀAw]
= [uᵀu uᵀAv uᵀAw; vᵀu vᵀAv vᵀAw; wᵀu wᵀAv wᵀAw]
using Au = u in the first column.
 
1
 
Since {u, v, w} is orthonormal, the first column is (1, 0, 0). Since PᵀAP is orthogonal, this implies that the first row must be [1 0 0]. Thus,
PᵀAP = [1 0_{1×2}; 0_{2×1} A*]
where A* is orthogonal because its columns are orthonormal.
Consider det(A − λI). Expanding the determinant along the first column gives
det(A − λI) = (1 − λ) det(A* − λI)
as required. The case where one eigenvalue is −1 is similar.
(e) The possible forms of a 2 × 2 orthogonal matrix were determined in Problem F7 of Chapter 3. It follows that A* is either the matrix of a rotation, the matrix of a reflection, or the matrix of a composition of a rotation and a reflection. Note that the possibilities include A* = I (rotation through angle 0), A* = −I (rotation through angle π, or a composition of two reflections), and A* = [1 0; 0 −1] (a reflection). Thus, with one eigenvalue of A being 1, we have that A describes a rotation, a reflection, or a composition of reflections.
In the case where the first eigenvalue of A is −1, we get that A is the matrix of a composition of a reflection with any of the mappings in the previous case.
F3 If A² = I and A = Aᵀ, then AᵀA = A² = I, so A is orthogonal.
Suppose that A² = I and AᵀA = I. Then Aᵀ = AᵀA² = (AᵀA)A = A, so A is symmetric.
If A = Aᵀ and AᵀA = I, then A² = AᵀA = I, so A is an involution.
F4 The vector x is in (S + T)⊥ if and only if x · s = 0 for every s ∈ S and x · t = 0 for every t ∈ T. Then x ∈ S⊥ and x ∈ T⊥, so x ∈ S⊥ ∩ T⊥.
Conversely, if x ∈ S⊥ ∩ T⊥, then x · s = 0 for every s ∈ S and x · t = 0 for every t ∈ T. Thus, x · (s + t) = 0 for every s ∈ S and t ∈ T. Thus, x ∈ (S + T)⊥.
F5 Let {u1, . . . , uk} be an orthonormal basis for Si. Since Si is a subset of Si+1, we can extend this orthonormal basis to an orthonormal basis {u1, . . . , uℓ} for Si+1. Let ci = ⟨ui, v⟩ for 1 ≤ i ≤ ℓ. Observe that
‖projSi(v)‖² = ‖c1u1 + · · · + ckuk‖² = c1² + · · · + ck²
‖projSi+1(v)‖² = ‖c1u1 + · · · + cℓuℓ‖² = c1² + · · · + cℓ² ≥ ‖projSi(v)‖²
By definition, we have that
projSi(v) + perpSi(v) = v = projSi+1(v) + perpSi+1(v)
Hence,
‖projSi(v) + perpSi(v)‖² = ‖projSi+1(v) + perpSi+1(v)‖²
‖projSi(v)‖² + ‖perpSi(v)‖² = ‖projSi+1(v)‖² + ‖perpSi+1(v)‖²
‖perpSi(v)‖² = ‖projSi+1(v)‖² − ‖projSi(v)‖² + ‖perpSi+1(v)‖²
‖perpSi(v)‖² ≥ ‖perpSi+1(v)‖²
as required.
F6 Since A is invertible, its columns {a1, . . . , an} form a basis for Rⁿ. Perform the Gram-Schmidt Procedure on the basis, thus producing an orthonormal basis {q1, . . . , qn}. It is a feature of this procedure that for each i,
Span{a1, . . . , ai} = Span{q1, . . . , qi}
Hence, we can determine coefficients rij such that
aj = r1j q1 + · · · + rjj qj + 0qj+1 + · · · + 0qn
Thus we obtain
[a1 · · · an] = [q1 · · · qn] [r11 r12 · · · r1n; 0 r22 · · · r2n; … ; 0 0 · · · rnn]
That is, A = QR where Q is orthogonal and R is upper triangular.
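The construction in F6 can be sketched numerically with classical Gram-Schmidt. The small test matrix below is a stand-in example chosen here, not one from the text:

```python
import numpy as np

def qr_gram_schmidt(A):
    """QR factorization of an invertible A via classical Gram-Schmidt."""
    n = A.shape[1]
    Q = np.zeros_like(A, dtype=float)
    R = np.zeros((n, n))
    for j in range(n):
        v = A[:, j].astype(float)          # start from column a_j
        for i in range(j):
            R[i, j] = Q[:, i] @ A[:, j]    # coefficient r_ij = q_i . a_j
            v -= R[i, j] * Q[:, i]         # remove the q_i component
        R[j, j] = np.linalg.norm(v)        # remaining length goes on the diagonal
        Q[:, j] = v / R[j, j]
    return Q, R

A = np.array([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
Q, R = qr_gram_schmidt(A)
assert np.allclose(Q @ R, A)               # A = QR
assert np.allclose(Q.T @ Q, np.eye(3))     # Q is orthogonal
assert np.allclose(R, np.triu(R))          # R is upper triangular
```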
Chapter 8 Solutions
Section 8.1
A Practice Problems
A1 Since Aᵀ = A, A is symmetric.
A2 Since Bᵀ = B, B is symmetric.
A3 C is not symmetric since c12 ≠ c21.
A4 Since Dᵀ = D, D is symmetric.
For Problems A5 - A16, alternate correct answers are possible.
A5 We have
C(λ) = det[1−λ −3; −3 1−λ] = λ² − 2λ − 8 = (λ − 4)(λ + 2)
The eigenvalues are λ1 = 4 and λ2 = −2.
For λ1 = 4 we get
A − 4I = [−3 −3; −3 −3] ~ [1 1; 0 0]
A basis for the eigenspace of λ1 is {(−1, 1)}.
For λ2 = −2 we get
A − (−2)I = [3 −3; −3 3] ~ [1 −1; 0 0]
A basis for the eigenspace of λ2 is {(1, 1)}.
Observe that, as predicted by Theorem 8.1.2, the eigenvectors in the different eigenspaces are orthogonal. Hence, {(−1, 1), (1, 1)} forms an orthogonal basis for R² of eigenvectors of A. To orthogonally diagonalize A we need to form an orthogonal matrix P whose columns are eigenvectors of A. It is very important to remember that the columns of an orthogonal matrix must form an orthonormal basis for Rⁿ. Thus, we must normalize the vectors in the orthogonal basis for R². We get {(−1/√2, 1/√2), (1/√2, 1/√2)}. Hence, the matrix P = [−1/√2 1/√2; 1/√2 1/√2] orthogonally diagonalizes A to PᵀAP = [4 0; 0 −2].
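The diagonalization in A5 can be re-checked numerically; the matrix A = [1 −3; −3 1] is inferred from the characteristic polynomial shown above:

```python
import numpy as np

A = np.array([[1.0, -3.0], [-3.0, 1.0]])  # inferred from C(lambda)
s = 1 / np.sqrt(2)
P = np.array([[-s, s], [s, s]])           # normalized eigenvectors as columns
assert np.allclose(P.T @ P, np.eye(2))    # P is orthogonal
assert np.allclose(P.T @ A @ P, np.diag([4.0, -2.0]))
```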
A6 We have
C(λ) = det[5−λ 3; 3 −3−λ] = λ² − 2λ − 24 = (λ + 4)(λ − 6)
The eigenvalues are λ1 = −4 and λ2 = 6.
For λ1 = −4 we get
A − (−4)I = [9 3; 3 1] ~ [1 1/3; 0 0]
A basis for the eigenspace of λ1 is {(−1, 3)}.
For λ2 = 6 we get
A − 6I = [−1 3; 3 −9] ~ [1 −3; 0 0]
A basis for the eigenspace of λ2 is {(3, 1)}.
We normalize the basis vectors for the eigenspaces to get the orthonormal basis {(−1/√10, 3/√10), (3/√10, 1/√10)} for R² of eigenvectors of A. Hence, P = [−1/√10 3/√10; 3/√10 1/√10] orthogonally diagonalizes A to PᵀAP = [−4 0; 0 6].
A7 We have
C(λ) = det[5−λ 2; 2 2−λ] = λ² − 7λ + 6 = (λ − 6)(λ − 1)
The eigenvalues are λ1 = 6 and λ2 = 1.
For λ1 = 6 we get
A − 6I = [−1 2; 2 −4] ~ [1 −2; 0 0]
A basis for the eigenspace of λ1 is {(2, 1)}.
For λ2 = 1 we get
A − I = [4 2; 2 1] ~ [1 1/2; 0 0]
A basis for the eigenspace of λ2 is {(−1, 2)}.
We normalize the basis vectors for the eigenspaces to get the orthonormal basis {(2/√5, 1/√5), (−1/√5, 2/√5)} for R² of eigenvectors of A. Hence, P = [2/√5 −1/√5; 1/√5 2/√5] orthogonally diagonalizes A to PᵀAP = [6 0; 0 1].
A8 We have
C(λ) = det[4−λ 2; 2 1−λ] = λ² − 5λ = λ(λ − 5)
The eigenvalues are λ1 = 0 and λ2 = 5.
For λ1 = 0 we get
A − 0I = [4 2; 2 1] ~ [1 1/2; 0 0]
A basis for the eigenspace of λ1 is {(−1, 2)}.
For λ2 = 5 we get
A − 5I = [−1 2; 2 −4] ~ [1 −2; 0 0]
A basis for the eigenspace of λ2 is {(2, 1)}.
We normalize the basis vectors for the eigenspaces to get the orthonormal basis {(−1/√5, 2/√5), (2/√5, 1/√5)} for R² of eigenvectors of A. Hence, P = [−1/√5 2/√5; 2/√5 1/√5] orthogonally diagonalizes A to PᵀAP = [0 0; 0 5].
A9 We have
C(λ) = det[−λ 1 1; 1 −λ 1; 1 1 −λ] = −(λ − 2)(λ + 1)²
The eigenvalues are λ1 = 2 and λ2 = −1.
For λ1 = 2 we get
A − 2I = [−2 1 1; 1 −2 1; 1 1 −2] ~ [1 0 −1; 0 1 −1; 0 0 0]
A basis for the eigenspace of λ1 is {(1, 1, 1)}.
For λ2 = −1 we get
A − (−1)I = [1 1 1; 1 1 1; 1 1 1] ~ [1 1 1; 0 0 0; 0 0 0]
A basis for the eigenspace of λ2 is {(−1, 1, 0), (−1, 0, 1)}. These vectors are not orthogonal to each other.
Since we require an orthonormal basis of eigenvectors of A, we need to find an orthonormal basis for the eigenspace of λ2. We can do this by applying the Gram-Schmidt Procedure to this set.
Pick v1 = (−1, 1, 0). Then S1 = Span{v1} and
v2 = perpS1((−1, 0, 1)) = (−1, 0, 1) − (1/2)(−1, 1, 0) = (−1/2, −1/2, 1)
Then, {(−1, 1, 0), (−1, −1, 2)} is an orthogonal basis for the eigenspace of λ2.
We normalize the basis vectors for the eigenspaces to get the orthonormal basis {(1/√3)(1, 1, 1), (1/√2)(−1, 1, 0), (1/√6)(−1, −1, 2)} for R³ of eigenvectors of A. Hence,
P = [1/√3 −1/√2 −1/√6; 1/√3 1/√2 −1/√6; 1/√3 0 2/√6]
orthogonally diagonalizes A to PᵀAP = [2 0 0; 0 −1 0; 0 0 −1].
A10 We have
C(λ) = det[1−λ 0 −2; 0 −1−λ −2; −2 −2 −λ] = −λ(λ − 3)(λ + 3)
The eigenvalues are λ1 = 0, λ2 = 3, and λ3 = −3.
For λ1 = 0 we get
A − 0I = [1 0 −2; 0 −1 −2; −2 −2 0] ~ [1 0 −2; 0 1 2; 0 0 0]
A basis for the eigenspace of λ1 is {(2, −2, 1)}.
For λ2 = 3 we get
A − 3I = [−2 0 −2; 0 −4 −2; −2 −2 −3] ~ [1 0 1; 0 1 1/2; 0 0 0]
A basis for the eigenspace of λ2 is {(2, 1, −2)}.
For λ3 = −3 we get
A − (−3)I = [4 0 −2; 0 2 −2; −2 −2 3] ~ [1 0 −1/2; 0 1 −1; 0 0 0]
A basis for the eigenspace of λ3 is {(1, 2, 2)}.
We normalize the basis vectors for the eigenspaces to get the orthonormal basis {(2/3, −2/3, 1/3), (2/3, 1/3, −2/3), (1/3, 2/3, 2/3)} for R³ of eigenvectors of A. Hence, P = [2/3 2/3 1/3; −2/3 1/3 2/3; 1/3 −2/3 2/3] orthogonally diagonalizes A to PᵀAP = [0 0 0; 0 3 0; 0 0 −3].
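The A10 diagonalization can also be checked numerically; the matrix A is inferred from the determinant display above:

```python
import numpy as np

A = np.array([[1.0, 0.0, -2.0],
              [0.0, -1.0, -2.0],
              [-2.0, -2.0, 0.0]])          # inferred from C(lambda)
P = np.array([[2, 2, 1],
              [-2, 1, 2],
              [1, -2, 2]]) / 3.0           # normalized eigenvectors as columns
assert np.allclose(P.T @ P, np.eye(3))     # P is orthogonal
assert np.allclose(P.T @ A @ P, np.diag([0.0, 3.0, -3.0]))
```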
A11 We have
C(λ) = det[1−λ 8 4; 8 1−λ −4; 4 −4 7−λ] = −(λ − 9)²(λ + 9)
The eigenvalues are λ1 = 9 and λ2 = −9.
For λ1 = 9 we get
A − 9I = [−8 8 4; 8 −8 −4; 4 −4 −2] ~ [1 −1 −1/2; 0 0 0; 0 0 0]
A basis for the eigenspace of λ1 is {(1, 0, 2), (1, 1, 0)}. These vectors are not orthogonal to each other. Since we require an orthonormal basis of eigenvectors of A, we need to find an orthonormal basis for the eigenspace of λ1. We can do this by applying the Gram-Schmidt Procedure to this set.
Pick v1 = (1, 0, 2). Then S1 = Span{v1} and
v2 = perpS1((1, 1, 0)) = (1, 1, 0) − (1/5)(1, 0, 2) = (4/5, 1, −2/5)
Then, {(1, 0, 2), (4, 5, −2)} is an orthogonal basis for the eigenspace of λ1.
For λ2 = −9 we get
A − (−9)I = [10 8 4; 8 10 −4; 4 −4 16] ~ [1 0 2; 0 1 −2; 0 0 0]
A basis for the eigenspace of λ2 is {(−2, 2, 1)}.
We normalize the basis vectors for the eigenspaces to get the orthonormal basis {(1/√5)(1, 0, 2), (1/√45)(4, 5, −2), (1/3)(−2, 2, 1)} for R³ of eigenvectors of A. Hence, P = [1/√5 4/√45 −2/3; 0 5/√45 2/3; 2/√5 −2/√45 1/3] orthogonally diagonalizes A to PᵀAP = [9 0 0; 0 9 0; 0 0 −9].
A12 We have
C(λ) = det[1−λ 2 1; 2 1−λ 1; 1 1 2−λ] = −(λ + 1)(λ − 4)(λ − 1)
The eigenvalues are λ1 = −1, λ2 = 4, and λ3 = 1.
For λ1 = −1 we get
A + I = [2 2 1; 2 2 1; 1 1 3] ~ [1 1 0; 0 0 1; 0 0 0]
A basis for the eigenspace of λ1 is {(−1, 1, 0)}.
For λ2 = 4 we get
A − 4I = [−3 2 1; 2 −3 1; 1 1 −2] ~ [1 0 −1; 0 1 −1; 0 0 0]
A basis for the eigenspace of λ2 is {(1, 1, 1)}.
For λ3 = 1 we get
A − I = [0 2 1; 2 0 1; 1 1 1] ~ [1 0 1/2; 0 1 1/2; 0 0 0]
A basis for the eigenspace of λ3 is {(−1, −1, 2)}.
We normalize the basis vectors for the eigenspaces to get the orthonormal basis {(1/√2)(−1, 1, 0), (1/√3)(1, 1, 1), (1/√6)(−1, −1, 2)} for R³ of eigenvectors of A. Hence, P = [−1/√2 1/√3 −1/√6; 1/√2 1/√3 −1/√6; 0 1/√3 2/√6] orthogonally diagonalizes A to PᵀAP = [−1 0 0; 0 4 0; 0 0 1].
A13 We have
C(λ) = det[−λ 1 −1; 1 −λ 1; −1 1 −λ] = −(λ − 1)²(λ + 2)
The eigenvalues are λ1 = 1 and λ2 = −2.
For λ1 = 1 we get
A − I = [−1 1 −1; 1 −1 1; −1 1 −1] ~ [1 −1 1; 0 0 0; 0 0 0]
A basis for the eigenspace of λ1 is {(1, 1, 0), (−1, 0, 1)}. To get an orthonormal basis of eigenvectors of A, we apply the Gram-Schmidt Procedure to this set.
Pick v1 = (1, 1, 0). Then S1 = Span{v1} and
v2 = perpS1((−1, 0, 1)) = (−1, 0, 1) − (−1/2)(1, 1, 0) = (−1/2, 1/2, 1)
Then, {(1, 1, 0), (−1, 1, 2)} is an orthogonal basis for the eigenspace of λ1.
For λ2 = −2 we get
A + 2I = [2 1 −1; 1 2 1; −1 1 2] ~ [1 0 −1; 0 1 1; 0 0 0]
A basis for the eigenspace of λ2 is {(1, −1, 1)}.
We normalize the basis vectors for the eigenspaces to get the orthonormal basis {(1/√2)(1, 1, 0), (1/√6)(−1, 1, 2), (1/√3)(1, −1, 1)} for R³ of eigenvectors of A. Hence, P = [1/√2 −1/√6 1/√3; 1/√2 1/√6 −1/√3; 0 2/√6 1/√3] orthogonally diagonalizes A to PᵀAP = [1 0 0; 0 1 0; 0 0 −2].
A14 We have
C(λ) = det[1−λ 0 −1; 0 1−λ 1; −1 1 2−λ] = −λ(λ − 3)(λ − 1)
The eigenvalues are λ1 = 0, λ2 = 3, and λ3 = 1.
For λ1 = 0 we get
A − 0I = [1 0 −1; 0 1 1; −1 1 2] ~ [1 0 −1; 0 1 1; 0 0 0]
A basis for the eigenspace of λ1 is {(1, −1, 1)}.
For λ2 = 3 we get
A − 3I = [−2 0 −1; 0 −2 1; −1 1 −1] ~ [1 0 1/2; 0 1 −1/2; 0 0 0]
A basis for the eigenspace of λ2 is {(−1, 1, 2)}.
For λ3 = 1 we get
A − I = [0 0 −1; 0 0 1; −1 1 1] ~ [1 −1 0; 0 0 1; 0 0 0]
A basis for the eigenspace of λ3 is {(1, 1, 0)}.
We normalize the basis vectors for the eigenspaces to get the orthonormal basis {(1/√3)(1, −1, 1), (1/√6)(−1, 1, 2), (1/√2)(1, 1, 0)} for R³ of eigenvectors of A. Hence, P = [1/√3 −1/√6 1/√2; −1/√3 1/√6 1/√2; 1/√3 2/√6 0] orthogonally diagonalizes A to PᵀAP = [0 0 0; 0 3 0; 0 0 1].
A15 We have
C(λ) = det[1−λ 2 −4; 2 −2−λ −2; −4 −2 1−λ] = −(λ + 3)²(λ − 6)
The eigenvalues are λ1 = −3 and λ2 = 6.
For λ1 = −3 we get
A + 3I = [4 2 −4; 2 1 −2; −4 −2 4] ~ [1 1/2 −1; 0 0 0; 0 0 0]
A basis for the eigenspace of λ1 is {(1, −2, 0), (1, 0, 1)}. To get an orthonormal basis of eigenvectors of A, we apply the Gram-Schmidt Procedure to this set.
Pick v1 = (1, −2, 0). Then S1 = Span{v1} and
v2 = perpS1((1, 0, 1)) = (1, 0, 1) − (1/5)(1, −2, 0) = (4/5, 2/5, 1)
Then, {(1, −2, 0), (4, 2, 5)} is an orthogonal basis for the eigenspace of λ1.
For λ2 = 6 we get
A − 6I = [−5 2 −4; 2 −8 −2; −4 −2 −5] ~ [1 0 1; 0 1 1/2; 0 0 0]
A basis for the eigenspace of λ2 is {(−2, −1, 2)}.
We normalize the basis vectors for the eigenspaces to get the orthonormal basis {(1/√5)(1, −2, 0), (1/√45)(4, 2, 5), (1/3)(−2, −1, 2)} for R³ of eigenvectors of A. Hence, P = [1/√5 4/√45 −2/3; −2/√5 2/√45 −1/3; 0 5/√45 2/3] orthogonally diagonalizes A to PᵀAP = [−3 0 0; 0 −3 0; 0 0 6].
A16 We have
C(λ) = det[−2−λ 2 −1; 2 1−λ −2; −1 −2 −2−λ] = −(λ + 3)²(λ − 3)
The eigenvalues are λ1 = −3 and λ2 = 3.
For λ1 = −3 we get
A + 3I = [1 2 −1; 2 4 −2; −1 −2 1] ~ [1 2 −1; 0 0 0; 0 0 0]
A basis for the eigenspace of λ1 is {(−2, 1, 0), (1, 0, 1)}. To get an orthonormal basis of eigenvectors of A, we apply the Gram-Schmidt Procedure to this set.
Pick v1 = (−2, 1, 0). Then S1 = Span{v1} and
v2 = perpS1((1, 0, 1)) = (1, 0, 1) − (−2/5)(−2, 1, 0) = (1/5, 2/5, 1)
Then, {(−2, 1, 0), (1, 2, 5)} is an orthogonal basis for the eigenspace of λ1.
For λ2 = 3 we get
A − 3I = [−5 2 −1; 2 −2 −2; −1 −2 −5] ~ [1 0 1; 0 1 2; 0 0 0]
A basis for the eigenspace of λ2 is {(−1, −2, 1)}.
We normalize the basis vectors for the eigenspaces to get the orthonormal basis {(1/√5)(−2, 1, 0), (1/√30)(1, 2, 5), (1/√6)(−1, −2, 1)} for R³ of eigenvectors of A. Hence, P = [−2/√5 1/√30 −1/√6; 1/√5 2/√30 −2/√6; 0 5/√30 1/√6] orthogonally diagonalizes A to PᵀAP = [−3 0 0; 0 −3 0; 0 0 3].
Copyright © 2020 Pearson Canada Inc.
10
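The orthogonal diagonalization in Problem A16 can be checked numerically. The sketch below is our own addition (not part of the printed solutions); it uses NumPy's `eigh`, which returns ascending eigenvalues and orthonormal eigenvector columns for a symmetric matrix.

```python
import numpy as np

# Symmetric matrix from Problem A16.
A = np.array([[-2.0, 2.0, -1.0],
              [2.0, 1.0, -2.0],
              [-1.0, -2.0, -2.0]])

# eigh gives eigenvalues in ascending order and an orthogonal P,
# so P^T A P should be the diagonal matrix of eigenvalues.
eigvals, P = np.linalg.eigh(A)
D = P.T @ A @ P

print(np.round(eigvals, 6))              # expect -3 (twice) and 3
print(np.allclose(D, np.diag(eigvals)))  # True: P orthogonally diagonalizes A
```

The specific columns of P may differ from the hand computation by sign or by a different orthonormal basis of the repeated eigenspace; the diagonal D is the same.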
B Homework Problems
B1 A is not symmetric since a12 ≠ a21.
B2 B is not symmetric since b12 ≠ b21.
B3 C is not symmetric since c31 ≠ c13.
B4 Since Dᵀ = D, D is symmetric.
For Problems B5 - B16, alternate correct answers are possible.
B5 P = [−1/√2 1/√2; 1/√2 1/√2], D = [0 0; 0 4]

B6 P = [2/√5 −1/√5; 1/√5 2/√5], D = [3 0; 0 8]

B7 P = [3/√10 −1/√10; 1/√10 3/√10], D = [5 0; 0 −5]

B8 P = [1/√5 −2/√5; 2/√5 1/√5], D = [−3 0; 0 2]

B9 P = [1/√3 −1/√2 1/√6; −1/√3 0 2/√6; 1/√3 1/√2 1/√6], D = diag(−1, 7, −7)

B10 P = [2/√45 −2/√5 −1/3; 4/√45 1/√5 −2/3; 5/√45 0 2/3], D = diag(6, 1, −3)

B11 P = [−1/√2 1/√6 1/√3; 0 −2/√6 1/√3; 1/√2 1/√6 1/√3], D = diag(0, 0, 3)

B12 P = [−2/3 1/√2 −1/√18; −1/3 0 4/√18; 2/3 1/√2 1/√18], D = diag(−2, 7, 7)

B13 P = [−1/√2 1/√6 1/√3; 0 −2/√6 1/√3; 1/√2 1/√6 1/√3], D = diag(6, 6, −6)

B14 P = [1/√2 −1/√6 −1/√3; 0 2/√6 −1/√3; 1/√2 1/√6 1/√3], D = diag(6, 6, 3)

B15 P = [−2/√5 1/√30 −1/√6; 1/√5 2/√30 −2/√6; 0 5/√30 1/√6], D = diag(−1, −1, 5)

B16 P = [1/√6 1/√2 −1/√3; −1/√6 1/√2 1/√3; 2/√6 0 1/√3], D = diag(5, 1, 11)
C Conceptual Problems
C1 We have that Aᵀ = A and Bᵀ = B. Using properties of the transpose we get:
(a) (A + B)ᵀ = Aᵀ + Bᵀ = A + B. Hence, A + B is symmetric.
(b) (AᵀA)ᵀ = Aᵀ(Aᵀ)ᵀ = AᵀA. Hence, AᵀA is symmetric.
(c) (AB)ᵀ = BᵀAᵀ = BA. So, AB is not symmetric unless AB = BA.
(d) (A²)ᵀ = (AA)ᵀ = AᵀAᵀ = AA = A². Hence, A² is symmetric.
C2 Suppose that A is symmetric. Then for any x, y ∈ Rⁿ we have

x · (Ay) = xᵀAy = xᵀAᵀy = (Ax)ᵀy = (Ax) · y

Conversely, assume x · (Ay) = (Ax) · y for all x, y ∈ Rⁿ. This implies

xᵀAy = (Ax)ᵀy = xᵀAᵀy

Since this is valid for all y ∈ Rⁿ, we get that xᵀA = xᵀAᵀ by the Matrices Equal Theorem. Taking transposes of both sides gives Aᵀx = Ax. This is valid for all x ∈ Rⁿ, so using the Matrices Equal Theorem again gives Aᵀ = A, as required.
C3 We are assuming that there exists an orthogonal matrix P and diagonal matrix D such that PᵀAP = D. Hence, A = PDPᵀ and

Aᵀ = (PDPᵀ)ᵀ = (Pᵀ)ᵀDᵀPᵀ = PDPᵀ = A

Thus, A is symmetric.
C4 We are assuming that there exists an orthogonal matrix P and diagonal matrix D such that PᵀAP = D. If A is invertible, then 0 ≠ det A = det D, so D is also invertible. We get that

D⁻¹ = (PᵀAP)⁻¹ = P⁻¹A⁻¹(Pᵀ)⁻¹ = PᵀA⁻¹P

since P is orthogonal. Thus, A⁻¹ is orthogonally diagonalized by P to D⁻¹.
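The claim in C4, that the same orthogonal P diagonalizes A⁻¹ to D⁻¹, is easy to confirm numerically. This check is our own addition (not from the printed text); the test matrix is the A = [5 −1; −1 5] that appears in Problem C5.

```python
import numpy as np

A = np.array([[5.0, -1.0], [-1.0, 5.0]])

# P^T A P = D = diag(eigvals), so P^T A^{-1} P should be diag(1/eigvals).
eigvals, P = np.linalg.eigh(A)
D_inv = P.T @ np.linalg.inv(A) @ P

print(np.round(eigvals, 6))                        # eigenvalues 4 and 6
print(np.allclose(D_inv, np.diag(1.0 / eigvals)))  # True
```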
"
#
"
#
1 −1
4 0
C5 Let P =
and D =
. We want to find A such that P−1 AP = D. So, we take
1
1
0 6
−1
A = PDP
#
5 −1
=
−1
5
"
It is easy to verify that A indeed has the correct eigenvalues and eigenvectors. NOTE: To use A =
PDPT you would need to normalize the columns of P.
Copyright © 2020 Pearson Canada Inc.
12
C6 Let P = [2 −1; 1 2] and D = [−1 0; 0 3]. We want to find A such that P⁻¹AP = D. So, we take

A = PDP⁻¹ = [−1/5 −8/5; −8/5 11/5]

It is easy to verify that A indeed has the correct eigenvalues and eigenvectors. Note that you cannot multiply A by a scalar.
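The construction used in C5 and C6, building A = PDP⁻¹ from prescribed eigenvectors and eigenvalues, can be sketched in a few lines of NumPy. This is our own illustration (not the book's code), redoing the C6 computation.

```python
import numpy as np

P = np.array([[2.0, -1.0], [1.0, 2.0]])   # prescribed eigenvectors as columns
D = np.diag([-1.0, 3.0])                  # prescribed eigenvalues

A = P @ D @ np.linalg.inv(P)
print(np.round(A, 6))   # [[-0.2 -1.6], [-1.6  2.2]], i.e. [-1/5 -8/5; -8/5 11/5]

# Each column of P is an eigenvector of A for the matching diagonal entry of D.
print(np.allclose(A @ P[:, 0], -1.0 * P[:, 0]))  # True
print(np.allclose(A @ P[:, 1], 3.0 * P[:, 1]))   # True
```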
"
#
1 2
C7 The statement is false. The 2×2 identity matrix I is symmetric and can be diagonalized by P =
1 3
which does not even have orthogonal columns.
C8 The statement is false. The matrix P = [1/√2 1/√2; −1/√2 1/√2] is orthogonal, but it is not symmetric, so it is not orthogonally diagonalizable.
C9 We have (AB)ᵀ = BᵀAᵀ = BA, so it seems that AB does not have to be symmetric. To demonstrate that it might not be symmetric, we give a counterexample. Take A = [0 1; 1 1] and B = [1 1; 1 1]. Then, A and B are symmetric and hence orthogonally diagonalizable, but AB = [1 1; 2 2] is not symmetric and hence not orthogonally diagonalizable.
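The counterexample in C9 takes two lines to verify by machine. The snippet below is an added check of ours, not part of the printed solution.

```python
import numpy as np

# Two symmetric matrices whose product is not symmetric (Problem C9).
A = np.array([[0, 1], [1, 1]])
B = np.array([[1, 1], [1, 1]])

AB = A @ B
print(AB)                        # [[1 1], [2 2]]
print(np.array_equal(AB, AB.T))  # False: AB is not symmetric
```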
C10 If PᵀAP = B where Bᵀ = B, then A = PBPᵀ and hence

Aᵀ = (PBPᵀ)ᵀ = PBᵀPᵀ = PBPᵀ = A

Therefore, A is symmetric and hence orthogonally diagonalizable.
C11 If A is symmetric, then it is orthogonally diagonalizable by the Principal Axis Theorem. Thus, A is
diagonalizable and hence gλ = aλ by Theorem 6.2.3.
Section 8.2
A Practice Problems
A1 Q(x1, x2) = x1² + 6x1x2 − x2²
A2 Q(x1, x2) = 3x1² − 4x1x2
A3 Q(x1, x2, x3) = x1² − 2x2² + 6x2x3 − x3²
A4 Q(x1, x2, x3) = −2x1² + 2x1x2 + 2x1x3 + x2² − 2x2x3
A5 We have

C(λ) = det[4 − λ, −2; −2, 4 − λ] = (λ − 2)(λ − 6)

Hence, the eigenvalues are 2 and 6. Since both of the eigenvalues are positive, A is positive definite by Theorem 8.2.2.
A6 Since A is diagonal, the eigenvalues are the diagonal entries of A. Hence, the eigenvalues are 1 and
2. So A is positive definite by Theorem 8.2.2.
A7 We have

C(λ) = det[1 − λ, 0, 0; 0, −2 − λ, 6; 0, 6, 7 − λ] = −(λ − 1)(λ − 10)(λ + 5)

So, the eigenvalues are 1, 10, and −5. Since A has positive and negative eigenvalues, A is indefinite by Theorem 8.2.2.
A8 We have

C(λ) = det[−3 − λ, 1, −1; 1, −3 − λ, 1; −1, 1, −3 − λ] = −(λ + 5)(λ + 2)²

So, the eigenvalues are −5 and −2. Since all eigenvalues of A are negative, A is negative definite by Theorem 8.2.2.
A9 We have

C(λ) = det[7 − λ, 2, −1; 2, 10 − λ, −2; −1, −2, 7 − λ] = −(λ − 12)(λ − 6)²

So, the eigenvalues are 12 and 6. Since all eigenvalues of A are positive, A is positive definite by Theorem 8.2.2.
A10 We have

C(λ) = det[−4 − λ, −5, 5; −5, 2 − λ, 1; 5, 1, 2 − λ] = −(λ + 9)(λ − 3)(λ − 6)

So, the eigenvalues are −9, 3, and 6. Since A has positive and negative eigenvalues, A is indefinite by Theorem 8.2.2.
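The classifications in A5 to A10 all follow the same rule from Theorem 8.2.2: read off the signs of the eigenvalues. A small helper of our own (not from the text) automates this; the names and thresholds here are our choices.

```python
import numpy as np

def classify(A):
    """Classify a symmetric matrix by the signs of its eigenvalues."""
    eigvals = np.linalg.eigvalsh(A)   # real eigenvalues, ascending order
    if np.all(eigvals > 0):
        return "positive definite"
    if np.all(eigvals < 0):
        return "negative definite"
    if eigvals.min() < 0 < eigvals.max():
        return "indefinite"
    return "semidefinite"

print(classify(np.array([[4.0, -2.0], [-2.0, 4.0]])))   # A5: positive definite
print(classify(np.array([[-4.0, -5.0, 5.0],
                         [-5.0, 2.0, 1.0],
                         [5.0, 1.0, 2.0]])))            # A10: indefinite
```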
"
#
1
−3/2
A11 (a) The corresponding symmetric matrix is A =
.
−3/2
1
(b) We have
!!
!
!!1 − λ −3/2!!!
C(λ) = !
= (λ − 5/2)(λ + 1/2)
!−3/2 1 − λ!!
The eigenvalues are λ1 = 5/2 and λ2 = −1/2.
For λ1 = 5/2 we get
"
# "
5
−3/2 −3/2
1
A− I =
∼
−3/2 −3/2
0
2
$" #%
−1
Hence, a basis for the eigenspace of λ1 is
.
1
For λ2 = −1/2 we get
9 :
"
# "
1
3/2 −3/2
1
A− − I =
∼
−3/2
3/2
0
2
$" #%
1
Hence, a basis for the eigenspace of λ2 is
.
1
Copyright © 2020 Pearson Canada Inc.
#
1
0
#
−1
0
14
√
√ #
"
#
−1/ √2 1/ √2
5/2
0
Therefore, A is orthogonally diagonalized by P =
to D =
.
0 −1/2
1/ 2 1/ 2
Let "x = P"y . Then we get
"
Q("x ) = "x T A"x = "y T D"y =
5 2 1 2
y − y
2 1 2 2
(c) Since A has positive and negative eigenvalues we get that Q("x ) is indefinite by Theorem 8.2.2.
"
#
5 −2
A12 (a) The corresponding symmetric matrix is A =
.
−2
2
(b) We have
!!
!
!!5 − λ −2 !!!
C(λ) = !
= (λ − 1)(λ − 6)
! −2 2 − λ!!
The eigenvalues are λ1 = 1 and λ2 = 6.
For λ1 = 1 we get
# "
#
4 −2
1 −1/2
A−I =
∼
−2
1
0
0
$" #%
1
Hence, a basis for the eigenspace of λ1 is
.
2
For λ2 = 6 we get
"
# "
#
−1 −2
1 2
A − 6I =
∼
−2 −4
0 0
$" #%
−2
Hence, a basis for the eigenspace of λ2 is
.
1
√ #
" √
"
#
1/ √5 −2/ √5
1 0
Therefore, A is orthogonally diagonalized by P =
to D =
.
0 6
2/ 5
1/ 5
Let "x = P"y . Then we get
Q("x ) = "x T A"x = "y T D"y = y21 + 6y22
"
(c) Since A has all positive eigenvalues we get that Q("x ) is positive definite by Theorem 8.2.2.
"
#
−7
2
A13 (a) The corresponding symmetric matrix is A =
.
2 −4
(b) We have
!!
!
!!−7 − λ
2 !!!
C(λ) = !
= (λ + 8)(λ + 3)
! 2
−4 − λ!!
The eigenvalues are λ1 = −8 and λ2 = −3.
For λ1 = −8 we get
# "
#
1 2
1 2
∼
2 4
0 0
$" #%
−2
Hence, a basis for the eigenspace of λ1 is
.
1
A + 8I =
"
Copyright © 2020 Pearson Canada Inc.
15
For λ2 = −3 we get
# "
#
−4
2
1 −1/2
A + 3I =
∼
2 −1
0
0
$" #%
1
Hence, a basis for the eigenspace of λ2 is
.
2
√
√ #
"
"
#
−2/ √5 1/ √5
−8
0
Therefore, A is orthogonally diagonalized by P =
to D =
.
0 −3
1/ 5 2/ 5
Let "x = P"y . Then we get
"
Q("x ) = "x T A"x = "y T D"y = −8y21 − 3y22
(c) Since A has all negative eigenvalues we get that Q("x ) is negative definite by Theorem 8.2.2.
"
#
−2 −3
A14 (a) The corresponding symmetric matrix is A =
.
−3 −2
(b) We have
!!
!
!!−2 − λ
−3 !!!
C(λ) = !
= (λ − 1)(λ + 5)
! −3
−2 − λ!!
The eigenvalues are λ1 = 1 and λ2 = −5.
For λ1 = 1 we get
"
# "
#
−3 −3
1 1
A − 1I =
∼
−3 −3
0 0
$" #%
−1
Hence, a basis for the eigenspace of λ1 is
.
1
For λ2 = −5 we get
"
# "
#
3 −3
1 −1
A + 5I =
∼
−3
3
0
0
$" #%
1
Hence, a basis for the eigenspace of λ2 is
.
1
√
√ #
"
"
#
−1/ √2 1/ √2
1
0
Therefore, A is orthogonally diagonalized by P =
to D =
.
0 −5
1/ 2 1/ 2
Let "x = P"y . Then we get
Q("x ) = "x T A"x = "y T D"y = 1y21 − 5y22
(c) Since A has both positive and negative eigenvalues we get that Q("x ) is indefinite by Theorem
8.2.2.
"
#
−2 6
A15 (a) The corresponding symmetric matrix is A =
.
6 7
(b) We have
!!
!
!!−2 − λ
6 !!!
C(λ) = !
= (λ − 10)(λ + 5)
! 6
7 − λ!!
The eigenvalues are λ1 = 10 and λ2 = −5.
Copyright © 2020 Pearson Canada Inc.
16
For λ1 = 10 we get
# "
−12
6
1
∼
6 −3
0
$" #%
1
Hence, a basis for the eigenspace of λ1 is
.
2
For λ2 = −5 we get
"
# "
3 6
1
A + 5I =
∼
6 12
0
$" #%
−2
Hence, a basis for the eigenspace of λ2 is
.
1
" √
1/ √5
Therefore, A is orthogonally diagonalized by P =
2/ 5
Let "x = P"y . Then we get
A − 10I =
"
#
−1/2
0
2
0
#
√ #
"
#
−2/ √5
10
0
to D =
.
0 −5
1/ 5
Q("x ) = "x T A"x = "y T D"y = 10y21 − 5y22
(c) Since A has positive and negative eigenvalues we get that Q("x ) is indefinite by Theorem 8.2.2.


A16 (a) The corresponding symmetric matrix is A = [1 −1 3; −1 1 3; 3 3 −3].
(b) We have

C(λ) = det[1 − λ, −1, 3; −1, 1 − λ, 3; 3, 3, −3 − λ] = −(λ − 2)(λ − 3)(λ + 6)

The eigenvalues are λ1 = 2, λ2 = 3, and λ3 = −6.
For λ1 = 2 we get

A − 2I = [−1 −1 3; −1 −1 3; 3 3 −5] ∼ [1 1 0; 0 0 1; 0 0 0]

Hence, a basis for the eigenspace of λ1 is {(−1, 1, 0)}.
For λ2 = 3 we get

A − 3I = [−2 −1 3; −1 −2 3; 3 3 −6] ∼ [1 0 −1; 0 1 −1; 0 0 0]

Hence, a basis for the eigenspace of λ2 is {(1, 1, 1)}.
For λ3 = −6 we get

A − (−6)I = [7 −1 3; −1 7 3; 3 3 3] ∼ [1 0 1/2; 0 1 1/2; 0 0 0]

Hence, a basis for the eigenspace of λ3 is {(1, 1, −2)}.
Therefore, A is orthogonally diagonalized by P = [−1/√2 1/√3 1/√6; 1/√2 1/√3 1/√6; 0 1/√3 −2/√6] to D = diag(2, 3, −6).
Let x = Py. Then we get

Q(x) = xᵀAx = yᵀDy = 2y1² + 3y2² − 6y3²

(c) Since A has positive and negative eigenvalues, Q(x) is indefinite by Theorem 8.2.2.


A17 (a) The corresponding symmetric matrix is A = [−4 1 0; 1 −5 −1; 0 −1 −4].
(b) We have

C(λ) = det[−4 − λ, 1, 0; 1, −5 − λ, −1; 0, −1, −4 − λ] = −(λ + 3)(λ + 6)(λ + 4)

The eigenvalues are λ1 = −3, λ2 = −6, and λ3 = −4.
For λ1 = −3 we get

A + 3I = [−1 1 0; 1 −2 −1; 0 −1 −1] ∼ [1 0 1; 0 1 1; 0 0 0]

Hence, a basis for the eigenspace of λ1 is {(−1, −1, 1)}.
For λ2 = −6 we get

A + 6I = [2 1 0; 1 1 −1; 0 −1 2] ∼ [1 0 1; 0 1 −2; 0 0 0]

Hence, a basis for the eigenspace of λ2 is {(−1, 2, 1)}.
For λ3 = −4 we get

A + 4I = [0 1 0; 1 −1 −1; 0 −1 0] ∼ [1 0 −1; 0 1 0; 0 0 0]

Hence, a basis for the eigenspace of λ3 is {(1, 0, 1)}.
Therefore, A is orthogonally diagonalized by P = [−1/√3 −1/√6 1/√2; −1/√3 2/√6 0; 1/√3 1/√6 1/√2] to D = diag(−3, −6, −4).
Let x = Py. Then we get

Q(x) = xᵀAx = yᵀDy = −3y1² − 6y2² − 4y3²

(c) Since A has all negative eigenvalues, Q(x) is negative definite by Theorem 8.2.2.


 3 −1 −1


5
1.
A18 (a) The corresponding symmetric matrix is A = −1


−1
1
3
(b) We have
!!
!
!!3 − λ −1
−1 !!!
1 !!! = −(λ − 6)(λ − 3)(λ − 2)
C(λ) = !!! −1 5 − λ
!! −1
1 3 − λ!!
The eigenvalues are λ1 = 6, λ2 = 3, and λ3 = 2.
For λ1 = 6 we get

 

1
−3 −1 −1 1 0

 

1 ∼ 0 1 −2
A − 6I = −1 −1

 

−1
1 −3
0 0
0
  

−1



 

 2
Hence, a basis for the eigenspace of λ1 is 
.



 1

For λ2 = 3 we get

 

 0 −1 −1 1 0 1

 

2
1 ∼ 0 1 1
A − 3I = −1

 

−1
1
0
0 0 0
  

−1



 

−1
Hence, a basis for the eigenspace of λ2 is 
.



 1

For λ3 = 2 we get

 
 1 −1 −1 1

 
3
1 ∼ 0
A − 2I = −1

 
−1
1
1
0
  

1



 

0
Hence, a basis for the eigenspace of λ3 is 
.





1

√

−1/ 6
√

Therefore, A is orthogonally diagonalized by P =  2/ 6
√

1/ 6
Let "x = P"y . Then we get

0 −1

1
0

0
0
√ 
√


−1/ √3 1/ 2
6 0 0



to D = 0 3 0.
−1/ √3
√0


0 0 2
1/ 3 1/ 2
Q("x ) = "x T A"x = "y T D"y = 6y21 + 3y22 + 2y23
(c) Since A has all positive eigenvalues we get that Q("x ) is positive definite by Theorem 8.2.2.


 3 −2 4


6 2.
A19 (a) The corresponding symmetric matrix is A = −2


4
2 3
Copyright © 2020 Pearson Canada Inc.
19
(b) We have
!!
!
!!3 − λ −2
4 !!!
2 !!! = −(λ − 7)2 (λ + 2)
C(λ) = !!! −2 6 − λ
!! 4
2 3 − λ!!
The eigenvalues are λ1 = 7 and λ2 = −2.
For λ1 = 7 we get

 

4 1 1/2 −1
−4 −2

 

2 ∼ 0 0
0
A − 7I = −2 −1

 

4
2 −4
0 0
0
    

1 −1



   

0 ,  2
Hence, a basis for the eigenspace of λ1 is 
. Since we need an orthonormal basis we



1  0

 
1
 
apply the Gram-Schmidt Procedure. Pick "v 1 = 0. Then S1 = Span{"v 1 } and
 
1
   
  

−1 −1
1 −1/2
    −1   

"v 2 = perpS1  2 =  2 −
0 =  2 
   
2   
0
0
1
1/2
    

1 −1



   

0 ,  4
.
Thus, an orthogonal basis is 



1  1

For λ2 = −2 we get

 
 5 −2 4 1

 
8 2 ∼ 0
A + 2I = −2

 
4
2 5
0
  

−2



 

−1
Hence, a basis for the eigenspace of λ2 is 
.





 2

 √
1/ 2

Therefore, A is orthogonally diagonalized by P =  0
 √
1/ 2
Let "x = P"y . Then we get

0 1 

1 1/2

0 0
√



−1/ √18 −2/3
0
7 0



0.
4/ √18 −1/3 to D = 0 7



0 0 −2
1/ 18
2/3
Q("x ) = "x T A"x = "y T D"y = 7y21 + 7y22 − 2y23
(c) Q("x ) is indefinite by Theorem 8.2.2.
B Homework Problems
B1 4x1² − 2x1x2 + 3x2²
B2 x1² − 10x1x2 + x2²
B3 −x1² − 2x2² − 3x3²
B4 −x1² + 2x1x2 + 2x1x3 − x2² + 2x2x3 − x3²
B5 3x1² + 12x1x2 + 10x1x3 + 4x2x3 + x3²
B6 2x1² + 2x1x2 + 4x1x3 − 3x2² − 2x2x3 + 2x3²
B7 The eigenvalues are λ1 = 7 and λ2 = 2, so the matrix is positive definite.
B8 The eigenvalues are λ1 = −2 and λ2 = −15, so the matrix is negative definite.
B9 The eigenvalues are λ1 = −6, λ2 = 2, and λ3 = 7, so the matrix is indefinite.
B10 The eigenvalues are λ1 = 1, λ2 = 3, and λ3 = 6, so the matrix is positive definite.
B11 The eigenvalues are λ1 = −5, λ2 = 1, and λ3 = 7, so the matrix is indefinite.
B12 The eigenvalues are λ1 = −1, λ2 = −1, and λ3 = −4, so the matrix is negative definite.
B13 (a) [1 4; 4 1]; (b) Q(x) = 5y1² − 3y2², P = [1/√2 −1/√2; 1/√2 1/√2]; (c) Q(x) is indefinite.

B14 (a) [−2 6; 6 −7]; (b) Q(x) = 2y1² − 11y2², P = [3/√13 −2/√13; 2/√13 3/√13]; (c) Q(x) is indefinite.

B15 (a) [2 2; 2 5]; (b) Q(x) = 6y1² + y2², P = [1/√5 −2/√5; 2/√5 1/√5]; (c) Q(x) is positive definite.

B16 (a) [−5 6; 6 −10]; (b) Q(x) = −14y1² − y2², P = [−2/√13 3/√13; 3/√13 2/√13]; (c) Q(x) is negative definite.

B17 (a) [−3 6; 6 −8]; (b) Q(x) = y1² − 12y2², P = [3/√13 −2/√13; 2/√13 3/√13]; (c) Q(x) is indefinite.

B18 (a) [−2 −3; −3 6]; (b) Q(x) = 7y1² − 3y2², P = [−1/√10 3/√10; 3/√10 1/√10]; (c) Q(x) is indefinite.

B19 (a) [−1 1 2; 1 −1 2; 2 2 2]; (b) Q(x) = 4y1² − 2y2² − 2y3², P = [1/√6 −1/√2 1/√3; 1/√6 1/√2 1/√3; 2/√6 0 −1/√3]; (c) Q(x) is indefinite.

B20 (a) [4 −1 1; −1 4 −1; 1 −1 4]; (b) Q(x) = 6y1² + 3y2² + 3y3², P = [1/√3 −1/√2 1/√6; −1/√3 0 2/√6; 1/√3 1/√2 1/√6]; (c) Q(x) is positive definite.

B21 (a) [−6 0 2; 0 −6 4; 2 4 −5]; (b) Q(x) = −6y1² − y2² − 10y3², P = [−2/√5 2/√45 −1/3; 1/√5 4/√45 −2/3; 0 5/√45 2/3]; (c) Q(x) is negative definite.

B22 (a) [−2 0 0; 0 −3 2; 0 2 −3]; (b) Q(x) = −2y1² − 5y2² − y3², P = [1 0 0; 0 −1/√2 1/√2; 0 1/√2 1/√2]; (c) Q(x) is negative definite.

B23 (a) [−1 2 −1; 2 2 −2; −1 −2 −1]; (b) Q(x) = 4y1² − 2y2² − 2y3², P = [−1/√6 1/√2 −1/√3; −2/√6 0 1/√3; 1/√6 1/√2 1/√3]; (c) Q(x) is indefinite.

B24 (a) [4 2 −2; 2 3 0; −2 0 5]; (b) Q(x) = 7y1² + 4y2² + y3², P = [−2/3 1/3 2/3; −1/3 2/3 −2/3; 2/3 2/3 1/3]; (c) Q(x) is positive definite.
C Conceptual Problems
C1 By Theorem 8.2.1 there exists an orthogonal matrix P such that

Q(x) = λ1y1² + λ2y2² + · · · + λnyn²

where x = Py and λ1, …, λn are the eigenvalues of A. Clearly, Q(x) < 0 for all y ≠ 0 if and only if the eigenvalues are all negative. Moreover, since P is orthogonal it is invertible; hence x = 0 if and only if y = 0, since x = Py. Thus we have shown that Q(x) is negative definite if and only if all eigenvalues of A are negative.
C2 By Theorem 8.2.1 there exists an orthogonal matrix P such that

Q(x) = λ1y1² + λ2y2² + · · · + λnyn²

where x = Py and λ1, …, λn are the eigenvalues of A. Clearly, if some eigenvalues are positive and some are negative, then Q(x) > 0 for some y and Q(x) < 0 for some y. On the other hand, if Q(x) takes both positive and negative values, then we must have both positive and negative values of λ.
C3 By Theorem 8.2.1 there exists an orthogonal matrix P such that

Q(x) = λ1y1² + λ2y2² + · · · + λnyn²

where x = Py and λ1, …, λn are the eigenvalues of A. Clearly, Q(x) ≥ 0 for all y ≠ 0 if and only if the eigenvalues are all non-negative.
C4 Let λ be an eigenvalue of AᵀA with unit eigenvector v. Then AᵀAv = λv. Hence,

‖Av‖² = (Av)ᵀAv = vᵀAᵀAv = vᵀ(λv) = λ(vᵀv) = λ‖v‖² = λ

Thus, since ‖Av‖² ≥ 0 we have that λ ≥ 0.
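The conclusion of C4, that every eigenvalue of AᵀA is non-negative for any matrix A, is easy to illustrate numerically. The matrix below is an arbitrary example of ours (this check is not part of the printed solutions).

```python
import numpy as np

# An arbitrary, non-symmetric, non-square matrix.
A = np.array([[1.0, -2.0], [3.0, 0.5], [0.0, 4.0]])

# A^T A is symmetric, and its eigenvalues are all >= 0.
eigvals = np.linalg.eigvalsh(A.T @ A)
print(np.all(eigvals >= -1e-12))   # True (tolerance for rounding)
```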
C5 Since A is positive definite, we have that

aii = eiᵀAei > 0
C6 If A is positive definite, then all of its eigenvalues are positive. Moreover, since A is symmetric it is
similar to a diagonal matrix D whose diagonal entries are the eigenvalues of A. But then, det A =
det D > 0. Hence A is invertible.
C7 Let λ be an eigenvalue of A⁻¹ with eigenvector v. Then A⁻¹v = λv. Multiplying both sides by A gives

λAv = AA⁻¹v = v

Thus, v is an eigenvector of A with corresponding eigenvalue 1/λ. Since all the eigenvalues of A are positive, 1/λ > 0 and hence λ > 0.
C8 Pick any y ∈ Rⁿ, y ≠ 0. Then, Py ≠ 0 since P is orthogonal and hence invertible. Thus,

yᵀPᵀAPy = (Py)ᵀA(Py) > 0

since xᵀAx > 0 for all x ≠ 0 as A is positive definite. Hence, PᵀAP is positive definite.
C9 (a) We have

(A⁺)ᵀ = (½(A + Aᵀ))ᵀ = ½(Aᵀ + A) = A⁺
(A⁻)ᵀ = (½(A − Aᵀ))ᵀ = ½(Aᵀ − A) = −A⁻

So A⁺ is symmetric and A⁻ is skew-symmetric. Also,

A⁺ + A⁻ = ½(A + Aᵀ) + ½(A − Aᵀ) = A

(b) Since (A⁻)ᵀ = −A⁻ we have that (A⁻)ii = −(A⁻)ii and so (A⁻)ii = 0.
(c) We have (A⁺)ij = ½(aij + aji) and (A⁻)ij = ½(aij − aji).
(d) We have

xᵀAx = xᵀ(A⁺ + A⁻)x = xᵀA⁺x + xᵀA⁻x

Since (A⁻)ᵀ = −A⁻, the real number xᵀA⁻x satisfies

xᵀA⁻x = (xᵀA⁻x)ᵀ = xᵀ(A⁻)ᵀx = −xᵀA⁻x

So, xᵀA⁻x = 0. Thus,

xᵀAx = xᵀA⁺x
C10 (a) By the bilinearity of the inner product,

(x, y) = (x1e1 + x2e2 + · · · + xnen, y1e1 + y2e2 + · · · + ynen) = Σᵢ xi (ei, y1e1 + · · · + ynen) = Σᵢ Σⱼ xi yj (ei, ej)

(b) We have

(x, y) = Σᵢ Σⱼ xi yj (ei, ej) = Σᵢ Σⱼ xi yj gij = Σᵢ Σⱼ xi gij yj = x · (Gy) = xᵀGy

(c) Since (ei, ej) = (ej, ei), gij = gji and so G is symmetric. Since (x, x) = xᵀGx > 0 for all non-zero x, G is positive definite and has positive eigenvalues.
(d) Since G is symmetric, there exists an orthogonal matrix P = [v1 · · · vn] such that PᵀGP = D = diag(λ1, …, λn), where the diagonal entries of D are the eigenvalues λ1, …, λn of G. Then, G = PDPᵀ. Write x̃ = Pᵀx so that x̃ᵀ = xᵀP. It follows that

(x, y) = xᵀGy = xᵀPDPᵀy = x̃ᵀDỹ = λ1x̃1ỹ1 + λ2x̃2ỹ2 + · · · + λnx̃nỹn

(e) Note that since xᵀGx = (x, x) > 0 for all non-zero x, G is positive definite and hence its eigenvalues are all positive. Thus, (vi, vi) = λi > 0 and (vi, vj) = 0 for i ≠ j. Now, let

wi = (1/‖vi‖)vi = (1/√λi)vi

so that (wi, wi) = 1 and (wi, wj) = 0. Thus it follows that {w1, …, wn} is an orthonormal basis with respect to ( , ). Note that since wi = (1/√λi)vi, it also follows that xi* = √λi x̃i. Hence,

(x, y) = x1*y1* + · · · + xn*yn*
Section 8.3
A Practice Problems
A1 The quadratic form Q(x) = 2x1² + 4x1x2 − x2² corresponds to the symmetric matrix A = [2 2; 2 −1], so the characteristic polynomial is

C(λ) = det[2 − λ, 2; 2, −1 − λ] = (λ − 3)(λ + 2)

Thus, the eigenvalues of A are λ1 = 3 and λ2 = −2. Thus, by an orthogonal change of coordinates, the equation can be brought into the diagonal form

3y1² − 2y2² = 6

This is an equation of a hyperbola, and we can sketch the graph in the y1y2-plane. We observe that the y1-intercepts are (√2, 0) and (−√2, 0), and there are no intercepts on the y2-axis. The asymptotes of the hyperbola are determined by the equation 3y1² − 2y2² = 0. We determine that the asymptotes are lines with equations y2 = ±(√3/√2)y1.
To draw the graph of 2x1² + 4x1x2 − x2² = 6 relative to the original x1- and x2-axes we need to find a basis for R² of eigenvectors of A.
For λ1 = 3,

A − 3I = [−1 2; 2 −4] ∼ [1 −2; 0 0]

Thus, a basis for the eigenspace is {(2, 1)}.
For λ2 = −2,

A − (−2)I = [4 2; 2 1] ∼ [2 1; 0 0]

Thus, a basis for the eigenspace is {(−1, 2)}.
Next, we convert the equations of the asymptotes from the y1y2-plane to the x1x2-plane by using the change of variables

[y1; y2] = Pᵀx = [2/√5 1/√5; −1/√5 2/√5][x1; x2] = [2x1/√5 + x2/√5; −x1/√5 + 2x2/√5]

Substituting these into the equations of the asymptotes gives

−(1/√5)x1 + (2/√5)x2 = ±(√3/√2)((2/√5)x1 + (1/√5)x2)
−√2 x1 + 2√2 x2 = ±2√3 x1 ± √3 x2
(2√2 ∓ √3)x2 = (√2 ± 2√3)x1
x2 = ((√2 ± 2√3)/(2√2 ∓ √3))x1

Hence, the asymptotes are x2 ≈ 4.45x1 and x2 ≈ −0.45x1.
Now to sketch the graph of 2x1² + 4x1x2 − x2² = 6 in the x1x2-plane, we draw the new y1-axis in the direction of (2, 1) and draw the new y2-axis in the direction of (−1, 2). Then, relative to these new axes, sketch the graph of the hyperbola 3y1² − 2y2² = 6.
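The asymptote slopes found in A1 can be double-checked directly: a line x2 = m·x1 lies on 2x1² + 4x1x2 − x2² = 0 exactly when −m² + 4m + 2 = 0, so the slopes are the roots of that quadratic. This check is an addition of ours, not part of the printed solution.

```python
import numpy as np

# Roots of -m^2 + 4m + 2 = 0, i.e. the asymptote slopes 2 +/- sqrt(6).
slopes = np.roots([-1.0, 4.0, 2.0])
print(np.round(np.sort(slopes), 3))   # approximately [-0.449  4.449]
```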
A2 The quadratic form Q(x) = 2x1² + 6x1x2 + 10x2² corresponds to the symmetric matrix A = [2 3; 3 10], so the characteristic polynomial is

C(λ) = det[2 − λ, 3; 3, 10 − λ] = (λ − 11)(λ − 1)

Thus, the eigenvalues of A are λ1 = 11 and λ2 = 1. Thus, by an orthogonal change of coordinates, the equation can be brought into the diagonal form

11y1² + y2² = 11

This is an equation of an ellipse. We observe that the y1-intercepts are (1, 0) and (−1, 0), and the y2-intercepts are (0, √11) and (0, −√11).
To draw the graph of 2x1² + 6x1x2 + 10x2² = 11 relative to the original x1- and x2-axes we need to find a basis for R² of eigenvectors of A.
For λ1 = 11,

A − 11I = [−9 3; 3 −1] ∼ [1 −1/3; 0 0]

Thus, a basis for the eigenspace is {(1, 3)}.
For λ2 = 1,

A − I = [1 3; 3 9] ∼ [1 3; 0 0]

Thus, a basis for the eigenspace is {(−3, 1)}.
Now to sketch the graph of 2x1² + 6x1x2 + 10x2² = 11 in the x1x2-plane, we draw the new y1-axis in the direction of (1, 3) and draw the new y2-axis in the direction of (−3, 1). Then, relative to these new axes, sketch the graph of the ellipse 11y1² + y2² = 11.
"
A3 The quadratic form Q("x ) =
4x12
− 6x1 x2 +
so the characteristic polynomial is
4x22
y1
1
3
x1
#
4 −3
corresponds to the symmetric matrix A =
,
−3
4
"
!!
!
!!4 − λ −3 !!!
C(λ) = !
= (λ − 7)(λ − 1)
! −3 4 − λ!!
Thus, the eigenvalues of A are λ1 = 7 and λ2 = 1. Thus, by an orthogonal change of coordinates, the
equation can be brought into the diagonal form
7y21 + y22 = 12
√
√
This is an equation of an ellipse.
√ We observe
√ that the y1 -intercepts are ( 12/7, 0) and (− 12/7, 0),
and the y2 -intercepts are (0, 12) and (0, − 12).
To draw the graph of 4x12 − 6x1 x2 + 4x22 = 12 relative to the original x1 - and x2 -axis we need to find
a basis for R2 of eigenvectors of A.
For λ1 = 7,
"
# "
#
−3 −3
1 1
A − 7I =
∼
−3 −3
0 0
$" #%
−1
Thus, a basis for the eigenspace is
.
1
Copyright © 2020 Pearson Canada Inc.
26
x2
For λ2 = 1,
# "
#
3 −3
1 −1
A−I =
∼
−3
3
0
0
$" #%
1
Thus, a basis for the eigenspace is
.
1
Now to sketch the graph of 4x12 −6x1 x2 +4x22 = 12 in the x1 x2 " #
−1
plane, we draw the new y1 -axis in the direction of
and
1
" #
1
draw the new y2 -axis in the direction of . Then, relative to
1
these new axes, sketch the graph of the ellipse 7y21 + y22 = 12.
"
y2
–1
1
y1
1
1
x1
A4 The quadratic form Q(x) = 5x1² + 6x1x2 − 3x2² corresponds to the symmetric matrix A = [5 3; 3 −3], so the characteristic polynomial is

C(λ) = det[5 − λ, 3; 3, −3 − λ] = (λ − 6)(λ + 4)

Thus, the eigenvalues of A are λ1 = 6 and λ2 = −4. Thus, by an orthogonal change of coordinates, the equation can be brought into the diagonal form

6y1² − 4y2² = 15

This is an equation of a hyperbola, and we can sketch the graph in the y1y2-plane. We observe that the y1-intercepts are (√(5/2), 0) and (−√(5/2), 0), and there are no intercepts on the y2-axis. The asymptotes of the hyperbola are determined by the equation 6y1² − 4y2² = 0. We determine that the asymptotes are lines with equations y2 = ±(√3/√2)y1.
To draw the graph of 5x1² + 6x1x2 − 3x2² = 15 relative to the original x1- and x2-axes we need to find a basis for R² of eigenvectors of A.
For λ1 = 6,

A − 6I = [−1 3; 3 −9] ∼ [1 −3; 0 0]

Thus, a basis for the eigenspace is {(3, 1)}.
For λ2 = −4,

A − (−4)I = [9 3; 3 1] ∼ [1 1/3; 0 0]

Thus, a basis for the eigenspace is {(−1, 3)}.
Next, we convert the equations of the asymptotes from the y1y2-plane to the x1x2-plane by using the change of variables

[y1; y2] = Pᵀx = [3/√10 1/√10; −1/√10 3/√10][x1; x2] = [3x1/√10 + x2/√10; −x1/√10 + 3x2/√10]

Substituting these into the equations of the asymptotes gives

−(1/√10)x1 + (3/√10)x2 = ±(√3/√2)((3/√10)x1 + (1/√10)x2)
−√2 x1 + 3√2 x2 = ±3√3 x1 ± √3 x2
(3√2 ∓ √3)x2 = (√2 ± 3√3)x1
x2 = ((√2 ± 3√3)/(3√2 ∓ √3))x1

Hence, the asymptotes are x2 ≈ 2.633x1 and x2 ≈ −0.633x1.
Now to sketch the graph of 5x1² + 6x1x2 − 3x2² = 15 in the x1x2-plane, we draw the new y1-axis in the direction of (3, 1) and draw the new y2-axis in the direction of (−1, 3). Then, relative to these new axes, sketch the graph of the hyperbola 6y1² − 4y2² = 15.
A5 The quadratic form Q(x) = x1² − 4x1x2 + x2² corresponds to the symmetric matrix A = [1 −2; −2 1], so the characteristic polynomial is

C(λ) = det[1 − λ, −2; −2, 1 − λ] = (λ − 3)(λ + 1)

Thus, the eigenvalues of A are λ1 = 3 and λ2 = −1. Thus, by an orthogonal change of coordinates, the equation can be brought into the diagonal form

3y1² − y2² = 8

This is an equation of a hyperbola, and we can sketch the graph in the y1y2-plane. We observe that the y1-intercepts are (√(8/3), 0) and (−√(8/3), 0), and there are no intercepts on the y2-axis. The asymptotes of the hyperbola are determined by the equation 3y1² − y2² = 0. We determine that the asymptotes are lines with equations y2 = ±√3 y1.
To draw the graph of x1² − 4x1x2 + x2² = 8 relative to the original x1- and x2-axes we need to find a basis for R² of eigenvectors of A.
For λ1 = 3,

A − 3I = [−2 −2; −2 −2] ∼ [1 1; 0 0]

Thus, a basis for the eigenspace is {(−1, 1)}.
For λ2 = −1,

A + I = [2 −2; −2 2] ∼ [1 −1; 0 0]

Thus, a basis for the eigenspace is {(1, 1)}.
Next, we convert the equations of the asymptotes from the y1y2-plane to the x1x2-plane by using the change of variables

[y1; y2] = Pᵀx = [−1/√2 1/√2; 1/√2 1/√2][x1; x2] = [−x1/√2 + x2/√2; x1/√2 + x2/√2]

Substituting these into the equations of the asymptotes gives

x1/√2 + x2/√2 = ±√3(−x1/√2 + x2/√2)
x1 + x2 = ∓√3 x1 ± √3 x2
(1 ∓ √3)x2 = (−1 ∓ √3)x1
x2 = ((−1 ∓ √3)/(1 ∓ √3))x1

Hence, the asymptotes are x2 ≈ 3.73x1 and x2 ≈ 0.268x1.
Now to sketch the graph of x1² − 4x1x2 + x2² = 8 in the x1x2-plane, we draw the new y1-axis in the direction of (−1, 1) and draw the new y2-axis in the direction of (1, 1). Then, relative to these new axes, sketch the graph of the hyperbola 3y1² − y2² = 8.
A6 The quadratic form Q(x) = x1² + 4x1x2 + x2² corresponds to the symmetric matrix A = [1 2; 2 1], so the characteristic polynomial is

C(λ) = det[1 − λ, 2; 2, 1 − λ] = (λ − 3)(λ + 1)

Thus, the eigenvalues of A are λ1 = 3 and λ2 = −1. Thus, by an orthogonal change of coordinates, the equation can be brought into the diagonal form

3y1² − y2² = 8

This is an equation of a hyperbola, and we can sketch the graph in the y1y2-plane. We observe that the y1-intercepts are (√(8/3), 0) and (−√(8/3), 0), and there are no intercepts on the y2-axis. The asymptotes of the hyperbola are determined by the equation 3y1² − y2² = 0. We determine that the asymptotes are lines with equations y2 = ±√3 y1.
To draw the graph of x1² + 4x1x2 + x2² = 8 relative to the original x1- and x2-axes we need to find a basis for R² of eigenvectors of A.
For λ1 = 3,

A − 3I = [−2 2; 2 −2] ∼ [1 −1; 0 0]

Thus, a basis for the eigenspace is {(1, 1)}.
For λ2 = −1,

A + I = [2 2; 2 2] ∼ [1 1; 0 0]

Thus, a basis for the eigenspace is {(−1, 1)}.
Next, we convert the equations of the asymptotes from the y1y2-plane to the x1x2-plane by using the change of variables

[y1; y2] = Pᵀx = [1/√2 1/√2; −1/√2 1/√2][x1; x2] = [x1/√2 + x2/√2; −x1/√2 + x2/√2]

Substituting these into the equations of the asymptotes gives

−x1/√2 + x2/√2 = ±√3(x1/√2 + x2/√2)
−x1 + x2 = ±√3 x1 ± √3 x2
(1 ∓ √3)x2 = (1 ± √3)x1
x2 = ((1 ± √3)/(1 ∓ √3))x1

Hence, the asymptotes are x2 ≈ −3.73x1 and x2 ≈ −0.268x1. (As a check, substituting x2 = mx1 into x1² + 4x1x2 + x2² = 0 gives m² + 4m + 1 = 0, so m = −2 ± √3.)
Now to sketch the graph of x1² + 4x1x2 + x2² = 8 in the x1x2-plane, we draw the new y1-axis in the direction of (1, 1) and draw the new y2-axis in the direction of (−1, 1). Then, relative to these new axes, sketch the graph of the hyperbola 3y1² − y2² = 8.
A7 The quadratic form Q(x) = 3x₁² − 4x₁x₂ + 3x₂² corresponds to the symmetric matrix A = [3, −2; −2, 3], so the characteristic polynomial is
C(λ) = det[3 − λ, −2; −2, 3 − λ] = (λ − 1)(λ − 5)
Thus, the eigenvalues of A are λ₁ = 1 and λ₂ = 5. Thus, by an orthogonal change of coordinates, the equation can be brought into the diagonal form
y₁² + 5y₂² = 32
This is an equation of an ellipse. We observe that the y₁-intercepts are (√32, 0) and (−√32, 0), and the y₂-intercepts are (0, √(32/5)) and (0, −√(32/5)).
To draw the graph of 3x₁² − 4x₁x₂ + 3x₂² = 32 relative to the original x₁- and x₂-axes, we need to find a basis for ℝ² of eigenvectors of A.
For λ₁ = 1,
A − I = [2, −2; −2, 2] ~ [1, −1; 0, 0]
Thus, a basis for the eigenspace is {[1, 1]ᵀ}.
For λ₂ = 5,
A − 5I = [−2, −2; −2, −2] ~ [1, 1; 0, 0]
Thus, a basis for the eigenspace is {[−1, 1]ᵀ}.
Now to sketch the graph of 3x₁² − 4x₁x₂ + 3x₂² = 32 in the x₁x₂-plane, we draw the new y₁-axis in the direction of [1, 1]ᵀ and draw the new y₂-axis in the direction of [−1, 1]ᵀ. Then, relative to these new axes, sketch the graph of the ellipse y₁² + 5y₂² = 32. [Sketch omitted.]
A8 We have C(λ) = λ(λ − 5). Thus, the eigenvalues are λ₁ = 0 and λ₂ = 5. Thus, Q(x) = xᵀAx has diagonal form Q(x) = 5y₂². Therefore, the graph of xᵀAx = 1 is a rotation of the graph of 5y₂² = 1, which is the graph of the two parallel lines y₂ = ±1/√5. The graph of xᵀAx = −1 is a rotation of the graph of 5y₂² = −1, which is empty.
A9 We have C(λ) = (λ − 6)(λ + 4). Thus, the eigenvalues are λ₁ = 6 and λ₂ = −4. Thus, Q(x) = xᵀAx has diagonal form Q(x) = 6y₁² − 4y₂². Therefore, the graph of xᵀAx = 1 is a rotation of the graph of 6y₁² − 4y₂² = 1, which is a hyperbola opening in the y₁-direction. The graph of xᵀAx = −1 is a rotation of the graph of 6y₁² − 4y₂² = −1, which is a hyperbola opening in the y₂-direction.
A10 We have C(λ) = −(λ − 2)(λ + 1)². Thus, the eigenvalues are λ₁ = 2, λ₂ = −1, and λ₃ = −1. Thus, Q(x) = xᵀAx has diagonal form Q(x) = 2y₁² − y₂² − y₃². Therefore, the graph of xᵀAx = 1 is a rotation of the graph of 2y₁² − y₂² − y₃² = 1, which is a hyperboloid of two sheets. The graph of xᵀAx = −1 is a rotation of the graph of 2y₁² − y₂² − y₃² = −1, which is a hyperboloid of one sheet.
A11 We have C(λ) = −λ(λ − 3)(λ + 3). Thus, the eigenvalues are λ₁ = 0, λ₂ = 3, and λ₃ = −3. Thus, Q(x) = xᵀAx has diagonal form Q(x) = 3y₂² − 3y₃². Therefore, the graph of xᵀAx = 1 is a rotation of the graph of 3y₂² − 3y₃² = 1, which is a hyperbolic cylinder. The graph of xᵀAx = −1 is a rotation of the graph of 3y₂² − 3y₃² = −1, which is a hyperbolic cylinder.
A12 We have C(λ) = −(λ − 9)²(λ + 9). Thus, the eigenvalues are λ₁ = 9, λ₂ = 9, and λ₃ = −9. Thus, Q(x) = xᵀAx has diagonal form Q(x) = 9y₁² + 9y₂² − 9y₃². Therefore, the graph of xᵀAx = 1 is a rotation of the graph of 9y₁² + 9y₂² − 9y₃² = 1, which is a hyperboloid of one sheet. The graph of xᵀAx = −1 is a rotation of the graph of 9y₁² + 9y₂² − 9y₃² = −1, which is a hyperboloid of two sheets.
A13 We have C(λ) = −(λ − 9)²λ. Thus, the eigenvalues are λ₁ = 9, λ₂ = 9, and λ₃ = 0. Thus, Q(x) = xᵀAx has diagonal form Q(x) = 9y₁² + 9y₂² + 0y₃². Therefore, the graph of xᵀAx = 1 is a rotation of the graph of 9y₁² + 9y₂² + 0y₃² = 1, which is an elliptic cylinder. The graph of xᵀAx = −1 is the empty set.
A14 We have C(λ) = (λ − 8)(λ + 5). Thus, the eigenvalues are λ₁ = 8 and λ₂ = −5. Thus, Q(x) has diagonal form Q(x) = 8y₁² − 5y₂². Therefore, the graph of Q(x) = 1 is a rotation of the graph of 8y₁² − 5y₂² = 1, which is a hyperbola opening in the y₁-direction. The graph of Q(x) = 0 is two intersecting lines, and the graph of Q(x) = −1 is a rotation of the graph of 8y₁² − 5y₂² = −1, which is a hyperbola opening in the y₂-direction.
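The classification used in A8 through A14 reads the type of the level set off the signs of the eigenvalues. A minimal sketch of that rule in NumPy; the matrices passed in below are hypothetical, chosen only to exercise each case:

```python
import numpy as np

def classify_2d(A, k=1):
    """Classify the level set x^T A x = k for a 2x2 symmetric matrix A
    by counting positive and negative eigenvalues."""
    lam = np.linalg.eigvalsh(A)
    pos = int(np.sum(lam > 1e-12))
    neg = int(np.sum(lam < -1e-12))
    if pos == 2:
        return "ellipse" if k > 0 else "empty set"
    if pos == 1 and neg == 1:
        return "hyperbola"
    if pos == 1 and neg == 0:
        return "two parallel lines" if k > 0 else "empty set"
    return "other"

# Both eigenvalues positive (7 and 2): an ellipse.
print(classify_2d(np.array([[6.0, 2.0], [2.0, 3.0]])))
# One positive, one negative eigenvalue (3 and -1): a hyperbola.
print(classify_2d(np.array([[1.0, 2.0], [2.0, 1.0]])))
```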


A15 The corresponding symmetric matrix is A = [1, 1, −2; 1, 1, −2; −2, −2, 4]. We find that
C(λ) = −(λ − 6)λ²
Thus, the eigenvalues are λ₁ = 6, λ₂ = 0, and λ₃ = 0. Thus, Q(x) = xᵀAx has diagonal form Q(x) = 6y₁² + 0y₂² + 0y₃². Therefore, the graph of Q(x) = 1 is two parallel planes. The graph of Q(x) = 0 is a plane. The graph of Q(x) = −1 is the empty set.
A16 The corresponding symmetric matrix is A = [2, 2, 0; 2, 3, 2; 0, 2, 4]. We find that
C(λ) = −(λ − 3)(λ − 6)λ
Thus, the eigenvalues are λ₁ = 3, λ₂ = 6, and λ₃ = 0. Thus, Q(x) = xᵀAx has diagonal form Q(x) = 3y₁² + 6y₂² + 0y₃². Therefore, the graph of Q(x) = 1 is an elliptic cylinder. The graph of Q(x) = 0 is a line. The graph of Q(x) = −1 is the empty set.
A17 The corresponding symmetric matrix is A = [4, 1, 0; 1, 5, −1; 0, −1, 4]. We find that
C(λ) = −(λ − 3)(λ − 4)(λ − 6)
Thus, the eigenvalues are λ₁ = 3, λ₂ = 4, and λ₃ = 6. Thus, Q(x) = xᵀAx has diagonal form Q(x) = 3y₁² + 4y₂² + 6y₃². Therefore, the graph of Q(x) = 1 is an ellipsoid. The graph of Q(x) = 0 is the point (0, 0, 0). The graph of Q(x) = −1 is the empty set.
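The eigenvalues in A17 can be confirmed numerically; a sketch assuming NumPy, with the symmetric matrix as printed above:

```python
import numpy as np

# Symmetric matrix from A17; all eigenvalues positive, so Q(x) = 1 is an ellipsoid.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 5.0, -1.0],
              [0.0, -1.0, 4.0]])
lam = np.linalg.eigvalsh(A)        # ascending order
assert np.allclose(lam, [3.0, 4.0, 6.0])
print(lam)
```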
B Homework Problems
B1–B6 [Sketches omitted: each answer is a graph drawn relative to the rotated y₁- and y₂-axes in the x₁x₂-plane.]
B7 The graph of Q(x) = 1 is a hyperbola. The graph of Q(x) = −1 is a hyperbola.
B8 The graph of Q(x) = 1 is the empty set. The graph of Q(x) = −1 is an ellipse.
B9 The graph of Q(x) = 1 is an ellipse. The graph of Q(x) = −1 is the empty set.
B10 The graph of Q(x) = 1 is two parallel lines. The graph of Q(x) = −1 is the empty set.
B11 The graph of Q(x) = 1 is the empty set. The graph of Q(x) = −1 is an elliptic cylinder.
B12 The graph of Q(x) = 1 is a hyperboloid of one sheet. The graph of Q(x) = −1 is a hyperboloid of two sheets.
B13 The graph of Q(x) = 1 is the empty set. The graph of Q(x) = −1 is an ellipsoid.
B14 The graph of Q(x) = 1 is the empty set. The graph of Q(x) = −1 is two parallel planes.
B15 Q(x) = 6y₁² + y₂². The graph of Q(x) = 1 is an ellipse. The graph of Q(x) = 0 is the point (0, 0). The graph of Q(x) = −1 is the empty set.
B16 Q(x) = 3y₁² − y₂². The graph of Q(x) = 1 is a hyperbola. The graph of Q(x) = 0 is two intersecting lines. The graph of Q(x) = −1 is a hyperbola.
B17 Q(x) = −y₁² − 11y₂². The graph of Q(x) = 1 is the empty set. The graph of Q(x) = 0 is the point (0, 0). The graph of Q(x) = −1 is an ellipse.
B18 Q(x) = 2y₁² + 0y₂². The graph of Q(x) = 1 is two parallel lines. The graph of Q(x) = 0 is a line. The graph of Q(x) = −1 is the empty set.
B19 Q(x) = 3y₁² + 6y₂² − 2y₃². The graph of Q(x) = 1 is a hyperboloid of one sheet. The graph of Q(x) = 0 is a cone. The graph of Q(x) = −1 is a hyperboloid of two sheets.
B20 Q(x) = 2y₁² + 9y₂² + 0y₃². The graph of Q(x) = 1 is an elliptic cylinder. The graph of Q(x) = 0 is a line. The graph of Q(x) = −1 is the empty set.
B21 Q(x) = 3y₁² + 0y₂² − 4y₃². The graph of Q(x) = 1 is a hyperbolic cylinder. The graph of Q(x) = 0 is a pair of intersecting planes. The graph of Q(x) = −1 is a hyperbolic cylinder.
Section 8.4
C Conceptual Problems
C1 Let D = diag(λ₁, λ₂, λ₃). Then Pᵀ(βE)P = D. It follows that
Pᵀ(I + βE)P = (PᵀI + PᵀβE)P = PᵀP + PᵀβEP = I + D
and I + D = diag(1 + λ₁, 1 + λ₂, 1 + λ₃), as required.
Section 8.5
A Practice Problems
A1 We have AᵀA = [4, 0; 0, 4]. Hence, the eigenvalues of AᵀA are λ₁ = 4 and λ₂ = 4. Therefore, the singular values of A are σ₁ = √λ₁ = 2 and σ₂ = √λ₂ = 2.
A2 We have AᵀA = [3, 2√3; 2√3, 7]. The characteristic polynomial of AᵀA is
C(λ) = det[3 − λ, 2√3; 2√3, 7 − λ] = λ² − 10λ + 9 = (λ − 1)(λ − 9)
Hence, the eigenvalues of AᵀA ordered from greatest to least are λ₁ = 9 and λ₂ = 1. Therefore, the singular values of A are σ₁ = √λ₁ = 3 and σ₂ = √λ₂ = 1.
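The singular values in A1 and A2 are the square roots of the eigenvalues of AᵀA; a quick numerical sketch for A2, using the matrix AᵀA stated above:

```python
import numpy as np

# A^T A from problem A2
AtA = np.array([[3.0, 2.0 * np.sqrt(3.0)],
                [2.0 * np.sqrt(3.0), 7.0]])
lam = np.linalg.eigvalsh(AtA)        # ascending: [1, 9]
sigma = np.sqrt(lam[::-1])           # singular values, greatest to least: [3, 1]
assert np.allclose(lam, [1.0, 9.0])
assert np.allclose(sigma, [3.0, 1.0])
print(sigma)
```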


5 5 5


A3 We have AT A = 5 5 5. The characteristic polynomial of AT A is


5 5 5
!!
! !
!
!!5 − λ
5
5 !!! !!!5 − λ
5
5 !!!
5−λ
5 !!! = !!! 0
−λ
λ !!!
C(λ) = !!! 5
!! 5
!
!
5
5 − λ! ! 5
5 5 − λ!!
!!
!!
!!5 − λ
5
10 !!
−λ
0 !!!
= !!! 0
!! 5
5 10 − λ!!
= −λ(λ2 − 15λ)
T
Hence, the only
√ eigenvalue of A A is λ1 = 15. Therefore, the only non-zero singular value
√ non-zero
of A is σ1 = λ1 = 15.
"
#
A4 We have AᵀA = [5, 2; 2, 8]. The eigenvalues of AᵀA are (ordered from greatest to least) λ₁ = 9 and λ₂ = 4. Corresponding normalized eigenvectors are v₁ = [1/√5, 2/√5]ᵀ for λ₁ and v₂ = [−2/√5, 1/√5]ᵀ for λ₂. Consequently, we take V = [1/√5, −2/√5; 2/√5, 1/√5]. The singular values of A are σ₁ = 3 and σ₂ = 2. Thus the matrix Σ is Σ = [3, 0; 0, 2]. Next we compute
u₁ = (1/σ₁)Av₁ = (1/3)[2, 2; −1, 2][1/√5, 2/√5]ᵀ = [2/√5, 1/√5]ᵀ
u₂ = (1/σ₂)Av₂ = (1/2)[2, 2; −1, 2][−2/√5, 1/√5]ᵀ = [−1/√5, 2/√5]ᵀ
Thus, we have U = [2/√5, −1/√5; 1/√5, 2/√5]. Then A = UΣVᵀ, as required.
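The decomposition in A4 can be verified numerically; a sketch assuming NumPy, with A = [2, 2; −1, 2] as used above (note that numpy's singular vectors may differ from the hand-chosen ones by sign):

```python
import numpy as np

# Matrix from A4
A = np.array([[2.0, 2.0], [-1.0, 2.0]])
U, s, Vt = np.linalg.svd(A)

assert np.allclose(s, [3.0, 2.0])              # sigma_1 = 3, sigma_2 = 2
assert np.allclose(U @ np.diag(s) @ Vt, A)     # A = U Sigma V^T
print(s)
```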
A5 We have AᵀA = [4, 6; 6, 13]. The eigenvalues of AᵀA are (ordered from greatest to least) λ₁ = 16 and λ₂ = 1. Corresponding normalized eigenvectors are v₁ = [1/√5, 2/√5]ᵀ for λ₁ and v₂ = [−2/√5, 1/√5]ᵀ for λ₂. Consequently, we take V = [1/√5, −2/√5; 2/√5, 1/√5]. The singular values of A are σ₁ = 4 and σ₂ = 1. Thus the matrix Σ is Σ = [4, 0; 0, 1]. Next compute
u₁ = (1/σ₁)Av₁ = (1/4)[2, 3; 0, 2][1/√5, 2/√5]ᵀ = [2/√5, 1/√5]ᵀ
u₂ = (1/σ₂)Av₂ = (1/1)[2, 3; 0, 2][−2/√5, 1/√5]ᵀ = [−1/√5, 2/√5]ᵀ
Thus, we have U = [2/√5, −1/√5; 1/√5, 2/√5]. Then A = UΣVᵀ, as required.
A6 We have AᵀA = [6, 6; 6, 11]. The eigenvalues of AᵀA are (ordered from greatest to least) λ₁ = 15 and λ₂ = 2. Normalized eigenvectors are v₁ = [2/√13, 3/√13]ᵀ for λ₁ and v₂ = [−3/√13, 2/√13]ᵀ for λ₂. Hence, AᵀA is orthogonally diagonalized by V = [2/√13, −3/√13; 3/√13, 2/√13]. The singular values of A are σ₁ = √15 and σ₂ = √2. Thus the matrix Σ is Σ = [√15, 0; 0, √2; 0, 0]. Next compute
u₁ = (1/σ₁)Av₁ = (1/√15)[1, 3; 2, 1; 1, 1][2/√13, 3/√13]ᵀ = [11/√195, 7/√195, 5/√195]ᵀ
u₂ = (1/σ₂)Av₂ = (1/√2)[1, 3; 2, 1; 1, 1][−3/√13, 2/√13]ᵀ = [3/√26, −4/√26, −1/√26]ᵀ
We then need to extend {u₁, u₂} to an orthonormal basis for ℝ³. Since we are in ℝ³, we can use the cross product. We have
[11, 7, 5]ᵀ × [3, −4, −1]ᵀ = [13, 26, −65]ᵀ
So, we can take u₃ = [1/√30, 2/√30, −5/√30]ᵀ. Thus, we have
U = [11/√195, 3/√26, 1/√30; 7/√195, −4/√26, 2/√30; 5/√195, −1/√26, −5/√30]
Then A = UΣVᵀ, as required.
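The cross-product step used in A6 to complete the orthonormal basis can be checked numerically; a sketch assuming NumPy:

```python
import numpy as np

# Unnormalized directions of u_1 and u_2 from A6; scaling does not affect
# the direction of the cross product.
w = np.cross([11.0, 7.0, 5.0], [3.0, -4.0, -1.0])
assert np.allclose(w, [13.0, 26.0, -65.0])

# Normalizing: [13, 26, -65] = 13 * [1, 2, -5], so u_3 = [1, 2, -5]/sqrt(30).
u3 = w / np.linalg.norm(w)
assert np.allclose(u3, np.array([1.0, 2.0, -5.0]) / np.sqrt(30.0))
print(u3)
```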
A7 We have AᵀA = [12, −6; −6, 3]. The eigenvalues of AᵀA are (ordered from greatest to least) λ₁ = 15 and λ₂ = 0. Corresponding normalized eigenvectors are v₁ = [−2/√5, 1/√5]ᵀ for λ₁ and v₂ = [1/√5, 2/√5]ᵀ for λ₂. Hence, we take V = [−2/√5, 1/√5; 1/√5, 2/√5]. The singular values of A are σ₁ = √15 and σ₂ = 0. Thus, Σ = [√15, 0; 0, 0; 0, 0]. Next compute
u₁ = (1/σ₁)Av₁ = (1/√15)[2, −1; 2, −1; 2, −1][−2/√5, 1/√5]ᵀ = [−1/√3, −1/√3, −1/√3]ᵀ
We then need to extend {u₁} to an orthonormal basis for ℝ³. One way of doing this is to find an orthonormal basis for Null(Aᵀ). A basis for Null(Aᵀ) is {[−1, 1, 0]ᵀ, [−1, 0, 1]ᵀ}. To make this an orthogonal basis, we apply the Gram-Schmidt Procedure. We take z₁ = [−1, 1, 0]ᵀ, and then we get
z₂ = [−1, 0, 1]ᵀ − (1/2)[−1, 1, 0]ᵀ = [−1/2, −1/2, 1]ᵀ
Thus, we pick u₂ = [−1/√2, 1/√2, 0]ᵀ and u₃ = [−1/√6, −1/√6, 2/√6]ᵀ. Consequently, we have
U = [−1/√3, −1/√2, −1/√6; −1/√3, 1/√2, −1/√6; −1/√3, 0, 2/√6]
and A = UΣVᵀ, as required.
A8 We have AᵀA = [2, 0; 0, 3]. The eigenvalues of AᵀA are λ₁ = 3 and λ₂ = 2 with corresponding unit eigenvectors v₁ = [0, 1]ᵀ and v₂ = [1, 0]ᵀ. Hence, we get
V = [0, 1; 1, 0] and Σ = [√3, 0; 0, √2; 0, 0]
Next, we take
u₁ = (1/√3)Av₁ = (1/√3)[1, 1, −1]ᵀ
u₂ = (1/√2)Av₂ = (1/√2)[1, 0, 1]ᵀ
Next, we need to extend {u₁, u₂} to an orthonormal basis for ℝ³ by adding a left singular vector u₃. We take u₃ = (1/√6)[1, −2, −1]ᵀ.
Then, U = [u₁ u₂ u₃] and A = UΣVᵀ, as required.
A9 We have AᵀA = [6, 4; 4, 6]. The eigenvalues of AᵀA are (ordered from greatest to least) λ₁ = 10 and λ₂ = 2. Normalized eigenvectors are v₁ = [1/√2, 1/√2]ᵀ for λ₁ and v₂ = [−1/√2, 1/√2]ᵀ for λ₂. Hence, AᵀA is orthogonally diagonalized by V = [1/√2, −1/√2; 1/√2, 1/√2]. The singular values of A are σ₁ = √10 and σ₂ = √2. Thus the matrix Σ is Σ = [√10, 0; 0, √2; 0, 0]. Next compute
u₁ = (1/σ₁)Av₁ = (1/√10)[1, −1; 2, 2; 1, 1][1/√2, 1/√2]ᵀ = [0, 2/√5, 1/√5]ᵀ
u₂ = (1/σ₂)Av₂ = (1/√2)[1, −1; 2, 2; 1, 1][−1/√2, 1/√2]ᵀ = [−1, 0, 0]ᵀ
We then need to extend {u₁, u₂} to an orthonormal basis for ℝ³. Since we are in ℝ³, we can use the cross product. We have
[0, 2, 1]ᵀ × [−1, 0, 0]ᵀ = [0, −1, 2]ᵀ
So, we can take u₃ = [0, −1/√5, 2/√5]ᵀ. Thus, we have U = [u₁ u₂ u₃]. Then, A = UΣVᵀ, as required.


3 0 0


A10 We have AT A = 0 3 0. The only eigenvalue of AT A is λ1 = 3 with multiplicity 3.


0 0 3
      

1 0 0



     

An orthonormal basis for the eigenspace of λ1 is 
, so we can take V = I.
0 , 1 , 0



0 0 1

√
 3

√
√
√
 0
The singular values of A are σ1 = 3, σ2 = 3, and σ3 = 3. So, we get that Σ = 
 0

0
Next compute
√0
3
0
0

0 

√0 .
3

0
 
 1
1
1  1
"u 1 =
A"v 1 = √  
σ1
3 −1
0
 
0
1
1 1
"u 2 =
A"v 2 = √  
σ2
3 1
1
 
 1
1
1 −1
"u 3 =
A"v 3 = √  
σ3
3  0
1
We then need to extend {"u 1 , "u 2 , "u 3 } to an orthonormal basis for R4 . We find that a basis for Null(AT )
 

−1





 



 0

is 
. After normalizing this vector, we take
 




−1








  


1
√
√
√ 

0√
1/ √3 −1/ 3
 1/ √3


0√ 
 1/ √3 1/ √3 −1/ 3
T
U = 
 and A = UΣV as required.
−1/ 3 1/ 3
0
−1/
3

√
√
√ 

0
1/ 3
1/ 3
1/ 3
A11 We have AᵀA = [4, 8; 8, 16]. The eigenvalues of AᵀA are (ordered from greatest to least) λ₁ = 20 and λ₂ = 0. Corresponding normalized eigenvectors are v₁ = [1/√5, 2/√5]ᵀ for λ₁ and v₂ = [−2/√5, 1/√5]ᵀ for λ₂. Hence, we take V = [1/√5, −2/√5; 2/√5, 1/√5]. The singular values of A are σ₁ = √20 and σ₂ = 0. Thus the matrix Σ is Σ = [√20, 0; 0, 0; 0, 0; 0, 0]. Next compute
u₁ = (1/σ₁)Av₁ = (1/√20)[1, 2; 1, 2; 1, 2; 1, 2][1/√5, 2/√5]ᵀ = [1/2, 1/2, 1/2, 1/2]ᵀ
We then need to extend {u₁} to an orthonormal basis for ℝ⁴. We pick
u₂ = [−1/2, −1/2, 1/2, 1/2]ᵀ, u₃ = [1/2, −1/2, −1/2, 1/2]ᵀ, u₄ = [1/2, −1/2, 1/2, −1/2]ᵀ
Thus, we have U = [u₁ u₂ u₃ u₄]. Then A = UΣVᵀ, as required.


2 0 0


A12 We have AT A = 0 2 2. The eigenvalues of AT A are (ordered from greatest to least) λ1 = 4,


0 2 2


 
 0√ 
1


 

1/
2
λ2 = 2, and λ3 = 0. Corresponding normalized eigenvectors are "v 1 =  √  for λ1 , "v 2 = 0 for λ2 ,
 


0
1/ 2




 0√ 1
 0√ 
0√ 




and "v 3 = −1/ √2 for λ3 . Hence, we take V = 1/ √2 0 −1/ √2. The non-zero singular values




1/ 2
1/ 2 0
1/ 2
"
#
√
√
2 √0 0
of A are σ1 = 4 = 2 and σ2 = 2. Thus the matrix Σ is Σ =
. Next compute
0
2 0


√ #
#  0  "


√
1
1 1
1
1 
1/ √2

"u 1 =
A"v 1 =
1/ 2 =
σ1
2 1 −1 −1  √ 
−1/ 2
1/ 2
 
"
# 1 " √ #
1
1 1
1
1  
1/ √2
0 =
"u 2 =
A"v 2 = √


1
−1
−1
σ2
1/ 2
2
0
<
=
Thus, we have U = "u 1 "u 2 . Then A = UΣV T as required.
"
#
6 6
T
A13 We have BᵀB = [6, 6; 6, 15]. The eigenvalues of BᵀB are (ordered from greatest to least) λ₁ = 18 and λ₂ = 3. Corresponding normalized eigenvectors are v₁ = [1/√5, 2/√5]ᵀ for λ₁ and v₂ = [−2/√5, 1/√5]ᵀ for λ₂. Hence, we take V = [1/√5, −2/√5; 2/√5, 1/√5]. The singular values of B are σ₁ = √18 and σ₂ = √3. Thus, Σ = [√18, 0; 0, √3; 0, 0; 0, 0]. Next compute
u₁ = (1/σ₁)Bv₁ = (1/√18)[1, 1; 0, 2; −2, −1; 1, 3][1/√5, 2/√5]ᵀ = [3/√90, 4/√90, −4/√90, 7/√90]ᵀ
u₂ = (1/σ₂)Bv₂ = (1/√3)[1, 1; 0, 2; −2, −1; 1, 3][−2/√5, 1/√5]ᵀ = [−1/√15, 2/√15, 3/√15, 1/√15]ᵀ
We then need to extend {u₁, u₂} to an orthonormal basis for ℝ⁴. One way of doing this is to find an orthonormal basis for Null(Bᵀ). A basis for Null(Bᵀ) is {[−1, −1, 0, 1]ᵀ, [4, −1, 2, 0]ᵀ}. To make this an orthogonal basis, we apply the Gram-Schmidt Procedure. Take z₁ = [−1, −1, 0, 1]ᵀ. Then,
z₂ = [4, −1, 2, 0]ᵀ − (−3/3)[−1, −1, 0, 1]ᵀ = [3, −2, 2, 1]ᵀ
Thus, we pick u₃ = [−1/√3, −1/√3, 0, 1/√3]ᵀ and u₄ = [3/√18, −2/√18, 2/√18, 1/√18]ᵀ. Consequently, we have U = [u₁ u₂ u₃ u₄] and B = UΣVᵀ, as required.
A14 A⁺ = VΣ⁺Uᵀ = [1/√5, −2/√5; 2/√5, 1/√5][1/3, 0; 0, 1/2][2/√5, 1/√5; −1/√5, 2/√5] = [1/3, −1/3; 1/6, 1/3]
A15 A⁺ = VΣ⁺Uᵀ = [−2/√5, 1/√5; 1/√5, 2/√5][1/√15, 0, 0; 0, 0, 0][−1/√3, −1/√3, −1/√3; −1/√2, 1/√2, 0; −1/√6, −1/√6, 2/√6] = [2/15, 2/15, 2/15; −1/15, −1/15, −1/15]
A16 A⁺ = VΣ⁺Uᵀ = [0, 1, 0; 1/√2, 0, −1/√2; 1/√2, 0, 1/√2][1/2, 0; 0, 1/√2; 0, 0][1/√2, −1/√2; 1/√2, 1/√2] = [1/2, 1/2; 1/4, −1/4; 1/4, −1/4]
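The pseudoinverse in A14 can be checked numerically; a sketch assuming, as the factors V, Σ⁺, and Uᵀ suggest, that the underlying matrix is A = [2, 2; −1, 2] from A4:

```python
import numpy as np

A = np.array([[2.0, 2.0], [-1.0, 2.0]])
A_plus = np.linalg.pinv(A)           # SVD-based pseudoinverse

expected = np.array([[1.0/3.0, -1.0/3.0],
                     [1.0/6.0,  1.0/3.0]])
assert np.allclose(A_plus, expected)

# A is invertible here, so the pseudoinverse coincides with the inverse.
assert np.allclose(A_plus, np.linalg.inv(A))
print(A_plus)
```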
B Homework Problems
B1 σ₁ = 3, σ₂ = 2
B2 σ₁ = √26
B3 σ₁ = √3, σ₂ = √2
B4 U = [1/√2, −1/√2; 1/√2, 1/√2], Σ = [4, 0; 0, 2], V = [1/√2, −1/√2; 1/√2, 1/√2].
B5 U = [1/√2, −1/√2; 1/√2, 1/√2], Σ = [√8, 0; 0, √2], V = [1, 0; 0, 1].
B6 U = [1/√5, −2/√5; 2/√5, 1/√5], Σ = [5, 0; 0, 0], V = [1/√5, −2/√5; 2/√5, 1/√5].
B7 U = [2/3, −1/√5, 4/√45; 2/3, 0, −5/√45; 1/3, 2/√5, 2/√45], Σ = [3, 0; 0, √5; 0, 0], V = [0, 1; 1, 0].
B8 U = [0, −1/√2, 1/√2; 1, 0, 0; 0, 1/√2, 1/√2], Σ = [2√2, 0; 0, √2; 0, 0], V = [1/√2, −1/√2; 1/√2, 1/√2].
B9 U = [3/√34, −1/√2, 2/√17; −4/√34, 0, 3/√17; 3/√34, 1/√2, 2/√17], Σ = [√17, 0; 0, 1; 0, 0], V = [1/√2, −1/√2; 1/√2, 1/√2].
B10 U = [3/√35, 1/√10, 3/√14; 5/√35, 0, −2/√14; −1/√35, 3/√10, −1/√14], Σ = [√7, 0; 0, √2; 0, 0], V = [2/√5, −1/√5; 1/√5, 2/√5].
B11 U = [1/√5, 0, −2/√5; 0, 1, 0; 2/√5, 0, 1/√5], Σ = [√5, 0; 0, 1; 0, 0], V = [1, 0; 0, 1].
B12 U = [1/√6, 1/√2, 1/√3; 2/√6, 0, −1/√3; −1/√6, 1/√2, −1/√3], Σ = [√12, 0; 0, 0; 0, 0], V = [1/√2, −1/√2; 1/√2, 1/√2].
B13 U = [2/√6, 0, −1/√3; 1/√6, −1/√2, 1/√3; 1/√6, 1/√2, 1/√3], Σ = [√8, 0, 0; 0, √2, 0; 0, 0, √2], V = [1/√3, 0, −2/√6; 1/√3, −1/√2, 1/√6; 1/√3, 1/√2, 1/√6].
B14 U = [1/√6, 0, 5/√30; 1/√6, −2/√5, −1/√30; 2/√6, 1/√5, −2/√30], Σ = [√18, 0, 0; 0, 0, 0; 0, 0, 0], V = [1/√3, −1/√2, 1/√6; 1/√3, 0, −2/√6; 1/√3, 1/√2, 1/√6].
B15 U = [1, 0; 0, 1], Σ = [√3, 0, 0; 0, √2, 0], V = [1/√3, 0, −2/√6; −1/√3, 1/√2, −1/√6; 1/√3, 1/√2, 1/√6].
B16 U = [1/√2, −1/√2; 1/√2, 1/√2], Σ = [√22, 0, 0; 0, 0, 0], V = [1/√11, 1/√2, 3/√22; 3/√11, 0, −2/√22; −1/√11, 1/√2, −3/√22].
B17 U = [1/√2, 0, −1/√2, 0; 0, 1/√2, 0, −1/√2; 1/√2, 0, 1/√2, 0; 0, 1/√2, 0, 1/√2], Σ = [√2, 0; 0, √2; 0, 0; 0, 0], V = [1, 0; 0, 1].
B18 U = [3/√22, 1/√6, −1/√3, 1/√11; 3/√22, −1/√6, 1/√3, 1/√11; 0, 2/√6, 1/√3, 0; 2/√22, 0, 0, −3/√11], Σ = [√11, 0; 0, √2; 0, 0; 0, 0], V = [1/√2, −1/√2; 1/√2, 1/√2].
B19 A⁺ = [1/2, 0; 0, 0]
B20 A⁺ = [1/4, 1/2; −1/4, 1/2]
B21 A⁺ = [2/5, 1/5]
B22 A⁺ = [−1/5, 2/5, 0; 8/15, −7/30, 1/6]
B23 A⁺ = [3/14, −3/7, 5/14; 1/7, 5/7, −3/7]
B24 A⁺ = [1/4, −1/12; 0, 0; 0, 1/3]
C Conceptual Problems
C1 (a) If λ₁ = 0 is not an eigenvalue, then the result holds. Assume λ₁ = 0 is an eigenvalue of A. Then, the dimension of the eigenspace of λ₁ = 0 is dim Null(A − 0I) = dim Null(A). By the Rank-Nullity Theorem, we get dim Null(A) = n − r. Since every symmetric matrix is diagonalizable, by Theorem 6.2.3, we have that the algebraic multiplicity of λ₁ = 0 is also n − r. Thus, the number of non-zero eigenvalues is r.
(b) If Ax = 0, then AᵀAx = Aᵀ0 = 0. Hence, the nullspace of A is a subset of the nullspace of AᵀA. On the other hand, consider AᵀAx = 0. Then,
‖Ax‖² = (Ax) · (Ax) = (Ax)ᵀ(Ax) = xᵀAᵀAx = xᵀ0 = 0
Thus, Ax = 0. Hence, the nullspace of AᵀA is also a subset of the nullspace of A, and the result follows.
(c) Using (b), the Rank-Nullity Theorem gives
rank(AᵀA) = n − dim(Null(AᵀA)) = n − dim(Null(A)) = rank(A)
(d) Since AᵀA is symmetric, we have by (a) that it has r non-zero eigenvalues. By definition of singular values, these will give exactly r non-zero singular values for A.
C2 By definition, the singular values of PA are the square roots of the eigenvalues of (PA)ᵀ(PA) = AᵀPᵀPA = AᵀA. But, the square roots of the eigenvalues of AᵀA are the singular values of A, as required.
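The rank identity in C1(c) can be illustrated numerically; a sketch with a hypothetical rank-deficient matrix (the third column is the sum of the first two):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 2))
# 5x3 matrix of rank 2: column 3 = column 1 + column 2
A = np.column_stack([B, B[:, 0] + B[:, 1]])

assert np.linalg.matrix_rank(A) == 2
# rank(A^T A) = rank(A), as proved in C1(c)
assert np.linalg.matrix_rank(A.T @ A) == np.linalg.matrix_rank(A)
print(np.linalg.matrix_rank(A.T @ A))
```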
C3 Let λ be an eigenvalue of A with corresponding eigenvector v. Then, we have
Av = λv
Multiplying both sides by Aᵀ gives, since A is symmetric,
AᵀAv = λAᵀv = λAv = λ²v
Thus, the corresponding singular value of A is
σ = √(λ²) = |λ|
C4 Let U = [u₁ ··· u_m] and V = [v₁ ··· v_n].
By Theorem 8.5.2, {u₁, . . . , u_r} forms an orthonormal basis for Col(A).
By the Fundamental Theorem of Linear Algebra, the nullspace of Aᵀ is the orthogonal complement of Col(A). We are given that {u₁, . . . , u_m} forms an orthonormal basis for ℝᵐ, and hence a basis for the orthogonal complement of Span{u₁, . . . , u_r} is B = {u_{r+1}, . . . , u_m}. Thus, B is a basis for Null(Aᵀ).
Since UΣVᵀ is a singular value decomposition of A, we have that AV = UΣ and hence
Av_i = σ_i u_i
Since A has exactly r non-zero singular values, for r + 1 ≤ i ≤ n, we have that
Av_i = 0u_i = 0
Hence, {v_{r+1}, . . . , v_n} is an orthonormal set of n − r vectors in the nullspace of A. But, since A has rank r, we know by the Rank-Nullity Theorem that dim Null(A) = n − r. So {v_{r+1}, . . . , v_n} is a basis for Null(A).
Hence, by the Fundamental Theorem of Linear Algebra, we have that the orthogonal complement of Null(A) is Row(A). Hence {v₁, . . . , v_r} forms a basis for Row(A).
C5 Let σ ≠ 0 be a singular value of A. Then, by definition, there exists a non-zero vector v such that
AᵀAv = σ²v
Observe that Av ≠ 0 since σ ≠ 0. Multiply both sides by A to get
AAᵀAv = σ²Av
Take x = Av ≠ 0. Then, we have
AAᵀx = σ²x
Let B = Aᵀ; then A = Bᵀ and the equation above is
BᵀBx = σ²x
Hence, σ is a non-zero singular value of B = Aᵀ. Proving every non-zero singular value of Aᵀ is a singular value of A is similar.
C6 By definition, the left singular vectors of A are the columns of U = [u₁ ··· u_m]. Let rank(A) = r. For 1 ≤ i ≤ r, we have that u_i = (1/σ_i)Av_i, where v_i is an eigenvector of AᵀA. Hence,
AAᵀu_i = AAᵀ((1/σ_i)Av_i) = (1/σ_i)A(AᵀAv_i) = (1/σ_i)A(σ_i²v_i) = σ_i Av_i = σ_i²u_i
Thus, u_i is an eigenvector of AAᵀ. For r + 1 ≤ i ≤ m, we have that u_i ∈ Null(Aᵀ). Thus,
AAᵀu_i = A(0) = 0 = 0u_i
So, u_i is an eigenvector of AAᵀ.
C7 We have
A = UΣVᵀ
  = U[σ₁e₁ ··· σ_r e_r, 0, ···, 0]Vᵀ
  = [σ₁Ue₁ ··· σ_r Ue_r, 0, ···, 0]Vᵀ
  = [σ₁u₁ ··· σ_r u_r, 0, ···, 0][v₁ᵀ; ⋮ ; v_nᵀ]
  = σ₁u₁v₁ᵀ + ··· + σ_r u_r v_rᵀ
by block multiplication.
C8 Observe that we have
(AᵀA)⁻¹Aᵀ = ((UΣVᵀ)ᵀ(UΣVᵀ))⁻¹(UΣVᵀ)ᵀ = (VΣᵀUᵀUΣVᵀ)⁻¹VΣᵀUᵀ = (VΣᵀΣVᵀ)⁻¹VΣᵀUᵀ
If AᵀA is invertible, then by the Invertible Matrix Theorem, we have that rank(AᵀA) = n. Hence, by Problem C1(c), we have that rank(A) = n. Thus, by Theorem 8.5.1, A has n non-zero singular values and so
ΣᵀΣ = diag(σ₁², . . . , σ_n²)
is invertible. Therefore, we get
(AᵀA)⁻¹Aᵀ = V(ΣᵀΣ)⁻¹VᵀVΣᵀUᵀ = V(ΣᵀΣ)⁻¹ΣᵀUᵀ
Observe that
(ΣᵀΣ)⁻¹ = diag(1/σ₁², . . . , 1/σ_n²)
We get that
[(ΣᵀΣ)⁻¹Σᵀ]_ii = (1/σ_i²)σ_i = 1/σ_i
for 1 ≤ i ≤ n, and all other entries of (ΣᵀΣ)⁻¹Σᵀ are 0. Therefore, (ΣᵀΣ)⁻¹Σᵀ = Σ⁺. The result follows.
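C8's identity, that (AᵀA)⁻¹Aᵀ equals the pseudoinverse when A has linearly independent columns, can be checked numerically; the matrix below is a hypothetical full-column-rank example:

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 2.0]])                    # rank 2, so A^T A is invertible

left_inverse = np.linalg.inv(A.T @ A) @ A.T   # (A^T A)^{-1} A^T
assert np.allclose(left_inverse, np.linalg.pinv(A))
print(left_inverse)
```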
C9 (a) We have
AA⁺A = (UΣVᵀ)(VΣ⁺Uᵀ)(UΣVᵀ) = UΣΣ⁺ΣVᵀ
We get that (ΣΣ⁺)_ii = σ_i(1/σ_i) = 1 for 1 ≤ i ≤ rank(A), and all other entries of ΣΣ⁺ are 0. It follows that ΣΣ⁺Σ = Σ. Hence,
AA⁺A = UΣVᵀ = A
(b) We have
A⁺AA⁺ = (VΣ⁺Uᵀ)(UΣVᵀ)(VΣ⁺Uᵀ) = VΣ⁺ΣΣ⁺Uᵀ
We get that (Σ⁺Σ)_ii = (1/σ_i)σ_i = 1 for 1 ≤ i ≤ rank(A), and all other entries of Σ⁺Σ are 0. It follows that Σ⁺ΣΣ⁺ = Σ⁺. Hence,
A⁺AA⁺ = VΣ⁺Uᵀ = A⁺
(c) We have
(AA⁺)ᵀ = ((UΣVᵀ)(VΣ⁺Uᵀ))ᵀ = (UΣΣ⁺Uᵀ)ᵀ
From our work in part (a), we see that ΣΣ⁺ is symmetric. Thus, we have that
(AA⁺)ᵀ = UΣΣ⁺Uᵀ = UΣVᵀVΣ⁺Uᵀ = AA⁺
(d) From our work in part (b), we see that Σ⁺Σ is symmetric. Thus, we have that
(A⁺A)ᵀ = ((VΣ⁺Uᵀ)(UΣVᵀ))ᵀ = (VΣ⁺ΣVᵀ)ᵀ = VΣ⁺ΣVᵀ = VΣ⁺UᵀUΣVᵀ = A⁺A
Chapter 8 Quiz
E1 We have
C(λ) = det[2 − λ, −3, 2; −3, 3 − λ, 3; 2, 3, 2 − λ] = −(λ − 6)(λ + 3)(λ − 4)
The eigenvalues are λ₁ = 6, λ₂ = −3, and λ₃ = 4.
For λ₁ = 6 we get
A − 6I = [−4, −3, 2; −3, −3, 3; 2, 3, −4] ~ [1, 0, 1; 0, 1, −2; 0, 0, 0]
A basis for the eigenspace of λ₁ is {[1, −2, −1]ᵀ}.
For λ₂ = −3 we get
A − (−3)I = [5, −3, 2; −3, 6, 3; 2, 3, 5] ~ [1, 0, 1; 0, 1, 1; 0, 0, 0]
A basis for the eigenspace of λ₂ is {[1, 1, −1]ᵀ}.
For λ₃ = 4 we get
A − 4I = [−2, −3, 2; −3, −1, 3; 2, 3, −2] ~ [1, 0, −1; 0, 1, 0; 0, 0, 0]
A basis for the eigenspace of λ₃ is {[1, 0, 1]ᵀ}.
We normalize the basis vectors for the eigenspaces to get the orthonormal basis for ℝ³ of eigenvectors of A {[1/√6, −2/√6, −1/√6]ᵀ, [1/√3, 1/√3, −1/√3]ᵀ, [1/√2, 0, 1/√2]ᵀ}. Hence,
P = [1/√6, 1/√3, 1/√2; −2/√6, 1/√3, 0; −1/√6, −1/√3, 1/√2]
orthogonally diagonalizes A to PᵀAP = diag(6, −3, 4).
E2 2x₁² + 8x₁x₂ + 5x₂²
E3 x₁² − 4x₁x₃ − 3x₂² + 8x₂x₃ + x₃²
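The orthogonal diagonalization in E1 can be verified numerically; a sketch with NumPy using the eigenvectors found above:

```python
import numpy as np

# Symmetric matrix from E1
A = np.array([[2.0, -3.0, 2.0],
              [-3.0, 3.0, 3.0],
              [2.0, 3.0, 2.0]])

# Columns are the normalized eigenvectors found in the solution.
P = np.column_stack([
    np.array([1.0, -2.0, -1.0]) / np.sqrt(6.0),
    np.array([1.0, 1.0, -1.0]) / np.sqrt(3.0),
    np.array([1.0, 0.0, 1.0]) / np.sqrt(2.0),
])

assert np.allclose(P.T @ P, np.eye(3))                      # P is orthogonal
assert np.allclose(P.T @ A @ P, np.diag([6.0, -3.0, 4.0]))  # P^T A P = D
print(P.T @ A @ P)
```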
E4 (a) The corresponding symmetric matrix is A = [5, 2; 2, 5].
(b) The characteristic polynomial is
C(λ) = det[5 − λ, 2; 2, 5 − λ] = (λ − 3)(λ − 7)
Thus, the eigenvalues of A are λ₁ = 3 and λ₂ = 7.
For λ₁ = 3,
A − 3I = [2, 2; 2, 2] ~ [1, 1; 0, 0]
A basis for the eigenspace is {[−1, 1]ᵀ}.
For λ₂ = 7,
A − 7I = [−2, 2; 2, −2] ~ [1, −1; 0, 0]
A basis for the eigenspace is {[1, 1]ᵀ}.
So, after normalizing the vectors, we find that P = [−1/√2, 1/√2; 1/√2, 1/√2] orthogonally diagonalizes A. Hence, the change of variables x = Py brings Q(x) into the form Q(x) = 3y₁² + 7y₂².
(c) Since the eigenvalues of A are all positive, Q(x) is positive definite.
(d) The graph of Q(x) = 1 is a rotation of the graph of 3y₁² + 7y₂² = 1, so it is an ellipse. The graph of Q(x) = 0 is a rotation of 3y₁² + 7y₂² = 0, which is a single point at the origin.
E5 (a) The corresponding symmetric matrix is A = [2, −3, −3; −3, −3, 2; −3, 2, −3].
(b) The characteristic polynomial is
C(λ) = det[2 − λ, −3, −3; −3, −3 − λ, 2; −3, 2, −3 − λ] = −(λ + 5)(λ − 5)(λ + 4)
Thus, the eigenvalues of A are λ₁ = −5, λ₂ = 5, and λ₃ = −4.
For λ₁ = −5,
A − (−5)I = [7, −3, −3; −3, 2, 2; −3, 2, 2] ~ [1, 0, 0; 0, 1, 1; 0, 0, 0]
A basis for the eigenspace is {[0, −1, 1]ᵀ}.
For λ₂ = 5,
A − 5I = [−3, −3, −3; −3, −8, 2; −3, 2, −8] ~ [1, 0, 2; 0, 1, −1; 0, 0, 0]
A basis for the eigenspace is {[−2, 1, 1]ᵀ}.
For λ₃ = −4,
A − (−4)I = [6, −3, −3; −3, 1, 2; −3, 2, 1] ~ [1, 0, −1; 0, 1, −1; 0, 0, 0]
A basis for the eigenspace is {[1, 1, 1]ᵀ}.
So, after normalizing the vectors, we find that P = [0, −2/√6, 1/√3; −1/√2, 1/√6, 1/√3; 1/√2, 1/√6, 1/√3] orthogonally diagonalizes A. Hence, the change of variables x = Py brings Q(x) into the form Q(x) = −5y₁² + 5y₂² − 4y₃².
(c) Since A has positive and negative eigenvalues, Q(x) is indefinite.
(d) The graph of Q(x) = 1 is a rotation of the graph of −5y₁² + 5y₂² − 4y₃² = 1, so it is a hyperboloid of two sheets. The graph of Q(x) = 0 is a rotation of −5y₁² + 5y₂² − 4y₃² = 0, which is a cone.
E6 The quadratic form Q("x ) = 5x12 − 2x1 x2 + 5x22 corresponds to the symmetric matrix A =
.
−1
5
The characteristic polynomial is
!!
!
!!5 − λ −1 !!!
C(λ) = !
= (λ − 6)(λ − 4)
! −1 5 − λ!!
Thus, the eigenvalues of A are λ1 = 6 and λ2 = 4.
For λ1 = 6,
"
# "
#
−1 −1
1 1
A − 6I =
∼
−1 −1
0 0
$" #%
−1
A basis for the eigenspace is
.
1
For λ2 = 4,
"
# "
#
1 −1
1 −1
A − 4I =
∼
−1
1
0
0
$" #%
1
A basis for the eigenspace is
.
1
√
√ #
"
−1/ √2 1/ √2
So, after normalizing the vectors, we find that P =
orthogonally diagonalizes A.
1/ 2 1/ 2
Hence, the change of variables "x = P"y brings Q("x ) into the form Q("x ) = 6y21 + 4y22 .
Now, the graph of 6y21 + 4y22 = 12 is an ellipse in the y1 y2 -plane. We observe that the y1 -intercepts are
√
√
√
√
( 2, 0) and (− 2, 0), and the y2 -intercepts are (0, 3) and (0, − 3).
Now to sketch the graph of 5x12 − 2x1 x2 + 5x22 = 12 in the x1 x2 -plane, we draw the new y1 -axis in the
" #
" #
−1
1
direction of
and draw the new y2 -axis in the direction of
. Then, relative to these new axes,
1
1
Copyright © 2020 Pearson Canada Inc.
49
sketch the graph of the ellipse 6y21 + 4y22 = 12. The graph is also the graph of the original equation
5x12 − 2x1 x2 + 5x22 = 12. See the graph below:
x2
–1
1
y2
1
1
y1
x1
E7 The corresponding symmetric matrix is A = [1 2; 2 1]. The characteristic polynomial is

C(λ) = λ^2 - 2λ - 3 = (λ - 3)(λ + 1)

Thus, the eigenvalues of A are λ1 = 3 and λ2 = -1, so the equation can be brought into the diagonal form

3y1^2 - y2^2 = 8

This is an equation of a hyperbola. We observe that the y1-intercepts are (√(8/3), 0) and (-√(8/3), 0), and there are no intercepts on the y2-axis. The asymptotes of the hyperbola are determined by the equation 3y1^2 - y2^2 = 0. This gives asymptotes y2 = √3 y1 and y2 = -√3 y1.

For λ1 = 3,

A - 3I = [-2 2; 2 -2] ∼ [1 -1; 0 0]

Thus, a basis for the eigenspace is {[1; 1]}.

For λ2 = -1,

A + I = [2 2; 2 2] ∼ [1 1; 0 0]

Thus, a basis for the eigenspace is {[-1; 1]}.

Next, we convert the equations of the asymptotes from the y1y2-plane to the x1x2-plane by using the change of variables

[y1; y2] = P^T x = [1/√2 1/√2; -1/√2 1/√2][x1; x2] = (1/√2)[x1 + x2; -x1 + x2]

Substituting these into the equations of the asymptotes gives

(1/√2)(-x1 + x2) = √3 (1/√2)(x1 + x2), so x2 = ((1 + √3)/(1 - √3)) x1

(1/√2)(-x1 + x2) = -√3 (1/√2)(x1 + x2), so x2 = ((1 - √3)/(1 + √3)) x1

Hence, the asymptotes are x2 ≈ -3.732x1 and x2 ≈ -0.268x1.

Now to sketch the graph of x1^2 + 4x1x2 + x2^2 = 8 in the x1x2-plane, we draw the y1-axis in the direction of [1; 1] and draw the y2-axis in the direction of [-1; 1]. Then, relative to these axes, sketch the graph of the hyperbola, which is the graph of x1^2 + 4x1x2 + x2^2 = 8.

[Figure: the hyperbola and its asymptotes sketched relative to the rotated y1- and y2-axes in the x1x2-plane.]
"
#
5 5
T
E8 We have A A =
. The eigenvalues of AT A are (ordered from greatest to least) λ1 = 10 and
5 5
√ #
" √ #
"
1/ √2
−1/ √2
λ2 = 0. Corresponding normalized eigenvectors are "v 1 =
for λ1 and "v 2 =
for λ2 .
1/ 2
1/ 2
√ #
" √
√
1/ √2 −1/ √2
Consequently, we take V =
. The singular values of A are σ1 = 10 and σ2 = 0.
1/ 2
1/ 2
"√
#
10 0
Thus, the matrix Σ is Σ =
. Next we compute
0
0
"
#" √ # " √ #
1
1 1 1 1/ 2
√ = 1/ √5
"u 1 =
A"v 1 = √
2
2
σ1
1/
2
2/ 5
10
√ #
"
−2/ √5
We then need to extend {"u 1 } to an orthonormal basis for R2 . We take "u 2 =
. So, we take
1/ 5
<
=
U = "u 1 "u 2 and A = UΣV T is a singular value decomposition of A.
"
#
3 2
T
E9 We have B B =
. The eigenvalues of BT B are (ordered from greatest to least) λ1 = 7 and
2 6
λ2 = 2.
√ #
" √ #
"
1/ √5
−2/ √5
Corresponding normalized eigenvectors are "v 1 =
for λ1 and "v 2 =
for λ2 .
2/ 5
1/ 5
√ #
" √
1/ √5 −2/ √5
Hence, we take V =
.
2/ 5
1/ 5
Copyright © 2020 Pearson Canada Inc.
51
√
 7
√
√

The singular values of B are σ1 = 7 and σ2 = 2. Thus, Σ =  0

0
Next compute


0
√ 
2.

0
√ 



2 " √ #  5/ √35
 1
1
1 


 1/ √5
1
"u 1 =
B"v 1 = √ −1
=  1/ 35
√


σ1


7 −1 −1 2/ 5
−3/ 35




√ #  0 
2 "
 1


√
1
1 

5
−2/

√ = 3/ 10
1
"u 2 =
B"v 2 = √ −1
√


σ2

2 −1 −1 1/ 5
1/ 10
We then need to extend {"u 1 , "u 2 } to an orthonormal basis for R3 . One way of doing this is to find
√ an


  

2/
14


2










√




 
−1. Normalizing gives "u 3 = −1/ 14.
orthonormal basis for Null(AT ). A basis for Null(AT ) is 

√ 
 




3
3 14
Consequently,
we
have
<
=
U = "u 1 "u 2 "u 3 and B = UΣV T is a singular value decomposition of B.
E10 We have that

A^T A = [1 1; 1 1]

The eigenvalues of A^T A are λ1 = 2 and λ2 = 0 with corresponding orthonormal eigenvectors v1 = [1/√2; 1/√2] and v2 = [-1/√2; 1/√2]. Consequently, we take

V = [1/√2 -1/√2; 1/√2 1/√2]

The only non-zero singular value of A is σ1 = √2. So, we have

Σ^+ = [1/√2 0; 0 0]

Next, we compute

u1 = (1/σ1) A v1 = [1; 0]

Since we only have one vector in R^2, we need to extend {u1} to an orthonormal basis {u1, u2} for R^2. We see that we can take u2 = [0; 1]. Hence,

U = [1 0; 0 1]

Then

A^+ = V Σ^+ U^T = [1/2 0; 1/2 0]
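The hand computation of A^+ = V Σ^+ U^T in E10 is exactly what numpy's pseudoinverse routine does via the SVD. An illustrative check (not part of the original text):

```python
import numpy as np

# E10's matrix; pinv computes the Moore-Penrose pseudoinverse from the SVD.
A = np.array([[1.0, 1.0],
              [0.0, 0.0]])
A_plus = np.linalg.pinv(A)
print(A_plus)   # [[0.5, 0.0], [0.5, 0.0]], matching V Sigma+ U^T above
```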
E11 Since A is positive definite, we have that

(x, x) = x^T A x ≥ 0

and (x, x) = 0 if and only if x = 0.

Since A is symmetric, we have that

(x, y) = x^T A y = x · Ay = Ay · x = (Ay)^T x = y^T A^T x = y^T A x = (y, x)

For any x, y, z ∈ R^n and s, t ∈ R we have

(x, sy + tz) = x^T A(sy + tz) = x^T A(sy) + x^T A(tz) = s x^T A y + t x^T A z = s(x, y) + t(x, z)

Thus, (x, y) is an inner product on R^n.
E12 Since A is a 4 × 4 symmetric matrix, there exists an orthogonal matrix P that diagonalizes A. Since
the only eigenvalue of A is 3, we must have PT AP = 3I. Then multiply on the left by P and on the
right by PT and we get
A = P(3I)PT = 3PPT = 3I
"
#
"
#
2 3
2
0
E13 Let P =
and D =
. Then, the matrix
−3 2
0 −1
−1
A = PDP
"
#
−1/13 −18/13
=
−18/13
14/13
Has the desired eigenvalues and eigenvectors.
E14 If A is invertible, then rank(A) = n by the Invertible Matrix Theorem. Hence, A has n non-zero
singular values by Theorem 8.5.1. But, since an n × n matrix has exactly n singular values, the matrix
A cannot have 0 as a singular value.
On the other hand, if A is not invertible, then rank(A) < n by the Invertible Matrix Theorem. Hence,
A has less than n non-zero singular values by Theorem 8.5.1. So, A has 0 as a singular value.
E15 The statement is true. We have PT AP = B, so
B2 = (PT AP)(PT AP) = PT A(PPT )AP = PT AIAP = PT A2 P
Thus, A2 and B2 are orthogonally similar.
E16 The statement is true. We have PT AP = B, so
BT = (PT AP)T = PT AT (PT )T = PT AP = B
So, B is also symmetric.
E17 The statement is false. We can only guarantee that eigenvectors corresponding to different eigenvalues are orthogonal. For example, take A = I. Then v1 = [1; 0] and v2 = [2; 0] are both eigenvectors of A but are not orthogonal.
Copyright © 2020 Pearson Canada Inc.
53
E18 This is true by the Principal Axis Theorem.
E19 The statement is false. Q("0) = 0 for any quadratic form.


E20 The statement is false. The matrix A = [1 0 0; 0 -1 0; 0 0 0] is indefinite since it has both a positive and a negative eigenvalue, but it is not invertible.
"
#
−1 −2
E21 The statement is false. The matrix A =
has all negative entries, but it is indefinite since the
−2 −1
eigenvalues are 1 and −3.
E22 The statement is true. Let UΣV T be a singular value decomposition of A. Then,
| det A| = | det(UΣV T )| = | det U| | det Σ| | det V T | = 1(| det Σ|)(1) = σ1 . . . σn
since U and V T are orthogonal matrices.
Chapter 8 Further Problems

F1 (a) We get A^T A = [4 2 2; 2 5 1; 2 1 5], which has eigenvalues λ1 = 8, λ2 = 4, and λ3 = 2 and corresponding unit eigenvectors v1 = [1/√3; 1/√3; 1/√3], v2 = [0; -1/√2; 1/√2], and v3 = [-2/√6; 1/√6; 1/√6]. Thus, B = {v1, v2, v3}.

(b) We have

u1 = (1/σ1) A v1 = (1/√8)[2 1 1; 0 2 0; 0 0 2][1/√3; 1/√3; 1/√3] = [4/√24; 2/√24; 2/√24]

u2 = (1/σ2) A v2 = (1/2)[2 1 1; 0 2 0; 0 0 2][0; -1/√2; 1/√2] = [0; -1/√2; 1/√2]

u3 = (1/σ3) A v3 = (1/√2)[2 1 1; 0 2 0; 0 0 2][-2/√6; 1/√6; 1/√6] = [-2/√12; 2/√12; 2/√12]

Hence, C = {u1, u2, u3}.

(c) We have

L(v1) = A v1 = [4/√3; 2/√3; 2/√3] = √8 u1 + 0u2 + 0u3

L(v2) = A v2 = [0; -2/√2; 2/√2] = 0u1 + 2u2 + 0u3

L(v3) = A v3 = [-2/√6; 2/√6; 2/√6] = 0u1 + 0u2 + √2 u3

Hence,

C[L]B = [√8 0 0; 0 2 0; 0 0 √2]

NOTE: Observe that A is not even diagonalizable.
F2 Given A = QR, with Q orthogonal, let A1 = RQ. Then QT A = R, so A1 = (QT A)Q, and A1 is
orthogonally similar to A.
F3 Since A is symmetric, there is an orthogonal matrix Q such that Q^T AQ = D = diag(λ1, ..., λn), where λ1, ..., λn are the eigenvalues of A. Moreover, since A is positive semidefinite, all of the eigenvalues of A are non-negative. Define C = diag(√λ1, ..., √λn) and let B = QCQ^T. Then,

B^2 = (QCQ^T)(QCQ^T) = QC^2 Q^T = QDQ^T = A

and

B^T = (QCQ^T)^T = QC^T Q^T = QCQ^T = B

Moreover, since B is similar to C, we have that the eigenvalues of B are √λ1, ..., √λn, and hence B is also positive semidefinite.
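F3's construction of the symmetric square root B = QCQ^T can be sketched numerically. The example matrix below is my own choice (any symmetric positive-semidefinite matrix works), not one from the text:

```python
import numpy as np

# A symmetric positive-semidefinite example (eigenvalues 1 and 3).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# F3's recipe: diagonalize, take square roots of the eigenvalues, reassemble.
lam, Q = np.linalg.eigh(A)
B = Q @ np.diag(np.sqrt(lam)) @ Q.T

print(np.allclose(B @ B, A))   # B^2 = A
print(np.allclose(B, B.T))     # B is symmetric
```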
F4 (a) We have (A^T A)^T = A^T (A^T)^T = A^T A, so A^T A is symmetric.
Let λ be any eigenvalue of A^T A with corresponding unit eigenvector x. Since A^T A is symmetric, we have that λ is real. Also,

(Ax, Ax) = (Ax)^T (Ax) = x^T A^T A x = x^T (λx) = λ x^T x = λ

Thus, λ ≥ 0.

(b) If A is invertible, we can repeat our work in (a) to get λ = (Ax, Ax). But, since x is an eigenvector, we have that x ≠ 0, so Ax ≠ 0 since A is invertible. Thus, λ > 0.
F5 (a) By Problem F4(b), A^T A is positive definite. Let U be the symmetric positive square root of A^T A, as defined in Problem F3, so U^2 = A^T A. Let Q = AU^(-1). Then, because U is symmetric,

Q^T Q = (AU^(-1))^T AU^(-1) = (U^T)^(-1) A^T A U^(-1) = U^(-1) U^2 U^(-1) = I

and Q is an orthogonal matrix. Since Q = AU^(-1), A = QU, as required.

(b) V^T = (QUQ^T)^T = QU^T Q^T = QUQ^T = V, so V is symmetric. Also, VQ = (QUQ^T)Q = QU(Q^T Q) = QU = A. Finally, AA^T = VQQ^T V^T = VIV = V^2.

(c) Write A = QU, as in part (a). Then the symmetric positive definite matrix U can be diagonalized, and the diagonal matrix can then be factored so that, with respect to a suitable orthonormal basis, the matrix is orthogonally similar to

[λ1 0 0; 0 1 0; 0 0 1][1 0 0; 0 λ2 0; 0 0 1][1 0 0; 0 1 0; 0 0 λ3]

and we see that it is the composition of three stretches along mutually orthogonal principal axes. Each of these stretches is orientation-preserving, and we have assumed that A is the matrix of an orientation-preserving mapping, so Q = AU^(-1) is also orientation-preserving. Thus, by Problem F2 of Chapter 7, Q describes a rotation.
Chapter 9 Solutions
Section 9.1
A Practice Problems
A1 z̄ = 3 + 5i and |z| = √(3^2 + (-5)^2) = √34.

A2 z̄ = 2 - 7i and |z| = √(2^2 + 7^2) = √53.

A3 z̄ = 4i and |z| = √(0^2 + (-4)^2) = 4.

A4 z̄ = -1 + 2i and |z| = √((-1)^2 + (-2)^2) = √5.

A5 z̄ = 2 and |z| = √(2^2 + 0^2) = 2.

A6 z̄ = -3 - 2i and |z| = √((-3)^2 + 2^2) = √13.

A7 We have |z| = √((-3)^2 + (-3)^2) = √18. Since -3 = √18 cos θ and -3 = √18 sin θ, we get θ = -3π/4 + 2πk, k ∈ Z. Thus, a polar form of z is

z = √18 (cos(-3π/4) + i sin(-3π/4)) = √18 e^(-i3π/4)

A principal argument of z is Arg z = -3π/4.
A8 We have |z| = √((√3)^2 + (-1)^2) = 2. Since √3 = 2 cos θ and -1 = 2 sin θ, we get θ = -π/6 + 2πk, k ∈ Z. Thus, a polar form of z is

z = 2 (cos(-π/6) + i sin(-π/6)) = 2e^(-iπ/6)

A principal argument of z is Arg z = -π/6.
A9 We have |z| = √((-√3)^2 + 1^2) = 2. Since -√3 = 2 cos θ and 1 = 2 sin θ, we get θ = 5π/6 + 2πk, k ∈ Z. Thus, a polar form of z is

z = 2 (cos(5π/6) + i sin(5π/6)) = 2e^(i5π/6)

A principal argument of z is Arg z = 5π/6.
A10 We have |z| = √((-2)^2 + (-2√3)^2) = 4. Since -2 = 4 cos θ and -2√3 = 4 sin θ, we get θ = -2π/3 + 2πk, k ∈ Z. Thus, a polar form of z is

z = 4 (cos(-2π/3) + i sin(-2π/3)) = 4e^(-i2π/3)

A principal argument of z is Arg z = -2π/3.
A11 We have |z| = √((-3)^2 + 0^2) = 3. Since -3 = 3 cos θ and 0 = 3 sin θ, we get θ = π + 2πk, k ∈ Z. Thus, a polar form of z is

z = 3 (cos π + i sin π) = 3e^(iπ)

A principal argument of z is Arg z = π.
A12 We have |z| = √((-2)^2 + 2^2) = √8. Since -2 = √8 cos θ and 2 = √8 sin θ, we get θ = 3π/4 + 2πk, k ∈ Z. Thus, a polar form of z is

z = √8 (cos(3π/4) + i sin(3π/4)) = √8 e^(i3π/4)

A principal argument of z is Arg z = 3π/4.
A13 We have x = r cos θ = 1 cos(-π/3) = 1/2 and y = r sin θ = 1 sin(-π/3) = -√3/2. Thus, in standard form, we have z = 1/2 - (√3/2)i.

A14 We have x = r cos θ = 2 cos(-π/3) = 1 and y = r sin θ = 2 sin(-π/3) = -√3. Thus, in standard form, we have z = 1 - √3 i.

A15 We have x = r cos θ = 2 cos(π/3) = 1 and y = r sin θ = 2 sin(π/3) = √3. Thus, in standard form, we have z = 1 + √3 i.

A16 We have x = r cos θ = 3 cos(3π/4) = -3/√2 and y = r sin θ = 3 sin(3π/4) = 3/√2. Thus, in standard form, we have z = -3/√2 + (3/√2)i.

A17 We have x = r cos θ = 2 cos(-π/6) = √3 and y = r sin θ = 2 sin(-π/6) = -1. Thus, in standard form, we have z = √3 - i.

A18 We have x = r cos θ = cos(5π/6) = -√3/2 and y = r sin θ = sin(5π/6) = 1/2. Thus, in standard form, we have z = -√3/2 + (1/2)i.
A19 (2 + 5i) + (3 + 2i) = 5 + 7i
A20 (2 − 7i) + (−5 + 3i) = −3 − 4i
A21 (−3 + 5i) − (4 + 3i) = −7 + 2i
A22 (−5 − 6i) − (9 − 11i) = −14 + 5i
A23 (1 + 3i)(3 − 2i) = [1(3) − 3(−2)] + [1(−2) + 3(3)]i = 9 + 7i
A24 (−2 − 4i)(3 − i) = [(−2)(3) − (−4)(−1)] + [−2(−1) + (−4)(3)]i = −10 − 10i
A25 (1 − 6i)(−4 + i) = [1(−4) − (−6)(1)] + [1(1) + (−6)(−4)]i = 2 + 25i
A26 (−1 − i)(1 − i) = [(−1)(1) − (−1)(−1)] + [(−1)(−1) + (−1)(1)]i = −2
A27 We have Re(z) = 3 and Im(z) = −6.
A28 We have (2 + 5i)(1 − 3i) = 17 − i, so Re(z) = 17 and Im(z) = −1.
A29 We have 4/(6 - i) · (6 + i)/(6 + i) = (24 + 4i)/(36 + 1) = 24/37 + (4/37)i. Hence, Re(z) = 24/37 and Im(z) = 4/37.

A30 We have -1/i · i/i = -i/(-1) = i. Thus, Re(z) = 0 and Im(z) = 1.

A31 We have 1/(2 + 3i) · (2 - 3i)/(2 - 3i) = (2 - 3i)/(4 + 9) = 2/13 - (3/13)i.

A32 We have (2 - 5i)/(3 + 2i) · (3 - 2i)/(3 - 2i) = (6 - 10 - 4i - 15i)/(9 + 4) = -4/13 - (19/13)i.

A33 We have (1 + 6i)/(4 - i) · (4 + i)/(4 + i) = (4 - 6 + i + 24i)/(16 + 1) = -2/17 + (25/17)i.

A34 We have |z1| = √(1^2 + 1^2) = √2 and any argument θ of z1 satisfies

1 = √2 cos θ and 1 = √2 sin θ

Hence, cos θ = 1/√2 and sin θ = 1/√2. Thus, θ = π/4 + 2πk, k ∈ Z. So, a polar form of z1 is

z1 = √2 (cos(π/4) + i sin(π/4))

We have |z2| = √(1^2 + (√3)^2) = √4 = 2 and any argument θ of z2 satisfies

1 = 2 cos θ and √3 = 2 sin θ

Hence, cos θ = 1/2 and sin θ = √3/2. Thus, θ = π/3 + 2πk, k ∈ Z. So, a polar form of z2 is

z2 = 2 (cos(π/3) + i sin(π/3))

Thus,

z1 z2 = 2√2 (cos(π/4 + π/3) + i sin(π/4 + π/3)) = 2√2 (cos(7π/12) + i sin(7π/12))

z1/z2 = (√2/2)(cos(π/4 - π/3) + i sin(π/4 - π/3)) = (√2/2)(cos(-π/12) + i sin(-π/12))
A35 We have |z1| = √((-√3)^2 + (-1)^2) = 2 and any argument θ of z1 satisfies

-√3 = 2 cos θ and -1 = 2 sin θ

Hence, cos θ = -√3/2 and sin θ = -1/2. Thus, θ = 7π/6 + 2πk, k ∈ Z. So, a polar form of z1 is

z1 = 2 (cos(7π/6) + i sin(7π/6))

We have |z2| = √(1^2 + (-1)^2) = √2 and any argument θ of z2 satisfies

1 = √2 cos θ and -1 = √2 sin θ

Hence, cos θ = 1/√2 and sin θ = -1/√2. Thus, θ = -π/4 + 2πk, k ∈ Z. So, a polar form of z2 is

z2 = √2 (cos(-π/4) + i sin(-π/4))

Thus,

z1 z2 = 2√2 (cos(7π/6 - π/4) + i sin(7π/6 - π/4)) = 2√2 (cos(11π/12) + i sin(11π/12))

z1/z2 = (2/√2)(cos(7π/6 + π/4) + i sin(7π/6 + π/4)) = √2 (cos(17π/12) + i sin(17π/12))
A36 We have |z1| = √((-1)^2 + (√3)^2) = 2 and any argument θ of z1 satisfies

-1 = 2 cos θ and √3 = 2 sin θ

Hence, cos θ = -1/2 and sin θ = √3/2. Thus, θ = 2π/3 + 2πk, k ∈ Z. So, a polar form of z1 is

z1 = 2 (cos(2π/3) + i sin(2π/3))

We have |z2| = √((√3)^2 + (-1)^2) = 2 and any argument θ of z2 satisfies

√3 = 2 cos θ and -1 = 2 sin θ

Hence, cos θ = √3/2 and sin θ = -1/2. Thus, θ = -π/6 + 2πk, k ∈ Z. So, a polar form of z2 is

z2 = 2 (cos(-π/6) + i sin(-π/6))

Thus,

z1 z2 = 2(2)(cos(2π/3 - π/6) + i sin(2π/3 - π/6)) = 4 (cos(π/2) + i sin(π/2)) = 4i

z1/z2 = (2/2)(cos(2π/3 + π/6) + i sin(2π/3 + π/6)) = cos(5π/6) + i sin(5π/6) = -√3/2 + (1/2)i
A37 We have |z1| = √((-3)^2 + 3^2) = √18 and any argument θ of z1 satisfies

-3 = √18 cos θ and 3 = √18 sin θ

Hence, cos θ = -1/√2 and sin θ = 1/√2. Thus, θ = 3π/4 + 2πk, k ∈ Z. So, a polar form of z1 is

z1 = √18 (cos(3π/4) + i sin(3π/4))

We have |z2| = √((-√3)^2 + 1^2) = 2 and any argument θ of z2 satisfies

-√3 = 2 cos θ and 1 = 2 sin θ

Hence, cos θ = -√3/2 and sin θ = 1/2. Thus, θ = 5π/6 + 2πk, k ∈ Z. So, a polar form of z2 is

z2 = 2 (cos(5π/6) + i sin(5π/6))

Thus,

z1 z2 = 2√18 (cos(3π/4 + 5π/6) + i sin(3π/4 + 5π/6)) = 6√2 (cos(19π/12) + i sin(19π/12))

z1/z2 = (√18/2)(cos(3π/4 - 5π/6) + i sin(3π/4 - 5π/6)) = (√18/2)(cos(-π/12) + i sin(-π/12))
A38 We have 1 + i = √2 (cos(π/4) + i sin(π/4)), so

(1 + i)^4 = [√2 (cos(π/4) + i sin(π/4))]^4 = (√2)^4 [cos(π/4) + i sin(π/4)]^4 = 4 (cos(4π/4) + i sin(4π/4)) = 4(-1 + 0i) = -4
A39 We have 3 - 3i = 3√2 (cos(-π/4) + i sin(-π/4)), so

(3 - 3i)^3 = [3√2 (cos(-π/4) + i sin(-π/4))]^3 = (3√2)^3 [cos(-π/4) + i sin(-π/4)]^3 = 54√2 (cos(-3π/4) + i sin(-3π/4)) = 54√2 (-1/√2 - (1/√2)i) = -54 - 54i
A40 We have -1 - √3 i = 2 (cos(-2π/3) + i sin(-2π/3)), so

(-1 - √3 i)^4 = [2 (cos(-2π/3) + i sin(-2π/3))]^4 = 2^4 [cos(-2π/3) + i sin(-2π/3)]^4 = 16 (cos(-8π/3) + i sin(-8π/3)) = 16 (-1/2 - (√3/2)i) = -8 - 8√3 i
A41 We have -2√3 + 2i = 4 (cos(5π/6) + i sin(5π/6)), so

(-2√3 + 2i)^5 = [4 (cos(5π/6) + i sin(5π/6))]^5 = 4^5 [cos(5π/6) + i sin(5π/6)]^5 = 1024 (cos(25π/6) + i sin(25π/6)) = 1024 (√3/2 + (1/2)i) = 512(√3 + i)
A42 We have -1 = 1e^(i(π+2πk)). Thus, the fifth roots are

(1)^(1/5) e^(i(π+2πk)/5), k = 0, 1, 2, 3, 4

Thus, the five fifth roots are

w0 = cos(π/5) + i sin(π/5)
w1 = cos(3π/5) + i sin(3π/5)
w2 = cos π + i sin π
w3 = cos(7π/5) + i sin(7π/5)
w4 = cos(9π/5) + i sin(9π/5)
A43 We have -16i = 16e^(i(-π/2+2πk)). Thus, the fourth roots are

(16)^(1/4) e^(i(-π/2+2πk)/4), k = 0, 1, 2, 3

Thus, the four fourth roots are

w0 = 2 cos(-π/8) + 2i sin(-π/8)
w1 = 2 cos(3π/8) + 2i sin(3π/8)
w2 = 2 cos(7π/8) + 2i sin(7π/8)
w3 = 2 cos(11π/8) + 2i sin(11π/8)
w0 = 2 cos
√
5π
A44 We have − 3 − i = 2ei(− 6 +2πk) . Thus, the third roots are
5π
(2)1/3 ei(− 6 +2πk)/3 ,
k = 0, 1, 2
Thus, the three third roots are
−5π
−5π
+ i21/3 sin
18
18
7π
7π
1/3
1/3
w1 = 2 cos
+ i2 sin
18
18
19π
19π
1/3
1/3
w2 = 2 cos
+ i2 sin
18
18
w0 = 21/3 cos
A45 We have 1 + 4i = √17 e^(i(arctan(4)+2πk)). Let θ = arctan(4). Thus, the third roots are

(√17)^(1/3) e^(i(θ+2πk)/3), k = 0, 1, 2

Thus, the three third roots are

w0 = 17^(1/6) cos(θ/3) + 17^(1/6) i sin(θ/3)
w1 = 17^(1/6) cos((θ + 2π)/3) + 17^(1/6) i sin((θ + 2π)/3)
w2 = 17^(1/6) cos((θ + 4π)/3) + 17^(1/6) i sin((θ + 4π)/3)
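The n-th-root recipe used in A42–A45 is easy to check numerically: cube each root of A45 and confirm it returns 1 + 4i. An illustrative sketch (not part of the original text):

```python
import numpy as np

# A45: the three cube roots of 1 + 4i from the polar form r e^{i(theta + 2 pi k)}.
z = 1 + 4j
r, theta = abs(z), np.angle(z)          # angle(1 + 4j) = arctan(4)
roots = [r**(1/3) * np.exp(1j * (theta + 2 * np.pi * k) / 3) for k in range(3)]

for w in roots:
    print(np.round(w**3, 10))           # each cube gives back 1 + 4i
```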
B Homework Problems
B1 z̄ = 3 - 4i, |z| = 5
B2 z̄ = -2 + 3i, |z| = √13
B3 z̄ = 2 + √3 i, |z| = √7
B4 z̄ = -(1/2)i, |z| = 1/2
B5 z̄ = 1 + 3i, |z| = √10
B6 z̄ = -3 - i, |z| = √10
B7 z = 3√2 e^(-iπ/4), Arg z = -π/4
B8 z = 2e^(-iπ/2), Arg z = -π/2
B9 z = √3 e^(iπ/2), Arg z = π/2
B10 z = 3e^(iπ/6), Arg z = π/6
B11 z = 6e^(-i3π/4), Arg z = -3π/4
B12 z = (1/2)e^(-i2π/3), Arg z = -2π/3
B13 3i
B14 -2
B15 -2 + 2√3 i
B16 -√2
B17 1/√2 - (1/√2)i
B18 3√3 + 3i
B19 -3/2 - (3√3/2)i
B20 -1 - i
B21 -1/2 + (√3/2)i
B22 -1
B23 -5i
B24 1 - 4i
B25 -1 + 2i
B26 3 + i
B27 2 - i
B28 -2i
B29 2
B30 10 - 10i
B31 6 + 8i
B32 5i
B33 -10 + 11i
B34 Re(z) = -2, Im(z) = 3
B35 -6
B36 Re(z) = 2, Im(z) = 0
B37 Re(z) = 3, Im(z) = 4
B38 Re(z) = 2/13, Im(z) = 3/13
B39 Re(z) = 6, Im(z) = -2
B40 Re(z) = 1/2
B41 Re(z) = -1/2, Im(z) = 1/2
B42 Re(z) = √2, Im(z) = √2
B43 Re(z) = 0, Im(z) = -5
B44 1/13 - (5/13)i
B45 3/25 + (4/25)i
B46 -1 - i
B47 -1 + 2i
B48 1/25 - (18/25)i
B49 -1/5 - (13/5)i
B50 z1 z2 = 2√2 (cos(-7π/12) + i sin(-7π/12)), z1/z2 = √2 (cos(11π/12) + i sin(11π/12))
B51 z1 z2 = 8√2 (cos(-5π/12) + i sin(-5π/12)), z1/z2 = (1/√2)(cos(11π/12) + i sin(11π/12))
B52 z1 z2 = 4√2 (cos(-π/12) + i sin(-π/12)), z1/z2 = 2√2 (cos(-7π/12) + i sin(-7π/12))
B53 z1 z2 = 12 (cos(13π/12) + i sin(13π/12)), z1/z2 = 3 (cos(5π/12) + i sin(5π/12))
B54 64i
B55 -4 + 4i
B56 -64
B57 -8 + 8√3 i
B58 1/2 - (√3/2)i
B59 The roots are e^(i(π+2πk)/5), 0 ≤ k ≤ 4
B60 The roots are 2e^(i(3π/2+2πk)/4), 0 ≤ k ≤ 3
B61 The roots are e^(i(2πk)/5), 0 ≤ k ≤ 4
B62 The roots are (2)^(1/8) e^(i(π/4+2πk)/4), 0 ≤ k ≤ 3
B63 The roots are (2)^(1/3) e^(i(-5π/6+2πk)/3), 0 ≤ k ≤ 2
B64 The roots are (17)^(1/6) e^(i(θ+2πk)/3), 0 ≤ k ≤ 2, where θ = arctan(4).
C Conceptual Problems
C1 Let z1 = x1 + iy1.

(3) If z1 = yi, y ≠ 0, then z̄1 = -yi = -z1. If z̄1 = -z1, then x1 - iy1 = -x1 - iy1. Hence, x1 = 0 and so z1 = yi. We have y ≠ 0 since z1 ≠ 0. Thus, z1 is purely imaginary.

(5) Let z1 = x + yi and z2 = a + bi. Then

conj(z1 z2) = conj((x + yi)(a + bi)) = conj((xa - yb) + (xb + ay)i) = (xa - yb) - (xb + ay)i = (x - yi)(a - bi) = z̄1 z̄2

(6) We use induction on n. From (5), conj(z1^2) = z̄1 z̄1 = (z̄1)^2. Assume that conj(z1^k) = (z̄1)^k. Then, conj(z1^(k+1)) = conj(z1 z1^k) = z̄1 (z̄1)^k = (z̄1)^(k+1), as required.

(7) z1 + z̄1 = x + yi + x - yi = 2x = 2 Re(z1)

(8) z1 - z̄1 = x + yi - (x - yi) = 2yi = 2i Im(z1)

(9) z1 z̄1 = (x + yi)(x - yi) = x^2 + y^2
C2 |z̄| = r = |z|. We have z̄ = r(cos θ - i sin θ) = r(cos(-θ) + i sin(-θ)). Thus, an argument of z̄ is -θ.
C3 (a) conj(e^(iθ)) = conj(cos θ + i sin θ) = cos θ - i sin θ = cos(-θ) + i sin(-θ) = e^(-iθ).

(b) e^(iθ) + e^(-iθ) = cos θ + i sin θ + cos(-θ) + i sin(-θ) = cos θ + i sin θ + cos θ - i sin θ = 2 cos θ.

(c) e^(iθ) - e^(-iθ) = cos θ + i sin θ - (cos(-θ) + i sin(-θ)) = cos θ + i sin θ - cos θ + i sin θ = 2i sin θ.

C4 We have

z1 z2 = r1(cos θ1 + i sin θ1) r2(cos θ2 + i sin θ2) = r1 r2 [cos θ1 cos θ2 - sin θ1 sin θ2 + i(cos θ1 sin θ2 + cos θ2 sin θ1)] = r1 r2 (cos(θ1 + θ2) + i sin(θ1 + θ2))

z1/z2 = [r1(cos θ1 + i sin θ1)] / [r2(cos θ2 + i sin θ2)] = [r1(cos θ1 + i sin θ1)(cos θ2 - i sin θ2)] / [r2(cos θ2 + i sin θ2)(cos θ2 - i sin θ2)] = (r1/r2)[(cos θ1 cos θ2 + sin θ1 sin θ2) + i(-cos θ1 sin θ2 + sin θ1 cos θ2)] = (r1/r2)[cos(θ1 - θ2) + i sin(θ1 - θ2)]
C5 Let z1 = a + bi and z2 = c + di. If z1 + z2 is a negative real number, then b = −d and a + c < 0. Also,
we have z1 z2 = (ac − bd) + (ad + bc)i is a negative real number, so ad + bc = 0 and ac − bd < 0.
Combining these we get
0 = ad + bc = ad − dc = d(a − c)
so either d = 0 or a = c. If a = c, then ac − bd < 0 implies that a2 + b2 < 0 which is impossible.
Hence, we must have d = 0. But then b = −d = 0 so z1 and z2 are real numbers.
C6 Let z = a + bi. Then a^2 + b^2 = 1. Hence,

1/(1 - z) = 1/(1 - a - bi) = (1 - a + bi)/[(1 - a - bi)(1 - a + bi)] = (1 - a + bi)/[(1 - a)^2 + b^2] = (1 - a + bi)/(1 - 2a + a^2 + b^2) = (1 - a + bi)/(2 - 2a) = (1 - a)/(2 - 2a) + (b/(2 - 2a))i = 1/2 + (b/(2 - 2a))i

Hence, Re(1/(1 - z)) = 1/2.
C7 By de Moivre's Formula we have

cos(3θ) + i sin(3θ) = (cos θ + i sin θ)^3 = cos^3 θ + 3i cos^2 θ sin θ - 3 cos θ sin^2 θ - i sin^3 θ = cos^3 θ - 3 cos θ sin^2 θ + i(3 cos^2 θ sin θ - sin^3 θ)

Taking the real part of each side gives

cos 3θ = cos^3 θ - 3 cos θ sin^2 θ

Similarly, taking the imaginary part of each side gives

sin 3θ = 3 cos^2 θ sin θ - sin^3 θ
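C7's triple-angle identities can be spot-checked numerically at an arbitrary angle. An illustrative sketch (the angle 0.7 is an arbitrary choice, not from the text):

```python
import numpy as np

theta = 0.7  # any generic angle works

# cos(3 theta) = cos^3 - 3 cos sin^2, sin(3 theta) = 3 cos^2 sin - sin^3
lhs_cos = np.cos(3 * theta)
rhs_cos = np.cos(theta)**3 - 3 * np.cos(theta) * np.sin(theta)**2
lhs_sin = np.sin(3 * theta)
rhs_sin = 3 * np.cos(theta)**2 * np.sin(theta) - np.sin(theta)**3

print(np.isclose(lhs_cos, rhs_cos), np.isclose(lhs_sin, rhs_sin))
```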
C8 Let z = a + bi. Then,

Re(z) = a ≤ |a| = √(a^2) ≤ √(a^2 + b^2) = |z|
C9 (a) We have

|z1 + z2|^2 = (z1 + z2) conj(z1 + z2) = (z1 + z2)(z̄1 + z̄2) = z1 z̄1 + z1 z̄2 + z2 z̄1 + z2 z̄2

(b) Taking z = z1 z̄2 in Theorem 9.1.1 (7) gives

z1 z̄2 + conj(z1 z̄2) = 2 Re(z1 z̄2)

By C8, we also have that Re(z) ≤ |z|. Hence,

2 Re(z1 z̄2) ≤ 2|z1 z̄2| = 2|z1||z̄2| = 2|z1||z2|

(c) From (a) and (b) we get that

|z1 + z2|^2 = z1 z̄1 + z1 z̄2 + z2 z̄1 + z2 z̄2 ≤ z1 z̄1 + 2|z1||z2| + z2 z̄2 = |z1|^2 + 2|z1||z2| + |z2|^2 = (|z1| + |z2|)^2
Section 9.2
A Practice Problems
A1 Row reducing gives

[1 -i | 3-i; 2 1-3i | 8-2i] ∼ [1 -i | 3-i; 0 1-i | 2] ∼ [1 -i | 3-i; 0 1 | 1+i] ∼ [1 0 | 2; 0 1 | 1+i]

The system is consistent with unique solution z = [2; 1+i].
A2 Row reducing gives

[1-2i i | 1+2i; -1-3i 1+i | 3-i] ∼ [1 -2/5+(1/5)i | -3/5+(4/5)i; -1-3i 1+i | 3-i] ∼ [1 -2/5+(1/5)i | -3/5+(4/5)i; 0 0 | -2i]

Thus, the system is inconsistent.
A3 Row reducing gives

[2 -2i | 8-8i; 1-2i -2 | -7-10i] ∼ [1 -i | 4-4i; 1-2i -2 | -7-10i] ∼ [1 -i | 4-4i; 0 i | -3+2i] ∼ [1 -i | 4-4i; 0 1 | 2+3i] ∼ [1 0 | 1-2i; 0 1 | 2+3i]

The system is consistent with unique solution z = [1-2i; 2+3i].
A4 Row reducing gives

[1+i 2 | 2+2i; 1-i -2i | 2-2i] ∼ [1 1-i | 2; 1-i -2i | 2-2i] ∼ [1 1-i | 2; 0 0 | 0]

Hence, the system is consistent with one parameter. Let z2 = t ∈ C. Then, the general solution is z = [2; 0] + t[-1+i; 1], t ∈ C.
A5 Row reducing gives

[1 3 2-3i | 9+i; 2i 7i 7+4i | -2+21i] ∼ [1 3 2-3i | 9+i; 0 i 1 | 3i] ∼ [1 3 2-3i | 9+i; 0 1 -i | 3] ∼ [1 0 2 | i; 0 1 -i | 3]

Hence, the system is consistent with one parameter. Let z3 = t ∈ C. Then, the general solution is z = [i; 3; 0] + t[-2; i; 1], t ∈ C.
A6 Row reducing gives

[1 i -2 0 | -2i; i 0 -i 1+i | 3; -2i 0 2i -2-2i | -6] ∼ [1 i -2 0 | -2i; 0 1 i 1+i | 1; 0 -2 -2i -2-2i | -2] ∼ [1 0 -1 1-i | -3i; 0 1 i 1+i | 1; 0 0 0 0 | 0]

Hence, the system is consistent with two parameters. Let z3 = t ∈ C and z4 = s ∈ C. Then, the general solution is

z = [-3i; 1; 0; 0] + t[1; -i; 1; 0] + s[-1+i; -1-i; 0; 1], t, s ∈ C.
A7 Row reducing gives

[i -1 -1+i | -2+i; 1 1+i 1 | 3+2i; -i 1+i 3-i | 2+2i] ∼ [1 i 1+i | 1+2i; 0 1 -i | 2; 0 i 2 | 3i] ∼ [1 0 i | 1; 0 1 -i | 2; 0 0 1 | i] ∼ [1 0 0 | 2; 0 1 0 | 1; 0 0 1 | i]

Hence, the system is consistent with unique solution z = [2; 1; i].
A8 Row reducing gives

[1 -3i -4 -7i | -2i; 0 3i 3 9i | 3i; 2i 8 -10i 20 | 6-i] ∼ [1 -3i -4 -7i | -2i; 0 1 -i 3 | 1; 0 2 -2i 6 | 2-i] ∼ [1 -3i -4 -7i | -2i; 0 1 -i 3 | 1; 0 0 0 0 | -i]

Hence, the system is inconsistent.
A9 Row reducing gives

[1 i 1+i | 1-i; -2 1-2i -2 | 2i; 2i -2 -2-3i | -1+3i] ∼ [1 i 1+i | 1-i; 0 1 2i | 2; 0 0 -5i | -3+i] ∼ [1 i 1+i | 1-i; 0 1 2i | 2; 0 0 1 | -1/5-(3/5)i]

The system is consistent with a unique solution. By back-substitution,

z3 = -1/5 - (3/5)i

z2 = 2 - 2i(-1/5 - (3/5)i) = 4/5 + (2/5)i

z1 = 1 - i - i(4/5 + (2/5)i) - (1+i)(-1/5 - (3/5)i) = 1 - i

Hence, the general solution is z = [1-i; 4/5+(2/5)i; -1/5-(3/5)i].
A10 Row reducing gives

[1 1+i 2 1 | 1-i; 2 2+i 5 2+i | 4-i; i -1+i 1+2i 2i | 1] ∼ [1 1+i 2 1 | 1-i; 0 -i 1 i | 2+i; 0 0 1 i | -i] ∼ [1 1+i 2 1 | 1-i; 0 1 i -1 | -1+2i; 0 0 1 i | -i] ∼ [1 0 3-i 2+i | 4-2i; 0 1 i -1 | -1+2i; 0 0 1 i | -i] ∼ [1 0 0 1-2i | 5+i; 0 1 0 0 | -2+2i; 0 0 1 i | -i]

Hence, the system is consistent with one parameter. Let z4 = t ∈ C. Then, the general solution is

z = [5+i; -2+2i; -i; 0] + t[-1+2i; 0; -i; 1], t ∈ C.
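The hand row reductions in this section can be cross-checked with numpy, which handles complex coefficient matrices directly. An illustrative sketch for the square system of A1 (not part of the original text):

```python
import numpy as np

# A1: [1 -i; 2 1-3i] z = [3-i; 8-2i]
A = np.array([[1, -1j],
              [2, 1 - 3j]])
b = np.array([3 - 1j, 8 - 2j])

z = np.linalg.solve(A, b)
print(np.round(z, 10))   # [2, 1+1j], matching the unique solution above
```

`solve` only applies to square invertible systems; the inconsistent and underdetermined problems here (A2, A4, etc.) would instead need `numpy.linalg.lstsq` or a hand reduction.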
B Homework Problems
B1 z = [1-5i; 1-2i]
B2 z = [i; -2i]
B3 z = [1; 0; 1] + s[2i; 1; 0], s ∈ C
B4 z = [2; 2; 0] + s[2; -1; 1], s ∈ C
B5 The system is inconsistent.
B6 z = [i; 0; 3; 0] + s[-i; 0; 0; 0] + t[0; 1; 1-i; 1], s, t ∈ C
B7 The system is inconsistent.
B8 z = [1+i; 2; i]
B9 z = [1+i; 0; i] + s[i; 1; 0], s ∈ C
Section 9.3
A Practice Problems
A1 [-2+i; 1] - [3+4i; 1-i] = [-5-3i; i]

A2 2i[2+5i; 3-2i] = [-10+4i; 4+6i]

A3 [2-i; 3+i; 2-5i] + [3-2i; 4+7i; -3-4i] = [5-3i; 7+8i; -1-9i]

A4 (-1-2i)[2-i; 3+i; 2-5i] = [-4-3i; -1-7i; -12+i]
A5 (a) We have

[L] = [L(1, 0) L(0, 1)] = [1+2i 3+i; 1 1-i]

(b) We have

L(2+3i, 1-4i) = [L][2+3i; 1-4i] = [1+2i 3+i; 1 1-i][2+3i; 1-4i] = [3-4i; -1-2i]

(c) Every vector in the range of L can be written as a linear combination of the columns of [L]. Hence, every vector z in Range(L) has the form

z = t1[1+2i; 1] + t2[3+i; 1-i] = t1[1+2i; 1] + (1-i)t2[1+2i; 1] = (t1 + (1-i)t2)[1+2i; 1]

Thus, {[1+2i; 1]} spans the range of L and is clearly linearly independent, so it forms a basis for Range(L).

Let z be any vector in Null(L). Then we have

[0; 0] = [L]z = z1[1+2i; 1] + z2[3+i; 1-i] = [(1+2i)z1 + (3+i)z2; z1 + (1-i)z2]

Row reducing the coefficient matrix corresponding to the homogeneous system gives

[1+2i 3+i; 1 1-i] ∼ [1 1-i; 0 0]

Hence, a basis for Null(L) is {[-1+i; 1]}.
A6 We have

⟨u, v⟩ = (2-3i)(-2i) + (-1+2i)(2-5i) = 2 + 5i

⟨v, u⟩ = conj(⟨u, v⟩) = 2 - 5i

‖u‖ = √⟨u, u⟩ = √((2-3i)(2+3i) + (-1+2i)(-1-2i)) = √18

‖v‖ = √⟨v, v⟩ = √((2i)(-2i) + (2+5i)(2-5i)) = √33

A7 We have

⟨u, v⟩ = (1+i)(1+i) + 3(1-i) = 3 - i

⟨v, u⟩ = conj(⟨u, v⟩) = 3 + i

‖u‖ = √⟨u, u⟩ = √((1-i)(1+i) + 3(3)) = √11

‖v‖ = √⟨v, v⟩ = √((1+i)(1-i) + (1-i)(1+i)) = 2

A8 We have

⟨u, v⟩ = (-1-4i)(3+i) + (2+i)(1+3i) = -6i

⟨v, u⟩ = conj(⟨u, v⟩) = 6i

‖u‖ = √⟨u, u⟩ = √((-1-4i)(-1+4i) + (2+i)(2-i)) = √22

‖v‖ = √⟨v, v⟩ = √((3-i)(3+i) + (1-3i)(1+3i)) = √20

A9 We have

⟨u, v⟩ = (1-2i)(-i) + (-1+3i)(-2i) = 4 + i

⟨v, u⟩ = conj(⟨u, v⟩) = 4 - i

‖u‖ = √⟨u, u⟩ = √((1-2i)(1+2i) + (-1+3i)(-1-3i)) = √15

‖v‖ = √⟨v, v⟩ = √(i(-i) + 2i(-2i)) = √5
A10 We have ZW = [[−1 + 2i, −1 − i], [4 + 2i, −1 + i]], hence (ZW)* = [[−1 − 2i, 4 − 2i], [−1 + i, −1 − i]].
We also have W*Z* = [[0, 2 − i], [i, −i]] [[1, 1 + i], [−i, 2]] = [[−1 − 2i, 4 − 2i], [−1 + i, −1 − i]].

A11 We have ZW = [[−1 + 2i, −1], [2 + i, 2]], hence (ZW)* = [[−1 − 2i, 2 − i], [−1, 2]].
We also have W*Z* = [[−2i, 1], [−i, i]] [[1 − i, i], [1, −i]] = [[−1 − 2i, 2 − i], [−1, 2]].
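As a quick numerical sanity check of A10 (a sketch, not from the text), the identity (ZW)* = W*Z* can be verified directly; Z and W below are read off from the adjoints printed in the solution.

```python
import numpy as np

adj = lambda M: M.conj().T  # conjugate transpose (the * operation)

# Matrices from A10, recovered from the printed Z* and W*
Z = np.array([[1, 1j], [1 - 1j, 2]])
W = np.array([[0, -1j], [2 + 1j, 1j]])

print(adj(Z @ W))       # [[-1-2i, 4-2i], [-1+i, -1-i]]
print(adj(W) @ adj(Z))  # the same matrix
```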
A12 The columns of A are clearly orthogonal to each other and are unit vectors. Thus, the columns of A form an orthonormal basis for C², so A is unitary.

A13 We have

A*A = (1/√2)[[1, −1], [−i, −i]] (1/√2)[[1, i], [−1, i]] = (1/2)[[2, 0], [0, 2]] = I

Thus, A is unitary.
A14 We have

⟨[1; 1 + i], [1 + i; 1]⟩ = 1(1 + i) + (1 − i)(1) = 2

Hence, the columns of A are not orthogonal, so A is not unitary.

A15 We have

A*A = [[(−1 − i)/√3, 1/√3], [(1 + i)/√6, 2/√6]] [[(−1 + i)/√3, (1 − i)/√6], [1/√3, 2/√6]] = I

Therefore, A is unitary.
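A unitarity check like A13-A15 is a one-liner numerically; this sketch uses the A15 matrix as reconstructed above.

```python
import numpy as np

s3, s6 = np.sqrt(3), np.sqrt(6)
A = np.array([[(-1 + 1j) / s3, (1 - 1j) / s6],
              [1 / s3,         2 / s6]])

# A is unitary iff A* A = I
print(np.allclose(A.conj().T @ A, np.eye(2)))  # True
```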
A16 First Step: Let w1 = z1 and S1 = Span{w1}.
Second Step: Determine projS1(z2).

projS1(z2) = z2 − (⟨w1, z2⟩/‖w1‖²) w1 = [1; −2i; −1] − (−2/3)[1; i; 1] = [5/3; −4i/3; −1/3]

So, we let w2 = [5; −4i; −1] and S2 = Span{w1, w2}.
Third Step: Determine projS2(z3).

w3 = z3 − (⟨w1, z3⟩/‖w1‖²) w1 − (⟨w2, z3⟩/‖w2‖²) w2 = [0; i; 3] − (4/3)[1; i; 1] − (−7/42)[5; −4i; −1] = [−1/2; −i; 3/2]

Then, {w1, w2, w3} is an orthogonal basis for S.
A17 First Step: Let w1 = z1 and S1 = Span{w1}.
Second Step: Determine projS1(z2).

projS1(z2) = z2 − (⟨w1, z2⟩/‖w1‖²) w1 = [1 + i; 2i; 3i] − (5i/5)[1 + i; 1 − i; 1] = [2; −1 + i; 2i]

So, we let w2 = [2; −1 + i; 2i] and S2 = Span{w1, w2}.
Third Step: Determine projS2(z3).

w3 = z3 − (⟨w1, z3⟩/‖w1‖²) w1 − (⟨w2, z3⟩/‖w2‖²) w2 = [i; 2i; 2] − ((1 + 3i)/5)[1 + i; 1 − i; 1] − ((2 − 4i)/10)[2; −1 + i; 2i] = [i; −1 + i; 1 − i]

Then, {w1, w2, w3} is an orthogonal basis for S.
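The three-step procedure in A16-A17 is the complex Gram-Schmidt algorithm. Below is a minimal sketch of it (the helper name `gram_schmidt` is ours), run on A16's vectors as reconstructed above.

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthogonalize with the complex inner product <u,v> = conj(u).v."""
    basis = []
    for z in vectors:
        w = np.asarray(z, dtype=complex)
        for b in basis:
            # subtract the component of z along each earlier basis vector
            w = w - (np.vdot(b, z) / np.vdot(b, b)) * b
        basis.append(w)
    return basis

z1, z2, z3 = [1, 1j, 1], [1, -2j, -1], [0, 1j, 3]
w1, w2, w3 = gram_schmidt([z1, z2, z3])
print(w2)  # [5/3, -4i/3, -1/3]: a scalar multiple of A16's w2 = (5, -4i, -1)
print(w3)  # [-1/2, -i, 3/2], matching A16
```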
 
 
1
 i 
 
 
A18 Let u = 0 and v = 1. Observe that
 
 
i
1
&u, v' = 1(i) + 0(1) + (−i)(1) = 0
Hence, {u, v} is an orthogonal basis for the subspace. Thus,
&v, z'
&u, z'
u+
v
2
(u(
(v(2
 
 
1
i
2 − 2i   6 + i  


1
=
0 +
2  
3  
i
1
2

 3 + i 


= 2 + 13 i

4 
3 + 3i
projS (z) =
A19 Let w1 = [1; i; 1 + i] and w2 = [−2i; −1 − i; 2 − i]. Observe that

⟨w1, w2⟩ = 1(−2i) + (−i)(−1 − i) + (1 − i)(2 − i) = −4i

Therefore, we must apply the Gram-Schmidt Procedure to make an orthogonal basis for the subspace. We take v1 = [1; i; 1 + i] and

v2 = w2 − (⟨v1, w2⟩/‖v1‖²) v1 = w2 − (−4i/4) v1 = [−i; −2 − i; 1]

Then, {v1, v2} is an orthogonal basis for the subspace. Hence,

projS(z) = (⟨v1, z⟩/‖v1‖²) v1 + (⟨v2, z⟩/‖v2‖²) v2 = ((−4 − i)/4)[1; i; 1 + i] + (−7/7)[−i; −2 − i; 1] = [−1 + (3/4)i; 9/4; −7/4 − (5/4)i]
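The projection formula used in A18-A19 can also be checked numerically. A sketch (using the orthogonal pair from A18; the textbook's z is not reproduced here, so we instead verify that proj_S fixes a vector already in S):

```python
import numpy as np

def proj(basis, z):
    """proj_S(z) = sum of <w,z>/||w||^2 * w over an orthogonal basis of S."""
    return sum((np.vdot(w, z) / np.vdot(w, w)) * w for w in basis)

u = np.array([1, 0, 1j])
v = np.array([1j, 1, 1])
print(np.vdot(u, v))  # 0: {u, v} really is orthogonal, as A18 notes

# The answer printed in A18 lies in S, so projecting it changes nothing:
z = np.array([2/3 + 1j, 2 + 1j/3, 3 + 4j/3])
print(np.allclose(proj([u, v], z), z))  # True
```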
A20 ⟨w, z⟩ = conj(⟨z, w⟩) = conj(1 + 2i) = 1 − 2i
A21 ⟨(1 + i)z, w⟩ = (1 − i)⟨z, w⟩ = (1 − i)(1 + 2i) = 3 + i
A22 ⟨w, (1 + 2i)z⟩ = (1 + 2i)⟨w, z⟩ = (1 + 2i)(1 − 2i) = 5
A23 ⟨iz, −2iw⟩ = (−i)(−2i)⟨z, w⟩ = −2(1 + 2i) = −2 − 4i
A24 (a) We have

1 = det I = det(U*U) = det(U*) det U = conj(det U) det U = |det U|²

Therefore, |det U| = 1.
(b) The matrix U = [[i, 0], [0, 1]] is unitary and det U = i.
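A24's two claims are easy to see numerically (a sketch):

```python
import numpy as np

U = np.diag([1j, 1])                           # the unitary matrix from A24(b)
print(np.allclose(U.conj().T @ U, np.eye(2)))  # True: U is unitary
d = np.linalg.det(U)
print(d, abs(d))                               # det U = i, but |det U| = 1
```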
B Homework Problems
B1 [3 + i; 4 − 2i]
B2 [−2 + 2i; −4 + 2i]
B3 [3; −1 − 2i; 2]
B4 [−4 − 3i; 2 − i; −6 + 3i]
B5 [2 + 3i; −3 + i; 8 − i]
B6 [5 − i; 2 + 10i; 7 + 9i]
B7 (a) [L] = [[1 − i, 1 + 3i], [2i, 2 − i]]
(b) [−6 + 10i; 5 + 4i]
(c) A basis for Range(L) is {[1 − i; 2i], [1 + 3i; 2 − i]}. A basis for Null(L) is the empty set.
B8 (a) [L] = [[1 − i, −i], [−2i, −1 − i]]
(b) A basis for Range(L) is {[1 − i; −2i]}. A basis for Null(L) is {[−1 + i; 2]}.
B9 ⟨u, v⟩ = 1 − 6i, ⟨v, u⟩ = 1 + 6i, ‖u‖ = √5, ‖v‖ = √10
B10 ⟨u, v⟩ = 3 − 3i, ⟨v, u⟩ = 3 + 3i, ‖u‖ = √6, ‖v‖ = √6
B11 ⟨u, v⟩ = 0, ⟨v, u⟩ = 0, ‖u‖ = √7, ‖v‖ = √35
B12 ⟨u, v⟩ = 7 + 6i, ⟨v, u⟩ = 7 − 6i, ‖u‖ = √7, ‖v‖ = √15
B13 ⟨u, v⟩ = −4 + 2i, ⟨v, u⟩ = −4 − 2i, ‖u‖ = √8, ‖v‖ = √8
B14 (ZW)* = [[−2 + 3i, −i], [1 − 3i, 2i]] = W*Z*
B15 (ZW)* = [[9 + 7i, 0], [−4 − i, −2 + 2i]] = W*Z*
B16 The matrix is unitary.
B17 The matrix is not unitary.
B18 The matrix is unitary.
B19 The matrix is unitary.
B20 {[1; i; i], [i; −i; 1], [1; −1; −i]}
B21 {[i; 0; i], [i; 2; 0], [−1; i; 2]}
B22 projS(z) = [(5/4)i; 3/4 − (1/4)i; −1/4 + i]
B23 projS(z) = [1 − i; i; 3i]
B24 ⟨w, z⟩ = conj(2 − i) = 2 + i
B25 ⟨(1 + i)z, w⟩ = (1 − i)(2 − i) = 1 − 3i
B26 ⟨iw, (1 + 2i)z⟩ = (−i)(1 + 2i)(2 + i) = 5
B27 ⟨iz, (1 − i)w⟩ = (−i)(1 − i)(2 − i) = −3 − i
C Conceptual Problems
C1 (a) We have αz = (a + bi)z = ax − by + i(bx + ay). Thus, define Mα : R² → R² by

Mα(x, y) = (ax − by, bx + ay)

The standard matrix of this mapping is

[Mα] = [[a, −b], [b, a]]

as claimed.
(b) We may write

[[a, −b], [b, a]] = √(a² + b²) [[cos θ, −sin θ], [sin θ, cos θ]]

where cos θ = a/√(a² + b²) and sin θ = b/√(a² + b²). Hence, this matrix describes a dilation or contraction by factor √(a² + b²) following a rotation through angle θ.
(c) If α = 3 − 4i, then

[Mα] = [[3, 4], [−4, 3]] = 5 [[3/5, 4/5], [−4/5, 3/5]]

so cos θ = 3/5 and sin θ = −4/5, and hence θ ≈ −0.927 radians. Thus multiplication of a complex number by α increases the modulus by a factor of 5 and rotates by an angle of θ. (Compare this to the polar form of α.)
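C1's identification of multiplication by α = a + bi with the matrix [[a, −b], [b, a]] can be sketched as follows (the sample numbers are ours, chosen to match part (c)):

```python
import numpy as np

a, b = 3, -4                    # alpha = 3 - 4i, as in part (c)
M = np.array([[a, -b], [b, a]])
x, y = 2.0, 5.0                 # an arbitrary complex number x + yi
w = (a + b * 1j) * (x + y * 1j)
print(M @ np.array([x, y]))     # equals [Re(alpha*(x+yi)), Im(alpha*(x+yi))]
print(np.hypot(a, b))           # the scaling factor: 5.0
```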
C2 Using Theorem 9.3.4 (1) we get

‖(1/‖z‖) z‖ = |1/‖z‖| ‖z‖ = 1

C3 Observe that the required statement is equivalent to

‖z + w‖² ≤ (‖z‖ + ‖w‖)²

Consider

‖z + w‖² − (‖z‖ + ‖w‖)² = ⟨z + w, z + w⟩ − (‖z‖² + 2‖z‖‖w‖ + ‖w‖²)
= ⟨z, z⟩ + ⟨z, w⟩ + ⟨w, z⟩ + ⟨w, w⟩ − (⟨z, z⟩ + 2‖z‖‖w‖ + ⟨w, w⟩)
= ⟨z, w⟩ + conj(⟨z, w⟩) − 2‖z‖‖w‖
= 2 Re(⟨z, w⟩) − 2‖z‖‖w‖

But for any α ∈ C we have Re(α) ≤ |α|, so

Re(⟨z, w⟩) ≤ |⟨z, w⟩| ≤ ‖z‖‖w‖

by Theorem 9.3.4 (3).
C4 If ⟨u, v⟩ = 0, then we have

‖u + v‖² = ⟨u + v, u + v⟩ = ⟨u, u + v⟩ + ⟨v, u + v⟩
= ⟨u, u⟩ + ⟨u, v⟩ + ⟨v, u⟩ + ⟨v, v⟩
= ‖u‖² + 0 + 0 + ‖v‖²
= ‖u‖² + ‖v‖²

The converse is not true. One counterexample: consider V = C with its standard inner product and let u = 1 + i and v = 1 − i. Then ‖u + v‖² = ‖2‖² = 4 and ‖u‖² + ‖v‖² = 2 + 2 = 4, but ⟨u, v⟩ = (1 − i)(1 − i) = −2i ≠ 0.
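C4's one-dimensional counterexample can be replayed numerically (a sketch):

```python
import numpy as np

u, v = np.array([1 + 1j]), np.array([1 - 1j])
print(abs((u + v)[0]) ** 2)                     # ||u+v||^2 = 4
print(np.vdot(u, u).real + np.vdot(v, v).real)  # ||u||^2 + ||v||^2 = 4
print(np.vdot(u, v))                            # -2j: nonzero, so u, v not orthogonal
```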
C5 Since {v1, v2, v3} is a basis for R³, it is linearly independent in R³, and the vectors all have real entries. In C³, consider the equation

z1 v1 + z2 v2 + z3 v3 = 0

Suppose z1 = a1 + b1 i, z2 = a2 + b2 i, z3 = a3 + b3 i. Then this equation becomes

(a1 + b1 i)v1 + (a2 + b2 i)v2 + (a3 + b3 i)v3 = 0

Since v1, v2, v3 have real entries, we can break the left hand side into its real and imaginary parts:

(a1 v1 + a2 v2 + a3 v3) + (b1 v1 + b2 v2 + b3 v3)i = 0

Therefore,

a1 v1 + a2 v2 + a3 v3 = 0,  b1 v1 + b2 v2 + b3 v3 = 0

Since {v1, v2, v3} is linearly independent in R³, a1 = a2 = a3 = b1 = b2 = b3 = 0. Therefore, z1 = z2 = z3 = 0, and {v1, v2, v3} is linearly independent in C³. Since C³ has dimension 3, this is a basis for C³.
C6 We prove this by induction. If A is a 1 × 1 matrix, then the result is obvious. Assume the result holds for (n − 1) × (n − 1) matrices and consider an n × n matrix A. If we expand det Ā along the first row, we get by definition of the determinant

det Ā = Σᵢ₌₁ⁿ ā₁ᵢ C₁ᵢ(Ā)

where the C₁ᵢ(Ā) represent the cofactors of Ā. But, each of these cofactors is the determinant of an (n − 1) × (n − 1) matrix, so we have by our inductive hypothesis that C₁ᵢ(Ā) = conj(C₁ᵢ(A)). Hence,

det Ā = Σᵢ₌₁ⁿ ā₁ᵢ conj(C₁ᵢ(A)) = conj(Σᵢ₌₁ⁿ a₁ᵢ C₁ᵢ(A)) = conj(det A)
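C6's identity det(Ā) = conj(det A) holds for every square complex matrix; a quick randomized check (a sketch, not a substitute for the proof):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
# det of the entrywise conjugate equals the conjugate of det
print(np.isclose(np.linalg.det(A.conj()), np.conj(np.linalg.det(A))))  # True
```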
C7 Since AA* = I and BB* = I we get

(AB)(AB)* = A(BB*)A* = AA* = I

Hence, AB is unitary.
C8 No changes are required in the previous definitions and theorems, except that the scalars are now allowed to be complex numbers.
Section 9.4
A Practice Problems
A1 We have

C(λ) = det [[2 − λ, 1 + i], [1 − i, 1 − λ]] = λ² − 3λ = λ(λ − 3)

Hence, the eigenvalues are λ1 = 0 and λ2 = 3. For λ1 = 0 we get

A − 0I = [[2, 1 + i], [1 − i, 1]] ~ [[1, (1 + i)/2], [0, 0]]

Hence, a basis for Eλ1 is {[1 + i; −2]}. For λ2 = 3 we get

A − 3I = [[−1, 1 + i], [1 − i, −2]] ~ [[1, −1 − i], [0, 0]]

Hence, a basis for Eλ2 is {[1 + i; 1]}.
Therefore, A is diagonalized by P = [[1 + i, 1 + i], [−2, 1]] to D = [[0, 0], [0, 3]].
+
A2 We have
II
I
II3 − λ
5 III
C(λ) = I
= λ2 + 16
I −5 −3 − λII
Solving λ2 + 16 = 0 we find that the eigenvalues are λ1 = 4i and λ2 = −4i. For λ1 = 4i we get
+
, +
,
3 − 4i
5
5 3 + 4i
A − 4iI =
∼
−5
−3 − 4i
0
0
=+
,>
3 + 4i
Hence, a basis for Eλ1 is
. For λ2 = −4i we get
−5
, +
,
3 + 4i
5
5 3 − 4i
A + 4iI =
∼
−5
−3 + 4i
0
0
+
Hence, a basis for Eλ2 is
=+
,>
3 − 4i
.
−5
Therefore, A is diagonalized by P =
A3 We have
+
,
+
,
3 + 4i 3 − 4i
4i
0
to D =
.
−5
−5
0 −4i
II
I
II1 − λ
i III
C(λ) = I
= λ2
I i
−1 − λII
Hence, the only eigenvalue is λ1 = 0 with algebraic multiplicity 2. We get
+
, +
,
1
i
1 i
A − 0I =
∼
i −1
0 0
Hence, the geometric multiplicity of λ1 is 1 and so the matrix is not diagonalizable.
A4 We have

C(λ) = det [[i − λ, i], [2, i − λ]] = λ² − 2iλ − 1 − 2i

Hence, by the quadratic formula, we have

λ = (2i ± √(−4 − 4(−1 − 2i)))/2 = i ± √(2i) = i ± (1 + i)

Thus, λ1 = 1 + 2i and λ2 = −1.
For λ1 = 1 + 2i we get

A − λ1 I = [[−1 − i, i], [2, −1 − i]] ~ [[1, (−1 − i)/2], [0, 0]]

Hence, a basis for Eλ1 is {[1 + i; 2]}. For λ2 = −1 we get

A − λ2 I = [[1 + i, i], [2, 1 + i]] ~ [[1, (1 + i)/2], [0, 0]]

Hence, a basis for Eλ2 is {[−1 − i; 2]}.
Therefore, A is diagonalized by P = [[1 + i, −1 − i], [2, 2]] to D = [[1 + 2i, 0], [0, −1]].
A5 The characteristic polynomial is

C(λ) = det [[cos θ − λ, −sin θ], [sin θ, cos θ − λ]] = λ² − 2λ cos θ + 1

Hence, by the quadratic formula, we have

λ = (2 cos θ ± √(4 cos² θ − 4))/2 = cos θ ± √(−sin² θ) = cos θ ± i sin θ

Observe that if sin θ = 0, then Rθ is diagonal, so we could just take P = I. So, we now assume that sin θ ≠ 0.
For λ1 = cos θ + i sin θ, we get

Rθ − λ1 I = [[−i sin θ, −sin θ], [sin θ, −i sin θ]] ~ [[1, −i], [0, 0]]

Hence, a basis for Eλ1 is {[i; 1]}.
Similarly, for λ2 = cos θ − i sin θ, we get

Rθ − λ2 I = [[i sin θ, −sin θ], [sin θ, i sin θ]] ~ [[1, i], [0, 0]]

Hence, a basis for Eλ2 is {[−i; 1]}.
Thus, P = [[i, −i], [1, 1]] and D = [[cos θ + i sin θ, 0], [0, cos θ − i sin θ]].
A6 We have

C(λ) = det [[2 − λ, 2, −1], [−4, 1 − λ, 2], [2, 2, −1 − λ]] = λ(−λ² + 2λ − 5)

Hence, the eigenvalues are λ1 = 0 and the roots of −λ² + 2λ − 5. By the quadratic formula, we get that λ2 = 1 + 2i and λ3 = 1 − 2i. For λ1 = 0 we get

A − λ1 I = [[2, 2, −1], [−4, 1, 2], [2, 2, −1]] ~ [[1, 0, −1/2], [0, 1, 0], [0, 0, 0]]

Hence, a basis for Eλ1 is {[1; 0; 2]}. For λ2 = 1 + 2i we get

A − λ2 I = [[1 − 2i, 2, −1], [−4, −2i, 2], [2, 2, −2 − 2i]] ~ [[1, 0, −1], [0, 1, −i], [0, 0, 0]]

Hence, a basis for Eλ2 is {[1; i; 1]}. For λ3 = 1 − 2i we get

A − λ3 I = [[1 + 2i, 2, −1], [−4, 2i, 2], [2, 2, −2 + 2i]] ~ [[1, 0, −1], [0, 1, i], [0, 0, 0]]

Hence, a basis for Eλ3 is {[1; −i; 1]}.
Therefore, A is diagonalized by P = [[1, 1, 1], [0, i, −i], [2, 1, 1]] to D = [[0, 0, 0], [0, 1 + 2i, 0], [0, 0, 1 − 2i]].
A7 We have

C(λ) = det [[−6 − 3i − λ, −2, −3 − 2i], [10, 2 − λ, 5], [8 + 6i, 3, 4 + 4i − λ]] = (i − λ)(λ² + 1) = −(λ − i)²(λ + i)

Hence, the eigenvalues are λ1 = i and λ2 = −i. For λ1 = i we get

A − λ1 I = [[−6 − 4i, −2, −3 − 2i], [10, 2 − i, 5], [8 + 6i, 3, 4 + 3i]] ~ [[1, 0, 1/2], [0, 1, 0], [0, 0, 0]]

Hence, a basis for Eλ1 is {[−1; 0; 2]}.
Thus, A is not diagonalizable since gλ1 < aλ1.
A8 We have

C(λ) = det [[2 − λ, 2, 3], [1, 1 − λ, −1], [−1, 0, 2 − λ]] = (1 − λ)(λ² − 4λ + 5)

Hence, the eigenvalues are λ1 = 1 and the roots of λ² − 4λ + 5. By the quadratic formula, we get that λ2 = 2 + i and λ3 = 2 − i. For λ1 = 1 we get

A − I = [[1, 2, 3], [1, 0, −1], [−1, 0, 1]] ~ [[1, 0, −1], [0, 1, 2], [0, 0, 0]]

Hence, a basis for Eλ1 is {[1; −2; 1]}. For λ2 = 2 + i we get

A − λ2 I = [[−i, 2, 3], [1, −1 − i, −1], [−1, 0, −i]] ~ [[1, 0, i], [0, 1, 1], [0, 0, 0]]

Hence, a basis for Eλ2 is {[−i; −1; 1]}. For λ3 = 2 − i we get

A − λ3 I = [[i, 2, 3], [1, −1 + i, −1], [−1, 0, i]] ~ [[1, 0, −i], [0, 1, 1], [0, 0, 0]]

Hence, a basis for Eλ3 is {[i; −1; 1]}.
Therefore, A is diagonalized by P = [[1, −i, i], [−2, −1, −1], [1, 1, 1]] to D = [[1, 0, 0], [0, 2 + i, 0], [0, 0, 2 − i]].
A9 We have

C(λ) = det [[5 − λ, −1 + i, 2i], [−2 − 2i, 2 − λ, 1 − i], [4i, −1 − i, −1 − λ]] = (1 − λ)(λ² − 5λ + 4) = −(λ − 1)²(λ − 4)

Hence, the eigenvalues are λ1 = 1 and λ2 = 4. For λ1 = 1 we get

A − λ1 I = [[4, −1 + i, 2i], [−2 − 2i, 1, 1 − i], [4i, −1 − i, −2]] ~ [[1, (−1 + i)/4, i/2], [0, 0, 0], [0, 0, 0]]

Hence, a basis for Eλ1 is {[1 − i; 4; 0], [−i; 0; 2]}. For λ2 = 4 we get

A − λ2 I = [[1, −1 + i, 2i], [−2 − 2i, −2, 1 − i], [4i, −1 − i, −5]] ~ [[1, 0, i], [0, 1, (1 − i)/2], [0, 0, 0]]

Hence, a basis for Eλ2 is {[−2i; −1 + i; 2]}.
Therefore, A is diagonalized by P = [[1 − i, −i, −2i], [4, 0, −1 + i], [0, 2, 2]] to D = [[1, 0, 0], [0, 1, 0], [0, 0, 4]].
A10 We have

C(λ) = det [[2 − λ, 1, −1], [2, 1 − λ, 0], [3, −1, 2 − λ]] = (1 − λ)(λ² − 4λ + 5)

Hence, the eigenvalues are λ1 = 1 and the roots of λ² − 4λ + 5. By the quadratic formula, we get that λ2 = 2 + i and λ3 = 2 − i. For λ1 = 1 we get

A − I = [[1, 1, −1], [2, 0, 0], [3, −1, 1]] ~ [[1, 0, 0], [0, 1, −1], [0, 0, 0]]

Hence, a basis for Eλ1 is {[0; 1; 1]}. For λ2 = 2 + i we get

A − λ2 I = [[−i, 1, −1], [2, −1 − i, 0], [3, −1, −i]] ~ [[1, 0, −(1 + 2i)/5], [0, 1, −(3 + i)/5], [0, 0, 0]]

Hence, a basis for Eλ2 is {[1 + 2i; 3 + i; 5]}. For λ3 = 2 − i we get

A − λ3 I = [[i, 1, −1], [2, −1 + i, 0], [3, −1, i]] ~ [[1, 0, −(1 − 2i)/5], [0, 1, −(3 − i)/5], [0, 0, 0]]

Hence, a basis for Eλ3 is {[1 − 2i; 3 − i; 5]}.
Therefore, A is diagonalized by P = [[0, 1 + 2i, 1 − 2i], [1, 3 + i, 3 − i], [1, 5, 5]] to D = [[1, 0, 0], [0, 2 + i, 0], [0, 0, 2 − i]].
A11 We have

C(λ) = det [[−λ, −i, 2 + i], [−i, 2 − i − λ, 2i], [−i, 2 − 2i, 3i − λ]] = −(λ − i)(λ² − (2 + i)λ + 2i) = −(λ − i)²(λ − 2)

Hence, the eigenvalues are λ1 = i and λ2 = 2. For λ1 = i we get

A − iI = [[−i, −i, 2 + i], [−i, 2 − 2i, 2i], [−i, 2 − 2i, 2i]] ~ [[1, 0, 2i], [0, 1, −1], [0, 0, 0]]

Hence, a basis for Eλ1 is {[−2i; 1; 1]}.
Thus, A is not diagonalizable since gλ1 < aλ1.
A12 We have

C(λ) = det [[1 + i − λ, 1, 0], [1, 1 − λ, −i], [1, 0, 1 − λ]] = (1 + i − λ)(1 − λ)² + (−1 − i + λ) = −(λ − (1 + i))[λ² − 2λ + 1 − 1] = −(λ − (1 + i))(λ − 2)λ

Hence, the eigenvalues are λ1 = 1 + i, λ2 = 2, and λ3 = 0. For λ1 = 1 + i we get

A − λ1 I = [[0, 1, 0], [1, −i, −i], [1, 0, −i]] ~ [[1, 0, −i], [0, 1, 0], [0, 0, 0]]

Hence, a basis for Eλ1 is {[i; 0; 1]}. For λ2 = 2 we get

A − λ2 I = [[−1 + i, 1, 0], [1, −1, −i], [1, 0, −1]] ~ [[1, 0, −1], [0, 1, −1 + i], [0, 0, 0]]

Hence, a basis for Eλ2 is {[1; 1 − i; 1]}. For λ3 = 0 we get

A − λ3 I = [[1 + i, 1, 0], [1, 1, −i], [1, 0, 1]] ~ [[1, 0, 1], [0, 1, −1 − i], [0, 0, 0]]

Hence, a basis for Eλ3 is {[−1; 1 + i; 1]}.
Therefore, A is diagonalized by P = [[i, 1, −1], [0, 1 − i, 1 + i], [1, 1, 1]] to D = [[1 + i, 0, 0], [0, 2, 0], [0, 0, 0]].
B Homework Problems
B1 P = [[i, −i], [1, 1]], D = [[2 + i, 0], [0, 2 − i]]
B2 The matrix is not diagonalizable.
B3 P = [[−i, 1], [1, 1]], D = [[1, 0], [0, i]]
B4 P = [[1 + i, 1 − i], [2, 2]], D = [[3, 0], [0, 1]]
B5 The matrix is not diagonalizable.
B6 P = [[−i, 0, −i], [0, 1, i], [1, 0, 2]], D = [[i, 0, 0], [0, i, 0], [0, 0, 1 + i]]
B7 P = [[i, −1, 0], [1, 1, i], [0, 0, 1]], D = [[2, 0, 0], [0, 4, 0], [0, 0, 2i]]
B8 The matrix is not diagonalizable.
B9 P = [[1, −1, 1], [1, 1, 1], [1, 0, 2]], D = [[2 + 2i, 0, 0], [0, −2 + 2i, 0], [0, 0, 2i]]
B10 P = [[−i, i, 2], [i, 0, 1], [1, 1, 0]], D = [[8, 0, 0], [0, 4, 0], [0, 0, 4]]
B11 P = [[−i, −1, −1], [i, 0, 1], [1, 1, 0]], D = [[0, 0, 0], [0, 1 + i, 0], [0, 0, 1 + i]]
B12 The matrix is not diagonalizable.




0
0 
1 −1 1
2 + 2i




1 1, D =  0
−2 + 2i 0 
B9 P = 1




1
0 2
0
0
2i




−i i 2
8 0 0




B10 P =  i 0 1, D = 0 4 0




1 1 0
0 0 4




0
0 
−i −1 −1
0




0
1, D = 0 1 + i
0 
B11 P =  i




1
1
0
0
0
1+i
B12 The matrix is not diagonalizable.
Copyright © 2020 Pearson Canada Inc.
28
C Conceptual Problems
C1 If z is an eigenvector of A, then there exists an eigenvalue λ, such that Az = λz. Hence,
Az = Az = λz = λz
Hence, z is an eigenvector of A with eigenvalue λ.
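C1's statement about real matrices can be sketched numerically with a real rotation matrix (our choice of example):

```python
import numpy as np

A = np.array([[0.0, -1.0], [1.0, 0.0]])  # real matrix with eigenvalues +/- i
lams, V = np.linalg.eig(A)
lam, z = lams[0], V[:, 0]
print(np.allclose(A @ z, lam * z))                            # True: eigenpair
print(np.allclose(A @ z.conj(), lam.conjugate() * z.conj()))  # True: conjugate pair
```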
C2 Since A is diagonalizable over C, there exists an invertible matrix P such that

P⁻¹AP = diag(λ1, . . . , λn)

Thus, by definition, A and diag(λ1, . . . , λn) are similar. So, by Theorem 6.2.1, we have that

tr A = tr diag(λ1, . . . , λn) = λ1 + · · · + λn

C3 Since an n × n matrix has exactly n eigenvalues (including repetitions according to their algebraic multiplicity), A has n eigenvalues. By Theorem 9.4.1, the non-real eigenvalues of A come in complex conjugate pairs. Therefore, since n is odd, at least one eigenvalue cannot have a complex conjugate pair and so must be real.

C4 We have

A(x + iy) = (a + bi)(x + iy) = ax − by + i(bx + ay)   (1)

(a) If x = 0, then we have A(iy) = −by + iay. But, the real part of the left side of this equation is zero. Hence, the real part of the right side must also be zero. So, y = 0, since b ≠ 0. This implies that x + iy = 0, which contradicts the definition of an eigenvector, so x ≠ 0.
If y = 0, then we have Ax = ax + ibx. But the left side has zero imaginary part, hence we must have x = 0 since b ≠ 0. But, then x + iy = 0, which contradicts the definition of an eigenvector, so y ≠ 0.
(b) Suppose that x = ky, k ≠ 0. Then the real part and the imaginary part of (1) are

A(ky) = a(ky) − by = (ak − b)y  ⇒  Ay = (1/k)(ak − b)y
Ay = b(ky) + ay = (bk + a)y

Since y ≠ 0, setting these equal to each other we get

(1/k)(ak − b)y = (bk + a)y  ⇒  b(k² + 1) = 0

which is impossible since b ≠ 0.
(c) Suppose that v = px + qy is an eigenvector of A. Since v ∈ Rⁿ and A ∈ Mn×n(R), v must correspond to a real eigenvalue µ of A. Therefore, we have

Av = µv = µpx + µqy

and

Av = A(px + qy) = p(ax − by) + q(bx + ay) = (pa + qb)x + (qa − pb)y.

Since {x, y} is linearly independent, the coefficients of x and y must be equal, so we get

pa + qb = µp
qa − pb = µq

Multiplying the first by q and the second by p gives

qpa + q²b = µpq = pqa − p²b  ⇒  q²b = −p²b

Since b ≠ 0, this implies that p = 0 = q. But, this contradicts our assumption that v is an eigenvector of A. Thus, there can be no real eigenvector of A in Span{x, y}.

C5 (a) From parts (a) and (b) of C4 we have that B = {x, y} is linearly independent. Thus, by the Invertible Matrix Theorem P is invertible.
Let L(x) = Ax. By Theorem 4.6.1, we have that P⁻¹AP = [L]B. Moreover, from our work in C4 we have that

L(x) = Ax = ax − by
L(y) = Ay = bx + ay

Hence,

P⁻¹AP = [L]B = [[L(x)]B [L(y)]B] = [[a, b], [−b, a]]

(b) By C3, A has a real eigenvalue µ with corresponding real eigenvector v. Then, repeating what we did in part (a), we let B = {v, x, y} and take P = [v x y]. We get that

P⁻¹AP = [L]B = [[µ, 0, 0], [0, a, b], [0, −b, a]]
Section 9.5
A Practice Problems
A1 We have AA* = [[11, 8 − 8i], [8 + 8i, 27]] = A*A, so A is normal.
A2 We have AA* = [[5, −1 − 3i], [−1 + 3i, 3]], but A*A = [[5, 1 − 3i], [1 + 3i, 3]]. So, A is not normal.
A3 We have AA* = [[2, 0], [0, 2]] = A*A, so A is normal.
A4 We have AA* = [[6, 2 + 4i], [2 − 4i, 13]], but A*A = [[6, 4 + 2i], [4 − 2i, 13]]. So, A is not normal.
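The normality test used in A1-A4, comparing AA* with A*A, is a short sketch in code (the helper name `is_normal` and the sample matrices are ours):

```python
import numpy as np

def is_normal(A, tol=1e-12):
    """True iff A A* = A* A."""
    return np.allclose(A @ A.conj().T, A.conj().T @ A, atol=tol)

print(is_normal(np.array([[2, 1j], [-1j, 2]])))  # True: Hermitian, hence normal
print(is_normal(np.array([[1, 1j], [0, 1]])))    # False: not normal
```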
A5 We have

C(λ) = det [[1 + 2i − λ, −1], [−1, 1 + 2i − λ]] = λ² + (−2 − 4i)λ − 4 + 4i

Using the quadratic formula, we find that the eigenvalues are λ1 = 2i and λ2 = 2 + 2i.
For λ1 = 2i we get

A − λ1 I = [[1, −1], [−1, 1]] ~ [[1, −1], [0, 0]]

Hence, a basis for Eλ1 is {[1; 1]}. For λ2 = 2 + 2i we get

A − λ2 I = [[−1, −1], [−1, −1]] ~ [[1, 1], [0, 0]]

Hence, a basis for Eλ2 is {[−1; 1]}.
Therefore, A is unitarily diagonalized by U = [[1/√2, −1/√2], [1/√2, 1/√2]] to D = [[2i, 0], [0, 2 + 2i]].

A6 We have

C(λ) = det [[5i − λ, −1 − i], [1 − i, 4i − λ]] = λ² − 9iλ − 18

Using the quadratic formula, we find that the eigenvalues are λ1 = 3i and λ2 = 6i.
For λ1 = 3i we get

A − λ1 I = [[2i, −1 − i], [1 − i, i]] ~ [[1, −1/2 + (1/2)i], [0, 0]]

Hence, a basis for Eλ1 is {[1 − i; 2]}. For λ2 = 6i we get

A − λ2 I = [[−i, −1 − i], [1 − i, −2i]] ~ [[1, 1 − i], [0, 0]]

Hence, a basis for Eλ2 is {[−1 + i; 1]}.
Therefore, A is unitarily diagonalized by U = [[(1 − i)/√6, (−1 + i)/√3], [2/√6, 1/√3]] to D = [[3i, 0], [0, 6i]].

A7 We have

C(λ) = det [[3 − λ, 5], [−5, 3 − λ]] = λ² − 6λ + 34

Using the quadratic formula, we find that the eigenvalues are λ1 = 3 + 5i and λ2 = 3 − 5i.
For λ1 = 3 + 5i we get

A − λ1 I = [[−5i, 5], [−5, −5i]] ~ [[1, i], [0, 0]]

Hence, a basis for Eλ1 is {[−i; 1]}. For λ2 = 3 − 5i we get

A − λ2 I = [[5i, 5], [−5, 5i]] ~ [[1, −i], [0, 0]]

Hence, a basis for Eλ2 is {[i; 1]}.
Therefore, A is unitarily diagonalized by U = [[−i/√2, i/√2], [1/√2, 1/√2]] to D = [[3 + 5i, 0], [0, 3 − 5i]].
A8 We have C(λ) = λ² − 5λ + 4 = (λ − 4)(λ − 1). Thus, the eigenvalues are λ1 = 4 and λ2 = 1.
For λ1 = 4 we get

A − λ1 I = [[−2, 1 + i], [1 − i, −1]]  ⇒  z1 = [1 + i; 2]

For λ2 = 1 we get

A − λ2 I = [[1, 1 + i], [1 − i, 2]]  ⇒  z2 = [−1 − i; 1]

Hence, we have D = [[4, 0], [0, 1]] and U = [[(1 + i)/√6, (−1 − i)/√3], [2/√6, 1/√3]].
A9 We have C(λ) = λ² + 1 = (λ − i)(λ + i). Thus, the eigenvalues are λ1 = i and λ2 = −i.
For λ1 = i we get

A − λ1 I = [[−i, i], [i, −i]]  ⇒  z1 = [1; 1]

For λ2 = −i we get

A − λ2 I = [[i, i], [i, i]]  ⇒  z2 = [−1; 1]

Hence, we have D = [[i, 0], [0, −i]] and U = [[1/√2, −1/√2], [1/√2, 1/√2]].
A10 We have

C(λ) = det [[4 − λ, √2 + i], [√2 − i, 2 − λ]] = λ² − 6λ + 5 = (λ − 1)(λ − 5)

The eigenvalues are λ1 = 1 and λ2 = 5.
For λ1 = 1 we get

A − I = [[3, √2 + i], [√2 − i, 1]] ~ [[1, (√2 + i)/3], [0, 0]]

A basis for the eigenspace of λ1 is {[√2 + i; −3]}.
For λ2 = 5 we get

A − 5I = [[−1, √2 + i], [√2 − i, −3]] ~ [[1, −(√2 + i)], [0, 0]]

A basis for the eigenspace of λ2 is {[√2 + i; 1]}.
We normalize the basis vectors for the eigenspaces to get the orthonormal basis {[(√2 + i)/√12; −3/√12], [(√2 + i)/2; 1/2]} for C² of eigenvectors of A. Hence, U = [[(√2 + i)/√12, (√2 + i)/2], [−3/√12, 1/2]] unitarily diagonalizes A to U*AU = [[1, 0], [0, 5]].
A11 We have C(λ) = −(λ − 2)(λ² − λ − 2) = −(λ − 2)²(λ + 1), so we get eigenvalues λ1 = 2 and λ2 = −1.
For λ1 = 2 we get

C − 2I = [[−1, 0, 1 + i], [0, 0, 0], [1 − i, 0, −2]]  ⇒  z1 = [0; 1; 0], z2 = [1 + i; 0; 1]

For λ2 = −1 we get

C + I = [[2, 0, 1 + i], [0, 3, 0], [1 − i, 0, 1]]  ⇒  z3 = [1 + i; 0; −2]

Hence we have D = [[2, 0, 0], [0, 2, 0], [0, 0, −1]] and U = [[0, (1 + i)/√3, (1 + i)/√6], [1, 0, 0], [0, 1/√3, −2/√6]].
A12 We have

C(λ) = det [[i − λ, 0, 0], [0, −1 − λ, 1 − i], [0, 1 + i, −λ]] = (i − λ)(λ² + λ − 2)

The eigenvalues are λ1 = i, λ2 = −2, and λ3 = 1.
For λ1 = i we get

A − iI = [[0, 0, 0], [0, −1 − i, 1 − i], [0, 1 + i, −i]] ~ [[0, 1, 0], [0, 0, 1], [0, 0, 0]]

A basis for the eigenspace of λ1 is {[1; 0; 0]}.
For λ2 = −2 we get

A + 2I = [[2 + i, 0, 0], [0, 1, 1 − i], [0, 1 + i, 2]] ~ [[1, 0, 0], [0, 1, 1 − i], [0, 0, 0]]

A basis for the eigenspace of λ2 is {[0; −1 + i; 1]}.
For λ3 = 1 we get

A − 1I = [[−1 + i, 0, 0], [0, −2, 1 − i], [0, 1 + i, −1]] ~ [[1, 0, 0], [0, 1, (−1 + i)/2], [0, 0, 0]]

A basis for the eigenspace of λ3 is {[0; 1 − i; 2]}.
Hence,

U = [[1, 0, 0], [0, (−1 + i)/√3, (1 − i)/√6], [0, 1/√3, 2/√6]]

unitarily diagonalizes F to D = [[i, 0, 0], [0, −2, 0], [0, 0, 1]].
B Homework Problems
B1 The matrix is not normal.
B2 The matrix is normal.
B3 The matrix is normal.
B4 The matrix is normal.
B5 The matrix is not normal.
B6 The matrix is normal.
B7 U = [[(−1 + 2i)/√30, (1 − 2i)/√6], [5/√30, 1/√6]], D = [[6, 0], [0, 0]]
B8 U = [[1/√2, −1/√2], [1/√2, 1/√2]], D = [[1 + i, 0], [0, −1 + i]]
B9 U = [[i/√2, −i/√2], [1/√2, 1/√2]], D = [[−1 + 2i, 0], [0, −1 − 2i]]
B10 U = [[(3 − i)/√14, (1 + 3i)/√35], [2/√14, −5i/√35]], D = [[6i, 0], [0, −i]]
B11 U = [[1/√2, −1/2, −1/2], [0, −i/√2, i/√2], [1/√2, 1/2, 1/2]], D = [[0, 0, 0], [0, √2 i, 0], [0, 0, −√2 i]]
B12 U = [[(1 + i)/√8, (−1 − 3i)/4, (−1 + i)/4], [(1 + i)/√8, (−1 + i)/4, (−1 − 3i)/4], [1/√2, 1/2, 1/2]], D = [[0, 0, 0], [0, 2, 0], [0, 0, −2]]
C Conceptual Problems
C1 (a) We have

‖Uz‖² = (Uz)*(Uz) = z*U*Uz = z*Iz = z*z = ‖z‖²

(b) If λ is an eigenvalue of U with corresponding unit eigenvector z, then

|λ| = |λ| ‖z‖ = ‖λz‖ = ‖Uz‖ = ‖z‖ = 1

(c) The matrix U = [[i, 0], [0, −i]] is unitary and has eigenvalues i and −i.
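C1(b) and (c) in code (a sketch): every eigenvalue of a unitary matrix has modulus 1, though it need not be real.

```python
import numpy as np

U = np.diag([1j, -1j])                         # the example from part (c)
print(np.allclose(U.conj().T @ U, np.eye(2)))  # True: unitary
print(np.abs(np.linalg.eigvals(U)))            # [1. 1.]: all moduli equal 1
```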
C2 Let λ1 and λ2 be the distinct eigenvalues corresponding to v and z respectively. Then, we have

λ2 ⟨v, z⟩ = ⟨v, λ2 z⟩ = ⟨v, Az⟩ = ⟨A*v, z⟩ = ⟨Av, z⟩ = ⟨λ1 v, z⟩ = conj(λ1)⟨v, z⟩ = λ1 ⟨v, z⟩

since λ1 is real. Therefore, since λ1 ≠ λ2 we must have ⟨v, z⟩ = 0 as required.

C3 By Schur's Theorem, we have that there exists a unitary matrix U such that U*AU = T is upper triangular. Then, observe that

T* = (U*AU)* = U*A*(U*)* = U*AU = T

Thus, T is also lower triangular and hence is diagonal. Since A is unitarily similar to the diagonal matrix T, it is unitarily diagonalizable.

C4 By Theorem 9.4.2 all the eigenvalues λ1, . . . , λn are real. Also, by the Spectral Theorem for Hermitian Matrices, A is unitarily similar to the diagonal matrix D = diag(λ1, . . . , λn). Thus,

det A = det D = λ1 · · · λn ∈ R

C5 (a) We have

AA* = A(iA) = iAA = (iA)A = A*A

so A is normal.
(b) Assume that v is an eigenvector of A corresponding to λ. We get

iλv = iAv = A*v = λ̄v

by Theorem 9.4.6 (3). Thus, (iλ − λ̄)v = 0, so iλ = λ̄ since v ≠ 0.

C6 (a) The matrices A = [[2, i], [−i, 2]] and B = [[0, 1 + i], [1 − i, 0]] are both Hermitian, but AB = [[1 + i, 2 + 2i], [2 − 2i, 1 − i]] is not Hermitian.
(b) (A²)* = (AA)* = A*A* = AA = A². Thus, A² is Hermitian.
(c) (A⁻¹)* = (A*)⁻¹ = A⁻¹. Thus, A⁻¹ is Hermitian.
C7 (a) If A is Hermitian and unitary, then

I = A*A = [[a, b + ci], [b − ci, d]] [[a, b + ci], [b − ci, d]] = [[a² + b² + c², b(a + d) + c(a + d)i], [b(a + d) − c(a + d)i, b² + c² + d²]]

Either a + d = 0 or b = c = 0.
If b = c = 0, then A = [[±1, 0], [0, ±1]].
If a + d = 0, we have a = −d, so a² = d² and the diagonal entries both require that a² + b² + c² = 1. Thus, b² + c² = 1 − a². Clearly |a| ≤ 1. Moreover, since b and c satisfy the equation of a circle, there exists some angle θ such that b = √(1 − a²) cos θ and c = √(1 − a²) sin θ. Note that in this case b + ci = √(1 − a²) e^{iθ} and b − ci = √(1 − a²) e^{−iθ}. Hence, the general form of A is

A = [[a, √(1 − a²) e^{iθ}], [√(1 − a²) e^{−iθ}, −a]]

where |a| ≤ 1 and θ is any angle.
(b) If we add the additional requirement that A is diagonal, then the only possibilities are A = [[±1, 0], [0, ±1]].
(c) For a 3 × 3 matrix to be Hermitian and diagonal requires that A = diag(a, b, c) where a, b, c ∈ R. If we require that the matrix is also unitary, then the columns must have length 1, so A = diag(±1, ±1, ±1).
C8 Let B = {v1, . . . , vn} be an orthonormal basis of V. Assume that L is Hermitian. By definition of [L]B and our formula for finding coordinates with respect to a basis, we get

([L]B)jk = ⟨vj, L(vk)⟩ = ⟨L(vj), vk⟩ = conj(⟨vk, L(vj)⟩) = conj(([L]B)kj)

Hence, [L]B is Hermitian.
On the other hand, assume that [L]B is Hermitian. Then, we have

⟨vj, L(vk)⟩ = ([L]B)jk = conj(([L]B)kj) = conj(⟨vk, L(vj)⟩) = ⟨L(vj), vk⟩

Let x, y ∈ V. Then we can write x = c1v1 + · · · + cnvn and y = d1v1 + · · · + dnvn. Observe that

⟨x, L(y)⟩ = ⟨c1v1 + · · · + cnvn, L(d1v1 + · · · + dnvn)⟩
= ⟨c1v1 + · · · + cnvn, d1L(v1) + · · · + dnL(vn)⟩
= Σ_{j,k} conj(cj) dk ⟨vj, L(vk)⟩
= Σ_{j,k} conj(cj) dk ⟨L(vj), vk⟩
= ⟨c1L(v1) + · · · + cnL(vn), d1v1 + · · · + dnvn⟩
= ⟨L(c1v1 + · · · + cnvn), d1v1 + · · · + dnvn⟩
= ⟨L(x), y⟩

Hence, L is Hermitian.
Chapter 9 Quiz
E1 z1 + z2 = 4 + 6i
E2 2z1 − iz2 = 6 + 8i − (−2 + i) = 8 + 7i
E3 z1 z3 = 11 − 2i
E4 z1 z2 z3 = 15 + 20i
E5 z1/z2 = 11/5 − (2/5)i
E6 z2/z3 = −3/5 + (4/5)i
E7 (a) z1 = 2(cos(−π/3) + i sin(−π/3)), z2 = 2√2 (cos(π/4) + i sin(π/4))
(b) z1 z2 = 4√2 (cos(−π/12) + i sin(−π/12))
z1/z2 = (1/√2)(cos(−7π/12) + i sin(−7π/12))
E8 w0 = 1/√2 + (1/√2)i, w1 = −1/√2 − (1/√2)i
E9 Row reducing gives

[[1, −i, −1 | 1], [−2, 4i, 0 | −2 + 6i], [−i, 2, 4i | 9]] ~ [[1, −i, −1 | 1], [0, 2i, −2 | 6i], [0, 3, 3i | 9 + i]] ~ [[1, −i, −1 | 1], [0, 2i, −2 | 6i], [0, 0, 0 | i]]

Hence, the system is inconsistent.
E10 Row reducing gives

[[1 − i, 1 + i, 2 | 1 + i], [i, 0, −1 | 2i], [1 − i, 2 − i, −i | 6 + i]] ~ [[1, i, 1 + i | i], [0, 1, −i | 1 + 2i], [0, 1 − 2i, −2 − i | 5]] ~ [[1, 0, i | 2], [0, 1, −i | 1 + 2i], [0, 0, 0 | 0]]

Hence, the system is consistent with one parameter. Let z3 = t ∈ C. Then, the general solution is

z = [2; 1 + 2i; 0] + t[−i; i; 1], t ∈ C.


 7 − i 


E11 2u + (1 + i)v = 3 + 5i.


9 + 3i


3 + i


E12 u =  −i .


2
E13 &u, v' = 11 − 4i .
Copyright © 2020 Pearson Canada Inc.
37
E14 &v, u' = 11 + 4i.
E15 ‖v‖ = √27.
E16 $\operatorname{proj}_{u}(v) = \dfrac{1}{15}\begin{bmatrix} 29-23i \\ 4+11i \\ 22-8i \end{bmatrix}$
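The identities behind E13 to E16 (conjugate symmetry, the norm, and the projection formula) can be sketched with NumPy. The vectors below are hypothetical stand-ins, not the quiz's own u and v; `np.vdot` conjugates its first argument, matching ⟨·,·⟩.

```python
import numpy as np

# Hypothetical vectors for illustration only.
u = np.array([3 + 1j, -1j, 2])
v = np.array([1 - 2j, 4, 1 + 1j])

# Conjugate symmetry, as in E13/E14: <v, u> is the conjugate of <u, v>.
assert np.isclose(np.vdot(v, u), np.conj(np.vdot(u, v)))

# Norm, as in E15: ||v|| = sqrt(<v, v>).
assert np.isclose(np.linalg.norm(v), np.sqrt(np.vdot(v, v).real))

# Projection, as in E16: proj_u(v) = (<u, v> / ||u||^2) u.
proj = (np.vdot(u, v) / np.vdot(u, u)) * u
# The residual v - proj is orthogonal to u.
assert np.isclose(np.vdot(u, v - proj), 0)
```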
E17 Denote the vectors in B by $z_1$, $z_2$, and $z_3$ respectively.
First Step: Let $w_1 = z_1$ and $S_1 = \operatorname{Span}\{w_1\}$.
Second Step: Determine $\operatorname{perp}_{S_1}(z_2)$:
$\operatorname{perp}_{S_1}(z_2) = z_2 - \dfrac{\langle w_1, z_2\rangle}{\|w_1\|^2} w_1 = \begin{bmatrix} 1 \\ 1+i \\ 1+i \end{bmatrix} - \dfrac{3-2i}{3}\begin{bmatrix} 1 \\ i \\ i \end{bmatrix} = \begin{bmatrix} \frac{2}{3}i \\ \frac{1}{3} \\ \frac{1}{3} \end{bmatrix}$
So, we let $w_2 = \begin{bmatrix} 2i \\ 1 \\ 1 \end{bmatrix}$ and $S_2 = \operatorname{Span}\{w_1, w_2\}$.
Third Step: Determine $\operatorname{perp}_{S_2}(z_3)$:
$w_3 = z_3 - \dfrac{\langle w_1, z_3\rangle}{\|w_1\|^2} w_1 - \dfrac{\langle w_2, z_3\rangle}{\|w_2\|^2} w_2 = \begin{bmatrix} 1-2i \\ i \\ 1 \end{bmatrix} - \dfrac{2-3i}{3}\begin{bmatrix} 1 \\ i \\ i \end{bmatrix} - \dfrac{-3-i}{6}\begin{bmatrix} 2i \\ 1 \\ 1 \end{bmatrix} = \begin{bmatrix} 0 \\ -\frac{1}{2} + \frac{1}{2}i \\ \frac{1}{2} - \frac{1}{2}i \end{bmatrix}$
Then, {w1 , w2 , w3 } is an orthogonal basis for S.
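The three steps above can be sketched as a small complex Gram-Schmidt routine. The inputs below are the basis vectors as read from the computation above (an assumption about the original problem statement); the test confirms mutual orthogonality and that $3w_2 = (2i, 1, 1)$.

```python
import numpy as np

# Classical Gram-Schmidt over C; np.vdot conjugates its first argument,
# matching the inner product <w, z> used in the text.
def gram_schmidt(vectors):
    basis = []
    for z in vectors:
        w = z.astype(complex)
        for b in basis:
            w = w - (np.vdot(b, w) / np.vdot(b, b)) * b
        basis.append(w)
    return basis

# Basis vectors of B, as read from the solution above (assumed).
z1 = np.array([1, 1j, 1j])
z2 = np.array([1, 1 + 1j, 1 + 1j])
z3 = np.array([1 - 2j, 1j, 1])

w1, w2, w3 = gram_schmidt([z1, z2, z3])

# The result is mutually orthogonal ...
assert np.isclose(np.vdot(w1, w2), 0)
assert np.isclose(np.vdot(w1, w3), 0)
assert np.isclose(np.vdot(w2, w3), 0)
# ... and scaling w2 by 3 gives the vector chosen in the second step.
assert np.allclose(3 * w2, np.array([2j, 1, 1]))
```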
E18 Observe that ⟨u, v⟩ = −4i, so we must apply the Gram-Schmidt Procedure to get an orthogonal basis for S. Take $w_1 = u$, and
$w_2 = v - \dfrac{\langle w_1, v\rangle}{\|w_1\|^2} w_1 = \begin{bmatrix} 1-i \\ 1+i \\ 2 \end{bmatrix} - \dfrac{-4i}{4}\begin{bmatrix} i \\ -1 \\ 1+i \end{bmatrix} = \begin{bmatrix} -i \\ 1 \\ 1+i \end{bmatrix}$
Then $\{w_1, w_2\}$ is an orthogonal basis for S. Hence,
$\operatorname{proj}_S(z) = \dfrac{\langle w_1, z\rangle}{\|w_1\|^2} w_1 + \dfrac{\langle w_2, z\rangle}{\|w_2\|^2} w_2 = \begin{bmatrix} 1/2 \\ i/2 \\ 2-i \end{bmatrix}$
E19 We have
$UU^* = \dfrac{1}{\sqrt{3}}\begin{bmatrix} 1 & 1-i \\ -1+i & -i \end{bmatrix} \cdot \dfrac{1}{\sqrt{3}}\begin{bmatrix} 1 & -1-i \\ 1+i & i \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$
So, U is unitary.
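Unitarity is mechanical to verify: $U U^* = I = U^* U$. A NumPy sketch, taking U as read from the computation above (the entries are a reconstruction, so treat them as an assumption):

```python
import numpy as np

# U as read from the E19 computation above (assumed reconstruction).
U = np.array([[1, 1 - 1j],
              [-1 + 1j, -1j]]) / np.sqrt(3)

# U is unitary iff its conjugate transpose is its inverse.
assert np.allclose(U @ U.conj().T, np.eye(2))
assert np.allclose(U.conj().T @ U, np.eye(2))
```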
E20 We have
$C(\lambda) = \begin{vmatrix} 1-3i-\lambda & -1 \\ -1 & 1-i-\lambda \end{vmatrix} = \lambda^2 + (-2+4i)\lambda - 3 - 4i$
The quadratic formula gives that the only eigenvalue is $\lambda_1 = 1 - 2i$ with algebraic multiplicity 2. We have
$A - \lambda_1 I = \begin{bmatrix} -i & -1 \\ -1 & i \end{bmatrix} \sim \begin{bmatrix} 1 & -i \\ 0 & 0 \end{bmatrix}$
Hence, the geometric multiplicity of λ1 is 1. Thus, A is not diagonalizable.
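The defect can be confirmed numerically: the eigenvalue repeats, but $\dim\operatorname{Null}(A - \lambda_1 I) = 2 - \operatorname{rank}(A - \lambda_1 I)$ is only 1. A NumPy sketch with the E20 matrix as read above:

```python
import numpy as np

# The matrix from E20.
A = np.array([[1 - 3j, -1],
              [-1, 1 - 1j]])

# Both eigenvalues equal 1 - 2i (algebraic multiplicity 2).
eigvals = np.linalg.eigvals(A)
assert np.allclose(eigvals, 1 - 2j)

# Geometric multiplicity = 2 - rank(A - lambda_1 I) = 1 < 2,
# so A is not diagonalizable.
geo_mult = 2 - np.linalg.matrix_rank(A - (1 - 2j) * np.eye(2))
assert geo_mult == 1
```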
E21 We have
$C(\lambda) = \begin{vmatrix} 3+i-\lambda & 1 & i \\ 1 & 3-i-\lambda & 1 \\ -1+i & 1+i & 2+i-\lambda \end{vmatrix} = \begin{vmatrix} 3-\lambda & 1 & i \\ 0 & 3-i-\lambda & 1 \\ -3+\lambda & 1+i & 2+i-\lambda \end{vmatrix}$
$= \begin{vmatrix} 3-\lambda & 1 & i \\ 0 & 3-i-\lambda & 1 \\ 0 & 2+i & 2+2i-\lambda \end{vmatrix} = -(\lambda-3)(\lambda^2 + (-5-i)\lambda + 6 + 3i)$
By the quadratic formula, we get that
$\lambda = \dfrac{5+i \pm \sqrt{(-5-i)^2 - 4(1)(6+3i)}}{2} = \dfrac{5+i \pm \sqrt{-2i}}{2} = \dfrac{5+i \pm (1-i)}{2}$
Thus, the roots are 3 and 2 + i (observe that the second root of $\sqrt{-2i}$ will give the same answers). Hence, the eigenvalues of A are $\lambda_1 = 3$ and $\lambda_2 = 2+i$.
For $\lambda_1 = 3$ we get
$A - 3I = \begin{bmatrix} i & 1 & i \\ 1 & -i & 1 \\ -1+i & 1+i & -1+i \end{bmatrix} \sim \begin{bmatrix} 1 & -i & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}$
Hence, a basis for $E_{\lambda_1}$ is $\left\{ \begin{bmatrix} i \\ 1 \\ 0 \end{bmatrix}, \begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix} \right\}$.
For $\lambda_2 = 2+i$ we get
$A - \lambda_2 I = \begin{bmatrix} 1 & 1 & i \\ 1 & 1-2i & 1 \\ -1+i & 1+i & 0 \end{bmatrix} \sim \begin{bmatrix} 1 & 0 & -\frac{1}{2}+\frac{1}{2}i \\ 0 & 1 & \frac{1}{2}+\frac{1}{2}i \\ 0 & 0 & 0 \end{bmatrix}$
Hence, a basis for $E_{\lambda_2}$ is $\left\{ \begin{bmatrix} 1-i \\ -1-i \\ 2 \end{bmatrix} \right\}$.
Therefore, A is diagonalized by $P = \begin{bmatrix} i & -1 & 1-i \\ 1 & 0 & -1-i \\ 0 & 1 & 2 \end{bmatrix}$ to $D = \begin{bmatrix} 3 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 2+i \end{bmatrix}$.
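A diagonalization is verified by checking $AP = PD$ with $P$ invertible. A NumPy sketch, taking A, P, and D as read above:

```python
import numpy as np

# The matrix from E21 with the diagonalization found above.
A = np.array([[3 + 1j, 1, 1j],
              [1, 3 - 1j, 1],
              [-1 + 1j, 1 + 1j, 2 + 1j]])
P = np.array([[1j, -1, 1 - 1j],
              [1, 0, -1 - 1j],
              [0, 1, 2]])
D = np.diag([3, 3, 2 + 1j])

# Columns of P are eigenvectors for eigenvalues 3, 3, 2 + i.
assert np.allclose(A @ P, P @ D)
# P is invertible, so A = P D P^{-1}.
assert abs(np.linalg.det(P)) > 1e-12
```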
E22 (a) A is Hermitian if A∗ = A. Thus, we must have 3 + ki = 3 − i and 3 − ki = 3 + i, which is only true
when k = −1. Thus, A is Hermitian if and only if k = −1.
(b) If k = −1, then $A = \begin{bmatrix} 0 & 3-i \\ 3+i & 3 \end{bmatrix}$, and
$\det(A - \lambda I) = \begin{vmatrix} -\lambda & 3-i \\ 3+i & 3-\lambda \end{vmatrix} = (\lambda+2)(\lambda-5)$
Thus the eigenvalues are λ1 = −2 and λ2 = 5.
For $\lambda_1 = -2$,
$A - \lambda_1 I = \begin{bmatrix} 2 & 3-i \\ 3+i & 5 \end{bmatrix} \sim \begin{bmatrix} 2 & 3-i \\ 0 & 0 \end{bmatrix}$
Thus, a basis for the eigenspace of $\lambda_1$ is $\left\{ \begin{bmatrix} 3-i \\ -2 \end{bmatrix} \right\}$.
For $\lambda_2 = 5$,
$A - \lambda_2 I = \begin{bmatrix} -5 & 3-i \\ 3+i & -2 \end{bmatrix} \sim \begin{bmatrix} -5 & 3-i \\ 0 & 0 \end{bmatrix}$
Thus, a basis for the eigenspace of $\lambda_2$ is $\left\{ \begin{bmatrix} 3-i \\ 5 \end{bmatrix} \right\}$.
Hence, $U = \begin{bmatrix} (3-i)/\sqrt{14} & (3-i)/\sqrt{35} \\ -2/\sqrt{14} & 5/\sqrt{35} \end{bmatrix}$ unitarily diagonalizes A to $D = \begin{bmatrix} -2 & 0 \\ 0 & 5 \end{bmatrix}$.
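The claim that U unitarily diagonalizes A amounts to $U^*U = I$ and $U^*AU = D$. A NumPy sketch, taking A (with k = −1), U, and D as in E22(b) above:

```python
import numpy as np

# The Hermitian matrix from E22(b) with k = -1.
A = np.array([[0, 3 - 1j],
              [3 + 1j, 3]])

# Columns of U are the normalized eigenvectors found above.
U = np.column_stack([
    np.array([3 - 1j, -2]) / np.sqrt(14),
    np.array([3 - 1j, 5]) / np.sqrt(35),
])

# U is unitary ...
assert np.allclose(U.conj().T @ U, np.eye(2))
# ... and U* A U is the diagonal matrix of eigenvalues.
assert np.allclose(U.conj().T @ A @ U, np.diag([-2, 5]))
```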