Elementary Linear Algebra

Instructor’s Manual with Sample Tests for
Elementary Linear Algebra
Fourth Edition
Stephen Andrilli and David Hecker
Dedication
To all the instructors who have used the various
editions of our book over the years
Table of Contents
Preface ........................................................................iii
Answers to Exercises………………………………….1
Chapter 1 Answer Key ……………………………….1
Chapter 2 Answer Key ……………………………...21
Chapter 3 Answer Key ……………………………...37
Chapter 4 Answer Key ……………………………...58
Chapter 5 Answer Key ……………………………...94
Chapter 6 Answer Key …………………………….127
Chapter 7 Answer Key …………………………….139
Chapter 8 Answer Key …………………………….156
Chapter 9 Answer Key …………………………….186
Appendix B Answer Key ………………………….203
Appendix C Answer Key ………………………….204
Chapter Tests ……..……………………………….206
Chapter 1 – Chapter Tests …………………………207
Chapter 2 – Chapter Tests …………………………211
Chapter 3 – Chapter Tests …………………………217
Chapter 4 – Chapter Tests …………………………221
Chapter 5 – Chapter Tests …………………………224
Chapter 6 – Chapter Tests …………………………230
Chapter 7 – Chapter Tests …………………………233
Answers to Chapter Tests ………………………….239
Chapter 1 – Answers for Chapter Tests …………...239
Chapter 2 – Answers for Chapter Tests …………...242
Chapter 3 – Answers for Chapter Tests …………...244
Chapter 4 – Answers for Chapter Tests …………...247
Chapter 5 – Answers for Chapter Tests …………...251
Chapter 6 – Answers for Chapter Tests …………...257
Chapter 7 – Answers for Chapter Tests …………...261
Preface
This Instructor’s Manual with Sample Tests is designed
to accompany Elementary Linear Algebra, 4th edition, by
Stephen Andrilli and David Hecker.
This manual contains answers for all the computational
exercises in the textbook and detailed solutions for
virtually all of the problems that ask students for proofs.
The exceptions are typically those exercises that ask
students to verify a particular computation, or that ask for
a proof for which detailed hints have already been
supplied in the textbook. A few proofs that are extremely
trivial in nature have also been omitted.
This manual also contains sample Chapter Tests for the
material in Chapters 1 through 7, as well as answer keys
for these tests.
Additional information regarding the textbook, this
manual, the Student Solutions Manual, and linear algebra
in general can be found at the web site for the textbook,
where you found this manual.
Thank you for using our textbook.
Stephen Andrilli
David Hecker
August 2009
Answers to Exercises
Chapter 1
Section 1.1
(1) (a) [9, 4], distance = √97
(b) [−5, 1, 2], distance = √30
(c) [−1, 1, 2, 3, 4], distance = √31
(2) (a) (3; 4; 2) (see Figure 1)
(c) (1; 2; 0) (see Figure 3)
(b) (0; 5; 3) (see Figure 2)
(3) (a) (7; 13)
16
3 ;
(4) (a)
(5) (a)
(b)
h
h
(d) (3; 0; 0) (see Figure 4)
(b) ( 6; 3; 2)
13
3 ;8
p3 ;
70
(b)
p5 ; p6
70
70
p4 ; p1 ; 0;
21
21
(c) ( 1; 3; 1; 4; 6)
26
3 ; 0;
6; 6
i
; shorter, since length of original vector is > 1
i
p2
; shorter, since length of original vector is > 1
21
(c) [0:6; 0:8]; neither, since given vector is a unit vector
h
i
(d) p111 ; p211 ; p111 ; p111 ; p211 ; longer, since length of original vector
=
p
11
5
<1
(6) (a) Parallel
(c) Not parallel
(b) Parallel
(d) Not parallel
(7) (a) [ 6; 12; 15]
(b) [2; 0; 6]
(8) (a) x + y = [1; 1], x
(b) x + y = [3; 5], x
(c) [ 3; 4; 8]
(e) [6; 20; 13]
(d) [ 5; 1; 1]
(f) [ 23; 12; 11]
y = [ 3; 9], y
y = [17; 1], y
x = [3; 9] (see Figure 5)
x = [ 17; 1] (see Figure 6)
(c) x+y = [1; 8; 5], x y = [3; 2; 1], y x = [ 3; 2; 1] (see Figure 7)
(d) x+y = [ 2; 4; 4], x y = [4; 0; 6], y x = [ 4; 0; 6] (see Figure 8)
(9) With A = (7, 3, 6), B = (11, 5, 3), and C = (10, 7, 8), the length of side AB = length of side AC = √29. The triangle is isosceles, but not equilateral, since the length of side BC is √30.
(10) (a) [10, 10] (b) [−5√3, 15] (c) [0, 0] = 0
(11) See Figures 1.7 and 1.8 in Section 1.1 of the textbook.
(12) See Figure 9. Both represent the same diagonal vector by the associative
law of addition for vectors.
Figure 9: x + (y + z)
(13) [0.5 − 0.6√2, 0.4√2] ≈ [−0.3485, 0.5657]
(14) (a) If x = [a₁, a₂] is a unit vector, then cos θ₁ = a₁/‖x‖ = a₁/1 = a₁. Similarly, cos θ₂ = a₂.
(b) Analogous to part (a).
(15) Net velocity = [−2√2, −3 + 2√2], resultant speed ≈ 2.83 km/hr
(16) Net velocity = [(3√2 − 8)/2, 3√2/2]; speed ≈ 2.83 km/hr
(17) [ 8
p
2;
p
2]
(18) Acceleration =
(19) Acceleration =
(20)
180
p
[
14
2; 3; 1]
1 12
20 [ 13 ;
344 392
65 ; 65 ]
[0:0462; 0:2646; 0:3015]
13
4
2 ; 0; 3
[ 96:22; 144:32; 48:11]
p
p ; mg
p ]; b = [ mg
p ; mgp 3 ]
(21) a = [ 1+mg
3 1+ 3
1+ 3 1+ 3
(22) Let a = [a₁, …, aₙ]. Then ‖a‖² = a₁² + ⋯ + aₙ² is a sum of squares, which must be nonnegative. The square root of a nonnegative real number is a nonnegative real number. The sum can only be zero if every aᵢ = 0.
(23) Let x = [x₁, x₂, …, xₙ]. Then ‖cx‖ = √((cx₁)² + ⋯ + (cxₙ)²) = √(c²(x₁² + ⋯ + xₙ²)) = |c| √(x₁² + ⋯ + xₙ²) = |c| ‖x‖.
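A quick numerical spot-check of this identity; a minimal NumPy sketch (the vector and scalar below are arbitrary illustrative choices, not data from the exercises):

import numpy as np

# Check that ||c x|| equals |c| ||x|| for an arbitrary vector and scalar.
x = np.array([3.0, -1.0, 2.0, 5.0])      # arbitrary vector
c = -4.0                                  # arbitrary scalar
print(np.linalg.norm(c * x))              # left-hand side
print(abs(c) * np.linalg.norm(x))         # right-hand side: same value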
(24) In each part, suppose that x = [x1 ; : : : ; xn ], y = [y1 ; : : : ; yn ], and
z = [z1 ; : : : ; zn ].
(a) x + (y + z) = [x1 ; : : : ; xn ] + [(y1 + z1 ); : : : ; (yn + zn )] =
[(x1 + (y1 + z1 )); : : : ; (xn + (yn + zn ))] =
[((x1 + y1 ) + z1 ); : : : ; ((xn + yn ) + zn )] =
[(x1 + y1 ); : : : ; (xn + yn )] + [z1 ; : : : ; zn ] = (x + y) + z
(b) x + (−x) = [x₁, …, xₙ] + [−x₁, …, −xₙ] = [(x₁ + (−x₁)), …, (xₙ + (−xₙ))] = [0, …, 0]. Also, (−x) + x = x + (−x) (by part (1) of Theorem 1.3) = 0, by the above.
(c) c(x+y) = c([(x1 +y1 ); : : : ; (xn +yn )]) = [c(x1 +y1 ); : : : ; c(xn +yn )] =
[(cx1 +cy1 ); : : : ; (cxn +cyn )] = [cx1 ; : : : ; cxn ]+[cy1 ; : : : ; cyn ] = cx+cy
(d) (cd)x = [((cd)x1 ); : : : ; ((cd)xn )] = [(c(dx1 )); : : : ; (c(dxn ))] =
c[(dx1 ); : : : ; (dxn )] = c(d[x1 ; : : : ; xn ]) = c(dx)
(25) If c = 0, we are done. Otherwise, (1/c)(cx) = (1/c)0 ⟹ ((1/c)c)x = 0 (by part (7) of Theorem 1.3) ⟹ x = 0. Thus either c = 0 or x = 0.
(26) Let x = [x₁, x₂, …, xₙ]. Then [0, 0, …, 0] = c₁x − c₂x = [(c₁ − c₂)x₁, …, (c₁ − c₂)xₙ]. Since c₁ − c₂ ≠ 0, we have x₁ = x₂ = ⋯ = xₙ = 0.
(27) (a) F
(b) T
(c) T
(d) F
(e) T
(f) F
(g) F
Section 1.2
(1) (a) arccos(−27/(5√37)), or about 152.6°, or 2.66 radians
(b) arccos(13/(√13 √66)), or about 63.7°, or 1.11 radians
(c) arccos(0), which is 90°, or π/2 radians
(d) arccos(−1), which is 180°, or π radians (since x = −2y)
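The arccos values above come from cos θ = (x·y)/(‖x‖ ‖y‖). A minimal Python sketch of that computation (the vectors here are illustrative placeholders, not the exercise data):

import numpy as np

def angle_between(x, y):
    # cos(theta) = x . y / (||x|| ||y||), clipped to guard against round-off.
    c = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
    return np.arccos(np.clip(c, -1.0, 1.0))

x = np.array([1.0, 2.0, -1.0])   # illustrative vectors
y = np.array([3.0, 0.0, 4.0])
theta = angle_between(x, y)
print(theta, np.degrees(theta))  # angle in radians and degrees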
(2) The vector from A1 to A2 is [2; 7; 3], and the vector from A1 to A3 is
[5; 4; 6]. These vectors are orthogonal.
(3) (a) [a, b]·[−b, a] = a(−b) + ba = 0. Similarly, [a, b]·[b, −a] = 0.
(b) A vector in the direction of the line ax + by + c = 0 is [b, −a], while a vector in the direction of bx − ay + d = 0 is [a, b].
(c)
(4) (a) 12 joules
(b)
p
1040 5
,
9
124 joules
or 258:4 joules
(5) Note that y·z is a scalar, so x·(y·z) is not defined.
(6) In all parts, let x = [x₁, x₂, …, xₙ], y = [y₁, y₂, …, yₙ], and z = [z₁, z₂, …, zₙ].
Part (1): x·y = [x₁, x₂, …, xₙ]·[y₁, y₂, …, yₙ] = x₁y₁ + ⋯ + xₙyₙ = y₁x₁ + ⋯ + yₙxₙ = [y₁, y₂, …, yₙ]·[x₁, x₂, …, xₙ] = y·x
Part (2): x·x = [x₁, x₂, …, xₙ]·[x₁, x₂, …, xₙ] = x₁x₁ + ⋯ + xₙxₙ = x₁² + ⋯ + xₙ². Now x₁² + ⋯ + xₙ² is a sum of squares, each of which must be nonnegative. Hence, the sum is also nonnegative, and so its square root is defined. Thus, 0 ≤ x·x = x₁² + ⋯ + xₙ² = (√(x₁² + ⋯ + xₙ²))² = ‖x‖².
Part (3): Suppose x·x = 0. From part (2), 0 = x·x = x₁² + ⋯ + xₙ² ≥ xᵢ² for each i, since the sum of the remaining squares (without xᵢ²) is nonnegative. Hence, 0 ≥ xᵢ² for each i. But xᵢ² ≥ 0, because it is a square. Hence each xᵢ = 0. Therefore, x = 0.
Next, suppose x = 0. Then x·x = [0, …, 0]·[0, …, 0] = (0)(0) + ⋯ + (0)(0) = 0.
Part (4): c(x·y) = c([x₁, x₂, …, xₙ]·[y₁, y₂, …, yₙ]) = c(x₁y₁ + ⋯ + xₙyₙ) = cx₁y₁ + ⋯ + cxₙyₙ = [cx₁, cx₂, …, cxₙ]·[y₁, y₂, …, yₙ] = (cx)·y.
Next, c(x·y) = c(y·x) (by part (1)) = (cy)·x (by the above) = x·(cy), by part (1).
Part (6): (x + y)·z = ([x₁, x₂, …, xₙ] + [y₁, y₂, …, yₙ])·[z₁, z₂, …, zₙ] = [x₁ + y₁, x₂ + y₂, …, xₙ + yₙ]·[z₁, z₂, …, zₙ] = (x₁ + y₁)z₁ + (x₂ + y₂)z₂ + ⋯ + (xₙ + yₙ)zₙ = (x₁z₁ + x₂z₂ + ⋯ + xₙzₙ) + (y₁z₁ + y₂z₂ + ⋯ + yₙzₙ).
Also, (x·z) + (y·z) = ([x₁, x₂, …, xₙ]·[z₁, z₂, …, zₙ]) + ([y₁, y₂, …, yₙ]·[z₁, z₂, …, zₙ]) = (x₁z₁ + x₂z₂ + ⋯ + xₙzₙ) + (y₁z₁ + y₂z₂ + ⋯ + yₙzₙ).
Hence, (x + y)·z = (x·z) + (y·z).
(7) No; consider x = [1; 0], y = [0; 1], and z = [1; 1].
(8) A method similar to the first part of the proof of Theorem 1.6 in the textbook yields:
‖a − b‖² ≥ 0 ⟹ (a·a) − (b·a) − (a·b) + (b·b) ≥ 0 ⟹ 1 − 2(a·b) + 1 ≥ 0 ⟹ a·b ≤ 1.
(9) Note that (x + y)·(x − y) = (x·x) + (y·x) − (x·y) − (y·y) = ‖x‖² − ‖y‖².
(10) Note that ‖x + y‖² = ‖x‖² + 2(x·y) + ‖y‖², while ‖x − y‖² = ‖x‖² − 2(x·y) + ‖y‖².
(11) (a) This follows immediately from the solution to Exercise 10 above.
(b) Note that ‖x + y + z‖² = ‖(x + y) + z‖² = ‖x + y‖² + 2((x + y)·z) + ‖z‖² = ‖x‖² + 2(x·y) + ‖y‖² + 2(x·z) + 2(y·z) + ‖z‖².
(c) This follows immediately from the solution to Exercise 10 above.
(12) Note that x·(c₁y + c₂z) = c₁(x·y) + c₂(x·z).
(13) cos
1
=
p
a
,
a2 +b2 +c2
(14) (a) Length =
p
cos
=
p
b
,
a2 +b2 +c2
and cos
3
=
p
c
a2 +b2 +c2
3s
(b) angle = arccos(
(15) (a) [ 35 ;
(b) [ 90
17 ;
2
p
3
3 )
3
3
10 ;
2]
54
;
0]
[5:29;
17
54:7 , or 0:955 radians
(c) [ 61 ; 0;
1 1
6; 3]
3:18; 0]
(16) (a) 0 (zero vector). The dropped perpendicular travels along b to the
common initial point of a and b.
(b) The vector b. The terminal point of b lies on the line through a, so
the dropped perpendicular has length zero.
(17) ai, bj, ck
(18) (a) Parallel: [−20/29, 30/29, 40/29], orthogonal: [194/29, −88/29, 163/29]
(b) Parallel: [1/2, −1, −1/2], orthogonal: [11/2, −1, 15/2]
(c) Parallel: [−60/49, 40/49, 120/49], orthogonal: [354/49, −138/49, 223/49]
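Each decomposition above is proj_a b = ((a·b)/‖a‖²)a together with the orthogonal part b − proj_a b. A minimal sketch of that computation (the vectors a and b below are illustrative choices, not necessarily those used in the exercise):

import numpy as np

def decompose(a, b):
    # Split b into a component parallel to a and a component orthogonal to a.
    parallel = (np.dot(a, b) / np.dot(a, a)) * a   # proj_a b
    return parallel, b - parallel

a = np.array([-2.0, 3.0, 4.0])   # illustrative nonzero vector
b = np.array([6.0, -2.0, 7.0])   # illustrative vector to decompose
p, w = decompose(a, b)
print(p, w)
print(np.dot(a, w))              # essentially 0: the parts are orthogonal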
(19) From the lower triangle in the figure, we have (proj_r x) + (proj_r x − x) = reflection of x (see Figure 10).
Figure 10: Reflection of a vector x through a line.
(20) For the case ‖x‖ ≤ ‖y‖: | ‖x‖ − ‖y‖ | = ‖y‖ − ‖x‖ = ‖x + y + (−x)‖ − ‖x‖ ≤ ‖x + y‖ + ‖−x‖ − ‖x‖ (by the Triangle Inequality) = ‖x + y‖ + ‖x‖ − ‖x‖ = ‖x + y‖.
The case ‖x‖ ≥ ‖y‖ is done similarly, with the roles of x and y reversed.
(21) (a) Let c = (x·y)/‖x‖², and w = y − proj_x y. (In this way, cx = proj_x y.)
(b) Suppose cx + w = dx + v. Then (d − c)x = w − v, and (d − c)x·(w − v) = (w − v)·(w − v) = ‖w − v‖². But (d − c)x·(w − v) = (d − c)(x·w) − (d − c)(x·v) = 0, since v and w are orthogonal to x. Hence ‖w − v‖² = 0 ⟹ w − v = 0 ⟹ w = v. Then, cx = dx ⟹ c = d, from Exercise 26 in Section 1.1, since x is nonzero.
(22) If θ₁ is the angle between x and y, and θ₂ is the angle between proj_x y and proj_y x, then
cos θ₂ = (proj_x y)·(proj_y x) / (‖proj_x y‖ ‖proj_y x‖)
= [((x·y)/‖x‖²)x]·[((y·x)/‖y‖²)y] / [((|x·y|)/‖x‖²)‖x‖ ((|y·x|)/‖y‖²)‖y‖]
= [(x·y)³/(‖x‖² ‖y‖²)] / [(x·y)²/(‖x‖ ‖y‖)]
= (x·y)/(‖x‖ ‖y‖) = cos θ₁. Hence θ₂ = θ₁.
(23) (a) T (b) T (c) F (d) F (e) T (f) F
Section 1.3
(1) (a) We have ‖4x + 7y‖ ≤ ‖4x‖ + ‖7y‖ = 4‖x‖ + 7‖y‖ ≤ 7‖x‖ + 7‖y‖ = 7(‖x‖ + ‖y‖).
(b) Let m = max{|c|, |d|}. Then ‖cx − dy‖ ≤ m(‖x‖ + ‖y‖).
(2) (a) Note that 6j − 5 = 3(2(j − 1)) + 1. Let k = 2(j − 1).
(b) Consider the number 4.
(3) Note that since x ≠ 0 and y ≠ 0, proj_x y = 0 iff (x·y)/‖x‖² = 0 iff x·y = 0 iff y·x = 0 iff (y·x)/‖y‖² = 0 iff proj_y x = 0.
(4) If y = cx (for c > 0), then ‖x + y‖ = ‖x + cx‖ = (1 + c)‖x‖ = ‖x‖ + c‖x‖ = ‖x‖ + ‖cx‖ = ‖x‖ + ‖y‖.
On the other hand, if ‖x + y‖ = ‖x‖ + ‖y‖, then ‖x + y‖² = (‖x‖ + ‖y‖)². Now ‖x + y‖² = (x + y)·(x + y) = ‖x‖² + 2(x·y) + ‖y‖², while (‖x‖ + ‖y‖)² = ‖x‖² + 2‖x‖‖y‖ + ‖y‖². Hence x·y = ‖x‖‖y‖. By Result 4, y = cx, for some c > 0.
(5) (a) Consider x = [1, 0, 0] and y = [1, 1, 0].
(b) If x ≠ y, then x·y ≠ ‖x‖².
(c) Yes
(6) (a) Suppose y ≠ 0. We must show that x is not orthogonal to y. Now ‖x + y‖² = ‖x‖², so ‖x‖² + 2(x·y) + ‖y‖² = ‖x‖². Hence ‖y‖² = −2(x·y). Since y ≠ 0, we have ‖y‖² ≠ 0, and so x·y ≠ 0.
(b) Suppose x is not a unit vector. We must show that x·y ≠ 1. Now proj_x y = x ⟹ ((x·y)/‖x‖²)x = 1x ⟹ (x·y)/‖x‖² = 1 ⟹ x·y = ‖x‖². But then ‖x‖ ≠ 1 ⟹ ‖x‖² ≠ 1 ⟹ x·y ≠ 1.
(7) Use Exercise 11(a) in Section 1.2.
(8) (a) Contrapositive: If x = 0, then x is not a unit vector.
Converse: If x is nonzero, then x is a unit vector.
Inverse: If x is not a unit vector, then x = 0.
(b) (Let x and y be nonzero vectors.)
Contrapositive: If y 6= projx y, then x is not parallel to y.
Converse: If y = projx y, then x k y.
Inverse: If x is not parallel to y, then y 6= projx y.
(c) (Let x, y be nonzero vectors.)
Contrapositive: If projy x 6= 0, then projx y 6= 0.
Converse: If projy x = 0, then projx y = 0.
Inverse: If projx y 6= 0, then projy x 6= 0.
(9) (a) Converse: Let x and y be nonzero vectors in Rn . If kx + yk > kyk,
then x y 0.
(b) Let x = [2; 1] and y = [0; 2].
(10) (a) Converse: Let x, y, and z be vectors in Rn . If y = z, then x y = x z.
The converse is obviously true, but the original statement is false in
general, with counterexample x = [1; 1], y = [1; 1], and z = [ 1; 1].
(b) Converse: Let x and y be vectors in Rn . If kx + yk
kyk, then
x y = 0. The original statement is true, but the converse is false
in general. Proof of the original statement follows from kx + yk2 =
(x + y) (x + y) = kxk2 + 2(x y) + kyk2 = kxk2 + kyk2 kyk2 .
Counterexample to converse: let x = [1; 0], y = [1; 1].
(c) Converse: Let x, y be vectors in Rn , with n > 1. If x = 0 or
y = 0, then x y = 0. The converse is obviously true, but the original
statement is false in general, with counterexample x = [1; 1] and
y = [1; 1].
(11) Suppose x ⊥ y and n is odd. Then x·y = 0. Now x·y = Σᵢ₌₁ⁿ xᵢyᵢ. But each product xᵢyᵢ equals either 1 or −1. If exactly k of these products equal 1, then x·y = k − (n − k) = −n + 2k. Hence −n + 2k = 0, and so n = 2k, contradicting n odd.
(12) Suppose that the vectors [1, a], [1, b], and [z₁, z₂] as constructed in the hint are all mutually orthogonal. Then z₁ + az₂ = 0 = z₁ + bz₂. Hence (a − b)z₂ = 0, so either a = b or z₂ = 0. But if a = b, then [1, a] and [1, b] are not orthogonal. Alternatively, if z₂ = 0, then z₁ + az₂ = 0 ⟹ z₁ = 0, so [z₁, z₂] = [0, 0], a contradiction.
(13) Base Step (n = 1): x₁ = x₁.
Inductive Step: Assume x₁ + x₂ + ⋯ + xₙ₋₁ + xₙ = xₙ + xₙ₋₁ + ⋯ + x₂ + x₁, for some n ≥ 1. Prove: x₁ + x₂ + ⋯ + xₙ + xₙ₊₁ = xₙ₊₁ + xₙ + ⋯ + x₂ + x₁. But x₁ + x₂ + ⋯ + xₙ + xₙ₊₁ = (x₁ + x₂ + ⋯ + xₙ) + xₙ₊₁ = xₙ₊₁ + (x₁ + x₂ + ⋯ + xₙ) = xₙ₊₁ + (xₙ + xₙ₋₁ + ⋯ + x₂ + x₁) (by the assumption) = xₙ₊₁ + xₙ + ⋯ + x₂ + x₁.
(14) Base Step (m = 1): ‖x₁‖ ≤ ‖x₁‖.
Inductive Step: Assume ‖x₁ + ⋯ + xₘ‖ ≤ ‖x₁‖ + ⋯ + ‖xₘ‖, for some m ≥ 1. Prove ‖x₁ + ⋯ + xₘ + xₘ₊₁‖ ≤ ‖x₁‖ + ⋯ + ‖xₘ‖ + ‖xₘ₊₁‖. But, by the Triangle Inequality, ‖(x₁ + ⋯ + xₘ) + xₘ₊₁‖ ≤ ‖x₁ + ⋯ + xₘ‖ + ‖xₘ₊₁‖ ≤ ‖x₁‖ + ⋯ + ‖xₘ‖ + ‖xₘ₊₁‖.
(15) Base Step (k = 1): ‖x₁‖² = ‖x₁‖².
Inductive Step: Assume ‖x₁ + ⋯ + xₖ‖² = ‖x₁‖² + ⋯ + ‖xₖ‖². Prove ‖x₁ + ⋯ + xₖ + xₖ₊₁‖² = ‖x₁‖² + ⋯ + ‖xₖ‖² + ‖xₖ₊₁‖². Use the fact that ‖(x₁ + ⋯ + xₖ) + xₖ₊₁‖² = ‖x₁ + ⋯ + xₖ‖² + 2((x₁ + ⋯ + xₖ)·xₖ₊₁) + ‖xₖ₊₁‖² = ‖x₁ + ⋯ + xₖ‖² + ‖xₖ₊₁‖², since xₖ₊₁ is orthogonal to all of x₁, …, xₖ.
(16) Base Step (k = 1): We must show (a₁x₁)·y ≤ |a₁| ‖y‖. But (a₁x₁)·y ≤ |(a₁x₁)·y| ≤ ‖a₁x₁‖ ‖y‖ (by the Cauchy-Schwarz Inequality) = |a₁| ‖x₁‖ ‖y‖ = |a₁| ‖y‖, since x₁ is a unit vector.
Inductive Step: Assume (a₁x₁ + ⋯ + aₖxₖ)·y ≤ (|a₁| + ⋯ + |aₖ|)‖y‖, for some k ≥ 1. Prove (a₁x₁ + ⋯ + aₖxₖ + aₖ₊₁xₖ₊₁)·y ≤ (|a₁| + ⋯ + |aₖ| + |aₖ₊₁|)‖y‖. Use the fact that ((a₁x₁ + ⋯ + aₖxₖ) + aₖ₊₁xₖ₊₁)·y = (a₁x₁ + ⋯ + aₖxₖ)·y + (aₖ₊₁xₖ₊₁)·y and the Base Step.
(17) Base Step (n = 1): We must show that ‖[x₁]‖ ≤ |x₁|. This is obvious since ‖[x₁]‖ = √(x₁²) = |x₁|.
Inductive Step: Assume √(x₁² + ⋯ + xₖ²) ≤ |x₁| + ⋯ + |xₖ|, and prove √(x₁² + ⋯ + xₖ₊₁²) ≤ |x₁| + ⋯ + |xₖ₊₁|. Let y = [x₁, …, xₖ, xₖ₊₁], z = [x₁, …, xₖ, 0], and w = [0, …, 0, xₖ₊₁]. Note that y = z + w. So, by the Triangle Inequality, ‖y‖ ≤ ‖z‖ + ‖w‖. Thus, √(x₁² + ⋯ + xₖ₊₁²) ≤ √(x₁² + ⋯ + xₖ²) + √(xₖ₊₁²) ≤ |x₁| + ⋯ + |xₖ| + |xₖ₊₁| (by the inductive hypothesis) = |x₁| + ⋯ + |xₖ₊₁|.
(18) Step 1 cannot be reversed, because y could equal (x2 + 2).
Step 2 cannot be reversed, because y 2 could equal x4 + 4x2 + c.
Step 4 cannot be reversed, because in general y does not have to equal
x2 + 2.
dy
could equal 2x + c.
Step 6 cannot be reversed, since dx
All other steps remain true when reversed.
(19) (a) For every unit vector x in R3 , x [1; 2; 3] 6= 0.
(b) x 6= 0 and x y
0, for some vectors x and y in Rn .
(c) x = 0 or kx + yk =
6 kyk, for all vectors x and y in Rn .
(d) There is some vector x 2 Rn for which x x
0.
3
(e) There is an x 2 R such that for every nonzero y 2 R3 , x y 6= 0.
(f) For every x 2 R4 , there is some y 2 R4 such that x y 6= 0.
(20) (a) Contrapositive: If x 6= 0 and kx yk kyk, then x y 6= 0.
Converse: If x = 0 or kx yk > kyk, then x y = 0.
Inverse: If x y 6= 0, then x 6= 0 and kx yk kyk.
(b) Contrapositive: If kx yk kyk, then either x = 0 or x y 6= 0.
Converse: If kx yk > kyk, then x 6= 0 and x y = 0.
Inverse: If x = 0 or x y 6= 0, then kx yk kyk.
(21) Suppose x 6= 0. We must prove x y 6= 0 for some vector y 2 Rn . Let
y = x.
(22) The contrapositive is: If u and v are (nonzero) vectors that are not in
opposite directions, then there is a vector x such that u x > 0 and
v x > 0. If u and v are in the same direction, we can let x = u. So,
we can assume that u and v are not parallel. Consider a vector x that
bisects the angle between u and v. Geometrically, it is clear that such a
vector makes an acute angle with both u and v, which …nishes the proof.
v
u
+ kvk
. Then,
(Algebraically, a formula for such a vector would be x = kuk
u
kuk
u x=u
+
v
kvk
=
uu
kuk
+
1
uv
kvk
the Cauchy-Schwarz inequality) = kuk
similar proof shows that v x > 0.)
uv
kvk
kuk
kukkvk
kvk
> kuk
(by
kuk = 0. Hence u x > 0. A
(23) Let x = [1; 1], y = [1; 1].
(24) Let y = [1, 2, 2]. Then since x·y ≥ 0, Result 2 implies that ‖x + y‖ > ‖y‖ = 3.
(25) (a) F (b) T (c) T (d) F (e) F (f) F (g) F (h) T (i) F
Section 1.4
2
2
(1) (a) 4 2
9
1
7
0
3
3
5 5
1
2
(f) 4
7
1
9
0
3
9
3
8
1 5
5
2
(b) Impossible
23
14
2
3
16
8 12
5
8
(g) 4
0 20
4 5
(c) 4
9
18
24
4
8
2
3 (h) Impossible
32 8
6
2
8 2 14 5
(d) 4
1
1 12
0 6
8
5
8
(i) 4 1
(e) Impossible
8
3
4
2
(j) 4
1
1
8
1
5
3
3
12
8
9 (k)
10
8
5
8
1 (l) Impossible
(m) Impossible
2
13
6
5 (n) 4 3
3
3
5
3
3
12
8 5
4
6
26
3
2
5 5
1
1 We used > rather than
in the Cauchy-Schwarz inequality because equality occurs in
the Cauchy-Schwarz inequality only when the vectors are parallel (see Result 4).
(2) Square: B; C; E; F; G; H; J; K; L; M; N; P; Q
Diagonal: B; G; N
Upper triangular: B; G; L; N
Lower triangular: B; G; M; N; Q
Symmetric: B; F; G; J; N; P
Skew-symmetric: H (but not E; C; K)
1 0 6
Transposes: AT =
, BT = B, CT =
4 1 0
on
2
2
1
1
5 3
3 3
0
3
2
2
2
2
7 6
7
6
1
1
6
7
6
2 1 5+4
0 4 7
(3) (a) 4
2
2
5
2
6
(b) 6
4
2
5
2
3
2
1
3
2
3
0
1
2
6 0
(c) 6
4 0
0
2
6
(d) 6
4
1
0
5
0
0
3
7
2
1
0
2
3
2
7
6
6
1 7
5 + 4
0
3
3
2
4
0
3
2
3
2
0
4
2
0
0
0 0
6 3
0 0 7
7 + 6
4 4
2 0 5
1
0 5
3
2
7
2
1
6
4
3
6 7
7 + 6
4
3
5
8 5
6
8
5
(4) If AT = BT , then AT
Theorem 1.12.
(5) (a) If A is an m
m = n.
T
= BT
4
0
4
7
3
T
1
1
, and so
0
3
7
0 7
5
0
3
0
1
2
1
1
4
1
0
0
4 7
0 2
2 0
5 6
3
1
2 7
7
0 5
0
3
3
5 7
7
6 5
0
, implying A = B, by part (1) of
n matrix, then AT is n
m. If A = AT , then
(b) If A is a diagonal matrix and if i 6= j, then aij = 0 = aji .
(c) Follows directly from part (b), since In is diagonal.
(d) The matrix must be a square zero matrix; that is, On , for some n.
(6) (a) If i 6= j, then aij + bij = 0 + 0 = 0.
(b) Use the fact that aij + bij = aji + bji .
(7) Use induction on n. Base Step (n = 1): Obvious. Inductive Step: Assume A₁, …, Aₖ₊₁ and B = A₁ + ⋯ + Aₖ are upper triangular. Prove that D = A₁ + ⋯ + Aₖ₊₁ is upper triangular. Let C = Aₖ₊₁. Then D = B + C. Hence, dᵢⱼ = bᵢⱼ + cᵢⱼ = 0 + 0 = 0 if i > j (by the inductive hypothesis). Hence D is upper triangular.
(8) (a) Let B = Aᵀ. Then bᵢⱼ = aⱼᵢ = aᵢⱼ = bⱼᵢ. Let D = cA. Then dᵢⱼ = caᵢⱼ = caⱼᵢ = dⱼᵢ.
(b) Let B = Aᵀ. Then bᵢⱼ = aⱼᵢ = −aᵢⱼ = −bⱼᵢ. Let D = cA. Then dᵢⱼ = caᵢⱼ = c(−aⱼᵢ) = −caⱼᵢ = −dⱼᵢ.
(9) Iₙ is defined as the matrix whose (i, j) entry is δᵢⱼ.
(10) Part (4): Let B = A + ( A) (= ( A) + A, by part (1)). Then bij =
aij + ( aij ) = 0.
Part (5): Let D = c(A+B) and let E = cA+cB. Then, dij = c(aij +bij ) =
caij + cbij = eij .
Part (7): Let B = (cd)A and let E = c(dA). Then bij = (cd)aij =
c(daij ) = eij .
(11) Part (1): Let B = AT and C = (AT )T . Then cij = bji = aij .
Part (3): Let B = c(AT ), D = cA and F = (cA)T . Then fij = dji =
caji = bij .
(12) Assume c ≠ 0. We must show A = Oₘₙ. But for all i, j with 1 ≤ i ≤ m and 1 ≤ j ≤ n, we have caᵢⱼ = 0 with c ≠ 0. Hence, all aᵢⱼ = 0.
(13) (a) ((1/2)(A + Aᵀ))ᵀ = (1/2)(A + Aᵀ)ᵀ = (1/2)(Aᵀ + (Aᵀ)ᵀ) = (1/2)(Aᵀ + A) = (1/2)(A + Aᵀ)
(b) ((1/2)(A − Aᵀ))ᵀ = (1/2)(A − Aᵀ)ᵀ = (1/2)(Aᵀ − (Aᵀ)ᵀ) = (1/2)(Aᵀ − A) = −(1/2)(A − Aᵀ)
(c) (1/2)(A + Aᵀ) + (1/2)(A − Aᵀ) = (1/2)(2A) = A
(d) S₁ − V₁ = S₂ − V₂
(e) Follows immediately from part (d).
(f) Follows immediately from parts (d) and (e).
(g) Parts (a) through (c) show A can be decomposed as the sum of a symmetric matrix and a skew-symmetric matrix, while parts (d) through (f) show that the decomposition for A is unique.
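A short numerical illustration of parts (a) through (c); a minimal sketch with an arbitrary matrix (not one of the textbook's examples):

import numpy as np

A = np.array([[1.0, 4.0, -2.0],
              [0.0, 3.0,  5.0],
              [7.0, -1.0, 2.0]])      # arbitrary square matrix

S = 0.5 * (A + A.T)        # symmetric part
V = 0.5 * (A - A.T)        # skew-symmetric part

print(np.allclose(S, S.T))     # True: S is symmetric
print(np.allclose(V, -V.T))    # True: V is skew-symmetric
print(np.allclose(S + V, A))   # True: A = S + V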
(14) (a) Trace (B) = 1, trace (C) = 0, trace (E) = 6, trace (F) = 2, trace
(G) = 18, trace (H) = 0, trace (J) = 1, trace (K) = 4, trace (L) =
3, trace (M) = 0, trace (N) = 3, trace (P) = 0, trace (Q) = 1
(b) Part (i): Let D = A + B. Then trace(D) = Σᵢ₌₁ⁿ dᵢᵢ = Σᵢ₌₁ⁿ aᵢᵢ + Σᵢ₌₁ⁿ bᵢᵢ = trace(A) + trace(B).
Part (ii): Let B = cA. Then trace(B) = Σᵢ₌₁ⁿ bᵢᵢ = Σᵢ₌₁ⁿ caᵢᵢ = c Σᵢ₌₁ⁿ aᵢᵢ = c(trace(A)).
Part (iii): Let B = Aᵀ. Then trace(B) = Σᵢ₌₁ⁿ bᵢᵢ = Σᵢ₌₁ⁿ aᵢᵢ (since bᵢᵢ = aᵢᵢ for all i) = trace(A).
(c) Not necessarily: consider the matrices L and N in Exercise 2. (Note: If n = 1, the statement is true.)
(15) (a) F (b) T (c) F (d) T (e) T
Section 1.5
(1) (a) Impossible
2
3
34
24
49 5
(b) 4 42
8
22
(e) [ 38]
2
3
24 48
16 (k)
3
6
2 5
(f) 4
12 24
8
(g) Impossible
(c) Impossible
(l)
2
3
(h) [56; 8]
73
34
(i) Impossible
25 5
(d) 4 77
(j)
Impossible
(m)
19
14
2
3
146
5
603
(o) Impossible
560 5
(n) 4 154 27
38
9
193
(2) (a) No
(b) Yes
(c) No
(d) Yes
2
22
4 9
2
2
5
6 4
6
4 1
4
9
73
6
3
1
1
1
2
3
0
3
[226981]
(e) No
(c) [4]
(3) (a) [15; 13; 8]
3
2
11
(b) 4 6 5
3
(d) [2; 8; 2; 12]
(4) (a) Valid, by Theorem 1.14,
part (1)
(f) Invalid
(b) Invalid
(g) Valid, by Theorem 1.14,
part (3)
(c) Valid, by Theorem 1.14,
part (1)
(h) Valid, by Theorem 1.14,
part (2)
(d) Valid, by Theorem 1.14,
part (2)
(i) Invalid
(j) Valid, by Theorem 1.14,
part (3), and Theorem 1.16
(e) Valid, by Theorem 1.16
Outlet
Outlet
(5)
Outlet
Outlet
1
2
3
4
3
6
18 5
4
3
5
1 7
7
2 5
1
Salary Fringe Bene…ts
3
$367500
$78000
6 $225000
$48000 7
6
7
4 $765000
$162000 5
$360000
$76500
(6)            Tickets       Food      Souvenirs
    June      $1151300    $3056900    $2194400
    July      $1300700    $3456700    $2482400
    August     $981100    $2615900    $1905100
2
(7)               Field 1   Field 2   Field 3
    Nitrogen       1.00      0.45      0.65
    Phosphate      0.90      0.35      0.75
    Potash         0.95      0.35      0.85     (in tons)
Chip 2 Chip 3 Chip 4
3
1569
1839
2750
1559
1811
2694 7
7
2102
2428
3618 5
1821
2097
3124
2
1
1
0
(9) (a) One example:
0
1
(c) Consider 4 1
2
3
0
1
1 0
4
5
0
1 0
(b) One example:
0
0 1
Rocket
Rocket
(8)
Rocket
Rocket
Chip 1
2
1
2131
2 6
6 2122
3 4 2842
4
2456
0
0
1
3
1
0 5
0
(10) (a) Third row, fourth column entry of AB
(b) Fourth row, first column entry of AB
(c) Third row, second column entry of BA
(d) Second row, fifth column entry of BA
(11) (a) Σₖ₌₁ⁿ a₃ₖbₖ₂ (b) Σₖ₌₁ᵐ b₄ₖaₖ₁
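Each such sum is just the dot product of one row of the first factor with one column of the second. A minimal sketch (matrices chosen arbitrarily for illustration):

import numpy as np

A = np.arange(12.0).reshape(3, 4)   # arbitrary 3 x 4 matrix
B = np.arange(20.0).reshape(4, 5)   # arbitrary 4 x 5 matrix

i, j = 2, 1                         # third row, second column
entry = sum(A[i, k] * B[k, j] for k in range(A.shape[1]))

print(entry)            # entry computed from the defining sum
print((A @ B)[i, j])    # same value taken from the full product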
3
2
56
(12) (a) [ 27; 43; 56]
(b) 4 57 5
18
2
3
43
(13) (a) [ 61; 15; 20; 9]
(b) 4 41 5
12
(14) (a) Let B = Ai. Then B is m × 1 and bₖ₁ = Σⱼ aₖⱼiⱼ₁ = (aₖ₁)(1) + (aₖ₂)(0) + (aₖ₃)(0) = aₖ₁.
(b) Aeᵢ = ith column of A.
(c) By part (b), each column of A is easily seen to be the zero vector by letting x equal each of e₁, …, eₙ in turn.
(15) Proof of Part (2): The (i; j) entry of A(B + C)
=
=
=
(ith row of A) (jth column of (B + C))
(ith row of A) (jth column of B + jth column of C)
(ith row of A) (jth column of B)
+ (ith row of A) (jth column of C)
= (i; j) entry of AB + (i; j) entry of AC
= (i; j) entry of (AB + AC).
The proof of Part (3) is similar. For the …rst equation in part (4), the
(i; j) entry of c(AB)
= c ((ith row of A) (jth column of B))
= (c(ith row of A)) (jth column of B)
= (ith row of cA) (jth column of B)
= (i; j) entry of (cA)B.
The proof of the second equation is similar.
(16) Let B = AOₙₚ. Clearly B is an m × p matrix and bᵢⱼ = Σₖ₌₁ⁿ aᵢₖ·0 = 0.
(17) Proof that AIₙ = A: Let B = AIₙ. Then B is clearly an m × n matrix and bⱼₗ = Σₖ₌₁ⁿ aⱼₖiₖₗ = aⱼₗ, since iₖₗ = 0 unless k = l, in which case iₖₖ = 1. The proof that IₘA = A is similar.
(18) (a) We need to show cij = 0 if i 6= j. Now, if i 6= j, both factors
in each term of the formula for cij in the formal de…nition of matrix
multiplication are zero, except possibly for the terms aii bij and aij bjj .
But since the factors bij and aij also equal zero, all terms in the
formula for cij equal zero.
(b) Assume i > j. Consider the term aik bkj in the formula for cij . If
i > k, then aik = 0. If i k, then j < k, so bkj = 0. Hence all terms
in the formula for cij equal zero.
(c) Let L1 and L2 be lower triangular matrices. U1 = LT1 and U2 =
LT2 are then upper triangular, and L1 L2 = UT1 UT2 = (U2 U1 )T (by
Theorem 1.16). But by part (b), U2 U1 is upper triangular. So L1 L2
is lower triangular.
(19) Base Step: Clearly, (cA)1 = c1 A1 .
Inductive Step: Assume (cA)n = cn An , and prove (cA)n+1 = cn+1 An+1 .
Now, (cA)n+1 = (cA)n (cA) = (cn An )(cA) (by the inductive hypothesis)
= cn cAn A (by part (4) of Theorem 1.14) = cn+1 An+1 .
(20) Proof of Part (1): Base Step: As+0 = As = As I = As A0 .
Inductive Step: Assume As+t = As At for some t
0. We must prove
As+(t+1) = As At+1 . But As+(t+1) = A(s+t)+1 = As+t A = (As At )A (by
the inductive hypothesis) = As (At A) = As At+1 .
Proof of Part (2): (We only need prove (As )t = Ast .) Base Step: (As )0 =
I = A0 = As0 .
Inductive Step: Assume (As )t = Ast for some integer t
0. We must
prove (As )t+1 = As(t+1) . But (As )t+1 = (As )t As = Ast As (by the
inductive hypothesis) = Ast+s (by part (1)) = As(t+1) .
(21) (a) If A is an m × n matrix, and B is a p × r matrix, then the fact that AB exists means n = p, and the fact that BA exists means m = r. Then AB is an m × m matrix, while BA is an n × n matrix. If AB = BA, then m = n.
(b) Note that by the Distributive Law, (A + B)² = A² + AB + BA + B².
(22) (AB)C = A(BC) = A(CB) = (AC)B = (CA)B = C(AB).
(23) Use the fact that AT BT = (BA)T , while BT AT = (AB)T .
(24) (AAT )T = (AT )T AT (by Theorem 1.16) = AAT . (Similarly for AT A.)
(25) (a) If A; B are both skew-symmetric, then (AB)T = BT AT = ( B)( A)
= ( 1)( 1)BA = BA. The symmetric case is similar.
(b) Use the fact that (AB)T = BT AT = BA, since A; B are symmetric.
(26) (a) The (i, i) entry of AAᵀ = (ith row of A)·(ith column of Aᵀ) = (ith row of A)·(ith row of A) = sum of the squares of the entries in the ith row of A. Hence, trace(AAᵀ) is the sum of the squares of the entries from all rows of A.
(c) Trace(AB) = Σᵢ₌₁ⁿ (Σₖ₌₁ⁿ aᵢₖbₖᵢ) = Σᵢ₌₁ⁿ Σₖ₌₁ⁿ bₖᵢaᵢₖ = Σₖ₌₁ⁿ Σᵢ₌₁ⁿ bₖᵢaᵢₖ. Reversing the roles of the dummy variables k and i gives Σᵢ₌₁ⁿ Σₖ₌₁ⁿ bᵢₖaₖᵢ, which is equal to trace(BA).
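The index interchange in part (c) is easy to confirm numerically; a minimal sketch with random square matrices (purely illustrative):

import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 6, size=(4, 4)).astype(float)   # arbitrary square matrices
B = rng.integers(-5, 6, size=(4, 4)).astype(float)

print(np.trace(A @ B))   # sum over i and k of a_ik * b_ki
print(np.trace(B @ A))   # the same number: trace(AB) = trace(BA)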
(27) (a) Consider any matrix of the form
(c) (In A)2 = I2n In A
2
3
2
1
1
0
1 5
(d) 4 1
1
1
0
1 0
x 0
.
AIn + A2 = In
A + A = In
A
A.
(e) A2 = (A)A = (AB)A = A(BA) = AB = A.
(28) (a) (ith row of A) (jth column of B) = (i; j)th entry of Omp = 0.
3
2
1
2
1 2
1
1 5.
(b) Consider A =
and B = 4 0
2 4
2
1
0
2
3
2
4
2 5.
(c) Let C = 4 0
2
0
(29) See Exercise 30(c).
(30) Throughout this solution, we use the fact that Aei = (ith column of A).
(See Exercise 14(b).) Similarly, if ei is thought of as a row matrix, ei A
= (ith row of A).
(a) (jth column of A ij ) = A(jth column of
of A). And, if k =
6 j, (kth column of A
ij ) = A0 = 0.
ij )
= Aei = (ith column
= A(kth column of
ij )
(b) (ith row of ij A) = (ith row of ij )A = ej A = (jth row of A).
And, if k 6= i, (kth row of ij A) = (kth row of ij )A = 0A = 0.
(c) Suppose A is an n n matrix that commutes with all other n n
matrices. Suppose i 6= j. Then aij = ith entry of (jth column of
A) = ith entry of (ith column of A ji ) (by reversing the roles of i
and j in part (a)) = (i; i)th entry of A ji = (i; i)th entry of ji A
(since A commutes with ji ) = ith entry of (ith row of ji A) = 0
(since all rows other than the jth row of ji A have all zero entries,
by part (b)). Thus, A is diagonal. Next, aii = ith entry of (ith
column of A) = ith entry of (jth column of A ij ) (by part (a)) =
(i; j)th entry of A ij = (i; j)th entry of ij A (since A commutes
with ij ) = jth entry of (ith row of ij A) = jth entry of (jth row
of A) (by part (b)) = ajj . Thus, all the main diagonal terms of A
are equal, and so A = cIn for some c 2 R.
(31) (a) T
(b) T
(c) T
(d) F
(e) F
(f) F
Chapter 1 Review Exercises
(1) Yes. Vectors corresponding to adjacent sides are orthogonal. Vectors
corresponding to opposite sides are parallel, with one pair having slope
5
3
5 and the other pair having slope
3.
h
i
5
p15
(2) u = p394
; p12
;
[0:2481; 0:5955; 0:7444]; slightly longer.
394
394
p
(3) Net velocity = [4√2 − 5, 4√2] ≈ [0.6569, 5.6569]; speed ≈ 5.6947 mi/hr.
(4) a = [−10, 9, 10] m/sec²
(5) |x·y| = 74
(6) ‖x‖ ‖y‖ ≈ 90.9136
38 19 57
(7) proja b = 114
25 ;
25 ; 25 ; 25 = [4:56; 1:52; 0:76; 2:28];
14
62 56
32
b proja b =
25 ;
25 ; 25 ;
25 = [ 0:56; 2:48; 2:24; 1:28];
a (b proja b) = 0.
(8)
1782 joules
(9) We must prove that (x + y)·(x − y) = 0 ⟹ ‖x‖ = ‖y‖. But (x + y)·(x − y) = 0 ⟹ x·x − x·y + y·x − y·y = 0 ⟹ ‖x‖² − ‖y‖² = 0 ⟹ ‖x‖² = ‖y‖² ⟹ ‖x‖ = ‖y‖.
(10) First, x ≠ 0, or else proj_x y is not defined. Also, y ≠ 0, since that would imply proj_x y = y. Now, assume x ∥ y. Then, there is a scalar c ≠ 0 such that y = cx. Hence, proj_x y = ((x·y)/‖x‖²)x = ((x·(cx))/‖x‖²)x = (c‖x‖²/‖x‖²)x = cx = y, contradicting the assumption that y ≠ proj_x y.
(11) (a) 3A
4CT =
3
11
2
19
BA is not de…ned; AC =
2
CA = 4
2
B3 = 4
30
2
11
97
284
268
Chapter 1 Review
13
0
15
22
; AB =
23 14
5 23
21
30
4
11
;
;
3
11 17
0 18 5; A3 is not de…ned;
5 16
3
128
24
375
92 5
354
93
(b) Third row of BC = [5 8].
3
2
2
11
1
0
4
2
2
7
6 5
6 1
6
7
1 7
(12) S = 6
5; V = 4 2
4 2
11
1
1
2
2
2
5
2
0
2
1
2
3
7
2 7
5
0
(13) (a) (3(A − B)ᵀ)ᵀ = 3(A − B). Also, −(3(A − B)ᵀ) = 3(−1)(Aᵀ − Bᵀ) = 3(−1)((−A) − (−B)) (definition of skew-symmetric) = 3(A − B). Hence, (3(A − B)ᵀ)ᵀ = −(3(A − B)ᵀ), so 3(A − B)ᵀ is skew-symmetric.
(b) Let C = A + B. Now, cᵢⱼ = aᵢⱼ + bᵢⱼ. But for i < j, aᵢⱼ = bᵢⱼ = 0. Hence, for i < j, cᵢⱼ = 0. Thus, C is lower triangular.
(14)                 Price    Shipping Cost
    Company I      $168500       $24200
    Company II     $202500       $29100
    Company III    $155000       $22200
(15) Take the transpose of both sides of AT BT = BT AT to get BA = AB.
Then, (AB)2 = (AB)(AB) = A(BA)B = A(AB)B = A2 B2 .
(16) Negation: For every square matrix A, A2 = A. Counterexample: A =
[ 1].
(17) If A 6= O22 , then some row of A, say the ith row, is nonzero. Apply
Result 5 in Section 1.3 with x = (ith row of A).
(18) Base Step (k = 2): Suppose A and B are upper triangular n × n matrices, and let C = AB. Then aᵢⱼ = bᵢⱼ = 0, for i > j. Hence, for i > j, cᵢⱼ = Σₘ₌₁ⁿ aᵢₘbₘⱼ = Σₘ₌₁ⁱ⁻¹ 0·bₘⱼ + aᵢᵢbᵢⱼ + Σₘ₌ᵢ₊₁ⁿ aᵢₘ·0 = aᵢᵢ(0) = 0. Thus, C is upper triangular.
Inductive Step: Let A₁, …, Aₖ₊₁ be upper triangular matrices. Then, the product C = A₁⋯Aₖ is upper triangular by the Induction Hypothesis, and so the product A₁⋯Aₖ₊₁ = CAₖ₊₁ is upper triangular by the Base Step.
(19) (a) Let A and B be n n matrices having the properties given in the
exercise. Let C = AB. Then we know that aij = 0 for all i < j,
bij = 0 for all i > j, cij = 0 for all i 6= j, and that aii 6= 0 and
bii 6= 0 for all i. We need to prove that aij = 0 for all i > j. We use
induction on j.
Pn
Base Step (jP= 1): Let i > j = 1. P
Then 0 = ci1 = k=1 aik bk1
n
n
= ai1 b11 + k=2 aik bk1 = ai1 b11 + k=2 aik 0 = ai1 b11 . Since
b11 6= 0, this implies that ai1 = 0, completing the Base Step.
Inductive Step: Assume for j = 1; 2; : : : ; m 1 that aij = 0 for
all i > j: That is, assume we have already proved that, for some
m
2, that the …rst m 1 columns of A have all zeros below the
main diagonal. To complete the inductive step, we need to prove
that the mth column of A has all zeros below the main diagonal.
That is, weP
must prove thatPaim = 0 for all i > m. Let
Pni > m. Then,
n
m 1
0 = cim = k=1 aik bkm = k=1 aik bkm +aim bmm + k=m+1 aik bkm
Pm 1
Pn
= k=1 0 bkm + aim bmm + k=m+1 aik 0 = aim bmm . But, since
bmm 6= 0, we must have aim = 0, completing the proof.
(b) Apply part (a) to BT and AT to prove that BT is diagonal. Hence
B is also diagonal.
(20) (a) F (b) T (c) F (d) F (e) F (f) T (g) F (h) F (i) F (j) T (k) T (l) T (m) T (n) F (o) F (p) F (q) F (r) T
Chapter 2
Section 2.1
(1) (a) {(−2, 3, 5)}
(b) {(4, 5, 2)}
(c) { }
(d) {(c + 7, 2c − 3, c, 6) | c ∈ ℝ}; (7, −3, 0, 6); (8, −1, 1, 6); (9, 1, 2, 6)
(e) {(2b − d − 4, b, 2d + 5, d, 2) | b, d ∈ ℝ}; (−4, 0, 5, 0, 2); (−2, 1, 5, 0, 2); (−5, 0, 7, 1, 2)
(f) {(b + 3c − 2, b, c, 8) | b, c ∈ ℝ}; (−2, 0, 0, 8); (−1, 1, 0, 8); (1, 0, 1, 8)
(g) {(6, 1, 3)}
(h) { }
(2) (a) f(3c + 13e + 46; c + e + 13; c; 2e + 5; e) j c; e 2 Rg
(b) f(3b
12d + 50e
13; b; 2d
(c) f( 20c+9d 153f
(d) fg
8e + 3; d; e; 2) j b; d; e 2 Rg
68; 7c 2d+37f +15; c; d; 4f +2; f ) j c; d; f 2 Rg
(3) 51 nickels, 62 dimes, 31 quarters
(4) y = 2x² − x + 3
(5) y = −3x³ + 2x² − 4x + 6
(6) x² + y² − 6x − 8y = 0, or (x − 3)² + (y − 4)² = 25
(7) In each part, R(AB) = (R(A))B, which equals the given matrix.
2
26
6 6
6
(a) 4
0
10
15
4
6
4
3
6
1 7
7
12 5
14
2
26
6 10
6
(b) 4
18
6
15
4
6
4
3
6
14 7
7
15 5
1
(8) (a) To save space, we write hCii for the ith row of a matrix C.
For the Type (I) operation R : hii
chii: Now, hR(AB)ii =
chABii = chAii B (by the hint in the text) = hR(A)ii B = hR(A)Bii .
But, if k 6= i, hR(AB)ik = hABik = hAik B (by the hint in the text)
= hR(A)ik B = hR(A)Bik .
For the Type (II) operation R : hii
chji + hii: Now, hR(AB)ii
= chABij + hABii = chAij B + hAii B (by the hint in the text) =
(chAij + hAii )B = hR(A)ii B = hR(A)Bii . But, if k 6= i, hR(AB)ik
= hABik = hAik B (by the hint in the text) = hR(A)ik B =
hR(A)Bik .
For the Type (III) operation R : hii ! hji: Now, hR(AB)ii =
hABij = hAij B (by the hint in the text) = hR(A)ii B = hR(A)Bii .
Similarly, hR(AB)ij = hABii = hAii B (by the hint in the text)
= hR(A)ij B = hR(A)Bij . And, if k 6= i and k 6= j, hR(AB)ik =
hABik = hAik B (by the hint in the text) = hR(A)ik B = hR(A)Bik .
(b) Use induction on n, the number of row operations used.
Base Step: The case n = 1 is part (1) of Theorem 2.1, and is proven
in part (a) of this exercise.
Inductive Step: Assume that
Rn (
(R2 (R1 (AB)))
) = Rn (
(R2 (R1 (A)))
)B
and prove that
Rn+1 (Rn (
(R2 (R1 (AB)))
)) = Rn+1 (Rn (
(R2 (R1 (A)))
))B:
(R2 (R1 (AB)))
)) = Rn+1 (Rn (
(R2 (R1 (A)))
)B)
(by the inductive hypothesis) = Rn+1 (Rn (
(by part (a)).
(R2 (R1 (A)))
))B
Now,
Rn+1 (Rn (
(9) Multiplying a row by zero changes all of its entries to zero, essentially
erasing all of the information in the row.
(10) Follow the hint in the textbook. First, A(X₁ + c(X₂ − X₁)) = AX₁ + cA(X₂ − X₁) = B + cAX₂ − cAX₁ = B + cB − cB = B. Then we want to show that X₁ + c(X₂ − X₁) = X₁ + d(X₂ − X₁) implies that c = d, to prove that each real number c produces a different solution. But X₁ + c(X₂ − X₁) = X₁ + d(X₂ − X₁) ⟹ c(X₂ − X₁) = d(X₂ − X₁) ⟹ (c − d)(X₂ − X₁) = 0 ⟹ (c − d) = 0 or (X₂ − X₁) = 0. However, X₂ ≠ X₁, and so c = d.
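A small numerical illustration of this one-parameter family of solutions; the system and the two particular solutions below are illustrative choices, not taken from the exercises:

import numpy as np

# An underdetermined system A X = B with infinitely many solutions.
A = np.array([[1.0, 1.0, 1.0],
              [1.0, 2.0, 3.0]])
B = np.array([6.0, 14.0])

X1 = np.array([0.0, 4.0, 2.0])   # one solution of A X = B
X2 = np.array([1.0, 2.0, 3.0])   # a different solution of A X = B

for c in (0.0, 0.5, 2.0, -3.0):
    X = X1 + c * (X2 - X1)               # each c gives another solution
    print(c, np.allclose(A @ X, B))      # True for every c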
(11) (a) T
(b) F
(c) F
(d) F
(e) T
(f) T
Section 2.2
(1) Matrices in (a), (b), (c), (d), and (f) are not in reduced row echelon form.
Matrix in (a) fails condition 2 of the definition.
Matrix in (b) fails condition 4 of the definition.
Matrix in (c) fails condition 1 of the definition.
Matrix in (d) fails conditions 1, 2, and 3 of the definition.
Matrix in (f) fails condition 3 of the definition.
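Reduced row echelon forms such as those asked for in the next exercises can be checked with a computer algebra system. A minimal SymPy sketch (the matrix below is an arbitrary example, not one of the exercise matrices):

from sympy import Matrix

A = Matrix([[1, 2, -1,  3],
            [2, 4,  0,  2],
            [1, 2,  1, -1]])     # arbitrary 3 x 4 matrix

R, pivot_columns = A.rref()      # rref() returns the form and the pivot columns
print(R)
print(pivot_columns)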
2
1
(2) (a) 4 0
0
(b) I4
2
1
6 0
(c) 6
4 0
0
2
1
(3) (a) 4 0
0
2
1
(e) 4 0
0
2
1
6 0
6
(g) 4
0
0
- Answers to Exercises
3
13
4 0
3 5
0 1
0
0 0
2
0
0
0
0
1
0
0
0
1
0
1
0
0
3
23
5 7
7
0 5
0
11
2
0
0
3
Section 2.2
2
1
6 0
6
(d) 6
6 0
4 0
0
0
0
1
0
0
2
0
0
1
7
7
7
7
5
2
1
1
3
1
2
(f) I5
2
3 5; ( 2; 3; 5)
5
3
2 0
1 0
4
0 1
2 0
5 5; f(2b
0 0
0 1
2
3
0 0
6
1 0
1 7
7; f(6; 1; 3)g
0 1
3 5
0 0
0
(4) (a) Solution set = f(c
= ( 3; 6; 1; 2)
1
0
(e)
0
1
0
0
0
3
d
4; b; 2d + 5; d; 2) j b; d 2 Rg
2d; 3d; c; d) j c; d 2 Rg; one particular solution
(b) Solution set = f(e; 2d
tion = (1; 1; 3; 1; 1)
e; 3d; d; e) j d; e 2 Rg; one particular solu-
(c) Solution set = f( 4b + 2d f; b; 3d + 2f; d; 2f; f ) j b; d; f 2 Rg;
one particular solution = ( 3; 1; 0; 2; 6; 3)
(5) (a) f(2c; 4c; c) j c 2 Rg =
fc(2; 4; 1) j c 2 Rg
(c) f(0; 0; 0; 0)g
(d) f(3d; 2d; d; d) j d 2 Rg
= fd(3; 2; 1; 1) j d 2 Rg
(b) f(0; 0; 0)g
(6) (a) a = 2, b = 15, c = 12, d = 6
(b) a = 2, b = 25, c = 16, d = 18
(c) a = 4, b = 2, c = 4, d = 1, e = 4
(d) a = 3, b = 11, c = 2, d = 3, e = 2, f = 6
(7) (a) A = 3, B = 4, C =
2
(b) A = 4, B =
(8) Solution for system AX = B1 : (6; 51; 21);
79
solution for system AX = B2 : ( 35
3 ; 98; 2 ).
(9) Solution for system AX = B1 : ( 1; 6; 14; 9);
solution for system AX = B2 : ( 10; 56; 8; 10
3 ).
3, C = 1, D = 0
(10) (a)
R1 : (III): h1i ! h2i
1
R2 : (I): h1i
2 h1i
1
R3 : (II): h2i
3 h2i
R4 : (II): h1i
2 h2i + h1i ;
57
21
(b) AB =
;
56
20
R4 (R3 (R2 (R1 (AB)))) = R4 (R3 (R2 (R1 (A))))B =
10
19
4
7
:
(11) (a) A(X1 + X2 ) = AX1 + AX2 = O + O = O; A(cX1 ) = c(AX1 ) =
cO = O.
(b) Any nonhomogeneous system with two equations and two unknowns
that has a unique solution will serve as a counterexample. For instance, consider
x + y = 1
:
x
y = 1
This system has a unique solution: (1; 0). Let (s1 ; s2 ) and (t1 ; t2 )
both equal (1; 0). Then the sum of solutions is not a solution in this
case. Also, if c 6= 1, then the scalar multiple of a solution by c is not
a solution.
(c) A(X1 + X2 ) = AX1 + AX2 = B + O = B.
(d) Let X1 be the unique solution to AX = B. Suppose AX = O has
a nontrivial solution X2 . Then, by (c), X1 + X2 is a solution to
AX = B, and X1 6= X1 + X2 since X2 6= O. This contradicts the
uniqueness of X1 .
a b 0
yields
(12) If a 6= 0, then pivoting in the …rst column of
0
c d
#
"
b
1
0
a
. There is a nontrivial solution if and only if the
b
0
0 d c a
(2; 2) entry of this matrix is zero, which occurs if and only if ad bc = 0.
a b 0
Similarly, if c 6= 0, swapping the …rst and second rows of
0
c d
"
#
d
1
0
c
and then pivoting in the …rst column yields
. There
d
0
0 b a c
is a nontrivial solution if and only if the (2; 2) entry of this matrix is zero,
which occurs if and only if ad bc = 0. Finally, if both a and c equal 0,
then ad bc = 0 and (1; 0) is a nontrivial solution.
(13) (a) The contrapositive is: If AX = O has only the trivial solution, then
A2 X = O has only the trivial solution.
Let X1 be a solution to A2 X = O. Then O = A2 X1 = A(AX1 ).
Thus AX1 is a solution to AX = O. Hence AX1 = O by the
premise. Thus, X1 = O, using the premise again.
(b) The contrapositive is: Let t be a positive integer. If AX = O has
only the trivial solution, then At X = O has only the trivial solution.
Proceed by induction. The statement is clearly true when t = 1,
completing the Base Step.
Inductive Step: Assume that if AX = O has only the trivial solution,
then At X = O has only the trivial solution. We must prove that if
AX = O has only the trivial solution, then At+1 X = O has only the
trivial solution.
Let X1 be a solution to At+1 X = O. Then A(At X1 ) = O. Thus
At X1 = O, since it is a solution to AX = O. But then X1 = O by
the inductive hypothesis.
(14) (a) T
(b) T
(c) F
(d) T
(e) F
(f) F
Section 2.3
(1) (a) A row operation of type (I) converts A to B: h2i
(b) A row operation of type (III) converts A to B: h1i
(c) A row operation of type (II) converts A to B:
h2i
h3i + h2i.
5 h2i.
! h3i.
(2) (a) B = I3 . The sequence of row operations converting A to B is:
1
(I): h1i
4 h1i
(II): h2i
2 h1i + h2i
(II): h3i
3 h1i + h3i
(III): h2i ! h3i
(II): h1i
5 h3i + h1i
(b) The sequence of row operations converting B to A is:
(II): h1i
5 h3i + h1i
(III): h2i ! h3i
(II): h3i
3 h1i + h3i
(II): h2i
2 h1i + h2i
(I): h1i
4 h1i
(3) (a) The common reduced row echelon form is I3 .
(b) The sequence
(II): h3i
(I): h3i
(II): h1i
(II): h2i
(II): h3i
of row operations is:
2 h2i + h3i
1 h3i
9 h3i + h1i
3 h3i + h2i
9
5
h2i + h3i
3
5
(II): h1i
1
5
(I): h2i
(II): h3i
(II): h2i
(I): h1i
h2i + h1i
h2i
3 h1i + h3i
2 h1i + h2i
5 h1i
(4) A reduces to
2
1 0 0
6 0 1 0
6
4 0 0 1
0 0 0
(5) (a) 2
Section 2.3
2
3
1
2 3
6
1 0 7
7 ; while B reduces to 6 0
4 0
1 0 5
0
0 0
(b) 1
(c) 2
(d) 3
0
1
0
0
0
0
1
0
(e) 3
3
2 0
1 0 7
7:
1 0 5
0 1
(f) 2
(6) Corollary 2.6 cannot be used in either part (a) or part (b) because there
are more equations than variables.
(a) Rank = 3. Theorem 2.5 predicts that there is only the trivial solution.
Solution set = f(0; 0; 0)g
(b) Rank = 2. Theorem 2.5 predicts that nontrivial solutions exist. Solution set = f(3c; 4c; c) j c 2 Rg
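Rank computations like these are easy to spot-check by machine; a minimal sketch with an arbitrary coefficient matrix (not the exercise's matrix):

import numpy as np

# Homogeneous system A x = 0 with 4 equations and 3 variables.
A = np.array([[1.0, 2.0, 1.0],
              [2.0, 4.0, 2.0],
              [1.0, 0.0, 1.0],
              [3.0, 2.0, 3.0]])

print(np.linalg.matrix_rank(A))   # 2, which is less than the number of
                                  # variables, so nontrivial solutions exist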
(7) In the following answers, the asterisk represents any real entry:
(a)
Smallest
2
1
6 0 0
6
4 0 0
0 0
rank = 3
1
0
0
0
0 7
7
0 5
0
2Largest
1 0
6 0 1
6
4 0 0
0 0
rank
0
0
1
0
=
0
0
0
1
43
7
7
5
(b)
2 Smallest rank = 1 3
1
4 0 0 0 0
0 5
0 0 0 0
0
2 Largest rank = 3 3
1 0 0
4 0 1 0
5
0 0 1
(c)
2 Smallest rank = 2 3
1
0
4 0 0 0 0
1 5
0 0 0 0
0
2 Largest rank = 3 3
1 0
0
4 0 1
0 5
0 0 0 0
1
26
Andrilli/Hecker - Answers to Exercises
(d)
Smallest
2
1
6 0 0
6
6 0 0
6
4 0 0
0 0
(8) (a) x =
rank = 3
1
21
11 a1
(b) x = 3a1
0
0
0
0
0
0
0
0
+
7
7
7
7
5
6
11 a2
2Largest
1 0
6 0 1
6
6 0 0
6
4 0 0
0 0
Section 2.3
rank = 33
0
7
0
7
7
1
7
0 5
0
0
0
(c) Not possible
(d) x = 12 a1
4a2 + a3
1
2 a2
(e) The answer is not unique; one possible answer is x =
(f) Not possible
(g) x = 2a1
(b) q1 = 3r1
r2
a3
3a1 +2a2 +0a3 .
(h) Not possible
(e) Yes, but the linear combination of the rows is not unique;
one possible expression for the
given vector is 3(row 1) +
1(row 2) + 0(row 3).
(9) (a) Yes: 5(row 1)
3(row 2)
1(row 3)
(b) Not in row space
(c) Not in row space
(d) Yes: 3(row 1) + 1(row 2)
(10) (a) [13; 23; 60] =
a2
+ 12 a3
2q1 + q2 + 3q3
2r3 ; q2 = 2r1 + 2r2
5r3 ; q3 = r1
(c) [13; 23; 60] = r1 14r2 + 11r3
3
2
1 0
1 2
(11) (a) (i) B = 4 0 1 3 2 5 ; (ii) [1; 0; 1; 2] =
0 0 0 0
1
2 [2; 7; 19; 18] + 0[1; 2; 5; 6];
6r2 + 4r3
7
8 [0; 4; 12; 8]
+
[0; 1; 3; 2] = 14 [0; 4; 12; 8] + 0[2; 7; 19; 18] +
0[1; 2; 5; 6]; (iii) [0; 4; 12; 8] = 0[1; 0; 1; 2] + 4[0; 1; 3; 2]; [2; 17; 19; 18] =
2[1; 0; 1; 2]
2
1
6 0
(b) (i) B = 6
4 0
0
4
3 [ 2; 4;
2
3 [1; 2; 3;
+ 7[0; 1; 3; 2]; [1; 2; 5; 6] = 1[1; 0; 1; 2]+ 2[0; 1; 3; 2]
3
2 3 0
1
0 0 1 5 7
7 ; (ii) [1; 2; 3; 0; 1] = 5 [1; 2; 3; 4; 21]
3
0 0 0 0 5
0 0 0 0
6; 5; 27]+0[13; 26; 39; 5; 12]+0[2; 4; 6; 1; 7]; [0; 0; 0; 1; 5] =
4; 21]
1
3[
2; 4; 6; 5; 27] + 0[13; 26; 39; 5; 12] +
0[2; 4; 6; 1; 7]; (iii) [1; 2; 3; 4; 21] = 1[1; 2; 3; 0; 1] 4[0; 0; 0; 1; 5];
[ 2; 4; 6; 5; 27] = 2[1; 2; 3; 0; 1] + 5[0; 0; 0; 1; 5]; [13; 26; 39; 5; 12] =
13[1; 2; 3; 0; 1] + 5[0; 0; 0; 1; 5]; [2; 4; 6; 1; 7] = 2[1; 2; 3; 0; 1]
1[0; 0; 0; 1; 5]
27
Andrilli/Hecker - Answers to Exercises
Section 2.3
(12) Suppose that all main diagonal entries of A are nonzero. Then, for each i,
perform the row operation hii
(1=aii )hii on the matrix A. This will
convert A into In . We prove the converse by contrapositive. Suppose
some diagonal entry aii equals 0. Then the ith column of A has all zero
entries. No step in the row reduction process will alter this column of
zeroes, and so the unique reduced row echelon form for the matrix must
contain at least one column of zeroes, and so cannot equal In .
(13) (a) Suppose we are performing row operations on an m n matrix A.
Throughout this part, we will write hBii for the ith row of a matrix
B.
1
For the Type (I) operation R : hii
chii: Now R 1 is hii
c hii.
1
Clearly, R and R change only the ith row of A. We want to show
that R 1 R leaves hAii unchanged. But hR 1 (R(A))ii = 1c hR(A)ii
= 1c (chAii ) = hAii .
For the Type (II) operation R : hii
chji+hii: Now R 1 is hii
1
chji + hii. Again, R and R change only the ith row of A, and we
need to show that R 1 R leaves hAii unchanged. But hR 1 (R(A))ii
= chR(A)ij +hR(A)ii = chAij +hR(A)ii = chAij +chAij +hAii
= hAii .
For the Type (III) operation R : hii ! hji: Now, R 1 = R. Also,
R changes only the ith and jth rows of A, and these get swapped.
Obviously, a second application of R swaps them back to where they
were, proving that R is indeed its own inverse.
(b) An approach similar to that used for Type (II) operations in the
abridged proof of Theorem 2.3 in the text works just as easily for
Type (I) and Type (III) operations. However, here is a di¤erent approach: Suppose R is a row operation, and let X satisfy AX = B.
Multiplying both sides of this matrix equation by the matrix R(I)
yields R(I)AX = R(I)B, implying R(IA)X = R(IB), by Theorem 2.1. Thus, R(A)X = R(B), showing that X is a solution to the
new linear system obtained from AX = B after the row operation R
is performed.
(14) The zero vector is a solution to AX = O, but it is not a solution for
AX = B.
(15) Consider the systems
x + y
x + y
=
=
1
0
and
x
x
y
y
=
=
1
:
2
The reduced row echelon matrices for these inconsistent systems are, respectively,
1 1
0
1
1
0
and
:
0 0
1
0
0
1
Thus, the original augmented matrices are not row equivalent, since their
reduced row echelon forms are di¤erent.
(16) (a) Plug each of the 5 points in turn into the equation for the conic. This
will give a homogeneous system of 5 equations in the 6 variables a,
b, c, d, e, and f . This system has a nontrivial solution, by Corollary
2.6.
(b) Yes. In this case, there will be even fewer equations, so Corollary 2.6
again applies.
(17) The fact that the system is homogeneous was used in the proof of Theorem 2.5 to show that the system is consistent (because it has the trivial
solution). However, nonhomogeneous systems can be inconsistent, and so
the proof of Theorem 2.5 does not work for nonhomogeneous systems.
(18) (a) R(A) and A are row equivalent, and hence have the same reduced
row echelon form (by Theorem 2.4), and the same rank.
(b) The reduced row echelon form of A cannot have more nonzero rows
than A.
(c) If A has k rows of zeroes, then rank(A) = m
least k rows of zeroes, so rank(AB) m k.
k. But AB has at
(d) Let A = Rn ( (R2 (R1 (D))) ), where D is in reduced row echelon form. Then rank(AB) = rank(Rn ( (R2 (R1 (D))) )B) =
rank(Rn ( (R2 (R1 (DB))) )) (by part (2) of Theorem 2.1) =
rank(DB) (by repeated use of part (a))
rank(D) (by part (c)) =
rank(A) (by de…nition of rank).
(19) (a) As in the abridged proof of Theorem 2.8 in the text, let a1 ; : : : ; am
represent the rows of A, and let b1 ; : : : ; bm represent the rows of B.
For the Type (I) operation R : hii
chii: Now bi = 0a1 + 0a2 +
+ cai + 0ai+1 +
+ 0am , and, for k 6= i, bk = 0a1 + 0a2 +
+
1ak +0ak+1 + +0am . Hence, each row of B is a linear combination
of the rows of A, implying it is in the row space of A.
For the Type (II) operation R : hii
chji + hii: Now bi =
0a1 + 0a2 + + caj + 0aj+1 + + ai + 0ai+1 + : : : + 0am , where our
notation assumes i > j. (An analogous argument works for i < j.)
And, for k 6= i, bk = 0a1 +0a2 + +1ak +0ak+1 + +0am . Hence,
each row of B is a linear combination of the rows of A, implying it
is in the row space of A.
For the Type (III) operation R : hii ! hji: Now, bi = 0a1 + 0a2 +
+ 1aj + 0aj+1 +
+ 0am , bj = 0a1 + 0a2 +
+ 1ai + 0ai+1 +
+ 0am , and, for k 6= i, k 6= j, bk = 0a1 + 0a2 + + 1ak + 0ak+1 +
+ 0am . Hence, each row of B is a linear combination of the rows
of A, implying it is in the row space of A.
(b) This follows directly from part (a) and Lemma 2.7.
(20) Let k be the number of matrices between A and B when performing row
operations to get from A to B. Use a proof by induction on k.
Base Step: If k = 0, then there are no intermediary matrices, and Exercise
19 shows that the row space of B is contained in the row space of A.
Inductive Step: Given the chain
A ! D1 ! D2 !
! Dk ! Dk+1 ! B;
we must show that the row space B is contained in the row space of A.
The inductive hypothesis shows that the row space of Dk+1 is in the row
space of A, since there are only k matrices between A and Dk+1 in the
chain. Thus, each row of Dk+1 can be expressed as a linear combination
of the rows of A. But by Exercise 19, each row of B can be expressed
as a linear combination of the rows of Dk+1 . Hence, by Lemma 2.7, each
row of B can be expressed as a linear combination of the rows of A, and
therefore is in the row space of A. By Lemma 2.7 again, the row space
of B is contained in the row space of A.
(21) (a) Let xij represent the jth coordinate of xi . The corresponding homogeneous system in variables a1 ; : : : ; an+1 is
8
+ an+1 xn+1;1 = 0
>
< a1 x11 + a2 x21 +
..
..
..
.. ;
..
.
.
.
.
.
>
:
a1 x1n + a2 x2n +
+ an+1 xn+1;n = 0
which has a nontrivial solution for a1 ; : : : ; an+1 , by Corollary 2.6.
(b) Using part (a), we can suppose that a1 x1 + + an+1 xn+1 = 0, with
ai 1
ai+1
ai 6= 0 for some i. Then xi = aa1i x1
ai xi 1
ai xi+1
aj
an+1
x
.
Let
b
=
for
1
j
n
+
1,
j
=
6
i.
n+1
j
ai
ai
(22) (a) T
(b) T
(c) F
(d) F
(e) F
(f) T
Section 2.4
(1) The product of each given pair of matrices equals I.
(2) (a) Rank = 2; nonsingular (b) Rank = 2; singular (c) Rank = 3; nonsingular (d) Rank = 4; nonsingular (e) Rank = 3; singular
(3) No inverse exists for (b), (e) and (f).
(a)
"
1
10
1
15
3
10
2
15
#
(c)
"
2
21
5
84
1
7
1
28
30
#
(d)
"
3
11
2
11
4
11
1
11
#
(4) No inverse exists
2
1 3
(a) 4 1 0
2 2
2
6
(c) 6
4
(5) (a)
2
for (b) and (e).
3
2
2 5
1
3
2
0
1
2
3
1
2
1
2
8
3
1
3
2
3
1
a11
0
0
1
a22
1
a11
0
(b) 4 0
0
1
a33
cos
sin
"
the inverse is
When
=
2,
the inverse is
2
(b) The general inverse is 4
When
=
(7) (a) Inverse =
1
1
2
0
3
5
0
3
6,
4,
3
2
cos
sin
6
the inverse is 6
4
2
6
the inverse is 6
4
4
2 , the inverse is
2
3
1
3
7
3
5
3
#
2
2
p
2
2
0
1
2
p
1
2
p
"
2
"
sin
cos
p
6 , the inverse is
4,
=
4
6 3
(f) 6
4 10
1
2 1
2
3
=
When
1
10
2
4
0
6 0
6
(c) 6 .
4 ..
0
0
0 5
0
=
1
5
1
2
6
(d) 6
4
7
7
5
When
When
1
7
2
3
a11
1
a22
=
2
3
(6) (a) The general inverse is
When
Section 2.4
1
0
sin
cos
0
p
3
2
1
2
0
p
2
2
p
2
2
0
0 1
1 0
0 0
1
2
3
2p
.
#
2
2
p
2
2
1
a22
..
.
0
.
#
.
.
3
0
0 5.
0 1
3
1
0
2
p
7
3
0 7
5.
2
0
p
1
2
2
p
2
2
0
7
0 7
5.
0 1
3
0
0 5.
1
; solution set = f(3; 5)g
31
3
..
.
3
2
19 7
7
4 5
8
3
6
10 7
7
1 5
6
3
0
0 7
7
.. 7
. 5
1
ann
2
6
(b) Inverse = 6
4
2
6
(c) Inverse = 6
4
(8) (a) Consider
1
0
3
8
5
2
5
17
5
1
5
1
5
4
5
1
3
1
0
0
1
1
0
13
3
2
4
Section 2.4
3
7
7; solution set = f( 2; 4; 3)g
5
3
5
7 7
7; solution set =
3 5
1
2
0
(b) Consider 4 1
0
15
0
1
5
.
(c) A = A
(9) (a) A = I2 , B =
(b) A =
1
0
1
f(5; 8; 2; 1)g
1
0
0
3
0
0 5.
1
if A is involutory.
I2 , A + B = O2
0
0
0
0
,B=
0
1
, A+B=
1
0
0
1
(c) If A = B = I2 , then A + B = 2I2 , A 1 = B 1 = I2 , A 1 + B
2I2 , and (A + B) 1 = 21 I2 , so A 1 + B 1 =
6 (A + B) 1 .
1
=
(10) (a) B must be the zero matrix.
(b) No. AB = In implies A 1 exists (and equals B). Multiply both
sides of AC = On on the left by A 1 .
(11) …, A⁻⁹, A⁻⁵, A⁻¹, A³, A⁷, A¹¹, …
(12) B⁻¹A is the inverse of A⁻¹B.
(13) (A⁻¹)ᵀ = (Aᵀ)⁻¹ (by Theorem 2.11, part (4)) = A⁻¹.
(14) (a) No step in the row reduction process will alter the column of zeroes,
and so the unique reduced row echelon form for the matrix must
contain a column of zeroes, and so cannot equal In .
(b) Such a matrix will contain a column of all zeroes. Then use part (a).
(c) When pivoting in the ith column of the matrix during row reduction,
the (i; i) entry will be nonzero, allowing a pivot in that position.
Then, since all entries in that column below the main diagonal are
already zero, none of the rows below the ith row are changed in that
column. Hence, none of the entries below the ith row are changed
when that row is used as the pivot row. Thus, none of the nonzero
entries on the main diagonal are a¤ected by the row reduction steps
on previous columns (and the matrix stays in upper triangular form
throughout the process). Since this is true for each main diagonal
position, the matrix row reduces to In .
(d) If A is lower triangular with no zeroes on the main diagonal, then
AT is upper triangular with no zeroes on the main diagonal. By
part (c), AT is nonsingular. Hence A = (AT )T is nonsingular by
part (4) of Theorem 2.11.
(e) Note that when row reducing the ith column of A, all of the following
occur:
1. The pivot aii is nonzero, so we use a type (I) operation to change
it to a 1. This changes only row i.
2. No nonzero targets appear below row i, so all type (II) row operations only change rows above row i.
3. Because of (2), no entries are changed below the main diagonal,
and no main diagonal entries are changed by type (II) operations.
When row reducing [AjIn ], we use the exact same row operations we
use to reduce A. Since In is also upper triangular, fact (3) above
shows that all the zeroes below the main diagonal of In remain zero
when the row operations are applied. Thus, the result of the row
operations, namely A 1 , is upper triangular.
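The row reduction of [A | Iₙ] described here is straightforward to carry out by machine. A minimal SymPy sketch with an arbitrary invertible matrix (not one of the exercise matrices):

from sympy import Matrix, eye

A = Matrix([[2, 1, 0],
            [1, 3, 1],
            [0, 1, 2]])              # arbitrary invertible 3 x 3 matrix

augmented = A.row_join(eye(3))       # form [A | I3]
R, _ = augmented.rref()              # row reduce; the left block becomes I3
A_inv = R[:, 3:]                     # the right block is then A^(-1)

print(A_inv)
print(A * A_inv == eye(3))           # True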
(15) (a) Part (1): Since AA 1 = I, we must have (A 1 ) 1 = A.
Part (2): For k > 0, to show (Ak ) 1 = (A 1 )k , we must show that
Ak (A 1 )k = I. Proceed by induction on k.
Base Step: For k = 1, clearly AA 1 = I.
Inductive Step: Assume Ak (A 1 )k = I. Prove Ak+1 (A 1 )k+1 = I.
Now, Ak+1 (A 1 )k+1 = AAk (A 1 )k A 1 = AIA 1 = AA 1 = I.
This concludes the proof for k > 0. We now show Ak (A 1 )k = I for
k 0.
For k = 0, clearly A0 (A 1 )0 = I I = I. The case k = 1 is covered
by part (1) of the theorem.
For k
2, (Ak ) 1 = ((A 1 ) k ) 1 (by de…nition) = ((A k ) 1 ) 1
(by the k > 0 case) = A k (by part (1)).
(b) To show (A1
An ) 1 = An 1
A1 1 , we must prove that
1
1
(A1
An )(An
A1 ) = I. Use induction on n.
Base Step: For n = 1, clearly A1 A1 1 = I.
Inductive Step: Assume that (A1
An )(An 1
A1 1 ) = I. Prove
1
1
that (A1
An+1 )(An+1
A1 ) = I.
1
Now, (A1
An+1 )(An+1
A1 1 )
1
1
= (A1
An )An+1 An+1 (An
A1 1 )
1
= (A1
An )I(An 1
A1 ) = (A1
An )(An 1
(16) We must prove that (cA)( 1c A
1I = I.
1
) = I. But, (cA)( 1c A
A1 1 ) = I.
1
) = c 1c AA
1
=
(17) (a) Let p = s, q = t. Then p; q > 0. Now, As+t = A (p+q) =
(A 1 )p+q = (A 1 )p (A 1 )q (by Theorem 1.15) = A p A q = As At .
(b) Let q = t. Then (As )t = (As ) q = ((As ) 1 )q = ((A 1 )s )q (by
Theorem 2.11, part (2)) = (A 1 )sq (by Theorem 1.15) = A sq
(by Theorem 2.11, part (2)) = As( q) = Ast . Similarly, (As )t =
((A 1 )s )q (as before) = ((A 1 )q )s (by Theorem 1.15) = (A q )s =
(At )s .
(18) First assume AB = BA. Then (AB)^2 = ABAB = A(BA)B = A(AB)B = A^2 B^2.
Conversely, if (AB)^2 = A^2 B^2, then ABAB = AABB ⟹
A^(-1)ABABB^(-1) = A^(-1)AABBB^(-1) ⟹ BA = AB.

(19) If (AB)^q = A^q B^q for all q ≥ 2, use q = 2 and the proof in Exercise 18 to
show BA = AB.
Conversely, we need to show that BA = AB ⟹ (AB)^q = A^q B^q for all
q ≥ 2. First, we prove that BA = AB ⟹ AB^q = B^q A for all q ≥ 2. We
use induction on q.
Base Step (q = 2): AB^2 = A(BB) = (AB)B = (BA)B = B(AB) =
B(BA) = (BB)A = B^2 A.
Inductive Step: AB^(q+1) = A(B^q B) = (AB^q)B = (B^q A)B (by the inductive
hypothesis) = B^q(AB) = B^q(BA) = (B^q B)A = B^(q+1) A.
Now we use this “lemma” (BA = AB ⟹ AB^q = B^q A for all q ≥ 2) to
prove BA = AB ⟹ (AB)^q = A^q B^q for all q ≥ 2. Again, we proceed by
induction on q.
Base Step (q = 2): (AB)^2 = (AB)(AB) = A(BA)B = A(AB)B = A^2 B^2.
Inductive Step: (AB)^(q+1) = (AB)^q (AB) = (A^q B^q)(AB) (by the inductive
hypothesis) = A^q (B^q A)B = A^q (AB^q)B (by the lemma) = (A^q A)(B^q B)
= A^(q+1) B^(q+1).
(20) Base Step (k = 0): I_n = (A^1 - I_n)(A - I_n)^(-1).
Inductive Step: Assume I_n + A + A^2 + ⋯ + A^k = (A^(k+1) - I_n)(A - I_n)^(-1),
for some k. Prove I_n + A + A^2 + ⋯ + A^k + A^(k+1) = (A^(k+2) - I_n)(A - I_n)^(-1).
Now, I_n + A + A^2 + ⋯ + A^k + A^(k+1) = (I_n + A + A^2 + ⋯ + A^k) + A^(k+1) =
(A^(k+1) - I_n)(A - I_n)^(-1) + A^(k+1)(A - I_n)(A - I_n)^(-1) (where the first term
is obtained from the inductive hypothesis)
= ((A^(k+1) - I_n) + A^(k+1)(A - I_n))(A - I_n)^(-1) = (A^(k+2) - I_n)(A - I_n)^(-1).
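A quick numerical spot check of this identity (an illustration only, not part of the manual; NumPy and the particular matrix A are assumptions) can be written as:

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [0.0, 3.0]])      # arbitrary matrix for which A - I is nonsingular
    I = np.eye(2)
    k = 4

    left = sum(np.linalg.matrix_power(A, i) for i in range(k + 1))   # I + A + ... + A^k
    right = (np.linalg.matrix_power(A, k + 1) - I) @ np.linalg.inv(A - I)
    print(np.allclose(left, right))   # True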
(21) (a) Suppose that n > k. Then, by Corollary 2.6, there is a nontrivial X
such that BX = O. Hence, (AB)X = A(BX) = AO = O. But,
(AB)X = In X = X. Therefore, X = O, which gives a contradiction.
(b) Suppose that k > n. Then, by Corollary 2.6, there is a nontrivial Y
such that AY = O. Hence, (BA)Y = B(AY) = BO = O. But,
(BA)Y = I_k Y = Y. Therefore, Y = O, which gives a contradiction.
(c) Parts (a) and (b) combine to prove that n = k. Hence A and B are
both square matrices. They are nonsingular with A 1 = B by the
de…nition of a nonsingular matrix, since AB = In = BA.
(22) (a) F   (b) T   (c) T   (d) F   (e) F   (f) T
Chapter 2 Review Exercises
(1) (a) x1 =
6, x2 = 8, x3 =
5
(b) No solutions
(c) f[ 5
(2) y =
c + e; 1
2x3 + 5x2
2c
e; c; 1 + 2e; e] j c; e 2 Rg
6x + 3
(3) (a) No. Entries above pivots need to be zero.
(b) No. Rows 3 and 4 should be switched.
(4) a = 4, b = 7, c = 4, d = 6
(5)
(i) x1 =
308, x2 =
(ii) x1 =
29, x2 =
18, x3 =
19, x3 =
641, x4 = 108
88, x4 = 36
(iii) x1 = 139, x2 = 57, x3 = 316, x4 =
30
(6) Corollary 2.6 applies since there are more variables than equations.
(7) (a) (I): h3i
(b) (II): h2i
6 h3i
3 h4i + h2i
(c) (III): h2i $ h3i
(8) (a) rank(A) = 2, rank(B) = 4, rank(C) = 3
(b) AX = 0 and CX = 0: in…nite number of solutions; BX = 0: one
solution
2
3
1 0
3 0
2
2 0
4 5. Therefore, A and B are
(9) rref(A) = rref(B) = 4 0 1
0 0
0 1
3
row equivalent by an argument similar to that in Exercise 3(b) of Section
2.3.
(10) (a) Yes. [ 34; 29; 21] = 5[2; 3; 1] + 2[5; 2; 1]
6[9; 8; 3]
(b) Yes. [ 34; 29; 21] is a linear combination of the rows of the matrix.
" 2 1 #
(11) A
1
=
9
9
1
6
1
3
(12) (a) Nonsingular. A
(b) Singular
1
2
6
=4
1
2
5
2
1
1
6
5
35
2
3
7
4 5
3
Andrilli/Hecker - Answers to Exercises
Chapter 2 Review
(13) This is true by the Inverse Method, which is justi…ed in Section 2.4.
(14) No, by part (1) of Theorem 2.15.
2
3
(15) The inverse of the coe¢ cient matrix is 4 2
1
is x1 = 27, x2 = 21, x3 = 1:
1
1
2
3
4
3 5 : The solution set
2
(16) (a) Because B is nonsingular, B is row equivalent to I_m (see Exercise
13). Thus, there is a sequence of row operations R_1, ..., R_k such
that R_1(⋯(R_k(B))⋯) = I_m. Hence, by part (2) of Theorem 2.1,
R_1(⋯(R_k(BA))⋯) = R_1(⋯(R_k(B))⋯)A = I_m A = A. Therefore, BA is
row equivalent to A. Thus BA and A have the same reduced row echelon
form, and so have the same rank.
(b) By part (d) of Exercise 18 in Section 2.3, rank(AC) ≤ rank(A). Similarly,
rank(A) = rank((AC)C^(-1)) ≤ rank(AC). Hence, rank(AC) = rank(A).
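The two rank statements can also be spot-checked numerically. This sketch is only an illustration (NumPy assumed; the matrices are arbitrary, with B and C nonsingular):

    import numpy as np

    A = np.array([[1.0, 2.0, 3.0],
                  [2.0, 4.0, 6.0]])          # rank 1
    B = np.array([[1.0, 2.0],
                  [0.0, 1.0]])               # nonsingular
    C = np.array([[1.0, 0.0, 1.0],
                  [0.0, 1.0, 0.0],
                  [1.0, 0.0, 2.0]])          # nonsingular

    print(np.linalg.matrix_rank(B @ A) == np.linalg.matrix_rank(A))   # True
    print(np.linalg.matrix_rank(A @ C) == np.linalg.matrix_rank(A))   # True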
(17) (a) F   (b) F   (c) F   (d) T   (e) F   (f) T   (g) T   (h) T   (i) F   (j) T
(k) F   (l) F   (m) T   (n) T   (o) F   (p) T   (q) T   (r) T   (s) T
Andrilli/Hecker - Answers to Exercises
Section 3.1
Chapter 3
Section 3.1
(1) (a)
17
(b) 6
4
2
(2) (a)
(b)
0
1
4
(c) 0
(e)
(d) 1
(f) 156
3
4
= 22
2
4
1
3
2
1
(3) (a) ( 1)2+2
4
9
(d) ( 1)1+2
x
x
(c)
(g)
40
(i) 0
(h)
60
(j)
3
2
6
0 5
1 4
4 0
3
= 118
= 65
3
7
5
8
6
(c) ( 1)4+3
108
=
2
2
3
(b) ( 1)2+3
1
13
22
16
4 x 3
1 x+2
9
4
6
3
= 51
= 222
=
2x + 11
(4) Same answers as Exercise 1.
(5) (a) 0
(b)
251
(c)
60
(d) 352
(6) a11 a22 a33 a44 + a11 a23 a34 a42 + a11 a24 a32 a43 + a12 a21 a34 a43
+ a12 a23 a31 a44 + a12 a24 a33 a41 + a13 a21 a32 a44 + a13 a22 a34 a41
+ a13 a24 a31 a42 + a14 a21 a33 a42 + a14 a22 a31 a43 + a14 a23 a32 a41
- a11 a22 a34 a43 - a11 a23 a32 a44 - a11 a24 a33 a42 - a12 a21 a33 a44
- a12 a23 a34 a41 - a12 a24 a31 a43 - a13 a21 a34 a42 - a13 a22 a31 a44
- a13 a24 a32 a41 - a14 a21 a32 a43 - a14 a22 a33 a41 - a14 a23 a31 a42
(7) Let A =
1
1
1
1
1
0
, and let B =
0
1
.
(8) (a) Perform the basketweaving method.
(b) (a × b) · a = (a2 b3 - a3 b2)a1 + (a3 b1 - a1 b3)a2 + (a1 b2 - a2 b1)a3 =
a1 a2 b3 - a1 a3 b2 + a2 a3 b1 - a1 a2 b3 + a1 a3 b2 - a2 a3 b1 = 0.
Similarly, (a × b) · b = 0.
(9) (a) 7   (b) 18   (c) 12   (d) 0

(10) Let x = [x1, x2] and y = [y1, y2]. Then proj_x y = ((x · y)/||x||^2) x =
(1/||x||^2)[(x1 y1 + x2 y2)x1, (x1 y1 + x2 y2)x2]. Hence y - proj_x y =
(1/||x||^2)(||x||^2 y) - (1/||x||^2)[(x1 y1 + x2 y2)x1, (x1 y1 + x2 y2)x2]
= (1/||x||^2)[(x1^2 + x2^2)y1 - (x1 y1 + x2 y2)x1, (x1^2 + x2^2)y2 - (x1 y1 + x2 y2)x2]
= (1/||x||^2)[x1^2 y1 + x2^2 y1 - x1^2 y1 - x1 x2 y2, x1^2 y2 + x2^2 y2 - x1 x2 y1 - x2^2 y2]
= (1/||x||^2)[x2^2 y1 - x1 x2 y2, x1^2 y2 - x1 x2 y1]
= (1/||x||^2)[x2(x2 y1 - x1 y2), x1(x1 y2 - x2 y1)]
= ((x1 y2 - x2 y1)/||x||^2)[-x2, x1]. Thus, ||x|| ||y - proj_x y|| =
||x|| (|x1 y2 - x2 y1|/||x||^2) sqrt(x2^2 + x1^2) = |x1 y2 - x2 y1| = absolute value of
the 2 × 2 determinant with rows [x1, y1] and [x2, y2].

(11) (a) 18   (b) 7   (c) 63   (d) 8
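The identity derived in Exercise 10 can be checked numerically. The following sketch is an illustration only (NumPy assumed; x and y are arbitrary vectors):

    import numpy as np

    x = np.array([3.0, 1.0])
    y = np.array([2.0, 5.0])

    proj = (np.dot(x, y) / np.dot(x, x)) * x          # projection of y onto x
    lhs = np.linalg.norm(x) * np.linalg.norm(y - proj)
    rhs = abs(x[0] * y[1] - x[1] * y[0])              # |x1*y2 - x2*y1|
    print(np.isclose(lhs, rhs))   # True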
(12) Note that the determinant with rows [x1, x2, x3], [y1, y2, y3], [z1, z2, z3] equals
x1 y2 z3 + x2 y3 z1 + x3 y1 z2 - x3 y2 z1 - x1 y3 z2 - x2 y1 z3 =
(x2 y3 - x3 y2)z1 + (x3 y1 - x1 y3)z2 + (x1 y2 - x2 y1)z3 = (x × y) · z (from
the definition in Exercise 8). Also note that the formula given in the
hint for this exercise for the area of the parallelogram equals ||x × y||.
Hence, the volume of the parallelepiped equals ||proj_(x × y) z|| ||x × y|| =
(|z · (x × y)| / ||x × y||) ||x × y|| = |z · (x × y)| = absolute value of the
determinant with rows [x1, x2, x3], [y1, y2, y3], [z1, z2, z3].
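A numerical spot check of the volume formula in Exercise 12 (illustration only; NumPy assumed, and x, y, z are arbitrary vectors):

    import numpy as np

    x = np.array([1.0, 2.0, 0.0])
    y = np.array([0.0, 1.0, 3.0])
    z = np.array([2.0, 0.0, 1.0])

    volume_det = abs(np.linalg.det(np.array([x, y, z])))   # |det| with rows x, y, z
    volume_triple = abs(np.dot(z, np.cross(x, y)))         # |z . (x x y)|
    print(np.isclose(volume_det, volume_triple))   # True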
(13) (a) Base Step: If n = 1, then A = [a11], so cA = [ca11], and |cA| =
ca11 = c|A|.
Inductive Step: Assume that if A is an n × n matrix and c is a scalar,
then |cA| = c^n |A|. Prove that if A is an (n + 1) × (n + 1) matrix,
and c is a scalar, then |cA| = c^(n+1) |A|. Let B = cA. Then |B| =
b_(n+1,1) B_(n+1,1) + b_(n+1,2) B_(n+1,2) + ⋯ + b_(n+1,n) B_(n+1,n) + b_(n+1,n+1) B_(n+1,n+1).
Each cofactor B_(n+1,i) = (-1)^(n+1+i) |B_(n+1,i)| = c^n (-1)^(n+1+i) |A_(n+1,i)| (by the
inductive hypothesis, since B_(n+1,i) is an n × n matrix) = c^n A_(n+1,i).
Thus, |cA| = |B| = ca_(n+1,1)(c^n A_(n+1,1)) + ca_(n+1,2)(c^n A_(n+1,2)) + ⋯ +
ca_(n+1,n)(c^n A_(n+1,n)) + ca_(n+1,n+1)(c^n A_(n+1,n+1)) =
c^(n+1)(a_(n+1,1) A_(n+1,1) + a_(n+1,2) A_(n+1,2) + ⋯ + a_(n+1,n) A_(n+1,n)
+ a_(n+1,n+1) A_(n+1,n+1)) = c^(n+1) |A|.
(b) |2A| = 2^3 |A| (since A is a 3 × 3 matrix) = 8|A|.
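The formula |cA| = c^n |A| proved in Exercise 13 can be spot-checked as follows (illustration only; NumPy assumed, arbitrary 3 × 3 matrix):

    import numpy as np

    A = np.array([[1.0, 2.0, 0.0],
                  [3.0, 1.0, 4.0],
                  [0.0, 2.0, 5.0]])
    c, n = 2.0, 3

    print(np.isclose(np.linalg.det(c * A), c**n * np.linalg.det(A)))   # True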
Andrilli/Hecker - Answers to Exercises
Section 3.2
(14)
x
0
0
a0
1
x
0
a1
0
0
1
0
x
1
a2 a3 + x
+ a1 ( 1)4+2
x
0
0
0
1
x
1
x
0
= a0 ( 1)4+1
0
0
1
+ a3 ( 1)4+4
x
0
0
+ a2 ( 1)4+3
x
0
0
1
x
0
0
1
x
1
x
0
0
0
1
0
0
1
0
1 :
x
The four 3 × 3 determinants in the previous equation can be calculated
using “basketweaving” as -1, x, -x^2, and x^3, respectively. Therefore, the
original 4 × 4 determinant equals (-a0)(-1) + (a1)(x) + (-a2)(-x^2) +
(a3)(x^3) = a0 + a1 x + a2 x^2 + a3 x^3.
(15) (a) x =
(b) x =
5 or x = 2
2 or x = 1
(c) x = 3, x = 1, or x = 2
(16) (a) Use “basketweaving” and factor.
(b) 20
(17) (a) Area of T = (sqrt(3) s^2)/4.
(b) Suppose one side of T extends from (a, b) to (c, d), where (a, b) and
(c, d) are lattice points. Then s^2 = (c - a)^2 + (d - b)^2, an integer.
Hence, s^2/4 is rational, and the area of T = (sqrt(3) s^2)/4 is irrational.
(c) Suppose the vertices of T are lattice points X = (a, b), Y = (c, d),
Z = (e, f). Then side XY is expressed using vector [c - a, d - b],
and side XZ is expressed using vector [e - a, f - b]. Hence, the area
of T = (1/2)(area of the parallelogram formed by [c - a, d - b] and
[e - a, f - b]) = (1/2) times the determinant with rows [c - a, d - b]
and [e - a, f - b].
(d) That determinant equals (c - a)(f - b) - (d - b)(e - a), so the area is
(1/2)((c - a)(f - b) - (d - b)(e - a)), which is (1/2) times the difference
of two products of integers, and hence, is rational.
(18) (a) F
(b) T
(c) F
(d) F
Section 3.2
(1) (a) (II): h1i
(b) (III): h2i
3 h2i + h1i; determinant = 1
! h3i; determinant =
39
1
(e) T
Andrilli/Hecker - Answers to Exercises
(c) (I): h3i
4 h3i; determinant =
(d) (II): h2i
4
2 h1i + h2i; determinant = 1
1
2
(e) (I): h1i
(f) (III): h1i
h1i; determinant =
! h2i; determinant =
(2) (a) 30
(b)
Section 3.2
(c)
3
1
2
1
4
(e) 35
(d) 3
(3) (a) Determinant =
nonsingular
(f) 20
2;
(c) Determinant =
nonsingular
(b) Determinant = 1; nonsingular
(4) (a) Determinant =
79;
(d) Determinant = 0; singular
1; the system has only the trivial solution.
(b) Determinant = 0; the system has nontrivial solutions. (One nontrivial solution is (1; 7; 3)).
(c) Determinant =
42; the system has only the trivial solution.
(5) Use Theorem 3.2.
(6) Use row operations to reverse the order of the rows of A. This can be
done using 3 type (III) operations, an odd number. (These operations are
⟨1⟩ ↔ ⟨6⟩, ⟨2⟩ ↔ ⟨5⟩, and ⟨3⟩ ↔ ⟨4⟩.) Hence, by part (3) of Theorem 3.3,
|A| = -a16 a25 a34 a43 a52 a61.
(7) If |A| ≠ 0, A^(-1) exists. Hence AB = AC ⟹ A^(-1)AB = A^(-1)AC ⟹
B = C.
(8) (a) Base Step: When n = 1, A = [a11 ]. Then B = R(A) (with i = 1)
= [ca11 ]. Hence, jBj = ca11 = cjAj:
(b) Inductive Hypothesis: Assume that if A is an n n matrix, and R
is the row operation hii
c hii ; and B = R(A), then jBj = cjAj.
Note: In parts (c) and (d), we must prove that if A is an
(n + 1) (n + 1) matrix and R is the row operation hii
c hii ; and
B = R(A), then jBj = cjAj.
(c) Inductive Step when i 6= n:
jBj = bn+1;1 Bn+1;1 +
+ bn+1;n Bn+1;n + bn+1;n+1 Bn+1;n+1 =
bn+1;1 ( 1)n+1+1 jBn+1;1 j +
+ bn+1;n ( 1)n+1+n jBn+1;n j
n+1+n+1
+ bn+1;n+1 ( 1)
jBn+1;n+1 j: But since jBn+1;i j is the determinant of the n n matrix An+1;i after the row operation hii
c hii
has been performed (since i 6= n), the inductive hypothesis shows
that each jBn+1;i j = cjAn+1;i j: Since each bn+1;i = an+1;i , we have
jBj = can+1;1 ( 1)n+1+1 jAn+1;1 j + + can+1;n ( 1)n+1+n jAn+1;n j +
can+1;n+1 ( 1)n+1+n+1 jAn+1;n+1 j = cjAj:
40
Andrilli/Hecker - Answers to Exercises
Section 3.2
(d) Inductive Step when i = n: Again, jBj = bn+1;1 Bn+1;1 +
+
bn+1;n Bn+1;n + bn+1;n+1 Bn+1;n+1 :
Since i = n, each bn+1;i = can+1;i , while each Bn+1;i = An+1;i (since
the …rst n rows of A are left unchanged by the row operation in
this case). Hence, jBj = c(an+1;1 An+1;1 ) +
+ c(an+1;n An+1;n ) +
c(an+1;n+1 An+1;n+1 ) = cjAj:
(9) (a) In order to add a multiple of one row to another, we need at least
two rows. Hence n ≥ 2 here.
(b) Let A = [a11, a12; a21, a22]. Then the row operation ⟨1⟩ ← c⟨2⟩ + ⟨1⟩
produces B = [ca21 + a11, ca22 + a12; a21, a22], and |B| = (ca21 + a11)a22
- (ca22 + a12)a21, which reduces to a11 a22 - a12 a21 = |A|.
(c) Let A = [a11, a12; a21, a22]. Then the row operation ⟨2⟩ ← c⟨1⟩ + ⟨2⟩
produces B = [a11, a12; ca11 + a21, ca12 + a22], and |B| = a11(ca12 + a22)
- a12(ca11 + a21), which reduces to a11 a22 - a12 a21 = |A|.
(10) (a) Inductive Hypothesis: Assume that if A is an (n - 1) × (n - 1) matrix
and R is a type (II) row operation, then |R(A)| = |A|.
Remark: We must prove that if A is an n × n matrix, and R is a
type (II) row operation, then |R(A)| = |A|.
In what follows, let R: ⟨i⟩ ← c⟨j⟩ + ⟨i⟩ (so, i ≠ j).
(b) Inductive Step when i 6= n; j 6= n: Let B = R(A). Then jBj =
bn1 Bn1 +
+ bnn Bnn : But since the nth row of A is not a¤ected
by R, each bni = ani : Also, each Bni = ( 1)n+i jBni j; and jBni j is
the determinant of the (n 1) (n 1) matrix R(Ani ). But by the
inductive hypothesis, each jBni j = jAni j. Hence, jBj = jAj.
(c) Since i 6= j, we can assume j 6= n. Let i = n, and let k 6= n:
The nth row of R1 (R2 (R1 (A))) = kth row of R2 (R1 (A)) = kth row
of R1 (A) + c(jth row of R1 (A)) = nth row of A + c(jth row of A)
= nth row of R(A).
The kth row of R1 (R2 (R1 (A))) = nth row of R2 (R1 (A)) = nth row
of R1 (A) (since k 6= n) = kth row of A = kth row of R(A).
The lth row (where l 6= k; n) of R1 (R2 (R1 (A))) = lth row of R2 (R1 (A))
(since l 6= k; n) = lth row of R1 (A) (since l 6= k) = lth row of A
(since l 6= k; n) = lth row of R(A) (since l 6= n = i).
(d) Inductive Step when i = n: From part (c), we have: |R(A)| =
|R1(R2(R1(A)))| = -|R2(R1(A))| (by part (3) of Theorem 3.3) =
-|R1(A)| (by part (b), since k, j ≠ n = i) = -(-|A|) (by part (3)
of Theorem 3.3) = |A|.
(e) Since i 6= j, we can assume i 6= n: Now, the ith row of R1 (R3 (R1 (A))) =
ith row of R3 (R1 (A)) = ith row of R1 (A)+c(kth row of R1 (A)) = ith
41
Andrilli/Hecker - Answers to Exercises
Section 3.2
row of A + c(nth row of A) (since i 6= k) = ith row of R(A) (since
j = n).
The kth row of R1 (R3 (R1 (A))) = nth row of R3 (R1 (A)) = nth row
of R1 (A) (since i 6= n) = kth row of A = kth row of R(A) (since
i 6= k).
The nth row of R1 (R3 (R1 (A))) = kth row of R3 (R1 (A)) = kth row
of R1 (A) (since k 6= i) = nth row of A = nth row of R(A) (since
i 6= n).
The lth row (where l 6= i; k; n) of R1 (R3 (R1 (A))) = lth row of
R3 (R1 (A)) = lth row of R1 (A) = lth row of A = lth row of R(A).
(f) Inductive Step when j = n: |R(A)| = |R1(R3(R1(A)))| =
-|R3(R1(A))| (by part (3) of Theorem 3.3) = -|R1(A)| (by part
(b)) = -(-|A|) (by part (3) of Theorem 3.3) = |A|.
(11) (a) Suppose the ith row of A is a row of all zeroes. Let R be the row
operation ⟨i⟩ ← 2⟨i⟩. Then |R(A)| = 2|A| by part (1) of Theorem
3.3. But also |R(A)| = |A| since R has no effect on A. Hence,
2|A| = |A|, so |A| = 0.
(b) If A contains a row of zeroes, rank(A) < n (since the reduced row
echelon form of A also contains at least one row of zeroes). Hence by
Corollary 3.6, |A| = 0.
(12) (a) Let rows i, j of A be identical. Let R be the row operation
⟨i⟩ ↔ ⟨j⟩. Then, R(A) = A, and so |R(A)| = |A|. But by part (3)
of Theorem 3.3, |R(A)| = -|A|. Hence, |A| = -|A|, and so |A| = 0.
(b) If two rows of A are identical, then subtracting one of these rows
from the other shows that A is row equivalent to a matrix with a
row of zeroes. Use part (b) of Exercise 11.
(13) (a) Suppose the ith row of A = c(jth row of A). Then the row operation
R: ⟨i⟩ ← -c⟨j⟩ + ⟨i⟩ shows A is row equivalent to a matrix with a
row of zeroes. Use part (a) of Exercise 11.
(b) Use the given hint, Theorem 2.5, and Corollary 3.6.
(14) (a) Let D = [A C; O B]. If |A| = 0, then, when row reducing D into
upper triangular form, one of the first m columns will not be a pivot
column. Similarly, if |A| ≠ 0 and |B| = 0, one of the last (n - m)
columns will not be a pivot column. In either case, some column
will not contain a pivot, and so rank(D) < n. Hence, in both cases,
|D| = 0 = |A||B|.
Now suppose that both |A| and |B| are nonzero. Then the row
operations for the first m pivots used to put D into upper triangular
form will be the same as the row operations for the first m pivots
used to put A into upper triangular form (putting 1's along the entire
main diagonal of A). These operations convert D into the matrix
[U G; O B], where U is an m × m upper triangular matrix with 1's
on the main diagonal, and G is some m × (n - m) matrix. Hence, at
this point in the computation of |D|, the multiplicative factor P that
we use to calculate the determinant (as in Example 5 in the text)
equals 1/|A|.
The remaining (n - m) pivots to put D into upper triangular
form will involve the same row operations needed to put B into upper
triangular form (with 1's all along the main diagonal). Clearly, this
means that the multiplicative factor P must be multiplied by 1/|B|,
and so the new value of P equals 1/(|A||B|). Since the final matrix
is in upper triangular form, its determinant is 1. Multiplying this
by the reciprocal of the final value of P (as in Example 5) yields
|D| = |A||B|.
(b) (18 - 18)(20 - 3) = 0(17) = 0
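The block result |D| = |A||B| of Exercise 14 is easy to confirm on an example (illustration only; NumPy assumed; the 2 × 2 blocks are arbitrary):

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [0.0, 3.0]])
    B = np.array([[1.0, 4.0],
                  [2.0, 1.0]])
    C = np.array([[5.0, 6.0],
                  [7.0, 8.0]])

    D = np.block([[A, C],
                  [np.zeros((2, 2)), B]])
    print(np.isclose(np.linalg.det(D), np.linalg.det(A) * np.linalg.det(B)))   # True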
(15) Follow the hint in the text. If A is row equivalent to B in reduced row echelon
form, follow the method in Example 5 in the text to find determinants
using row reduction by maintaining a multiplicative factor P throughout
the process. But, since the same rules work for f with regard to row
operations as for the determinant, the same factor P will be produced for
f. If B = In, then f(B) = 1 = |B|. Since the factor P is the same for f
and the determinant, f(A) = (1/P)f(B) = (1/P)|B| = |A|. If, instead,
B ≠ In, then the nth row of B will be all zeroes. Perform the operation
(I): ⟨n⟩ ← 2⟨n⟩ on B, which yields B. Then, f(B) = 2f(B), implying
f(B) = 0. Hence, f(A) = (1/P) · 0 = 0.
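The row-reduction method referred to here (keeping a multiplicative factor P and multiplying the determinant of the reduced matrix by 1/P at the end) can be sketched in code. This is an illustrative implementation under those bookkeeping conventions, not the textbook's own program (NumPy assumed):

    import numpy as np

    def det_by_row_reduction(A):
        """Determinant via row reduction, tracking a multiplicative factor P:
        type (I) operations scale P, type (III) swaps negate it, and
        type (II) operations leave it unchanged."""
        M = A.astype(float).copy()
        n = M.shape[0]
        P = 1.0
        for col in range(n):
            # Type (III): swap a nonzero pivot into place if needed (negates P).
            pivot = next((r for r in range(col, n) if abs(M[r, col]) > 1e-12), None)
            if pivot is None:
                return 0.0                      # no pivot in this column
            if pivot != col:
                M[[col, pivot]] = M[[pivot, col]]
                P *= -1.0
            # Type (I): scale the pivot row so the pivot becomes 1 (scales P).
            P *= 1.0 / M[col, col]
            M[col] = M[col] / M[col, col]
            # Type (II): clear the entries below the pivot (P unchanged).
            for r in range(col + 1, n):
                M[r] = M[r] - M[r, col] * M[col]
        return 1.0 / P                          # the reduced matrix has determinant 1

    A = np.array([[0.0, 2.0, 1.0],
                  [1.0, 3.0, 2.0],
                  [2.0, 1.0, 4.0]])
    print(np.isclose(det_by_row_reduction(A), np.linalg.det(A)))   # True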
(16) (a) F
(b) T
(c) F
(d) F
(e) F
(f) T
Section 3.3
(1) (a) a31 ( 1)3+1 jA31 j+a32 ( 1)3+2 jA32 j+a33 ( 1)3+3 jA33 j+a34 ( 1)3+4 jA34 j
(b) a11 ( 1)1+1 jA11 j+a12 ( 1)1+2 jA12 j+a13 ( 1)1+3 jA13 j+a14 ( 1)1+4 jA14 j
(c) a14 ( 1)1+4 jA14 j+a24 ( 1)2+4 jA24 j+a34 ( 1)3+4 jA34 j+a44 ( 1)4+4 jA44 j
(d) a11 ( 1)1+1 jA11 j+a21 ( 1)2+1 jA21 j+a31 ( 1)3+1 jA31 j+a41 ( 1)4+1 jA41 j
(2) (a)
76
(b) 465
(c) 102
(d) 410
(3) In each part, the answer is given in the form Adjoint, Determinant, Inverse.
2
(a) 4
6
6
4
3
9 3
42 0 5,
8 2
2
6
6, 6
4
1
3
2
1
7
2
3
4
3
43
1
2
3
7
0 7
5
1
3
Andrilli/Hecker - Answers to Exercises
2
(b) 4
2
6
(c) 6
4
3
15
15
3
0
3
6
0
0
0
0
2
0
12
0
2
1
3
0
6
(d) 4 9
0
3
(e) 4 0
0
2
2
6 0
(f) 6
4 0
0
2
3
6
6
20 5, 15, 6
4
15
18
65
60
2
4
0
0
3
3
0
3
6
Section 3.3
1
5
6
5
2
5
1
13
3
4
3
1
4
1
3
0 7
7, 0, no inverse
3 5
6
3
2
1
3
0 0
4
0
7
6
3
1
0 5, 24, 6
0 7
8
2
5
4
8
1
0 0 3
2 1
1
2 3
3
3
9
9
2
6
7
1
2 7
6
5
6 , 9, 4 0
3
3 5
9
0
0
1
2 1
1
1
3
2
2
2
2
1
6
6 0
1
1
4
2 7
7, 4, 6
6
5
4
2
6 0
0
1
4
0
2
0
0
0
(4) (a) f( 4; 3; 7)g
1
4
1
2
1
2
1
2
3
7
7
5
3
7
7
7
7
7
5
(c) f(6; 4; 5)g
(b) f(5; 1; 3)g
(d) f(4; 1; 3; 6)g
(5) (a) If A is nonsingular, (AT ) 1 = (A 1 )T by part (4) of Theorem 2.11.
For the converse, A 1 = ((AT ) 1 )T .
(b) jABj = jAjjBj = jBjjAj = jBAj.
(6) (a) Use the fact that |AB| = |A||B|.
(b) AB = -BA ⟹ |AB| = |-BA| = (-1)^n |BA| (by Corollary 3.4) ⟹
|A||B| = (-1)|B||A| (since n is odd). Hence, either |A| = 0 or
|B| = 0.
(7) (a) |AA^T| = |A||A^T| = |A||A| = |A|^2 ≥ 0.
(b) |AB^T| = |A||B^T| = |A^T||B|.
(8) (a) A^T = -A, so |A| = |A^T| = |-A| = (-1)^n |A| (by Corollary 3.4) =
(-1)|A| (since n is odd). Hence |A| = 0.
(b) Consider A = [0, 1; -1, 0].
(9) (a) I_n^T = I_n = I_n^(-1).
(b) Consider the matrix [0, 1, 0; 1, 0, 0; 0, 0, 1].
(c) |A^T| = |A^(-1)| ⟹ |A| = 1/|A| ⟹ |A|^2 = 1 ⟹ |A| = ±1.
(10) Note that the given matrix B has a determinant equal to -18. Hence, if
B = A^2, then |A|^2 = |A^2| = |B| = -18, so |A|^2 is negative, a contradiction.
(11) (a) Base Step: k = 2: |A1 A2| = |A1||A2| by Theorem 3.7.
Inductive Step: Assume |A1 A2 ⋯ Ak| = |A1||A2| ⋯ |Ak|, for any
n × n matrices A1, A2, ..., Ak.
Prove |A1 A2 ⋯ Ak Ak+1| = |A1||A2| ⋯ |Ak||Ak+1|, for any n × n
matrices A1, A2, ..., Ak, Ak+1.
But, |A1 A2 ⋯ Ak Ak+1| = |(A1 A2 ⋯ Ak)Ak+1|
= |A1 A2 ⋯ Ak||Ak+1| (by Theorem 3.7) = |A1||A2| ⋯ |Ak||Ak+1|
by the inductive hypothesis.
(b) Use part (a) with Ai = A, for 1 ≤ i ≤ k. Or, for an induction proof:
Base Step (k = 1): Obvious.
Inductive Step: |A^(k+1)| = |A^k A| = |A^k||A| = |A|^k |A| = |A|^(k+1).
(c) Use part (b): |A|^k = |A^k| = |On| = 0, and so |A| = 0. Or, for an
induction proof:
Base Step (k = 1): Obvious.
Inductive Step: Assume A^(k+1) = On. If A is singular, then |A| = 0,
and we are done. But A can not be nonsingular, for if it is, A^(k+1) =
On ⟹ A^(k+1) A^(-1) = On A^(-1) ⟹ A^k = On ⟹ |A| = 0 by the inductive
hypothesis. But this implies A is singular, a contradiction.
(12) (a) |A^n| = |A|^n. If |A| = 0, then |A|^n = 0 is not prime. Similarly, if
|A| = ±1, then |A|^n = ±1 and is not prime. If | |A| | > 1, then |A|^n
has | |A| | as a positive proper divisor, for n ≥ 2.
(b) |A|^n = |A^n| = |I| = 1. Since |A| is an integer, |A| = ±1. If n is odd,
then |A| ≠ -1, so |A| = 1.
(13) (a) Let P^(-1)AP = B. Suppose P is an n × n matrix. Then P^(-1) is also
an n × n matrix. Hence P^(-1)AP is not defined unless A is also an
n × n matrix, and thus, the product B = P^(-1)AP is also n × n.
2
1
(b) For example, consider B =
or B =
2
1
5
3
1
2
1
A
(c) A = In AIn = In 1 AIn
45
1
1
1
5
3
A
2
1
=
10
4
1
1
6
16
=
12
5
.
4
11
,
(d) If P^(-1)AP = B, then PP^(-1)APP^(-1) = PBP^(-1) ⟹ A = PBP^(-1).
Hence (P^(-1))^(-1)BP^(-1) = A.
(e) A similar to B implies there is a matrix P such that A = P^(-1)BP.
B similar to C implies there is a matrix Q such that B = Q^(-1)CQ.
Hence, A = P^(-1)BP = P^(-1)(Q^(-1)CQ)P = (P^(-1)Q^(-1))C(QP) =
(QP)^(-1)C(QP), and so A is similar to C.
(f) A similar to In implies there is a matrix P such that A = P^(-1)InP.
Hence, A = P^(-1)InP = P^(-1)P = In.
(g) |B| = |P^(-1)AP| = |P^(-1)||A||P| = |P^(-1)||P||A| = (1/|P|)|P||A| =
|A|.
(14) BA=(jAj jBj)
1
A (by Corollary 3.12), which equals A, since
(15) Use the fact that A 1 = jAj
jAj = 1. The entries of A are sums and products of the entries of A.
Since the entries of A are all integers, the entries of A are also integers.
(16) Use the fact that AA = jAjIn (from Theorem 3.11) and the fact that A
is singular i¤ jAj = 0.
(17) (a) We must show that the (i; j) entry of the adjoint of AT equals the
(i; j) entry of AT . But, the (i; j) entry of the adjoint of AT = (j; i)
cofactor of AT = ( 1)j+i j(AT )ji j = ( 1)i+j jAij j = (j; i) entry of A
= (i; j) entry of AT .
(b) The (i; j) entry of the adjoint of kA = (j; i) cofactor of kA =
( 1)j+i j(kA)ji j = ( 1)j+i k n 1 jAji j (by Corollary 3.4, since Aji is
an (n 1) (n 1) matrix) = k n 1 ((j; i) cofactor of A) = k n 1 ((i; j)
entry of A).
(18) (a) Aji = (j; i) cofactor of A = ( 1)j+i jAji j = ( 1)j+i j(AT )ij j
= ( 1)i+j jAij j (since A is symmetric) = (i; j) cofactor of A =
2
3
2
1
1
0
1 1
0 1 5. Then A = 4 1
1
(b) Consider A = 4 1
1
1 0
1
1
which is not skew-symmetric.
Aij .
3
1
1 5,
1
1
(19) Let A be a nonsingular upper triangular matrix. Since A^(-1) = (1/|A|) times
the adjoint of A, it is enough to show that the adjoint of A is upper triangular;
that is, that the (i, j) entry of the adjoint is 0 for i > j. But the (i, j) entry of
the adjoint of A = (j, i) cofactor of A = (-1)^(j+i)|A_ji|. Notice that when i > j,
the matrix A_ji is also upper triangular with at least one zero on the main
diagonal. Thus, |A_ji| = 0.
(20) (a) If A is singular, then by Exercise 16, A · adj(A) = On, where adj(A)
denotes the (classical) adjoint of A. If (adj(A))^(-1) exists, then
A · adj(A) · (adj(A))^(-1) = On (adj(A))^(-1) ⟹ A = On ⟹ adj(A) = On,
a contradiction, since On is singular.
(b) Assume A is an n × n matrix. Suppose |A| ≠ 0. Then A^(-1) exists,
and by Corollary 3.12, adj(A) = |A|A^(-1). Hence |adj(A)| = | |A|A^(-1) | =
|A|^n |A^(-1)| (by Corollary 3.4) = |A|^n (1/|A|) = |A|^(n-1).
Also, if |A| = 0, then adj(A) is singular by part (a). Hence |adj(A)| = 0 =
|A|^(n-1).
(21) The solution is completely outlined in the given Hint.
(22) (a) T
(b) T
(c) F
(d) T
(e) F
(f) T
(23) (a) Let R be the type (III) row operation hki ! hk 1i, let A be an
n n matrix and let B = R(A). Then, by part (3) of Theorem 3.3,
jBj = ( 1)jAj. Next, notice that the submatrix A(k 1)j = Bkj because the (k 1)st row of A becomes the kth row of B, implying
the same row is being eliminated from both matrices, and since all
other rows maintain their original relative positions (notice the same
column is being eliminated in both cases). Hence, a(k 1)1 A(k 1)1 +
a(k 1)2 A(k 1)2 +
+ a(k 1)n A(k 1)n = bk1 ( 1)(k 1)+1 jA(k 1)1 j +
(k 1)+2
bk2 ( 1)
jA(k 1)2 j +
+ bkn ( 1)(k 1)+n jA(k 1)n j (because
a(k 1)j = bkj for 1 j n) = bk1 ( 1)k jBk1 j + bk2 ( 1)k+1 jBk2 j +
+bkn ( 1)k+n 1 jBkn j = ( 1)(bk1 ( 1)k+1 jBk1 j+bk2 ( 1)k+2 jBk2 j+
+ bkk ( 1)k+n jBkn j) = ( 1)jBj (by applying Theorem 3.10 along
the kth row of B) = ( 1)jR(A)j = ( 1)( 1)jAj = jAj, …nishing the
proof.
(b) The de…nition of the determinant is the Base Step for an induction
proof on the row number, counting down from n to 1. Part (a) is the
Inductive Step.
(24) Suppose row i of A equals row j of A. Let R be the type (III) row
operation hii ! hji: Then, clearly A = R(A). Thus, by part (3) of
Theorem 3.3, jAj = jAj, or 2jAj = 0, implying jAj = 0.
(25) Let A, i; j; and B be as given in the exercise and its hint. Then by
Exercise 24, jBj = 0, since its ith and jth rows are the same. Also, since
every row of A equals the corresponding row of B, with the exception of
the jth row, the submatrices Ajk and Bjk are equal for 1 k n. Hence,
Ajk = Bjk for 1 k n. Now, computing the determinant of B using a
cofactor expansion along the jth row (allowed by Exercise 23) yields 0 =
jBj = bj1 Bj1 +bj2 Bj2 + +bjn Bjn = ai1 Bj1 +ai2 Bj2 + +ain Bjn (because
the jth row of B equals the ith row of A) = ai1 Aj1 +ai2 Aj2 + +ain Ajn ,
completing the proof.
(26) Using the fact that the general (k; m) entry of A is Amk , we see that
the (i; j) entry of AA equals ai1 Aj1 + ai2 Aj2 +
+ ain Ajn . If i = j,
Exercise 23 implies that this sum equals jAj, while if i 6= j, Exercise 25
implies that the sum equals 0. Hence, AA equals jAj on the main diagonal
and 0 o¤ the main diagonal, yielding (jAj)In .
47
Andrilli/Hecker - Answers to Exercises
Section 3.3
(27) Suppose A is nonsingular. Then jAj =
6 0 by Theorem 3.5. Thus, by
1
1
Exercise 26, A jAj
A = In . Therefore, by Theorem 2.9, jAj
A A = In ,
implying AA = (jAj)In .
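Exercises 26 and 27 together establish that A times its classical adjoint equals |A| In. A small numerical illustration of this identity (NumPy assumed; the matrix and the helper function are illustrative, not from the textbook):

    import numpy as np

    def classical_adjoint(A):
        """Entry (i, j) of the classical adjoint is the (j, i) cofactor of A."""
        n = A.shape[0]
        adj = np.empty((n, n))
        for i in range(n):
            for j in range(n):
                minor = np.delete(np.delete(A, j, axis=0), i, axis=1)
                adj[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
        return adj

    A = np.array([[1.0, 2.0, 3.0],
                  [0.0, 4.0, 5.0],
                  [1.0, 0.0, 6.0]])
    print(np.allclose(A @ classical_adjoint(A), np.linalg.det(A) * np.eye(3)))   # True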
(28) Using the fact that the general (k; m) entry of A is Amk , we see that the
(j; j) entry of AA equals a1j A1j + a2j A2j +
+ anj Anj . But all main
diagonal entries of AA equal jAj by Exercise 27.
(29) Let A be a singular n n matrix. Then jAj = 0 by Theorem 3.5. By
part (4) of Theorem 2.11, AT is also singular (or else A = (AT )T is
nonsingular). Hence, jAT j = 0 by Theorem 3.5, and so jAj = jAT j in this
case.
(30) Case 1: Assume 1
k < j and 1
i < m. Then the (i; k) entry of
(Ajm )T = (k; i) entry of Ajm = (k; i) entry of A = (i; k) entry of AT =
(i; k) entry of (AT )mj . Case 2: Assume j k < n and 1 i < m. Then
the (i; k) entry of (Ajm )T = (k; i) entry of Ajm = (k + 1; i) entry of A =
(i; k +1) entry of AT = (i; k) entry of (AT )mj . Case 3: Assume 1 k < j
and m < i n. Then the (i; k) entry of (Ajm )T = (k; i) entry of Ajm
= (k; i + 1) entry of A = (i + 1; k) entry of AT = (i; k) entry of (AT )mj .
Case 4: Assume j
k < n and m < i
n. Then the (i; k) entry of
(Ajm )T = (k; i) entry of Ajm = (k + 1; i + 1) entry of A = (i + 1; k + 1)
entry of AT = (i; k) entry of (AT )mj . Hence, the corresponding entries of
(Ajm )T and (AT )mj are all equal, proving that the matrices themselves
are equal.
(31) Suppose A is a nonsingular n n matrix. Then, by part (4) of Theorem 2.11, AT is also nonsingular. We prove jAj = jAT j by induction on
n.
Base Step: Assume n = 1. Then A = [a11 ] = AT , and so their determinants must be equal.
Inductive Step: Assume that jBj = jBT j for any (n 1) (n 1)
nonsingular matrix B, and prove that jAj = jAT j for any n n nonsingular matrix A. Now, …rst note that, by Exercise 30, j(AT )ni j =
j(Ain )T j. But, j(Ain )T j = jAin j, either by the inductive hypothesis, if Ain
is nonsingular, or by Exercise 29, if Ain is singular. Hence, j(AT )ni j =
jAin j. Let D = AT . So, jDni j = jAin j. Then, by Exercise 28, jAj =
a1n A1n + a2n A2n + + ann Ann = dn1 ( 1)1+n jA1n j + dn2 ( 1)2+n jA2n j +
+ dnn ( 1)n+n jAnn j = dn1 ( 1)1+n jDn1 j + dn2 ( 1)2+n jDn2 j +
+
dnn ( 1)n+n jDnn j = jDj, since this is the cofactor expansion for jDj along
the last row of D. This completes the proof.
(32) Let A be an n n singular matrix. Let D = AT . D is also singular by part (4) of Theorem 2.11 (or else A = DT is nonsingular). Now,
Exercise 30 shows that jDjk j = j(Akj )T j, which equals jAkj j, by Exercise 29 if Akj is singular, or by Exercise 31 if Akj is nonsingular. So,
48
Andrilli/Hecker - Answers to Exercises
Section 3.3
jAj = jDj (by Exercise 29) = dj1 Dj1 + dj2 Dj2 +
+ djn Djn (by Exercise 23) = a1j ( 1)j+1 jDj1 j + a2j ( 1)j+2 jDj2 j +
+ anj ( 1)j+n jDjn j =
a1j ( 1)1+j jA1j j + a2j ( 1)2+j jA2j j +
+ anj ( 1)n+j jAnj j = a1j A1j +
a2j A2j +
+ anj Anj , and we are …nished.
(33) If A has two identical columns, then AT has two identical rows. Hence
jAT j = 0 by Exercise 24. Thus, jAj = 0, by Theorem 3.9 (which is proven
in Exercises 29 and 31).
(34) Let B be the n n matrix, all of whose entries are equal to the corresponding entries in A, except that the jth column of B equals the ith
column of A (as does the ith column of B). Then jBj = 0 by Exercise 33. Also, since every column of A equals the corresponding column of B, with the exception of the jth column, the submatrices Akj
and Bkj are equal for 1
k
n. Hence, Akj = Bkj for 1
k
n.
Now, computing the determinant of B using a cofactor expansion along
the jth column (allowed by part (2) of Theorem 3.10, proven in Exercises 28 and 32) yields 0 = jBj = b1j B1j + b2j B2j +
+ bnj Bnj =
a1i B1j + a2i B2j + + ani Bnj (because the jth column of B equals the ith
column of A) = a1i A1j + a2i A2j +
+ ani Anj , completing the proof.
(35) Using the fact that the general (k; m) entry of A is Amk , we see that
the (i; j) entry of AA equals a1j A1i + a2j A2i +
+ anj Ani . If i = j,
Exercise 32 implies that this sum equals jAj, while if i 6= j, Exercise 34
implies that the sum equals 0. Hence, AA equals jAj on the main diagonal
and 0 o¤ the main diagonal, and so AA = (jAj)In . (Note that, since A
is singular, jAj = 0, and so, in fact, AA = On .)
(36) (a) By Exercise 27, (1/|A|) · adj(A) · A = In, where adj(A) denotes the
classical adjoint of A. Hence, AX = B ⟹ (1/|A|) adj(A) A X =
(1/|A|) adj(A) B ⟹ In X = (1/|A|) adj(A) B ⟹ X = (1/|A|) adj(A) B.
(b) Using part (a) and the fact that the general (k, m) entry of adj(A) is
the cofactor A_mk, we see that the kth entry of X equals the kth entry
of (1/|A|) adj(A) B = (1/|A|)(b1 A_1k + ⋯ + bn A_nk).
(c) The matrix A_k is defined to equal A in every entry, except in the
kth column, which equals the column vector B. Thus, since this kth
column is ignored when computing (A_k)_ik, the (i, k) submatrix of
A_k, we see that (A_k)_ik = A_ik for each 1 ≤ i ≤ n. Hence, computing
|A_k| by performing a cofactor expansion down the kth column of A_k
(by part (2) of Theorem 3.10) yields |A_k| = b1(-1)^(1+k)|(A_k)_1k| +
b2(-1)^(2+k)|(A_k)_2k| + ⋯ + bn(-1)^(n+k)|(A_k)_nk| = b1(-1)^(1+k)|A_1k| +
b2(-1)^(2+k)|A_2k| + ⋯ + bn(-1)^(n+k)|A_nk| = b1 A_1k + b2 A_2k + ⋯ + bn A_nk.
(d) Theorem 3.13 claims that the kth entry of the solution X of AX = B
is |A_k|/|A|. Part (b) gives the expression (1/|A|)(b1 A_1k + ⋯ + bn A_nk)
for this kth entry, and part (c) replaces (b1 A_1k + ⋯ + bn A_nk) with
|A_k|.
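Exercise 36 is exactly Cramer's Rule (Theorem 3.13). A direct implementation of the rule, given here only as an illustration (NumPy assumed; the system is an arbitrary example):

    import numpy as np

    def cramer_solve(A, B):
        """Solve AX = B for nonsingular A by Cramer's Rule: x_k = |A_k| / |A|,
        where A_k is A with its kth column replaced by B."""
        det_A = np.linalg.det(A)
        X = np.empty(A.shape[0])
        for k in range(A.shape[0]):
            A_k = A.copy()
            A_k[:, k] = B
            X[k] = np.linalg.det(A_k) / det_A
        return X

    A = np.array([[2.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 4.0]])
    B = np.array([3.0, 5.0, 9.0])
    print(np.allclose(cramer_solve(A, B), np.linalg.solve(A, B)))   # True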
Section 3.4
(1) (a) x2
(b) x3
(c) x3
7x + 14
6x2 + 3x + 10
8x2 + 21x 18
(d) x3
8x2 + 7x
(e) x4
3x3
(2) (a) E2 = fb[1; 1]g
(b) E2 = fc[ 1; 1; 0]g
(c) E
1
5
4x2 + 12x
= fb[1; 2; 0] + c[0; 0; 1]g
(3) In the answers for this exercise, b, c, and d represent arbitrary scalars.
(a)
(b)
(c)
(d)
= 1, E1 = fa[1; 0]g, algebraic multiplicity of
1 = 2, E2 = fb[1; 0]g, algebraic multiplicity of
E3 = fb[1; 1]g, algebraic multiplicity of 2 = 1
1
(f)
(g)
(h)
= 1, E1 = fb[3; 1]g, algebraic multiplicity of
= fb[7; 3]g, algebraic multiplicity of 2 = 1
= 1;
1
1 = 1, E1 = fa[1; 0; 0]g, algebraic multiplicity of
E2 = fb[0; 1; 0]g, algebraic multiplicity of 2 = 1;
fc[ 16 ; 37 ; 1]g, algebraic multiplicity of 3 = 1
E
(e)
=2
1
3
1
2
= 3,
= 1; 2 = 2,
= 5, E 5 =
= 1;
2
=
1,
1
1 = 0, E0 = fc[1; 3; 2]g, algebraic multiplicity of
E2 = fb[0; 1; 0] + c[1; 0; 1]g, algebraic multiplicity of
= 1;
2 =2
1
2
= 13, E13 = fc[4; 1; 3]g, algebraic multiplicity of 1 = 1;
13, E 13 = fb[1; 4; 0]+c[3; 0; 4]g, algebraic multiplicity of
= 2,
2
=
=2
=
1,
1
1 = 1, E1 = fd[2; 0; 1; 0]g, algebraic multiplicity of 1 = 1;
E 1 = fd[0; 2; 1; 1]g, algebraic multiplicity of 2 = 1
1
1
2
2
2
= 0, E0 = fc[ 1; 1; 1; 0] + d[0; 1; 0; 1]g, algebraic multiplicity of
= 2; 2 = 3, E 3 = fd[ 1; 0; 2; 2]g, algebraic multiplicity of
=2
(4) (a) P =
3
1
2
1
;D=
3
0
0
5
(b) P =
2
1
5
2
,D=
2
0
0
2
(c) Not diagonalizable
2
3
2
6 1 1
1
(d) P = 4 2 2 1 5, D = 4 0
5 1 1
0
3
0 0
1 0 5
0 2
(f) Not diagonalizable
2
2
3
2 1
0
2
1 5, D = 4 0
(g) P = 4 3 0
0 3
1
0
0
2
0
(e) Not diagonalizable
50
3
0
0 5
3
Andrilli/Hecker - Answers to Exercises
2
3
2
3
6 3 1
0 0 0
(h) P = 4 2 1 0 5, D = 4 0 1 0 5
1 0 1
0 0 1
2
3
2
2 1 1
1
1 0
6 2 0 2
6 0 1
1 7
6
7
6
(i) P = 4
,D=4
1 0 1
0 5
0 0
0 1 0
1
0 0
2
32770
65538
17 6 24
(5) (a)
32769
65537 (b) 4 15 6 20
9 3 13
2
3
352246
4096
354294
2048
175099 5
(d) 4 175099
175099
4096
177147
2
3
4188163 6282243
9421830
9432060 5
(e) 4 4192254 6288382
4190208 6285312
9426944
Section 3.4
0
0
1
0
3
5
3
0
0 7
7
0 5
0
(c) A49 = A
(6) A similar to B implies there is a matrix P such that A = P^(-1)BP. Hence,
pA(x) = |xIn - A| = |xIn - P^(-1)BP| = |xP^(-1)InP - P^(-1)BP| = |P^(-1)(xIn -
B)P| = |P^(-1)||(xIn - B)||P| = (1/|P|)|(xIn - B)||P|
= |(xIn - B)| = pB(x).
(7) (a) Suppose A = PDP^(-1) for some diagonal matrix D. Let C be the
diagonal matrix with c_ii = (d_ii)^(1/3). Then C^3 = D. Clearly then,
(PCP^(-1))^3 = A.
(b) If A has all eigenvalues nonnegative, then A has a square root. Proceed
as in part (a), only taking square roots instead of cube roots.
3
2
3
2
2
10
11 5 :
(8) One possible answer: 4 7
8
10
11
(9) The characteristic polynomial of the matrix is x2
whose discriminant simpli…es to (a d)2 + 4bc.
(a + d)x + (ad
bc),
(10) (a) Let v be an eigenvector for A corresponding to . Then Av =
v, A2 v = A(Av) = A( v) = (Av) = ( v) = 2 v, and, an
analogous induction argument shows Ak v = k v, for any integer
k 1.
0
1
(b) Consider the matrix A =
. Although A has no eigenval1
0
ues, A4 = I2 has 1 as an eigenvalue.
(11) (-x)^n |A^(-1)| pA(1/x) = (-x)^n |A^(-1)| |(1/x)In - A| = (-x)^n |A^(-1)((1/x)In - A)|
= (-x)^n |(1/x)A^(-1) - In| = |(-x)((1/x)A^(-1) - In)| = |xIn - A^(-1)| = p_(A^(-1))(x).
(12) Both parts are true, because xIn A is also upper triangular, and so
pA (x) = jxIn Aj = (x a11 )(x a22 ) (x ann ), by Theorem 3.2.
AT j = jxITn
(13) pAT (x) = jxIn
AT j = j(xIn
A)T j = jxIn
Aj = pA (x)
(14) A^T X = X since the entries of each row of A^T sum to 1. Hence λ = 1 is
an eigenvalue for A^T, and hence for A by Exercise 13.
(15) Base Step: k = 1. Use the argument in the text directly after the definition
of similarity.
Inductive Step: Assume that if P^(-1)AP = D, then for some k > 0,
A^k = PD^k P^(-1). Prove that if P^(-1)AP = D, then A^(k+1) = PD^(k+1) P^(-1).
But A^(k+1) = AA^k = (PDP^(-1))(PD^k P^(-1)) (by the inductive hypothesis)
= PD^(k+1) P^(-1).
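The identity A^k = P D^k P^(-1) is also the practical way to compute high powers of a diagonalizable matrix. A short check (illustration only; NumPy assumed, arbitrary matrix with distinct eigenvalues):

    import numpy as np

    A = np.array([[4.0, 1.0],
                  [2.0, 3.0]])                   # eigenvalues 5 and 2
    eigenvalues, P = np.linalg.eig(A)
    D = np.diag(eigenvalues)
    k = 5

    Ak = P @ np.linalg.matrix_power(D, k) @ np.linalg.inv(P)   # P D^k P^(-1)
    print(np.allclose(Ak, np.linalg.matrix_power(A, k)))       # True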
(16) Since A is upper triangular, so is xIn A. Thus, by Theorem 3.2, jxIn Aj
equals the product of the main diagonal entries of xIn A, which is
(x a11 )(x a22 ) (x ann ). Hence the main diagonal entries of A are
the eigenvalues of A. Thus, A has n distinct eigenvalues. Then, by the
Method for Diagonalizing an n n Matrix given in Section 3.4, the necessary matrix P can be constructed (since only one fundamental eigenvector
for each eigenvalue is needed in this case), and so A is diagonalizable.
(17) Assume A is an n n matrix. Then A is singular , jAj = 0 , ( 1)n jAj =
0 , j Aj = 0 , j0In Aj = 0 , pA (0) = 0 , = 0 is an eigenvalue
for A.
(18) Let D = P 1 AP. Then D = DT = (P 1 AP)T = PT AT (P
((PT ) 1 ) 1 AT (PT ) 1 , so AT is diagonalizable.
1 T
)
=
(19) D = P 1 AP =) D 1 = (P 1 AP) 1 = P 1 A 1 P. But D 1 is a diagonal matrix whose main diagonal entries are the reciprocals of the main
diagonal entries of D: (Since the main diagonal entries of D are the nonzero
eigenvalues of A;their reciprocals exist.) Thus, A 1 is diagonalizable.
(20) (a) The ith column of AP = A(ith column of P) = APi . Since Pi is
an eigenvector for i ; we have APi = i Pi , for each i.
(b) P
1
i Pi
=
i (P
1
(ith column of P)) =
1
1
i (ith
column of In ) =
i ei :
1
(c) The ith column of P AP = P (ith column of AP) = P ( i Pi )
(by part (a)) = ( i ei ) (by part (b)). Hence, P 1 AP is a diagonal
matrix with main diagonal entries 1 ; 2 ; : : : ; n :
(21) Since PD = AP, the ith column of PD equals the ith column of AP.
But the ith column of PD = P(ith column of D) = P(dii ei ) = dii Pei =
dii (P(ith column of In )) = dii (ith column of PIn ) = dii (ith column of
P) = dii Pi : Also, the ith column of AP = A(ith column of P) = APi :
Thus, dii is an eigenvalue for the ith column of P.
52
Andrilli/Hecker - Answers to Exercises
Section 3.4
(22) Use induction on n.
Base Step (n = 1): jCj = a11 x + b11 , which has degree 1 if k = 1
(implying a11 6= 0), and degree 0 if k = 0.
Inductive Step: Assume true for (n 1) (n 1) matrices and prove
for n n matrices. Using a cofactor expansion along the …rst row yields
jCj = (a11 x + b11 )( 1)2 jC11 j + (a12 x + b12 )( 1)3 jC12 j +
+ (a1n x +
b1n )( 1)n+1 jC1n j.
Case 1: Suppose the …rst row of A is all zeroes (so each a1i = 0).
Then, each of the submatrices A1i has at most k nonzero rows. Hence,
by the inductive hypothesis, each jC1i j = jA1i x + B1i j is a polynomial of
degree at most k. Therefore, jCj = b11 ( 1)2 jC11 j + b12 ( 1)3 jC12 j + +
b1n ( 1)n+1 jC1n j is a sum of constants multiplied by such polynomials, and
hence is a polynomial of degree at most k.
Case 2: Suppose some a1i 6= 0. Then, at most k 1 of the rows from
2 through n of A have a nonzero entry. Hence, each submatrix A1i has
at most k 1 nonzero rows, and, by the inductive hypothesis, each of
jC1i j = jA1i x + B1i j is a polynomial of degree at most k 1. Therefore,
each term (a1i x + b1i )( 1)i+1 jC1i j is a polynomial of degree at most k.
The desired result easily follows.
(23) (a) jxI2
= x2
a11
a12
= (x a11 )(x a22 ) a12 a21
a21
x a22
(a11 + a22 )x + (a11 a22 a12 a21 ) = x2 (trace(A))x + jAj:
Aj =
x
(b) Use induction on n.
Base Step (n = 1): Now pA (x) = x a11 , which is a …rst degree
polynomial with coe¢ cient of its x term equal to 1.
Inductive Step: Assume that the statement is true for all
(n 1) (n 1) matrices, and prove it is true for an n n matrix
A. Consider
2
3
(x a11 )
a12
a1n
6
a21
(x a22 )
a2n 7
6
7
C = xIn A = 6
7.
..
..
..
.
.
4
5
.
.
.
.
an1
an2
(x ann )
Using a cofactor expansion along the …rst row, pA (x) = jCj =
(x a11 )( 1)1+1 jC11 j + ( a12 )( 1)1+2 jC12 j + ( a13 )( 1)1+3 jC13 j
+
+ ( a1n )( 1)1+n jC1n j. Now jC11 j = pA11 (x), and so, by the
inductive hypothesis, is an n 1 degree polynomial with coe¢ cient
of xn 1 equal to 1. Hence, (x a11 )jC11 j is a degree n polynomial
2, C1j is an
having coe¢ cient 1 for its xn term. But, for j
(n 1) (n 1) matrix having exactly n 2 rows containing entries
which are linear polynomials, since the linear polynomials at the (1; 1)
and (j; j) entries of C have been eliminated. Therefore, by Exercise
22, jC1j j is a polynomial of degree at most n 2. Summing to get
jCj then gives an n degree polynomial with coe¢ cient of the xn term
53
Andrilli/Hecker - Answers to Exercises
Chapter 3 Review
equal to 1, since only the term (x
the xn term in pA (x).
a11 )jC11 j in jCj contributes to
(c) Note that the constant term of pA (x) is pA (0) = j0In
( 1)n jAj.
Aj = j Aj =
(d) Use induction on n.
Base Step (n = 1): Now pA (x) = x a11 . The degree 0 term is
a11 = trace(A).
Inductive Step: Assume that the statement is true for all
(n 1) (n 1) matrices, and prove it is true for an n n matrix
A. We know that pA (x) is a polynomial of degree n from part (b).
Then, let C = xIn A as in part (b). Then, arguing as in part (b),
only the term (x a11 )jC11 j in jCj contributes to the xn and xn 1
terms in pA (x), because all of the terms of the form a1j jC1j j are
polynomials of degree (n 2), for j 2 (using Exercise 22). Also,
since jC11 j = pA11 (x), the inductive hypothesis and part (b) imply
that jC11 j = xn 1 trace(A11 )xn 2 + (terms of degree (n 3)).
Hence, (x a11 )jC11 j = (x a11 )(xn 1 trace(A11 )xn 2 + (terms
of degree (n 3))) = xn a11 xn 1 trace(A11 )xn 1 + (terms of
degree (n 2)) = xn trace(A)xn 1 + (terms of degree (n 2)),
since a11 + trace(A11 ) = trace(A). This completes the proof.
(24) (a) T
(c) T
(e) F
(g) T
(b) F
(d) T
(f) T
(h) F
Chapter 3 Review Exercises
(1) (a) (3,4) minor = jA34 j =
(b) (3,4) cofactor =
(2) jAj =
262
(3) jAj =
42
30
jA34 j = 30
(c) jAj =
830
(d) jAj =
830
15
(c) jBj = 15
(4) Volume = 45
(5) (a) jBj = 60
(6) (a) Yes
(b) jBj =
(b) 4
(c) Yes
(7) 378
54
(d) Yes
Andrilli/Hecker - Answers to
2
0
2
2
(8) (a) A = 4 2
2
1
(9) (a) A
(c)
(10) x1 =
1
11
2
6
=6
4
4, x2 =
Exercises
3
4
2 5
5
1
1
2
3
7
2
11
2
5
2
11
4
17
4
Chapter 3 Review
1
(b) A
2
6
=4
0
1
1
1
1
1
2
2
3
4
(b) A = 4 12
10
7
7
5
3
2
1 7
5
5
2
3
8
22 5
17
4
14
11
(d) B is singular, so we can not
compute B 1 .
3, x3 = 5
(11) (a) The determinant of the given matrix is 289. Thus, we would need
jAj4 = 289. But no real number raised to the fourth power is
negative.
(b) The determinant of the given matrix is zero, making it singular.
Hence it can not be the inverse of any matrix.
(12) B similar to A implies there is a matrix P such that B = P
1
AP.
(a) We will prove that Bk = P 1 Ak P by induction on k.
Base Step: k = 1. B = P 1 AP is given.
Inductive Step: Assume that Bk = P 1 Ak P for some k. Then
Bk+1 = BBk = (P 1 AP)(P 1 Ak P) = P 1 A(PP 1 )Ak P
= P 1 A(In )Ak P = P 1 AAk P = P 1 Ak+1 P, so Bk+1 is similar
to Ak+1 .
(b) |B^T| = |B| = |P^(-1)AP| = |P^(-1)||A||P| = (1/|P|)|A||P| = |A| = |A^T|
(c) If A is nonsingular, B^(-1) = (P^(-1)AP)^(-1) = P^(-1)A^(-1)(P^(-1))^(-1) =
P^(-1)A^(-1)P, so B is nonsingular.
Note that A = PBP^(-1). So, if B is nonsingular, A^(-1) = (PBP^(-1))^(-1) =
(P^(-1))^(-1)B^(-1)P^(-1) = PB^(-1)P^(-1), so A is nonsingular.
(d) By part (c), A^(-1) = (P^(-1))^(-1) B^(-1) (P^(-1)), proving that A^(-1) is similar
to B^(-1).
(e) B + In = P^(-1)AP + In = P^(-1)AP + P^(-1)InP = P^(-1)(A + In)P.
(f) Exercise 26(c) in Section 1.5 states that trace(AB) = trace(BA) for
any two n n matrices A and B. In this particular case, trace(B) =
trace(P 1 (AP)) = trace((AP)P 1 ) = trace(A(PP 1 )) = trace(AIn )
= trace(A).
(g) If B is diagonalizable, then there is a matrix Q such that D =
Q 1 BQ is diagonal. Thus, D = Q 1 (P 1 AP)Q = (Q 1 P 1 )A(PQ)
= (PQ) 1 A(PQ), and so A is diagonalizable. Since A is similar to
B if and only if B is similar to A (see Exercise 13(d) in Section
55
Andrilli/Hecker - Answers to Exercises
Chapter 3 Review
3.3), an analogous argument shows that A is diagonalizable =) B
is diagonalizable.
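Several of the invariants proved in Exercise 12 (powers, determinant, trace) can be confirmed numerically for a particular pair of similar matrices. The sketch below is only an illustration (NumPy assumed; A and P are arbitrary, with P nonsingular):

    import numpy as np

    A = np.array([[1.0, 2.0],
                  [3.0, 4.0]])
    P = np.array([[1.0, 1.0],
                  [0.0, 1.0]])                  # nonsingular
    B = np.linalg.inv(P) @ A @ P                # B is similar to A

    print(np.isclose(np.trace(B), np.trace(A)))                    # part (f)
    print(np.isclose(np.linalg.det(B), np.linalg.det(A)))          # part (b)
    print(np.allclose(np.linalg.matrix_power(B, 3),
                      np.linalg.inv(P) @ np.linalg.matrix_power(A, 3) @ P))   # part (a)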
(13) Assume in what follows that A is an n n matrix. Let Ai be the ith
matrix (as de…ned in Theorem 3.13) for the matrix A.
(a) Let C = R([AjB]): We must show that for each i, 1
i
n; the
matrix Ci (as de…ned in Theorem 3.13) for C is identical to R(Ai ):
First we consider the columns of Ci other than the ith column.
If 1
j
n with j 6= i, then the jth column of Ci is the same as
the jth column of C = R([AjB]). But since the jth column of [AjB]
is identical to the jth column of Ai , it follows that the jth column
of R([AjB]) is identical to the jth column of R(Ai ). Therefore, for
1 j
n with j 6= i, the jth column of Ci is the same as the jth
column of R(Ai ).
Finally, we consider the ith column of Ci . Now, the ith column
of Ci is identical to the last column of R([AjB]) = [R(A)jR(B)],
which is R(B): But since the ith column of Ai equals B, it follows
that the ith column of R(Ai ) = R(B). Thus, the ith column of Ci
is identical to the ith column of R(Ai ).
Therefore, since Ci and R(Ai ) agree in every column, we have
Ci = R(Ai ):
(b) Consider each type of operation in turn. First, if R is the type (I)
operation hki
c hki for some c 6= 0, then by part (1) of Theorem
cjAi j
jR(Ai )j
ij
3.3, jR(A)j = cjAj = jA
jAj . If R is any type (II) operation, then
jR(Ai )j
jAi j
jR(A)j = jAj .
( 1)jAi j
jAi j
( 1)jAj = jAj .
by part (2) of Theorem 3.3,
operation, then
jR(Ai )j
jR(A)j
=
If R is any type (III)
(c) In the special case when n = 1, we have A = [1]. Let B = [b]. Hence,
A1 (as de…ned in Theorem 3.13) = B = [b]: Then the formula in Theb
1j
orem 3.13 gives x1 = jA
jAj = 1 = b, which is the correct solution to
the equation AX = B in this case. For the remainder of the proof,
we assume n > 1.
If A = In , then the solution to the system is X = B. Therefore we must show that, for each i, 1 i n, the formula given in
Theorem 3.13 yields xi = bi (the ith entry of B). First, note that
jAj = jIn j = 1. Now, the ith matrix Ai (as de…ned in Theorem 3.13)
for A is identical to In except in its ith column, which equals B.
Therefore, the ith row of Ai has bi as its ith entry, and zeroes elsewhere. Thus, a cofactor expansion along the ith row of Ai yields jAi j
= bi ( 1)i+i jIn 1 j = bi . Hence, for each i, the formula in Theorem
bi
ij
3.13 produces xi = jA
jAj = 1 = bi , completing the proof.
(d) Because A is nonsingular, [AjB] row reduces to [In jX], where X is the
unique solution to the system. Let [CjD] represent any intermediate
augmented matrix during this row reduction process. Now, by part
56
Andrilli/Hecker - Answers to Exercises
Chapter 3 Review
(a) and repeated use of part (b), the ratio
jAi j
jAj
jCi j
jCj
is identical to the
ratio
obtained from the original augmented matrix, for each
i, 1
i
n. But part (c) proves that for the …nal augmented
matrix, [In jX], this common ratio gives the correct solution for xi ,
for each i, 1 i n. Since all of the systems corresponding to these
intermediate matrices have the same unique solution, the formula
in Theorem 3.13 gives the correct solution for the original system,
[AjB], as well. Thus, Cramer’s Rule is validated.
(14) (a) pA (x) = x3
2, 2
1 =
E
=
fa[
1
2
1
2
4 3
7
3
7
x2 10x 8 = (x + 2)(x + 1)(x 4); Eigenvalues:
= 1, 3 = 4; Eigenspaces: E 2 = fa[ 1; 3; 3] j a 2 Rg,
2; 7; 3
7] j a 22Rg, E4 = fa[3 3; 10; 11] j a 2 Rg; P =
3
2
0 0
10 5; D = 4 0
1 0 5
11
0
0 4
(b) pA (x) = x3 +x2 21x 45 = (x+3)2 (x 5); Eigenvalues: 1 = 5, 2 =
3; Eigenspaces: E5 = fa[
2 Rg; E2 3 = fa[ 2; 1;3
0] +
2 1; 4; 4] j a 3
5
0
0
1
2 2
3
0 5
1 0 5; D = 4 0
b[2; 0; 1] j a; b 2 Rg; P = 4 4
0
0
3
4
0 1
(15) (a) pA (x) = x3 1 = (x 1)(x2 + x + 1). = 1 is the only eigenvalue,
having algebraic multiplicity 1. Thus, at most one fundamental eigenvector will be produced, which is insu¢ cient for diagonalization by
Step 4 of the Diagonalization Method.
(b) pA (x) = x4 + 6x3 + 9x2 = x2 (x + 3)2 . Even though the eigenvalue
3 has algebraic multiplicity 2, only one fundamental eigenvector is
produced for = 3 because ( 3I4 A) has rank 3. Hence, we
get only 3 fundamental eigenvectors overall, which is insu¢ cient for
diagonalization by Step 4 of the Diagonalization Method.
3
2
9565941
9565942
4782976
6377300 5
(16) A13 = 4 12754588 12754589
3188648
3188648
1594325
(17) (a)
1
= 2,
2
=
1,
3
=3
(b) E2 = fa[1; 2; 1; 1] j a 2 Rg, E 1 = fa[1; 0; 0; 1]+b[3; 7; 3; 2] j a; b 2
Rg, E3 = fa[2; 8; 4; 3] j a 2 Rg
(c) jAj = 6
(18) (a) F   (b) T   (c) F   (d) F   (e) T   (f) T   (g) T   (h) T   (i) F   (j) T
(k) F   (l) F   (m) T   (n) F   (o) F   (p) F   (q) F   (r) T   (s) T   (t) F
(u) F   (v) T   (w) T   (x) T   (y) F   (z) F
Andrilli/Hecker - Answers to Exercises
Section 4.1
Chapter 4
Section 4.1
(1) Property (2): u ⊕ (v ⊕ w) = (u ⊕ v) ⊕ w
Property (5): a ⊙ (u ⊕ v) = (a ⊙ u) ⊕ (a ⊙ v)
Property (6): (a + b) ⊙ u = (a ⊙ u) ⊕ (b ⊙ u)
Property (7): (ab) ⊙ u = a ⊙ (b ⊙ u)
(2) Let W = fa[1; 3; 2] j a 2 Rg. First, W is closed under addition because a[1; 3; 2] + b[1; 3; 2] = (a + b)[1; 3; 2], which has the correct form.
Similarly, c(a[1; 3; 2]) = (ca)[1; 3; 2], proving closure under scalar multiplication. Now, properties (1), (2), (5), (6), (7), and (8) are true for all
vectors in R3 by Theorem 1.3, and hence are true in W. Property (3)
is satis…ed because [0; 0; 0] = 0[1; 3; 2] 2 W, and [0; 0; 0] + a[1; 3; 2] =
a[1; 3; 2] + [0; 0; 0] = [0; 0; 0]. Finally, property (4) follows from the equation (a[1; 3; 2]) + (( a)[1; 3; 2]) = [0; 0; 0].
(3) Let W = ff 2 P3 j f (2) = 0g. First, W is closed under addition since the
sum of polynomials of degree 3 has degree 3, and because if f ; g 2 W,
then (f + g)(2) = f (2) + g(2) = 0 + 0 = 0, and so (f + g) 2 W. Similarly,
if f 2 W and c 2 R, then cf has the proper degree, and (cf )(2) = cf (2) =
c0 = 0, thus establishing closure under scalar multiplication. Now properties (1), (2), (5), (6), (7), and (8) are true for all real-valued functions
on R, as shown in Example 6 in Section 4.1 of the text. Also, property (3)
holds because the zero function z (see Example 6) is a polynomial of degree 0, and z(2) = 0. Finally, property (4) is true since f is the additive
inverse of f (again, see Example 6) and because f 2 W ) f 2 W, since
f has the correct degree and ( f )(2) = f (2) = 0 = 0.
(4) Clearly the operation ⊕ is commutative. Also, ⊕ is associative because
(x ⊕ y) ⊕ z = (x^3 + y^3)^(1/3) ⊕ z = (((x^3 + y^3)^(1/3))^3 + z^3)^(1/3) = ((x^3 + y^3) + z^3)^(1/3)
= (x^3 + (y^3 + z^3))^(1/3) = (x^3 + ((y^3 + z^3)^(1/3))^3)^(1/3) = x ⊕ (y^3 + z^3)^(1/3) =
x ⊕ (y ⊕ z).
Clearly, the real number 0 acts as the additive identity, and, for any real
number x, the additive inverse of x is -x, because (x^3 + (-x)^3)^(1/3) = 0^(1/3)
= 0, the additive identity.
The first distributive law holds because a ⊙ (x ⊕ y) = a ⊙ (x^3 + y^3)^(1/3)
= a^(1/3)(x^3 + y^3)^(1/3) = (a(x^3 + y^3))^(1/3) = (ax^3 + ay^3)^(1/3) = ((a^(1/3)x)^3 +
(a^(1/3)y)^3)^(1/3) = (a^(1/3)x) ⊕ (a^(1/3)y) = (a ⊙ x) ⊕ (a ⊙ y).
Similarly, the other distributive law holds because (a + b) ⊙ x = (a + b)^(1/3)x
= (a+b)^(1/3)(x^3)^(1/3) = (ax^3 + bx^3)^(1/3) = ((a^(1/3)x)^3 + (b^(1/3)x)^3)^(1/3) = (a^(1/3)x)
⊕ (b^(1/3)x) = (a ⊙ x) ⊕ (b ⊙ x).
Associativity of scalar multiplication holds since (ab) ⊙ x = (ab)^(1/3)x =
a^(1/3)b^(1/3)x = a^(1/3)(b^(1/3)x) = a ⊙ (b^(1/3)x) = a ⊙ (b ⊙ x).
Finally, 1 ⊙ x = 1^(1/3)x = 1x = x.
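Since the operations here are unusual, a numerical spot check of two of the verified properties may be reassuring. This sketch is an illustration only (NumPy assumed; the sample values are arbitrary):

    import numpy as np

    def vec_add(x, y):
        return np.cbrt(x**3 + y**3)      # x (+) y = (x^3 + y^3)^(1/3)

    def scal_mult(a, x):
        return np.cbrt(a) * x            # a (.) x = a^(1/3) x

    x, y, a, b = 2.0, -1.5, 4.0, 7.0
    # First distributive law: a (.) (x (+) y) = (a (.) x) (+) (a (.) y)
    print(np.isclose(scal_mult(a, vec_add(x, y)),
                     vec_add(scal_mult(a, x), scal_mult(a, y))))    # True
    # Second distributive law: (a + b) (.) x = (a (.) x) (+) (b (.) x)
    print(np.isclose(scal_mult(a + b, x),
                     vec_add(scal_mult(a, x), scal_mult(b, x))))    # True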
(5) The set is not closed under addition. For example,
[1, 1; 1, 1] + [1, 2; 2, 4] = [2, 3; 3, 5].

(6) The set does not contain the zero vector.

(7) Property (8) is not satisfied. For example, 1 ⊙ 5 = 0 ≠ 5.
(8) Properties (2), (3), and (6) are not satisfied. Property (4) makes no sense
without Property (3). The following is a counterexample for Property (2):
3 ⊕ (4 ⊕ 5) = 3 ⊕ 18 = 42, but (3 ⊕ 4) ⊕ 5 = 14 ⊕ 5 = 38.

(9) Properties (3) and (6) are not satisfied. Property (4) makes no sense
without Property (3). The following is a counterexample for Property (6):
(1 + 2) ⊙ [3, 4] = 3 ⊙ [3, 4] = [9, 12], but (1 ⊙ [3, 4]) ⊕ (2 ⊙ [3, 4]) = [3, 4] ⊕ [6, 8]
= [9, 0].

(10) Not a vector space. Property (6) is not satisfied. For example, (1 + 2) ⊙ 3
= 3 ⊙ 3 = 9, but (1 ⊙ 3) ⊕ (2 ⊙ 4) = 3 ⊕ 8 = (3^5 + 8^5)^(1/5) = 33011^(1/5) ≈ 8.01183.
(11) (a) Suppose B = 0, x1 ; x2 2 V, and c 2 R. Then, A(x1 + x2 ) =
Ax1 + Ax2 = 0 + 0 = 0, and A(cx1 ) = cAx1 = c0 = 0, and so V
is closed. Conversely, assume V is closed. Let x 2 V. Then B =
A(0x) = 0(Ax) (by part (4) of Theorem 1.14) = 0B = 0.
(b) Theorem 1.3 shows these six properties hold for all vectors in Rn ,
including those in V.
(c) Since A0 = 0, 0 2 V if and only if B = 0.
(d) If property (3) is not satis…ed, there can not be an inverse for any
vector, since a vector added to its inverse must equal the (nonexistent) zero vector. If B = 0, then the inverse of x 2 V is x, which
is in V since A( x) = Ax = 0 = 0.
(e) Clearly, V is a vector space if and only if B = 0. Closure and the
remaining 8 properties are all covered in parts (a) through (d).
(12) Suppose 0_1 and 0_2 are both zero vectors. Then, 0_1 = 0_1 + 0_2 = 0_2.

(13) Zero vector = [2, 3]; additive inverse of [x, y] = [4 - x, 6 - y]

(14) (a) Add -v to both sides.
(b) av = bv =) av + ( (bv)) = bv + ( (bv)) =) av + ( 1)(bv) = 0
(by Theorem 4.1, part (3), and property (4)) =) av + ( b)v = 0
(by property (7)) =) (a b)v = 0 (by property (6)) =) a b = 0
(by Theorem 4.1, part (4)) =) a = b.
(c) Multiply both sides by the scalar
59
1
a;
and use property (7).
Andrilli/Hecker - Answers to Exercises
Section 4.2
(15) Mimic the steps in Example 6 in Section 4.1 in the text, restricting the
proof to polynomial functions only. Note that the zero function z used in
properties (3) and (4) is a (constant) polynomial function.
(16) Mimic the steps in Example 6 in Section 4.1, generalizing the proof to
allow the domain of the functions to be the set X rather than R.
(17) Base Step: n = 1. Then a1 v1 2 V; since V is closed under scalar multiplication.
Inductive Step: Assume a1 v1 +
+ an vn 2 V if v1 ; : : : ; vn 2 V,
a1 ; : : : ; an 2 R: Prove a1 v1 +
+ an vn + an+1 vn+1 2 V
if v1 ; : : : ; vn ; vn+1 2 V, a1 ; : : : ; an ; an+1 2 R: But, a1 v1 +
+ an vn +
an+1 vn+1 = (a1 v1 +
+ an vn ) + an+1 vn+1 = w + an+1 vn+1 (by the
inductive hypothesis). Also, an+1 vn+1 2 V since V is closed under scalar
multiplication. Finally, w+an+1 vn+1 2 V since V is closed under addition.
(18) 0v = 0v + 0 = 0v + (0v + ( (0v))) = (0v + 0v) + ( (0v))
= (0 + 0)v + ( (0v)) = 0v + ( (0v)) = 0.
(19) Let V be a nontrivial vector space, and let v 2 V with v 6= 0. Then
the set S = fav j a 2 Rg is a subset of V, because V is closed under
scalar multiplication. Also, if a 6= b then av 6= bv, since av = bv )
av bv = 0 ) (a b)v = 0 ) a = b (by part (4) of Theorem 4.1).
Hence, the elements of S are distinct, making S an in…nite set, since R
is in…nite. Thus, V has an in…nite number of elements because it has an
in…nite subset.
(20) (a) F
(b) F
(c) T
(d) T
(e) F
(f) T
(g) T
Section 4.2
(1) When needed, we will refer to the given set as V.
(a) Not a subspace; no zero vector.
(b) Not a subspace; not closed under either operation.
Counterexample for scalar multiplication: 2[1; 1] = [2; 2] 2
=V
(c) Subspace. Nonempty: [0; 0] 2 V;
Addition: [a; 2a] + [b; 2b] = [(a + b); 2(a + b)] 2 V;
Scalar multiplication: c[a; 2a] = [ca; 2(ca)] 2 V
(d) Not a subspace; not closed under addition.
Counterexample: [1; 0] + [0; 1] = [1; 1] 2
=V
(e) Not a subspace; no zero vector.
(f) Subspace. Nonempty: [0; 0] 2 V; Addition: [a; 0] + [b; 0]
= [a + b; 0] 2 V; Scalar multiplication: c[a; 0] = [ca; 0] 2 V.
60
Andrilli/Hecker - Answers to Exercises
Section 4.2
(g) Not a subspace; not closed under addition.
Counterexample: [1; 1] + [1; 1] = [2; 0] 2
= V.
(h) Subspace. Nonempty: [0; 0] 2 V since 0 = 3(0); Vectors in V are
precisely those of the form [a; 3a]. Addition: [a; 3a] + [b; 3b] =
[a + b; 3a 3b] = [(a + b); 3(a + b)] 2 V;
Scalar multiplication: c[a; 3a] = [(ca); 3(ca)] 2 V.
(i) Not a subspace; no zero vector since 0 6= 7(0)
5.
(j) Not a subspace; not closed under either operation.
Counterexample for addition: [1; 1] + [2; 4] = [3; 5] 2
= V, since 5 6= 32 .
(k) Not a subspace; not closed under either operation.
Counterexample for scalar multiplication: 2[0; 4] = [0; 8] 2
= V,
since 8 < 2(0) 5.
(l) Not a subspace; not closed under either operation.
Counterexample for scalar multiplication: 2[0:75; 0] = [1:5; 0] 2
= V.
(2) When needed, we will refer to the given set as V.
(a) Subspace. Nonempty: O22 2 V;
+
c
d
c
0
=
(a + c)
(b + d)
Scalar multiplication: c
a
b
a
0
=
(ca)
(cb)
Addition:
a
b
a
0
(a + c)
0
(ca)
0
2 V.
(b) Not a subspace; not closed under addition.
Counterexample:
1
0
1
0
0
1
+
0
1
(c) Subspace. Nonempty: O22 2 V;
Addition:
a b
b c
+
Scalar multiplication: c
d
e
e
f
=
a b
b d
1
1
=
(a + d)
(b + e)
1
1
(b + e)
(c + f )
(ca) (cb)
(cb) (cd)
=
2
= V.
2 V.
(d) Not a subspace; no zero vector since O22 is singular.
(e) Subspace. Nonempty: O22 2 V;
Addition:
=
a+d
c+f
=
(a + d)
(c + f )
a
c
b
(a + b + c)
+
d
f
e
(d + e + f )
b+e
(a + b + c) (d + e + f )
(b + e)
((a + d) + (b + e) + (c + f ))
61
2 V;
2 V;
2 V;
Andrilli/Hecker - Answers to Exercises
a
c
Scalar multiplication: k
=
ka
kb
kc k( (a + b + c))
Section 4.2
b
(a + b + c)
(f) Subspace. Nonempty: O22 2 V;
Addition:
a
c
b
a
+
Scalar multiplication: c
(g) Subspace. Let B =
(ka)
(kc)
(kb)
((ka) + (kb) + (kc))
e
d
(a + d)
(b + e)
=
d
f
a
d
b
a
1
2
3
6
=
=
(ca)
(cd)
(b + e)
(a + d)
(cb)
(ca)
2 V.
2 V;
2 V.
. Nonempty: O22 2 V because
O22 B = O22 . Addition: If A; C 2 V, then (A + C)B = AB + CB =
O22 + O22 = O22 . Hence, (A + C) 2 V; Scalar multiplication: If
A 2 V, then (cA)B = c(AB) = cO22 = O22 , and so (cA) 2 V.
(h) Not a subspace; not closed under addition.
Counterexample:
1
0
1
0
+
0
1
0
1
=
1
1
1
1
2
= V.
(3) When needed, we will refer to the given set as V. Also, let z be the zero
polynomial.
(a) Subspace. Nonempty: Clearly, z 2 V. Addition: (ax5 + bx4 + cx3 +
dx2 + ax + e) + (rx5 + sx4 + tx3 + ux2 + rx + w) = (a + r)x5 +
(b + s)x4 + (c + t)x3 + (d + u)x2 + (a + r)x + (e + w) 2 V; Scalar
multiplication: t(ax5 + bx4 + cx3 + dx2 + ax + e) = (ta)x5 + (tb)x4 +
(tc)x3 + (td)x2 + (ta)x + (te) 2 V.
(b) Subspace. Nonempty: z 2 V. Suppose p; q 2 V.
Addition: (p + q)(3) = p(3) + q(3) = 0 + 0 = 0. Hence (p + q) 2 V;
Scalar multiplication: (cp)(3) = cp(3) = c(0) = 0. Thus (cp) 2 V.
(c) Subspace. Nonempty: z 2 V.
Addition: (ax5 + bx4 + cx3 + dx2 + ex (a + b + c + d + e))
+ (rx5 + sx4 + tx3 + ux2 + wx (r + s + t + u + w))
= (a + r)x5 + (b + s)x4 + (c + t)x3 + (d + u)x2 + (e + w)x
+ ( (a + b + c + d + e) (r + s + t + u + w))
= (a + r)x5 + (b + s)x4 + (c + t)x3 + (d + u)x2 + (e + w)x
+ ( ((a + r) + (b + s) + (c + t) + (d + u) + (e + w))) 2 V;
Scalar multiplication: t(ax5 +bx4 +cx3 +dx2 +ex (a+b+c+d+e))
= (ta)x5 + (tb)x4 + (tc)x3 + (td)x2 + (te)x
((ta) + (tb) + (tc) + (td) + (te)) 2 V.
(d) Subspace. Nonempty: z 2 V. Suppose p; q 2 V.
Addition: (p + q)(3) = p(3) + q(3) = p(5) + q(5) = (p + q)(5).
Hence (p + q) 2 V; Scalar multiplication: (cp)(3) = cp(3) = cp(5) =
(cp)(5). Thus (cp) 2 V.
62
Andrilli/Hecker - Answers to Exercises
Section 4.2
(e) Not a subspace; z 2
= V.
(f) Not a subspace; not closed under scalar multiplication. Counterexample: x2 has a relative maximum at x = 0, but ( 1)( x2 ) = x2
does not –it has a relative minimum instead.
(g) Subspace. Nonempty: z 2 V. Suppose p; q 2 V.
Addition: (p + q)0 (4) = p0 (4) + q0 (4) = 0 + 0 = 0. Hence (p + q) 2 V;
Scalar multiplication: (cp)0 (4) = cp0 (4) = c(0) = 0. Thus (cp) 2 V.
(h) Not a subspace; z 2
= V.
(4) Let V be the given set of vectors. Setting a = b = c = 0 shows that
0 2 V. Hence V is nonempty. For closure under addition, note that
[a1 ; b1 ; 0; c1 ; a1 2b1 + c1 ] + [a2 ; b2 ; 0; c2 ; a2 2b2 + c2 ] = [a1 + a2 ; b1 + b2 ;
0; c1 + c2 ; a1 2b1 + c1 +a2 2b2 + c2 ]. Letting A = a1 + a2 , B = b1 + b2 ,
C = c1 + c2 , we see that the sum equals [A; B; 0; C; A 2B + C], which
has the correct form, and so is in V. Similarly, for closure under scalar
multiplication, k[a; b; 0; c; a 2b + c] = [ka; kb; 0; kc; ka 2(kb) + kc], which
is also seen to have the correct form. Now apply Theorem 4.2.
(5) Let V be the given set of vectors. Setting a = b = c = 0 shows that 0 ∈ V.
Hence V is nonempty. For closure under addition, note that [2a1 − 3b1;
a1 − 5c1; a1; 4c1 − b1; c1] + [2a2 − 3b2; a2 − 5c2; a2; 4c2 − b2; c2] = [2a1 − 3b1
+ 2a2 − 3b2; a1 − 5c1 + a2 − 5c2; a1 + a2; 4c1 − b1 + 4c2 − b2; c1 + c2].
Letting A = a1 + a2, B = b1 + b2, C = c1 + c2, we see that the sum equals
[2A − 3B; A − 5C; A; 4C − B; C], which has the correct form, and so is
in V. Similarly, for closure under scalar multiplication, k[2a − 3b; a − 5c;
a; 4c − b; c] = [2(ka) − 3(kb); (ka) − 5(kc); ka; 4(kc) − (kb); kc], which is
also seen to have the correct form. Now apply Theorem 4.2.
(6) (a) Let V = {x ∈ R³ | x · [1; −1; 4] = 0}. Clearly [0; 0; 0] ∈ V since
[0; 0; 0] · [1; −1; 4] = 0. Hence V is nonempty. Next, if x, y ∈ V, then
(x + y) · [1; −1; 4] = x · [1; −1; 4] + y · [1; −1; 4] = 0 + 0 = 0. Thus,
(x + y) ∈ V, and so V is closed under addition. Also, if x ∈ V and
c ∈ R, then (cx) · [1; −1; 4] = c(x · [1; −1; 4]) = c(0) = 0. Therefore
(cx) ∈ V, so V is closed under scalar multiplication. Hence, by
Theorem 4.2, V is a subspace of R³.
(b) Plane, since it contains the nonparallel vectors [0; 4; 1] and [1; 1; 0].
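The computation in part (a) and the geometric conclusion in part (b) are easy to confirm with a computer algebra system. The following is a minimal sketch, assuming Python with SymPy; it uses the normal vector [1, −1, 4] from part (a), and the two null space vectors it returns span the same plane as [0; 4; 1] and [1; 1; 0].

from sympy import Matrix

# The subspace V = {x in R^3 : x . [1, -1, 4] = 0} is the null space of the
# 1 x 3 matrix whose single row is the normal vector.
normal = Matrix([[1, -1, 4]])

basis = normal.nullspace()      # two vectors spanning the plane
print(basis)                    # e.g. [Matrix([1, 1, 0]), Matrix([-4, 0, 1])]

# Closure check on two members of V:
v = Matrix([0, 4, 1])
w = Matrix([1, 1, 0])
print(normal * (3*v - 2*w))     # Matrix([[0]]): the combination still satisfies x . n = 0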
(7) (a) The zero function z(x) = 0 is continuous, and so the set is nonempty.
Also, from calculus, we know that the sum of continuous functions is
continuous, as is any scalar multiple of a continuous function. Hence,
both closure properties hold. Apply Theorem 4.2.
(b) Replace the word “continuous” everywhere in part (a) with the word
“differentiable.”
(c) Let V be the given set. The zero function z(x) = 0 satisfies z(1/2) = 0,
and so V is nonempty. Furthermore, if f, g ∈ V and c ∈ R, then
(f + g)(1/2) = f(1/2) + g(1/2) = 0 + 0 = 0, and (cf)(1/2) = c f(1/2) = c·0 = 0.
Hence, both closure properties hold. Now use Theorem 4.2.
(d) Let V be the given set. The zero function z(x) = 0 is continuous
and satisfies \int_0^1 z(x)\,dx = 0, and so V is nonempty. Also, from calculus, we know that the sum of continuous functions is continuous,
as is any scalar multiple of a continuous function. Furthermore, if
f, g ∈ V and c ∈ R, then \int_0^1 (f+g)(x)\,dx = \int_0^1 (f(x)+g(x))\,dx =
\int_0^1 f(x)\,dx + \int_0^1 g(x)\,dx = 0 + 0 = 0, and \int_0^1 (cf)(x)\,dx = \int_0^1 c\,f(x)\,dx
= c \int_0^1 f(x)\,dx = c·0 = 0. Hence, both closure properties hold. Finally,
apply Theorem 4.2.
(8) The zero function z(x) = 0 is differentiable and satisfies 3(dz/dx) − 2z =
0, and so V is nonempty. Also, from calculus, we know that the sum
of differentiable functions is differentiable, as is any scalar multiple of a
differentiable function. Furthermore, if f, g ∈ V and c ∈ R, then 3(f + g)′
− 2(f + g) = (3f′ − 2f) + (3g′ − 2g) = 0 + 0 = 0, and 3(cf)′ − 2(cf) =
3cf′ − 2cf = c(3f′ − 2f) = c·0 = 0. Hence, both closure properties hold.
Now use Theorem 4.2.
(9) Let V be the given set. The zero function z(x) = 0 is twice-differentiable
and satisfies z″ + 2z′ − 9z = 0, and so V is nonempty. From calculus, sums and scalar multiples of twice-differentiable functions are twice-differentiable. Furthermore, if f, g ∈ V and c ∈ R, then (f + g)″ + 2(f + g)′
− 9(f + g) = (f″ + 2f′ − 9f) + (g″ + 2g′ − 9g) = 0 + 0 = 0, and (cf)″
+ 2(cf)′ − 9(cf) = cf″ + 2cf′ − 9cf = c(f″ + 2f′ − 9f) = c·0 = 0. Hence,
both closure properties hold. Theorem 4.2 finishes the proof.
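The closure argument in Exercises 8 and 9 can also be checked symbolically. A minimal sketch, assuming Python with SymPy; the two exponential solutions and the symbols a, b are illustrative choices, not taken from the textbook.

from sympy import symbols, sqrt, exp, diff, simplify

x, a, b = symbols('x a b')

# Two solutions of y'' + 2y' - 9y = 0 (the roots of r^2 + 2r - 9 = 0 are -1 +/- sqrt(10)).
r1, r2 = -1 + sqrt(10), -1 - sqrt(10)
f = exp(r1 * x)
g = exp(r2 * x)

def L(y):
    # Left-hand side of the differential equation applied to y.
    return diff(y, x, 2) + 2 * diff(y, x) - 9 * y

print(simplify(L(f)), simplify(L(g)))      # 0 0
print(simplify(L(a * f + b * g)))          # 0: closure under linear combinations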
(10) The given subset does not contain the zero function.
(11) First note that AO = OA, and so O 2 W. Thus W is nonempty. Next,
let B1 ; B2 2 W. Then A(B1 + B2 ) = AB1 + AB2 = B1 A + B2 A =
(B1 + B2 )A and A(cB1 ) = c(AB1 ) = c(B1 A) = (cB1 )A.
(12) (a) Closure under addition is a required property of every vector space.
(b) Let A be a singular n × n matrix. Then |A| = 0. But then |cA| =
cⁿ|A| = 0, so cA is also singular.
(c) All eight properties hold.
(d) For n = 2, \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} + \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}.
(e) No; if |A| ≠ 0 and c = 0, then |cA| = |Oₙ| = 0.
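The identity used in part (b), |cA| = cⁿ|A|, is easy to spot-check. A minimal sketch, assuming Python with NumPy; the matrix A and the scalar c below are arbitrary illustrations, not data from the exercise.

import numpy as np

A = np.array([[2.0, -1.0], [4.0, 3.0]])   # any 2 x 2 matrix
c, n = 5.0, A.shape[0]

lhs = np.linalg.det(c * A)
rhs = c**n * np.linalg.det(A)
print(np.isclose(lhs, rhs))               # True: |cA| = c^n |A|

# With c = 0 the product is the zero matrix, so |cA| = 0 even when |A| != 0,
# which is the observation made in part (e).
print(np.linalg.det(0 * A))               # 0.0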
(13) (a) Since the given line goes through the origin, it must have either the
form y = mx, or x = 0. In the …rst case, all vectors on the line have
the form [x; mx]. Notice that [x1 ; mx1 ]+[x2 ; mx2 ] = [x1 +x2 ; m(x1 +
x2 )], which lies on y = mx, and c[x; mx] = [cx; m(cx)], which also
lies on y = mx. If the line has the form x = 0, then all vectors on
the line have the form [0; y]. These vectors clearly form a subspace
of R2 .
(b) The given subset does not contain the zero vector.
(14) (a) Follow the proof of Theorem 4.4, substituting 15 for λ.
(b) E15 = {a[4; 1; 0] + b[2; 0; 1]} = {[4a + 2b; a; b]}. Let A = 4a + 2b
and B = a. Then b = ½(A − 4a) = ½(A − 4B) = ½A − 2B. Hence
E15 ⊆ {[A; B; ½A − 2B]} = W.
(c) Direct computation shows that the given matrix times [a; b; ½a − 2b]
equals 15[a; b; ½a − 2b] = [15a; 15b; (15/2)a − 30b].
Hence, W = {[a; b; ½a − 2b]} ⊆ E15.
(15) S = f0g, the trivial subspace of Rn .
(16) Let a ∈ V with a ≠ 0, and let b ∈ R. Then (b/a)a = b ∈ V. (We have used
bold-faced variables when we are considering objects as vectors rather than
scalars. However, a and a have the same value as real numbers; similarly
for b and b.)
(17) The given subset does not contain the zero vector.
(18) Note that 0 2 W1 and 0 2 W2 . Hence 0 2 W1 \ W2 , and so W1 \ W2 is
nonempty. Let x; y 2 W1 \ W2 . Then x + y 2 W1 since W1 is a subspace,
and x+y 2 W2 since W2 is a subspace. Hence, x+y 2 W1 \W2 . Similarly,
if c 2 R, then cx 2 W1 and cx 2 W2 since W1 and W2 are subspaces.
Thus cx 2 W1 \ W2 . Now apply Theorem 4.2.
(19) (a) If W is a subspace, then aw1 ; bw2 2 W by closure under scalar
multiplication. Hence aw1 + bw2 2 W by closure under addition.
Conversely, setting b = 0 shows that aw1 2 W for all w1 2 W. This
establishes closure under scalar multiplication. Use a = 1 and b = 1
to get closure under addition.
(b) If W is a subspace, proceed as in part (a). Conversely, first consider
a = 1 to get closure under addition. Then use cw =
(c − 1)w + w to prove closure under scalar multiplication.
(20) Suppose w1 ; w2 2 W: Then w1 + w2 = 1w1 + 1w2 is a …nite linear
combination of vectors of W, so w1 + w2 2 W. Also, c1 w1 (for c1 2 R) is
a …nite linear combination in W, so c1 w1 2 W. Now apply Theorem 4.2.
(21) Apply Theorems 4.4 and 4.3.
(22) (a) F  (b) T  (c) F  (d) T  (e) T  (f) F  (g) T  (h) T
Section 4.3
(1) (a) {[a; b; a + b] | a, b ∈ R}
(b) {a[1; 1/3; 2/3] | a ∈ R}
(c) {[a; b; b] | a, b ∈ R}
(d) Span(S) = R³
(e) {[a; b; c; 2a + b + c] | a, b, c ∈ R}
(f) {[a; b; 2a + b; a + b] | a, b ∈ R}
(2) (a) {ax³ + bx² + cx − (a + b + c) | a, b, c ∈ R}
(b) {ax³ + bx² + d | a, b, d ∈ R}
(3) (a)
(b)
(c)
a
c
b
a
b
a
a
a b
c d
a; b 2 R
a; b; c; d 2 R
= M22
2
1
(4) (a) W = row space of A = row space of 4 1
0
b[1; 0; 1; 0] +c[0; 1; 1; 1] j a; b; c 2 Rg
2
3
1
1
0
0
2 7
6
6
7
1 7
(b) B = 6 0 1 0
4
2 5
0
0
1
ax + b j a; b 2 Rg
a; b; c 2 R
c
b
8a + 3b
b
(c) fax3
1
0
1
0
1
1
3
0
0 5 = fa[1; 1; 0; 0]+
1
1
2
(c) Row space of B = fa[1; 0; 0; 21 ]+b[0; 1; 0; 21 ]+c[0; 0; 1; 21 ] j a; b; c 2 Rg
= f[a; b; c; 12 a + 12 b + 12 c] j a; b; c 2 Rg
3
2
2
1
0 3 4
1
1 4 2 5
(5) (a) W = row space of A = row space of 4 3
4
1
7 0 0
= fa[2; 1; 0; 3; 4] + b[3; 1; 1; 4; 2] + c[ 4; 1; 7; 0; 0]g
2
3
1 0 0
2
2
1
8 5
(b) B = 4 0 1 0
0 0 1
1
0
(c) Row space of B = f[a; b; c; 2a
b + c; 2a + 8b] j a; b; c 2 Rg
(6) S spans R³ by the Simplified Span Method, since the matrix whose rows
are the vectors in S row reduces to I₃.
(7) S does not span R³ by the Simplified Span Method: the matrix whose rows
are the vectors in S row reduces to a matrix whose nonzero rows have the
form [1; 0; ∗] and [0; 1; ∗]. Thus [0; 0; 1] is not in span(S).
(8) ax² + bx + c = a(x² + x + 1) + (b − a)(x + 1) + (c − b)·1.
(9) The set does not span P₂ by the Simplified Span Method, since the matrix
whose rows correspond to the given polynomials row reduces to a matrix
whose nonzero rows have the form [1; 0; ∗] and [0; 1; ∗]. Thus 0x² + 0x + 1
is not in the span of the set.
(10) (a) [ 4; 5; 13] = 5[1; 2; 2]
3[3; 5; 1] + 0[ 1; 1; 5].
3
(b) S does not span R by the Simpli…ed Span Method since
3
2
3
2
1 0 12
1
2
2
4 3
7 5.
5
1 5 row reduces to 4 0 1
0 0
0
1
1
5
Thus [0; 0; 1] is not in span(S).
(11) One answer is
1(4x2 + x
1(x3
2x2 + x
3) + 0(4x3
3) + 2(2x3
7x2 + 4x
3x2 + 2x + 5)
1).
(12) Let S1 = f[1; 1; 0; 0]; [1; 0; 1; 0]; [1; 0; 0; 1]; [0; 0; 1; 1]g. The Simpli…ed Span
Method shows that S1 spans R4 . Thus, since S1 S, S also spans R4 .
(13) Let b 2 R. Then b = ( ab )a 2 span(fag). (We have used bold-faced
variables when we are considering objects as vectors rather than scalars.
However, a and a have the same value as real numbers; similarly for b
and b.)
(14) (a) Hint: Use Theorem 1.13.
(b) Statement: If S1 is the set of symmetric n n matrices, and S2 is the
set of skew-symmetric n n matrices, then span(S1 [ S2 ) = Mnn .
Proof: Again, use Theorem 1.13.
(15) Apply the Simpli…ed Span Method to S. The matrix
2
2
3
0 1
0 1
1
4
1 0
1 0 5 represents the vectors in S. Then 4 0
12 3
2 3
0
67
0
1
0
0
0
1
3
0
1 5
0
Andrilli/Hecker - Answers to Exercises
Section 4.3
is the corresponding reduced row echelon form matrix, which clearly indicates the desired result.
(16) (a) {[−3; 2; 0], [4; 0; 5]}
(b) E₁ = {[−(3/2)b + (4/5)c; b; c]} = {b[−3/2; 1; 0] + c[4/5; 0; 1]} = {(b/2)[−3; 2; 0] +
(c/5)[4; 0; 5]}; so the set in part (a) spans E₁.
(17) Span(S₁) ⊆ span(S₂), since
a₁v₁ + ⋯ + aₙvₙ = (−a₁)(−v₁) + ⋯ + (−aₙ)(−vₙ).
Span(S₂) ⊆ span(S₁), since
a₁(−v₁) + ⋯ + aₙ(−vₙ) = (−a₁)v₁ + ⋯ + (−aₙ)vₙ.
(18) If u = av, then all vectors in span(S) are scalar multiples of v ≠ 0. Hence,
span(S) is a line through the origin. If u ≠ av, then span(S) is the set of
all linear combinations of the vectors u, v, and since these vectors point
in different directions, span(S) is a plane through the origin.
(19) First, suppose that span(S) = R³. Let x be a solution to Ax = 0. Then
u · x = v · x = w · x = 0, since these are the three coordinates of Ax.
Because span(S) = R³, there exist a, b, c such that au + bv + cw = x.
Hence, x · x = (au + bv + cw) · x = a(u · x) + b(v · x) + c(w · x) = 0. Therefore,
by part (3) of Theorem 1.5, x = 0. Thus, Ax = 0 has only the trivial
solution. Theorem 2.5 and Corollary 3.6 then show that |A| ≠ 0.
Next, suppose that |A| ≠ 0. Then Corollary 3.6 implies that rank(A) = 3.
This means that the reduced row echelon form for A has three nonzero
rows. This can only occur if A row reduces to I₃. Thus, A is row equivalent
to I₃. By Theorem 2.8, A and I₃ have the same row space. Hence, span(S)
= row space of A = row space of I₃ = R³.
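The equivalence proved in this exercise (span(S) = R³ if and only if |A| ≠ 0) gives a one-line numerical test. A minimal sketch, assuming Python with NumPy; the rows u, v, w are an arbitrary illustration rather than a specific S from the text.

import numpy as np

u, v, w = [1, 2, 0], [0, 1, 1], [2, 0, 1]
A = np.array([u, v, w], dtype=float)

# By this exercise, {u, v, w} spans R^3 exactly when |A| != 0.
print(not np.isclose(np.linalg.det(A), 0.0))   # True here: det(A) = 5, so the set spans R^3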
(20) Choose n = max(degree(p1 ),: : :, degree(pk )).
(21) S1 S2 span(S2 ), by Theorem 4.5, part (1). Then, since span(S2 ) is a
subspace of V containing S1 (by Theorem 4.5, part (2)),
span(S1 ) span(S2 ) by Theorem 4.5, part (3).
(22) (a) Suppose S is a subspace. Then S span(S) by Theorem 4.5, part (1).
Also, S is a subspace containing S, so span(S) S by Theorem 4.5,
part (3). Hence, S = span(S).
Conversely, if span(S) = S, then S is a subspace by Theorem 4.5,
part (2).
(c) span({skew-symmetric 3 × 3 matrices}) = {skew-symmetric 3 × 3 matrices}
(23) If span(S₁) = span(S₂), then S₁ ⊆ span(S₁) = span(S₂) and S₂ ⊆ span(S₂)
= span(S₁).
Conversely, since span(S₁) and span(S₂) are subspaces, part (3) of Theorem 4.5 shows that if S₁ ⊆ span(S₂) and S₂ ⊆ span(S₁), then span(S₁) ⊆
span(S₂) and span(S₂) ⊆ span(S₁), and so span(S₁) = span(S₂).
68
Andrilli/Hecker - Answers to Exercises
Section 4.3
(24) (a) Clearly S1 \ S2 S1 and S1 \ S2 S2 . Corollary 4.6 then implies
span(S1 \S2 ) span(S1 ) and span(S1 \S2 ) span(S2 ). The desired
result easily follows.
(b) S1 = f[1; 0; 0]; [0; 1; 0]g, S2 = f[0; 1; 0]; [0; 0; 1]g
(c) S1 = f[1; 0; 0]; [0; 1; 0]g, S2 = f[1; 0; 0]; [1; 1; 0]g
(25) (a) Clearly S1 S1 [ S2 and S2 S1 [ S2 . Corollary 4.6 then implies
span(S1 ) span(S1 [S2 ) and span(S2 ) span(S1 [S2 ). The desired
result easily follows.
(b) If S₁ ⊆ S₂, then span(S₁) ⊆ span(S₂), so span(S₁) ∪ span(S₂) =
span(S₂). Also S₁ ∪ S₂ = S₂.
(c) S1 = fx5 g, S2 = fx4 g
(26) If span(S) = span(S [ fvg), then v 2 span(S [ fvg) = span(S).
Conversely, if v 2 span(S), then since S span(S), we have
S [ fvg
span(S). Then span(S [ fvg)
span(S) by Theorem 4.5,
part (3). The reverse inclusion follows from Corollary 4.6, and so
span(S) = span(S [ fvg).
(27) Step 3 of the Diagonalization Method creates a set S = {X₁, …, Xₘ}
of fundamental eigenvectors where each Xⱼ is the particular solution of
(λIₙ − A)X = 0 obtained by letting the jth independent variable equal 1
while letting all other independent variables equal 0. Now, let X ∈ E_λ.
Then X is a solution of (λIₙ − A)X = 0. Suppose X is obtained by setting
the ith independent variable equal to kᵢ, for each i, with 1 ≤ i ≤ m. Then
X = k₁X₁ + … + kₘXₘ. Hence X is a linear combination of the Xᵢ's,
and thus S spans E_λ.
(28) We show that span(S) is closed under vector addition. Let u and v be
two vectors in span(S). Then there are …nite subsets fu1 ; : : :; uk g and
fv1 ; : : :; vl g of S such that u = a1 u1 + + ak uk and v = b1 v1 + + bl vl
for some real numbers a1 ; : : :; ak ; b1 ; : : : ; bl : The natural thing to do at this
point is to combine the expressions for u and v by adding corresponding
coe¢ cients. However, each of the subsets fu1 ; : : :; uk g or fv1 ; : : :; vl g may
contain elements not found in the other, so it is di¢ cult to match up their
coe¢ cients. We need to create a larger set containing all the vectors in
both subsets.
Consider the …nite subset X = fu1 ; : : :; uk g [ fv1 ; : : :; vl g of S. Renaming the elements of X; we can suppose X is the …nite subset
fw1 ; : : :; wm g of S. Hence, there are real numbers c1 ; : : :; cm and d1 ; : : :; dm
such that u = c₁w₁ + ⋯ + cₘwₘ and v = d₁w₁ + ⋯ + dₘwₘ. (Note
that cᵢ = aⱼ if wᵢ = uⱼ and that cᵢ = 0 if wᵢ ∉ {u₁, …, uₖ}. A similar
formula gives the value of each dᵢ.) Then,
u + v = \sum_{i=1}^{m} c_i w_i + \sum_{i=1}^{m} d_i w_i = \sum_{i=1}^{m} (c_i + d_i)\, w_i,
a linear combination of the elements wi in the subset X of S. Thus,
u + v 2 span(S).
(29) (a) F
(b) T
(c) F
(d) F
(e) F
(f) T
(g) F
Section 4.4
(1) (a) Linearly independent (a[0; 1; 1] = [0; 0; 0] =) a = 0)
(b) Linearly independent (neither vector is a multiple of the other)
(c) Linearly dependent (each vector is a multiple of the other)
(d) Linearly dependent (contains [0; 0; 0])
(e) Linearly dependent (Theorem 4.7)
(2) Linearly independent: (b), (c), (f). The others are linearly dependent,
and the appropriate reduced row echelon form matrix for each is given.
2
3
2
1 3
1 0
1
1 0
2
6
1 7
1 5
(a) 4 0 1
6 0 1
2 7
(e) 6
7
0 0
0
4
0 0
0 5
3
2
1 0 3
0 0
0
(d) 4 0 1 2 5
0 0 0
(3) Linearly independent: (a), (b). The others are linearly dependent, and
the appropriate reduced row echelon form matrix for each is given.
"
#
1 0 73
2 7 12
46
(c)
reduces to
6 2
7
0 1 29
23
3
2
3
2
1 0 0
1
1
1
1
1
1 5
1
1
1 5 reduces to 4 0 1 0
(d) 4 1
0 0 1
1
1
1
1
1
2
3
1 1 1
(4) (a) Linearly independent: 4 0 0 1 5 row reduces to I3 .
1 1 0
(b) 2
Linearly
1
6 1
6
4 0
1
independent:
3
2
0 1
1
6 0
0 0 7
7 row reduces to 6
4 0
2 1 5
1 0
0
0
1
0
0
3
0
0 7
7
1 5
0
(c) Consider the polynomials in P2 as corresponding vectors in R3 . Then,
the set is linearly dependent by Theorem 4.7.
70
Andrilli/Hecker - Answers to Exercises
Section 4.4
(d) Linearly dependent:
2
2
3
1 0 0
3 1
0
1
6
6
6 0 0
7
0 1 0
0
0 7
6
row reduces to 6
6 0 0 1
4 2 1
1
1 5
4
1 0
5
10
0 0 0
5
2
13
2
5
2
0
3
7
7
7, although
7
5
row reduction really is not necessary since the original matrix clearly
has rank 3 due to the row of zeroes.
(e) Linearly independent. See the remarks before Example 15 in the
textbook. No polynomial in the set can be expressed as a …nite
linear combination of the others since each has a distinct degree.
(f) Linearly independent. Use an argument similar to that in Example
15 in the textbook. (Alternately, use the comments right before
Example 15 in the textbook since the polynomial pn = 1 + 2x +
+ (n + 1)xn can not be expressed as a …nite linear combination
of other vectors in the given set. The reason for this is that since
all vectors in the set have a di¤erent degree, we …nd that the degree
of the highest degree polynomial with a nonzero coe¢ cient in such
a linear combination is distinct from n, and hence the …nite linear
combination could not equal pn .)
(5) 21
2
6
6
6
(6) 6
6
6
4
1
0
2
1
+15
1
2
1
1
3
0
4
2
6
1
0
1
0
1
1
1
2
2
0
7
5
2
1
6
3 2
6 1
3
18
4
5
1
2
2
7
6
7
6
7
6
7 row reduces to 6
7
6
7
6
5
4
1
0
0
0
0
0
+2
0
1
0
0
0
0
0
0
1
0
0
0
3
0
0
0
0
1
0
0
early independent by the Independence Test Method.
3
0
=
0
0
0
0
.
3
7
7
7
7, so the set is lin7
7
5
(7) (a) [ 2; 0; 1] is not a scalar multiple of [1; 1; 0].
(b) [0; 1; 0]
(c) No; [0; 0; 1] will also work.
(d) Any nonzero linear combination of [1; 1; 0] and [ 2; 0; 1], other than
[1; 1; 0] and [ 2; 0; 1] themselves, will work.
(8) (a) Applying the Independence Test Method, we …nd that the matrix
whose columns are the given vectors row reduces to I3 .
(b) [ 2; 0; 3; 4] =
1[2; 1; 0; 5] + 1[1; 1; 2; 0] + 1[ 1; 0; 1; 1].
(c) No, because S is linearly independent (see Theorem 4.9).
(9) (a) x³ − 4x + 8 = 2(2x³ − x + 3) − (3x³ + 2x − 2).
(b) See part (a). Also, 4x³ + 5x − 7 = −1(2x³ − x + 3) + 2(3x³ + 2x − 2);
2x³ − x + 3 = (2/3)(x³ − 4x + 8) + (1/3)(4x³ + 5x − 7);
3x³ + 2x − 2 = (1/3)(x³ − 4x + 8) + (2/3)(4x³ + 5x − 7)
(c) No polynomial in S is a scalar multiple of any other polynomial in
S.
(10) Following the hint in the textbook, let A be the matrix whose rows are
the vectors u, v, and w. By the Independence Test Method, S is linearly
independent iff Aᵀ row reduces to a matrix with a pivot in every column
iff Aᵀ has rank 3 iff |Aᵀ| ≠ 0 iff |A| ≠ 0.
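The determinant criterion in Exercise 10 is equally convenient by machine. A minimal sketch, assuming Python with SymPy; the vectors and the helper name independent_via_det are illustrative.

from sympy import Matrix

def independent_via_det(u, v, w):
    # Three vectors in R^3 are linearly independent iff the matrix having those
    # vectors as rows has nonzero determinant (Exercise 10).
    return Matrix([u, v, w]).det() != 0

print(independent_via_det([1, 0, 2], [0, 1, 1], [1, 1, 3]))   # False: third = first + second
print(independent_via_det([1, 0, 2], [0, 1, 1], [0, 0, 1]))   # True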
(11) In each part, there are many di¤erent correct answers. Only one possibility
is given here.
(a) fe1 ; e2 ; e3 ; e4 g
(d)
1
0
82
< 1
(e) 4 0
:
0
0
0
0
0
0
;
0
3 2
0
0
0
0
0
0 5;4 1
0
0
(b) fe1 ; e2 ; e3 ; e4 g
1
0
0
0
1
0
0
0
;
0
3 2
0
0
0 5;4 0
1
0
0
0
1
0
0
0
0
(c) f1; x; x2 ; x3 g
0 0 0
;
1 0 0
39
3 2
0 0 0 =
1
0 5;4 0 1 0 5
;
0 0 0
0
(Notice that each matrix is symmetric.)
(12) If S is linearly dependent, then a₁v₁ + ⋯ + aₙvₙ = 0 for some {v₁, …, vₙ}
⊆ S and a₁, …, aₙ ∈ R, with some aᵢ ≠ 0. Then vᵢ = −(a₁/aᵢ)v₁ − ⋯
− (aᵢ₋₁/aᵢ)vᵢ₋₁ − (aᵢ₊₁/aᵢ)vᵢ₊₁ − ⋯ − (aₙ/aᵢ)vₙ. So, if w ∈ span(S), then w = bvᵢ +
c₁w₁ + ⋯ + cₖwₖ, for some {w₁, …, wₖ} ⊆ S − {vᵢ} and b, c₁, …, cₖ ∈ R.
Hence, w = −(ba₁/aᵢ)v₁ − ⋯ − (baᵢ₋₁/aᵢ)vᵢ₋₁ − (baᵢ₊₁/aᵢ)vᵢ₊₁ − ⋯ − (baₙ/aᵢ)vₙ + c₁w₁ +
⋯ + cₖwₖ ∈ span(S − {vᵢ}). So, span(S) ⊆ span(S − {vᵢ}). Also,
span(S − {vᵢ}) ⊆ span(S) by Corollary 4.6. Thus span(S − {vᵢ}) =
span(S).
Conversely, if span(S − {v}) = span(S) for some v ∈ S, then by Theorem 4.5, part (1), v ∈ span(S) = span(S − {v}), and so S is linearly
dependent by the Alternate Characterization of linear dependence in Table 4.1.
(13) In each part, you can prove that the indicated vector is redundant by
creating a matrix using the given vectors as rows, and showing that the
Simpli…ed Span Method leads to the same reduced row echelon form (except, perhaps, with an extra row of zeroes) with or without the indicated
vector as one of the rows.
(a) [0; 0; 0; 0]. Any linear combination of the other three vectors becomes
a linear combination of all four vectors when 1[0; 0; 0; 0] is added to
it.
72
Andrilli/Hecker - Answers to Exercises
Section 4.4
(b) [0; 0; 6; 0]: Let S be the given set of three vectors. We prove that
v = [0; 0; 6; 0] is redundant. To do this, we need to show that
span(S fvg) = span(S). We will do this by applying the Simpli…ed
Span Method to both S fvg and S. If the method leads to the
same reduced row echelon form (except, perhaps, with an extra row
of zeroes) with or without v as one of the rows, then the two spans
are equal.
For span(S − {v}):
\begin{bmatrix} 1 & 1 & 0 & 0 \\ 1 & 1 & 1 & 0 \end{bmatrix} row reduces to \begin{bmatrix} 1 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}.
For span(S):
\begin{bmatrix} 1 & 1 & 0 & 0 \\ 1 & 1 & 1 & 0 \\ 0 & 0 & 6 & 0 \end{bmatrix} row reduces to \begin{bmatrix} 1 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}.
Since the reduced row echelon form matrices are the same, except for
the extra row of zeroes, the two spans are equal, and v = [0; 0; 6; 0]
is a redundant vector.
For an alternate approach to proving that span(S fvg) = span(S),
…rst note that S fvg
S, and so span(S fvg)
span(S) by
Corollary 4.6. Next, [1; 1; 0; 0] and [1; 1; 1; 0] are clearly in
span(S fvg) by part (1) of Theorem 4.5. But v 2 span(S fvg)
as well, because [0; 0; 6; 0] = (6)[1; 1; 0; 0] + ( 6)[1; 1; 1; 0]. Hence, S
is a subset of the subspace span(S fvg). Therefore, by part (3) of
Theorem 4.5, span(S) span(S fvg). Thus, since we have proven
subset inclusion in both directions, span(S fvg) = span(S).
(c) Let S be the given set of vectors. Every one of the given 16 vectors in
S is individually redundant. One clever way to see this is to consider
the matrix
3
2
1
1
1
1
6 1
1
1
1 7
7;
A=6
4 1
1
1
1 5
1
1
1
1
whose rows are 4 of the vectors in S. Now, A row reduces to I4 . Thus,
the four rows of A span R4 , by the Simpli…ed Span Method. Hence,
any of the remaining 12 vectors in S are redundant since they are in
the span of these 4 rows. Now, repeat this argument using A to
show that the four rows in A are also individually redundant. (Note
that A and A are row equivalent, so we do not have to perform a
second row reduction.)
(14) pA (x) = (x
1)(x
2)2 .
(15) Use the de…nition of linear independence.
73
Andrilli/Hecker - Answers to Exercises
Section 4.4
(16) Let S = fv1 ; : : : ; vk g
Rn , with n < k, and let A be the n
k
matrix having the vectors in S as columns. Then the system AX = O
has nontrivial solutions by Corollary 2.6. But, if X = [x1 ; : : : ; xk ], then
AX = x1 v1 +
+ xk vk , and so, by the de…nition of linear dependence,
the existence of a nontrivial solution to AX = 0 implies that S is linearly
dependent.
(17) Notice that xf 0 (x) is not a scalar multiple of f (x). For if the terms of
f (x) are written in order of descending degree, and the …rst two nonzero
terms are, respectively, an xn and ak xk , then the corresponding terms of
xf 0 (x) are nan xn and kak xk , which are di¤erent multiples of an xn and
ak xk , respectively, since n 6= k.
(18) Theorem 4.5, part (3) shows that span(S) W. Thus v 62 span(S). Now
if S [ fvg is linearly dependent, then there exists fw1 ; : : : ; wn g
S so
that a1 w1 +
+ an wn + bv = 0, with not all of a1 ; : : : ; an ; b equal to
zero. Also, b ≠ 0 since S is linearly independent. Hence,
v = −(a₁/b)w₁ − ⋯ − (aₙ/b)wₙ ∈ span(S), a contradiction.
(19) (a) Suppose a1 v1 + + ak vk = 0. Then a1 Av1 + + ak Avk = 0 =)
a1 =
= ak = 0, since T is linearly independent.
(b) The converse to the statement is: If S = fv1 ; : : : ; vk g is a linearly
independent subset of Rm , then T = fAv1 ; : : : ; Avk g is a linearly
independent subset of Rn with Av1 ; : : : ; Avk distinct. We can construct speci…c counterexamples that contradict either (or both) of
the conclusions of the converse. For a speci…c counterexample in
which Av1 ; : : : ; Avk are distinct but T is linearly dependent, let
1 1
A=
and S = f[1; 0]; [1; 2]g. Then T = f[1; 1]; [3; 3]g. Note
1 1
that S is linearly independent, but the vectors Av1 and Av2 are distinct and T is linearly dependent. For a speci…c counterexample in
which Av1 ; : : : ; Avk are not distinct but T is linearly independent,
1 1
use A =
and S = f[1; 2]; [2; 1]g. Then T = f[3; 3]g. Note
1 1
that Av1 and Av2 both equal [3; 3], and that both S and T are linearly independent. For a …nal counterexample in which both parts
of the conclusion are false, let A = O22 and S = fe1 ; e2 g. Then Av1
and Av2 both equal 0, S is linearly independent, but T = f0g is
linearly dependent.
(c) The converse to the statement is: If S = fv1 ; : : : ; vk g is a linearly independent subset of Rm , then T = fAv1 ; : : : ; Avk g is a linearly independent subset of Rn with Av1 ; : : : ; Avk distinct. Suppose
a1 Av1 +
+ ak Avk = 0. Then a1 A 1 Av1 +
+ ak A 1 Avk = 0
=) a1 v1 +
+ ak vk = 0 ) a1 =
= ak = 0, since S is linearly
independent.
74
Andrilli/Hecker - Answers to Exercises
Section 4.4
(20) Case 1: S is empty. This case is obvious, since the empty set is the only
subset of S.
Case 2: Suppose S = fv1 ; : : : ; vn g is a …nite nonempty linearly independent set in V, and let S1 be a subset of S. If S1 is empty, it is linearly
independent by de…nition. Suppose S1 is nonempty. After renumbering
the subscripts if necessary, we can assume S1 = fv1 ; : : : ; vk g, with k n.
If a1 v1 +
+ ak vk = 0, then a1 v1 +
+ ak vk + 0vk+1 +
0vn = 0,
and so a1 =
= ak = 0, since S is linearly independent. Hence, S1 is
linearly independent.
Case 3: Suppose S is an in…nite linearly independent subset of V. Then
any …nite subset of S is linearly independent by the de…nition of linear
independence for in…nite subsets. If S1 is an in…nite subset, then any
…nite subset S2 of S1 is also a …nite subset of S. Hence S2 is linearly
independent because S is. Thus, S2 is linearly independent, by de…nition.
(21) First, let S = fag. Then a is the only vector in S, so we must only consider span(S fag). But span(S fag) = span(f g) = f0g (by de…nition).
Thus, a 2 span(S fag) if and only if a = 0, which is true if and only if
fag is linearly dependent.
Next, consider S = f g. The empty set is de…ned to be linearly independent. Also, there are no vectors in S, so, in particular, there are no vectors
v in S such that v 2 span(S fvg).
(22) First, suppose that v1 6= 0 and that for every k, with 2
k
n, we
have vk 62 span(fv1 ; : : : ; vk 1 g). It is enough to show that S is linearly
independent by assuming that S is linearly dependent and showing that
this leads to a contradiction. If S is linearly dependent, then there exist real
numbers a1 ; : : : ; an , not all zero, such that
0 = a1 v1 +
+ ai vi +
+ an vn :
Suppose k is the largest subscript such that ak 6= 0. Then
0 = a1 v1 +
+ ak vk :
If k = 1, then a1 v1 = 0 with a1 6= 0. This implies that v1 = 0, a
contradiction. If k ≥ 2, then solving for vₖ yields
vₖ = −(a₁/aₖ)v₁ − ⋯ − (aₖ₋₁/aₖ)vₖ₋₁,
where vₖ is expressed as a linear combination of v₁, …, vₖ₋₁, contradicting the fact that vₖ ∉ span({v₁, …, vₖ₋₁}).
Conversely, suppose S is linearly independent. Notice that v1 6= 0
since any …nite subset of V containing 0 is linearly dependent. We must
prove that for each k with 1 k n, vk 62 span(fv1 ; : : : ; vk 1 g). Now,
by Corollary 4.6, span(fv1 ; : : : ; vk 1 g) span(S fvk g) since
fv1 ; : : : ; vk 1 g
S fvk g. But, by the …rst “boxed” alternate characterization of linear independence following Example 11 in the textbook,
vk 62 span(S fvk g). Therefore, vk 62 span(fv1 ; : : : ; vk 1 g).
75
Andrilli/Hecker - Answers to Exercises
Section 4.4
(23) Reversing the order of the elements, f (k) 6= ak+1 f (k+1) +
+ an f (n) , for
any ak+1 ; : : : ; an , because all polynomials on the right have lower degrees
than f (k) . Now use Exercise 22.
(24) (a) If S is linearly independent, then the zero vector has a unique expression by Theorem 4.10.
Conversely, suppose v 2 span(S) has a unique expression of the form
v = a1 v1 +
+ an vn , where v1 ; : : : ; vn are in S. To show that S
is linearly independent, it is enough to show that any …nite subset
S1 = fw1 ; : : : ; wk g of S is linearly independent. Suppose b1 w1 + +
bk wk = 0. Now b1 w1 +
+ bk wk + a1 v1 +
+ an vn = 0 + v = v.
If wi = vj for some j, then the uniqueness of the expression for
v implies that bi + aj = aj , yielding bi = 0. If, instead, wi does
not equal any vj , then bi = 0, again by the uniqueness assumption.
Thus each bi must equal zero. Hence, S1 is linearly independent by
the de…nition of linear independence.
(b) S is linearly dependent i¤ every vector v in span(S) can be expressed
in more than one way as a linear combination of vectors in S (ignoring
zero coe¢ cients).
(25) Follow the hint in the text. The fundamental eigenvectors are found by
solving the homogeneous system ( In A)v = 0. Each vi has a “1”in the
position corresponding to the ith independent variable for the system and
a “0” in the position corresponding to every other independent variable.
Hence, any linear combination a1 v1 + + ak vk is an n-vector having the
value ai in the position corresponding to the ith independent variable for
the system, for each i. So if this linear combination equals 0, then each
ai must equal 0, proving linear independence.
(26) (a) Since T [ fvg is linearly dependent, by the de…nition of linear independence, there is some …nite subset ft1 ; : : : ; tk g of T such that
a1 t1 + + ak tk + bv = 0; with not all coe¢ cients equal to zero. But
if b = 0, then a1 t1 +
+ ak tk = 0 with not all ai = 0, contradicting the fact that T is linearly independent. Hence, b 6= 0; and thus
+ ( abk )tk : Hence, v 2 span(T ).
v = ( ab1 )t1 +
(b) The statement in part (b) is merely the contrapositive of the statement in part (a).
(27) Suppose that S is linearly independent, and suppose v 2 span(S) can be
expressed both as v = a1 u1 +
+ ak uk and v = b1 v1 +
+ bl vl ; for
distinct u1 ; : : :; uk 2 S and distinct v1 ; : : :; vl 2 S, where these expressions
di¤er in at least one nonzero term. Since the ui ’s might not be distinct
from the vi ’s, we consider the set X = fu1 ; : : : ; uk g [ fv1 ; : : : ; vl g and
label the distinct vectors in X as fw1 ; : : :; wm g: Then we can express v =
a1 u1 + +ak uk as v = c1 w1 + +cm wm ; and also v = b1 v1 + +bl vl as
v = d1 w1 + +dm wm ; choosing the scalars ci ; di ; 1 i m with ci = aj
76
Andrilli/Hecker - Answers to Exercises
Section 4.5
if wi = uj , ci = 0 otherwise, and di = bj if wi = vj , di = 0 otherwise.
Since the original linear combinations for v are distinct, we know that
ci 6= di for some i. Now, v v = 0 = (c1 d1 )w1 +
+ (cm dm )wm :
Since fw1 ; : : :; wm g S; a linearly independent set, each cj dj = 0 for
every j with 1 j m. But this is a contradiction since ci 6= di :
Conversely, assume every vector in span(S) can be uniquely expressed
as a linear combination of elements of S: Since 0 2 span(S), there is
exactly one linear combination of elements of S that equals 0. Now, if
fv1 ; : : : ; vn g is any …nite subset of S, we have 0 = 0v1 + +0vn . Because
this representation is unique, it means that in any linear combination of
fv1 ; : : : ; vn g that equals 0, the only possible coe¢ cients are zeroes. Thus,
by de…nition, S is linearly independent.
(28) (a) F
(b) T
(c) T
(d) F
(e) T
(f) T
(g) F
(h) T
(i) T
Section 4.5
(1) For each part, row reduce the matrix whose rows are the given vectors.
The result will be I4 . Hence, by the Simpli…ed Span Method, the set of
vectors spans R4 . Row reducing the matrix whose columns are the given
vectors also results in I4 . Therefore, by the Independence Test Method,
the set of vectors is linearly independent. (After the …rst row reduction,
you could just apply part (1) of Theorem 4.13, and then skip the second
row reduction.)
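Both row reductions described above can be delegated to software, and, as noted, part (1) of Theorem 4.13 lets a single rank computation answer both questions for four vectors in R⁴. A minimal sketch, assuming Python with SymPy; the four vectors are placeholders, not the ones printed in the exercises.

from sympy import Matrix

vectors = [[1, 1, 0, 0],
           [0, 1, 1, 0],
           [0, 0, 1, 1],
           [0, 0, 0, 1]]

A = Matrix(vectors)      # rows are the candidate basis vectors
# Four vectors in R^4 form a basis iff this matrix has rank 4
# (equivalently, iff it row reduces to I4).
print(A.rank() == 4)     # True for this choice
print(A.rref()[0])       # the reduced row echelon form, I4 here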
(2) Follow the procedure in the answer to Exercise 1.
(3) Follow the procedure in the answer to Exercise 1, except that row reduction results in I5 instead of I4 .
(4) (a) Not a basis: jSj < dim(R4 ). (linearly independent but does not span)
(b) Not a basis: jSj < dim(R4 ). (linearly dependent and does not span)
(c) Basis: Follow the procedure in the answer to Exercise
2
3
2
1
1
2
0
2
7
6 0
6 3
0
6
10
7 row reduces to 6
(d) Not a basis: 6
4 2
4 0
6 10
3 5
0
0
7
7
1
1.
0
1
0
0
2
1
0
0
(linearly dependent and does not span)
(e) Not a basis: jSj > dim(R4 ). (linearly dependent but spans)
(5) (a) [ 1; 1; 1; 1] is not a scalar multiple of [2; 3; 0; 1].
Also, [1; 4; 1; 2] = 1[2; 3; 0; 1] + 1[ 1; 1; 1; 1], and
[3; 2; 1; 0] = 1[2; 3; 0; 1] 1[ 1; 1; 1; 1].
(b) 2
77
3
0
0 7
7.
1 5
0
Andrilli/Hecker - Answers to Exercises
Section 4.5
(c) No; dim(span(S)) = 2 6= 4 = dim(R4 )
(d) Yes. B is a basis for span(S) by Theorem 4.14, and so B is a spanning
set for span(S). Finally, by part (1) of Theorem 4.13, no set smaller
than B could be a spanning set for span(S).
(6) (a) Solving an appropriate homogeneous system shows that B is linearly
independent. Also,
1
1
1
(x3 x2 + 2x + 1) + 12
(2x3 + 4x 7) 12
(3x3 x2 6x + 6)
x 1 = 12
3
2
3
2
and x
3x
22x + 34 = 1(x
x + 2x + 1) 3(2x3 + 4x 7) +
3
2
2(3x
x
6x + 6).
(b) dim(span(S)) = 3
(c) No; dim(span(S)) = 3 6= 4 = dim(P3 )
(d) By Theorem 4.14, B is a basis for span(S), and so, B is a spanning
set for span(S). Finally, by part (1) of Theorem 4.13, no set smaller
than B could be a spanning set for span(S).
(7) (a) W is nonempty, since 0 2 W: Let X1 ; X2 2 W; c 2 R: Then A(X1 +
X2 ) = AX1 + AX2 = 0 + 0 = 0: Also, A(cX1 ) = c(AX1 ) = c0 = 0:
Hence W is closed under addition and scalar multiplication, and so
is a subspace by Theorem 4.2.
2
3
2
1 0 15
1
5
6
7
1
6 0 1 2
1 7
5
5
6
7
(b) A row reduces to 6
7. Thus, a basis for W is
6 0 0 0
0
0 7
4
5
0
0
0
0
0
f[ 1; 2; 5; 0; 0]; [ 2; 1; 0; 5; 0]; [ 1; 1; 0; 0; 1]g
(c) dim(W) = 3; rank(A) = 2; 3 + 2 = 5
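Part (c) is an instance of the relationship dim(W) + rank(A) = n proved in Exercise 13 of Section 4.6, and it can be confirmed mechanically. A minimal sketch, assuming Python with SymPy; the 4 × 5 matrix below is an arbitrary stand-in for the A of the exercise.

from sympy import Matrix

# Any matrix A determines the subspace W = {X : AX = 0}.
A = Matrix([[1, 0, 2, -1, 3],
            [0, 1, 1,  2, 0],
            [1, 1, 3,  1, 3],
            [2, 1, 5,  0, 6]])

W_basis = A.nullspace()                     # basis for W, one vector per independent variable
print(len(W_basis), A.rank())               # 3 2
print(len(W_basis) + A.rank() == A.cols)    # True: dim(W) + rank(A) = 5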
(8) The dimension of a proper nontrivial subspace must be 1 or 2, by Theorem
4.16. If the basis for the subspace contains one [two] elements, then the
subspace is a line [plane] through the origin as in Exercise 18 of Section
4.3.
(9) These vectors are linearly independent by Exercise 23, Section 4.4. The
given set is clearly the basis of a subspace of Pn of dimension n + 1. Apply
Theorem 4.16 to show that this subspace equals Pn .
(10) (a) Let S = fI2 ; A; A2 ; A3 ; A4 g. If S is linearly independent, then S is a
basis for W = span(S) M22 , and since dim(W) = 5 and dim(M22 )
= 4, this contradicts Theorem 4.16. Hence S is linearly dependent.
Now use the de…nition of linear dependence.
(b) Generalize the proof in part (a).
(11) (a) B is easily seen to be linearly independent using Exercise 22, Section
4.4. To show B spans V, note that every f 2 V can be expressed as
f = (x 2)g, for some g 2 P4 .
78
Andrilli/Hecker - Answers to Exercises
Section 4.5
(b) 5
(c) f(x
2)(x
3); x(x
3); x2 (x
2)(x
2)(x
3); x3 (x
2)(x
3)g
(d) 4
(12) (a) Let V = R3 , and let S = f[1; 0; 0]; [2; 0; 0]; [3; 0; 0]g.
(b) Let V = R3 , and let T = f[1; 0; 0]; [2; 0; 0]; [3; 0; 0]g.
(13) If S spans V, then S is a basis by Theorem 4.13. If S is linearly independent, then S is a basis by Theorem 4.13.
(14) By Theorem 4.13, if S spans V, then S is a basis for V, so S is linearly
independent. Similarly, if S is linearly independent, then S is a basis for
V, so S spans V.
(15) (a) Suppose B = fv1 ; : : : ; vn g.
Linear independence: a1 Av1 +
+ an Avn = 0
=) A 1 (a1 Av1 + + an Avn ) = A 1 0 =) a1 v1 + + an vn = 0
=) a1 =
= an = 0.
Spanning: Suppose A 1 v = a1 v1 + + an vn . Then v = AA 1 v =
a1 Av1 +
+ an Avn .
(b) Similar to part (a).
(c) Note that Aei = (ith column of A).
(d) Note that ei A = (ith row of A).
(16) (a) If S = ff1 ; : : : ; fk g, let n = max(degree(f1 ); : : :,degree(fk )).
Then S Pn .
(b) Apply Theorem 4.5, part (3).
(c) xn+1 62 span(S).
(17) (a) Suppose V is …nite dimensional. Since V has an in…nite linearly
independent subset, part (2) of Theorem 4.13 is contradicted.
(b) f1; x; x2 ; : : :g is a linearly independent subset of P, and hence, of any
vector space containing P. Apply part (a).
(18) (a) It is given that B is linearly independent.
(b) Span(B) is a subspace of V, and so by part (3) of Theorem 4.5,
span(S)
span(B). Thus, V
span(B). Also, B
V, and so
span(B) V, again by part (3) of Theorem 4.5. Hence, span(B) =
V.
(c) Since w 2
= B, this follows directly from the de…nition for “B is a
maximal linearly independent subset of S” given in the textbook.
79
Andrilli/Hecker - Answers to Exercises
Section 4.5
(d) Since C = B [ fwg is linearly dependent, by the de…nition of linear
independence, there is some …nite subset fv1 ; : : : ; vk g of B such that
a1 v1 +
+ ak vk + bw = 0; with not all coe¢ cients equal to zero.
But if b = 0, then a1 v1 +
+ ak vk = 0 with not all ai = 0,
contradicting the fact that B is linearly independent. Hence, b 6= 0;
and thus w = ( ab1 )v1 + + ( abk )vk : Hence, w 2 span(B). (This
is essentially identical to the solution for part (a) of Exercise 26 of
Section 4.4.)
(e) Part (d) proves that S span(B), since all vectors in S that are not
already in B are proven to be in span(B). Part (b) then shows that
B spans V, which, by part (a), is su¢ cient to …nish the proof of the
theorem.
(19) (a) It is given that S is a spanning set.
(b) Let S be a spanning set for a vector space V. If S is linearly dependent, then there is a subset C of S such that C 6= S and C spans
V.
(c) The statement in part (b) is essentially half of the “if and only if”
statement in Exercise 12 in Section 4.4. We rewrite the appropriate
half of the proof of that exercise.
Since S is linearly dependent, then a1 v1 +
+ an vn = 0 for some
fv1 ; : : : ; vn g
S and a1 ; : : : ; an 2 R, with some ai 6= 0. Then
ai 1
ai+1
an
vi = aa1i v1
ai vi 1
ai vi+1
ai vn . So,
if w 2 span(S), then w = bvi + c1 w1 +
+ ck wk , for some
1
fw1 ; : : : ; wk g S fvi g and b; c1 ; : : : ; ck 2 R. Hence, w = ba
ai v1
bai+1
ban
vi 1
+
ai vi+1
ai vn + c1 w1 +
ck wk 2 span(S
fvi g). So, span(S)
span(S
fvi g). Also,
span(S fvi g) span(S) by Corollary 4.6. Thus span(S fvi g) =
span(S) = V. Letting C = S fvi g completes the proof.
bai
ai
1
(20) Suppose B is a basis for V. Then B is linearly independent in V. If v 2
= B;
then v 2 span(B) = V. Thus, C = B [ fvg is linearly dependent since
v 2 span(C fvg) = span(B). Thus B is a maximal linearly independent
subset of V.
(21) Suppose B is a basis for V. Then B spans V. Also, if v 2 B, then
span(B fvg) 6= span(B) = V by Exercise 12, Section 4.4. Thus B is a
minimal spanning set for V.
(22) (a) The empty set is linearly independent and is a subset of W. Hence
0 2 A.
(b) Since V and W share the same operations, every linearly independent
subset T of W is also a linearly independent subset of V (by the de…nition of linear independence). Hence, using part (2) of Theorem 4.13
on T in V shows that k = jT j dim(V).
80
Andrilli/Hecker - Answers to Exercises
Section 4.5
(c) T is clearly linearly independent by the de…nition of A. Suppose
C
W with T
C and T 6= C. We must show that C is linearly
dependent. If not, then jCj 2 A, by the de…nition of A. (Note that
jCj is …nite, by an argument similar to that given in part (b).) But
T C and T 6= C implies that jCj > jT j = n. This contradicts the
fact that n is the largest element of A.
(d) This is obvious.
(e) Since T is a …nite basis for W, from part (d), we see that W is …nite
dimensional. Also, dim(W) = jT j dim(V), by part (b).
(f) Suppose dim(W) = dim(V), and let T be a basis for W. Then T
is a linearly independent subset of V (see the discussion in part (b))
with jT j = dim(W) = dim(V). Part (2) of Theorem 4.13 then shows
that T is a basis for V. Hence, W = span(T ) = V.
(g) The converse of part (f) is “If W = V, then dim(W) = dim(V).”
This is obviously true, since the dimension of a …nite dimensional
vector space is unique.
(23) Let B = fv1 ; : : : ; vn 1 g Rn be a basis for V. Let A be the (n 1) n
matrix whose rows are the vectors in B, and consider the homogeneous
system AX = 0. By Corollary 2.6, the system has a nontrivial solution
x. Therefore, Ax = 0. That is, x vi = 0 for each 1 i n 1. Now,
suppose v 2 V. Then there are a1 ; : : : ; an 1 2 R such that v = a1 v1 +
+an 1 vn 1 . Hence, x v = x a1 v1 +
+x an 1 vn 1 = 0 +
+0 =
0. Therefore V
fv 2 Rn j x v = 0g. Let W = fv 2 Rn j x v = 0g.
So, V
W. Now, it is easy to see that W is a subspace of Rn . (Clearly
0 2 W since x 0 = 0. Hence W is nonempty. Next, if w1 ; w2 2 W, then
x (w1 + w2 ) = x w1 + x w2 = 0 + 0 = 0. Thus, (w1 + w2 ) 2 W, and
so W is closed under addition. Also, if w 2 W and c 2 R, then x (cw)
= c(x w) = c(0) = 0. Therefore (cw) 2 W, and W is closed under scalar
multiplication. Hence, by Theorem 4.2, W is a subspace of Rn .) If V = W,
we are done. Suppose, then, that V =
6 W. Then, by Theorem 4.16, n 1 =
dim(V) < dim(W) dim(Rn ) = n. Hence, dim(W) = n, and so W = Rn ,
by Theorem 4.16. But x 2
= W, since x x 6= 0, because x is nontrivial.
This contradiction completes the proof.
(24) Let B be a maximal linearly independent subset of S. S only has …nitely many linearly independent subsets, since it only has …nitely many
elements. Also, since f g
S, S has at least one linearly independent
subset. Select any set B of the largest possible size from the list of all of
the linearly independent subsets of S. Then no other linearly independent
subset of S can be larger than B. So, by Theorem 4.14, B is a basis for
V. Since jBj is …nite, V is …nite dimensional.
81
Andrilli/Hecker - Answers to Exercises
Section 4.6
(25) (a) T  (b) F  (c) F  (d) F  (e) F  (f) T  (g) F  (h) F  (i) F  (j) T
Section 4.6
Problems 1 through 10 ask you to construct a basis for some vector space.
The answers here are those you will most likely get using the techniques in the
textbook. However, other answers are possible.
(1) (a) f[1; 0; 0; 2; 2]; [0; 1; 0; 0; 1]; [0; 0; 1; 1; 0]g
1 6 3
5 ; 5 ; 5 ]; [0; 1;
(b) f[1; 0;
1
5;
9
5;
2
5 ]g
(c) fe1 ; e2 ; e3 ; e4 ; e5 g
(d) f[1; 0; 0; 2;
13
9
4 ]; [0; 1; 0; 3; 2 ]; [0; 0; 1; 0;
(2) fx3 3x; x2 x; 1g
82
3 2
0
< 1 0
(3) 4 43 13 5 ; 4 13
:
2 0
0
1
3 2
0
1 5 4
0
;
3
0
0
39
0 =
0 5
;
1
(e) f[3; 2; 2]; [1; 2; 1]; [3; 2; 7]g
(4) (a) f[3; 1; 2]; [6; 2; 3]g
(b) f[4; 7; 1]; [1; 0; 0]g
(c) f[1; 3; 2]; [2; 1; 4]; [0; 1; 1]g
(d) f[1; 4; 2]; [2; 8; 5]; [0; 7; 2]g
(5) (a) fx3
8x2 + 1; 3x3
3
3
(b) f 2x + x + 2; 3x
(c) fx3 ; x2 ; xg
1
4 ]g
(f) f[3; 1; 0]; [2; 1; 7]; [1; 5; 7]g
(g) fe1 ; e3 g
(h) f[1; 3; 0]; [0; 1; 1]g
2x2 + x; 4x3 + 2x
2
3
10; x3
20x2
x + 12g
2
x + 4x + 6; 8x + x + 6x + 10g
(d) f1; x; x2 g
(e) fx3 + x2 ; x; 1g
(f) f8x3 ; 8x3 + x2 ; 8x3 + x; 8x3 + 1g
(6) (a) One answer is f ij j 1
(i; j) entry = 1 and all
82
3 2
1
< 1 1 1
(b) 4 1 1 1 5 , 4 1
:
1 1 1
1
2
3 2
1 1
1
1
4 1 1
1 5, 4 1
1 1
1
1
2
3 2
1 1
1
1
4 1 1
1 5, 4 1
1 1
1
1
i; j 3g, where
other entries 0.
3 2
1 1
1
1 1 5, 4 1
1 1
1
3 2
1 1
1
1 1 5, 4 1
1 1
1
3 2
1 1
1
1 1 5, 4 1
1 1
1
82
ij
is the 3
3
1 1
1 1 5,
1 1
3
1 1
1 1 5,
1 1
39
1 1 =
1 1 5
;
1 1
3 matrix with
Andrilli/Hecker - Answers to Exercises
82
3 2
3 2
0 1 0
0
< 1 0 0
(c) 4 0 0 0 5 ; 4 1 0 0 5 ; 4 0
:
0 0 0
0 0 0
1
2
3 2
3 2
0 0 0
0 0 0
0 0
4 0 1 0 5; 4 0 0 1 5;4 0 0
0 0 0
0 1 0
0 0
82
3 2
3 2
1 1 0
1
< 1 0 0
(d) 4 0 1 0 5 , 4 0 1 0 5, 4 0
:
0 0 1
0 0 1
0
2
3 2
3 2
1 0 0
1 0 0
1 0
4 1 1 0 5, 4 0 1 1 5, 4 0 1
0 0 1
0 0 1
1 0
2
3 2
3 2
1 0 0
1 0 0
1 0
4 0 1 0 5, 4 0 2 0 5, 4 0 1
0 1 1
0 0 1
0 0
Section 4.6
0
0
0
0
0
1
0
1
0
3
1
0 5;
0
39
=
5
;
3
1
0 5,
1
3
0
0 5,
1
39
0 =
0 5
;
2
(7) (a) f[1; 3; 0; 1; 4]; [2; 2; 1; 3; 1]; [1; 0; 0; 0; 0]; [0; 1; 0; 0; 0]; [0; 0; 1; 0; 0]g
(b) f[1; 1; 1; 1; 1]; [0; 1; 1; 1; 1]; [0; 0; 1; 1; 1]; [0; 0; 1; 0; 0]; [0; 0; 0; 1; 0]g
(c) f[1; 0; 1; 0; 0]; [0; 1; 1; 1; 0]; [2; 3; 8; 1; 0]; [1; 0; 0; 0; 0]; [0; 0; 0; 0; 1]g
(8) (a) fx3
x2 ; x4
(c) fx4
82
<
(9) (a) 4
:
x3 + x2
(b) f3x
2
0
4 0
0
82
<
(b) 4
:
2
1
4 0
0
82
<
(c) 4
:
2; x3
1
1
0
1
0
0
0
1
1
0
0
0
3
1
0
3x3 + 5x2
x; x4 ; x3 ; 1g
6x + 4; x4 ; x2 ; xg
x + 1; x3 x2 + x
3 2
3 2
1
0
0
1
1 5;4 0
1 5;4 1
0
1
1
0
39
3 2
3 2
0 0 =
0 0
5;4 1 0 5;4 0 0 5
;
1 0
0 0
3 2
3 2
0
3
0
2
1 5;4 1
0 5;4 0
4
2
3
6
3 2
3 2
39
0 1
0 0 =
5;4 0 0 5;4 0 0 5
;
0 0
1 0
3 2
3 2
0
1
0
2
7 5;4 1
3 5;4 3
1
0
2
0
83
1; x2 x + 1; x2 ; xg
3
0
0 5,
0
3
1
1 5,
8
3
0
1 5,
1
Andrilli/Hecker - Answers to Exercises
2
6
4 0
0
3 2
0
0
1 5;4 0
1
0
3 2
1
0
0 5;4 0
0
1
Section 4.6
39
0 =
0 5
;
0
(10) f ij j 1 i j 4g, where ij is the 4 4 matrix with (i; j) entry = 1
and all other entries 0. Notice that the condition 1 i j
4 assures
that only upper triangular matrices are included.
(11) (a) 2
(b) 8
(c) 4
(d) 3
(12) (a) A basis for Un is f ij j 1
i
j
ng, where ij is the n n
matrix having a zero in every entry, except for the (i; j)th entry,
which equals 1. Since there are n − i + 1 entries in the ith row of
an n × n matrix on or above the main diagonal, this basis contains
\sum_{i=1}^{n}(n - i + 1) = \sum_{i=1}^{n} n - \sum_{i=1}^{n} i + \sum_{i=1}^{n} 1 = n² − n(n+1)/2 + n
= (n² + n)/2 elements.
Similarly, a basis for Ln is f ij j 1 j i ng, and a basis for
the symmetric n n matrices is f ij + Tij j 1 i j ng.
(b) (n2
n)=2
(13) (a) SA is nonempty, since 0 2 SA : Let X1 ; X2 2 SA ; c 2 R: Then
A(X1 + X2 ) = AX1 + AX2 = 0 + 0 = 0: Also, A(cX1 ) = c(AX1 ) =
c0 = 0: Hence SA is closed under addition and scalar multiplication,
and so is a subspace by Theorem 4.2.
(b) Let B be the reduced row echelon form of A. Each basis element
of SA is found by setting one independent variable of the system
AX = O equal to 1 and the remaining independent variables equal
to zero. (The values of the dependent variables are then determined
from these values.) Since there is one independent variable for each
nonpivot column in B, dim(SA ) = number of nonpivot columns of B.
But, clearly, rank(A) = number of pivot columns in B. So dim(SA )
+ rank(A) = total number of columns in B = n.
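The counting argument in part (b) can be watched in action: each null space basis vector has a 1 in the position of its own independent (free) variable and 0 in the positions of the other independent variables. A minimal sketch, assuming Python with SymPy; the matrix A is an illustrative choice.

from sympy import Matrix

A = Matrix([[1, 2, 0, 3],
            [0, 0, 1, 4]])

rref, pivots = A.rref()
free_cols = [j for j in range(A.cols) if j not in pivots]
print(pivots, free_cols)            # (0, 2) [1, 3]

# One basis vector of S_A per nonpivot column: that free variable is set to 1,
# the other free variables to 0, and the pivot variables are solved for.
for vec in A.nullspace():
    print(vec.T, [vec[j] for j in free_cols])
# The free-variable entries come out as [1, 0] for the first vector and [0, 1] for the second.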
(14) Let V and S be as given in the statement of the theorem. Let
A = fk j a set T exists with T S, jT j = k, and T linearly independentg.
Step 1: The empty set is linearly independent and is a subset of S. Hence
0 2 A, and A is nonempty.
Step 2: Suppose k 2 A. Then, every linearly independent subset T of
S is also a linearly independent subset of V. Hence, using part (2) of
Theorem 4.13 on T shows that k = jT j dim(V).
Step 3: Suppose n is the largest number in A, which must exist, since
A is nonempty and all its elements are
dim(V). Let B be a linearly
independent subset of S with jBj = n, which exists because n 2 A. We
want to show that B is a maximal linearly independent subset of S. Now,
B is given as linearly independent. Suppose C
S with B
C and
84
Andrilli/Hecker - Answers to Exercises
Section 4.6
B 6= C. We must show that C is linearly dependent. If not, then jCj 2 A,
by the de…nition of A. (Note that jCj is …nite, by an argument similar to
that given in Step 2.) But B C and B 6= C implies that jCj > jBj = n.
This contradicts the fact that n is the largest element of A.
Step 4: The set B from Step 3 is clearly a basis for V, by Theorem 4.14.
This completes the proof.
(15) (a) Let B 0 be a basis for W. Expand B 0 to a basis B for V (using
Theorem 4.18).
(b) No; consider the subspace W of R3 given by W = f[a; 0; 0] j a 2 Rg.
No subset of B = f[1; 1; 0]; [1; 1; 0]; [0; 0; 1]g (a basis for R3 ) is a
basis for W.
(c) Yes; consider Y = span(B 0 ).
(16) (a) Let B1 be a basis for W. Expand B1 to a basis for V (using Theorem 4.18). Let B2 = B B1 , and let W 0 = span(B2 ). Suppose
B1 = fv1 ; : : : vk g and B2 = fu1 ; : : : ; ul g. Since B is a basis for V, all
v 2 V can be written as v = w + w0 with w = a1 v1 + + ak vk 2 W
and w0 = b1 u1 +
+ bl ul 2 W 0 .
For uniqueness, suppose v = w1 + w10 = w2 + w20 , with w1 ; w2 2
W and w10 ; w20 2 W 0 . Then w1 w2 = w20 w10 2 W \ W 0 = f0g.
Hence, w1 = w2 and w10 = w20 .
(b) In R3 , consider W = f[a; b; 0] j a; b 2 Rg. We could let
W 0 = f[0; 0; c] j c 2 Rg or W 0 = f[0; c; c] j c 2 Rg.
(17) (a) Let A be the matrix whose rows are the n-vectors in S. Let C be
the reduced row echelon form matrix for A. If the nonzero rows of C
form the standard basis for Rn ; then span(S) = span(fe1 ; : : : ; en g)
= Rn : Conversely, if span(S) = Rn ; then there must be at least n
nonzero rows of C by part (1) of Theorem 4.13. But this means there
are at least n rows with pivots, and hence all n columns have pivots.
Thus the nonzero rows of C are e1 ; : : : ; en ; which form the standard
basis for Rn .
(b) Let A and C be as in the answer to part (a). Note that A and C are
n n matrices. Now, since jBj = n, B is a basis for Rn () span(B)
= Rn (by Theorem 4.13) () the rows of C form the standard basis
for Rn (from part (a)) () C = In () jAj =
6 0:
(18) Let A be an m
n matrix and let S
Rn be the set of the m rows of A.
(a) Suppose C is the reduced row echelon form of A. Then the Simpli…ed Span Method shows that the nonzero rows of C form a basis for span(S). But the number of such rows is rank(A). Hence,
dim(span(S)) = rank(A).
85
Andrilli/Hecker - Answers to Exercises
Section 4.7
(b) Let D be the reduced row echelon form of AT . By the Independence
Test Method, the pivot columns of D correspond to the columns of
AT which make up a basis for span(S). Hence, the number of pivot
columns of D equals dim(span(S)). However, the number of pivot
columns of D equals the number of nonzero rows of D (each pivot
column shares the single pivot element with a unique pivot row, and
vice-versa), which equals rank(AT ).
(c) Both ranks equal dim(span(S)), and so they must be equal.
(19) Use the fact that sin( i + j ) = (cos i ) sin j + (sin i ) cos j : Note that
the ith row of A is then (cos i )x1 + (sin i )x2 : Hence each row of A is a
linear combination of fx1 ; x2 g; and so dim(span({rows of A}))
2<n
Hence, A has less than n pivots when row reduced, and so jAj = 0.
(20) (a) T
(b) T
(c) F
(d) T
(e) T
(f) F
(g) F
Section 4.7
(1) (a) [v]B = [7; 1; 5]
(b) [v]B =
[3; 2; 1; 2]
(d) [v]B =
[10; 3; 2; 1]
(g) [v]B = [ 1; 4; 2]
(h) [v]B = [2; 3; 1]
(e) [v]B = [4; 5; 3]
(i) [v]B = [5; 2; 6]
(c) [v]B = [ 2; 4; 5]
(f) [v]B = [ 3; 2; 3]
2
3
2
102
20
3
1
6 4
67
13
2 5
(2) (a) 4
(d) 6
4 0
36
7
1
4
2
(b) 4
2
3
6
2
20
(c) 4 24
9
3
1
1 5
1
6
6
7
30
24
11
(e)
2
(f) 4
3
69
80 5
31
2
(g) 4
3
2
(j) [v]B = [5; 2]
3
4
2
9
5
1
3 7
7
2
3
1 5
13 13
15
2
3
6
1
1
1
1
1
3
2
2
2
3
1
3
2
2 5
3
3
4
5 5
3
(3) [(2; 6)]B = (2; 2) (see Figure 11)
(4) (a) P =
2
0
(b) P = 4 0
1
13
18
1
0
0
31
11
8
,Q=
,T=
43
29 21
3
2
3
2
0
0 1 0
0
1 5, Q = 4 1 0 0 5, T = 4 0
0
0 0 1
1
86
1
1
0
1
0
3
4
3
1
0 5
0
Andrilli/Hecker - Answers to Exercises
Section 4.7
Figure 11: (2; 6) in B-coordinates
2
2
4
6
(c) P =
11
2
25
T = 4 31
145
2
3
6 41
(d) P = 6
4 19
4
2
4
6 1
T=6
4 5
2
8
25
45
97
120
562
2
2
1
1
45
250
113
22
3
1
7
3
3
2
24
2
13
3
43 5, Q = 4 30
139 13
76
3
150
185 5
868
3
2
77
38
6
420
205 7
7, Q = 6
5
4
191
93
373
18
1
1 7
7
2 5
1
87
3
1
1 5,
5
1
0
1
3
0 1
1 2
2 6
1 6
3
3
1 7
7,
6 5
36
Andrilli/Hecker - Answers to Exercises
Section 4.7
3
1 6 3
(5) (a) C = ([1; 4; 0; 2; 0]; [0; 0; 1; 4; 0]; [0; 0; 0; 0; 1]); P = 4 1 5 3 5;
1 3 2
2
3
1
3
3
1
0 5; [v]B = [17; 4; 13]; [v]C = [2; 2; 3]
Q=P 1=4 1
2
3
1
2
3
1
3
1
14
4 5;
(b) C = ([1; 0; 17; 0; 21]; [0; 1; 3; 0; 5]; [0; 0; 0; 1; 2]); P = 4 5
0
2
3
2
3
34
7
2
3
1 5; [v]B = [5; 2; 3]; [v]C = [2; 9; 5]
Q = P 1 = 4 15
10
2
1
2
(c) C = ([1; 0; 0; 0]; [0; 1; 0; 0]; [0; 0; 1; 0]; [0; 0; 0; 1]);
2
1
2
3
3
6
4
2
6
6
2
6 1
7
3
0 7
7; Q = P 1 = 6
P=6
6
4 4
5
3
3
1
6
5
4
6
2
4
2
5
[v]B = [2; 1; 3; 7]; [v]C = [10; 14; 3; 12]
(6) For each i, vi = 0v1 + 0v2 +
[0; 0; : : : ; 1; : : : 0] = ei :
(7) (a) Transition matrices
3 2
2
0
0 1 0
4 0 0 1 5, 4 1
0
1 0 0
+ 1vi +
to C1 , C2 , C3 ,
3 2
1
0 1
0 0 5, 4 0
1 0
0
4
12
7
9
27
31
2
22
67
77
2
23
71
41
7
7
7
7;
7
5
+ 0vn ; and so [vi ]B =
C4 , and C5 , respectively:
3 2
3 2
0 0
0 1 0
0
0 1 5, 4 1 0 0 5, 4 0
1 0
0 0 1
1
0
1
0
3
1
0 5
0
(b) Let B = (v1 ; : : : ; vn ) and C = (v 1 ; : : : ; v n ), where 1 ; 2 ;
; n
is a rearrangement of the numbers 1; 2;
; n. Let P be the transition
matrix from B to C. The goal is to show, for 1
i
n, that the
ith row of P equals e i . This is true i¤ (for 1
i
n) the i th
column of P equals ei . Now the i th column of P is, by de…nition,
[v i ]C . But since v i is the ith vector in C, [v i ]C = ei , completing
the proof.
(8) (a) Following the Transition Matrix Method, it is necessary to row reduce
[ first vector in C, second vector in C, …, nth vector in C | Iₙ ].
The algorithm from Section 2.4 for calculating inverses shows that
the result obtained in the last n columns is the desired inverse.
(b) Following the Transition Matrix Method, it is necessary to row reduce
[ Iₙ | first vector in B, second vector in B, …, nth vector in B ].
Clearly this is already in row reduced form, and so the result is as
desired.
(9) Let S be the standard basis for Rⁿ. Exercise 8(b) shows that P is the
transition matrix from B to S, and Exercise 8(a) shows that Q⁻¹ is the
transition matrix from S to C. Theorem 4.21 finishes the proof.
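Exercises 8 and 9 together give a recipe: put the vectors of B into the columns of P, the vectors of C into the columns of Q, and Q⁻¹P is the transition matrix from B to C. A minimal sketch, assuming Python with SymPy; the two bases of R² are arbitrary illustrations.

from sympy import Matrix

# Columns of P are the vectors of B; columns of Q are the vectors of C.
P = Matrix([[1, 1],
            [0, 1]])          # B = ([1, 0], [1, 1])
Q = Matrix([[2, 1],
            [1, 1]])          # C = ([2, 1], [1, 1])

T = Q.inv() * P               # transition matrix from B to C

v_B = Matrix([3, -2])         # coordinates of some v relative to B
v   = P * v_B                 # v in standard coordinates
v_C = T * v_B                 # the same v in C-coordinates
print(v.T, (Q * v_C).T)       # both give v in standard coordinates, so they agree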
(10) C = ([ 142; 64; 167]; [ 53; 24; 63]; [ 246; 111; 290])
(11) (a) Easily veri…ed with straightforward computations.
(b) [v]B = [1; 2; 3]; Av = [14; 2; 5]; D[v]B = [Av]B = [2; 2; 3]
(12) (a) λ₁ = 2, v₁ = [2; 1; 5]; λ₂ = −1, v₂ = [19; 2; 31]; λ₃ = −3, v₃ = [6; 2; 5]
(b) D = \begin{bmatrix} 2 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & -3 \end{bmatrix}
(c) \begin{bmatrix} 2 & 19 & 6 \\ 1 & 2 & 2 \\ 5 & 31 & 5 \end{bmatrix}
(13) Let V, B , and the ai ’s and w’s be as given in the statement of the theorem.
Proof of part (1): Suppose that [w1 ]B = [b1 ; : : : ; bn ] and [w2 ]B = [c1 ; : : : ; cn ].
Then, w1 = b1 v1 +
+bn vn and w2 = c1 v1 +
+cn vn . Hence,
w1 + w2 = b1 v1 +
+bn vn + c1 v1 +
+cn vn = (b1 + c1 )v1 +
+(bn + cn )vn , implying [w1 + w2 ]B = [(b1 + c1 ); : : : ; (bn + cn )], which
equals [w1 ]B + [w2 ]B .
Proof of part (2): Suppose that [w1 ]B = [b1 ; : : : ; bn ]. Then, w1 = b1 v1
+
+bn vn . Hence, a1 w1 = a1 (b1 v1 +
+bn vn ) = a1 b1 v1 +
+a1 bn vn , implying [a1 w1 ]B = [a1 b1 ; : : : ; a1 bn ], which equals a1 [b1 ; : : : ; bn ].
Proof of part (3): Use induction on k.
Base Step (k = 1): This is just part (2).
Inductive Step: Assume true for any linear combination of k vectors, and
prove true for a linear combination of k + 1 vectors. Now, [a1 w1 +
+
ak wk + ak+1 wk+1 ]B = [a1 w1 + + ak wk ]B + [ak+1 wk+1 ]B (by part (1))
= [a1 w1 +
+ ak wk ]B + ak+1 [wk+1 ]B (by part (2)) = a1 [w1 ]B +
+ak [wk ]B + ak+1 [wk+1 ]B (by the inductive hypothesis).
(14) Let v 2 V. Then, using Theorem 4.20, (QP)[v]B = Q[v]C = [v]D . Hence,
by Theorem 4.20, QP is the (unique) transition matrix from B to D.
89
Andrilli/Hecker - Answers to Exercises
Chapter 4 Review
(15) Let B, C, V, and P be as given in the statement of the theorem. Let
Q be the transition matrix from C to B. Then, by Theorem 4.21, QP
is the transition matrix from B to B. But, clearly, since for all v 2 V,
I[v]B = [v]B , Theorem 4.20 implies that I must be the transition matrix
from B to B. Hence, QP = I, finishing the proof of the theorem.
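The conclusion of Exercise 15 — the transition matrix from C to B is the inverse of the one from B to C — can be spot-checked with the same illustrative bases used in the previous sketch (again assuming Python with SymPy).

from sympy import Matrix, eye

P = Matrix([[1, 1], [0, 1]])      # columns: basis B
Q = Matrix([[2, 1], [1, 1]])      # columns: basis C

B_to_C = Q.inv() * P
C_to_B = P.inv() * Q

print(C_to_B * B_to_C == eye(2))  # True: the two transition matrices are inverses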
(16) (a) F
(b) T
(c) T
(d) F
(e) F
(f) T
(g) F
Chapter 4 Review Exercises
(1) This is a vector space. To prove this, modify the proof in Example 7 in
Section 4.1.
(2) Zero vector = [ 4; 5]; additive inverse of [x; y] is [ x
8; y + 10]
(3) (a), (b), (d), (f) and (g) are not subspaces; (c) and (e) are subspaces
(4) (a) span(S) = f[a; b; c; 5a
3b + c] j a; b; c 2 Rg =
6 R4
(b) Basis = f[1; 0; 0; 5]; [0; 1; 0; 3]; [0; 0; 1; 1]g; dim(span(S)) = 3
(5) (a) span(S) = fax3 + bx2 + ( 3a + 2b)x + c j a; b; c 2 Rg =
6 P3
(b) Basis = fx3
(6) (a) span(S) =
3x; x2 + 2x; 1g; dim(span(S)) = 3
a
c
b
2a + b
1
0 4
0
2 0
dim(span(S)) = 4
(b) Basis =
4a
c
0 1
0 1
;
3b
a; b; c; d 2 R
d
3
0
;
0
1
0 0
1 0
;
6= M23
0
0
0
0
0
1
(7) (a) S is linearly independent
(b) S is a maximal linearly independent subset of itself. S spans R3 .
(c) No, by Theorem 4.9
(8) (a) S is linearly dependent; x3
+ 8(2x3 x2 2x + 1)
2x2
x + 2 = 3( 5x3 + 2x2 + 5x
2)
(b) S does not span P3 . Maximal linearly independent subset of S =
f 5x3 + 2x2 + 5x 2; 2x3 x2 2x + 1; 2x3 + 2x2 + 3x 5g
(c) Yes, by Theorem 4.9. One such alternate linear combination is
5 5x3 + 2x2 + 5x 2
5 2x3 x2 2x + 1
3
2
+1 x
2x
x+2
1 2x3 + 2x2 + 3x 5 .
(9) (a) Linearly dependent;
2
3
2
4 6
10
4 3 4 5 = 4 10
2 5
6
3
14
8 5
12
90
2
7
24 5
3
3 2
12
8
7 5+4 3
10
2
3
16
10 5
13
;
Andrilli/Hecker - Answers to Exercises
82
3 2
3 2
4
0
10
14
7
<
2 5 ; 4 10
8 5;4 5
(b) 4 11
:
6
1
6
12
3
S does not span M32
Chapter 4
3 2
12
8 16
7 5 ; 4 3 10 5 ; 4
10
2 13
3 2
Review
39
6 11 =
4
7 5 ;
;
3
9
(c) Yes, by Theorem 4.9. One such alternate linear combination is
2
3 2
3 2
3 2
3
4
0
10
14
7 12
8 16
7 5 4 3 10 5
2 5 + 4 10
8 5 4 5
2 4 11
6
1
6
12
3 10
2 13
2
3 2
3
4 6
6 11
7 5:
+4 3 4 5+4 4
2 5
3
9
(10) v = 1v. Also, since v 2 span(S), v = a1 v1 +
+ an vn , for some
a1 ; : : : ; an . These are 2 di¤erent ways to express v as a linear combination
of vectors in T .
(11) Use the same technique as in Example 15 in Section 4.4.
(12) (a) The matrix whose rows are the given vectors row reduces to I4 , so the
Simpli…ed Span Method shows that the given set spans R4 . Since the
set has 4 vectors and dim(R4 ) = 4, part (1) of Theorem 4.13 shows
that the given set is a basis for R4 .
2
3
2 2 13
3 5 row reduces to I3 :
(b) Similar to part (a). The matrix 4 1 0
4 1 16
3
2
1
5
0
3
6 6
1
4
3 7
7 row reduces
(c) Similar to part (a). The matrix 6
4 7
4
7
1 5
3
7
2
4
to I4 :
(13) (a) W nonempty: 0 2 W because A0 = O. Closure under addition: If
X1 ; X2 2 W, then A(X1 +X2 ) = AX1 +AX2 = O+O = O. Closure
under scalar multiplication: If X 2 W, then A(cX) = cAX = cO =
O.
(b) Basis = f[3; 1; 0; 0]; [ 2; 0; 1; 1]g
(c) dim(W) = 2, rank(A) = 2; 2 + 2 = 4
(14) (a) First, use direct computation to check that
in V. Next,
2
3
2
1
0 0
6 0
6
1 0 7
6
7 clearly row reduces to 6
4 3
5
4
2 0
0
0 1
91
every polynomial in B is
1
0
0
0
0
1
0
0
3
0
0 7
7, and so B is
1 5
0
Andrilli/Hecker - Answers to Exercises
Chapter 4 Review
linearly independent by the Independence Test Method. Finally, since
the polynomial x 2
= V, dim(V) < dim(P3 ) = 4. But, jBj dim(V),
implying that jBj = dim(V) and that B is a basis for V, by Theorem
4.13. Thus, dim(V) = 3.
(b) C = f1; x3
3x2 + 3xg is a basis for W and dim(W) = 2.
(15) (a) T = f[2; 3; 0; 1]; [4; 3; 0; 4]; [1; 0; 2; 1]g
(b) Yes
(16) (a) T = fx2
2x; x3
x; 2x3
2x2 + 1g
(b) Yes
(17) f[2; 1; 1; 2]; [1; 2; 2; 4]; [0; 1; 0; 0]; [0; 0; 1; 0]g
82
3 2
3 2
3 2
1
1
2
2
1
1
< 3
2 5;4 0
1 5;4 0
1 5;4 0
(18) 4 0
:
1
0
3
0
4
0
0
(19) fx4 + 3x2 + 1; x3
2x2
1; x + 3g
(20) (a) [v]B = [ 3; 1; 2]
(b) [v]B = [4; 2; 3]
(21) (a) [v]B = [27; 62; 6]; P = 4
2
(b) [v]B = [ 4; 1; 3]; P = 4
2
4
1
3
2
0
1
4
1
1
1
2
1
6 2
5 0
3 1
5 2
6
(c) [v]B = [4; 2; 1; 3]; P = 6
4
[v]C = [25; 24; 15; 27]
2
3
0
0 1 0
6 1
0 1 1 7
7
(22) (a) P = 6
4 1
1 1 1 5
0
1 0 1
2
3
0 1 0 1
6 1 0 1 1 7
7
(b) Q = 6
4 1 1 0 1 5
0 0 1 0
1
= 0, v1 = [4; 4; 13];
3 2
0
0
0 5;4 0
0
0
39
0 =
0 5
;
1
(c) [v]B = [ 3; 5; 1]
2
(23) (a)
3 2
0
0
0 5;4 1
0
0
2
3
5
1 5; [v]C = [14; 21; 7]
2
3
2
0 5; [v]C = [ 23; 6; 6]
1
3
2
1
1
1 7
7;
1
0 5
0
1
2
1
6 1
(c) R = QP = 6
4 1
1
2
0
6
0
(d) R 1 = 6
4 1
1
3
1
0
2
3
1 1 2
0 2 2 7
7
1 2 2 5
1 1 1
3
2
2
1
0 7
7
1
0 5
2
1
= 2, v2 = [ 3; 2; 0]; v3 = [3; 0; 4]
(b) D = [ 0 0 0 ]
        [ 0 2 0 ]
        [ 0 0 2 ]
(24) (a) C = f[1; 0; 0;
2
1
(b) P = 4 2
1
4
(c) 4 4
13
7]; [0; 1; 0; 6]; [0; 0; 1; 4]g
3
1
3
1
5 5
1
2
2
(c) [v]B = [25; 2; 10]; (Note: P
(d) v = [ 3; 2; 3; 45]
Chapter 4 Review
3
3 3
2 0 5
0 4
2
1
3
1
1
=4
5
1
2
3
8
1 5 :)
3
(25) By part (b) of Exercise 8 in Section 4.7, the matrix C is the transition
matrix from C-coordinates to standard coordinates. Hence, by Theorem
4.21, CP is the transition matrix from B-coordinates to standard coordinates. However, again by part (b) of Exercise 8 in Section 4.7, this
transition matrix is the matrix whose columns are the vectors in B.
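The claim in Exercise 25 is easy to test numerically. In the Python sketch below (with hypothetical bases, not the textbook's), the columns of Cmat are the vectors of C in standard coordinates and P is the transition matrix from B to C; the columns of Cmat @ P then come out equal to the vectors of B in standard coordinates:

    import numpy as np

    # Hypothetical ordered bases of R^3: columns of Cmat are the vectors of C,
    # columns of Bmat are the vectors of B, in standard coordinates.
    Cmat = np.array([[1., 1., 0.],
                     [0., 1., 1.],
                     [1., 0., 1.]])
    Bmat = np.array([[2., 0., 1.],
                     [1., 1., 0.],
                     [0., 1., 3.]])

    # Transition matrix P from B-coordinates to C-coordinates: its i-th column
    # is [b_i]_C, i.e. the solution of Cmat x = b_i.
    P = np.linalg.solve(Cmat, Bmat)

    # Cmat @ P converts B-coordinates to standard coordinates, so its columns
    # are exactly the B basis vectors.
    print(np.allclose(Cmat @ P, Bmat))   # True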
(26) (a) T   (f) T   (k) F   (p) F   (u) T   (z) F
     (b) T   (g) T   (l) F   (q) T   (v) T
     (c) F   (h) F   (m) T   (r) F   (w) F
     (d) T   (i) F   (n) T   (s) T   (x) T
     (e) F   (j) T   (o) F   (t) T   (y) T
Andrilli/Hecker - Answers to Exercises
Section 5.1
Chapter 5
Section 5.1
(1) (a) Linear operator: f([x, y] + [w, z]) = f([x + w, y + z])
= [3(x + w) − 4(y + z), −(x + w) + 2(y + z)]
= [(3x − 4y) + (3w − 4z), (−x + 2y) + (−w + 2z)]
= [3x − 4y, −x + 2y] + [3w − 4z, −w + 2z] = f([x, y]) + f([w, z]);
also, f(c[x, y]) = f([cx, cy]) = [3(cx) − 4(cy), −(cx) + 2(cy)]
= [c(3x − 4y), c(−x + 2y)] = c[3x − 4y, −x + 2y] = c f([x, y]).
(b) Not a linear transformation: h([0; 0; 0; 0]) = [2; 1; 0; 3] 6= 0.
(c) Linear operator: k([a; b; c] + [d; e; f ]) = k([a + d; b + e; c + f ]) =
[b + e; c + f; a + d] = [b; c; a] + [e; f; d] = k([a; b; c]) + k([d; e; f ]);
also, k(d[a; b; c]) = k([da; db; dc]) = [db; dc; da] = d[b; c; a]
= dk([a; b; c]).
a b
e f
a+e b+f
+
=l
c d
g h
c+g d+h
2(c + g) + (d + h)
3(b + f )
4(a + e)
(b + f ) + (c + g) 3(d + h)
(d) Linear operator: l
=
=
=
(a + e)
(a
a
2c + d) + (e 2g + h)
4a 4e
(b + c
2c + d
3b
4a
b + c 3d
a b
c d
=l
also, l s
=
=
(sa)
+l
a b
c d
e
g
=l
e
+
f
h
2g + h
3f
4e
f + g 3h
sa sb
sc sd
2(sc) + (sd)
3(sb)
4(sa)
(sb) + (sc) 3(sd)
a b
c d
1
0
= s
a
2c + d
4a
3b
b + c 3d
.
(e) Not a linear transformation: n 2
but 2 n
3h)
;
s(a 2c + d)
s(3b)
s( 4a)
s(b + c 3d)
= sl
3b + 3f
3d) + (f + g
2
1
1
0
2
1
=n
2
0
4
2
= 2(1) = 2.
(f) Not a linear transformation: r(8x3 ) = 2x2 , but 8 r(x3 ) = 8(x2 ).
= 4,
(g) Not a linear transformation: s(0) = [1, 0, 1] ≠ 0.
(h) Linear transformation: t((ax3 + bx2 + cx + d) + (ex3 + f x2 + gx + h))
= t((a + e)x3 + (b + f )x2 + (c + g)x + (d + h)) =
(a + e) + (b + f ) + (c + g) + (d + h) = (a + b + c + d)
+ (e + f + g + h) = t(ax3 + bx2 + cx + d) + t(ex3 + f x2 + gx + h);
also, t(s(ax3 + bx2 + cx + d)) = t((sa)x3 + (sb)x2 + (sc)x + (sd))
= (sa) + (sb) + (sc) + (sd) = s(a + b + c + d) = s t(ax3 + bx2 + cx + d).
(i) Not a linear transformation: u((−1)[0, 1, 0, 0]) = u([0, −1, 0, 0]) = 1, but (−1) u([0, 1, 0, 0]) = (−1)(1) = −1.
(j) Not a linear transformation: v(x2 + (x + 1)) = v(x2 + x + 1) = 1, but
v(x2 ) + v(x + 1) = 0 + 0 = 0.
02
3 2
31
a b
u v
(k) Linear transformation: g @4 c d 5 + 4 w x 5A
e f
y z
02
31
a+u b+v
= g @4 c + w d + x 5A = (a + u)x4 (c + w)x2 + (e + y)
e+y f +z
= (ax4 cx2 + e) + (ux4 wx2 + y)
31
0 2
31
31
02
02
a b
u v
a b
= g @4 c d 5A + g @4 w x 5A ; also, g @s 4 c d 5A
e f
e f
y z
02
31
sa sb
= g @4 sc sd 5A = (sa)x4 (sc)x2 + (se) = s(ax4 cx2 + e)
se sf
02
31
a b
= s g @4 c d 5A .
e f
(l) Not a linear transformation: e([3, 0] + [0, 4]) = e([3, 4]) = √(3² + 4²) = 5, but e([3, 0]) + e([0, 4]) = √(3² + 0²) + √(0² + 4²) = 3 + 4 = 7.
(2) (a) i(v + w) = v + w = i(v) + i(w); also, i(cv) = cv = c i(v).
(b) z(v + w) = 0 = 0 + 0 = z(v) + z(w); also, z(cv) = 0 = c0 = c z(v).
(3) f (v + w) = k(v + w) = kv + kw = f (v) + f (w); f (av) = kav = a(kv) =
af (v).
(4) (a) f ([u; v; w] + [x; y; z]) = f ([u + x; v + y; w + z]) =
[ (u + x); v + y; w + z] = [( u) + ( x); v + y; w + z]
= [ u; v; w] + [ x; y; z] = f ([u; v; w]) + f ([x; y; z]);
f (c[x; y; z]) = f ([cx; cy; cz]) = [ (cx); cy; cz] = [c( x); cy; cz]
= c[ x; y; z] = c f ([x; y; z]).
(b) f ([x; y; z]) = [x; y; z]. It is a linear operator.
(c) Through the y-axis: g([x, y]) = [−x, y]; through the x-axis: f([x, y]) = [x, −y]; both are linear operators.
(5) h([a1 ; a2 ; a3 ] + [b1 ; b2 ; b3 ]) = h([a1 + b1 ; a2 + b2 ; a3 + b3 ]) =
[a1 + b1 ; a2 + b2 ; 0] = [a1 ; a2 ; 0] + [b1 ; b2 ; 0]
= h([a1 ; a2 ; a3 ]) + h([b1 ; b2 ; b3 ]);
h(c[a1 ; a2 ; a3 ]) = h([ca1 ; ca2 ; ca3 ]) = [ca1 ; ca2 ; 0] = c[a1 ; a2 ; 0]
= c h([a1 ; a2 ; a3 ]);
j([a1 ; a2 ; a3 ; a4 ] + [b1 ; b2 ; b3 ; b4 ]) = j([a1 + b1 ; a2 + b2 ; a3 + b3 ; a4 + b4 ]) =
[0; a2 + b2 ; 0; a4 + b4 ] = [0; a2 ; 0; a4 ] + [0; b2 ; 0; b4 ]
= j([a1 ; a2 ; a3 ; a4 ]) + j([b1 ; b2 ; b3 ; b4 ]);
j(c[a1 ; a2 ; a3 ; a4 ]) = j([ca1 ; ca2 ; ca3 ; ca4 ]) = [0; ca2 ; 0; ca4 ] = c[0; a2 ; 0; a4 ]
= c j([a1 ; a2 ; a3 ; a4 ])
(6) f ([a1 ; a2 ; : : : ; ai ; : : : ; an ] + [b1 ; b2 ; : : : ; bi ; : : : ; bn ])
= f ([a1 + b1 ; a2 + b2 ; : : : ; ai + bi ; : : : ; an + bn ]) = ai + bi
= f ([a1 ; a2 ; : : : ; ai ; : : : ; an ]) + f ([b1 ; b2 ; : : : ; bi ; : : : ; bn ]);
f (c[a1 ; a2 ; : : : ; ai ; : : : ; an ]) = f ([ca1 ; ca2 ; : : : ; cai ; : : : ; can ]) = cai
= c f ([a1 ; a2 ; : : : ; ai ; : : : ; an ])
(7) g(y + z) = proj_x(y + z) = [(x · (y + z))/‖x‖²] x = [((x · y) + (x · z))/‖x‖²] x = [(x · y)/‖x‖²] x + [(x · z)/‖x‖²] x = g(y) + g(z);
g(cy) = [(x · (cy))/‖x‖²] x = c [(x · y)/‖x‖²] x = c g(y)
(8) L(y1 + y2) = x · (y1 + y2) = (x · y1) + (x · y2) = L(y1) + L(y2); L(cy1) = x · (cy1) = c(x · y1) = cL(y1)
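A short numerical illustration of the linearity verified in Exercises 7 and 8 (the vectors x, y, z below are hypothetical; any nonzero x works):

    import numpy as np

    def proj(x, y):
        """Projection of y onto x: ((x . y) / ||x||^2) x."""
        return (np.dot(x, y) / np.dot(x, x)) * x

    x = np.array([2., -1., 3.])
    y = np.array([1., 4., 0.])
    z = np.array([-2., 5., 7.])
    c = 3.5

    print(np.allclose(proj(x, y + z), proj(x, y) + proj(x, z)))  # True
    print(np.allclose(proj(x, c * y), c * proj(x, y)))           # True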
(9) Follow the hint in the text, and use the sum of angle identities for sine
and cosine.
3
2
sin
0 cos
(10) (a) Use Example 10.
5
1
0
(c) 4 0
(b) Use the hint in Exercise 9.
cos
0
sin
(11) The proofs follow the proof in Example 10, but with the matrix A replaced by the specific matrices of this exercise.
(12) f(A + B) = trace(A + B) = Σ_{i=1}^n (aii + bii) = Σ_{i=1}^n aii + Σ_{i=1}^n bii = trace(A) + trace(B) = f(A) + f(B); f(cA) = trace(cA) = Σ_{i=1}^n caii = c Σ_{i=1}^n aii = c trace(A) = c f(A).
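A quick numerical spot-check of Exercise 12, using random hypothetical matrices:

    import numpy as np

    rng = np.random.default_rng(1)
    A, B = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
    c = 2.7
    # trace is additive and homogeneous, i.e. a linear map M_nn -> R
    print(np.isclose(np.trace(A + B), np.trace(A) + np.trace(B)))  # True
    print(np.isclose(np.trace(c * A), c * np.trace(A)))            # True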
(13) g(A1 + A2 ) = (A1 + A2 ) + (A1 + A2 )T = (A1 + AT1 ) + (A2 + AT2 ) =
g(A1 ) + g(A2 ); g(cA) = cA + (cA)T = c(A + AT ) = cg(A): The proof
for h is done similarly.
(14) (a) f(p1 + p2) = ∫(p1 + p2) dx = ∫p1 dx + ∫p2 dx = f(p1) + f(p2) (since the constant of integration is assumed to be zero for all these indefinite integrals); f(cp) = ∫(cp) dx = c ∫p dx = c f(p) (since the constant of integration is assumed to be zero for these indefinite integrals)
(b) g(p1 + p2) = ∫_a^b (p1 + p2) dx = ∫_a^b p1 dx + ∫_a^b p2 dx = g(p1) + g(p2); g(cp) = ∫_a^b (cp) dx = c ∫_a^b p dx = c g(p)
(15) Base Step: k = 1: An argument similar to that in Example 3 in Section
5.1 shows that M : V ! V given by M (f ) = f 0 is a linear operator.
Inductive Step: Assume that Ln : V ! V given by Ln (f ) = f (n) is a
linear operator. We need to show that Ln+1 : V ! V given by Ln+1 (f ) =
f (n+1) is a linear operator. But since Ln+1 = Ln M; Theorem 5.2 assures
us that Ln+1 is a linear operator.
(16) f (A1 + A2 ) = B(A1 + A2 ) = BA1 + BA2 = f (A1 ) + f (A2 ); f (cA) =
B(cA) = c(BA) = cf (A):
(17) f(A1 + A2) = B⁻¹(A1 + A2)B = B⁻¹A1B + B⁻¹A2B = f(A1) + f(A2); f(cA) = B⁻¹(cA)B = c(B⁻¹AB) = c f(A)
(18) (a) L(p + q) = (p + q)(a) = p(a) + q(a) = L(p) + L(q).
L(cp) = (cp)(a) = c(p(a)) = cL(p).
Similarly,
(b) For all x 2 R, L((p + q)(x)) = (p + q)(x + a) = p(x + a) + q(x + a) =
L(p(x)) + L(q(x)). Also, L((cp)(x)) = (cp)(x + a) = c(p(x + a)) =
cL(p(x)).
(19) Let p = anxⁿ + ... + a1x + a0 and let q = bnxⁿ + ... + b1x + b0. Then L(p + q) = L(anxⁿ + ... + a1x + a0 + bnxⁿ + ... + b1x + b0) = L((an + bn)xⁿ + ... + (a1 + b1)x + (a0 + b0)) = (an + bn)Aⁿ + ... + (a1 + b1)A + (a0 + b0)In = (anAⁿ + ... + a1A + a0In) + (bnAⁿ + ... + b1A + b0In) = L(p) + L(q). Similarly, L(cp) = L(c(anxⁿ + ... + a1x + a0)) = L(canxⁿ + ... + ca1x + ca0) = canAⁿ + ... + ca1A + ca0In = c(anAⁿ + ... + a1A + a0In) = cL(p).
(20) L(x ⊕ y) = L(xy) = ln(xy) = ln(x) + ln(y) = L(x) + L(y);
L(a ⊙ x) = L(x^a) = ln(x^a) = a ln(x) = aL(x)
(21) f (0) = 0 + x = x 6= 0, contradicting Theorem 5.1, part (1).
(22) f (0) = A0 + y = y 6= 0, contradicting
02
31
02
1 0 0
1
(23) f @4 0 1 0 5A = 1, but f @4 0
0 0 1
0
Theorem 5.1, part (1).
31
02
0 0
0 0
1 0 5A + f @4 0 0
0 0
0 0
31
0
0 5A =
1
0 + 0 = 0. (Note: for any n > 1, f (2In ) = 2n , but 2f (In ) = 2(1) = 2.)
(24) L2 (v + w) = L1 (2(v + w)) = 2L1 (v + w) = 2(L1 (v) + L1 (w))
= 2L1 (v) + 2L1 (w) = L2 (v) + L2 (w); L2 (cv) = L1 (2cv) = 2cL1 (v)
= c(2L1 (v)) = cL2 (v).
(25) L([ 3; 2; 4]) = [12;
02
31 2
x
L @4 y 5A = 4
z
(26) L(i) = 57 i
11
5 j;
11; 14];
2
1
0
L(j) =
3
2
1
2
5i
Section 5.1
32
3 2
3
0
x
2x + 3y
1 5 4 y 5 = 4 x 2y z 5
3
z
y + 3z
4
5j
(27) L(x y) = L(x + ( y)) = L(x) + L(( 1)y) = L(x) + ( 1)L(y)
= L(x) L(y).
(28) Follow the hint in the text. Letting a = b = 0 proves property (1) of a
linear transformation. Letting b = 0 proves property (2).
(29) Suppose Part (3) of Theorem 5.1 is true for some n. We prove it true for
n + 1.
L(a1 v1 +
+ an vn + an+1 vn+1 ) = L(a1 v1 +
+ an vn ) + L(an+1 vn+1 )
(by property (1) of a linear transformation) = (a1 L(v1 )+ +an L(vn ))+
L(an+1 vn+1 ) (by the inductive hypothesis) = a1 L(v1 ) +
+ an L(vn ) +
an+1 L(vn+1 ) (by property (2) of a linear transformation), and we are
done.
(30) (a) a1 v1 +
+ an vn = 0V =) L(a1 v1 +
+ an vn ) = L(0V )
=) a1 L(v1 ) +
+ an L(vn ) = 0W =) a1 = a2 =
= an = 0.
(b) Consider the zero linear transformation.
(31) 0W 2 W 0 , so 0V 2 L 1 (f0W g) L 1 (W 0 ). Hence, L 1 (W 0 ) is nonempty.
Also, x; y 2 L 1 (W 0 ) =) L(x); L(y) 2 W 0 =) L(x) + L(y) 2 W 0
=) L(x + y) 2 W 0 =) x + y 2 L 1 (W 0 ). Finally, x 2 L 1 (W 0 ) =)
L(x) 2 W 0 =) aL(x) 2 W 0 (for any a 2 R) =) L(ax) 2 W 0
=) ax 2 L 1 (W 0 ). Hence L 1 (W 0 ) is a subspace by Theorem 4.2.
(32) Let c = L(1). Then L(x) = L(x1) = xL(1) = xc = cx.
(33) (L2 L1 )(av) = L2 (L1 (av)) = L2 (aL1 (v)) = aL2 (L1 (v)) = a(L2 L1 (v)).
(34) (a) For L1 L2 : L1 L2 (x + y) = L1 (x + y) + L2 (x + y) =
(L1 (x) + L1 (y)) + (L2 (x) + L2 (y)) = (L1 (x) + L2 (x))
+ (L1 (y) + L2 (y)) = L1 L2 (x) + L1 L2 (y);
L1 L2 (cx) = L1 (cx)+L2 (cx) = cL1 (x)+cL2 (x) = c(L1 (x)+L2 (x))
= cL1 L2 (x).
For cL1: (cL1)(x + y) = c(L1(x + y)) = c(L1(x) + L1(y)) = cL1(x) + cL1(y) = (cL1)(x) + (cL1)(y); (cL1)(ax) = cL1(ax) = c(aL1(x)) = a(cL1(x)) = a((cL1)(x)).
(b) Use Theorem 4.2: the existence of the zero linear transformation
shows that the set is nonempty, while part (a) proves closure under
addition and scalar multiplication.
(35) Define a line l parametrically by {tx + y | t ∈ R}, for some fixed x, y ∈ R2. Since L(tx + y) = tL(x) + L(y), L maps l to the set {tL(x) + L(y) | t ∈ R}. If L(x) ≠ 0, this set represents a line; otherwise, it represents a point.
(36) (a) F   (b) T   (c) F   (d) F   (e) T   (f) F   (g) T   (h) T
Section 5.2
(1) For each operator, check that the ith column of the given matrix (for 1 ≤ i ≤ 3) is the image of ei. This is easily done by inspection.
2
3
2
3
6
4
1
4
1
3
3
3
5 5
3
1
5 5
(2) (a) 4 2
(c) 4 1
3
1
7
2
7
5
1
3
2
3
0
2 0
6 0
1
0 4 7
7
(d) 6
4
3
5
1
2
0
4
1 3 5
(b)
5
1
2
8
6
1
0 2
3
2
32 16 24 15
47 128
288
(3) (a)
10
7 12
6 5
18
51
104
(d) 4
121 63 96 58
2
3
3
5
3
2
5
6
0
1 5
(b) 4 2
6 11
26
6 7
4
2
7
6
6 14
19
1 7
7
6
3
2
(e) 6
6
3
2 7
22 14
7
6
4
1
1
1 5
(c) 4 62 39 5
11
13
0
68 43
2
3
2
3
202
32
43
98
91 60
76
23
31 5
25 17
21 5
(4) (a) 4 146
(c) 4 28
83
14
18
135
125 82
104
21
51
(b)
7
13
21
51
16
38
(5) See the answer for Exercise 3(d).
(6) (a)
2
(b) 4
67
37
7
5
6
123
68
2
2
1
2
8
6 8
(c) 6
4 6
1
3
10
9 5
8
99
9
8
6
1
9
5
1
2
3
14
7 7
7
6 5
5
2
3
(7) (a) 4 0
0
12x2
(8) (a)
"
2
6
6
6
(9) (a) 6
6
6
4
2
(10) 4
12
4
10
3
0 0 0
2 0 0 5;
0 1 0
3
2
p
1
2
3
2
1
2
0
0
1
0
0
0
12
6
3
0
0
0
0
1
0
#
0
1
0
0
0
0
3
2
2 5
7
0
0
0
1
0
0
2
6
(11) (a) Matrix for L1 : 6
4
(b) Matrix for L2 L1 :
0
0
0
0
0
1
1
0
1
2
3
1
3
0 0
7
6
1
6 0 2 0 7 2 3 1 2
7; x
(b) 6
2 x + 5x
7 3
6
4 0 0 1 5
0 0 0
#
" 1p
13
9
2 3
2
(b)
p
25
1
2
2 3+9
2
3
1
0 0
1
0
0
6 1
0 0
1
0
0 7
6
7
6
0
1 1
0
1
0 7
7
(b) 12 6
6 0
1 1
0
1
0 7
6
7
4 0
1 0
0
1
1 5
0
1 0
0
1
1
10x + 6
p
1
0
0
0
0
0
2
3
7
7
7
7
7
7
5
3
1
3 7
7; matrix for L2 :
0 5
1
1
2
3
0
8
2
0
1
2
0
2
1
3
1
2 9
4 0
(12) (a) A2 represents a rotation through the angle twice, which corresponds
to a rotation through the angle 2 .
(b) Generalize the argument in part (a).
(13) (a) In
(b) On
(c) cIn
(d) The n
n matrix whose columns are e2 ; e3 ; : : : ; en ; e1 , respectively
(e) The n
tively
n matrix whose columns are en , e1 , e2 , : : :, en
1,
respec-
(14) Let A be the matrix for L (with respect to the standard bases). Then A
is a 1 n matrix (an n-vector). If we let x = A, then Ay = x y; for all
y 2 Rn :
(15) [(L2 ∘ L1)(v)]D = [L2(L1(v))]D = ACD[L1(v)]C = ACD(ABC[v]B) = (ACD ABC)[v]B. Now apply the uniqueness condition in Theorem 5.5.
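The identity in Exercise 15 amounts to associativity of matrix multiplication, which can be illustrated numerically as follows (the matrices A_BC and A_CD below are hypothetical stand-ins for the matrices of L1 and L2):

    import numpy as np

    # Hypothetical matrix for L1 : R^3 -> R^2 (with respect to B and C) and
    # for L2 : R^2 -> R^2 (with respect to C and D).
    A_BC = np.array([[1., 0., 2.],
                     [3., -1., 0.]])
    A_CD = np.array([[2., 1.],
                     [0., -1.]])

    v_B = np.array([1., 2., -1.])       # coordinates of some v relative to B

    lhs = A_CD @ (A_BC @ v_B)           # [L2(L1(v))]_D computed in two steps
    rhs = (A_CD @ A_BC) @ v_B           # (A_CD A_BC)[v]_B
    print(np.allclose(lhs, rhs))        # True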
2
3
0
4
13
5
6 5
(16) (a) 4 6
2
2
3
2
3
2
0 0
1 0 5
(b) 4 0
0
0 1
(c) The vectors in B are eigenvectors for the matrix from part (a) corresponding to the eigenvalues, 2, 1, and 1, respectively.
(17) Easily veri…ed with straightforward computations.
(18) (a) pABB (x) = x(x
1)2
(b) E1 = fb[2; 1; 0] + c[2; 0; 1]g; basis for E1 = f[2; 1; 0]; [2; 0; 1]g;
E0 = fc[ 1; 2; 2]g; basis for E0 = f[ 1; 2; 2]g;
C = ([2; 1; 0]; [2; 0; 1]; [ 1; 2; 2]) (Note: the remaining answers will
vary if the vectors in C are ordered di¤erently)
(c) Using the
as ([2; 1; 0]; [2;
2 0; 1]; [ 1; 2; 2]),3the answer
2 basis C ordered
3
2
5
4
2 2
1
4
5 5. Another
2 5, with P 1 = 91 4 2
is P = 4 1 0
1
2
2
0 1
2
possible
3 is
2 [2; 1; 0]; [2; 0; 1]),
2 answer, using
3 C = ([ 1; 2; 2];
1
2 2
1 2 2
5 4 5:
P = 4 2 1 0 5, with P 1 = 91 4 2
2
4 5
2 0 1
3
2
1 0 0
(d) Using C = ([2; 1; 0]; [2; 0; 1]; [ 1; 2; 2]) produces ACC = 4 0 1 0 5.
0 0 0
Using C = ([ 1; 2; 2]; [2; 1; 0]; [2; 0; 1]) yields
2
3
0 0 0
ACC = P 1 ABB P = 4 0 1 0 5 instead.
0 0 1
(e) L is a projection onto the plane formed by [2; 1; 0] and [2; 0; 1]; while
[ 1; 2; 2] is orthogonal to this plane.
(19) [L(v)]C = k1 [L(v)]B = k1 ABB [v]B =
Theorem 5.5, ACC = ABB .
1
k ABB (k[v]C )
= ABB [v]C . By
(20) Follow the hint in the text. Suppose dim(V) = n, and that X and Y are
similar n n matrices, where Y = P 1 XP.
Let B = (w1 ; : : : ; wn ) be some ordered basis for V. Consider the
products X[w1 ]B ; : : : ; X[wn ]B 2 Rn : Each of these products represents
the coordinatization of a unique vector in V. De…ne L : V ! V to be
the unique linear transformation (as speci…ed by Theorem 5.4) such that
L(wi ) is the vector in V whose coordinatization equals X[wi ]B . That
is, L : V ! V is the unique linear transformation such that X[wi ]B =
[L(wi )]B : Then, by de…nition, X is the matrix for L with respect to B.
Next, since P is nonsingular, Exercise 15(c) in Section 4.5 implies that
the columns of P form a basis for Rn . Let vi be the unique vector in V such
that [vi ]B = the ith column of P. Now the vectors v1 ; : : : ; vn are distinct
because the columns of P are distinct, and because coordinatization sends
di¤erent vectors in V to di¤erent vectors in Rn . Also, the set fv1 ; : : : ; vn g
is linearly independent, because a1 v1 +
+ an vn = 0V , [a1 v1 +
+
an vn ]B = 0 , a1 [v1 ]B + +an [vn ]B = 0 (by part (3) of Theorem 4.19) ,
a1 =
= an = 0, since the set f[v1 ]B ; : : : ; [vn ]B g is linearly independent.
Part (2) of Theorem 4.13 then implies that C = (v1 ; : : : ; vn ) is an ordered
basis for V. Also, by de…nition, P is the transition matrix from C to
B, and therefore P 1 is the transition matrix from C to B. Finally, by
Theorem 5.6, Y is the matrix for L with respect to C.
(21) Find the matrix for L with respect to B, and then apply Theorem 5.6.
1
m
1
0
1
. Then A represents a
and P = p1+m
2
m
1
0
1
re‡ection through the x-axis, and P represents a counterclockwise rotation
putting the x-axis on the line y = mx (using Example 9, Section 5.1).
Thus, the desired matrix is PAP 1 .
(22) Let A =
(23) (a) L2
(b) U2
(c) D2
(24) Let fv1 ; : : : ; vk g be a basis for Y. Extend this to a basis B = fv1 ; : : : ; vn g
for V. By Theorem 5.4, there is a unique linear transformation
L0 : V
! W such that L0 (vi ) =
L(vi )
0
if
if
i k
.
i>k
Clearly, L0 agrees with L on Y.
(25) Let L : V ! W be a linear transformation such that L(v1 ) = w1 ;
L(v2 ) = w2 ; : : :, L(vn ) = wn ; with fv1 ; v2 ; : : : ; vn g a basis for V. If
v 2V, then v = c1 v1 + c2 v2 +
+ cn vn ; for unique c1 ; c2 ; : : : ; cn 2 R
(by Theorem 4.9). But then L(v) = L(c1 v1 + c2 v2 +
+ cn vn ) =
c1 L(v1 )+c2 L(v2 )+ +cn L(vn ) = c1 w1 +c2 w2 + +cn wn : Hence L(v)
is determined uniquely, for each v 2V, and so L is uniquely determined.
(26) (a) T   (b) T   (c) F   (d) F   (e) T   (f) F   (g) T   (h) T   (i) T   (j) F
Section 5.3
(1) (a) Yes, because L([1; 2; 3]) = [0; 0; 0]
(b) No, because L([2; 1; 4]) = [5; 2; 1]
8
x3
< 5x1 + x2
3x1
+ x3
(c) No; the system
:
x1
x2
x3
=
=
=
2
1
4
2
has no solutions. Note: 4
2
1
6
4 0
0
1
1
3
2
3
1
3
1
3
0
0
4
0
3
augmentation bar.
5
3
1
1
0
1
1
1
1
3
2
1 5 row reduces to
4
7
5, where we have not row reduced beyond the
(d) Yes, because L([ 4; 4; 0]) = [ 16; 12; 8]
(2) (a) No, since L(x3 − 5x2 + 3x − 6) = 6x3 + 4x − 9 ≠ 0
    (b) Yes, because L(4x3 − 4x2) = 0
    (c) Yes, because, for example, L(x3 + 4x + 3) = 8x3 − x − 1
    (d) No, since every element of range(L) has a zero coefficient for its x2 term.
(3) In each part, we give the reduced row echelon form for the matrix for the linear transformation as well.
    (a) [ 1 0 2 ]
        [ 0 1 −3 ]
        [ 0 0 0 ];
    dim(ker(L)) = 1, basis for ker(L) = {[−2, 3, 1]},
    dim(range(L)) = 2, basis for range(L) = {[1, 2, 3], [−1, 3, 3]}
3
2
1 0
1
6 0 1
2 7
7; dim(ker(L)) = 1, basis for ker(L) = f[ 1; 2; 1]g,
(b) 6
4 0 0
0 5
0 0
0
dim(range(L)) = 2, basis for range(L) = f[4; 7; 2; 3]; [ 2; 1; 1; 2]g
(c)
1
0
0
1
5
2
dim(range(L))
2
1 0
1
6 0 1
3
6
0
0
0
(d) 6
6
4 0 0
0
0 0
0
; dim(ker(L)) = 1, basis for ker(L) = f[ 5; 2; 1]g,
= 2, basis for range(L) = f[3; 2]; [2; 1]g
3
1
2 7
7
0 7
7; dim(ker(L)) = 2,
0 5
0
basis for ker(L) = f[1; 3; 1; 0]; [ 1; 2; 0; 1]g, dim(range(L)) = 2,
basis for range(L) = f[ 14; 4; 6; 3; 4]; [ 8; 1; 2; 7; 2]g
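The Kernel and Range Methods used throughout Exercise 3 can be carried out in a few lines of Python with sympy; the matrix below is hypothetical (the exercise's matrices appear only in the textbook):

    from sympy import Matrix

    # Hypothetical matrix for L.
    A = Matrix([[1, 2, -1],
                [2, 4, -2],
                [3, 6, -3]])

    rref, pivot_cols = A.rref()
    print(rref)                 # reduced row echelon form, as reported above
    print(A.nullspace())        # basis for ker(L)
    print(A.columnspace())      # basis for range(L) (pivot columns of A)
    # Dimension Theorem: dim(ker(L)) + dim(range(L)) = number of columns
    print(len(A.nullspace()) + len(A.columnspace()))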
(4) Students can generally solve these problems by inspection.
(a) dim(ker(L)) = 2, basis for ker(L) = f[1; 0; 0], [0; 0; 1]g, dim(range(L))
= 1, basis for range(L) = f[0; 1]g
(b) dim(ker(L)) = 0, basis for ker(L) = fg, dim(range(L)) =
range(L) = f[1; 1; 0]; [0; 1; 1]g
1 0
0 0
(c) dim(ker(L)) = 2, basis for ker(L) =
;
0 0
0 1
82
3 2
< 0 1
dim(range(L)) = 2, basis for range(L) = 4 0 0 5 ; 4
:
0 0
2, basis for
,
0
1
0
39
0 =
0 5
;
0
(d) dim(ker(L)) = 2, basis for ker(L) = fx4 ; x3 g, dim(range(L)) = 3,
basis for range(L) = fx2 ; x; 1g
(e) dim(ker(L)) = 0, basis for ker(L) = fg, dim(range(L)) = 3, basis for
range(L) = fx3 ; x2 ; xg
(f) dim(ker(L)) = 1, basis for ker(L) = f[0; 1; 1]g, dim(range(L)) = 2,
basis for range(L) = f[1; 0; 1]; [0; 0; 1]g (A simpler basis for range(L)
= f[1; 0; 0]; [0; 0; 1]g.)
(g) dim(ker(L)) = 0, basis for ker(L) = fg (empty set), dim(range(L))
= 4, basis for range(L) = standard basis for M22
82
3 2
3
0 1 0
< 1 0 0
(h) dim(ker(L)) = 6, basis for ker(L) = 4 0 0 0 5 , 4 1 0 0 5,
:
0 0 0
0 0 0
2
3 2
3 2
3 2
39
0 0 1
0 0 0
0 0 0
0 0 0 =
4 0 0 0 5, 4 0 1 0 5, 4 0 0 1 5, 4 0 0 0 5 ,
;
1 0 0
0 0 0
0 1 0
0 0 1
dim(range(L)) = 3, basis
82
3 2
0
0 1 0
<
4 1 0 0 5, 4 0
:
1
0 0 0
for range(L) =
3 2
0 1
0
0 0 5, 4 0
0 0
0
39
0 0 =
0 1 5
;
1 0
(i) dim(ker(L)) = 1, basis for ker(L) = fx2 2x + 1g, dim(range(L)) =
2, basis for range(L) = f[1; 2]; [1; 1]g (A simpler basis for range(L) =
standard basis for R2 .)
(j) dim(ker(L)) = 2, basis for ker(L) = f(x+1)x(x 1); x2 (x+1)(x 1)g,
dim(range(L)) = 3, basis for range(L) = fe1 ; e2 ; e3 g
(5) (a) ker(L) = V, range(L) = f0W g
(b) ker(L) = f0V g, range(L) = V
(6) L(A + B) = trace(A + B) = Σ_{i=1}^3 (aii + bii) = Σ_{i=1}^3 aii + Σ_{i=1}^3 bii = trace(A) + trace(B) = L(A) + L(B); L(cA) = trace(cA) = Σ_{i=1}^3 caii = c Σ_{i=1}^3 aii = c(trace(A)) = cL(A);
ker(L) = { [ a b c ]
           [ d e f ]
           [ g h −a−e ]  |  a, b, c, d, e, f, g, h ∈ R },
dim(ker(L)) = 8, range(L) = R, dim(range(L)) = 1
(7) range(L) = V, ker(L) = f0V g
(8) ker(L) = f0g, range(L) = fax4 + bx3 + cx2 g, dim(ker(L)) = 0,
dim(range(L)) = 3
(9) ker(L) = fdx + e j d; e 2 Rg, range(L) = P2 , dim(ker(L)) = 2,
dim(range(L)) = 3
(10) When k ≤ n, ker(L) = all polynomials of degree less than k, dim(ker(L)) = k, range(L) = Pn−k, and dim(range(L)) = n − k + 1. When k > n, ker(L) = Pn, dim(ker(L)) = n + 1, range(L) = {0}, and dim(range(L)) = 0.
(11) Since range(L) = R, dim(range(L)) = 1. By the Dimension Theorem,
dim(ker(L)) = n. Since the given set is clearly a linearly independent
subset of ker(L) containing n distinct elements, it must be a basis for
ker(L) by Theorem 4.13.
(12) ker(L) = f[0; 0; : : : ; 0]g, range(L) = Rn (Note: Every vector X is in the
range since L(A 1 X) = A(A 1 X) = X.)
(13) ker(L) = {0V} iff dim(ker(L)) = 0 iff dim(range(L)) = dim(V) − 0 = dim(V) iff range(L) = V.
(14) For ker(L): 0V 2 ker(L), so ker(L) 6= fg. If v1 ; v2 2 ker(L), then
L(v1 + v2 ) = L(v1 ) + L(v2 ) = 0V + 0V = 0V , and so v1 + v2 2 ker(L).
Similarly, L(av1 ) = 0V , and so av1 2 ker(L).
For range(L): 0W 2 range(L) since 0W = L(0V ). If w1 ; w2 2 range(L),
then there exist v1 ; v2 2 V such that L(v1 ) = w1 and L(v2 ) = w2 . Hence,
L(v1 + v2 ) = L(v1 ) + L(v2 ) = w1 + w2 . Thus w1 + w2 2 range(L).
Similarly, L(av1 ) = aL(v1 ) = aw1 , so aw1 2 range(L).
(15) (a) If v 2 ker(L1 ), then (L2 L1 )(v) = L2 (L1 (v)) = L2 (0W ) = 0X .
(b) If x 2 range(L2 L1 ), then there is a v such that (L2
Hence L2 (L1 (v)) = x, so x 2 range(L2 ).
L1 )(v) = x.
(c) dim(range(L2 ∘ L1)) = dim(V) − dim(ker(L2 ∘ L1)) ≤ dim(V) − dim(ker(L1)) (by part (a)) = dim(range(L1))
(16) Consider L
x
y
=
1
1
1
1
x
y
. Then, ker(L) = range(L) =
f[a; a] j a 2 Rg.
(17) (a) Let P be the transition matrix from the standard basis to B, and let
Q be the transition matrix from the standard basis to C. Note that
both P and Q are nonsingular by Theorem 4.22. Then, by Theorem
5.6, B = QAP 1 . Hence, rank(B) = rank(QAP 1 ) = rank(AP 1 )
(by part (a) of Review Exercise 16 in Chapter 2) = rank(A) (by part
(b) of Review Exercise 16 in Chapter 2).
(b) The comments prior to Theorem 5.9 in the text show that parts (a)
and (b) of the theorem are true if A is the matrix for L with respect
to the standard bases. However, part (a) of this exercise shows that if
B is the matrix for L with respect to any bases, rank(B) = rank(A).
Hence, rank(B) can be substituted for any occurrence of rank(A) in
the statement of Theorem 5.9. Thus, parts (a) and (b) of Theorem
5.9 hold when the matrix for L with respect to any bases is used.
Part (c) of Theorem 5.9 follows immediately from parts (a) and (b).
(18) (a) By Theorem 4.18, there are vectors q1 ; : : : ; qt in V such that B =
fk1 ; : : : ; ks ; q1 ; : : : ; qt g is a basis for V. Hence, dim(V) = s + t.
(b) If v 2 V; then v = a1 k1 +
+ as ks + b1 q1 +
+ bt qt for some
a1 ; : : : ; as ; b1 ; : : : ; bt 2 R: Hence,
L(v)
=
=
=
=
L(a1 k1 +
+ as ks + b1 q1 +
+ bt qt )
a1 L(k1 ) +
+ as L(ks ) + b1 L(q1 ) +
+ bt L(qt )
a1 0 +
+ as 0 + b1 L(q1 ) +
+ bt L(qt )
b1 L(q1 ) +
+ bt L(qt ):
(c) By part (b), any element of range(L) is a linear combination of L(q1), ..., L(qt), so range(L) is spanned by {L(q1), ..., L(qt)}. Hence, by part (1) of Theorem 4.13, range(L) is finite dimensional and dim(range(L)) ≤ t.
(d) Suppose c1 L(q1 ) +
+ ct L(qt ) = 0W . Then L(c1 q1 +
+ ct qt ) =
0W , and so c1 q1 +
+ ct qt 2 ker(L).
(e) Every element of ker(L) can be expressed as a linear combination of
the basis vectors k1 ; : : : ; ks for ker(L).
(f) The fact that c1 q1 + +ct qt = d1 k1 + +ds ks implies that d1 k1 +
+ds ks c1 q1
ct qt = 0W . But since B = fk1 ; : : : ; ks ; q1 ; : : : ; qt g
is linearly independent, d1 =
= ds = c1 =
= ct = 0, by the
de…nition of linear independence.
(g) In part (d) we assumed that c1 L(q1 ) +
+ ct L(qt ) = 0W . This
led to the conclusion in part (f) that c1 =
= ct = 0. Hence
fL(q1 ); : : : ; L(qt )g is linearly independent by de…nition.
(h) Part (c) proved that fL(q1 ); : : : ; L(qt )g spans range(L) and part (g)
shows that fL(q1 ); : : : ; L(qt )g is linearly independent. Hence,
fL(q1 ); : : : ; L(qt )g is a basis for range(L).
(i) We assumed that dim(ker(L)) = s. Part (h) shows that
dim(range(L)) = t. Therefore, dim(ker(L))+dim(range(L)) = s+t =
dim(V) (by part (a)).
(19) From Theorem 4.16, we have dim(ker(L)) ≤ dim(V). The Dimension Theorem states that dim(range(L)) is finite and dim(ker(L)) + dim(range(L)) = dim(V). Since dim(ker(L)) is nonnegative, it follows that dim(range(L)) ≤ dim(V).
(20) (a) F   (b) F   (c) T   (d) F   (e) T   (f) F   (g) F   (h) F
Section 5.4
(1) (a) Not one-to-one, because L([1; 0; 0]) = L([0; 0; 0]) = [0; 0; 0; 0]; not
onto, because [0; 0; 0; 1] is not in range(L)
(b) Not one-to-one, because L([1; 1; 1]) = [0; 0]; onto, because
L([a; 0; b]) = [a; b]
(c) One-to-one, because L([x, y, z]) = [0, 0, 0] implies that [2x, x + y + z, −y] = [0, 0, 0], which gives x = y = z = 0; onto, because every vector [a, b, c] can be expressed as [2x, x + y + z, −y], where x = a/2, y = −c, and z = b − a/2 + c
(d) Not one-to-one, because L(1) = 0; onto, because L(ax3 + bx2 + cx) =
ax2 + bx + c
(e) One-to-one, because L(ax2 + bx + c) = 0 implies that a + b = b + c = a + c = 0, which gives b = −c and hence a = b = c = 0; onto, because every polynomial Ax2 + Bx + C can be expressed as (a + b)x2 + (b + c)x + (a + c), where a = (A − B + C)/2, b = (A + B − C)/2, and c = (−A + B + C)/2
(f) One-to-one, because L
0
0
a b
c d
0
0
=
0
0
d
=)
b
b+c
a
c
0
0
=) d = a = b + c = b c = 0 =) a = b = c = d = 0;
"
x+y #!
z
w x
2
=
onto, because L
x y
y z
w
2
0
1
(g) Not one-to-one, because L
0
0
0
0
1
0
; onto, because every 2
expressed as
a
2e
c
d+f
0
1
0
0
=L
2 matrix
, where a = A, c =
A
C
0
0
B
D
0
0
=
can be
B, e = C=2, d = D,
and f = 0
0 0
implies that a + c =
0 0
3a = 0, which gives a = b = c = 0; not onto, because
(h) One-to-one, because L(ax2 + bx + c) =
b
0
0
c =
1
is not in range(L)
0
(2) (a) One-to-one; onto; the matrix row reduces to I2 , which means that
dim(ker(L)) = 0 and dim(range(L)) = 2.
=
3
0
1 5, which
0
2
1
(b) One-to-one; not onto; the matrix row reduces to 4 0
0
means that dim(ker(L)) = 0 and dim(range(L)) = 2.
2
1
6
(c) Not one-to-one; not onto; the matrix row reduces to 6
4 0
0
which means that dim(ker(L)) = 1 and dim(range(L)) =
2
1 0
(d) Not one-to-one; onto; the matrix row reduces to 4 0 1
0 0
which means that dim(ker(L)) = 1 and dim(range(L)) =
0
2
5
1
6
5
0
0
2.
0
0
1
3.
3
7
7,
5
3
2
3 5,
1
(3) (a) One-to-one; onto; the matrix row reduces to I3 which means that
dim(ker(L)) = 0 and dim(range(L)) = 3.
3
2
1 0 0
2
6 0 1 0
2 7
7,
(b) Not one-to-one; not onto; the matrix row reduces to 6
4 0 0 1
1 5
0 0 0
0
which means that dim(ker(L)) = 1 and dim(range(L)) = 3.
2
10
19
1 0
11
11
6
9
3
6 0 1
11
11
6
(c) Not one-to-one; not onto; the matrix row reduces to 6
6 0 0
0
0
4
0
0
0
which means that dim(ker(L)) = 2 and dim(range(L)) = 2.
(4) (a) Let L : Rn → Rm. Then dim(range(L)) = n − dim(ker(L)) ≤ n < m, so L is not onto.
    (b) Let L : Rm → Rn. Then dim(ker(L)) = m − dim(range(L)) ≥ m − n > 0, so L is not one-to-one.
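Part (a) can also be seen numerically: any m × n matrix with n < m has rank at most n, so its columns cannot span Rm. A small hypothetical example:

    import numpy as np

    rng = np.random.default_rng(0)
    m, n = 5, 3
    A = rng.standard_normal((m, n))       # matrix of a map R^3 -> R^5
    print(np.linalg.matrix_rank(A), "<", m)   # rank is at most 3, never 5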
(5) (a) L(In ) = AIn In A = A A = On . Hence In 2 ker(L), and so L is
not one-to-one by Theorem 5.12.
(b) Use part (a) of this exercise, part (b) of Theorem 5.12, and the Dimension Theorem.
(6) If L(A) = O3, then AT = −A, so A is skew-symmetric. But since A is also upper triangular, A = O3. Thus, ker(L) = {0} and L is one-to-one. By the Dimension Theorem, dim(range(L)) = dim(U3) = 6 (since dim(ker(L)) = 0). Hence, range(L) ≠ M33, and L is not onto.
0
3
7
7
7
7,
7
5
(7) Suppose L is not one-to-one. Then ker(L) is nontrivial. Let
v 2 ker(L) with v 6= 0: Then T = fvg is linearly independent. But
L(T ) = f0W g, which is linearly dependent, a contradiction.
(8) (a) Suppose w 2 L(span(S)). Then there is a vector v 2 span(S)
such that L(v) = w. There are also vectors v1 ; : : : ; vk 2 S and
scalars a1 ; : : : ; ak such that a1 v1 +
+ ak vk = v. Hence, w = L(v)
= L(a1 v1 + + ak vk ) = L(a1 v1 ) + + L(ak vk ) = a1 L(v1 ) + +
ak L(vk ), which shows that w 2 span(L(S)). Hence L(span(S))
span(L(S)).
Next, S
span(S) (by Theorem 4.5) ) L(S)
L(span(S))
) span(L(S))
L(span(S)) (by part (3) of Theorem 4.5, since
L(span(S)) is a subspace of W by part (1) of Theorem 5.3). Therefore, L(span(S)) = span(L(S)).
(b) By part (a), L(S) spans L(span(S)) = L(V) = range(L).
(c) W = span(L(S)) = L(span(S)) (by part (a))
span(S) V) = range(L). Thus, L is onto.
L(V) (because
(9) (a) F   (b) F   (c) T   (d) T   (e) T   (f) T   (g) F   (h) F
Section 5.5
(1) In each part, let A represent the given matrix for L1 and let B represent
the given matrix for L2 . By Theorem 5.15, L1 is an isomorphism if and
only if A is nonsingular, and L2 is an isomorphism if and only if B is
nonsingular. In each part, we will give jAj and jBj to show that A and B
are nonsingular.
02
32
3
31 2
x1
x1
0
0 1
1 0 5 4 x2 5 ,
(a) jAj = 1, jBj = 3, L1 1 @4 x2 5A = 4 0
1
2 0
x3
x3
3
32
02
31 2
1 0
0
x1
x1
1 54
x2 5 ,
L2 1 @4 x2 5A = 4 0 0
3
x3
x3
2 1
0
02
31 2
32
3
x1
0
2
1
x1
4
2 5 4 x2 5 ,
(L2 L1 ) @4 x2 5A = 4 1
x3
0
3
0
x3
02
02
31
31
x1
x1
(L2 L1 ) 1 @4 x2 5A = (L1 1 L2 1 ) @4 x2 5A
x3
x3
2
2
6
=6
4 0
1
1
0
0
1
3
0
2
3
Section 5.5
3
2
3
7 x1
7 4 x2 5
5
x3
02
31 2
32
3
x1
0
2 1
x1
1 0 5 4 x2 5 ,
(b) jAj = 1, jBj = 1, L1 1 @4 x2 5A = 4 0
x3
1
8 4
x3
02
31 2
32
3
x1
0 1 0
x1
L2 1 @4 x2 5A = 4 1 0 1 5 4 x2 5 ,
x3
2 0 3
x3
02
31 2
32
3
x1
1 1 0
x1
(L2 L1 ) @4 x2 5A = 4 4 0 1 5 4 x2 5 ,
x3
1 0 0
x3
02
31
02
31
x1
x1
(L2 L1 ) 1 @4 x2 5A = (L1 1 L2 1 ) @4 x2 5A
x3
x3
32
3
2
x1
0 0 1
= 4 1 0 1 5 4 x2 5
0 1 4
x3
02
31 2
32
3
x1
2
4
1
x1
13
3 5 4 x2 5 ,
(c) jAj = 1, jBj = 1, L1 1 @4 x2 5A = 4 7
x3
5
10
3
x3
02
32
31 2
3
x1
x1
1
0
1
1
3 5 4 x2 5 ,
L2 1 @4 x2 5A = 4 3
x3
1
2
2
x3
32
3
02
31 2
x1
x1
29
6
4
5
2 5 4 x2 5 ,
(L2 L1 ) @4 x2 5A = 4 21
38
8
5
x3
x3
02
31
02
31
x1
x1
(L2 L1 ) 1 @4 x2 5A = (L1 1 L2 1 ) @4 x2 5A
x3
x3
2
32
3
9
2 8
x1
7 26 5 4 x2 5
= 4 29
22
4 19
x3
(2) L is invertible (L⁻¹ = L). By Theorem 5.14, L is an isomorphism.
(3) (a) L1 is a linear operator: L1(B + C) = A(B + C) = AB + AC = L1(B) + L1(C); L1(cB) = A(cB) = c(AB) = cL1(B). Note that L1 is invertible (L1⁻¹(B) = A⁻¹B). Use Theorem 5.14.
(b) L2 is a linear operator: L2(B + C) = A(B + C)A⁻¹ = (AB + AC)A⁻¹ = ABA⁻¹ + ACA⁻¹ = L2(B) + L2(C); L2(cB) = A(cB)A⁻¹ = c(ABA⁻¹) = cL2(B). Note that L2 is invertible (L2⁻¹(B) = A⁻¹BA). Use Theorem 5.14.
(4) L is a linear operator: L(p + q) = (p + q) + (p + q)′ = p + q + p′ + q′ = (p + p′) + (q + q′) = L(p) + L(q); L(cp) = (cp) + (cp)′ = cp + cp′ = c(p + p′) = cL(p).
Now, if L(p) = 0, then p + p′ = 0 ⟹ p = −p′. But if p is not a constant polynomial, then p and p′ have different degrees, a contradiction. Hence, p′ = 0, and so p = 0. Thus, ker(L) = {0} and L is one-to-one. Then, by Corollary 5.21, L is an isomorphism.
(5) (a) [ 0 1 ]
        [ 1 0 ]
    (b) Use Theorem 5.15 (since the matrix in part (a) is nonsingular).
    (c) If A is the matrix for R in part (a), then A² = I2, so R = R⁻¹.
    (d) Performing the reflection R twice in succession gives the identity mapping.
(6) The change of basis is performed by multiplying by a nonsingular transition matrix. Use Theorem 5.15. (Note that multiplication by a matrix is
a linear transformation.)
(7) (M ∘ L1)(v) = (M ∘ L2)(v) ⟹ (M⁻¹ ∘ M ∘ L1)(v) = (M⁻¹ ∘ M ∘ L2)(v) ⟹ L1(v) = L2(v)
(8) Suppose dim(V) = n > 0 and dim(W) = k > 0.
Suppose L is an isomorphism. Let M be the matrix for L 1 with
respect to C and B. Then (L 1 L)(v) = v for every v 2 V =)
MABC [v]B = [v]B for every v 2 V (by Theorem 5.7). Letting vi be the
ith basis element in B, [vi ]B = ei , and so ei = [vi ]B = MABC [vi ]B =
MABC ei = (ith column of MABC ). Therefore, MABC = In . Similarly,
using the fact that (L L 1 )(w) = w for every w 2 W implies that
ABC M = Ik . Exercise 21 in Section 2.4 then implies that n = k, ABC is
1
nonsingular, and M = ABC
.
1
Conversely, if ABC is nonsingular, then ABC is square, n = k, ABC
1
1
exists, and ABC ABC = ABC ABC = In . Let K be the linear operator
1
from W to V whose matrix with respect to C and B is ABC
. Then, for
1
all v 2 V, [(K L)(v)]B = ABC ABC [v]B = In [v]B = [v]B . Therefore,
(K L)(v) = v for all v 2 V, since coordinatizations are unique. Similarly,
1
for all w 2 W, [(L K)(w)]C = ABC ABC
[w]C = In [w]C = [w]C . Hence
(L K)(w) = w for all w 2 W. Therefore, K acts as an inverse for L.
Hence, L is an isomorphism by Theorem 5.14.
(9) In all parts, use Corollary 5.19. In part (c), the answer to the question is
yes.
(10) If L is an isomorphism, (L ∘ L)⁻¹ = L⁻¹ ∘ L⁻¹, so L ∘ L is an isomorphism by Theorem 5.14.
Conversely, if L ∘ L is an isomorphism, it is easy to prove L is one-to-one and onto directly. Or, let f = L ∘ (L ∘ L)⁻¹ and g = (L ∘ L)⁻¹ ∘ L. Then clearly, L ∘ f = I (identity operator on V) = g ∘ L. But f = I ∘ f = g ∘ L ∘ f = g ∘ I = g, and so f = g = L⁻¹. Hence L is an isomorphism by Theorem 5.14.
(11) (a) Let v 2 V with v 6= 0V . Since (L L)(v) = 0V , either L(v) = 0V or
L(L(v)) = 0V , with L(v) 6= 0V . Hence, ker(L) 6= f0V g, since one of
the two vectors v or L(v) is nonzero and is in ker(L).
(b) Proof by contrapositive: The goal is to prove that if L is an isomorphism, then L L 6= L or L is the identity transformation. Assume L is an isomorphism and L L = L (see “If A; Then B or C
Proofs” in Section 1.3). Then L is the identity transformation because (L L)(v) = L(v) =) (L 1 L L)(v) = (L 1 L)(v) =)
L(v) = v.
(12) Range(L) = column space of A (see comments just before the Range Method in Section 5.3 of the textbook). Now, L is an isomorphism iff L is onto (by Corollary 5.21) iff range(L) = Rn iff span({columns of A}) = Rn iff the n columns of A are linearly independent (by Theorem 4.13).
(13) (a) No, by Corollary 5.21, because dim(R6 ) = dim(P5 ).
(b) No, by Corollary 5.21, because dim(M22 ) = dim(P3 ).
(14) (a) Use Theorem 5.13.
(b) By Exercise 8(c) in Section 5.4, L is onto. Now, if L is not oneto-one, then ker(L) is nontrivial. Suppose v 2 ker(L) with v 6= 0:
If B = fv1 ; : : : ; vn g; then there are a1 ; : : : ; an 2 R such that v =
a1 v1 + +an vn ; where not every ai = 0 (since v 6= 0). Then 0W =
L(v) = L(a1 v1 + +an vn ) = a1 L(v1 )+ +an L(vn ): But since
each L(vj ) is distinct, and some ai 6= 0; this contradicts the linear
independence of L(B). Hence L is one-to-one.
(c) T (e1 ) = [3; 1]; T (e2 ) = [5; 2]; and T (e3 ) = [3; 1]: Hence, T (B) =
f[3; 1]; [5; 2]g (because identical elements of a set are not listed more
than once). This set is clearly a basis for R2 . T is not an isomorphism
or else we would contradict Theorem 5.17, since dim(R3 ) 6= dim(R2 ).
(d) Part (c) is not a counterexample to part (b) because in part (c) the
images of the vectors in B are not distinct.
(15) Let B = (v1 ; : : : ; vn ); let v 2 V, and let a1 ; : : : ; an 2 R such that
v = a1 v1 +
+ an vn : Then [v]B = [a1 ; : : : ;an ]: But L(v) =
L(a1 v1 +
+ an vn ) = a1 L(v1 ) +
+ an L(vn ): From Exercise 14(a),
fL(v1 ); : : : ; L(vn )g is a basis for L(B) = W (since L is onto).
Hence, [L(v)]L(B) = [a1 ; : : : ;an ] as well.
(16) (Note: This exercise will be used as part of the proof of the Dimension
Theorem, so we do not use the Dimension Theorem in this proof.)
Because Y is a subspace of the finite dimensional vector space V, Y is also finite dimensional (Theorem 4.16). Suppose {y1, ..., yk} is a basis
for Y. Because L is one-to-one, the vectors L(y1 ); : : : ; L(yk ) are distinct.
By part (1) of Theorem 5.13, fL(y1 ); : : : ; L(yk )g = L(fy1 ; : : : ; yk g) is
linearly independent. Part (a) of Exercise 8 in Section 5.4 shows that
span(L(fy1 ; : : : ; yk g)) = L(span(fy1 ; : : : ; yk g)) = L(Y). Therefore,
fL(y1 ); : : : ; L(yk )g is a k-element basis for L(Y). Hence, dim(L(Y)) =
k = dim(Y).
(17) (a) Let v 2 ker(T ). Then (T2 T )(v) = T2 (T (v)) = T2 (0W ) = 0Y , and
so v 2 ker(T2 T ). Hence, ker(T )
ker(T2 T ). Next, suppose
v 2 ker(T2 T ). Then 0Y = (T2 T )(v) = T2 (T (v)), which implies
that T (v) 2 ker(T2 ). But T2 is one-to-one, and so ker(T2 ) = f0W g.
Therefore, T (v) = 0W , implying that v 2 ker(T ). Hence,
ker(T2 T ) ker(T ). Therefore, ker(T ) = ker(T2 T ).
(b) Suppose w 2 range(T T1 ): Then there is some x 2 X such that
(T T1 )(x) = w; implying T (T1 (x)) = w: Hence w is the image under
T of T1 (x) and so w 2 range(T ). Thus, range(T T1 )
range(T ).
Similarly, if w 2 range(T ); there is some v 2 V such that T (v) = w:
Since T1⁻¹ exists, (T ∘ T1)(T1⁻¹(v)) = w, and so w ∈ range(T ∘ T1). Thus, range(T) ⊆ range(T ∘ T1), finishing the proof.
(c) Let v 2 T1 (ker(T T1 )). Thus, there is an x 2 ker(T T1 ) such that
T1 (x) = v. Then, T (v) = T (T1 (x)) = (T T1 )(x) = 0W . Thus,
v 2 ker(T ). Therefore, T1 (ker(T T1 ))
ker(T ). Now suppose that
v 2 ker(T ). Then T (v) = 0W . Thus, (T T1 )(T1 1 (v)) = 0W as
well. Hence, T1 1 (v) 2 ker(T T1 ). Therefore, v = T1 (T1 1 (v)) 2
T1 (ker(T T1 )), implying ker(T ) T1 (ker(T T1 )), completing the
proof.
(d) From part (c), dim(ker(T )) = dim(T1 (ker(T T1 ))). Apply Exercise
16 using T1 as L and Y = ker(T T1 ) to show that dim(T1 (ker(T T1 )))
= dim(ker(T T1 )).
(e) Suppose y 2 range(T2 T ). Hence, there is a v 2 V such that
(T2 T )(v) = y. Thus, y = T2 (T (v)). Clearly, T (v) 2 range(T ), and
so y 2 T2 (range(T )). This proves that range(T2 T )
T2 (range(T )).
Next, suppose that y 2 T2 (range(T )). Then there is some w 2
range(T ) such that T2 (w) = y. Since w 2 range(T ), there is a v 2 V
such that T (v) = w. Therefore, (T2 T )(v) = T2 (T (v)) = T2 (w) =
y. This establishes that T2(range(T)) ⊆ range(T2 ∘ T), finishing the proof.
(f) From part (e), dim(range(T2 T )) = dim(T2 (range(T ))). Apply Exercise 16 using T2 as L and Y = range(T ) to show that
dim(T2 (range(T ))) = dim(range(T )).
(18) (a) Part (c) of Exercise 17 states that T1 (ker (T T1 )) = ker (T ). Substituting T = L2 L and T1 = L1 1 yields L1 1 ker L2 L L1 1 =
ker (L2 L). Now M = L2 L L1 1 . Hence, L1 1 (ker (M )) =
ker (L2 L).
(b) With L2 = T2 and L = T , part (a) of Exercise 17 shows that
ker (L2 L) = ker (L). This, combined with part (a) of this exercise, shows that L1 1 (ker (M )) = ker (L).
(c) From part (b), dim(ker (L)) = dim(L1 1 (ker (M ))). Apply Exercise 16 with L replaced by L1 1 and Y = ker(M ) to show that
dim(L1 1 (ker (M ))) = dim(ker(M )), completing the proof.
(d) Part (e) of Exercise 17 states that range (T2 T ) = T2 (range(T )).
Substituting L L1 1 for T and L2 for T2 yields range L2 L L1 1
= L2 range(L L1 1 ) . Apply L2 1 to both sides to produce
L2 1 (range L2 L L1 1 ) = range(L L1 1 ). Using M = L2 L L1 1
gives L2 1 (range (M )) = range(L L1 1 ), the desired result.
(e) Part (b) of Exercise 17 states that range (T T1 ) = range (T ). Substituting L for T and L1 1 for T1 produces range(L L1 1 ) = range (L).
Combining this with L2 1 (range (M )) = range(L L1 1 ) from part (d)
of this exercise yields L2 1 (range(M )) = range(L).
(f) From part (e), dim(L2 1 (range(M ))) = dim(range(L)). Apply Exercise 16 with L replaced by L2 1 and Y = range(M ) to show that
dim(L2 1 (range(M ))) = dim(range(M )), completing the proof.
(19) (a) A nonsingular 2 2 matrix must have at least one of its …rst column
entries nonzero. Then, use an appropriate equation from the two
given in the problem.
(b) [k 0; 0 1][x, y] = [kx, y], a contraction (if k ≤ 1) or dilation (if k ≥ 1) along the x-coordinate. Similarly, [1 0; 0 k][x, y] = [x, ky], a contraction (if k ≤ 1) or dilation (if k ≥ 1) along the y-coordinate.
k 0
…rst performs a
0 1
dilation/contraction by part (a) followed by a reflection about the
(c) By the hint, multiplying by
y-axis. Using
plying by
1 0
0 k
1 0
0 k
=
1
0
0
1
1
0
0
k
shows that multi-
…rst performs a dilation/contraction by part (a)
followed by a reflection about the x-axis.
(d) These matrices are de…ned to represent shears in Exercise 11 in Section 5.1.
(e)
0
1
1
0
x
y
=
y
x
Section 5.5
; that is, this matrix multiplication switches
the coordinates of a vector in R2 . That is precisely the result of a
re‡ection through the line y = x.
(f) This is merely a summary of parts (a) through (e).
p #
" p2
#
#"
" p2
2
1 0
0
1 p0
1
1
2
2
2
p
=
, where
(20)
p
p
2
0
1
0
2
2
2
1
0
1
2
2
2
the matrices on the right side are, respectively, a contraction along the
x-coordinate, a shear in the y-direction, a dilation along the y-coordinate,
and a shear in the x-direction.
(21) (a) Consider L: P2 ! R3 given by L(p) = [p(x1 ); p(x2 ); p(x3 )]: Now,
dim(P2 ) = dim(R3 ) = 3: Hence, by Corollary 5.21, if L is either oneto-one or onto, it has the other property as well.
We will show that L is one-to-one using Theorem 5.12. If p 2 ker(L),
then L(p) = 0, and so p(x1 ) = p(x2 ) = p(x3 ) = 0. Hence p is a
polynomial of degree 2 touching the x-axis at x = x1 , x = x2 , and
x = x3 . Since the graph of p must be either a parabola or a line,
it cannot touch the x-axis at three distinct points unless its graph is
the line y = 0. That is, p = 0 in P2 . Therefore, ker(L) = f0g, and
L is one-to-one.
Now, by Corollary 5.21, L is onto. Thus, given any 3-vector
[a; b; c], there is some p 2 P2 such that p(x1 ) = a, p(x2 ) = b, and
p(x3 ) = c.
(b) From part (a), L is an isomorphism. Hence, any [a; b; c] 2 R3 has a
unique pre-image under L in P2 .
(c) Generalize parts (a) and (b) using L: Pn ! Rn+1 given by L(p) =
[p(x1 ); : : : ; p(xn ); p(xn+1 )]: Note that any polynomial in ker(L) has
n+1 distinct roots, and so must be trivial. Hence L is an isomorphism
by Corollary 5.21. Thus, given any (n + 1)-vector [a1 ; : : : ; an ; an+1 ],
there is a unique p 2 Pn such that p(x1 ) = a1 ; : : : ; p(xn ) = an , and
p(xn+1 ) = an+1 .
(22) (a) Clearly, multiplying a nonzero polynomial by x produces another
nonzero polynomial. Hence, ker(L) = f0P g, and L is one-to-one.
But, every nonzero element of range(L) has degree at least 1, since L
multiplies by x. Hence, the constant polynomial 1 is not in range(L),
and L is not onto.
(b) Corollary 5.21 requires that the domain and codomain of the linear transformation be finite dimensional. However, P is infinite dimensional.
(23) (a) T   (b) T   (c) F   (d) F   (e) F   (f) T   (g) T   (h) T   (i) T
Section 5.6
(1) (a) λ1 = 2: basis for E2 = {[1, 0]}; algebraic multiplicity 2, geometric multiplicity 1
    (b) λ1 = 3: basis {[1, 4]}, alg. mult. 1, geom. mult. 1; λ2 = 2: basis {[0, 1]}, alg. mult. 1, geom. mult. 1
    (c) λ1 = 1: basis {[2, 1, 1]}, alg. mult. 1, geom. mult. 1; λ2 = −1: basis {[−1, 0, 1]}, alg. mult. 1, geom. mult. 1; λ3 = 2: basis {[1, 1, 1]}, alg. mult. 1, geom. mult. 1
    (d) λ1 = 2: basis {[5, 4, 0], [3, 0, 2]}, alg. mult. 2, geom. mult. 2; λ2 = 3: basis {[0, 1, 1]}, alg. mult. 1, geom. mult. 1
    (e) λ1 = −1: basis {[−1, 2, 3]}, alg. mult. 2, geom. mult. 1; λ2 = 0: basis {[−1, 1, 3]}, alg. mult. 1, geom. mult. 1
    (f) λ1 = 0: basis {[6, 1, 1, 4]}, alg. mult. 2, geom. mult. 1; λ2 = 2: basis {[7, 1, 0, 5]}, alg. mult. 2, geom. mult. 1
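The eigenvalue data reported above can be checked numerically. The Python sketch below uses a hypothetical matrix with the same eigenstructure reported in part (a) (eigenvalue 2 with algebraic multiplicity 2 and geometric multiplicity 1):

    import numpy as np

    A = np.array([[2., 1.],
                  [0., 2.]])

    eigvals, eigvecs = np.linalg.eig(A)
    print(eigvals)          # [2., 2.]  -> lambda = 2, algebraic multiplicity 2
    print(eigvecs)          # columns are (numerical) eigenvectors

    # Geometric multiplicity = dim(E_lambda) = n - rank(A - lambda*I):
    lam = 2.0
    print(A.shape[0] - np.linalg.matrix_rank(A - lam * np.eye(2)))   # 1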
(2) (a) C = (e1 ; e2 ; e3 ; e4 ); B = ([1; 1; 0; 0]; [0; 0; 1; 1]; [1; 1; 0; 0]; [0; 0; 1; 1]);
3
3
2
2
1 0
0
0
0 1 0 0
6 0 1
6 1 0 0 0 7
0
0 7
7;
7
6
A =6
4 0 0 0 1 5; D = 4 0 0
1
0 5
0 0
0
1
0 0 1 0
3
2
1 0
1
0
6 1 0
1
0 7
7
6
P=4
0 1
0
1 5
0 1
0
1
(b) C = (x2 ; x; 1); B
2
2
0
1
A=4 2
0
1
= (x2 2x + 1;
3
2
0
2
0 5; D = 4 0
0
0
x + 1; 1);
3
2
0 0
1 0 5; P = 4
0 0
2
1
0
3
(c) Not diagonalizable; C = (x2 ; x; 1); A = 4 2
0
1
116
1
2
1
3
0 0
1 0 5
1 1
3
0
0 5;
3
Andrilli/Hecker - Answers to Exercises
Eigenvalue
1
1 =
3
2 =
Section 5.6
Basis for E
f2x2 + 2x + 1g
f1g
Alg. Mult.
1
2
2
(d) C = (x2 ; x; 1);
B = (2x2
2
A=4
D=4
8x + 9; x; 1);
3
1
0
0
12
4
0 5;
18
0
5
2
P=4
(e) Not diagonalizable; no eigenvalues; A =
(f) C =
1
0
0
0
;
B=
1
0
0
0
;
2
1
6 0
A =6
4 0
0
2
1
6 0
P=6
4 0
0
0
0
1
0
0
1
0
0
0
0
1
0
0 0
0 1
3
;
;
0
1
0
0
;
0
1
2
1
0
;
0
6
0 7
7; D = 6
4
5
0
1
3
0
1 7
7
1 5
0
0 0
0 1
0 1
1 0
1
0
0
0
0
1
0
0
(g) C =
1
0
0
0
;
0
0
1
0
;
0
1
0
0
B=
1
0
0
0
;
0
1
1
0
;
0
0
0
1
2
2
1
6 0
A =6
4 0
0
2
1
6 0
P=6
4 0
0
(h) C =
1
0
3
0 0
6
1 0 7
7; D = 6
4
1 0 5
0 0
3
0
1 7
7
1 5
0
0
1
1
0
0 0
1 0
1 0
0 1
0
0
;
0
0
1
0
Geom. Mult.
1
1
3
1
0
0
0
4
0 5;
0
0
5
3
2 0 0
8 1 0 5
9 0 1
p #
;
117
0
1
0
0
"
1
p2
3
2
0
0
0
1
0
0
;
0
1
0
0
0
0
;
1
0
3
;
0
0 7
7;
0 5
1
;
0
0
0
0
;
0
1
0
0
1
0
3
2
1
2
0
1
;
1
0
0
0
0
0
0
0
;
3
0
0 7
7;
0 5
2
0
1
;
Andrilli/Hecker - Answers to Exercises
3
5
B=
2
0
0
4
0
10
0
6
A=6
4
2
3
6 0
P=6
4 5
0
(3) (a) pL (x) =
5)
2
4
16
(x
= (x
= (x
8 (x
3)
3)
1
2
0
;
0
3
2
1
7
6 0
7;D = 6
5
4 0
0
;
2
(x
0
0
1)
4
0
5)
2
16
16
5)
3
5
0 3 0
4 0 3
0 7 0
10 0 7
3
1 0
0 1 7
7
2 0 5
0 2
0
3
0
5
(x
0
0
;
Section 5.6
(x
2
1)
0
2
(x
(x
1)
4
1)
3)
1
2
0
0
2
0
3
0
0 7
7;
0 5
2
2
4
5)
2
2
4
8
+ (x + 5)
2
;
(x
1
1
x+5
1
1
0
1
0
0
1
2
1
1
2
(x + 5)
8
(x
0
0
1
2
(x
(x
1
1
2
1)
4
5)
2
4
2
(x
1)
2
(x
1)
1
1
= (x 3)( 16(x 3) + (x + 5)(x2 6x + 9))
8((x 5)( 2x + 6) 2(0) 4(x 3))
= (x 3)( 16(x 3) + (x + 5)(x 3)2 )
8(( 2)(x 5)(x 3) 4(x 3))
= (x 3)2 ( 16 + (x + 5)(x 3)) +16(x 3)((x 5) + 2)
= (x 3)2 (x2 + 2x 31) + 16(x 3)2 = (x 3)2 (x2 + 2x 15)
= (x 3)3 (x + 5). You can check that this polynomial expands as
claimed.
2
3
1
3
2
1 0 0
8
10
2
0
1
6
1 7
6
6 0 1 0
7
2
6
0
1 7
8 7
7
6
6
(b) 4
when put into
becomes 6
1 7
4
4
8
2 5
4 0 0 1
4 5
16
0
8
0
0 0 0
0
reduced row echelon form.
(4) For both parts, the only eigenvalue is
= 1; E1 = f1g.
(5) Since A is upper triangular, so is xIn − A. Thus, by Theorem 3.2, |xIn − A| equals the product of the main diagonal entries of xIn − A, which is (x − aii)^n, since a11 = a22 = ... = ann. Thus A has exactly one eigenvalue λ with algebraic multiplicity n. Hence A is diagonalizable ⟺ λ has geometric multiplicity n ⟺ Eλ = Rn ⟺ Av = λv for all v ∈ Rn ⟺ Av = λv for all v ∈ {e1, ..., en} ⟺ A = λIn (using Exercise 14(b) in Section 1.5).
(6) In both cases, the given linear operator is diagonalizable, but there are
less than n distinct eigenvalues. This occurs because at least one of the
eigenvalues has geometric multiplicity greater than 1.
(7) (a) [ 1 1 −1 ]
        [ 0 1 0 ]
        [ 0 0 1 ];
    eigenvalue λ = 1; basis for E1 = {[1, 0, 0], [0, 1, 1]}; λ has algebraic multiplicity 3 and geometric multiplicity 2
    (b) [ 1 0 0 ]
        [ 0 1 0 ]
        [ 0 0 0 ];
    eigenvalues λ1 = 1, λ2 = 0; basis for E1 = {[1, 0, 0], [0, 1, 0]}; λ1 has algebraic multiplicity 2 and geometric multiplicity 2
(8) (a) Zero is not an eigenvalue for L iff ker(L) = {0}. Now apply Theorem 5.12 and Corollary 5.21.
    (b) L(v) = λv ⟹ L⁻¹(L(v)) = L⁻¹(λv) ⟹ v = λL⁻¹(v) ⟹ L⁻¹(v) = (1/λ)v (since λ ≠ 0 by part (a)).
(9) Let P be such that P⁻¹AP is diagonal. Let C = (v1, ..., vn), where [vi]B = ith column of P. Then, by definition, P is the transition matrix from C to B. Then, for all v ∈ V, P⁻¹AP[v]C = P⁻¹A[v]B = P⁻¹[L(v)]B = [L(v)]C. Hence, as in part (a), P⁻¹AP is the matrix for L with respect to C. But P⁻¹AP is diagonal.
(10) If P is the matrix with columns v1, ..., vn, then P⁻¹AP = D is a diagonal matrix with λ1, ..., λn as the main diagonal entries of D. Clearly |D| = λ1 λ2 ... λn. But |D| = |P⁻¹AP| = |P⁻¹| |A| |P| = |A| |P⁻¹| |P| = |A|.
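A numerical spot-check of Exercise 10 on a hypothetical diagonalizable matrix:

    import numpy as np

    A = np.array([[4., 1., 0.],
                  [1., 4., 0.],
                  [0., 0., 2.]])
    eigvals = np.linalg.eigvals(A)
    # |A| equals the product of the eigenvalues.
    print(np.isclose(np.prod(eigvals), np.linalg.det(A)))   # True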
(11) Clearly n ≥ Σ_{i=1}^k (algebraic multiplicity of λi) ≥ Σ_{i=1}^k (geometric multiplicity of λi), by Theorem 5.26.
(12) Proof by contrapositive: If pL (x) has a nonreal root, then the sum of
the algebraic multiplicities of the eigenvalues of L is less than n. Apply
Theorem 5.28.
(13) (a) A(Bv) = (AB)v = (BA)v = B(Av) = B(λv) = λ(Bv).
    (b) The eigenspaces Eλ for A are one-dimensional. Let v ∈ Eλ, with v ≠ 0. Then Bv ∈ Eλ, by part (a). Hence Bv = cv for some c ∈ R, and so v is an eigenvector for B. Repeating this for each eigenspace of A shows that B has n linearly independent eigenvectors. Now apply Theorem 5.22.
(14) (a) Suppose v1 and v2 are eigenvectors for A corresponding to λ1 and λ2, respectively. Let K1 be the 2 × 2 matrix [v1 0] (where the vectors represent columns). Similarly, let K2 = [0 v1], K3 = [v2 0], and K4 = [0 v2]. Then a little thought shows that {K1, K2} is a basis for Eλ1 in M22, and {K3, K4} is a basis for Eλ2 in M22.
    (b) Generalize the construction in part (a).
(15) Suppose v ∈ B1 ∩ B2. Then v ∈ B1 ⟹ v ∈ Eλ1 ⟹ L(v) = λ1v. Similarly, v ∈ B2 ⟹ v ∈ Eλ2 ⟹ L(v) = λ2v. Hence, λ1v = λ2v, and so (λ1 − λ2)v = 0. Since λ1 ≠ λ2, v = 0. But 0 can not be an element of any basis, and so B1 ∩ B2 is empty.
(16) (a) Eλ is a subspace, hence closed.
    (b) First, substitute ui for Σ_{j=1}^{ki} aij vij in the given double sum equation. This proves Σ_{i=1}^n ui = 0. Now, the set of all nonzero ui's is linearly independent by Theorem 5.23, since they are eigenvectors corresponding to distinct eigenvalues. But then, the nonzero terms in Σ_{i=1}^n ui would give a nontrivial linear combination from a linearly independent set equal to the zero vector. This contradiction shows that all of the ui's must equal 0.
    (c) Using part (b) and the definition of ui, 0 = Σ_{j=1}^{ki} aij vij for each i. But {vi1, ..., viki} is linearly independent, since it is a basis for Eλi. Hence, for each i, ai1 = ... = aiki = 0.
    (d) Apply the definition of linear independence to B.
(17) Example 7 states that pA(x) = x3 − 4x2 + x + 6. So, pA(A) = A3 − 4A2 +
2
3
2
3
17
22
76
5
2
4
A + 6I3 = 4 374 214 1178 5 4 4 106 62 334 5
54
27
163
18
9
53
2
3
2
3 2
3
31
14
92
1 0 0
0 0 0
28 158 5 + 6 4 0 1 0 5 = 4 0 0 0 5 , verifying the
+ 4 50
18
9
55
0 0 1
0 0 0
Cayley-Hamilton Theorem for this matrix.
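The Cayley-Hamilton computation in Exercise 17 can be reproduced numerically for any matrix; the sketch below uses a hypothetical 3 × 3 matrix (Example 7's matrix is in the textbook) and evaluates pA(A) by Horner's method:

    import numpy as np

    A = np.array([[1., 2., 0.],
                  [0., 3., 1.],
                  [2., 0., 1.]])
    coeffs = np.poly(A)            # characteristic polynomial, highest power first

    # Evaluate p_A(A) with matrix arithmetic (Horner's method).
    P = np.zeros_like(A)
    for c in coeffs:
        P = P @ A + c * np.eye(3)
    print(np.allclose(P, 0))       # True: p_A(A) = O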
(18) (a) F   (b) T   (c) T   (d) F   (e) T   (f) T   (g) T   (h) F   (i) F   (j) T
Chapter 5 Review Exercises
(1) (a) Not a linear transformation: f ([0; 0; 0]) 6= [0; 0; 0]
(b) Linear transformation:
g((ax3 + bx2 + cx + d) + (kx3 + lx2 + mx + n))
= g((a + k)x3 + (b + l)x2 + (c + m)x + (d + n))
2
3
4(b + l) (c + m)
3(d + n) (a + k)
5
2(d + n) + 3(a + k)
4(c + m)
=4
5(a + k) + (c + m) + 2(d + n) 2(b + l) 3(d + n)
2
3 2
3
4b c
3d a
4l m
3n k
5+4
4c
2n + 3k
4m 5
= 4 2d + 3a
5a + c + 2d 2b 3d
5k + m + 2n 2l 3n
= g(ax3 + bx2 + cx + d) + g(kx3 + lx2 + mx + n);
g(k(ax3 + bx2 + cx + d)) = g((ka)x3 + (kb)x2 + (kc)x + (kd))
2
3
4(kb) (kc)
3(kd) (ka)
5
2(kd) + 3(ka)
4(kc)
=4
5(ka) + (kc) + 2(kd) 2(kb) 3(kd)
2
3
4b c
3d a
5
4c
= k 4 2d + 3a
5a + c + 2d 2b 3d
= k(g(ax3 + bx2 + cx + d))
(c) Not a linear transformation: h([1, 0] + [0, 1]) ≠ h([1, 0]) + h([0, 1]) since h([1, 1]) = [2, 3] and h([1, 0]) + h([0, 1]) = [0, 0] + [0, 0] = [0, 0].
(2) [1.598, 3.232]
(3) f(A1) + f(A2) = CA1B⁻¹ + CA2B⁻¹ = C(A1B⁻¹ + A2B⁻¹) = C(A1 + A2)B⁻¹ = f(A1 + A2); f(kA) = C(kA)B⁻¹ = kCAB⁻¹ = kf(A).
(4) L([6, 2, −7]) = [20, 10, 44];
L([x, y, z]) = [ −3 5 −4 ] [ x ]
               [ 2 −1 0 ] [ y ]
               [ 4 3 −2 ] [ z ]
= [−3x + 5y − 4z, 2x − y, 4x + 3y − 2z]
(5) (a) Use Theorem 5.2 and part (1) of Theorem 5.3.
(b) Use Theorem 5.2 and part (2) of Theorem 5.3.
(6) (a) ABC =
2
(b) ABC = 4
(7) (a) ABC
(b) ABC
29 32
43 42
113
566
1648
2
6
58
291
847
51
255
742
2
3
58
283 5
823
2
3
2 1
3
0
151
0
4 5; ADE = 4 171
=4 1 3
0 0
1
2
238
2
3
2
6
1
1
115
6 0
6 374
3
2 7
6
7
6
=4
; ADE = 4
2
0
4 5
46
1
5
1
271
121
99
113
158
45
146
15
108
163
186
260
3
59
190 7
7
25 5
137
3
16
17 5
25
Andrilli/Hecker
2
0
1
0
(8) 4 0
0
0
- Answers to Exercises
3
0
0 5
1
(9) (a) pABB(x) = x3 − x2 − x + 1 = (x + 1)(x − 1)2
(b) Fundamental eigenvectors for λ1 = 1: {[2, 1, 0], [2, 0, −3]}; for λ2 = −1: {[3, −6, 2]}
(c) C = ([2, 1, 0], [2, 0, −3], [3, −6, 2])
(d) ACC = [ 1 0 0 ]
          [ 0 1 0 ]
          [ 0 0 −1 ].
(Note: P = [ 2 2 3 ]
           [ 1 0 −6 ]
           [ 0 −3 2 ]
and P⁻¹ = (1/49) [ 18 13 12 ]
                 [ 2 −4 −15 ]
                 [ 3 −6 2 ].)
(e) This is a reflection through the plane spanned by {[2, 1, 0], [2, 0, −3]}. The equation for this plane is 3x − 6y + 2z = 0.
(10) (a) Basis for ker(L) = f[2; 3; 1; 0]; [ 3; 4; 0; 1]g; basis for range(L) =
f[3; 2; 2; 1]; [1; 1; 3; 4]g
(b) 2 + 2 = 4
(c) [ 18; 26; 4; 2] 2
= ker(L) because L([ 18; 26; 4; 2]) =
[ 6; 2; 10; 20] 6= 0; [ 18; 26; 6; 2] 2 ker(L) because
L([ 18; 26; 6; 2]) = 0
(d) [8; 3; 11; 23] 2 range(L); row reduction shows [8; 3; 11; 23] =
5[3; 2; 2; 1] 7[1; 1; 3; 4], and so, L([5; 7; 0; 0]) = [8; 3; 11; 23]
3
2
1 0
0 0 0
1
6 0 1
2 0 0
0 7
7;
(11) Matrix for L with respect to standard bases: 6
4 0 0
0 1 0
3 5
0 0
0 0 0
0
Basis for ker(L) =
0
0
2
0
1
0
;
0
0
0
1
0
0
;
1
3
0
0
0
1
;
Basis for range(L) = fx3 ; x2 ; xg;
dim(ker(L)) + dim(range(L)) = 3 + 3 = 6 = dim(M32 )
(12) (a) If v 2 ker(L1 ), then L1 (v) = 0W . Thus, (L2 L1 )(v) = L2 (L1 (v)) =
L2 (0W ) = 0X , and so v 2 ker(L2 L1 ). Therefore,
ker(L1 ) ker(L2 L1 ). Hence, dim(ker(L1 )) dim(ker(L2 L1 )).
(b) Let L1 be the projection onto the x-axis (L1 ([x; y]) = [x; 0]) and L2
be the projection onto the y-axis (L2 ([x; y]) = [0; y]). A basis for
ker(L1 ) is f[0; 1]g, while ker(L2 L1 ) = R2 .
(13) (a) By part (2) of Theorem 5.9 applied to L and M, dim(ker(L)) − dim(ker(M)) = (n − rank(A)) − (m − rank(AT)) = (n − m) − (rank(A) − rank(AT)) = n − m, by Corollary 5.11.
(b) L onto =) dim(range(L)) = m = rank(A) (part (1) of Theorem
5.9) = rank(AT ) (Corollary 5.11) = dim(range(M )). Thus, by part
(3) of Theorem 5.9, dim(ker(M )) + dim(range(M )) = m, implying
dim(ker(M )) + m = m, and so dim(ker(M )) = 0, and M is one-toone by part (1) of Theorem 5.12.
(c) Converse: M one-to-one =) L onto. This is true. M one-to-one
=) dim(ker(M )) = 0 =) dim(ker(L)) = n m (by part (a)) =)
(n m) + dim(range(L)) = n (by part (3) of Theorem 5.9) =)
dim(range(L)) = m =) L is onto (by Theorem 4.16, and part (2) of
Theorem 5.12).
(14) (a) L is not one-to-one because L(x3 − x + 1) = O22. Corollary 5.21 then shows that L is not onto.
    (b) ker(L) has {x3 − x + 1} as a basis, so dim(ker(L)) = 1. Thus, dim(range(L)) = 3 by the Dimension Theorem.
(15) In each part, let A represent the given matrix for L with respect to the
standard bases.
(a) The reduced row echelon form of A is I3 . Therefore, ker(L) = f0g,
and so dim(ker(L)) = 0 and L is one-to-one. By Corollary 5.21, L is
also onto. Hence dim(range(L)) = 3.
3
2
1 0
4 0
1 0 5.
(b) The reduced row echelon form of A is 4 0 1
0 0
0 1
dim(ker(L)) = 4 rank(A) = 1. L is not one-to-one.
dim(range(L)) = rank(A) = 3. L is onto.
(16) (a) dim(ker(L)) = dim(P3) − dim(range(L)) ≥ 4 − dim(R3) = 1, so L can not be one-to-one.
    (b) dim(range(L)) = dim(P2) − dim(ker(L)) ≤ dim(P2) = 3 < dim(M22), so L can not be onto.
(17) (a) L(v1 ) = cL(v2 ) = L(cv2 ) =) v1 = cv2 , because L is one-to-one. In
this case, the set fL(v1 ); L(v2 )g is linearly dependent, since L(v1 ) is
a nonzero multiple of L(v2 ). Part (1) of Theorem 5.13 then implies
that fv1 ; v2 g must be linearly dependent, which is correct, since v1
is a nonzero multiple of v2 .
(b) Consider the contrapositive: For L onto and w 2 W, if fv1 ; v2 g
spans V, then L(av1 + bv2 ) = w for some a; b 2 R. Proof of the
contrapositive: Suppose L is onto, w 2 W, and fv1 ; v2 g spans V.
By part (2) of Theorem 5.13, fL(v1 ); L(v2 )g spans W, so there exist
a; b 2 R such that w = aL(v1 ) + bL(v2 ) = L(av1 + bv2 ).
(18) (a) L1 and L2 are isomorphisms (by Theorem 5.15) because their (given)
matrices are nonsingular. Their inverses are given in part (b).
2
3
81
71
15 18
6 107
77
31 19 7
7;
(b) Matrix for L2 L1 = 6
4 69
45
23 11 5
29
36
1
9
2
3
2
10
19
11
6 0
5
10
5 7
7;
Matrix for L1 1 : 51 6
4 3
15
26
14 5
4
15
23
17
2
3
8
26
30
2
6 10
35
41
4 7
7:
Matrix for L2 1 : 21 6
4 10
30
34
2 5
14
49
57
6
3
2
80
371
451
72
6
20
120
150
30 7
1 6
7:
(c) Both computations yield 10
4 110
509
619
98 5
190
772
922
124
(19) (a) The determinant of the shear matrix given in Table 5.1 is 1, so this
matrix is nonsingular. Therefore, the given mapping is an isomorphism by Theorem 5.15.
(b) The inverse isomorphism is
L⁻¹([a1, a2, a3]) = [ 1 0 −k ] [ a1 ]
                    [ 0 1 −k ] [ a2 ]
                    [ 0 0 1 ] [ a3 ]
= [a1 − ka3, a2 − ka3, a3]. This represents a shear in the z-direction with factor −k.
(20) (a) (BᵀAB)ᵀ = BᵀAᵀ(Bᵀ)ᵀ = BᵀAB, since A is symmetric.
(b) Suppose A, C ∈ W. Note that if BᵀAB = BᵀCB, then A = C because B is nonsingular. Thus L is one-to-one, and so L is onto by Corollary 5.21. An alternate approach is as follows: Suppose C ∈ W. If A = (Bᵀ)⁻¹CB⁻¹, then L(A) = C. Thus L is onto, and so L is one-to-one by Corollary 5.21.
(21) (a) L(ax⁴ + bx³ + cx²) = 4ax³ + (12a + 3b)x² + (6b + 2c)x + 2c. Clearly ker(L) = {0}. Apply part (1) of Theorem 5.12.
(b) No. dim(W) = 3 ≠ dim(P₃).
(c) The polynomial x ∉ range(L), since it does not have the correct form for an image under L (as stated in part (a)).
(22) In parts (a), (b), and (c), let A represent the given matrix.
(a) (i) pA (x) = x3 3x2 x + 3 = (x 1)(x + 1)(x 3); Eigenvalues for
L: 1 = 1, 2 = 1, and 3 = 3; Basis for E1 : f[ 1; 3; 4]g; Basis for
E 1 : f[ 1; 4; 5]g; Basis for E3 : f[ 6; 20; 27]g
(ii) All algebraic and geometric multiplicities equal 1; L is diagonalizable.
2
3
1
0 0
1 0 5;
(iii) B = f[ 1; 3; 4]; [ 1; 4; 5]; [ 6; 20; 27]g; D = 4 0
0
0 3
2
3
2
3
1
1
6
8
3
4
4 20 5 : (Note that: P 1 = 4 1
3
2 5).
P=4 3
4
5 27
1
1
1
(b) (i) pA (x) = x3 + 5x2 + 20x + 28 = (x + 2)(x2 + 3x + 14); Eigenvalue
for L: = 2 since x2 + 3x + 14 has no real roots; Basis for E 2 :
f[0; 1; 1]g
(ii) For = 2: algebraic multiplicity = geometric multiplicity = 1;
L is not diagonalizable.
(c) (i) pA (x) = x3 5x2 + 3x + 9 = (x + 1)(x 3)2 ; Eigenvalues for
L: 1 = 1, and 2 = 3; Basis for E 1 : f[1; 3; 3]g; Basis for E3 :
f[1; 5; 0]; [3; 0; 25]g
(ii) For 1 = 1: algebraic multiplicity = geometric multiplicity =
1; For 2 = 3: algebraic multiplicity = geometric multiplicity = 2; L
is diagonalizable.
2
3
1 0 0
(iii) B = f[1; 3; 3]; [1; 5; 0]; [3; 0; 25]g; D = 4 0 3 0 5;
0 0 3
2
3
2
3
125
25
15
1 1
3
16
9 5).
0 5 : (Note that: P 1 = 51 4 75
P=4 3 5
15
3
2
3 0 25
(d) Matrix for L
2
1
6 3
A=6
4 0
0
with respect to standard coordinates:
3
0
0
0
0
0
0 7
7.
2
1
0 5
0
1
2
(i) pA (x) = x4 + 2x3 x2 2x = x(x 1)(x + 1)(x + 2); Eigenvalues for L: 1 = 1, 2 = 0, 3 = 1, and 4 = 2; Basis for E1 :
f x3 + 3x2 3x + 1g (f[ 1; 3; 3; 1]g in standard coordinates); Basis
for E0 : fx2 2x + 1g (f[0; 1; 2; 1]g in standard coordinates); Basis
for E 1 : f x + 1g (f[0; 0; 1; 1]g in standard coordinates); Basis for
E 2 : f1g (f[0; 0; 0; 1]g in standard coordinates)
(ii) All algebraic and geometric multiplicities equal 1; L is diagonalizable.
(iii) B = f x3 + 3x2 3x + 1; x2 2x + 1; x + 1; 1g
(f[ 1; 3; 3; 1]; [0; 1; 2; 1]; [0; 0; 1; 1]; [0; 0; 0; 1]g in standard coordi2
3
2
3
1
0
0 0
1 0
0
0
6
6 0 0
1
0 0 7
0
0 7
7 : (Note
7; P = 6 3
nates); D = 6
4 3
4 0 0
2
1 0 5
1
0 5
1
1
1 1
0 0
0
2
that: P
1
= P).
(23) Basis for E1 : f[a; b; c]; [d; e; f ]g; Basis for E 1 : f[bf ce; cd af; ae
Basis2of eigenvectors:
3 f[a; b; c]; [d; e; f ]; [bf ce; cd af; ae bd]g;
1 0
0
0 5
D=4 0 1
0 0
1
(24) pA (A) = A4 4A3 18A2 + 108A 135I4
3
2
2
97
649
216
176
68
7
6 70
6
568
135
176
68
7 46
=6
4 140
4 1136
432
271
136 5
304
1088
0
544
625
3
2
2
5 2
37 12
8
2
7
6 2 1
6 28
3
8
2
7 + 108 6
18 6
4 4 4
4 56 24
7
4 5
16 0
32
0 16 25
3
2
135
0
0
0
6 0 135
0
0 7
7 = O44
6
4 0
0 135
0 5
0
0
0 135
54
27
108
0
0
0
3
8
8
8
11
152
3
1
1 7
7
2 5
5
bd]g.
3
19
19 7
7
38 5
125
(25) (a) T (b) F (c) T (d) T (e) T (f) F (g) F (h) T (i) T (j) F (k) F (l) T (m) F
(n) T (o) T (p) F (q) F (r) T (s) F (t) T (u) T (v) F (w) T (x) T (y) T (z) T
Chapter 6
Section 6.1
(1) Orthogonal, not orthonormal: (a), (f);
Orthogonal and orthonormal: (b), (d), (e), (g);
Neither: (c)
(2) Orthogonal: (a), (d), (e);
Not orthogonal, columns not normalized: (b), (c)
i
h p
p
(b) [v]B = [2; 1; 4]
(3) (a) [v]B = 2 23+3 ; 3 23 2
p
p
p
(c) [v]B = [3; 133 3 ; 5 3 6 ; 4 2]
(4) (a) f[5; 1; 2]; [5; 3; 14]g
(b) f[2; 1; 3; 1]; [ 7; 1; 0; 13]g
(c) f[2; 1; 0; 1]; [ 1; 1; 3; 1]; [5; 7; 5; 3]g
(d) f[0; 1; 3; 2]; [2; 3; 1; 0]; [ 3; 2; 0; 1]g
(e) f[4; 1; 2; 2]; [2; 0; 3; 1]; [3; 8; 4; 6]g
(5) (a) f[2; 2; 3]; [13; 4; 6]; [0; 3; 2]g
(e) f[2; 1; 2; 1]; [3; 1; 2; 1];
[0; 5; 2; 1]; [0; 0; 1; 2]g
(b) f[1; 4; 3]; [25; 4; 3]; [0; 3; 4]g
(c) f[1; 3; 1]; [2; 5; 13]; [4; 1; 1]g
(f) f[2; 1; 0; 3]; [0; 3; 2; 1];
[5; 1; 0; 3]; [0; 3; 5; 1]g
(d) f[3; 1; 2]; [5; 3; 6]; [0; 2; 1]g
(6) Orthogonal basis for W = f[ 2; 1; 4; 2; 1]; [4; 3; 0; 2; 1];
[ 1; 33; 15; 38; 19]; [3; 3; 3; 2; 7]g
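Exercises 4 through 6 are applications of the Gram-Schmidt process. The sketch below is one way to reproduce such computations numerically; the three input vectors are made up for illustration and are not taken from any of the exercises.

```python
import numpy as np

def gram_schmidt(vectors):
    """Return an orthogonal (not normalized) basis for the span of `vectors`."""
    basis = []
    for v in vectors:
        w = v.astype(float).copy()
        for b in basis:
            w -= (np.dot(w, b) / np.dot(b, b)) * b   # subtract the projection onto b
        if not np.allclose(w, 0):                    # skip vectors already in the span
            basis.append(w)
    return basis

# Hypothetical input vectors.
vs = [np.array([1., 1., 0.]), np.array([1., 0., 1.]), np.array([0., 1., 1.])]
basis = gram_schmidt(vs)
for w in basis:
    print(w)
print(round(np.dot(basis[0], basis[1]), 10))   # pairwise dot products are (numerically) zero
```

Note that the answers printed in this manual are often rescaled to clear fractions, so a numerical routine will generally produce scalar multiples of the vectors listed in the answer key.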
(7) (a) [ 1; 3; 3]
(b) [3; 3; 1]
(c) [5; 1; 1]
(d) [4; 3; 2]
(8) (a) (cᵢvᵢ) · (cⱼvⱼ) = cᵢcⱼ(vᵢ · vⱼ) = cᵢcⱼ · 0 = 0, for i ≠ j.
(b) No
(9) (a) Express v and w as linear combinations of {u₁, ..., uₙ} using Theorem 6.3. Then expand and simplify v · w.
(b) Let w = v in part (a).
(10) Follow the hint. Then use Exercise 9(b). Finally, drop some terms to get the inequality.
(11) (a) (A⁻¹)ᵀ = (Aᵀ)⁻¹ = (A⁻¹)⁻¹, since A is orthogonal.
(b) AB(AB)ᵀ = AB(BᵀAᵀ) = A(BBᵀ)Aᵀ = AIAᵀ = AAᵀ = I. Hence (AB)ᵀ = (AB)⁻¹.
(12) A = Aᵀ ⟹ A² = AAᵀ ⟹ Iₙ = AAᵀ ⟹ Aᵀ = A⁻¹.
Conversely, Aᵀ = A⁻¹ ⟹ Iₙ = AAᵀ ⟹ A² = AAᵀ ⟹ A = IₙAᵀ ⟹ A = Aᵀ.
(13) Suppose A is both orthogonal and skew-symmetric. Then A² = A(−Aᵀ) = −AAᵀ = −Iₙ. Hence |A|² = |−Iₙ| = (−1)ⁿ|Iₙ| (by Corollary 3.4) = −1, if n is odd, a contradiction.
(14) A(A + Iₙ)ᵀ = A(Aᵀ + Iₙ) = AAᵀ + A = Iₙ + A (since A is orthogonal). Hence |Iₙ + A| = |A| · |(A + Iₙ)ᵀ| = (−1)|A + Iₙ|, implying |A + Iₙ| = 0. Thus A + Iₙ is singular.
(15) Use part (2) of Theorem 6.7. To be a unit vector, the …rst column must be
[ 1; 0; 0]. To be a unit vector orthogonal to the …rst column, the second
column must be [0; 1; 0], etc.
(16) (a) By Theorem 6.5, we can expand the orthonormal set fug to an orthonormal basis for Rn . Form the matrix using these basis vectors
as rows. Then use part (1) of Theorem 6.7.
p
p 3
2 p6
6
6
6
(b) 6
4
6
p
30
6
0
p
3
30
15
p
5
5
p
6
30
30
p
2 5
5
7
7 is one possible answer.
5
(17) Proof of the other half of part (1) of Theorem 6.7: the (i, j) entry of AAᵀ = (ith row of A) · (jth row of A), which equals 1 if i = j and 0 if i ≠ j (since the rows of A form an orthonormal set). Hence AAᵀ = Iₙ, and A is orthogonal.
Proof of part (2): A is orthogonal iff Aᵀ is orthogonal (by part (2) of Theorem 6.6) iff the rows of Aᵀ form an orthonormal basis for Rⁿ (by part (1) of Theorem 6.7) iff the columns of A form an orthonormal basis for Rⁿ.
(18) (a) ‖v‖² = v · v = Av · Av (by Theorem 6.9) = ‖Av‖². Then take square roots.
(b) Using part (a) and Theorem 6.9, we have (v · w)/(‖v‖ ‖w‖) = ((Av) · (Aw))/(‖Av‖ ‖Aw‖), and so the cosines of the appropriate angles are equal.
(19) A detailed proof of this can be found in the beginning of the proof of
Theorem 6.18 in Appendix A. (That portion of the proof uses no results
beyond Section 6.1.)
(20) (a) Let Q1 and Q2 be the matrices whose columns are the vectors in B
and C, respectively. Then Q1 is orthogonal by part (2) of Theorem
6.7. Also, Q1 (respectively, Q2 ) is the transition matrix from B
(respectively, C) to standard coordinates. By Theorem 4.22, Q2 1
is the transition matrix from standard coordinates to C. Theorem
4.21 then implies P = Q2 1 Q1 : Hence, Q2 = Q1 P 1 : By part (b) of
Exercise 11, Q2 is orthogonal, since both P and Q1 are orthogonal.
Thus C is an orthonormal basis by part (2) of Theorem 6.7.
(b) Let B = (b1 ; : : : ; bk ); where k = dim(V). Then the ith column of the
transition matrix from B to C is [bi ]C . Now, [bi ]C [bj ]C = bi bj (by
Exercise 19, since C is an orthonormal basis). This equals 0 if i 6= j
and equals 1 if i = j; since B is orthonormal. Hence, the columns
of the transition matrix form an orthonormal set of k vectors in Rk ;
and so this matrix is orthogonal by part (2) of Theorem 6.7.
(c) Let C = (c1 ; : : : ; ck ); where k = dim(V). Then ci cj = [ci ]B [cj ]B
(by Exercise 19 since B is orthonormal) = (P[ci ]B ) (P[cj ]B ) (by
Theorem 6.9 since P is orthogonal) = [ci ]C [cj ]C (since P is the
transition matrix from B to C) = ei ej ; which equals 0 if i 6= j and
equals 1 if i = j: Hence C is orthonormal.
(21) First note that AT A is an n n matrix. If i 6= j, the (i; j) entry of AT A
equals (ith row of AT ) (jth column of A) = (ith column of A) (jth column
of A) = 0, because the columns of A are orthogonal to each other. Thus,
AT A is a diagonal matrix. Finally, the (i; i) entry of AT A equals (ith
row of AT ) (ith column of A) = (ith column of A) (ith column of A)
= 1, because the ith column of A is a unit vector.
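The computation in Exercise 21 is easy to confirm numerically: for any matrix Q with orthonormal columns, QᵀQ collapses to an identity matrix. A minimal check, using the Q factor of a QR factorization of a random matrix (the random matrix is purely an assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((4, 3)))   # columns of Q are orthonormal

# As in Exercise 21, Q^T Q is diagonal with 1's on the diagonal, i.e. the identity.
print(np.allclose(Q.T @ Q, np.eye(3)))             # True
```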
(22) (a) F (b) T (c) T (d) F (e) T (f) T (g) T (h) F (i) T (j) T
Section 6.2
(e) W ? = span(f[ 2; 5; 1]g)
(1) (a) W ? = span(f[2; 3]g)
(b) W ? = span(f[5; 2; 1];
[0; 1; 2]g)
(c) W ? = span(f[2; 3; 7]g)
(d) W ? = span(f[3; 1; 4]g)
(2) (a) w1 = projW v = [
(b) w1 = projW v = [
(g) W ? = span(f[3; 2; 4; 1]g)
33 111 12
35 ; 35 ; 7 ];
17
9 ;
(c) w1 = projW v = [ 17 ;
(d) w1 = projW v = [
(f) W ? = span(f[7; 1; 2; 3];
[0; 4; 1; 2]g)
3
7;
w2 = [
10 14
9 ; 9 ];
2
7 ];
2
35 ;
w2 = [ 26
9 ;
17
w2 = [ 13
7 ; 7 ;
54 4 42 92
35 ; 35 ; 35 ; 35 ];
(3) Use the orthonormal basis fi; jg for W.
6 2
35 ; 7 ]
26 13
9 ; 9 ]
19
7 ]
101 63
w2 = [ 19
35 ; 35 ; 35 ;
22
35 ]
(4) (a)
p
3 129
43
(b)
p
3806
22
(c)
p
2 247
13
(d)
p
8 17
17
(5) (a) Orthogonal projection onto 3x + y + z = 0
(b) Orthogonal re‡ection through x
2y
2z = 0
2
(c) Neither. pL (x) = (x 1)(x + 1) . E1 = spanf[ 7; 9; 5]g, E
spanf[ 7; 2; 0]; [11; 0; 2]g. Eigenspaces have wrong dimensions.
1
=
(d) Neither. pL (x) = (x 1)2 (x + 1). E1 = spanf[ 1; 4; 0]; [ 7; 0; 4]g,
E 1 = spanf[2; 1; 3]g. Eigenspaces are not orthogonal.
2
3
5 2
4
2 5
(6) 91 4 2 8
4 2
5
2
3
2 3
6
2 5
(7) 17 4 3 6
6 2
3
2
1
3
1
3
1
3
6
(8) (a) 4
1
3
5
6
1
6
1
3
1
6
5
6
3
(b) This is straightforward.
7
5
(9) (a) Characteristic polynomial = x3
2x2 + x
(b) Characteristic polynomial = x3
x2
(c) Characteristic polynomial = x3
3
2
50
21
3
1 4
21
10
7 5
(10) (a) 59
3
7 58
x2
(b)
1
17
2
4
8
6
6
6
13
4
3
6
4 5
13
x+1
2
2
6 2
1 6
(c) 9 4
3
1
2
9
6
3
1 6
(d) 15
4 3
6
2
8
0
2
3
0
6
3
3
1
1
2
3
1
1
2
3
1
2 7
7
3 5
2
3
6
2 7
7
2 5
4
(11) W₁⊥ = W₂⊥ ⟹ (W₁⊥)⊥ = (W₂⊥)⊥ ⟹ W₁ = W₂ (by Corollary 6.14).
(12) Let v ∈ W₂⊥ and let w ∈ W₁. Then w ∈ W₂, and so v · w = 0, which implies v ∈ W₁⊥.
(13) Both parts rely on the uniqueness assertion just before Example 6 and the facts in the box just before Example 7 in Section 6.2, and the equation v = v + 0.
(14) v 2 W ? =) projW v = 0 (see Exercise 13) =) minimum distance
between P and W = kvk (by Theorem 6.17).
Conversely, suppose kvk is the minimum distance from P to W. Let w1 =
projW v and w2 = v projW v. Then kvk = kw2 k, by Theorem 6.17.
Hence kvk2 = kw1 + w2 k2 = (w1 + w2 ) (w1 + w2 ) = w1 w1 + 2w1 w2 +
w2 w2 = kw1 k2 + kw2 k2 (since w1 ? w2 ). Subtracting kvk2 = kw2 k2
from both sides yields 0 = kw1 k2 , implying w1 = 0. Hence v = w2 2 W ? .
(15) (a) p1 + p2
(b) cp1
To prove each part, just use the de…nition of projection and properties of
the dot product.
(16) Let A 2 V, B 2 W. Then “A B” =
Pn Pn
Pn Pi 1
Pn
Pn 1 Pn
i=1
j=1 aij bij =
i=2
j=1 aij bij +
i=1 aii bii +
i=1
j=i+1 aij bij
Pn Pi 1
Pn 1 Pn
= i=2 j=1 aij bij + 0 + i=1 j=i+1 aji ( bji ) (since bii = 0)
Pn Pi 1
Pn Pj 1
= i=2 j=1 aij bij
j=2
i=1 aji bji = 0. Now use Exercise 12(b) in
Section 4.6 and follow the hint in the text.
(17) Let u = a/‖a‖. Then {u} is an orthonormal basis for W. Hence projW b = (b · u)u = (b · a/‖a‖)(a/‖a‖) = ((b · a)/‖a‖²)a = proj_a b, according to Section 1.2.
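A quick numerical check of the identity in Exercise 17, with two made-up vectors a and b (any nonzero a will do):

```python
import numpy as np

a = np.array([2., 1., 2.])       # hypothetical spanning vector for W = span({a})
b = np.array([1., 3., -1.])      # hypothetical vector to project

u = a / np.linalg.norm(a)                        # orthonormal basis {u} for W
proj_W_b = np.dot(b, u) * u                      # proj_W b computed from the orthonormal basis
proj_a_b = (np.dot(b, a) / np.dot(a, a)) * a     # proj_a b as defined in Section 1.2

print(np.allclose(proj_W_b, proj_a_b))           # True
```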
(18) Let S be a spanning set for W. Clearly, if v 2 W ? and u 2 S then
v u = 0.
Conversely, suppose v u = 0 for all u 2 S. Let w 2 W. Then w =
a1 u1 +
+ an un for some u1 ; : : : ; un 2 S. Hence
v w = a1 (v u1 ) +
+ an (v un ) = 0 +
+ 0 = 0. Thus v 2 W ? .
(19) First, show W (W ? )? . Let w 2 W. We need to show that w 2 (W ? )? .
That is, we must prove that w ? v for all v 2 W ? . But, by the de…nition
of W ? , w v = 0, since w 2 W, which completes this half of the proof.
Next, prove W = (W ? )? . By Corollary 6.13, dim(W) = n dim(W ? ) =
n (n dim((W ? )? )) = dim((W ? )? ). Thus, by Theorem 4.16, W =
(W ? )? .
(20) The proof that L is a linear operator is straightforward.
To show W ? = ker(L), we …rst prove that W ? ker(L). If v 2 W ? ,
then since v w = 0 for every w 2 W, L(v) = projW (cv) = (v u1 )u1 +
+ (v uk )uk = 0u1 +
+ 0uk = 0. Hence, v 2 ker(L).
Next, we prove that range(L) = W. First, note that if w 2 W, then
w = w + 0, where w 2 W and 0 2 W ? . Hence, by the statements
in the text just before and just after Example 6 in Section 6.2, L(w) =
projW w = w. Therefore, every w in W is in the range of L. Similarly,
the same statements surrounding Example 6 imply projW v 2 W for every
v 2 Rn . Hence, range(L) = W.
Finally, by the Dimension Theorem, we have
dim(ker(L)) = n
Since W ?
dim(range(L)) = n
dim(W) = dim(W ? ):
ker(L), Theorem 4.16 implies that W ? = ker(L).
(21) Since for each v 2 ker(L) we have Av = 0, each row of A is orthogonal to
v. Hence, ker(L) (row space of A)? . Also, dim(ker(L)) = n rank(A)
(by part (2) of Theorem 5.9) = n
dim(row space of A) = dim((row
space of A)? ) (by Corollary 6.13). Then apply Theorem 4.16.
(22) Suppose v 2 (ker(L))? and T (v) = 0. Then L(v) = 0, so v 2 ker(L).
Apply Theorem 6.11 to show ker(T ) = f0g.
(23) First, a 2 W ? by the statement just after Example 6 in Section 6.2 of the
text. Also, since projW v 2 W and w 2 W (since T is in W), we have
b = (projW v) w 2 W.
2
2
2
Finally, kv wk = kv projW v + (projW v) wk = ka + bk =
2
2
2
2
(a+b) (a+b) = kak +kbk (since a b = 0) kak = kv projW vk :
(24) (a) Mimic the proof of Theorem 6.11.
(b) Mimic the proofs of Theorem 6.12 and Corollary 6.13.
(25) Note that, for any v 2 Rk , if we consider v to be an k 1 matrix, then
vT v is the 1 1 matrix whose single entry is v v. Thus, we can equate
v v with vT v.
(a) v (Aw) = vT (Aw) = (vT A)w = (AT v)T w = (AT v) w
(b) Let v 2 ker(L2 ). We must show that v is orthogonal to every vector
in range(L1 ). An arbitrary element of range(L1 ) is of the form L1 (w),
for some w 2 Rn . Now, v L1 (w) = L2 (v) w (by part (a)) = 0 w
(because v 2 ker(L2 )) = 0. Hence, v 2 (range(L1 ))? .
(c) By Theorem 5.9, dim(ker(L2 )) = m rank(AT ) = m rank(A) (by
Corollary 5.11). Also, dim(range(L1 )? ) = m dim(range(L1 )) (by
Corollary 6.13) = m rank(A) (by Theorem 5.9). Hence, dim(ker(L2 ))
= dim(range(L1 )? ). This, together with part (b) and Theorem 4.16
completes the proof.
(d) The row space of A = the column space of AT = range(L2 ). Switching the roles of A and AT in parts (a), (b) and (c), we see that
(range(L2 ))? = ker(L1 ). Taking the orthogonal complement of both
sides of this equality and using Corollary 6.14 yields range(L2 ) =
(ker(L1 ))? , which completes the proof.
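The relationship proved in Exercise 25 (the kernel of L2 is the orthogonal complement of the range of L1) can also be observed numerically. The sketch below uses a made-up 4 × 3 matrix A and obtains ker(L2) = null(Aᵀ) from the singular value decomposition; none of this is tied to a specific exercise.

```python
import numpy as np

# Hypothetical matrix: L1(v) = Av maps R^3 to R^4, and L2(w) = A^T w maps R^4 to R^3.
A = np.array([[1., 2., 3.],
              [0., 1., 1.],
              [1., 3., 4.],
              [2., 5., 7.]])

U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-10))        # rank(A)
ker_L2 = U[:, r:]                 # orthonormal basis for ker(L2) = null(A^T)

# Every column of A lies in range(L1), so it should be orthogonal to ker(L2).
print(np.allclose(ker_L2.T @ A, 0))   # True
```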
(26) (a) T (b) T (c) F (d) F (e) T (f) F (g) T (h) T (i) T (j) F (k) T (l) T
Section 6.3
(1) Parts (a) and (g) are symmetric, because the matrix for L is symmetric.
Part (b) is not symmetric, because the matrix for L with respect to the
standard basis is not symmetric.
Parts (c), (d), and (f) are symmetric, since L is orthogonally diagonalizable.
Part (e) is not symmetric, since L is not diagonalizable, and hence not
orthogonally diagonalizable.
2
3
58
6
18
7 24
1
(2) (a) 25
1 4
6 53
12 5
24
7
(c) 49
2
3
18 12
85
11
6 6
1 4
5
6 18 0
(b) 11
6
0 4
3
2
119
72
96
0
6
72 119
0
96 7
1 6
7
(d) 169
4
96
0 119
72 5
0
96
72
119
1
1
(3) (a) B = ( 13
[5; 12]; 13
[ 12; 5]), P =
(b) B = ( 15 [4; 3]; 15 [3; 4]), P =
p1 [
2
(c) B =
1
5
1
13
5
12
4
3
3
4
1
D=4 0
0
,D=
1
1; 1; 0]; 3p
[1; 1; 4]; 31 [ 2; 2; 1]
2
2
6
ble, since E1 is two-dimensional), P = 6
4
2
12
5
0
1
0
0
0
0 5. Another possibility is
3
1; 2]; 13 [2; 2;
3
2
2
2 5, D = 4
1
1]),
1
0
0
0
1
0
3
0
0 5
3
(d) B = ( 91 [ 8; 1; 4]; 19 [4; 4; 7]; 91 [1; 8; 4]),
2
2
3
3
8 4
1
0
0 0
8 5, D = 4 0
3 0 5
P = 19 4 1 4
4 7
4
0
0 9
3
0
0
169
0
1
(other bases are possi1
2 3
p1
2
p1
2
3
B = ( 31 [ 1; 2; 2]; 13 [2;
2
1
2
1
P = 13 4 2
2
2
0
0
,D=
p
3 2
1
p
3 2
4
p
3 2
3
2
3
1
3
7
7,
5
p1 [3; 2; 1; 0]; p1 [
142
14
(e) B =
3
6
2
P = p114 6
4 1
0
2
3
0
1
1
0
3
2
2; 3; 0; 1]; p114 [1; 0; 3; 2]; p114 [0; 1; 2; 3] ,
2
3
3
2 0
0 0
0
6
0 0 7
1 7
7
7, D = 6 0 2
4 0 0
3 0 5
2 5
0 0
0 5
3
(f) B = ( p126 [4; 1; 3]; 15 [3; 0; 4]; 5p126 [4; 25; 3]) (other bases are possible, since E 13 is two-dimensional),
3
2 4
3
p
p4
2
3
5
26
5 26
13
0
0
7
6
7
6
25
p
0
13
0 5
,D=4 0
P = 6 p126
5 26 7
5
4
0
0
13
4
3
3
p
(g) B =
2
6
6
P=6
4
p
5 26
5
26
p1 [1; 2; 0]; p1 [ 2; 1; 1]; p1 [2;
5
6
30
3
p1
p2
p2
2
5
6
30
p2
5
p1
6
p1
30
0
p1
6
p5
30
1; 5] ,
15
7
7
4
0
,
D
=
7
5
0
3
0
0 5
15
0
15
0
1
1
(4) (a) C = ( 19
[ 10; 15; 6]; 19
[15; 6; 10]),
B = ( 191p5 [20; 27; 26]; 191p5 [35; 24; 2]),
2
2
A=
2
1
,P=
p1
5
1
2
2
1
,D=
2
0
0
3
(b) C = ( 21 [1; 1; 1; 1]; 12 [ 1; 1; 1; 1]; 12 [1; 1; 1; 1]),
1
B = ( 2p
[3; 1; 3; 1]; p130 [ 2; 4; 3; 1]; 16 [ 1; 1; 0; 2]);
5
2 2
3
p
p1
p1
3
2
5
30
6
2
1
2
6
7
6
p5
p1 7
2
2 5, P = 6 0
,
A=4 1
30
6 7
4
5
2
2
1
2
3
D=4 0
0
(5) (a)
1
25
23
36
(b)
1
17
353
120
p1
5
3
0
3
0
36
2
0
0 5
3
p2
30
2
11
(c) 13 4 4
4
120
514
p2
6
4
17
8
3
4
8 5
17
(6) For example, the matrix A in Example 7 of Section 5.6 is diagonalizable
but not symmetric and hence not orthogonally diagonalizable.
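For symmetric matrices, the orthogonal diagonalization asked for in Exercises 3 and 4 can be checked with NumPy's eigh routine, which returns an orthogonal matrix of eigenvectors. A minimal sketch with a made-up symmetric matrix (not one from the exercises):

```python
import numpy as np

A = np.array([[2., 1., 0.],
              [1., 3., 1.],
              [0., 1., 2.]])       # hypothetical symmetric matrix

eigvals, P = np.linalg.eigh(A)     # for symmetric input, the columns of P are orthonormal
D = np.diag(eigvals)

print(np.allclose(P.T @ P, np.eye(3)))   # P is orthogonal
print(np.allclose(P @ D @ P.T, A))       # A = P D P^T, so A is orthogonally diagonalized
```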
p
a + c + (a c)2 + 4b2
1
p 0
(7) 2
(Note: You only
0
a+c
(a c)2 + 4b2
need to compute the eigenvalues. You do not need the corresponding
eigenvectors.)
(8) (a) Since L is diagonalizable and = 1 is the only eigenvalue, E1 = Rn .
Hence, for every v 2 Rn , L(v) = 1v = v. Therefore, L is the identity
operator.
(b) L must be the zero linear operator. Since L is diagonalizable, the
eigenspace for 0 must be all of V.
(9) Let L1 and L2 be symmetric operators on Rn . Then, note that for all
x; y 2 Rn , (L2 L1 )(x) y = L2 (L1 (x)) y = L1 (x) L2 (y) = x (L1 L2 )(y).
Now L2 L1 is symmetric i¤ for all x; y 2 Rn , x (L1 L2 )(y)
= (L2 L1 )(x) y = x (L2 L1 )(y) i¤ for all y 2 Rn , (L1 L2 )(y) =
(L2 L1 )(y) i¤ (L1 L2 ) = (L2 L1 ). (This argument uses the fact that
if for all x 2 Rn , x y = x z, then y = z. Proof: Let x = y z. Then,
((y z) y) = ((y z) z) =) ((y z) y) ((y z) z) = 0 =)
(y z) (y z) = 0 =) ky zk2 = 0 =) y z = 0 =) y = z.)
(10) (i) =) (ii): This is Exercise 6 in Section 3.4.
(ii) =) (iii): Since A and B are symmetric, both are orthogonally similar
to diagonal matrices D1 and D2 , respectively, by Theorems 6.18 and 6.20.
Now, since A and B have the same characteristic polynomial, and hence
the same eigenvalues with the same algebraic multiplicities (and geometric
multiplicities, since A and B are diagonalizable), we can assume D1 = D2
(by listing the eigenvectors in an appropriate order when diagonalizing).
Thus, if D1 = P1 1 AP1 and D2 = P2 1 BP2 , for some orthogonal matrices
P1 and P2 , then (P1 P2 1 ) 1 A(P1 P2 1 ) = B. Note that P1 P2 1 is orthogonal by Exercise 11, Section 6.1, so A and B are orthogonally similar.
(iii) =) (i): Trivial.
(11) L(v1 ) v2 = v1 L(v2 ) =) ( 1 v1 ) v2 = v1 ( 2 v2 )
=) ( 2
1 )(v1 v2 ) = 0 =) v1 v2 = 0, since 2
1
6= 0.
(12) Suppose A is orthogonal, and is an eigenvalue for A with corresponding
unit eigenvector u. Then 1 = u u = Au Au (by Theorem 6.9) =
( u) ( u) = 2 (u u) = 2 . Hence = 1.
Conversely, suppose that A is symmetric with all eigenvalues equal to 1.
Let P be an orthogonal matrix with P 1 AP = D, a diagonal matrix with
all entries equal to 1 on the main diagonal. Note that D2 = In . Then,
since A = PDP 1 , AAT = AA = PD2 P 1 = PIn P 1 = In . Hence A
is orthogonal.
(13) (a) T (b) F (c) T (d) T (e) T (f) T
Chapter 6 Review Exercises
(1) (a) [v]B = [ 1; 4; 2]
(b) [v]B = [3; 2; 3]
(2) (a) f[1; 1; 1; 1]; [1; 1; 1; 1]g
(b) f[1; 3; 4; 3; 1]; [ 1; 1; 0; 1; 1]; [ 1; 3; 4; 3; 1]g
(3) f[6; 3; 6]; [3; 6; 6]; [2; 2; 1]g
(4) (a) f[4; 7; 0; 4]; [2; 0; 1; 2]; [29; 28; 18; 20]; [0; 4; 14; 7]g
(b) f 91 [4; 7; 0; 4]; 13 [2; 0; 1; 2]; 9p129 [29; 28; 18; 20]; 3p129 [0; 4; 14; 7]g
2
7
4 3
4
0
9
9
9
6
2
1
2 7
7
6
0
3
3 7
6 p3
(c) 6 29
7
28
20
p2
p
p
7
6 9
9 29
29
9 29 5
4
14
p4
p
p7
0
3 29
3 29
3 29
(5) Using the hint, note that v w = v (AT Aw). Now, AT Aej is the jth
column of AT A, so ei (AT Aej ) is the (i; j) entry of AT A. Since this
equals ei ej , we see that AT A = In . Hence, AT = A 1 , and A is
orthogonal.
(6) (a)
A(A In )T = A(AT ITn ) = AAT + A = In + A
= A In . Therefore, jA In j = A(A In )T = j Aj (A In )T
= ( 1)n jAj j(A In )j = jA In j, since n is odd and jAj = 1.
This implies that jA In j = 0, and so A In is singular.
(b) By part (a), A In is singular. Hence, there is a nonzero vector
v such that (A In )v = 0. Thus, Av = In v = v, an so v is an
eigenvector for the eigenvalue = 1.
(c) Suppose v is a unit eigenvector for A corresponding to = 1. Expand the set fvg to an orthonormal basis B = fw; x; vg for R3 .
Notice that we listed the vector v last. Let Q be the orthogonal matrix whose columns are w; x; and v, in that order. Let P = QT AQ.
Now, the last column of P equals QT Av = QT v (since Av = v)
= e3 , since v is a unit vector orthogonal to the …rst two rows of QT .
Next, since P is the product of orthogonal matrices, P is orthogonal,
and since jQT j = jQj = 1; we must have jPj = jAj = 1. Also, the
…rst two columns of P are orthogonal to its last column, e3 , making
the (3; 1) and (3; 2) entries of P equal to 0. Since the …rst column
of P is a unit vector with third coordinate 0, it can be expressed as
[cos ; sin ; 0], for some value of , with 0
< 2 . The second
column of P must be a unit vector with third coordinate 0, orthogonal to the …rst column. The only possibilities are [ sin ; cos ; 0].
Choosing the minus sign makes jPj = 1, so the second column must
be [ sin ; cos ; 0]. Thus, P = QT AQ has the desired form.
(d) The matrix for the linear operator L on R3 given by L(v) = Av with
respect to the standard basis has a matrix that is orthogonal with
determinant 1. But, by part (c), and the Orthogonal Diagonalization
Method, the matrix for
2 L with respect to3the ordered orthonormal
cos
sin
0
cos
0 5. According to Table 5.1
basis B = (w; x; v) is 4 sin
0
0 1
in Section 5.2, this is a counterclockwise rotation around the z-axis
through the angle . But since we are working in B-coordinates, the
z-axis corresponds to the vector v, the third vector in B. Because
the basis B is orthonormal, the plane of the rotation is perpendicular
to the axis of rotation.
(e) The axis of rotation is in the direction of the vector [ 1; 3; 3]: The
angle of rotation is
278 . This is a counterclockwise rotation
about the vector [ 1; 3; 3] as you look down from the point ( 1; 3; 3)
toward the plane through the origin spanned by [6; 1; 1] and [0; 1; 1].
(f) Let G be the matrix with respect to the standard basis for any chosen orthogonal re‡ection through a plane in R3 . Then, G is orthogonal, jGj = 1 (since it has eigenvalues 1; 1; 1), and G2 = I3 :
Let C = AG. Then C is orthogonal since it is the product of orthogonal matrices. Since jAj = 1, it follows that jCj = 1. Thus,
A = AG2 = CG, where C represents a rotation about some axis in
R3 by part (d) of this exercise.
(7) (a) w1 = projW v = [0; 9; 18]; w2 = projW ? v = [2; 16; 8]
(b) w1 = projW v =
7 23 5
2; 2 ; 2;
(8) (a) Distance
10:141294
(b) Distance
14:248050
(9) f[1; 0; 2;
2
2
(10) 13 4 1
1
2
3
(11) 17 4 6
2
2]; [0; 1; 2; 2]g
3
1 1
2 1 5
1 2
3
6
2
2
3 5
3
6
(12) (a) pL (x) = x(x
(b) pL (x) = x2 (x
1)2 = x3
1) = x3
3
2
; w2 = projW ? v =
2x2 + x
x2
3
2;
3 9
2; 2;
15
2
(13) (a) Not a symmetric operator,
since the
2
3 matrix for L with respect to the
0 0 0 0
6 3 0 0 0 7
7
standard basis is 6
4 0 2 0 0 5 ; which is not symmetric.
0 0 1 0
(b) Symmetric operator, since the matrix for L is orthogonally diagonalizable.
(c) Symmetric operator, since the matrix for L with respect to the standard basis is symmetric.
(14) (a) B =
P=
(b) B =
p1 [ 1; 2; 1]; p1 [1; 2; 5]; p1 [
6
30
5
3
2
p1
p1
p2
6
30
5
6
7
6 p2
p2
p1 7; D = 4
6
30
5 5
4
p1
p5
0
6
30
2
2
6
6
P=6
4
p1
3
[1; 1; 1] ; p12 [1; 1; 0] ; p16 [
3
p1
p1
p1
2
3
2
6
7
7
1
1
1
p
p
p 7; D = 4
3
2
6 5
p1
p2
0
3
6
2; 1; 0] ;
1
0
0
3
0
0 5
1
0
2
0
1; 1; 2] ;
0
0
0
0
3
0
3
0
0 5
3
(15) Any diagonalizable 4 4 matrix that is not symmetric works. For example,
chose any non-diagonal upper triangular 4 4 matrix with 4 distinct entries
on the main diagonal.
T
T
(16) Because AT A = AT AT
= AT A, we see that AT A is symmetric.
T
Therefore A A is orthogonally diagonalizable by Theorems 6.18 and 6.20.
(17) (a) T (b) T (c) T (d) T (e) F (f) F (g) T (h) T (i) T (j) T (k) T (l) T
(m) F (n) T (o) T (p) T (q) T (r) F (s) F (t) T (u) T (v) F (w) T
Chapter 7
Section 7.1
(1) (a) [1 + 4i; 1 + i; 6
(b) [ 12
i]
(d) [ 24
32i; 7 + 30i; 53
(c) [5 + i; 2
29i]
i; 3i]
12i; 28
8i; 32i]
(e) 1 + 28i
(f) 3 + 77i
(2) Let z1 = [a1 ; : : : ; an ], and let z2 = [b1 ; : : : ; bn ].
Pn
Pn
Pn
Pn
(a) Part (1): z1 z2 = i=1 ai bi = i=1 ai bi = i=1 ai bi = i=1 ai bi =
Pn
i=1 bi ai = z2 z1
Pn
Pn
2
Part (2): z1 z1 =
0: Also, z1 z1 =
i=1 ai ai =
i=1 jai j
2
p
Pn
P
n
2
2
2
= kz1 k
i=1 jai j =
i=1 jai j
Pn
Pn
Pn
(b) k(z1 z2 ) = k i=1 ai bi = i=1 kai bi = i=1 ai kbi = z1 (kz2 )
(3) (a)
2
11 + 4i
2 4i
1
4 2i
12
i
2i
6
4i
3
5
5
0
7 + 3i
3
2
1 i
0
10i
5
3 i
0
(c) 4 2i
6 4i
5
7 + 3i
(b) 4
0
10i
3
12 + 6i
15 + 23i
(g)
i
16 3i
20 + 5i
2
40 + 58i
50i
8 6i
(j) 4 4 + 8i
56 10i 50 + 10i
3
9
(e)
7 + 6i
3 3i
(f)
1 + 40i
13 50i
11 + 41i
61 + 15i
2
15i
6i
4 + 36i
(i) 4 1 7i
5 + 40i
3
86 33i 6 39i
61 + 36i 13 + 9i
(h)
(d)
20 + 80i
5
20
80 + 102i
3
0
9
4
9i
3 + 12i
i
6i
4 14i
23 + 21i
3
5 + 39i
6 4i 5
7 5i
T
(4) (a) (i; j) entry of (kZ) = (i; j) entry of (kZ) = (j; i) entry of (kZ) =
T
kzji = kzji = k((j; i) entry of Z) = k((i; j) entry of Z )
= k((i; j) entry of Z )
(b) Let Z be an m n complex matrix and let W be an n p matrix.
Note that (ZW) and W Z are both p m matrices. We present
two methods of proof that (ZW) = W Z :
First method: Notice that the rule (AB)T = BT AT holds for
complex matrices, since matrix multiplication for complex matrices
is de…ned in exactly the same manner as for real matrices, and hence
the proof of Theorem 1.16 is valid for complex matrices as well. Thus
we have: (ZW) = (ZW)T = WT ZT = (WT )(ZT ) (by part (4) of
Theorem 7.2) = W Z :
Second method: We begin by computing the (i; j) entry of (ZW) :
Now, the (j; i) entry of ZW = zj1 w1i + + zjn wni . Hence, the (i; j)
entry of (ZW) = ((j; i) entry of ZW) = (zj1 w1i +
+ zjn wni ) =
zj1 w1i +
+ zjn wni . Next, we compute the (i; j) entry of W Z
to show that it equals the (i; j) entry of (ZW) . Let A = Z , an
n m matrix, and B = W , a p n matrix. Then akl = zlk and
bkl = wlk . Hence, the (i; j) entry of W Z = the (i; j) entry of BA
= bi1 a1j + +bin anj = w1i zj1 +
Therefore, (ZW) = W Z .
(5) (a) Skew-Hermitian
(b) Neither
+wni zjn = zj1 w1i +
(c) Hermitian
(d) Skew-Hermitian
+zjn wni .
(e) Hermitian
(6) (a) H* = (½(Z + Z*))* = ½(Z* + (Z*)*) (by part (3) of Theorem 7.2) = ½(Z* + Z) (by part (2) of Theorem 7.2) = H. A similar calculation shows that K is skew-Hermitian.
(b) Part (a) proves existence for the decomposition by providing specific matrices H and K. For uniqueness, suppose Z = H + K, where H is Hermitian and K is skew-Hermitian. Our goal is to show that H and K must satisfy the formulas from part (a). Now Z* = (H + K)* = H* + K* = H − K. Hence Z + Z* = (H + K) + (H − K) = 2H, which gives H = ½(Z + Z*). Similarly, Z − Z* = 2K, so K = ½(Z − Z*).
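The decomposition in Exercise 6 is easy to verify numerically for any square complex matrix; the matrix Z below is made up for illustration.

```python
import numpy as np

Z = np.array([[1 + 2j, 3 - 1j],
              [0 + 1j, 2 + 0j]])   # hypothetical complex matrix

H = (Z + Z.conj().T) / 2           # Hermitian part,      H = (1/2)(Z + Z*)
K = (Z - Z.conj().T) / 2           # skew-Hermitian part, K = (1/2)(Z - Z*)

print(np.allclose(H, H.conj().T))    # H* = H
print(np.allclose(K, -K.conj().T))   # K* = -K
print(np.allclose(H + K, Z))         # Z = H + K
```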
(7) (a) HJ is Hermitian i¤ (HJ) = HJ i¤ J H = HJ i¤ JH = HJ (since
H and J are Hermitian).
(b) Use induction on k.
Base Step (k = 1): H1 = H is given to be Hermitian.
Inductive Step: Assume Hk is Hermitian and prove that Hk+1 is
Hermitian. But Hk H = Hk+1 = H1+k = HHk (by part (1) of
Theorem 1.15), and so by part (a), Hk+1 = Hk H is Hermitian.
(c) (P HP) = P H (P ) = P HP (since H is Hermitian).
(8) (AA ) = (A ) A = AA (and A A is handled similarly).
(9) Let Z be Hermitian. Then ZZ = ZZ = Z Z. Similarly, if Z is skewHermitian, ZZ = Z( Z) = (ZZ) = ( Z)Z = Z Z.
(10) Let Z be normal. Consider H1 = 21 (Z + Z ) and H2 = 12 (Z Z ).
Note that H1 is Hermitian and H2 is skew-Hermitian (by Exercise 6(a)).
Clearly Z = H1 + H2 .
Now, H1 H2 = ( 12 )( 21 )(Z + Z )(Z Z ) = 14 (Z2 + Z Z ZZ
(Z )2 ) =
1
2
2
(Z ) ) (since Z is normal). A similar computation shows that
4 (Z
H2 H1 is also equal to 14 (Z2 (Z )2 ). Hence H1 H2 = H2 H1 .
Conversely, let H1 be Hermitian and H2 be skew-Hermitian with H1 H2 =
H2 H1 . If Z = H1 + H2 , then ZZ = (H1 + H2 )(H1 + H2 )
= (H1 +H2 )(H1 +H2 ) (by part (2) of Theorem 7.2) = (H1 +H2 )(H1 H2 )
(since H1 is Hermitian and H2 is skew-Hermitian)
= H21 + H2 H1 H1 H2 H22 = H21 H22 (since H1 H2 = H2 H1 ).
A similar argument shows Z Z = H21 H22 also, and so ZZ = Z Z, and
Z is normal.
(11) (a) F
(b) F
(c) T
(d) F
(e) F
(f) T
Section 7.2
(1) (a) w =
1
5
+
13
5 i;
z=
28
5
3
5i
(b) No solutions: After applying the row reduction method to the …rst 3
3
2
1 i
1 1+i 0
0
1 3 + 4i 5
columns, the matrix changes to: 4 0
0
0
0 4 2i
3
2
1 0 4 3i 2 + 5i
5 + 2i 5;
i
(c) Matrix row reduces to 4 0 1
0
0 0
0
Solution set = f( (2 + 5i)
(d) w = 7 + i; z =
(4
3i)c; (5 + 2i) + ic; c) j c 2 Cg
6 + 5i
(e) No solutions: After applying the row reduction method to the …rst
1
0
column, the matrix changes to:
(f) Matrix row reduces to
1
0
0
1
Solution set = f( ( i) + (3
(2) (a) jAj = 0; jA j = 0 = jAj
(b) jAj =
(c) jAj = 3
15
23i; jA j =
2 + 3i
0
3 + 2i
4 + 7i
2i)c; (2
2
5i)
1 + 3i
3 + 4i
i
5i
;
(4 + 7i)c; c) j c 2 Cg
15 + 23i = jAj
5i; jA j = 3 + 5i = jAj
(3) Your computations may produce complex scalar multiples of the vectors
given here, although they might not be immediately recognized as such.
(a) pA (x) = x2 + (1 i)x i = (x i)(x + 1); Eigenvalues 1 = i,
1; Ei = fc[1 + i; 2] j c 2 Cg; E 1 = fc[7 + 6i; 17] j c 2 Cg.
2 =
(b) pA (x) = x3 11x2 + 44x 34; eigenvalues: 1 = 5 + 3i, 2 = 5
3i; 5; 1 3i] j c 2 Cg;
3 = 1; E 1 = fc[1
E 2 = fc[1 + 3i; 5; 1 + 3i] j c 2 Cg; E 3 = fc[1; 2; 2] j c 2 Cg.
(c) pA (x) = x3 + (2 2i)x2 (1 + 4i)x 2 = (x i)2 (x + 2);
2; Ei = fc[( 3 2i); 0; 2] + d[1; 1; 0] j c; d 2 Cg and
2
E 2 = fc[ 1; i; 1] j c 2 Cg.
1
3i,
= i,
(d) pA (x) = x3 2x2 x+2+i( 2x2 +4x) = (x i)2 (x 2); eigenvalues:
1 = i, 2 = 2; Ei = fc[1; 1; 0] j c 2 Cg; E2 = fc[i; 0; 1] j c 2 Cg.
(Note that A is not diagonalizable.)
(4) (a) Let A be the matrix from Exercise 3(a). A is diagonalizable since A
1 + i 7 + 6i
i 0
has 2 distinct eigenvalues. P =
;D=
:
2
17
0
1
(b) Let A be the matrix from Exercise 3(d). The solution to Exercise 3(d)
shows that the Diagonalization Method of Section 3.4 only produces
two fundamental eigenvectors. Hence, A is not diagonalizable by
Step 4 of that method.
(c) Let A be the matrix in Exercise 3(b). pA (x) = x3 11x2 + 44x 34.
There are 3 eigenvalues, 5 + 3i; 5 3i; and 1, each producing one
complex fundamental eigenvector. Thus, there are three fundamental
complex eigenvectors, and so A is diagonalizable as a complex matrix.
On the other hand, A has only one real eigenvalue = 1, and since
produces only one real fundamental eigenvector, A is not diagonalizable as a real matrix.
(5) By the Fundamental Theorem of Algebra, the characteristic polynomial
pA (x) of a complex n n matrix A must factor into n linear factors.
Since each root of pA (x) has multiplicity 1, there must be n distinct roots.
Each of these roots is an eigenvalue, and each eigenvalue yields at least
one fundamental eigenvector. Thus, we have a total of n fundamental
eigenvectors, showing that A is diagonalizable.
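Complex diagonalization, as in Exercises 3 through 5, can also be carried out numerically: np.linalg.eig works over C, and when the eigenvalues are distinct the eigenvector matrix P is invertible. The matrix below is a made-up example, not one from the text.

```python
import numpy as np

A = np.array([[1 + 1j, 2 + 0j],
              [0 + 0j, 3 - 1j]])         # hypothetical matrix with distinct eigenvalues

eigvals, P = np.linalg.eig(A)
D = np.linalg.inv(P) @ A @ P             # should be diagonal, with the eigenvalues on the diagonal

print(np.round(eigvals, 6))
print(np.allclose(D, np.diag(eigvals)))  # True: A is diagonalizable as a complex matrix
```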
(6) (a) T
(b) F
(c) F
(d) F
Section 7.3
(1) (a) Mimic the proof in Example 6 in Section 4.1 of the textbook, restricting the discussion to polynomial functions of degree
n, and
using complex-valued functions and complex scalars instead of their
real counterparts.
(b) This is the complex analog of Theorem 1.11. Property (1) is proven
for the real case just after the statement of Theorem 1.11 in Section
1.4 of the textbook. Generalize this proof to complex matrices. The
other properties are similarly proved.
(2) (a) Linearly independent; dim = 2
(b) Not linearly independent; dim = 1.
(Note: [ 3 + 6i; 3; 9i] = 3i[2 + i; i; 3])
(c) Not linearly independent; dim = 2. (Note:
2
1
given vectors as columns row reduces to 4 0
0
Matrix formed using
3
1
2 5 :)
0
0
1
0
(d) Not linearly independent, dim = 2. (Note: Matrix formed using given
2
3
1 0
i
2i 5 :)
vectors as columns row reduces to 4 0 1
0 0
0
(3) (a) Linearly independent; dim = 2. (Note: [ i; 3 + i; 1] is not a scalar
multiple of [2 + i; i; 3]:)
(b) Linearly independent; dim = 2. (Note: [ 3 + 6i; 3; 9i] is not a real
scalar multiple of [2 + i; i; 3]:)
(c) Not linearly independent; dim = 2.
2
3
2
1
3
1
1
6 0
7
6 1
1
3
6
7
6
6
6 1
2
5 7
7 row reduces to 6 0
(Note: 6
6 0
7
6 2
0
2 7
6
6
4 0
5
4 0
4
8
0
1
1
3
(d) Linearly independent; dim = 3.
2
3
2
1
3
1
3
6 0
7
6 1
1
1
6
7
6
6 0
6 1
2
2 7
6
7
6
(Note: 6
7 row reduces to 6 0
2
0
5
6
7
6
4 0
4 0
4
3 5
0
1
1
8
0
1
0
0
0
0
0
1
0
0
0
0
1
2
0
0
0
0
0
0
1
0
0
0
3
3
7
7
7
7 :)
7
7
5
7
7
7
7 :)
7
7
5
(4) (a) The 3 3 complex matrix whose columns (or rows) are the given
vectors row reduces to I3 .
(b) [i; 1 + i; 1]
2
0
6 0
(5) Ordered basis = ([1; 0]; [i; 0]; [0; 1]; [0; i]), matrix =6
4 1
0
0 1
0 0
0 0
1 0
3
0
1 7
7
0 5
0
(6) Span: Let v 2 V. Then there exists c1 ; : : : ; cn 2 C such that v = c1 v1 +
+ cn vn . Suppose, for each j, cj = aj + bj i, for some aj ; bj 2 R. Then
v = (a1 +b1 i)v1 + +(an +bn i)vn = a1 v1 +b1 (iv1 )+ +an vn +bn (ivn ),
which shows that v 2 span(fv1 ; iv1 ; : : : ; vn ; ivn g).
Linear Independence: Suppose a1 v1 + b1 (iv1 ) + + an vn + bn (ivn ) = 0.
Then (a1 + b1 i)v1 +
+ (an + bn i)vn = 0, implying (a1 + b1 i) =
=
(an + bn i) = 0 (since fv1 ; : : : ; vn g is linearly independent). Hence, a1 =
b1 = a2 = b2 =
= an = bn = 0.
(7) Exercise 6 clearly implies that if V is an n-dimensional complex vector
space, then it is 2n-dimensional when considered as a real vector space.
Hence, the real dimension must be even. Therefore, R3 cannot be considered as a complex vector space since its real dimension is odd.
2
2
11 3
3+i
5
5 i
7
6 1 3
7
i
(8) 6
5
4 2 2i
1
2
+ 72 i
8
5
4
5i
(9) (a) T
(b) F
(c) T
(d) F
Section 7.4
(1) (a) Not orthogonal; [1 + 2i; 3
(b) Not orthogonal; [1
i] [4
i; 1 + i; 1
(c) Orthogonal
2i; 3 + i] =
10 + 10i
i] [i; 2i; 2i] =
5
5i
(d) Orthogonal
(2) (ci zi ) (cj zj ) = ci cj (zi zj ) (by parts (4) and (5) of Theorem 7.1) = ci cj (0)
(since zi is orthogonal to zj ) = 0; so ci zi is orthogonal to cj zj : Also,
jjci zi jj2 = (ci zi ) (ci zi ) = ci ci (zi zi ) = jci j2 jjzi jj2 = 1; since jci j = 1 and
zi is a unit vector. Hence, each ci zi is also a unit vector.
2 1+i
3
i
1
(3) (a) f[1 + i; i; 1]; [2; 1 i; 1 + i];
2
2
2
6
7
[0; 1; i]g
1 i
1+i 7
6 2
p
p
(b) 6 p8
7
8
8
4
5
p1
pi
0
2
2
(4) Part (1): For any n × n matrix B, |B̄| is the conjugate of |B|, by part (3) of Theorem 7.5. Therefore, | |A| |² = |A| · (conjugate of |A|) = |A| · |Ā| = |A| · |(Ā)ᵀ| = |A| · |A*| = |AA*| = |Iₙ| = 1.
Part (2): A unitary ⟹ A* = A⁻¹ ⟹ (A*)⁻¹ = A = (A*)* ⟹ A* is unitary.
(5) (AB)* = B*A* (by part (5) of Theorem 7.2) = B⁻¹A⁻¹ = (AB)⁻¹.
(6) (a) A = A 1 i¤ (A)T = A 1 i¤ (A)T = A 1 i¤ (A) = (A) 1 (using
the fact that A 1 = (A) 1 , since AA 1 = In =) AA 1 = In =
In =) (A)(A 1 ) = In =) A 1 = (A) 1 ).
(b) (Ak ) = ((Ak ))T = ((A)k )T (by part (4) of Theorem 7.2) = ((A)T )k
= (A )k = (A 1 )k = (Ak ) 1 .
(c) A2 = In i¤ A = A
Hermitian.
1
i¤ A = A (since A is unitary) i¤ A is
(7) (a) A is unitary i¤ A = A 1 i¤ AT = A
(AT ) = (AT ) 1 i¤ AT is unitary.
1
i¤ (AT )T = (A
1 T
)
i¤
(b) Follow the hint in the textbook.
(8) Modify the proof of Theorem 6.8.
2
3
1 3i 2 2i
2
2i
2i 5 and
(9) Z = 31 4 2 2i
2 2
2i
1 4i 3
22
8
10i
16
2i 5.
ZZ = Z Z = 91 4 8
10i
2i 25
(10) (a) Let A be the given matrix for L. A is normal since
1 + 6i
2 + 10i
A =
and AA = A A =
10 + 2i
5
141
12 12i
. Hence A is unitarily diagonalizable by The12 + 12i
129
orem 7.9.
" 1+i 1 i #
(b) P =
p
6
p2
6
p
3
p1
3
;
9 + 6i
0
0
3 12i
3
2
4 5i 2 2i
4 4i
1 8i
2 + 2i 5 and AA =
(11) (a) A is normal since A = 4 2 2i
4 4i
2 + 2i
4 5i
A A = 81I3 . Hence A is unitarily diagonalizable by Theorem 7.9.
2
3
2
3
2 1 2i
9 0 0
2 2i 5 and P 1 AP = P AP = 4 0 9i 0 5.
(b) P = 31 4 1
2
2
i
0
0 9i
P
1
AP = P AP =
(12) (a) Note that Az Az = z z = ( z z) (by part (5) of Theorem 7.1)
= (z z) (by part (4) of Theorem 7.1), while Az Az = z (A (Az))
(by Theorem 7.3) = z ((A A)z) = z ((A 1 A)z) = z z. Thus
(z z) = z z, and so
= 1, which gives j j2 = 1, and hence
j j = 1.
(b) Let A be unitary, and let be an eigenvalue of A. By part (a),
j j = 1.
Now, suppose A is Hermitian. Then
is real by Theorem 7.11.
Hence = 1.
Conversely, suppose every eigenvalue of A is 1. Since A is unitary,
A = A 1 , and so AA = A A = I, which implies that A is
normal. Thus A is unitarily diagonalizable (by Theorem 7.9). Let
A = PDP , where P is unitary and D is diagonal. But since the
eigenvalues of A are 1 (real), D = D. Thus A = (PDP ) =
PDP = A, and A is Hermitian.
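The first half of Exercise 12 (every eigenvalue of a unitary matrix has absolute value 1) can be observed numerically. The unitary matrix below is generated from the QR factorization of a random complex matrix, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
U, _ = np.linalg.qr(M)                             # U is unitary

print(np.allclose(U.conj().T @ U, np.eye(3)))      # U* U = I
print(np.round(np.abs(np.linalg.eigvals(U)), 6))   # each eigenvalue satisfies |lambda| = 1
```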
p
p
(13) The eigenvalues are 4, 2 + 6, and 2
6.
(14) (a) Since A is normal, A is unitarily diagonalizable (by Theorem 7.9).
Hence A = PDP for some diagonal matrix D and some unitary
matrix P. Since all eigenvalues of A are real, the main diagonal
elements of D are real, and so D = D. Thus, A = (PDP ) =
PD P = PDP = A, and so A is Hermitian.
(b) Since A is normal, A is unitarily diagonalizable (by Theorem 7.9).
Hence, A = PDP for some diagonal matrix D 2and some unitary
3
0
1
6
.. 7,
..
matrix P. Thus PP = I. Also note that if D = 4 ...
.
. 5
0
2
3
2
0
.. 7, and so DD = 6
4
. 5
1
6
then D = 4 ...
0
2
j 1 j2
6
..
= 4 ...
.
..
.
n
0
..
.
n
0
..
.
1 1
..
.
0
..
.
n n
3
3
7
5
7
5. Since the main diagonal elements of D
0
j n j2
are the (not necessarily distinct) eigenvalues of A, and these eigenvalues all have absolute value 1, it follows that DD = I. Then,
AA = (PDP )(PDP ) = (PDP )(PD P ) = P(D(P P)D )P
= P(DD )P = PP = I. Hence, A = A 1 , and A is unitary.
(c) Because A is unitary, A⁻¹ = A*. Hence, AA* = Iₙ = A*A. Therefore, A is normal.
(15) (a) F (b) T (c) T (d) T (e) F
Section 7.5
(1) (a) Note that < x; x > = (Ax) (Ax) = kAxk2 0.
kAxk2 = 0 , Ax = 0 , A 1 Ax = A 1 0 , x = 0.
< x; y > = (Ax) (Ay) = (Ay) (Ax) = < y; x >.
< x + y; z > = (A(x + y)) (Az) = (Ax + Ay) Az
= ((Ax) (Az)) + ((Ay) (Az)) = < x; z > + < y; z >.
< kx; y > = (A(kx)) (Ay) = (k(Ax)) (Ay) = k((Ax) (Ay)) =
k < x; y >.
(b) ⟨x, y⟩ = 183, ‖x‖ = √314
(2) Note that < p1 ; p1 > = a2n +
+ a21 + a20 0.
2
2
2
Clearly, an +
+ a1 + a0 = 0 if and only if each ai = 0.
< p1 ; p2 > = an bn +
+ a1 b1 + a0 b0 = bn an +
+ b1 a1 + b0 a0 =
< p2 ; p1 >.
< p1 + p2 ; p3 > = < (an + bn )xn +
+ (a0 + b0 ); cn xn +
+ c0 >
= (an + bn )cn +
+ (a0 + b0 )c0 = (an cn + bn cn ) +
+ (a0 c0 + b0 c0 )
= (an cn +
+ a0 c0 ) + (bn cn +
+ b0 c0 ) = < p1 ; p3 > + < p2 ; p3 >.
< kp1 ; p2 > = (kan )bn + +(ka1 )b1 +(ka0 )b0 = k(an bn + +a1 b1 +a0 b0 )
= k < p1 ; p2 >.
Rb
(3) (a) Note that < f ; f > = a (f (t))2 dt
0. Also, we know from calculus
that a nonnegative continuous function on an interval has integral
zero if and only if the function is the zero function.
Rb
Rb
< f ; g > = a f (t)g(t) dt = a g(t)f (t) dt = < g; f >.
Rb
Rb
< f + g; h > = a (f (t) + g(t))h(t) dt = a (f (t)h(t) + g(t)h(t)) dt
Rb
Rb
= a f (t)h(t) dt + a g(t)h(t) dt = < f ; h > + < g + h >.
Rb
Rb
< kf ; g > = a (kf (t))g(t) dt = k a f (t)g(t) dt = k < f ; g >.
q
1)
(b) < f ; g > = 21 (e + 1); kf k = 12 (e2
(4) Note that < A; A > = trace(AT A), which by Exercise 26 in Section 1.5
is the sum of the squares of all the entries of A, and hence is nonnegative.
Also, this is clearly equal to zero if and only if A = On .
Next, < A; B > = trace(AT B) = trace((AT B)T ) (by Exercise 14 in
Section 1.4) = trace(BT A) = < B; A >.
< A+B; C > = trace((A+B)T C) = trace((AT +BT )C) = trace(AT C+
BT C) = trace(AT C) + trace(BT C) = < A; C > + < B; C >.
< kA; B > = trace((kA)T B) = trace((kAT )B) = trace(k(AT B))
= k(trace(AT B)) = k < A; B >.
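The trace inner product of Exercise 4 is straightforward to experiment with; the two matrices below are made up for illustration.

```python
import numpy as np

A = np.array([[1., 2.], [3., 4.]])
B = np.array([[0., 1.], [1., 0.]])

inner_AB = np.trace(A.T @ B)                          # <A, B> = trace(A^T B)
print(inner_AB, np.trace(B.T @ A))                    # symmetry: <A, B> = <B, A>
print(np.isclose(np.trace(A.T @ A), np.sum(A**2)))    # <A, A> is the sum of the squared entries of A
```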
(5) (a) < 0; x > = < 0 + 0; x > = < 0; x > + < 0; x >. Since < 0; x >
2 C, we know that “ < 0; x >” exists. Adding “ < 0; x >” to
both sides gives 0 = < 0; x >. Then, < x; 0 > = 0, by property (3)
in the de…nition of an inner product.
(b) For complex inner product spaces, < x; ky > = < ky; x > (by property (3) of an inner product space) = k < y; x > (by property (5) of
an inner product space) = k(< y; x >) = k < x; y > (by property (3)
of an inner product space).
A similar proof works in real inner product spaces.
p
p
(6) kkxk = < kx; kx > = k < x; kx > (by property (5) of an inner product
p
p
space) = kk < x; x > (by part (3) of Theorem 7.12) = jkj2 < x; x >
p
= jkj < x; x > = jkj kxk.
(7) (a) kx + yk2 = < x + y; x + y > = < x; x > + < x; y > + < y; x >
+ < y; y > = kxk2 + 2 < x; y > +kyk2 (by property (3) of an inner
product space).
(b) Use part (a) and the fact that x and y are orthogonal i¤ < x; y > =
0.
(c) A proof similar to that in part (a) yields
kx yk2 = kxk2 2 < x; y > +kyk2 . Add this to the equation for
kx + yk2 in part (a).
(8) (a) Subtract the equation for kx yk2 in the answer to Exercise 7(c)
from the equation for kx + yk2 in Exercise 7(a). Then multiply by
1
4.
(b) In the complex case, kx + yk2 = < x + y; x + y > = < x; x > +
< x; y > + < y; x > + < y; y > = < x; x > + < x; y >
+ < x; y > + < y; y >. Similarly, kx yk2 = < x; x >
2
< x; y > + < y; y >, kx + iyk = < x; x >
iyk = < x; x > + i < x; y >
2
< y; y >. Then, kx + yk
2
Also, i(kx + iyk
i < x; y > + i< x; y >
2
+ < y; y >, and kx
kx
2
kx
< x; y >
i< x; y > +
2
yk = 2 < x; y > + 2< x; y >.
iyk ) = i( 2i < x; y > +2i< x; y >) =
2 < x; y > 2< x; y >. Adding these and multiplying by 41 produces
the desired result.
q
3
3
(b) 0:941 radians, or, 53:9
(9) (a)
3
2
(10) (a)
p
174
(b) 0:586 radians, or, 33:6
(11) (a) If either x or y is 0, then the theorem is obviously true. We only need
to examine the case when both kxk and kyk are nonzero. We need
to prove kxk kyk j < x; y > j kxk kyk. By property (5) of an
inner product space, and part (3) of Theorem 7.12, this is equivalent
y
x
to proving j < kxk
; kyk
> j 1. Hence, it is enough to show
j < a; b > j 1 for any unit vectors a and b.
Let < a; b > = z 2 C. We need to show jzj
1. If z = 0, we
jzj
are done. If z 6= 0, then let v = z a. Then < v; b > = < jzj
z a; b >
=
jzj
z
< a; b >=
jzj
z z
unit vector. Now, 0
= jzj 2 R. Also, since
jzj
z
= 1, v =
kv bk2 = < v b; v b > = kvk2
< v; b > + kbk2 = 1 jzj jzj + 1 = 2
or 1 jzj, completing the proof.
2jzj. Hence
jzj
z a
is a
< v; b >
2
2jzj,
(b) Note that kx + yk2 = < x + y; x + y > = < x; x > + < x; y >
+ < y; x > + < y; y > = kxk2 + < x; y > +< x; y > + kyk2 (by
property (3) of an inner product space) = kxk2 +
2(real part of < x; y >)+kyk2
kxk2 +2j < x; y > j+kyk2
kxk2 +
2kxk kyk + kyk2 (by the Cauchy-Schwarz Inequality) = (kxk + kyk)2 .
(12) The given inequality follows directly from the Cauchy-Schwarz Inequality.
(13) (i) d(x; y) = kx yk = k( 1)(x y)k (by Theorem 7.13) = ky xk =
d(y; x).
p
(ii) d(x; y) = kx yk = < x y; x y >
0. Also, d(x; y) = 0 i¤
kx yk = 0 i¤ < x y; x y > = 0 i¤ x y = 0 (by property (2) of an
inner product space) i¤ x = y.
(iii) d(x; y) = kx yk = k(x z) + (z y)k kx zk + kz yk (by the
Triangle Inequality, part (2) of Theorem 7.14) = d(x; z) + d(z; y).
(14) (a) Orthogonal (b) Orthogonal (c) Not orthogonal (d) Orthogonal
(15) Suppose x 2 T , where x is a linear combination of fx1 ; : : : ; xn g
T.
Then x = a1 x1 +
+ an xn . Consider < x; x > = < x; a1 x1 > +
+
+
< x; an xn > (by part (2) of Theorem 7.12) = a1 < x; x1 > +
an < x; xn > (by part (3) of Theorem 7.12) = a1 (0) +
+ an (0) (since
vectors in T are orthogonal) = 0, a contradiction, since x is assumed to
be nonzero.
R
1
1
1
= m
(sin mt)j
(sin m
sin( m )) = m
(0)
(16) (a)
cos mt dt = m
(since the sine of an integral multiple of is 0) = 0.
R
1
Similarly,
(cos mt)j
sin mt dt = m
1
1
= m (cos m
cos( m )) = m
(cos m
cos m ) (since
1
cos( x) = cos x) = m (0) = 0.
(b) Use the trigonometric identities cos mt cos nt = 21 (cos(m n)t+
1
cos(m +
R n)t) and sin mt sin nt1 R= 2 (cos(m n)t cos(m + n)t).
cos(m n)t dt +
Then,
cos mt cos nt dt = 2
R
1
1
1
cos(m + n)t dt = 2 (0) + 2 (0) = 0, by part (a), since m n is
2
an
integer.
Also,
R
R
R
sin mt sin nt dt = 12
cos(m n)t dt 12
cos(m + n)t dt =
1
1
(0)
(0)
=
0,
by
part
(a).
2
2
1
(c) Use the trigonometricR identity sin nt cos mt =
R 2 (sin(n + m)t+
1
sin(n + m)t dt+
sin(n m)t). Then,
cos mt sin nt dt = 2
R
1
1
1
sin(n m)t dt = 2 (0) + 2 (0) = 0, by part (a), since n m is
2
an integer.
(d) Obvious from parts (a), (b), and (c) with the real inner product
, b = : < f ; g >=
Rof Example 5 (or Example 8) with a =
f (t)g(t) dt.
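The orthogonality relations in Exercise 16 can also be checked by numerical integration over [−π, π]; the choices m = 2 and n = 3 below are arbitrary.

```python
import numpy as np

t = np.linspace(-np.pi, np.pi, 200001)

def inner(f_vals, g_vals):
    return np.trapz(f_vals * g_vals, t)     # <f, g> = integral of f(t) g(t) dt over [-pi, pi]

m, n = 2, 3                                 # hypothetical distinct positive integers
print(round(inner(np.cos(m * t), np.sin(n * t)), 6))   # approximately 0
print(round(inner(np.cos(m * t), np.cos(n * t)), 6))   # approximately 0
print(round(inner(np.sin(m * t), np.sin(n * t)), 6))   # approximately 0
```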
(17) Let B = (v1 ; : : : ; vk ) be an orthogonal ordered basis for a subspace W of
V, and let v 2 W. Let [v]B = [a1 ; : : : ; ak ]. The goal is to show that
ai = < v; vi > =kvi k2 . Now, v = a1 v1 +
+ ak vk . Hence, < v; vi > =
< a1 v1 +
+ ak vk ; vi > = a1 < v1 ; vi > +
+ ai < vi ; vi > + +
ak < vk ; vi > (by properties (4) and (5) of an inner product)
= a1 (0) + + ai 1 (0) + ai kvi k2 + ai+1 (0) + + ak (0) (since B is orthogonal) = ai kvi k2 . Hence, ai = < v; vi > =kvi k2 . Also, if B is orthonormal,
then each kvi k = 1, so ai = < v; vi >.
(18) Let v = a1 v1 +
+ ak vk and w = b1 v1 +
+ bk vk , where
ai = < v; vi > and bi = < w; vi > (by Theorem 7.16). Then < v; w > =
< a1 v1 +
+ ak vk ; b1 v1 +
+ bk vk > = a1 b1 < v1 ; v1 > +
a2 b2 < v2 ; v2 > +
+ ak bk < vk ; vk > (by property (5) of an inner
product and part (3) of Theorem 7.12, and since < vi ; vj > = 0 for i 6= j)
= a1 b1 + a2 b2 +
+ ak bk (since kvi k = 1, for 1
< v; v1 > < w; v1 >+ < v; v2 > < w; v2 > +
i
k) =
+ < v; vk > < w; vk >.
(19) Using w1 = t2 t + 1, w2 = 1, and w3 = t yields the orthogonal basis
fv1 ; v2 ; v3 g, with v1 = t2 t + 1, v2 = 20t2 + 20t + 13, and
v3 = 15t2 + 4t 5.
(20) f[ 9; 4; 8]; [27; 11; 22]; [5; 3; 4]g
(21) The proof is totally analogous to the (long) proof of Theorem 6.4 in Section
6.1 of the textbook. Use Theorem 7.15 in the proof in place of Theorem 6.1.
(22) (a) Follow the proof of Theorem 6.11, using properties (4) and (5) of
an inner product. In the last step, use the fact that < w; w >
= 0 =) w = 0 (by property (2) of an inner product).
(b) Prove part (4) of Theorem 7.19 …rst, using a proof similar to the
proof of Theorem 6.12. (In that proof, use Theorem 7.16 in place of
Theorem 6.3.) Then prove part (5) of Theorem 7.19 as follows:
Let W be a subspace of V of dimension k. By Theorem 7.17, W has
an orthogonal basis fv1 ; : : : ; vk g. Expand this basis to an orthogonal basis for all of V. (That is, …rst expand to any basis for V by
Theorem 4.18, then use the Gram-Schmidt Process. Since the …rst k
vectors are already orthogonal, this expands fv1 ; : : : ; vk g to an orthogonal basis fv1 ; : : : ; vn g for V.) Then, by part (4) of Theorem
7.19, fvk+1 ; : : : ; vn g is a basis for W ? , and so dim(W ? ) = n k.
Hence dim(W) + dim(W ? ) = n.
(c) Since any vector in W is orthogonal to all vectors in W ? ,
W (W ? )? :
(d) By part (3) of Theorem 7.19, W
(W ? )? . Next, by part (5) of
Theorem 7.19, dim(W) = n dim(W ? ) = n (n dim((W ? )? ))
= dim((W ? )? ). Thus, by Theorem 4.16, or its complex analog,
W = (W ? )? .
(23) W⊥ = span({t³ − t², t + 1})
(24) Using 1 and t as the vectors in the Gram-Schmidt Process, the basis
obtained for W ? is f 5t2 + 10t 2; 15t2 14t + 2g.
(25) As in the hint, let fv1 ; : : : ; vk g be an orthonormal basis for W. Now
if v 2 V, let w1 = hv; v1 i v1 +
+ hv; vk i vk and w2 = v w1 .
Then, w1 2 W because w1 is a linear combination of basis vectors for
W. We claim that w2 2 W ? . To see this, let u 2 W. Then u =
a1 v1 +
+ ak vk for some a1 ; : : : ; ak . Then hu; w2 i = hu; v w1 i =
ha1 v1 +
+ ak vk ; v (hv; v1 i v1 +
+ hv; vk i vk )i =
ha1 v1 +
+ ak vk ; vi ha1 v1 +
+ ak vk ; hv; v1 i v1 +
+ hv; vk i vk i
Pk
Pk Pk
= i=1 ai hvi ; vi
i=1
j=1 ai hv; vj i hvi ; vj i. But hvi ; vj i = 0 when
i 6= j and hvi ; vj i = 1 when i = j, since fv1 ; : : : ; vk g is an orthonormal
Pk
Pk
Pk
set. Hence, hu; w2 i = i=1 ai hvi ; vi
ai hv; vi i = i=1 ai hvi ; vi
i=1
Pk
i=1 ai hvi ; vi = 0. Since this is true for every u 2 W, we conclude that
w2 2 W ? .
Finally, we want to show uniqueness of decomposition. Suppose that
v = w1 +w2 and v = w10 +w20 ; where w1 ; w10 2 W and w2 , w20 2 W ? . We
want to show that w1 = w10 and w2 = w20 : Now, w1 w10 = w20 w2 . Also,
w1 w10 2 W, but w20 w2 2 W ? : Thus, w1 w10 = w20 w2 2 W \ W ? :
By part (2) of Theorem 7.19, w1 w10 = w20 w2 = 0: Hence, w1 = w10
and w2 = w20 .
sin t + 21 cos t
q
nq
o
15
3
2
(27) Orthonormal basis for W =
1); 322
(10t2 + 7t + 2) . Then
14 (2t
(26) w1 =
1
2
(sin t
cos t), w2 =
1 t
ke
1
2
v = 4t2 t + 3 = w1 + w2 , where w1 =
2
8t + 1) 2 W ? .
w2 = 12
23 (5t
1
2
23 (32t
+ 73t + 57) 2 W and
Pk
(28) Let W = span(fv1 ; : : : ; vk g). Notice that w1 =
i=1 < v; vi > vi
(by Theorem 7.16, since fv1 ; : : : ; vk g is an orthonormal basis for W by
Theorem 7.15). Then, using the hint in the text yields kvk2 = < v; v >
= < w1 + w2 ; w1 + w2 > = < w1 ; w1 > + < w2 ; w1 > + < w1 ; w2 >
+ < w2 ; w2 > = kw1 k2 + kw2 k2 (since w1 2 W and w2 2 W ? ). Hence,
2
2
kvk
D P kw1 k
E
Pk
k
=
i=1 < v; vi > vi ;
j=1 < v; vj > vj
E
Pk D
Pk
= i=1 (< v; vi > vi ) ;
j=1 < v; vj > vj
Pk Pk
= i=1 j=1 h(< v; vi > vi ) ; (< v; vj > vj )i
Pk Pk
= i=1 j=1 ((< v; vi > < v; vj >) < vi ; vj >)
Pk
=
i=1 (< v; vi > < v; vi >) (because fv1 ; : : : ; vk g is an orthonormal
Pk
set) = i=1 < v; vi >2 .
(29) (a) Let fv1 ; : : : ; vk g be an orthonormal basis for W. Let x; y 2 V. Then
projW x = < x; v1 > v1 +
+ < x; vk > vk ,
projW y = < y; v1 > v1 +
+ < y; vk > vk , projW (x + y) =
< x + y; v1 > v1 +
+ < x + y; vk > vk , and projW (kx) =
< kx; v1 > v1 +
+ < kx; vk > vk . Clearly, by properties (4)
and (5) of an inner product, projW (x + y) = projW x + projW y,
and projW (kx) = k(projW x). Hence L is a linear transformation.
(b) ker(L) = W ? , range(L) = W
(c) First, we establish that if v 2 W, then L(v) = v. Note that for
v 2 W, v = v + 0, where v 2 W and 0 2 W ? . But by Theorem
7.20, any decomposition v = w1 + w2 with w1 2 W and w2 2 W ?
is unique. Hence, w1 = v and w2 = 0. Then, since w1 = projW v,
we get L(v) = v.
Finally, let v 2 V and let x = L(v) = projW v 2 W. Then
(L L)(v) = L(L(v)) = L(x) = projW x = x (since x 2 W) = L(v).
Hence, L L = L.
(30) (a) F
(b) F
(c) F
(d) T
(e) F
Chapter 7 Review Exercises
(1) (a) 0
(b) (1 + 2i)(v z) = ((1 + 2i)v) z =
21 + 43i, v ((1 + 2i)z) = 47
9i
(c) Since (v z) = (13 + 17i), (1 + 2i)(v z) = ((1 + 2i)v) z =
(1+2i)(13+17i) = 21+43i; while v ((1+2i)z) = (1 + 2i)(13+17i) =
(1 2i)(13 + 17i) = 47 9i
(d) w z =
2
35 + 24i; w (v + z) =
2
(2) (a) H = 4 1 3i
7+i
(b) AA =
1 + 3i
34
10 + 10i
32
2
14i
2 + 14i
34
35 + 24i, since w v = 0
3
7 i
10 10i 5
30
, which is clearly Hermitian.
(3) (Az) w = z (A w) (by Theorem 7.3) = z ( Aw) (A is skew-Hermitian)
= z (Aw) (by Theorem 7.1, part (5))
(4) (a) w = 4 + 3i, z =
2i
(b) x = 2 + 7i, y = 0, z = 3
2i
(c) No solutions
(d) {[(2 + i) − (3 − i)c, (7 + i) − ic, c] | c ∈ C} = {[(2 − 3c) + (1 + c)i, 7 + (1 − c)i, c] | c ∈ C}
(5) By part (3) of Theorem 7.5, jA j = jAj. Hence, jA Aj = jA j jAj =
2
jAj jAj = jAj , which is real and nonnegative. This equals zero if and
only if jAj = 0, which occurs if and only if A is singular (by part (4) of
Theorem 7.5).
2
(6) (a) pA (x)2= x3 x2 +3x 1 =
2 (x + 1)(x 1) = (x3 i)(x + i)(x
i
0 0
2 i
2+i 0
i 0 5; P = 4 1 i
1+i 2 5
D=4 0
0
0 1
1
1 1
1);
(b) pA (x) = x3 2ix2 x = x(x i)2 ; Not diagonalizable. Eigenspace for
= i is one-dimensional, with fundamental eigenvector [1 3i; 1; 1].
Fundamental eigenvector for = 0 is [ i; 1; 1].
(7) (a) One possibility: Consider L : C ! C given by L(z) = z. Note that
L(v + w) = (v + w) = v + w = L(v) + L(w). But L is not a linear
operator on C because L(i) = i, but iL(1) = i(1) = i, so the rule
“L(cv) = cL(v)” is not satis…ed.
(b) The example given in part (a) is a linear operator on C, thought of as
a real vector space. In that case we may use only real scalars, and so,
if v = a + bi, then L(cv) = L(ca + cbi) = ca cbi = c(a bi) = cL(v).
Note: for any function L from any vector space to itself (real or
complex), the rule “L(v + w) = L(v) + L(w)” implies that L(cv) =
cL(v) for any rational scalar c. Thus, any example for real vector
spaces for which L(v + w) = L(v) + L(w) is satis…ed but L(cv) =
cL(v) is not satis…ed for some c must involve an irrational value of
c.
One possibility: Consider R as a vector space over the rationals
(as the scalar …eld). Choose an uncountably in…nite basis B for R
over Q with 1 2 B and 2 B. De…ne L : R ! R by letting L(1) = 1,
but L(v) = 0 for every v 2 B with v 6= 1. (De…ning L on a basis
uniquely determines L.) This L is a linear operator on R over Q, and
so L(v + w) = L(v) + L(w) is satis…ed. But, by de…nition L( ) = 0,
but L(1) = (1) = . Thus, L is not a linear operator on the real
vector space R.
(8) (a) B = f[1; i; 1; i]; [4 + 5i; 7 4i; i; 1] ;
[10;
2 + 6i;
8 i;
1 + 8i] ; [0; 0; 1; i]g
1
(b) C = f 12 [1; i; 1; i]; 6p
[4 + 5i; 7 4i; i; 1] ;
3
p1 [10;
2 + 6i;
8 i;
1 + 8i] ; p12 [0; 0; 1; i]g
3 30
2
1
1
1
1
2
2i
2
2i
1
1
1
1
6 p (4 5i)
p (7 + 4i)
p i
p
6 6 3
6 3
6 3
6 3
(c) 6
10
p
p1 ( 2
6i) 3p130 ( 8 + i) 3p130 ( 1 8i)
4
3 30
3 30
p1
p1 i
0
0
2
2
3
7
7
7
5
(9) (a) pA (x) = x2
2
p1 ( 1
i)
4 6
p2
6
5x + 4 = (x
3
p1 (1 + i)
3
5
1)(x
4); D =
1
0
0
4
; P =
p1
3
2
(b) pA (x) = x3 + ( 98 + 98i)x2 4802ix
2 6 =1 x(x 4 493+ 49i) ; D =
p
p
i 7 5
2
3
7
5
0
0
0
6
7
3
2i 7
2
4 0 49 49i
5; P = 6
0
6 7 i p5 7p5 7 is a probable
4
5
0
0
49 49i
2
15
p
0
7
7 5 3
2
3i
2
6i
3 5
answer. Another possibility: P = 17 4 2 6i
6i 3
2i
2
3
80
105 + 45i
60
185
60 60i 5, and so
(10) Both AA and A A equal 4 105 45i
60
60 + 60i
95
A is normal. Hence, by Theorem 7.9, A is unitarily diagonalizable.
(11) Now, A A =
2
459
6
46
459i
6
6
459
46i
6
4
11 + 925i
83 2227i
but AA =
2
7759
6
120
2395i
6
6 3850 + 1881i
6
4 1230 + 3862i
95 + 2905i
46 + 459i
459 + 46i
473
92 + 445i
92 445i
473
918 103i
81 932i
2248 + 139i 305 + 2206i
120 + 2395i
769
648 + 1165i
1188 445i
906 75i
11 925i
918 + 103i
81 + 932i
1871
4482 + 218i
3850 1881i
648 1165i
2371
329 + 2220i
660 + 1467i
83 + 2227i
2248 139i
305 2206i
4482 218i
10843
1230 3862i
1188 + 445i
329 2220i
2127
1467 415i
1
. Hence, U U = UU = I.
(14) f[1; 0; 0]; [4; 3; 0]; [5; 4; 2]g
(15) Orthogonal basis for W: fsin x; x cos x + 12 sin xg;
1
w1 = 2 sin x 4 36
w1
2 +3 (x cos x + 2 sin x); w2 = x
7
7
7,
7
5
95 2905i
906 + 75i
660 1467i
1467 + 415i
1093
so A is not normal. Hence, A is not unitarily diagonalizable, by Theorem 7.9. (Note: A is diagonalizable (eigenvalues 0, 1, and i), but the
eigenspaces are not orthogonal to each other.)
(12) If U is a unitary matrix, then U = U
q
8
(13) Distance = 105
0:276
3
3
7
7
7,
7
5
(16) (a) T (b) F (c) T (d) F (e) F (f) T (g) F (h) T (i) F (j) T (k) T (l) T
(m) F (n) T (o) T (p) T (q) T (r) T (s) T (t) T (u) T (v) T (w) F
Chapter 8
Section 8.1
(1) Symmetric: (a), (b), (c), (d), (g)
2
3
0 1 1 1
6 1 0 1 1 7
7
(a) G1 : 6
4 1 1 0 1 5
1 1 1 0
3
2
1 1 0 0 1
6 1 0 0 1 0 7
7
6
7
(b) G2 : 6
6 0 0 1 0 1 7
4 0 1 0 0 1 5
1 0 1 1 1
3
2
0 0 0
(c) G3 : 4 0 0 0 5
0 0 0
3
2
0 1 0 1 0 0
6 1 0 1 1 0 0 7
7
6
6 0 1 0 0 1 1 7
7
6
(d) G4 : 6
7
6 1 1 0 0 1 0 7
4 0 0 1 1 0 1 5
0 0 1 0 1 0
2
0 1 0 0 0 0 0
6 0 0 1 0 0 0 0
6
6 0 0 0 1 0 0 0
6
6 0 0 0 0 1 0 0
(h) D4 : 6
6 0 0 0 0 0 1 0
6
6 0 0 0 0 0 0 1
6
4 0 0 0 0 0 0 0
1 0 0 0 0 0 0
2
0
6 0
(e) D1 : 6
4 1
1
2
0
6 0
(f) D2 : 6
4 0
0
2
0
6 1
6
(g) D3 : 6
6 1
4 0
0
0
0
0
0
0
0
1
0
1
0
0
1
0
1
0
0
1
1
1
0
1
1
0
0
1
0
0
0
1
1
0
0
1
0
3
0
0 7
7
1 5
0
3
0
1 7
7
1 5
1
0
0
1
0
1
0
1
0
1
0
3
7
7
7
7
5
3
7
7
7
7
7
7
7
7
7
7
5
(2) All …gures for this exercise appear on the next page.
F can be the adjacency matrix for either a graph or digraph (see Figure 12).
G can be the adjacency matrix for a digraph (only) (see Figure 13).
H can be the adjacency matrix for a digraph (only) (see Figure 14).
I can be the adjacency matrix for a graph or digraph (see Figure 15).
K can be the adjacency matrix for a graph or digraph (see Figure 16).
L can be the adjacency matrix for a graph or digraph (see Figure 17).
156
Andrilli/Hecker - Answers to Exercises
157
Section 8.1
Andrilli/Hecker - Answers to Exercises
(3) The digraph is shown in Figure 18 (see previous page), and the adjacency matrix is

          A  B  C  D  E  F
      A [ 0  0  1  0  0  0 ]
      B [ 0  0  0  1  1  0 ]
      C [ 0  1  0  0  1  0 ]
      D [ 1  0  0  0  0  1 ]
      E [ 1  1  0  1  0  0 ]
      F [ 0  0  0  1  0  0 ]

The transpose gives no new information. But it does suggest a different interpretation of the results: namely, the (i, j) entry of the transpose equals 1 if author j influences author i (see Figure 18 on previous page).
(4) (a) 3
(b) 3
(c) 6 = 1 + 1 + 4
(e) Length 4
(d) 7 = 0 + 1 + 1 + 5
(f) Length 3
(5) (a) 4
(d) 7 = 1 + 0 + 3 + 3
(b) 0
(e) No such path exists
(c) 5 = 1 + 1 + 3
(f) Length 2
(6) (a) Figure 8.5: 7; Figure 8.6: 2
(b) Figure 8.5: 8; Figure 8.6: 4
(c) Figure 8.5: 3 = 0 + 1 + 0 + 2; Figure 8.6: 17 = 1 + 2 + 4 + 10
(7) (a) If the vertex is the ith vertex, then the ith row and ith column entries
of the adjacency matrix all equal zero, except possibly for the (i; i)
entry.
(b) If the vertex is the ith vertex, then the ith row entries of the adjacency
matrix all equal zero, except possibly for the (i; i) entry. (Note: The
ith column entries may be nonzero.)
(8) (a) The trace equals the number of loops in the graph or digraph.
(b) The trace equals the number of cycles of length k in the graph or
digraph. (See Exercise 6 in the textbook for the definition of a cycle.)
(9) (a) The digraph in Figure 8.5 is strongly connected; the digraph in Figure
8.6 is not strongly connected (since there is no path directed to P5
from any other vertex).
(b) Use the fact that if a path exists between a given pair Pi , Pj of
vertices, then there must be a path of length at most n 1 between
Pi and Pj . This is because any longer path would involve returning
to a vertex Pk that was already visited earlier in the path since there
are only n vertices. (That is, any portion of the longer path that
involves travelling from Pk to itself could be removed from that path
to yield a shorter path between Pi and Pj ). Then use the statement
in the box just before Example 4 in the textbook.
158
Andrilli/Hecker - Answers to Exercises
Section 8.3
(10) (a) Use part (c).
(b) Yes, it is a dominance digraph, because no tie games are possible and
because each team plays every other team. Thus if Pi and Pj are two
given teams, either Pi defeats Pj or vice versa.
(c) Since the entries of both A and A^T are all zeroes and ones, then, with i ≠ j, the (i, j) entry of A + A^T = aij + aji = 1 iff aij = 1 or aji = 1, but not both. Also, the (i, i) entry of A + A^T = 2aii = 0 iff aii = 0. Hence, A + A^T has the desired properties (all main diagonal entries equal to 0, and all other entries equal to 1) iff there is always exactly one edge between two distinct vertices (aij = 1 or aji = 1, but not both, for i ≠ j), and the digraph has no loops (aii = 0). (This is the definition of a dominance digraph.)
(11) We prove Theorem 8.1 by induction on k. For the base step, k = 1. This case is true by the definition of the adjacency matrix. For the inductive step, let B = A^k. Then A^(k+1) = BA. Note that the (i, j) entry of A^(k+1) = (ith row of B) · (jth column of A) = Σ_{q=1}^{n} b_iq a_qj = Σ_{q=1}^{n} (number of paths of length k from Pi to Pq)(number of paths of length 1 from Pq to Pj) = (number of paths of length k + 1 from Pi to Pj).
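The counting argument in Exercise 11 is easy to spot-check numerically. A minimal Python sketch (the small digraph below is hypothetical, not one from the textbook):

    import numpy as np

    # Adjacency matrix of a small hypothetical digraph: A[i, j] = 1 iff there is an edge from vertex i to vertex j.
    A = np.array([[0, 1, 0],
                  [0, 0, 1],
                  [1, 0, 0]])

    # By Theorem 8.1, entry (i, j) of A**k counts the paths of length k from vertex i to vertex j.
    print(np.linalg.matrix_power(A, 2))   # paths of length 2
    print(np.linalg.matrix_power(A, 3))   # paths of length 3; for this 3-cycle, A^3 is the identity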
(12) (a) T
(b) F
(c) T
(d) F
(e) T
(f) T
(g) T
Section 8.2
(1) (a) I1 = 8, I2 = 5, I3 = 3
(b) I1 = 10, I2 = 4, I3 = 5, I4 = 1, I5 = 6
(c) I1 = 12, I2 = 5, I3 = 3, I4 = 2, I5 = 2, I6 = 7
(d) I1 = 64, I2 = 48, I3 = 16, I4 = 36, I5 = 12, I6 = 28
(2) (a) T
(b) T
Section 8.3
(1) (a) y =
0:8x
3:3, y t
7:3 when x = 5
(b) y = 1:12x + 1:05, y t 6:65 when x = 5
(c) y =
1:5x + 3:8, y t
3:7 when x = 5
(2) (a) y = 0:375x2 + 0:35x + 3:60
(b) y = 0:83x2 + 1:47x 1:8
(c) y =
159
0:042x2 + 0:633x + 0:266
Andrilli/Hecker - Answers to Exercises
(3) (a) y = 41 x3 +
25 2
28 x
(4) (a) y = 4:4286x2
+
25
14 x
+
37
35
Section 8.3
(b) y =
1 3
6x
1 2
14 x
1
3x
+
117
35
2:0571
2
(b) y = 0:7116x + 1:6858x + 0:8989
(c) y =
0:1014x2 + 0:9633x
(d) y =
0:1425x3 + 0:9882x
0:8534
(e) y = 0x3
0:3954x2 + 0:9706
(5) (a) y = 0.4x + 2.54; the angle reaches 20° in the 44th month
(b) y = 0.02857x^2 + 0.2286x + 2.74; the angle reaches 20° in the 21st month
(c) The quadratic approximation will probably be more accurate after the fifth month because the amount of change in the angle from the vertical is generally increasing with each passing month.
(6) The answers to (a) and (b) assume that the years are renumbered as suggested in the textbook.
(a) y = 25.2x + 126.9; predicted population in 2020 is 328.5 million
(b) y = 0.29x^2 + 23.16x + 129.61; predicted population in 2020 is 333.51 million
(c) The quadratic approximation is probably more accurate. The population changes between successive decades (that is, the first differences of the original data) are, respectively, 28.0, 24.0, 23.2, 22.2, and 32.7 (million). This shows that the change in the population is decreasing slightly each decade at first, but then starts increasing. For the linear approximation in part (a) to be a good fit for the data, these first differences should all be relatively close to 25.2, the slope of the line in part (a). But that is not the case here. For this reason, the linear model does not seem to fit the data well. The second differences of the original data (the changes between successive first differences) are, respectively, −4.0, −0.8, −1.0, and +10.5 (million). For the quadratic approximation in part (b) to be a good fit for the data, these second differences should all be relatively close to 0.58, the second derivative of the quadratic in part (b). However, that is not the case here. On the other hand, the positive second derivative of the quadratic model in part (b) indicates that this graph is concave up, and therefore is a better fit to the overall shape of the data than the linear model. This probably makes the quadratic approximation more reliable for predicting the population in the future.
(7) The least-squares polynomial is y = (4/5)x^2 − (2/5)x + 2, which is the exact quadratic through the given three points.
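Answers such as those in Exercises 1–4 and 7 can be checked by solving the normal equations of Theorem 8.2 directly. A minimal Python sketch with made-up data points (not data from the textbook):

    import numpy as np

    # Hypothetical data points (a_i, b_i) and a degree-2 least-squares polynomial.
    a = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    b = np.array([3.6, 4.3, 6.0, 8.1, 11.2])

    # Columns of A hold 1, x, x^2 evaluated at the data values, as in Theorem 8.2.
    A = np.vander(a, 3, increasing=True)

    # Solve the normal equations (A^T A) c = A^T b for the coefficients [c0, c1, c2].
    c = np.linalg.solve(A.T @ A, A.T @ b)
    print(c)                     # y = c[0] + c[1]*x + c[2]*x**2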
Andrilli/Hecker - Answers to Exercises
Section 8.4
(8) When t = 1, the system in Theorem 8.2 is
2
3
1 a1
6
7
1 1
1 6 1 a2 7 c0
=
6 ..
.
.. 7
a1 a2
an 4 .
5 c1
1 an
which becomes
"
the desired system.
(9) (a) x1 =
(b) x1 =
8
>
>
>
>
>
<
>
>
>
>
>
:
(10) (a) T
230
39 ,
x2 =
8
4x1
>
>
<
2x1
>
>
:
3x1
46
11 ,
x2 =
2x1
x1
+
n
Pn
i=1
ai
Pn
i=1
ai
i=1
a2i
Pn
1
a1
#
c0
c1
1
a2
=
1
an
"
Pn
2
6
6
6
4
i=1 bi
Pn
i=1
ai bi
b1
b2
..
.
bn
#
3
7
7
7;
5
;
155
39 ;
3x2
=
11 32 ;
which is almost 12
+
5x2
=
31 23 ;
which is almost 32
+
x2
=
21 23 ;
which is close to 21
12
11 ,
x3 =
x2
+
3x2
62
33 ;
x3
=
11 13 ;
x3
=
9 31 ;
x1
2x2
+
3x3
=
12;
3x1
4x2
+
2x3
=
20 23 ;
(b) F
(c) F
which is almost 11
which is almost
9
which is almost 21
(d) F
Section 8.4
(1) A is not stochastic, since A is not square; A is not regular, since A is not
stochastic.
B is not stochastic, since the entries of column 2 do not sum to 1; B is
not regular, since B is not stochastic.
C is stochastic; C is regular, since C is stochastic and has all nonzero
entries.
D is stochastic; D is not regular, since every positive power of D is a
matrix whose rows are the rows of D permuted in some order, and hence
every such power contains zero entries.
E is not stochastic, since the entries of column 1 do not sum to 1; E is
not regular, since E is not stochastic.
F is stochastic; F is not regular, since every positive power of F has all
second row entries zero.
G is not stochastic, since G is not square; G is not regular, since G is not
161
Andrilli/Hecker - Answers to Exercises
Section 8.4
stochastic.
H is stochastic; H is regular, since H^2 = [1/2, 1/4, 1/4; 1/4, 1/2, 1/4; 1/4, 1/4, 1/2], which has no zero entries.
(2) (a) p1 = [5/18; 13/18], p2 = [67/216; 149/216]
(b) p1 = [2/9; 13/36; 5/12], p2 = [25/108; 97/216; 23/72]
(c) p1 = [17/48; 1/3; 5/16], p2 = [205/576; 49/144; 175/576]
(3) (a) [2/5; 3/5]    (b) [18/59; 20/59; 21/59]
(4) (a) [0.4; 0.6]    (b) [0.305; 0.339; 0.356]    (c) [1/4; 3/10; 3/10; 3/20]
(5) (a) [0.34; 0.175; 0.34; 0.145] in the next election; [0.3555; 0.1875; 0.2875; 0.1695] in the election after that
(b) The steady-state vector is [0.36; 0.20; 0.24; 0.20]; in a century, the votes would be 36% for Party A and 24% for Party C.
3
2 1 1 1 1
0
2
6
6
5
7
6
6 1 1 0 0
1 7
6 8 2
5 7
7
6
6
1
1 7
(6) (a) 6 18 0 12 10
10 7
7
6
6 1
1 7
7
6 4 0 16 12
5 5
4
1
0 31 16 15
2
(b) If M is the stochastic matrix in part (a), then
2 41
3
1
1
13
9
6
6
6
6
6
2
M =6
6
6
6
4
120
6
5
60
100
1
8
27
80
13
240
13
200
1
5
3
20
13
240
73
240
29
200
3
25
13
48
13
120
29
120
107
300
13
60
9
80
1
3
1
5
13
60
28
75
Thus, M is regular.
(c)
29
120 , since the probability
13
73
29 1
[ 15 ; 240
; 240
; 120
; 5]
7
7
7
7
7
7 ; which has all entries nonzero.
7
7
7
5
vector after two time intervals is
3
3 1 1
(d) [ 15 ; 20
; 20
; 4 ; 4 ]; over time, the rat frequents rooms B and C the least
and rooms D and E the most.
162
Andrilli/Hecker - Answers to Exercises
Section 8.5
(7) The limit vector for any initial input is [1; 0; 0]. However, M is not regular, since every positive power of M is upper triangular, and hence is zero below the main diagonal. The unique fixed point is [1; 0; 0].
(8) (a) Any steady-state vector is a solution of the system
[1−a, b; a, 1−b][x1; x2] = [x1; x2],
which simplifies to
[−a, b; a, −b][x1; x2] = [0; 0].
Using the fact that x1 + x2 = 1 yields a larger system whose augmented matrix is
[−a, b | 0; a, −b | 0; 1, 1 | 1].
This reduces to
[1, 0 | b/(a+b); 0, 1 | a/(a+b); 0, 0 | 0],
assuming a and b are not both zero. (Begin the row reduction with the row operation ⟨1⟩ ↔ ⟨3⟩ in case a = 0.) Hence, x1 = b/(a+b), x2 = a/(a+b) is the unique steady-state vector.
(9) It is enough to show that the product of any two stochastic matrices is stochastic. Let A and B be n × n stochastic matrices. Then the entries of AB are clearly nonnegative, since the entries of both A and B are nonnegative. Furthermore, the sum of the entries in the jth column of AB = Σ_{i=1}^{n} (AB)_ij = Σ_{i=1}^{n} (ith row of A) · (jth column of B) = Σ_{i=1}^{n} (a_i1 b_1j + a_i2 b_2j + ··· + a_in b_nj) = (Σ_{i=1}^{n} a_i1) b_1j + (Σ_{i=1}^{n} a_i2) b_2j + ··· + (Σ_{i=1}^{n} a_in) b_nj = (1)b_1j + (1)b_2j + ··· + (1)b_nj (since A is stochastic, and each of the summations is a sum of the entries of a column of A) = 1 (since B is stochastic). Hence AB is stochastic.
(10) Suppose M is a k × k matrix.
Base Step (n = 1): ith entry in Mp = (ith row of M) · p = Σ_{j=1}^{k} m_ij p_j = Σ_{j=1}^{k} (probability of moving from state Sj to Si)(probability of being in state Sj) = probability of being in state Si after 1 step of the process = ith entry of p1.
Inductive Step: Assume p_k = M^k p is the probability vector after k steps of the process. Then, after an additional step of the process, the probability vector p_{k+1} = M p_k = M(M^k p) = M^{k+1} p.
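A quick numerical illustration of Exercises 8 and 10: repeatedly applying a regular stochastic matrix to any initial probability vector approaches the unique steady-state vector. A minimal Python sketch (the values of a and b are hypothetical):

    import numpy as np

    # Hypothetical 2 x 2 regular stochastic matrix M = [[1-a, b], [a, 1-b]] (columns sum to 1).
    a, b = 0.3, 0.2
    M = np.array([[1 - a, b],
                  [a, 1 - b]])

    p = np.array([1.0, 0.0])     # any initial probability vector
    for _ in range(50):
        p = M @ p                # p_{k+1} = M p_k, as in Exercise 10

    print(p)                     # approaches [b/(a+b), a/(a+b)] = [0.4, 0.6], as in Exercise 8(a)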
(11) (a) F
(b) T
(c) T
(d) T
(e) F
Section 8.5
(1) (a)
24
26
46
42
15
51
30
84
10
24
163
16
37
39
11
62
23
Andrilli/Hecker - Answers to Exercises
(b)
97
201
254
177
171
175
146
93
312
96
168
256
Section 8.6
169
133
238
143
175
(430)
113
311
(357)
(where “27” was used twice to pad the last vector)
(2) (a) HOMEWORK IS GOOD FOR THE SOUL
(b) WHO IS BURIED IN GRANT( ’) S TOMB–
(c) DOCTOR, I HAVE A CODE
(d) TO MAKE A SLOW HORSE FAST DON( ’) T FEED IT––
(3) (a) T
(b) T
(c) F
Section 8.6
(1) (a) (III): h2i ! h3i; inverse operation is (III): h2i
matrix is its own inverse.
(b) (I): h2i
2 h2i; inverse operation is
2
1
1
4 0
h2i.
The
inverse
matrix
is
2
0
(I): h2i
(c) (II): h3i
4 h1i + h3i; inverse operation is
! h3i. The
0
1
2
0
2
1
4 h1i + h3i. The inverse matrix is 4 0
4
(II): h3i
(d) (I): h2i
3
0
0 5.
1
3
0
0 5.
1
0
1
0
1
6
6 h2i; inverse operation is (I): h2i
3
2
1 0 0 0
7
6
6 0 1 0 0 7
6
7.
6
inverse matrix is 6
7
4 0 0 1 0 5
0 0 0 1
(e) (II): h3i
h3i
2 h4i + h3i; inverse operation is
2
1
6 0
6
2 h4i + h3i. The inverse matrix is 4
0
0
(II):
0
1
0
0
(f) (III): h1i ! h4i; inverse operation is (III): h1i
matrix is its own inverse.
(2) (a)
4
3
9
7
=
4
0
0
1
1
3
0
1
1
0
0
1
4
(b) Not possible, since the matrix is singular.
164
1
0
9
4
1
0
0
1
0
h2i. The
3
0
0 7
7.
2 5
1
! h4i. The
1
0
0
1
Andrilli/Hecker - Answers to Exercises
(c) 2
The
0
6 1
6
4 0
0
2
1
6 0
6
4 0
0
2
1
6
6 0
6
6
6 0
4
0
Section 8.6
product of 3the2 following matrices
3 2in the order listed:
3
1 0 0
3 0 0 0
1 0 0 0
6
7 6
7
0 0 0 7
7, 6 0 1 0 0 7, 6 0 1 0 0 7 ,
5
4
5
4
0 1 0
0 0 1 0
0 0 1 0 5
0 0 1
0 0 0 1
3 0 0 1
3
3 2
3 2
1 0 0 0
1 0 0 0
0 0 0
7
6
7 6
0 1 0 7
7, 6 0 6 0 0 7, 6 0 1 0 0 7 ,
1 0 0 5 4 0 0 1 0 5 4 0 0 5 0 5
0 0 0 1
0 0 0 1
0 0 1
3 2
3 2
3
0
0 0
1 0 0
0
1 0 0 23
7 6
7 6
7
5
1 7
6
7 6
1
0 7
3
6 7
7 6 0 1 0 0 7 6 0 1 0
7, 6
7, 6
7
6 0 0 1 0 7 6 0 0 1
0
1 0 7
0 7
5 4
5 4
5
0
0
1
0
0
0
1
0
0
0
1
(3) If A and B are row equivalent, then B = Ek ··· E2 E1 A for some elementary matrices E1, ..., Ek, by Theorem 8.8. Hence B = PA, with P = Ek ··· E1. By Corollary 8.9, P is nonsingular.
Conversely, if B = PA for a nonsingular matrix P, then by Corollary 8.9, P = Ek ··· E1 for some elementary matrices E1, ..., Ek. Hence A and B are row equivalent by Theorem 8.8.
(4) Follow the hint in the textbook. Because U is upper triangular with all nonzero diagonal entries, when row reducing U to In, no type (III) row operations are needed, and all type (II) row operations used will be of the form ⟨i⟩ ← c⟨j⟩ + ⟨i⟩, where i < j (the pivots are all below the targets). Thus, none of the type (II) row operations change a diagonal entry, since uji = 0 when i < j. Hence, only type (I) row operations make changes on the main diagonal, and no main diagonal entry will be made zero. Also, the elementary matrices for the type (II) row operations mentioned are upper triangular: nonzero on the main diagonal, and perhaps in the (i, j) entry, with i < j. Since the elementary matrices for type (I) row operations are also upper triangular, we see that U^(−1), which is the product of all these elementary matrices, is the product of upper triangular matrices. Therefore, it is also upper triangular.
(5) For type (I) and (III) operations, E = E^T. If E corresponds to a type (II) operation ⟨i⟩ ← c⟨j⟩ + ⟨i⟩, then E^T corresponds to ⟨j⟩ ← c⟨i⟩ + ⟨j⟩, so E^T is also an elementary matrix.
(6) Note that (AF^T)^T = FA^T. Now, multiplying by F on the left performs a row operation on A^T. Taking the transpose again shows that this is a corresponding column operation on A. The first equation shows that this is the same as multiplying by F^T on the right side of A.
165
Andrilli/Hecker - Answers to Exercises
Section 8.7
(7) A is nonsingular iff rank(A) = n (by Theorem 2.14) iff A is row equivalent to In (by the definition of rank) iff A = Ek ··· E1 In = Ek ··· E1 for some elementary matrices E1, ..., Ek (by Theorem 8.8).
(8) AX = O has a nontrivial solution iff rank(A) < n (by Theorem 2.5) iff A is singular (by Theorem 2.14) iff A cannot be expressed as a product of elementary matrices (by Corollary 8.9).
(9) (a) EA and A are row equivalent (by Theorem 8.8), and hence have the same reduced row echelon form, and the same rank.
(b) The reduced row echelon form of A cannot have more nonzero rows than A.
(c) If A has k rows of zeroes, then rank(A) ≤ m − k. But AB has at least k rows of zeroes, so rank(AB) ≤ m − k.
(d) Let A = Ek ··· E1 D, where D is in reduced row echelon form. Then rank(AB) = rank(Ek ··· E1 DB) = rank(DB) (by repeated use of part (a)) ≤ rank(D) (by part (c)) = rank(A) (by definition of rank).
(e) Exercise 18 in Section 2.3 proves the same results as those in parts (a) through (d), except that it is phrased in terms of the underlying row operations rather than in terms of elementary matrices.
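The inequality rank(AB) ≤ rank(A) from part (d) can be spot-checked numerically. A minimal Python sketch with hypothetical matrices (not ones from the textbook):

    import numpy as np

    # Hypothetical matrices used only to illustrate rank(AB) <= rank(A).
    A = np.array([[1, 2, 3],
                  [2, 4, 6],     # second row is twice the first, so rank(A) = 2
                  [0, 1, 1]])
    B = np.array([[1, 0, 2],
                  [0, 1, 1],
                  [3, 1, 0]])

    print(np.linalg.matrix_rank(A))        # 2
    print(np.linalg.matrix_rank(A @ B))    # never exceeds rank(A)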
(10) (a) T
(b) F
(c) F
Section 8.7
(1) (a)
=
1
2
arctan(
p
3
3 )
=
12 ;
P=
(d) T
"
p
6+ 2
4
p p
2
6
4
2
p
p
(e) T
p
6
4
2
#
p
6+ 2
4
u2
v2
2
2
p
;
equation in uv-coordinates: u2 v = 2; or,
= 1;
center in uv-coordinates: (0; 0); center in xy-coordinates: (0; 0); see
Figures 19 and 20.
166
Andrilli/Hecker - Answers to Exercises
Section 8.7
Figure 19: Rotated Hyperbola
Figure 20: Original Hyperbola
(b)
=
4,
P=
"
p
p
2
2
p
2
2
2
2
p
2
2
2
#
; equation in uv-coordinates: 4u2 + 9v 2
2
8u = 32, or, (u 91) + v4 = 1; center in uv-coordinates: (1; 0); center
p
p
in xy-coordinates: ( 22 ; 22 ); see Figures 21 and 22.
167
Andrilli/Hecker - Answers to Exercises
Figure 21: Rotated Ellipse
Figure 22: Original Ellipse
168
Section 8.7
Andrilli/Hecker - Answers to Exercises
(c)
(d)
=
1
2
arctan(
p
3) =
6;
Section 8.7
P=
"
p
3
2
1
2
p
1
2
3
2
; equation in
uv-coordinates: v = 2u2 12u + 13, or (v + 5) = 2(u 3)2 ; vertex in
uv-coordinates: (3; 5); vertex in xy-coordinates: (0:0981; 5:830);
see Figures 23 and 24.
" 4
3 #
t 0:6435 radians (about 36 520 ); P =
uv-coordinates: 4u2
t
5
5
3
5
4
5
(u 2)2
9
0:6435 radians (about
0
36 52 ); P =
; equation in
(v+1)2
= 1;
4
11 2
xy-coordinates = ( 5 ; 5 );
16u + 9v 2 + 18v = 11; or,
center in uv-coordinates: (2; 1); center in
see Figures 25 and 26.
(e)
#
"
uv-coordinates: 12v = u2 + 12u; or, (v + 3) =
4
5
3
5
+
#
3
4
5
5
1
12 (u +
; equation in
6)2 ; vertex in
uv-coordinates: ( 6; 3); vertex in xy-coordinates = (
Figures 27 and 28.
33 6
5 ; 5 );
see
(f) (All answers are rounded to four signi…cant digits.) t 0:4442 ra0:9029
0:4298
dians (about 25 270 ); P =
; equation in uv0:4298
0:9029
coordinates:
u2
(1:770)2
v2
(2:050)2
= 1; center in uv-coordinates: (0; 0);
center in xy-coordinates = (0; 0); see Figures 29 and 30.
169
Andrilli/Hecker - Answers to Exercises
Figure 23: Rotated Parabola
Figure 24: Original Parabola
170
Section 8.7
Andrilli/Hecker - Answers to Exercises
Figure 25: Rotated Ellipse
Figure 26: Original Ellipse
171
Section 8.7
Andrilli/Hecker - Answers to Exercises
Figure 27: Rotated Parabola
Figure 28: Original Parabola
172
Section 8.7
Andrilli/Hecker - Answers to Exercises
Section 8.7
Figure 29: Rotated Hyperbola
Figure 30: Original Hyperbola
(2) (a) T
(b) F
(c) T
173
(d) F
Andrilli/Hecker - Answers to Exercises
Section 8.8
Section 8.8
(1) (a) (9; 1); (9; 5); (12; 1); (12; 5); (14; 3)
(b) (3; 5); (1; 9); (5; 7); (3; 10); (6; 9)
(c) ( 2; 5); (0; 9); ( 5; 7); ( 2; 10); ( 5; 10)
(d) (20; 6); (20; 14); (32; 6); (32; 14); (40; 10)
(2) See the following pages for …gures.
(a) (3; 11); (5; 9); (7; 11); (11; 7); (15; 11); see Figure 31
(b) ( 8; 2); ( 7; 5); ( 10; 6); ( 9; 11); ( 14; 13); see Figure 32
(c) (8; 1); (8; 4); (11; 4); (10; 10); (16; 11); see Figure 33
(d) (3; 18); (4; 12); (5; 18); (7; 6); (9; 18); see Figure 34
Figure 31
174
Andrilli/Hecker - Answers to Exercises
Figure 32
Figure 33
175
Section 8.8
Andrilli/Hecker - Answers to Exercises
Section 8.8
Figure 34
(3) (a) (3; 4); (3; 10); (7; 6); (9; 9); (10; 3)
(b) (4; 3); (10; 3); (6; 7); (9; 9); (3; 10)
(c) ( 2; 6); (0; 8); ( 8; 17); ( 10; 22); ( 16; 25)
(d) (6; 4); (11; 7); (11; 2); (14; 1); (10; 3)
(4) (a) (14; 9); (10; 6); (11; 11); (8; 9); (6; 8); (11; 14)
(b) ( 2; 3); ( 6; 3); ( 3; 6); ( 6; 6); ( 8; 6); ( 2; 9)
(c) (2; 4); (2; 6); (8; 5); (8; 6) (twice), (14; 4)
(5) (a) (4; 7); (6; 9); (10; 8); (9; 2); (11; 4)
(b) (0; 5); (1; 7); (0; 11); ( 5; 8); ( 4; 10)
(c) (7; 3); (7; 12); (9; 18); (10; 3); (10; 12)
(6) (a) (2; 20); (3; 17); (5; 14); (6; 19); (6; 16); (9; 14); see Figure 35
(b) (17; 17); (17; 14); (18; 10); (20; 15); (19; 12); (22; 10); see Figure 36
(c) (1; 18); ( 3; 13); ( 6; 8); ( 2; 17); ( 5; 12); ( 6; 10); see Figure 37
(d) ( 19; 6); ( 16; 7); ( 15; 9); ( 27; 7); ( 21; 8); ( 27; 10); see Figure 38
176
Andrilli/Hecker - Answers to Exercises
Figure 35
Figure 36
177
Section 8.8
Andrilli/Hecker - Answers to Exercises
Section 8.8
Figure 37
Figure 38
(7) (a) Show
2
1
4 0
0
(b) Show
2
1
4 0
0
that the given matrix is
32
0 r
cos
sin
1 s 5 4 sin
cos
0 1
0
0
equal to the product
32
3
0
1 0
r
0 54 0 1
s 5:
1
0 0
1
that the given matrix is equal to the
2
3
1 m2
2m
0 0
1
m2 1
1 b 5 1+m2 4 2m
0
0
0 1
product
32
0
1
0 54 0
1
0
(c) Show that the given matrix is equal to the product
178
0
1
0
3
0
b 5:
1
Andrilli/Hecker
2
1
4 0
0
(8) Show
2
1
4 0
0
- Answers
32
0 r
1 s 54
0 1
that the given
32
0 k
1
1 0 54 0
0 1
0
to Exercises
32
c 0 0
1
0 d 0 54 0
0 0 1
0
Section 8.8
0
1
0
matrix is equal to
32
0 0
1 0
1 0 54 0 1
0 1
0 0
(9) (a) (4; 7); (6; 9); (10; 8); (9; 2); (11; 4)
3
r
s 5:
1
the product
3
k
0 5:
1
(b) (0; 5); (1; 7); (0; 11); ( 5; 8); ( 4; 10)
(c) (7; 3); (7; 12); (9; 18); (10; 3); (10; 12)
(10) (a) Multiplying the two matrices yields I3 .
(b) The …rst matrix represents a translation along the vector [a; b], while
the second is the inverse translation along the vector [ a; b].
(c) See the answer for Exercise 7(a), and substitute a for r, and b for s,
and then use part (a) of this Exercise.
(11) (a) Show that the matrices in parts (a) and (c) of Exercise 7 commute with each other when c = d. Both products equal
[c cos θ, −c sin θ, r(1 − c cos θ) + sc sin θ; c sin θ, c cos θ, s(1 − c cos θ) − rc sin θ; 0, 0, 1].
(b) Consider the reflection about the y-axis, whose matrix is given in Section 8.8 of the textbook as A = [−1, 0, 0; 0, 1, 0; 0, 0, 1], and a counterclockwise rotation of 90° about the origin, whose matrix is B = [0, −1, 0; 1, 0, 0; 0, 0, 1]. Then AB = [0, 1, 0; 1, 0, 0; 0, 0, 1], but BA = [0, −1, 0; −1, 0, 0; 0, 0, 1], and so A and B do not commute. In particular, starting from the point (1, 0), performing the rotation and then the reflection yields (0, 1). However, performing the reflection followed by the rotation produces (0, −1).
(c) Consider a reflection about y = x and scaling by a factor of c in the x-direction and d in the y-direction, with c ≠ d. Now, (reflection)∘(scaling)([1; 0]) = (reflection)([c; 0]) = [0; c]. But, (scaling)∘(reflection)([1; 0]) = (scaling)([0; 1]) = [0; d]. Hence, (reflection)∘(scaling) ≠ (scaling)∘(reflection).
179
Andrilli/Hecker - Answers to Exercises
Section 8.8
(12) (a) The matrices are [cos θ, −sin θ; sin θ, cos θ] and [cos θ, −sin θ, 0; sin θ, cos θ, 0; 0, 0, 1]. Verify that they are orthogonal by direct computation of AA^T.
(b) If A = [0, −1, 18; 1, 0, 6; 0, 0, 1], then AA^T = [325, 108, 18; 108, 37, 6; 18, 6, 1] ≠ I3.
(c) Let m be the slope of a nonvertical line. Then, the matrices are, respectively,
(1/(1+m^2)) [1−m^2, 2m; 2m, m^2−1]   and   [(1−m^2)/(1+m^2), 2m/(1+m^2), 0; 2m/(1+m^2), (m^2−1)/(1+m^2), 0; 0, 0, 1].
For a vertical line, the matrices are [−1, 0; 0, 1] and [−1, 0, 0; 0, 1, 0; 0, 0, 1]. All these matrices are obviously symmetric, and since a reflection is its own inverse, all matrices are their own inverses. Hence, if A is any one of these matrices, then AA^T = AA = I, so A is orthogonal.
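The computations in parts (a) and (b) are easy to reproduce numerically. A minimal Python sketch (the rotation angle below is hypothetical; the translation echoes part (b)):

    import numpy as np

    theta = 0.7                                   # hypothetical rotation angle
    c, s = np.cos(theta), np.sin(theta)

    # Rotation about the origin in homogeneous coordinates (no translation): orthogonal.
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])
    print(np.allclose(R @ R.T, np.eye(3)))        # True, as in part (a)

    # Adding a translation column destroys orthogonality, as in part (b).
    H = np.array([[c, -s, 18.0],
                  [s,  c, 6.0],
                  [0.0, 0.0, 1.0]])
    print(np.allclose(H @ H.T, np.eye(3)))        # False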
(13) (a) F
(b) F
(c) F
(d) T
2
1
(b) b1 e3t
(e) T
(f) F
Section 8.9
(1) (a) b1 et
7
3
+ b2 e
t
1
1
+ b2 e
2t
3
4
2 3
3
2
3
2
1
0
(c) b1 4 1 5 + b2 et 4 1 5 + b3 e3t 4 0 5
1
1
1
2 3
2 3
2
3
1
5
1
(d) b1 et 4 1 5 + b2 et 4 0 5 + b3 e4t 4 1 5 (There are other possible
1
2
0
answers. For example, the …rst two vectors in the sum could be
any basis for the two-dimensional eigenspace corresponding to the
eigenvalue 1.)
2
3
2 3
2 3
2 3
1
1
2
0
6
7
6
7
6
7
6
7
1
3
3
7 +b2 et 6 7 +b3 6 7 +b4 e 3t 6 1 7 (There are other
(e) b1 et 6
4 1 5
4 0 5
4 0 5
4 1 5
0
1
1
1
possible answers. For example, the …rst two vectors in the sum could
be any basis for the two-dimensional eigenspace corresponding to the
eigenvalue 1.)
2
180
Andrilli/Hecker - Answers to Exercises
(2) (a) y = b1 e2t + b2 e
3t
(c) y = b1 e2t + b2 e
2t
Section 8.9
(b) y = b1 et + b2 e
p
+ b3 e
2t
+ b4 e
p
t
+ b3 e5t
2t
(3) Using the fact that Avi = λi vi, it can be easily seen that, with F(t) = b1 e^(λ1 t) v1 + ··· + bk e^(λk t) vk, both sides of F'(t) = AF(t) yield λ1 b1 e^(λ1 t) v1 + ··· + λk bk e^(λk t) vk.
(4) (a) Suppose (v1, ..., vn) is an ordered basis for R^n consisting of eigenvectors of A corresponding to the eigenvalues λ1, ..., λn. Then, by Theorem 8.11, all solutions to F'(t) = AF(t) are of the form F(t) = b1 e^(λ1 t) v1 + ··· + bn e^(λn t) vn. Substituting t = 0 yields v = F(0) = b1 v1 + ··· + bn vn. But since (v1, ..., vn) forms a basis for R^n, b1, ..., bn are uniquely determined (by Theorem 4.9). Thus, F(t) is uniquely determined.
(b) F(t) = 2e^(5t) [1; 0; 1] − e^t [1; 2; 0] − 2e^(−t) [1; 1; 1].
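Solutions of this eigenvector form can be verified numerically against the differential equation itself. A minimal Python sketch with a hypothetical diagonalizable matrix (not the one from the exercise):

    import numpy as np

    # Hypothetical symmetric (hence diagonalizable) coefficient matrix for F'(t) = A F(t).
    A = np.array([[3.0, 1.0],
                  [1.0, 3.0]])
    v0 = np.array([1.0, 0.0])                     # initial condition F(0) = v0

    # Eigenvector expansion F(t) = sum_i b_i e^(lambda_i t) v_i, with b chosen so that F(0) = v0.
    lam, V = np.linalg.eigh(A)
    b = np.linalg.solve(V, v0)

    def F(t):
        return V @ (b * np.exp(lam * t))

    # Compare a finite-difference estimate of F'(t) with A F(t).
    t, h = 0.5, 1e-6
    print((F(t + h) - F(t)) / h)                  # approximately equal to the next line
    print(A @ F(t))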
(5) (a) The equations f1 (t) = y, f2 (t) = y 0 ; : : : ; fn (t) = y (n 1) translate into
these (n 1) equations: f10 (t) = f2 (t), f20 (t) = f3 (t); : : : ; fn0 1 (t) =
fn (t). Also, y (n) + an 1 y (n 1) +
+ a1 y 0 + a0 y = 0 translates into
fn0 (t) = a0 f1 (t) a1 f2 (t)
an 1 fn (t). These n equations taken
together are easily seen to be represented by F0 (t) = AF(t), where
A and F are as given.
(b) Base Step (n = 1): A = [ a0 ], xI1 A = [x + a0 ], and so pA (x) =
x + a0 .
Inductive Step: Assume true for k. Prove true for k + 1. For the case
k + 1,
2
3
0
1
0
0
0
6 0
0
1
0
0 7
6
7
6 ..
.
.
.
.. 7 :
.
..
..
..
..
A=6 .
. 7
6
7
4 0
0
0
0
1 5
a0
a1
a2
a3
ak
Then
(xIk+1
2
x
0
..
.
6
6
6
A) = 6
6
4 0
a0
1
x
..
.
0
1
..
.
0
0
..
.
0
a1
0
a2
0
a3
181
..
.
0
0
..
.
0
0
..
.
x
ak 1
1
x + ak
3
7
7
7
7:
7
5
Andrilli/Hecker - Answers to Exercises
Section 8.9
Using a cofactor expansion on the …rst column yields
jxIk+1
Aj = x
x
..
.
0
a1
1 0
..
..
.
.
0
0
a2 a3
..
+ ( 1)(k+1)+1 a0
0
..
.
0
..
.
x
ak 1
1
x + ak
.
1
x
..
.
0
1
..
.
0
0
The …rst determinant equals jxIk Bj, where
2
0
1
0
0
6 0
0
1
0
6
6 ..
.
.
..
.
..
..
..
B=6 .
.
6
4 0
0
0
0
a1
a2
a3
ak
..
.
0
0
..
.
x
0
0
1
1
ak
0
0
..
.
:
1
3
7
7
7
7:
7
5
Now, using the inductive hypothesis on the …rst determinant, and
Theorems 3.2 and 3.9 on the second determinant (since its corresponding matrix is lower triangular) yields pA (x) = x(xk + ak xk 1 +
+ a2 x + a1 ) + ( 1)k+2 a0 ( 1)k = xk+1 + ak xk +
+ a1 x + a0 .
(6) (a) [b2 ; b3 ; : : : ; bn ; a0 b1
a1 b2
an
1 bn ]
(b) By part (b) of Exercise 5, 0 = pA ( ) = n + an 1 n 1 +
+
a1 + a0 . Thus, n = a0 a1
an 1 n 1 . Letting x =
[1; ; 2 ; : : : ; n 1 ] and substituting x for [b1 ; : : : ; bn ] in part (a) gives
Ax = [ ; 2 ; : : : ; n 1 ; a0 a1
an 1 n 1 ]
2
n 1
n
= [ ; ;:::;
; ] = x.
(c) Let v = [c; v2 ; v3 ; : : : ; vn ]. Then v = [c ; v2 ; : : : ; vn ]. By part (a),
Av = [v2 ; v3 ; : : : ; vn ; a0 c
an 1 vn ]. Equating the …rst n 1
coordinates of Av and v yields v2 = c , v3 = v2 ; : : : ; vn = vn 1 .
Further recursive substitution gives v2 = c , v3 = c 2 ; : : : ; vn =
c n 1 . Hence v = c[1; ; : : : ; n 1 ].
(d) By parts (b) and (c), f[1; ; : : : ;
182
n 1
]g is a basis for E .
Andrilli/Hecker - Answers to Exercises
(7) (a) T
(b) T
Section 8.10
(c) T
(d) F
Section 8.10
(1) (a) Unique least-squares solution: v =
jjAz bjj = 1
(b) Unique least-squares solution: v =
6:813; jjAz bjj = 7
23 11
30 ; 10
; jjAv bjj =
229 1
75 ; 3
; jjAv
p
6
6
bjj =
t 0:408;
p
59 3
15
t
(c) In…nite number of least-squares solutions, all of the form
23
v = 7c + 17
3 ; 13c
3 ; c ; two particular least-squares solutions
17
23
are 3 ; 3 ; 0 and 8; 12; 31 ; with v as either of these vectors,
p
jjAv bjj = 36 t 0:816; jjAz bjj = 3
(d) In…nite number of least-squares solutions, all of the form
19
v = 21 c + 45 d 1; 32 c 11
4 d + 3 ; c; d ; two particular least-squares
1 29
solutions are
0 and 14 ; 43
v as either of these
2 ; 6 ; 1;
12 ; 0; 1 ; with
p
p
6
vectors, jjAv bjj = 3 t 0:816; jjAz bjj = 5 t 2:236
(2) (a) In…nite number of least-squares solutions, all of the form
4
19 8
5
5
v=
c 19
7 c + 42 ; 7 c
21 ; c ; with 24
24 :
(b) In…nite number of least-squares solutions, all of the form
19
1
19
19
6
c 12
:
v=
5 c + 10 ;
5 c + 35 ; c ; with 0
(3) (a) v t [1:58; 0:58];
( 0 I C)v t [0:025; 0:015] (Actual nearest eigenp
value is 2 + 3 3:732)
(b) v t [0:46; 0:36; p
0:90]; ( 0 I C)v t [0:03; 0:04; 0:07] (Actual nearest eigenvalue is 2 1:414)
(c) v t [1:45; 0:05; 0:40];
( 0 I C)v t [ 0:09; 0:06; 0:15] (Actual
p
3
nearest eigenvalue is 12 2:289)
(4) If AX = b is consistent, then b is in the subspace W of Theorem 8.12.
Thus, projW b = b: Finally, the fact that statement (3) of Theorem 8.12
is equivalent to statement (1) of Theorem 8.12 shows that the actual solutions are the same as the least-squares solutions in this case.
(5) Let b = Av1 . Then AT Av1 = AT b. Also, AT Av2 = AT Av1 = AT b.
Hence v1 and v2 both satisfy part (3) of Theorem 8.12. Therefore, by part
(1) of Theorem 8.12, if W = fAx j x 2 Rn g, then Av1 = projW b = Av2 .
(6) Let b = [b1; ...; bn], let A be as in Theorem 8.2, and let W = {Ax | x ∈ R^n}. Since projW b ∈ W exists, there must be some v ∈ R^n such that Av = projW b. Hence, v satisfies part (1) of Theorem 8.12, so (A^T A)v = A^T b by part (3) of Theorem 8.12. This shows that the system (A^T A)X = A^T b is consistent, which proves part (2) of Theorem 8.2.
Next, let f(x) = d0 + d1 x + ··· + dt x^t and z = [d0; d1; ...; dt]. A short computation shows that ||Az − b||^2 = Sf, the sum of the squares of the vertical distances illustrated in Figure 8.9, just before the definition of a least-squares polynomial in Section 8.3 of the textbook. Hence, minimizing ||Az − b|| over all possible (t + 1)-vectors z gives the coefficients of a degree t least-squares polynomial for the given points (a1, b1), ..., (an, bn). However, parts (2) and (3) of Theorem 8.12 show that such a minimal solution is found by solving (A^T A)v = A^T b, thus proving part (1) of Theorem 8.2.
Finally, when A^T A is row equivalent to I_(t+1), the uniqueness condition holds by Theorems 2.14 and 2.15, which proves part (3) of Theorem 8.2.
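For a quick numerical check of least-squares solutions like those in Exercise 1, the normal equations of Theorem 8.12 can be solved directly. A minimal Python sketch with a hypothetical inconsistent system (not one from the textbook):

    import numpy as np

    # Hypothetical inconsistent system Ax = b (more equations than unknowns).
    A = np.array([[1.0, 0.0],
                  [1.0, 1.0],
                  [1.0, 2.0]])
    b = np.array([1.0, 0.0, 2.0])

    # A least-squares solution satisfies the normal equations (A^T A) v = A^T b.
    v = np.linalg.solve(A.T @ A, A.T @ b)
    print(v)
    print(np.linalg.lstsq(A, b, rcond=None)[0])   # same vector from the library routine

    # The residual Av - b is orthogonal to the column space of A.
    print(A.T @ (A @ v - b))                      # approximately [0, 0]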
(7) (a) F
(b) T
(c) T
(d) T
(e) T
Section 8.11
(1) (a) C =
8
0
(b) C =
7
0
12
9
17
11
2
5
(c) C = 4 0
0
B=
,A=
4
2
0
43
24
(2) (a) A =
,A=
24
57
8
6
"
6
9
7
17
2
17
2
11
2
3
3
6
5 5, A = 6
4
0
, P = 15
3
4
#
5
2
3
2
2
2
5
2
3
2
5
2
0
4
3
,D=
3
7
7
5
75
0
0 25
,
3; 4]; 15 [4; 3] , [x]B = [ 7; 4], Q(x) = 4075
3
2
3
5 16 40
1
8 4
16 37 16 5 , P = 19 4 8
1 4 5,
40 16 49
4
4 7
3
27
0 0
0
27
0 5, B = 19 [1; 8; 4]; 19 [ 8; 1; 4]; 91 [4; 4; 7] ,
0
0 81
1
5[
2
(b) A = 4
2
D=4
[x]B = [3; 6; 3], Q(x) = 0
2
2
3
18
48
30
2
68
18 5 , P = 17 4 3
(c) A = 4 48
6
30
18
1
2
3
0
0
0
0 5, B = 71 [2; 3; 6]; 71 [
D = 4 0 49
0
0
98
[x]B = [5; 0; 6], Q(x) =
3528
184
6
2
3
3
3
6 5,
2
6; 2; 3]; 71 [3; 6; 2] ,
Andrilli/Hecker - Answers to
2
1
0
6
0
5
(d) A = 6
4 12 60
12 60
2
289
0
6 0 1445
D=6
4 0
0
0
0
B=
1
17 [
1
17 [1; 0;
Exercises
12
60
864
576
0
0
0
0
3
12
60 7
7, P =
576 5
864
3
0
0 7
7,
0 5
0
2
6
1 6
17 4
1
0
12
12
0
1
12
12
Section 8.11
3
12
12
12
12 7
7,
1
0 5
0
1
1
1
12; 12]; 17
[0; 1; 12; 12]; 17
[12; 12; 1; 0];
12; 12; 0; 1] , [x]B = [1; 3; 3; 10], Q(x) = 13294
(3) First, aii = ei^T A ei = ei^T B ei = bii. Also, for i ≠ j, let x = ei + ej. Then aii + aij + aji + ajj = x^T A x = x^T B x = bii + bij + bji + bjj. Using aii = bii and ajj = bjj, we get aij + aji = bij + bji. Hence, since A and B are symmetric, 2aij = 2bij, and so aij = bij.
(4) Yes; if Q(x) = Σ aij xi xj, 1 ≤ i ≤ j ≤ n, then x^T C1 x and C1 upper triangular imply that the (i, j) entry for C1 is zero if i > j and aij if i ≤ j. A similar argument describes C2. Thus, C1 = C2.
(5) (a) See the Quadratic Form Method in Section 8.11 of the textbook. Consider the result Q(x) = d11 y1^2 + ··· + dnn yn^2 in Step 3, where d11, ..., dnn are the eigenvalues of A, and [x]B = [y1, ..., yn]. Clearly, if all of d11, ..., dnn are positive and x ≠ 0, then Q(x) must be positive. (Obviously, Q(0) = 0.)
Conversely, if Q(x) is positive for all x ≠ 0, then choose x so that [x]B = ei to yield Q(x) = dii, thus proving that the eigenvalue dii is positive.
(b) Replace "positive" with "nonnegative" throughout the solution to part (a).
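The eigenvalue test in Exercise 5 is easy to apply numerically. A minimal Python sketch with a hypothetical symmetric matrix:

    import numpy as np

    # Hypothetical symmetric matrix of a quadratic form Q(x) = x^T A x.
    A = np.array([[2.0, -1.0],
                  [-1.0, 2.0]])

    # By Exercise 5(a), Q is positive definite exactly when all eigenvalues of A are positive.
    eigenvalues = np.linalg.eigvalsh(A)
    print(eigenvalues)                    # [1., 3.] -- all positive
    print(bool(np.all(eigenvalues > 0)))  # True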
(6) (a) T
(b) F
(c) F
185
(d) T
(e) T
Andrilli/Hecker - Answers to Exercises
Section 9.1
Chapter 9
Section 9.1
(1) (a) Solution to first system: (602; 1500); solution to second system: (302; 750). The systems are ill-conditioned, because a very small change in the coefficient of y leads to a very large change in the solution.
(b) Solution to first system: (400; 800; 2000); solution to second system: (336; 666 2/3; 1600). Systems are ill-conditioned.
(2) Answers to this problem may differ significantly from the following, depending on where the rounding is performed in the algorithm:
(a) Without partial pivoting: (3210; 0.765); with partial pivoting: (3230; 0.767). Actual solution is (3214; 0.765).
(b) Without partial pivoting: (3040; 10.7; 5.21); with partial pivoting: (3010; 10.1; 5.03). (Actual solution is (3000; 10; 5).)
(c) Without partial pivoting: (2.26; 1.01; 2.11); with partial pivoting: (277; 327; 595). Actual solution is (267; 315; 573).
(3) Answers to this problem may differ significantly from the following, depending on where the rounding is performed in the algorithm:
(a) Without partial pivoting: (3214; 0.7651); with partial pivoting: (3213; 0.7648). Actual solution is (3214; 0.765).
(b) Without partial pivoting: (3001; 10; 5); with partial pivoting: (3000; 9.995; 4.999). (Actual solution is (3000; 10; 5).)
(c) Without partial pivoting: (−2.380; 8.801; 16.30); with partial pivoting: (267.8; 315.9; 574.6). Actual solution is (267; 315; 573).
(4) (a)
Initial Values
After 1 Step
After 2 Steps
After 3 Steps
After 4 Steps
After 5 Steps
After 6 Steps
After 7 Steps
After 8 Steps
After 9 Steps
x1
0:000
5:200
6:400
6:846
6:949
6:987
6:996
6:999
7:000
7:000
x2
0:000
6:000
8:229
8:743
8:934
8:978
8:994
8:998
9:000
9:000
186
Andrilli/Hecker - Answers to Exercises
(b)
Initial Values
After 1 Step
After 2 Steps
After 3 Steps
After 4 Steps
After 5 Steps
After 6 Steps
After 7 Steps
(c)
Initial Values
After 1 Step
After 2 Steps
After 3 Steps
After 4 Steps
After 5 Steps
After 6 Steps
After 7 Steps
After 8 Steps
After 9 Steps
After 10 Steps
(d)
x1
0:000
0:778
1:042
0:995
1:003
1:000
1:000
1:000
Section 9.1
x2
0:000
4:375
4:819
4:994
4:995
5:000
5:000
5:000
x1
0:000
8:857
10:738
11:688
11:875
11:969
11:988
11:997
11:999
12:000
12:000
x2
0:000
4:500
3:746
4:050
3:975
4:005
3:998
4:001
4:000
4:000
4:000
Initial Values
After 1 Step
After 2 Steps
After 3 Steps
After 4 Steps
After 5 Steps
After 6 Steps
After 7 Steps
After 8 Steps
x1
0:000
0:900
1:874
1:960
1:993
1:997
2:000
2:000
2:000
x2
0:000
1:667
0:972
0:999
0:998
1:001
1:000
1:000
1:000
Initial Values
After 1 Step
After 2 Steps
After 3 Steps
After 4 Steps
After 5 Steps
After 6 Steps
x1
0:000
5:200
6:846
6:987
6:999
7:000
7:000
x2
0:000
8:229
8:934
8:994
9:000
9:000
9:000
(5) (a)
187
x3
0:000
2:000
2:866
2:971
2:998
2:999
3:000
3:000
x3
0:000
4:333
8:036
8:537
8:904
8:954
8:991
8:996
8:999
9:000
9:000
x3
0:000
3:000
3:792
3:966
3:989
3:998
3:999
4:000
4:000
x4
0:000
2:077
2:044
2:004
1:999
2:000
2:000
2:000
2:000
Andrilli/Hecker - Answers to Exercises
(b)
Initial Values
After 1 Step
After 2 Steps
After 3 Steps
After 4 Steps
After 5 Steps
(c)
Initial Values
After 1 Step
After 2 Steps
After 3 Steps
After 4 Steps
After 5 Steps
After 6 Steps
After 7 Steps
(d)
Initial Values
After 1 Step
After 2 Steps
After 3 Steps
After 4 Steps
After 5 Steps
x1
0:000
0:778
0:963
0:998
1:000
1:000
x1
0:000
8:857
11:515
11:931
11:990
11:998
12:000
12:000
x1
0:000
0:900
1:980
2:006
2:000
2:000
Section 9.1
x2
0:000
4:569
4:978
4:999
5:000
5:000
x2
0:000
3:024
3:879
3:981
3:997
4:000
4:000
4:000
x2
0:000
1:767
1:050
1:000
1:000
1:000
x3
0:000
2:901
2:993
3:000
3:000
3:000
x3
0:000
7:790
8:818
8:974
8:996
8:999
9:000
9:000
x3
0:000
3:510
4:003
4:002
4:000
4:000
x4
0:000
2:012
2:002
2:000
2:000
2:000
(6) Strictly diagonally dominant: (a), (c)
(7) (a) Put the third equation …rst, and move the other two down to get the
following:
x1
x2
x3
Initial Values 0:000
0:000 0:000
After 1 Step
3:125
0:481 1:461
After 2 Steps 2:517
0:500 1:499
After 3 Steps 2:500
0:500 1:500
After 4 Steps 2:500
0:500 1:500
(b) Put the …rst equation last (and leave the other two alone) to get:
x1
x2
x3
Initial Values 0:000
0:000 0:000
After 1 Step
3:700
6:856 4:965
After 2 Steps 3:889
7:980 5:045
After 3 Steps 3:993
8:009 5:004
After 4 Steps 4:000
8:001 5:000
After 5 Steps 4:000
8:000 5:000
After 6 Steps 4:000
8:000 5:000
188
Andrilli/Hecker - Answers to Exercises
Section 9.1
(c) Put the second equation …rst, the fourth equation second, the …rst
equation third, and the third equation fourth to get the following:
x1
x2
x3
x4
Initial Values
0:000
0:000
0:000
0:000
After 1 Step
5:444
5:379
9:226
10:447
After 2 Steps
8:826
8:435 10:808
11:698
After 3 Steps
9:820
8:920 10:961
11:954
After 4 Steps
9:973
8:986 10:994
11:993
After 5 Steps
9:995
8:998 10:999
11:999
After 6 Steps
9:999
9:000 11:000
12:000
After 7 Steps 10:000
9:000 11:000
12:000
After 8 Steps 10:000
9:000 11:000
12:000
(8) The Jacobi Method yields the following:
Initial Values
After 1 Step
After 2 Steps
After 3 Steps
After 4 Steps
After 5 Steps
After 6 Steps
x1
0:0
16:0
37:0
224:0
77:0
3056:0
12235:0
x2
0:0
13:0
59:0
61:0
907:0
2515:0
19035:0
x3
0:0
12:0
87:0
212:0
1495:0
356:0
23895:0
The Gauss-Seidel Method yields the following:
Initial Values
After 1 Step
After 2 Steps
After 3 Steps
After 4 Steps
x1
0:0
16:0
248:0
5656:0
124648:0
x2
0:0
83:0
1841:0
41053:0
909141:0
x3
0:0
183:0
3565:0
80633:0
1781665:0
The actual solution is (2; 3; 1).
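For comparison with the iteration tables above, both methods are short to code. A minimal Python sketch (the strictly diagonally dominant system below is hypothetical, so both methods converge):

    import numpy as np

    # Hypothetical strictly diagonally dominant system Ax = b.
    A = np.array([[5.0, 1.0, 1.0],
                  [1.0, 6.0, 2.0],
                  [1.0, 1.0, 4.0]])
    b = np.array([10.0, 15.0, 12.0])

    def jacobi(A, b, steps=25):
        x = np.zeros_like(b)
        D = np.diag(A)
        for _ in range(steps):
            x = (b - (A @ x - D * x)) / D      # every coordinate uses only the previous iterate
        return x

    def gauss_seidel(A, b, steps=25):
        x = np.zeros_like(b)
        n = len(b)
        for _ in range(steps):
            for i in range(n):                 # new values are used as soon as they are available
                x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
        return x

    print(jacobi(A, b), gauss_seidel(A, b), np.linalg.solve(A, b))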
(9) (a)
Initial Values
After 1 Step
After 2 Steps
After 3 Steps
After 4 Steps
After 5 Steps
After 6 Steps
After 7 Steps
After 8 Steps
After 9 Steps
After 10 Steps
x1
0:000
3:500
1:563
1:039
0:954
0:966
0:985
0:995
0:999
1:000
1:000
x2
0:000
2:250
2:406
2:223
2:089
2:028
2:006
2:000
1:999
2:000
2:000
189
x3
0:000
1:625
2:516
2:869
2:979
3:003
3:005
3:003
3:001
3:000
3:000
Andrilli/Hecker - Answers to Exercises
(b)
x1
0:000
3:500
0:750
3:125
0:938
3:031
0:984
3:008
0:996
Initial Values
After 1 Step
After 2 Steps
After 3 Steps
After 4 Steps
After 5 Steps
After 6 Steps
After 7 Steps
After 8 Steps
(10) (a) T
(b) F
Section 9.2
x2
0:000
4:000
0:000
4:000
0:000
4:000
0:000
4:000
0:000
x3
0:000
4:500
0:750
4:875
0:938
4:969
0:985
4:992
0:996
(c) F
(d) T
(e) F
(f) F
Section 9.2
1
3
(1) (a) LDU =
1
(b) LDU =
2
(c) LDU = 4
2
6
(d) LDU = 6
4
(e) LDU =
2
1
6
4
6
3
6
6
6
2
4
2
3
(f) LDU
=
2
1
6 2
6
4 3
1
0
1
3
2
0
1
2
0
0
1
1
2
0
5
1
0
3
0
0
2
32
1
0
1
0
0 54 0
0
1
3
2
0 0
7 2
4
1 0 7
5 0
0
3
1
2
1
2
2
0
1
4
1
5
2
1
2
0
0
1
0
3
2
1
2
0
0
32
76
6
0 7
76
76
6
0 7
54
1
32
0 0
6
0 0 7
76
5
1 0 4
2 1
0
2
0
2
1
1
3
1
32
1
0
0 54 0
3
0
32
1
0 0
4 0 54 0
0
0 3
3
0
0
0
2
3
0
0
0
1
2
0
0
0
3
0
0
0
0 0
2 0
0 1
0 0
0
32
3
1
0
3
2
5 5
1
1
1
3
1
3
1
3
1
5
2
11
2
0
1
0
0
76
6
0 7
76 0
76
6
0 7
54 0
1
3
2
4 5
1
4
1
0
0
32
0
1 4
6
0 7
76 0 1
0 54 0 0
3
0 0
2
0
1
0
3
7
7
7
7
3 7
5
1
3
3
1 7
7
5 5
1
(2) (a) With the given values of L, D, and U, LDU = [x, xz; wx, wxz + y]. The first row of LDU cannot equal [0; 1], since this would give x = 0, forcing xz = 0 ≠ 1.
(b) The matrix [0, 1; 1, 0] cannot be reduced to row echelon form using only type (I) and lower type (II) row operations.
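The LDU factorizations in Exercise 1 can be reproduced with a short routine that records the multipliers of the lower type (II) row operations. A minimal Python sketch (the sample matrix is hypothetical and is assumed to need no row swaps):

    import numpy as np

    def ldu(A):
        # Doolittle-style LDU factorization (no row swaps), assuming nonzero pivots.
        n = A.shape[0]
        L = np.eye(n)
        U = A.astype(float)
        for j in range(n):
            for i in range(j + 1, n):
                L[i, j] = U[i, j] / U[j, j]       # multiplier of the lower type (II) operation
                U[i, :] -= L[i, j] * U[j, :]
        D = np.diag(np.diag(U))
        U_unit = U / np.diag(U)[:, None]          # scale rows of U so its diagonal is all 1s
        return L, D, U_unit

    A = np.array([[2.0, 4.0],
                  [6.0, 7.0]])                    # hypothetical matrix with nonzero pivots
    L, D, U = ldu(A)
    print(L @ D @ U)                              # reproduces A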
(3) (a) KU =
2
1
2
0
3
2
(b) KU = 4 2
1
2
1
0
32
5
1
0 0
1
1 0 54 0
3 3
0
32
1 0
0
1
0 54 0
(c) KU = 4 4 3
2 5
2
0
2
3
3
0 0
0
6 1
2 0
0 7
7
(d) KU = 6
4 5
1 4
0 5
1
3 0
3
solution = f(2; 2; 1; 3)g.
(4) (a) F
; solution = f(4; 1)g.
3
2 5
1 3 5; solution = f(5; 1; 2)g.
0 1
3
3
2
1
5 5; solution = f(2; 3; 1)g.
0
1
2
3
1
5
2
2
6 0
1
3
0 7
6
7;
4 0
0
1
2 5
0
0
0
1
(b) T
(c) F
(d) F
Section 9.3
(1) (a) After nine iterations, eigenvector = [0.60; 0.80] and eigenvalue = 50.
(b) After five iterations, eigenvector = [0.91; 0.42] and eigenvalue = 5.31.
(c) After seven iterations, eigenvector = [0.41; 0.41; 0.82] and eigenvalue = 3.0.
(d) After five iterations, eigenvector = [0.58; 0.58; 0.00; 0.58] and eigenvalue = 6.00.
(e) After fifteen iterations, eigenvector = [0.346; 0.852; 0.185; 0.346] and eigenvalue = 5.405.
(f) After six iterations, eigenvector = [0.4455; 0.5649; 0.6842; 0.1193] and eigenvalue = 5.7323.
(2) The Power Method fails to converge, even after many iterations, for the matrices in both parts. The matrix in part (a) is not diagonalizable, with 1 as its only eigenvalue and dim(E1) = 1. The matrix in part (b) has eigenvalues 1, 3, and −3, none of which are strictly dominant.
(3) (a) ui is derived from u_(i−1) as ui = ci A u_(i−1), where ci is a normalizing constant. A proof by induction shows that ui = ki A^i u0 for some nonzero constant ki. If u0 = a0 v1 + b0 v2, then ui = ki a0 A^i v1 + ki b0 A^i v2 = ki a0 λ1^i v1 + ki b0 λ2^i v2. Hence ai = ki a0 λ1^i and bi = ki b0 λ2^i. Thus,
|ai| / |bi| = |ki a0 λ1^i| / |ki b0 λ2^i| = |λ1/λ2|^i · |a0| / |b0|.
Therefore, the ratio |ai| / |bi| → ∞ as a geometric sequence.
(b) Let λ1, ..., λn be the eigenvalues of A with |λ1| > |λj| for 2 ≤ j ≤ n. Let {v1, ..., vn} be as given in the exercise. Suppose the initial vector in the Power Method is u0 = a01 v1 + ··· + a0n vn and the ith iteration yields ui = ai1 v1 + ··· + ain vn. As in part (a), a proof by induction shows that ui = ki A^i u0 for some nonzero constant ki. Therefore, ui = ki a01 A^i v1 + ki a02 A^i v2 + ··· + ki a0n A^i vn = ki a01 λ1^i v1 + ki a02 λ2^i v2 + ··· + ki a0n λn^i vn. Hence, aij = ki a0j λj^i. Thus, for 2 ≤ j ≤ n, λj ≠ 0, and a0j ≠ 0, we have
|ai1| / |aij| = |ki a01 λ1^i| / |ki a0j λj^i| = |λ1/λj|^i · |a01| / |a0j|.
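The behavior analyzed in Exercise 3 can also be watched directly. A minimal Power Method sketch in Python for a hypothetical matrix with a strictly dominant eigenvalue:

    import numpy as np

    # Hypothetical matrix with eigenvalues 5 and 2, so 5 is strictly dominant.
    A = np.array([[4.0, 1.0],
                  [2.0, 3.0]])

    u = np.array([1.0, 0.0])               # initial vector
    for _ in range(30):
        u = A @ u
        u = u / np.linalg.norm(u)          # normalize each iterate

    eigenvalue = u @ (A @ u)               # Rayleigh quotient of the unit vector u
    print(u, eigenvalue)                   # u approaches [1, 1]/sqrt(2); eigenvalue approaches 5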
(4) (a) F
(b) T
Section 9.4
(1) (a) Q =
1
3
2
2
2
1
4
2
6
1 4
7
(b) Q = 11
6
2 p
6
(c) Q = 6
4
2
6
6
p
6
3
p
6
6
2
6
2
(d) Q = 13 6
4 0
1
2
14
6
70
1 6
(e) Q = 105
4 77
0
2
3
1 4
4
(2) (a) Q = 13
12
1
2
2
2
6
9
p
3
3
p
3
3
p
3
3
2
2
1
0
i
1
i
j
i
1
=
j
ja01 j
:
ja0j j
(c) T
3
2
2
3 6
1 5; R = 4 0 6
2
0 0
3
2
9
11 22
6 5; R = 4 0 11
2
0
0
p 3
2 p
2
6
2
7
6
6
0 7
5; R = 4 0
p
2
2
0
3
(d) F
3
3
9 5
3
3
11
22 5
11
p
3 6
p
2 3
0
3
p
2 6
3
p
10 3
3
p
2
2
0
6
3
9
1 7
7; R = 4 0
5
9
12
2 5
0
0 15
2
2
3
105 105
99
2
32
6 0 210
0
70
35 7
7; R = 6
4 0
18
64
26 5
0
0
0
30
45
90
3
4
13 26
x
1
12 5; R =
;
= 169
0 13
y
3
5:562
2:142
192
3
7
7
5
3
105 210
105 315 7
7
105 420 5
0 210
940
362
Andrilli/Hecker - Answers to
2
2
2
6
1
0
(b) Q = 13 6
4 0
1
2
2
2
3
2:986
4 0:208 5
4:306
2
1 8
6
6 4 0
6
(c) Q = 19 6
6 0 4
4
2
8 1
3 2
Exercises
3
2
1
3
2 7
7; R = 4 0
2 5
0
0
p
2 2
p
7
2 2
p
9
2 2
p
2 2
Section 9.4
9
6
0
3 2
3
6
x
9 5; 4 y 5 =
12
z
3
2
7
9
7
7
7; R = 4 0
7
0
5
3
0:565
1 4
4 0:602 5
108
0:611
p
2 1
3
2
2
3
3
19 19
6 2
7
p
4
1
6
7
2
6 9
9
19 19 7
9
6 4
7
p
2
3
7; R = 4 0
(d) Q = 6
19
6 9
7
9
19
6 4
7
p
0
4
1
6
7
19
9
19
4 9
5
p
1
2
2
3
3
19 19
3
3 2
2
2:893
2968
1 4
6651 5 4 6:482 5
1026
3:184
3267
2
3
43
5 4
3 5
72
62
9
18
0
3 2
3
9
x
5 4 y 5 =
p9 ;
z
18 2
18
9
0
3 2
3
18
x
5 4 y 5 =
p 9 ;
z
2 19
61
65 5
66
(3) We will show that the entries of Q and R are uniquely determined by the given requirements. We will proceed column by column, using a proof by induction on the column number m.
Base Step: m = 1. [1st column of A] = Q [1st column of R] = [1st column of Q](r11), because R is upper triangular. But, since [1st column of Q] is a unit vector and r11 is positive, r11 = ||[1st column of A]||, and [1st column of Q] = (1/r11)[1st column of A]. Hence, the first column in each of Q and R is uniquely determined.
Inductive Step: Assume m > 1 and that columns 1, ..., (m − 1) of both Q and R are uniquely determined. We will show that the mth column in each of Q and R is uniquely determined. Now, [mth column of A] = Q [mth column of R] = Σ_{i=1}^{m} [ith column of Q](rim). Let v = Σ_{i=1}^{m−1} [ith column of Q](rim). By the Inductive Hypothesis, v is uniquely determined.
Next, [mth column of Q](rmm) = [mth column of A] − v. But since [mth column of Q] is a unit vector and rmm is positive, rmm = ||[mth column of A] − v||, and [mth column of Q] = (1/rmm)([mth column of A] − v). Hence, the mth column in each of Q and R is uniquely determined.
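The uniqueness argument above also suggests how to compute Q and R: apply the Gram-Schmidt construction column by column. A minimal Python sketch (the sample matrix is hypothetical and assumed to have linearly independent columns):

    import numpy as np

    def gram_schmidt_qr(A):
        # Classical Gram-Schmidt QR with a positive diagonal in R, as required above.
        m, n = A.shape
        Q = np.zeros((m, n))
        R = np.zeros((n, n))
        for j in range(n):
            v = A[:, j].astype(float)
            for i in range(j):
                R[i, j] = Q[:, i] @ A[:, j]       # component of column j along q_i
                v -= R[i, j] * Q[:, i]
            R[j, j] = np.linalg.norm(v)
            Q[:, j] = v / R[j, j]
        return Q, R

    A = np.array([[1.0, 1.0],
                  [1.0, 0.0],
                  [0.0, 1.0]])                    # hypothetical full-column-rank matrix
    Q, R = gram_schmidt_qr(A)
    print(np.allclose(Q @ R, A), np.allclose(Q.T @ Q, np.eye(2)))   # True True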
(4) (a) If A is square, then A = QR, where Q is an orthogonal matrix and R has nonnegative entries along its main diagonal. Setting U = R, we see that A^T A = U^T Q^T Q U = U^T (In) U = U^T U.
(b) Suppose A = QR (where this is a QR factorization of A) and A^T A = U^T U. Then R and R^T must be nonsingular. Also, P = Q(R^T)^(−1) U^T is an orthogonal matrix, because
PP^T = Q(R^T)^(−1) U^T (Q(R^T)^(−1) U^T)^T = Q(R^T)^(−1) U^T U R^(−1) Q^T = Q(R^T)^(−1) A^T A R^(−1) Q^T = Q(R^T)^(−1) (R^T Q^T)(QR) R^(−1) Q^T = Q((R^T)^(−1) R^T)(Q^T Q)(R R^(−1)) Q^T = Q In In In Q^T = QQ^T = In.
Finally, PU = Q(R^T)^(−1) U^T U = (Q^T)^(−1) (R^T)^(−1) A^T A = (R^T Q^T)^(−1) A^T A = (A^T)^(−1) A^T A = In A = A, and so PU is a QR factorization of A, which is unique by Exercise 3. Hence, U is uniquely determined.
(5) (a) T
(b) T
(c) T
(d) F
(e) T
Section 9.5
(1) For each part, one possibility is given.
p
10
1
1
2
1
(a) U = p2
, =
1
1
0
p
3
4
15 2
(b) U = 51
, =
4
3
0
p
3 1
9 10
(c) U = p110
, =
1 3
0
3
2
1
2 2
2 1 5
V = 13 4 2
2
1 2
p
1
2
5 5
(d) U = p15
, =
2
1
0
2 2
3
18
3
6
6
(e) U = 6
4
p
13
p
7 13
7
p3
13
p12
7 13
2
7
0
13
p
7 13
6
7
7
7
7,
5
p 0
10
,V=
p0
10 2
,V=
p 0 0
3 10 0
p0 0
5 5 0
2
1
= 4 0
0
0
1
0
1
2
1
1
p1
2
1
1
,
,V=
1
5
2
4
3
4
0
0
0
5
3
4
3 5
0
3
0
0 5, V = U (Note: A
0
represents the orthogonal projection onto the plane
3x + 2y + 6z = 0.)
194
2
1
p1
5
Andrilli/Hecker - Answers to Exercises
2
3
2 5,
(f) U =
6
3
2 6
9
1 4
9 6
2 5,
(g) U = 11
6 7
6
p
2 2p
8
5 p2
15 p2
1
(h) U = 4 12 p2
6 p2
3
13
10 2
30 2
2
0
1 0 1
6
1
0 1 0
V = p12 6
4 1
0 1 0
0
1 0 1
1
7
(2) (a) A+ =
6
4 3
2
2
2
6
3
104
158
1
2250
AT Av = AT b =
(b) A+ =
1
450
8
6
70
110
3
7
7
5
122
31
1
15
6823
3874
43
24
11
48
65
170
AT Av = AT b =
2
36
1 4
12
(c) A+ = 84
31
Section 9.5
3
2 p
2 2 p0
1
=4
0
2 5, V = p12
1
0
0
2
3
3 0
0 1
= 4 0 2 5, V =
1 0
0 0
3
2
3
1
2 0 0 0
3
2 5
, = 4 0 2 0 0 5,
3
2
0 0 0 0
3
3
,v=
,v=
1
90
p
(3) (a) A = 2 10
+
p
10
p1
2
1
7
1
1
p1
2
1
1
173
264
3
12
0
24
0 5, v =
41 49
3
24
36
23
2
127
30 5
60
3
2
2 0
4
4
1 4
8 6
4
1 5, v =
(d) A+ = 54
4 6
4
7
2
3
13
AT Av = AT b = 4 24 5
2
AT Av = AT b =
1
2250
4
p1
5
p1
5
195
2
1
1
2
1
14
5618
3364
,
2
4
3
26
1 4
59 5,
54
7
2
,
3
44
18 5,
71
1
1
Andrilli/Hecker - Answers to Exercises
p
(b) A = 9 10
p
+ 3 10
3
1
p1
10
p1
10
1
3
Section 9.5
1
3
1
3
1
2
2
2
2
1
0 2 31
6
p
(c) A = 2 2 @ 71 4 3 5A p12 1 1
2
0 2
31
2
p
1 1
+ 2 @ 17 4 6 5A p12
3
0 2
31
0 2 31
2
6
1 4
1 4
9 5A 0 1 + 2 @ 11
6 5A
(d) A = 3 @ 11
6
7
2 2p 3
5 2
6 1 p 7 p1
0
1 1 0
(e) A = 2 4 2 2 5
2
p
3
10 2
2
6
+ 24
p
8
15 2
p
1
6 2
p
13
30 2
3
7
5
p1
2
1
0
0
1
0
1
(4) If A is orthogonal, then A^T A = In. Therefore, λ = 1 is the only eigenvalue for A^T A, and the eigenspace is all of R^n. Therefore, {v1, ..., vn} can be any orthonormal basis for R^n. If we take {v1, ..., vn} to be the standard basis for R^n, then the corresponding singular value decomposition for A is A In In. If, instead, we use the rows of A, which form an orthonormal basis for R^n, to represent {v1, ..., vn}, then the corresponding singular value decomposition for A is In In A.
(5) If A = UΣV^T, then A^T A = VΣ^T U^T UΣV^T = VΣ^TΣV^T. For i ≤ m, let σi be the (i, i) entry of Σ. Thus, for i ≤ m, A^T A vi = VΣ^TΣV^T vi = VΣ^TΣ ei = VΣ^T(σi ei) = V(σi^2 ei) = σi^2 vi. (Note that in this proof, "ei" has been used to represent both a vector in R^n and a vector in R^m.) If i > m, then A^T A vi = VΣ^TΣV^T vi = VΣ^TΣ ei = VΣ^T(0) = 0. This proves the claims made in the exercise.
(6) Because A is symmetric, it is orthogonally diagonalizable. Let D =
PT AP be an orthogonal diagonalization for A, with the eigenvalues of
A along the main diagonal of D. Thus, A = PDPT . By Exercise 5, the
columns of P form an orthonormal basis of eigenvectors for AT A, and
the corresponding eigenvalues are the squares of the diagonal entries in
D. Hence, the singular values of A are the square roots of these eigenval-
196
Andrilli/Hecker - Answers to Exercises
Section 9.5
ues, which are the absolute values of the diagonal entries of D. Since the
diagonal entries of D are the eigenvalues of A, this completes the proof.
(7) Express v ∈ R^n as v = a1 v1 + ··· + an vn, where {v1, ..., vn} are right singular vectors for A. Since {v1, ..., vn} is an orthonormal basis for R^n, ||v|| = √(a1^2 + ··· + an^2). Thus,
||Av||^2 = (Av) · (Av) = (a1 Av1 + ··· + an Avn) · (a1 Av1 + ··· + an Avn) = a1^2 σ1^2 + ··· + an^2 σn^2 (by parts (2) and (3) of Lemma 9.4) ≤ a1^2 σ1^2 + a2^2 σ1^2 + ··· + an^2 σ1^2 = σ1^2 (a1^2 + a2^2 + ··· + an^2) = σ1^2 ||v||^2.
Therefore, ||Av|| ≤ σ1 ||v||.
(8) In all parts, assume k represents the number of nonzero singular values.
(a) The ith column of V is the right singular vector vi , which is a unit
eigenvector corresponding to the eigenvalue i of AT A. But vi is
also a unit eigenvector corresponding to the eigenvalue i of AT A.
Thus, if any vector vi is replaced with its opposite vector, the set
fv1 ; : : : ; vn g is still an orthonormal basis for Rn consisting of eigenvectors for AT A. Since the vectors are kept in the same order, the
i do not increase, and thus fv1 ; : : : ; vn g ful…lls all the necessary
conditions to be a set of right singular vectors for A. For i k, the
left singular vector ui = 1i Avi , so when we change the sign of vi ,
we must adjust U by changing the sign of ui as well. For i > k,
changing the sign of vi has no e¤ect on U, but still produces a valid
singular value decomposition.
(b) If the eigenspace E for AT A has dimension higher than 1, then the
corresponding right singular vectors can be replaced by any orthonormal basis for E , for which there is an in…nite number of choices.
Then the associated left singular vectors u1 ; : : : ; uk must be adjusted
accordingly.
(c) Let fv1 ; : : : ; vn g be a set of right singular vectors for A. By Exercise
5, the ith diagonal entry of must be the square root of an eigenvalue
of AT A corresponding to the eigenvector vi . Since fv1 ; : : : ; vn g is an
orthonormal basis of eigenvectors, it must correspond to a complete
set of eigenvalues for AT A. Also by Exercise 5, all of the vectors vi ,
for i > m, correspond to the eigenvalue 0. Hence, all of the square
roots of the nonzero eigenvalues of AT A must lie on the diagonal of
. Thus, the values that appear on the diagonal of
are uniquely
determined. Finally, the order in which these numbers appear is
determined by the requirement for the singular value decomposition
that they appear in non-increasing order.
197
Andrilli/Hecker - Answers to Exercises
Section 9.5
(d) By de…nition, for 1 i k, ui = 1i Avi , and so these left singular
vectors are completely determined by the choices made for v1 ; : : : ; vk ,
the …rst k columns of V.
(e) Columns k + 1 through m of U are the left singular vectors
uk+1 ; : : : ; um , which can be any orthonormal basis for the orthogonal complement of the column space of A (by parts (2) and (3)
of Theorem 9.5). If m = k + 1, this orthogonal complement to the
column space is one-dimensional, and so there are only two choices
for uk+1 , which are opposites of each other, because uk+1 must be a
unit vector. If m > k + 1, the orthogonal complement to the column
space has dimension greater than 1, and there is an in…nite number
of choices for its orthonormal basis.
(9) (a) Each right singular vector vi , for 1 i k, must be an eigenvector
for AT A. Performing the Gram-Schmidt Process on the rows of A,
eliminating zero vectors, and normalizing will produce an orthonormal basis for the row space of A, but there is no guarantee that it
will consist of eigenvectors for AT A. For example, if A =
1
0
1
1
,
performing the Gram-Schmidt Process on the rows of A produces
the two vectors [1; 1] and
for AT A =
1
1
1
2
1 1
2; 2
, neither of which is an eigenvector
.
(b) The right singular vectors vk+1 ; : : : ; vn form an orthonormal basis
for the eigenspace E0 of AT A. Any orthonormal basis for E0 will
do. By part (5) of Theorem 9.5, E0 equals the kernel of the linear
transformation L whose matrix with respect to the standard bases
is A. A basis for ker(L) can be found by using the Kernel Method.
That basis can be turned into an orthonormal basis for ker(L) by
applying the Gram-Schmidt Process and normalizing.
(10) If A = U VT , as given in the exercise, then A+ = V + UT , and so A+ A
= V + UT U VT = V + VT . Note that + is an n n diagonal
matrix whose …rst k diagonal entries equal 1, with the remaining diagonal
entries equal to 0. Note also that since the columns v1 ; : : : ; vn of V are
orthonormal, VT vi = ei , for 1 i n.
(a) If 1
i
k, then A+ Avi = V
+
(b) If i > k, then A Avi = V
+
+
T
VT vi = V
V vi = V
+
+
ei = Vei = vi .
ei = V (0) = 0.
(c) By parts (4) and (5) of Theorem 9.5, fv1 ; : : : ; vk g is an orthonormal
?
basis for (ker(L)) , and fvk+1 ; : : : ; vn g is an orthonormal basis for
?
ker(L). The orthogonal projection onto (ker(L)) sends every vector
?
?
in (ker(L)) to itself, and every vector orthogonal to (ker(L)) (that
is, every vector in ker(L)) to 0. Parts (a) and (b) prove that this is
198
Andrilli/Hecker - Answers to Exercises
Section 9.5
precisely the result obtained by multiplying A+ A by the bases for
?
(ker(L)) and ker(L).
(d) Using part (a), if 1 i k, then AA+ Avi = A (A+ Avi ) = Avi .
By part (b), if i > k, AA+ Avi = A (A+ Avi ) = A(0) = 0. But,
if i > k, vi 2 ker(L), and so Avi = 0 as well. Hence, AA+ A and
A represent linear transformations from Rn to Rm that agree on a
basis for Rn . Therefore, they are equal as matrices.
(e) By part (d), AA+ A = A. If A is nonsingular, we can multiply both
sides by A 1 (on the left) to obtain A+ A = In . Multiplying by A 1
again (on the right) yields A+ = A 1 .
(11) If A = U VT , as given in the exercise, then A+ = V + UT . Note
that since the columns u1 ; : : : ; um of U are orthonormal, UT ui = ei , for
1 i m.
(a) If 1
i
k, then A+ ui = V
+
UT ui = V
+
vi . Thus, AA ui = A (A+ ui ) = A
1
(b) If i > k, then A+ ui = V + UT ui = V
AA+ ui = A (A+ ui ) = A (0) = 0.
+
=
1
i
i
+
vi =
ei = V
1
i
1
i
ei
Avi = ui .
ei = V (0) = 0. Thus,
(c) By Theorem 9.5, fu1 ; : : : ; uk g is an orthonormal basis for range(L),
?
and fuk+1 ; : : : ; um g is an orthonormal basis for (range(L)) . The
orthogonal projection onto range(L) sends every vector in range(L)
?
to itself, and every vector in (range(L)) to 0. Parts (a) and (b)
prove that this is precisely the action of multiplying by AA+ by
?
showing how it acts on bases for range(L) and (range(L)) .
(d) Using part (a), if 1
i
k, then A+ AA+ ui = A+ AA+ ui =
+
A ui . By part (b), if i > k, A+ AA+ ui = A+ AA+ ui = A+ (0)
= 0. But, if i > k, A+ ui = 0 as well. Hence, A+ AA+ and A+
represent linear transformations from Rm to Rn that agree on a basis
for Rm . Therefore, they are equal as matrices.
(e) Let L1 : (ker(L))? ! range(L) be given by L1 (v) = Av. Parts (2)
and (3) of Theorem 9.5 shows that dim((ker(L))? ) = k =
dim(range(L)). Part (a) of this exercise and part (2) of Theorem 9.5
shows that T : range(L) ! range(L) given by T (u) = L1 (A+ u) is
the identity linear transformation. Hence, L1 must be onto, and so
by Corollary 5.21, L1 is an isomorphism. Because T is the identity
transformation, the inverse of L1 has A+ as its matrix with respect to
the standard basis. This shows that A+ u is uniquely determined for
u 2 range(L). Thus, since the subspace range(L) depends only on A,
not on the singular value decomposition of A, A+ u is uniquely determined on the subspace range(L), independently from which singular
value decomposition is used for A to compute A+ . Similarly, part (3)
of Theorem 9.5 and part (b) of this exercise shows that A+ u = 0 on a
199
Andrilli/Hecker - Answers to Exercises
Section 9.5
?
?
basis for (range(L)) , and so A+ u = 0 for all vectors in (range(L))
– again, independent of which singular value decomposition of A
is used. Thus, A+ u is uniquely determined for u 2 range(L) and
?
u 2 (range(L)) , and thus, on a basis for Rm which can be chosen
from these two subspaces. Thus, A+ is uniquely determined.
(12) By part (a) of Exercise 26 in Section 1.5, trace(AA^T) equals the sum of the squares of the entries of A. If A = UΣV^T is a singular value decomposition of A, then AA^T = UΣV^T VΣ^T U^T = UΣΣ^T U^T. Using part (c) of Exercise 26 in Section 1.5, we see that trace(AA^T) = trace(UΣΣ^T U^T) = trace(ΣΣ^T U^T U) = trace(ΣΣ^T) = σ1^2 + ··· + σk^2.
(13) Let V1 be the n × n matrix whose columns are
vi ; : : : ; vj ; v1 ; : : : ; vi
in that order. Let U1 be the m
1 ; vj+1 ; : : : ; vn ;
m matrix whose columns are
ui ; : : : ; uj ; u1 ; : : : ; ui
1 ; uj+1 ; : : : ; um ;
in that order. (The notation used here assumes that i > 1; j < n, and
j < m, but the construction of V1 and U1 can be adjusted in an obvious
way if i = 1, j = n, or j = m.) Let 1 be the diagonal m n matrix
with i ; : : : ; j in the …rst j i + 1 diagonal entries and zero on the
remaining diagonal entries. We claim that U1 1 V1T is a singular value
decomposition for Aij . To show this, because U1 , 1 , and V1 are of the
proper form, it is enough to show that Aij = U1 1 V1T .
If i
l
j, then Aij vl = i ui viT +
+ j uj vjT vl = l ul , and
U1 1 V1T vl = U1 1 el i+1 = U1 l el i+1 = l ul . If l < i or j < l,
then Aij vl = i ui viT +
+ j uj vjT vl = 0. If l < i, U1 1 V1T vl =
U1 1 el+j i+1 = U1 0 = 0, and if j < l, U1 1 V1T vl = U1 1 el = U1 0
= 0. Hence, Aij vl = U1 1 V1T vl for every l, and so Aij = U1 1 V1T .
Finally, since we know a singular value decomposition for Aij , we know
that i ; : : : ; j are the singular values for Aij , and by part (1) of Theorem
9.5, rank(Aij ) = j i + 1.
2
3
40
5 15
15
5
30
6 1:8
3 1:2
1:2
0:6
1:8 7
6
7
50
5
45
45
5
60 7
(14) (a) A = 6
6
7
4 2:4
1:5 0:9
0:9
3:3
2:4 5
42:5
2:5 60
60
2:5
37:5
2
3
25 0 25
25 0
25
6 0 0 0
0
0
0 7
6
7
6
50 0
50 7
(b) A1 = 6 50 0 50
7
4 0 0 0
0 0
0 5
50 0 50
50 0
50
200
Andrilli/Hecker - Answers
2
35 0
6 0 0
6
A2 = 6
6 55 0
4 0 0
40 0
2
40
6
0
6
50
A3 = 6
6
4
0
42:5
2
40
6 1:8
6
50
A4 = 6
6
4 2:4
42:5
to Exercises
15
0
45
0
60
Section 9.5
15 0
0 0
45 0
0 0
60 0
3
35
0
55
0
40
5 15
0
0
5 45
0
0
2:5 60
15
0
45
0
60
5 15
1:8
0
5 45
2:4
0
2:5 60
15
0
45
0
60
5
0
5
0
2:5
7
7
7
7
5
5
1:8
5
2:4
2:5
30
0
60
0
37:5
3
7
7
7
7
5
30
1:8
60
2:4
37:5
3
7
7
7
7
5
(c) N(A) ≈ 153.85; N(A − A1)/N(A) ≈ 0.2223; N(A − A2)/N(A) ≈
0.1068; N(A − A3)/N(A) ≈ 0.0436; N(A − A4)/N(A) ≈ 0.0195
(d) The method described in the text for the compression of digital images
takes the matrix describing the image and alters it by zeroing
out some of the lower singular values. This exercise illustrates how
the matrices Ai that use only the first i singular values for a matrix
A get closer to approximating A as i increases. The matrix for a
digital image is, of course, much larger than the 5 × 6 matrix considered
in this exercise. Also, you can frequently get a very good
approximation of the image using a small enough number of singular
values so that less data needs to be saved. Using the outer product
form of the singular value decomposition, only the singular values
and the relevant singular vectors are needed to construct each Ai, so
not all mn entries of Ai need to be kept in storage.
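For reference, the outer product form mentioned above writes each approximation as a sum of rank-one terms, A_i = σ_1 u_1 v_1^T + σ_2 u_2 v_2^T + ... + σ_i u_i v_i^T. Rebuilding A_i therefore requires storing only about i(m + n + 1) numbers (i singular values plus i left and i right singular vectors) rather than all mn entries; for the 5 × 6 matrix of this exercise with i = 2, for example, that is 2(5 + 6 + 1) = 24 numbers instead of 30.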
(15) These are the steps to process a digital image in MATLAB, as described
in the textbook:
– Enter the command : edit
– Use the text editor to enter the following MATLAB program:
function totalmat = RevisedPic(U, S, V, k)
% Rebuilds the image matrix from the first k singular values, using the
% outer product form: the sum of S(i,i)*U(:,i)*V(:,i)'.
T = V';               % T(i,:) is the transpose of the ith right singular vector
totalmat = 0;
for i = 1:k
    totalmat = totalmat + U(:,i)*T(i,:)*S(i,i);
end
201
Andrilli/Hecker - Answers to Exercises
Section 9.5
– Save this program under the name RevisedPic.m
– Enter the command: A=imread('picturefilename');
where picturefilename is the name of the file containing the
picture, including its file extension (preferably .tif or .jpg).
– Enter the command: ndims(A)
– If the response is "3", then MATLAB is treating your picture
as if it is in color, even if it appears to be black-and-white.
If this is the case, enter the command: B=A(:,:,1);
(to use only the first color of the three used for color pictures)
followed by the command: C=double(B);
(to convert the integer format to decimal)
Otherwise, just enter: C=double(A);
– Enter the command: [U,S,V]=svd(C);
(This computes the singular value decomposition of C.)
– Enter the command: W=RevisedPic(U,S,V,100);
(The number "100" is the number of singular values you are using.
You may change this to any value you like.)
– Enter the command: R=uint8(round(W));
(This converts the decimal output to the correct integer format.)
– Enter the command: imwrite(R,'Revised100.tif','tif')
(This will write the revised picture out to a file named
"Revised100.tif". Of course, you can use any file name you
would like, or use the .jpg extension instead of .tif.)
– You may repeat the steps:
W=RevisedPic(U,S,V,k);
R=uint8(round(W));
imwrite(R,'Revisedk.tif','tif')
with different values for k to see how the picture gets more
refined as more singular values are used.
– Output can be viewed using any appropriate software you may
have on your computer for viewing such files.
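For convenience, the steps above can also be collected into a single script. This is only an illustrative sketch: the file name 'mypicture.tif' is a placeholder for your own image file, and the values of k are arbitrary.
    A = imread('mypicture.tif');       % placeholder file name; substitute your own image
    if ndims(A) == 3
        C = double(A(:,:,1));          % color image: keep only the first color plane
    else
        C = double(A);                 % grayscale image: just convert to decimal
    end
    [U, S, V] = svd(C);                % singular value decomposition of C
    for k = [10 50 100]                % arbitrary choices for the number of singular values
        W = RevisedPic(U, S, V, k);
        R = uint8(round(W));
        imwrite(R, ['Revised' num2str(k) '.tif'], 'tif');
    end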
(16) (a) F   (b) T   (c) F   (d) F   (e) F   (f) T   (g) F   (h) T   (i) F   (j) T   (k) T
202
Appendices
Appendix B
(1) (a) Not a function; undefined for x < 1
(b) Function; range = {x ∈ R | x ≥ 0}; image of 2 = 1; pre-images of 2 = {−3, 5}
(c) Not a function; two values are assigned to each x ≠ 1
(d) Function; range = {−4, −2, 0, 2, 4, ...}; image of 2 = −2; pre-images of 2 = {6, 7}
(e) Not a function (k undefined at −2)
(f) Function; range = all prime numbers; image of 2 = 2; pre-image of 2 = {0, 1, 2}
(g) Not a function; 2 is assigned to two different values (−1 and 6)
(2) (a) {−15, −10, −5, 5, 10, 15}
(b) {−9, −8, −7, −6, −5, 5, 6, 7, 8, 9}
(c) {..., −8, −6, −4, −2, 0, 2, 4, 6, 8, ...}
(3) (g ∘ f)(x) = (1/4)√(75x² − 30x + 35); (f ∘ g)(x) = (1/4)(5√(3x² + 2) − 1)
(4) (g f )
x
y
=
8
2
(f
x
y
=
12
8
4 12
g)
24
8
x
y
1)
,
x
y
(5) (a) Let f : A → B be given by f(1) = 4, f(2) = 5, f(3) = 6. Let
g : B → C be given by g(4) = g(7) = 8, g(5) = 9, g(6) = 10. Then
g ∘ f is onto, but f⁻¹({7}) is empty, so f is not onto.
(b) Use the example from part (a). Note that g ∘ f is one-to-one, but
g(4) = g(7), so g is not one-to-one.
(6) The function f is onto because, given any real number x, the n × n
diagonal matrix A with a11 = x and aii = 1 for 2 ≤ i ≤ n has determinant
x. Also, f is not one-to-one because for every x ≠ 0, any other matrix
obtained by performing a type (II) operation on A (for example,
⟨1⟩ ← ⟨2⟩ + ⟨1⟩) also has determinant x.
(7) The function f is not onto because only symmetric 3 × 3 matrices are
in the range of f (f(A) = A + A^T = (A^T + A)^T = (f(A))^T). Also, f
is not one-to-one because if A is any nonsymmetric 3 × 3 matrix, then
f(A) = A + A^T = A^T + (A^T)^T = f(A^T), but A ≠ A^T.
(8) f is not one-to-one, because f(x + 1) = f(x + 3) = 1; f is not onto, because
there is no pre-image for x^n. For n ≥ 3, the pre-image of P2 is P3.
203
(9) If f(x1) = f(x2), then 3x1³ − 5 = 3x2³ − 5 ⟹ 3x1³ = 3x2³ ⟹ x1³ = x2³ ⟹
x1 = x2. So f is one-to-one. Also, if b ∈ R, then f(((b + 5)/3)^(1/3)) = b, so f is
onto. Inverse of f = f⁻¹(x) = ((x + 5)/3)^(1/3).
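As a quick check of this inverse: f(f⁻¹(x)) = 3(((x + 5)/3)^(1/3))³ − 5 = 3 · (x + 5)/3 − 5 = x, and similarly f⁻¹(f(x)) = ((3x³ − 5 + 5)/3)^(1/3) = x.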
(10) The function f is one-to-one, because
f(A1) = f(A2)
⟹ B⁻¹A1B = B⁻¹A2B
⟹ B(B⁻¹A1B)B⁻¹ = B(B⁻¹A2B)B⁻¹
⟹ (BB⁻¹)A1(BB⁻¹) = (BB⁻¹)A2(BB⁻¹)
⟹ In A1 In = In A2 In
⟹ A1 = A2.
The function f is onto, because for any C ∈ Mnn, f(BCB⁻¹) =
B⁻¹(BCB⁻¹)B = C. Also, f⁻¹(A) = BAB⁻¹.
(11) (a) Let c ∈ C. Since g ∘ f is onto, there is some a ∈ A such that
(g ∘ f)(a) = c. Then g(f(a)) = c. Let f(a) = b. Then g(b) = c, so g
is onto. (Exercise 5(a) shows that f is not necessarily onto.)
(b) Let a1, a2 ∈ A such that f(a1) = f(a2). Then g(f(a1)) = g(f(a2)),
implying (g ∘ f)(a1) = (g ∘ f)(a2). But since g ∘ f is one-to-one,
a1 = a2. Hence, f is one-to-one. (Exercise 5(b) shows that g is not
necessarily one-to-one.)
(12) (a) F   (b) T   (c) F   (d) F   (e) F   (f) F   (g) F   (h) F
i
32i
12i
9i
(e) 9 + 19i
(f) 2 + 42i
(g) 17 19i
(h) 5 4i
(i) 9 + 2i
(j) 6
(k) 16 + 22i
p
(l) 73
(m)
1
20 i
(b)
Appendix C
(1) (a)
(b)
(c)
(d)
11
24
20
18
(2) (a)
3
20
+
3
25
4
25 i
(c)
4
17
1
17 i
p
53
(n) 5
(d)
5
34
+
3
34 i
(3) In all parts, let z_1 = a_1 + b_1 i and z_2 = a_2 + b_2 i.
(a) Part (1): \overline{z_1 + z_2} = \overline{(a_1 + b_1 i) + (a_2 + b_2 i)} = \overline{(a_1 + a_2) + (b_1 + b_2)i} =
(a_1 + a_2) − (b_1 + b_2)i = (a_1 − b_1 i) + (a_2 − b_2 i) = \overline{z_1} + \overline{z_2}.
Part (2): \overline{z_1 z_2} = \overline{(a_1 + b_1 i)(a_2 + b_2 i)}
= \overline{(a_1 a_2 − b_1 b_2) + (a_1 b_2 + a_2 b_1)i} = (a_1 a_2 − b_1 b_2) − (a_1 b_2 + a_2 b_1)i =
(a_1 a_2 − (−b_1)(−b_2)) + (a_1(−b_2) + a_2(−b_1))i = (a_1 − b_1 i)(a_2 − b_2 i) =
\overline{z_1} \overline{z_2}.
204
(b) If z_1 ≠ 0, then 1/z_1 exists. Hence, (1/z_1)(z_1 z_2) = (1/z_1)(0), implying z_2 = 0.
(c) Part (4): \overline{z_1} = z_1 ⟺ a_1 + b_1 i = a_1 − b_1 i ⟺ b_1 i = −b_1 i ⟺ 2b_1 i = 0
⟺ b_1 = 0 ⟺ z_1 is real.
Part (5): \overline{z_1} = −z_1 ⟺ a_1 + b_1 i = −(a_1 − b_1 i) ⟺ a_1 + b_1 i = −a_1 + b_1 i ⟺
a_1 = −a_1 ⟺ 2a_1 = 0 ⟺ a_1 = 0 ⟺ z_1 is pure imaginary.
(4) In all parts, let z_1 = a_1 + b_1 i and z_2 = a_2 + b_2 i. Also note that z_1 \overline{z_1} = |z_1|²,
because z_1 \overline{z_1} = (a_1 + b_1 i)(a_1 − b_1 i) = a_1² + b_1².
(a) |z_1 z_2|² = (z_1 z_2)(\overline{z_1 z_2}) (by the above) = (z_1 z_2)(\overline{z_1} \overline{z_2}) (by part (2) of
Theorem C.1) = (z_1 \overline{z_1})(z_2 \overline{z_2}) = |z_1|² |z_2|² = (|z_1| |z_2|)². Now take
square roots.
(b) 1/z_1 = \overline{z_1}/|z_1|² (by the boxed equation just before Theorem C.1 in
Appendix C of the textbook) = a_1/(a_1² + b_1²) − (b_1/(a_1² + b_1²)) i, so
|1/z_1| = √( (a_1/(a_1² + b_1²))² + (b_1/(a_1² + b_1²))² ) = √( (a_1² + b_1²)/(a_1² + b_1²)² )
= 1/√(a_1² + b_1²) = 1/|z_1|.
(c) \overline{(z_1/z_2)} = \overline{(z_1 \overline{z_2})/(z_2 \overline{z_2})} = \overline{(z_1 \overline{z_2})/|z_2|²} = \overline{((a_1 + b_1 i)(a_2 − b_2 i))/(a_2² + b_2²)}
= \overline{((a_1 a_2 + b_1 b_2) + (b_1 a_2 − a_1 b_2)i)/(a_2² + b_2²)}
= ((a_1 a_2 + b_1 b_2) + (−b_1 a_2 + a_1 b_2)i)/(a_2² + b_2²)
= ((a_1 − b_1 i)(a_2 + b_2 i))/(a_2² + b_2²) = (\overline{z_1} z_2)/(z_2 \overline{z_2}) = \overline{z_1}/\overline{z_2}.
(5) (a) F   (b) F   (c) T   (d) T   (e) F
205
Sample Tests
We include here three sample tests for each of
Chapters 1 through 7. Answer keys for all tests follow
the test collection itself. The answer keys also include
optional hints that an instructor might want to divulge for
some of the more complicated or time-consuming
problems.
Instructors who are supervising independent study
students can use these tests to measure their students’
progress at regular intervals. However, in a typical
classroom situation, we do not expect instructors to give
a test immediately after every chapter is covered, since
that would amount to six or seven tests throughout the
semester. Rather, we envision these tests as being used
as a test bank; that is, a supply of questions, or ideas for
questions, to assist instructors in composing their own
tests.
Note that most of the tests, as printed here, would
take a student much more than an hour to complete, even
with the use of software on a computer and/or calculator.
Hence, we expect instructors to choose appropriate
subsets of these tests to fulfill their classroom needs.
206
Andrilli/Hecker - Chapter Tests
Chapter 1 - Version A
Test for Chapter 1 — Version A
(1) A rower can propel a boat 5 km/hr on a calm river. If the rower rows
southeastward against a current of 2 km/hr northward, what is the net
velocity of the boat? Also, what is the net speed of the boat?
(2) Use a calculator to find the angle (to the nearest degree) between the
vectors x = [3; 2; 5] and y = [ 4; 1; 1].
(3) Prove that, for any vectors x; y 2 Rn ,
kx + yk2 + kx
yk2 = 2(kxk2 + kyk2 ):
(4) Let x = [−3, 4, −2] represent the force on an object in a three-dimensional
coordinate system, and let a = [5, −1, −3] be a given vector. Use proj_a x
to decompose x into two component forces in directions parallel and orthogonal to a. Verify that your answer is correct.
(5) State the contrapositive, converse, and inverse of the following statement:
If kx + yk =
6 kxk + kyk, then x is not parallel to y.
Which one of these is logically equivalent to the original statement?
(6) Give the negation of the following statement:
kxk = kyk or (x
y) (x + y) 6= 0.
(7) Use a proof by induction to show that if the n n matrices A1 ; : : : ; Ak
Pk
are diagonal, then i=1 Ai is diagonal.
3
2
4
3
2
1
7 5 into the sum of a symmetric matrix
(8) Decompose A = 4 6
2
4
3
S and a skew-symmetric matrix V.
(9) Given the following information about the employees of a certain TV network, calculate the total amount of salaries and perks paid out by the
network for each TV show:
TV
TV
TV
TV
Show
Show
Show
Show
1
2
3
4
Actors
2
12
6 10
6
4 6
9
Writers
4
2
3
4
Directors
3
2
3 7
7
5 5
1
Salary Perks
3
Actor
$50000 $40000
Writer 4 $60000 $30000 5
Director
$80000 $25000
2
(10) Let A and B be symmetric n
n matrices. Prove that AB is symmetric
if and only if A and B commute.
207
Andrilli/Hecker - Chapter Tests
Chapter 1 - Version B
Test for Chapter 1 — Version B
(1) Three people are competing in a three-way tug-of-war, using three ropes
all attached to a brass ring. Player A is pulling due north with a force of
100 lbs. Player B is pulling due east, and Player C is pulling at an angle
of 30 degrees off due south, toward the west side. The ring is not moving.
At what magnitude of force are Players B and C pulling?
(2) Use a calculator to find the angle (to the nearest degree) between the
vectors x = [2; 5] and y = [ 6; 5].
(3) Prove that, for any vectors a and b in Rn , the vector b
orthogonal to a.
proja b is
(4) Let x = [5, −2, 4] represent the force on an object in a three-dimensional
coordinate system, and let a = [−2, 3, 3] be a given vector. Use proj_a x
to decompose x into two component forces in directions parallel and orthogonal to a. Verify that your answer is correct.
(5) State the contrapositive, converse, and inverse of the following statement:
If kxk2 + 2(x y) > 0, then kx + yk > kyk.
Which one of these is logically equivalent to the original statement?
(6) Give the negation of the following statement:
There is a unit vector x in R3 such that x is parallel to [ 2; 3; 1].
(7) Use a proof by induction to show that if the n n matrices A1 ; : : : ; Ak
Pk
are lower triangular, then i=1 Ai is lower triangular.
2
3
5 8
2
3 5 into the sum of a symmetric matrix S
(8) Decompose A = 4 6 3
9 4
1
and a skew-symmetric matrix V.
(9) Given the following information about the amount of foods (in ounces)
eaten by three cats each week, and the percentage of certain nutrients in
each food type, …nd the total intake of each type of nutrient for each cat
each week:
Food A
2
Cat 1
9
Cat 2 4 6
Cat 3
4
Food
Food
Food
Food
Food B
4
3
6
Food C
7
10
9
Nutrient 1 Nutrient 2
2
A
5%
4%
B 6
2%
3%
6
C 4
9%
2%
D
6%
0%
208
Food D
3
8
4 5
7
Nutrient
8%
2%
6%
8%
3
3
7
7
5
Andrilli/Hecker - Chapter Tests
Chapter 1 - Version B
(10) Prove that if A and B are both n
(AB)T = BA.
209
n skew-symmetric matrices, then
Andrilli/Hecker - Chapter Tests
Chapter 1 - Version C
Test for Chapter 1 — Version C
(1) Using Newton’s Second Law of Motion, find the acceleration vector on a
10 kg object in a three-dimensional coordinate system when the following
forces are simultaneously applied:
a force of 6 newtons in the direction of the vector [ 4; 0; 5],
a force of 4 newtons in the direction of the vector [2; 3; 6], and
a force of 8 newtons in the direction of the vector [ 1; 4; 2].
(2) Use a calculator to find the angle (to the nearest degree) between the
vectors x = [ 2; 4; 3] and y = [8; 3; 2].
(3) Prove that, for any vectors x; y 2 Rn ,
kx + yk2
(kxk + kyk)2 :
(Hint: Use the Cauchy-Schwarz Inequality.)
(4) Let x = [−4, 2, −7] represent the force on an object in a three-dimensional
coordinate system, and let a = [6, −5, 1] be a given vector. Use proj_a x
to decompose x into two component forces in directions parallel and orthogonal to a. Verify that your answer is correct.
(5) Use a proof by contrapositive to prove the following statement:
If y 6= projx y, then y 6= cx for all c 2 R.
(6) Give the negation of the following statement:
For every vector x 2 Rn , there is some y 2 Rn such that kyk > kxk.
2
3
3 6
3
2 5 into the sum of a symmetric matrix S
(7) Decompose A = 4 7 5
2 4
4
and a skew-symmetric matrix V.
2
3
3
2
6
4
2
4
3
1
5
6 1
3
4 7
7, calcu9
2
4 5 and B = 6
(8) Given A = 4 6
4 2
3
9 5
8
7
1
2
2
5
8
late, if possible, the third row of AB and the second column of BA.
(9) Use a proof by induction to show that if the n n matrices A1 ; : : : ; Ak
are diagonal, then the product A1
Ak is diagonal.
(10) Prove that if A is a skew-symmetric matrix, then A3 is also skew-symmetric.
210
Andrilli/Hecker - Chapter Tests
Chapter 2 - Version A
Test for Chapter 2 — Version A
(1) Use Gaussian elimination to find the quadratic equation y = ax² + bx + c
that goes through the points (3, 7), (−2, 12), and (−4, 56).
(2) Use Gaussian elimination or the Gauss-Jordan method
lowing linear system. Give the full solution set.
8
< 5x1 + 10x2 + 2x3 + 14x4 + 2x5
7x1
14x2
2x3
22x4 + 4x5
:
3x1 +
6x2 + x3 +
9x4
to solve the fol=
=
=
13
47
13
(3) Solve the following homogeneous system using the Gauss-Jordan method.
Give the full solution set, expressing the vectors in it as linear combinations
of particular solutions.
8
16x3
9x4 = 0
< 2x1 + 5x2
x1 + x2
2x3
2x4 = 0
:
3x1 + 2x2
14x3
2x4 = 0
2
(4) Find the rank of A = 4
2
2
5
3
8
10 5. Is A row equivalent to I3 ?
21
1
2
3
(5) Solve the following two systems simultaneously:
8
5x2 + 11x3 = 8
< 2x1 +
2x1 +
7x2 + 14x3 = 6
:
3x1 + 11x2 + 22x3 = 9
8
< 2x1
2x1
:
3x1
and
+
+
+
5x2
7x2
11x2
+ 11x3
+ 14x3
+ 22x3
= 25
= 30
= 47
(6) Let A be a m n matrix, B be an n p matrix, and let R be the type
(I) row operation R : hii
chii, for some scalar c and some i with
1 i m. Prove that R(AB) = R(A)B. (Since you are proving part
of Theorem 2.1 in the textbook, you may not use that theorem in your
proof. However, you may use the fact from Chapter 1 that
(kth row of (AB)) = (kth row of A)B.)
(7) Determine whether or not [ 15; 10; 23] is in the row space of
2
3
5
4 8
1 2 5:
A=4 2
1
5 1
211
Andrilli/Hecker - Chapter Tests
2
3
(8) Find the inverse of A = 4 8
4
Chapter 2 - Version A
1
9
3
3
2
3 5.
2
(9) Without using row reduction, find the inverse of the matrix A =
6
5
8
9
(10) Let A and B be nonsingular n n matrices. Prove that A and B commute
if and only if (AB)2 = A2 B2 .
212
.
Andrilli/Hecker - Chapter Tests
Chapter 2 - Version B
Test for Chapter 2 — Version B
(1) Use Gaussian elimination to find the circle x² + y² + ax + by + c = 0 that goes
through the points (5, −1), (6, −2), and (1, −7).
(2) Use Gaussian elimination or the Gauss-Jordan method to solve the following linear system. Give the full solution set.
8
2x2 + 19x3 + 11x4 + 9x5
27x6 =
31
< 3x1
2x1 + x2
12x3
7x4
8x5 + 21x6 =
22
:
2x1
2x2 + 14x3 +
8x4 + 3x5
14x6 =
19
(3) Solve the following homogeneous system using the Gauss-Jordan method.
Give the full solution set, expressing the vectors in it as linear combinations
of particular solutions.
8
9x2 + 8x3 +
4x4 = 0
< 3x1 +
5x1
15x2 + x3 + 22x4 = 0
:
4x1
12x2 + 3x3 + 22x4 = 0
2
(4) Find the rank of A = 4
3
1
15
9
2
41
3
7
2 5. Is A row equivalent to I3 ?
34
(5) Solve the following two systems simultaneously:
8
3x2
x3 =
< 4x1
4x1
2x2
3x3 =
:
5x1
x2 + x3 =
and
8
4x
3x
x3 =
<
1
2
4x1
2x2
3x3 =
:
5x1
x2 + x3 =
13
14
17
13
16
18
(6) Let A be a m n matrix, B be an n p matrix, and let R be the type (II)
row operation R : hii
chji + hii, for some scalar c and some i; j with
1 i; j m. Prove that R(AB) = R(A)B. (Since you are proving part
of Theorem 2.1 in the textbook, you may not use that theorem in your
proof. However, you may use the fact from Chapter 1 that
(kth row of (AB) = (kth row of A)B.)
2
3
2
1
5
1
9 5.
(7) Determine whether [ 2; 3; 1] is in the row space of A = 4 4
1
1
3
2
3
3 3
1
0 5.
(8) Find the inverse of A = 4 2 1
10 3
1
213
Andrilli/Hecker - Chapter Tests
(9) Solve the linear system
8
< 5x1
7x1
:
7x1
Chapter 2 - Version B
+
9x2
14x2
+ 12x2
+
4x3
10x3
+
4x3
=
=
=
91
146
120
without using row reduction, if the inverse of the coefficient matrix
3
2
3
2
32
6
17
5
9
4
4 7
4
11 5 :
14
10 5 is 4 21
7
3
7
12
4
7
2
2
(10) (a) Prove that if A is a nonsingular matrix and c 6= 0, then
(cA) 1 = ( 1c )A 1 .
(b) Use the result from part (a) to prove that if A is a nonsingular skewsymmetric matrix, then A 1 is also skew-symmetric.
214
Andrilli/Hecker - Chapter Tests
Chapter 2 - Version C
Test for Chapter 2 — Version C
(1) Use Gaussian elimination to find the values of A, B, C, and D that solve
the following partial fractions problem:
(5x² − 18x + 1) / ((x − 2)²(x + 3)) = A/(x − 2) + (Bx + C)/(x − 2)² + D/(x + 3).
(2) Use Gaussian elimination or the Gauss-Jordan method to solve
lowing linear system. Give the full solution set.
8
9x1 + 36x2
11x3 + 3x4 + 34x5
15x6
>
>
<
10x1
40x2 +
9x3
2x4
38x5
9x6
4x
+
16x
4x
+
x
+
15x
>
1
2
3
4
5
>
:
11x1 + 44x2
12x3 + 2x4 + 47x5 +
3x6
the fol=
=
=
=
6
89
22
77
(3) Solve the following homogeneous system using the Gauss-Jordan method.
Give the full solution set, expressing the vectors in it as linear combinations
of particular solutions.
8
2x2 + 14x3 + x4
7x5 = 0
< 2x1
6x1 + 9x2
27x3
2x4 + 21x5 = 0
:
3x1
4x2 + 16x3 + x4
10x5 = 0
2
6
(4) Find the rank of A = 6
4
5
2
6
4
6
4
9
6
3
23
13 7
7. Is A row equivalent to
27 5
19
2
1
1
1
I4 ?
(5) Find the reduced row echelon form matrix B for the matrix
3
2
2 2 3
A = 4 3 2 1 5;
5 1 3
and list a series of row operations that converts B to A.
(6) Determine whether [25; 12; 19]
b = [3; 3; 4], and c = [3; 2; 3].
2
2
6 4
(7) Find the inverse of A = 6
4 3
3
is a linear combination of a = [7; 3; 5],
0
1
1
7
215
1
1
1
2
3
1
3 7
7.
2 5
16
Andrilli/Hecker - Chapter Tests
(8) Solve the linear system
8
< 4x1
4x1
:
6x1
Chapter 2 - Version C
+ 2x2
+ x2
+ 3x2
5x3
3x3
4x3
=
=
=
3
7
26
without using row reduction, if the inverse of the coefficient matrix
2
3
3
2
7
1
5
4 2
5
2
2
2
4 4 1
3 5 is 4 17 23
4 5:
6 3
4
9 12
2
(9) Prove by induction that if A1 ; : : : ; Ak are nonsingular n n matrices, then
(A1
Ak ) 1 = (Ak ) 1
(A1 ) 1 .
(10) Suppose that A and B are n n matrices, and that R1 ; : : : ; Rk are
row operations such that R1 (R2 ( (Rk (AB)) )) = In . Prove that
R1 (R2 ( (Rk (A)) )) = B 1 .
216
Andrilli/Hecker - Chapter Tests
Chapter 3 - Version A
Test for Chapter 3 — Version A
(1) Calculate the area of the parallelogram determined by the vectors
x = [ 2; 5] and y = [6; 3].
(2) If A is a 5
5 matrix with determinant 4, what is j6Aj ? Why?
2
3
5
6
3
6
1 5 by row reducing A to
(3) Calculate the determinant of A = 4 4
5 10
2
upper triangular form.
(4) Prove that if A and B are n
n matrices, then jABT j = jAT Bj.
(5) Use cofactor expansion along any row or column to find the determinant
of
2
3
3
0
4
6
6 4
2
5
2 7
7:
A = 6
4 3
3
0
5 5
7
0
0
8
Be sure to use cofactor expansion to
as well.
2
2 3
(6) Find the inverse of A = 4 1 2
3 0
matrix for A.
…nd any 3
3 determinants needed
3
2
2 5 by …rst computing the adjoint
8
(7) Use Cramer’s Rule to solve the following system:
8
< 5x1 + x2 + 2x3 =
8x1 + 2x2
x3 =
:
6x1
x2 + x3 =
2
0
(8) Let A = 4 1
1
integer entries, and
(9) Prove that an n
eigenvalue for A.
3
17 :
10
3
1
1
0
1 5 : Find a nonsingular matrix P having all
1
0
a diagonal matrix D such that D = P 1 AP.
n matrix A is singular if and only if
(10) Suppose that 1 = 2 is an eigenvalue for an n
3
2 = 8 is an eigenvalue for A .
217
= 0 is an
n matrix A. Prove that
Andrilli/Hecker - Chapter Tests
Chapter 3 - Version B
Test for Chapter 3 — Version B
(1) Calculate the volume of the parallelepiped determined by the vectors
x = [2; 7; 1], y = [4; 2; 3], and z = [ 5; 6; 2].
(2) If A is a 4
4 matrix with determinant 7, what
2
2
4
6 6
11
(3) Calculate the determinant of A = 6
4 4
13
3
16
ducing A to upper triangular form.
is j5Aj ? Why?
3
12
15
25
32 7
7 by row re32
41 5
45
57
(4) Prove that if A and B are n n matrices with AB =
then either A or B is singular.
(5) Use cofactor expansion along any
of
2
2
6 4
A = 6
4 6
1
BA, and n is odd,
row or column to …nd the determinant
3
1
5
2
3
1
0 7
7:
8
0
0 5
7
0
3
Be sure to use cofactor expansion to find any 3 × 3 determinants needed
as well.
2
3
0 1
2
4 5 by …rst computing the adjoint
(6) Find the inverse of A = 4 4 3
2 1
2
matrix for A.
(7) Use Cramer’s Rule to solve the following system:
8
4x2
3x3 =
< 4x1
6x1
5x2
10x3 =
:
2x1 + 2x2 +
2x3 =
10
28 :
6
3
1
3
2
3 5 : Find a nonsingular matrix P having all integer
1
2
diagonal matrix D such that D = P 1 AP.
2
3
2 1 0 0
6 0 2 1 0 7
7
(9) Consider the matrix A = 6
4 0 0 2 1 5.
0 0 0 2
2
2
(8) Let A = 4 1
1
entries, and a
(a) Show that A has only one eigenvalue. What is it?
(b) Show that Step 3 of the Diagonalization Method of Section 3.4 produces only one fundamental eigenvector for A.
218
Andrilli/Hecker - Chapter Tests
(10) Consider the matrix A =
Chapter 3 - Version B
"
3
5
4
5
4
5
3
5
#
. It can be shown that, for every
nonzero vector X, the vectors X and (AX) form an angle with each other
measuring θ = arccos(3/5) ≈ 53°. (You may assume this fact.) Show why
A is not diagonalizable, first from an algebraic perspective, and then from
a geometric perspective.
219
Andrilli/Hecker - Chapter Tests
Chapter 3 - Version C
Test for Chapter 3 — Version C
(1) Calculate the volume of the parallelepiped
x = [5; 2; 3], y = [ 3; 1; 2], and z = [4;
2
4 3
6 1 9
(2) Calculate the determinant of A = 6
4 8 3
4 3
to upper triangular form.
determined by the vectors
5; 1].
3
1
2
0
2 7
7 by row reducing A
2
2 5
1
1
(3) Use a cofactor expansion along any row or column to find the determinant
of
3
2
0
0
0 0 5
6 0 12
0 9 4 7
7
6
6
0
0 8 3 7
A=6 0
7:
4 0 14 11 7 2 5
15 13 10 6 1
Be sure to use cofactor expansion on each 4 × 4, 3 × 3, and 2 × 2 determinant
you need to calculate as well.
(4) Prove that if A is an orthogonal matrix (that is, A^T = A⁻¹),
then |A| = ±1.
(5) Let A be a nonsingular matrix. Express A⁻¹ in terms of |A| and the
adjoint of A. Use this to prove that if A is an n × n matrix with |A| ≠ 0,
then the determinant of the adjoint of A equals |A|^(n−1).
(6) Perform (only) the Base Step in the proof by induction of the following
statement, which is part of Theorem 3.3: Let A be an n × n matrix, with
n ≥ 2, and let R be the type (II) row operation ⟨j⟩ ← c⟨i⟩ + ⟨j⟩. Then
|A| = |R(A)|.
(7) Use Cramer’s Rule to solve the following system:
8
< 9x1 + 6x2 + 2x3 =
5x1
3x2
x3 =
:
8x1 + 6x2 + x3 =
(8) Use diagonalization to calculate A9 if A =
2
4
(9) Let A = 4 0
3
integer entries, and
5
3
41
22 :
40
6
4
.
3
6
6
2
0 5 : Find a nonsingular matrix P having all
3
5
a diagonal matrix D such that D = P 1 AP.
(10) Let be an eigenvalue for an n n matrix A having algebraic multiplicity
k. Prove that is also an eigenvalue with algebraic multiplicity k for AT .
220
Andrilli/Hecker - Chapter Tests
Chapter 4 - Version A
Test for Chapter 4 — Version A
(1) Prove that R is a vector space using the operations ⊕ and ⊙ given by
x ⊕ y = (x⁵ + y⁵)^(1/5) and a ⊙ x = a^(1/5) x.
(2) Prove that the set of n n skew-symmetric matrices is a subspace of Mnn .
(3) Prove that the set f[5; 2; 1]; [2; 1; 1]; [8; 2; 1]g spans R3 .
(4) Find a simplified general form for all the vectors in span(S) in P3 if
S = {4x³ − 20x² − x − 11, 6x³ − 30x² − 2x − 18, −5x³ + 25x² + 2x + 16}.
(5) Prove that the set f[2; 1; 11; 4]; [9; 1; 2; 2]; [4; 0; 21; 8]; [ 2; 1; 8; 3]g in
R4 is linearly independent.
(6) Determine whether
4 13
19
1
;
3
19
15
1
;
2
15
12
1
;
4
5
7
2
is a basis for M22 .
(7) Find a maximal linearly independent subset of the set
S
= f3x3
2x2 + 3x
7x3 + 5x
1; 6x3 + 4x2 6x + 2; x3
1; 11x3 12x2 + 13x + 3g
3x2 + 2x
1;
in P3 . What is dim(span(S))?
(8) Prove that the columns of a nonsingular n n matrix A span Rn .
3
2
5
6
5
0
0 5, which has = 2 as an eigen(9) Consider the matrix A = 4 1
2
4
3
value. Find a basis B for R3 that contains a basis for the eigenspace E2
for A.
(10) (a) Find the transition matrix from B-coordinates to C-coordinates if
B
C
= ([ 11; 19; 12]; [7; 16; 2]; [ 8: 12; 11])
= ([2; 4; 1]; [ 1; 2; 1]; [1; 1; 2])
and
are ordered bases for R3 .
(b) Given v = [ 63; 113; 62] ∈ R3, find [v]B, and use your answer to
part (a) to find [v]C.
221
Andrilli/Hecker - Chapter Tests
Chapter 4 - Version B
Test for Chapter 4 — Version B
(1) Prove that R with the usual scalar multiplication, but with addition given
by x y = 3(x + y) is not a vector space.
(2) Prove that the set of all polynomials p in P4 for which the coe¢ cient of
the second-degree term equals the coe¢ cient of the fourth-degree term is
a subspace of P4 .
(3) Let V be a vector space with subspaces W1 and W2 . Prove that the
intersection of W1 and W2 is a subspace of V.
(4) Let S = f[5; 15; 4]; [ 2; 6; 1]; [ 9; 27; 7]g. Determine whether [2; 4; 3]
is in span(S).
(5) Find a simpli…ed general form for all the vectors in span(S) in M22 if
2
1
S=
6
1
;
5
1
15
1
;
8
2
24
1
;
3
1
9
2
:
(6) Let S be the set
f2x3
11x2 12x 1; 7x3 + 35x2 + 16x 7;
5x3 + 29x2 + 33x + 2; 5x3 26x2
12x + 5g
in P3 and let p(x) 2 P3 . Prove that there is exactly one way to express
p(x) as a linear combination of the elements of S.
(7) Find a subset of
S
= f[2; 3; 4; 1]; [ 6; 9; 12; 3]; [3; 1; 2; 2];
[2; 8; 12; 3]; [7; 6; 10; 4]g
that is a basis for span(S) in R4 . Does S span R4 ?
(8) Let A be a nonsingular n
linearly independent.
2
n matrix.
1
6 2
(9) Consider the matrix A = 6
4 0
2
eigenvalue. Find a basis B for R3
E 2 for A.
Prove that the rows of A are
3
2
3 1
6
6 7 7
7, which has = 2 as an
0
2 0 5
4
6 5
that contains a basis for the eigenspace
(10) (a) Find the transition matrix from B-coordinates to C-coordinates if
B
C
= ( 21x2 + 14x 38; 17x2 10x + 32; 10x2 + 4x
= ( 7x2 + 4x 14; 2x2 x + 4; x2 x + 1)
are ordered bases for P2 .
(b) Given v = 100x2 + 64x
to part (a) to …nd [v]C .
23) and
181 2 P3 , …nd [v]B , and use your answer
222
Andrilli/Hecker - Chapter Tests
Chapter 4 - Version C
Test for Chapter 4 — Version C
(1) The set R2 with operations [x; y] [w; z] = [x + w + 3; y + z 4], and
a [x; y] = [ax + 3a 3; ay 4a + 4] is a vector space. Find the zero vector
0 and the additive inverse of v = [x; y] for this vector space.
(2) Prove that the set of all 3-vectors orthogonal to [2; 3; 1] forms a subspace
of R3 .
(3) Determine whether [ 3; 6; 5] is in span(S) in R3 , if
S = f[3; 6; 1]; [4; 8; 3]; [5; 10; 1]g:
(4) Find a simpli…ed form for all the vectors in span(S) in P3 if
S = f4x3
x2
11x + 18; 3x3 + x2 + 9x
14; 5x3
x2
44
32
;
5
13
1
2
3
4
13x + 22g:
(5) Prove that the set
3
9
63
46
;
5
14
56
41
;
4
13
14
10
in M22 is linearly independent.
(6) Find a maximal linearly independent subset of
S=
3
2
1
4
;
6
4
2
8
;
2
2
1
0
;
4
2
;
3
2
in M22 . What is dim(span(S))?
(7) Let A be an n n singular matrix and let S be the set of columns of A.
Prove that dim(span(S)) < n.
(8) Enlarge the linearly independent set
T = f2x3
3x2 + 3x + 1; x3 + 4x2
6x
2g
of P3 to a basis for P3 .
(9) Let A 6= In be an n
dim(E1 ) < n.
n matrix having eigenvalue
= 1. Prove that
(10) (a) Find the transition matrix from B-coordinates to C-coordinates if
B
C
= ([10; 17; 8]; [ 4; 10; 5]; [29; 36; 16])
= ([12; 12; 5]; [ 5; 3; 1]; [1; 2; 1])
and
are ordered bases for R3 .
(b) Given v = [ 109; 155; 71] 2 R3 , …nd [v]B , and use your answer to
part (a) to …nd [v]C .
223
Andrilli/Hecker - Chapter Tests
Chapter 5 - Version A
Test for Chapter 5 — Version A
(1) Prove that the mapping f : Mnn ! Mnn given by f (A) = BAB 1 ,
where B is some …xed nonsingular n
n matrix, is a linear operator.
(2) Suppose L: R3 ! R3 is a linear transformation, and L([ 2; 5; 2]) =
[2; 1; 1], L([0; 1; 1]) = [1; 1; 0], and L([1; 2; 1]) = [0; 3; 1]. Find
L([2; 3; 1]).
(3) Find the matrix ABC for L: R2
! P2 given by
L([a; b]) = ( 2a + b)x2
(2b)x + (a + 2b);
for the ordered bases B = ([3; 7]; [2; 5]) for R2 ,
and C = (9x2 + 20x 21; 4x2 + 9x 9; 10x2 + 22x
(4) For the linear transformation L:
02
31
2
x1
B6 x2 7C
6
7C 4
LB
@4 x3 5A =
x4
R4
4
3
4
23) for P2 .
! R3 given by
2
3
3 x1
8
1
7 6
x2 7
7
6
1
6 56
4 x3 5 ;
8
2 10
x4
…nd a basis for ker(L), a basis for range(L), and verify that the Dimension
Theorem holds for L.
(5) (a) Consider the linear transformation L : P3 ! R3 given by
L(p(x)) = p(3); p0 (1);
Z
1
p(x) dx :
0
Prove that ker(L) is nontrivial.
(b) Use part (a) to prove that there is a nonzero polynomial p 2 P3 such
R1
that p(3) = 0, p0 (1) = 0, and 0 p(x) dx = 0.
(6) Prove that the mapping L: R3 ! R3
2
5
L([x; y; z]) = 4 6
10
is one-to-one. Is L an isomorphism?
(7) Show that L: Pn
given by
32
3
2
1
x
3
2 54 y 5
3
1
z
! Pn given by L(p) = p
224
p0 is an isomorphism.
Andrilli/Hecker - Chapter Tests
Chapter 5 - Version A
(8) Consider the diagonalizable operator L : M22 ! M22
given by L(K) = K KT . Let A be the matrix representation of L with
respect to the standard basis C for M22 .
(a) Find an ordered basis B of M22 consisting of fundamental eigenvectors for L, and the diagonal matrix D that is the matrix representation of L with respect to B.
(b) Calculate the transition matrix P from B to C, and verify
that D = P 1 AP.
(9) Indicate whether each given statement is true or false.
(a) If L is a linear operator on a …nite dimensional vector space for which
every root of pL (x) is real, then L is diagonalizable.
(b) Let L : V ! V be an isomorphism with eigenvalue . Then
and 1= is an eigenvalue for L 1 .
6= 0
(10) Prove that the linear operator on P3 given by L(p(x)) = p(x + 1) is not
diagonalizable.
225
Andrilli/Hecker - Chapter Tests
Chapter 5 - Version B
Test for Chapter 5 — Version B
! Mnn given by f (A) = A + AT is
(1) Prove that the mapping f : Mnn
a linear operator.
(2) Suppose L: R3 ! P3 is a linear transformation, and L([1; 1; 1]) =
2x3 x2 + 3, L([2; 5; 6]) = 11x3 + x2 + 4x 2, and
L([1; 4; 4]) = x3 3x2 + 2x + 10. Find L([6; 4; 7]).
! R3 given by
(3) Find the matrix ABC for L: M22
L
a b
c d
= [ 2a + c
d; a
2b + d; 2a
b + c];
for the ordered bases
B=
1
4
2
1
4 6
11 3
;
;
2
3
1
1
1
5
;
4
1
for M22 and C = ([ 1; 0; 2]; [2; 2; 1]; [1; 3; 2]) for R3 .
(4) For the linear transformation L: R4
02
31 2
x1
2
B6 x2 7C 6 1
6
7C 6
LB
@4 x3 5A = 4 4
4
x4
! R4 given by
32
5
4
8
6
1
5
1 7
76
5
2
16
1 4
1
10
2
3
x1
x2 7
7;
x3 5
x4
…nd a basis for ker(L), a basis for range(L), and verify that the Dimension
Theorem holds for L.
(5) Let A be a …xed m n matrix, with m 6= n. Let L1 : Rn ! Rm be
given by L1 (x) = Ax, and let L2 : Rm ! Rn be given by L2 (y) = AT y.
(a) Prove or disprove: dim(range(L1 )) = dim(range(L2 )):
(b) Prove or disprove: dim(ker(L1 )) = dim(ker(L2 )):
(6) Prove that the mapping L: M22
L
a b
c d
=
3a
! M22 given by
2b 3c
a d
4d
a
3a
c
b
d
c
d
is one-to-one. Is L an isomorphism?
(7) Let A be a …xed nonsingular n n matrix. Show that L: Mnn
given by L(B) = BA 1 is an isomorphism.
226
! Mnn
Andrilli/Hecker - Chapter Tests
Chapter 5 - Version B
(8) Consider the diagonalizable operator L : M22
L
a b
c d
=
7
4
12
7
! M22 given by
a b
c d
:
Let A be the matrix representation of L with respect to the standard basis
C for M22 .
(a) Find an ordered basis B of M22 consisting of fundamental eigenvectors for L, and the diagonal matrix D that is the matrix representation of L with respect to B.
(b) Calculate the transition matrix P from B to C, and verify that
D = P 1 AP.
(9) Indicate whether each given statement is true or false.
(a) Let L : V ! V be an isomorphism on a …nite dimensional vector
space V, and let be an eigenvalue for L. Then = 1.
(b) A linear operator L on an n-dimensional vector space is diagonalizable if and only if L has n distinct eigenvalues.
(10) Let L be the linear operator on R3 representing a counterclockwise rotation
through an angle of =6 radians around an axis through the origin that is
parallel to [4; 2; 3]. Explain in words why L is not diagonalizable.
227
Andrilli/Hecker - Chapter Tests
Chapter 5 - Version C
Test for Chapter 5 — Version C
(1) Let A be a …xed n n matrix. Prove that the function L : Pn ! Mnn
given by
L(an xn + an 1 xn 1 +
+ a1 x + a0 )
= an An + an
1A
n 1
+
+ a1 A + a0 In
is a linear transformation.
(2) Suppose L: R3
! M22 is a linear transformation such that
35
17
24
9
and L([ 7; 3; 1]) =
38
11
L([6; 1; 1]) =
21
5
; L([4; 2; 1]) =
31
17
(3) Find the matrix ABC for L: R3
L([x; y; z]) = [3x
18
11
;
: Find L([ 3; 1; 4]):
! R4 given by
y; x
z; 2x + 2y
z;
x
2y];
for the ordered bases
B
C
= ([ 5; 1; 2]; [10; 1; 3]; [41; 12; 20]) for R3 ; and
= ([1; 0; 1; 1]; [0; 1; 2; 1]; [ 1; 0; 2; 1]; [0; 1; 2; 0]) for R4 :
(4) Consider L: M22
L
a11
a21
! P2 given by
a12
a22
= (a11 + a22 )x2 + (a11
a21 )x + a12 :
What is ker(L)? What is range(L)? Verify that the Dimension Theorem
holds for L.
(5) Suppose that L1 : V ! W and L2 : W ! Y are linear transformations,
where V; W, and Y are …nite dimensional vector spaces.
(a) Prove that ker(L1 )
ker(L2 L1 ).
(b) Prove that dim(range(L1 ))
(6) Prove that the mapping L: P2
L(ax2 + bx + c) = [ 5a + b
dim(range(L2 L1 )).
! R4 given by
c;
7a
c; 2a
b
c;
2a + 5b + c]
is one-to-one. Is L an isomorphism?
(7) Let B = (v1 ; : : : ; vn ) be an ordered basis for an n-dimensional vector
space V. De…ne L : Rn ! V by L([a1 ; : : : ; an ]) = a1 v1 +
+ an vn .
Prove that L is an isomorphism.
228
Andrilli/Hecker - Chapter Tests
Chapter 5 - Version C
(8) Consider the diagonalizable operator L : P2
L(ax2 + bx + c) = (3a + b
! P2 given by
c)x2 + ( 2a + c)x + (4a + 2b
c):
Let A be the matrix representation of L with respect to the standard basis
C for P2 .
(a) Find an ordered basis B of P2 consisting of fundamental eigenvectors
for L, and the diagonal matrix D that is the matrix representation
of L with respect to B.
(b) Calculate the transition matrix P from B to C, and verify that
D = P 1 AP.
(9) Indicate whether each given statement is true or false.
(a) If L is a diagonalizable linear operator on a …nite dimensional vector space then the algebraic multiplicity of equals the geometric
multiplicity of for every eigenvalue of L.
(b) A linear operator L on a …nite dimensional vector space V is one-toone if and only if 0 is not an eigenvalue for L.
(10) Prove that the linear operator on Pn , for n > 0, de…ned by L(p) = p0 is
not diagonalizable.
229
Andrilli/Hecker - Chapter Tests
Chapter 6 - Version A
Test for Chapter 6 — Version A
(1) Use the Gram-Schmidt Process to enlarge the following set to an orthogonal basis for R3 : f[4; 3; 2]g.
(2) Let L : R3 → R3 be the orthogonal reflection through the plane
x − 2y + 2z = 0. Use eigenvalues and eigenvectors to find the matrix
representation of L with respect to the standard basis for R3. (Hint:
[1, −2, 2] is orthogonal to the given plane.)
(3) For the subspace W = span(f[2; 3; 1]; [ 3; 2; 2]g) of R3 , …nd a basis for
W ? . What is dim(W ? )? (Hint: The two given vectors are not orthogonal.)
(4) Suppose A is an n
kAxk = kxk.
n orthogonal matrix. Prove that for every x 2 Rn ,
(5) Prove the following part of Corollary 6.14: Let W be a subspace of Rn .
Then W (W ? )? .
(6) Let L : R4 ! R4 be the orthogonal projection onto the subspace
W = span(f[2; 1; 3; 1]; [5; 2; 3; 3]g) of R4 . Find the characteristic polynomial of L.
(7) Explain why L : R3 ! R3 given by the orthogonal projection onto the
plane 3x + 5y 6z = 0 is orthogonally diagonalizable.
(8) Consider L : R3
! R3 given by
02
31
2
x1
L @4 x2 5A = 4
x3
5
4
2
32
3
2
x1
2 5 4 x2 5 :
8
x3
4
5
2
Find an ordered orthonormal basis B of fundamental eigenvectors for L,
and the diagonal matrix D that is the matrix for L with respect to B.
(9) Use orthogonal diagonalization to …nd a symmetric matrix A such that
A3 =
1
5
31
18
18
4
:
(10) Let L be a symmetric operator on Rn and let 1 and 2 be distinct
eigenvalues for L with corresponding eigenvectors v1 and v2 . Prove that
v1 ? v2 .
230
Andrilli/Hecker - Chapter Tests
Chapter 6 - Version B
Test for Chapter 6 — Version B
(1) Use the Gram-Schmidt Process to find an orthogonal basis for R3, starting
with the linearly independent set f[2; 3; 1]; [4; 2; 1]g. (Hint: The two
given vectors are not orthogonal.)
(2) Let L : R3 → R3 be the orthogonal projection onto the plane 3x + 4z = 0.
Use eigenvalues and eigenvectors to find the matrix representation of L
with respect to the standard basis for R3. (Hint: [3, 0, 4] is orthogonal to
the given plane.)
(3) Let W = span(f[2; 2; 3]; [3; 0; 1]g) in R3 .
Decompose v = [ 12; 36; 43] into w1 + w2 , where w1 2 W and
w2 2 W ? . (Hint: The two vectors spanning W are not orthogonal.)
(4) Suppose that A and B are n
an orthogonal matrix.
n orthogonal matrices. Prove that AB is
(5) Prove the following part of Corollary 6.14: Let W be a subspace of Rn .
Assuming W (W ? )? has already been shown, prove that W = (W ? )? .
(6) Let v = [ 3; 5; 1; 0], and let W be the subspace of R4 spanned by
S = f[0; 3; 6; 2]; [ 6; 2; 0; 3]g. Find projW ? v.
(7) Is L : R2
! R2 given by
L
x1
x2
3
7
=
7
5
x1
x2
orthogonally diagonalizable? Explain.
(8) Consider L : R3 ! R3 given by
02
31
2
x1
40
L @4 x2 5A = 4 18
6
x3
18
13
12
32
3
x1
6
12 5 4 x2 5 :
45
x3
Find an ordered orthonormal basis B of fundamental eigenvectors for L,
and the diagonal matrix D that is the matrix for L with respect to B.
(9) Use orthogonal diagonalization to …nd a symmetric matrix A such that
A2 =
10
6
6
10
:
(10) Let A be an n n symmetric matrix. Prove that if all eigenvalues for A
are 1, then A is orthogonal.
231
Andrilli/Hecker - Chapter Tests
Chapter 6 - Version C
Test for Chapter 6 — Version C
(1) Use the Gram-Schmidt Process to enlarge the following set to an orthogonal basis for R4 : f[ 3; 1; 1; 2]; [2; 4; 0; 1]g.
(2) Let L : R3 → R3 be the orthogonal projection onto the plane
2x − 2y − z = 0. Use eigenvalues and eigenvectors to find the matrix
representation of L with respect to the standard basis for R3. (Hint:
[2, −2, −1] is orthogonal to the given plane.)
(3) For the subspace W = span(f[5; 1; 2]; [ 8; 2; 9]g) of R3 , …nd a basis for
W ? . What is dim(W ? )? (Hint: The two given vectors are not orthogonal.)
(4) Let A be an n
n orthogonal skew-symmetric matrix. Show that n is
even. (Hint: Assume n is odd and calculate jA2 j to reach a contradiction.)
(5) Prove that if W1 and W2 are two subspaces of Rn such that W1
then W2? W1? .
W2 ,
(6) Let P = (5; 1; 0; 4), and let
W = span(f[2; 1; 0; 2]; [ 2; 0; 1; 2]; [0; 2; 2; 1]g):
Find the minimum distance from P to W.
(7) Explain why the matrix for L : R3 → R3 given by the orthogonal
reflection through the plane 4x − 3y − z = 0 with respect to the standard
basis for R3 is symmetric.
(8) Consider L : R3 ! R3 given by
02
31
2
x1
49
L @4 x2 5A = 4 84
x3
72
32
3
72
x1
84 5 4 x2 5 :
49
x3
84
23
84
Find an ordered orthonormal basis B of fundamental eigenvectors for L,
and the diagonal matrix D that is the matrix for L with respect to B.
(9) Use orthogonal diagonalization to …nd a symmetric matrix A such that
A2 =
10
30
30
90
:
(10) Let A and B be n
n symmetric matrices such that pA (x) = pB (x).
Prove that there is an orthogonal matrix P such that A = PBP 1 .
232
Andrilli/Hecker - Chapter Tests
Chapter 7 - Version A
Test for Chapter 7 — Version A
Note that there are more than 10 problems in this test, with at least
three problems from each section of Chapter 7.
(1) (Section 7.1) Verify that a b = b a for the complex vectors a = [2; i; 2+i]
and b = [1 + i; 1 i; 2 + i].
(2) (Section 7.1) Suppose H and P are n
n complex matrices and that H
is Hermitian. Prove that P HP is Hermitian.
(3) (Section 7.1) Indicate whether each given statement is true or false.
(a) If Z and W are n n complex matrices, then ZW = (WZ).
(b) If an n n complex matrix is normal, then it is either Hermitian or
skew-Hermitian.
(4) (Section 7.2) Use Gaussian elimination to solve the following system of
linear equations:
iz1
(2 + 3i)z1
+ (1 + 3i)z2
+ (7 + 7i)z2
=
=
6 + 2i
:
20 2i
(5) (Section 7.2) Give an example of a matrix having all real entries that is
not diagonalizable when thought of as a real matrix, but is diagonalizable
when thought of as a complex matrix. Prove that your example works.
(6) (Section 7.2) Find all eigenvalues and a basis of fundamental eigenvectors
for each eigenspace for L : C3 ! C3 given by
02
31
2
32
3
z1
1 0
1 i
z1
5 4 z2 5 :
i
L @4 z2 5A = 4 1 i
z3
i 0
1+i
z3
Is L diagonalizable?
(7) (Section 7.3) Prove or disprove: For n 1, the set of polynomials in PnC
with real coe¢ cients is a complex subspace of PnC .
(8) (Section 7.3) Find a basis B for span(S) that is a subset of
S
= f[1 + 3i; 2 + i; 1 2i]; [ 2 + 4i; 1 + 3i; 3
[1; 2i; 3 + i]; [5 + 6i; 3; 1 2i]g:
i];
(9) (Section 7.3) Find the matrix with respect to the standard bases for the
2
complex linear transformation L : MC
22 ! C given by
L
a b
c d
= [a + bi; c + di] :
Also, …nd the matrix for L with respect to the standard bases when
thought of as a linear transformation between real vector spaces.
233
Andrilli/Hecker - Chapter Tests
Chapter 7 - Version A
(10) (Section 7.4) Use the Gram-Schmidt Process to …nd an orthogonal basis
for the complex vector space C3 starting with the linearly independent set
f[1; i; 1 + i]; [3; i; 1 i]g.
(11) (Section 7.4) Find a unitary matrix P such that
P
0 i
i 0
P
is diagonal.
(12) (Section 7.4) Show that if A and B are n
is unitary and jAj = 1.
n unitary matrices, then AB
(13) (Section 7.5) Prove that
hf ; gi = f (0)g(0) + f (1)g(1) + f (2)g(2)
is a real inner product on P2 . For f (x) = 2x + 1 and g(x) = x2
calculate hf ; gi and kf k.
1,
(14) (Section 7.5) Find the distance between x = [4; 1; 2] and y = [3; 3; 5] in
the real inner product space consisting of R3 with inner product
2
3
4 0
1
1 5:
hx; yi = Ax Ay; where A = 4 3 3
1 2
1
(15) (Section 7.5) Prove that in a real inner product space, for any vectors x
and y,
kxk = kyk if and only if hx + y; x yi = 0:
(16) (Section 7.5) Use the Generalized Gram-Schmidt Process to …nd an orthogonal basis for P2 starting with the linearly independent set fx2 ; xg
using the real inner product given by
Z 1
hf ; gi =
f (t)g(t) dt:
1
(17) (Section 7.5) Decompose v = [0; 1; 3] in R3 as w1 + w2 , where
w1 2 W = span(f[2; 1; 4]; [4; 3; 8]g) and w2 2 W ? , using the real inner
product on R3 given by hx; yi = Ax Ay, with
2
3
3
1 1
3 1 5:
A = 4 0
1
2 1
234
Andrilli/Hecker - Chapter Tests
Chapter 7 - Version B
Test for Chapter 7 — Version B
Note that there are more than 10 problems in this test, with at least
three problems from each section of Chapter 7.
(1) (Section 7.1) Calculate A B for the complex matrices
A=
i 3 + 2i
5
2i
and B =
1
i 3
2i
3i
7
:
(2) (Section 7.1) Prove that if H is a Hermitian matrix, then iH is a skewHermitian matrix.
(3) (Section 7.1) Indicate whether each given statement is true or false.
(a) Every skew-Hermitian matrix must have all zeroes on its main diagonal.
(b) If x; y 2 Cn , but each entry of x and y is real, then x y produces
the same answer if the dot product is thought of as taking place in
Cn as it would if the dot product were considered to be taking place
in Rn .
(4) (Section 7.2) Use the an inverse matrix to solve the given linear system:
(4 + i)z1
(2 i)z1
+ (2 + 14i)z2
+
(6 + 5i)z2
= 21 + 4i
:
= 10 6i
(5) (Section 7.2) Explain why the sum of the algebraic multiplicities of a
complex n n matrix must equal n.
(6) (Section 7.2) Find all eigenvalues and a basis of fundamental eigenvectors
for each eigenspace for L : C2 ! C2 given by
L
z1
z2
=
i 1
4 3i
z1
z2
:
Is L diagonalizable?
(7) (Section 7.3) Prove or disprove: The set of n n Hermitian matrices is a
complex subspace of MC
nn . If it is a subspace, compute its dimension.
(8) (Section 7.3) If
S
= f[i; 1; 2 + i]; [2 i; 2 + 2i; 5 + 2i];
[0; 2; 2 2i]; [1 + 3i; 2 + 2i; 3 + 5i]g;
…nd a basis B for span(S) that uses vectors having a simpler form than
those in S.
235
Andrilli/Hecker - Chapter Tests
Chapter 7 - Version B
C
(9) (Section 7.3) Show that L : MC
22 ! M22 given by L(Z) = Z is not a
complex linear transformation.
(10) (Section 7.4) Use the Gram-Schmidt Process to …nd an orthogonal basis
for the complex vector space C3 containing f[1 + i; 1 i; 2]g.
1+i
1 i
(11) (Section 7.4) Prove that Z =
1+i
1+i
is unitarily diagonalizable.
n unitary matrix. Prove that A2 = In if
(12) (Section 7.4) Let A be an n
and only if A is Hermitian.
(13) (Section 7.5) Let A be the 3
2
4
3 nonsingular matrix
3
4 3 1
1 4 3 5:
1 1 1
Prove that hx; yi = Ax Ay is a real inner product on R3 . For x =
[2; 1; 1] and y = [1; 4; 2], calculate hx; yi and kxk for this inner product.
(14) (Section 7.5) Find the distance between w = [4 + i; 5; 4 + i; 2 + 2i] and
z = [1 + i; 2i; 2i; 2 + 3i] in the complex inner product space C3 using the
usual complex dot product as an inner product.
(15) (Section 7.5) Prove that in a real inner product space, for any vectors x
and y,
1
hx; yi = (kx + yk2 kx yk2 ):
4
(16) (Section 7.5) Use the Generalized Gram-Schmidt Process to …nd an orthogonal basis for P2 containing x2 using the real inner product given
by
hf ; gi = f (0)g(0) + f (1)g(1) + f (2)g(2):
(17) (Section 7.5) Decompose v = x + 3 in P2 as w1 + w2 , where
w1 2 W = span(fx2 g) and w2 2 W ? , using the real inner product on P2
given by
Z
1
hf ; gi =
f (t)g(t) dt:
0
236
Andrilli/Hecker - Chapter Tests
Chapter 7 - Version C
Test for Chapter 7 — Version C
Note that there are more than 10 problems in this test, with at least
three problems from each section of Chapter 7.
(1) (Section 7.1) Calculate AT B for the complex matrices
A=
2
3 i
1 + 2i
i
4+i
3i
2
1 2i
and B =
(2) (Section 7.1) Let H and J be n
HJ is Hermitian, then HJ = JH.
:
n Hermitian matrices. Prove that if
(3) (Section 7.1) Indicate whether each given statement is true or false.
(a) If x,y 2 Cn with x y 2 R, then x y = y x.
(b) If A is an n n Hermitian matrix, and x; y 2 Cn , then Ax y = x Ay.
(4) (Section 7.2) Use Cramer’s Rule to solve the following system of linear
equations:
(1
iz1
2i)z1
+
+
(1
(4
i)z2
i)z2
=
=
3
5
2i
:
6i
(5) (Section 7.2) Show that a complex n n matrix that is not diagonalizable
must have an eigenvalue whose algebraic multiplicity is strictly greater
than its geometric multiplicity.
(6) (Section 7.2) Find all eigenvalues and a basis of fundamental eigenvectors
for each eigenspace for L : C2 ! C2 given by
L
z1
z2
0
2
2i 2 + 2i
=
z1
z2
:
Is L diagonalizable?
(7) (Section 7.3) Prove that the set of normal 2 2 matrices is not a complex
subspace of MC
22 by showing that it is not closed under matrix addition.
(8) (Section 7.3) Find a basis B for C4 that contains
f[1; 3 + i; i; 1
i]; [1 + i; 3 + 4i; 1 + i; 2]g:
(9) (Section 7.3) Let w 2 Cn be a …xed nonzero vector. Show that L : Cn ! C
given by L(z) = w z is not a complex linear transformation.
(10) (Section 7.4) Use the Gram-Schmidt Process to …nd an orthogonal basis
for the complex vector space C3 , starting with the basis f[2 i; 1 + 2i; 1];
[0; 1; 0]; [ 1; 0; 2 + i]g.
237
Andrilli/Hecker - Chapter Tests
Chapter 7 - Version C
(11) (Section 7.4) Let Z be an n n complex matrix whose rows form an
orthonormal basis for Cn . Prove that the columns of Z also form an
orthonormal basis for Cn :
(12) (Section 7.4)
(a) Show that
23i
36
A =
36
2i
is normal, and hence unitarily diagonalizable.
(b) Find a unitary matrix P such that P
1
AP is diagonal.
(13) (Section 7.5) For x = [x1 ; x2 ] and y = [y1 ; y2 ] in R2 , prove that hx; yi =
2x1 y1 x1 y2 x2 y1 + x2 y2 is a real inner product on R2 . For x = [2; 3]
and y = [4; 4], calculate hx; yi and kxk for this inner product.
(14) (Section 7.5) Find the distance between f (x) = sin2 x and g(x) = cos2 x
in the real inner product space consisting of the set of all real-valued
continuous functions de…ned on the interval [0; ] with inner product
Z
hf ; gi =
f (t)g(t) dt:
0
(15) (Section 7.5) Prove that in a real inner product space, vectors x and y are
orthogonal if and only if kx + yk2 = kxk2 + kyk2 .
(16) (Section 7.5) Use the Generalized Gram-Schmidt Process to …nd an orthogonal basis for R3 , starting with the linearly independent set
f[ 1; 1; 1]; [3; 4; 1]g using the real inner product given by
3
2
3 2 1
hx; yi = Ax Ay; where A = 4 2 1 2 5 :
2 1 1
(17) (Section 7.5) Decompose v = x2 in P2 as w1 + w2 , where
w1 2 W = span(fx 5; 6x 5g) and w2 2 W ? , using the real inner
product on P2 given by
hf ; gi = f (0)g(0) + f (1)g(1) + f (2)g(2):
238
Andrilli/Hecker - Answers to Chapter Tests
Chapter 1 - Version A
Answers to Test for Chapter 1 — Version A
(1) The net velocity vector is [5√2/2, 2 − 5√2/2] ≈ [3.54, −1.54] km/hr. The net
speed is approximately 3.85 km/hr.
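For reference, this comes from adding the rowing velocity to the current (taking east as the positive x-direction and north as the positive y-direction): 5[√2/2, −√2/2] + [0, 2] = [5√2/2, 2 − 5√2/2] ≈ [3.54, −1.54], and the net speed is the length √(3.54² + 1.54²) ≈ 3.85 km/hr.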
(2) The angle (to the nearest degree) is 137°.
(3) ||x + y||² + ||x − y||² = (x + y)·(x + y) + (x − y)·(x − y)
= x·(x + y) + y·(x + y) + x·(x − y) − y·(x − y)
= x·x + x·y + y·x + y·y + x·x − x·y − y·x + y·y
= 2(x·x) + 2(y·y) = 2(||x||² + ||y||²)
(4) The vector x = [−3, 4, −2] can be expressed as [−65/35, 13/35, 39/35] + [−40/35, 127/35, −109/35],
where the first vector (proj_a x) of the sum is parallel to a (since it equals
−(13/35) a), and the second vector (x − proj_a x) of the sum is easily seen to
be orthogonal to a.
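For reference, the projection itself is proj_a x = ((x · a)/||a||²) a = ((−15 − 4 + 6)/35)[5, −1, −3] = −(13/35)[5, −1, −3] = [−65/35, 13/35, 39/35], and the orthogonality of the second vector can be checked directly: [−40/35, 127/35, −109/35] · [5, −1, −3] = (−200 − 127 + 327)/35 = 0.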
(5) Contrapositive: If x is parallel to y, then ||x + y|| = ||x|| + ||y||.
Converse: If x is not parallel to y, then ||x + y|| ≠ ||x|| + ||y||.
Inverse: If ||x + y|| = ||x|| + ||y||, then x is parallel to y.
Only the contrapositive is logically equivalent to the original statement.
(6) The negation is: ||x|| ≠ ||y|| and (x − y)·(x + y) = 0.
(7) Base Step: Assume A and B are any two diagonal n × n matrices. Then,
C = A + B is also diagonal because for i ≠ j, c_ij = a_ij + b_ij = 0 + 0 = 0,
since A and B are both diagonal.
Inductive Step: Assume that the sum of any k diagonal n × n matrices
is diagonal. We must show for any diagonal n × n matrices A_1, ..., A_(k+1)
that ∑_(i=1)^(k+1) A_i is diagonal. Now, ∑_(i=1)^(k+1) A_i = (∑_(i=1)^(k) A_i) + A_(k+1). Let B
= ∑_(i=1)^(k) A_i. Then B is diagonal by the inductive hypothesis. Hence,
∑_(i=1)^(k+1) A_i = B + A_(k+1) is the sum of two diagonal matrices, and so is
diagonal by the Base Step.
3
2
3
2
3
9
0
2
0
4
2
2
7
6 3
7
6 9
3 7
7
6
1 11
0
(8) A = S + V, where S = 6
2 5, V = 4 2
2 5, S is
4 2
3
0 11
3
2
0
2
2
symmetric, and V is skew-symmetric.
(9)                Salary      Perks
    TV Show 1   $1000000   $650000
    TV Show 2   $860000    $535000
    TV Show 3   $880000    $455000
    TV Show 4   $770000    $505000
239
Andrilli/Hecker - Answers to Chapter Tests
Chapter 1 - Version B
(10) Assume A and B are both n n symmetric matrices. Then AB is symmetric if and only if (AB)T = AB if and only if BT AT = AB (by
Theorem 1.16) if and only if BA = AB (since A and B are symmetric)
if and only if A and B commute.
Answers to Test for Chapter 1 — Version B
(1) Player B is pulling with a force of 100/√3 lbs, or about 57.735 lbs. Player
C is pulling with a force of 200/√3 lbs, or about 115.47 lbs.
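For reference, these values come from an equilibrium argument (taking east as the positive x-direction and north as the positive y-direction): Player C pulls in the direction [−sin 30°, −cos 30°] = [−1/2, −√3/2], so if B and C pull with magnitudes b and c, the ring stays put exactly when b − c/2 = 0 and 100 − c√3/2 = 0, which gives c = 200/√3 and b = 100/√3.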
(2) The angle (to the nearest degree) is 152°.
(3) See the proof in the textbook, just before Theorem 1.10 in Section 1.2.
(4) The vector x = [5, −2, 4] can be expressed as
[8/22, −12/22, −12/22] + [102/22, −32/22, 100/22], where the first vector (proj_a x) of the sum
is parallel to a (since it equals −(4/22) a), and the second vector (x − proj_a x)
of the sum is easily seen to be orthogonal to a.
(5) Contrapositive: If ||x + y|| ≤ ||y||, then ||x||² + 2(x·y) ≤ 0.
Converse: If ||x + y|| > ||y||, then ||x||² + 2(x·y) > 0.
Inverse: If ||x||² + 2(x·y) ≤ 0, then ||x + y|| ≤ ||y||.
Only the contrapositive is logically equivalent to the original statement.
(6) The negation is: No unit vector x ∈ R3 is parallel to [−2, 3, −1].
(7) Base Step: Assume A and B are any two lower triangular n n matrices.
Then, C = A + B is also lower triangular because for i < j, cij =
aij + bij = 0 + 0 = 0, since A and B are both lower triangular.
Inductive Step: Assume that the sum of any k lower triangular n n
matrices is lower triangular. P
We must show for any lower triangular
n n
Pk+1
k+1
matrices A1 ; : : : ; Ak+1 that i=1 Ai is lower triangular. Now, i=1 Ai
Pk
Pk
= ( i=1 Ai ) + Ak+1 . Let B = i=1 Ai . Then B is lower triangular by
Pk+1
the inductive hypothesis. Hence, i=1 Ai = B + Ak+1 is the sum of two
lower triangular matrices, and so is lower triangular by the Base Step.
3
2
2
11 3
0 7
5 1 27
2
7
6
7
6
1 7
7 7
6
6
7 0
(8) A = S + V, where S = 4 1 3 2 5 ; V = 4
2 5 ; S is
7
2
1
2
1
symmetric, and V is skew-symmetric.
(9)            Nutrient 1    Nutrient 2    Nutrient 3
    Cat 1      1.64          0.62          1.86
    Cat 2      1.50          0.53          1.46
    Cat 3      1.55          0.52          1.54
(All figures are in ounces.)
240
Andrilli/Hecker - Answers to Chapter Tests
Chapter 1 - Version C
(10) Assume A and B are both n × n skew-symmetric matrices. Then (AB)^T =
B^T A^T (by Theorem 1.16) = (−B)(−A) (since A, B are skew-symmetric)
= BA (by part (4) of Theorem 1.14).
Answers to Test for Chapter 1 — Version C
(1) The acceleration vector is approximately [ 0:435; 0:870; 0:223] m/sec2 .
This can also be expressed as 1:202[ 0:362; 0:724; 0:186] m/sec2 , where
the latter vector is a unit vector.
(2) The angle (to the nearest degree) is 118°.
(3) See the proof of Theorem 1.7 in Section 1.2 of the textbook.
(4) The vector x = [−4, 2, −7] can be expressed as [−246/62, 205/62, −41/62]
+ [−2/62, −81/62, −393/62], where the first vector (proj_a x) of the sum is parallel
to a (since it equals −(41/62) a) and the second vector (x − proj_a x) of the
sum is easily seen to be orthogonal to a.
(5) Assume y = cx, for some c ∈ R. We must prove that y = proj_x y. Now,
if y = cx, then proj_x y = ((x·y)/||x||²) x = ((x·(cx))/||x||²) x = (c(x·x)/||x||²) x (by part
(4) of Theorem 1.5) = cx (by part (2) of Theorem 1.5) = y.
(6) The negation is: For some vector x 2 Rn , there is no vector y 2 Rn such
that kyk > kxk.
2
2
1
1 3
5 3
3 13
0
2
2
2
2
7
6
7
6 13
1
6
7
6
5
3 5; V = 4
0
1 7
(7) A = S + V, where S = 4 2
2
5; S
1
3
4
2
is symmetric, and V is skew-symmetric.
5
2
1
0
(8) The third row of AB is [35; 18; 13], and the second column of BA is
[ 32; 52; 42; 107].
(9) Base Step: Suppose A P
and B are two n n diagonal matrices. Let
n
C = AB. Then, cij = k=1 aik bkj . But if k 6= i, then aik = 0. Thus,
cij = aii bij . Then, if i 6= j, bij = 0, hence cij = aii 0 = 0. Hence C is
diagonal.
Inductive Step: Assume that the product of any k diagonal n n matrices is diagonal. We must show that for any diagonal n n matrices
A1 ; : : : ; Ak+1 that the product A1
Ak+1 is diagonal. Now, A1
Ak+1
= (A1
Ak )Ak+1 . Let B = A1
Ak . Then B is diagonal by the inductive hypothesis. Hence, A1
Ak Ak+1 = BAk+1 is the product of
two diagonal matrices, and so is diagonal by the Base Step.
241
Andrilli/Hecker - Answers to Chapter Tests
Chapter 2 - Version A
(10) Assume that A is skew-symmetric. Then (A³)^T = (A²A)^T = A^T(A²)^T
(by Theorem 1.16) = A^T(AA)^T = A^T A^T A^T (again by Theorem 1.16)
= (−A)(−A)(−A) (since A is skew-symmetric) = −(A³) (by part (4) of
Theorem 1.14). Hence, A³ is skew-symmetric.
Answers to Test for Chapter 2 — Version A
(1) The quadratic equation is y = 3x² − 4x − 8.
(2) The solution set is {(−2b − 4d − 5, b, 3d + 2, d, 4) | b, d ∈ R}.
(3) The solution set is {c(−2, 4, 1, 0) | c ∈ R}.
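As a check of answer (1): 3(3)² − 4(3) − 8 = 7, 3(−2)² − 4(−2) − 8 = 12, and 3(−4)² − 4(−4) − 8 = 56, so the parabola passes through (3, 7), (−2, 12), and (−4, 56).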
(4) The rank of A is 2, and so A is not row equivalent to I3 .
(5) The solution sets are, respectively, f(3; 4; 2)g and f(1; 2; 3)g.
(6) First, ith row of R(AB)
= c(ith row of (AB))
= c(ith row of A)B (by the hint)
= (ith row of R(A))B
= ith row of (R(A)B) (by the hint).
Now, if k 6= i, kth row of R(AB)
= kth row of (AB)
= (kth row of A)B (by the hint)
= (kth row of R(A))B
= kth row of (R(A)B) (by the hint).
Hence, R(AB) = R(A)B, since they are equal on each row.
(7) Yes: [ 15; 10; 23] = ( 2) (row 1) + ( 3) (row 2) + ( 1)(row 3).
2
3
9
4
15
2
7 5.
(8) The inverse of A is 4 4
12
5
19
" 9
4 #
14
7
9
8
1
(9) The inverse of A is 14
.
=
5
3
5 6
14
7
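For reference, the standard route for answer (9) is the adjoint formula for the 2 × 2 case: if A = [a b; c d] with ad − bc ≠ 0, then A⁻¹ = (1/(ad − bc)) [d −b; −c a]. This requires computing only one determinant and no row reduction.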
(10) Suppose A and B are nonsingular n × n matrices. If A and B commute, then AB = BA. Hence, A(AB)B = A(BA)B, and so A²B² = (AB)². Conversely, suppose A²B² = (AB)². Since both A and B are nonsingular, we know that A^{-1} and B^{-1} exist. Multiply both sides of A²B² = ABAB by A^{-1} on the left and by B^{-1} on the right to obtain AB = BA. Hence, A and B commute.
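A numerical illustration of the forward direction (the matrices below are chosen to commute because both are polynomials in the same matrix M; they are not the matrices from the test):

```python
import numpy as np

M = np.array([[1., 2.],
              [3., 4.]])
A = M + 2 * np.eye(2)      # nonsingular
B = 3 * M - np.eye(2)      # nonsingular

assert np.allclose(A @ B, B @ A)                      # A and B commute
assert np.allclose(A @ A @ B @ B, (A @ B) @ (A @ B))  # so A^2 B^2 = (AB)^2
```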
Answers to Test for Chapter 2 — Version B
(1) The equation of the circle is x² + y² − 6x + 8y + 12 = 0, or (x − 3)² + (y + 4)² = 13.

(2) The solution set is {(−5c − 3d + f + 6, 2c + d − 3f − 2, c, d, 2f + 1, f) | c, d, f ∈ R}.
(3) The solution set is {b(3, 1, 0, 0) + d(−4, 0, 2, 1) | b, d ∈ R}.
(4) The rank of A is 3, and so A is row equivalent to I3 .
(5) The solution sets are, respectively, f( 2; 3; 4)g and f(3; 1; 2)g.
(6) First, ith row of R(AB)
= c(jth row of (AB)) + (ith row of (AB))
= c(jth row of A)B + (ith row of A)B (by the hint)
= (c(jth row of A) + (ith row of A))B
= (ith row of R(A))B
= ith row of (R(A)B) (by the hint).
Now, if k 6= i, kth row of R(AB)
= kth row of (AB)
= (kth row of A)B (by the hint)
= (kth row of R(A))B
= kth row of (R(A)B) (by the hint).
Hence, R(AB) = R(A)B, since they are equal on each row.
(7) The vector [ 2; 3; 1] is not in the row space of A.
3
2
1
6
1
13
2 5.
(8) The inverse of A is 4 2
4
21
3
(9) The solution set is f(4; 7; 2)g.
(10) (a) To prove that the inverse of (cA) is (1/c)A^{-1}, simply note that the product of (cA) and (1/c)A^{-1} equals I (by part (4) of Theorem 1.14).
(b) Now, assume A is a nonsingular skew-symmetric matrix. Then (A^{-1})^T = (A^T)^{-1} (by part (4) of Theorem 2.11) = (−A)^{-1} (since A is skew-symmetric) = (1/(−1))A^{-1} (by part (a)) = −(A^{-1}). Hence, A^{-1} is skew-symmetric.
Answers to Test for Chapter 2 — Version C
(1) A = 3, B = −2, C = 1, and D = 4.

(2) The solution set is {(−4b − 3e + 3, b, 2e − 3, 5e − 2, e, 4) | b, e ∈ R}.

(3) The solution set is {c(12, 5, 1, 0, 0) + e(−1, 1, 0, 3, 1) | c, e ∈ R}.
(4) The rank of A is 3, and so A is not row equivalent to I4 .
(5) The reduced row echelon form matrix for A is B = I₃. One possible sequence of row operations converting B to A is:
(II): ⟨2⟩ ← (11/10)⟨3⟩ + ⟨2⟩
(II): ⟨1⟩ ← (2/5)⟨3⟩ + ⟨1⟩
(I): ⟨3⟩ ← (1/10)⟨3⟩
(II): ⟨3⟩ ← 4⟨2⟩ + ⟨3⟩
(II): ⟨1⟩ ← 1⟨2⟩ + ⟨1⟩
(I): ⟨2⟩ ← 5⟨2⟩
(II): ⟨3⟩ ← 5⟨1⟩ + ⟨3⟩
(II): ⟨2⟩ ← 3⟨1⟩ + ⟨2⟩
(I): ⟨1⟩ ← 2⟨1⟩
(6) Yes: [25, 12, 19] = 4[7, 3, 5] + 2[3, 3, 4] − 3[3, 2, 3].
3
2
1
3
4 1
6 3 21
34 8 7
7.
(7) The inverse of A is 6
4 4 14
21 5 5
1
8
13 3
(8) The solution set is f(4; 6; 5)g.
(9) Base Step: Assume A and B are two n × n nonsingular matrices. Then (AB)^{-1} = B^{-1}A^{-1}, by part (3) of Theorem 2.11.
Inductive Step: Assume that for any set of k nonsingular n × n matrices, the inverse of their product is found by multiplying the inverses of the matrices in reverse order. We must show that if A₁, …, A_{k+1} are nonsingular n × n matrices, then (A₁ ··· A_{k+1})^{-1} = (A_{k+1})^{-1} ··· (A₁)^{-1}. Now, let B = A₁ ··· A_k. Then (A₁ ··· A_{k+1})^{-1} = (BA_{k+1})^{-1} = (A_{k+1})^{-1} B^{-1} (by the Base Step, or by part (3) of Theorem 2.11) = (A_{k+1})^{-1} (A₁ ··· A_k)^{-1} = (A_{k+1})^{-1} (A_k)^{-1} ··· (A₁)^{-1}, by the inductive hypothesis.
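A quick numerical check of the reversed-order rule for three factors (random matrices, which are nonsingular with probability one; not matrices from the test):

```python
import numpy as np

rng = np.random.default_rng(0)
A1, A2, A3 = (rng.standard_normal((3, 3)) for _ in range(3))

lhs = np.linalg.inv(A1 @ A2 @ A3)
rhs = np.linalg.inv(A3) @ np.linalg.inv(A2) @ np.linalg.inv(A1)
assert np.allclose(lhs, rhs)   # (A1 A2 A3)^{-1} = A3^{-1} A2^{-1} A1^{-1}
```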
(10) Now, (R₁(R₂(··· (R_k(A)) ···)))B = R₁(R₂(··· (R_k(AB)) ···)) (by part (2) of Theorem 2.1) = I_n (given). Hence R₁(R₂(··· (R_k(A)) ···)) = B^{-1}.
Answers for Test for Chapter 3 — Version A
(1) The area of the parallelogram is 24 square units.
(2) The determinant of 6A is 6⁵(4) = 31104.
(3) The determinant of A is −2.
(4) Note that |AB^T| = |A| |B^T| (by Theorem 3.7) = |A^T| |B| (by Theorem 3.9) = |A^T B| (again by Theorem 3.7).
(5) The determinant of A is 1070.
2
(6) The adjoint matrix for A is 4
A
1
2
8
7
3
=4
3
1
1 5.
12
11
9
2
16
14
6
24
22
9
3
2
2 5 and jAj = 2. Hence
1
1
2
(7) The solution set is {(−2, 3, 5)}.
3
(8) (Optional hint:
x.)
2 pA (x) = x
3
2
1
1
1
1
1
1 5; D = 4 0
Answer: P = 4 0
1
0
1
0
(9)
3
0 0
1 0 5
0 0
λ = 0 is an eigenvalue for A ⟺ pA(0) = 0 ⟺ |0·I_n − A| = 0 ⟺ |−A| = 0 ⟺ (−1)^n |A| = 0 ⟺ |A| = 0 ⟺ A is singular.
(10) Let X be an eigenvector for A corresponding to the eigenvalue λ₁ = 2. Hence, AX = 2X. Therefore, A³X = A²(AX) = A²(2X) = 2(A²X) = 2A(AX) = 2A(2X) = 4AX = 4(2X) = 8X. This shows that X is an eigenvector corresponding to the eigenvalue λ₂ = 8 for A³.
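A numerical illustration of this eigenvalue-cubing argument (a small matrix built to have eigenvalue 2 with a known eigenvector; it is not the matrix from the test):

```python
import numpy as np

P = np.array([[1., 1.],
              [0., 1.]])
A = P @ np.diag([2., 3.]) @ np.linalg.inv(P)
X = P[:, 0]                          # eigenvector of A for the eigenvalue 2

assert np.allclose(A @ X, 2 * X)
A3 = A @ A @ A
assert np.allclose(A3 @ X, 8 * X)    # X is an eigenvector of A^3 for 2^3 = 8
```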
Answers for Test for Chapter 3 — Version B
(1) The volume of the parallelepiped is 59 cubic units.
(2) The determinant of 5A is 5⁴(7) = 4375.
(3) The determinant of A is −1.
(4) Assume A and B are n × n matrices and that AB = −BA, with n odd. Then |AB| = |−BA|. Now, |−BA| = (−1)^n |BA| (by Corollary 3.4) = −|BA|, since n is odd. Hence, |AB| = −|BA|. By Theorem 3.7, this means that |A| |B| = −|B| |A|. Since |A| and |B| are real numbers, this can only be true if either |A| or |B| equals zero. Hence, either A or B is singular (by Theorem 3.5).
(5) The determinant of A is −916.
2
(6) The adjoint matrix for A is 4
A
1
2
1
2
1
1
1
2
1
2
6
=4 0
5
2
3
2
0
2
7
2 5.
1
4
4
2
3
10
8 5 and jAj =
4
4. Hence
(7) The solution set is {(−3, 2, 2)}.
(8) Optional hint:2pA (x) = x3 3 2x2 + x.2
1 3 1
1
Answer: P = 4 1 0 1 5 ; D = 4 0
0 1 1
0
0
1
0
3
0
0 5
0
(9) (a) Because A is upper triangular, we can easily see that pA(x) = (x − 2)⁴. Hence λ = 2 is the only eigenvalue for A.
(b) The reduced row echelon form for 2I₄ − A is
[ 0 1 0 0 ]
[ 0 0 1 0 ]
[ 0 0 0 1 ]
[ 0 0 0 0 ].
There is only one non-pivot column (column 1), and so Step 3 of the Diagonalization Method will produce only one fundamental eigenvector. That fundamental eigenvector is [1, 0, 0, 0].
(10) From the algebraic perspective, pA(x) = x² − (6/5)x + 1, which has no real roots, since its discriminant is −64/25. Hence, A has no real eigenvalues, and so A cannot be diagonalized. From the geometric perspective, for any nonzero vector X, AX will be rotated away from X through an angle θ. Hence, X and AX cannot be parallel. Thus, A cannot have any eigenvectors. This makes A nondiagonalizable.
Answers for Test for Chapter 3 — Version C
(1) The volume of the parallelepiped is 2 cubic units.
(2) The determinant of A is 3.
(3) One possibility: |A| = 5 · (a 4 × 4 minor) = 5(8) · (a 3 × 3 minor) = 5(8)(−12) · (a 2 × 2 minor) = 5(8)(−12)(−11)(15) = 79200. Cofactor expansions were used along the first row in each case, except for the 4 × 4 matrix, where the cofactor expansion was along the second row.
(4) Assume that A^T = A^{-1}. Then |A^T| = |A^{-1}|. By Theorems 3.8 and 3.9, we have |A| = 1/|A|. Hence, |A|² = 1, and so |A| = ±1.
(5) Let A be a nonsingular n × n matrix. Recall that A^{-1} = (1/|A|) adj A, by Corollary 3.12, where adj A is the adjoint of A. Therefore, adj A = |A| A^{-1}. Hence, |adj A| equals the determinant of |A| A^{-1}, which is equal to |A|^n |A^{-1}|, by Corollary 3.4. However, since |A| and |A^{-1}| are reciprocals (by Theorem 3.8), we have |A|^n |A^{-1}| = (|A|^{n-1} |A|)(1/|A|) = |A|^{n-1}.
(6) Base Step (n = 2): Let A = [ a11  a12 ; a21  a22 ]. Note that |A| = a11 a22 − a12 a21. Since A has only two rows, there are only two possible forms for the row operation R.
Case 1: R = ⟨1⟩ ← c⟨2⟩ + ⟨1⟩. Then R(A) = [ a11 + c a21   a12 + c a22 ; a21   a22 ]. Hence |R(A)| = (a11 + c a21)a22 − (a12 + c a22)a21 = a11 a22 + c a21 a22 − a12 a21 − c a22 a21 = a11 a22 − a12 a21 = |A|.
Case 2: R = ⟨2⟩ ← c⟨1⟩ + ⟨2⟩. Then R(A) = [ a11   a12 ; a21 + c a11   a22 + c a12 ]. Hence |R(A)| = a11(a22 + c a12) − a12(a21 + c a11) = a11 a22 + a11 c a12 − a12 a21 − a12 c a11 = a11 a22 − a12 a21 = |A|.
Therefore |A| = |R(A)| in all possible cases when n = 2, thus completing the Base Step.
(7) The solution set is f(3; 3; 2)g.
(8) If P = [ 1  1 ; 2  1 ], then A = P [ −1  0 ; 0  2 ] P^{-1}.
So A⁹ = P [ −1  0 ; 0  2 ]⁹ P^{-1} = P [ −1  0 ; 0  512 ] P^{-1} = [ 1025  −513 ; 1026  −514 ].

(9) (Optional hint: pA(x) = (x + 1)(x − 2)².)
Answer: D = diag(−1, 2, 2), with P the 3 × 3 matrix whose columns are fundamental eigenvectors for λ = −1 and λ = 2 (in that order).
(10) We will show that pA(x) = p_{A^T}(x). Hence, the linear factor (x − λ) will appear exactly k times in p_{A^T}(x) since it appears exactly k times in pA(x). Now p_{A^T}(x) = |xI_n − A^T| = |(xI_n)^T − A^T| (since xI_n is diagonal, hence symmetric) = |(xI_n − A)^T| (by Theorem 1.12) = |xI_n − A| (by Theorem 3.9) = pA(x).
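A numerical check that A and A^T have the same characteristic polynomial (np.poly returns the coefficients of det(xI − A); the matrix is arbitrary, not from the test):

```python
import numpy as np

A = np.array([[1., 2., 0.],
              [3., -1., 4.],
              [0., 5., 2.]])

# Same characteristic polynomial coefficients for A and A^T.
assert np.allclose(np.poly(A), np.poly(A.T))
```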
Answers for Test for Chapter 4 — Version A
(1) Clearly the operation ⊕ is commutative. Also, ⊕ is associative because (x ⊕ y) ⊕ z = (x⁵ + y⁵)^{1/5} ⊕ z = (((x⁵ + y⁵)^{1/5})⁵ + z⁵)^{1/5} = ((x⁵ + y⁵) + z⁵)^{1/5} = (x⁵ + (y⁵ + z⁵))^{1/5} = (x⁵ + ((y⁵ + z⁵)^{1/5})⁵)^{1/5} = x ⊕ (y⁵ + z⁵)^{1/5} = x ⊕ (y ⊕ z).
Clearly, the real number 0 acts as the additive identity, and, for any real number x, the additive inverse of x is −x, because (x⁵ + (−x)⁵)^{1/5} = 0^{1/5} = 0, the additive identity.
Also, the first distributive law holds because a ⊙ (x ⊕ y) = a ⊙ (x⁵ + y⁵)^{1/5} = a^{1/5}(x⁵ + y⁵)^{1/5} = (a(x⁵ + y⁵))^{1/5} = (ax⁵ + ay⁵)^{1/5} = ((a^{1/5}x)⁵ + (a^{1/5}y)⁵)^{1/5} = (a^{1/5}x) ⊕ (a^{1/5}y) = (a ⊙ x) ⊕ (a ⊙ y).
Similarly, the other distributive law holds because (a + b) ⊙ x = (a + b)^{1/5}x = (a + b)^{1/5}(x⁵)^{1/5} = (ax⁵ + bx⁵)^{1/5} = ((a^{1/5}x)⁵ + (b^{1/5}x)⁵)^{1/5} = (a^{1/5}x) ⊕ (b^{1/5}x) = (a ⊙ x) ⊕ (b ⊙ x).
Associativity of scalar multiplication holds because (ab) ⊙ x = (ab)^{1/5}x = a^{1/5}b^{1/5}x = a^{1/5}(b^{1/5}x) = a ⊙ (b^{1/5}x) = a ⊙ (b ⊙ x).
Finally, 1 ⊙ x = 1^{1/5}x = 1x = x.
(2) Since the set of n × n skew-symmetric matrices is nonempty, for any n ≥ 1, it is enough to prove closure under addition and scalar multiplication. That is, we must show that if A and B are any two n × n skew-symmetric matrices, and if c is any real number, then A + B is skew-symmetric, and cA is skew-symmetric. However, A + B is skew-symmetric since (A + B)^T = A^T + B^T (by part (2) of Theorem 1.12) = −A − B (since A and B are skew-symmetric) = −(A + B). Similarly, cA is skew-symmetric because (cA)^T = cA^T (by part (3) of Theorem 1.12) = c(−A) = −cA.
(3) Form the matrix A whose rows are the given vectors. It is easily shown that A row reduces to I₃. Hence, the given vectors span R³. (Alternatively, the vectors span R³ since |A| is nonzero.)
(4) A simplified form for the vectors in span(S) is: {a(x³ − 5x² − 2) + b(x + 3) | a, b ∈ R} = {ax³ − 5ax² + bx + (−2a + 3b) | a, b ∈ R}.
(5) Use the Independence Test Method (in Section 4.4). Note that the matrix
whose columns are the given vectors row reduces to I4 . Thus, there is a
pivot in every column, and the vectors are linearly independent.
(6) The given set is a basis for M22 . Since the set contains four vectors, it
is only necessary to check either that it spans M22 or that it is linearly
independent (see Theorem 4.13). To check for linear independence, form
the matrix whose columns are the entries of the given matrices. (Be sure
to take the entries from each matrix in the same order.) Since this matrix
row reduces to I4 , the set of vectors is linearly independent, hence a basis
for M22 .
(7) Using the Independence Test Method, as explained in Section 4.6, produces the basis B = {3x³ − 2x² + 3x − 1, x³ − 3x² + 2x − 1, 11x³ − 12x² + 13x + 3} for span(S). This set B is a maximal linearly independent subset of S, since the Independence Test Method works precisely by extracting such a subset from S. Since B has 3 elements, dim(span(S)) = 3.
(8) Let S be the set of columns of A. Because A is nonsingular, it row reduces
to In . Hence, the Independence Test Method applied to S shows that S
is linearly independent. Thus, S itself is a basis for span(S). Since S has
n elements, dim(span(S)) = n, and so span(S) = Rn , by Theorem 4.16.
Hence S spans Rn .
(9) One possibility is B = f[2; 1; 0]; [1; 0; 0]; [0; 0; 1]g.
2
2
(10) (a) The transition matrix from B to C is 4 4
3
3
1
2 5.
4
3
3
2
(b) For v = [ 63; 113; 62], [v]B = [3; 2; 2], and [v]C = [ 14; 22; 13].
Answers for Test for Chapter 4 — Version B
(1) The operation ⊕ is not associative in general because (x ⊕ y) ⊕ z = (3(x + y)) ⊕ z = 3(3(x + y) + z) = 9x + 9y + 3z, while x ⊕ (y ⊕ z) = x ⊕ (3(y + z)) = 3(x + 3(y + z)) = 3x + 9y + 9z instead. For a particular counterexample, (1 ⊕ 2) ⊕ 3 = 9 ⊕ 3 = 36, but 1 ⊕ (2 ⊕ 3) = 1 ⊕ 15 = 48.
(2) The set of all polynomials in P₄ for which the coefficient of the second-degree term equals the coefficient of the fourth-degree term has the form {ax⁴ + bx³ + ax² + cx + d}. To show this set is a subspace of P₄, it is enough to show that it is closed under addition and scalar multiplication, as it is clearly nonempty. Clearly, this set is closed under addition because (ax⁴ + bx³ + ax² + cx + d) + (ex⁴ + fx³ + ex² + gx + h) = (a + e)x⁴ + (b + f)x³ + (a + e)x² + (c + g)x + (d + h), which has the correct form because this latter polynomial has the coefficients of its second-degree and fourth-degree terms equal. Similarly, k(ax⁴ + bx³ + ax² + cx + d) = (ka)x⁴ + (kb)x³ + (ka)x² + (kc)x + (kd), which also has the coefficients of its second-degree and fourth-degree terms equal.
(3) Let V be a vector space with subspaces W₁ and W₂. Let W = W₁ ∩ W₂. First, W is clearly nonempty because 0 ∈ W, since 0 must be in both W₁ and W₂ due to the fact that they are subspaces. Thus, to show that W is a subspace of V, it is enough to show that W is closed under addition and scalar multiplication. Let x and y be elements of W, and let c be any real number. Then, x and y are elements of both W₁ and W₂, and since W₁ and W₂ are both subspaces of V, hence closed under addition, we know that x + y is in both W₁ and W₂. Thus x + y is in W. Similarly, since W₁ and W₂ are both subspaces of V, then cx is in both W₁ and W₂, and hence cx is in W.
(4) The vector [2;
2
5
2
4 15
6
4
1
4; 3] is not in span(S). Note
2
3
9
2
6
27
4 5 row reduces to 4
7
3
that
1 0
0
1
5
3
1
3
0
0
0
8
3
23
3
2
3
7
5.
(5) A simplified form for the vectors in span(S) is:
{ a [ 1  0 ; 3  0 ] + b [ 0  1 ; 0  0 ] + c [ 0  0 ; 0  1 ] | a, b, c ∈ R } = { [ a  b ; 3a  c ] | a, b, c ∈ R }.
(6) Form the matrix A whose columns are the coefficients of the polynomials
in S. Row reducing A shows that there is a pivot in every column. Hence
S is linearly independent by the Independence Test Method in Section 4.4.
Hence, since S has four elements and dim(P3 ) = 4, Theorem 4.13 shows
that S is a basis for P3 . Therefore, S spans P3 , showing that p(x) can be
expressed as a linear combination of the elements in S. Finally, since S is
linearly independent, the uniqueness assertion is true by Theorem 4.9.
(7) A basis for span(S) is {[2, 3, 4, 1], [3, 1, 2, 2], [2, 8, 12, 3]}. Hence dim(span(S)) = 3. Since dim(R⁴) = 4, S does not span R⁴.
(8) Since A is nonsingular, AT is also nonsingular (Theorem 2.11). Therefore
AT row reduces to In , giving a pivot in every column. The Independence
Test Method from Section 4.4 now shows the columns of AT , and hence
the rows of A, are linearly independent.
(9) One possible answer: B = f[2; 1; 0; 0]; [ 3; 0; 1; 0]; [1; 0; 0; 0], [0; 0; 0; 1]g
3
2
3
1
1
4
3 5.
(10) (a) The transition matrix from B to C is 4 2
4
2
3
(b) For v = 100x2 +64x 181, [v]B = [2; 4; 1], and [v]C = [9; 9; 19].
Answers for Test for Chapter 4 — Version C
(1) From part (2) of Theorem 4.1, we know 0v = 0 in any general vector space. Thus, 0 ⊙ [x, y] = [0x + 3(0) − 3, 0y − 4(0) + 4] = [−3, 4] is the additive identity 0 in this given vector space. Similarly, by part (3) of Theorem 4.1, (−1)v is the additive inverse of v. Hence, the additive inverse of [x, y] is (−1) ⊙ [x, y] = [−x − 3 − 3, −y + 4 + 4] = [−x − 6, −y + 8].
(2) Let W be the set of all 3-vectors orthogonal to [2, 3, 1]. First, [0, 0, 0] ∈ W because [0, 0, 0] · [2, 3, 1] = 0. Hence W is nonempty. Thus, to show that W is a subspace of R³, it is enough to show that W is closed under addition and scalar multiplication. Let x, y ∈ W; that is, let x and y both be orthogonal to [2, 3, 1]. Then, x + y is also orthogonal to [2, 3, 1], because (x + y) · [2, 3, 1] = (x · [2, 3, 1]) + (y · [2, 3, 1]) = 0 + 0 = 0. Similarly, if c is any real number, and x is in W, then cx is also in W because (cx) · [2, 3, 1] = c(x · [2, 3, 1]) (by part (4) of Theorem 1.5) = c(0) = 0.
(3) The vector [−3, −6, −5] is in span(S) since it equals (11/5)[3, 6, 1] − (12/5)[4, 8, 3] + 0[5, 10, 1].
(4) A simplified form for the vectors in span(S) is:
{a(x³ − 2x + 4) + b(x² + 3x − 2) | a, b ∈ R} = {ax³ + bx² + (−2a + 3b)x + (4a − 2b) | a, b ∈ R}.
(5) Form the matrix A whose columns are the entries of each given matrix
(taking the entries in the same order each time). Noting that every column
in the reduced row echelon form for A contains a pivot, the Independence
Test Method from Section 4.4 shows that the given set is linearly independent.
(6) A maximal linearly independent subset of S is
3
2
1
4
;
2
2
1
0
4
3
. Theorem 4.14 shows that this set is a basis for span(S),
2
2
and so dim(span(S)) = 3.
(7) Note that span(S) is the row space of A^T. Now A^T is also singular (use Theorems 1.12 and 2.11), and so rank(A^T) < n by Theorem 2.14. Hence, the Simplified Span Method as explained in Section 4.6 for finding a basis for span(S) will produce fewer than n basis elements (the nonzero rows of the reduced row echelon form of A^T), showing that dim(span(S)) < n.
(8) One possible answer is {2x³ − 3x² + 3x + 1, x³ + 4x² − 6x − 2, x³, x}. (This is obtained by enlarging T with the standard basis for P₃ and then eliminating x² and 1 for redundancy.)
(9) First, by Theorem 4.4, E₁ is a subspace of Rⁿ, and so by Theorem 4.16, dim(E₁) ≤ dim(Rⁿ) = n. But, if dim(E₁) = n, then Theorem 4.16 shows that E₁ = Rⁿ. Hence, for every vector X ∈ Rⁿ, AX = X. In particular, this is true for each vector e₁, …, eₙ. Using these vectors as columns forms the matrix Iₙ. Thus we see that AIₙ = Iₙ, implying A = Iₙ. This contradicts the given condition A ≠ Iₙ. Hence dim(E₁) ≠ n, and so dim(E₁) < n.
3
2
2
1 3
2 2 5.
(10) (a) The transition matrix from B to C is 4 3
1
2 3
(b) For v = [ 109; 155; 71], [v]B = [ 1; 3; 3], and
[v]C = [ 14; 15; 16].
Answers to Test for Chapter 5 — Version A
(1) To show f is a linear operator, we must show that f(A₁ + A₂) = f(A₁) + f(A₂), for all A₁, A₂ ∈ M_nn, and that f(cA) = cf(A), for all c ∈ R and all A ∈ M_nn. However, the first equation holds since B(A₁ + A₂)B^{-1} = BA₁B^{-1} + BA₂B^{-1} by the distributive laws of matrix multiplication over addition. Similarly, the second equation holds true since B(cA)B^{-1} = cBAB^{-1} by part (4) of Theorem 1.14.
(2) L([2; 3; 1]) = [ 1; 11; 4].
2
167
46
(3) The matrix ABC = 4
170
3
117
32 5.
119
(4) The kernel of L = {[2b + d, b, 3d, d] | b, d ∈ R}. Hence, a basis for ker(L) = {[2, 1, 0, 0], [1, 0, 3, 1]}. A basis for range(L) is the set {[4, 3, 4], [−1, 1, 2]}, the first and third columns of the original matrix. Thus, dim(ker(L)) + dim(range(L)) = 2 + 2 = 4 = dim(R⁴), and the Dimension Theorem is verified.
(5) (a) Now, dim(range(L)) ≤ dim(R³) = 3. Hence dim(ker(L)) = dim(P₃) − dim(range(L)) (by the Dimension Theorem) = 4 − dim(range(L)) ≥ 4 − 3 = 1. Thus, ker(L) is nontrivial.
(b) Any nonzero polynomial p in ker(L) satisfies the given conditions.
(6) The 3 × 3 matrix given in the definition of L is clearly the matrix for L with respect to the standard basis for R³. Since the determinant of this matrix is 1, it is nonsingular. Theorem 5.15 thus shows that L is an isomorphism, implying also that it is one-to-one.
(7) First, L is a linear operator, because L(p₁ + p₂) = (p₁ + p₂) − (p₁ + p₂)′ = (p₁ − p₁′) + (p₂ − p₂′) = L(p₁) + L(p₂), and because L(cp) = cp − (cp)′ = cp − cp′ = c(p − p′) = cL(p).
Since the domain and codomain of L have the same dimension, in order to show L is an isomorphism, it is only necessary to show either L is one-to-one, or L is onto (by Corollary 5.21). We show L is one-to-one. Suppose L(p) = 0 (the zero polynomial). Then, p − p′ = 0, and so p = p′. But these polynomials have different degrees unless p is constant, in which case p′ = 0. Therefore, p = p′ = 0. Hence, ker(L) = {0}, and L is one-to-one.
(8) B =
2
1
6 0
P=6
4 0
0
1
0
0
0
0
1
1
0
;
0
0
0
1
0
1
1
0
;
0
0
2
3
0
6
1 7
7, D = 6
5
4
1
0
0
1
0
0
0
0
0
1
;
0
0
0
0
0
0
0
0
3
0
0 7
7
0 5
2
1
0
,
Other answers for B and P are possible since the eigenspace E0 is threedimensional.
(9) (a) False
(b) True
(10) The matrix for L with respect to the standard basis for P₃ is lower triangular with all 1's on the main diagonal. (The ith column of this matrix consists of the coefficients of (x + 1)^{4−i} in order of descending degree.) Thus, pL(x) = (x − 1)⁴, and λ = 1 is the only eigenvalue for L. Now, if L were diagonalizable, the geometric multiplicity of λ = 1 would have to be 4, and we would have E₁ = P₃. This would imply that L(p) = p for all p ∈ P₃. But this is clearly false since the image of the polynomial x is x + 1.
Answers to Test for Chapter 5 — Version B
(1) To show f is a linear operator, we must show that f(A₁ + A₂) = f(A₁) + f(A₂), for all A₁, A₂ ∈ M_nn, and that f(cA) = cf(A), for all c ∈ R and all A ∈ M_nn. However, the first equation holds since (A₁ + A₂) + (A₁ + A₂)^T = A₁ + A₂ + A₁^T + A₂^T (by part (2) of Theorem 1.12) = (A₁ + A₁^T) + (A₂ + A₂^T) (by commutativity of matrix addition). Similarly, f(cA) = (cA) + (cA)^T = cA + cA^T = c(A + A^T) = cf(A).
x2 + 2x + 3
2
55 165
(3) The matrix ABC = 4 46 139
32
97
(2) L([6; 4; 7]) =
49
42
29
3
56
45 5.
32
(4) The kernel of L = {[3c, 2c, c, 0] | c ∈ R}. Hence, a basis for ker(L) = {[3, 2, 1, 0]}. The range of L is spanned by columns 1, 2, and 4 of the original matrix. Since these columns are linearly independent, a basis for range(L) = {[2, 1, 4, 4], [5, 1, 2, 1], [8, 1, 1, 2]}. The Dimension Theorem is verified because dim(ker(L)) = 1, dim(range(L)) = 3, and the sum of these dimensions equals the dimension of R⁴, the domain of L.
(5) (a) This statement is true. dim(range(L₁)) = rank(A) (by part (1) of Theorem 5.9) = rank(A^T) (by Corollary 5.11) = dim(range(L₂)) (by part (1) of Theorem 5.9).
(b) This statement is false. For a particular counterexample, use A = O_mn, in which case dim(ker(L₁)) = n ≠ m = dim(ker(L₂)). In general, using the Dimension Theorem, dim(ker(L₁)) = n − dim(range(L₁)), and dim(ker(L₂)) = m − dim(range(L₁)). But, by part (a), dim(range(L₁)) = dim(range(L₂)), and since m ≠ n, dim(ker(L₁)) can never equal dim(ker(L₂)).
(6) Solving the appropriate homogeneous system to find the kernel of L gives ker(L) = { [ 0  0 ; 0  0 ] }. Hence, L is one-to-one by part (1) of Theorem 5.12. Since the domain and codomain of L have the same dimension, L is also an isomorphism by Corollary 5.21.
(7) First, L is a linear operator, since L(B₁ + B₂) = (B₁ + B₂)A^{-1} = B₁A^{-1} + B₂A^{-1} = L(B₁) + L(B₂), and since L(cB) = (cB)A^{-1} = c(BA^{-1}) = cL(B).
Since the domain and codomain of L have the same dimension, in order to show that L is an isomorphism, by Corollary 5.21 it is enough to show either that L is one-to-one, or that L is onto. We show L is one-to-one. Suppose L(B₁) = L(B₂). Then B₁A^{-1} = B₂A^{-1}. Multiplying both sides on the right by A gives B₁ = B₂. Hence, L is one-to-one.
(8) (Optional Hint: pL(x) = x⁴ − 2x² + 1 = (x − 1)²(x + 1)².)
Answer:
3 0
0 3
2 0
0 2
B=
;
;
;
;
2 0
0 2
1 0
0 1
3
2
3
2
3 0 2 0
7
0 12 0
7
6
6 0
7
0 12 7
7, P = 6 0 3 0 2 7 ,
A=6
4 2 0 1 0 5
4 4
0
7
0 5
0 2 0 1
0
4
0
7
3
2
1 0
0
0
6 0 1
0
0 7
7 Other answers for B and P are possible since
D=6
4 0 0
1
0 5
0 0
0
1
the eigenspaces E₁ and E₋₁ are both two-dimensional.
(9) (a) False
(b) False
(10) All vectors parallel to [4, 2, 3] are mapped to themselves under L, and so λ = 1 is an eigenvalue for L. Now, all other nonzero vectors in R³ are rotated around the axis through the origin parallel to [4, 2, 3], and so their images under L are not parallel to themselves. Hence E₁ is the one-dimensional subspace spanned by [4, 2, 3], and there are no other eigenvalues. Therefore, the sum of the geometric multiplicities of all eigenvalues is 1, which is less than 3, the dimension of R³. Thus Theorem 5.28 shows that L is not diagonalizable.
Answers to Test for Chapter 5 — Version C
(1) To show that L is a linear transformation, we must prove that L(p + q) = L(p) + L(q) and that L(cp) = cL(p) for all p, q ∈ Pₙ and all c ∈ R. Let p = aₙxⁿ + ··· + a₁x + a₀ and let q = bₙxⁿ + ··· + b₁x + b₀. Then
L(p + q) = L(aₙxⁿ + ··· + a₁x + a₀ + bₙxⁿ + ··· + b₁x + b₀)
= L((aₙ + bₙ)xⁿ + ··· + (a₁ + b₁)x + (a₀ + b₀))
= (aₙ + bₙ)Aⁿ + ··· + (a₁ + b₁)A + (a₀ + b₀)Iₙ
= aₙAⁿ + ··· + a₁A + a₀Iₙ + bₙAⁿ + ··· + b₁A + b₀Iₙ
= L(p) + L(q).
Similarly,
L(cp) = L(c(aₙxⁿ + ··· + a₁x + a₀))
= L(caₙxⁿ + ··· + ca₁x + ca₀)
= caₙAⁿ + ··· + ca₁A + ca₀Iₙ
= c(aₙAⁿ + ··· + a₁A + a₀Iₙ)
= cL(p).
29
2
2
(2) L([ 3; 1; 4]) =
6
(3) The matrix ABC = 6
4
(4) The matrix for
2
1 0
A=4 1 0
0 1
ker(L) =
d
21
6
44
13
28
10
.
91
23
60
16
3
350
118 7
7.
215 5
97
L with respect to the standard bases for M22 and P2 is
3
2
3
0 1
1 0 0 1
1 0 5, which row reduces to 4 0 1 0 0 5. Hence
0 0
0 0 1 1
1
1
0
1
d 2 R , which has dimension 1. A basis for
range(L) consists of the …rst three “columns” of A, and hence
dim(range(L)) = 3, implying range(L) = P2 . Note that
dim(ker(L)) + dim(range(L)) = 1 + 3 = 4 = dim(M22 ), thus verifying
the Dimension Theorem.
(5) (a) v ∈ ker(L₁) ⟹ L₁(v) = 0_W ⟹ L₂(L₁(v)) = L₂(0_W) ⟹ (L₂ ∘ L₁)(v) = 0_Y ⟹ v ∈ ker(L₂ ∘ L₁).
Hence ker(L₁) ⊆ ker(L₂ ∘ L₁).
(b) By part (a) and Theorem 4.16, dim(ker(L₁)) ≤ dim(ker(L₂ ∘ L₁)). Hence, by the Dimension Theorem,
dim(range(L₁)) = dim(V) − dim(ker(L₁)) ≥ dim(V) − dim(ker(L₂ ∘ L₁)) = dim(range(L₂ ∘ L₁)).
(6) Solving the appropriate homogeneous system to find the kernel of L shows
that ker(L) consists only of the zero polynomial. Hence, L is one-to-one
by part (1) of Theorem 5.12. Also, dim(ker(L)) = 0. However, L is not
an isomorphism, because by the Dimension Theorem, dim(range(L)) = 3,
which is less than the dimension of the codomain R4 .
(7) First, we must show that L is a linear transformation. Now
L([a₁, …, aₙ] + [b₁, …, bₙ]) = L([a₁ + b₁, …, aₙ + bₙ])
= (a₁ + b₁)v₁ + ··· + (aₙ + bₙ)vₙ
= a₁v₁ + b₁v₁ + ··· + aₙvₙ + bₙvₙ
= a₁v₁ + ··· + aₙvₙ + b₁v₁ + ··· + bₙvₙ
= L([a₁, …, aₙ]) + L([b₁, …, bₙ]).
Also,
L(c[a₁, …, aₙ]) = L([ca₁, …, caₙ])
= ca₁v₁ + ··· + caₙvₙ
= c(a₁v₁ + ··· + aₙvₙ)
= cL([a₁, …, aₙ]).
In order to prove that L is an isomorphism, it is enough to show that L is one-to-one or that L is onto (by Corollary 5.21), since the dimensions of the domain and the codomain are the same. We show that L is one-to-one. Suppose L([a₁, …, aₙ]) = 0_V. Then a₁v₁ + a₂v₂ + ··· + aₙvₙ = 0_V, implying a₁ = a₂ = ··· = aₙ = 0 because B is a linearly independent set (since B is a basis for V). Hence [a₁, …, aₙ] = [0, …, 0]. Therefore, ker(L) = {[0, …, 0]}, and L is one-to-one by part (1) of Theorem 5.12.
(8) B = (x2
2
P=4
x + 2; x2
1
1
2
2x; x2 + 2),
2
0 0
1 1
2 0 5, D = 4 0 1
0 0
0 2
3
3
0
0 5
1
Other answers for B and P are possible since the eigenspace E1 is twodimensional.
(9) (a) True
(b) True
(10) If p is a nonconstant polynomial, then p′ is nonzero and has a degree lower than that of p. Hence p′ ≠ cp, for any c ∈ R. Hence, nonconstant polynomials cannot be eigenvectors. All constant polynomials, however, are in the eigenspace E₀ of λ = 0. Thus, E₀ is one-dimensional. Therefore, the sum of the geometric multiplicities of all eigenvalues is 1, which is less than dim(Pₙ) = n + 1, since n > 0. Hence L is not diagonalizable by Theorem 5.28.
Answers for Test for Chapter 6 — Version A
(1) Performing the Gram-Schmidt Process as outlined in the text and using appropriate multiples to avoid fractions leads to the following basis:
f[4; 3; 2]; [13; 12; 8]; [0; 2; 3]g.
2
3
7 4
4
8 5.
(2) Matrix for L = 19 4 4 1
4 8
1
(3) Performing the Gram-Schmidt Process as outlined in the text and using appropriate multiples to avoid fractions leads to the following basis for W: {[2, 3, 1], [−11, 1, 19]}. Continuing the process leads to the following basis for W⊥: {[8, −7, 5]}. Also, dim(W⊥) = 1.
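A minimal sketch of the Gram-Schmidt step used here (the two starting vectors below are arbitrary placeholders, not the vectors from the test; the text clears fractions by hand afterward):

```python
import numpy as np

def gram_schmidt(vectors):
    """Return an orthogonal (not normalized) set via Gram-Schmidt."""
    basis = []
    for v in vectors:
        w = v.astype(float).copy()
        for b in basis:
            w -= (w @ b) / (b @ b) * b   # subtract the projection onto b
        basis.append(w)
    return basis

w1, w2 = gram_schmidt([np.array([2., 3., 1.]), np.array([1., 1., 4.])])
assert abs(w1 @ w2) < 1e-12   # the resulting vectors are orthogonal
```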
(4) ‖Ax‖² = (Ax) · (Ax) = x · x (by Theorem 6.9) = ‖x‖². Taking square roots yields ‖Ax‖ = ‖x‖.
(5) Let w ∈ W. We need to show that w ∈ (W⊥)⊥. To do this, we need only prove that w ⊥ v for all v ∈ W⊥. But, by the definition of W⊥, w · v = 0, since w ∈ W, which completes the proof.
(6) The characteristic polynomial is pL(x) = x²(x − 1)² = x⁴ − 2x³ + x². (This is the characteristic polynomial for every orthogonal projection onto a 2-dimensional subspace of R⁴.)
(7) If v1 and v2 are unit vectors in the given plane with v1 ⊥ v2, then the matrix for L with respect to the ordered orthonormal basis ((1/√70)[3, 5, 6], v1, v2) is
[ 0 0 0 ]
[ 0 1 0 ]
[ 0 0 1 ].
(8) (Optional Hint: pL(x) = x³ − 18x² + 81x.)
Answer: B = ((1/3)[2, −2, 1], (1/3)[1, 2, 2], (1/3)[2, 1, −2]), D =
[ 0 0 0 ]
[ 0 9 0 ]
[ 0 0 9 ].
Other answers for B are possible since the eigenspace E₉ is two-dimensional. Another likely answer for B is ((1/3)[2, −2, 1], (1/√2)[1, 1, 0], (1/√18)[−1, 1, 4]).
(9) A³ = (1/√5)[ 1  2 ; 2  −1 ] [ −1  0 ; 0  8 ] (1/√5)[ 1  2 ; 2  −1 ]. Hence, one possible answer is
A = (1/√5)[ 1  2 ; 2  −1 ] [ −1  0 ; 0  2 ] (1/√5)[ 1  2 ; 2  −1 ] = (1/5)[ 7  −6 ; −6  −2 ].
(10) (Optional Hint: Use the definition of a symmetric operator to show that (λ₂ − λ₁)(v1 · v2) = 0.)
Answer: L symmetric ⟹ v1 · L(v2) = L(v1) · v2 ⟹ v1 · (λ₂v2) = (λ₁v1) · v2 ⟹ λ₂(v1 · v2) = λ₁(v1 · v2) ⟹ (λ₂ − λ₁)(v1 · v2) = 0 ⟹ (v1 · v2) = 0 (since λ₂ ≠ λ₁) ⟹ v1 ⊥ v2.
Answers for Test for Chapter 6 — Version B
(1) Performing the Gram-Schmidt Process as outlined in the text and using appropriate multiples to avoid fractions leads to the following basis:
f[2; 3; 1]; [30; 11; 27]; [5; 6; 8]g.
2
3
16
0
12
1 4
0 25
0 5.
(2) Matrix for L = 25
12
0
9
(3) Performing the Gram-Schmidt Process as outlined in the text and using appropriate multiples to avoid fractions leads to the following basis for W: {[2, 2, 3], [−33, 18, 10]}. Continuing the process leads to the following basis for W⊥: {[2, 7, −6]}. Using this information, we can show that [−12, −36, 43] = [0, 6, 7] + [−12, −42, 36], where the first vector of this sum is in W, and the second is in W⊥.
(4) Because A and B are orthogonal, AAT = BBT = In . Hence (AB)(AB)T
= (AB)(BT AT ) = A(BBT )AT = AIn AT = AAT = In . Thus, AB is
orthogonal.
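A numerical illustration of this closure property, using two rotation matrices as the orthogonal matrices (illustrative choices, not matrices from the test):

```python
import numpy as np

def is_orthogonal(M):
    return np.allclose(M @ M.T, np.eye(M.shape[0]))

def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])   # 2 x 2 rotation (orthogonal)

A, B = rot(0.3), rot(1.1)
assert is_orthogonal(A) and is_orthogonal(B)
assert is_orthogonal(A @ B)   # the product is again orthogonal
```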
(5) By Corollary 6.13, dim(W) = n − dim(W⊥) = n − (n − dim((W⊥)⊥)) = dim((W⊥)⊥). Thus, by Theorem 4.16, W = (W⊥)⊥.
(6) Now S is an orthogonal basis for W, and so projW v = ((v · [0, 3, 6, −2])/([0, 3, 6, −2] · [0, 3, 6, −2]))[0, 3, 6, −2] + ((v · [−6, 2, 0, 3])/([−6, 2, 0, 3] · [−6, 2, 0, 3]))[−6, 2, 0, 3] = (1/7)[−24, 17, 18, 6]. Thus, projW⊥ v = v − projW v = (1/7)[3, 18, −11, −6].
(7) Yes, by Theorems 6.18 and 6.20, because the matrix for L with respect to
the standard basis is symmetric.
(8) (Optional Hint: λ₁ = 0, λ₂ = 49.)
Answer: B = ((1/7)[−3, 6, 2], (1/7)[6, 2, 3], (1/7)[2, 3, −6]), D =
[ 0  0  0 ]
[ 0 49  0 ]
[ 0  0 49 ].
Other answers for B are possible since the eigenspace E₄₉ is two-dimensional. Another likely answer for B is ((1/7)[−3, 6, 2], (1/√5)[2, 1, 0], (1/√245)[2, −4, 15]).
(9) A² = (1/√2)[ 1  1 ; 1  −1 ] [ 4  0 ; 0  16 ] (1/√2)[ 1  1 ; 1  −1 ]. Hence, one possible answer is
A = (1/√2)[ 1  1 ; 1  −1 ] [ 2  0 ; 0  4 ] (1/√2)[ 1  1 ; 1  −1 ] = [ 3  −1 ; −1  3 ].
(10) (Optional Hint: Consider the orthogonal diagonalization of A and the fact that A² = AA^T.)
Answer: Since A is symmetric, it is orthogonally diagonalizable. So there is an orthogonal matrix P and a diagonal matrix D such that D = P^{-1}AP, or equivalently, A = PDP^{-1} = PDP^T. Since all eigenvalues of A are ±1, D has only these values on its main diagonal, and hence D² = I_n. Therefore, AA^T = PDP^T(PDP^T)^T = PDP^T((P^T)^T D^T P^T) = PDP^T(PDP^T) = PD(P^T P)DP^T = PD I_n DP^T = PD²P^T = P I_n P^T = PP^T = I_n. Thus, A^T = A^{-1}, and A is orthogonal.
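A quick numerical check of this result, building a symmetric matrix with eigenvalues ±1 from an arbitrary orthogonal P (an illustrative construction, not the matrix from the test):

```python
import numpy as np

theta = 0.7
P = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # orthogonal
D = np.diag([1., -1.])                            # eigenvalues +1 and -1
A = P @ D @ P.T

assert np.allclose(A, A.T)               # A is symmetric
assert np.allclose(A @ A.T, np.eye(2))   # A A^T = I, so A is orthogonal
```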
Answers for Test for Chapter 6 — Version C
(1) (Optional Hint: The two given vectors are orthogonal. Therefore, only two additional vectors need to be found.) Performing the Gram-Schmidt Process as outlined in the text and using appropriate multiples to avoid fractions leads to the following basis:
{[−3, 1, 1, 2], [2, 4, 0, 1], [22, −19, 21, 32], [0, 1, 7, −4]}.
2
3
5
4 2
5 2 5.
(2) Matrix for L = 91 4 4
2
2 8
(3) Performing the Gram-Schmidt Process as outlined in the text and using appropriate multiples to avoid fractions leads to the following basis for W: {[5, 1, 2], [2, 0, −5]}. Continuing the process leads to the following basis for W⊥: {[5, −29, 2]}. Also, dim(W⊥) = 1.
(4) Suppose n is odd. Then |A|² = |A²| = |AA| = |A(−A^T)| = |−AA^T| = (−1)^n |AA^T| = (−1)|I_n| = −1. But |A|² cannot be negative. This contradiction shows that n is even.
(5) Let w₂ ∈ W₂⊥. We must show that w₂ ∈ W₁⊥. To do this, we show that w₂ · w₁ = 0 for all w₁ ∈ W₁. So, let w₁ ∈ W₁ ⊆ W₂. Thus, w₁ ∈ W₂. Hence, w₁ · w₂ = 0, because w₂ ∈ W₂⊥. This completes the proof.
(6) Let v = [5; 1; 0; 4]. Then, projW v = 31 [14; 5; 2; 12]; projW ? v =
1
1
3 [1; 2; 2; 0]; minimum distance = 3 [1; 2; 2; 0] = 1.
(7) If v1 and v2 are unit vectors in the given plane with v1 ⊥ v2, then the matrix for L with respect to the ordered orthonormal basis ((1/√26)[4, 3, 1], v1, v2) is
[ −1 0 0 ]
[  0 1 0 ]
[  0 0 1 ].
Hence, L is orthogonally diagonalizable. Therefore, L is a symmetric operator, and so its matrix with respect to any orthonormal basis is symmetric. In particular, this is true of the matrix for L with respect to the standard basis.
(8) (Optional Hint: λ₁ = 121, λ₂ = −121.)
Answer: B = ((1/11)[2, 6, −9], (1/11)[−9, 6, 2], (1/11)[6, 7, 6]), D =
[ 121   0    0 ]
[   0 121    0 ]
[   0   0 −121 ].
Other answers for B are possible since the eigenspace E₁₂₁ is two-dimensional. Another likely answer for B is ((1/√85)[−7, 6, 0], (1/(11√85))[−36, −42, 85], (1/11)[6, 7, 6]).
(9) A² = (1/√10)[ 3  1 ; 1  −3 ] [ 0  0 ; 0  100 ] (1/√10)[ 3  1 ; 1  −3 ]. Hence, one possible answer is
A = (1/√10)[ 3  1 ; 1  −3 ] [ 0  0 ; 0  10 ] (1/√10)[ 3  1 ; 1  −3 ] = [ 1  −3 ; −3  9 ].
(10) (Optional Hint: First show that A and B orthogonally diagonalize to the same matrix.)
Answer: First, because A and B are symmetric, they are both orthogonally diagonalizable, by Theorems 6.18 and 6.20. But, since the characteristic polynomials of A and B are equal, these matrices have the same eigenvalues with the same algebraic multiplicities. Thus, there is a single diagonal matrix D, having these eigenvalues on its main diagonal, such that D = P₁^{-1}AP₁ and D = P₂^{-1}BP₂, for some orthogonal matrices P₁ and P₂. Hence P₁^{-1}AP₁ = P₂^{-1}BP₂, or A = P₁P₂^{-1}BP₂P₁^{-1}. Let P = P₁P₂^{-1}. Then A = PBP^{-1}, where P is orthogonal by parts (2) and (3) of Theorem 6.6.
Answers to Test for Chapter 7 — Version A
(1) a b = b a = [2
2i; 1 + i; 5]
(2) (P*HP)* = P*H*(P*)* (by part (5) of Theorem 7.2) = P*H*P (by part (1) of Theorem 7.2) = P*HP (since H is Hermitian). Hence P*HP is Hermitian.
(3) (a) False
(b) False
(4) z₁ = 1 + i, z₂ = 1 − 2i
(5) Let A = [ 0  1 ; −1  0 ]. Then pA(x) = x² + 1, which has no real roots. Hence A has no real eigenvalues, and so is not diagonalizable as a real matrix. However, pA(x) has complex roots i and −i. Thus, this 2 × 2 matrix has two distinct complex eigenvalues, and thus must be diagonalizable.
(6) (Optional Hint: pL(x) = x³ − 2ix² − x.)
Answer: λ₁ = 0, λ₂ = i; basis for E₀ = {[1 + i, i, 1]}, basis for E_i = {[i, 0, 1], [0, 1, 0]}. L is diagonalizable.
(7) The statement is false. The polynomial x is in the given subset, but ix is
not. Hence, the subset is not closed under scalar multiplication.
(8) B = {[1 + 3i, 2 + i, 1 − 2i], [1, 2i, 3 + i]}
(9)
As a complex linear
transformation:
1
0
As
2
1
6 0
6
4 0
0
i 0 0
0 1 i
a real linear transformation:
3
0 0
1 0 0 0
0
1 1
0 0 0 0
0 7
7
0 0
0 1 0 0
1 5
0 0
0 0 1 1
0
(10) (Optional Hint: The two given vectors are already orthogonal.)
Answer: f[1; i; 1 + i]; [3; i; 1 i]; [0; 2; 1 + i]g
(11) P =
p
2
2
1
1
1
1
(12) (AB)* = B*A* = B^{-1}A^{-1} = (AB)^{-1}, and so AB is unitary. Also, the square of the modulus of |A| equals |A| · conj(|A|) = |A| · |A*| (by part (3) of Theorem 7.5) = |AA*| = |I_n| = 1.
(13) Proofs for the five properties of an inner product:
(1) ⟨f, f⟩ = f²(0) + f²(1) + f²(2) ≥ 0, since the sum of squares is always nonnegative.
(2) Suppose f(x) = ax² + bx + c. Then ⟨f, f⟩ = 0
⟹ f(0) = 0, f(1) = 0, f(2) = 0 (since a sum of squares of real numbers can only be zero if each number is zero)
⟹ (a, b, c) satisfies
    c = 0
    a + b + c = 0
    4a + 2b + c = 0
⟹ a = b = c = 0 (since the coefficient matrix of the homogeneous system has determinant −2).
Conversely, it is clear that ⟨0, 0⟩ = 0.
(3) ⟨f, g⟩ = f(0)g(0) + f(1)g(1) + f(2)g(2) = g(0)f(0) + g(1)f(1) + g(2)f(2) = ⟨g, f⟩
(4) ⟨f + g, h⟩ = (f(0) + g(0))h(0) + (f(1) + g(1))h(1) + (f(2) + g(2))h(2)
= f(0)h(0) + g(0)h(0) + f(1)h(1) + g(1)h(1) + f(2)h(2) + g(2)h(2)
= f(0)h(0) + f(1)h(1) + f(2)h(2) + g(0)h(0) + g(1)h(1) + g(2)h(2)
= ⟨f, h⟩ + ⟨g, h⟩
(5) ⟨kf, g⟩ = (kf)(0)g(0) + (kf)(1)g(1) + (kf)(2)g(2)
= kf(0)g(0) + kf(1)g(1) + kf(2)g(2)
= k(f(0)g(0) + f(1)g(1) + f(2)g(2))
= k⟨f, g⟩
Also, ⟨f, g⟩ = 14 and ‖f‖ = √35.
(14) The distance between x and y is 11.
(15) Note that ⟨x + y, x − y⟩ = ⟨x, x⟩ − ⟨x, y⟩ + ⟨y, x⟩ − ⟨y, y⟩ = ‖x‖² − ‖y‖². So, if ‖x‖ = ‖y‖, then ⟨x + y, x − y⟩ = ‖x‖² − ‖y‖² = 0. Conversely, ⟨x + y, x − y⟩ = 0 ⟹ ‖x‖² − ‖y‖² = 0 ⟹ ‖x‖² = ‖y‖² ⟹ ‖x‖ = ‖y‖.
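A numerical illustration of this equivalence using the standard dot product on R³ (any real inner product behaves the same way; the two vectors are arbitrary vectors of equal length):

```python
import numpy as np

x = np.array([3., 0., 4.])
y = np.array([0., 5., 0.])      # ||x|| = ||y|| = 5

# Equal lengths force x + y and x - y to be orthogonal.
assert np.isclose(np.dot(x + y, x - y), 0.0)
```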
(16) fx2 ; x; 3
5x2 g
(17) w1 = [ 8; 5; 16], w2 = [8; 6; 19]
Answers to Test for Chapter 7 — Version B
(1) A B =
1 + 14i 33 3i
5 5i 5 + 2i
(2) (iH)* = (conj i)H* (by part (3) of Theorem 7.2) = (−i)H* = (−i)H (since H is Hermitian) = −(iH). Hence, iH is skew-Hermitian.
(3) (a) False
(b) True
(4) The determinant of the matrix of coefficients is 1, and so the inverse matrix
6 + 5i
2 14i
is
. Using this gives z1 = 2 + i and z2 = i.
2+i
4+i
(5) Let A be an n × n complex matrix. By the Fundamental Theorem of Algebra, pA(x) factors into n linear factors. Hence, pA(x) = (x − λ₁)^{k₁} ··· (x − λ_m)^{k_m}, where λ₁, …, λ_m are the eigenvalues of A, and k₁, …, k_m are their corresponding algebraic multiplicities. But, since the degree of pA(x) is n, Σᵢ₌₁^m kᵢ = n.
(6) λ = i; basis for E_i = {[1, 2i]}. L is not diagonalizable.
(7) The set of Hermitian n n matrices is not closed under scalar multiplication, and so is not a subspace of MC
nn . To see this, note that
i
1
0
0
0
i 0
0 0
=
1
0
. However,
not. Since the set of Hermitian n
does not have a dimension.
(8) B = f[1; 0; i]; [0; 1; 1
(9) Note that L i
1
0
but iL
(10) f[1 + i; 1
0
0
0
0
is Hermitian and
i 0
0 0
is
n matrices is not a vector space, it
i]g
1
0
0
0
=i
i 0
0 0
=L
1
0
0
0
=i
i; 2]; [3; i; 1 + i]; [0; 2; 1
i 0
0 0
=
1
0
0
0
=
i 0
0 0
=
i 0
0 0
,
.
i]g
1 i 1+i
4 4i
and ZZ =
= Z Z. Hence, Z
1 i 1 i
4i 4
is normal. Thus, by Theorem 7.9, Z is unitarily diagonalizable.
(11) Now, Z =
(12) Now, A unitary implies AA* = I_n. Suppose A² = I_n. Hence AA* = AA. Multiplying both sides by A^{-1} (= A*) on the left yields A* = A, and A is Hermitian. Conversely, if A is Hermitian, then A = A*. Hence AA* = I_n yields A² = I_n.
(13) Proofs for the five properties of an inner product:
(1) ⟨x, x⟩ = Ax · Ax ≥ 0, since · is an inner product.
(2) ⟨x, x⟩ = 0 ⟹ Ax · Ax = 0 ⟹ Ax = 0 (since · is an inner product) ⟹ x = 0 (since A is nonsingular). Conversely, ⟨0, 0⟩ = A0 · A0 = 0 · 0 = 0.
(3) ⟨x, y⟩ = Ax · Ay = Ay · Ax = ⟨y, x⟩
(4) ⟨x + y, z⟩ = A(x + y) · Az = (Ax + Ay) · Az = Ax · Az + Ay · Az = ⟨x, z⟩ + ⟨y, z⟩
(5) ⟨kx, y⟩ = A(kx) · Ay = (kAx) · (Ay) = k((Ax) · (Ay)) = k⟨x, y⟩
Also, ⟨x, y⟩ = 35 and ‖x‖ = 7.
(14) The distance between w and z is 8.
(15) (1/4)(‖x + y‖² − ‖x − y‖²) = (1/4)(⟨x + y, x + y⟩ − ⟨x − y, x − y⟩)
= (1/4)(⟨x + y, x⟩ + ⟨x + y, y⟩ − ⟨x − y, x⟩ + ⟨x − y, y⟩)
= (1/4)(⟨x, x⟩ + ⟨y, x⟩ + ⟨x, y⟩ + ⟨y, y⟩ − ⟨x, x⟩ + ⟨y, x⟩ + ⟨x, y⟩ − ⟨y, y⟩)
= (1/4)(4⟨x, y⟩) = ⟨x, y⟩.
(16) fx2 ; 17x
(17) w1 =
9x2 ; 2
25 2
4 x ,
3x + x2 g
w2 = x + 3
25 2
4 x
Answers to Test for Chapter 7 — Version C
(1) A^T B = [ 14 − 5i   1 + 4i ; 8 − 7i   8 − 3i ]
(2) HJ = (HJ)* = J*H* = JH, because H, J, and HJ are all Hermitian.
(3) (a) True
(b) True
(4) z₁ = 4 − 3i, z₂ = −1 + i
(5) Let A be an n × n complex matrix that is not diagonalizable. By the Fundamental Theorem of Algebra, pA(x) factors into n linear factors. Hence, pA(x) = (x − λ₁)^{k₁} ··· (x − λ_m)^{k_m}, where λ₁, …, λ_m are the eigenvalues of A, and k₁, …, k_m are their corresponding algebraic multiplicities. Now, since the degree of pA(x) is n, Σᵢ₌₁^m kᵢ = n. Hence, if each geometric multiplicity actually equaled each algebraic multiplicity for each eigenvalue, then the sum of the geometric multiplicities would also be n, implying that A is diagonalizable. However, since A is not diagonalizable, some geometric multiplicity must be less than its corresponding algebraic multiplicity.
(6) λ₁ = 2, λ₂ = 2i; basis for E₂ = {[1, 1]}, basis for E₂ᵢ = {[1, i]}. L is diagonalizable.
(7) We need to …nd normal matrices Z and W such that (Z + W) is not
1 1
0 i
normal. Note that Z =
and W =
are normal ma1 0
i 0
trices (Z is Hermitian and W is skew-Hermitian). Let Y = Z + W =
1 1+i
1+i
0
. Then YY =
1 1+i
1+i
0
1
1 1
i
i
0
=
=
2 1
1+i
1
i
2
2 1+i
i
2
(8) One possibility:
B = f[1; 3 + i; i; 1
, while Y Y =
1
1 1
i
i
0
1 1+i
1+i
0
, and so Y is not normal.
i]; [1 + i; 3 + 4i; 1 + i; 2]; [1; 0; 0; 0]; [0; 0; 1; 0]g
(9) Let j be such that wj , the jth entry of w, does not equal zero. Then,
L(iej ) = wj (i) = w1 ( i). However, iL(ej ) = i(wj )(1). Setting these
equal gives iwj = iwj , which implies 0 = 2iwj , and thus wj = 0. But
this contradicts the assumption that wj 6= 0.
(10) (Optional Hint: The last vector in the given basis is already orthogonal
to the …rst two.)
Answer: f[2 i; 1 + 2i; 1]; [5i; 6; 1 + 2i]; [ 1; 0; 2 + i]g
(11) Because the rows of Z form an orthonormal basis for Cn , by part (1) of
Theorem 7.7, Z is a unitary matrix. Hence, by part (2) of Theorem 7.7,
the columns of Z form an orthonormal basis for Cn .
(12) (a) Note that AA* = A*A = [ 1825   900i ; −900i   1300 ].
(b) (Optional Hint: pA(x) = x² − 25ix + 1250, or, λ₁ = −25i and λ₂ = 50i.)
Answer: P = (1/5)[ 3   4 ; 4i   −3i ].
(13) Proofs for the five properties of an inner product:
Property 1: ⟨x, x⟩ = 2x₁x₁ − x₁x₂ − x₂x₁ + x₂x₂ = x₁² + x₁² − 2x₁x₂ + x₂² = x₁² + (x₁ − x₂)² ≥ 0.
Property 2: ⟨x, x⟩ = 0 precisely when x₁ = 0 and x₁ − x₂ = 0, which is when x₁ = x₂ = 0.
Property 3: ⟨y, x⟩ = 2y₁x₁ − y₁x₂ − y₂x₁ + y₂x₂ = 2x₁y₁ − x₁y₂ − x₂y₁ + x₂y₂ = ⟨x, y⟩
Property 4: Let z = [z₁, z₂]. Then, ⟨x + y, z⟩
= 2(x₁ + y₁)z₁ − (x₁ + y₁)z₂ − (x₂ + y₂)z₁ + (x₂ + y₂)z₂
= 2x₁z₁ + 2y₁z₁ − x₁z₂ − y₁z₂ − x₂z₁ − y₂z₁ + x₂z₂ + y₂z₂
= 2x₁z₁ − x₁z₂ − x₂z₁ + x₂z₂ + 2y₁z₁ − y₁z₂ − y₂z₁ + y₂z₂
= ⟨x, z⟩ + ⟨y, z⟩
Property 5: ⟨kx, y⟩ = 2(kx₁)y₁ − (kx₁)y₂ − (kx₂)y₁ + (kx₂)y₂ = k(2x₁y₁ − x₁y₂ − x₂y₁ + x₂y₂) = k⟨x, y⟩
Also, ⟨x, y⟩ = 0 and ‖x‖ = √5.
p
(14) Distance between f and g is
.
(15) x ⊥ y ⟺ ⟨x, y⟩ = 0 ⟺ 2⟨x, y⟩ = 0
⟺ ‖x‖² + 2⟨x, y⟩ + ‖y‖² = ‖x‖² + ‖y‖²
⟺ ⟨x, x⟩ + ⟨x, y⟩ + ⟨y, x⟩ + ⟨y, y⟩ = ‖x‖² + ‖y‖²
⟺ ⟨x, x + y⟩ + ⟨y, x + y⟩ = ‖x‖² + ‖y‖²
⟺ ⟨x + y, x + y⟩ = ‖x‖² + ‖y‖²
⟺ ‖x + y‖² = ‖x‖² + ‖y‖²
(16) (Optional Hint: ⟨[−1, 1, 1], [3, 4, 1]⟩ = 0.)
Answer: {[−1, 1, 1], [3, 4, 1], [−1, 2, 0]}
(17) w₁ = 2x − 1/3, w₂ = x² − 2x + 1/3