Complete Solutions Manual
to Accompany
Elementary Linear Algebra
© Cengage Learning. All rights reserved. No distribution allowed without express authorization.
EIGHTH EDITION
Ron Larson
The Pennsylvania State University,
State College, PA
Australia • Brazil • Mexico • Singapore • United Kingdom • United States
ISBN-13: 978-1-305-65803-5
ISBN-10: 1-305-65803-5
© 2017 Cengage Learning
ALL RIGHTS RESERVED. No part of this work covered by the
copyright herein may be reproduced, transmitted, stored, or
used in any form or by any means graphic, electronic, or
mechanical, including but not limited to photocopying,
recording, scanning, digitizing, taping, Web distribution,
information networks, or information storage and retrieval
systems, except as permitted under Section 107 or 108 of the
1976 United States Copyright Act, without the prior written
permission of the publisher except as may be permitted by the
license terms below.
For product information and technology assistance, contact us at
Cengage Learning Customer & Sales Support,
1-800-354-9706.
For permission to use material from this text or product, submit
all requests online at www.cengage.com/permissions
Further permissions questions can be emailed to
permissionrequest@cengage.com.
Cengage Learning
200 First Stamford Place, 4th Floor
Stamford, CT 06902
USA
Cengage Learning is a leading provider of customized
learning solutions with office locations around the globe,
including Singapore, the United Kingdom, Australia,
Mexico, Brazil, and Japan. Locate your local office at:
www.cengage.com/global.
Cengage Learning products are represented in
Canada by Nelson Education, Ltd.
To learn more about Cengage Learning Solutions,
visit www.cengage.com.
Purchase any of our products at your local college
store or at our preferred online store
www.cengagebrain.com.
NOTE: UNDER NO CIRCUMSTANCES MAY THIS MATERIAL OR ANY PORTION THEREOF BE SOLD, LICENSED, AUCTIONED,
OR OTHERWISE REDISTRIBUTED EXCEPT AS MAY BE PERMITTED BY THE LICENSE TERMS HEREIN.
READ IMPORTANT LICENSE INFORMATION
Dear Professor or Other Supplement Recipient:
Cengage Learning has provided you with this product (the
“Supplement”) for your review and, to the extent that you adopt
the associated textbook for use in connection with your course
(the “Course”), you and your students who purchase the
textbook may use the Supplement as described below. Cengage
Learning has established these use limitations in response to
concerns raised by authors, professors, and other users
regarding the pedagogical problems stemming from unlimited
distribution of Supplements.
Cengage Learning hereby grants you a nontransferable license
to use the Supplement in connection with the Course, subject to
the following conditions. The Supplement is for your personal,
noncommercial use only and may not be reproduced, posted
electronically or distributed, except that portions of the
Supplement may be provided to your students IN PRINT FORM
ONLY in connection with your instruction of the Course, so long
as such students are advised that they
may not copy or distribute any portion of the Supplement to any
third party. You may not sell, license, auction, or otherwise
redistribute the Supplement in any form. We ask that you take
reasonable steps to protect the Supplement from unauthorized
use, reproduction, or distribution. Your use of the Supplement
indicates your acceptance of the conditions set forth in this
Agreement. If you do not accept these conditions, you must
return the Supplement unused within 30 days of receipt.
All rights (including without limitation, copyrights, patents, and
trade secrets) in the Supplement are and will remain the sole and
exclusive property of Cengage Learning and/or its licensors. The
Supplement is furnished by Cengage Learning on an “as is” basis
without any warranties, express or implied. This Agreement will
be governed by and construed pursuant to the laws of the State
of New York, without regard to such State’s conflict of law rules.
Thank you for your assistance in helping to safeguard the integrity
of the content contained in this Supplement. We trust you find the
Supplement a useful teaching tool.
CONTENTS
Chapter 1
Introduction to Systems of Linear Equations........................................ 1
Chapter 2
Matrices ................................................................................................ 29
Chapter 3
Determinants......................................................................................... 76
Chapter 4
Vector Spaces ..................................................................................... 104
Chapter 5
Inner Product Spaces .......................................................................... 154
Chapter 6
Linear Transformations...................................................................... 206
Chapter 7
Eigenvalues and Eigenvectors ........................................................... 244
C H A P T E R 1
Systems of Linear Equations
Section 1.1
Introduction to Systems of Linear Equations ........................................ 2
Section 1.2
Gaussian Elimination and Gauss-Jordan Elimination .......................... 8
Section 1.3
Applications of Systems of Linear Equations .....................................14
Review Exercises ..........................................................................................................22
Project Solutions ..........................................................................................................27
C H A P T E R 1
Systems of Linear Equations
Section 1.1 Introduction to Systems of Linear Equations
2. Because the term xy cannot be rewritten as ax + by for
any real numbers a and b, the equation cannot be written
in the form a1x + a2 y = b. So, this equation is not
linear in the variables x and y.
4. Because the terms x^2 and y^2 cannot be rewritten as
ax + by for any real numbers a and b, the equation
cannot be written in the form a1x + a2y = b. So, this
equation is not linear in the variables x and y.

6. Because the equation is in the form a1x + a2y = b, it is
linear in the variables x and y.

8. Choosing y as the free variable, let y = t and obtain
    3x − (1/2)t = 9
    3x = 9 + (1/2)t
    x = 3 + (1/6)t.
So, you can describe the solution set as x = 3 + (1/6)t and
y = t, where t is any real number.

10. Choosing x2 and x3 as free variables, let x2 = s and
x3 = t and obtain 12x1 + 24s − 36t = 12, or
x1 + 2s − 3t = 1, or x1 = 1 − 2s + 3t.
So, you can describe the solution set as
x1 = 1 − 2s + 3t, x2 = s, and x3 = t, where s and t
are any real numbers.

12. (1/2)x − (1/3)y = 1
    −2x + (4/3)y = −4
Multiplying the first equation by 2 produces a new first
equation.
    x − (2/3)y = 2
    −2x + (4/3)y = −4
Adding 2 times the first equation to the second equation
produces a new second equation.
    x − (2/3)y = 2
    0 = 0
Choosing y = t as the free variable, you obtain
x = (2/3)t + 2. So, you can describe the solution set as
x = (2/3)t + 2 and y = t, where t is any real number.
[Graph: the two lines coincide.]

14. x + 3y = 2
    −x + 2y = 3
Adding the first equation to the second equation
produces a new second equation, 5y = 5 or y = 1.
So, x = 2 − 3y = 2 − 3(1), and the solution is: x = −1,
y = 1. This is the point where the two lines intersect.
[Graph: the lines x + 3y = 2 and −x + 2y = 3 intersecting at (−1, 1).]

16. −x + 3y = 17
    4x + 3y = 7
Subtracting the first equation from the second equation
produces a new second equation, 5x = −10 or x = −2.
So, 4(−2) + 3y = 7 or y = 5, and the solution is:
x = −2, y = 5. This is the point where the two lines
intersect.
[Graph: the lines −x + 3y = 17 and 4x + 3y = 7 intersecting at (−2, 5).]
18. x − 5y = 21
    6x + 5y = 21
Adding the first equation to the second equation
produces a new second equation, 7x = 42 or x = 6.
So, 6 − 5y = 21 or y = −3, and the solution is: x = 6,
y = −3. This is the point where the two lines intersect.
[Graph: the lines x − 5y = 21 and 6x + 5y = 21 intersecting at (6, −3).]

20. (x − 1)/2 + (y + 2)/3 = 4
    x − 2y = 5
Multiplying the first equation by 6 produces a new first
equation.
    3x + 2y = 23
    x − 2y = 5
Adding the first equation to the second equation
produces a new second equation, 4x = 28 or x = 7.
So, 7 − 2y = 5 or y = 1, and the solution is: x = 7,
y = 1. This is the point where the two lines intersect.
[Graph: the two lines intersecting at (7, 1).]

22. 0.2x − 0.5y = −27.8
    0.3x + 0.4y = 68.7
Multiplying the first equation by 40 and the second
equation by 50 produces new equations.
    8x − 20y = −1112
    15x + 20y = 3435
Adding the first equation to the second equation
produces a new second equation, 23x = 2323
or x = 101.
So, 8(101) − 20y = −1112 or y = 96, and the solution
is: x = 101, y = 96. This is the point where the two
lines intersect.
[Graph: the lines 0.2x − 0.5y = −27.8 and 0.3x + 0.4y = 68.7 intersecting at (101, 96).]

24. (2/3)x + (1/6)y = (2/3)
    4x + y = 4
Adding −6 times the first equation to the second equation
produces a new second equation, 0 = 0. Choosing
x = t as the free variable, you obtain y = 4 − 4t. So,
you can describe the solution as x = t and y = 4 − 4t,
where t is any real number.
[Graph: the two lines coincide.]
26. From Equation 2 you have x2 = 3. Substituting this value into Equation 1 produces 2 x1 − 12 = 6 or x1 = 9.
So, the system has exactly one solution: x1 = 9 and x2 = 3.
28. From Equation 3 you have z = 2. Substituting this value into Equation 2 produces 3 y + 2 = 11 or y = 3.
Finally, substituting y = 3 into Equation 1, you obtain x − 3 = 5 or x = 8. So, the system has exactly
one solution: x = 8, y = 3, and z = 2.
30. From the second equation you have x2 = 0. Substituting this value into Equation 1 produces x1 + x3 = 0.
Choosing x3 as the free variable, you have x3 = t and obtain x1 + t = 0 or x1 = −t. So, you can describe the
solution set as x1 = −t , x2 = 0, and x3 = t.
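Intersection points and back-substitution answers like the ones above can also be checked numerically. The following is a minimal sketch that assumes NumPy is available (the manual itself only asks for algebra or a graphing utility); it solves the system of Exercise 22 and confirms the intersection point (101, 96). Replacing A and b with the coefficients of any of the other pairs of lines reproduces the corresponding intersection points.

import numpy as np

# Exercise 22:  0.2x - 0.5y = -27.8  and  0.3x + 0.4y = 68.7
A = np.array([[0.2, -0.5],
              [0.3,  0.4]])
b = np.array([-27.8, 68.7])

# The solution of the 2x2 system is the intersection of the two lines.
x, y = np.linalg.solve(A, b)
print(x, y)  # approximately 101.0 and 96.0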
32. (a) [Graph: the parallel lines −8x + 10y = 14 and 4x − 5y = 3]
    (b) This system is inconsistent, because you see two
        parallel lines on the graph of the system.

34. (a) [Graph: the lines 9x − 4y = 5 and (1/2)x + (1/3)y = 0 intersecting at a point]
    (b) Two lines corresponding to two equations intersect
        at a point, so this system is consistent.
    (c) The solution is approximately x = 1/3 and y = −1/2.
    (d) Adding −18 times the second equation to the first
        equation, you obtain −10y = 5 or y = −1/2.
        Substituting y = −1/2 into the first equation, you
        obtain 9x = 3 or x = 1/3. The solution is: x = 1/3
        and y = −1/2.
    (e) The solutions in (c) and (d) are the same.

36. (a) [Graph: the coinciding lines −14.7x + 2.1y = 1.05 and 44.1x − 6.3y = −3.15]
    (b) Because the lines coincide, the system is consistent.
    (c) All solutions of this system lie on the line
        y = 7x + 1/2. So, let x = t; then the solution set is
        x = t, y = 7t + 1/2, where t is any real number.
    (d) Adding 3 times the first equation to the second
        equation you obtain
            −44.1x + 6.3y = 3.15
            0 = 0.
        Choosing x = t as a free variable, you obtain
        −14.7t + 2.1y = 1.05, or −147t + 21y = 10.5, or
        y = 7t + 1/2.
        So, you can describe the solution set as
        x = t, y = 7t + 1/2, where t is any real number.
    (e) The solutions in (c) and (d) are the same.

38. Adding −2 times the first equation to the second
equation produces a new second equation.
    3x + 2y = 2
        0 = 10
Because the second equation is a false statement, the
original system of equations has no solution.

40. Adding −6 times the first equation to the second
equation produces a new second equation.
    x1 − 2x2 = 0
        14x2 = 0
Now, using back-substitution, the system has exactly one
solution: x1 = 0 and x2 = 0.

42. Multiplying the first equation by 3/2 produces a new first
equation.
    x1 + (1/4)x2 = 0
    4x1 + x2 = 0
Adding −4 times the first equation to the second
equation produces a new second equation.
    x1 + (1/4)x2 = 0
          0 = 0
Choosing x2 = t as the free variable, you obtain
x1 = −(1/4)t. So you can describe the solution set as
x1 = −(1/4)t and x2 = t, where t is any real number.

44. To begin, change the form of the first equation.
    (x1/3) + (x2/2) = −5/6
    3x1 − x2 = −2
Multiplying the first equation by 3 yields a new first
equation.
    x1 + (3/2)x2 = −5/2
    3x1 − x2 = −2
Adding −3 times the first equation to the second equation
produces a new second equation.
    x1 + (3/2)x2 = −5/2
      −(11/2)x2 = 11/2
Multiplying the second equation by −2/11 yields a new
second equation.
    x1 + (3/2)x2 = −5/2
            x2 = −1
Now, using back-substitution, the system has exactly one
solution: x1 = −1 and x2 = −1.
46. Multiplying the first equation by 20 and the second
equation by 100 produces a new system.
    x1 − 0.6x2 = 4.2
    7x1 + 2x2 = 17
Adding −7 times the first equation to the second
equation produces a new second equation.
    x1 − 0.6x2 = 4.2
        6.2x2 = −12.4
Now, using back-substitution, the system has exactly one
solution: x1 = 3 and x2 = −2.

48. Adding the first equation to the second equation yields a
new second equation.
    x + y + z = 2
       4y + 3z = 10
    4x + y     = 4
Adding −4 times the first equation to the third equation
yields a new third equation.
    x + y + z = 2
       4y + 3z = 10
      −3y − 4z = −4
Dividing the second equation by 4 yields a new second
equation.
    x + y + z = 2
       y + (3/4)z = 5/2
      −3y − 4z = −4
Adding 3 times the second equation to the third equation
yields a new third equation.
    x + y + z = 2
       y + (3/4)z = 5/2
         −(7/4)z = 7/2
Multiplying the third equation by −4/7 yields a new third
equation.
    x + y + z = 2
       y + (3/4)z = 5/2
             z = −2
Now, using back-substitution, the system has exactly one
solution: x = 0, y = 4, and z = −2.

50. Interchanging the first and third equations yields a new
system.
    x1 − 11x2 + 4x3 = 3
    2x1 + 4x2 − x3 = 7
    5x1 − 3x2 + 2x3 = 3
Adding −2 times the first equation to the second
equation yields a new second equation.
    x1 − 11x2 + 4x3 = 3
        26x2 − 9x3 = 1
    5x1 − 3x2 + 2x3 = 3
Adding −5 times the first equation to the third equation
yields a new third equation.
    x1 − 11x2 + 4x3 = 3
        26x2 − 9x3 = 1
       52x2 − 18x3 = −12
At this point, you realize that Equations 2 and 3 cannot
both be satisfied. So, the original system of equations has
no solution.

52. Adding −4 times the first equation to the second
equation and adding −2 times the first equation to the
third equation produces new second and third equations.
    x1        + 4x3 = 13
      −2x2 − 15x3 = −45
      −2x2 − 15x3 = −45
The third equation can be disregarded because it is the
same as the second one. Choosing x3 as a free variable
and letting x3 = t, you can describe the solution as
    x1 = 13 − 4t
    x2 = 45/2 − (15/2)t
    x3 = t, where t is any real number.
54. Adding −3 times the first equation to the second
equation produces a new second equation.
x1 − 2 x2 + 5 x3 = 2
8 x2 − 16 x3 = −8
Dividing the second equation by 8 yields a new second
equation.
x1 − 2 x2 + 5 x3 = 2
x2 − 2 x3 = −1
Adding 2 times the second equation to the first equation
yields a new first equation.
x1
+
x3 = 0
x2 − 2 x3 = −1
Letting x3 = t be the free variable, you can describe the
solution as x1 = −t , x2 = 2t − 1, and x3 = t , where t is
any real number.
56. Adding 3 times the first equation to the fourth equation
yields
− x1
+ 2 x4 = 1
4 x2 −
x3 −
x4 = 2
x2
−
x4 = 0
−2 x2 + 3 x3 + 6 x4 = 7.
Interchanging the second equation with the third
equation yields
− x1
+ 2 x4 = 1
x2
−
x4 = 0
4 x2 −
x3 −
x4 = 2
−2 x2 + 3 x3 + 6 x4 = 7.
Adding − 4 times the second equation to the third
equation, and adding − 2 times the second equation to
64. x = y = z = 0 is clearly a solution.
Dividing the first equation by 2 produces
x + 32 y
4x + 3y − z = 0
8 x + 3 y + 3 z = 0.
Adding −4 times the first equation to the second equation,
and −8 times the first equation to the third, yields
+ 2 x4 = 1
−
x2
−
x4 = 0
x3 + 3x4 = 2
3 x3 + 4 x4 = 7.
3y
2
x +
= 0
−3 y − z = 0
−9 y + 3 z = 0.
Adding −3 times the second equation to the third
equation yields
3y
2
x +
= 0
−3 y − z = 0
the fourth equation yields
− x1
= 0
6 z = 0.
Using back-substitution, you conclude there is exactly
one solution: x = y = z = 0.
66. x = y = z = 0 is clearly a solution.
Adding 3 times the second equation to the third equation
yields
Dividing the second equation by 2 yields a new second
equation.
− x1
16 x + 3 y +
x2
z = 0
+ 2 x4 =
1
−
x4 =
0
8x +
− x3 + 3 x4 =
2
Adding − 3 times the second equation to the first
13 x4 = 13.
Using back-substitution, the original system has exactly
one solution: x1 = 1, x2 = 1, x3 = 1, and x4 = 1.
Answers may vary slightly for Exercises 58–62.
58. Using a software program or graphing utility, you obtain
x = 0.8, y = 1.2, z = −2.4.
60. Using a software program or graphing utility, you obtain
x = 10, y = − 20, z = 40, w = −12.
62. Using a software program or graphing utility, you obtain
x = 6.8813, y = −163.3111, z = −210.2915,
w = −59.2913.
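Exercises 58–62 are solved "using a software program or graphing utility." One such tool (an assumption here, not the manual's prescribed choice) is NumPy; the sketch below applies it to the equivalent 3 × 3 system displayed in Exercise 48 and would reproduce the decimal answers of Exercises 58–62 once their coefficient matrices are entered.

import numpy as np

# Equivalent system from Exercise 48:
#   x + y + z = 2,  4y + 3z = 10,  4x + y = 4
A = np.array([[1.0, 1.0, 1.0],
              [0.0, 4.0, 3.0],
              [4.0, 1.0, 0.0]])
b = np.array([2.0, 10.0, 4.0])

# np.linalg.solve carries out Gaussian elimination (LU factorization
# with partial pivoting) numerically.
print(np.linalg.solve(A, b))  # [ 0.  4. -2.]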
y −
1z
2
= 0
equation produces a new first equation.
+ 52 z = 0
− 8x
8 x + y − 12 z = 0
Letting z = t be the free variable, you can describe the
5
solution as x = 16
t , y = − 2t , and z = t , where t is any
real number.
68. Let x = the speed of the plane that leaves first and
y = the speed of the plane that leaves second.
    y − x = 80                Equation 1
    2x + (3/2)y = 3200        Equation 2
    −2x + 2y = 160
    2x + (3/2)y = 3200
    (7/2)y = 3360
    y = 960
    960 − x = 80
    x = 880
Solution: First plane: 880 kilometers per hour; second
plane: 960 kilometers per hour
70. (a) False. Any system of linear equations is either
consistent, which means it has either a unique solution
or infinitely many solutions, or inconsistent, which
means it has no solution. This result is stated on
page 5 and will be proved later in Theorem 2.5.
(b) True. See definition on page 6.
(c) False. Consider the following system of three linear
equations with two variables.
2 x + y = −3
−6 x − 3 y = 9
x
= 1.
74. Substituting A =
The solution to this system is: x = 1, y = −5.
72. Because x1 = t and x2 = s, you can write
x3 = 3 + s − t = 3 + x2 − x1. One system could be
x1 − x2 + x3 = 3
− x1 + x2 − x3 = −3
1
1
and B = into the original system
x
y
−1
2 A − 3B = −
17
.
6
Reduce the system to row-echelon form.
27 A + 18B = − 9
12 A − 18B = −17
27 A + 18 B =
−9
= − 26
39 A
Using back substitution, A = −
A =
Letting x3 = t and x2 = s be the free variables, you can
describe the solution as x1 = 3 + s − t , x2 = s, and
x3 = t , where t and s are any real numbers.
76. Substituting A =
yields
3 A + 2B =
7
2
1
and B = . Because
3
2
1
1
and B = , the solution of the original system
x
y
3
of equations is: x = − and y = 2.
2
1
1
1
, B = , and C = into the original system yields
y
x
z
2 A + B − 2C = 5
3 A − 4B
= −1.
2A +
B + 3C = 0
Reduce the system to row-echelon form.
2 A + B − 2C = 5
3A − 4B
= −1
5C = −5
3A −
4B
= −1
−11B + 6C = −17
5C = −5
So, C = −1. Using back-substitution, −11B + 6( −1) = −17, or B = 1 and 3 A − 4(1) = −1, or A = 1. Because A = 1 x,
B = 1 y , and C = 1 z , the solution of the original system of equations is: x = 1, y = 1, and z = −1.
78. Multiplying the first equation by sin θ and the second by cos θ produces
    (sin θ cos θ)x + (sin^2 θ)y = sin θ
    −(sin θ cos θ)x + (cos^2 θ)y = cos θ.
Adding these two equations yields
    (sin^2 θ + cos^2 θ)y = sin θ + cos θ
    y = sin θ + cos θ.
So, (cos θ)x + (sin θ)y = (cos θ)x + sin θ(sin θ + cos θ) = 1 and
    x = (1 − sin^2 θ − sin θ cos θ)/cos θ = (cos^2 θ − sin θ cos θ)/cos θ = cos θ − sin θ.
Finally, the solution is x = cos θ − sin θ and y = cos θ + sin θ.
80. Reduce the system to row-echelon form.
x+
ky = 0
(1 − k ) y = 0
2
88. If c1 = c2 = c3 = 0, then the system is consistent
because x = y = 0 is a solution.
90. Multiplying the first equation by c, and the second by a,
produces
x + ky = 0
acx + bcy = ec
y = 0, 1 − k 2 ≠ 0
acx + day = af .
x = 0
y = 0, 1 − k 2 ≠ 0
Subtracting the second equation from the first yields
If 1 − k 2 ≠ 0, that is if k ≠ ±1, the system will have
exactly one solution.
(ad − bc) y = af − ec.
acx + bcy = ec
So, there is a unique solution if ad − bc ≠ 0.
82. Reduce the system to row-echelon form.
x + 2 y + kz =
y
92.
6
(8 − 3k ) z = −14
This system will have no solution if 8 − 3k = 0, that is,
3
2
1
−3 −2 −1
−2
k = 83.
84. Reduce the system to row-echelon form.
kx + y = 16
(4k + 3) x
= 0
4 5
x
−4
−5
The two lines coincide.
The system will have an infinite number of solutions
when 4k + 3 = 0  k = − 34 .
86. Reducing the system to row-echelon form produces
x +
5y +
z = 0
y −
1
2z = 0
(a − 10) y + (b − 2) z = c
x + 5y + z = 0
y − 2z = 0
(2a + b − 22) z = c.
So, you see that
(a) if 2a + b − 22 ≠ 0, then there is exactly one
solution.
(b) if 2a + b − 22 = 0 and c = 0, then there is an
infinite number of solutions.
(c) if 2a + b − 22 = 0 and c ≠ 0, there is no solution.
2x − 3y = 7
0 = 0
Letting y = t , x =
7 + 3t
.
2
The graph does not change.
94. 21x − 20 y =
0
13 x − 12 y = 120
Subtracting 5 times the second equation from 3 times the
first equation produces a new first equation,
−2 x = −600, or x = 300. So, 21(300) − 20 y = 0 or
y = 315, and the solution is: x = 300, y = 315. The
graphs are misleading because they appear to be parallel,
but they actually intersect at (300, 315).
Section 1.2 Gaussian Elimination and Gauss-Jordan Elimination
2. Because the matrix has 4 rows and 1 column, it has size
4 × 1.
 3 −1 −4
3 −1 −4
8. 
  

4
3
7
−


5 0 −5
4. Because the matrix has 1 row and 1 column, it has size
1 × 1.
Add 3 times Row 1 to Row 2.
6. Because the matrix has 1 row and 5 columns, it has size
1 × 5.
3 −2
−1 −2
−1 −2 3 −2




1 − 7   0 − 9 7 −11
10.  2 − 5
 5
 0 − 6 8 − 4
4 −7
6



Add 2 times Row 1 to Row 2. Then add 5 times Row 1
to Row 3.
12. Because the matrix is in reduced row-echelon form, you
can convert back to a system of linear equations
x1 = 2
x2 = 3.
14. Because the matrix is in row-echelon form, you can
convert back to a system of linear equations
x1 + 2 x2 + x3 = 0
x3 = −1.
Using back-substitution, you have x3 = −1. Letting
x2 = t be the free variable, you can describe the
solution as x1 = 1 − 2t , x2 = t , and x3 = −1, where t
is any real number.
16. Gaussian elimination produces the following.
3 −1 1 5
1 0 1 2




1 2 1 0  1 2 1 0
1 0 1 2
3 −1 1 5




 1 0 1 2
 1 0 1 2




 0 2 0 − 2  0 2 0 − 2
3 −1 1 2
0 −1 − 2 −1




 1 0 1 2
 1 0 1 2




 0 1 2 1  0 1 2 1
0 2 0 − 2
0 0 − 4 − 4




24. The matrix satisfies all three conditions in the definition
of row-echelon form. Moreover, because each column
that has a leading 1 (columns one and four) has zeros
elsewhere, the matrix is in reduced row-echelon form.
26. The augmented matrix for this system is
 2 6 16

.
−2 −6 −16
Use Gauss-Jordan elimination as follows.
8
 2 6 16
 1 3
 1 3 8

  
  

2
6
16
2
6
16
−
−
−
−
−
−




0 0 0
Converting back to a system of linear equations, you have
x + 3 y = 8.
Choosing y = t as the free variable, you can describe
the solution as x = 8 − 3t and y = t , where t is any
real number.
28. The augmented matrix for this system is
2 −1 −0.1

.
3 2 1.6
Gaussian elimination produces the following.
1
1 − 12 − 20
2 −1 −0.1

  
8
2

3 2 1.6
5
3
 1 − 12
 
7
2
0
 1 0 1 2


 0 1 2 1
0 0 1 1


Because the matrix is in row-echelon form, convert back
to a system of linear equations.
x1
+
x3 = 2
x2 + 2 x3 = 1
x3 = 1
By back-substitution, x1 = 1, x2 = −1, and x3 = 1.
18. Because the fourth row of this matrix corresponds to the
equation 0 = 2, there is no solution to the linear
system.
20. Because the leading 1 in the first row is not farther to the
left than the leading 1 in the second row, the matrix is
not in row-echelon form.
22. The matrix satisfies all three conditions in the definition
of row-echelon form. However, because the third column
does not have zeros above the leading 1 in the third row,
the matrix is not in reduced row-echelon form.
9
1
− 20

7

4
1
 1 − 12 − 20
1 0
 
  
1
1
0
0 1
2

1
5

1
2

Converting back to a system of equations, the solution is:
x = 1/5 and y = 1/2.
30. The augmented matrix for this system is
1 2 0


1 1 6.
3 −2 8


Gaussian elimination produces the following.
1 2 0
 1 2 0




1 1 6  0 −1 6
3 −2 8
0 −8 8




0
 1 2 0
1 2




 0
1 −6  0 1 −6
0 −8 8
0 0 −40




Because the third row corresponds to the equation
0 = − 40, you can conclude that the system has
no solution.
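A computer algebra system reaches the same conclusion by row-reducing the augmented matrix exactly. The sketch below assumes SymPy (my own illustration, not a tool the manual prescribes); a pivot in the constant column of the reduced form corresponds to the impossible equation 0 = 1, so the system of Exercise 30 has no solution.

from sympy import Matrix

# Augmented matrix of Exercise 30:  x + 2y = 0,  x + y = 6,  3x - 2y = 8
aug = Matrix([[1,  2, 0],
              [1,  1, 6],
              [3, -2, 8]])

rref_form, pivot_cols = aug.rref()
print(rref_form)   # Matrix([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
print(pivot_cols)  # (0, 1, 2) -- a pivot in the last column means no solution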
32. The augmented matrix for this system is
34. The augmented matrix for this system is
22
3 − 2 3


3 −1
24.
0
6 − 7 0 − 22


 1 1 −5 3
 1 0 −2 1.


2 −1 −1 0
Gaussian elimination produces the following.
Subtracting the first row from the second row yields a
new second row.
22 
1 − 2
1
22
3 − 2 3
3
3




1

0
3
1
24
0
1
8
−

−


3




0 − 22
6 − 7
6 − 7 0 − 22
22 
1 − 2
1
3
3


1 − 13
8
 0


0 − 3 − 6 − 66
22 
1 − 2
1
3
3


1 − 13
8
 0


0 − 7 − 42
0
1 − 2
1
3

1
1 −3
 0

0
1
0
22 
3

8

6
Back-substitution now yields
x3 = 6
x2 = 8 + 13 x3
( ) = 10
= 8 + 13 6
x1 = 22
+ 23 x2 − x3 = 22
+ 23 (10) − (6) = 8.
3
3
So, the solution is: x1 = 8, x2 = 10, and x3 = 6.
 1 1 −5 3
0 −1 3 −2


2 −1 −1 0
Adding −2 times the first row to the third row yields a
new third row.
 1 1 −5 3
0 −1 3 −2


0 −3 9 −6
Multiplying the second row by −1 yields a new second
row.
 1 1 −5 3
0
1 −3 2

0 −3 9 −6
Adding 3 times the second row to the third row yields a
new third row.
 1 1 −5 3
0 1 −3 2


0 0 0 0
Adding −1 times the second row to the first row yields a
new first row.
 1 0 −2 1
0 1 −3 2


0 0 0 0
Converting back to a system of linear equations produces
x1
− 2 x3 = 1
x2 − 3x3 = 2.
Finally, choosing x3 = t as the free variable, you can
describe the solution as x1 = 1 + 2t , x2 = 2 + 3t , and
x3 = t , where t is any real number.
36. The augmented matrix for this system is
1
8
 1 2

.
−
−
−
−
3
6
3
21


Gaussian elimination produces the following matrix.
 1 2 1 8


0 0 0 3
Because the second row corresponds to the equation
0 = 3, there is no solution to the original system.
38. The augmented matrix for this system is
2

3
1

5
1 −1
2 −6

1 1
.
5 2 6 −3

2 −1 −1 3
4
0
Gaussian elimination produces the following.
1

3
2

5
6 −3
5
2
6 −3
5
2
6 −3
1
1




6
17 − 10 
1 1
0
11
6
17
10
0
1
−
−
−
  
11
11
11 
 
0 −9 −5 −10 0
0 −9 −5 −10
1 −1 2 −6
0





2 −1 −1 3
0 −23 −11 −31 18
0 −23 −11 −31 18
5
2
4
0
1

0
 
0

0
1

0
 
0

0
5
2
6
1
6
11
1
− 11
17
11
17
11
43
11
50
11
0
0
−3
1


10
− 11 
0

0
1 − 43
90


781
1562
0
− 11 
0
11
5
2
6
1
6
11
17
11
0
0
−3
1


10
− 11 
0
  0
− 90
11 


0
− 32
11 
−3

− 10
1
11 
0 1 − 43 90

50 − 32 
0 17
11
11
4
5
2
6
6
11
17
11
−3

− 10
11 
1 − 43 90

0
1 −2
5
2
6
1
6
11
17
11
0
0
Back-substitution now yields
w = −2
z = 90 + 43w = 90 + 43( −2) = 4
6 z − 17 w = − 10 − 6 4 − 17 −2 = 0
− 11
y = − 10
( ) 11 ( )
11
11
11 ( )
11 ( )
.
x = −3 − 5 y − 2 z − 6 w = −3 − 5(0) − 2( 4) − 6( −2) = 1.
So, the solution is: x = 1, y = 0, z = 4, and w = −2.
40. Using a software program or graphing utility, the
augmented matrix reduces to
1

0
0

0

0
0 0 0 0
1 0 0 0
0 1 0 0
0 0 1 0
0 0 0 1
2

−1
3.

4

1
So, the solution is:
x1 = 2, x2 = −1, x3 = 3, x4 = 4, and x5 = 1.
44. The corresponding equations are
x1
= 0
x2 + x3 = 0.
Choosing x4 = t and x3 = s as the free variables, you
can describe the solution as x1 = 0, x2 = − s, x3 = s,
and x4 = t , where s and t are any real numbers.
46. The corresponding equations are all 0 = 0. So, there are
three free variables. So, x1 = t , x2 = s, and x3 = r ,
where t , s, and r are any real numbers.
42. Using a computer software program or graphing utility,
you obtain
x1 = 1
x2 = −1
x3 = 2
x4 = 0
x5 = −2
x6 = 1.
48. x = number of $1 bills
50. (a) If A is the augmented matrix of a system of linear
equations, then the number of equations in this
system is three (because it is equal to the number of
rows of the augmented matrix). The number of
variables is two because it is equal to the number of
columns of the augmented matrix minus one.
(b) Using Gaussian elimination on the augmented matrix
of a system, you have the following.
3 
2 −1
 2 −1 3




−4 2 k   0 0 k + 6
 4 −2 6
0 0
0 



y = number of $5 bills
z = number of $10 bills
w = number of $20 bills
x + 5 y + 10 z + 20 w = 95
x +
y +
z +
y − 4z
x − 2y
w = 26
= 0
= −1
 1 5 10 20 95
1



 1 1 1 1 26  0
0
0
1 −4 0 0



0
 1 −2 0 0 −1
0 0 0 15

1 0 0 8
0 1 0 2

0 0 1 1
This system is consistent if and only if k + 6 = 0,
so k = −6.
If A is the coefficient matrix of a system of linear
equations, then the number of equations is three,
because it is equal to the number of rows of the
coefficient matrix. The number of variables is also
three, because it is equal to the number of columns
of the coefficient matrix.
Using Gaussian elimination on A you obtain the
following coefficient matrix of an equivalent system.
x = 15
y = 8
z = 2
w =1
The server has 15 $1 bills, 8 $5 bills, 2 $10 bills, and one
$20 bill.
3 
1 − 1
2
2


0 k + 6
0
0
0
0 

Because the homogeneous system is always
consistent, the homogeneous system with the
coefficient matrix A is consistent for any value of k.
52. Using Gaussian elimination on the augmented matrix, you have the following.
1

0
1

a
1 0 0
1


0
1 1 0
 


0
0 1 0


b c 0
0
1
1
−1
(b − a )
0 0
1


1 0
0
 


1 0
0


c 0
0

1
0
1
1
0
2
0
( a − b + c)
0
1


0
0
 


0
0


0
0
1 0 0

1 1 0
0 1 0

0 0 0
From this row reduced matrix you see that the original system has a unique solution.
54. Because the system composed of Equations 1 and 2 is consistent, but has a free variable, this system must have an infinite
number of solutions.
56. Use Gauss-Jordan elimination as follows.
3
1 2 3
1 2
1 2 3
 1 0 −1








4 5 6  0 −3 −6  0 1 2  0 1 2
7 8 9
0 −6 −12
0 0 0
0 0 0








58. Begin by finding all possible first rows
[0 0 0], [0 0 1], [0 1 0], [0 1 a], [1 0 0], [1 0 a], [1 a b], [1 a 0],
where a and b are nonzero real numbers. For each of these examine the possible remaining rows.
0 0 0 0 0 1 0 1 0 0 1 0 0 1 a

 
 
 
 

0 0 0, 0 0 0, 0 0 0, 0 0 1, 0 0 0,
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

 
 
 
 

1 0 0 1 0 0 1 0 0 1 0 0 1 0 0

 
 
 
 

0 0 0, 0 1 0, 0 1 0, 0 0 1, 0 1 a,
0 0 0 0 0 0 0 0 1 0 0 0 0 0 0

 
 
 
 

1 a 0 1 a 0 1 a b 1 0 a 1 0 a

 
 
 
 

0 0 0, 0 0 1, 0 0 0, 0 0 0, 0 1 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

 
 
 
 

60. (a) False. A 4 × 7 matrix has 4 rows and 7 columns.
(b) True. Reduced row-echelon form of a given matrix
is unique while row-echelon form is not. (See also
exercise 64 of this section.)
(c) True. See Theorem 1.1 on page 21.
(d) False. Multiplying a row by a nonzero constant is
one of the elementary row operations. However,
multiplying a row of a matrix by a constant c = 0
is not an elementary row operation. (This would
change the system by eliminating the equation
corresponding to this row.)
64. First, you need a ≠ 0 or c ≠ 0. If a ≠ 0, then you
have
b
a

b 
a b 

  a
cb

  

.

c
d
ad
− bc
0
+b
0 −



a


62. No, the row-echelon form is not unique. For instance,
1 0
1 2
. The reduced row-echelon form is

 and 
0 1
0 1
unique.
Again, ad − bc = 0 and d = 0, which implies that
So, ad − bc = 0 and b = 0, which implies that d = 0.
If c ≠ 0, then you interchange rows and proceed.
d
c

d 
a b 

  c

ad




0 −
+ b
c d 
0 ad − bc
c


a b 
b = 0. In conclusion, 
 is row-equivalent to
c d 
1 0

 if and only if b = d = 0, and a ≠ 0 or c ≠ 0.
0 0
66. Row reduce the augmented matrix for this system.
− λ 0
−λ
0
2λ + 9 − 5 0
 1
1

  
  

2
− λ 0
 1
2λ + 9 − 5 0
0 2λ + 9λ − 5 0
To have a nontrivial solution you must have the following.
    2λ^2 + 9λ − 5 = 0
    (λ + 5)(2λ − 1) = 0
So, if λ = −5 or λ = 1/2, the system will have nontrivial solutions.
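The quadratic condition on λ can also be handed to a computer algebra system; a short sketch assuming SymPy follows.

from sympy import symbols, factor, solve

lam = symbols('lambda')

# Nontrivial solutions exist exactly when this pivot expression is zero.
expr = 2*lam**2 + 9*lam - 5
print(factor(expr))      # the factors (lambda + 5) and (2*lambda - 1)
print(solve(expr, lam))  # the two roots -5 and 1/2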
68. A matrix is in reduced row-echelon form if every column
that has a leading 1 has zeros in every position above and
below its leading 1. A matrix in row-echelon form may
have any real numbers above the leading 1’s.
70. (a) When a system of linear equations is inconsistent,
the row-echelon form of the corresponding
augmented matrix will have a row that is all zeros
except for the last entry.
(b) When a system of linear equations has infinitely
many solutions, the row-echelon form of the
corresponding augmented matrix will have a row
that consists entirely of zeros or more than one
column with no leading 1’s. The last column will not
contain a leading 1.
Section 1.3 Applications of Systems of Linear Equations
2. (a) Because there are three points, choose a second-degree polynomial, p(x) = a0 + a1x + a2x^2.
    Then substitute x = 0, 2, and 4 into p(x) and equate the results to y = 0, −2, and 0, respectively.
        a0 + a1(0) + a2(0)^2 = a0               = 0
        a0 + a1(2) + a2(2)^2 = a0 + 2a1 + 4a2   = −2
        a0 + a1(4) + a2(4)^2 = a0 + 4a1 + 16a2  = 0
    Use Gauss-Jordan elimination on the augmented matrix for this system.
        [1 0  0   0]      [1 0 0   0]
        [1 2  4  −2]  →   [0 1 0  −2]
        [1 4 16   0]      [0 0 1  1/2]
    So, p(x) = −2x + (1/2)x^2.
    (b) [Graph: the parabola p(x) = −2x + (1/2)x^2 passing through (0, 0), (2, −2), and (4, 0)]
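The fitting step amounts to solving a small Vandermonde system, so it is easy to verify by machine. The sketch below assumes NumPy; it recovers the coefficients a0 = 0, a1 = −2, a2 = 1/2 found above.

import numpy as np

# Data points from Exercise 2.
xs = np.array([0.0, 2.0, 4.0])
ys = np.array([0.0, -2.0, 0.0])

# Row i of V is [1, x_i, x_i**2], so V @ [a0, a1, a2] = ys.
V = np.vander(xs, 3, increasing=True)
print(np.linalg.solve(V, ys))  # [ 0.  -2.   0.5]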
4. (a) Because there are three points, choose a second-degree polynomial, p( x) = a0 + a1x + a2 x 2 .
Then substitute x = 2, 3, and 4 into p( x) and equate the results to y = 4, 4, and 4, respectively.
a0 + a1 ( 2) + a2 ( 2) = a0 + 2a1 + 4a2 = 4
2
a0 + a1 (3) + a2 (3) = a0 + 3a1 + 9a2 = 4
2
a0 + a1 ( 4) + a2 ( 4) = a0 + 4a1 + 16a2 = 4
2
Use Gauss-Jordan elimination on the augmented matrix for this system.
1 2 4 4
1 0 0 4




1 3 9 4  0 1 0 0
1 4 16 4
0 0 1 0




So, p( x) = 4.
(b)
y
5
(2, 4)
(4, 4)
(3, 4)
3
2
1
1
2
3
4
5
x
6. (a) Because there are four points, choose a third-degree polynomial, p ( x) = a0 + a1 x + a2 x 2 + a3 x 3 . Then substitute
x = 0, 1, 2, and 3 into p ( x) and equate the results to y = 42, 0, − 40, and − 72, respectively.
a0 + a1 (0) + a2 (0) + a3 (0) = a0
2
a0 + a1 (1) + a2 (1)
3
+ a3 (1)
2
3
= 42
= a0 + a1
+ a2
+ a3
a0 + a1 ( 2) + a2 ( 2) + a3 ( 2) = a0 + 2a1 + 4a2 + 8a3
2
3
= 0
= − 40
a0 + a1 (3) + a2 (3) + a3 (3) = a0 + 3a1 + 9a2 + 27 a3 = − 72
2
2
Use Gauss-Jordan elimination on the augmented matrix for this system.
1

1
1

1
42
1


1 1 1
0
0
 


2 4 8 − 40
0


0
3 9 27 − 72
0 0
0
42

1 0 0 − 41
0 1 0 − 2

0 0 1
1
0 0 0
So, p ( x) = 42 − 41x − 2 x 2 + x3 .
y
(b)
60
30
(0, 42)
(1, 0)
−4 −2
−60
−90
x
4 6 8 10
(3, − 72)
(2, − 40)
8. (a) Because there are five points, choose a fourth-degree polynomial, p ( x) = a0 + a1 x + a2 x 2 + a3 x 3 + a4 x 4 . Then
substitute x = − 4, 0, 4, 6, and 8 into p ( x ) and equate the results to y = 18, 1, 0, 28, and 135, respectively.
a0 + a1 ( − 4) + a2 ( − 4) + a3 ( − 4) + a4 ( − 4) = a0 − 4a1 + 16a2 − 64a3 + 256a4 = 18
2
3
4
a0 + a1 (0)
+ a2 ( 0 )
2
+ a3 (0)
3
+ a4 (0)
4
= a0
a0 + a1 ( 4)
+ a2 ( 4)
2
+ a3 ( 4)
3
+ a4 ( 4)
4
= a0 + 4a1 + 16a2 + 64a3 + 256a4 = 0
a0 + a1 (6)
+ a2 ( 6 )
2
+ a3 (6)
3
+ a4 (6)
4
= a0 + 6a1 + 36a2 + 216a3 + 1296a4 = 28
a0 + a1 (8)
+ a2 (8)
2
+ a3 (8)
3
+ a4 (8)
4
= a0 + 8a1 + 64a2 + 512a3 + 4096a4 = 135
=1
Use Gauss-Jordan elimination on the augmented matrix for this system.
1
1 − 4 16 − 64 256 18



0 0
0
0
1
0
1
1
4 16
64 256
0  0



0
1
6 36 216 1296 28



8 64 512 4096 135
1
0
0 0 0 0
1 0 0 0
0 1 0 0
0 0 1 0
0 0 0 1
(
1
3
4
− 12 

3
− 16
1

16 
)
3 3
1 4
1
So, p ( x) = 1 + 34 x − 12 x 2 − 16
x + 16
x = 16
16 + 12 x − 8 x 2 − 3 x3 + x 4 .
y
(b)
(8, 135)
120
80
40
(0, 1) (6, 28)
(− 4, 18)
−8 −4
(4, 0)
4
8
12
x
10. (a) Let z = x − 2012. Because there are four points, choose a third-degree polynomial, p( z ) = a0 + a1 z + a2 z 2 + a3 z 3.
Then substitute z = 0, 1, 2, and 3 into p( z ) and equate the results to y = 150, 180, 240, and 360 respectively.
a0 + a1 (0) + a2 (0) + a3 (0) = a0
2
a0 + a1 (1) + a2 (1)
3
+ a3 (1)
2
3
= 150
= a0 + a1 + a2 +
a3
= 180
a0 + a1 ( 2) + a2 ( 2) + a3 ( 2) = a0 + 2a1 + 4a2 + 8a3 = 240
2
a0 + a1 (3) + a2 (3)
3
+ a3 (3) = a0 + 3a1 + 9a2 + 27 a3 = 360
2
3
Use Gauss-Jordan elimination on the augmented matrix for this system.
1

1
1

1
150 
1


1 1 1 180 
0
 
0
2 4 8 240


3 9 27 360
0
0 0
0 0 0 150

1 0 0 25 
0 1 0 0 

0 0 1 5 
0
So, p( z ) = 150 + 25 z + 5 z 3 , or p( x) = 150 + 25( x − 2012) + 5( x − 2012) .
3
y
(b)
400
(3, 360)
300
(2, 240)
(1, 180)
(0, 150)
100
x
1
2
3
(2012)(2013)(2014)(2015)
12. (a) Because there are four points, choose a third-degree polynomial, p ( x) = a0 + a1 x + a2 x 2 + a3 x 3 . Then substitute
x = 1, 1.189, 1.316, and 1.414 into p ( x ) and equate the results to y = 1, 1.587, 2.080, and 2.520, respectively.
a0 + a1 (1)
+ a2 (1)
+ a3 (1)
2
3
= a0 +
a1 +
a2 +
a3 = 1
a0 + a1 (1.189) + a2 (1.189) + a3 (1.189) ≈ a0 + 1.189a1 + 1.414 a2 + 1.681a3 = 1.587
2
3
a0 + a1 (1.316) + a2 (1.316) + a3 (1.316) ≈ a0 + 1.316a1 + 1.732 a2 + 2.279a3 = 2.080
2
3
a0 + a1 (1.414) + a2 (1.414) + a3 (1.414) ≈ a0 + 1.414a1 + 1.999a2 + 2.827 a3 = 2.520
2
3
Use Gauss-Jordan elimination on the augmented matrix for this system.
1

1
1

1

1


1.189 1.414 1.681 1.587 
0
 
0
1.316 1.732 2.279 2.080


1.414 1.999 2.827 2.520
0
1
1
1
1
0 0 0 − 0.095

1 0 0
0.103
0 1 0
0.405

0 0 1 0.587
So, p ( x) ≈ − 0.095 + 0.103 x + 0.405 x 2 + 0.587 x3 .
y
(b)
4
3
2
1
−3 −2
−1
−2
−3
−4
(1.414, 2.520)
(1.316, 2.080)
(1.189, 1.587)
(1, 1)
1
2
3
x
14. Choosing a second-degree polynomial approximation p(x) = a0 + a1x + a2x^2, substitute x = 1, 2, and 4
into p(x) and equate the results to y = 0, 1, and 2, respectively.
    a0 + a1 + a2 = 0
    a0 + 2a1 + 4a2 = 1
    a0 + 4a1 + 16a2 = 2
The solution to this system is a0 = −4/3, a1 = 3/2, and a2 = −1/6.
So, p(x) = −4/3 + (3/2)x − (1/6)x^2.
Finally, to estimate log2 3, calculate p(3) = −4/3 + (3/2)(3) − (1/6)(3)^2 = 5/3.
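A quick numerical comparison shows how close this estimate is to the true logarithm; the sketch below assumes NumPy only for the reference value of log2 3.

import numpy as np

# p(x) = -4/3 + (3/2)x - (1/6)x^2, the quadratic found in Exercise 14.
def p(x):
    return -4/3 + (3/2)*x - (1/6)*x**2

print(p(3))        # 1.666..., i.e. 5/3, the estimate of log2(3)
print(np.log2(3))  # 1.5849..., the actual value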
16. Assume that the equation of the circle is x 2 + ax + y 2 + by − c = 0. Because each of the given points lie on the circle, you
have the following linear equations.
(− 5) + a(− 5) + (1) + b(1) − c = − 5a + b − c + 26 = 0
2
2
(− 3) + a(− 3) + (2) + b( 2) − c = − 3a + 2b − c + 13 = 0
2
2
(−1) + a(−1) + (1) + b(1) − c = −a + b − c + 2 = 0
2
2
2
Use Gauss-Jordan elimination on the system.
6
− 5 1 −1 − 26
1 0 0




−
−
−

3
2
1
13
0
1
0
1







 −1 1 −1 − 2
0 0 1 − 3
(
So, the equation of the circle is x 2 − 6 x + y 2 + y + 3 = 0, or ( x − 3) + y − 12
2
18. (a) Letting z =
) = 254 .
2
x − 1970
, the four data points are (0, 205), (1, 227), ( 2, 249), and (3, 282). Let
10
p( z ) = a0 + a1z + a2 z 2 + a3 z 3 . Substituting the points into p( z ) produces the following system of linear equations.
a0 + a1 (0) + a2 (0) + a3 (0) = a0
2
3
a0 + a1 (1) + a2 (1) + a3 (1) = a0 +
2
3
205
a2 +
a3 = 227
a0 + a1 ( 2) + a2 ( 2) + a3 ( 2) = a0 + 2a1 + 4a2 +
8a3 = 249
2
a1 +
3
a0 + a1 (3) + a2 (3) + a3 (3) = a0 + 3a1 + 9a2 + 27 a3 = 282
2
3
Form the augmented matrix
1 0 0 0 205


1 1 1 1 227
1 2 4 8 249


1 3 9 27 282
and use Gauss-Jordan elimination to obtain the equivalent reduced row-echelon matrix.
 1 0 0 0 205

77 
0 1 0 0
3

11
0 0 1 0 − 2 
11
0 0 0 1
6

77
11
11
So, the cubic polynomial is p( z ) = 205 +
z − z 2 + z 3.
3
2
6
3
Because z =
x − 1970
77  x − 1970  11 x − 1970  11 x − 1970 
, p( x) = 205 +

− 
+ 
.
10
3  10
2  10
6  10



(b) To estimate the population in 2010, let x = 2010. p( 2010) = 205 +
77
11 2 11 3
(4) − (4) + (4) = 337 million,
3
2
6
which is greater than the actual population of 309 million.
20. (a) Letting z = x − 2000, the five data points are (6, 348.7), (7, 378.8), (8, 405.6), (9, 408.2), and (10, 421.8).
Let p( z ) = a0 + a1 z + a2 z 2 + a3 z 3 + a4 z 4 .
a0 + a1 (6) + a2 (6) + a3 (6) + a4 (6) = a0 + 6a1 + 36a2 + 216a3 +
1296a4 = 348.7
a0 + a1 (7) + a2 (7) + a3 (7) + a4 (7) = a0 + 7 a1 + 49a2 + 343a3 +
2401a4 = 378.8
2
a0 +
a1 (8) +
3
2
3
a2 (8) +
a3 (8) +
2
3
4
4
a4 (8) = a0 + 8a1 + 64a2 + 512a3 +
4
a0 + a1 (9) + a2 (9) + a3 (9) + a4 (9) = a0 + 9a1 +
2
3
4
81a2 + 729a3 +
4096a4 = 405.6
6561a4 = 408.2
a0 + a1 (10) + a2 (10) + a3 (10) + a4 (10) = a0 + 10a1 + 100a2 + 1000a3 + 10,000a4 = 421.8
2
3
4
(b) Use Gauss-Jordan elimination to solve the system.
1296
1 6 36 216

1
7
49
343
2401

1 8 64 512
4096

6561
1 9 81 729

1
10
100
1000
10,000

348.7
1


378.8
0

405.6  0


408.2
0


421.8
0
8337.8

1 0 0 0 − 4313.89
0 1 0 0
854.563

0 0 1 0 − 73.608

0 0 0 1
2.338
0 0 0 0
So, p( z ) = 8337.8 − 4313.89 z + 854.563z 2 − 73.608 z 3 + 2.338 z 4 . Because z = x − 2000,
p( x) = 8337.8 − 4313.89( x − 2000) + 854.563( x − 2000) − 73.608( x − 2000) + 2.338( x − 2000) .
2
3
4
To determine the reasonableness of the model for years after 2010, compare the predicted values for 2011–2013 to
the actual values.
x        2011    2012    2013
p(x)     537.8   903.4   1722.3
Actual   447.0   469.2   476.2
The model does not produce reasonable outcomes after 2010.
22. (a) Each of the network’s four junctions gives rise to a linear equation as shown below.
input = output
300 = x1 + x2
x1 + x3 = x4 + 150
x2 + 200 = x3 + x5
x4 + x5 = 350
Rearrange these equations, form the augmented matrix, and use Gauss-Jordan elimination.
 1 1 0 0 0 300
 1 0 1 0 1 500
 1 0 1 −1 0 150



  0 1 −1 0 −1 −200
0 1 −1 0 −1 −200
0 0 0 1 1 350




0
0 0 0 1 1 350
0 0 0 0 0
Letting x5 = t and x3 = s be the free variables, you have
x1 = 500 − s − t
x2 = −200 + s + t
x3 = s
x4 = 350 − t
x5 = t , where t and s are any real numbers.
(b) If x2 = 200 and x3 = 50, then you have s = 50 and t = 350.
So, the solution is: x1 = 100, x2 = 200, x3 = 50, x4 = 0, and x5 = 350.
(c) If x2 = 150 and x3 = 0, then you have s = 0 and t = 350.
So, the solution is: x1 = 150, x2 = 150, x3 = 0, x4 = 0, and x5 = 350.
24. (a) Each of the network’s six junctions gives rise to a linear equation as shown below.
input = output
600 = x1 + x3
x1 = x2 + x4
x2 + x5 = 500
x3 + x6 = 600
x4 + x7 = x6
500 = x5 + x7
Rearrange these equations, form the augmented matrix, and use Gauss-Jordan elimination.
1 0

 1 −1
0 1

0 0

0 0
0 0

1
0 0
0 −1 0
0
0 1
1
0 0
0
1 0
0
0 1
0 0 600
1


0 0
0
0
0
0 0 500
  
1 0 600
0


0
−1 1
0

0
0 1 500

0 0 0 0 −1
0
1 0 0 0
0 −1
0 1 0 0
1
0
0 0 1 0 −1
1
0 0 0 1
0
1
0 0 0 0
0
0
0

0
600

0

500
0
Letting x7 = t and x6 = s be the free variables, you have
x1 = s
x2 = t
x3 = 600 − s
x4 = s − t
x5 = 500 − t
x6 = s
x7 = t , where s and t are any real numbers.
(b) If x1 = x2 = 100, then the solution is x1 = 100, x2 = 100, x3 = 500, x4 = 0, x5 = 400, x6 = 100, and x7 = 100.
(c) If x6 = x7 = 0, then the solution is x1 = 0, x2 = 0, x3 = 600, x4 = 0, x5 = 500, x6 = 0, and x7 = 0.
(d) If x5 = 1000 and x6 = 0, then the solution is x1 = 0, x2 = −500, x3 = 600, x4 = 500, x5 = 1000, x6 = 0,
and x7 = −500.
26. Applying Kirchoff’s first law to three of the four junctions produces
I1 + I 3 = I 2
I1 + I 4 = I 2
I3 + I 6 = I5
and applying the second law to the three paths produces
R1I1 + R2 I 2 = 3I1 + 2 I 2 = 14
R2 I 2 + R4 I 4 + R5 I 5 + R3 I 3 = 2 I 2 + 2 I 4 + I 5 + 4 I 3 = 25
R5 I 5 + R6 I 6 =
I5 +
I 6 = 8.
Rearrange these equations, form the augmented matrix, and use Gauss-Jordan elimination.
 1 −1 1 0 0 0 0
 1 0 0 0 0 0 2




0 1 0 0 0 0 4
 1 −1 0 1 0 0 0
0 0 1 0 0 0 2
0 0 1 0 −1 1 0


  
3 2 0 0 0 0 14
0 0 0 1 0 0 2




0 2 4 2 1 0 25
0 0 0 0 1 0 5
0 0 0 0 1 1 8
0 0 0 0 0 1 3




So, the solution is: I1 = 2, I 2 = 4, I 3 = 2, I 4 = 2, I 5 = 5, and I 6 = 3.
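Because the six Kirchhoff equations form a square system, the currents can be checked with a numeric solver; the sketch below assumes NumPy, with the unknowns ordered I1 through I6.

import numpy as np

A = np.array([[1, -1, 1, 0,  0, 0],   # I1 - I2 + I3 = 0
              [1, -1, 0, 1,  0, 0],   # I1 - I2 + I4 = 0
              [0,  0, 1, 0, -1, 1],   # I3 - I5 + I6 = 0
              [3,  2, 0, 0,  0, 0],   # 3*I1 + 2*I2 = 14
              [0,  2, 4, 2,  1, 0],   # 2*I2 + 4*I3 + 2*I4 + I5 = 25
              [0,  0, 0, 0,  1, 1]],  # I5 + I6 = 8
             dtype=float)
b = np.array([0, 0, 0, 14, 25, 8], dtype=float)

print(np.linalg.solve(A, b))  # [2. 4. 2. 2. 5. 3.]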
28. (a) For a set of n points with distinct x-values, substitute the points into the polynomial p(x) = a0 + a1x + … + an−1x^(n−1).
This creates a system of n linear equations in the coefficients a0, a1, …, an−1. Solving the system gives values for these
coefficients, and the resulting polynomial fits the original points.
(b) In a network, the total flow into a junction is equal to the total flow out of a junction. So, each junction determines an
equation, and the set of equations for all the junctions in a network forms a linear system. In an electrical network,
Kirchhoff’s Laws are used to determine additional equations for the system.
30. T1 = (50 + 25 + T2 + T3)/4
    T2 = (50 + 25 + T1 + T4)/4
    T3 = (25 + 0 + T1 + T4)/4
    T4 = (25 + 0 + T2 + T3)/4
Rearranging each equation gives the linear system
    4T1 −  T2 −  T3       = 75
    −T1 + 4T2       −  T4 = 75
    −T1       + 4T3 −  T4 = 25
        −  T2 −  T3 + 4T4 = 25
Use Gauss-Jordan elimination to solve this system.
    [ 4 −1 −1  0  75]      [1 0 0 0  31.25]
    [−1  4  0 −1  75]  →   [0 1 0 0  31.25]
    [−1  0  4 −1  25]      [0 0 1 0  18.75]
    [ 0 −1 −1  4  25]      [0 0 0 1  18.75]
So, T1 = 31.25°C, T2 = 31.25°C, T3 = 18.75°C, and T4 = 18.75°C.
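The same four mean-value equations can be solved numerically as a check; the sketch below assumes NumPy.

import numpy as np

#  4T1 -  T2 -  T3        = 75
#  -T1 + 4T2        -  T4 = 75
#  -T1        + 4T3 -  T4 = 25
#       -  T2 -  T3 + 4T4 = 25
A = np.array([[ 4, -1, -1,  0],
              [-1,  4,  0, -1],
              [-1,  0,  4, -1],
              [ 0, -1, -1,  4]], dtype=float)
b = np.array([75, 75, 25, 25], dtype=float)

print(np.linalg.solve(A, b))  # [31.25 31.25 18.75 18.75]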
32. (3x^2 − 7x − 12)/[(x + 4)(x − 4)^2] = A/(x + 4) + B/(x − 4) + C/(x − 4)^2
    3x^2 − 7x − 12 = A(x − 4)^2 + B(x + 4)(x − 4) + C(x + 4)
    3x^2 − 7x − 12 = Ax^2 − 8Ax + 16A + Bx^2 − 16B + Cx + 4C
    3x^2 − 7x − 12 = (A + B)x^2 + (−8A + C)x + 16A − 16B + 4C
So,
    A +   B       = 3
    −8A       + C = −7
    16A − 16B + 4C = −12.
Use Gauss-Jordan elimination to solve the system.
    [ 1   1  0   3]      [1 0 0 1]
    [−8   0  1  −7]  →   [0 1 0 2]
    [16 −16  4 −12]      [0 0 1 1]
The solution is: A = 1, B = 2, and C = 1.
So, (3x^2 − 7x − 12)/[(x + 4)(x − 4)^2] = 1/(x + 4) + 2/(x − 4) + 1/(x − 4)^2.
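Partial fraction decompositions such as this one can be confirmed with a computer algebra system; the sketch below assumes SymPy.

from sympy import symbols, apart

x = symbols('x')

expr = (3*x**2 - 7*x - 12) / ((x + 4)*(x - 4)**2)
print(apart(expr, x))
# equals 1/(x + 4) + 2/(x - 4) + 1/(x - 4)**2, so A = 1, B = 2, C = 1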
34. Use Gauss-Jordan elimination to solve the system.
0 2 2 −2
 1 0 0 25




2 0 1 −1  0 1 0 50
2 1 0 100
0 0 1 −51




So, x = 25, y = 50, and λ = −51.
36. 2y + 2λ + 2 = 0
    2x     + λ + 1 = 0
    2x + y    − 100 = 0
The augmented matrix for this system is
    [0 2 2  −2]
    [2 0 1  −1]
    [2 1 0 100].
Gauss-Jordan elimination produces the matrix
    [1 0 0  25]
    [0 1 0  50]
    [0 0 1 −51].
So, x = 25, y = 50, and λ = −51.
38. To begin, substitute x = −1 and x = 1 into p( x) = a0 + a1x + a2 x 2 + a3 x3 and equate the results to y = 2 and y = −2,
respectively.
a0 − a1 + a2 − a3 = 2
a0 + a1 + a2 + a3 = −2
Then, differentiate p, yielding p′( x) = a1 + 2a2 x + 3a3 x 2 . Substitute x = −1 and x = 1 into p′( x) and equate the results to 0.
a1 − 2a2 + 3a3 = 0
a1 + 2a2 + 3a3 = 0
Combining these four equations into one system and forming the augmented matrix, you obtain
 1 −1 1 −1 2
 1 1 1 1 −2

.
0 1 −2 3 0


0 1 2 3 0
Use Gauss-Jordan elimination to find the equivalent reduced row-echelon matrix
1
0

0

0
0
1 0 0 −3
.
0 1 0 0

0 0 1 1
0 0 0
So, p( x) = −3x + x3 . The graph of y = p( x) is shown below.
y
(− 1, 2)
−1
x
−1
−2
1
2
(1, − 2)
40. Let
    p1(x) = a0 + a1x + a2x^2 + … + an−1x^(n−1)  and  p2(x) = b0 + b1x + b2x^2 + … + bn−1x^(n−1)
be two different polynomials that pass through the n given points. The polynomial
    p1(x) − p2(x) = (a0 − b0) + (a1 − b1)x + (a2 − b2)x^2 + … + (an−1 − bn−1)x^(n−1)
is zero for these n values of x. So, a0 = b0, a1 = b1, a2 = b2, …, an−1 = bn−1.
Therefore, there is only one polynomial function of degree n − 1 (or less) whose graph passes through n points in the plane
with distinct x-coordinates.
42. Choose a fourth-degree polynomial and substitute x = 1, 2, 3, and 4 into p( x) = a0 + a1x + a2 x 2 + a3 x3 + a4 x 4 .
However, when you substitute x = 3 into p( x) and equate it to y = 2 and y = 3 you get the contradictory equations
a0 + 3a1 + 9a2 + 27 a3 + 81a4 = 2
a0 + 3a1 + 9a2 + 27 a3 + 81a4 = 3
and must conclude that the system containing these two equations will have no solution. Also, y is not a function of x
because the x-value of 3 is repeated. By similar reasoning, you cannot choose p( y ) = b0 + b1 y + b2 y 2 + b3 y 3 + b4 y 4
because y = 1 corresponds to both x = 1 and x = 2.
Review Exercises for Chapter 1
2. Because the equation cannot be written in the form
a1x + a2 y = b, it is not linear in the variables x and y.
6. Because the equation is in the form a1x + a2 y = b, it is
linear in the variables x and y.
4. Because the equation is in the form a1x + a2 y = b, it is
linear in the variables x and y.
8. Choosing x2 and x3 as the free variables and letting
x2 = s and x3 = t , you have
3x1 + 2s − 4t = 0
3x1 = −2 s + 4t
x1 = (1/3)(−2s + 4t).
10. Row reduce the augmented matrix for this system.
1 1 −1
 1 1 −1
 1 1 −1
 1 0 2

  
  
  

3 2 0
0 −1 3
0 1 −3
0 1 −3
Converting back to a linear system, the solution is x = 2 and y = −3.
12. Rearrange the equations, form the augmented matrix, and row reduce.
7
1 0
3
 1 −1
 1 −1 3
 1 −1 3
3




.






2
2
0 1 − 3 
0 1 − 3 
4 −1 10
0 3 −2
Converting back to a linear system, you obtain the solution x = 7/3 and y = −2/3.
14. Rearrange the equations, form the augmented matrix, and row reduce.
− 5 1 0
 1 −1 0
 1 −1 0
 1 0 0

  
  
  

−
1
1
0
−
5
1
0
0
−
4
0






0 1 0
Converting back to a linear system, the solution is: x = 0 and y = 0.
16. Row reduce the augmented matrix for this system.
 1
40 30 24


  
20
15
14
−


20 15 −14
3
4
3
1
5 
3
4
18. Use Gauss-Jordan elimination on the augmented matrix.
3
5


0 0 −26
Because the second row corresponds to the false
statement 0 = −26, the system has no solution.
 13

2
 1 0 −3
3
  

3 15
0 1 7
4
7
So, the solution is: x = −3, y = 7.
20. Multiplying both equations by 100 and forming the
augmented matrix produces
20 −10 7

.
40 −50 −1
Gauss-Jordan elimination yields the following.
7
1
20 


1
0
2

36. Use Gauss-Jordan elimination on the augmented matrix.
1 0 0 − 3
2 0 6 −9
4




3
2
11
16
0
1
0
0
−
−





0 0 1 − 5 
3 −1 7 −11


4

So, the solution is: x = −3/4, y = 0, and z = −5/4.
7
 1 −1 7 
1 − 1
2
20 
2
20




40 −50 −1
0 −30 −15
1 − 1
2
 
1
0
23
38. Use Gauss-Jordan elimination on the augmented matrix.
0
1
3
5

1
2

So, the solution is: x = 3/5 and y = 1/2.
22. Because the matrix has 3 rows and 2 columns, it has size
3 × 2.
24. This matrix corresponds to the system
− 2 x1 + 3 x2 = 0.
Choosing x2 = t as a free variable, you can describe the
solution as x1 = 32 t and x2 = t , where t is a real
number.
26. This matrix corresponds to the system
x1 + 2 x2 + 3 x3 = 0
0 = 1.
Because the second equation is not possible, the system
has no solution.
28. The matrix satisfies all three conditions in the definition
of row-echelon form. Because each column that has a
leading 1 (columns 1 and 4) has zeros elsewhere, the
matrix is in reduced row-echelon form.
30. The matrix satisfies all three conditions in the definition
of row-echelon form. Because each column that has a
leading 1 (columns 2 and 3) has zeros elsewhere, the
matrix is in reduced row-echelon form.
32. Use Gauss-Jordan elimination on the augmented matrix.
2
1 18
5
4
1 0 0




2
4 − 2 − 2 28  0 1 0



2 − 8
2 − 3
0 0 1 − 6
So, the solution is: x = 5, y = 2, and z = − 6.
34. Use the Gauss-Jordan elimination on the augmented
matrix.
1 0 2 3 
2 1 2 4
2




2
2
0
5

0
1
2
1
−



0 0 0 0
2 −1 6 2




Choosing z = t as the free variable, you can describe
2 5 −19 34
 1 0 3 2

  

3
8
31
54
−


0 1 −5 6
Choosing x3 = t as the free variable, you can describe
the solution as x1 = 2 − 3t , x2 = 6 + 5t , and x3 = t ,
where t is any real number.
40. Use Gauss-Jordan elimination on the augmented matrix.
1

0
0

2

2
0 14
1


3
0

0 3 8 6 16  0


0
4 0 0 −2 0


0 −1 0 0 0
0
5
3 0
4
2 5
0
2

0
0 1 0 0 4

0 0 1 0 −1

0 0 0 1 2
0 0 0 0
1 0 0 0
So, the solution is: x1 = 2, x2 = 0, x3 = 4, x4 = −1,
and x5 = 2.
42. Using a graphing utility, the augmented matrix reduces to
 1 0 − 0.533 0


0 1 1.733 0.
0 0
0 1

Because 0 ≠ 1, the system has no solution.
44. Using a graphing utility, the augmented matrix reduces to
1

0
0

0
5 0 0

0 1 0
.
0 0 1

0 0 0
The system is inconsistent, so there is no solution.
46. Using a graphing utility, the augmented matrix reduces to
 1 0 0 1.5 0


0 1 0 0.5 0.
0 0 1 0.5 0


Choosing w = t as the free variable, you can describe
the solution as x = −1.5t , y = −0.5t , z = −0.5t ,
w = t , where t is any real number.
the solution as x = 32 − 2t , y = 1 + 2t , and z = t ,
where t is any real number.
48. Use Gauss-Jordan elimination on the augmented matrix.
    [ 2   4  −7 | 0 ]      [ 1  0   3/2 | 0 ]
    [ 1  −3   9 | 0 ]  →   [ 0  1  −5/2 | 0 ]
Letting x3 = t be the free variable, you have x1 = −(3/2)t, x2 = (5/2)t, and x3 = t, where t is any real number.
50. Use Gauss-Jordan elimination on the augmented matrix; it reduces to
    [ 1  0   37/2 | 0 ]
    [ 0  1   −9/2 | 0 ].
Choosing x3 = t as the free variable, you can describe the solution as x1 = −(37/2)t, x2 = (9/2)t, and x3 = t, where t is any real number.
52. Use Gaussian elimination on the augmented matrix. The first row is [1 −1 2 | 0], and eliminating the first column produces a row of the form [0 (k + 1) −1 | 0]. Continuing the reduction shows that the homogeneous system has only the trivial solution unless k + 1 = 0. So, there will be exactly one solution (the trivial solution x = y = z = 0) if and only if k ≠ −1.
54. Form the augmented matrix for the system.
    [ 2  −1  1 | a ]
    [ 1   1  2 | b ]
    [ 0   3  3 | c ]
Use Gaussian elimination to reduce the matrix to row-echelon form.
    [ 1  −1/2  1/2 | a/2        ]
    [ 0   3/2  3/2 | (2b − a)/2 ]
    [ 0    3    3  | c          ]
        →
    [ 1  −1/2  1/2 | a/2        ]
    [ 0    1    1  | (2b − a)/3 ]
    [ 0    3    3  | c          ]
        →
    [ 1  −1/2  1/2 | a/2        ]
    [ 0    1    1  | (2b − a)/3 ]
    [ 0    0    0  | c − 2b + a ]
(a) If c − 2b + a ≠ 0, then the system has no solution.
(b) The system cannot have one solution.
(c) If c − 2b + a = 0, then the system has infinitely many solutions.
56. Find all possible first rows, where a and b are nonzero real numbers:
    [0 0 0], [0 0 1], [0 1 0], [0 1 a], [1 0 0], [1 a 0], [1 a b], [1 0 a].
For each of these, examine the possible second rows.
    [0 0 0; 0 0 0], [0 0 1; 0 0 0], [0 1 0; 0 0 0], [0 1 0; 0 0 1], [0 1 a; 0 0 0],
    [1 0 0; 0 0 0], [1 0 0; 0 1 0], [1 0 0; 0 0 1], [1 0 0; 0 1 a],
    [1 a 0; 0 0 0], [1 a 0; 0 0 1], [1 a b; 0 0 0], [1 0 a; 0 0 0], [1 0 a; 0 1 0]
58. Use Gaussian elimination on the augmented matrix.
    [ λ + 2    −2     3 | 0 ]
    [   −2   λ − 1    6 | 0 ]
    [    1      2     λ | 0 ]
        →
    [ 1      2            λ        | 0 ]
    [ 0   λ + 3        2λ + 6      | 0 ]
    [ 0  −2λ − 6   −λ² − 2λ + 3    | 0 ]
        →
    [ 1      2            λ        | 0 ]
    [ 0   λ + 3        2λ + 6      | 0 ]
    [ 0      0     λ² − 2λ − 15    | 0 ]
So, you need λ² − 2λ − 15 = (λ − 5)(λ + 3) = 0, which implies λ = 5 or λ = −3.
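As a supplementary check (not from the manual; NumPy assumed), the coefficient matrix reconstructed above is singular exactly at λ = 5 and λ = −3:

    import numpy as np

    def coeff(lam):
        # Coefficient matrix of the homogeneous system in Exercise 58
        return np.array([[lam + 2.0, -2.0, 3.0],
                         [-2.0, lam - 1.0, 6.0],
                         [1.0, 2.0, lam]])

    for lam in (5.0, -3.0, 0.0):
        print(lam, np.linalg.det(coeff(lam)))
    # The determinant vanishes at lam = 5 and lam = -3 and is nonzero otherwise.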
60. (a) True. A homogeneous system of linear equations is always consistent, because there is always a trivial solution,
i.e., when all variables are equal to zero. See Theorem 1.1 on page 21.
(b) False. Consider, for example, the following system (with three variables and two equations).
x+ y − z = 2
−2 x − 2 y + 2 z = 1.
It is easy to see that this system has no solution.
62. From the following chart, you obtain a system of equations.

                         A       B       C
    Mixture X           1/5     2/5     2/5
    Mixture Y            0       0       1
    Mixture Z           1/3     1/3     1/3
    Desired Mixture     6/27    8/27   13/27

    (1/5)x     + (1/3)z =  6/27
    (2/5)x     + (1/3)z =  8/27
    (2/5)x + y + (1/3)z = 13/27
Solving the system gives x = 10/27, y = 5/27, and z = 12/27. To obtain the desired mixture, use 10 gallons of spray X, 5 gallons of spray Y, and 12 gallons of spray Z.

64. (3x² + 3x − 2)/((x + 1)²(x − 1)) = A/(x + 1) + B/(x − 1) + C/(x + 1)²
So,
    3x² + 3x − 2 = A(x + 1)(x − 1) + B(x + 1)² + C(x − 1)
    3x² + 3x − 2 = Ax² − A + Bx² + 2Bx + B + Cx − C
    3x² + 3x − 2 = (A + B)x² + (2B + C)x + (−A + B − C),
which gives the system
     A +  B     =  3
         2B + C =  3
    −A +  B − C = −2.
Use Gauss-Jordan elimination to solve the system.
    [  1  1   0 |  3 ]      [ 1  0  0 | 2 ]
    [  0  2   1 |  3 ]  →   [ 0  1  0 | 1 ]
    [ −1  1  −1 | −2 ]      [ 0  0  1 | 1 ]
The solution is: A = 2, B = 1, and C = 1. So,
    (3x² + 3x − 2)/((x + 1)²(x − 1)) = 2/(x + 1) + 1/(x − 1) + 1/(x + 1)².

66. (a) Because there are four points, choose a third-degree polynomial, p(x) = a0 + a1x + a2x² + a3x³. By substituting the values at each point into this equation, you obtain the system
    a0 −  a1 +  a2 −  a3 = −1
    a0                   =  0
    a0 +  a1 +  a2 +  a3 =  1
    a0 + 2a1 + 4a2 + 8a3 =  4.
Use Gauss-Jordan elimination on the augmented matrix; the solution is a0 = 0, a1 = 2/3, a2 = 0, a3 = 1/3. So, p(x) = (2/3)x + (1/3)x³.
    (b) The graph of p passes through the points (−1, −1), (0, 0), (1, 1), and (2, 4). (Figure omitted.)

68. Substituting the points (1, 0), (2, 0), (3, 0), and (4, 0) into the polynomial p(x) yields the system
    a0 +  a1 +   a2 +   a3 = 0
    a0 + 2a1 +  4a2 +  8a3 = 0
    a0 + 3a1 +  9a2 + 27a3 = 0
    a0 + 4a1 + 16a2 + 64a3 = 0.
Gaussian elimination shows that the only solution is a0 = a1 = a2 = a3 = 0.
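A supplementary sketch (not part of the original solutions; NumPy assumed) that solves the interpolation system of Exercise 66(a) by building the Vandermonde matrix directly:

    import numpy as np

    # Exercise 66(a): cubic through (-1,-1), (0,0), (1,1), (2,4)
    xs = np.array([-1.0, 0.0, 1.0, 2.0])
    ys = np.array([-1.0, 0.0, 1.0, 4.0])
    V = np.vander(xs, 4, increasing=True)   # columns 1, x, x^2, x^3
    a = np.linalg.solve(V, ys)
    print(a)   # [0.  0.6667  0.  0.3333] -> p(x) = (2/3)x + (1/3)x^3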
70. (a) When t = 0, s = 160:  (1/2)a(0)² + v0(0) + s0 = 160  ⟹  s0 = 160
        When t = 1, s = 96:   (1/2)a(1)² + v0(1) + s0 = 96   ⟹  (1/2)a + v0 + s0 = 96
        When t = 2, s = 0:    (1/2)a(2)² + v0(2) + s0 = 0    ⟹  2a + 2v0 + s0 = 0
    Use Gaussian elimination to solve the system.
        a + 2v0 + 2s0 = 192
        2a + 2v0 + s0 =   0
                   s0 = 160
        a + 2v0 + 2s0 = 192
           −2v0 − 3s0 = −384        (−2) Eq. 1 + Eq. 2
                   s0 = 160
        a + 2v0 + 2s0 = 192
          v0 + (3/2)s0 = 192        (−1/2) Eq. 2
                   s0 = 160
    Back-substitute: s0 = 160; v0 + (3/2)(160) = 192 ⟹ v0 = −48; a + 2(−48) + 2(160) = 192 ⟹ a = −32.
    The position equation is s = (1/2)(−32)t² − 48t + 160, or s = −16t² − 48t + 160.

    (b) When t = 1, s = 134:  (1/2)a(1)² + v0(1) + s0 = 134  ⟹  a + 2v0 + 2s0 = 268
        When t = 2, s = 86:   (1/2)a(2)² + v0(2) + s0 = 86   ⟹  2a + 2v0 + s0 = 86
        When t = 3, s = 6:    (1/2)a(3)² + v0(3) + s0 = 6    ⟹  9a + 6v0 + 2s0 = 12
    Use Gaussian elimination to solve the system.
        a + 2v0 + 2s0 = 268
           −2v0 − 3s0 = −450        (−2) Eq. 1 + Eq. 2
        −12v0 − 16s0 = −2400        (−9) Eq. 1 + Eq. 3
        a + 2v0 + 2s0 = 268
           −2v0 − 3s0 = −450
           3v0 + 4s0  =  600        (−1/4) Eq. 3
        a + 2v0 + 2s0 = 268
           −2v0 − 3s0 = −450
                  −s0 = −150        3 Eq. 2 + 2 Eq. 3
    Back-substitute: s0 = 150; −2v0 − 3(150) = −450 ⟹ v0 = 0; a + 2(0) + 2(150) = 268 ⟹ a = −32.
    The position equation is s = (1/2)(−32)t² + (0)t + 150, or s = −16t² + 150.
    (c) When t = 1, s = 184:  (1/2)a(1)² + v0(1) + s0 = 184  ⟹  a + 2v0 + 2s0 = 368
        When t = 2, s = 116:  (1/2)a(2)² + v0(2) + s0 = 116  ⟹  2a + 2v0 + s0 = 116
        When t = 3, s = 16:   (1/2)a(3)² + v0(3) + s0 = 16   ⟹  9a + 6v0 + 2s0 = 32
    Use Gaussian elimination to solve the system.
        a + 2v0 + 2s0 = 368
           −2v0 − 3s0 = −620        (−2) Eq. 1 + Eq. 2
        −12v0 − 16s0 = −3280        (−9) Eq. 1 + Eq. 3
        a + 2v0 + 2s0 = 368
          v0 + (3/2)s0 = 310        (−1/2) Eq. 2
        −12v0 − 16s0 = −3280
        a + 2v0 + 2s0 = 368
          v0 + (3/2)s0 = 310
                  2s0 = 440         12 Eq. 2 + Eq. 3
    Back-substitute: 2s0 = 440 ⟹ s0 = 220; −2v0 − 3(220) = −620 ⟹ v0 = −20; a + 2(−20) + 2(220) = 368 ⟹ a = −32.
    The position equation is s = (1/2)(−32)t² + (−20)t + 220, or s = −16t² − 20t + 220.
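A numerical cross-check (not from the manual; NumPy assumed) that fits the position equation of Exercise 70(c) by solving the same 3 × 3 linear system:

    import numpy as np

    # Exercise 70(c): s(1) = 184, s(2) = 116, s(3) = 16 with s = (1/2)a t^2 + v0 t + s0
    t = np.array([1.0, 2.0, 3.0])
    s = np.array([184.0, 116.0, 16.0])
    M = np.column_stack([0.5 * t**2, t, np.ones_like(t)])   # unknowns a, v0, s0
    a, v0, s0 = np.linalg.solve(M, s)
    print(a, v0, s0)   # -32.0 -20.0 220.0 -> s = -16t^2 - 20t + 220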
72. Applying Kirchhoff's first law to either junction produces I1 + I3 = I2, and applying the second law to the two paths produces
    R1I1 + R2I2 = 3I1 + 4I2 = 3
    R2I2 + R3I3 = 4I2 + 2I3 = 2.
Rearrange these equations, form the augmented matrix, and use Gauss-Jordan elimination.
    [ 1  −1  1 | 0 ]      [ 1  0  0 | 5/13 ]
    [ 3   4  0 | 3 ]  →   [ 0  1  0 | 6/13 ]
    [ 0   4  2 | 2 ]      [ 0  0  1 | 1/13 ]
So, the solution is I1 = 5/13, I2 = 6/13, and I3 = 1/13.
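A short verification (not part of the original solutions; NumPy assumed) of the circuit currents in Exercise 72:

    import numpy as np

    # Exercise 72: I1 - I2 + I3 = 0, 3I1 + 4I2 = 3, 4I2 + 2I3 = 2
    A = np.array([[1.0, -1.0, 1.0],
                  [3.0,  4.0, 0.0],
                  [0.0,  4.0, 2.0]])
    b = np.array([0.0, 3.0, 2.0])
    I = np.linalg.solve(A, b)
    print(I)        # [0.3846 0.4615 0.0769]
    print(I * 13)   # [5. 6. 1.] -> I1 = 5/13, I2 = 6/13, I3 = 1/13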
Project Solutions for Chapter 1
1 Graphing Linear Equations
1.  [ 2  −1 | 3 ]      [ 1      −1/2     |     3/2      ]
    [ a   b | 6 ]  →   [ 0  b + (1/2)a   | 6 − (3/2)a   ]
    (a) Unique solution if b + (1/2)a ≠ 0. For instance, a = b = 2.
    (b) Infinite number of solutions if b + (1/2)a = 0 and 6 − (3/2)a = 0, that is, a = 4 and b = −2.
    (c) No solution if b + (1/2)a = 0 and 6 − (3/2)a ≠ 0, that is, a ≠ 4 and b = −(1/2)a. For instance, a = 2, b = −1.
    (d) Sample graphs (figures omitted) of the corresponding pairs of lines:
        (a) 2x − y = 3 and 2x + 2y = 6 intersect in a single point.
        (b) 2x − y = 3 and 4x − 2y = 6 are the same line.
        (c) 2x − y = 3 and 2x − y = 6 are parallel lines.
    (The answers are not unique.)
2.  (a) x + y + z = 0        (b) x + y + z = 0        (c) x + y + z = 0
        x + y + z = 0               y + z = 1                x + y + z = 1
        x − y − z = 0                   z = 2                x − y − z = 0
    (The answers are not unique.)
    There are other configurations, such as three mutually parallel planes or three planes that intersect pairwise in lines.
2 Underdetermined and Overdetermined Systems of Equations
1. Yes, x + y = 2 is a consistent underdetermined system.
2. Yes,
x+
y = 2
2x + 2 y = 4
3x + 3 y = 6
is a consistent, overdetermined system.
3. Yes,
x + y + z = 1
x + y + z = 2
is an inconsistent underdetermined system.
4. Yes,
x + y = 1
x + y = 2
x + y = 3
is an inconsistent overdetermined system.
5. In general, a linear system with more equations than
variables would probably be inconsistent. Here is an
intuitive reason: Each variable represents a degree of
freedom, while each equation gives a condition that, in
general, reduces the number of degrees of freedom by one.
If there are more equations (conditions) than variables
(degrees of freedom), then there are too many conditions
for the system to be consistent. So you expect such a
system to be inconsistent in general. But, as Exercise 2
shows, this is not always true.
6. In general, a linear system with more variables than
equations would probably be consistent. As in Exercise 5,
the intuitive explanation is as follows. Each variable
represents a degree of freedom, and each equation
represents a condition that takes away one degree of
freedom. If there are more variables than equations, in
general, you would expect a solution. But, as Exercise 3
shows, this is not always true.
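As an editorial aside (not from the manual; NumPy assumed), a least-squares solve makes the contrast between the consistent system of Exercise 2 and the inconsistent system of Exercise 4 visible: the residual is zero only when an exact solution exists.

    import numpy as np

    # Exercise 2: x + y = 2, 2x + 2y = 4, 3x + 3y = 6  (consistent, overdetermined)
    A2 = np.array([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
    b2 = np.array([2.0, 4.0, 6.0])

    # Exercise 4: x + y = 1, x + y = 2, x + y = 3  (inconsistent, overdetermined)
    A4 = np.ones((3, 2))
    b4 = np.array([1.0, 2.0, 3.0])

    for A, b in ((A2, b2), (A4, b4)):
        x, *_ = np.linalg.lstsq(A, b, rcond=None)
        print(x, np.linalg.norm(A @ x - b))   # residual ~0 only for the consistent system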
C H A P T E R  2
Matrices

Section 2.1   Operations with Matrices
Section 2.2   Properties of Matrix Operations
Section 2.3   The Inverse of a Matrix
Section 2.4   Elementary Matrices
Section 2.5   Markov Chains
Section 2.6   More Applications of Matrix Operations
Review Exercises
Project Solutions
Section 2.1 Operations with Matrices
2. x = 13, y = 12

4. Equating corresponding entries gives
   x + 2 = 2x + 6  and  2y = 18,  so  x = −4 and y = 9.
   (Equivalently, 2x = −8 and y + 2 = 11 give the same values.)
 6 −1  1 4
 6 + 1 −1 + 4
 7 3

 





6. (a) A + B =  2 4 + −1 5 = 2 + ( −1) 4 + 5 =  1 9
−3 5  1 10
 −3 + 1 5 + 10
−2 15

 





 6 −1  1 4
 6 − 1 −1 − 4
 5 −5

 





(b) A − B =  2 4 − −1 5 = 2 − ( −1) 4 − 5 =  3 −1
−3 5  1 10
 −3 − 1 5 − 10
−4 −5

 





 2(6) 2( −1)
 6 −1
 12 −2






(c) 2 A = 2  2 4 =  2( 2) 2( 4) =  4 8
2( −3) 2(5)
−3 5
−6 10






 12 −2  1 4
 12 − 1 −2 − 4
 11 −6

 





8 − 5 =  5 3
(d) 2 A − B =  4 8 − −1 5 = 4 − ( −1)
−6 10  1 10
 −6 − 1 10 − 10
−7 0

 





 4
4
 6 −1
 1 4  3 − 12 


 1


 
5 + 2  2 4 = −1 5 +  1
2 =  0
− 1
5
 1 10
−3 5
 1 10 − 3





  2
2
 2
 1
(e)

B + 12 A = −1
7
2

7
25 
2
3 2 −1 0 2 1
3 + 0 2 + 2 −1 + 1
3 4 0

 





8. (a) A + B = 2 4 5 + 5 4 2 = 2 + 5 4 + 4 5 + 2 = 7 8 7
0 1 2 2 1 0
0 + 2 1 + 1 2 + 0
2 2 2

 





3 2 −1 0 2 1
3 − 0 2 − 2 −1 − 1
 3 0 −2

 





3
(b) A − B = 2 4 5 − 5 4 2 = 2 − 5 4 − 4 5 − 2 = −3 0
0 1 2 2 1 0
0 − 2 1 − 1 2 − 0
−2 0 2

 





 2(3) 2( 2) 2( −1)
3 2 −1
6 4 −2






(c) 2 A = 2 2 4 5 = 2( 2) 2( 4) 2(5) = 4 8 10
2(0) 2(1) 2( 2)
0 1 2
0 2 4






3 2 −1 0 2 1
6 4 −2 0 2 1
 6 2 −3

 


 



(d) 2 A − B = 2 2 4 5 − 5 4 2 = 4 8 10 − 5 4 2 =  −1 4 8
0 1 2 2 1 0
0 2 4 2 1 0
−2 1 4

 


 



 32 3
0 2 1
3 2 −1
0 2 1  32 1 − 12 









5 = 6 6
(e) B + 12 A = 5 4 2 + 12 2 4 5 = 5 4 2 +  1 2

2

2 3
2 1 0
0 1 2
2 1 0  0 1
1





 
2
2


30
1
2

9
2
1
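A quick reproduction of the entrywise operations in Exercise 8 (not part of the manual; NumPy assumed):

    import numpy as np

    A = np.array([[3, 2, -1], [2, 4, 5], [0, 1, 2]], dtype=float)
    B = np.array([[0, 2, 1], [5, 4, 2], [2, 1, 0]], dtype=float)

    print(A + B)          # part (a)
    print(A - B)          # part (b)
    print(2 * A)          # part (c)
    print(2 * A - B)      # part (d)
    print(B + 0.5 * A)    # part (e)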
10. (a) A + B is not possible. A and B have different sizes.
(b) A − B is not possible. A and B have different sizes.
 3
 6
 
 
(c) 2 A = 2  2 =  4
−1
−2
 
 
(d) 2 A − B is not possible. A and B have different
sizes.
(e)
B + 12 A is not possible. A and B have different
sizes.
12. (a) c23 = 5a23 + 2b23 = 5( 2) + 2(11) = 32
(b) c32 = 5a32 + 2b32 = 5(1) + 2( 4) = 13
14. Simplifying the right side of the equation produces
    [ w  x ]   [ −4 + 2y   3 + 2w ]
    [ y  x ] = [  2 + 2z  −1 + 2x ].
By setting corresponding entries equal to each other, you obtain four equations.
    w = −4 + 2y   ⟹   −2y + w = −4
    x =  3 + 2w   ⟹    x − 2w =  3
    y =  2 + 2z   ⟹    y − 2z =  2
    x = −1 + 2x   ⟹         x =  1
The solution to this linear system is: x = 1, y = 3/2, z = −1/4, and w = −1.
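A numerical check of Exercise 14 (not from the manual; NumPy assumed), with the unknowns ordered x, y, z, w:

    import numpy as np

    A = np.array([[0.0, -2.0,  0.0,  1.0],   # -2y + w = -4
                  [1.0,  0.0,  0.0, -2.0],   #  x - 2w = 3
                  [0.0,  1.0, -2.0,  0.0],   #  y - 2z = 2
                  [1.0,  0.0,  0.0,  0.0]])  #  x = 1
    b = np.array([-4.0, 3.0, 2.0, 1.0])
    print(np.linalg.solve(A, b))   # [1.  1.5 -0.25 -1.] -> x = 1, y = 3/2, z = -1/4, w = -1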
2( 4) + ( − 2)( 2) 2(1) + ( − 2)( − 2)
1
6
 2 − 2 4
4
16. (a) AB = 
 = 

 = 

−1(1) + 4( − 2) 
4 2 − 2
−1( 4) + 4( 2)
−1
4 − 9
4( − 2) + 1( 4) 
4( 2) + 1( −1)
1  2 − 2
4
7 − 4
(b) BA = 
 = 

 = 

2
2
+
−
2
−
1
2
−
2
+
−
2
4
2
−
2
−
1
4
(
)
(
)(
)
(
)
(
)(
)



6 −12


 1 −1 7  1 1 2 1(1) + ( −1)( 2) + 7(1) 1(1) + ( −1)(1) + 7( −3) 1( 2) + ( −1)(1) + 7( 2) 6 −21 15
 


 

1 1 = 2(1) + ( −1)( 2) + 8(1) 2(1) + ( −1)(1) + 8( −3) 2( 2) + ( −1)(1) + 8( 2) = 8 −23 19
18. (a) AB = 2 −1 8 2
3 1 −1  1 −3 2  3(1) + 1( 2) + ( −1)(1) 3(1) + 1(1) + ( −1)( −3) 3( 2) + 1(1) + ( −1)( 2) 4
7 5


 
 
1( −1) + 1( −1)1 + 2(1)
1(7) + 1(8) + 2( −1) 9 0 13
 1 1 2  1 −1 7  1(1) + 1( 2) + 2(3)
 


 

2( −1) + 1( −1) + 1(1)
2(7) + 1(8) + 1( −1) = 7 −2 21
(b) BA = 2 1 1 2 −1 8 =  2(1) + 1( 2) + 1(3)
 1 −3 2 3 1 −1 1(1) + ( −3)( 2) + 2(3) 1( −1) + ( −3)( −1) + 2(1) 1(7) + ( −3)(8) + 2( −1)  1 4 −19


 

 
2
1  1
2
3(1) + 2( 2) + 1(1)
3( 2) + 2( −1) + 1( − 2)

2
 3
 8







0
4 2 −1 = − 3(1) + 0( 2) + 4(1)
− 3( 2) + 0( −1) + 4( − 2)
20. (a) AB = − 3
 =  1 −14
4(1) + ( − 2)( 2) + ( − 4)(1) 4( 2) + ( − 2)( −1) + ( − 4)( − 2)
 4 − 2 − 4  1 − 2
− 4 18







(b) BA is not defined because B is 3 × 2 and A is 3 × 3.
 −1( 2) −1(1) −1(3) −1( 2)
 −1
− 2 −1 − 3 − 2


 


2
2
2
1
2
3
2
2
2
2
6
4
(
)
(
)
(
)
(
)


 4
=
22. (a) AB =   [2 1 3 2] = 
− 2
− 4 − 2 − 6 − 4
− 2( 2) − 2(1) − 2(3) − 2( 2)


 


 1( 2)
1(1)
1(3)
1( 2)
 1
 2
1
3
2
 −1
 
2
(b) BA = [2 1 3 2]   = 2( −1) + 1( 2) + 3( − 2) + 2(1) = [− 4]
− 2
 
 1
24. (a) AB is not defined because A is 2 × 2 and B is 3 × 2.
2( 2) + 1(5)
2( − 3) + 1( 2) 
2 1
 9 − 4



 2 − 3


1( − 3) + 3( 2)  = 17
3
(b) BA =  1 3 
 = 1( 2) + 3(5)
5
2

2( 2) + ( −1)(5) 2( − 3) + ( −1)( 2)
2 −1 
−1 − 8






1 3
 2 1 2  4 0



26. (a) AB =  3 −1 − 2  −1 2 − 3 −1
− 2 1 − 2 − 2 1 4 3



2( 4) + 1( −1) + 2( − 2)

= 3( 4) + ( −1)( −1) + ( − 2)( − 2)
− 2( 4) + 1( −1) + ( − 2)( − 2)

2(0) + 1( 2) + 2(1)
3(0) + ( −1)( 2) + ( − 2)(1)
− 2(0) + 1( 2) + ( − 2)(1)
2(1) + 1( − 3) + 2( 4)
3(1) + ( −1)( − 3) + ( − 2)( 4)
− 2(1) + 1( − 3) + ( − 2)( 4)
2(3) + 1( −1) + 2(3)


3(3) + ( −1)( −1) + ( − 2)(3)
− 2(3) + 1( −1) + ( − 2)(3) 
4
7
11
 3


4
=  17 − 4 − 2
− 5
0 −13 −13

(b) BA is not defined because B is 3 × 4 and A is 3 × 3.
28. (a) AB is not defined because A is 2 × 5 and B is 2 × 2.
 1 6  1 0 3 −2 4
(b) BA = 


4 2 6 13 8 −17 20
 1(1) + 6(6) 1(0) + 6(13) 1(3) + 6(8) 1( −2) + 6( −17) 1( 4) + 6( 20)
= 

4(1) + 2(6) 4(0) + 2(13) 4(3) + 2(8) 4( −2) + 2(−17) 4( 4) + 2( 20)
37 78 51 −104 124
= 

16 26 28 −42 56
30. C + E is not defined because C and E have different
sizes.
32. − 4A is defined and has size 3 × 4 because A has size
3 × 4.
40. In matrix form Ax = b, the system is
2 3  x1 
 5

   =  .
1
4
x

  2
10
Use Gauss-Jordan elimination on the augmented matrix.
34. BE is defined. Because B has size 3 × 4 and E has size
4 × 3, the size of BE is 3 × 3.
2 3 5
 1 0 −2

  

 1 4 10
0 1 3
36. 2D + C is defined and has size 4 × 2 because 2D and C
have size 4 × 2.
 x1 
−2
So, the solution is   =  .
x
 2
 3
38. As a system of linear equations, Ax = 0 is
x1 + 2 x2 + x3 + 3 x4 = 0
x1 − x2
x2
+ x4
= 0.
− x3 + 2 x4 = 0
Use Gauss-Jordan elimination on the augmented matrix
for this system.
 1 2 1 3 0
 1 0 0 2 0




1
−
1
0
1
0



0 1 0 1 0
0 1 −1 2 0
0 0 1 −1 0




42. In matrix form Ax = b, the system is
−4 9  x1 
−13

   =  .
 1 −3  x2 
 12
Use Gauss-Jordan elimination on the augmented matrix.
 1 0 −23
−4 9 −13

  0 1 − 35 

 1 −3 12
3

−23
 x1 
So, the solution is   =  35 .
x
 2
− 3 
Choosing x4 = t , the solution is
x1 = − 2t , x2 = −t , x3 = t , and x4 = t , where t is any
real number.
44. In matrix form Ax = b, the system is
50. The augmented matrix row reduces as follows.
 1 1 −3  x1 
−1

 
 
−1 2 0  x2  =  1.
 1 −1 1  x3 
 2

 
 
 1 2 4 1
 1 0 −2 −3




−1 0 2 3  0 1 3 2
 0 1 3 2
0 0 0 0




Use Gauss-Jordan elimination on the augmented matrix.
There are an infinite number of solutions. For example,
x3 = 0, x2 = 2, x1 = −3.
 1 0 0 2
 1 1 −3 −1




3
−1 2 0 1  0 1 0 2 
0 0 1 3 
 1 −1 1 2


2

 2
 x1 
 
 
So, the solution is  x2  =  32 .
3
 x3 
 
2
 1
 1
2
4
 
 
 
 
So, b = 3 = −3−1 + 2 0 + 0 2.
2
 0
 1
3
 
 
 
 
52. The augmented matrix row reduces as follows.
46. In matrix form Ax = b, the system is
 1 −1 4  x1 
 17

 
 
 1 3 0  x2  = −11.
0 −6 5  x3 
 40

 
 
−3 5 −22
 1 −3 10




4  0 9 −18
 3 4
 4 −8 32
0 −4
8



 1 −3 10
 1 0 4




 0
1 −2  0 1 −2.
0
0 0 0
1 −2



Use Gauss-Jordan elimination on the augmented matrix.
 1 −1 4 17
 1 0 0 4




 1 3 0 −11  0 1 0 −5
0 −6 5 40
0 0 1 2




 x1 
 4
 
 
So, the solution is  x2  = −5.
 x3 
 2
 
 
1
0
0
1
1
0
0
1
1
0
0
1
1 −1 1
0

0
0

1

−1
 x1 
0
 
 
 x2 
0
 x3  = 0 .
 
 
 x4 
0
 
 
 x5 
5
Use Gauss-Jordan elimination on the augmented matrix.
1

0
0

0

−1
1
0
0
1
1
0
0
1
1
0
0
1
1 −1 1
0
1


0 0
0

0 0  0


0
1 0


−1 5
0
0
−22
−3
 5


 
 
b =  4 = 4  3 + ( −2)  4.
 32
 4
−8


 
 
54. Expanding the left side of the equation produces
48. In matrix form Ax = b, the system is
1

0
0

0

−1
So,
0 0 0 0 −1

1 0 0 0 1
0 1 0 0 −1

0 0 1 0 1

0 0 0 1 −1
2 −1
2 −1  a11 a12 

A = 


3 −2
3 −2 a21 a22 
 2a11 − a21 2a12 − a22 
 1 0
= 
 = 

3a11 − 2a21 3a12 − 2a22 
0 1
and you obtain the system
− a21
2a11
− a22 = 0
2a12
− 2a21
3a11
3a12
= 1
= 0
− 2a22 = 1.
Solving by Gauss-Jordan elimination yields
a11 = 2, a12 = −1, a21 = 3, and a22 = −2.
2 −1
So, you have A = 
.
3 −2
So, the solution is  x1 
−1
 
 
x
 2
1
 x3  = −1 .
 
 
 x4 
1
 
 
 x5 
−1
3 0 0 −7 0 0



60. AB = 0 −5 0  0 4 0
0 0 0  0 0 12



56. Expanding the left side of the matrix equation produces
a b2 1
 2a + 3b a + b
3 17


 = 
 = 
.
 c d 3 1
2c + 3d c + d 
4 −1
3( −7) + 0 + 0
0 + 0 + 0 0 + 0 + 0


= 
0 + 0 + 0 0 + ( −5)4 + 0 0 + 0 + 0

0+0+0
0 + 0 + 0 0 + 0 + 0

You obtain two systems of linear equations (one
involving a and b and the other involving c and d).
2a + 3b = 3
a + b = 17,
0 0
−21


=  0 −20 0.
 0
0 0

and
2c + 3d = 4
c + d = −1.
Similarly,
Solving by Gauss-Jordan elimination yields a = 48,
b = −31, c = −7, and d = 6.
0 0
−21


BA =  0 −20 0.
 0
0 0

2 0 0 2 0 0
4 0 0





58. AA = 0 −3 0 0 −3 0 = 0 9 0
0 0 0 0 0 0
0 0 0





0
0  b11 b12
a11


0 b21 b22
62. (a) AB =  0 a22
 0
0 a33  b31 b32

b13 
 a11b11 a11b12


b23  = a22b21 a22b22
 a33b31 a33b32
b33 

a11b13 

a22b23 
a33b33 
The ith row of B has been multiplied by aii , the ith diagonal entry of A.
 b11 b12

(b) BA = b21 b22
b31 b32

0
0
b13  a11
 a11b11 a22b12



b23   0 a22
0 = a11b21 a22b22
a11b31 a22b32
b33   0
0 a33 

a33b13 

a33b23 
a33b33 
The ith column of B has been multiplied by aii , the ith diagonal entry of A.
(c) If a11 = a22 = a33 , then AB = a11B = BA.
64. The trace is the sum of the elements on the main
diagonal.
66. The trace is the sum of the elements on the main
diagonal.
1 + 0 + 2 + ( −3) = 0
1+1+1 = 3
68. Let AB = [c_ij], where c_ij = Σ(k=1 to n) a_ik b_kj. Then
    Tr(AB) = Σ(i=1 to n) c_ii = Σ(i=1 to n) Σ(k=1 to n) a_ik b_ki.
Similarly, if BA = [d_ij], then d_ij = Σ(k=1 to n) b_ik a_kj, and
    Tr(BA) = Σ(i=1 to n) d_ii = Σ(i=1 to n) Σ(k=1 to n) b_ik a_ki = Tr(AB).
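A small numerical illustration of Tr(AB) = Tr(BA) (not part of the original solutions; NumPy assumed):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 4))
    B = rng.standard_normal((4, 4))
    print(np.trace(A @ B), np.trace(B @ A))   # equal up to round-off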
70. AB = [ cos α  −sin α ] [ cos β  −sin β ]
         [ sin α   cos α ] [ sin β   cos β ]
       = [ cos α cos β − sin α sin β    −(cos α sin β + sin α cos β) ]
         [ sin α cos β + cos α sin β      cos α cos β − sin α sin β  ]
    BA = [ cos β  −sin β ] [ cos α  −sin α ]
         [ sin β   cos β ] [ sin α   cos α ]
       = [ cos β cos α − sin β sin α    −(cos β sin α + sin β cos α) ]
         [ sin β cos α + cos β sin α      cos β cos α − sin β sin α  ]
So, you see that AB = BA = [ cos(α + β)  −sin(α + β) ; sin(α + β)  cos(α + β) ].
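A quick numerical confirmation of Exercise 70 (not from the manual; NumPy assumed):

    import numpy as np

    def R(theta):
        # 2x2 rotation matrix
        return np.array([[np.cos(theta), -np.sin(theta)],
                         [np.sin(theta),  np.cos(theta)]])

    a, b = 0.4, 1.1
    print(np.allclose(R(a) @ R(b), R(a + b)))    # True
    print(np.allclose(R(a) @ R(b), R(b) @ R(a))) # True: these particular matrices commute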
 a11 a12 
b11 b12 
72. Let A = 
 and B = 
.
a
a
 21 22 
b21 b22 
 1 0
Then the matrix equation AB − BA = 
 is equivalent to
0 1
 a11 a12   b11 b12   b11 b12   a11 a12 
 1 0


 − 

 = 
.
a21 a22  b21 b22  b21 b22  a21 a22 
0 1
This equation implies that
a11b11 + a12b21 − b11a11 − b12 a21 = a12b21 − b12 a21 = 1
a21b12 + a22b22 − b21a12 − b22 a22 = a21b12 − b21a12 = 1
which is impossible. So, the original equation has no solution.
74. Assume that A is an m × n matrix and B is a p × q matrix. Because the product AB is defined, you know that n = p.
Moreover, because AB is square, you know that m = q. Therefore, B must be of order n × m, which implies that the product
BA is defined.
76. Let rows s and t be identical in the matrix A. So, asj = aij for j = 1,  , n. Let AB = cij , where
cij =
n
aik bkj . Then, csj =
k =1
n
ask bkj , and ctj =
k =1
n
atk bkj . Because ask = atk for k = 1,  , n, rows s and t of AB
k =1
are the same.
78. (a) No, the matrices have different sizes.
70 50 25
84 60 30
80. 1.2 
 = 

35 100 70
42 120 84
(b) No, the matrices have different sizes.
(c) Yes; No, BA is undefined.
1 .
82. (a) Multiply the matrix for 2010 by 3090
This produces a matrix giving the information as percents of the total population.
A =
1
3090
12,306

16,095
27,799

 5698

12,222
35,240
41,830
72,075
13,717
31,867
7830 
3.98


9051 
5.21

14,985 ≈ 9.00


1.84
2710 


5901 
3.96
11.40 2.53

13.54 2.93
23.33 4.85

4.44 0.88

10.31 1.91
1
. This produces a matrix giving the information as percents of the total population.
Multiply the matrix for 2013 by 3160
1
B = 3160
(b)
12,026

15,772
27,954

 5710

12,124
3.81

4.99
B − A = 8.85

1.81

3.84
8446 
3.81


9791 
4.99

73,703 16,727 ≈ 8.85


1.81
14,067 3104 


32,614 6636 
3.84
35,471
41,985
11.23 2.67
3.98


13.29 3.10
5.21

23.32 5.29 − 9.00


4.45 0.98
1.84


10.32 2.10
3.96
11.23 2.67

13.29 3.10
23.32 5.29

4.45 0.98

10.32 2.10
− 0.18 − 0.18
11.40 2.53


13.54 2.93
− 0.22 − 0.25

23.33 4.85 = − 0.15 − 0.001


− 0.04 0.01
4.44 0.88


10.31 1.91
− 0.12 0.01
0.14

0.17
0.44

0.11

0.19
(c) The 65+ age group is projected to show relative growth from 2010 to 2013 over all regions because its column
in B − A contains all positive percents.
 0 0

0 0
84. AB = 
−1 0

 0 −1
1 0 1

0 1 5
0 0 1

0 0 5
2 3 4
3 4
 1 2



6 7 8
5 6 7
8
= 
 −1 −2 −3 −4
2 3 4



6 7 8
−5 −6 −7 −8
86. (a) True. The number of elements in a row of the first matrix must be equal to the number of elements in a column of the
second matrix. See page 43 of the text.
(b) True. See page 45 of the text.
Section 2.2 Properties of Matrix Operations
8 + 5 + ( −7)
6 + 0 + ( −11)
 6 8  0 5 −11 −7
−5 6
2. 
 = 
 + 
 + 
 = 

1
3
2
0
1
1
−
+
−
+
+
−
+
−
1
0
3
1
2
1
−
−
−
−
(
)
(
)
(
)



 
 

−2 −2


4. 12 ([5 −2 4 0] + [14 6 −18 9]) = 12 5 + 14 −2 + 6 4 + ( −18) 0 + 9 = 12 [19 4 −14 9] = 19
2
2 −7
9
2
 −5 −1  7 5 
 −5 + 7
−1 + 5
 4 11
−4 −11


 1
 


 1
6. −1−2 −1 + 6   3 4 + −9 −1  =  2
1 + 6 3 + ( −9) 4 + ( −1)
  0 13  6 −1 
 0 + 6 13 + ( −1)
 9 3
−9 −3


 






−4 −11
 2 4
−4 −11  13 23 


 1


 
1 + 6 −6 3 =  2
1 + −1 12 
=  2
−9 −3
 6 12
−9 −3  1 2





 

− 11 − 31
 −4 + 13 −11 + 32 
3
3




3
= 2 + ( −1)
1 + 12  =  1
2
 −8 −1
 −9 + 1 −3 + 2




1 2  0 1
 1 3
8. A + B = 
 + 
 = 

3
4
−
1
2

 

2 6
 0 1
 0 1
0 −1
10. ( a + b) B = (3 + ( −4)) 
 = ( −1) 
 = 

−
1
2
−
1
2




 1 −2
0 0
0 0
0 0
12. ( ab)O = (3)( −4) 
 = ( −12) 
 = 

0
0
0
0




0 0
14. (a) X = 3 A − 2 B
(b) 2 X = 2 A − B
−6 −3  0 6

 

=  3
0 −  4 0
 9 −12 −8 −2

 

−4 −2  0 3

 

2 X =  2 0 −  2 0
 6 −8 −4 −1

 

 6 −9


0
= −1
17 −10


−4 −5


2X =  0
0
 10 −7


−2 − 5 
2


X =  0
0
 5 − 7
2

(c)
2 X + 3A = B
(d)
2 A + 4 B = −2 X
−6 −3
 0 3




2X +  3
0 =  2 0
 9 −12
−4 −1




−4 −2  0 12

 

 2 0 +  8 0 = −2 X
 6 −8 −16 −4

 

 6 6


2 X =  −1 0
−13 11


 −4 10


0 = −2 X
 10
−10 −12


 3

X =  − 12
− 13
 2
37
3

0
11 
2
  0 1 1 3 
16. c(C B ) = ( − 2) 
 −1 0−1 2 



 2 −5


−5 0 = X
 5 6


24. (a)
−1 2 
= ( − 2) 

−1 − 3
− 3 4
6 
− 8 26

= 
  0 1
7
−
14
−
9



 −1 1
= 2 − 4


2 6 
 0 1   1 3  0 1 
18. C ( BC ) = 
  

 
−1 0  −1 2 −1 0 
 0 1  −3 1
−2 −1
= 

 = 

−
−
−
1
0
2
1



 3 −1
 18 0
= 

−12 5

− 3 4 
2   1 − 5 0 
− 4

(b) A( BC ) = 
 
  0 1 
3 3 
 1 − 3  − 2

 −1 1 

2 − 3 −1
− 4
= 


1
3  3 − 2
−

 1 3   0 1 0 0 
20. B(C + O) = 
  
 + 
 
−1 2  −1 0 0 0 
 18 0
= 

−12 5
 1 3  0 1
 −3 1
= 

 = 

−1 2 −1 0
−2 −1
 1 3 
 1 2 3 
22. B(cA) = 
  ( −2) 
 
−1 2 
0 1 −1 
 1 3 −2 −4 −6
−2 −10 0
= 

 = 

2
0 10
−1 2  0 −2
 2
− 3 4
2  1 − 5 0  



   0 1
1
3
2
3
3
−
−





  −1 1


 − 4
( AB)C =  
1
26. AB =  14
 2
1  1
2 2

1 1
2
  2
1
3
2
=
 18

1
 2

4
1
4

3
8

1
BA =  12
 2
1  1
2 4

1 1
  2
4
1
3
2
 =  18
1

2
 4
1
2
 ≠
3

8
AB
 1 2 3 0 0 0
12 −6 9





0
5
4
0
0
0
=
=
AC
28.



16 −8 12
3 −2 1 4 −2 3
 4 −2 3





 4 −6 3 0 0 0



=  5 4 4 0 0 0 = BC
−1 0 1 4 −2 3



But A ≠ B.
2 4  1 −2
0 0
30. AB = 
 = 
 − 1
 = O
1
2 4  2
0 0
But A ≠ O and B ≠ O.
 1 2  1 2
 1 0
36. A2 = 

 = 
 = I2
0 −1 0 −1
0 1
 1 2  1 0
 1 2
32. AT = 

 = 

−
0
1
0
1



0 −1
2
 1 0
So, A4 = ( A2 ) = I 22 = I 2 = 
.
0 1
 1 2  1 0  1 2
34. A + IA = 
 + 


0 −1 0 1 0 −1
38. In general, AB ≠ BA for matrices.
 1 2  1 2
2 4
= 
 + 
 = 

−
−
0
1
0
1

 

0 −2
T
42. ( AB )
T
  1 2 −3 −1 
= 
 0 −2  2 1 



T
−3 −1  1 2
BT AT = 
 

 2 1 0 −2
T
19
 6 −7


T
40. D = − 7
0
23
 19 23 − 32


 1 1
= 

−4 −2
T
T
19
 6 −7


0
23
= − 7
 19 23 − 32


1 −4
= 

1 −2
−3 2  1 0
1 −4
= 

 = 

−
−
1
1
2
2



1 −2
T
 2 1 −1  1 0 −1 

T


44. ( AB) =  0 1 3 2 1 −2 
 4 0 2 0 1 3 



T
 1 0 −1 2 1 −1

 

T T
B A = 2 1 −2 0 1 3
0 1 3 4 0 2

 

T
4 0 −7


= 2 4 7
4 2 2


T
 4 2 4


=  0 4 2
−7 7 2


 1 2 0  2 0 4
 4 2 4





=  0
1 1  1 1 0 =  0 4 2
−7 7 2
−1 −2 3 −1 3 2





 1 −1
 1 3 0 
10 11

46. (a) AT A = 
 3 4 = 

−1 4 −2 
11 21

0 −2
 1 −1
 2 −1 2

  1 3 0


=
(b) AAT = 3 4 

−1 25 −8
1
4
2
−
−

0 −2 
 2 −8 4




8 168 −104
 4 2 14 6  4 −3 2 0
 252





−3 0 −2 8  2 0 11 −1
8 77 −70
50
= 
48. (a) AT A = 
 2 11 12 −5 14 −2 12 −9
 168 −70 294 −139





−
98
 0 −1 −9 4  6 8 −5 4
 104 50 −139
 4 −3 2 0  4 2


2 0 11 −1 −3 0
(b) AAT = 
14 −2 12 −9  2 11


 6 8 −5 4  0 −1
50.
(1)17

 0

17
A =  0

 0

 0

0
0
0
( −1)
0
0
0
(1)
17
0
0
0
( −1)
0
0
0
17
17
6
30 86 −10
 29



8
30
126
169 −47
= 
 86 169 425 −28
12 −5



−9 4
−
 10 −47 −28 141
14
−2
0 
1 0


0 
0 −1

= 0 0
0 


0 0

0 

0 0
17 
(1) 
0
0
1
0
0
0

0 0
0 0

−1 0

0 1
0
52.
(1)20

 0

20
A =  0

 0

 0

0
0
0
0
0
0
(1)
0
0
0
(−1)
0
0
0
( −1)
20
20
20
23
8 0 0



54. Because A3 = 0 −1 0 =  0

0 0 27
 0


0 
1


0 
0

0
=
0 


0


0

0
20 
(1) 
0
(−1)
0
3
0 0 0 0

1 0 0 0
0 1 0 0

0 0 1 0

0 0 0 1
0 
2 0 0



0 , you have A = 0 −1 0.

0 0 3
3


(3) 
 1 1
1 0
56. (a) False. In general, for n × n matrices A and B it is not true that AB = BA. For example, let A = 
, B = 
.
0 0
1 0
2 0
1 1
Then AB = 
 ≠ 
 = BA.
0
0


1 1
 1 1
1 0
2 0
2 0
(b) False. Let A = 
, B = 
, C = 
. Then AB = 
 = AC , but B ≠ C .
0
0
1
0
0
0






0 0
(c) True. See Theorem 2.6, part 2 on page 57.
58.
aX + A(bB ) = b( AB + IB)
Original equation
aX + ( Ab) B = b( AB + B )
Associative property; property of the identity matrix
aX + bAB = bAB + bB
Property of scalar multiplication; distributive property
aX + bAB + ( −bAB ) = bAB + bB + ( −bAB ) Add − bAB to both sides.
aX = bAB + bB + ( −bAB ) Additive inverse
aX = bAB + ( −bAB ) + bB Commutative property
aX = bB
Additive inverse
b
X = B
a
Divide by a.
2
 1 0 0
 2 1 −1
 2 1 −1
 2 1 −1








60. f ( A) = −10 0 1 0 + 5 1 0 2 − 2 1 0 2 +  1 0 2
0 0 1
−1 1 3
−1 1 3
−1 1 3








3
10 0 0  10 5 − 5
 2 1 −1 2 1 −1  2 1 −1 2 1 −1



 



 
= −  0 10 0 +  5 0 10 − 2  1 0 2 1 0 2 +  1 0 2 1 0 2
 0 0 10 − 5 5 15
−1 1 3−1 1 3 −1 1 3−1 1 3

 



 


2
5 − 5
 0
 6 1 − 3  2 1 −1 6 1 − 3



 


5 +  1 0 2 0 3
5
=  5 −10 10 − 2  0 3
− 5
− 4 2 12 −1 1 3− 4 2 12
5
5


 


5 − 5  12 2 − 6  16 3 −13
 0


 
 
21
=  5 −10 10 −  0 6 10 +  − 2 5
− 5
5
5 − 8 4 24 −18 8 44

6 −12
 4


=  3 −11 21
−15
9
25

62. (cd ) A = (cd ) aij  = (cd ) aij  = c( daij ) = c daij  = c( dA)
64. (c + d ) A = (c + d ) aij  = (c + d )aij  = caij + daij  = caij  + daij  = c aij  + d aij  = cA + dA
66. (a) To show that A( BC ) = ( AB)C , compare the ijth entries in the matrices on both sides of this equality. Assume that A has
size n × p, B has size p × r , and C has size r × m. Then the entry in the kth row and the jth column of BC is
r
l =1
bkl clj . Therefore, the entry in ith row and jth column of A(BC) is
p
r
aik
k =1
bkl clj =
l =1
aik bkl clj .
k, l
The entry in the ith row and jth column of (AB)C is
r
l =1
dil clj , where d il is the entry of AB in the ith row and the lth
column.
p
So, dil =
p
r
k =1
aik bkl for each l = 1,  , r. So, the ijth entry of ( AB)C is
aik bkl clj =
i =1 k =1
aik bkl cij .
k, l
Because all corresponding entries of A(BC) and (AB)C are equal and both matrices are of the same size ( n × m), you
conclude that A( BC ) = ( AB)C.
(b) The entry in the ith row and jth column of ( A + B )C is ( ail + bil )c1 j + ( ai 2 + bi 2 )c2 j +  + ( ain + bin )cnj , whereas the
entry in the ith row and jth column of AC + BC is ( ai1c1 j +  + aincnj ) + (bi1c1 j +  + bincnj ), which are equal by the
distributive law for real numbers.
(c) The entry in the ith row and jth column of c( AB) is c ai1b1 j + ai 2b2 j +  + ainbnj . The corresponding entry for (cA) B is
(cai1 )b1 j + (cai 2 )b2 j +  + (cain )bnj and the corresponding entry for A(cB) is ai1(cb1 j ) + ai 2 (cb2 j ) +  + ain (cbnj ).
Because these three expressions are equal, you have shown that c( AB) = (cA) B = A(cB).
(
68. (2) ( A + B)
T
(3) (cA)
T
= aij  + bij 
(
= c aij 
) = a + b  = a + b  = a  + b  = A + B
T
T
ij
ij
ji
ji
ji
ji
T
T
) = ca  = ca  = c a  = c( A )
T
T
ij
ji
T
ji
(4) The entry in the ith row and jth column of ( AB) is a j1b1i + a j 2b2i +  a jnbni . On the other hand, the entry in the ith row
T
and jth column of B T AT is b1i a j1 + b2i a j 2 +  + bni a jn , which is the same.
0 1 −1 1
 1 0
70. (a) Answers will vary. Sample answer: 

 = 

1
0
1
0



−1 1
(b) Let A and B be symmetric.
If AB = BA, then ( AB)
T
If ( AB)
T
= BT AT = BA = AB and AB is symmetric.
= AB, then AB = ( AB) = BT AT = BA and AB = BA.
T
2 1
T
72. Because A = 
 = A , the matrix is symmetric.
 1 3
0 − 2 1


0 3 = AT , the matrix is skew-symmetric.
74. Because − A = 2
 1 − 3 0


76. If AT = − A and BT = − B, then ( A + B)
T
= AT + BT = − A − B = −( A + B), which implies that A + B is
skew-symmetric.
78. Let
A = a11 a12

a21 a22


an1 an 2
A − AT = a11 a12

a21 a22


an1 an 2
0


a21 − a12
= 


an1 − a1n
a13  a1n 

a23  a2 n 
.

an3  ann 
a13  a1n  a11
 
a23  a2 n  a12
−
 
an3  ann  a1n
a12 − a21
0
an 2 − a2 n
a21
a22
a2 n
a31  an1 

a32  an 2 


a3n  ann 
a13 − a31  a1n − an1 

a23 − a32  a2 n − an 2 


0
an3 − a3n 

0
a12 − a21
a13 − a31
 a1n − an1 



0
a
a
−
 a2 n − an 2 
23
32
− ( a12 − a21 )
= 



− ( a1n − an1 ) − ( a2 n − an 2 ) − ( a3n − an3 ) 

0
So, A − AT is skew-symmetric.
Section 2.3 The Inverse of a Matrix
1 − 1
 1 −1 2 1
 2 −1
 1 0
2. AB = 

 = 
 = 

1
2
1
1
2
2
1
2
−
−
+
−
+





0 1
2 1  1 −1
2 − 1 −2 + 2
 1 0
BA = 

 = 
 = 

 1 1 −1 2
 1 − 1 −1 + 2
0 1
 1 −1  53
4. AB = 
 2
2 3 − 5
 3
BA =  25
− 5
1 1
5 

1 2
5

1
1
5
 = 
1

0
5
0

1
−1
1 0
 = 

3
0 1
 2 −17 11  1 1 2
 1 0 0





6. AB = −1 11 −7 2 4 −3 = 0 1 0
 0
0 0 1
3 −2 3 6 −5



 1 1 2  2 −17 11
 1 0 0





BA = 2 4 −3 −1 11 −7 = 0 1 0
3 6 −5  3
0 0 1
6 −2




8. Use the formula
       A⁻¹ = (1/(ad − bc)) [ d −b ; −c a ],
   where
       A = [ a b ; c d ] = [ 2 −2 ; 2 2 ].
   So, the inverse is
       A⁻¹ = (1/(2(2) − (−2)(2))) [ 2 2 ; −2 2 ] = [ 1/4 1/4 ; −1/4 1/4 ].

10. Use the formula
       A⁻¹ = (1/(ad − bc)) [ d −b ; −c a ],
    where
       A = [ a b ; c d ] = [ 1 −2 ; 2 −3 ].
    So, the inverse is
       A⁻¹ = (1/((1)(−3) − (−2)(2))) [ −3 2 ; −2 1 ] = [ −3 2 ; −2 1 ].
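A direct implementation of the 2 × 2 inverse formula used in Exercises 8 and 10 (not from the manual; NumPy assumed, and the function name inverse_2x2 is illustrative only):

    import numpy as np

    def inverse_2x2(A):
        # A^{-1} = (1/(ad - bc)) [[d, -b], [-c, a]]
        (a, b), (c, d) = A
        det = a * d - b * c
        if det == 0:
            raise ValueError("matrix is singular")
        return np.array([[d, -b], [-c, a]]) / det

    print(inverse_2x2(np.array([[1.0, -2.0], [2.0, -3.0]])))   # Exercise 10: [[-3, 2], [-2, 1]]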
12. Using the formula
A−1 =
 d −b
1

,
ad − bc −c a
where
a b
−1 1
A = 
 = 

c
d


 3 −3
you see that ad − bc = ( −1)( −3) − (1)(3) = 0. So, the
matrix has no inverse.
14. Adjoin the identity matrix to form
    [A  I] = [  1  2  2 | 1 0 0 ]
             [  3  7  9 | 0 1 0 ]
             [ −1 −4 −7 | 0 0 1 ].
Using elementary row operations, reduce the matrix as follows.
    [I  A⁻¹] = [ 1 0 0 | −13  6  4 ]
               [ 0 1 0 |  12 −5 −3 ]
               [ 0 0 1 |  −5  2  1 ]

16. Adjoin the identity matrix to form
    [A  I] = [ 10  5 −7 | 1 0 0 ]
             [ −5  1  4 | 0 1 0 ]
             [  3  2 −2 | 0 0 1 ].
Using elementary row operations, reduce the matrix as follows.
    [I  A⁻¹] = [ 1 0 0 | −10 −4  27 ]
               [ 0 1 0 |   2  1  −5 ]
               [ 0 0 1 | −13 −5  35 ]
Therefore, the inverse is
    A⁻¹ = [ −10 −4 27 ; 2 1 −5 ; −13 −5 35 ].
18. Adjoin the identity matrix to form
 3 2 5 1 0 0
[ A I ] =  2 2 4 0 1 0.
−4 4 0 0 0 1


Using elementary row operations, you cannot form the
identity matrix on the left side. Therefore, the matrix has
no inverse.
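A supplementary check (not from the manual; NumPy assumed): the Exercise 16 inverse can be confirmed by multiplication, and the Exercise 18 matrix is singular because its determinant is zero.

    import numpy as np

    # Exercise 16: verify the inverse found by adjoining the identity and row reducing
    A16 = np.array([[10.0, 5.0, -7.0], [-5.0, 1.0, 4.0], [3.0, 2.0, -2.0]])
    A16_inv = np.array([[-10.0, -4.0, 27.0], [2.0, 1.0, -5.0], [-13.0, -5.0, 35.0]])
    print(np.allclose(A16 @ A16_inv, np.eye(3)))      # True
    print(np.allclose(np.linalg.inv(A16), A16_inv))   # True

    # Exercise 18: the matrix has no inverse
    A18 = np.array([[3.0, 2.0, 5.0], [2.0, 2.0, 4.0], [-4.0, 4.0, 0.0]])
    print(np.linalg.det(A18))                         # 0 (up to round-off)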
20. Adjoin the identity matrix to form
1
− 56
3

2
[A I] =  0 3
 1 −1
2

1 0 0

2 0 1 0.
− 52 0 0 1
11
6
Using elementary row operations, you cannot form the
identity matrix on the left side. Therefore, the matrix has
no inverse.
22. Adjoin the identity matrix to form
 0.1 0.2 0.3 1 0 0
[ A I ] = − 0.3 0.2 0.2 0 1 0.
 0.5 0.5 0.5 0 0 1


Using elementary row operations, reduce the matrix as
follows.
0 −2 0.8
1 0 0


I A  = 0 1 0 −10 4 4.4
0 0 1 10 −2 −3.2


−1
Therefore, the inverse is
 0 −2 0.8


A−1 = −10 4 4.4.
 10 −2 −3.2


24. Adjoin the identity matrix to form
 1 0 0 1 0 0
[ A I ] = 3 0 0 0 1 0
2 5 5 0 0 1


Using elementary row operations, you cannot form the
identity matrix on the left side. Therefore, the matrix has
no inverse.
26. Adjoin the identity matrix to form
1

0
=
A
I
[ ] 
0

0
0 0 1 0 0 0

2 0 0 0 1 0 0
.
0 −2 0 0 0 1 0

0 0 3 0 0 0 1
0
Using elementary row operations, reduce the matrix as
follows.
1

0
I A−1  = 
0

0
0 0

0 0
0 1 0 0 0 − 12 0

0 0 1 0 0
0 13 
0 0 0 1 0
1 0 0 0
1
2
Therefore, the inverse is
0 0
1 0
 1

0 0
0 2
A −1 = 
.
0 0 − 12 0


0 13 
0 0
28. Adjoin the identity matrix to form
14 1 0 0 0

5 −4 6 0 1 0 0
.
2
1 −7 0 0 1 0

6 −5 10 0 0 0 1
ad − bc = (1)( 2) − ( −2)( −3) = − 4
Using elementary row operations, reduce the matrix as
follows.
1

0
[ A I ] = 
0

0
0 0 0
27 −10
1 0 0 −16
0 1 0 −17
0 0 1
−7
4 −29

5 −2 18
4 −2 20

2 −1
8
− 1
2 2
2
A−1 = − 14 
 =  3
−
3
1



 4
 27 −10 4 −29


−16
5 −2 18
A−1 = 
.
−17
4 −2 20


2 −1
8
 −7
ad − bc = ( −12)( −2) − 3(5) = 24 − 15 = 9
− 2
−2 −3
 9
A−1 = 19 
=

5
12
−
−
− 59



− 1
36. A =  45
 3
 8
36  9
A−1 = − 143
− 5
 3
0 0 0 1 −1.5
0 0 1 0
− 32
− 94 
 =  143
 60
− 14 
 143
81 
143 
9 
143 
2
 1 6 − 7 
1 − 56
2
1 
38. A− 2 = ( A−1 ) =  47
  = 2209 

 5
2 
40 − 31
 
Using elementary row operations, reduce the matrix as
follows.
0 1 0 0
9
4
8

9
( )( 89 ) − ( 94 )( 53 ) = − 143
36
3 −2 0 1 0 0 0

2 4 6 0 1 0 0
.
0 −2 1 0 0 1 0

0 0 5 0 0 0 1
1 0 0 0
− 13 

− 34 
ad − bc = − 14
30. Adjoin the identity matrix to form
1

0
I A−1  = 
0

0
− 12 

− 14 
3
−12
34. A = 

 5 −2
Therefore the inverse is
1

0
[ A I ] = 
0

0
43
 1 −2
32. A = 

−3 2
8 −7
4

2
[ A I ] = 
0

3
The Inverse of a Matrix
−4
2.6

1 − 0.8
.
0 − 0.5
0.1

0
0
0.2
0.5
A− 2 = ( A 2 )
−1
 − 31 56
= 

− 40 1
−1
1 − 56
1 
= 2209


40 − 31
The results are equal.
Therefore, the inverse is
−4 2.6
 1 −1.5


0
0.5
1 −0.8
A−1 = 
.
0
0 −0.5
0.1


0
0 0.2
0
2
40. A− 2 = ( A−1 )
2
 −15 − 4
28 
228 −1604
 873
1 



1
=  2  −1 0
2  = 4 
61
16 −112
  23
−1317 − 344 2420
6 − 42 


 
 48 4 32


2
−2
A = ( A ) = − 29 48 −17
 22 9 15


The results are equal.
−1
−1
228 −1604

61
16 −112
−1317 − 344 2420



=
1
4
873
42. (a)
( AB)
−1
48. The coefficient matrix for each system is
1 1 −2


A = 1 −2
1.
1 −1 −1


Using the algorithm to invert a matrix, you find that the
inverse is
= B −1 A−1
2 − 2
5
11
= 11
 73
3
1
11 − 11
 7
1
7

2
7

−4 9
1 
= 77


−9 1
(b) ( AT )
−1
(c)
( 2A)
−1
44. (a)
( AB)
−1
= ( A −1 )
T
− 2
=  73
 7
− 2
= 12 A−1 = 12  73
 7
1
7

2
7

T
− 2
=  17
 7
1
− 1
7
 =  37
2
 14
7

3
7

2
7

1
14

1
7

= B −1 A−1
 6 5 −3  1 −4 2



= −2 4 −1 0
1 3
 1 3 4 4 2 1



−6 −25 24


= −6 10 7
17
7 15

 1 −4 2
−1
T


(b) ( AT ) = ( A−1 ) = 0
1 3
4
2 1

(c)
(2 A)
−1
T
 1 0 4


= −4 1 2
 2 3 1


 12 −2 1
 1 −4 2




3
1
1 3 =  0
= 12 A−1 = 12 0
2
2
2
4 2 1
1 12 



46. The coefficient matrix for each system is
2 −1
A = 

2 1
and the formula for the inverse of a 2 × 2 matrix
produces
A− 1 =
 14
1  1 1

 =  1
2 + 2 −2 2
− 2
1
4
.
1

2
 1 1  −3
1
(a) x = A−1b =  14 14    =  
− 2 2   7
5
The solution is: x = 1 and y = 5.
 1 1 −1


A−1 =  23 13 −1.
 1 2 −1
3 3

 1 1 −1  0
1

 

(a) x = A−1b =  23 13 −1  0 = 1
 1 2 −1 −1
1

3 3
 
The solution is: x1 = 1, x2 = 1, and x3 = 1.
 1 1 −1 −1
 1
2 1
 
 
(b) x = A b =  3 3 −1  2 = 0
 1 2 −1  0
 1
 
3 3
 
The solution is: x1 = 1, x2 = 0, and x3 = 1.
−1
50. Using a graphing utility or software program, you have
Ax = b
 1
 
 2
−1
x = A b = −1
 
 0
 
 1
where
1

2
A = 1

2

3
1 −1
1
1
1 −1
1
4
1
1
3 −1
 x1 
 3

 
 
1 1
 4
 x2 
2 −1, x =  x3 , and b =  3.

 
 
−1
 x4 
1 −1

 
 
−2 1
 5
 x5 
The solution is: x1 = 1, x2 = 2, x3 = −1, x4 = 0, and
x5 = 1.
 1 1   −1
−1
(b) x = A−1b =  14 14    =  
− 2 2  −3
−1
The solution is: x = −1 and y = −1.
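A brief sketch (not part of the original solutions; NumPy assumed) that reuses one inverse to solve both right-hand sides of Exercise 46:

    import numpy as np

    A = np.array([[2.0, -1.0], [2.0, 1.0]])
    A_inv = np.linalg.inv(A)                      # [[0.25, 0.25], [-0.5, 0.5]]
    print(A_inv @ np.array([-3.0, 7.0]))          # part (a): [ 1.  5.]
    print(A_inv @ np.array([-1.0, -3.0]))         # part (b): [-1. -1.]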
52. Using a graphing utility or software program, you have
Ax = b
−1
 
 2
 1
x = A−1b =  
 3
 
 0
 1
 
 d −b
1


ad − bc −c a
 sec θ
1

2
sec θ − tan θ − tan θ
=
2
 sec θ
= 
− tan θ
 4 −2 4 2 −5 −1
 x1 


 
−
−
3
6
5
6
3
3


 x2 
 2 −3

x 
1 3 −1 −2
, x =  3 , and
A = 
 −1 4 −4 −6 2 4
 x4 


 
 3 −1 5 2 −3 −5
 x5 
−2

x 
3 −4 −6
1 2

 6
 1


 −11
 0
.
b = 
 −9


 1
−12


The solution is: x1 = −1, x2 = 2, x3 = 1, x4 = 3,
x5 = 0, and x6 = 1.
Letting A−1 = A, you find that
− tan θ 
.
sec θ 
[F
0.017 0.010 0.008 1 0 0


I ] = 0.010 0.012 0.010 0 1 0.
0.008 0.010 0.017 0 0 1


Using elementary row operations, reduce the matrix as
follows.
I
4.44
 1 0 0 115.56 −100



−100 250
−100
F  = 0 1 0
0 0 1
4.44 −100 115.56

−1
4.44
115.56 −100


So, F −1 =  −100 250
−100 and
 4.44 −100 115.56


w = F
−1
64. AT ( A−1 )
T
4.44  0  −15
115.56 −100


 

d =  −100 250 −100 0.15 = 37.5.
 4.44 −100 115.56  0  −15


 

= ( A−1 A)
T
T
1
= −1.
x−4
T
66. ( I − 2 A)( I − 2 A) = I 2 − 2 IA − 2 AI + 4 A2
 x 2
56. The matrix 
 will be singular if
−3 4
= I − 4 A + 4 A2
= I − 4A + 4A
ad − bc = ( x)( 4) − ( −3)( 2) = 0, which implies that
4 x = −6 or x = − 32 .
4 A = ( 4 A) 


Then, multiply by 14
to obtain
1
A = 14 ( 4 A) = 14  38
16
1
 32
− 14 
=


3
1
 64
8

1
− 16
.
1
32 

− 14 

1

8
(because A = A2 )
= I
So, ( I − 2 A)
58. First, find 4 A.
 18
1 2 −4
=

 = 3
4 + 12 3 2
16
= I nT = I n and
( A−1 ) AT = ( AA−1 ) = I nT = I n
T
−1
So, ( A−1 ) = ( AT ) .
So, x = 3.
−1 −1
− tan θ 

sec θ 
62. Adjoin the identity matrix to form
54. The inverse of A is given by
1 −2 − x

.
x − 4  1 2
45
60. Using the formula for the inverse of a 2 × 2 matrix, you
have
A− 1 =
where
A −1 =
The Inverse of a Matrix
−1
= I − 2 A.
68. Because ABC = I , A is invertible and A−1 = BC.
So, ABC A = A and BC A = I .
So, B −1 = CA.
70. Let A2 = A and suppose A is nonsingular. Then, A −1
exists, and you have the following.
A−1 ( A2 ) = A−1 A
( A−1 A) A = I
A = I
74. A has an inverse if aii ≠ 0 for all i = 1  n and
72. (a) True. See Theorem 2.8, part 1 on page 67.
1
a
 11

0
−1
A = 



0

 1 1
(b) False. For example, consider the matrix 
,
0 0
which is not invertible, but 1 ⋅ 1 − 0 ⋅ 0 = 1 ≠ 0.
(c) False. If A is a square matrix then the system
Ax = b has a unique solution if and only if A is a
nonsingular matrix.
0
1
a22
0

0 


0  0 
.


1 
0 
ann 
0 
 1 2
76. A = 

−2 1
−3 4  2 4 5 0
0 0
(a) A2 − 2 A + 5 I = 
 − 
 + 
 = 

−4 −3 −4 2 0 5
0 0
(
)
(b) A 15 ( 2 I − A) = 15 ( 2 A − A2 ) = 15 (5 I ) = I
1 −2
−1
 = A directly.
1

(c) The calculation in part (b) did not depend on the entries of A.
Similarly,
( 15 (2 I − A)) A = I . Or, 15 (2I − A) = 15 2
78. Let C be the inverse of ( I − AB ), that is C = ( I − AB) . Then C ( I − AB) = ( I − AB )C = I .
−1
Consider the matrix I + BCA. Claim that this matrix is the inverse of I − BA. To check this claim,
show that ( I + BCA)( I − BA) = ( I − BA)( I + BCA) = I .
First, show ( I − BA)( I + BCA) = I − BA + BCA − BABCA
= I − BA + B(C − ABC ) A
= I − BA + B(( I − AB)C ) A
I
= I − BA + BA = I
Similarly, show ( I + BCA)( I − BA) = I .
80. Answers will vary. Sample answer:
 1
A = 
−1
0
1 0
 or A = 

0
1 0
0 
a b
a b  d −b
ad − bc
 1 0
1
1
1
  d −b
82. AA−1 = 

 =


 =

 = 


ad − bc  c d  −c a
ad − bc  0
ad − bc
 c d  ad − bc  −c a
0 1
A− 1 A =
0 
 d −b a b
ad − bc
 1 0
1
1


 =

 = 

ad − bc −c a  c d 
ad − bc  0
ad − bc
0 1
Section 2.4 Elementary Matrices
2. This matrix is not elementary, because it is not square.
4. This matrix is elementary. It can be obtained by
interchanging the two rows of I 2 .
6. This matrix is elementary. It can be obtained by
multiplying the first row of I 3 by 2, and adding the result
to the third row.
8. This matrix is not elementary, because two elementary
row operations are required to obtain it from I 4 .
10. C is obtained by adding the third row of A to the first
row. So,
 1 0 1


E = 0 1 0.
0 0 1


12. A is obtained by adding −1 times the third row of C to the first row. So,
 1 0 −1


E = 0 1 0.
0 0 1


14. Answers will vary. Sample answer:
Matrix
Elementary Row Operation
Elementary Matrix
 1 −1 2 − 2


6
0 3 − 3
0 0
2
2

R1 ↔ R2
0 1 0


 1 0 0
0 0 1


 1 −1 2 − 2


2
0 1 −1
0 0 2
2

 1 −1 2 − 2


2
0 1 −1
0 0 1
1

( 13 ) R → R
2
2
( 12 ) R → R
3
3
 1 0 0
 1 
0 3 0
0 0 1


 1 0 0


0 1 0
0 0 1 

2
 1 0 0  1 0 0 0 1 0 0 3 − 3
6
 1 −1 2 − 2

 1





So, 0 1 0 0 3 0  1 0 0  1 −1 2 − 2 = 0 1 −1 2.
0 0 1  0 0 1 0 0 1 0 0
0 0 1
2
2
1



2 

16. Answers will vary. Sample answer:
Matrix
3
0
1


0 −1 −1
3 − 2 − 4


3
0
1


0 −1 −1
0 −11 − 4


3
0
1


1
1
0
0 −11 − 4


Elementary Row Operation
Elementary Matrix
( − 2) R1 + R2 → R2
 1 0 0


− 2 1 0
 0 0 1


(− 3) R1 + R3 → R3
 1 0 0


 0 1 0
− 3 0 1


(−1) R2 → R2
 1 0 0


0 −1 0
0 0 1


 1 3 0


0 1 1
0 0 7


(11) R2 + R3 → R3
 1 0 0


0 1 0
0 11 1


 1 3 0


0 1 1
0 0 1


()
 1 0 0


0 1 0
0 0 1 
7

1 R
7 3
→ R3
 1 0 0  1 0 0  1 0 0  1 0 0  1 0 0  1
3
0
 1 3 0









So, 0 1 0 0 1 0 0 −1 0  0 1 0 − 2 1 0 2
5 −1 = 0 1 1.
0 0 1  0 11 1 0 0 1 − 3 0 1  0 0 1 3 − 2 − 4
0 0 1







7 

18. Matrix
Elementary Row Operations
2
1 − 6 0
0 − 3 3
9


0 17 −1 − 3


4 8 − 5 1 
2
1 − 6 0
0 − 3 3
9


0 17 −1 − 3
0 32 − 5 − 7


R3 + ( − 2) R1 → R3
R4 + ( − 4) R2 → R4
2
1 − 6 0
0 1 −1 − 3


0 17 −1 − 3


0 32 − 5 − 7
( )
− 13 R2 → R2
2
1 − 6 0
0 1 −1 − 3


0 0 16 48 
0 32 − 5 − 7


1
6
0
2
−


0 1 −1 − 3


0 0 16 48


0 0 27 89 
R3 + ( −17) R2 → R3
R4 + ( − 32) R2 → R2
1 − 6 0 2 
0 1 −1 − 3


1
3
0 0
0 0 27 89 


0
1
0
0
1 0

0 1
0 −17

0 0
0
0
1
0
1 − 6 0
0 − 3 3

2 5 −1

4 8 − 5
0
1
0
0
0
0
1
0
0
0

0

1
1
0

0
− 4

0
1
0
0
0
0
1
0
0
0

0
1
1 0
0
1
 −3
0 0

0 0
0 0
0 0

1 0

0 1
1 0
0 1

0 −17
0 0

0
0
1
0
0
0

0
1
1 0
0 1

0 0

0 − 32
0
0
1
0
0
0

0

1
0
1
0
R4 + ( − 27) R3 → R4
1
0

0

0
0
0
1
0
0 1
0 − 27
( 18 ) R → R
1
0

0

0
0
1
0
0
3
0 0 1
0 0 0
1 0 0

0 1  0
8
1
0

− 2

0
1
0

0

0
(161 ) R → R
1 − 6 0 2 
0 1 −1 − 3


0 0 1 3 


0 0 0 8 
1 − 6 0 2 
0 1 −1 − 3


0 0 1 3 
0 0 0 1 


So, 1
0

0

0
Elementary Matrix
3
4
4
0
0
1
0
0
1
0 − 27
0 1
0 0
0 0

1 0

0 1 0

0 0 − 13
0 0 0

1 0 0
0
1
0
0
0
1
16
0
0
0 0  1
0 0  0


1 0  0

0 1 − 4
0
1
0
0
0
0
1
0
0 1
0
0 0 1
0 0 0

1 0 − 32
0
0
1
0
0
0
0

1
0  1
0  0
0 − 2

1  0
0
0
1
0
0
0
0

1
0
1
0
0
0
0
0
1
16
0
0
0

0

1
0
0

0

1
0 0
0 0

1 0
0 1 
8
2
1 − 6 0 2 
0 1 −1 − 3
9

= 
0 0 1 3  .
1



1
0 0 0 1 
20. To obtain the inverse matrix, reverse the elementary row operation that produced it. So, multiply the
1 to obtain
first row by 25
 1 0
E −1 =  5 .
0 1
22. To obtain the inverse matrix, reverse the elementary row operation that produced it. So, add 3 times
the second row to the third row to obtain
 1 0 0


E −1 = 0 1 0.
0 3 1


24. To obtain the inverse matrix, reverse the elementary row operation that produced it. So, add − k times
the third row to the second row to obtain
1

0
E −1 = 
0

0
0 0

1 − k 0
.
0
1 0

0
0 1
0
26. Find a sequence of elementary row operations that can be used to rewrite A in reduced row-echelon form.
1 0


1 1
( 12 ) R → R
1
 1 0
E1 =  2 
 0 1
1
 1 0
E2 = 

−1 1
 1 0


0 1 R2 − R1 → R2
Use the elementary matrices to find the inverse.
 12
 1 0  12 0
A−1 = E2 E1 = 
 =  1

− 2
−1 1  0 1
0

1
28. Find a sequence of elementary row operations that can be used to rewrite A in reduced row-echelon form.
 1 0 −2

1
0 1 2 
0 0
1

()
1 R
2 2
 1 0 0


E1 = 0 12 0
0 0 1


→ R2
 1 0 0 R1 + 2 R3 → R1

1
0 1 2 
0 0 1


 1 0 2


E2 = 0 1 0
0 0 1


 1 0 0

1
0 1 2 
0 0 1


0
1 0


E3 = 0 1 − 12 
0 0
1

R2 −
( 12 ) R → R
3
2
Use the elementary matrices to find the inverse.
A
−1
0  1 0 2  1 0 0
1 0




= E3 E2 E1 = 0 1 − 12  0 1 0 0 12 0
0 0
1 0 0 1 0 0 1

2  1 0 0
2
1 0
1 0





= 0 1 − 12  0 12 0 = 0 12 − 12 
0 0
0 0
1 0 0 1
1


For Exercises 30–36, answers will vary. Sample answers are shown below.
0 1
30. The matrix A = 
 is itself an elementary matrix, so the factorization is
 1 0
0 1
A = 
.
 1 0
 1 1
32. Reduce the matrix A = 
 as follows.
2 1
Matrix
Elementary Row Operation
Elementary Matrix
 1 1


0 −1
Add −2 times row one to row two.
 1 0
E1 = 

−2 1
 1 1


0 1
Multiply row two by −1.
 1 0
E2 = 

0 −1
 1 0


0 1
Add −1 times row two to row one.
 1 −1
E3 = 

0 1
So, one way to factor A is
 1 0  1 0  1 1
A = E1−1E2−1E3−1 = 


.
2 1 0 −1 0 1
 1 2 3


34. Reduce the matrix A = 2 5 6 as follows.
 1 3 4


Matrix
Elementary Row Operation
Elementary Matrix
 1 2 3


0 1 0
 1 3 4


Add −2 times row one to row two.
 1 0 0


E1 = −2 1 0
 0 0 1


Add −1 times row one to row three.
 1 0 0


E2 =  0 1 0
−1 0 1


 1 2 3


0 1 0
0 0 1


Add −1 times row two to row three.
 1 0 0


E3 = 0 1 0
0 −1 1


 1 2 0


0 1 0
0 0 1


Add −3 times row three to row one.
 1 0 −3


E4 = 0 1 0
0 0
1

 1 0 0


0 1 0
0 0 1


Add −2 times row two to row one.
 1 −2 0


E5 = 0
1 0
0 0 1


 1 2 3


0 1 0
0 1 1


So, one way to factor A is
 1 0 0  1 0 0  1 0 0  1 0 3  1 2 0






A = E1−1E2−1E3−1E4−1E5−1 = 2 1 0 0 1 0 0 1 0 0 1 0 0 1 0.
0 0 1  1 0 1 0 1 1 0 0 1 0 0 1






36. Find a sequence of elementary row operations that can be used to rewrite A in reduced row-echelon form.
1

0
0

 1
0
0
1
2
1

0
0

0

0
0
1
2
1
0
1

0
0

0
0
0
1

0
0

0
0 0
1

0
0

0
0 0
1

0
0

0
0 0
1

0
0

0
0 0 0

1 0 0
0 1 0

0 0 1

1 0
1
0 −1 2

0 0 −2
( 14 ) R → R
1
1

1
0 −1
2
0 0 − 52  R4 − R1 → R4
1
2

1 0 1
0 −1 2

0 0 1
(− 52 ) R → R
4
4
1
2

1
0 1 −2

0 0
1
1 0
0

1 0
1
0 1 −2

0 0
1
0

1 0 0
0 1 −2

0 0
1
− R3 → R3
R1 −
( 12 ) R → R
4
1
R2 − R4 → R2
R3 + 2 R4 → R3
 14

0
E1 = 
0

 0
0 0 0

1 0 0
0 1 0

0 0 1
 1

0
E2 = 
 0

−
 1
0 0 0

1 0 0
0 1 0

0 0 1
1

0
E3 = 
0

0
0 0
1

0
E4 = 
0

0
0
1

0
E5 = 
0

0
0 0 − 12 

1 0
0
0 1
0

0 0
1
1

0
E6 = 
0

0
0 0
1

0
E7 = 
0

0
0 0 0

1 0 0
0 1 2

0 0 1
0

1 0
0
0 1
0

0 0 − 52 
0 0

0 0
0 −1 0

0 0 1
1
0

1 0 −1
0 1 0

0 0 1
So, one way to factor A is
A = E1−1E2−1E3−1E4−1E5−1E6−1E7−1
4

0
= 
0

0
0 0 0  1

1 0 0 0
0 1 0 0

0 0 1  1
0 0 0  1

1 0 0 0
0 1 0 0

0 0 1 0
0  1

1 0
0 0
0 1
0 0

0 0 − 52  0
0 0
0 0  1

1 0 0 0
0 −1 0 0

0 0 1 0
0
1  1
2
0 0 0  1


1 0 0 0 1 0 1 0
0 1 0 0 0 1 0 0


0 0 1 0 0 0 1 0
0 0
0

0
.
0 1 −2

0 0
1
0 0
1 0
38. (a) EA has the same rows as A except the two rows that are interchanged in E will be interchanged in EA.
(b) Multiplying a matrix on the left by E interchanges the same two rows that are interchanged from $I_n$ in E. So, multiplying E by itself interchanges the rows twice, and $E^2 = I_n$.
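A quick numerical check of part (b) (a sketch using NumPy; the 3 × 3 matrices below are chosen only for illustration):

```python
import numpy as np

# Elementary matrix obtained from I_3 by interchanging rows 1 and 2
E = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0]])

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])

# EA is A with its first two rows interchanged
print(E @ A)

# Interchanging the same two rows twice returns the identity: E^2 = I
print(np.allclose(E @ E, np.eye(3)))   # True
```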
 1 0 0


40. A−1 = 0 1 0
0 0 c


−1
 1 0 0


b 1 0
0 0 1


−1
 1 a 0


0 1 0
0 0 1


−1




0
−a
 1 0 0
1

  1 0 0  1 − a 0








= 0 1 0 −b 1 0 0
1 0 = −b 1 + ab 0  .

  0 0 1 0


0 1
1
0 0 1  
0
0


c 
c 
42. (a) False. It is impossible to obtain the zero matrix by
applying any elementary row operation to the
identity matrix.
(b) True. If $A = E_1E_2\cdots E_k$, where each $E_i$ is an elementary matrix, then A is invertible (because every elementary matrix is) and $A^{-1} = E_k^{-1}\cdots E_2^{-1}E_1^{-1}$.
(c) True. See equivalent conditions (2) and (3) of
Theorem 2.15.
44. Matrix $A = \begin{bmatrix} -2 & 1 \\ -6 & 4 \end{bmatrix}$; reducing with the elementary matrix $E_1 = \begin{bmatrix} 1 & 0 \\ -3 & 1 \end{bmatrix}$ gives $U = \begin{bmatrix} -2 & 1 \\ 0 & 1 \end{bmatrix}$.

$E_1 A = U \;\Rightarrow\; A = E_1^{-1}U = \begin{bmatrix} 1 & 0 \\ 3 & 1 \end{bmatrix}\begin{bmatrix} -2 & 1 \\ 0 & 1 \end{bmatrix} = LU$
46. Matrix
Elementary Matrix
48. Matrix
Elementary Matrix
 2 0 0 0


−2 1 −1 0 = A
 6 2 1 0


 0 0 0 −1
2 0 0 0


0 1 −1 0
6 2 1 0


0 0 0 −1
1

1
E1 = 
0

0
0 0 0

1 0 0
0 1 0

0 0 1
2 0 0 0


0 1 −1 0
0 2 1 0


0 0 0 −1
 1

0
E2 = 
−3

 0
0 0 0

1 0 0
0 1 0

0 0 1
2

0
0

0
1 0

0
1
E3 = 
0 −2

0 0
0

1 −1 0
= U
0 3 0

0 0 −1
0
0
0 0

0 0
1 0

0 1
E3 E2 E1 A = U  A = E1−1E2−1E3−1U
 2 0 0


 0 −3 1 = A
10 12 3


2 0 0


0 −3 1
0 12 3


 1 0 0


E1 =  0 1 0
−5 0 1


2 0 0


0 −3 1 = U
0 0 7


 1 0 0


E2 = 0 1 0
0 4 1


E2 E1 A = U  A = E1−1E2−1U
 1 0 0 2 0 0



1 0 0 −3 1
= 0
5 −4 1 0 0 7



= LU
 1 0 0 0 2 0 0 0



−1 1 0 0 0 1 −1 0
= 
 3 2 1 0 0 0 3 0



 0 0 0 1 0 0 0 −1
= LU
 1

−1
Ly = b : 
 3

 0
0 0 0  y1 
 4
 
 
1 0 0  y2 
−4
=  
 15
2 1 0  y3 
 
 
0 0 1  y4 
 −1
y1 = 4, − y1 + y2 = −4  y2 = 0,
3 y1 + 2 y2 + y3 = 15  y3 = 3, and y4 = −1.
2

0
Ux = y : 
0

0
0  x1 
 4
 
 
1 −1 0  x2 
0
=  
 3
0 3 0  x3 
 
 
0 0 −1  x4 
−1
0
0
x4 = 1, x3 = 1, x2 − x3 = 0  x2 = 1, and x1 = 2.
So, the solution to the system Ax = b is: x1 = 2,
x2 = x3 = x4 = 1.
0 1 0 1
 1 0
50. A2 = 

 = 
 ≠ A.
1
0
1
0



0 1
2
= A( BA) B
= A( AB) B
Because A ≠ A, A is not idempotent.
Because A2 ≠ A, A is not idempotent.
54. Assume A is idempotent. Then
= ( AA)( BB)
= AB
So, ( AB) = AB, and AB is idempotent.
2
58. If A is row-equivalent to B, then
A = Ek  E2 E1B,
where E1 , , Ek are elementary matrices.
A2 = A
So,
( A2 ) = AT
( AT AT ) = AT
T
B = E1−1E2−1Ek−1 A,
which shows that B is row equivalent to A.
which means that AT is idempotent.
Now assume AT is idempotent. Then
AT AT = AT
( AT AT ) = ( AT )
T
53
56. ( AB ) = ( AB )( AB )
2
0 1 0 0 1 0
 1 0 0





2
52. A =  1 0 0  1 0 0 = 0 1 0.
0 0 1 0 0 1
0 0 1





Markov Chains
T
AA = A
which means that A is idempotent.
60. (a) When an elementary row operation is performed on
a matrix A, perform the same operation on I to obtain
the matrix E.
(b) Keep track of the row operations used to reduce A to
an upper triangular matrix U. If A row reduces to U
using only the row operation of adding a multiple of
one row to another row below it, then the inverse of
the product of the elementary matrices is the matrix
L, and A = LU .
(c) For the system Ax = b, find an LU factorization of
A. Then solve the system Ly = b for y and
U x = y for x.
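A sketch of the procedure in parts (b) and (c): factor A using only the operation of adding a multiple of a row to a row below it, then solve Ly = b and Ux = y. It assumes NumPy and that no zero pivots occur; the matrix and right-hand side are the ones from Exercise 48.

```python
import numpy as np

def lu_no_pivot(A):
    """Doolittle LU factorization using only 'add a multiple of a row
    to a row below it'; assumes no zero pivots are encountered."""
    n = A.shape[0]
    U = A.astype(float).copy()
    L = np.eye(n)
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]   # multiplier used in the row operation
            U[i, :] -= L[i, k] * U[k, :]  # R_i <- R_i - m * R_k
    return L, U

def forward_sub(L, b):
    y = np.zeros_like(b, dtype=float)
    for i in range(len(b)):
        y[i] = b[i] - L[i, :i] @ y[:i]    # L has a unit diagonal, so no division
    return y

def back_sub(U, y):
    x = np.zeros_like(y, dtype=float)
    for i in reversed(range(len(y))):
        x[i] = (y[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

# Matrix and right-hand side from Exercise 48
A = np.array([[2, 0, 0, 0], [-2, 1, -1, 0], [6, 2, 1, 0], [0, 0, 0, -1]])
b = np.array([4, -4, 15, -1])

L, U = lu_no_pivot(A)
x = back_sub(U, forward_sub(L, b))
print(x)   # [2. 1. 1. 1.]
```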
Section 2.5 Markov Chains
2. The matrix is not stochastic because every entry of a stochastic matrix satisfies the inequality 0 ≤ aij ≤ 1.
4. The matrix is not stochastic because the sum of entries in a column of a stochastic matrix is 1.
6. The matrix is stochastic because each entry is between 0 and 1, and each column adds up to 1.
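The two conditions quoted in Exercises 2–6 are easy to test numerically. A minimal helper (the 2 × 2 matrices below are made up for illustration):

```python
import numpy as np

def is_stochastic(P, tol=1e-9):
    """Check the two defining conditions of a (column-)stochastic matrix:
    every entry lies in [0, 1] and every column sums to 1."""
    P = np.asarray(P, dtype=float)
    entries_ok = np.all((P >= -tol) & (P <= 1 + tol))
    columns_ok = np.allclose(P.sum(axis=0), 1.0, atol=tol)
    return bool(entries_ok and columns_ok)

# Column sums are 1 and all entries lie in [0, 1], so this one is stochastic.
P = np.array([[0.3, 0.5],
              [0.7, 0.5]])
print(is_stochastic(P))                           # True
print(is_stochastic([[1.2, 0.5], [-0.2, 0.5]]))   # False: entries outside [0, 1]
```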
8. [Transition diagram: gas stays gas with probability 0.60 and becomes liquid with probability 0.40; liquid stays liquid with probability 0.70 and becomes solid with probability 0.30; solid stays solid with probability 0.50 and becomes liquid with probability 0.50.]

The matrix of transition probabilities (columns: from G, L, S; rows: to G, L, S) is
$P = \begin{bmatrix} 0.60 & 0 & 0 \\ 0.40 & 0.70 & 0.50 \\ 0 & 0.30 & 0.50 \end{bmatrix}$.

The initial state matrix representing the amounts in each physical state is
$X_0 = \begin{bmatrix} 0.20(10{,}000) \\ 0.60(10{,}000) \\ 0.20(10{,}000) \end{bmatrix} = \begin{bmatrix} 2000 \\ 6000 \\ 2000 \end{bmatrix}$.

To represent the amount of each physical state after the catalyst is added, multiply P by $X_0$ to obtain
$PX_0 = \begin{bmatrix} 0.60 & 0 & 0 \\ 0.40 & 0.70 & 0.50 \\ 0 & 0.30 & 0.50 \end{bmatrix}\begin{bmatrix} 2000 \\ 6000 \\ 2000 \end{bmatrix} = \begin{bmatrix} 1200 \\ 6000 \\ 2800 \end{bmatrix}$.

So, after the catalyst is added there are 1200 molecules in a gas state, 6000 molecules in a liquid state, and 2800 molecules in a solid state.
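The computation of PX₀ can be reproduced directly (a NumPy sketch using the transition matrix and initial state from this exercise):

```python
import numpy as np

# Transition matrix and initial state from Exercise 8
P = np.array([[0.60, 0.00, 0.00],
              [0.40, 0.70, 0.50],
              [0.00, 0.30, 0.50]])
X0 = np.array([2000.0, 6000.0, 2000.0])   # gas, liquid, solid molecules

X1 = P @ X0
print(X1)   # [1200. 6000. 2800.]
```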
10. $X_1 = PX_0 = \begin{bmatrix} 0.6 & 0.2 & 0 \\ 0.2 & 0.7 & 0.1 \\ 0.2 & 0.1 & 0.9 \end{bmatrix}\begin{bmatrix} \tfrac{1}{3} \\ \tfrac{1}{3} \\ \tfrac{1}{3} \end{bmatrix} = \begin{bmatrix} \tfrac{4}{15} \\ \tfrac{1}{3} \\ \tfrac{2}{5} \end{bmatrix} \approx \begin{bmatrix} 0.2667 \\ 0.3333 \\ 0.4 \end{bmatrix}$

$X_2 = PX_1 = \begin{bmatrix} 0.6 & 0.2 & 0 \\ 0.2 & 0.7 & 0.1 \\ 0.2 & 0.1 & 0.9 \end{bmatrix}\begin{bmatrix} \tfrac{4}{15} \\ \tfrac{1}{3} \\ \tfrac{2}{5} \end{bmatrix} = \begin{bmatrix} \tfrac{17}{75} \\ \tfrac{49}{150} \\ \tfrac{67}{150} \end{bmatrix} \approx \begin{bmatrix} 0.2267 \\ 0.3267 \\ 0.4467 \end{bmatrix}$

$X_3 = PX_2 = \begin{bmatrix} 0.6 & 0.2 & 0 \\ 0.2 & 0.7 & 0.1 \\ 0.2 & 0.1 & 0.9 \end{bmatrix}\begin{bmatrix} \tfrac{17}{75} \\ \tfrac{49}{150} \\ \tfrac{67}{150} \end{bmatrix} = \begin{bmatrix} \tfrac{151}{750} \\ \tfrac{239}{750} \\ \tfrac{12}{25} \end{bmatrix} \approx \begin{bmatrix} 0.2013 \\ 0.3187 \\ 0.48 \end{bmatrix}$
12. Form the matrix representing the given transition probabilities. A represents infected mice and B noninfected (columns: from A, B; rows: to A, B).
$P = \begin{bmatrix} 0.2 & 0.1 \\ 0.8 & 0.9 \end{bmatrix}$

The state matrix representing the current population is $X_0 = \begin{bmatrix} 0.3 \\ 0.7 \end{bmatrix}$.

(a) The state matrix for next week is
$X_1 = PX_0 = \begin{bmatrix} 0.2 & 0.1 \\ 0.8 & 0.9 \end{bmatrix}\begin{bmatrix} 0.3 \\ 0.7 \end{bmatrix} = \begin{bmatrix} 0.13 \\ 0.87 \end{bmatrix}$.
So, next week 0.13(1000) = 130 mice will be infected.

(b) $X_2 = PX_1 = \begin{bmatrix} 0.2 & 0.1 \\ 0.8 & 0.9 \end{bmatrix}\begin{bmatrix} 0.13 \\ 0.87 \end{bmatrix} = \begin{bmatrix} 0.113 \\ 0.887 \end{bmatrix}$

$X_3 = PX_2 = \begin{bmatrix} 0.2 & 0.1 \\ 0.8 & 0.9 \end{bmatrix}\begin{bmatrix} 0.113 \\ 0.887 \end{bmatrix} = \begin{bmatrix} 0.1113 \\ 0.8887 \end{bmatrix}$

In 3 weeks, 0.1113(1000) ≈ 111 mice will be infected.
14. Form the matrix representing the given transition probabilities. Let S represent those who swim
and B represent those who play basketball.
From
S
B
0.30 0.40 S 
P = 
  To
0.70 0.60 B
The state matrix representing the students is
0.4 S
X0 =   .
0.6 B
(a) The state matrix for tomorrow is
0.30 0.400.4
0.36
X 1 = PX 0 = 
  = 
.
0.70 0.600.6
0.64
So, tomorrow 0.36( 250) = 90 students will swim and 0.64( 250 ) = 160 students will play basketball.
(b) The state matrix for two days from now is
0.37 0.360.4
0.364
X 2 = P2 X 0 = 
  = 
.
0.63 0.640.6
0.636
So, two days from now 0.364( 250 ) = 91 students will swim and 0.636( 250 ) = 159 students will play basketball.
(c) The state matrix for four days from now is
0.363637 0.3636370.4
0.36364
X 4 = P4 X 0 = 
  = 
.
0.636363
0.636363
0.6

 
0.63636
So, four days from now, 0.36364( 250 ) ≈ 91 students will swim and 0.63636( 250 ) ≈ 159 students will play basketball.
16. Form the matrix representing the given transition probabilities. Let A represent users of Brand A, B users of Brand B, and N users of neither brand.
From
A
B
N
0.75 0.15 0.10 A 

 
P = 0.20 0.75 0.15 B  To
0.05 0.10 0.75 N 

 
The state matrix representing the current product usage is
2 A
11

3
X 0 = 11
 B
5 N
11
(a) The state matrix for next month is
2
0.2227 
0.75 0.15 0.10 11






1
3
X 1 = P X 0 = 0.20 0.75 0.15 11 =  0.309 .
 0.372
0.05 0.10 0.75  5 

 11


 0.2511


(b) X 2 = P X 0 ≈ 0.3330
 0.325


2
In 2 months, the distribution of users will be
0.2511 ⋅ 110,000 = 27,625 for Brand A,
0.3330 ⋅ 110,000 = 36,625 for Brand B, and
0.325 ⋅ 110,000 = 35,750 for neither.
0.3139


(c) X 18 = P X 0 ≈  0.3801
 0.2151


18
In 18 months, the distribution of users will be
0.3139 ⋅ 110,000 ≈ 34,530 for Brand A,
0.3801 ⋅ 110,000 ≈ 41,808 for Brand B, and
0.2151 ⋅ 110,000 ≈ 23,662 for neither.
So, next month the distribution of users will be
0.2227 ⋅ 110,000 = 24,500 for Brand A,
0.309 ⋅ 110,000 = 34,000 for Brand B, and
0.372 ⋅ 110,000 = 41,500 for neither.
18. The stochastic matrix
2
P = 5
3
5
0 0.3
P = 

 1 0.7
is regular because P 2 has only positive entries.
0 0.3 x1 
 x1 
PX = X  
  =  
 1 0.7 x2 
 x2 
0.3x2 = x1

x1 + 0.7 x2 = x2
Because x1 + x2 = 1, the system of linear equations is
as follows.
− x1 + 0.3 x2 = 0
x1 − 0.3 x2 = 0
x1 +
22. The stochastic matrix
x2 = 1
The solution to the system is x2 = 10
and
13
3
x1 = 1 − 10
= 13
.
13
3
13
So, X = 10  .
 
13 
20. The stochastic matrix
0.2 0
P = 

0.8 1
is regular because P1 has only positive entries.
 2 7   x1 
 x1 
PX = X   5 10    =  
3
3
x


 x2 
 5 10   2 
2x
1
7
x2 = x1
+ 10
3
x
5 1
3
x2 = x2
+ 10
 5
Because x1 + x2 = 1, the system of linear equations is
as follows.
7
− 53 x1 + 10
x2 = 0
3
x
5 1
7
− 10
x2 = 0
x1 +
x2 = 1
6
and
The solution of the system is x2 = 13
6
7
x1 = 1 − 13
= 13
.
7
13
So, X =  6 .
 
13 
24. The stochastic matrix
is not regular because every power of P has a zero in the
second column.
0.2 0 x1 
 x1 
PX = X  
  =  
0.8
1
x

 2 
 x2 
0.2 x1
= x1

0.8 x1 + x2 = x2
Because x1 + x2 = 1, the system of linear equations is
as follows.
− 0.8 x1
= 0
0.8 x1
= 0
2
9
P =  13

 94
The solution of the system is x1 = 0 and x2 = 1.
1
4
1
2
1
4
1
3
1
3

1

3
is regular because P1 has only positive entries.
2 1 1 x 
 x1 
9 4 3 1 
 
1
1
1
PX = X   3 2 3   x2  =  x2 

 
 x3 
x
 94 14 13 
 
 3
2
x
9 1
1x
3 1
4
x
9 1
x1 + x2 = 1
0
So, X =   .
1
7
10 
3
10 
+ 14 x2 + 13 x3 = x1
+ 12 x2 + 13 x3 = x2
+ 14 x2 + 13 x3 = x3
Because x1 + x2 + x3 = 1, the system of linear
equations is as follows.
− 79 x1 + 14 x2 + 13 x3 = 0
1x
3 1
4x
9 1
− 12 x2 + 13 x3 = 0
+ 14 x2 + 23 x3 = 0
x1 +
x2 +
x3 = 1
The solution of the system is x3 = 0.33, x2 = 0.4, and
x1 = 1 − 0.4 − 0.33 = 0.27.
0.27


So, X =  0.4 .
0.33


1
5
1
5
3
5
1

0

0
is regular because P 2 has only positive entries.
 1 1 1  x 
 x1 
2 5  1 
 
PX = X   13 15 0  x2  =  x2 

 
 x3 
x
 16 53 0
 
 3
1
x
2 1
1x
3 1
1
x
6 1
+ 15 x2
= x2
+ 53 x2
= x3
− 12 x1 + 15 x2 + x3 = 0
− 54 x2
= 0
3
x
5 2
− x3 = 0
+
x1 +
x2 + x3 = 1
The solution of the system is
( )
5
5
5 5
5 and
x3 = 22
, x2 = 17
− 17
= 22
,
22
5
5
x1 = 1 − 22
− 22
 0.1 0 0.3
P = 0.7 1 0.3
0.2 0 0.4
is not regular because every power of P has two zeros in
the second column.
 0.1 0 0.3 x1 
 x1 
 = x 
PX = X  0.7 1 0.3
x
 2 
 2
0.2 0 0.4
 x3 
 x3 
0.1x1
+ 0.3x3 = x1
0.7 x1 + x2 + 0.3x3 = x2
0.2 x1
+ 0.4 x3 = x3
+ 15 x2 + x3 = x1
Because x1 + x2 + x3 = 1, the system of linear
equations is as follows.
1x
3 1
1x
6 1
28. The stochastic matrix
26. The stochastic matrix
1
2
P =  13

 16
6
= 11
.
6
 0.54 
 11



5
So, X =  22  ≈ 0.227  .
5
0.227 


 22 
Because x1 + x2 + x3 = 1, the system of linear
equations is as follows.
− 0.9 x1
+ 0.3 x3 = 0
+ 0.3 x3 = 0
0.7 x1
− 0.6 x3 = 0
0.2 x1
x1 + x2 +
x3 = 1
The solution of the system is x3 = 0, x2 = 1 − 0 = 1,
and x1 = 1 − 1 − 0 = 0.
0
 
So, X = 1.
0
 
30. The stochastic matrix
 1 0 0 0
0 0 1 0

P = 
0 1 0 0


0 0 0 1
is not regular because every power of P has three zeros
in the first column.
 1 0 0 0 x1 
 x1 
0 0 1 0 x 
 
 2  =  x2 
PX = X  
0 1 0 0 x3 
 x3 

 
 
0 0 0 1 x4 
 x4 
x1 = x1
x3 = x2
x2 = x3
x4 = x4
Because x1 + x2 + x3 + x4 = 1, the system of linear
equations is as follows.
0 = 0
− x2 + x3
= 0
x2 − x3
= 0
0 = 0
x1 + x2 + x3 + x4 = 1
Let x3 = s and x4 = t. The solution of the system is
x4 = t , x3 = s, x2 = s, and x1 = 1 − 2 s − t , where
0 ≤ s ≤ 1, 0 ≤ t ≤ 1, and 2 s + t ≤ 1.
 x1 
 
32. Exercise 3: To find X , let X =  x2 . Then use the
 x3 
 
matrix equation PX = X to obtain
0.3 0.16 0.25  x1 
 x1 

 
 
0.3
0.6
0.25
x
=

 2
 x2 
0.3 0.16 0.5  x 
 x3 
 

 3
or
0.3 x1 + 0.16 x2 + 0.25 x3 = x1
0.3 x1 +
0.6 x2 + 0.25 x3 = x2
0.3 x1 + 0.16 x2 +
0.5 x3 = x3
Use these equations and the fact that x1 + x2 + x3 = 1
to write the system of linear equations shown.
− 0.6 x1 + 0.16 x2 + 0.25 x3 = 0
0.3 x1 − 0.3 x2 + 0.25 x3 = 0
0.3 x1 + 0.16 x2 +
0.5 x3 = 0
Let x2 = r , x3 = s, and x4 = t , where r, s, and t are
real numbers between 0 and 1.
The solution of the system is
x1 = 1 − r − s − t , x2 = r , x3 = s, and x4 = t , where
r, s, and t are real numbers such that
0 ≤ r ≤ 1, 0 ≤ s ≤ 1, 0 ≤ t ≤ 1, and r + s + t ≤ 1.
So, the steady state matrix is
1 − r − s − t 


r
.
X = 


s


t


 x1 
 
x2
Exercise 6: To find X , let X =  . Then use the
 x3 
 
 x4 
matrix equation PX = X to obtain
3
6
4
x1 = 13
, x2 = 13
, and x3 = 13
.
1
 12
6
1
6
1
6
So, the steady state matrix is
or
x1 +
x2 +
x3 = 1
The solution of the system is
3
13 
6
X = 13
.
 
4
13 
 x1 
 
x2
Exercise 5: To find X , let X =  . Then use the
 x3 
 
 x4 
matrix equation PX = X to obtain
1

0
0

0
0 0 0 x1 
 x1 
 
 
1 0 0 x2 
x2
=  



0 1 0 x3
x
 
 3
0 0 1
 x4 
 x4 
or
= x2
x2
x3
= x3
1x
2 1
1x
6 1
1x
6 1
1
x
6 1
1
4
1
4
1
4
1
4
4 x
15  1 

4 x 
15   2 
4 x 
15   3 
1  x 
5 4
 x1 
 
x2
=  
 x3 
 
 x4 
4x = x
+ 92 x2 + 14 x3 + 15
4
1
4x = x
+ 13 x2 + 14 x3 + 15
4
2
4x = x
+ 92 x2 + 14 x3 + 15
4
3
+ 92 x2 + 14 x3 +
1
x
5 4
.
= x4
Use these equations and the fact that
x1 + x2 + x3 + x4 = 1 to write the system of equations
shown.
4
− 12 x1 + 92 x2 + 14 x3 + 15
x4 = 0
1
x
6 1
1
x
6 1
1x
6 1
4
− 32 x2 + 14 x3 + 15
x4 = 0
4
− 92 x2 − 43 x3 + 15
x4 = 0
+ 92 x2 + 14 x3 −
x1 +
x2 +
x3 +
4x
5 4
= 0
x4 = 1
The solution of the system is
= x1
x1
2
9
1
3
2
9
2
9
x1 = 24
, x = 18
, x = 16
, and x4 = 15
.
73 2
73 3
73
73
.
x4 = x4
Use these equations and the fact that
x1 + x2 + x3 + x4 = 1 to write the system of linear
equations shown.
x1 + x2 + x3 + x4 = 1
So, the steady state matrix is
 24 
0.3288
 73 


 18

0.2466
X =  73  ≈ 
.
0.2192
 16

73


 15 
0.2055
 73 
34. Form the matrix representing the given transition probabilities. Let A represent those who received an “A” and
let N represent those who did not.
From
A
N
0.70 0.10 A 
P = 
  To
0.30 0.90 N 
 x1 
To find the steady state matrix, solve the equation PX = X , where X =   and use the fact that x1 + x2 = 1
 x2 
to write a system of equations.
0.70 x1 + 0.10 x2 = x1
− 0.3x1 + 0.1x2 = 0
0.30 x1 + 0.90 x2 = x2 
0.3x1 − 0.1x2 = 0
x1 +
x2 =
x1 +
1
x2 = 1
1 
The solution of the system is x1 = 14 and x2 = 34 . So, the steady state matrix is X =  43 . This indicates that eventually 14
 4 
of the students will receive assignment grades of “A” and 34 of the students will not.
36. Form the matrix representing transition probabilities. Let A represent Theatre A, let B represent Theatre B, and let
N represent neither theatre.
A
From
B
N
0.10 0.06 0.03 A

 
P = 0.05 0.08 0.04 B To
0.85 0.86 0.97 B

 
 x1 
 
To find the steady state matrix, solve the equation PX = X where X =  x2  and use the fact that x1 + x2 + x3 = 1
 x3 
 
to write a system of equations.
0.10 x1 + 0.06 x2 + 0.03x3 = x1
− 0.90 x1 + 0.06 x2 + 0.03 x3 = 0
0.05 x1 + 0.08 x2 + 0.04 x3 = x2 
0.05 x1 − 0.92 x2 + 0.04 x3 = 0
0.85 x1 + 0.86 x2 + 0.97 x3 = x3
0.85 x1 + 0.86 x2 − 0.03 x3 = 0
x1 +
x2 +
x3 =
1
x1 +
x2 +
x3 = 1
 4 
119 
110
4 , x = 5 ,
 5 . This indicates
x
=
.
X
=
and
So,
the
steady
state
matrix
is
The solution of the system is x1 = 119
2
3
119
119
119 
110

119 
5
4 ≈ 3.4%
≈ 4.2% of the people will attend Theatre B, and
that eventually 119
of the people will attend Theatre A, 119
110
119
≈ 92.4% of the people will attend neither theatre on any given night.
38. The matrix is not absorbing; The first state S1 is
absorbing, however the corresponding Markov chain is
not absorbing because there is no way to move from S2
or S3 to S1.
40. The matrix is absorbing; The fourth state S4 is absorbing
and it is possible to move from any of the states to S4 in
one transition.
42. Use the matrix equation PX = X , or
 0.1 0 0 x1 
 x1 

 
 
0.2 1 0 x2  =  x2 
0.7 0 1 x3 
 x3 

 
 
S0
along with the equation x1 + x2 + x3 = 1 to write the
linear system
− 0.9 x1
= 0
0.2 x1
= 0
0.7 x1
= 0
.
S1
From
S 2 S3
S4
0
0 0 S0 
 1 0.7
0

 
 
0 0.7
0 0 S1 
0
0

P = 0 0.3
0 0.7 0 S 2  To and X 0 =  1


 
0
0
0 0.3
0 0 S3 

 
 
0
0 0.3 1 S 4 
0
0
So,
x1 + x2 + x3 = 1
The solution of this system is x1 = 0, x2 = 1 − t , and
x3 = t , where t is a real number such that 0 ≤ t ≤ 1.
 0 


So, the steady state matrix is X = 1 − t  , where
 t 


0 ≤ t ≤ 1.
44. Use the matrix equation PX = X or
0.7

 0.1
 0

0.2
46. Let Sn be the state that Player 1 has n chips.
0 0.2 0.1 x1 
 x1 
 
 
1 0.5 0.6 x2 
x2
=  
 x3 
0 0.2 0.2 x3 
 
 
0 0.1 0.1
 x4 
 x4 
along with the equation x1 + x2 + x3 + x4 = 1 to write
the linear system
− 0.3x1
+ 0.2 x3 + 0.1x4 = 0
0.1x1
+ 0.5 x3 + 0.6 x4 = 0
− 0.8 x3 + 0.2 x4 = 0 .
 49 
 58 
 0
 
n
P X 0 → PX 0 =  0 .
 0
 
9
 58 
So, the probability that Player 1 reaches S4 and wins the
9
≈ 0.155.
tournament is 58
48. (a) To find the nth state matrix of a Markov chain,
compute X n = P n X 0 , where X 0 is the initial state
matrix.
(b) To find the steady state matrix of a Markov chain,
determine the limit of P n X 0 , as n → ∞, where X 0
is the initial state matrix.
(c) The regular Markov chain is $PX_0, P^2X_0, P^3X_0, \ldots$, where P is a regular stochastic matrix and $X_0$ is the initial state matrix.
The solution of this system is x1 = 0, x2 = 1, x3 = 0,
(d) An absorbing Markov chain is a Markov chain with
at least one absorbing state and it is possible for a
member of the population to move from any
nonabsorbing state to an absorbing state in a finite
number of transitions.
0
 
1
and x4 = 0. So, the steady state matrix is X =   .
0
 
0
(e) An absorbing Markov chain is concerned with
having an entry of 1 and the rest 0 in a column,
whereas a regular Markov chain is concerned with
the repeated multiplication of the regular stochastic
matrix.
0.2 x1
+ 0.1x3 − 0.9 x4 = 0
x1 + x2 +
x3 +
x4 = 1
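One way to compute the steady state matrix described in Exercise 48(b) numerically is to solve (P − I)X = 0 together with the condition that the entries of X sum to 1. A sketch (assuming NumPy), applied to the regular matrix from Exercise 18:

```python
import numpy as np

def steady_state(P):
    """Solve (P - I)x = 0 together with the condition that the entries
    of x sum to 1 (least-squares solve of the stacked system)."""
    n = P.shape[0]
    A = np.vstack([P - np.eye(n), np.ones((1, n))])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Regular stochastic matrix from Exercise 18
P = np.array([[0.0, 0.3],
              [1.0, 0.7]])
print(steady_state(P))   # approximately [0.2308 0.7692], that is, [3/13, 10/13]
```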
50. (a) When the chain reaches S1 or S 4 , it is certain in the
next step to transition to an adjacent state, S2 or S3 ,
respectively, so S1 and S4 reflect to S2 or S3 .
(b)
0 0
0 0.4


0 0.3 0
1
P = 0 0.6
0 1


0 0.7 0
0
(c)
1
6
0
P 30 ≈  5
6

 0
0

5
5
0 12
12

0 56 0
7
7
0 12

12
0
0 1
6

 56 0
P 30 ≈  0 5
6
7
12 0
52. (a) Yes, it is possible.
(b) Yes, it is possible.
Both matrices X satisfy P1 X = X . The steady state
matrix depends on the initial state matrix. In general,
 6 −t
 511 5 
 − t
the steady state matrix is X = 11 6 ,
5
 6t 


 t 
1
6
0
6
. In
where t is any real number such that 0 ≤ t ≤ 11
6
.
part (a) t = 0 and in part (b), t = 11
54. Let
1
6

5
0
12

0 56 

7
0
12
b 
 a
P = 

−
−
1
a
1
b

be a 2 × 2 stochastic matrix, and consider the system of
equations PX = X .
Other high even or odd powers of P give similar
results where the columns alternate.
1
 12

5
 24 
(d) X =  5 
 12 
7
 24 
b  x1 
 a
 x1 

  =  
1 − a 1 − b x2 
 x2 
You have
ax1 +
bx2 = x1
(1 − a) x1 + (1 − b) x2 = x2
or
Half the sum entries in the corresponding columns of
P n and P n + 1 approach the corresponding entries in
X.
(a − 1) x1 + bx2 = 0
(1 − a) x1 − bx2 = 0.
Letting x1 = b and x2 = 1 − a, you have the 2 × 1 state
matrix X satisfying PX = X
 b 
X = 
.
1 − a
56. Let P be a regular stochastic matrix and X 0 be the initial state matrix.
$\lim_{n\to\infty} P^nX_0 = \lim_{n\to\infty} P^n(x_1 + x_2 + \cdots + x_k) = \lim_{n\to\infty} P^n x_1 + \lim_{n\to\infty} P^n x_2 + \cdots + \lim_{n\to\infty} P^n x_k = Px_1 + Px_2 + \cdots + Px_k = P(x_1 + x_2 + \cdots + x_k) = PX_0 = X$, where X is a unique steady state matrix.
Section 2.6 More Applications of Matrix Operations
2. Divide the message into groups of four and form the uncoded matrices.
H E L P    _ I S _    C O M I    N G _ _
[8 5 12 16]   [0 9 19 0]   [3 15 13 9]   [14 7 0 0]

Multiplying each uncoded row matrix on the right by $A = \begin{bmatrix} -2 & 3 & -1 & -1 \\ 1 & -1 & 1 & 1 \\ 2 & -1 & -1 & 1 \\ 3 & 1 & -2 & -4 \end{bmatrix}$ yields the coded row matrices
[8 5 12 16]A = [15 33 −23 −43], [0 9 19 0]A = [−28 −10 28 47], [3 15 13 9]A = [−7 20 7 2], [14 7 0 0]A = [−35 49 −7 −7].

So, the coded message is 15, 33, −23, −43, −28, −10, 28, 47, −7, 20, 7, 2, −35, 49, −7, −7.
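The same encoding and decoding mechanics can be sketched numerically. The 2 × 2 key below is a made-up example, not the matrix from this exercise; it is chosen because its determinant is −1, so its inverse has integer entries. The letter-to-number scheme (space = 0, A = 1, …, Z = 26) matches the one used in this section.

```python
import numpy as np

# Hypothetical 2x2 key for illustration; det = -1, so A_inv has integer entries.
A = np.array([[1, 2],
              [3, 5]])
A_inv = np.array([[-5, 2],
                  [3, -1]])          # A @ A_inv = I

def to_numbers(msg):
    return [0 if ch == ' ' else ord(ch) - ord('A') + 1 for ch in msg]

def to_text(nums):
    return ''.join(' ' if n == 0 else chr(int(n) + ord('A') - 1) for n in nums)

msg = "MEET ME"
nums = to_numbers(msg) + [0]         # pad to a multiple of 2
blocks = np.array(nums).reshape(-1, 2)

coded = blocks @ A                   # each uncoded row matrix times A
decoded = coded @ A_inv              # multiply coded rows by A^{-1} to recover
print(coded.ravel())
print(to_text(decoded.ravel()))      # "MEET ME " (with the padding space)
```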
4. Find $A^{-1} = \begin{bmatrix} -4 & 3 \\ 3 & -2 \end{bmatrix}$, and multiply each coded row matrix on the right by $A^{-1}$ to find the associated uncoded row matrix.

[85 120]$A^{-1}$ = [20 15] → T, O
[6 8]$A^{-1}$ = [0 2] → _, B
[10 15]$A^{-1}$ = [5 0] → E, _
[84 117]$A^{-1}$ = [15 18] → O, R
[42 56]$A^{-1}$ = [0 14] → _, N
[90 125]$A^{-1}$ = [15 20] → O, T
[60 80]$A^{-1}$ = [0 20] → _, T
[30 45]$A^{-1}$ = [15 0] → O, _
[19 26]$A^{-1}$ = [2 5] → B, E

So, the message is TO_BE_OR_NOT_TO_BE.
 11 2 −8


6. Find A−1 =  4 1 −3, and multiply each coded row matrix on the right by A−1 to find the associated uncoded row matrix.
−8 −1 6


[112 −140 83] A
−1
[19 −25 13] A−1
[72 −76 61] A−1
[95 −118 71] A−1
21 38] A−1
[20
[35 −23 36] A−1
[42 −48 32] A−1
 11 2 −8


= [112 −140 83]  4 1 −3 = [8 1 22]  H, A, V
−8 −1 6


[5 0 1] 
[0 7 18] 
= [5 1 20] 
= [0 23 5] 
= [5 11 5] 
= [14 4 0] 
=
E,
_, A
=
_,
G, R
E,
A,
T
_, W,
E
E,
K,
E
N,
D,
_
The message is HAVE_A_GREAT_WEEKEND_.
10. You have
a b
8. Let A−1 = 
 and find that
c d 
w x
 = [10 15] and
 y z
[45 −35] 
_ S
w x 
 = [8 14].
 y z
a b
[−19 −19] 
 = [0 19]
c d 
[38 −30] 
U E
So, 45w − 35 y = 10
and 45 x − 35 z = 15
b
 = [21 5].
c
d


38w − 30 y = 8
38 x − 30 z = 14.
a
[37 16] 
Solving these two systems gives w = y = 1, x = −2,
and z = −3. So,
This produces a system of 4 equations.
−19a
− 19c
− 19b
= 0
1 −2
A−1 = 
.
1 −3
− 19d = 19
+ 16c
37 a
37b
= 21
(b) Decoding, you have:
+ 16d = 5.
This corresponds to the message
[45 −35] A−1
[38 −30] A−1
[18 −18] A−1
[35 −30] A−1
[81 −60] A−1
[42 −28] A−1
[75 −55] A−1
[2 −2] A−1
[22 −21] A−1
[15 −10] A−1
CANCEL_ORDERS_SUE.
The message is JOHN_RETURN_TO_BASE_.
Solving this system, you find a = 1, b = 1, c = −1, and
d = −2. So,
 1 1
A−1 = 
.
−1 −2
−1
Multiply each coded row matrix on the right by A to
yield the uncoded row matrices.
[3 1], [14 3], [5 12], [0 15], [18 4],
[5 18], [19 0], [0 19], [21 5].
[10 15] 
= [8 14] 
= [0 18] 
= [5 20] 
= [21 18] 
= [14 0] 
= [20 15] 
= [0
2] 
= [1 19] 
= [5
0] 
=
J, O
H, N
_, R
E,
T
U, R
N,
_
T, O
_,
B
A,
S
E,
_
12. Use the given information to find D.
User
A
B
0.30 0.20 A
D = 
  Supplier
0.40 0.40 B
The equation X = DX + E may be rewritten in the
form ( I − D ) X = E , that is
 0.7 − 0.2
10,000

X = 
.
0.6
− 0.4
20,000
Solve this system by using Gauss-Jordan elimination to
obtain
29,412
x ≈ 
.
 52,941
14. From the given matrix D, form the linear system
X = DX + E , which can be written as ( I − D ) X = E ,
18. (a) The line that best fits the given
points is shown in the graph.
y
that is
 0.8 −0.4 −0.4
5000




−0.4 0.8 −0.2 X = 2000 .
 0 −0.2 0.8
8000




21,875


Solving this system, X = 17,000.
14,250


3
(4, 2)
(6, 2)
(3,
1)
1
(1, 0)
(4, 1)
2
−2
(b) Using the matrices
1

1
1

1
X = 
1
1

1

1
y
4
3
(− 1, 1)
(3, 2)
(1, 1)
(− 3, 0) − 1
1
2
3
x
−2
 8
 8 28
T
XTX = 
, X Y =  , and
28
116


37
1 −3
0


 
1 −1
1
X = 
and Y =   ,
1 1
 1


 
1 3
2
4 0
4
T
you have X T X = 
, X Y =  , and
0 20
6
(
)
1
0

 
2
0

0
3

 
3
 1
 and Y =   ,
4
 1
2
4

 
2
5

 
6
2
you have
(b) Using the matrices
1
−1
A = X T X X TY =  4
 0
x
(2, 0) (3, 0) 5 6
16. (a) The line that best fits the given points is shown in
the graph.
2
(5, 2)
4
(
A = XTX
(c) Solving Y = XA + E for E, you have
 −0.1


0.3
E = Y − XA = 
.
−0.3


 0.1
So, the sum of the squares error is E T E = 0.2.
−1


2
So, the least squares regression line is y = 12 x − 43 .
(c) Solving Y = XA + E for E, you have
0 4
 1
=  3 .

1 6
 
10 
20 
3 x + 1.
So, the least squares regression line is y = 10
− 3 
) ( X T Y ) =  14  .
E = Y − XA =  14
− 14
− 34
1
4
− 14
3
4
1
4
− 14 
and the sum of the squares error is E T E = 1.5.
20. Using the matrices
1 1
0


 
X = 1 3 and Y = 3 ,
1 5
6


 
you have
1 1
1 1 1 
3 9

T
X X = 
 1 3 = 
,
1 3 5 
9 35

1
5


0
1 1 1  
 9
X Y = 
 3 =  , and
1 3 5  
39
6
T
A = (X T X )
−1
− 3 
( X T Y ) =  23  .


2
So, the least squares regression line is y = 32 x − 32 .
1 −4
−1


 
1
−
2
 and Y =  0,
X = 
1 2
 4


 
1 4
 5
1 0
 6


 
1 4
 3
X = 1 5 and Y =  0,


 
1 8
− 4


 
1
10


 −5
you have
4 0
 8
T
XT X = 
, X Y =  , and
0 40
32
1
−1
A = ( X T X ) ( X TY ) =  4
 0
0  8
 2
=  .

1 32
 
0.8
40 
So, the least squares regression line is y = 0.8 x + 2.
24. Using matrices
you have
4 0
 7
T
X X = 
, X Y =  , and
0
20


−13
T
( X TY )
1
= 4

 0
you have
1 0


1 4
1
1
1
1
1


 5 27


XTX = 
 1 5 = 
,
0 4 5 8 10
27 205
1 8


1 10
 6
 
 3
1
1
1
1
1


 0
 
X TY = 
  0 = 
, and
0 4 5 8 10
−70
−4
 
−5
1 −3
4


 
1
−
1
 and Y = 2 ,
X = 
1 1
 1


 
1 3
0
−1
26. Using matrices
22. Using matrices
A = (XT X )
 7
0  7
=
 134  .


1 
13
−
− 20 



20 
So, the least squares regression line is
y = −0.65 x + 1.75.
(
A = XT X
 205 −27  0


5 −70

) ( X T Y ) = 2961 −27
−1
1890
1 
= 296

.
−350
So, the least squares regression line is
945 .
y = − 175
x + 148
148
28. Using matrices
1

1
X = 1

1

1
9
0.72



10
0.92
11 and Y = 1.17,



1.34
12



13
1.60
you have
 5 55
 5.75 
T
XT X = 
 and X Y = 
.
55
615


65.43
−1
−1.248
A = ( X T X ) X TY = 

 0.218 
So, the least squares regression line is
y = 0.218 x − 1.248.
30. (a) To encode a message, convert the message to numbers and partition it into uncoded row matrices of size 1 × n.
Then multiply on the right by an invertible n × n matrix A to obtain coded row matrices. To decode a message,
multiply the coded row matrices on the right by A−1 and convert the numbers back to letters.
(b) A Leontief input-output model uses an n × n matrix to represent the input needs of an economic system, and
an n × 1 matrix to represent any external demands on the system.
(c) The coefficients of the least squares regression line are given by $A = (X^TX)^{-1}X^TY$.
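A minimal numerical sketch of the formula in part (c), using the data points from Exercise 20; solving the normal equations XᵀXA = XᵀY directly avoids forming the inverse explicitly.

```python
import numpy as np

# Data from Exercise 20: points (1, 0), (3, 3), (5, 6)
X = np.array([[1.0, 1.0],
              [1.0, 3.0],
              [1.0, 5.0]])
Y = np.array([0.0, 3.0, 6.0])

# Normal equations: (X^T X) A = X^T Y
A = np.linalg.solve(X.T @ X, X.T @ Y)
print(A)   # [-1.5  1.5]  ->  y = (3/2)x - 3/2
```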
Review Exercises for Chapter 2
 1 2
7 1
 −2 −4 56 8
 54 4





 



2. −2 5 −4 + 8 1 2 = −10 8 +  8 16 = −2 24
6 0
 1 4
−12 0  8 32
−4 32





 



 1(6) + 5( 4) 1( −2) + 5(0) 1(8) + 5(0)
 1 5 6 −2 8
 26 −2 8
4. 
 = 

 = 

2(6) − 4( 4) 2( −2) − 4(0) 2(8) − 4(0)
2 −4 4 0 0
−4 −4 16
2 1  4 2 −2 4
 5 5 −2 4
 3 9
6. 

 + 
 = 
 + 
 = 

−
6
0
3
1
0
4
24
12
0
4


 


 

24 16
 5
2 −1
 x1 
8. Letting A = 
, x =  , and b =   , the
− 4
3 2
 x2 
system can be written as
Ax = b
 1
 
14. A = −2
−3
 
T
2 −1  x1 
 5

   =  .
3
2
x

  2
− 4
 1
 1 −2 −3
 


A A = −2[1 −2 −3] = −2 4 6
−3
−3 6 9
 


Using Gaussian elimination, the solution of the system is
 6
x =  7 .
− 23 
 7
 1
 
AAT = [1 −2 −3]−2 = [14]
−3
 
3
1
2
 x1 
 10


 
 
10. Letting A = 2 − 3 − 3, x =  x2 , and b =  22,
4 − 2
 x3 
− 2
3

 
 
the system can be written as
Ax = b
3
1  x1 
2
 10

 
 
2
−
3
−
3
x
=

  2
 22.
4 − 2



− 2
3  x3 

 
Using Gaussian elimination, the solution of the system is
5
 
x =  2 .
− 6
 
 3 2
12. AT = 

−1 0
 3 2 3 −1
 13 −3
A A = 

 = 

−1 0 2 0
−3 1
T
3 −1  3 2
10 6
AAT = 

 = 

−
2
0
1
0



 6 4
T
16. From the formula
A−1 =
 d −b
1

,
ad − bc −c a
you see that ad − bc = 4( 2) − ( −1)( −8) = 0, and so the
matrix has no inverse.
18. Begin by adjoining the identity matrix to the given
matrix.
1 1 1 1 0 0
[ A I ] = 0 1 1 0 1 0
0 0 1 0 0 1 


This matrix reduces to
I
 1 0 0 1 −1 0


A−1  = 0 1 0 0 1 −1.
0 0 1 0 0 1


So, the inverse matrix is
 1 −1 0


A−1 = 0 1 −1.
0 0 1


20.
A
x
 52
4 −2
1
Because A−1 = 10

 =  1
− 10
−1 3
equation Ax = b as follows.
 2
x = A−1b =  15
− 10
A
− 15 
, solve the
3
10 

1
1
 5
9
6
 188

4
A = − 9
− 13 
9
 17 − 2
1
9
6
 18
Solve the equation Ax = b as follows.
b
 1 1 2  x1 
 0

 
 
−
=
1
1
1
x

  2
−1
2 1 1  x3 
 2

 
 
Using Gauss-Jordan elimination, you find that
3
1
− 52
5
5
 1

3
1
A =  5 −5
.
5
 3
1 − 2
5
5
 5
Solve the equation Ax = b as follows.
5
 18
 8
x = A b = − 9
 17
 18
−1
− 52

x = A b =  15
 3
 5
24.
A
x
1
5
− 53
1
5
3 0
5  

1 −1
5  
− 52   2
−1
2 4
2A = 

0 1
−1
4
 11
4 1
1 
Because A−1 = 11

 =  3
− 11
− 3 2
equation
Ax = b as follows.
 4
x = A b =  11
3
− 11
=
1  1 − 4
1  1 − 4

 = 
.
2 − 0 0
2 0
2
2
2 x
30. The matrix 
 will be nonsingular if
 1 4
ad − bc = ( 2)( 4) − (1)( x) ≠ 0, which implies that
2 −1  x
 5

  =  
3 4  y
− 2
−1
23 
1
− 18
0
6 





− 13   −1 =  17
9
 − 17 
1  − 7

6 
 18 
1

 4 −1
1  1 −4
So, A = 
.
 = 
4 0 2
1

0
2 

 1
 
=  1
−1
 
b
1
9
4
9
− 92
2 4
= 
, you can use the formula for
0 1
the inverse of a 2 × 2 matrix to obtain
28. Because ( 2 A)
−1
−1
b
−1
− 15   1
 1
=  
3  
−
3
 
−1
10 
x
x
1
2  x
0
 0

 
 
1  y =  −1
3 2
4 − 3 − 4  z
− 7

 
 
Using Gauss-Jordan elimination, you find that
3 2  x1 
 1

  =  
1 4  x2 
−3
22.
A
26.
b
67
1
11
, solve the
2

11
x ≠ 8.
32. Because the given matrix represents 6 times the second
row, the inverse will be 16 times the second row.
 1 0 0
 1 
0 6 0
0 0 1


1
 18 
5
11  
11
=

 19



2 −2
 
11 
− 11 
For Exercises 34 and 36, answers will vary. Sample answers are shown below.
34. Begin by finding a sequence of elementary row operations to write A in reduced row-echelon form.
Matrix
 1 − 4


−3 13
Elementary Row Operation
Interchange the rows.
Elementary Matrix
0 1
E1 = 

 1 0
 1 − 4


1
0
Add 3 times row 1 to row 2.
1 0
E2 = 

3 1
 1 0


0 1
Add 4 times row 2 to row 1.
 1 4
E3 = 

0 1
Then, you can factor A as follows.
0 1  1 0  1 −4
A = E1−1E2−1E3−1 = 



1
 1 0 −3 1 0
36. Begin by finding a sequence of elementary row operations to write A in reduced row-echelon form.
Matrix
Elementary Row Operation
Elementary Matrix
 1 0 2


0 2 0
 1 0 3


Multiply row one by 13 .
 13 0 0


E1 = 0 1 0
0 0 1


 1 0 2


0 2 0
0 0 1


Add −1 times row one to row three.
 1 0 0


E2 =  0 1 0
−1 0 1


 1 0 0


0 2 0
0 0 1


Add −2 times row three to row one.
 1 0 −2


E3 = 0 1 0
0 0
1

 1 0 0


0 1 0
0 0 1


Multiply row two by
1.
2
 1 0 0


E4 = 0 12 0
0 0 1


So, you can factor A as follows.
3 0 0  1 0 0  1 0 2  1 0 0





A = E1−1E2−1E3−1E4−1 = 0 1 0 0 1 0 0 1 0 0 2 0
0 0 1  1 0 1 0 0 1 0 0 1





a b
38. Letting A = 
 , you have
c d 
a 2 + bc ab + bd 
a b a b
0 0
A2 = 
=
= 



.
2
c d  c d 
0 0
ac + dc cb + d 
So, many answers are possible.
0 0 0 1

, 
, etc.
0 0 0 0
40. There are many possible answers.
0 1
 1 0
0 1 1 0
0 0
A = 
, B = 
  AB = 

 = 
 = 0
0 0
0 0
0 00 0
0 0
 1 0 0 1
0 1
But, BA = 

 = 
 ≠ 0.
0 0 0 0
0 0
(
)(
)
42. Because ( A−1 + B −1 )( A−1 + B −1 ) = I , if ( A−1 + B −1 ) exists, it is sufficient to show that A−1 + B −1 A( A + B) B = I
−1
−1
for equality of the second factors in each equation.
( A−1 + B −1)( A( A + B)−1 B) = A−1( A( A + B)−1 B) + B −1( A( A + B)−1 B)
= A−1 A( A + B) B + B −1 A( A + B) B
−1
−1
= I ( A + B ) B + B −1 A( A + B ) B
−1
(
−1
= ( I + B −1 A) ( A + B) B
−1
(
)
= ( B −1B + B −1 A) ( A + B ) B
−1
)
( B + A)( A + B) B
−1
= B −1 ( A + B)( A + B ) B
= B
−1
−1
= B −1IB
= B −1 B
= I
Therefore, ( A−1 + B −1 )
−1
= A( A + B ) B.
−1
44. Answers will vary. Sample answer:
Matrix
Elementary Matrix
− 3 1

 = A
 12 0
− 3 1

 =U
 0 4
EA = U
 1 0
E = 

− 4 1
 1 0− 3 1
A = E −1 U = 

 = LU
− 4 1 0 4
46. Matrix
Elementary Matrix
1 1 1


1 2 2 = A
1 2 3


 1 1 1


0 1 1
 1 2 3


 1 0 0


E1 = −1 1 0
 0 0 1


 1 1 1


0 1 1
0 1 2


 1 0 0


E2 =  0 1 0
−1 0 1


 1 1 1


0 1 1 = U
0 0 1


 1 0 0


E3 = 0 1 0
0 −1 1


E3 E2 E1 A = U  A =
E1−1E2−1E3−1U
1 0 0  1 1 1



= 1 1 0 0 1 1 = LU
1 1 1 0 0 1



48. Matrix
2
0

0

2
Elementary Matrix
1 −1
3
1 −1
= A
0 −2 0

1 1 −2
1
2 1 1 −1
0 3
1 −1

=U
0 0 −2 0


0 0 0 −1
1
0
EA = U  A = 
0

1
 1
 0
E = 
 0

−1
0 0 0 2
1 0 0 0
0 1 0 0

0 0 1 0
0 0 0
1 0 0
0 1 0

0 0 0
1 1 −1
3
1 −1
= LU
0 −2 0

0 0 −1
1
0
Ly = b : 
0

1
0 0 0  y1 
 7
 7
−3
−3
1 0 0  y2 
=    y =  
 2
 2
0 1 0  y3 
 
 
 
0 0 1  y4 
 8
 1
2
0
Ux = y : 
0

0
1 −1  x1 
 7
 4





−1
3
1 −1  x2 
−3
=    x =  
 2
−1
0 −2 0  x3 
 
 
 
0 0 −1  x4 
 1
−1
1
100 90 70 30
110 99 77 33
50. 1.1
 = 

40
20
60
60


 44 22 66 66
52. (a) In matrix B, grading system 1 counts each midterm
as 25% of the grade and the final exam as 50% of
the grade.
Grading system 2 counts each midterm as 20% of
the grade and the final exam as 60% of the grade.
78

84
92
(b) AB = 
88

74
96

82 80
80
 80



88 85
85.5 85.4

0.25
0.20

 
93 90 
91.25
91
 0.25 0.20 = 

86 90 
 88.5 88.8

 0.50 0.60 

78 80
 78 78.4
96.75
95 98
97

(c) Two students received an “A” in each grading system.
3
 2 1
 2 1
 1 0
54. f ( A) = 
 − 3
 + 2

−1 0
−1 0
0 1
3  6 3 2 0
 4
= 
 − 
 + 

− 3 − 2 − 3 0 0 2
0 0
= 

0 0
56. The matrix is not stochastic because the sums of the entries in columns 1 and 2 are not 1.
58. This matrix is stochastic because each entry is between
0 and 1, and each column adds up to 1.
0.307
60. X 1 = PX 0 = 

0.693
0.38246
X 2 = PX 1 = 

0.61754
0.3659
X 3 = PX 2 = 

0.6341
4
 0.4 
9


5
62. X 1 = PX 0 =  27  ≈ 0.185
 


 10

0.370
27 
 37 
0.4568
 81 


22
X 2 = PX 1 =  81  ≈ 0.2716
 
0.2716
 22



81 
 103 
0.4239
 243 


59 
≈ 0.2428
X 3 = PX 2 =  243
 
 0.3 
 13 


64. Begin by forming the matrix of transition probabilities.
From Region
1
2
3
0.85 0.15 0.10 1

 
P = 0.10 0.80 0.10 2 To Region
0.05 0.05 0.80 3

 
(a) The population in each region after 1 year is given by
1
0.36
0.85 0.15 0.10  3 



 1 


PX = 0.10 0.80 0.10 3 =  0.3  .
 0.3 
0.05 0.05 0.80  1 

  3 


0.36 
110,000 Region 1




So, 300,000  0.3  = 100,000 Region 2.
 0.3 
 90,000  Region 3




(b) The population in each region after 3 years is given by
0.665375 0.322375 0.2435  3 
 0.4104 

 


3
P X =  0.219
0.562 0.219  13  =  0.3  .


0.115625 0.115625 0.5375 1
0.25625

  3 


1
 0.4104 
123,125  Region 1




So, 300,000  0.3  = 100,000 Region 2.
0.25625
 76,875  Region 3




66. The stochastic matrix
1
P = 
0

4
7

3
7
is not regular because P n has a zero in the first column
for all powers.
 x1 
To find X , begin by letting X =  . Then use the
 x2 
matrix equation PX = X to obtain
1

0

4
 x1 
7  x1 

=  .
3  x 
 x2 
7   1
Use these matrices and the fact that x1 + x2 = 1 to write
the system of linear equations shown.
68. The stochastic matrix
0 0.2
 0


P = 0.5 0.9
0
0.5 0.1 0.8


is regular because P 2 has only positive entries.
To find X , let X = [ x1 x2 x3 ] . Then use the matrix
T
equation PX = X to obtain.
0 0.2 x1 
 0
 x1 

 
 
0 x2  =  x2 .
0.5 0.9
0.5 0.1 0.8 x3 
 x3 

 
 
Use these matrices and the fact that x1 + x2 + x3 = 1 to
write the system of linear equations shown.
− x1
+ 0.2 x3 = 0
4
x
7 2
= 0
0.5 x1 − 0.1x2
− 74 x2
= 0
0.5 x1 + 0.1x2 − 0.2 x3 = 0
x1 + x2 = 1
x1 +
x2 +
= 0
x3 = 1
The solution of the system is x1 = 1 and x2 = 0
5
1
, x2 = 11
, and
The solution of the system is x1 = 11
1
So, the steady state matrix is X =  .
0
5
x3 = 11
.
1
11
5
So, the steady state matrix is X = 11
.
5
11
70. Form the matrix representing the given probabilities. Let C represent the classified documents, D represent the declassified
documents, and S represent the shredded documents.
C
From
D
S
0.70 0.20 0 C 

 
P = 0.10 0.75 0 D To
0.20 0.05 1 S 

 
 x1 
 
Solve the equation PX = X , where X =  x2  and use the fact that x1 + x2 + x3 = 1 to write a system of equations.
 x3 
 
0.70 x1 + 0.20 x2
= x1
− 0.3x1 +
0.2 x2
= 0
0.10 x1 + 0.75 x2
= x2 
0.1x1 − 0.25 x2
= 0
0.2 x1 + 0.05 x2
= 0
0.20 x1 + 0.05 x2 + x3 = x3
x1 +
x2 + x3 =
x1 +
1
x2 + x3 = 0
0
 
So, the steady state matrix is X = 0.
1
 
This indicates that eventually all of the documents will be shredded.
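Iterating the chain numerically makes the absorption visible. A sketch (assuming NumPy); the starting distribution below is arbitrary, since the exercise does not specify one:

```python
import numpy as np

# Transition matrix from Exercise 70 (C, D, S); the third state (shredded) is absorbing.
P = np.array([[0.70, 0.20, 0.00],
              [0.10, 0.75, 0.00],
              [0.20, 0.05, 1.00]])

# Arbitrary starting distribution, just to watch the convergence.
X = np.array([0.5, 0.5, 0.0])
for _ in range(200):
    X = P @ X
print(np.round(X, 4))   # approaches [0. 0. 1.]: everything ends up shredded
```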
72. The matrix
0
0.38
1


P = 0 0.30
0 
0 0.70 0.62


is absorbing. The first state S1 is absorbing and it is possible to move from S2 to S1 in two transitions and to move from S3 to
S1 in one transition.
74. (a) False. See Exercise 65, page 61.
 1 0
(b) False. Let A = 

0 1
−1 0
B = 
.
 0 −1
and
0 0
Then A + B = 
.
0 0
A + B is a singular matrix, while both A and B are nonsingular matrices.
76. (a) True. See Section 2.5, Example 4(b).
(b) False. See Section 2.5, Example 7(a).
78. The uncoded row matrices are
B E A
M
_
M
E
_
[2
[13
0 13]
[5
0 21]
5
1]
U
P _
[16
S
0 19]
Y
_
[3 15 20] [20 25
C
O
T
T
0]
Multiplying each 1 × 3 matrix on the right by A yields the coded row matrices.
[17 6 20] [0 0 13] [−32 −16 −43] [−6 −3 7] [11 −2 −3] [115 45 155]
So, the coded message is
17, 6, 20, 0, 0, 13, − 32, −16, − 43, − 6, − 3, 7, 11, − 2, − 3, 115, 45, 155.
80. Find A−1 to be
−3 − 4
A−1 = 

1
 1
and the coded row matrices are
[11 52], [−8 −9], [−13 −39], [5 20], [12 56], [5 20], [−2 7], [9 41], [25 100].
Multiplying each coded row matrix on the right by A−1 yields the uncoded row matrices.
S H
O
[19
[15 23]
8]
W
_
M
E
_
T H
E
_
M
[0 13]
[5
0]
[20
[5
0]
[13 15]
8]
O
N E
Y _
[14
[25
5]
0]
So, the message is SHOW_ME_THE_MONEY_.
82. Find A−1 to be
A
−1
4
13
8
= 13
5
13
1
13

2 ,
13 
2
− 13

2
13
9
− 13
4
− 13
and multiply each coded row matrix on the right by A−1 to find the associated uncoded row matrix.
[66
27 −31] A
−1
4
13
8
= [66 27 −31]13
5
13
[37 5 −9] A−1
[61 46 −73] A−1
[46 −14 9] A−1
[94 21 −49] A−1
[32 −4 12] A−1
[66 31 −53] A−1
[47 33 −67] A−1
2
13
9
− 13
4
− 13
[11 5 5] 
[19 0 23] 
= [9 14 0] 
= [23 15 18] 
= [12 4 0] 
= [19 5 18] 
= [9
5 19] 
1
13

2 =
13 
2
− 13

[25 1 14]  Y, A, N
=
K,
E,
=
S,
_, W
E
I, N,
_
W, O,
R
L, D,
_
S,
E,
R
I,
E,
S
[32 19 − 56] A−1 = [0 9 14]  _, I, N
[43 − 9 − 20] A−1 = [0 19 5]  _, S, E
[68 23 − 34] A−1 = [22 5 14]  V, E, N
The message is YANKEES_WIN_WORLD_SERIES_IN_SEVEN.
84. Solve the equation X = DX + E for X to obtain
( I − D) X = E , which corresponds to solving the
86. Using the matrices
 0.9 − 0.3 − 0.2 3000


0.8 − 0.3 3500
 0
− 0.4 − 0.1 0.9 8500


1

1
X = 1

1

1
The solution to this system is
you have
10,000


X = 10,000.
15,000


 5 20
14
T
XT X = 
, X Y =  , and
20 90
63
augmented matrix.
2
 1

 
3
3

4 and Y = 2 ,

 
4
5

 
6
4
−1
 1.8 −0.4 14
 0
A = ( X T X ) X TY = 
   =  .
0.1 63
−0.4
0.7
So, the least squares regression line is y = 0.7 x.
88. Using the matrices
1 −2
 4
1 −1
 2


 
X = 1 0 and Y =  1, you have


 
1 1
−2
1 2
−3


 
1
−1
5 0
 2
XTX = 
, X TY = 
, and A = X T X X T Y =  5


0 10
−18
0
(
)
0 

2  0.4
1 −18 −1.8



10 
.
So, the least squares regression line is y = −1.8x + 0.4, or y = −(9/5)x + 2/5.
1

1
1
90. (a) Using the matrices X = 
1

1
1

8
2.93



9
3.00
 3.01
10
 and Y = 
 , you have
11
3.10



12
 3.21
3.39
13


1

1
1
1
1
1
1
1

 1
XT X = 

8 9 10 11 12 13 1

1
1

8

9
10
 6 63 
 = 

11
63 679

12
13
and
2.93


3.00
1 1 1 1 1 1  3.01
 18.64 
 = 
X TY = 

.
8 9 10 11 12 13 3.10
197.23


 3.21
3.39


Now, using ( X T X ) to find the coefficient matrix A, you have
−1
 97
−1
15
A = ( X T X ) X TY = 
−3
 5
−3
5   18.64 

2.2007
 ≈ 
.
0.0863
2  197.23

35 
So, the least squares regression line is y = 0.0863 x + 2.2007.
(b) Using a graphing utility, the regression line is y = 0.0863 x + 2.2007.
(c)
Year
2008
2009
2009
2010
2011
2012
2013
Actual
2.93
2.93
3.00
3.01
3.10
3.21
3.39
Estimated
2.89
2.89
2.98
3.06
3.15
3.24
3.32
The estimated values are close to the actual values.
Project Solutions for Chapter 2
1
Exploring Matrix Multiplication
1. Test 1 seems to be the more difficult test. The averages
were:
Test 1 average = 75
Test 2 average = 85.5
2. Anna, David, Chris, Bruce
3. Answers will vary. Sample answer: $M\begin{bmatrix}1\\0\end{bmatrix}$ represents scores on the first test. $M\begin{bmatrix}0\\1\end{bmatrix}$ represents scores on the second test.

4. Answers will vary. Sample answer: $[1\ 0\ 0\ 0]M$ represents Anna's scores. $[0\ 0\ 1\ 0]M$ represents Chris's scores.

5. Answers will vary. Sample answer: $M\begin{bmatrix}1\\1\end{bmatrix}$ represents the sum of the test scores for each student, and $\tfrac{1}{2}M\begin{bmatrix}1\\1\end{bmatrix}$ represents each student's average.

6. $[1\ 1\ 1\ 1]M$ represents the sum of scores on each test; $\tfrac{1}{4}[1\ 1\ 1\ 1]M$ represents the average on each test.

7. $[1\ 1\ 1\ 1]M\begin{bmatrix}1\\1\end{bmatrix}$ represents the overall points total for all students on all tests.

8. $\tfrac{1}{8}[1\ 1\ 1\ 1]M\begin{bmatrix}1\\1\end{bmatrix} = 80.25$

9. $M\begin{bmatrix}1.1\\1.0\end{bmatrix}$
2
0 0 1
0 1 1




3. 0 0 0 index 2; 0 0 1 index 3
0 0 0
0 0 0




0

0
4. 
0

0
0

0
0

0
0

0
5. 0

0

0
0 0 1
0


0 0 0
0
index 2; 


0 0 0
0


0
0 0 0
0 1 1

0 0 1
index 3;
0 0 0

0 0 0
1 1 1

0 1 1
index 4
0 0 1

0 0 0
1 1 1 1

0 1 1 1
0 0 1 1

0 0 0 1

0 0 0 0
6. No. If A is nilpotent and invertible, then $A^k = O$ for some k and $A^{k-1} \ne O$. Because $A^{-1}A = I$,
$O = A^{-1}A^k = (A^{-1}A)A^{k-1} = IA^{k-1} = A^{k-1} \ne O$,
which is impossible.
7. If A is nilpotent of index k, then $(A^T)^k = (A^k)^T = O^T = O$. But $(A^T)^{k-1} = (A^{k-1})^T \ne O$, which shows that $A^T$ is nilpotent with the same index.
8. Let A be nilpotent of index k. Then
$(I - A)(A^{k-1} + A^{k-2} + \cdots + A^2 + A + I) = I - A^k = I$,
which shows that $A^{k-1} + A^{k-2} + \cdots + A^2 + A + I$ is the inverse of $I - A$.
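A quick numerical check of the identity in Problem 8 (a sketch using NumPy; the strictly upper triangular matrix below, which is nilpotent of index 3, is chosen only for illustration):

```python
import numpy as np

# Strictly upper triangular matrices are nilpotent; this one has index 3 (A^3 = O).
A = np.array([[0.0, 1.0, 2.0],
              [0.0, 0.0, 3.0],
              [0.0, 0.0, 0.0]])

# The finite geometric series I + A + A^2 should invert I - A, since A^3 = O.
S = np.eye(3) + A + A @ A
print(np.allclose((np.eye(3) - A) @ S, np.eye(3)))   # True
print(np.allclose(np.linalg.inv(np.eye(3) - A), S))  # True
```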
Nilpotent Matrices
1. A2 ≠ 0 and A3 = 0, so the index is 3.
2. (a) Nilpotent of index 2
(b) Not nilpotent
(c) Nilpotent of index 2
(d) Not nilpotent
(e) Nilpotent of index 2
(f) Nilpotent of index 3
CHAPTER 3  Determinants
Section 3.1  The Determinant of a Matrix ... 77
Section 3.2  Determinants and Elementary Operations ... 82
Section 3.3  Properties of Determinants ... 84
Section 3.4  Applications of Determinants ... 89
Review Exercises ... 95
Project Solutions ... 102
CHAPTER 3  Determinants
Section 3.1 The Determinant of a Matrix
2. The determinant of a matrix of order 1 is the entry in the
matrix. So, det[−3] = −3.
4.
6.
8.
10.
−3 1
5 2
2 −2
4
3
1
3
5
4 −9
2 −3
−6
9
12.
λ−2
0
4
λ−4
= (λ − 2)(λ − 4) − 4(0) = λ 2 − 6λ + 8
14. (a) The minors of the matrix are shown.
= −3( 2) − 5(1) = −11
= 2(3) − 4( −2) = 14
M 11 = − 5 = 5
M 12 = 6 = 6
M 21 = 1 = 1
M 22 = 0 = 0
(b) The cofactors of the matrix are shown.
C11 = ( −1) M 11 = 5
2
C12 = ( −1) M 12 = 6
C21 = ( −1) M 21 = 1
C22 = ( −1) M 22 = 0
3
= 13 ⋅ ( −9) − 5 ⋅ 4 = −23
3
4
= 2(9) − ( −6)( −3) = 0
16. (a) The minors of the matrix are shown.
M 11 =
M 21 =
M 31 =
3
1
−7 −8
4
2
−7 −8
4 2
3 1
= −17
M 12 =
= −18
M 22 =
= −2
M 32 =
6
1
= −52
M 13 =
= 16
M 23 =
= −15
M 33 =
4 −8
−3
2
4 −8
−3 2
6
1
6
3
4 −7
−3
4
4 −7
−3 4
6 3
= −54
= 5
= −33
(b) The cofactors of the matrix are shown.
C11 = ( −1) M 11 = −17
2
C12 = ( −1) M 12 = 52
3
C13 = ( −1) M13 = −54
C21 = ( −1) M 21 = 18
3
C22 = ( −1) M 22 = 16
4
C23 = ( −1) M 23 = −5
C31 = ( −1) M 31 = −2
C32 = ( −1) M 32 = 15
C33 = ( −1) M 33 = −33
4
5
4
5
6
18. (a) You found the cofactors of the matrix in Exercise 16. Now find the determinant by expanding along the third row.
−3
4
2
6
3
1 = 4C31 − 7C32 − 8C33 = 4( −2) − 7(15) − 8( −33) = 151
4 − 7 −8
(b) Expand along the first column.
−3
4
2
6
3
1 = −3C11 + 6C21 + 4C31 = −3( −17) + 6(18) + 4( −2) = 151
4 − 7 −8
20. Expand along the third row because it has a zero.
3 −1 2
4
1 4 = ( − 2)
−2
0
−1 2
1
1
4
−0
3 2
3 −1
+1
4 4
4
1
= ( − 2)( − 6) − 0( 4) + 1(7)
= 19
22. Expand along the first row because it has two zeros.
−3
0 0
7 11 0 = −3
1
2 2
11 0
2 2
7 0
−0
1 2
+0
7 11
1
2
= −3( 22) = −66
24. Expand along the first row.
0.1 0.2 0.3
− 0.3 0.2 0.2 = 0.1
0.5 0.4 0.4
0.2 0.2
0.4 0.4
− 0.2
− 0.3 0.2
0.5 0.4
+ 0.3
− 0.3 0.2
0.5 0.4
= 0.1(0) − 0.2( − 0.22) + 0.3( − 0.22)
= − 0.022
26. Expand along the first row.
x
y 1
−2 1
−2 −2 1 = x
1
5 1
5 1
− y
−2 1
1 1
+1
−2 −2
1
5
= x( −7) − y( −3) + ( −8)
= −7 x + 3 y − 8
28. Expand along the first row, because it has two zeros.
3 0
7
0
2 6 11 12
4 1 −1
1 5
2
6 11 12
= 3 1 −1
5
2 10
2 6 12
2 +74 1
2 10
2
1 5 10
The determinants of the 3 × 3 matrices are:
6 11 12
1 −1
5
2 = 6
−1
2 10
2
2 10
− 11
1
2
5 10
+ 12
1 −1
5
2
= 6( −10 − 4) − 11(10 − 10) + 12( 2 + 5) = −84 + 84 = 0
2 6 12
4 1
2 = 2
1 5 10
1
2
5 10
−6
4
2
1 10
+ 12
4 1
1 5
= 2(10 − 10) − 6( 40 − 2) + 12( 20 − 1) = 0
So, the determinant of the original matrix is 3(0) + 7(0) = 0.
30. Expand along the first row.
w
x
y
z
10 15 −25
30
−30 20 −15 −10
15 −25
30
10 −25
30
10 15
30
10 15 −25
= w 20 −15 −10 − x −30 −15 −10 + y −30 20 −10 − z −30 20 −15
35 −25 −40
30 35 −25 −40
30 −25 −40
30 35 −40
30 35 −25
The determinants of the 3 × 3 matrices are:
15 −25
30
20 −15 −10 = 15
35 −25 −40
−15 −10
−25 −40
+ 25
20 −10
35 −40
+ 30
20 −15
30 −25
= 15(600 − 250) + 25( −800 + 350) + 30( −500 + 525)
= 5250 − 11,250 + 750
= −5250
10 −25
30
−30 −15 −10 = 10
30 −25 −40
−15 −10
−25 −40
+ 25
−30 −10
30 −40
+ 30
−30 −15
30 −25
= 10(600 − 250) + 25(1200 + 300) + 30(750 + 450)
= 3500 + 37,500 + 36,000
= 77,000
10 15
30
−30 20 −10 = 10
30 35 −40
20 −10
35 −40
− 15
−30 −10
30 −40
+ 30
−30 20
30 35
= 10( −800 + 350) − 15(1200 + 300) + 30( −1050 − 600)
= −4500 − 22,500 + 49,500
= −76,500
10 15 −25
−30 20 −15 = 10
30 35 −25
20 −15
35 −25
− 15
−30 −15
30 −25
− 25
−30 20
30 35
= 10( −500 + 525) − 15(750 + 450) − 25( −1050 − 600)
= 250 − 18,000 + 41,250
= 23,500
So, the determinant is −5250 w − 77,000 x − 76,500 y − 23,500 z.
32. Expand along the fourth row because it has all zeros.
−4
3
1 −2
2
−1
−2
7 −13 −12
−6
2 −5
−6
−7 = 0
0
0
0
0
0
1 −4 −2
0
9
34. Copy the first two columns and complete the diagonal
products as follows.
3 8 −7
0 −5 4
8 1 6
280 12 0
3 8
0 −5
8 1
− 90 256 0
Add the lower three products and subtract the upper
three products to find the determinant.
3
8 −7
0 −5
4 = −90 + 256 + 0 − 280 − 12 − 0 = −126
8
6
1
36.
38.
4 3
2
5
1 6 −1
2
−3 2
4
5
6 1
3 −2
44. (a) False. The determinant of a triangular matrix is equal
to the product of the entries on the main diagonal.
For example, if
= −1098
 1 0
A = 
,
0 2
8 5
1 −2
0
−1 0
7
1
6
0 8
6
5 −3 = 48,834
1 2
5 −8
2 6 −2
then det ( A) = 2 ≠ 3 = 1 + 2.
(b) True. See Theorem 3.1 on page 113.
(c) True. This is because in a cofactor expansion each
cofactor gets multiplied by the corresponding entry.
If this entry is zero, the product would be zero
independent of the value of the cofactor.
4
0
6
40. The determinant of a triangular matrix is the product of
the elements on the main diagonal.
46. ( x − 6)( x + 1) − 3( − 2) = 0
x2 − 5x − 6 + 6 = 0
4 0
0
0 7
0 = 4(7)( −2) = −56
x2 − 5x = 0
0 0 −2
x( x − 5) = 0
42. The determinant of a triangular matrix is the product of
the elements on the main diagonal.
4 0 0
0
1
2
0
0
3 5 3
0
−1
x = 0, 5
48. ( x + 3)( x − 1) − ( −4)(1) = 0
x2 + 2x − 3 + 4 = 0
( )(3)(−2) = −12
= 4 12
x2 + 2x + 1 = 0
( x + 1)
−8 7 0 −2
2
= 0
x = −1
50.
λ −5
3
1
λ −5
= (λ − 5)(λ − 5) − 3(1) = λ 2 − 10λ + 22
The determinant is zero when λ 2 − 10λ + 22 = 0. Use the Quadratic Formula to find λ.
λ =
=
−( −10) ±
10 ±
( −10) − 4(1)( 22)
2(1)
2
12
2
10 ± 2 3
=
2
= 5±
λ
3
0
1
52. 0 λ
3
2
2 λ −2
= λ
λ
3
2 λ −2
+1
0 λ
2
2
= λ (λ 2 − 2λ − 6) + 1(0 − 2λ )
= λ 3 − 2λ 2 − 8λ
= λ (λ 2 − 2λ − 8)
= λ (λ − 4)(λ + 2)
The determinant is zero when λ (λ − 4)(λ + 2) = 0. So, λ = 0, 4, − 2.
54. (a) Take the determinant of the ( n − 1) × ( n − 1)
matrix that is left after deleting the ith row and jth
column.
58.
e− x
xe − x
−x
(1 − x)e − x
−e
= (1 − x + x )(e −2 x ) = e −2 x
then Cij = M ij .
56.
A = a11C11 + a12C12 +  + a1nC1n
3x 2
−3 y 2
1
1
1−v
60.
= (3x 2 )(1) − 1( −3 y 2 ) = 3x 2 + 3 y 2
−u
= (e − x )(1 − x )e − x − ( −e − x )( xe − x )
= (1 − x)e −2 x + xe −2 x
(b) If i + j is odd, then Cij = − M ij . If i + j is even,
(c)
x
x ln x
1 1 + ln x
= x(1 + ln x) − 1( x ln x )
= x + x ln x − x ln x = x
0
62. v(1 − w) u (1 − w) − uv = (1 − v) u 2v(1 − w) + u 2vw + u uv 2 (1 − w) + uv 2 w
vw
uw
uv
= (1 − v)(u 2v) + u (uv 2 )
= u 2v − u 2v 2 + u 2 v 2
= u 2v
64. Evaluating the left side yields
w cx
y
cz
= cwz − cxy.
Evaluating the right side yields
c
w x
y
z
= c( wz − xy ) = cwz − cxy.
66. Evaluating the left side yields
w
x
cw cx
= cwx − cwx = 0.
68. Expand the left side of the equation along the first row.

      | 1    1    1  |
      | a    b    c  |  =  1·| b   c  | − 1·| a   c  | + 1·| a   b  |
      | a³   b³   c³ |       | b³  c³ |     | a³  c³ |     | a³  b³ |

    = bc³ − b³c − ac³ + a³c + ab³ − a³b
    = b(c³ − a³) + b³(a − c) + ac(a² − c²)
    = (c − a)[bc² + abc + ba² − b³ − a²c − ac²]
    = (c − a)[c²(b − a) + ac(b − a) + b(a − b)(a + b)]
    = (c − a)(b − a)[c² + ac − ab − b²]
    = (c − a)(b − a)[(c − b)(c + b) + a(c − b)]
    = (c − a)(b − a)(c − b)(c + b + a)
    = (a − b)(b − c)(c − a)(a + b + c)
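The identity in Exercise 68 can be confirmed symbolically; the sketch below is an added check (SymPy assumed, not part of the manual) that the expanded determinant equals the expanded right-hand side.

import sympy as sp

a, b, c = sp.symbols('a b c')

M = sp.Matrix([[1, 1, 1],
               [a, b, c],
               [a**3, b**3, c**3]])

lhs = sp.expand(M.det())
rhs = sp.expand((a - b)*(b - c)*(c - a)*(a + b + c))
print(sp.simplify(lhs - rhs))   # 0, so the two sides agree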
70. Expanding along the first row, the determinant of a 4 × 4 matrix involves four 3 × 3 determinants. Each of these 3 × 3
determinants requires 6 triple products. So, there are 4(6) = 24 quadruple products.
Section 3.2 Determinants and Elementary Operations
2. Because the second row is a multiple of the first row, the
determinant is zero.
4. Because the first and third rows are the same, the
determinant is zero.
1
26. 2
1
1
10. Because 2 has been factored out of the second column
and 3 factored out of the third column, the first
determinant is 6 times the second one.
12. Because 6 has been factored out of each row, the first
determinant is 6 4 times the second one.
−1
1
0
3
0 6
0
3
2
0 2
0 = 2
−1
1 1 −1
2
1 −1
0 0
1 −2 2
3
8 −7
30. 0 −5
6
1 −2 0
1
8 −7
3
4 = 0
6
−5
A graphing utility or a software program gives the same
determinant, −2.
24.
3 2
1 1
3 2
1 1
−1 0
2 0
−1 0
2 0
4
1 −1 0
3 1
=
1 0
=
4
1 −1 0
−1 0
2 0
3 2
1 1
−1 0
2 0
4
1 −1 0
0 0
4
0 −15 20
8 −7
3
= 0 −5
4
0
8
0
= 3( −5)(8)
= −120
9 −4
32. 2
4
7
7
3
2
5
6 −5
1 −2
0
=
4 10
=
= 2(1 − 2) = −2
2
28. 2 −3 4 = 2 −3 0 = 0
22. Expand by cofactors along the second row.
−1 3
1
= 0 −3 −4 = 1( −3)( 2) = −6
16. Because a multiple of the second column was added to
the third column to produce a new third column, the
determinants are equal.
20. Because the fifth column is a multiple of the second
column, the determinant is zero.
1
0 −3 −2
1
14. Because a multiple of the first row was added to the
second row to produce a new second row, the
determinants are equal.
18. Because the second and third rows are interchanged, the
sign of the determinant is changed.
1
−1 −2 = 0 −3 −4
1 −2
6. Because the first and third rows are interchanged, the
sign of the determinant is changed.
8. Because 3 has been factored out of the third row, the first
determinant is 3 times the second one.
1
9 −4
2 5
11
3
8 0
4
1 −2 0
−11 11
0 0
9 −4
2 5
27
7
0 0
4
1 −2 0
−11 11
0 0
27
7
4
1 −2
= ( −5)
0
−11 11
= ( −5)( 2)
27
0
7
−11 11
= ( −10)(11)
27 7
−1 1
= ( −110)( 27 + 7)
= −3740
0 0
Because there is an entire row of zeros, the determinant
is 0. A graphing utility or a software program gives the
same determinant, 0.
34.
0 −4
9
9
2 −2
−5
−8
7
0
3
7
0 11
0 16
= ( − 8)
0 −4
9
9
2 −2
−5
1
7
0
3
7
0 11
0 −2
0 −4
9
0
2 −2
= ( − 8)
0
7
0
1
3
25
1
0 −2
0
−4 9
3
= − ( − 8)(1) 2 − 2 25
7
0
1
38. (a) False. Adding a multiple of one row to another does
not change the value of the determinant.
(b) True. See page 118.
(c) True. In this case you can transform a matrix into
a matrix with a row of zeros, which has zero
determinant as can be seen by expanding by
cofactors along that row. You achieve this
transformation by adding a multiple of one row to
another (which does not change the determinant of
a matrix).
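Parts (a)–(c) describe how the elementary row operations affect a determinant. The NumPy sketch below is an added numerical illustration (NumPy assumed; the matrix is randomly generated, not taken from an exercise): a row interchange flips the sign, scaling a row scales the determinant, and adding a multiple of one row to another leaves it unchanged.

import numpy as np

rng = np.random.default_rng(1)
A = rng.integers(-4, 5, size=(3, 3)).astype(float)
d = np.linalg.det(A)

B = A[[1, 0, 2], :]                   # interchange rows 1 and 2
C = A.copy(); C[0, :] *= 5.0          # multiply row 1 by 5
D = A.copy(); D[2, :] += 3*A[0, :]    # add 3*(row 1) to row 3

print(np.isclose(np.linalg.det(B), -d))    # True: sign changes
print(np.isclose(np.linalg.det(C), 5*d))   # True: scaled by 5
print(np.isclose(np.linalg.det(D), d))     # True: unchanged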
0 0 1
= 12,856
3 −2
−1 0
4 3 1
2 1 0
3 −2
−1 0
5 −1 0 3 2 = −1
4 7 −8 0 0
4
1 2 3 0 2
−5
4
2
3 1
1 0
3 −8 −3 0
7 −8 0 0
6 −5 −6 0
1 0 0
40. 0 1 0 = − 0 1 0 = −1
= (8)[8 + 1575 + 0 + 42 + 0 −18]
36.
1 0 0
0 0
1 0 0
1 0 0
1
1 0 = 0 1 0 =1
42. 0
0 k
1
0 0 1
1+ a
1
1
1
1+b
1
= 0
b
−c
1
1
1+c
1
1
1+ c
44.
−1 0 2 1
−1 3 −8 −3
=
4 7 −8 0
0 −a −a − c − ac
= ac − b( − a − c − ac)
= ac + ab + bc + abc
abc( ac + ab + bc + abc)
abc
1 1 1

= abc1 + + + 
b c
a

−5 6 −5 −6
−1 0
=
2 1
−4 3 −2 0
=
4 7 −8 0
−11 6 7 0
−4 3 −2
= − 4 7 −8
−11 6
0 b 0
46. (a)
0 4
0
a 0 0 = 1 0
0
0 10 −10
= − 4 7 −8
7
−11 6
1 0
0
= − 0 4
0
0 0 −3
= −(1)( 4)( −3)
0 1 −1
= −10 4 7 −8
−11 6 7
= 410
0 0 −3
0 0 c
7
= −10 ( −1)( 28 − 88) − 1( 24 + 77)
= 12
(b)
a 0
1
0
1
0 c
0 = 0 −3
0
b 0 −16
1
4
0 −16
Expand by cofactors in the second row.
1
0
0 −3
4
1
0 = −3
0 −16
1
1
4 −16
= −3( −16 − 4) = 60
48. If B is obtained from A by multiplying a row of A by a nonzero constant c, then

    det(B) = det | a11    ⋯   a1n  |
                 |  ⋮           ⋮  |
                 | ca_i1  ⋯  ca_in |
                 |  ⋮           ⋮  |
                 | an1    ⋯   ann  |
           = ca_i1·C_i1 + ⋯ + ca_in·C_in
           = c(a_i1·C_i1 + ⋯ + a_in·C_in)
           = c det(A).
Section 3.3 Properties of Determinants
2. (a) |A| = | 3   4 |
             | 4   3 | = −7

   (b) |B| = | 2  −1 |
             | 5   0 | = 5

   (c) AB = [ 3   4 ][ 2  −1 ]   [ 26  −3 ]
            [ 4   3 ][ 5   0 ] = [ 23  −4 ]

   (d) |AB| = | 26  −3 |
              | 23  −4 | = −35

   Notice that |A||B| = (−7)(5) = −35 = |AB|.

4. (a) |A| = | 2   0   1 |
             | 1  −1   2 | = 0
             | 3   1   0 |

   (b) |B| = | 2  −1   4 |
             | 0   1   3 | = −7
             | 3  −2   1 |

   (c) AB = [ 2   0   1 ][ 2  −1   4 ]   [ 7  −4   9 ]
            [ 1  −1   2 ][ 0   1   3 ] = [ 8  −6   3 ]
            [ 3   1   0 ][ 3  −2   1 ]   [ 6  −2  15 ]

   (d) |AB| = | 7  −4   9 |
              | 8  −6   3 | = 0
              | 6  −2  15 |

   Notice that |A||B| = 0(−7) = 0 = |AB|.
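The multiplicative property |AB| = |A||B| used in these exercises can be spot-checked numerically with the matrices from Exercise 2; the NumPy sketch below is an added check (NumPy assumed, not part of the manual).

import numpy as np

# Matrices from Exercise 2.
A = np.array([[3, 4], [4, 3]], dtype=float)
B = np.array([[2, -1], [5, 0]], dtype=float)

print(np.linalg.det(A))        # -7
print(np.linalg.det(B))        #  5
print(np.linalg.det(A @ B))    # -35 = (-7)(5), up to round-off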
2
6. (a)
(b)
A =
B =
4 7 0
1 −2
1 1
0
0 2
1
−1 1 0
6
1 −1
= 7
0
−1 2
1
1
0 0
1
2
0 0
0 −1
4
2

1 −2
(c) AB = 
0
0

 1 −1
8
(d)
1
AB =
= −13
7 0 6

1 1−1
2 1 0

1 0
 0
10
1 −1
2
0
0
0
9 18
8 10



1 1
8 − 3 − 2 −1
= 
0
1 2
0
2 3



7 −1 −1 1
0 −1
9 18
8 − 3 − 2 −1
0
0
2
3
7
−1
−1
1
= − 91
Notice that A B = 7( −13) = − 91 = AB .
8.
A =
21
7
28 − 56
= 72
3
1
4 −8
= 49( − 28) = −1372
4 16
0
10. A = 12 −8
1
4
0
8 = 4 3 −2
2
3
16 20 −4
5 −1
4
1 4
0
= 43 11 8
0
4 5 −1
= ( −64)( −36) = 2304
40 25 10
8 5 2
5 20 = 5 6
3
12. A = 30
15 35 45
1 4
3 7 9
−22 0 −18
= 5
3
6
1
4
−39 0 −19
= 125( −284) = −35,500
16 − 8 − 32
0
14. A =
−16
8 −8
16
8 − 24
8
−8
−8
0
32
16. (a)
A =
(b)
B =
32
1 −2
1
0
3 −2
0
0
= 84
0
2 −1 − 4
−2
1 −1
2
1 −3
1
−1
−1
0
4
4
= 4096(15) = 61,440
= 2
= 0
1 −2 3 −2
4 − 4
(c) A + B = 
 + 
 = 

0
1 0 0 0
1
4 −4
(d) A + B =
= 4
1
0
Notice that A + B = 2 + 0 = 2 ≠ A + B .
0
18. (a)
1 2
0
1 2
A = 1 −1 0 = 1 −1 0 = 5
2
1 1
0
3 1
0 1 −1
(b)
B = 2 1
0 1
1 = −2( 2) = −4
1
0 2 1
0 1 2 0 1 −1

 

(c) A + B =  1 −1 0 + 2 1 1 = 3 0 1
2 1 1 0 1 1
2 2 2

 

0 2 1
(d)
A+ B = 3 0
1 = −2
2 2 2
Notice that A + B = 5 + ( −4) = 1 ≠ A + B .
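As the last line notes, the determinant is not additive in general. A quick numerical illustration (NumPy assumed; the matrices here are small illustrative choices of my own, not the ones from the exercise):

import numpy as np

A = np.array([[1, 2], [3, 4]], dtype=float)
B = np.array([[0, 1], [1, 0]], dtype=float)

print(np.linalg.det(A) + np.linalg.det(B))   # (-2) + (-1) = -3
print(np.linalg.det(A + B))                  # det([[1,3],[4,4]]) = -8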
20. Because
3 −6
4
2
A
26.
= 30 ≠ 0,
the matrix is nonsingular.
−1
 1
 3
1  2 2
= 
 = 
6 −2 1
 1
− 3
A−1 =
22. Because
14
5
7
−15
0
3 = 0,
1
3

1
6 
1  1   1  1 
1
1
1
+
=
  −  −   =
3  6   3  3  18 9
6
Notice that A = 6.
So, A−1 =
1 − 5 −10
1
1
= .
A
6
the matrix is singular.
1
 1
1 − 
− 2
2


0
A−1 =  2 −1


1
 3 −1
2 
 2
24. Because
0.8
0.2 −0.6 0.1
−1.2
0.6
0.6
0
0.7 −0.3
0.1
0
0.2 −0.3
0.6
0
28.
= 0.015 ≠ 0,
−
the matrix is nonsingular.
A−1 =
1
1
1 0 0
1 −
2
2
1
2 −1 0
= −
2 −1
0 =
2
3
1
3
1
−1
−1
2
2
2
2
1
Notice that A = 2
0
1
−1 2 =
1 −2 3
So, A−1 =
30.
1
0
2 −1
−3
1
2 = −2.
0 −1
1
1
= − .
A
2
7


4
2 −3
2


3


1 −3
3
A−1 = 
2


1
0 −1
0


1
0
−1
1 −


2
2 −3
A−1 =
1 −3
0
1
0
1 −
7
4
2 −3
2
3
3
1 −3
=
2
0 −1
0
1
1
2
−1
7
4
2
2 −3 4
0 3 −2
3
3
1
1
1
=
1 −3 3 =
1 −3 3 =
2
2
2
2
0 −1
0
1 −1
0
1 −1
1
0 −
0
2
0
0
Notice that A =
0
3
1 −2 −3
1
0
1
0
2 −2
1 −2 −4
So, A−1 =
1
=
0
1
0
3
0
0
1
0
0
0
2 −2
1 −2 −4
1
1 0
= −0 1
3
0 = 2.
0 2 −2
1
1
= .
A
2
42. Find the value of k necessary to make A singular by
setting A = 0.
32. The coefficient matrix of the system is
 3 − 4
2
8 .
 3 − 9 
k
Because the determinant of this matrix is zero, the
system does not have a unique solution.
−3 − k
A = −2
k
1 = − k 3 − 2k = 0
k
1
0
So, k = 0 or k = ± 2.
34. The coefficient matrix of the system is
 1 1 −1


2 −1 1.
3 −2 2


44. First obtain A =
Because the determinant of this matrix is zero, the
system does not have a unique solution.
36. The coefficient matrix of the system is
1 −1 −1 −1


1 1 −1 −1.
1 1 1 −1


1 1 1 1
−4 10
5
38. Find the values of k that make A singular by setting
A = 0.
6
= −74.
(a)
AT = A = −74
(b)
A2 = A A = ( −74) = 5476
(c)
AAT = A AT = ( −74)( −74) = 5476
(d)
2 A = 22 A = 4( −74) = −296
(e)
A −1 =
2
1
1
1
=
= −
A
74
( −74)
Because the determinant of this matrix is 8, and not zero,
the system has a unique solution.
1
5
46. First obtain A = 0 −6
4
2 = 18.
0 −3
0
(a)
AT = A = 18
(b)
A2 = A A = 182 = 324
= k2 + k − 6
(c)
AAT = A AT = (18)(18) = 324
= ( k + 3)( k − 2) = 0
(d)
2 A = 23 A = 8(18) = 144
(e)
A −1 =
A =
k −1
2
2
k + 2
= ( k − 1)( k + 2) − 4
which implies that k = −3 or k = 2.
40. Find the values of k that make A singular by setting
A = 0. Using the second column in the cofactor
expansion, you have
1 k
1
1
=
A
18
4 1
9
48. First observe A = −1 0 −2 = 3.
2
A = −2 0 − k = − k
3
−2 −k
3 −4
−1
1
−3 3
2
−2 −k
(a)
AT = A = 3
= −k (8 + 3k ) − ( −k + 4)
(b)
A2 = A A = 9
(c)
AAT = A AT = 9
1 −4
= −3k − 7 k − 4
2
= −(3k + 4)( k + 1).
So, A = 0 implies that k = − 43 or k = −1.
0
(d) 2 A = 23 A = 24
(e)
A −1 =
1
1
=
A
3
2
0 0
0 −3 0 0
50. First observe that A =
6 5
1 −1
−2 4
3
1
0
0 4 0
1
0 0
= −12.
56. (a)
A =
1
6
5
1 − 4 −2
2 2
1
3
(a)
AT = A = −12
(b)
AT = A = −312
(b)
A2 = A A = 144
(c)
A2 = A A = A
(c)
AAT = A AT = 144
(d) 2 A = 24 A = − 4992
(d) 2 A = 24 A = −192
1
1
= −
A
12
A −1 =
(e)
A =
52. (a)
T
(e)
−2 4
58. (a)
A
(c)
A2 = A A = A
2
= 1600
1
1
= −
A
40
A −1 =
1
1
= −
A
312
AB = A B = 4(5) = 20
nonsingular.
(d) 2 A = 22 A = −160
(e)
= 97,344
(c) Because A ≠ 0 and B ≠ 0, A and B are
= A = − 40
(b)
2
(b) 2 A = 23 A = 8( 4) = 32
= −16 − 24 = − 40
6 8
A −1 =
= −312
1
1
1
1
= , B −1 =
=
A
4
B
5
(d)
A −1 =
(e)
( AB)T = AB = 20
60. Given that AB is singular, then AB = A B = 0. So,
either A or B must be zero, which implies that either
54. A =
3
4
2
3
− 14
2
3
− 14
1
1
3
3
4
1
3
A or B is singular.
1.
= − 36
(a)
1
AT = A = − 36
(b)
1
A2 = A A = 1296
(c)
2 A = 23 A = − 92
(d)
A −1 =
1
= − 36
A
62. Expand the determinant on the left
a +b
a
a
a
a +b
a
a
a
a +b
(
)
= ( a + b) ( a + b) − a 2 − a(( a + b)a − a 2 ) + a( a 2 − a( a + b))
2
= ( a + b)( 2ab + b 2 ) − a( ab) + a( − ab)
= 2a 2b + ab 2 + 2ab 2 + b3 − 2a 2b
= b 2 (3a + b).
64. Because the rows of A all add up to zero, you have
2
A = −3
−1 −1
1
0 −2
2
2 = −3
2
−1 0
1 0 = 0.
0 −2 0
66. Calculating the determinant of A by expanding along the
first row is equivalent to calculating the determinant of
AT by expanding along the first column. Because the
determinant of a matrix can be found by expanding along
any row or column, you see that A = AT .
68.
A10 = A
10
= 0  A = 0  A is singular.
70. If the order n of A is odd, then (−1)ⁿ = −1, and the result of Exercise 59 implies that |A| = −|A|, or |A| = 0.
 1 0
A = 

0 1
and
−1 0
B = 
.
 0 −1
Then det ( A) = det ( B ) = 1 ≠ 0 = det ( A + B )
(b) True. Because det ( A) = det ( B ),
det ( AB ) = det ( A)det ( B )
= det ( A)det ( A)
= det ( AA)
= det ( A2 ).
(c) True. See page 129 for “Equivalent Conditions for a
Nonsingular Matrix” and Theorem 3.7 on page 128.
74. Because
A
76. Because
 1

2
A−1 = 
 1
−
2

1 
2 
= AT ,
1 
−

2
−
this matrix is orthogonal.
72. (a) False. Let
−1
 1 0
 1 1
T
= 
 and A = 
,
−
1
1
0 1


A−1 ≠ AT and the matrix is not orthogonal.
78. Because
1 
 1
0 −

2
2 

−1
A = 
0 1
0 = AT ,


1 
− 1
0

2
2 
this matrix is orthogonal.
80. If AT = A−1, then AT = A−1 and so
I = AA−1 = A A−1 = A AT = A
2
3
82. A =  23

 13
− 23
1
3
2
3
2
= 1  A = ±1.
1
3

2
−3

2

3
Using a graphing utility, you have
A
−1
 23

= − 23
 1
 3
2
3
1
3
− 32
1
3

2 =
3
2
3
AT .
Because A−1 = AT , A is an orthogonal matrix.
For this given A, A = 1.
84. SB = S B = 0 B = 0  SB is singular.
Section 3.4 Applications of Determinants
2. The matrix of cofactors is
 4 − 0
4 0

 = 
.
− 0 −1
0 −1
So, the adjoint of A is
4 0
adj ( A) = 

0 −1
T
4 0
= 
.
0 −1
Because A = −4, the inverse of A is
A−1 =
−1 0
1
1 4 0

.
adj( A) = − 
=

 0 1
A
4 0 −1
4 

6. The matrix of cofactors is
4. The matrix of cofactors is
 1 −1
0 −1
0 1 
−


2 2
2 2 
 2 2
 4 −2 −2


1 3
1 2 


 −2 3
−
=  2 −4 2.

2 2
2 2
2 2 
−5


1 1

 2 3
1 3
1 2 
−


0 −1
0 1 
 1 −1
So, the adjoint of A is

2 3
1 3
−

1
2
1
−
−
−
−2


0 1
 −1 1

−1 −2
1 −2


1 1
0 1
−

2
3
1 3

1 2 

−1 −1 
−1 −1 1

0 1 


−
=  1 1 −1.
−1 −1 
 1 1 −1



0 1 

0 2 
So, the adjoint of A is
 4 2 −5


adj( A) = −2 −4
1.
−2 2
1

−1 1 1


adj( A) = −1 1 1.
 1 −1 −1


Because A = −6, the inverse of A is
Because det ( A) = 0, the matrix A has no inverse.
1
5
 2
− 3 − 3
6


1
2
1
 1
A−1 =
adj ( A) = 
− .
A
3
3
6


1
1
 1
−
−
 3
3
6 
8. The matrix of cofactors is
 1 0 1

 0 1 1
 1 1 1

 1 1 0

 −0 1 1

 1 1 1

 1 1 0
 1 0 1

 1 1 1

 1 1 0

−1 0 1
 0 1 1

1 0 1
1 1 1
− 1 1 1
1 0 1
0 1 1
0 1 1
1 1 0
1 1 0
1 1 1
− 1 0 1
0 1 1
0 1 1
1 1 0
1 1 0
− 1 0 1
1 1 1
0 1 1
0 1 1
1 1 0
1 1 0
1 0 1
−1 1 1
1 1 1
1 0 1
1 1 0 

− 1 0 1 
0 1 1 

1 1 1 

1 0 1 
−1 −1 −1 2



0 1 1 
−1 −1 2 −1.
=

−1 2 −1 −1
1 1 1 


 2 −1 −1 −1
− 1 1 0 

0 1 1 

1 1 1 

1 1 0 
1 0 1 
−1 −1 −1 2


−1 −1 2 −1
So, the adjoint of A is adj ( A) = 
. Because det ( A) = −3, the inverse of A is
−1 2 −1 −1


 2 −1 −1 −1
1
1
2
 1
− 
 3
3
3
3


1
2
1
 1
−
 3
1
3
3
3
.
adj ( A) = 
A−1 =
A
2
1
1
 1
−
 3
3
3
3


1
1
1
 2
− 3
3
3
3 
10. The coefficient matrix is
2 −1
A = 
, where
3 2
A = 7.
16. The coefficient matrix is
−0.4 0.8
A = 
 , where
 0.2 0.3
A = −0.28.
Because A ≠ 0, you can use Cramer’s Rule.
Because A ≠ 0, you can use Cramer’s Rule.
−10 −1
A1 = 
,
 −1 2
A1 = −21
2 −10
A2 = 
,
3 −1
A2 = 28.
1.6 0.8
A1 = 
A1 = 0
,
0.6 0.3
−0.4 1.6
A2 = 
A2 = −0.56
,
 0.2 0.6
The solution is
A1
0
x =
=
= 0
A
−0.28
The solution is
x =
A1
21
= −
= −3
A
7
y =
A2
28
y =
=
= 4.
A
7
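Exercise 10's computation can be reproduced numerically. The NumPy sketch below is an added check (NumPy assumed, not part of the manual) of the recovered system 2x − y = −10, 3x + 2y = −1, solved both by Cramer's Rule and with a library solver.

import numpy as np

# System from Exercise 10: 2x - y = -10, 3x + 2y = -1.
A = np.array([[2, -1], [3, 2]], dtype=float)
b = np.array([-10, -1], dtype=float)

detA = np.linalg.det(A)           # 7
A1 = A.copy(); A1[:, 0] = b       # replace column 1 with b
A2 = A.copy(); A2[:, 1] = b       # replace column 2 with b

x = np.linalg.det(A1) / detA      # -21/7 = -3
y = np.linalg.det(A2) / detA      #  28/7 =  4
print(x, y)
print(np.linalg.solve(A, b))      # cross-check: [-3.  4.]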
12. The coefficient matrix is
18 12
where
A = 
,
30 24
A = 72.
A1 = 36
A2 = 24
4

A = 2

8

A = 0.
Because A = 0, Cramer’s Rule cannot be applied. (The
3

5  , where

−2 

−2
2
−5
A = −82.
Because A ≠ 0, you can use Cramer’s Rule.
−2

A1 =  16

 4
4

A2 = 2

8

A2
24
1
x2 =
=
= .
A
72
3
14. The coefficient matrix is
 13 −6
A = 
 , where
26 −12
A2
−0.56
=
= 2.
A
−0.28
18. The coefficient matrix is
Because A ≠ 0, you can use Cramer’s Rule.
13 12
A1 = 
,
23 24
18 13
A2 = 
,
30 23
The solution is
A1
36
1
x1 =
=
=
A
72
2
−2
2
−5
−2
16
4
4 −2

A3 = 2
2

8 −5
The solution is
3

5 ,

−2 
A1 = −410
3

5 ,

−2 
A2 = −656
−2 

16 ,

4 
A3 = 164
x =
A1
−410
=
= 5
A
−82
y =
A2
−656
=
= 8
A
−82
z =
A3
164
=
= −2.
A
−82
system does not have a solution.)
20. The coefficient matrix is
 14 −21 −7 


A = −4
2 −2  , where A = 1568.


56 −21
7


Because A ≠ 0, you can use Cramer’s Rule.
−21

A1 =  2

7

−7 

2 −2 ,

7
−21

 14 −21 −7 


2 −2 ,
A2 = −4


56
7
7


 14 −21 −21 


2
2 ,
A3 = −4


56 −21
7


The solution is
A1
1568
x1 =
=
=1
A
1568
−21
x2 =
A2
3136
=
= 2
A
1568
x3 =
A3
1568
= −
= −1.
A
1568
A1 = 1568
A2 = 3136
A3 = −1568
22. The coefficient matrix is
2 3
5


A = 3 5
9 , where A = 0.


5 9 17 
Because A = 0, Cramer’s Rule cannot be applied.
26. The coefficient matrix is
 −1 −1 0 1


3 5 5 0
A = 
.


 0 0 2 1
−2 −3 −3 0
 − 8 −1 0

24 5 5
A1 = 
 −6 0 2

−15 −3 −3
 −1 −1 − 8 1
 −1 −1 0



3
5 24 0
3
5 5
A3 = 
, A4 = 
 0


0 −6 1
0
0 2



− 2 − 3 −15 0
− 2 − 3 −3
−151 7 −10
− 8 −151 −10




A1 =  86
3 −5, A2 =  12
86 − 5 ,




15 187
2
2
 187 −9


− 8
7 −151


3
86
A3 =  12


15 − 9 187


Using a graphing utility, A = 1149, A1 = 11,490,
A2 = −3447, and A3 = 5745.
So, x1 =
x2 =
A1
11,490
=
= 10,
1149
A
1

0
,
1

0
− 8

24
− 6

−15
Using a graphing utility, A = 1, A1 = 3,
A2 = 7, A3 = − 4, and A4 = 2.
A1
A2
3
7
=
= 3, x2 =
=
= 7,
A
1
A
1
A3
A4
−4
2
=
= − 4 and x4 =
x3 =
=
= 2.
1
1
A
A
So, x1 =
28. Draw the altitude from vertex C to side c, then from
trigonometry
c = a cos B + b cos A.
Similarly, the other two equations follow by using the
other altitudes. Now use Cramer’s Rule to solve for cos C
in this system of three equations.
0 c a
c 0 b
(The system does not have a solution.)
24. The coefficient matrix is
−8 7 −10


3 −5.
A = 12


2
 15 −9

1
0
 −1 − 8


0
3 24
5
, A2 = 
 0 −6
1
2


0
− 2 −15 − 3
cos C =
b a
c
0 c b
c 0 a
b a 0
=
−c(c 2 − b 2 ) + a( ac)
−c( −ba ) + b( ac)
=
a 2 + b2 − c2
.
2ab
Solving for c 2 you obtain
2ab cos C = a 2 + b 2 − c 2
c 2 = a 2 + b 2 − 2ab cos C.
30. Use the formula for area as follows.
x1
Area
= ± 12
x2
x3
y1 1
1 1 1
y2 1 = ± 12 2 4 1 = ± 12 ( −8) = 4.
y3 1
4 2 1
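The area computation in Exercise 30 uses the vertices (1, 1), (2, 4), and (4, 2). A short added NumPy check (NumPy assumed, not part of the manual):

import numpy as np

# Vertices from Exercise 30 in the rows, with a column of ones appended.
M = np.array([[1, 1, 1],
              [2, 4, 1],
              [4, 2, 1]], dtype=float)

area = abs(np.linalg.det(M)) / 2
print(area)    # 4.0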
A2
A3
−3447
5745
=
= −3, and x3 =
=
= 5.
1149
1149
A
A
32. Use the formula for area as follows.
x1
Area = ± 12 x2
x3
y1 1
1
y2 1 = ± 12 −1
x
1 1
1
( )=3
1 = ± 12 6
0 = x1
x2
y 1
x
x2
y1 1
−1 0 1
y2 1 = 1 1 1 = 2
x3
y3 1
y1 1 = 1 4 1 = 2 y − 8
y2 1
3 4 1
42. Use the formula for volume as follows.
3 3 1
x1
y1
x
1 2
y2
x3
y3
z1 1
z2 1
z3 1
x4
y4
z4 1
Volume = ± 6
to determine that the three points are not collinear.
36. Use the fact that
x2
y1 1
−1
y 2 1 = −4
x3
y3 1
x1
−3 1
= ± 16
7 1 = 0
x2
y 1
x
2
1 −1 1
= 16 (3) = 12
2 1
44. Use the formula for volume as follows.
y 1
Volume = ± 16
y1 1 = −4 7 1 = 3 x + 6 y − 30
y2 1
1 1
0 1
−1 1
38. Find an equation as follows.
x
1 1
0 0
2
2 −13 1
to determine that the three points are collinear.
0 = x1
y 1
So, an equation of the line is y = 4.
34. Use the fact that
x1
40. Find an equation as follows.
0 −2 1
y3 1
4 1
So, an equation of the line is 2 y + x = 10.
x1
y1
x2
y2
x3
y3
x4
y4
z1 1
z2 1
z3 1
z4 1
0 0 0 1
= ± 16
0 2 0 1
3 0 0 1
= ± 16 ( 24) = 4
1 1 4 1
46. Use the formula for volume as follows.
Volume = ± 16
y1
z1 1
5
y2
z2 1
4 −6 −4 1
x3
y3
z3 1
x4
y4
z4 1
= ± 16
4
−6 −6
−5 1
0
10 1
0
48. Use the fact that
x1
y1
x2
y2
x3
y3
x4
y4
z1 1
z2 1
z3 1
z4 1
−3 1
x1
x2
= ± 16 (1386) = 231
52. Use the fact that
=
1
2
3 1
−1
0
1 1
0 −2 −5 1
2
6
= 0
11 1
to determine that the four points are coplanar.
x1
y1
z1 1
x2
y2
z2 1
x3
y3
z3 1
x4
y4
z4 1
=
1 −5
9 1
−1 − 5
9 1
1 −5 −9 1
= 0
−1 − 5 − 9 1
to determine that the four points are coplanar.
50. Use the fact that
x1
y1
x2
y2
x3
y3
x4
y4
z1 1
z2 1
z3 1
z4 1
1 2 7 1
=
−3 6 6 1
4 4 2 1
= −1
3 3 4 1
to determine that the four points are not coplanar.
54. Find an equation as follows.
0 =
x
y
z 1
x
x1
y1
z1 1
0 −1 0 1
x2
y2
z2 1
x3
y3
z3 1
=
−1 0 1
y
z 1
1
1 0 1
2
1 2 1
0 −1 1
0 0 1
= x 1 0 1 − y 1 0 1 + z 1
1 2 1
2 2 1
= 4 x − 2 y − 2 z − 2,
2
0 −1 0
1 1 − 1
1 0
1 1
1 2
2
2x − y − z = 1
or
56. Find an equation as follows.
0 =
x
y
z 1
x1
y1
z1 1
x2
y2
z2 1
x3
y3
z3 1
2 7 1
x
=
y z 1
1 2 7 1
4 4 2 1
3 3 4 1
1 7 1
1 2 7
1 2 1
= x4 2 1 − y4 2 1 + z4 4 1 − 4 4 2
3 4 1
3 4 1
= − x − y − z + 10,
3 3 1
3 3 4
x + y + z = 10
or
58. Find an equation as follows.
0 =
x
y
z 1
x
y
x1
y1
z1 1
3
2 −2 1
x2
y2
z2 1
y3
z3 1
x3
=
3 −2
2 1
−3 − 2 − 2 1
2 −2 1
= x −2
z 1
3 −2 1
2 1 − y
−2 −2 1
3
2 1 + z
−3 − 2 1
3
2 1
3
2 −2
3 −2 1 −
3 −2
−3 − 2 1
−3 − 2 − 2
2
= 16 x − 24 y − 24 z − 48, or 2 x − 3 y − 3z = 6
60. Cramer’s Rule was used correctly.
62. Given the system of linear equations
      a₁x + b₁y = c₁
      a₂x + b₂y = c₂,
    if
      | a₁   b₁ |
      | a₂   b₂ | = 0,
    then the lines must be parallel or coinciding.
64. Following the proof of Theorem 3.10, you have
A adj ( A) = A I .
Now, if A is not invertible, then A = 0, and A adj ( A)
is the zero matrix.
66. adj(adj ( A)) = adj( A A−1 )
= det ( A A−1 )( A A−1 )
= A
n
A −1
−1
1
n−2
A = A
A.
A
68. Answers will vary. Sample answer:
−1 3
Let A = 
.
 1 2
 2 −1
−1 3
0 −1 3
adj ( A) = 
  adj(adj( A)) = 
 = A 

−3 −1
 1 2
 1 2
So, adj(adj( A)) = A
n−2
A.
70. Answers will vary. Sample answer:
1 3
Let A = 
.
1 2
−2 3
 −1 −1
−1
A−1 = 
  adj( A ) = 
.
−
1
1


−3 −2
 2 −1
 −1 −1
−1
adj( A) = 
.
  (adj ( A)) = 
−
3
1


−3 −2
( )
−1
So, adj A−1 = adj( A) .
Review Exercises for Chapter 3
2. Using the formula for the determinant of a 2 × 2 matrix,
you have
0 −3
1
6. The determinant of a triangular matrix is the product of
the entries along the main diagonal.
5
= 0( 2) − (1)( −3) = 3.
2
0 −1 3 = 5( −1)(1) = −5
0
4. Using the formula for the determinant of a 2 × 2 matrix,
you have
−2 0
−15
10.
3
0
3
9 −6 = 3
3
12 −3
−5
0
1
3 −2 = 27 −9
4 −1
6
−5
1
0
1
3 0 = 27
14 −1 0
2
0
1
8. Because the matrix has a column of zeros, the
determinant is 0.
= ( −2)(3) − (0)(0) = −6.
0 3
0 2
−9
3
14 −1
= 27(9 − 42)
= − 891
12. The determinant of a triangular matrix is the product of its diagonal entries. So, the determinant equals 2(1)(3)( −1) = −6.
3 −1
14. −2 0
2
3 −1 2
1
1 −3
−1
2 −3
4
−2
1 −2
1
=
2 −1
3
4
1
2 −1
3 −1
2 −2
0
−1
1 −4 −10
2
1 −1 = 0
0
1 −2
−5
0
0 −2
3 −4
−4
2
0
1
2
1
1
−2
0
1 −3
5
0
1
6
2
16. 1
0
0 0
2
1
0
2 −1
1
1
−2 1 −3
= −( −1) 5 1
6
1 0
2
=1
1 −3
1
6
+ 2
−2 1
5 1
= 9 + 2( −7) = −5
0 −1
0
−1
1 −1
= −
3
0
4
4 10
0
1 −2 −5
0
1
4 16
0
0
4 12
1 −2 −5
= −0
0
6
21
4 12
= −(72 − 84) = 12
0 0 0 0 3
0 1 2
0 0 0 3
0 0 0 3 0
18. 0 0 3 0 0 = 3
0 3 0 0 0
24. (a)
0 0 3 0
7 6 8
0 3 0 0
2
3 0 0 0
3 0 0 0 0
(b)
0 0 3
3 0 0
0 3
3 0
1
= ( −27)( − 9) = 243
20. Because the second and third columns are interchanged,
the sign of the determinant is changed.
22. Because a multiple of the first row of the matrix on the
left was added to the second row to produce the matrix
on the right, the determinants are equal.
2
B = 1 −1
0 = 12
3 −2
0
= 3( −3) 0 3 0
= − 9(3)
A = 5 4 3 = −15
0 1 22 1 2
 1 5 − 4





4
(c) AB = 5 4 3 1 −1 0 = 14 10
7 6 80 3 2
20 25 − 2





1
(d)
5 −4
AB = 14 10
20 25
4 = −180
−2
Notice that A B = ( −15)(12) = −180 = AB .
26. First find
3 0
1
A = −1 0 0 = −1.
2
1 2
(a)
AT = A = −1
(b)
A3 = A
(c)
AT A = AT A = ( −1)( −1) = 1
3
= ( −1) = −1
3
(d) 5 A = 53 A = 125( −1) = −125
28. (a)
(b)
30. A
−1
−2 1 3
0 −9 3
2 0 4 =
0
10 4 = ( −1)
−1 5 0
−1
5 0
A =
A−1 =
=
10 4
= 66
1
1
=
A
66
7
 74
1 7 −2
=

 = 
74 2 10 
1
 37
A −1 =
−9 3
1
37 

5 
37 
−
7  5   1 1 
  −   − 
74  37   37  37 
35
1
1
+
=
2738 1369
74
Notice that A = 74.
So, A−1 =
1
1
=
.
A
74
 2
− 3

 2
32. A−1 = −
3

 1
 2
1
6
1
6
2
3
2
A−1 = −
3
1
2
1
6
1
6

0


−1

1
0
2 
−
0
−1 = −
1
2
0
0
0
1
2
3
1
2
1
6
−1 = −
0
1
2
1
12
Notice that
−1
1
2
A = 2
4
8 = 1(8 − 8) − ( −1)( −8 − 4) = −12.
1
−1 0
So, A−1 =
1
1
= − .
A
12
1 4
1 4
 1 2
1 2
 1 2 1 4






1 13  0 1 3 −1
34. (a) − 3 1 − 2 1  0 7
 2 3 −1 9
0 −1 − 3 1
0 7 1 13






1 4
1 2
 1 2 1 4




3 −1  0 1 3 −1
 0 1
0 0 − 20 20
0 0 1 −1




So, x3 = −1, x2 = −1 − 3( −1) = 2, and x1 = 4 − ( −1) − 2( 2) = 1.
 1 2 1 4
 1 0 − 5 6
 1 0 0 1






3 −1  0 1 0 2
(b) 0 1 3 −1  0 1





1 −1
0 7 1 13
0 0
0 0 1 −1
(c) The coefficient matrix is
1
 1 2


A = − 3 1 − 2 , where A = − 20.
 2 3 −1


1
4 2


A1 =  1 1 − 2 and A1 = − 20
9 3 −1


1
 1 4


A2 = − 3 1 − 2 and A2 = − 40
 2 9 −1


 1 2 4


A3 = − 3 1 1 and A3 = 20
 2 3 9


So, x1 =
− 20
− 40
20
= 1, x2 =
= 2, and x3 =
= −1.
− 20
− 20
− 20
1 3 5 2
2 3 5 4
 2 2



36. (a) 3 5 9 7  3 5 9 7
5 9 13 17
5 9 13 17




1

 0
0

3
2
1
2
3
2
5
2
3
2
1
2
2

1
7
 1 3 5 2
 2 2

 0 1 3 2
0 0 1 −1


So, x3 = −1, x2 = 2 − 3( −1) = 5, and x1 = 2 − 52 ( −1) − 23 (5) = −3.
1 3 5
2
 1 0 −2 −1
 2 2



(b) 0 1 3 2  0 1 3 2
0 0 1 −1
0 0
1 −1



 1 0 0 −3


 0 1 0 5


0 0 1 −1
So, x1 = −3, x2 = 5, and x3 = −1.
(c) The coefficient matrix is
2 3 5


A = 3 5 9 and A = −4.


5 9 13
 4 3 5


Also, A1 =  7 5 9


17 9 13
and
A1 = 12,
2 4 5


A2 = 3 7 9 and


5 17 13
A2 = −20,
2 3 4


A3 = 3 5 7 and
A3 = 4.


5 9 17
12
−20
4
= −3, x2 =
= 5, and x3 =
= −1.
So, x1 =
−4
−4
−4
38. Because the determinant of the coefficient matrix is
2 −5
3 −7
= 1 ≠ 0,
the system has a unique solution.
40. Because the determinant of the coefficient matrix is
2
3
1
2 −3 −3 = 0,
8
6
0
the system does not have a unique solution.
42. Because the determinant of the coefficient matrix is
1 5
3 0
0
4 2
5 0
0
0 0
3 8
6 = −896 ≠ 0,
2 4
0 0 −2
2 0 −1 0
44. (a)
0
the system has a unique solution.
BA = B A = 5( −2) = −10
4
(b)
B4 = B
(c)
2 A = 23 A = 23 ( −2) = −16
(d)
( AB)
= AB = A B = −10
(e)
B −1 =
1
1
=
A
5
1
0 2
1
T
= 54 = 625
0
46. 1 −1 2 = 1 −1
5
1 0
2
2
1
0 2
2 + 1 −1 2
1 −1
3
0
1
10 = 5 + 5
a
1
48.
1
1
1
a
1
1
1
1
a
1
1
0 1 − a2 1 − a 1 − a
1
1
1
1
a
=
1
0 1− a a −1
0
a
0 1−a
0
a −1
1 − a2 1 − a 1 − a
0
= − 1− a a −1
1− a
0
a −1
= (1 − a)
3
1+ a 1 1
1
−1 0
1
0 −1
= (1 − a) (1(1) − 1( −1 − a − 1))
3
(factoring out (1 − a) from each row )
(expanding along the third row )
= (1 − a) ( a + 3)
3
∂x
∂u
50. J (u , v) =
∂y
∂u
52. J (u , v) =
∂x
a b
∂v
=
= ad − bc
∂y
c d
∂v
eu sin v
eu cos v
e cos v − eu sin v
u
= − e 2u sin 2 v − e 2u cos 2 v = − e 2u
1 −1 1
54. J (u , v, w) = 2v 2u 0
1
1 1
= 1( 2u ) + 1( 2v) + 1( 2v − 2u ) = 4v
58. Because B ≠ 0, B −1 exists, and you can let
C = AB −1 , then
A = CB and
C = AB −1 = A B −1 = A
1
= 1.
B
56. Use the information given in the table on page 122.
Cofactor expansion would cost:
(3,628,799)(0.001) + (6,235,300)(0.003) = $22,334.70.
Row reduction would cost much less:
(285)(0.001) + 339(0.003) = $1.30.
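The two cost figures can be reproduced directly. The short Python sketch below is an added check; it assumes, as the totals imply, a cost of $0.001 per addition and $0.003 per multiplication (this pairing of counts with costs is inferred from the arithmetic above, not restated here).

# Operation counts from the table referenced in the solution.
cofactor_cost = 3_628_799 * 0.001 + 6_235_300 * 0.003
row_reduction_cost = 285 * 0.001 + 339 * 0.003

print(round(cofactor_cost, 2))       # 22334.7
print(round(row_reduction_cost, 2))  # 1.3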
60. The matrix of cofactors is given by
 1 2
0 2
0 1
−


0 −1
0 0
 0 −1
 −1 0 0


1 1
1 −1 


− −1 1
−
=  −1 −1 0 .
 0 −1
0 −1
0 0
−3 −2 1




 −1 1
1 1
1 −1
−


1 2
0 2
0 1

So, the adjoint is
 1 −1 1
−1 −1 −3




adj 0 1 2 =  0 −1 −2.
0 0 −1
 0 0
1



62. The determinant of the coefficient matrix is
2
1
3 −1
= −5 ≠ 0.
So, the system has a unique solution. Using Cramer’s
Rule,
 0.3 1
A1 = 
, A1 = 1.0
−1.3 −1
2 0.3
A2 = 
, A2 = −3.5.
3 −1.3
So,
x =
y =
A1
1
=
= −0.2
A
−5
A2
−3.5
=
= 0.7.
A
−5
64. The determinant of the coefficient matrix is
4
4
4
4 −2 −8 = 0.
8
4 −1 1


66. The coefficient matrix is A = 2 2 3.
5 −2 6


−5 −1 1
4 −5 1




A1 = 10
2 3 , A2 = 2 10 3 ,
 1 −2 6
5
1 6



4 −1 −5


A3 = 2 2 10
5 −2
1

Using a graphing utility, A = 55, A1 = −55,
A2 = 165, and A3 = 110.
So, x1 = A1
x3 = A3
A = 3, and
A = 2.
68. Use the formula for area as follows.
Area
−4 0 1
y1 1
x1
= ± 12 x2
1 = ± 12
y2
y3 1
x3
4 0 1
0 6 1
= ± 12 ( −6)( −4 − 4) = 24
70. Find an equation as follows.
x y 1
x y 1
0 = x1
y1 1 = 2
5 1 = x(6) − y( −4) − 32
6 −1 1
y2 1
x2
So, an equation of the line is 2 y + 3 x = 16.
72. Find an equation as follows.
x y z 1
0 =
2 −4
So, Cramer’s Rule does not apply. (The system does not
have a solution.)
A = −1, x2 = A2
=
x1
y1
z1 1
x2
y2
z2 1
x3
y3
z3 1
x
y
z 1
0
0 0 1
2 −1 1 1
−3
2 5 1
x
y
z
= 1 2 −1 1
−3
2 5
= x( −7) − y (13) + z (1) = 0.
So, the equation of the plane is 7 x + 13 y − z = 0.
a + b + c = 1765
74. (a)
4a + 2b + c = 1855
9a + 3b + c = 1920
(b) The coefficient matrix is
 1 1 1


A = 4 2 1 , where A = − 2.
9 3 1


1765 1 1


A1 = 1855 2 1 and A1 = 25
1920 3 1


 1 1765 1


A2 = 4 1855 1 and A2 = − 255
9 1920 1


 1 1 1265


A3 = 4 2 1855 and A3 = − 3300
9 3 1920


So, a =
25
− 255
− 3300
= −12.5, b =
= 127.5, and c =
= 1650.
−2
−2
−2
(c) y = −12.5t 2 + 127.5t + 1650
2,000
0
0
9
(d) The function fits the data exactly.
76. (a) True. If either A or B is singular, then det(A) or det(B) is zero (Theorem 3.7), but then
det ( AB ) = det ( A)det( B ) = 0 ≠ −1, which leads to a contradiction.
(b) False. det ( 2 A) = 23 det ( A) = 8 ⋅ 5 = 40 ≠ 10.
(c) False. Let A and B be the 3 × 3 identity matrix I 3. Then det ( A) = det ( B ) = det ( I 3 ) = 1, but
det ( A + B) = det ( 2 I 3 ) = 23 ⋅ 1 = 8 while det ( A) + det ( B ) = 1 + 1 = 2.
78. (a) False. The transpose of the matrix of cofactors of A is called the adjoint matrix of A.
(b) False. Cramer’s Rule requires the determinant of this matrix to be in the numerator. The denominator is always det ( A),
where A is the coefficient matrix of the system (assuming, of course, that it is nonsingular).
Project Solutions for Chapter 3
1
Stochastic Matrices
 7
 7
 
 
1. Px1 = P 10 = 10
 4
 4
 
 
 0
 0
 


Px 2 = P −1 = −.65
 1
 .65
 


−2
−1.1
 


Px3 = P  1 =  .55
 1
 .55
 


0
0
 7 0 −2
1




S −1PS = 0 .65
0 = D
2. S = 10 −1 1
 4 1 1
0
0 .55



The entries along D are the corresponding eigenvalues of P.
3. S −1PS = D  PS = SD  P = SDS −1. Then
P n = ( SDS −1 ) = ( SDS −1 )( SDS −1 )  ( SDS −1 ) = SD n S −1.
n
1

For n = 15, D = 0

0
n
2
0
(0.65)
15
0
0 
0.333 0.333 0.333
0.333





15
15 −1
15
0   P = SD S ≈ 0.476 0.477 0.475  P X ≈ 0.476.

 0.191 0.190 0.192
 0.191
15




(0.65) 
The Cayley-Hamilton Theorem
1. λ I − A =
λ −2
2
2
λ +1
= λ2 − λ − 6
 2 −2  2 −2  2 −2
 1 0
 8 −2  8 −2
0 0
A2 − A − 6 I = 

 − 
 − 6
 = 
 − 
 = 

5
−2 −1 −2 −1 −2 −1
0 1
−2 5 −2
0 0
2. λ I − A =
λ −6
0
−4
2
λ −1
−3
−2
0
λ −4
= λ 3 − 11λ 2 + 26λ − 16
 344 0 336
44 0 40
 6 0 4
 1 0 0
0 0 0










A − 11A + 26 A − 16 I = −36 1 −1 − 11−8 1 7 + 26 −2 1 3 − 16 0 1 0 = 0 0 0
 168 0 176
20 0 24
 2 0 4
0 0 1
0 0 0










3
2
3. λ I − A =
λ −a
−b
−c
λ −d
= λ 2 − ( a + d )λ + ( ad − bc)
a 2 + bc ab + bd 
a b 
1
A2 − ( a + d ) A + ( ad − bc ) I = 
 − (a + d )
 + ( ad − bc) 
2
ac + dc bc + d 
c d 
0
0
0
 = 
0
1
0

0
1
1
1
− An − cn −1 An −1 −  − c2 A2 − c1 A = (c0 I ) = I
4.   − An −1 − cn −1 An − 2 −  − c2 A − c1I A =
c
c
c
0
0
 0
(
)
(
)
Because c0 I = − An − cn −1 An −1 −  − c2 A2 − c1 A from the equation, p ( A) = 0.
λI − A =
λ −1
−2
−3
λ −5
= λ 2 − 6λ − 1

1
A−1 =
(− A + 6 I ) = A − 6 I = −5
(−1)
 3
2

−1 
5. (a) Because A2 = 2 A + I you have A3 = 2 A2 + A = 2( 2 A + I ) + A = 5 A + 2 I .
3 −1
1
A3 = 5
 + 2
2 −1
0
17
0
 = 
1
10
−5 

−3 
Similarly, A4 = 2 A3 + A2 = 2(5 A + 2 I ) + ( 2 A + I ) = 12 A + 5I . Therefore,
 3 −1
1
A4 = 12 
 + 5
2 −1
0
41
0
 = 
1
24
−12 
.
−7 
Note: This approach is a lot more efficient because you can calculate An without calculating all the previous powers of A.
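The reduction of powers of A to the form aA + bI can be checked numerically for the matrix A = [3 −1; 2 −1] used in part (a). The NumPy sketch below is an added verification (NumPy assumed, not part of the manual) of A² = 2A + I, A³ = 5A + 2I, and A⁴ = 12A + 5I.

import numpy as np

A = np.array([[3, -1], [2, -1]], dtype=float)
I = np.eye(2)

print(np.allclose(A @ A, 2*A + I))                           # True
print(np.allclose(np.linalg.matrix_power(A, 3), 5*A + 2*I))  # True
print(np.linalg.matrix_power(A, 4))   # [[41,-12],[24,-7]] = 12A + 5I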
(b) First, calculate the characteristic polynomial of A.
λ
0
λ I − A = −2 λ − 2
−1
0
−1
1
= λ 3 − 4λ 2 + 3λ + 2.
λ −2
By the Cayley-Hamilton Theorem, A3 − 4 A2 + 3 A + 2 I = O or A3 = 4 A2 − 3 A − 2 I . Now you can write any positive
power An as a linear combination of A2 , A and I. For example,
A4 = 4 A3 − 3 A2 − 2 A = 4( 4 A2 − 3 A − 2 I ) − 3 A2 − 2 A = 13 A2 − 14 A − 8 I ,
A5 = 4 A4 − 3 A3 − 2 A2 = 4(13 A2 − 14 A − 8 I ) − 3(4 A2 − 3 A − 2 I ) − 2 A2 = 38 A2 − 47 A − 26 I .
Here
0 0 1


A = 2 2 −1,
 1 0 2


0 0 10 0 1
 1 0 2





A2 = AA = 2 2 −12 2 −1 = 3 4 −2.
 1 0 2 1 0 2
2 0 5





With this method, you can calculate A5 directly without calculating A3 and A4 first.
 1 0 2
0 0 1
 1 0 0
12 0 29








A5 = 38 A2 − 47 A − 26 I = 383 4 −2 − 47 2 2 −1 − 26 0 1 0 = 20 32 −29
2 0 5
 1 0 2
0 0 1
29 0 70








Similarly,
 1 0 2
0 0 1
 1 0 0
 5 0 12








A = 13 A − 14 A − 8 I = 133 4 −2 − 14 2 2 −1 − 80 1 0 = 11 16 −12
2 0 5
 1 0 2
0 0 1
12 0 29








4
2
2 0 5


A = 4 A − 3 A − 2 I = 6 8 −5 .
5 0 12


3
2
C H A P T E R 4
Vector Spaces
Section 4.1
Vectors in R n .....................................................................................105
Section 4.2
Vector Spaces .....................................................................................110
Section 4.3
Subspaces of Vector Spaces...............................................................115
Section 4.4
Spanning Sets and Linear Independence ...........................................117
Section 4.5
Basis and Dimension ..........................................................................122
Section 4.6
Rank of a Matrix and Systems of Linear Equations .........................125
Section 4.7
Coordinates and Change of Basis ......................................................130
Section 4.8
Applications of Vector Spaces ...........................................................135
Review Exercises ........................................................................................................143
Project Solutions.........................................................................................................152
C H A P T E R
Vector Spaces
4
Section 4.1 Vectors in R n
2. v = ( −6, 3)
12. v = u + w = ( −2, 3) + ( −3, − 2) = ( −5, 1)
y
y
4.
(− 2, 3) 4
4
(− 2, 3)
v=u+w
3
2
v
1
−4
14. v = −u + w = −( −2, 3) + ( −3, − 2) = ( −1, − 5)
1
−5 −4 −3 −2 −1
1
x
(− 2, 3)
(− 3, − 2)
v
(− 1, − 5)
(4, 7)
u
(3, 1)
1
v = u − 2w
(6, 4)
(− 2, 3)
u+v
2 3 4 5
u
x
2
− 2w
w
2
(4, − 3)
= ( 2, − 5)
(
)
(b) − 12 v = − 12 (3, − 2) = − 32 , 1
(c) 0 v = 0(3, − 2) = (0, 0)
y
y
8
1
3 4 5
(2, −5)
x
6
18. (a) 4 v = 4(3, − 2) = (12, − 8)
= ( 4 − 2, − 2 − 3)
u
u+v
4
(− 3, − 2)
v
10. u + v = ( 4, − 2) + ( −2, − 3)
−6
−7
v = −u + w
6
5
(−2, −3)
−u
y
y
−3
x
1 2 3 4
16. v = u − 2w = ( −2, 3) − 2( −3, − 2) = ( 4, 7)
= (3, 1)
v
u
(2, − 3)
= ( −1 + 4, 4 − 3)
−2 −1
3
2
w
8. u + v = ( −1, 4) + ( 4, − 3)
− 3− 2 − 1
−2
−3
−4
y
−2
−4
−5
−6
(− 2, − 5)
x
(− 3, − 2)
x
y
6.
2
−2 w
−2
(− 5, 1)
−4 −3 −2 −1
−1
(− 1, 4)
u
x
(4, −2)
4
(− 32 , 1(
− 12 v v
8
0v
4v
(3, − 2)
−8
(12, − 8)
12
x
−4
20. u − v + 2 w = (1, 2, 3) − ( 2, 2, −1) + 2( 4, 0, − 4)
= ( −1, 0, 4) + (8, 0, − 8) = (7, 0, − 4)
22. 5u − 3v − 12 w = 5(1, 2, 3) − 3( 2, 2, −1) − 12 ( 4, 0, − 4)
= (5, 10, 15) − (6, 6, − 3) − ( 2, 0, − 2)
= ( −3, 4, 20)
(
)
= (6 + 2, − 5 − 53 , 4 + 43 , 3 + 1)
= (8, − 20
, 16 , 4)
3 3
32. (a) u − v = (6, − 5, 4, 3) − − 2, 53 , − 34 , −1
(
= 2(6, − 5, 4, 3) + ( − 6, 5, − 4, − 3)
24. 2u + v − w + 3z = 0 implies that
3z = −2u − v + w.
= 2(6 − 6, − 5 + 5, 4 − 4, 3 − 3)
= 2(0, 0, 0, 0)
So,
= (0, 0, 0, 0)
3z = −2(1, 2, 3) − ( 2, 2, −1) + ( 4, 0, − 4)
= ( −2, − 4, − 6) − ( 2, 2, −1) + ( 4, 0, − 4) = (0, − 6, − 9).
z = 13 (0, − 6, − 9) = (0, − 2, − 3).
26. (a) − v = −( 2, 0, 1) = ( −2, 0, −1)
(b) 2 v = 2( 2, 0, 1) = ( 4, 0, 2)
(c)
1
v = 12 2, 0, 1
2
(a) v + 3w = (6, − 4, 2, 7)
(2, 0, 1)
2
(4, 0, 2)
x
v
2
4
)
= ( − 4, 10
, − 83 , − 2) − (6, − 5, 4, 3)
3
= ( −10, 25
, − 20
, − 5)
3
3
w = ( 2, − 2, 1, 3), you have the following.
z
2v
(
(c) 2 v − u = 2 − 2, 53 , − 34 , −1 − (6, − 5, 4, 3)
34. Using a graphing utility with
u = (1, 2, − 3, 1), v = (0, 2, −1, − 2), and
) = (1, 0, 12 )
(
)
(b) 2(u + 3v ) = 2 (6, − 5, 4, 3) + 3 − 2, 53 , − 43 , −1 


−v
( 72 , − 5, 72 , 112 )
(c) 12 ( 4 v − 3u + w ) = ( − 12 , 0, 3, − 4)
(1, 0, 12 )
(b) 2w − 12 u =
(− 2, 0, − 1)
4
y
(
36. w + u = − v
)
28. (a) Because (6, − 4, 9) ≠ c 12 , − 23 , 34 for any c, u is not
a scalar multiple of z.
(
)
(
)
(b) Because −1, 43 , − 32 = −2 12 , − 23 , 34 , v is a scalar
multiple of z.
30. (a) u − v = (0, 4, 3, 4, 4) − (6, 8, − 3, 3, − 5)
= ( − 6, − 4, 6, 1, 9)
(b) 2(u + 3v ) = 2 (0, 4, 3, 4, 4) + 3(6, 8, − 3, 3, − 5)
= 2(0, 4, 3, 4, 4) + (18, 24, − 9, 9, −15)
= 2(18, 28, − 6, 13, −11)
= (36, 56, −12, 26, − 22)
(c) 2 v − u = 2(6, 8, − 3, 3, − 5) − (0, 4, 3, 4, 4)
= (12, 16, − 6, 6, −10) − (0, 4, 3, 4, 4)
= (12, 12, − 9, 2, −14)
w = −v − u
= −(0, 2, 3, −1) − (1, −1, 0, 1)
= ( −1, −1, − 3, 0)
38. w + 3v = −2u
w = −2u − 3v
= −2(1, −1, 0, 1) − 3(0, 2, 3, −1)
= ( − 2, 2, 0, − 2) − (0, 6, 9, − 3)
= ( −2, − 4, − 9, 1)
40. 2u + v − 3w = 0
w = 23 u + 13 v
= 23 ( − 6, 0, 2, 0) + 13 (5, − 3, 0, 1)
(
) ( 53 , −1, 0, 13 )
= ( − 73 , −1, 34 , 13 )
= − 4, 0, 43 , 0 +
42. The equation
au + bw = v
a(1, 2) + b(1, −1) = (0, 3)
a(1, 2) + b(1, −1) = (1, − 4)
yields the system
yields the system
a +b =
1
2a − b = 3.
2a − b = −4.
Solving this system produces a = 1 and b = −1.
So, v = u − w.
Solving this system produces a = −1 and b = 2.
So, v = −u + 2 w .
44. The equation
46. The equation
au + bw = v
a +b = 0
48. The equation
au1 + bu 2 + cu3 = v
au + bw = v
a(1, 2) + b(1, −1) = (1, −1)
a(1, 3, 5) + b( 2, −1, 3) + c( −3, 2, − 4) = ( −1, 7, 2)
yields the system
yields the system
a +b =
1
a + 2b − 3c = −1
2a − b = −1.
3a −
Solving this system produces a = 0 and b = 1.
So, v = w = 0u + 1w .
5a + 3b − 4c =
b + 2c = 7
2.
Solving this system you discover that there is no
solution. So, v cannot be written as a linear combination
of u1, u2 , and u 3 .
50. The equation
au1 + bu 2 + cu3 = v
a( 2, 1, 1, 2) + b( − 3, 3, 4, − 5) + c( − 6, 3, 1, 2) = (7, 2, 5, − 3)
yields the system
2a − 3b − 6c = 7
a + 3b + 3c = 2
a + 4b + c = 5
2a − 5b + 2c = − 3.
Solving this system produces a = 2, b = 1, and c = −1.
So, v = 2u1 + u 2 − u 3.
52. The equation
 1
2
3
 
 
 
a 7 + b 8 = 9
4
5
7
 
 
 
yields the system
a + 2b = 3
7a + 8b = 9
4a + 5b = 7.
Because the system has no solution, it is not possible to write the third column as a linear combination of the first two columns.
54. Write a matrix using the given u1, u2 , , u5 as columns and augment this matrix with v as a column.
 1 2

 1 1
A = −1 2

 2 −1

 1 1
1
0
2
2
1
0
0
2
1
1 −1
2 −4
1
2
5

8
7

−2

4
The reduced row-echelon form for A is
1

0
A = 0

0

0
0 0 0 0 −1

1 0 0 0 1
0 1 0 0 2.

0 0 1 0 1

0 0 0 1 2
So, v = −u1 + u 2 + 2u3 + u 4 + 2u5 .
Verify the solution by showing that
−(1, 1, −1, 2, 1) + ( 2, 1, 2, −1, 1) + 2(1, 2, 0, 1, 2) + (0, 2, 0, 1, − 4) + 2(1, 1, 2, −1, 2) = (5, 8, 7, − 2, 4).
56. The equation
av1 + bv 2 + cv 3 = 0
a(1, 0, 1) + b( −1, 1, 2) + c(0, 1, 3) = (0, 0, 0)
yields the homogeneous system
a −
b
= 0
b +
c = 0
a + 2b + 3c = 0.
Solving this system produces a = −t , b = −t , and c = t , where t is any real number.
Letting t = −1, you obtain a = 1, b = 1, c = −1, and so, v1 + v 2 − v 3 = 0.
58. (a) True. See page 155.
(b) False. The zero vector is the additive identity.
60. You can describe vector subtraction u − v as follows. (The accompanying figure shows u, v, and u − v drawn as a triangle.) Or, write subtraction in terms of addition, u − v = u + (−1)v.
y
62. (a)
4
3
2
1
v
(9, 1)
x
2 3 4 5 6 7 8 9
−2
−3
−4
u
(3, − 4)
(b) u + v = (3, − 4) + (9, 1) = (12, − 3)
(c) 2 v − u = 2(9, 1) − (3, − 4) = (18, 2) − (3, − 4) = (15, 6)
(d) The equation
a u + bv = w
a (3, − 4) + b (9, 1) = (39, 0)
yields the system
3a + 9b = 39
− 4a + b = 0.
Solving this system produces a = 1 and b = 4. So, w = u + 4 v.
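The coefficients a = 1 and b = 4 can also be found numerically by solving the 2 × 2 linear system. The NumPy sketch below is an added check (NumPy assumed, not part of the manual).

import numpy as np

u = np.array([3, -4], dtype=float)
v = np.array([9, 1], dtype=float)
w = np.array([39, 0], dtype=float)

# Solve a*u + b*v = w, i.e. [u v][a b]^T = w.
coeffs = np.linalg.solve(np.column_stack((u, v)), w)
print(coeffs)    # [1. 4.]  ->  w = u + 4v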
64. Prove each of the ten properties.
(1) u + v = (u1,  , un ) + (v1,  , vn ) = (u1 + v1,  , un + vn ) is a vector in R n .
(2) u + v = (u1 ,  , un ) + (v1 ,  , vn ) = (u1 + v1 ,  , un + vn )
= (v1 + u1 ,  , vn + un )
= (v1 ,  , vn ) + (u1 ,  , un ) = v + u
(3) (u + v ) + w = (u1 ,  , un ) + (v1 ,  , vn ) + ( w1 ,  , wn )
= (u1 + v1 ,  , un + vn ) + ( w1 ,  , wn )
= ((u1 + v1 ) + w1 ,  , (un + vn ) + wn )
= (u1 + (v1 + w1 ),  , un + (vn + wn ))
= (u1 ,  , un ) + (v1 + w1 ,  , vn + wn )
= (u1 ,  , un ) + (v1 ,  , vn ) + ( w1 ,  , wn )
= u + ( v + w)
(4) u + 0 = (u1,  , un ) + (0,  , 0) = (u1 + 0,  , un + 0) = (u1,  , un ) = u
(5) u + ( −u ) = (u1 ,  , u n ) + ( −u1 ,  , − u n )
= (u1 − u1 ,  , u n − un ) = (0,  , 0) = 0
(6) cu = c(u1,  , un ) = (cu1,  , cun ) is a vector in R n .
(7) c(u + v ) = c (u1 ,  , un ) + (v1 ,  , vn ) = c(u1 + v1 ,  , un + vn )
= (c(u1 + v1 ),  , c(un + vn )) = (cu1 + cv1 ,  , cun + cvn )
= (cu1 ,  , cun ) + (cv1 ,  , cvn )
= c(u1 ,  , un ) + c(v1 ,  , cvn ) = cu + cv
(8) (c + d )u = (c + d )(u1 ,  , un ) = ((c + d )u1 ,  , (c + d )un )
= (cu1 + du1 ,  , cun + dun )
= (cu1 ,  , cun ) + ( du1 ,  , dun )
= cu + du
(9) c( du ) = c( d (u1 ,  , un )) = c( du1 ,  , dun ) = (c( du1 ),  , c( dun ))
= ((cd )u1 ,  , (cd )un ) = (cd )(u1 ,  , un ) = (cd )u
(10) 1u = 1(u1,  , un ) = (1u1,  , 1u1 ) = (u1,  , un ) = u
66. (a) Additive identity
    (b) Distributive property
    (c) Add −c0 to both sides.
    (d) Additive inverse and associative property
    (e) Additive inverse
    (f) Additive identity

68. (a) Additive inverse
    (b) Transitive property
    (c) Add v to both sides.
    (d) Associative property
    (e) Additive inverse
    (f) Additive identity
Section 4.2 Vector Spaces
2. The additive identity of C[−1, 0] is the zero function,
f ( x) = 0, −1 ≤ x ≤ 0.
ten vector space axioms hold.
4. The additive identity of M 5,1 is the 5 × 1 zero matrix
polynomial.
18. This set is not a vector space. Axiom 1 fails. For
example, given f ( x) = x + 1 and g ( x) = − x − 1,
6. The additive identity of M 2,2 is the 2 × 2 zero matrix
8. In C ( −∞, ∞), the additive inverse of f ( x ) is − f ( x).
10. In M1,4 , the additive inverse of [v1 v2
−v2
f ( x) + g ( x) = 0 is not of the form ax + b, where
a, b ≠ 0.
0 0

.
0 0
−v3 −v4 ].
a12
a13
a14
a22
a23
a24
a32
a33
a34
a42
a43
a44
a52
a53
a54
a15 

a25 
a35  is

a45 

a55 
− a13
− a14
− a23
− a24
− a33
− a34
− a43
− a44
− a53
− a54
 − a11 − a12

− a21 − a22
− a31 − a32

− a41 − a42

− a51 − a52
v3 v4 ] is
20. This set is not a vector space. Axiom 1 fails. For
example, given f ( x ) = x 2 and g ( x ) = − x 2 + x,
f ( x) + g ( x) = x is not a quadratic function.
22. This set is not a vector space. Axiom 6 fails. A
counterexample is −2( 4, 1) = ( −8, − 2) is not in the set
because x < 0, y < 0.
12. The additive inverse of
 a11

a21
a31

a41

a51
16. This set is not a vector space. The set is not closed under
addition or scalar multiplication. For example,
(−x5 + x4 ) + ( x5 − x3 ) = x4 − x3 is not a fifth-degree
0
0
0.
 
0
0
[−v1
14. M 1,1 with the standard operations is a vector space. All
24. This set is a vector space. All ten vector space axioms
hold.
26. This set is not a vector space. The set is not closed under
addition nor scalar multiplication. A counterexample is
− a15 

− a25 
− a35  .

− a45 

− a55 
1 1 1 1
2 2

 + 
 = 
.
1
1
1
1

 

2 2
Each matrix on the left is in the set, but the sum is not in
the set.
28. This set is not a vector space. Axiom 1 fails. For
example,
32. This set is not a vector space. The set is not closed under
addition nor scalar multiplication. A counterexample is
1 0 0 1 1 1
2 1 1 

 



+
=
0
1
0
1
1
1

 

1 2 1 .
0 0 1  1 1 1
1 1 2

 



Each matrix on the left is in the set, but the matrix on the
right is not.
30. This set is a vector space. All ten vector space axioms
hold.
 1 0  1 0
2 0

 + 
 = 
.
0 1 0 −1
0 0
Each matrix on the left is nonsingular, and the sum is
not.
34. This set is a vector space. All ten vector space axioms
hold.
36. This set is a vector space. All ten vector space axioms hold.
38. This set is not a vector space because Axiom 5 fails. The additive identity is (1, 1) and so (0, 0) has no additive inverse.
Axioms 7 and 8 also fail.
40. Verify the ten axioms in the definition of vector space.
 u1 u2   v1 v2 
 u1 + v1 u2 + v2 
(1) u + v = 
 + 
 = 
 is in M 2,2 .
u3 u4  v3 v4 
u3 + v3 u4 + v4 
 u1 u2   v1 v2 
 u1 + v1 u2 + v2 
+ 
= 
(2) u + v = 



u3 u4  v3 v4 
u3 + v3 u4 + v4 
 v1 + u1 v2 + u2 
 v1 v2   u1 u2 
= 
 = v v  + u u  = v + u
v
u
v
u
+
+
3
3
4
4


 3 4  3 4
 u1
(3) u + ( v + w ) = 
u3
 u1
= 
u3
u2    v1 v2   w1 w2  
+
+

u4   v3 v4  w3 w4  
u2   v1 + w1 v2 + w2 
+
u4  v3 + w3 v4 + w4 
 u1 + (v1 + w1 ) u2 + (v2 + w2 )
= 

u3 + (v3 + w3 ) u4 + (v4 + w4 )
 (u1 + v1 ) + w1 (u2 + v2 ) + w2 
= 

(u3 + v3 ) + w3 (u4 + v4 ) + w4 
 u1 + v1 u2 + v2   w1 w2 
= 
 + 

u3 + v3 u4 + v4  w3 w4 
  u1 u2   v1 v2    w1 w2 
= 
 + 
 + 
 = (u + v ) + w
 u3 u4  v3 v4   w3 w4 
(4) The zero vector is
0 0
0 = 
 . So,
0 0
 u1 u2  0 0
 u1 u2 
u+0 = 
 + 
 = 
 = u.
u3 u4  0 0
u3 u4 
(5) For every
 u1 u2 
 −u1 −u2 
u = 
, you have − u = 
.
u3 u4 
−u3 −u4 
 u1 u2   −u1 −u2 
u + ( −u) = 
 + 

u3 u4  −u3 −u4 
0 0
= 

0 0
= 0
 u1 u2 
 cu1 cu2 
(6) cu = c 
 = 
 is in M 2,2 .
u
u
 3 4
cu3 cu4 
  u1 u2   v1 v2  
 u1 + v1 u2 + v2 
(7) c(u + v ) = c 
 + 
  = c 

u3 + v3 u4 + v4 
 u3 u4  v3 v4  
 c(u1 + v1 ) c(u2 + v2 )
 cu1 + cv1 cu2 + cv2 
= 
 = 

cu3 + cv3 cu4 + cv4 
c(u3 + v3 ) c(u4 + v4 )
 cu1 cu2   cv1 cv2 
 u1 u2 
 v1 v2 
= 
 + 
 = c
 + c

cu
cu
cv
cv
u
u
4
4
 3
 3
 3 4
v3 v4 
= cu + cv
 (c + d )u1
 u1 u2 
(8) (c + d )u = (c + d ) 
 = 
(c + d )u3
u3 u4 
(c + d )u2 

(c + d )u4 
 cu1 + du1 cu2 + du2 
 cu1 cu2   du1 du2 
= 
 = 
 + 

cu3 + du3 cu4 + du4 
cu3 cu4  du3 du4 
 u1 u2 
 u1 u2 
= c
 + d
 = cu + du
u3 u4 
u3 u4 
 c( du1 ) c( du2 )
  u1 u2  
 du1 du2 
(9) c( du) = c d 

  = c 
 = 
du3 du4 
 u3 u4  
c( du3 ) c( du4 )
 (cd )u1
= 
(cd )u3
(cd )u2 
 u1
 = (cd ) 
cd
u
( ) 4 
u3
u2 
 = (cd )u
u4 
 u1 u2 
1u1 1u2 
(10) 1(u) = 1 
 = 
 = u
u3 u4 
1u3 1u4 
42. (a) Axiom 10 fails. For example,
1( 2, 3, 4) = ( 2, 3, 0) ≠ ( 2, 3, 4).
(b) Axiom 4 fails because there is no zero vector. For example,
(2, 3, 4) + ( x, y, z) = (0, 0, 0) ≠ (2, 3, 4) for all choices of ( x, y, z).
(c) Axiom 7 fails. For example,
2 (1, 1, 1) + (1, 1, 1) = 2(3, 3, 3) = (6, 6, 6)
2(1, 1, 1) + 2(1, 1, 1) = ( 2, 2, 2) + ( 2, 2, 2) = (5, 5, 5).
So, c(u + v) ≠ cu + cv.
(d) ( x1 , y1 , z1 ) + ( x2 , y2 , z 2 ) = ( x1 + x2 + 1, y1 + y2 + 1, z1 + z2 + 1)
c( x, y , z ) = (cx + c − 1, cy + c − 1, cz + c − 1)
This is a vector space. Verify the 10 axioms.
(1)
( x1, y1, z1 ) + ( x2 , y2 , z2 ) ∈ R3
(2)
( x1, y1, z1 ) + ( x2 , y2 , z2 ) = ( x1 + x2 + 1, y1 + y2 + 1, z1 + z2 + 1)
= ( x2 + x1 + 1, y2 + y1 + 1, z2 + z1 + 1)
= ( x2 , y2 , z2 ) + ( x1 , y1 , z1 )
(3)
( x1, y1, z1 ) + ( x2 , y2 , z2 ) + ( x3 , y3 , z3 )
= ( x1 , y1 , z1 ) + ( x2 + x3 + 1, y2 + y3 + 1, z2 + z3 + 1)
= ( x1 + ( x2 + x3 + 1) + 1, y1 + ( y2 + y3 + 1) + 1, z1 + ( z2 + z3 + 1) + 1)
= (( x1 + x2 + 1) + x3 + 1, ( y1 + y2 + 1) + y3 + 1, ( z1 + z2 + 1) + z3 + 1)
= ( x1 + x2 + 1, y1 + y2 + 1, z1 + z2 + 1) + ( x3 , y3 , z3 )
= ( x1 , y1 , z1 ) + ( x2 , y2 , z2 ) + ( x3 , y3 , z3 )
(4)
0 = ( −1, −1, −1): ( x, y , z ) + ( −1, −1, −1) = ( x − 1 + 1, y − 1 + 1, z − 1 + 1)
= ( x, y , z )
(5)
−( x, y, z ) = ( − x − 2, − y − 2, − z − 2):
( x, y, z ) + (−( x, y, z )) = ( x, y, z ) + ( − x − 2, − y − 2, − z − 2)
= ( x − x − 2 + 1, y − y − 2 + 1, z − z − 2 + 1)
= ( −1, −1, −1)
= 0
(6)
c( x, y , z ) ∈ R 3
(7)
c(( x1 , y1 , z1 ) + ( x2 , y2 , z2 ))
= c( x1 + x2 + 1, y1 + y2 + 1, z1 + z2 + 1)
= (c( x1 + x2 + 1) + c − 1, c( y1 + y2 + 1) + c − 1, c( z1 + z2 + 1) + c − 1)
= (cx1 + c − 1 + cx2 + c − 1 + 1, cy1 + c − 1 + cy2 + c − 1 + 1, cz1 + c − 1 + cz2 + c − 1 + 1)
= (cx1 + c − 1, cy1 + c − 1, cz1 + c − 1) + (cx2 + c − 1, cy2 + c − 1, cz2 + c − 1)
= c( x1 , y1 , z1 ) + c( x2 , y2 , z2 )
(8)
(c + d )( x, y, z ) = ((c + d ) x + c + d − 1, (c + d ) y + c + d − 1, (c + d ) z + c + d − 1)
= (cx + c − 1 + dx + d − 1 + 1, cy + c − 1 + dy + d − 1 + 1, cz + c − 1 + dz + d − 1 + 1)
= (cx + c − 1, cy + c − 1, cz + c − 1) + ( dx + d − 1, dy + d − 1, dz + d − 1)
= c( x, y , z ) + d ( x, y , z )
(9)
c( d ( x, y, z )) = c( dx + d − 1, dy + d − 1, dz + d − 1)
= (c( dx + d − 1) + c − 1, c( dy + d − 1) + c − 1, c( dz + d − 1) + c − 1)
= ((cd ) x + cd − 1, (cd ) y + cd − 1, (cd ) z + cd − 1)
= (cd )( x, y, z )
(10) 1( x, y , z ) = (1x + 1 − 1, 1 y + 1 − 1, 1z + 1 − 1)
= ( x, y , z )
Note: In general, if V is a vector space and a is a constant vector, then the set V together with the operations
u ⊕ v = (u + a ) + ( v + a ) − a
c * u = c (u + a ) − a
is also a vector space. Letting a = (1, 1, 1) ∈ R 3 gives the above example.
44. Let u be an element of the vector space V. Then − u is the additive inverse of u. Assume, to the contrary, that v is another
additive inverse of u. Then
u + v = 0
−u + u + v = −u + 0
0 + v = −u + 0
v = − u.
46. (a) A set on which vector addition and scalar multiplication are defined is a vector space when the following properties hold.
1. u, v ∈ V  u + v ∈ V
2. u + v = v + u
3. u + ( v + w) = (u + v) + w
4. 0 ∈ V such that u + 0 = u for all u ∈ V .
5. If u ∈ V , then − u ∈ V and u + ( − u) = 0.
6. If u ∈ V , c ∈ R, cu ∈ V .
7. c(u + v) = cu + cv
8.
(c + d )u = cu + du
9. c( du) = (cd )u
10. 1(u) = u
(b) The set of all polynomials of degree 6 or less is a vector space.
The set of all sixth-degree polynomials is not a vector space.
48. R ∞ is a vector space. Verify the ten vector space axioms.
(1) u + v = (u1 + v1 , u2 + v2 , u3 + v3 , ) is in R∞ .
(2) u + v = (u1, u2 , u3 , ) + (v1, v2 , v3 , ) = (u1 + v1 , u2 + v2 , u3 + v3 , ) = (v1 + u1, v2 + u2 , v3 + u3 , ) = v + u
(3) u + ( v + w ) = (u1 , u2 , u3 , ) + (v1 + w1 , v2 + w2 , v3 + w3 , )
= (u1 + (v1 + w1 ), u2 + (v2 + w2 ), u3 + (v3 + w3 ), )
= ((u1 + v1 ) + w1 , (u2 + v2 ) + w2 , (u3 + v3 ) + w3 , )
= (u1 + v1 , u2 + v2 , u3 + v3 , ) + ( w1 , w2 , w3 , )
= (u + v ) + w
(4) The zero vector is
0 = (0, 0, 0, )
u + 0 = (u1 , u2 , u3 , ) + (0, 0, 0, ) = (u1 , u2 , u3 , ).
(5) The additive inverse of u is
− u = ( − u1 , − u2 , − u3 , )
u + ( − u) = (u1 + ( − u1 ), u2 + ( − u2 ), u3 + ( − u3 ), ) = (0, 0, 0, ) = 0.
(6) cu = (cu1 , cu2 , cu3 , ) is in the set.
(7) c (u + v) = c (u1 + v1 , u2 + v2 , u3 + v3 , )
= (c (u1 + v1 ), c (u2 + v2 ), c (u3 + v3 ), )
= (cu1 + cv1 , cu2 + cv2 , cu3 + cv3 , )
= (cu1 , cu2 , cu3 , ) + (cv1 , cv2 , cv3 , )
= cu + cv
(8) (c + d )u = ((c + d )u1 , (c + d )u2 , (c + d )u3 , ) = (cu1 + du1 , cu2 + du2 , cu3 + du3 , ) = cu + du
(9) c( du) = c( du1 , du2 , du3 , ) = (c( du1 ), c( du2 ), c( du3 ), ) = ((cd )u1 , (cd )u2 , (cd )u3 , ) = (cd )u
(10) 1u = (1u1 , 1u2 , 1u3 , ) = (u1 , u2 , u3 , ) = u
50. (a) True. For a set with two operations to be a vector
space, all ten axioms must be satisfied. Therefore, if
one of the axioms fails, then this set cannot be a
vector space.
(b) False. The first axiom is not satisfied, because
x + (1 − x) = 1 is not a polynomial of degree 1, but
52. ( −1) v + 1( v) = ( −1 + 1) v = 0 v = 0. Also,
− v + v = 0. So, ( −1) v and −v are both additive
inverses of v. Because the additive inverse of a vector is
unique, ( −1) v = − v.
is a sum of polynomials of degree 1.
(c) True. This set is a vector space because all ten vector
space axioms hold.
Section 4.3 Subspaces of Vector Spaces
2. Because W is nonempty and W ⊂ R3 , you need only check that W is closed under addition and scalar multiplication. Given
( x1, y1, 4 x1 − 5 y1) and ( x2 , y2 , 4 x2 − 5 y2 ),
it follows that
( x1, y1, 4 x1 − 5 y1 ) + ( x2 , y2 , 4 x2 − 5 y2 ) = ( x1 + x2 , y1 + y2 , 4( x1 + x2 ) − 5( y1 + y2 )) ∈ W .
Furthermore, for any real number c and ( x, y, 4 x − 5 y) ∈ W , it follows that
c( x, y, 4 x − 5 y ) = (cx, cy, 4(cx) − 5(cy )) ∈ W .
4. Because W is nonempty and W ⊂ M3,2, you need only check that W is closed under addition and scalar multiplication. Given
    [a1        b1]           [a2        b2]
    [a1 − 2b1  0 ] ∈ W  and  [a2 − 2b2  0 ] ∈ W,
    [0         c1]           [0         c2]
it follows that
    [a1        b1]   [a2        b2]   [a1 + a2                 b1 + b2]
    [a1 − 2b1  0 ] + [a2 − 2b2  0 ] = [(a1 + a2) − 2(b1 + b2)  0      ] ∈ W.
    [0         c1]   [0         c2]   [0                       c1 + c2]
Furthermore, for any real number d,
     [a       b]   [da        db]
    d[a − 2b  0] = [da − 2db  0 ] ∈ W.
     [0       c]   [0         dc]
6. Recall from calculus that differentiability implies
continuity. So, W ⊂ V . Furthermore, because W is
nonempty, you need only check that W is closed under
addition and scalar multiplication. Given differentiable
functions f and g on [−1, 1], it follows that f + g is
differentiable on [−1, 1] and so f + g ∈ W . Also, for
any real number c and for any differentiable function
f ∈ W , cf is differentiable, and therefore cf ∈ W .
8. The vectors in W are of the form ( 2, a). This set is not
closed under addition or scalar multiplication. For
example,
(2, 1) + (2, 1) = ( 4, 2) ∉ W
and
2( 2, 1) = ( 4, 2) ∉ W .
10. This set is not closed under scalar multiplication. For example,
    (1/2)(4, 3) = (2, 3/2) ∉ W.
12. This set is not closed under addition. For example, consider f(x) = −x + 1 and g(x) = x + 2, and f(x) + g(x) = 3 ∉ W.
14. This set is not closed under addition. For example, (3, 4, 5) + (5, 12, 13) = (8, 16, 18) ∉ W.
16. This set is not closed under addition. For instance,
    [2, 0, 12]^T + [1, 0, 3]^T = [3, 0, 15]^T ∉ W.
18. This set is not closed under addition or scalar multiplication. For example,
    [1 0; 0 1] + [1 0; 0 1] = [2 0; 0 2] ∉ W  and  2[1 0; 0 1] = [2 0; 0 2] ∉ W.
20. The vectors in W are of the form (a, a²). This set is not closed under addition or scalar multiplication. For example,
    (3, 9) + (2, 4) = (5, 13) ∉ W  and  2(3, 9) = (6, 18) ∉ W.
22. This set is not a subspace because it is not closed under scalar multiplication.
24. This set is a subspace of C(−∞, ∞) because it is closed under addition and scalar multiplication.
26. This set is not a subspace because it is not closed under addition or scalar multiplication.
28. This set is not a subspace of C(−∞, ∞) because it is not closed under addition or scalar multiplication.
30. This set is a subspace because it is closed under addition and scalar multiplication.
32. This set is a subspace of Mm,n because it is closed under addition and scalar multiplication.
34. This set is not a subspace because it is not closed under addition or scalar multiplication.
36. This set is not a subspace because it is not closed under addition.
38. W is not a subspace of R³. For example, (0, 0, 4) ∈ W and (1, 1, 4) ∈ W, but (0, 0, 4) + (1, 1, 4) = (1, 1, 8) ∉ W, so W is not closed under addition.
40. W is a subspace of R3. Note first that W ⊂ R3 and W is nonempty. If ( s1 , t1, s1 + t1 ) and ( s2 , t2 , s2 + t2 ) are in W, then their
sum is also in W.
( s1, t1 , s1 + t1 ) + ( s2 , t2 , s2 + t2 ) = ( s1 + s2 , t1 + t2 , ( s1 + s2 ) + (t1 + t2 )) ∈ W .
Furthermore, if c is any real number,
c(s, t, s + t) = (cs, ct, cs + ct) ∈ W.
42. W is not a subspace of R3. For example,
(1, 1, 1) ∈ W and (1, 1, 1) ∈ W , but their sum,
(2, 2, 2) ∉ W . So, W is not closed under addition.
44. (a) False. Zero subspace and the whole vector space are
not proper subspaces, even though they are
subspaces.
(b) True. Because W must itself be a vector space under
inherited operations, it must contain an additive
identity.
(c) True. See Theorem 4.5, part 1 on page 168.
(d) True. See Definition of Subspace, page 168.
46. Example 5 showed that Wi ⊂ W j for i ≤ j. To show Wi
is a subspace, show that it is closed under addition and
scalar multiplication.
W4 : If f and g are integrable, f + g and cf are
integrable. So, W4 is a subspace.
W3 : The sum of two continuous functions is
continuous, and a continuous function multiplied
by a constant is continuous. So, W3 is a subspace.
W2 : If y1 and y2 are differentiable, y1 + y2 and cy1 are
differentiable. So, W2 is a subspace.
W1 : The sum of two polynomials is a polynomial, and a
polynomial multiplied by a constant is a
polynomial. So, W1 is a subspace.
So, Wi is a subspace of W j for i ≤ j.
48. S is a subspace of C[0, 1]. S is nonempty because the zero function is in S. If f1, f2 ∈ S, then
    ∫₀¹ (f1 + f2)(x) dx = ∫₀¹ [f1(x) + f2(x)] dx = ∫₀¹ f1(x) dx + ∫₀¹ f2(x) dx = 0 + 0 = 0  ⇒  f1 + f2 ∈ S.
If f ∈ S and c ∈ R, then
    ∫₀¹ (cf)(x) dx = ∫₀¹ c f(x) dx = c ∫₀¹ f(x) dx = c(0) = 0  ⇒  cf ∈ S.
So, S is closed under addition and scalar multiplication.
50. The commutative, associative, and distributive properties
in the larger vector space still hold for a subset of the
larger space. If the set is closed under addition and scalar
multiplication, the remaining axioms for a vector space
are satisfied, and the subset is a subspace.
52. Because W is not empty (for example, x ∈ W), you need only check that W is closed under addition and scalar multiplication. Let a1x + b1y + c1z ∈ W and a2x + b2y + c2z ∈ W. Then
    (a1x + b1y + c1z) + (a2x + b2y + c2z) = (a1 + a2)x + (b1 + b2)y + (c1 + c2)z ∈ W.
Similarly, if ax + by + cz ∈ W and d ∈ R, then
    d(ax + by + cz) = (da)x + (db)y + (dc)z ∈ W.
54. Because W is not empty, you need only check that W is closed under addition and scalar multiplication. Let c ∈ R and x, y ∈ W. Then Ax = 0 and Ay = 0. So,
    A(x + y) = Ax + Ay = 0 + 0 = 0  and  A(cx) = cAx = c0 = 0.
Therefore, x + y ∈ W and cx ∈ W.
56. Let V = R². Consider W = {(x, 0): x ∈ R} and U = {(0, y): y ∈ R}. Then W ∪ U is not a subspace of V, because it is not closed under addition. Indeed, (1, 0), (0, 1) ∈ W ∪ U, but (1, 1) (which is the sum of these two vectors) is not.
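To illustrate the null-space argument of Exercise 54 numerically, here is a short sketch (not from the manual); the matrix A and the solution vectors below are an arbitrary example, not ones from the text.

import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])      # rank 1, so Ax = 0 has nontrivial solutions

x = np.array([-2.0, 1.0, 0.0])       # Ax = 0
y = np.array([-3.0, 0.0, 1.0])       # Ay = 0

print(np.allclose(A @ (x + y), 0))   # True: x + y is again a solution
print(np.allclose(A @ (5.0 * x), 0)) # True: 5x is again a solution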
58. (a) V + W is nonempty because 0 = 0 + 0 ∈ V + W .
Let u1, u2 ∈ V + W . Then u1 = v1 + w1, u2 = v 2 + w 2 , where v i ∈ V and w i ∈ W . So,
u1 + u2 = ( v1 + w1 ) + ( v 2 + w 2 ) = ( v1 + v 2 ) + ( w1 + w 2 ) ∈ V + W .
For scalar c,
cu1 = c( v1 + w1 ) = cv1 + cw1 ∈ V + W .
(b) If V = {( x, 0): x is a real number} and W = {(0, y ) : y is a real number}, then V + W = R 2 .
Section 4.4 Spanning Sets and Linear Independence
2. (a) Solving the equation
       c1(1, 2, −2) + c2(2, −1, 1) = (−4, −3, 3)
   for c1 and c2 yields the system
       c1 + 2c2 = −4
       2c1 − c2 = −3
       −2c1 + c2 = 3.
   The solution of this system is c1 = −2 and c2 = −1. So, z can be written as a linear combination of the vectors in S.
   (b) Proceed as in (a), substituting (−2, −6, 6) for (−4, −3, 3). So, the system to be solved is
       c1 + 2c2 = −2
       2c1 − c2 = −6
       −2c1 + c2 = 6.
   The solution of this system is c1 = −14/5 and c2 = 2/5. So, v can be written as a linear combination of the vectors in S.
   (c) Proceed as in (a), substituting (−1, −22, 22) for (−4, −3, 3). So, the system to be solved is
       c1 + 2c2 = −1
       2c1 − c2 = −22
       −2c1 + c2 = 22.
   The solution of this system is c1 = −9 and c2 = 4. So, w can be written as a linear combination of the vectors in S.
   (d) Proceed as in (a), substituting (1, −5, −5) for (−4, −3, 3), which yields the system
       c1 + 2c2 = 1
       2c1 − c2 = −5
       −2c1 + c2 = −5.
   This system has no solution. So, u cannot be written as a linear combination of the vectors in S.
4. (a) Solving the equation
       c1(6, −7, 8, 6) + c2(4, 6, −4, 1) = (2, 19, −16, −4)
   for c1 and c2 yields the system
       6c1 + 4c2 = 2
       −7c1 + 6c2 = 19
       8c1 − 4c2 = −16
       6c1 + c2 = −4.
   The solution of this system is c1 = −1 and c2 = 2. So, u can be written as a linear combination of the vectors in S.
   (b) Proceed as in (a), substituting (49/2, 99/4, −14, 19/2) for (2, 19, −16, −4), which yields the system
       6c1 + 4c2 = 49/2
       −7c1 + 6c2 = 99/4
       8c1 − 4c2 = −14
       6c1 + c2 = 19/2.
   The solution of this system is c1 = 3/4 and c2 = 5. So, v can be written as a linear combination of the vectors in S.
   (c) Proceed as in (a), substituting (−4, −14, 27/2, 53/8) for (2, 19, −16, −4), which yields the system
       6c1 + 4c2 = −4
       −7c1 + 6c2 = −14
       8c1 − 4c2 = 27/2
       6c1 + c2 = 53/8.
   This system has no solution. So, w cannot be written as a linear combination of the vectors in S.
   (d) Proceed as in (a), substituting (8, 4, −1, 17/4) for (2, 19, −16, −4), which yields the system
       6c1 + 4c2 = 8
       −7c1 + 6c2 = 4
       8c1 − 4c2 = −1
       6c1 + c2 = 17/4.
   The solution of this system is c1 = 1/2 and c2 = 5/4. So, z can be written as a linear combination of the vectors in S.
6. From the vector equation
    c1[2 −3; 1 4] + c2[0 5; 1 −2] = [6 2; 9 11]
you obtain the linear system
    2c1 = 6
    −3c1 + 5c2 = 2
    c1 + c2 = 9
    4c1 − 2c2 = 11.
This system is inconsistent, and so the matrix is not a linear combination of A and B.
8. From the vector equation
    c1[2 −3; 1 4] + c2[0 5; 1 −2] = [0 0; 0 0]
you obtain only the trivial combination
    0[2 −3; 1 4] + 0[0 5; 1 −2] = [0 0; 0 0] = 0A + 0B.
10. Let u = (u1, u2 ) be any vector in R 2 . Solving the
equation
c1 ( −1, 1) + c2 (3, 1) = (u1, u2 )
for c1 and c2 yields the system
− c1 + 3c2 = u1
c1 +
c2 = u2 .
The system has a unique solution because the
determinant of the coefficient matrix is nonzero. So, S
spans R 2 .
12. Let u = (u1, u2 ) be any vector in R 2 . Solving the
equation
c1 ( 2, 0) + c2 (0, 1) = (u1, u2 )
for c1 and c2 yields the system
2c1 = u1
c2 = u2.
The system has a unique solution because the
determinant of the coefficient matrix is nonzero. So, S
spans R 2 .
14. S does not span R 2 because only vectors of the form
t (1, 1) are in span(S). For example, (0, 1) is not in
span(S). S spans a line in R 2 .
16. Let u = (u1, u2) be any vector in R². Solving the equation
    c1(0, 2) + c2(1, 4) = (u1, u2)
for c1 and c2 yields the system
    c2 = u1
    2c1 + 4c2 = u2.
The system has a unique solution because the
determinant of the coefficient matrix is nonzero. So, S
spans R 2 .
18. Let u = (u1, u2 ) be any vector in R 2 . Solving the
equation
c1 ( −1, 2) + c2 ( 2, −1) + c3 (1, 1) = (u1, u2 )
for c1, c2 , and c3 yields the system
−c1 + 2c2 + c3 = u1
2c1 −
c2 + c3 = u2 .
This system is equivalent to
c1 − 2c2 − c3 = −u1
3c2 + 3c3 = 2u1 + u2 .
So, for any u = (u1, u2) in R², you can take c3 = 0, c2 = (2u1 + u2)/3, and c1 = 2c2 − u1 = (u1 + 2u2)/3. So, S spans R².
20. Let u = (u1, u2 , u3 ) be any vector in R3. Solving the
equation
c1(5, 6, 5) + c2 ( 2, 1, − 5) + c3 (0, − 4, 1) = (u1, u2 , u3 )
for c1, c2 , and c3 yields the system
5c1 + 2c2
6c1 +
= u1
c2 − 4c3 = u2
5c1 − 5c2 +
c3 = u3 .
This system has a unique solution because the
determinant of the coefficient matrix is nonzero. So, S
spans R3.
22. Let u = (u1, u2 , u3 ) be any vector in R3. Solving the
equation
c1 (1, 0, 1) + c2 (1, 1, 0) + c3 (0, 1, 1) = (u1, u2 , u3 )
for c1, c2 , and c3 yields the system
c1 + c2
= u1
c2 + c3 = u2
c1
+ c3 = u3 .
This system has a unique solution because the
determinant of the coefficient matrix is nonzero. So, S
spans R3.
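A short numerical sketch (not part of the manual) of the determinant test used in Exercises 20 and 22 follows; the target vector u is an arbitrary illustration.

import numpy as np

# Columns are the vectors of S from Exercise 22: (1,0,1), (1,1,0), (0,1,1).
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])

print(np.linalg.det(A))          # approximately 2 (nonzero), so S spans R^3

u = np.array([4.0, -1.0, 7.0])   # an arbitrary target vector
c = np.linalg.solve(A, u)        # coordinates with A c = u
print(np.allclose(A @ c, u))     # True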
24. This set does not span R3. Notice that the third and fourth vectors are spanned by the first two.
(4, 0, 5) = 2(1, 0, 3) + ( 2, 0, −1)
(2, 0, 6) = 2(1, 0, 3)
So, S spans a plane in R3.
26. Let a0 + a1x + a2 x 2 + a3 x3 be any vector in P3 . Solving the equation
c1( x2 − 2 x) + c2 ( x3 + 8) + c3 ( x3 − x2 ) + c4 ( x2 − 4) = a0 + a1x + a2 x2 + a3 x3
for c1, c2, c3, and c4 yields the system
    c2 + c3 = a3
    c1 − c3 + c4 = a2
    −2c1 = a1
    8c2 − 4c4 = a0.
This system has a unique solution because the determinant of the coefficient matrix is nonzero. So, S spans P3 .
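The determinant claim can be checked with a short symbolic sketch (not from the manual); the rows below correspond to the x³, x², x, and constant equations above.

from sympy import Matrix

A = Matrix([[0, 1, 1, 0],    # c2 + c3            = a3
            [1, 0, -1, 1],   # c1 - c3 + c4       = a2
            [-2, 0, 0, 0],   # -2c1               = a1
            [0, 8, 0, -4]])  # 8c2 - 4c4          = a0

print(A.det())   # -24, nonzero, so the system has a unique solution and S spans P3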
28. The set is linearly dependent because
    (3, −6) + 3(−1, 2) = 0.
30. This set is linearly dependent because
    −3(1, 0) + (1, 1) + (2, −1) = (0, 0).
32. Because (−1, 3, 2) is not a scalar multiple of (6, 2, 1), the set is linearly independent.
34. Because these vectors are multiples of each other, the set S is linearly dependent.
36. From the vector equation
    c1(−4, −3, 4) + c2(1, −2, 3) + c3(6, 0, 0) = 0
you obtain the homogeneous system
    −4c1 + c2 + 6c3 = 0
    −3c1 − 2c2 = 0
    4c1 + 3c2 = 0.
This system has only the trivial solution c1 = c2 = c3 = 0. So, the set S is linearly independent.
38. From the vector equation
c1 ( 4, − 3, 6, 2) + c2 (1, 8, 3, 1) + c3 (3, − 2, −1, 0) = (0, 0, 0,0)
you obtain the homogeneous system
4c1 + c2 + 3c3 = 0
−3c1 + 8c2 − 2c3 = 0
6c1 + 3c2 −
c3 = 0
2c1 + c2
= 0.
This system has only the trivial solution c1 = c2 = c3 = 0. So, the set S is linearly independent.
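For readers who want to verify Exercise 38 with software, here is a brief sketch (not part of the original solution).

from sympy import Matrix

# Columns are the vectors (4,-3,6,2), (1,8,3,1), (3,-2,-1,0).
A = Matrix([[4, 1, 3],
            [-3, 8, -2],
            [6, 3, -1],
            [2, 1, 0]])

print(A.nullspace())   # [] -- empty, so c1 = c2 = c3 = 0 is the only solution
print(A.rank())        # 3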
40. This set is linearly dependent because
    5(4, 1, 2, 3) − 7(3, 2, 1, 4) + 3(1, 5, 5, 9) − 2(1, 3, 9, 7) = (0, 0, 0, 0).
42. From the vector equation
    c1(x − 1) + c2(2x + 5) = 0 + 0x + 0x²
you obtain the homogeneous system
    −c1 + 5c2 = 0
    c1 + 2c2 = 0
    0 = 0.
This system has only the trivial solution. So, the set is linearly independent.
44. From the vector equation
    c1(x²) + c2(x² + 1) = 0 + 0x + 0x²
you obtain the homogeneous system
    c2 = 0
    c1 + c2 = 0.
This system has only the trivial solution. So, the set is linearly independent.
46. From the vector equation
c1 ( − 2 − x) + c2 ( 2 + 3x + x 2 ) + c3 (6 + 5 x + x 2 ) = 0 + 0 x + 0 x 2
you obtain the homogeneous system
    −2c1 + 2c2 + 6c3 = 0
    −c1 + 3c2 + 5c3 = 0
    c2 + c3 = 0.
This system has infinitely many solutions. For example, c1 = 2, c2 = −1, and c3 = 1. So, S is linearly dependent.
48. From the vector equation
c1(7 − 4 x + 4 x 2 ) + c2 (6 + 2 x − 3x 2 ) + c3 ( 20 − 6 x + 5 x 2 ) = 0 + 0 x + 0 x 2
you obtain the homogeneous system
    7c1 + 6c2 + 20c3 = 0
    −4c1 + 2c2 − 6c3 = 0
    4c1 − 3c2 + 5c3 = 0.
This system has infinitely many solutions. For example, c1 = 2, c2 = 1, and c3 = −1. So, S is linearly dependent.
50. From the vector equation
    c1[1 0; 0 1] + c2[0 1; 0 0] + c3[0 0; 1 0] = [0 0; 0 0]
you obtain the homogeneous system
    c1 = 0
    c2 = 0
    c3 = 0.
So, the set is linearly independent.
52. The set is linearly dependent because
    2[2 0; −3 1] + 3[−4 −1; 0 5] = [−8 −3; −6 17].
54. One example of a nontrivial linear combination of
vectors in S whose sum is the zero vector is
(2, 4) + 2(−1, − 2) + 0(0, 6) = (0, 0).
Solving this equation for ( 2, 4) yields
(2, 4) = −2( −1, − 2) + 0(0, 6).
56. One example of a nontrivial linear combination of
vectors in S whose sum is the zero vector is
2(1, 2, 3, 4) − (1, 0, 1, 2) − (1, 4, 5, 6) = (0, 0, 0, 0).
Solving this equation for (1, 4, 5, 6) yields
(1, 4, 5, 6) = 2(1, 2, 3, 4) − (1, 0, 1, 2).
58. (a) From the vector equation
c1 (t , 0, 0) + c2 (0, 1, 0) + c3 (0, 0, 1) = (0, 0, 0)
you obtain the homogeneous system
tc1
= 0
c2
= 0
c3 = 0.
Because c2 = c3 = 0, the set will be linearly
independent if t ≠ 0.
(b) Proceeding as in (a), you obtain the homogeneous system
    tc1 + tc2 + tc3 = 0
    tc1 + c2 = 0
    tc1 + c3 = 0.
The coefficient matrix will have a nonzero determinant if 2t² − t ≠ 0. That is, the set will be linearly independent if t ≠ 0 and t ≠ 1/2.
60. (a) Because ( −2, 4) = −2(1, − 2), S is linearly
dependent.
(b) Because 2(1, − 6, 2) = ( 2, −12, 4), S is linearly
dependent.
(c) Because (0, 0) = 0(1, 0), S is linearly dependent.
 1 0 0
0 0 2




62. The matrix 0 1 1 row reduces to 0 1 0 and
0 0 1
 1 1 1




 1 0 0
1 1 2




1 1 1 row reduces to 0 1 0 as well. So, both
0 0 1
1 2 1




sets of vectors span R3.
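A quick rank check (not from the manual) of the two matrices as shown above confirms this conclusion.

import numpy as np

A1 = np.array([[0, 0, 2], [0, 1, 1], [1, 1, 1]], dtype=float)
A2 = np.array([[1, 1, 2], [1, 1, 1], [1, 2, 1]], dtype=float)

print(np.linalg.matrix_rank(A1), np.linalg.matrix_rank(A2))   # 3 3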
64. (a) False. A set is linearly dependent if and only if one
of the vectors of this set can be written as a linear
combination of the others.
(b) True. See “Definition of a Spanning Set of a Vector
Space,” page 177.
 1 3 0
 1 0 0




66. The matrix 2 2 0 row reduces to 0 1 0, which
3 1 1
0 0 1




shows that the equation
c1 (1, 2, 3) + c2 (3, 2, 1) + c3 (0, 0, 1)
only has the trivial solution. So, the three vectors are
linearly independent. Furthermore, the vectors span R 3
because the coefficient matrix of the linear system
72. Suppose v k = c1v1 +  + ck −1v k −1. For any vector
u ∈ V,
u = d1v1 +  + d k −1v k −1 + d k v k
= d1v1 +  + d k −1v k −1 + d k (c1v1 +  + ck −1v k −1 )
= ( d1 + c1d k ) v1 +  + ( d k −1 + ck −1d k ) v k −1
which shows that u ∈ span( v1 ,  , v k −1 ).
74. The vectors are linearly dependent because
( v − u) + (w − v) + (u − w) = 0.
76. On [0, 1], f 2 ( x) = x = x = 13 (3x)
= 13 f1 ( x)
 { f1 , f 2} dependent.
 1 3 0  c1 
 u1 

 
 
=
c
2
2
0

  2
u2 
3 1 1 c3 
u3 

 
 
On [−1, 1], f1 and f 2 are not multiples of each other.
f 2 ( x) ≠ cf1 ( x) for −1 ≤ x < 0, that is
is nonsingular.
68. If S1 is linearly dependent, then for some
u1,  , un , v ∈ S1, v = c1u1 +  + cnun . So, in S 2 ,
f ( x) = x ≠ 13 (3x) for −1 ≤ x ≤ 0.
y
you have v = c1u1 +  + cnu n , which implies that S2
is linearly dependent.
70. Because {u1,  , un , v} is linearly dependent, there exist
scalars c1,  , cn , c not all zero, such that
4
3
2
−4
−2
f1 (x) = 3x
f2(x) = |x|
1 2 3 4
x
c1u1 +  + cnu n + cv = 0.
But, c ≠ 0 because {u1,  , un} are linearly
independent. So,
cv = −c1u1 −  − cnu n  v =
c
−c1
u1 −  − n u n .
c
c
Section 4.5 Basis and Dimension
2. There are four vectors in the standard basis for R 4 .
{(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)}
4. There are four vectors in the standard basis for M4,1.
    {[1, 0, 0, 0]^T, [0, 1, 0, 0]^T, [0, 0, 1, 0]^T, [0, 0, 0, 1]^T}
6. There are three vectors in the standard basis for P2.
    {1, x, x²}
8. S is linearly dependent and does not span R².
10. S does not span R², although it is linearly independent.
12. A basis for R² can only have two vectors. Because S has three vectors, it is not a basis for R².
14. S is linearly dependent and does not span R².
16. A basis for R³ contains three linearly independent vectors. Because
    −1(2, 1, −2) + (−2, −1, 2) + (4, 2, −4) = (0, 0, 0),
S is linearly dependent and is, therefore, not a basis for R³.
18. S does not span R³, although it is linearly independent.
20. S is linearly dependent and does not span R³.
22. S is not a basis because it has too many vectors. A basis for R³ can only have three vectors.
24. S is not a basis because it has too many vectors. A basis
for P2 can only have three vectors.
26. S does not span P2 , although S is linearly independent.
For example, 1 + x + x 2 ∉ span ( S ).
28. S is not a basis because the vectors are linearly dependent. For example,
    −(1 − 2x + x²) + (3 − 6x + 3x²) + (−2 + 4x − 2x²) = 0 + 0x + 0x².
Also, S does not span P2.
30. S is not a basis because the vectors are linearly
dependent.
1(−3 + 6x) + 1(3x²) + 3(1 − 2x − x²) = 0
32. S is not a basis because the vectors are linearly
dependent.
0 1  1 1
−1 0
For example, 
 − 
 = 

 1 0 0 0
 1 0
Also, S does not span M 2, 2 .
34. S does not span M 2,2 , although it is linearly independent.
36. Because v1 and v2 are multiples of each other, they do not form a basis for R².
38. Because {v1, v2} consists of exactly two linearly
independent vectors, it is a basis for R 2 .
40. Because the vectors in S are not scalar multiples of one
another, they are linearly independent. Because S
consists of exactly two linearly independent vectors, it is
a basis for R 2 .
42. S does not span R 3 , although it is linearly independent.
So, S is not a basis for R3.
44. This set contains the zero vector, and is therefore linearly
dependent.
1(0, 0, 0) + 0(1, 5, 6) + 0(6, 2, 1) = (0, 0, 0)
So, S is not a basis for R3.
46. To determine if the vectors of S are linearly independent, find the solution of
c1 (1, 0, 0, 1) + c2 (0, 2, 0, 2) + c3 (1, 0, 1, 0) + c4 (0, 2, 2, 0) = (0, 0, 0, 0).
Because the corresponding linear system has nontrivial solutions (for instance, c1 = 2, c2 = −1, c3 = −2, and c4 = 1), the
vectors are linearly dependent, and S is not a basis for R 4 .
48. Form the equation
c1( 4t − t 2 ) + c2 (5 + t 3 ) + c3 (5 + 3t ) + c4 ( − 3t 2 + 2t 3 ) = 0
which yields the homogeneous system
    c2 + 2c4 = 0
    −c1 − 3c4 = 0
    4c1 + 3c3 = 0
    5c2 + 5c3 = 0.
This system has only the trivial solution. So, S consists of exactly four linearly independent vectors. Therefore, S is a
basis for P3 .
50. Form the equation
c1( −1 + t 3 ) + c2 ( 2t 2 ) + c3 (3 + t ) + c4 (5 + 2t + 2t 2 + t 3 ) = 0
which yields the homogeneous system
    c1 + c4 = 0
    2c2 + 2c4 = 0
    c3 + 2c4 = 0
    −c1 + 3c3 + 5c4 = 0.
This system has nontrivial solutions (for instance, c1 = 1, c2 = 1, c3 = 2, and c4 = −1 ). Therefore, S is not a basis for P3
because the vectors are linearly dependent.
52. Form the equation
 1 2
2 −7
 4 −9
12 −16
0 0
c1 
 + c2 
 + c3 
 + c4 
 = 

−5 4
6 2
11 12
17 42
0 0
which yields the homogeneous system
c1 + 2c2 + 4c3 + 12c4 = 0
2c1 − 7c2 − 9c3 − 16c4 = 0
−5c1 + 6c2 + 11c3 + 17c4 = 0
4c1 + 2c2 + 12c3 + 42c4 = 0.
Because this system has nontrivial solutions (for instance, c1 = 2, c2 = −1, c3 = 3, and c4 = −1 ), the set is linearly
dependent, and is not a basis for M 2,2 .
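A short symbolic sketch (not part of the original solution) that recovers a nontrivial solution for Exercise 52; each column of the matrix below holds the coefficients of one unknown in the system above.

from sympy import Matrix

A = Matrix([[1, 2, 4, 12],      # (1,1) entries
            [2, -7, -9, -16],   # (1,2) entries
            [-5, 6, 11, 17],    # (2,1) entries
            [4, 2, 12, 42]])    # (2,2) entries

print(A.nullspace())   # one basis vector, a multiple of (2, -1, 3, -1)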
54. Form the equation
    c1(2/3, 5/2, 1) + c2(1, 3/2, 0) + c3(2, 12, 6) = (0, 0, 0)
which yields the homogeneous system
    (2/3)c1 + c2 + 2c3 = 0
    (5/2)c1 + (3/2)c2 + 12c3 = 0
    c1 + 6c3 = 0.
Because this system has nontrivial solutions (for instance, c1 = 6, c2 = −2, and c3 = −1), the vectors are linearly dependent. So, S is not a basis for R³.
56. Form the equation
    c1(1, 0, 0) + c2(1, 1, 0) + c3(1, 1, 1) = (0, 0, 0)
which yields the homogeneous system
    c1 + c2 + c3 = 0
    c2 + c3 = 0
    c3 = 0.
This system has only the trivial solution, so S is a basis for R³. Solving the system
    c1 + c2 + c3 = 8
    c2 + c3 = 3
    c3 = 8
yields c1 = 5, c2 = −5, and c3 = 8. So,
    u = 5(1, 0, 0) − 5(1, 1, 0) + 8(1, 1, 1) = (8, 3, 8).
58. Because a basis for R has one linearly independent vector, the dimension of R is 1.
60. Because a basis for P4 has five linearly independent
vectors, the dimension of P4 is 5.
62. Because a basis for M3,2 has six linearly independent
vectors, the dimension of M3,2 is 6.
64. Because a basis for P2 m −1 has 2m linearly independent vectors, the dimension for P2 m −1 is 2m.
66. One basis for the vector space of all 3 × 3 symmetric matrices is
    { [1 0 0; 0 0 0; 0 0 0], [0 1 0; 1 0 0; 0 0 0], [0 0 1; 0 0 0; 1 0 0],
      [0 0 0; 0 1 0; 0 0 0], [0 0 0; 0 0 1; 0 1 0], [0 0 0; 0 0 0; 0 0 1] }.
Because this basis has 6 vectors, the dimension is 6.
68. Although there are four subsets of S that contain three vectors, only three of them are bases for R3.
{(1, 3, − 2), (−4, 1, 1), (2, 1, 1)}, {(1, 3, − 2), (−2, 7, − 3), (2, 1, 1)}, {(−4, 1, 1), (−2, 7, − 3), (2, 1, 1)}
The set {(1, 3, − 2), ( −4, 1, 1), ( −2, 7, − 3)} is linearly dependent.
70. You can add any vector that is not in the span of
S = {(1, 0, 2), (0, 1, 1)}. For instance, the set
{(1, 0, 2), (0, 1, 1), (1, 0, 0)} is a basis for R .
3
72. (a) W is a line through the origin (the y-axis).
(b) A basis for W is {(0, 1)}.
(c) The dimension of W is 1.
74. (a) W is a plane through the origin.
(b) A basis for W is {( 2, 1, 0), ( −1, 0, 1)}, obtained by
letting s = 1, t = 0, and then s = 0, t = 1.
(c) The dimension of W is 2.
76. (a) A basis for W is {(5, −3, 1, 1)}.
(b) The dimension of W is 1.
78. (a) A basis for W is {(1, 0, 1, 2), ( 4, 1, 0, −1)}.
(b) The dimension of W is 2.
80. (a) True. See Theorem 4.10, page 189, and “Definition
of Dimension of a Vector Space,” page 191.
(b) False. A set of n − 1 vectors could be linearly
dependent. For instance, they can all be multiples of
each other.
82. (1) Let S = {v1,  , v n} be a linearly independent set
of vectors. Suppose, by way of contradiction, that S
does not span V. Then there exists v ∈ V such that
v ∉ span( v1,  , v n ). So, the set {v1,  , v n , v} is
linearly independent, which is impossible by
Theorem 4.10. So, S does span V, and therefore is
a basis.
(2) Let S = {v1,  , v n} span V. Suppose, by way of
contradiction, that S is linearly dependent. Then,
some v i ∈ S is a linear combination of the other
vectors in S. Without loss of generality, you can
assume that v n is a linear combination of
v1,  , v n −1, and therefore, {v1,  , v n −1} spans V.
But, n − 1 vectors span a vector space of at most
dimension n − 1, a contradiction. So, S is linearly
independent, and therefore a basis.
84. (a) Since the dimension of R 3 is three, any basis must
have exactly three vectors. S1 cannot span R3 .
(b) Four vectors in R 3 must form a linearly dependent
set.
(c) If S3 is linearly independent, it will be a basis
for R3 .
86. Let the number of vectors in S be n. If S is linearly independent, then you are done. If not, some v ∈ S is a linear combination
of other vectors in S. Let S1 = S − v. Note that span( S ) = span( S1 ) because v is a linear combination of vectors in S1. You
now consider spanning set S1. If S1 is linearly independent, you are done. If not, repeat the process of removing a vector,
which is a linear combination of other vectors in S1 , to obtain spanning set S 2 . Continue this process with S 2 . Note that this
process would terminate because the original set S is a finite set and each removal produces a spanning set with fewer vectors
than the previous spanning set. So, in at most n − 1 steps, the process would terminate leaving you with minimal spanning set,
which is linearly independent and is contained in S.
Section 4.6 Rank of a Matrix and Systems of Linear Equations
2. (a)
(6, 5, −1)
(b) [6], [5], [−1]
4. (a)
(0, 3, − 4), ( 4, 0, −1), (− 6, 1, 1)
 0 3 − 4
     
(b)  4 , 0 ,  −1
− 6  1  1
     
6. (a) A basis for the row space is {(0, 1, − 2)}.
(b) Because this matrix is already row-reduced, the rank
is 1.
8. (a) A basis for the row space is {(1, 5/2)}.
   (b) Because this matrix row reduces to
       [1 5/2; 0 0; 0 0],
   the rank of the matrix is 1.
10. (a) A basis for the row space is {(1, 0, 4/5), (0, 1, 1/5)}.
    (b) Because this matrix row reduces to
        [1 0 4/5; 0 1 1/5; 0 0 0],
    the rank of the matrix is 2.
12. (a) A basis for the row space is {(1, 0, 0, 0, 0), (0, 1, 0, 0, 0), (0, 0, 1, 0, 0), (0, 0, 0, 1, 0), (0, 0, 0, 0, 1)}.
(b) Because this matrix row reduces to the 5 × 5 identity matrix, the rank of the matrix is 5.
14. Use v1, v2, and v3 to form the rows of matrix A. Then write A in row-echelon form.
    A = [2 3 −1; 1 3 −9; 0 1 5]  →  B = [1 0 0; 0 1 0; 0 0 1]
So, the nonzero row vectors of B, w1 = (1, 0, 0), w2 = (0, 1, 0), and w3 = (0, 0, 1), form a basis for the row space of A. That is, they form a basis for the subspace spanned by S.
16. Use v1, v2, and v3 to form the rows of matrix A. Then write A in row-echelon form.
    A = [1 2 2; −1 0 0; 1 1 1]  →  B = [1 0 0; 0 1 1; 0 0 0]
So, the nonzero row vectors of B, w1 = (1, 0, 0) and w2 = (0, 1, 1), form a basis for the row space of A. That is, they form a basis for the subspace spanned by S.
18. Begin by forming the matrix whose rows are the vectors in S.
    [6 −3 6 34; 3 −2 3 19; 8 3 −9 6; −2 0 6 −5]
This matrix reduces to
    [1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 1].
So, a basis for span(S) is
    {(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)}.
(span(S) = R⁴)
20. Begin by forming the matrix whose rows are the vectors in S.
    [2 5 −3 −2; −2 −3 2 −5; 1 3 −2 2; −1 −5 3 5]
This matrix reduces to
    [1 0 0 3; 0 1 0 −13; 0 0 1 −19; 0 0 0 0].
So, a basis for span(S) is
    {(1, 0, 0, 3), (0, 1, 0, −13), (0, 0, 1, −19)}.
22. (a) A basis for the column space is {[1]}.
(b) Because this matrix is already row-reduced, the rank
is 1.
24. (a) Row-reducing the transpose of the original matrix
produces
1 0 − 2 
5


3
0 1
.
5


0
0 0
So, a basis for the column space is
{(1, 0, − 52 ), (0, 1, 53 )}.
Equivalently, a basis for the column space consists
of columns 1 and 2 of the original matrix
4  20
   
6,  −5 .
   
2 −11
(b) Because this matrix row reduces to
1 0 1 
4


3
0 1 2 
0 0 0


the rank of the matrix is 2.
26. (a) Row reducing the transpose of the original matrix
produces
1

0
0

0

0
0 0 0 0

1 0 0 0
0 1 0 0.

0 0 1 0

0 0 0 1
basis is
{[−5, 1, 0, 1] , [2, −1, 1, 0] }.
T
T
40. The only solution of the system A x = 0 is the trivial
solution. So, the solution space is
{[0, 0, 0, 0] } whose
T
dimension is 0.
{(1, 0, 0, 0, 0),
42. (a) rank( A) = rank( B) = 3
(0, 1, 0, 0, 0),
(0, 0, 1, 0, 0),
(0, 0, 0, 1, 0),
(0, 0, 0, 0, 1)}
nullity( A) = n − r = 5 − 3 = 2
(b) Choosing x3 = s and x5 = t as the free variables,
you have
x1 = − s − t
x2 = 2 s − 3t
(b) Because this matrix row reduces to
x3 = s
0 0 0 0

1 0 0 0
0 1 0 0

0 0 1 0

0 0 0 1
x4 = 5t
x5 = t.
A basis for nullspace is
{(−1, 2, 1, 0, 0), (−1, − 3, 0, 5, 1)}.
the rank of the matrix is 5.
28. Solving the system Ax = 0 yields only the trivial
solution x = (0, 0). So, the dimension of the solution
space is 0. The solution space consists of the zero vector
itself.
30. Solving the system A x = 0 yields solutions of the form
( −4s − 2t , s, t ), where s and t are any real numbers. The
dimension of the solution space is 2, and a basis is
{[−4, 1, 0] , [−2, 0, 1] }.
T
38. Solving the system A x = 0 yields solutions of the form
(2s − 5t , − s + t , s, t ), where s and t are any real
numbers. The dimension of the solution set is 2, and a
So, a basis for the column space is
1

0
0

0

0
T
32. Solving the system A x = 0 yields solutions of the form
(−4t , t , 0), where t is any real number. The dimension of
the solution space is 1, and a basis is
{[− 4, 1, 0] }.
T
(c) A basis for the row space of A (which is equal to the
row space of B) is
{(1, 0, 1, 0, 1), (0, 1, − 2, 0, 3), (0, 0, 0, 1, − 5)}.
(d) A basis for the column space A (which is not the
same as the column space of B) is
{(−2, 1, 3, 1), (−5, 3, 11, 7), (0, 1, 7, 5)}.
(e) Linearly dependent
(f ) (i) and (iii) are linearly independent, while (ii) is
linearly dependent.
44. (a) This system yields solutions of the form
( 2s − 3t , s, t ), where s and t are any real numbers
and a basis for the solution space is
{(2, 1, 0), (−3, 0, 1)}.
(b) The dimension of the solution space is 2.
34. Solving the system A x = 0 yields solutions of the form
(2s − t , s, t ), where s and t are any real numbers. The
dimension of the solution space is 2, and a basis is
{[−1, 0, 1] , [2, 1, 0] }.
T
T
36. Solving the system Ax = 0 yields solutions of the form [t, 16t]^T, where t is any real number. The dimension of the solution space is 1, and a basis is {[1, 16]^T}.
46. (a) This system yields solutions of the form ((5/8)t, −(15/8)t, (9/8)t, t), where t is any real number. A basis for the solution space is {(5/8, −15/8, 9/8, 1)}, or {(5, −15, 9, 8)}.
(b) The dimension of the solution space is 1.
48. (a) This system yields solutions of the form (−t + 2s − r, −4t − 8s − (1/3)r, r, s, t), where r, s, and t are any real numbers. A basis for the solution space is
    {(−1, −4, 0, 0, 1), (2, −8, 0, 1, 0), (−1, −1/3, 1, 0, 0)}.
(b) The dimension of the solution space is 3.
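A short symbolic sketch (not from the manual) of how a solution-space basis and the rank–nullity count can be read off with software; the matrix A below is an illustration chosen so that its solution space matches the pattern in Exercise 44 (the exercise's own matrix is not reproduced in the manual).

from sympy import Matrix

A = Matrix([[1, -2, 3],
            [2, -4, 6]])          # rank 1

basis = A.nullspace()             # basis for the solution space of Ax = 0
print(basis)                      # two vectors: (2,1,0)^T and (-3,0,1)^T
print(A.shape[1] - A.rank())      # nullity = n - rank = 3 - 1 = 2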
50. The system A x = b is consistent because its augmented
matrix reduces to
 1 2 − 4 −1

.
0 0
0 0
The solutions of A x = b are of the form
(−1 − 2s + 4t , s, t ), where s and t are any real numbers.
54. This system A x = b is inconsistent because its
augmented matrix reduces to
 1 0 4 2 0


0 1 −2 4 0.
0 0 0 0 1


56. (a) The system A x = b is consistent because its
augmented matrix reduces to
 1 0 4 −5 6 0


0 1 2 2 4 1 .
0 0 0 0 0 0


(b) The solutions of the system are of the form
(−6t + 5s − 4r , 1 − 4t − 2s − 2r, r , s, t ),
where r , s , and t are any real numbers. That is,
0
−4
 5
−6
 
 
 
 
 1
−2
−2
−4
x = 0 + r  1 + s  0 + t  0,
 
 
 
 
0
 0
 1
 0
 
 
 
 
0
0
0
 
 
 
 1
That is,
where
−1
− 2
4
 
 
 
x =  0 + s  1 + t c ,
 0
 0
 1
 
 
 
0
−4
 5
−6
 
 
 
 
−
−
1
2
2
 
 
 
−4






x p = 0 and x h = r 1 + s 0 + t  0.
 
 
 
 
0
 0
 1
 0
 
 
 
 
0
 0
 0
 1
where
−1
 
xp =  0
 0
 
− 2
4
 
 
and xn = s  1 + t 0.
 0
 1
 
 
52. (a) The system A x = b is consistent because its
augmented matrix reduces to
 1 −2 0 4


0 0 1 0.
0 0 0 0


(b) The solutions of A x = b are of the form
(4 + 2t , t , 0), where t is any real number. That is,
4
2
 
 
x = 0 + t  1,
0
0
 
 
58. The vector b is not in the column space of A because the
linear system A x = b is inconsistent.
60. The vector b is in the column space of A if the equation
A x = b is consistent. Because A x = b has the solution
− 5 
 4
x =  34 ,
− 1 
 2 
b is in the column space of A. Furthermore,
 1
3
0
 1
 
 
 
 
b = − 54 −1 + 34  1 − 12 0 =  2.
 2
0
 1
−3
 
 
 
 
where
4
 
x p = 0
0
 
2
 
and x h = t  1 .
0
 
62. The vector b is in the column space of A if the equation
A x = b is consistent. Because A x = b has the solution
 −1
 
x =  2 ,
− 3
 
b is in the column space of A. Furthermore,
 5
4
 4
 − 9
 
 
 


b = − − 3 + 2  1 − 3− 2 =  11.
 1
0
 8
− 25
 
 
 


64. Many examples are possible. For instance,
    [1 0; 0 0][0 0; 0 1] = [0 0; 0 0].
    (rank 1)(rank 1) = rank 0
66. Let aij  = A be an m × n matrix in row-echelon form.
The nonzero row vectors r1,  , rk of A have the form (if
the first column of A is not all zero)
r1 = (e11 ,  , e1 p ,  , e1q , )
r2 = (0,  , 0, e2 p ,  , e2 q , )
r3 = (0,  , 0, 0,  , 0, e3q , )
and so forth, where e11 , e2 p , e3q denote leading ones.
Then the equation
c1r1 + c2r2 +  + ck rk = 0
1 2
70. For n = 2, 
 has rank 2.
3 4
 1 2 3


For n = 3, 4 5 6 has rank 2.
7 8 9


In general, for n ≥ 2, the rank is 2, because rows
3,  , n, are linear combinations of the first two rows.
For example, R3 = 2 R2 − R1.
72. Let x ∈ N(A). Then Ax = 0 ⇒ AᵀAx = 0 ⇒ x ∈ N(AᵀA).
74. (a) True. See Theorem 4.13, page 196.
(b) False. The dimension of the solution space of
A x = 0 for m × n matrix of rank r is n − r . See
Theorem 4.17, page 202.
76. (a) True. The columns of A become rows of the
transpose, AT . So, the columns of A span the same
space as the rows of AT .
(b) True. The rows of A become columns of the
transpose, AT . So, the rows of A span the same space
as the columns of AT .
78. (a) The row space and column space of a matrix have
the same dimension, so the column space has a
dimension of 2.
implies that
(b) 2
c1e11 = 0, c1e1 p + c2e2 p = 0, c1e1q + c2e2 q + c3e3q = 0
(c)
and so forth. You can conclude in turn that c1 = 0,
c2 = 0,  , ck = 0, and so the row vectors are linearly
(d) 3
independent.
68. Suppose that the three points are collinear. If they are on
the same vertical line, then x1 = x2 = x3. So, the matrix
has two equal columns, and its rank is less than 3.
Similarly, if the three points lie on the nonvertical line
y = mx + b, you have
 x1

 x2
 x3

(rank ) + (nullity) = (number of columns), so the
nullity is 3.
80. Let A and B be two m × n row-equivalent matrices. The
dependency relationships among the columns of A can be
expressed in the form Ax = 0, while those of B in the
form Bx = 0. Because A and B are row-equivalent,
Ax = 0 and Bx = 0 have the same solution sets, and
therefore the same dependency relationships.
y1 1
 x1 mx1 + b 1



y2 1 =  x2 mx2 + b 1.
 x3 mx3 + b 1
y3 1


Because the second column is a linear combination of
the first and third columns, this determinant is zero, and
the rank is less than 3.
On the other hand, if the rank of the matrix
 x1

 x2
 x3

y1 1

y2 1
y3 1
is less than 3, then the determinant is zero, which implies
that the three points are collinear.
Section 4.7 Coordinates and Change of Basis
 1
 
2. − 3
 0
 
− 6
 
 12
4. − 4
 
 9
 
 − 8
−1
6. Because [x]B =   , you can write
 4
x = −( −2, 3) + 4(3, − 2) = (14, −11)
 14
which implies that the coordinates of x relative to the standard basis S are [x]S =  .
−11
2
 
8. Because [x]B = 0, you can write
4
 
(
)
(
)
(
) (
)
x = 2 34 , 52 , 23 + 0 3, 4, 72 + 4 − 23 , 6, 2 = − 92 , 29, 11
− 9 
 2
which implies that the coordinates of x relative to the standard basis S are [xS ] =  29.
 11
 
−2
 
3
10. Because [x]B =  , you can write
 4
 
 1
x = −2( 4, 0, 7, 3) + 3(0, 5, −1, −1) + 4( −3, 4, 2, 1) + 1(0, 1, 5, 0) = ( −20, 32, − 4, − 5)
which implies that the coordinates of x relative to the standard basis S are
−20


32
[x]S =  .
−4


 −5
12. Begin by writing x as a linear combination of the vectors in B.
x = ( −17, 22) = c1 ( −5, 6) + c2 (3, − 2)
Equating corresponding components yields the following system of linear equations.
− 5c1 + 3c2 = −17
6c1 − 2c2 =
22
4
The solution of this system is c1 = 4 and c2 = 1. So, x = 4( − 5, 6) + (3, − 2) and [x]B =  .
 1
14. Begin by writing x as a linear combination of the vectors in B.
(
(
)
)
(
)
(
)
x = 3, − 12 , 8 = c1 32 , 4, 1 + c2 34 , 52 , 0 + c3 1, 12 , 2
Equating corresponding components yields the following system of linear equations.
3
c
2 1
+ 34 c2 +
c3 =
3
4c1 + 52 c2 + 12 c3 = − 12
+ 2c3 =
c1
8
 2
 
The solution of this system is c1 = 2, c2 = − 4, and c3 = 3. So, x = 2 32 , 4, 1 − 4 34 , 52 , 0 + 3 1, 12 , 2 and [x]B = −4.
 3
 
(
)
(
)
(
)
16. Begin by writing x as a linear combination of the vectors in B.
x = (0, − 20, 7, 15) = c1 (9, − 3, 15, 4) + c2 (3, 0, 0, 1) + c3 (0, − 5, 6, 8) + c4 (3, − 4, 2, − 3)
Equating corresponding components yields the following system of linear equations.
    9c1 + 3c2 + 3c4 = 0
    −3c1 − 5c3 − 4c4 = −20
    15c1 + 6c3 + 2c4 = 7
    4c1 + c2 + 8c3 − 3c4 = 15
The solution of this system is c1 = −1, c2 = 1, c3 = 3, and c4 = 2. So,
    (0, −20, 7, 15) = −1(9, −3, 15, 4) + 1(3, 0, 0, 1) + 3(0, −5, 6, 8) + 2(3, −4, 2, −3)
and [x]B = [−1, 1, 3, 2]^T.
18. Begin by forming the matrix
1 5
[ B ′ B] = 
1 0

1
1 6 0
22. Begin by forming the matrix
 1
2 1 0 0

9 0 1 0
−1 −4 −7 0 0 1


[B′ B] =  3
and then use Gauss-Jordan elimination to produce
 1 0 6 −5
I 2 P −1  = 
.
1
0 1 −1
So, the transition matrix from B to B′ is
 6 −5
P −1 = 
.
−1 1
20. Begin by forming the matrix
    [B′ B] = [1 0 1 1; 0 1 1 0].
2
7
and then use Gauss-Jordan elimination to produce
 1 0 0 −13 6 4


I3 P −1 = 0 1 0 12 −5 −3.
0 0 1 −5 2 1


So, the transition matrix from B to B′ is
P
−1
−13 6 4


=  12 −5 −3.
 −5 2
1

Because this matrix is already in the form I 2 P−1, you
see that the transition matrix from B to B′ is
1 1
P −1 = 
.
1 0
24. Begin by forming the matrix
32. Begin by forming the matrix
 1 0 0 1 2 5
′
=
B
B
[
] 0 1 0 3 −1 6.
0 0 1 2 2 1


 1 0 −1 3 1 1
′
B
B
=
[
]  1 1 4 2 1 2.
−1 2 0 1 2 0


Because this matrix is already in the form I 3 P −1  , the
and then use Gauss-Jordan elimination to produce
transition matrix from B to B′ is
I 3
 1 2 5


−1
P = 3 −1 6.
2 2 1


P
 1 −1 −2 3
[ B ′ B] = 

1 2
2 0
and then use Gauss-Jordan elimination to produce
I 2
1
2
5
2
1
.
−2
So, the transition matrix from B to B′ is
1
P −1 =  52
 2
12 
11

6 .
11 
1
11 
So, the transition matrix from B to B′ is
26. Begin by forming the matrix
1 0
P −1  = 
0 1
27
8
1 0 0
11
11

19
15
P −1  = 0 1 0
11
11
0 0 1 − 6 − 3
11
11

1
.
−2
−1
8
 27
11
 11
19
15
=  11
11
− 6 − 3
 11
11
12 
11

6
.
11
1
11

34. Begin by forming the matrix
1

1
[B′ B] = 
1

1
0 0 0 1 0 0 0

1 0 0 0 1 0 0
1 1 0 0 0 1 0

1 1 1 0 0 0 1
and then use Gauss-Jordan elimination to produce
28. Begin by forming the matrix
I 4
2 − 2
 3 −3
 B1 B = 

− 3 − 3 − 2 − 2
1

0
P −1  = 
0

0
0 0 0
0 0

0 0
.
0 −1 1 0

0 0 −1 1
1
0
1 0 0 −1
1
0 1 0
0 0 1
and then use Gauss-Jordan elimination to produce
So, the transition matrix from B to B′ is
 1 0 2 0
3
.
I 2 P −1  = 
0 1 0 23 


 1 0 0

−1 1 0
P −1 = 
 0 −1 1

 0 0 −1
 2 0
.
So, the transition matrix from B to B1 is P −1 =  3
 0 23 


0

0
.
0

1
30. Begin by forming the matrix
 2 0 − 3 1 0 0
′
B
B
=
[
] −1 2 2 0 1 0
 4 1 1 0 0 1


and then use Gauss-Jordan elimination to produce
I 3
1 0 0
0 − 19

1
14
P  = 0 1 0
3
27
0 0 1 − 1 − 2
3
27

−1
2
9

1
− 27 .
4
27 
So, the transition matrix from B to B′ is
P
−1
 0 −1
9

14
=  13
27
− 1 − 2
27
 3
2
9

1
− 27 .
4
27 
36. Begin by forming the matrix
 2 3 0 2 0

 4 −1 0 −1 1
[B′ B] = −2 0 −2 2 2
 1 1 4 1 −3

5 1 1
 0 2
1 0 0 0 0

0 1 0 0 0
0 0 1 0 0

0 0 0 1 0

0 0 0 0 1
and then use Gauss-Jordan elimination to produce
I 5
1

0
P −1  = 0
0

0
0 0 0 0
1 0 0 0
0 1 0 0
0 0 1 0
0 0 0 1
12
157
45
157
17
− 157
1
− 157
4
− 157
32
157
37
− 157
7
157
47
314
31
314
5
314
99
− 314
3
157
287
628
49
628
7 
− 157

13
157 
23 
157 
25 
− 314

57

314 
10
157
41
− 157
12
157
103
314
59
− 314
So, the transition matrix from B to B′ is
 12
 157
 45
 157
17
P −1 = − 157
 1
− 157
 4
− 157
32
157
37
− 157
7
157
47
314
31
314
5
314
99
− 314
3
157
287
628
49
628
10
157
41
− 157
12
157
103
314
59
− 314
7 
− 157

13 
157

23
.
157

25
− 314


57
314 

2 6
1 32
 1 0 −126 −90
38. (a) [B′ B] = 
  
 = I
4
3
1 31 −2 3
0 1
 1 0 − 16
 2 6 1 32
(b) [ B B′] = 
  
2
0 1
−2 3 1 31
9
− 1
(c) PP −1 =  6
 92

−5
 = [I
7
−126 −90
P −1   P −1 = 

4
3

− 1
P]  P =  62
 9
−5
.
7
−5 −126 −90
 1 0

 = 

4
3
7 
0 1
− 1
(d) [x]B = P[x]B ′ =  6
 92

 14 
−5  2
  =  3 
− 59

7 −1
 9
1 0 0
2 0 1 1 1 0



′
40. (a) [B B] = 2 1 0 1 −1 0  0 1 0
0 0 1
0 1 1 1 1 1



1
4
1
2
1
2
− 14
− 12
3
2
 14
− 14 


1 =  I P −1   P −1 = 1
2


2
1
1
2
2
− 14
− 12
3
2
− 14 

1
2
1
2
1
1
1
1
1 0 0
 2
2
1 1 0 2 0 1
2
2
2
2






1
1
1
(b) [ B B′] = 1 −1 0 2 1 0  0 1 0 0 − 2 2  = [ I P]  P =  0 − 2 12 .
0 0 1 −2
−2
1 1 1 0 1 1
1 0
1 0




1 1  1
 2
2 2 4


(c) PP −1 =  0 − 12 12   12
−2
1 0  12

− 14
− 12
3
2
− 14 
 1 0 0



1
=
0 1 0
2
1
0 0 1


2
1 1
 2
2
 6
2 2  

 
 
(d) [x]B = P[x]B′ =  0 − 12 12  3 = −1
−2
−1
1 0  1
 

42. (a)
0
1 − 9 1
3 − 3


44. (a) B B = 0
3 − 3 −1
1 9
3
0
3 9
1 −1

 1 4 −2 1 2 −4
[B′ B] =  2 1 5 3 −5 2
−2 −4 8 4 2 −6


 1 0 0 − 11
16

25
I P −1  = 0 1 0
32
0 0 1
23
32

− 11
 16
25
So, P −1 =  32

23
 32
55
− 16
45
32
3
32
55
− 16
45
32
3
32
1
1 0 0

I P −1  = 0 1 0



0 0 1
73 
− 16

83
− 32

29 
− 32

73 
16

83 
− 32
.

29
− 32 
− 33
 13
37
So, P = − 13

30
− 13
86
− 13
85
− 13
77
− 13
86
− 13
85
− 13
77
− 13
3
2
−1
P =  76

 32
11
6
3
2
3
2

7
6

− 11

6
− 76
11
6
3
2
3
2
7
.
6

11
− 6 
1 3 −3
0
 1 −9


(b) B B1  = −1
1 9 0
3 − 3
 9
1 −1 3
0
3

80 
13

57
13 
55 
13 
69
1 0 0
185

21
[I P] = 0 1 0 − 74

27
0 0 1 370
80 
13

57 
.
13

55

13 
3
− 370
27
74
108
370
3
10

0

3
− 10

(c) Using a graphing utility, you have P P −1 = I .
(c) Using a graphing utility, you have PP
−1
= I.
193 
−1
 13 
 
x B = P x B′ = P  0 =  151
13 
140 
 2
 
 13 
(d) [ ]
− 76
So, the transition matrix from B to B1 is
1 4 −2
 1 2 −4


′
(b) [ B B ] =  3 −5 2 2
1 5
4 2 −6 −2 −4 8


 1 0 0 − 33
13

37
[I P] = 0 1 0 − 13
0 0 1 − 30
13

3
2
7
6
3
2
[]
{
}
− 567 
 − 5
 370 
 
3
(d) [x]B = P[x]B1 = P − 4 =  − 74


 1
339
 185

 
46. The standard basis for P3 is S = {1, x, x², x³}, and because p = −2(1) − 3(x) + 0(x²) + 4(x³), it follows that
    [p]S = [−2, −3, 0, 4]^T.
48. The standard basis for P3 is S = {1, x, x², x³}, and because p = 4(1) + 11(x) + 1(x²) + 2(x³), it follows that
    [p]S = [4, 11, 1, 2]^T.
50. The standard basis in M 3,1 is
 1 0 0
     
S = 0 ,  1 , 0
     
0 0  1
−1
54. (a) [ B′ B] = [ B′ I n ]  I n ( B′)  = I n P −1 


 ( B′)
−1
= P −1
(b) [B′ B] = [ I n B]  B = P −1
(c) [ B B′] = [ I n B′]  B′ = P
and because
 1
0
0
 
 
 
X = 20 − 1 1 + 40
0
0
 1
 
 
 
(d) [ B B′] = [ B I n ]  I n B −1  = [ I n P]
 B −1 = P
56. (a) True. If P is the transition matrix from B1 to B, then
P[x]B1 = [x]B . Multiplying both sides by P −1 you
it follows that
 2
X
=
[ ]S −1.
 4
 
see that [x]B1 = P⁻¹[x]B, so P⁻¹ is the transition matrix from B to B1.
(b) True. See discussion before Example 5, page 214.
(c) False. [ p]S = [− 3 1 5] .
T
52. The standard basis in M 3,1 is
 1 0 0
     
S = 0 ,  1 , 0
     
0 0  1
and because
 1
0
0
 
 
 
X = 10 + 0 1 − 40
0
0
 1
 
 
 
it follows that
 1
[ X ]S =  0.
−4
 
58. Let P be the transition matrix from B′′ to B′ and let Q be the transition matrix from B′ to B. Then for any vector x, the
coordinate matrices with respect to these bases are related as follows.
[x]B′ = P[x]B′′
and
[x]B = Q[x]B′
Then the transition matrix from B′′ to B is QP because
[x]B = Q[x]B′ = QP[x]B′′ .
So, the transition matrix from B to B′′, which is the inverse of the transition matrix from B′′ to B, is equal to
(QP)
−1
= P−1Q−1.
Section 4.8 Applications of Vector Spaces
2. (a) If y = e x , then y′′′ = e x and y′′′ + y = 2e x ≠ 0. So, e x is not a solution of the equation.
(b) If y = e − x , then y′′′ = − e− x and y ′′′ + y = 0. So, e − x is a solution of the equation.
(c) If y = e− 2 x , then y ′′′ = − 8e − 2 x and y ′′′ + y = − 7e− 2 x ≠ 0. So, e − 2 x is not a solution of the equation.
(d) If y = 2e − x , then y ′′′ = − 2e− x and y ′′′ + y = 0. So, 2e − x is a solution of the equation.
4. (a) If y = e3 x , then y′ = 3e3 x and y′′ = 9e3 x . So, y′′ − 6 y′ + 9 y = 9e3 x − 6(3e3 x ) + 9(e3 x ) = 0 and e3 x is a solution.
(b) If y = xe3 x , then y′ = (3 x + 1)e3 x and y′′ = (9 x + 6)e3 x . So,
y′′ − 6 y′ + 9 y = (9 x + 6)e3 x − 6(3 x + 1)e3 x + 9 xe3 x = 0 and xe3 x is a solution.
(c) If y = x 2e3 x , then y′ = (3 x 2 + 2 x)e3 x and y′′ = (9 x 2 + 12 x + 2)e3 x . So,
y′′ − 6 y′ + 9 y = (9 x 2 + 12 x + 2)e3 x − 6(3x 2 + 2 x)e3 x + 9 x 2e3 x ≠ 0.
So, x 2e3 x is not a solution of the equation.
(d) If y = ( x + 3)e3 x , then y′ = (3x + 10)e3 x and y′′ = (9 x + 33)e3 x . So,
y′′ − 6 y′ + 9 y = (9 x + 33)e3 x − 6(3 x + 10)e3 x + 9( x + 3)e3 x = 0 and ( x + 3)e3 x is a solution.
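These substitution checks are easy to repeat symbolically. Here is a short sketch (not part of the original solution) for part (b) of Exercise 4.

from sympy import symbols, exp, diff, simplify

x = symbols('x')
y = x * exp(3 * x)

residual = diff(y, x, 2) - 6 * diff(y, x) + 9 * y
print(simplify(residual))   # 0, so y = x e^{3x} satisfies y'' - 6y' + 9y = 0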
6. (a) If y = 3 cos x, y (4) = 3 cos x and y (4) − 16 y = − 45 cos x ≠ 0. So, 3 cos x is not a solution of the equation.
(b) If y = 3 cos 2 x, then y (4) = 48 cos 2 x and y (4) − 16 y = 0. So, 3 cos 2x is a solution of the equation.
(c) If y = e− 2 x , then y (4) = 16e− 2 x and y (4) − 16 y = 0. So, e − 2 x is a solution of the equation.
(d) If y = 3e 2 x − 4 sin 2 x, then y (4) = 48e 2 x − 64 sin 2 x and y (4) − 16 y = 0. So, 3e2 x − 4 sin 2 x is a solution of the
equation.
8. (a) If y = e^(x − x²), then y′ = (1 − 2x)e^(x − x²) and y′ + (2x − 1)y = 0. So, e^(x − x²) is a solution of the equation.
   (b) If y = 2e^(x − x²), then y′ = (2 − 4x)e^(x − x²) and y′ + (2x − 1)y = 0. So, 2e^(x − x²) is a solution of the equation.
   (c) If y = 3e^(x − x²), then y′ = (3 − 6x)e^(x − x²) and y′ + (2x − 1)y = 0. So, 3e^(x − x²) is a solution of the equation.
   (d) If y = 4e^(x − x²), then y′ = (4 − 8x)e^(x − x²) and y′ + (2x − 1)y = 0. So, 4e^(x − x²) is a solution of the equation.
10. (a) If y = x, then y′ = 1 and y′′ = 0. So, xy′′ + 2 y′ = x(0) + 2(1) ≠ 0, and y = x is not a solution.
(b) If y =
1
1
1
2
2
 1
, then y′ = − 2 and y′′ = 3 . So, xy′′ + 2 y′ = x 3  + −2 − 2  = 0, and y = is a solution.
x
x
x
x
x
x
 


(
)
(
)
(c) If y = xe x , then y′ = xe x + e x and y′′ = xe x + 2e x . So, xy′′ + 2 y′ = x xe x + 2e x + 2 xe x + e x ≠ 0, and y = xe x
is not a solution.
(
)
(
)
(d) If y = xe− x , then y′ = e − x − xe− x and y′′ = xe − x − 2e− x . So, xy′′ + 2 y′ = x xe− x − 2e− x + 2 e− x − xe− x ≠ 0, and
−x
y = xe is not a solution.
( ) = 0, and y = 3e is a solution.
(b) If y = xe , then y′ = 2 x e + e . So, y′ − 2 xy = 2 x e + e − 2 x( xe ) ≠ 0, and y = xe is not a solution.
2
2
x2
2 x2
2
12. (a) If y = 3e x , then y′ = 6 xe x . So, y′ − 2 xy = 6 xe x − 2 x 3e x
x2
2 x2
2
x2
x2
x2
(
x2
)
(c) If y = x 2e x , then y′ = x 2e x + 2 xe x . So, y′ − 2 xy = x2ex + 2 xex − 2 x x2ex ≠ 0, and y = x 2e x is not a solution.
(d) If y = xe − x , then y′ = e− x − xe − x . So, y′ − 2 xy = e − x − xe − x − 2 x( xe − x ) ≠ 0, and y = xe − x is not a solution.
14. W(e^(3x), sin 2x) = det[e^(3x), sin 2x; 3e^(3x), 2 cos 2x] = 2e^(3x) cos 2x − 3e^(3x) sin 2x
16. W(e^(x²), e^(−x²)) = det[e^(x²), e^(−x²); 2xe^(x²), −2xe^(−x²)] = −4x
−sin x
x
cos x
18. W ( x, − sin x, cos x) = 1 −cos x
2
x2
ex
22. W x 2 , e x , x 2e x = 2 x
2 xe x
(
)
2
(
)
24. W x, x 2 , e x , e− x =
2
2
x
x2
ex
e− x
x
−x
−e
0
2 ex
e− x
0
0 ex
2
x2 1
x
=
1 2 x 1 −1
− e− x
0
0 1 −1
x ex
sin x
cos x
1 e
x
cos x
− sin x
0 e
x
sin x
x 2e
x
0
0
1
x
0
0
x
− sin x
− cos x
0 ex
− cos x
sin x
2e
1
x
x2
2
1
1 2 x 0 −1
=
0
2 2
1
0
0 0 −1
= −1( 4 x 2 + 4 − 2 x 2 ) = − 2 x 2 − 4
− sin x − cos x
− cos x
0 e
ex
2
1
2 1
2e
e
e x = −2 x
2e x + 4 xe x + x 2e x
0
0 ex
=
0
−x
ex
= −2 x 2e x + x ( 2 x 4 − x3 − 3x 2 + x + 3)
2 xe x + x 2e x
2e x + 4 x 2e x
26. W ( x, e x , sin x, cos x) =
e− x
x 2e x
2
1 2x e
x
20. W ( x, e − x , e x ) = 1 −e − x
−sin x = x
sin x −cos x
0
x
2e x
0
0
= xe
x
− sin x
− cos x − 1 e
e
x
− cos x
sin x
e
x
0
0
− sin x
− cos x
− cos x
x
sin x
= 2 xe ( − sin x − cos x) − 2e ( − sin x − cos x)
2
x
2
2
x
2
= − 2 xe x + 2e x
28. First calculate the Wronskian of the two functions.
W (e ax , xe ax ) =
(
e ax
ae
xe ax
(ax + 1)eax
ax
= ( ax + 1)e 2 ax − axe 2 ax = e 2 ax
)
Because W eax , xeax ≠ 0 and the functions are solutions to y′′ − 2ay′ + a 2 y = 0, they are linearly independent.
30. First, calculate the Wronskian of the two functions.
    W(e^(ax) cos bx, e^(ax) sin bx) = det[e^(ax) cos bx, e^(ax) sin bx; e^(ax)(a cos bx − b sin bx), e^(ax)(a sin bx + b cos bx)]
    = be^(2ax) ≠ 0, because b ≠ 0
Because these functions satisfy the differential equation y′′ − 2ay′ + ( a 2 + b 2 ) y = 0, they are linearly independent.
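A short symbolic sketch (not from the manual) of the Wronskian computation in Exercise 30; a and b are treated as symbolic constants with b ≠ 0.

from sympy import symbols, exp, cos, sin, wronskian, simplify

x, a, b = symbols('x a b')
f = exp(a * x) * cos(b * x)
g = exp(a * x) * sin(b * x)

print(simplify(wronskian([f, g], x)))   # simplifies to b*exp(2*a*x), nonzero when b != 0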
32. (a) y = e 2 x sin x  y′ = (cos x + 2 sin x)e 2 x , y′′ = ( 4 cos x + 3 sin x)e 2 x  y′′ − 4 y′ + 5 y = 0
y = e 2 x cos x  y′ = ( 2 cos x − sin x)e 2 x , y′′ = (3 cos x − 4 sin x)e2 x  y′′ − 4 y′ + 5 y = 0
e 2 x sin x
(b) Because W (e 2 x sin x, e 2 x cos x) =
(cos x + 2 sin x)e
e 2 x cos x
x
( 2 cos x − sin x)e 2 x
= e 4 x ≠ 0,
the set is linearly independent.
(c) y = C1e 2 x sin x + C2e 2 x cos x
34. (a) y = 1  y′′′ = y′′ = y′ = 0
 y′′′ + 4 y′ = 0
y = 2 cos 2 x  y′ = − 4 sin 2 x, y′′ = − 8 cos 2 x, y′′′ = 16 sin 2 x
 y′′′ + 4 y′ = 0
y = 2 + cos 2 x  y′ = − 2 sin 2 x, y′′ = − 4 cos 2 x, y′′′ = 8 sin 2 x
 y′′′ + 4 y′ = 0
(b) Because
2 cos 2 x 2 + cos 2 x
1
W (1, 2 cos 2 x, 2 + cos 2 x) = 0 − 4 sin 2 y
− 2 sin 2 x
0 − 8 cos 2 x
− 4 cos 2 x
= 16 sin 2 x cos 2 x − 16 sin 2 x cos 2 x
= 0,
the set is linearly dependent.
36. (a) y = e − x  y ′ = − e − x , y ′′ = e − x , y ′′′ = − e − x  y ′′′ + 3 y ′′ + 3 y ′ + y = 0
y = xe − x  y ′ = (1 − x )e − x , y ′′ = ( x − 2)e − x , y ′′′ = (3 − x )e − x  y ′′′ + 3 y ′′ + 3 y ′ + y = 0
y = x 2 e − x  y ′ = ( 2 x − x 2 )e − x , y ′′ = ( x 2 − 4 x + 2)e − x , y ′′′ = ( − x 2 + 6 x − 6)e − x  y ′′′ + 3 y ′′ + 3 y ′ + y = 0
(b) Because
e− x
xe − x
W (e − x , xe − x , x 2e − x ) = −e − x
( 2 x − x 2 )e − x
( x − 2)e− x ( x 2 − 4 x + 2)e− x
(1 − x)e− x
e− x
= e
x2
x
1
−3 x
x 2e − x
−1 1 − x
2x − x2
1
x − 2 x2 − 4 x + 2
1
x
x2
= e −3 x 0
1
2x
0 −2 −4 x + 2
= 2e −3 x ≠ 0,
the set is linearly independent.
(c) y = C1e− x + C2 xe− x + C3 x 2 e− x
38. (a) y = 1  y ′′ = y ′′′ = y (4) = 0  y (4) − 2 y ′′′ + y ′′ = 0
y = x  y ′′ = y ′′′ = y (4) = 0  y (4) − 2 y ′′′ + y ′′ = 0
y = e x  y ′′ = y ′′′ = y (4) = e x  y (4) − 2 y ′′′ + y ′′ = 0
y = xe x  y ′′ = ( x + 2)e x , y ′′′ = ( x + 3)e x , y (4) = ( x + 4)e x  y (4) − 2 y ′′′ + y ′′ = 0
(b) Because
x ex
1
W (1, x, e x , xe x ) =
0 1 e
x
0 0 ex
0 0 ex
xe x
( x + 1)e x e x ( x + 2)e x
=
= e 2 x ( x + 3) − e 2 x ( x + 2) = e 2 x ≠ 0,
( x + 2)e x e x ( x + 3)e x
( x + 3)e x
the set is linearly independent.
(c) y = C1 + C2 x + C3e x + C4 xe x
40. Proving that { y1 , y2} is linearly independent if and only if W ( y1 , y2 ) ≠ 0 is equivalent to proving that { y1 , y2} is linearly
dependent if and only if W ( y1 , y2 ) = 0.
To prove one direction, assume {y1, y2} is linearly dependent. By the Corollary to Theorem 4.8 on page 183, one of the functions is a scalar multiple of the other, say y2 = cy1. Then
W(y1, y2) = W(y1, cy1) = det[ y1   cy1 ; y1′   cy1′ ] = 0.
To prove the other direction, assume W(y1, y2) = 0. Then the column vectors [y1; y1′] and [y2; y2′] are linearly dependent (see Summary of Equivalent Conditions for Square Matrices, page 204). So, [y1; y1′] = c[y2; y2′], which gives y1 = cy2, and {y1, y2} is linearly dependent.
42. No. For instance, consider the nonhomogeneous differential equation y″ = 1. Clearly, y = x²/2 is a solution, whereas the scalar multiple 2(x²/2) is not.
44. The graph of the equation x² = 6y is a parabola opening upward, with the vertex at the origin. [Graph: x² = 6y]
46. The graph of the equation x²/3 + y²/5 = 1 is an ellipse centered at the origin with major axis falling along the y-axis. [Graph: 5x² + 3y² − 15 = 0]
48. The graph of the equation x²/36 − y²/49 = 1 is a hyperbola centered at the origin with transverse axis along the x-axis. [Graph: x²/36 − y²/49 = 1]
50. The graph of the equation (y − 3)² = 4(x − 3) is a parabola opening to the right, with the vertex at (3, 3). [Graph: y² − 6y − 4x + 21 = 0]
52. The graph of the equation (x − 1)²/(1/4) + y² = 1 is an ellipse with the center at (1, 0). [Graph: 4x² + y² − 8x + 3 = 0]
54. The graph of the equation (y − 1/2)²/2 − (x + 2)²/4 = 1 is a hyperbola centered at (−2, 1/2), with a vertical transverse axis. [Graph: 4y² − 2x² − 4y − 8x − 15 = 0]
56. The graph of the equation (x − 3)² + y² = 1/4 is a circle with the center at (3, 0) and a radius of 1/2. [Graph: 4y² + 4x² − 24x + 35 = 0]
58. The graph of the equation (y + 3)² = 4(−2)(x + 2) is a parabola that opens to the left, with vertex at (−2, −3). [Graph: y² + 8x + 6y + 25 = 0]
60. −2x² + 3xy + 2y² + 3 = 0
cot 2θ = (a − c)/b = −4/3 ⟹ θ ≈ −18.43°
Matches graph (b).
62. x² − 4xy + 4y² + 10x − 30 = 0
cot 2θ = (a − c)/b = (1 − 4)/(−4) = 3/4 ⟹ θ ≈ 26.57°
Matches graph (d).
64. Begin by finding the rotation angle θ, where
cot 2θ = (a − c)/b = (0 − 0)/1 = 0, implying that θ = π/4.
So, sin θ = 1/√2 and cos θ = 1/√2. By substituting
x = x′ cos θ − y′ sin θ = (1/√2)(x′ − y′) and
y = x′ sin θ + y′ cos θ = (1/√2)(x′ + y′)
into xy − 8x − 4y = 0 and simplifying, you obtain
(x′)²/2 − (y′)²/2 − 12x′/√2 + 4y′/√2 = 0.
In standard form,
(x′ − 6√2)²/64 − (y′ − 2√2)²/64 = 1.
This is the equation of a hyperbola with a transverse axis along the x′-axis. [Graph: rotated x′y′-axes at 45°]
66. Begin by finding the rotation angle θ, where
cot 2θ = (a − c)/b = (5 − 5)/(−2) = 0, implying that θ = π/4.
So, sin θ = 1/√2 and cos θ = 1/√2. By substituting
x = x′ cos θ − y′ sin θ = (1/√2)(x′ − y′) and
y = x′ sin θ + y′ cos θ = (1/√2)(x′ + y′)
into 5x² − 2xy + 5y² − 24 = 0 and simplifying, you obtain
4(x′)² + 6(y′)² − 24 = 0.
In standard form,
(x′)²/6 + (y′)²/4 = 1.
This is the equation of an ellipse with major axis along the x′-axis. [Graph: rotated x′y′-axes at 45°]
68. Begin by finding the rotation angle θ, where
cot 2θ = (a − c)/b = (1 − 1)/2 = 0, implying that θ = π/4.
So, sin θ = 1/√2 and cos θ = 1/√2. By substituting
x = x′ cos θ − y′ sin θ = (1/√2)(x′ − y′) and
y = x′ sin θ + y′ cos θ = (1/√2)(x′ + y′)
into x² + 2xy + y² − 8x + 8y = 0 and simplifying, you obtain
(x′)² = −4√2 y′, or y′ = −(1/(4√2))(x′)², which is a parabola. [Graph: rotated x′y′-axes at 45°]
70. Begin by finding the rotation angle θ, where
cot 2θ = (a − c)/b = (5 − 5)/(−6) = 0, implying that θ = π/4.
So, sin θ = 1/√2 and cos θ = 1/√2. By substituting
x = x′ cos θ − y′ sin θ = (1/√2)(x′ − y′) and
y = x′ sin θ + y′ cos θ = (1/√2)(x′ + y′)
into 5x² − 6xy + 5y² − 12 = 0 and simplifying, you obtain
2(x′)² + 8(y′)² − 12 = 0.
In standard form, (x′)²/6 + (y′)²/(3/2) = 1.
This is the equation of an ellipse centered at (0, 0) with major axis along the x′-axis. [Graph: rotated x′y′-axes at 45°]
72. Begin by finding the rotation angle θ, where
cot 2θ = (a − c)/b = (7 − 5)/(−2√3) = −1/√3 ⟹ 2θ = 2π/3, implying that θ = π/3.
So, sin θ = √3/2 and cos θ = 1/2. By substituting
x = x′ cos θ − y′ sin θ = (1/2)x′ − (√3/2)y′ and
y = x′ sin θ + y′ cos θ = (√3/2)x′ + (1/2)y′
into 7x² − 2√3 xy + 5y² = 16 and simplifying, you obtain
4(x′)² + 8(y′)² = 16.
In standard form, (x′)²/4 + (y′)²/2 = 1, which is an ellipse with major axis along the x′-axis. [Graph: rotated x′y′-axes at 60°]
74. Begin by finding the rotation angle θ, where
cot 2θ = (1 − 3)/(2√3) = −1/√3, implying that θ = π/3.
So, sin θ = √3/2 and cos θ = 1/2. By substituting
x = x′ cos θ − y′ sin θ = (1/2)(x′ − √3 y′) and
y = x′ sin θ + y′ cos θ = (1/2)(√3 x′ + y′)
into x² + 2√3 xy + 3y² − 2√3 x + 2y + 16 = 0 and simplifying, you obtain
4(x′)² + 4y′ + 16 = 0.
In standard form, y′ + 4 = −(x′)².
This is the equation of a parabola with axis on the y′-axis. [Graph: rotated x′y′-axes at 60°]
76. Begin by finding the rotation angle θ, where
cot 2θ = (a − c)/b = (5 − 5)/(−2) = 0, implying that θ = π/4.
So, sin θ = 1/√2 and cos θ = 1/√2. By substituting
x = x′ cos θ − y′ sin θ = (1/√2)(x′ − y′) and
y = x′ sin θ + y′ cos θ = (1/√2)(x′ + y′)
into 5x² − 2xy + 5y² = 0 and simplifying, you obtain
4(x′)² + 6(y′)² = 0, which is a single point, (0, 0).
78. Begin by finding the rotation angle θ, where
cot 2θ = (a − c)/b = (1 − 1)/(−10) = 0, implying that θ = π/4.
So, sin θ = 1/√2 and cos θ = 1/√2. By substituting
x = x′ cos θ − y′ sin θ = (1/√2)(x′ − y′) and
y = x′ sin θ + y′ cos θ = (1/√2)(x′ + y′)
into x² − 10xy + y² = 0 and simplifying, you obtain
6(y′)² − 4(x′)² = 0.
The graph of this equation is two lines, y′ = ±(√6/3)x′. [Graph: rotated x′y′-axes at 45°]
80. Let θ satisfy cot 2θ = (a − c)/b. Substitute x = x′ cos θ − y′ sin θ and y = x′ sin θ + y′ cos θ into the equation
ax² + bxy + cy² + dx + ey + f = 0. To show that the xy-term will be eliminated, analyze the first three terms under this substitution.
ax² + bxy + cy² = a(x′ cos θ − y′ sin θ)² + b(x′ cos θ − y′ sin θ)(x′ sin θ + y′ cos θ) + c(x′ sin θ + y′ cos θ)²
= a(x′)² cos²θ + a(y′)² sin²θ − 2ax′y′ cos θ sin θ
  + b(x′)² cos θ sin θ + bx′y′ cos²θ − bx′y′ sin²θ − b(y′)² cos θ sin θ
  + c(x′)² sin²θ + c(y′)² cos²θ + 2cx′y′ sin θ cos θ.
So, the new x′y′-terms are
−2ax′y′ cos θ sin θ + bx′y′(cos²θ − sin²θ) + 2cx′y′ sin θ cos θ = x′y′[−a sin 2θ + b cos 2θ + c sin 2θ]
= −x′y′[(a − c) sin 2θ − b cos 2θ].
But, cot 2θ = cos 2θ/sin 2θ = (a − c)/b ⟹ b cos 2θ = (a − c) sin 2θ, which shows that the coefficient is zero.
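The identity just derived can also be checked numerically. The short sketch below is illustrative only (not part of the manual, and the variable names are the author's own); it uses Exercise 64, where a = c = 0 and b = 1, and confirms that the x′y′-coefficient vanishes after the rotation.

```python
# Numerical check (illustrative) that theta from cot(2*theta) = (a - c)/b removes the xy-term.
import math

a, b, c = 0.0, 1.0, 0.0                      # Exercise 64: xy - 8x - 4y = 0
theta = 0.5 * math.atan2(b, a - c)           # same condition as cot(2*theta) = (a - c)/b
ct, st = math.cos(theta), math.sin(theta)

A = a*ct**2 + b*ct*st + c*st**2              # coefficient of (x')^2
B = 2*(c - a)*st*ct + b*(ct**2 - st**2)      # coefficient of x'y'
C = a*st**2 - b*st*ct + c*ct**2              # coefficient of (y')^2
print(round(A, 12), round(B, 12), round(C, 12))   # 0.5 0.0 -0.5
```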
82. (a) Set up the Wronskian with the given solutions and their derivatives. Then find the determinant. If the determinant is
nonzero, the solutions are linearly independent.
(b) Use the substitutions x = x′ cos θ − y′ sin θ and y = x′ sin θ + y′ cos θ, where θ is found by using the coefficients of the original equation in the formula cot 2θ = (a − c)/b.
Review Exercises for Chapter 4
2. (a) u + v = (−1, 2, 1) + (0, 1, 1) = (−1, 3, 2)
(b) 2v = 2(0, 1, 1) = (0, 2, 2)
(c) u − v = (−1, 2, 1) − (0, 1, 1) = (−1, 1, 0)
(d) 3u − 2v = 3(−1, 2, 1) − 2(0, 1, 1) = (−3, 6, 3) − (0, 2, 2) = (−3, 4, 1)
4. (a) u + v = (0, 1, −1, 2) + (1, 0, 0, 2) = (1, 1, −1, 4)
(b) 2v = 2(1, 0, 0, 2) = (2, 0, 0, 4)
(c) u − v = (0, 1, −1, 2) − (1, 0, 0, 2) = (−1, 1, −1, 0)
(d) 3u − 2v = 3(0, 1, −1, 2) − 2(1, 0, 0, 2) = (0, 3, −3, 6) − (2, 0, 0, 4) = (−2, 3, −3, 2)
6. x = (1/3)[−2u + v − 2w]
= (1/3)[−2(1, −1, 2) + (0, 2, 3) − 2(0, 1, 1)]
= (1/3)[(−2, 2, −4) + (0, 0, 1)]
= (1/3)(−2, 2, −3) = (−2/3, 2/3, −1)
8. 3u + 2x = w − v
2x = −3u − v + w
x = −(3/2)u − (1/2)v + (1/2)w
= −(3/2)(1, −1, 2) − (1/2)(0, 2, 3) + (1/2)(0, 1, 1)
= (−3/2 − 0 + 0, 3/2 − 1 + 1/2, −3 − 3/2 + 1/2)
= (−3/2, 1, −4)
10. To write v as a linear combination of u1, u2, and u3, solve the equation
c1u1 + c2u2 + c3u3 = v
for c1, c2, and c3. This vector equation corresponds to the system
c1 − 2c2 + c3 = 4
2c1 = 4
3c1 + c2 = 5.
The solution of this system is c1 = 2, c2 = −1, and c3 = 0. So, v = 2u1 − u2.
12. To write v as a linear combination of u1 , u 2 , and u 3 , solve the equation
c1u1 + c2u 2 + c3u 3 = v
for c1, c2 , and c3. This vector equation corresponds to the system of linear equations
c1 − c2 = 4
−2c1 + 2c2 − c3 = −13
c1 + 3c2 − c3 = −5
c1 + 2c2 − c3 = −4.
The solution of this system is c1 = 3, c2 = −1, and c3 = 5. So, v = 3u1 − u 2 + 5u 3 .
14. The zero vector is the zero polynomial p( x) = 0. The additive inverse of a vector in P8 is
−( a0 + a1x + a2 x2 +  + a8 x8 ) = − a0 − a1x − a2 x2 −  − a8 x8.
16. The zero vector is the 2 × 3 zero matrix [0 0 0; 0 0 0].
The additive inverse of [a11 a12 a13; a21 a22 a23] is [−a11 −a12 −a13; −a21 −a22 −a23].
18. W is not a subspace of R². For instance, (2, 1) ∈ W and (3, 2) ∈ W, but their sum (5, 3) ∉ W. So, W is not closed under addition (nor scalar multiplication).
20. W is not a subspace of R². For instance, (1, 3) ∈ W and (2, 12) ∈ W, but their sum (3, 15) ∉ W. So, W is not closed under addition (nor scalar multiplication).
22. W is not a subspace of R³, because it is not closed under scalar multiplication. For instance, (1, 1, 1) ∈ W and −2 ∈ R, but −2(1, 1, 1) = (−2, −2, −2) ∉ W.
24. Because W is a nonempty subset of C[−1, 1], you need only check that W is closed under addition and scalar multiplication. If f and g are in W, then f(−1) = g(−1) = 0, and (f + g)(−1) = f(−1) + g(−1) = 0, which implies that f + g ∈ W. Similarly, if c is a scalar, then (cf)(−1) = c · 0 = 0, which implies that cf ∈ W. So, W is a subspace of C[−1, 1].
26. (a) W is a subspace of R3, because W is nonempty
((0, 0, 0) ∈ W ) and W is closed under addition and scalar multiplication.
For if ( x1 , x2 , x3 ) and ( y1 , y2 , y3 ) are in W, then x1 + x2 + x3 = 0 and y1 + y2 + y3 = 0. Because
( x1 , x2 , x3 ) + ( y1, y2 , y3 ) = ( x1 + y1, x2 + y2 , x3 + y3 ) satisfies ( x1 + y1 ) + ( x2 + y2 ) + ( x3 + y3 ) = 0, W is closed
under addition. Similarly, c( x1 , x2 , x3 ) = (cx1 , cx2 , cx3 ) satisfies cx1 + cx2 + cx3 = 0, showing that W is closed under
scalar multiplication.
(b) W is not closed under addition or scalar multiplication, so it is not a subspace of R3. For example, (1, 0, 0) ∈ W , and yet
2(1, 0, 0) = ( 2, 0, 0) ∉ W .
28. (a) To find out whether S spans R 3 , form the vector
equation
c1 ( 4, 0, 1) + c2 (0, − 3, 2) + c3 (5, 10, 0) = (u1 , u2 , u3 ).
This yields the system of equations
4c1 + 5c3 = u1
−3c2 + 10c3 = u2
c1 + 2c2 = u3.
This system has a unique solution for every
(u1, u2 , u3 ) because the determinant of the coefficient
matrix is not zero. So, S spans R3.
(b) Solving the same system in (a) with
(u1, u2 , u3 ) = (0, 0, 0) yields the trivial solution. So,
S is linearly independent.
(c) Because S is linearly independent and spans R3, it is
a basis for R3.
30. (a) To find out whether S spans R³, form the vector equation
c1(2, 0, 1) + c2(2, −1, 1) + c3(4, 2, 0) = (u1, u2, u3).
This yields the system of linear equations
2c1 + 2c2 + 4c3 = u1
−c2 + 2c3 = u2
c1 + c2 = u3.
This system has a unique solution for every (u1, u2, u3) because the determinant of the coefficient matrix is not zero. So, S spans R³.
(b) Solving the same system in part (a) with (u1, u2, u3) = (0, 0, 0) yields the trivial solution. So, S is linearly independent.
(c) Because S is linearly independent and S spans R³, it is a basis for R³.
32. (a) The set S = {(1, 0, 0), (0, 1, 0), (0, 0, 1), (2, −1, 0)} spans R³ because any vector u = (u1, u2, u3) in R³ can be written as
u = u1(1, 0, 0) + u2(0, 1, 0) + u3(0, 0, 1) = (u1, u2, u3).
(b) S is not linearly independent because 2(1, 0, 0) − (0, 1, 0) + 0(0, 0, 1) = (2, −1, 0).
(c) S is not a basis for R³ because S is not linearly independent.
34. S has three vectors, so you need only check that S is linearly independent.
Form the vector equation
c1(1) + c2(t) + c3(1 + t²) = 0 + 0t + 0t²
which yields the homogeneous system of linear equations
c1 + c3 = 0
c2 = 0
c3 = 0.
This system has only the trivial solution. So, S is linearly independent and S is a basis for P2.
36. S has four vectors, so you need only check that S is linearly independent.
Form the vector equation
c1[1 0; 0 1] + c2[−1 0; 1 1] + c3[2 1; 1 0] + c4[1 1; 0 1] = [0 0; 0 0]
which yields the homogeneous system of linear equations
c1 − c2 + 2c3 + c4 = 0
c3 + c4 = 0
c2 + c3 = 0
c1 + c2 + c4 = 0.
This system has only the trivial solution. So, S is linearly independent and S is a basis for M2,2.
38. (a) The system given by Ax = 0 has only the trivial solution (0, 0). So, the solution space is {(0, 0)}, which does not have a basis.
(b) The nullity is 0. Note that rank(A) + nullity(A) = 2 + 0 = 2 = n.
(c) The rank of A is 2 (the number of nonzero row vectors in the reduced row-echelon matrix).
40. (a) The system given by Ax = 0 has solutions of the form (2t, 5t, t, t), where t is any real number. So, a basis for the solution space of Ax = 0 is {(2, 5, 1, 1)}.
(b) The nullity of A is 1. Note that rank(A) + nullity(A) = 3 + 1 = 4 = n.
(c) The rank of A is 3 (the number of nonzero row vectors in the reduced row-echelon matrix).
42. (a) The system given by Ax = 0 has only the trivial solution (0, 0, 0, 0). So, the solution space is {(0, 0, 0, 0)}, which does not have a basis.
(b) The nullity is 0. Note that rank(A) + nullity(A) = 4 + 0 = 4 = n.
(c) The rank of A is 4 (the number of nonzero row vectors in the reduced row-echelon matrix).
44. (a) Using Gauss-Jordan elimination, the matrix reduces to
[1 0 26/11; 0 1 8/11].
So, the rank is 2.
(b) A basis for the row space is {(1, 0, 26/11), (0, 1, 8/11)}.
46. (a) Using Gauss-Jordan elimination, the matrix reduces to
[1 0 0; 0 1 0; 0 0 1].
So, the rank is 3.
(b) A basis for the row space is {(1, 0, 0), (0, 1, 0), (0, 0, 1)}.
48. (a) This system has solutions of the form (−(3/2)s − (1/2)t + 2r, s, t, r), where r, s, and t are any real numbers. A basis for the solution space is
{(−3, 2, 0, 0), (−1, 0, 2, 0), (2, 0, 0, 1)}.
(b) The dimension of the solution space is 3, the number of vectors in a basis for the solution space.
50. (a) This system has solutions of the form (0, −(3/2)t, −t, t), where t is any real number. A basis for the solution space is {(0, −3/2, −1, 1)}.
(b) The dimension of the solution space is 1, the number of vectors in a basis.
52. Because [x]B = [1; 1], write x as x = 1(2, 0) + 1(3, 3) = (5, 3). Because (5, 3) = 5(1, 0) + 3(0, 1), the coordinate vector of x relative to the standard basis is
[x]S = [5; 3].
54. Because [x]B = [4; −7], write x as x = 4(2, 4) − 7(−1, 1) = (15, 9). Because (15, 9) = 15(1, 0) + 9(0, 1), the coordinate vector of x relative to the standard basis is
[x]S = [15; 9].
56. Because [x]B = [4; 0; 2], write x as x = 4(1, 0, 1) + 0(0, 1, 0) + 2(0, 1, 1) = (4, 2, 6). Because (4, 2, 6) = 4(1, 0, 0) + 2(0, 1, 0) + 6(0, 0, 1), the coordinate vector of x relative to the standard basis is
[x]S = [4; 2; 6].
58. To find [x]B1 = [c1; c2], solve the equation
c1(2, 2) + c2(0, −1) = (−1, 2).
The resulting system of linear equations is
2c1 = −1
2c1 − c2 = 2.
So, c1 = −1/2 and c2 = −3, and you have [x]B1 = [−1/2; −3].
60. To find [x]B′ = [c1; c2; c3], solve the equation
c1(1, 0, 0) + c2(0, 1, 0) + c3(1, 1, 1) = (4, −2, 9).
Forming the corresponding linear system, the solution is c1 = −5, c2 = −11, and c3 = 9. So,
[x]B′ = [−5; −11; 9].
62. To find [x]B′ = [c1; c2; c3; c4], solve the equation
c1(1, −1, 2, 1) + c2(1, 1, −4, 3) + c3(1, 2, 0, 3) + c4(1, 2, −2, 0) = (5, 3, −6, 2).
The resulting system of linear equations is
c1 + c2 + c3 + c4 = 5
−c1 + c2 + 2c3 + 2c4 = 3
2c1 − 4c2 − 2c4 = −6
c1 + 3c2 + 3c3 = 2.
So, c1 = 2, c2 = 1, c3 = −1, and c4 = 3, and you have
[x]B′ = [2; 1; −1; 3].
64. Begin by forming [B′ B] and using Gauss-Jordan elimination to obtain
[I2 P⁻¹] = [1 0 −1/2 1/2; 0 1 −3/2 −5/2].
So,
P⁻¹ = [−1/2 1/2; −3/2 −5/2].
66. Begin by forming
[B′ B] = [1 0 1 1 1 1; 2 1 0 1 1 0; 3 0 1 1 0 0].
Then use Gauss-Jordan elimination to obtain
[I3 P⁻¹] = [1 0 0 0 −1/2 −1/2; 0 1 0 1 2 1; 0 0 1 1 3/2 3/2].
So,
P⁻¹ = [0 −1/2 −1/2; 1 2 1; 1 3/2 3/2].
68. Begin by forming [B1 B] and using Gauss-Jordan elimination to obtain
[I3 P⁻¹] = [1 0 0 6 20 21; 0 1 0 7 24 24; 0 0 1 9 31 30].
So,
P⁻¹ = [6 20 21; 7 24 24; 9 31 30].
70. (a) [B′ B] = [1 1 1 1; 1 −1 0 −1] reduces to [1 0 1/2 0; 0 1 1/2 1] = [I P⁻¹]
(b) [B B′] = [1 1 1 1; 0 −1 1 −1] reduces to [1 0 2 0; 0 1 −1 1] = [I P]
1
(c) P −1 P =  12
 2
0  2 0
 1 0

 = 

1 −1 1
0 1
1
(d) [x]B′ = P −1[x]B =  12
 2
0  2
 1
  =  
1 − 2
−1
1 0 0
0 0
1
 1 2 2 1 1 1




2
1
2
72. (a) [B′ B] = −1 2 2 1 1 −1  0 1 0
= I P −1 
3
3
3
0 0 1 − 1 1 − 2 
 2 −1 2 −1 0 0


6 6
3

 1 1 1 1 2 2
 1 0 0 − 2 1 − 2




2 1 4 = [ I P]
(b) [ B B′] =  1 1 −1 −1 2 2  0 1 0
−1 0 0 2 −1 2
0 0 1
1 0
0



(c) P
−1
 0 0
1 − 2 1 − 2
 1 0 0
 2 1




2
=
P =  3 3
2
1
4

0 1 0
3 
− 1 1 − 2   1 0
0 0 1
0


3 
 6 6
 0 0
−1
1  2
 2 1
 
 
2
(d) [x]B′ = P [x]B =  3 3
2 =  34 
3  
− 1 1 − 2  −1
 2
3  
 6 6
 3
−1
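Because row-reducing [B′ B] to [I P⁻¹] is the same as computing (B′)⁻¹B, these transition matrices can be checked with a single linear solve. The sketch below is illustrative only (not from the text); it reproduces P⁻¹ for Exercise 66, with the basis vectors written as columns.

```python
# Illustrative check of the transition matrix in Exercise 66 (not from the text).
import numpy as np

Bp = np.array([[1, 0, 1],
               [2, 1, 0],
               [3, 0, 1]], dtype=float)   # columns of B'
B = np.array([[1, 1, 1],
              [1, 1, 0],
              [1, 0, 0]], dtype=float)    # columns of B
P_inv = np.linalg.solve(Bp, B)            # same result as Gauss-Jordan on [B' | B]
print(P_inv)
# [[ 0.  -0.5 -0.5]
#  [ 1.   2.   1. ]
#  [ 1.   1.5  1.5]]
```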
74. (a) Because W is a nonempty subset of V, you need to
only check that W is closed under addition and scalar
multiplication. If f , g ∈ W , then f ′ = 4 f and
g ′ = 4 g . So,
( f + g )′ = f ′ + g ′ = 4 f + 4 g = 4( f + g ),
which shows that f + g ∈ W . Finally, if c is a
scalar, then (cf )′ = (cf ′) = c( 4 f ) = 4(cf ), which
implies that cf ∈ W .
(b) V is not closed under addition nor scalar
multiplication. For instance, let f = e x − 1 ∈ U .
Note that 2 f = 2e x − 2 ∉ U because
(2 f )′ = 2e x ≠ ( 2 f ) + 1 = 2e x − 1.
76. Suppose, on the contrary, that A and B are linearly
dependent. Then B = cA for some scalar c. So,
(cA)^T = B^T = −B, which implies that cA = −B. So,
B = O , a contradiction.
78. Because − ( v1 − 2 v 2 ) − ( 2 v 2 − 3v 3 ) = 3v 3 − v1 , the
set is linearly dependent.
80. S is a nonempty subset of Rn, so you need only show
closure under addition and scalar multiplication. Let
x, y ∈ S . Then Ax = λ x and Ay = λ y. So,
A( x + y ) = Ax + Ay = λ x + λ y = λ ( x + y ), which
implies that x + y ∈ S . Finally, for any scalar
c, A(cx) = c( Ax) = c(λ x) = λ (cx), which implies that
cx ∈ S .
If λ = 3, then solve for x in the equation
Ax = λ x = 3x, or Ax − 3x = 0, or ( A − 3I 3 )x = 0.
 3 1 0
 1 0 0   x1 
0



 
 
 0 3 0 − 30 1 0   x2  = 0
 0 0 1
0 0 1   x3 
0


 
 

0 1 0  x1 
0

 
 
x
0
0
0
=

  2
0
0 0 −2  x3 
0

 
 
The solution to this homogeneous system is
x1 = t , x2 = 0, and x3 = 0, where t is any real number.
So, a basis for S is {(1, 0, 0)}, and the dimension of S
is 1.
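The eigenspace found here is the null space of A − 3I, which can be confirmed symbolically. The sketch below is illustrative only (not part of the manual) and assumes SymPy is available.

```python
# Illustrative check of Exercise 80: a basis for the null space of A - 3I.
import sympy as sp

A = sp.Matrix([[3, 1, 0],
               [0, 3, 0],
               [0, 0, 1]])
S = (A - 3*sp.eye(3)).nullspace()
print(S)   # [Matrix([[1], [0], [0]])]  ->  basis {(1, 0, 0)}, so dim S = 1
```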
82. From Exercise 81, you see that a set of functions {f1, …, fn} can be linearly independent in C[a, b] and linearly dependent in C[c, d], where [a, b] and [c, d] are different domains.
84. (a) False. This set is not closed under addition or scalar
multiplication:
(0, 1, 1) ∈ W , but 2(0, 1, 1) = (0, 2, 2) is not in W.
(b) True. See “Definition of Basis,” on page 186.
(c) False. For example, let A = I 3 be the 3 × 3 identity
matrix. It is invertible and the rows of A form the
standard basis for R3 and, in particular, the rows of
A are linearly independent.
86. (a) True. It is a nonempty subset of R2, and it is closed
under addition and scalar multiplication.
(b) False. These operations only preserve the linear
relationships among the columns.
88. (a) Because y′ = y″ = y‴ = y^(4) = e^x, you have
y^(4) − y = e^x − e^x = 0.
Therefore, ex is a solution.
(b) Because y′ = −e − x , y′′ = e − x , y′′′ = −e − x , and
y (4) = e − x , you have
y (4) − y = e − x − e − x = 0.
Therefore, e^(−x) is a solution.
(c) Because y′ = −sin x, y′′ = −cos x, y′′′ = sin x,
and y (4) = cos x, you have
y (4) − y = cos x − cos x = 0.
Therefore, cos x is a solution.
(d) Because y′ = cos x, y′′ = −sin x, y′′′ = −cos x,
and y (4) = sin x, you have
y (4) − y = sin x − sin x = 0.
Therefore, sin x is a solution.
90. (a) Because y′′ = − 25 cos 5 x − 25 sin 5 x, you have
y′′ + 25 y = −25 cos 5 x − 25 sin 5 x + 25(sin 5 x + cos 5 x)
= − 25 cos 5 x − 25 sin 5 x + 25 sin 5 x + 25 cos 5 x
= 0
Therefore, sin 5 x + cos 5 x is a solution.
(b) Because y′′ = − 5 sin x − 5 cos x, you have
y′′ + 25 y = − 5 sin x − 5 cos x + 25(5 sin x + 5 cos x)
= − 5 sin x − 5 cos x + 125 sin x + 125 cos x
= 120 sin x + 120 cos x
≠ 0
Therefore, 5 sin x + 5 cos x is not a solution.
(c) Because y′′ = − 25 sin 5 x, you have
y′′ + 25 y = − 25 sin 5 x + 25(sin 5 x)
= − 25 sin 5 x + 25 sin 5 x
= 0
Therefore, sin 5x is a solution.
(d) Because y′′ = − 25 cos 5 x, you have
y′′ + 25 y = − 25 cos 5 x + 25(cos 5 x)
= − 25 cos 5 x + 25 cos 5 x
= 0
Therefore, cos 5x is a solution.
92. W(2, x², 3 + x) = det[ 2   x²   3 + x ; 0   2x   1 ; 0   2   0 ] = −4
94. W(x, sin²x, cos²x) = det[ x   sin²x   cos²x ; 1   2 sin x cos x   −2 sin x cos x ; 0   4 cos²x − 2   2 − 4 cos²x ] = 4 cos²x − 2
96. (a) y = e− 3 x  y′ = − 3e− 3 x , y′′ = 9e− 3 x  y′′ + 6 y′ + 9 y = 0
y = 3e− 3 x  y′ = − 9e− 3 x , y′′ = 27e− 3 x  y′′ + 6 y′ + 9 y = 0
(b) The Wronskian of this set is
W(e^(−3x), 3e^(−3x)) = det[ e^(−3x)   3e^(−3x) ; −3e^(−3x)   −9e^(−3x) ] = −9e^(−6x) + 9e^(−6x) = 0.
Because W(e^(−3x), 3e^(−3x)) = 0, the set is linearly dependent.
98. (a) y = sin 3 x  y′′ = − 9 sin 3 x  y′′ + 9 y = 0
y = cos 3 x  y′′ = − 9 cos 3 x  y′′ + 9 y = 0
(b) The Wronskian of this set is
W(sin 3x, cos 3x) = det[ sin 3x   cos 3x ; 3 cos 3x   −3 sin 3x ] = −3 sin²3x − 3 cos²3x = −3.
Because W (sin 3 x, cos 3 x) ≠ 0 the set is linearly independent.
(c) y = C1 sin 3x + C2 cos 3x
100. Begin by completing the square.
9x² + 18x + 9y² − 18y = −14
9(x² + 2x + 1) + 9(y² − 2y + 1) = −14 + 9 + 9
(x + 1)² + (y − 1)² = 4/9
This is the equation of a circle centered at (−1, 1) with a radius of 2/3. [Graph: 9x² + 9y² + 18x − 18y + 14 = 0]
102. Begin by completing the square.
4x² + 8x − y² − 6y = −4
4(x² + 2x + 1) − (y² + 6y + 9) = −4 + 4 − 9
(y + 3)²/9 − (x + 1)²/(9/4) = 1
This is the equation of a hyperbola centered at (−1, −3). [Graph: 4x² − y² + 8x − 6y + 4 = 0]
104. y² − 4x − 4 = 0
y² = 4x + 4
y² = 4(x + 1)
This is the equation of a parabola with vertex (−1, 0). [Graph: y² − 4x − 4 = 0]
106. Begin by completing the square.
16x² − 32x + 25y² − 50y = −16
16(x² − 2x + 1) + 25(y² − 2y + 1) = −16 + 16 + 25
(x − 1)²/(25/16) + (y − 1)² = 1
This is the equation of an ellipse centered at (1, 1). [Graph: 16x² + 25y² − 32x − 50y + 16 = 0]
108. From the equation
cot 2θ = (a − c)/b = (9 − 9)/4 = 0,
you find that the angle of rotation is θ = π/4. Therefore, sin θ = 1/√2 and cos θ = 1/√2.
By substituting
x = x′ cos θ − y′ sin θ = (1/√2)(x′ − y′) and
y = x′ sin θ + y′ cos θ = (1/√2)(x′ + y′)
into 9x² + 4xy + 9y² − 20 = 0, you obtain
11(x′)² + 7(y′)² = 20.
In standard form,
(x′)²/(20/11) + (y′)²/(20/7) = 1
which is the equation of an ellipse with major axis along the y′-axis. [Graph: rotated x′y′-axes at 45°]
110. From the equation cot 2θ = (a − c)/b = (1 − 1)/2 = 0, you find the angle of rotation to be θ = π/4.
Therefore, sin θ = √2/2 and cos θ = √2/2. By substituting x = x′ cos θ − y′ sin θ = (√2/2)(x′ − y′) and
y = x′ sin θ + y′ cos θ = (√2/2)(x′ + y′) into x² + 2xy + y² + √2 x − √2 y = 0, you obtain 2(x′)² − 2y′ = 0.
In standard form, (x′)² = y′, which is the equation of a parabola with vertex (0, 0). [Graph: rotated x′y′-axes at 45°]
Project Solutions for Chapter 4
1 Solutions of Linear Systems
1. Because ( −2, −1, 1, 1) is a solution of Ax = 0, so is any multiple −2( −2, −1, 1, 1) = ( 4, 2, − 2, − 2) because the solution space
is a subspace.
2. The solutions of Ax = 0 form a subspace, so any linear combination 2 x1 − 3x 2 of solutions x1 and x 2 is again a solution.
3. Let the first system be Ax = b1. Because it is consistent, b1 is in the column space of A. The second system is Ax = b 2 , and
b 2 is a multiple of b1 , so it is in the column space of A as well. So, the second system is consistent.
4. 2 x1 − 3x 2 is not a solution (unless b = 0 ). The set of solutions to a nonhomogeneous system is not a subspace. If A x1 = b
and Ax 2 = b , then
A( 2x1 − 3x 2 ) = 2 Ax1 − 3 Ax 2 = 2b − 3b = −b ≠ b.
5. Yes, b1 and b 2 are in the column space of A, therefore so is b1 + b 2 .
2 Direct Sum
1. Basis for U : {(1, 0, 1), (0, 1, −1)}
Basis for W : {(1, 0, 1)}
Basis for Z : {(1, 1, 1)}
U + W = U because W ⊆ U
U + Z = R 3 because {(1, 0, 1), (0, 1, −1), (1, 1, 1)} is a basis for R3.
W + Z = span{(1, 0, 1), (1, 1, 1)} = span{(1, 0, 1), (0, 1, 0)}
2. Suppose u1 + w1 = u 2 + w 2 , which implies u1 − u 2 = w 2 − w1.
Because u1 − u 2 ∈ U ∩ W and w 2 − w1 ∈ U ∩ W , and U ∩ W = {0}, u1 = u 2 and w1 = w 2 .
U ⊕ Z and W ⊕ Z are direct sums.
3. Let v ∈ V; then v = u + w, u ∈ U, w ∈ W. Then v = (c1u1 + ⋯ + ckuk) + (d1w1 + ⋯ + dmwm),
and v is in the span of {u1, …, uk, w1, …, wm}. To show that this set is linearly independent, suppose
c1u1 + ⋯ + ckuk + d1w1 + ⋯ + dmwm = 0
⟹ c1u1 + ⋯ + ckuk = −(d1w1 + ⋯ + dmwm).
Each side of this equation lies in U ∩ W, and U ∩ W = {0}, so c1u1 + ⋯ + ckuk = 0 and d1w1 + ⋯ + dmwm = 0.
Because {u1, …, uk} and {w1, …, wm} are linearly independent,
c1 = ⋯ = ck = 0 and d1 = ⋯ = dm = 0.
U + W is spanned by {(1, 0, 0), (0, 0, 1), (0, 1, 0)} ⟹ U + W = R³. This is not a direct sum because (0, 0, 1) ∈ U ∩ W.
4. Basis for U : {(1, 0, 0), (0, 0, 1)}
Basis for W : {(0, 1, 0), (0, 0, 1)}
dimU = 2, dimW = 2, dim(U ∩ W ) = 1
dim U + dim W = dim(U + W) + dim(U ∩ W)
2 + 2 = 3 + 1
In general, dimU + dimW = dim(U + W ) + dim(U ∩ W ).
5. No. dim U + dim W = 2 + 2 = 4, so if the sum were direct, then dim(U ∩ W) = 0 and dim(U + W) = 4, which is impossible in R³.
CHAPTER 5
Inner Product Spaces
Section 5.1 Length and Dot Product in Rn
Section 5.2 Inner Product Spaces
Section 5.3 Orthonormal Bases: Gram-Schmidt Process
Section 5.4 Mathematical Models and Least Squares Analysis
Section 5.5 Applications of Inner Product Spaces
Review Exercises
Project Solutions
C H A P T E R 5
Inner Product Spaces
Section 5.1 Length and Dot Product in Rn
12. (a) A unit vector v in the direction of u is given by
2.
v =
02 + 12 =
4.
v =
2 + 0 + ( −5) + 5 =
6. (a)
(b)
(c)
2
1 =1
2
2
1
12 +  
 2
u =
2
u + v =
2
(3, 0) =
54 = 3 6
=
5
32 + 0 2 =
8. (a)
u =
02 + 12 + ( −1) + 22 =
(b)
v =
12 + 12 + 32 + 02 =
(c)
u + v = (1, 2, 2, 2)
=
6
=
2 + ( − 2)
13
2
u
=
u
2
2
1
9
16
+
+
26
26
26
1
( −1) + 1
2
2
( −1, 1) =
1
1 
 1
(−1, 1) =  − ,

2
2
2

Then v is four times this vector.
2
1
1
+
=1
2
2
(b) A unit vector in the direction opposite that of u is
given by
 2

2
2
2
− v = −
,−
,
 =  −
.
2 
2 
 2
 2
v =
4 

26 
14. First find a unit vector in the direction of u.
 2

2

 +  −
 =
 2 
 2 
2
3
,
26
=1
 2
2
= 
,−
.
2
2


v =
2
3 
4 
 1 



 + −
 + −

26 
26 
 26 


−v =
(2, − 2)
2
(2, − 2)
4
2
2
3
4 
 1
,−
,−
= 
.
26
26 
 26
=
2
4 
.
26 
3
,
26
1
9
16
+
+
=1
26
26
26
=
u
u
2
1
1

,
(−1, 3, 4) =  −
26
26

1

,
− v = − −
26

11
12 + 22 + 22 + 22 =
1
(−1, 3, 4)
(b) A unit vector in the direction opposite that of u is
given by
10. (a) A unit vector v in the direction of u is given by
v =
(−1) + 32 + 42
1 

 3 
 4 
−
 +
 +

26 

 26 
 26 
v =
9 = 3
2
=
1
2
2
17
1
=
17
4
2
=
u
=
u
v =
5
1
=
4
2
=
 1
22 +  − 
 2
v =
2
2

 2
2
 −
 + 
 =
2


 2 
1
1
+
=1
2
2
v = 4
u
1 
 1
= 4 −
,

u
2
2

(
4 
 4
= −
,
 = −2 2, 2 2
2
2

)
16. First find a unit vector in the direction of u.
u
=
u
1
(0, 2, 1, −1) =
0+ 4+1+1
1
(0, 2, 1, −1)
6
Then v is three times this vector.
v = 3
1
6
3
3 

(0, 2, 1, −1) =  0, , , − 
6
6
6
6

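The computations in Exercises 10-16 all follow the same pattern: divide u by its length to get a unit vector, then scale. A short numerical sketch of that pattern (illustrative only, not part of the manual, using the data of Exercise 16) is shown below.

```python
# Illustrative sketch of the unit-vector pattern in Exercises 10-16 (not from the text).
import numpy as np

u = np.array([0.0, 2.0, 1.0, -1.0])      # Exercise 16
unit = u / np.linalg.norm(u)             # u / ||u||, a unit vector in the direction of u
v = 3 * unit                             # the vector of length 3 in that direction
print(np.linalg.norm(unit))              # 1.0
print(v)                                 # approximately [0.  2.449  1.225 -1.225]
```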
18. Solve the equation for c as follows.
1 1

 1 1
30. u =  −1, ,  and v =  0, , − 
2 4

 4 2
c(1, 2, 3) = 1
c
(1, 2, 3) = 1
c =
1
=
(1, 2, 3)
c = ±
1
14
1
(b)
1
v = (0, 0.4472, − 0.8944)
v
1
u = (0.8729, − 0.4364, − 0.2182)
u
(d) u ⋅ v = 0
(e) u ⋅ u = 1.3125
= ( − 4, 2, 6)
(f) v ⋅ v = 0.3125
(− 4)2 + 22 + 62
(
)
32. u = −1,
= 2 14
22. d (u, v) = u − v =
(−1, 0, − 3, 0)
=
(−1) + 02 + (−3) + 02
=
10
2
2
24. (a) u ⋅ v = ( −1)( 2) + ( 2)( − 2) = − 2 − 4 = − 6
(b) v ⋅ v = 2( 2) + ( − 2)( − 2) = 4 + 4 = 8
(c)
u = 1.1456 and v = 0.5590
(c) −
14
20. d (u, v) = u − v
=
(a)
u 2 = u ⋅ u = ( −1)( −1) + ( 2)( 2) = 1 + 4 = 5
(d) (u ⋅ v) v = − 6( 2, − 2) = ( −12,12)
(e) u ⋅ (5v) = 5(u ⋅ v) = 5( − 6) = − 30
26. (a) u ⋅ v = 4(0) + 0( 2) + ( − 3)(5) + (5)( 4)
= 0 + 0 − 15 + 20
= 5
3, 2 and v =
(a)
u = 2.8284 and v = 2.2361
(b)
1
v = (0.6325, − 0.4472, − 0.6325)
v
(c) −
1
u = (0.3536, − 0.6124, − 0.7071)
u
(d) u ⋅ v = −5.9747
(e) u ⋅ u = 8
(f) v ⋅ v = 5
34. (a)
(b)
u =
6, v =
(c) −
(d) u ⋅ v = − 2
(e) u ⋅ u = 6
u 2 = u ⋅ u = 4( 4) + 0(0) + ( − 3)( − 3) + 5(5)
= 16 + 0 + 9 + 25
= 50
(d) (u ⋅ v ) v = 5(0, 2, 5, 4)
= (0, 10, 25, 20)
(e) u ⋅ (5v) = 5(u ⋅ v) = 5(5) = 25
28. (3u − v) ⋅ (u − 3v) = 3u ⋅ (u − 3v) − v ⋅ (u − 3v)
= 3u ⋅ u − 9u ⋅ v − v ⋅ u + 3v ⋅ v
6

6 

u
6
3
6
3
=  −
,−
,
,−

u
3
6
3 
 6
= 0 + 4 + 25 + 16
(c)
3
 3
v
6
3
= 
,−
,
−
v
3
6
3

(b) v ⋅ v = 0(0) + 2( 2) + 5(5) + 4( 4)
= 45
( 2, −1, − 2 )
(f) v ⋅ v = 3
36. You have
u ⋅ v = −1(1) + 0(1) = −1,
u =
(−1) + 02 =
v =
12 + 12 =
2
1 = 1, and
2. So,
u⋅v ≤ u v
−1 ≤ 1 2
1≤
2.
= 3u ⋅ u − 10u ⋅ v + 3v ⋅ v
= 3(8) − 10(7) + 3(6)
= −28
40. The cosine of the angle θ between u and v is given by
38. You have
u ⋅ v = 1(0) − 1(1) + 0( −1) = −1,
cos θ =
u =
12 + ( −1) + 02 =
2, and
v =
02 + 12 + ( −1)
2. So,
2
2
=
=
u⋅v ≤ u v
−1 ≤
2 ⋅
u⋅v
=
u v
(− 4)(5) + (1)(0)
2
(− 4) + 12 52 + 02
− 20
−4
=
5 17
17
4 

So, θ = cos −1  −
 ≈ 2.897 radians (165.96°).
17 

2
1 ≤ 2.
42. The cosine of the angle θ between u and v is given by
cos
u⋅v
=
cos θ =
u v
So, θ =
π
12
π
π
π π
 cos  + sin  sin 
3
4
3
4
2
π
π


 cos  +  sin 
3
3


2
2
π
π


 cos  +  sin 
4
4


2
π
π
cos − 
3
4
π 

=
= cos .
1⋅1
 12 
radians (15°).
44. The cosine of the angle θ between u and v is given by
cos θ =
So, θ =
2( −3) + 3( 2) + 1(0)
u⋅v
=
= 0.
u v
u v
π
2
radians (90°).
46. The cosine of the angle θ between u and v is given by
cos θ =
1( −1) − 1( 2) + 0( −1) + 1(0)
u⋅v
=
u v
1 + ( −1) + 0 + 1
2
2
2
2
(−1) + 2 + ( −1) + 0
2
2
2
2
=
−3
3
2
= −
= −
.
2
3 6
3 2

2
3π
radians (135°).
So, θ = cos −1  −
 2  ≈ 4


48. Because u ⋅ v = ( 4, 3) ⋅
( 12 , − 23 ) = 2 − 2 = 0, the
vectors u and v are orthogonal.
50. Because u ⋅ v = 1(0) − 1( −1) = 1 ≠ 0, the vectors u
and v are not orthogonal. Moreover, because one is not a
scalar multiple of the other, they are not parallel.
52. Because u ⋅ v = 0(1) + (3)( −8) + ( − 4)( − 6) = 0,
the vectors u and v are orthogonal.
54. Because
( )
()
(4, −1, 0) ⋅ (v1, v2 , v3 ) = 0
4v1 + ( −1)v2 + 0v3 = 0
4v1 − v2 = 0
So, v = (t , 4t , s), where s and t are any real numbers.
60. Because u + v = ( −1, 1) + ( 2, 0) = (1, 1), you have
u + v ≤ u + v
( )
u ⋅ v = 4( −2) + 32 − 34 + ( −1) 12 + 12 − 14 = − 39
≠ 0,
4
the vectors are not orthogonal. Moreover, because one
vector is a scalar multiple of the other, they are parallel.
56.
u⋅v = 0
58.
u⋅v = 0
(11, 2) ⋅ (v1 , v2 ) = 0
11v1 + 2v2 = 0
So, v = ( −2t , 11t ), where t is any real number.
(1, 1) ≤ (−1, 1) + (2, 0)
2 ≤
2 + 2.
62. Because u + v = (1, −1, 0) + (0, 1, 2) = (1, 0, 2), you have
‖u + v‖ ≤ ‖u‖ + ‖v‖
‖(1, 0, 2)‖ ≤ ‖(1, −1, 0)‖ + ‖(0, 1, 2)‖
√5 ≤ √2 + √5.
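The Cauchy-Schwarz and Triangle Inequalities verified in Exercises 36-38 and 60-62 are easy to confirm numerically. The sketch below is illustrative only (not from the text) and uses the vectors of Exercise 62.

```python
# Illustrative numerical check of the two inequalities (not from the text).
import numpy as np

u = np.array([1.0, -1.0, 0.0])
v = np.array([0.0, 1.0, 2.0])
lhs_cs, rhs_cs = abs(u @ v), np.linalg.norm(u) * np.linalg.norm(v)        # Cauchy-Schwarz
lhs_tri, rhs_tri = np.linalg.norm(u + v), np.linalg.norm(u) + np.linalg.norm(v)  # Triangle
print(lhs_cs <= rhs_cs, lhs_tri <= rhs_tri)   # True True
```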
64. First note that u and v are orthogonal, because
u ⋅ v = (3, − 2) ⋅ ( 4, 6) = 0.
66. First note that u and v are orthogonal, because
Then note
u ⋅ v = ( 4,1, − 5) ⋅ ( 2, − 3,1) = 0.
Then note
u+ v
2
2
= u
(7, 4)
2
= (3, − 2)
+ v
2
2
+
(4, 6)
2
u + v
2
= u
(6, − 2, − 4)
2
=
2
+ v
2
(4, 1, − 5)
2
65 = 13 + 52
56 = 42 + 14
65 = 65.
56 = 56.
+ ( 2, − 3, 1)
2
 2
68. (a) u ⋅ v = uT v = [−1 2]   = ( −1)( 2) + ( 2)( − 2) = [− 2 − 4] = − 6
− 2
 2
(b) v ⋅ v = vT v = [2 − 2]   = 2( 2) + ( − 2)( − 2) = [4 + 4] = 8
− 2
(c)
−1
u 2 = uT u = [−1 2]   = ( −1)( −1) + 2( 2) = [1 + 4] = 5
 2

 2   2
 2
−12
(d) (u ⋅ v ) v = (uT v ) v =  [−1 2]      = − 6   = 



− 2  − 2
− 2
 12


 2 
(e) u ⋅ (5 v ) = 5(uT v ) = 5[−1 2]    = 5( − 6) = − 30


− 2 

0
 
2
T
⋅
=
=
−
4
0
3
5
u
v
u
v
70. (a)
[
]   = 4(0) + 0(2) + (− 3)(5) + (5)(4) = [0 + 0 − 15 + 20] = 5
5
 
4
0
 
2
(b) v ⋅ v = vT v = [0 2 5 4]   = 0(0) + 2( 2) + 5(5) + 4( 4) = [0 + 4 + 25 + 16] = 45
5
 
4
(c)
 4
 
0
u 2 = uT u = [4 0 − 3 5]   = 4( 4) + 0(0) + ( − 3)( − 3) + 5(5) = [16 + 0 + 9 + 25] = 50
− 3
 
 5

0  0
0
 0

   
 
 
2
2
2
10


(d) (u ⋅ v ) v = (uT v) v = [4 0 − 3 5]      = 5  =  
5 5
5
25

   
 
 

4  4
4
20


0 

 
2 

T
(e) u ⋅ (5 v) = 5(u v ) = 5[4 0 − 3 5]   = 5(5) = 25
5

 

4 

72. Because u ⋅ v = −sin θ sin θ + cos θ ( −cos θ ) + 1(0)
= −(sin θ ) − (cos θ )
2
2
= −(sin 2θ + cos 2θ )
= −1 ≠ 0,
the vectors u and v are not orthogonal. Moreover,
because one is not a scalar multiple of the other, they are
not parallel.
74. (a) False. The unit vector in the direction of v is given
v
by
.
v
(b) False. If u ⋅ v < 0 then the angle between them lies
π
and π , because
between
2
π
cos θ < 0 
< θ < π.
2
76. (a)
(u ⋅ v) ⋅ u is meaningless because u ⋅ v is a scalar.
(b) c ⋅ (u ⋅ v) is meaningless because c is a scalar, as
well as u ⋅ v.
78. v = ( v1, v 2 ) = (8, 15), ( v2 , − v1) = (15, − 8)
(8, 15) ⋅ (15, −8) = 8(15) + 15(−8) = 120 − 120 = 0
So, ( v 2 , − v1 ) is orthogonal to v.
Answers will vary. Sample answer:
Two unit vectors orthogonal to v:
−1(15, − 8) = ( −15, 8): (8, 15) ⋅ ( −15, 8) = 8( −15) + 15(8)
= −120 + 120
= 0
3(15, − 8) = ( 45, − 24): (8, 15) ⋅ ( 45, − 24) = 8( 45) + (15)( −24)
= 360 − 360
= 0
80. u ⋅ v = ( 4600, 4290, 5250) ⋅ ( 499.99, 199.99, 99.99)
= 4600( 499.99) + 4290(199.99) + 5250(99.99)
82. Let v = (t , t , t ) be the diagonal of the cube, and
u = (t , t , 0) the diagonal of one of its sides. Then,
= $3,682,858.60
This represents the total revenue earned from selling the
three models of cellular phones.
cos θ =
u⋅v
=
u v
2t 2
( 2 t )( 3 t )
=
2
=
6
6
3
 6
and θ = cos −1 
 ≈ 35.26°.
 3 
84. 14 u + v
2
− 14 u − v
2
= 14 (u + v ) ⋅ (u + v ) − (u − v ) ⋅ (u − v )
= 14 u ⋅ u + 2u ⋅ v + v ⋅ v − (u ⋅ u − 2u ⋅ v + v ⋅ v)
= 14 [4u ⋅ v] = u ⋅ v
86. If u and v have the same direction, then u = cv , c > 0, and
u + v = cv + v = (c + 1) v
= c v + v = cv + v
= u + v .
On the other hand, if
u + v = u + v , then
u + v
u
2
= ( u + v )
(u + v ) ⋅ (u + v ) = u
2
2
2
+ v
2
+ 2u ⋅ v = u
2
+ v
2
+ 2 u v
+ v
2
+ 2 u v
2u ⋅ v = 2 u v
 cos θ =
u⋅v
=1
u v

θ = 0

u and v have the same direction.
88. (a) When u ⋅ v = 0, the vectors u and v are orthogonal (θ = 90°).
π

(b) When u ⋅ v > 0, the vectors form an acute angle for θ  0° ≤ θ < 90° or 0 ≤ θ <  .
2

π


< θ ≤ π .
(c) When u ⋅ v < 0, the vectors form an obtuse angle for θ  90° < θ ≤ 180° or
2


Section 5.2 Inner Product Spaces
2. 1. Since the product of real numbers is commutative,
u, v = u1v1 + 9u2v2 = v1u1 + 9v2u2 = v, u .
2. Let w = ( w1, w2 ). Then,
u, v + w = u1 (v1 + w1 ) + 9u2 (v2 + w2 )
= u1v1 + u1w1 + 9u2v2 + 9u2 w2
= u1v1 + 9u2v2 + u1w1 + 9u2 w2
= u , v + u, w .
3. If c is any scalar, then
c u, v = c(u1v1 + 9u2v2 ) = (cu1)v1 + 9(cu2 )v2 = cu, v .
4. Since the square of a real number is nonnegative, v, v = v12 + 9v22 ≥ 0. Moreover, this expression is equal to zero if and
only if v = 0 (that is, if and only if v1 = v2 = 0 ).
4. 1. Since the product of real numbers is commutative,
u, v = 2u1v2 + u2v1 + u1v2 + 2u2v2 = 2v2u1 + v1u2 + v2u1 + 2v2u2 = v, u .
2. Let w = ( w1, w2 ). Then,
u, v + w = 2u1 (v2 + w2 ) + u2 (v1 + w1 ) + u1 (v2 + w2 ) + 2u2 (v2 + w2 )
= 2u1v2 + 2u1w2 + u2v1 + u2 w1 + u1v2 + u1w2 + 2u2v2 + 2u2 w2
= 2u1v2 + u2v1 + u1v2 + 2u2v2 + 2u1w2 + u2 w1 + u1w2 + 2u2 w2
= u, v + u, w .
3. If c is any scalar, then
c u, v = c( 2u1v2 + u2v1 + u1v2 + 2u2v2 ) = 2(cu1)v2 + (cu2 )v1 + (cu1)v2 + 2(cu2 )v2 = cu, v .
4. Since the square of a real number is nonnegative, v, v = 2v22 + v12 + v22 + 2v22 ≥ 0. Moreover, this expression is equal
to zero if and only if v = 0 (that is, if and only if v1 = v2 = 0 ).
6. 1. Since the product of real numbers is commutative,
u, v = u1v1 + 2u2v2 + u3v3 = v1u1 + 2v2u2 + v3u3 = v, u .
2. Let w = ( w1, w2 , w3 ). Then,
u, v + w = u1 (v1 + w1 ) + 2u2 (v2 + w2 ) + u3 (v3 + w3 )
= u1v1 + u1w1 + 2u2v2 + 2u2 w2 + u3v3 + u3 w3
= u1v1 + 2u2v2 + u3v3 + u1w1 + 2u2 w2 + u3 w3
= u, v + u, w .
3. If c is any scalar, then
c u, v = c(u1v1 + 2u2v2 + u3v3 ) = (cu1)v1 + 2(cu2 )v2 + (cu3 )v3 = cu, v .
4. Since the square of a real number is nonnegative, v, v = v12 + 2v22 + v32 ≥ 0. Moreover, this expression is equal to zero
if and only if v = 0 (that is, if and only if v1 = v2 = v3 = 0 ).
8. 1. Since the product of real numbers is commutative,
u, v = 12 u1v1 + 14 u2v2 + 12 u3v3 = 12 v1u1 + 14 v2u2 + 12 v3u3 = v, u .
2. Let w = ( w1, w2 , w3 ). Then
u, v + w = 12 u1 (v1 + w1 ) + 14 u2 (v2 + w2 ) + 12 u3 (v3 + w3 )
= 12 u1v1 + 12 u1w1 + 14 u2v2 + 14 u2 w2 + 12 u3v3 + 12 u3w3
= 12 u1v1 + 14 u2v2 + 12 u3v3 + 12 u1w1 + 14 u2 w2 + 12 u3w3
= u, v + u , w .
3. If c is any scalar, then
(
)
c u, v = c 12 u1v1 + 14 u2v2 + 12 u3v3 = 12 (cu1 )v1 + 14 (cu2 )v2 + 12 (cu3 )v3 = cu, v .
4. Since the square of a real number is nonnegative, ⟨v, v⟩ = (1/2)v1² + (1/4)v2² + (1/2)v3² ≥ 0. Moreover, this expression is equal to zero if and only if v = 0 (that is, if and only if v1 = v2 = v3 = 0).
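A weighted inner product such as the one in Exercise 8 can be written as a small function, which makes the axioms easy to experiment with numerically. The sketch below is illustrative only; the example vectors are chosen here for illustration and are not from the text.

```python
# Illustrative sketch of the weighted inner product in Exercise 8 (not from the text):
# <u, v> = (1/2)u1v1 + (1/4)u2v2 + (1/2)u3v3, with its induced norm.
import numpy as np

W = np.array([0.5, 0.25, 0.5])          # positive weights

def inner(u, v):
    return float(np.sum(W * np.asarray(u) * np.asarray(v)))

def norm(v):
    return inner(v, v) ** 0.5

u, v = (1.0, 2.0, 2.0), (2.0, 0.0, -2.0)   # example vectors (author's own)
print(inner(u, v), norm(u))                # -1.0 and sqrt(3.5)
```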
10. The product u, v is not an inner product because Axiom 4 is not satisfied. For example, let v = (1, 1). Then
v, v = (1)(1) − 6(1)(1) = − 5, which is less than zero.
12. The product u, v is not an inner product because it is not commutative. For example, if u = (1, 2), and v = ( 2, 3), then
u, v = 3(1)(3) − 2( 2) = 5 while v, u = 3( 2)( 2) − 3(1) = 9.
14. The product u, v is not an inner product because nonzero vectors can have a norm of zero. For example, if v = (1,1, 0), then
(1, 1, 0), (1, 1, 0) = 0.
16. The product u, v is not an inner product because Axiom 2 is not satisfied. For example, let u = (1, 0, 0), v = (1, 1, 1), and
w = ( 2, 1, 2).
u, v + w = 2(1)(0) + 3(3)( 2) + 0(3) = 18
u, v + u, w = 2(1)(0) + 3(1)(1) + 0(1) + 2(1)(0) + 3( 2)(1) + 0( 2) = 9
So, u, v + w ≠ u, v + u, w .
18. (a)
u, v = −1(6) + 1(8) = 2
(b)
u =
u, u =
(−1) + 12 =
(c)
v =
v, v =
6 2 + 82 = 10
2
2
(d) d (u, v ) = u − v = ( −7, − 7) =
20. (a)
(−7) + (−7)
2
u =
u, u =
0(0) + 2( −6)( −6) = 6 2
(c)
v =
v, v =
(−1)(−1) + 2(1)(1) =
(d) d (u, v ) = u − v = (1, − 7) =
3
1(1) + 2( −7)( −7) =
99 = 3 11
u, v = 0(1) + 1(2) + 2(0) = 2
(b)
u =
u, u =
0 2 + 12 + 2 2 =
5
(c)
v =
v, v =
12 + 2 2 + 0 2 =
5
(d) d (u, v ) = u − v = ( −1, −1, 2) =
24. (a)
= 7 2
u, v = 0( −1) + 2( −6)(1) = −12
(b)
22. (a)
2
(−1) + (−1) + 22 =
2
2
u, v = (1)( 2) + 2(1)(5) + (1)( 2) = 14
(b)
u =
u, u =
(1) + 2(1) + (1)
(c)
v =
v, v =
(2) + 2(5) + ( 2)
2
2
2
2
2
= 2
2
=
58
(d) d (u, v ) = u − v = (1, 1, 1) − ( 2, 5, 2) = ( −1, − 4, −1) =
26. (a)
6
(−1) + 2(−4) + (−1)
2
2
2
=
34
u, v = 1( 2) + ( −1)(1) + 2(0) + 0( −1) = 1
(b)
u =
u, u =
12 + ( −1) + 22 + 02 =
6
(c)
v =
v, v =
22 + 12 + 02 + ( −1) =
6
2
(d) d (u, v) = u − v = ( −1, − 2, 2, 1) =
2
(−1) + (−2) + 22 + 12 =
2
2
10
28. 1. Since the product of real numbers within a matrix is commutative,
A, B = 2a11b11 + a12b12 + a21b21 + 2a22b22
= 2b11a11 + b12 a12 + b21a21 + 2b22 a22
= B, A .
 w11 w12 
2. Let W = 
. Then,
w21 w22 
A, B + W = 2a11 (b11 + w11 ) + a12 (b12 + w12 ) + a21 (b21 + w21 ) + 2a22 (b22 + w22 )
= 2a11b11 + 2a11w11 + a12b12 + a12 w12 + a21b21 + a21w21 + 2a22b22 + 2a22 w22
= 2a11b11 + a12b12 + a21b21 + 2a22b22 + 2a11w11 + a12 w12 + a21w21 + 2a22 w22
= A, B + A, W .
3. If c is any scalar, then
c A, B = c( 2a11b11 + a12b12 + a21b21 + 2a22b22 )
= 2(ca11 )b11 + (ca12 )b12 + (ca21 )b21 + 2(ca22 )b22
= CA, B
2
2
+ 2b22
≥ 0. Moreover, this expression is
4. Since the square of a real number is nonnegative, B, B = 2b112 + b122 + b21
equal to zero if and only if B = 0 (that is, if and only if b11 = b12 = b21 = b22 = 0 ).
30. (a)
A, B = 2(1)(0) + (0)(1) + (0)(1) + 2(1)(0) = 0
(b)
A =
A, A =
2(1) + 02 + 02 + 2(1)
(c)
B =
B, B =
2 ⋅ 0 2 + 12 + 12 + 2 ⋅ 02 =
2
2
= 2
2
(d) Use the fact that d ( A, B) = A − B . Because
 1 −1
A− B = 
, you have
−1 1
A − B, A − B = 2(1) + ( −1) + ( −1) + 2(1) = 6.
2
d ( A, B ) =
32. (a)
2
A − B, A − B =
2
2
6
A, B = 2(1)(1) + (0)(0) + (0)(1) + 2( −1)( −1) = 4
(b)
A =
A, A =
2(1) + 02 + 02 + 2( −1)
(c)
B =
B, B =
2(1) + 02 + 12 + 2( −1)
2
2
2
=
4 = 2
2
=
5
0 −1
(d) Use the fact that d ( A, B) = A − B . Because A − B = 
 , you have
0 0
A − B, A − B = 2(0) + 0 2 + ( −1) + 2(0) = 1.
d ( A, B ) =
2
2
A − B, A − B =
1 =1
2
34. 1. Since the product of real numbers is commutative,
p, q = a0b0 + a1b1 +  + anbn = b0a0 + b1a1 +  + bnan = q, p .
2. Let w = w0 + w1 x +  + wn x n , then
p, q + w = a0 (b0 + w0 ) + a1 (b1 + w1 ) x +  + an (bn + wn ) x n
= a0b0 + a0 w0 + a1b1 x + a1w1x +  + anbn x n + an wn x n
= a0b0 + a1b1 x +  + anbn x n + a0 w0 + a1w1 x +  + an wn x n
= p, q + p, w .
3. If c is any scalar, then
c p, q = c( a0b0 + a1b1 x +  + anbn x n )
= (ca0 )b0 + (ca1 )b1 x +  + (can )bn x n
= cp, q .
4. Since the square of a real number is nonnegative, ⟨q, q⟩ = b0² + b1² + ⋯ + bn² ≥ 0. Moreover, this expression is equal to zero if and only if q = 0 (that is, if and only if b0 = b1 = ⋯ = bn = 0).
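This coefficient-pairing inner product is easy to compute directly from coefficient lists. The sketch below is illustrative only (not part of the printed solution); it uses the polynomials p = 1 − 3x + x² and q = −x + 2x² that appear in Exercise 38.

```python
# Illustrative sketch of the polynomial inner product <p, q> = a0*b0 + a1*b1 + ... + an*bn.
import numpy as np

def poly_inner(p, q):
    """p and q are coefficient lists [a0, a1, ..., an] of the same length."""
    return float(np.dot(p, q))

p = [1, -3, 1]      # 1 - 3x + x^2   (Exercise 38)
q = [0, -1, 2]      # -x + 2x^2
print(poly_inner(p, q))            # 5
print(poly_inner(p, p) ** 0.5)     # ||p|| = sqrt(11)
```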
36. (a)
(b)
p, q = 1(1) + 1(0) +
p
q
2
q, q =
p, p =
11
q 2 = q, q = 02 + ( −1) + 22 = 5
2
(c)
q =
= q, q = 12 + 02 + 22 = 5
q =
2
p =
9
3
=
4
2
p, p =
p 2 = p, p = 12 + ( − 3) + 12 = 11
(b)
9
1
= p, p = 12 + 12 +   =
4
2
2
p , q = 1(0) + ( − 3)( −1) + 1( 2) = 5
38. (a)
2
p =
(c)
1
( 2) = 2
2
q, q =
5
(d) Use the fact that d ( p, q ) = p − q . Because
5
p − q = 1 − 2 x − x 2 , you have
(d) Use the fact that d ( p, q) = p − q . Because
p − q, p − q = 12 + ( − 2) + ( −1) = 6.
2
3
p − q = x − x 2 , you have
2
d ( p, q ) =
2
p − q, p − q =
6
2
13
 3
p − q , p − q = 0 2 + 12 +  −  =
.
2
4


d ( p, q ) =
40. (a)
(b)
(c)
1
f,g =
f
g
2
−1
=
f, f
f =
2
3
2
f ( x ) g ( x )dx =
=
= g, g =
g =
13
=
4
p − q, p − q =
1
−1
13
2
(− x)( x 2 − x + 2)dx =
1
(− x)(− x)dx =
−1
1
 x4

x3
2
− x 3 + x 2 − 2 x)dx = −
+
− x2  =
(
−1
4
3
3

 −1
1
1
x3 
2
 =
3  −1
3
 5
4
3
1

( x 2 − x + 2) dx = −1 ( x 4 − 2 x3 + 5 x 2 − 4 x + 4)dx =  x5 − x2 + 53x − 2 x 2 + 4 x
−1
1
1
2

 −1
=
176
15
176
15
(d) Use the fact that d ( f , g ) = f − g . Because f − g = − x − ( x 2 − x + 2) = − x 2 − 2, you have
f − g , f − g = − x 2 − 2, − x 2 − 2 =
d( f , g) =
42. (a)
(b)
f,g =
f
2
(c)
g
2
1


 −1
=
166
.
15
1
2
xe− x dx = −e− x ( x + 1) = −2e−1 + 0 = −
 −1
−1
e
1
=
2
=
3
= g, g =
g =
3
166
.
15
f − g, f − g =
= f, f
f =
 5
( x 4 + 4 x 2 + 4)dx =  x5 + 43x + 4 x
−1
1
1
−1
1
x 2 dx =
x3 
1 1
2
 = + =
3  −1
3 3
3
6
3
1
−1
1
e −2 x dx = −
e −2 x 
1 −2
2
 = ( −e + e )
2  −1
2
1 −2
( −e + e 2 )
2
(d) Use the fact that d ( f , g ) = f − g . Because f − g = x − e − x , you have
1
f − g, f − g =
1
=
( x − e− x ) dx
2
−1
−1
( x 2 − 2e− x + e−2 x )dx
1
 x3
e −2 x 
=  + 2e − x ( x + 1) −

2  −1
3
2
e −2
e2
+ 4e −1 −
+ .
3
2
2
=
d( f , g) =
2
e −2
e2
+ 4e −1 −
+
3
2
2
f − g, f − g =
π
1
44. Because u, v = (3)  + ( −1)(1) = 0, the angle between u and v is .
2
 3
π
1
46. Because u, v = 2 ( 2) + ( −1)(1) = 0, the angle between u and v is .
4
2
 
48. Because
u, v
u
v
(0)(3) + (1)( − 2) + (−2)(1)
=
(0) + (1)2 + (−2)2 (3)2 + ( −1)2 + (1)2
=
2
−4
4
= −
,
5 ⋅ 14
70
4 

the angle between u and v is cos −1  −
 ≈ 2.069 radians (118.56°).
70 

50. Because
p, q
p
q
(1)(0) + 2(0)(1) + (1)(−1)
=
2
2
2
2
2
(1) + 2(0) + (1) (0) + 2(1) + ( −1)
=
2
−1
1
= −
,
2 3
6
 1 
the angle between p and q is cos −1  −
 ≈ 1.991 radians (114.09°).
6

52. First compute
1
1
f , g = 1, x 2 =
−1
2
x3 
 =
3  −1
3
x 2 dx =
1
1 dx = x = 2  f =
 −1
−1
1
f
2
= 1, 1 =
g
2
= x2 , x2 =
1
−1
1
x 4 dx =
2
2
x5 
 g =
 =
5  −1
5
2
.
5
So,
f, g
f
g
=
23
=
2 25
5
3
 5
and the angle between f and g is cos −1 
radians ( 41.81°).
 3  ≈ 0.73


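The integral inner product used in Exercises 40-52 can be evaluated symbolically. The sketch below is illustrative only (not part of the manual); it recomputes the angle in Exercise 52 for f(x) = 1 and g(x) = x² on [−1, 1].

```python
# Illustrative check of Exercise 52 with <f, g> = integral of f*g on [-1, 1].
import sympy as sp

x = sp.symbols('x')

def inner(f, g, a=-1, b=1):
    return sp.integrate(f * g, (x, a, b))

f, g = sp.Integer(1), x**2
cos_theta = inner(f, g) / (sp.sqrt(inner(f, f)) * sp.sqrt(inner(g, g)))
print(sp.simplify(cos_theta))            # sqrt(5)/3
print(sp.acos(cos_theta).evalf())        # about 0.7297 radians (about 41.8 degrees)
```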
54. (a) To verify the Cauchy-Schwarz Inequality, observe
u, v ≤ u
( −1)(1) + (1)( −1) ≤
v
( −1) + (1) ⋅ (1) + (−1)
2
−2 ≤
2
2 ⋅
2
2
2
2 ≤ 2.
(b) To verify the Triangle Inequality, observe
u + v ≤ u + v
( 0) + ( 0 )
2
2
≤
0 ≤
(−1) + (1)
2
2 +
2
(1) + (−1)
2
+
2
2
0 ≤ 2 2.
56. (a) To verify the Cauchy-Schwarz Inequality, observe
u, v ≤ u
(1)(1) + (0)( 2) + ( 2)(0) ≤
v
(1) + (0) + (2) ⋅ (1) + ( 2) + (0)
2
1 ≤
2
5 ⋅
2
2
2
2
5
1 ≤ 5.
(b) To verify the Triangle Inequality, observe
u + v ≤ u + v
( 2) + (2) + ( 2)
2
2
2
(1) + (0) + (2)
2
≤
12 ≤
2
5 +
2
(1) + ( 2) + (0)
2
+
2
2
5
2 3 ≤ 2 5.
58. (a) To verify the Cauchy-Schwarz Inequality, observe
p, q ≤ p
(0)(1) + 2(1)(0) + (0)(−1) ≤
q
(0) + 2(1) + (0) ⋅ (1) + 2(0) + (−1)
2
0 ≤
2 ⋅
2
2
2
2
2
2
0 ≤ 2.
(b) To verify the Triangle Inequality, observe
p + q ≤ p + q
(1) + 2(1) + (−1)
2
2
2
≤
4 ≤
(0) + 2(1) + (0)
2
2 +
2
2
+
(1) + 2(0) + ( −1)
2
2
2
2
2 ≤ 2 2.
60. (a) To verify the Cauchy-Schwarz Inequality, observe
A, B ≤ A
(0)(1) + (1)(1) + (2)(2) + (−1)(−2) ≤
B
(0)2 + (1)2 + (2)2 + (−1)2 ⋅ (1)2 + (1)2 + (2)2 + (−2)2
7 ≤
6 ⋅
7 ≤
60
10
7 ≤ 7.746.
(b) To verify the Triangle Inequality, observe
A+ B ≤ A + B
(1) + (2) + (4) + (−3)
2
2
2
2
(0) + (1) + (2) + (−1)
2
≤
30 ≤
6 +
2
2
2
+
(1) + (1) + (2) + (−2)
2
2
2
2
10
5.477 ≤ 5.612.
62. (a) To verify the Cauchy-Schwarz Inequality, observe
2
f
2
 x3 
8
2 6
x 2 dx =   =
 f =
0
3
3
 3 0
2
= x, x =
g 2 = cos π x, cos π x =
=
2 1 + cos 2
πx
2
0
2
x sin π x 
 cos π x
x cos π xdx = 
+
 = 0
2
0
π
 π
0
2
f , g = x, cos π x =
2
0
cos 2 π dx
2
sin 2π x 
1
dx =  x +
=1 g =1
4 x  0
2
and observe that
f, g
≤ f
0 ≤
g
2 6
(1).
3
(b) To verify the Triangle Inequality, observe
f + g 2 = x + cos π x 2 =
 f + g =
So,
2
 x2
sin π x 
+
 = 2
2
π 0

2
( x + cos π x)dx = 
0
2.
f + g ≤ f + g
2 ≤
2 6
+ 1.
3
64. (a) To verify the Cauchy-Schwarz Inequality, compute
f
2
g
2
1
= x, x =
0
= e− x , e− x =
and observe that
f, g ≤ f
1
xe − x dx = −e − x ( x + 1) = 1 − 2e −1
 0
0
1
f , g = x, e − x =
1
x 2 dx =
1
0
x3 
1

 =
3 0
3
f =
3
3
1
e −2 x dx = −
e −2 x 
e −2
1
+ 
 = −
2 0
2
2
g =
−
e −2
1
+
2
2
g
 3 
1 − 2e −1 ≤ 


 3 
−
e −2
1
+ 
2
2 
0.264 ≤ 0.380.
(b) To verify the Triangle Inequality, compute
2
f + g
3 1
−2 x


( x + e− x ) dx = −2e− x ( x + 1) − e 2 + x3 
0
1
= x + e− x , x + e− x =
2

0
−2

e
1 
1

= −4e−1 −
+  − −2 − + 0
2
3 
2


= −4e −1 −
e −2 17
+
 f + g =
2
6
−4e −1 −
e −2 17
+
2
6
and observe that
f + g ≤ f + g
e −2
17
3
+
≤
+
2
6
3
1.138 ≤ 1.235.
−4e −1 −
66. The functions f ( x) = x and g ( x) =
f, g =
−
e −2
1
+
2
2
1 2
(3x − 1) are orthogonal because
2
1
 1  3x 4
1
1 1
x 2 
x (3 x 2 − 1)dx =
3 x3 − x)dx =  
−
(
 = 0.
1
−1 2
−
2
2  −1
2 4
1
68. The functions f ( x) = 1 and g ( x) = cos( 2nx) are orthogonal because f , g =
u, v
70. (a) projv u =
v, v
π
1

cos( 2nx )dx =  sin ( 2nx) = 0.
0
2
n

0
π
(− 3)(6) + (−1)(3) 6, 3
( )
2
2
v =
6 +3
7
= − (6, 3)
15
 14 7 
= − , − 
5
 5
v, u
(b) proju v =
u, u
u =
6( − 3) + 3( −1)
(− 3) + (−1)
2
2
( − 3, −1)
21
( − 3, −1)
10
 63 21 
=  , 
 10 10 
= −
(c)
y
4
v(6, 3)
2
projuv
u(− 3, − 1)
−4
2
4
x
6
projvu − 2
u, v
72. (a) projv u =
v, v
v, u
(b) proju v =
u, u
v =
( 2)(3) + ( −2)(1) 3, 1 = 4 3, 1 =  6 , 2 
( )
( ) 

2
2
10
5 5
(3) + (1)
u =
(3)( 2) + (1)(−2) 2, − 2 = 4 2, − 2 = 1, −1
)
(
) ( )
2
2 (
8
(2) + ( −2)
y
(c)
2
projvu
v = (3, 1)
x
4
−2
projuv
u = (2, − 2)
u, v
74. (a) projv u =
v, v
v, u
(b) proju v =
u, u
76. (a) projvu =
v =
(1)( −1) + ( 2)( 2) + ( −1)(−1) −1, 2, −1 = 4 −1, 2, −1 =  − 2 , 4 , − 2 
(
)
(
) 

2
2
2
6
 3 3 3
(−1) + ( 2) + ( −1)
u =
( −1)(1) + ( 2)(2) + (−1)(−1) 1, 2, −1 = 4 1, 2, −1 =  2 , 4 , − 2 
(
)
(
) 

2
2
2
6
 3 3 3
(1) + ( 2) + (−1)
u, v
( −1)( 2) + (4)( −1) + ( −2)( 2) + (3)(−1) 2, −1, 2, −1
v =
(
)
v, v
( 2)2 + (−1)2 + (2)2 + (−1)2
=
(b) proju v =
13 13 13 13
−13
( 2, −1, 2, −1) =  − , , − , 
10
 5 10 5 10 
v, u
(2)( −1) + ( −1)( 4) + ( 2)( −2) + (−1)(3) −1, 4, − 2, 3
u =
(
)
u, u
(−1)2 + ( 4)2 + ( −2)2 + (3)2
=
13 26 13 13
−13
(−1, 4, − 2, 3) =  , − , , − 
30
 30 15 15 10 
78. The inner products f , g and g , g are as follows.
f,g =
g, g =
1
( x3 − x)(2 x − 1)dx =
−1
1
(2 x − 1) dx =
−1
2
1
 2 x5
x4
2 x3
x2 
8
2 x 4 − x3 − 2 x 2 + x)dx = 
−
−
+
(
 = −
−1
5
4
3
2
15

 −1
1
1
 4 x3

14
4 x 2 − 4 x + 1)dx = 
− 2 x 2 + x =
(
−1
3
3

 −1
1
So, the projection of f onto g is projg f =
f,g
−8 15
4
g =
(2 x − 1) = − ( 2 x − 1).
14 3
35
g, g
80. The inner products f , g and g , g are as follows.
f, g =
g, g =
1
0
1
0
1
xe − x dx = −e − x ( x + 1) = −2e −1 + 1
0
−2 x 1
 −e 
−e −2
1
1 − e −2
e −2 x dx = 
+
=
 =
2
2
2
 2 0
So, the projection of f onto g is projg f =
f, g
g, g
g =
−2e −1 + 1 − x
−4e −1 + 2 − x
−4e − x −1 + 2e − x
=
=
.
e
e
−2
−2
1−e
1−e
1 − e−2
2
( )
( )
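The projection formula projg f = (⟨f, g⟩/⟨g, g⟩)g used in Exercises 78-84 can also be evaluated symbolically. The sketch below is illustrative only (not from the text); it reproduces Exercise 80, where f(x) = x and g(x) = e^(−x) on [0, 1].

```python
# Illustrative check of the projection in Exercise 80 (not from the text).
import sympy as sp

x = sp.symbols('x')
f, g = x, sp.exp(-x)
fg = sp.integrate(f * g, (x, 0, 1))      # <f, g> = 1 - 2/e
gg = sp.integrate(g * g, (x, 0, 1))      # <g, g> = (1 - e**(-2))/2
proj = sp.simplify(fg / gg * g)          # proj_g f
print(proj)
```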
82. The inner product f , g is
f,g =
π
−π
sin 2 x sin 3 x dx =
π
1
1
sin 5 x 
(cos x − cos 5 x) dx =  sin x −
 = 0
−π 2
2
5  −π
π
which implies that projg f = 0.
84. The inner product f , g is f , g =
π
x sin 2 x 
1 1
 cos 2 x
+
x cos 2 x dx = 
 = 4 − 4 = 0
−π
4
2

 −π
π
which implies that projg f = 0.
86. (a) False. The norm of a vector u is defined as a square root of u, u .
(b) False. The angle between av and v is zero if a > 0 and it is π if a < 0.
88.
u + v
2
+ u − v
2
= u + v , u + v + u − v, u − v
= ( u, u + 2 u, v + v , v ) + ( u, u − 2 u, v + v , v )
= 2 u
2
+ 2 v
2
90. To prove that u − projvu is orthogonal to v, you calculate their inner product as follows
u − projvu, v = u, v − projvu, v = u, v −
u, v
v, v
v, v
= u, v −
u, v
v , v = u, v − u, v = 0.
v, v
92. You have from the definition of inner product u, cv = cv, u = c v, u = c u, v .
94. Let W = {(c, 2c, 3c) : c ∈ R}. Then
W ⊥ = {v ∈ R 3 : v ⋅ (c, 2c, 3c) = 0} = {( x, y , z ) ∈ R 3 : ( x, y , z ) ⋅ (1, 2, 3) = 0}.
You need to solve x + 2 y + 3z = 0. Choosing y and z as free variables, you obtain the solution x = − 2t − 3 s , y = t , z = s
for any real numbers t and s. Therefore,
W ⊥ = {t ( −2, 1, 0) + s( −3, 0, 1) : t , s ∈ R} = span{( −2, 1, 0), ( −3, 0, 1)}.
96. (a) All four axioms of the definition of an inner product must be satisfied.
(i) u, v = v, u
(ii) u, v + w = u, v + u, w
(iii) c u, v = cu, v
(iv) v, v ≥ 0, and v, v = 0 if and only if v = 0.
(b) To find an orthogonal projection, find ⟨u, v⟩ and ⟨v, v⟩, and have v ≠ 0, so that
proj_v u = (⟨u, v⟩/⟨v, v⟩)v.
98. Let u = (x, y). Then ‖u‖ = √(c₁x² + c₂y²) = 1. Since the equation of the graph is x²/4 + y²/9 = 1, c₁ = 1/4 and c₂ = 1/9.

100. Let u = (x, y). Then ‖u‖ = √(c₁x² + c₂y²) = 1. Since the equation of the graph is x²/25 + y²/9 = 1, c₁ = 1/25 and c₂ = 1/9.
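The weighted norm used in Exercises 98 and 100 can be checked numerically. A minimal sketch (NumPy assumed; not part of the printed solution) uses the Exercise 98 weights c₁ = 1/4 and c₂ = 1/9 and confirms that points on the ellipse x²/4 + y²/9 = 1 have norm 1.

import numpy as np

c1, c2 = 1/4, 1/9

def weighted_norm(u):
    """Norm induced by <u, v> = c1*u1*v1 + c2*u2*v2."""
    return np.sqrt(c1 * u[0]**2 + c2 * u[1]**2)

print(weighted_norm((2.0, 0.0)))                        # 1.0
print(weighted_norm((0.0, 3.0)))                        # 1.0
print(weighted_norm((2*np.cos(0.7), 3*np.sin(0.7))))    # 1.0 for any point on the ellipse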
Section 5.3 Orthonormal Bases: Gram-Schmidt Process
2. (a) The set is not orthogonal since
(− 3, 5) ⋅ (4, 0) = ( − 3)(4) + 5(0) = −12 ≠ 0.
(b) The set is not orthonormal since it is not orthogonal.
(c) Because the two vectors are not scalar multiples of each other, by the Corollary to Theorem 4.8 they are linearly
independent. By Theorem 4.12, they are a basis for R 2 .
4. (a) The set is orthogonal since (2, 1) ⋅ (1/3, −2/3) = 2(1/3) + 1(−2/3) = 0.
(b) The set is not orthonormal since ‖(2, 1)‖ = √(2² + 1²) = √5 ≠ 1.
(c) Because the vectors are not scalar multiples of each other, by the Corollary to Theorem 4.8 they are linearly independent.
By Theorem 4.12, they form a basis for R 2 .
6. (a) The set is orthogonal since ( 2, − 4, 2) ⋅ (0, 2, 4) = 0 − 8 + 8 = 0, ( 2, − 4, 2) ⋅ ( −10, − 4, 2) = − 20 + 16 + 4 = 0, and
(0, 2, 4) ⋅ (−10, − 4, 2) = 0 − 8 + 8 = 0.
(b) The set is not orthonormal since ‖(2, −4, 2)‖ = √(2² + (−4)² + 2²) = √24 ≠ 1.
(c) Because the three vectors do not lie in the same plane, they span R 3 . By Theorem 4.12, they form a basis for R 3 .
 2
 2
2 − 6
6
6
2  3
3 − 3
8. (a) The set is orthogonal since 
and
 2 , 0, 2  ⋅  6 , 3 , 6  = 0,  2 , 0, 2  ⋅  3 , 3 , 3  = 0,

 


 

− 6
6
6  3
3 − 3
,
,
,
,

 ⋅ 
 = 0.
6
3
6
3
3
3 

 
 2
2
, 0,
(b) The set is orthonormal since 
 =
2 
 2
 3
3 − 3
 3 , 3 , 3  =


1
1
+0+
= 1,
2
2
− 6
6
6
,
,

 =
3
6 
 6
1 2 1
+ +
= 1, and
6 3 6
1 1 1
+ +
= 1.
3 3 3
(c) Because the three vectors do not lie in the same plane, they span R3. By Theorem 4.12, they form a basis for R3.
10. (a) The set is orthogonal since ( − 6, 3, 2, 1) ⋅ ( 2, 0, 6, 0) = −12 + 12 = 0.
(b) The set is not orthonormal since ‖(−6, 3, 2, 1)‖ = √(36 + 9 + 4 + 1) = √50 ≠ 1.
(c) Since there aren’t enough vectors, the set is not a basis for R 4 .
12. (a) The set is orthogonal since
(√10/10, 0, 0, 3√10/10) ⋅ (0, 0, 1, 0) = 0,
(√10/10, 0, 0, 3√10/10) ⋅ (0, 1, 0, 0) = 0,
(√10/10, 0, 0, 3√10/10) ⋅ (−3√10/10, 0, 0, √10/10) = −3/10 + 3/10 = 0,
(0, 0, 1, 0) ⋅ (0, 1, 0, 0) = 0,
(0, 0, 1, 0) ⋅ (−3√10/10, 0, 0, √10/10) = 0,
and (0, 1, 0, 0) ⋅ (−3√10/10, 0, 0, √10/10) = 0.
 10
3 10 
1
9
(b) The set is orthonormal since 
, 0, 0,
+
= 1, (0, 0, 1, 0) = 1, (0, 1, 0, 0) = 1, and
 =
10
10
10
10


 − 3 10
10 
9
1
, 0, 0,
+
= 1.

 =
10
10
10
10


(c) By the Corollary to Theorem 5.10, the set of four vectors is a basis for R 4 .
14. (a) The set is orthogonal since (2, −5) ⋅ (10, 4) = 20 − 20 = 0.
(b) Since ‖(2, −5)‖ = √(2² + (−5)²) = √29 and ‖(10, 4)‖ = √(10² + 4²) = 2√29, normalizing the set produces an orthonormal set.
u₁ = v₁/‖v₁‖ = (1/√29)(2, −5) = (2√29/29, −5√29/29)
u₂ = v₂/‖v₂‖ = (1/(2√29))(10, 4) = (5√29/29, 2√29/29)
16. (a) The set is orthogonal since (6/13, −2/13, 3/13) ⋅ (2/13, 6/13, 0) = 12/169 − 12/169 + 0 = 0.
(b) Since
‖(6/13, −2/13, 3/13)‖ = √((6/13)² + (−2/13)² + (3/13)²) = √(49/169) = 7/13 and
‖(2/13, 6/13, 0)‖ = √((2/13)² + (6/13)² + 0) = √(40/169) = 2√10/13,
normalizing the set produces an orthonormal set.
u₁ = v₁/‖v₁‖ = (13/7)(6/13, −2/13, 3/13) = (6/7, −2/7, 3/7)
u₂ = v₂/‖v₂‖ = (13/(2√10))(2/13, 6/13, 0) = (1/√10, 3/√10, 0)
18. The set {(sin θ , cos θ ), (cos θ , −sin θ )} is orthogonal because
(sin θ , cos θ ) ⋅ (cos θ , −sin θ ) = sin θ cos θ − cos θ sin θ = 0.
Furthermore, the set is orthonormal because
‖(sin θ, cos θ)‖ = √(sin²θ + cos²θ) = 1
‖(cos θ, −sin θ)‖ = √(cos²θ + (−sin θ)²) = 1.
So, the set forms an orthonormal basis for R 2 .
20. Use Theorem 5.11 to find the coordinates of w = (4, −3) relative to B.
(4, −3) ⋅ (√3/3, √6/3) = 4√3/3 − √6
(4, −3) ⋅ (−√6/3, √3/3) = −4√6/3 − √3
So, [w]_B = [4√3/3 − √6,  −4√6/3 − √3]ᵀ.
22. Use Theorem 5.11 to find the coordinates of w = (3, −5, 11) relative to B.
(3, −5, 11) ⋅ (1, 0, 0) = 3
(3, −5, 11) ⋅ (0, 1, 0) = −5
(3, −5, 11) ⋅ (0, 0, 1) = 11
So, [w]_B = [3, −5, 11]ᵀ.

24. Use Theorem 5.11 to find the coordinates of w = (2, −1, 4, 3) relative to B.
(2, −1, 4, 3) ⋅ (5/13, 0, 12/13, 0) = 10/13 + 48/13 = 58/13
(2, −1, 4, 3) ⋅ (0, 1, 0, 0) = −1
(2, −1, 4, 3) ⋅ (−12/13, 0, 5/13, 0) = −24/13 + 20/13 = −4/13
(2, −1, 4, 3) ⋅ (0, 0, 0, 1) = 3
So, [w]_B = [58/13, −1, −4/13, 3]ᵀ.
26. First, orthogonalize each vector in B.
w₁ = v₁ = (−1, 2)
w₂ = v₂ − (⟨v₂, w₁⟩/⟨w₁, w₁⟩)w₁ = (1, 0) − [(−1)(1) + 2(0)]/[(−1)² + 2²] (−1, 2) = (1, 0) + (1/5)(−1, 2) = (4/5, 2/5)
Then, normalize the vectors.
u₁ = w₁/‖w₁‖ = (1/√((−1)² + 2²))(−1, 2) = (−√5/5, 2√5/5)
u₂ = w₂/‖w₂‖ = (1/√((4/5)² + (2/5)²))(4/5, 2/5) = (2√5/5, √5/5)
So, the orthonormal basis is B′ = {(−√5/5, 2√5/5), (2√5/5, √5/5)}.
28. First, orthogonalize each vector in B.
w₁ = v₁ = (4, −3)
w₂ = v₂ − (⟨v₂, w₁⟩/⟨w₁, w₁⟩)w₁ = (3, 2) − [3(4) + 2(−3)]/[4² + (−3)²] (4, −3) = (3, 2) − (6/25)(4, −3) = (51/25, 68/25)
Then, normalize the vectors.
u₁ = w₁/‖w₁‖ = (1/√(4² + (−3)²))(4, −3) = (4/5, −3/5)
u₂ = w₂/‖w₂‖ = (1/√((51/25)² + (68/25)²))(51/25, 68/25) = (3/5, 4/5)
So, the orthonormal basis is B′ = {(4/5, −3/5), (3/5, 4/5)}.

30. First, orthogonalize each vector in B.
w₁ = v₁ = (1, 0, 0)
w₂ = v₂ − (⟨v₂, w₁⟩/⟨w₁, w₁⟩)w₁ = (1, 1, 1) − (1/1)(1, 0, 0) = (0, 1, 1)
w₃ = v₃ − (⟨v₃, w₁⟩/⟨w₁, w₁⟩)w₁ − (⟨v₃, w₂⟩/⟨w₂, w₂⟩)w₂ = (1, 1, −1) − (1/1)(1, 0, 0) − (0/2)(0, 1, 1) = (0, 1, −1)
Then, normalize the vectors.
u₁ = w₁/‖w₁‖ = (1, 0, 0)
u₂ = w₂/‖w₂‖ = (1/√2)(0, 1, 1) = (0, 1/√2, 1/√2)
u₃ = w₃/‖w₃‖ = (1/√2)(0, 1, −1) = (0, 1/√2, −1/√2)
So, the orthonormal basis is {(1, 0, 0), (0, 1/√2, 1/√2), (0, 1/√2, −1/√2)}.
32. First, orthogonalize each vector in B.
w₁ = v₁ = (0, 1, 2)
w₂ = v₂ − (⟨v₂, w₁⟩/⟨w₁, w₁⟩)w₁ = (2, 0, 0) − 0(0, 1, 2) = (2, 0, 0)
w₃ = v₃ − (⟨v₃, w₁⟩/⟨w₁, w₁⟩)w₁ − (⟨v₃, w₂⟩/⟨w₂, w₂⟩)w₂ = (1, 1, 1) − (3/5)(0, 1, 2) − (2/4)(2, 0, 0) = (0, 2/5, −1/5)
Then, normalize the vectors.
u₁ = w₁/‖w₁‖ = (1/√5)(0, 1, 2) = (0, 1/√5, 2/√5)
u₂ = w₂/‖w₂‖ = (1/2)(2, 0, 0) = (1, 0, 0)
u₃ = w₃/‖w₃‖ = √5 (0, 2/5, −1/5) = (0, 2/√5, −1/√5)
So, the orthonormal basis is {(0, 1/√5, 2/√5), (1, 0, 0), (0, 2/√5, −1/√5)}.
34. First, orthogonalize each vector in B.
w₁ = v₁ = (3, 4, 0, 0)
w₂ = v₂ − (⟨v₂, w₁⟩/⟨w₁, w₁⟩)w₁ = (−1, 1, 0, 0) − (1/25)(3, 4, 0, 0) = (−28/25, 21/25, 0, 0)
w₃ = v₃ − (⟨v₃, w₁⟩/⟨w₁, w₁⟩)w₁ − (⟨v₃, w₂⟩/⟨w₂, w₂⟩)w₂
   = (2, 1, 0, −1) − (10/25)(3, 4, 0, 0) − [(−7/5)/(49/25)](−28/25, 21/25, 0, 0)
   = (2, 1, 0, −1) − (6/5, 8/5, 0, 0) + (−4/5, 3/5, 0, 0) = (0, 0, 0, −1)
w₄ = v₄ − (⟨v₄, w₁⟩/⟨w₁, w₁⟩)w₁ − (⟨v₄, w₂⟩/⟨w₂, w₂⟩)w₂ − (⟨v₄, w₃⟩/⟨w₃, w₃⟩)w₃
   = (0, 1, 1, 0) − (4/25)(3, 4, 0, 0) − [(21/25)/(49/25)](−28/25, 21/25, 0, 0) − 0(0, 0, 0, −1)
   = (0, 1, 1, 0) − (12/25, 16/25, 0, 0) − (−12/25, 9/25, 0, 0) = (0, 0, 1, 0)
Then, normalize the vectors.
u₁ = w₁/‖w₁‖ = (1/5)(3, 4, 0, 0) = (3/5, 4/5, 0, 0)
u₂ = w₂/‖w₂‖ = (5/7)(−28/25, 21/25, 0, 0) = (−4/5, 3/5, 0, 0)
u₃ = w₃/‖w₃‖ = (0, 0, 0, −1)
u₄ = w₄/‖w₄‖ = (0, 0, 1, 0)
So, the orthonormal basis is {(3/5, 4/5, 0, 0), (−4/5, 3/5, 0, 0), (0, 0, 0, −1), (0, 0, 1, 0)}.
36. Because there is just one vector, you simply need to normalize it.
u₁ = (1/√(2² + (−9)² + 6²))(2, −9, 6) = (1/11)(2, −9, 6) = (2/11, −9/11, 6/11)
38. First, orthogonalize each vector in B.
w₁ = v₁ = (1, 3, 0)
w₂ = v₂ − (⟨v₂, w₁⟩/⟨w₁, w₁⟩)w₁ = (3, 0, −3) − (3/10)(1, 3, 0) = (27/10, −9/10, −3)
Then, normalize the vectors.
u₁ = w₁/‖w₁‖ = (1/√10)(1, 3, 0) = (1/√10, 3/√10, 0)
u₂ = w₂/‖w₂‖ = (10/(3√190))(27/10, −9/10, −3) = (9/√190, −3/√190, −10/√190)
40. First, orthogonalize each vector in B.
w₁ = v₁ = (7, 24, 0, 0)
w₂ = v₂ − (⟨v₂, w₁⟩/⟨w₁, w₁⟩)w₁ = (0, 0, 1, 1) − 0(7, 24, 0, 0) = (0, 0, 1, 1)
w₃ = v₃ − (⟨v₃, w₁⟩/⟨w₁, w₁⟩)w₁ − (⟨v₃, w₂⟩/⟨w₂, w₂⟩)w₂ = (0, 0, 1, −2) − 0(7, 24, 0, 0) − (−1/2)(0, 0, 1, 1) = (0, 0, 3/2, −3/2)
Then, normalize the vectors.
u₁ = w₁/‖w₁‖ = (1/25)(7, 24, 0, 0) = (7/25, 24/25, 0, 0)
u₂ = w₂/‖w₂‖ = (1/√2)(0, 0, 1, 1) = (0, 0, 1/√2, 1/√2)
u₃ = w₃/‖w₃‖ = (√2/3)(0, 0, 3/2, −3/2) = (0, 0, 1/√2, −1/√2)
So, the orthonormal basis is {(7/25, 24/25, 0, 0), (0, 0, 1/√2, 1/√2), (0, 0, 1/√2, −1/√2)}.
 2 1   2 2 2 
42. The set  , − , 
,
 from Exercise 41 is not orthonormal using the Euclidean inner product because
3 
 3 3   6
 2 1
 ,−  =
 3 3
44. 1, 1 =
46.
x2 , x =
1
−1
4
1
+
=
9
9
5
≠ 1.
3
1
1 dx = x = 1 − ( −1) = 2
 −1
1
−1
x 2 x dx =
1
−1
1
x 3dx =
x4 
1 1
−  = 0
 =
4  −1
4  4
48. ⟨x² − 1/3, x² − 1/3⟩ = ∫_{−1}^{1} (x² − 1/3)(x² − 1/3) dx
      = ∫_{−1}^{1} (x⁴ − (2/3)x² + 1/9) dx
      = [x⁵/5 − (2/9)x³ + x/9]_{−1}^{1}
      = (1/5 − 2/9 + 1/9) − (−1/5 + 2/9 − 1/9)
      = 8/45
50. The solutions of the homogeneous system are of the form (−3s + 3t, s, t), where s and t are any real numbers. So, a basis for the solution space is {(−3, 1, 0), (3, 0, 1)}.
To find an orthonormal basis B = {u₁, u₂}, use the alternative form of the Gram-Schmidt orthonormalization process, as shown below.
u₁ = v₁/‖v₁‖ = (1/√10)(−3, 1, 0) = (−3√10/10, √10/10, 0)
w₂ = v₂ − ⟨v₂, u₁⟩u₁ = (3, 0, 1) − [(3, 0, 1) ⋅ (−3√10/10, √10/10, 0)](−3√10/10, √10/10, 0) = (3/10, 9/10, 1)
u₂ = w₂/‖w₂‖ = (10/√190)(3/10, 9/10, 1) = (3√190/190, 9√190/190, √190/19)
So, an orthonormal basis for the solution space is {(−3√10/10, √10/10, 0), (3√190/190, 9√190/190, √190/19)}.
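Orthonormal bases for solution spaces such as the one in Exercise 50 can also be produced directly from the singular value decomposition. The sketch below (NumPy assumed; not part of the printed solution) uses a hypothetical one-equation system x + 3y − 3z = 0, whose solutions are (−3s + 3t, s, t); the basis it returns is orthonormal and spans the same solution space, though it need not equal the basis found by hand.

import numpy as np

A = np.array([[1.0, 3.0, -3.0]])            # illustrative coefficient matrix

def null_space_basis(A, tol=1e-12):
    """Orthonormal basis of N(A), taken from the trailing right singular vectors."""
    _, s, Vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol))
    return Vt[rank:].T                      # columns span N(A)

N = null_space_basis(A)
print(N)                                    # two orthonormal columns
print(np.allclose(A @ N, 0))                # True: every column solves the system
print(np.allclose(N.T @ N, np.eye(2)))      # True: the columns are orthonormal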
52. The solutions of the homogeneous system are of the form (s + t, 0, s, t), where s and t are any real numbers. So, a basis for the solution space is {(1, 0, 1, 0), (1, 0, 0, 1)}.
To find an orthonormal basis B = {u₁, u₂}, use the alternative form of the Gram-Schmidt orthonormalization process as shown.
u₁ = v₁/‖v₁‖ = (1/√2)(1, 0, 1, 0) = (√2/2, 0, √2/2, 0)
w₂ = v₂ − ⟨v₂, u₁⟩u₁ = (1, 0, 0, 1) − [(1, 0, 0, 1) ⋅ (√2/2, 0, √2/2, 0)](√2/2, 0, √2/2, 0) = (1/2, 0, −1/2, 1)
u₂ = w₂/‖w₂‖ = (2/√6)(1/2, 0, −1/2, 1) = (√6/6, 0, −√6/6, √6/3)
So, an orthonormal basis for the solution space is {(√2/2, 0, √2/2, 0), (√6/6, 0, −√6/6, √6/3)}.
54. The solutions of the homogeneous system are of the form (−r − t, −s, r, s, t), where r, s, and t are any real numbers. So, a basis for the solution space is {(−1, 0, 1, 0, 0), (0, −1, 0, 1, 0), (−1, 0, 0, 0, 1)}.
To find an orthonormal basis B = {u₁, u₂, u₃}, use the alternative form of the Gram-Schmidt orthonormalization process as shown.
u₁ = v₁/‖v₁‖ = (1/√2)(−1, 0, 1, 0, 0) = (−1/√2, 0, 1/√2, 0, 0)
w₂ = v₂ − ⟨v₂, u₁⟩u₁ = (0, −1, 0, 1, 0) − 0 ⋅ u₁ = (0, −1, 0, 1, 0)
u₂ = w₂/‖w₂‖ = (1/√2)(0, −1, 0, 1, 0) = (0, −1/√2, 0, 1/√2, 0)
w₃ = v₃ − ⟨v₃, u₁⟩u₁ − ⟨v₃, u₂⟩u₂ = (−1, 0, 0, 0, 1) − (1/√2)(−1/√2, 0, 1/√2, 0, 0) − 0 ⋅ u₂ = (−1/2, 0, −1/2, 0, 1)
u₃ = w₃/‖w₃‖ = (2/√6)(−1/2, 0, −1/2, 0, 1) = (−1/√6, 0, −1/√6, 0, 2/√6)
So, an orthonormal basis of the solution space is {(−1/√2, 0, 1/√2, 0, 0), (0, −1/√2, 0, 1/√2, 0), (−1/√6, 0, −1/√6, 0, 2/√6)}.
56. (a) True. See definition on page 254.
(b) True. See Theorem 5.10 on page 257.
58. Let p₁(x) = x², p₂(x) = 2x + x², and p₃(x) = 1 + 2x + x².
Then, because ⟨p₁, p₂⟩ = 0(0) + 0(2) + 1(1) = 1 ≠ 0, the set is not orthogonal. Orthogonalize the set as follows.
w₁ = p₁ = x²
w₂ = p₂ − (⟨p₂, w₁⟩/⟨w₁, w₁⟩)w₁ = x² + 2x − [0(0) + 2(0) + 1(1)]/[0² + 0² + 1²] x² = 2x
w₃ = p₃ − (⟨p₃, w₂⟩/⟨w₂, w₂⟩)w₂ − (⟨p₃, w₁⟩/⟨w₁, w₁⟩)w₁
   = 1 + 2x + x² − [1(0) + 2(2) + 1(0)]/[0² + 2² + 0²] (2x) − [1(0) + 2(0) + 1(1)]/[0² + 0² + 1²] x²
   = 1 + 2x + x² − 2x − x² = 1
Then, normalize the vectors.
u₁ = w₁/‖w₁‖ = x²
u₂ = w₂/‖w₂‖ = (1/2)(2x) = x
u₃ = w₃/‖w₃‖ = 1
So, the orthonormal set is {x², x, 1}.
60. Let p₁(x) = (5x + 12x²)/13, p₂(x) = (12x − 5x²)/13, and p₃(x) = 1. Then ⟨p₁, p₂⟩ = 60/169 − 60/169 = 0, ⟨p₁, p₃⟩ = 0, and ⟨p₂, p₃⟩ = 0.
Furthermore,
‖p₁‖ = √((25 + 144)/169) = 1, ‖p₂‖ = √((25 + 144)/169) = 1, and ‖p₃‖ = 1.
So, {p₁, p₂, p₃} is an orthonormal set.
62. Let p(x) = √2(−1 + x²) and q(x) = √2(2 + x + x²). Because ⟨p, q⟩ = (−√2)(2√2) + 0(√2) + (√2)(√2) = −4 + 0 + 2 = −2 ≠ 0, the set is not orthogonal.
Orthogonalize the set as follows.
w₁ = p = √2(x² − 1)
w₂ = q − (⟨q, w₁⟩/⟨w₁, w₁⟩)w₁ = √2(2 + x + x²) − (−2/4)√2(x² − 1) = 3√2/2 + √2 x + (3√2/2)x²
Then, normalize the vectors.
u₁ = w₁/‖w₁‖ = (1/2)√2(−1 + x²) = −√2/2 + (√2/2)x²
u₂ = w₂/‖w₂‖ = (1/√11)(3√2/2 + √2 x + (3√2/2)x²) = 3/√22 + (2/√22)x + (3/√22)x²
So, the orthonormal set is {−√2/2 + (√2/2)x², 3/√22 + (2/√22)x + (3/√22)x²}.
64. Let v = c₁v₁ + ⋯ + cₙvₙ be an arbitrary linear combination of vectors in S. Then
⟨w, v⟩ = ⟨w, c₁v₁ + ⋯ + cₙvₙ⟩ = ⟨w, c₁v₁⟩ + ⋯ + ⟨w, cₙvₙ⟩ = c₁⟨w, v₁⟩ + ⋯ + cₙ⟨w, vₙ⟩ = c₁ ⋅ 0 + ⋯ + cₙ ⋅ 0 = 0.
Because c₁, …, cₙ are arbitrary real numbers, you conclude that w is orthogonal to any linear combination of vectors in S.
66. Let v ∈ W ∩ W ⊥ . Then v ⋅ w = 0 for all w in W . In particular, since v ∈ W , v ⋅ v = 0, which implies that v = 0.
68. A = [0 1 −1; 0 −2 2; 0 −1 1], which row reduces to [0 1 −1; 0 0 0; 0 0 0], and Aᵀ = [0 0 0; 1 −2 −1; −1 2 1], which row reduces to [1 −2 −1; 0 0 0; 0 0 0]. So
N(A) = span{(1, 0, 0), (0, 1, 1)}
N(Aᵀ) = span{(2, 1, 0), (1, 0, 1)}
R(A) = span{(1, −2, −1)}
R(Aᵀ) = span{(0, 1, −1)}
and N(A) = R(Aᵀ)⊥ and N(Aᵀ) = R(A)⊥.

70. To form an orthonormal basis B′ for V, follow these steps:
(i) Begin with a basis for the inner product space. It need not be orthogonal nor consist of unit vectors.
(ii) Convert the given basis to an orthogonal basis.
(iii) Normalize each vector in the orthogonal basis to form an orthonormal basis.
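The four subspaces in Exercise 68 can be computed in one pass from the singular value decomposition. The sketch below (NumPy assumed; not part of the printed solution) uses the matrix as reconstructed above; the SVD bases are orthonormal and may differ from the hand-computed vectors, but they span the same subspaces.

import numpy as np

A = np.array([[0.0, 1.0, -1.0],
              [0.0, -2.0, 2.0],
              [0.0, -1.0, 1.0]])

U, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-12))

row_space  = Vt[:rank].T       # basis of R(A^T)
null_space = Vt[rank:].T       # basis of N(A)  = R(A^T)-perp
col_space  = U[:, :rank]       # basis of R(A)
left_null  = U[:, rank:]       # basis of N(A^T) = R(A)-perp

print(rank)                              # 1
print(np.allclose(A @ null_space, 0))    # True
print(np.allclose(A.T @ left_null, 0))   # True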
Section 5.4 Mathematical Models and Least Squares Analysis
2. The system
c₀ = 0
c₀ + 3c₁ = 1
c₀ + 4c₁ = 2
has no solution. The points are not collinear.

4. The system
c₀ − c₁ = 5
c₀ + c₁ = −1
c₀ + c₁ = −4
has no solution. The points are not collinear.

6. Orthogonal: (−3, 0, 1) ⋅ (2, 1, 6) = −6 + 0 + 6 = 0

8. Not orthogonal: (0, 1, −2) ⋅ (0, −2, 2) = −6 ≠ 0

10. (a) S = span{(0, −2, 1)}; S⊥ = span{(1, 0, 0), (0, 1, 2)}
(b) S ⊕ S⊥ = R³
 0  0 2
 1 −1
     
   
1
1
0
     
− 4  1
⊥






12. (a) S = span  −1 , 0 , 1  S = span − 2,  0
     
   
 1  2 0
 2  0
     
   
−1 −1 2
 0  1
(b) S ⊕ S ⊥ = R5
14. (a) Because S =
{[x, y, 0, 0, z] },
T
0 0
   
0 0
⊥
S = span 1 , 0 .
   
0  1
   
0 0
 1 0 0
     
0  1 0
(b) Since S = span 0, 0 , 0 , you can see that S ⊕ S ⊥ = R 5 .
     
0 0 0
     
0 0  1
16. The orthogonal complement of
 1 −1
   
− 4  1
⊥
S = span − 2,  0
   
 2  0
   
 0  1
is
 0  0 2
     
 1  1 0
⊥
⊥
(S ) = S = span = −1,  0,  1 .
 1  2 0
     
−1 −1 2
18. Using the Gram-Schmidt process, an orthogonal basis
 1 
− 5  0 0

    
 2  0 0
,
.
for S is 
,
5   1 0

   

0 0  1


0

projS v = (u1 ⋅ v )u1 + (u 2 ⋅ v )u 2 + (u 3 ⋅ v )u 3
=
 1 
 1
− 5 
− 5 
0
0


 
 
 
1  2 
0 + 10 =  2 
+
1


 5
 1
0
5
5
 
 
 

 1
0
0
 1




 
0
 1

20. Using the Gram-Schmidt process, an orthonormal basis
for S is
 1 
 1
2 
0 − 2 
  
  
1  1   1 
2 
2   2
,   .
  , 
1  1   1 
−
2 
2   2
  
  
0 − 1 
 1  
 2 
 2 
projS v = (u1 ⋅ v )u1 + (u 2 ⋅ v )u 2 + (u 3 ⋅ v )u3
1 
 1
2
− 2 
0

5
 
 


2
1
1 
 1


 
2
 2
 2
−1 
2
= 5  +

 + 0  =  
2 1 
1 
 1
 3
−
2
 2

5
2
 
 


 
1
1
0
 
− 
2


 2 
 2 
0 −1 1


22. A =  1 2 0
 1 1 1


 1 0 2


 0 1 −1
0 0 0


 0 1 1


AT = −1 2 1 
 1 0 1


 1 0 1


0 1 1
0 0 0


−2
 
N ( A) = span  1
 
 1
−1
 
N ( AT ) = span −1
 
 1
0 −1
   
R( A) = span  1,  2
   
 1  1
 0  1
   
R( AT ) = span −1, 2
   
 1 0
 1 0 −1


0 −1 1
=
A
24.
 1 1 0


 1 0 1
1

0
 
0

0
 1 0 1 1


AT =  0 −1 1 0
−1 1 0 1


0 0

1 0
0 1

0 0
 1 0 0 0


 0 1 0 1
0 0 1 1


0
 
N ( A) = 0
 
0
 0
 
−1
N ( AT ) = span  
−1
 
 1
 1  0 −1
     
0 −1
1
R( A) = span  ,  ,  
 1  1  0
     
 1  0  1
 1  0  1
     
R( A ) = span  0, −1,  1
     
−1  1 0
T
( R( A ) = R )
T
3
 1 −1
 1 1 0 1 

 1 1
26. AT A = −1 1 1 0 

 1 1 1 1 0 1


 1 0
1
3 0 3

1


= 0 3 1

1
3 1 4



1
2
 1 1 0 1  
 5

 1
 
AT b = −1 1 1 0   = −1
0
 1 1 1 1  
 5


 
2
7
1 0 0
 7
3 0 3 5
6


 16 


1
0 3 1 −1  0 1 0 − 2   x = − 2 
0 0 1
 1
1
3 1 4 5


2

 2
0

0 1 2 1 0  1


28. AT A = 2 1 1 1 2 2

 1 −1 0 1 −1  1



0
2
1
1
1
2
1

−1
6 4 0


0 = 4 11 0

0 0 4
1



−1
 1
 
0 1 2 1 0  0
 1

 
 
T
A b = 2 1 1 1 2 1 = 2
 
 1 −1 0 1 −1 −1
0


 
 
 0
1 0 0
6 4 0 1



4 11 0 2  0 1 0

0 0 4 0
0 0 1


3
3
50 
 50 
4  x = 4
25
25

0
0 2
0 1 1 
2 4

30. AT A = 
  1 1 = 

2
1
3


4 14

 1 3
 2
0 1 1  
−1
A b = 
 − 2 =  
2
1
3

 
 5
 1
T
 
 0
1 1
1 1 1 
 3 7

32. A A = 
 1 2 = 

1 2 4 
7 21

1 4
T
 1
1 1 1  
 9
AT b = 
 3 =  
1 2 4  
27
5
 1 0 0
 0
3 7 9
  x = 9 

  
9
0 1 7 
 7 
7 21 27
line: y = 97 x
y
6
(4, 5)
4
(2, 3)
(1, 1)
2
2 4  x1 
−1

  =  
4 14  x2 
 5
The solution is
4
6
x
1 −2


1 −1
1
1
1
1
1


5 0 


34. AT A = 
 1 0 = 

−2 −1 0 1 2
0 10
1 1


1 2
0
 
2
1
1
1
1
1


16
T
 
A b = 
 3 =  
−2 −1 0 1 2
15
5
 
6
5 0 16
 1 0 3.2
3.2

  
  x =  
0 10 15
0 1 1.5
1.5
line: y = 3.2 + 1.5 x
y
The normal equations are
AT Ax = AT b
y = 97 x
2
(2, 6)
4
y = 3.2 + 1.5x
(0, 3)
(− 1, 2)
(− 2, 0)
−4
(1, 5)
6
2
4
x
−2
− 17 
 x1 
x =   =  67 .
 6 
 x2 
Finally, the projection of b onto S is
 7
0 2 17
 35 

 − 6 
Ax =  1 1  7  = − 3 .
 2
 1 3  6 


 3
1
 1 1 1 1 

 1
36. AT A = 0 1 2 3 

0 1 4 9 1


1
0 0
 4 6 14

1 1


=  6 14 36

2 4
14 36 98



3 9
 2
10
 1 1 1 1  3 
 


AT b = 0 1 2 3  52  =  37
2
 
 95 
0 1 4 9  2 


2
 4
1 −2 4


 1 1 1 1 1 1 −1 1  5 0 10




38. AT A = −2 −1 0 1 2 1 0 0 =  0 10 0


 4 1 0 1 4 1 1 1 10 0 34






1 2 4
 1 0 0 39 
 39 
 4 6 14 10
20


 20 

37 
4   x = − 4 


−
6
14
36
0
1
0

2
5


 5
14 36 98 95 
1
1
0
0
1



2
2

 2 
Quadratic Polynomial: y = 39
− 54 x + 12 x 2
20
 6
 

1
1
1
1
1

  5  31
2


  7 
T
A b = −2 −1 0 1 2 2 = −17
 
 4 1 0 1 4  2  27




 
−1
31 
 5 0 10
 1 0 0 257 
 257 
2
70




 70

17
17
 0 10 0 −17  0 1 0 − 10   x = − 10 
10 0 34 27
0 0 1 − 2 
 − 2
7



 7
Quadratic Polynomial: y = 257
− 17
x − 72 x 2
70
10
40. Substitute the data points (8, 29.3), (9, 32.0), (10, 32.5), (11, 32.7 ), (12, 31.7), and (13, 31.2) into the quadratic polynomial
y = c0 + c1t + c2t 2 . You then obtain the system of linear equations
c0 + 8c1 + 64c2 = 29.3
c0 + 9c1 + 81c2 = 32.0
c0 + 10c1 + 100c2 = 32.5
c0 + 11c1 + 121c2 = 32.7
c0 + 12c1 + 144c2 = 31.7
c0 + 13c1 + 169c2 = 31.2.
This produces the least squares problem At = b, where
A = [1 8 64; 1 9 81; 1 10 100; 1 11 121; 1 12 144; 1 13 169],  t = [c₀; c₁; c₂],  b = [29.3; 32.0; 32.5; 32.7; 31.7; 31.2].
The normal equations are AᵀAt = Aᵀb, that is,
[6 63 679; 63 679 7497; 679 7497 84,595][c₀; c₁; c₂] = [189.4; 1993.1; 21,511.5],
and the solution is
t = [c₀; c₁; c₂] ≈ [−13.2; 8.50; −0.393].
The least squares quadratic is y = −13.2 + 8.50t − 0.393t 2 . Substitute the same data points into the cubic polynomial
y = c0 + c1t + c2t 2 + c3t 3. You then obtain the system of linear equations
c0 + 8c1 + 64c2 + 512c3 = 29.3
c0 + 9c1 + 81c2 + 729c3 = 32.0
c0 + 10c1 + 100c2 + 1000c3 = 32.5
c0 + 11c1 + 121c2 + 1331c3 = 32.7
c0 + 12c1 + 144c2 + 1728c3 = 31.7
c0 + 13c1 + 169c2 + 2197c3 = 31.2 .
This produces the least squares problem At = b, where
A = [1 8 64 512; 1 9 81 729; 1 10 100 1000; 1 11 121 1331; 1 12 144 1728; 1 13 169 2197],
t = [c₀; c₁; c₂; c₃],  b = [29.3; 32.0; 32.5; 32.7; 31.7; 31.2].
The normal equations are AᵀAt = Aᵀb, that is,
[6 63 679 7497; 63 679 7497 84,595; 679 7497 84,595 972,993; 7497 84,595 972,993 11,377,939][c₀; c₁; c₂; c₃] = [189.4; 1993.1; 21,511.5; 237,677.3],
and the solution is
t = [c₀; c₁; c₂; c₃] ≈ [−123.7; 41.07; −3.543; 0.1000].
The least squares regression cubic is y = −123.7 + 41.07t − 3.543t² + 0.1000t³.
2018 (quadratic): y = −13.2 + 8.50(18) − 0.393(18)² ≈ $12.5 billion
2018 (cubic): y = −123.7 + 41.07(18) − 3.543(18)² + 0.1000(18)³ ≈ $50.8 billion
Because the original data increased from 2008 to 2013 with the revenue leveling off in 2012, you can expect the revenue to
increase or stay about the same for future years. Because the cubic polynomial predicts the revenue to be about $ 50.8 billion
in 2018, this model is more accurate for predicting future revenues.
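The quadratic and cubic fits in Exercise 40 can be reproduced with a standard least squares solver. The sketch below (NumPy assumed; not part of the printed solution) builds the Vandermonde-style design matrices directly; small differences from the rounded hand values are expected.

import numpy as np

t = np.array([8.0, 9.0, 10.0, 11.0, 12.0, 13.0])
y = np.array([29.3, 32.0, 32.5, 32.7, 31.7, 31.2])

A2 = np.column_stack([t**0, t, t**2])          # columns: 1, t, t^2
A3 = np.column_stack([t**0, t, t**2, t**3])    # columns: 1, t, t^2, t^3

c2 = np.linalg.lstsq(A2, y, rcond=None)[0]
c3 = np.linalg.lstsq(A3, y, rcond=None)[0]
print(c2)    # approx [-13.2   8.50  -0.393]
print(c3)    # approx [-123.7  41.07 -3.543  0.100]

# Predictions for 2018 (t = 18):
print(c2 @ np.array([1.0, 18.0, 18.0**2]))             # about 12.5
print(c3 @ np.array([1.0, 18.0, 18.0**2, 18.0**3]))    # about 50.8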
42. The vector Ax that minimizes ‖Ax − b‖ for a given vector b is Ax = proj_S b, where S = R(A). Since
Ax − b = proj_S b − b, (Ax − b) ∈ S⊥. Then (Ax − b) ∈ N(Aᵀ), because S⊥ = R(A)⊥ = N(Aᵀ). So
Aᵀ(Ax − b) = 0
AᵀAx − Aᵀb = 0
AᵀAx = Aᵀb.
These normal equations are used to find x and solve the least squares problem.
44. (a) False. They are orthogonal subspaces of R m not R n .
(b) True. See the “Definition of Orthogonal Complement” on page 266.
(c) True. See page 265 for the definition of the “Least Squares Problem.”
46. Let S be a subspace of R n and S ⊥ its orthogonal complement. S ⊥ contains the zero vector. If v1, v 2 ∈ S ⊥ , then for all w ∈ S ,
(v₁ + v₂) ⋅ w = v₁ ⋅ w + v₂ ⋅ w = 0 + 0 = 0  ⟹  v₁ + v₂ ∈ S⊥
and for any scalar c,
(cv₁) ⋅ w = c(v₁ ⋅ w) = c0 = 0  ⟹  cv₁ ∈ S⊥.
48. Let x ∈ S₁ ∩ S₂, where Rⁿ = S₁ ⊕ S₂. Then x = v₁ + v₂, v₁ ∈ S₁ and v₂ ∈ S₂. But
x ∈ S₁ ⟹ x = x + 0, x ∈ S₁, 0 ∈ S₂, and x ∈ S₂ ⟹ x = 0 + x, 0 ∈ S₁, x ∈ S₂. So, x = 0 by the uniqueness of the direct sum representation.
Section 5.5 Applications of Inner Product Spaces
2. i × j = |i j k; 1 0 0; 0 1 0| = |0 0; 1 0| i − |1 0; 0 0| j + |1 0; 0 1| k = 0i − 0j + k = k
[Figure: k = i × j drawn along the z-axis.]

4. k × j = |i j k; 0 0 1; 0 1 0| = |0 1; 1 0| i − |0 1; 0 0| j + |0 0; 0 1| k = −i − 0j + 0k = −i
[Figure: −i = k × j drawn along the negative x-axis.]

6. k × i = |i j k; 0 0 1; 1 0 0| = |0 1; 0 0| i − |0 1; 1 0| j + |0 0; 1 0| k = 0i + j + 0k = j
[Figure: j = k × i drawn along the y-axis.]
 i j k


8. (a) u × v = 2 0 1
 1 0 3


0 1
= i
0 3
j k
i


10. (a) u × v =  1 −1 −1
2 2 2


− j
2 1
1 3
2 0
+k
= i
1 0
−1 −1
2
2
− j
1 −1
2
2
+k
1 −1
2
2
= i(0 − 0) − j(6 − 1) + k (0 − 0)
= i( − 2 + 2) − j( 2 + 2) + k ( 2 + 2)
= 0i − 5 j + 0k = −5 j
= 0 j − 4 j + 4k = − 4 j + 4k
 i j k


(b) v × u =  1 0 3
2 0 1


0 3
= i
0 1
j k
i


(b) v × u = 2 2 2
 1 −1 −1


− j
1 3
2 1
1 0
+k
= i
2 0
2
2
−1 −1
− j
2
2
1 −1
+k
2
2
1 −1
= i(0 − 0) − j(1 − 6) + k (0 − 0)
= i( − 2 + 2) − j( − 2 − 2) + k ( − 2 − 2)
= 0i + 5 j + 0k = 5 j
= 0 j + 4 j − 4k = 4 j − 4k
i j k 


(c) v × v = 1 0 3
1 0 3


0 3
= i
0 3
 i j k


(c) v × v = 2 2 2
2 2 2


− j
1 3
1 3
+k
1 0
= i
1 0
2 2
2 2
− j
2 2
2 2
+k
2 2
2 2
= i(0 − 0) − j(3 − 3) + k (0 − 0)
= i( 2 − 2) − j( 2 − 2) + k ( 2 − 2)
= 0i − 0 j + 0k = 0
= 0i + 0 j + 0k = 0
i
j
k
12. (a) u × v = 3 − 3 − 3
3 −3
3
−3 −3
= i
−3
− j
3
3 −3
3
3
+k
3 −3
3 −3
= i( − 9 − 9) − j(9 + 9) + k ( − 9 + 9) = −18i − 18 j
i
j
k
(b) v × u = 3 − 3
3
3 −3 −3
= i
−3
3
−3 −3
− j
3
3
3 −3
+k
3 −3
3 −3
= i(9 + 9) − j( − 9 − 9) + k ( − 9 + 9) = 18i + 18 j
i
j k
(c) v × v = 3 − 3
3
3 −3
3
= i
−3 3
−3 3
− j
3 3
3 3
+k
3 −3
3
−3
= i( − 9 + 9) − j(9 − 9) + k ( − 9 + 9)
= 0
 i j k


14. (a) u × v = − 2 9 − 3
 4 6 − 5


9 −3
= i
6 −5
− j
i
−2 −3
4 −5
+k
−2 9
4 6
 i j k


(b) v × u =  4 6 − 5
− 2 9 − 3


9 −3
− j
4 −5
−2 −3
+k
4 6
−2 9
6 −5
j
4 −5
4 −5
+k
Furthermore, u × v = (0, 0, 0) is orthogonal to both
(− 5, 19, −12) and (5, −19, 12) because
k
2 = −3i − j − k = ( −3, −1, −1)
j k
4 2
12
4 6
(−1, 1, 2) and (0, 1, −1) because
(−3, −1, −1) ⋅ (−1,1, 2) = 0 and
(−3, −1, −1) ⋅ (0, 1, −1) = 0.
1
k
4 6
Furthermore, u × v = ( −3, −1, −1) is orthogonal to both
18. u × v = −2
j
19 −12 = 0i + 0 j + 0k = (0, 0, 0)
5 −19
0 1 −1
i
1 = i + j + k = (1, 1, 1)
3 −2
(1, − 2, 1) and (−1, 3, − 2) because (1, 1, 1) ⋅ (1, − 2, 1) = 0
and (1, 1, 1) ⋅ ( −1, 3, −2) = 0.
i
= 0i + 0 j + 0k = 0
i
k
Furthermore, u × v = (1, 1, 1) is orthogonal to both
26. u × v = − 5
= i ( − 30 + 30) − j( − 20 + 20) + k ( 24 − 24)
16. u × v = −1 1
j
1 −2
−1
 i j k


(c) v × v = 4 6 − 5
4 6 − 5


− j
(2, −1, 1) and (3, −1, 0) because (1, 3, 1) ⋅ (2, −1, 1) = 0
and (1, 3, 1) ⋅ (3, −1, 0) = 0.
24. u × v =
= 27i + 22 j + 48k
6 −5
Furthermore, u × v = (1, 3, 1) is orthogonal to both
i
= i( −18 + 45) − j( −12 − 10) + k (36 + 12)
= i
1 = i + 3 j + k = (1, 3, 1)
3 −1 0
= − 27i − 22 j − 48k
6 −5
j k
22. u × v = 2 −1
= i( − 45 + 18) − j(10 + 12) + k ( −12 − 36)
= i
1 = −2i + 4 j − 8k = ( −2, 4, − 8)
0
Furthermore, u × v = ( −2, 4, − 8) is orthogonal to both
(−2, 1, 1) and (4, 2, 0) because
(−2, 4, −8) ⋅ (−2, 1, 1) = 0 and
(−2, 4, −8) ⋅ (4, 2, 0) = 0.
i
j
k
20. u × v = 4
1
0 = −2i + 8 j + 5k = ( −2, 8, 5)
(0, 0, 0) ⋅ ( − 5, 19, −12) = 0 and
(0, 0, 0) ⋅ (5, −19, 12) = 0.
28. Using a graphing utility,
w = u × v = (7,1, 3).
Check if w is orthogonal to both u and v:
w ⋅ u = (7, 1, 3) ⋅ (1, 2, −3) = 7 + 2 − 9 = 0
w ⋅ v = (7, 1, 3) ⋅ ( −1, 1, 2) = −7 + 1 + 6 = 0
30. Using a graphing utility,
w = u × v = (0, 9, 0).
Check if w is orthogonal to both u and v:
w ⋅ u = (0, 9, 0) ⋅ ( 2, 0, −1) = 0 + 0 + 0 = 0
w ⋅ v = (0, 9, 0) ⋅ ( −1, 0, − 4) = 0 + 0 + 0 = 0
32. Using a graphing utility,
w = u × v = (0, 5, 5).
Check if w is orthogonal to both u and v:
w ⋅ u = (0, 5, 5) ⋅ (3, −1, 1) = 0 − 5 + 5 = 0
w ⋅ v = (0, 5, 5) ⋅ ( 2, 1, −1) = 0 + 5 − 5 = 0
3 2 −2
Furthermore, u × v = ( −2, 8, 5) is orthogonal to both
(4, 1, 0) and (3, 2, −2) because (−2, 8, 5) ⋅ (4, 1, 0) = 0
and ( −2, 8, 5) ⋅ (3, 2, −2) = 0.
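A quick numerical check of the cross products above is straightforward. The sketch below (NumPy assumed; not part of the printed solution) recomputes u × v for Exercise 20 and confirms it is orthogonal to both u and v.

import numpy as np

u = np.array([4.0, 1.0, 0.0])
v = np.array([3.0, 2.0, -2.0])

w = np.cross(u, v)
print(w)               # [-2.  8.  5.]
print(np.dot(w, u))    # 0.0
print(np.dot(w, v))    # 0.0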
34. Using a graphing utility,
42.
w ⋅ u = ( −8, 16, − 2) ⋅ ( 4, 2, 0) = −32 + 32 + 0 = 0
u×v =
j
36.
u×v =
=
i
38.
0
1
u × v = ( −1, 0, 1) =
0 = − 6i + 3j − 2k
2 square units.
46. Because
i
36 + 9 + 4 = 7
Unit vector =
1 = −i + k = ( −1, 0, 1),
the area of the parallelogram is
1 0 −3
u×v =
1
k
u×v = 1 2
j k
u × v = 1 −1
u×v
1
6
Unit vector =
=
( 2, 7, 1) =
( 2, 7, 1)
u×v
18
3 6
j
2
2
1
i + j+ k
3
3
3
44. Because
54 = 3 6
i
u×v
1
= (6i + 6 j + 3k )
u×v
9
Unit vector =
3 = ( 2, 7, 1)
0 −2
1
36 + 36 + 9 = 9
k
u × v = 2 −1
2 = 6i + 6 j + 3k
−1 −2
2
w ⋅ v = ( −8, 16, − 2) ⋅ (1, 0, − 4) = −8 + 0 + 8 = 0
k
u × v = 1 −2
Check if w is orthogonal to both u and v:
i
j
i
w = u × v = ( −8,16, − 2).
u×v
6
3
2
= − i + j− k
u×v
7
7
7
u×v =
j k
2 −1 0 = 3k = (0, 0, 3),
−1
2
0
the area of the parallelogram is
j
i
40.
u × v = 7 −14
k
5 = 70i + 175 j + 392k
28 −15
14
u×v =
702 + 1752 + 3922
=
189,189 = 21 429
Unit vector =
(0, 0, 3) = 3 square units.
u×v
1
=
(70, 175, 392)
u×v
21 249
=
1
(10, 25, 56)
3 429
=
429
(10, 25, 56)
1287
48. (4, 0, 3) − (1, −2, 0) = (3, 2, 3)
(2, 2, 3) − (−1, 0, 0) = (3, 2, 3)
(2, 2, 3) − (4, 0, 3) = (−2, 2, 0)
(−1, 0, 0) − (1, −2, 0) = (−2, 2, 0)
u = (3, 2, 3) and v = (−2, 2, 0)
Because u × v = |i j k; 3 2 3; −2 2 0| = −6i − 6j + 10k = (−6, −6, 10),
the area of the parallelogram is ‖u × v‖ = √((−6)² + (−6)² + 10²) = √172 = 2√43 square units.
50. (0, 1, 2) − (2, −3, 4) = (−2, 4, −2)
(0, 1, 2) − (−1, 2, 0) = (1, −1, 2)
Because u × v = |i j k; −2 4 −2; 1 −1 2| = 6i + 2j − 2k = (6, 2, −2),
the area of the triangle is A = (1/2)‖u × v‖ = (1/2)√(6² + 2² + (−2)²) = (1/2)√44 = √11.
52. Because
54. Because
i
j k
i
v × w = 0 −1 0 = −i = ( −1, 0, 0),
0
0
j k
v×w = 0 3
0 = 3i = (3, 0, 0),
1
0 0
the triple scalar product of u, v, and w is
the triple scalar product is
u ⋅ ( v × w) = (−1, 0, 0) ⋅ ( −1, 0, 0) = 1.
u ⋅ ( v × w) = ( 2, 0, 1) ⋅ (3, 0, 0) = 6.
i
j
k
i
j
56. c(u × v ) = c u1 u2 u3 = cu1 cu2
i
v1
v2
j
k
v3
v1
v2
k
i
j
k
cu3 = cu × v = c u1 u2 u3 = u1
u2
u3 = u × cv
v3
i
v1
j
v2
k
1
v3
cv1 cv2
cv3
58. u × u = |i j k; u₁ u₂ u₃; u₁ u₂ u₃| = 0, because two rows are the same.

60. ‖u × v‖² = ‖u‖²‖v‖² sin²θ = ‖u‖²‖v‖²(1 − cos²θ) = ‖u‖²‖v‖²(1 − (u ⋅ v)²/(‖u‖²‖v‖²)) = ‖u‖²‖v‖² − (u ⋅ v)²
62. (a) Because v × w = |i j k; 0 1 1; 1 0 2| = (2, 1, −1),
the volume is given by u ⋅ (v × w) = |(1, 1, 0) ⋅ (2, 1, −1)| = |1(2) + 1(1) + 0(−1)| = 3 cubic units.
(b) Because v × w = |i j k; 0 1 1; 1 0 1| = i + j − k = (1, 1, −1),
the volume is given by u ⋅ (v × w) = |(1, 1, 0) ⋅ (1, 1, −1)| = 2 cubic units.
(c) u ⋅ (v × w) = |0 2 2; 0 0 −2; 3 0 2| = 0 − 2(6) + 2(0) = −12
The volume is given by |u ⋅ (v × w)| = 12 cubic units.
(d) u ⋅ (v × w) = |1 2 −1; −1 2 2; 2 0 1| = 1(2) − 2(−1 − 4) − 1(0 − 4) = 16
The volume is given by |u ⋅ (v × w)| = 16 cubic units.
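Because u ⋅ (v × w) equals the determinant of the matrix with rows u, v, and w, the volumes above can be checked with a determinant. A short sketch (NumPy assumed; not part of the printed solution) for parts (c) and (d) of Exercise 62:

import numpy as np

def volume(u, v, w):
    """Volume of the parallelepiped with adjacent edges u, v, w."""
    return abs(np.linalg.det(np.array([u, v, w], dtype=float)))

print(volume([0, 2, 2], [0, 0, -2], [3, 0, 2]))     # 12.0
print(volume([1, 2, -1], [-1, 2, 2], [2, 0, 1]))    # 16.0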
64. ‖u‖ ‖v‖ sin θ = ‖u‖ ‖v‖ √(1 − cos²θ)
      = ‖u‖ ‖v‖ √(1 − (u ⋅ v)²/(‖u‖²‖v‖²))
      = √(‖u‖²‖v‖² − (u ⋅ v)²)
      = √((u₁² + u₂² + u₃²)(v₁² + v₂² + v₃²) − (u₁v₁ + u₂v₂ + u₃v₃)²)
      = √((u₂v₃ − u₃v₂)² + (u₃v₁ − u₁v₃)² + (u₁v₂ − u₂v₁)²)
      = ‖u × v‖
i
j
k
66. (a) u × ( v × w ) = u × v1
v2
v3
w1
w2
w3
= u × (v2 w3 − w2v3 )i − (v1w3 − w1v3 ) j + (v1w2 − v2 w1 )k 
=
i
j
k
u1
u2
u3
(v2 w3 − w2v3 ) ( w1v3 − v1w3 ) (v1w2 − v2 w1 )
= (u2 (v1w2 − v2 w1 )) − u3 ( w1v3 − v1w3 ) i − u1 (v1w2 − v2 w1 ) − u3 (v2 w3 − w2v3 ) j
+ u1 ( w1v3 − v1w3 ) − u2 (v2 w3 − w2v3 )k
= (u2 w2v1 + u3w3v1 − u2v2 w1 − u3v3w1 , u1w1v2 + u3w3v2 − u1v1w2 − u3v3 w2 ,
u1w1v3 + u2 w2v3 − u1v1w3 − u2v2 w3)
= (u1w1 + u2 w2 + u3w3 )(v1 , v2 , v3 ) − (u1v1 + u2v2 + u3v3 )( w1 , w2 , w3 )
= (u ⋅ w ) v − (u ⋅ v)w
(b) Let
u = (1, 0, 0), v = (0,1, 0)
and w = (1, 1,1).
Then
v × w = (1, 0, −1) and u × v = (0, 0, 1).
So
u × ( v × w) = (1, 0, 0) × (1, 0, −1) = (0, 1, 0),
while
(u × v) × w = (0, 0, 1) × (1, 1, 1) = (−1, 1, 0),
which are not equal.
68. (a) The standard basis for P₁ is {1, x}. Applying the Gram-Schmidt orthonormalization process produces the orthonormal basis
B = {w₁, w₂} = {1/√3, (1/3)(2x − 5)}.
The least squares approximating function is given by g(x) = ⟨f, w₁⟩w₁ + ⟨f, w₂⟩w₂.
Find the inner products
⟨f, w₁⟩ = ∫₁⁴ √x (1/√3) dx = (1/√3)[(2/3)x^(3/2)]₁⁴ = 14/(3√3)
⟨f, w₂⟩ = ∫₁⁴ √x (1/3)(2x − 5) dx = (1/3)[(4/5)x^(5/2) − (10/3)x^(3/2)]₁⁴ = 22/45
and conclude that
g(x) = ⟨f, w₁⟩w₁ + ⟨f, w₂⟩w₂ = [14/(3√3)](1/√3) + (22/45)(1/3)(2x − 5) = 44x/135 + 20/27 = (4/135)(25 + 11x).
(b) [Graph of f(x) = √x and g(x) on the interval [0, 4.5].]
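The projection onto an orthonormal polynomial basis in Exercise 68 can be verified symbolically. The sketch below (SymPy assumed; not part of the printed solution) projects f(x) = √x onto {1/√3, (2x − 5)/3} on [1, 4] and recovers (4/135)(25 + 11x).

import sympy as sp

x = sp.symbols('x')
f = sp.sqrt(x)
w1 = 1 / sp.sqrt(3)
w2 = (2*x - 5) / 3

inner = lambda p, q: sp.integrate(p * q, (x, 1, 4))    # <p, q> on [1, 4]

g = inner(f, w1) * w1 + inner(f, w2) * w2
print(sp.expand(g))    # 44*x/135 + 20/27, i.e. (4/135)(25 + 11x)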
70. (a) The standard basis for P₁ is {1, x}. Applying the Gram-Schmidt orthonormalization process produces the orthonormal basis
B = {w₁, w₂} = {1, √3(2x − 1)}.
The least squares approximating function is then given by g(x) = ⟨f, w₁⟩w₁ + ⟨f, w₂⟩w₂.
Find the inner products
⟨f, w₁⟩ = ∫₀¹ e^(−2x) dx = [−(1/2)e^(−2x)]₀¹ = −(1/2)(e^(−2) − 1)
⟨f, w₂⟩ = ∫₀¹ e^(−2x) √3(2x − 1) dx = [−√3 x e^(−2x)]₀¹ = −√3 e^(−2)
and conclude that
g(x) = ⟨f, w₁⟩w₁ + ⟨f, w₂⟩w₂ = −(1/2)(e^(−2) − 1) − √3 e^(−2) ⋅ √3(2x − 1) = −6e^(−2)x + (1/2)(5e^(−2) + 1) ≈ −0.812x + 0.8383.
(b) [Graph of f(x) = e^(−2x) and g(x) on the interval [0, 1].]
P1 ,
72. (a) The standard basis for
is {1, x}. Applying the Gram-Schmidt orthonormalization process produces the orthonormal basis
 2π
6π
B = {w1 , w 2} = 
,
4x − π ) .
2 (
π
π

The least squares approximating function is then given by g ( x) = f , w1 w1 + f , w 2 w 2 .
Find the inner products
π 2
f , w1 =
0
π 2
f , w2 =
π 2

2π 
2π
cos x
dx = −
π
 0
 π 

(sin x)

6π
(sin x)
 π
0
2

(4 x − π ) dx =

=
2π
π
6π
6π
π 2
[−4 x cos x + 4 sin x + π cos x]0 = 2 (4 − π )
π2
π
and conclude that
g ( x) = f , w1 w1 + f , w 2 w 2
 6π

2π  2π 
6π
4 − π )  2 ( 4 x − π )

+
2 (


π  π 
π
 π

2
6
=
+ 3 ( 4 − π )( 4 x − π )
=
π
=
(b)
π
24( 4 − π )
π3
x −
8(3 − π )
π2
≈ 0.6644 x + 0.1148.
1.5
g
f
0
π
2
0
74. (a) The standard basis for P2 is {1, x, x 2}. Applying the Gram-Schmidt orthonormalization process
produces the orthonormal basis
 1 1
2 5 2
11 
B = {w1 , w 2 , w 3} = 
, ( 2 x − 5),
 x − 5x +  .
3
2
3
3
3


The least squares approximating function is then given by g ( x) = f , w1 w1 + f , w2 w2 + f , w3 w3.
Find the inner products
4
1
14
f , w1 =
x
dx =
(see Exercise 51)
1
3
3 3
4
1
22
f , w2 =
x (2 x − 5)dx =
(see Exercise 51)
1
3
45
f , w3 =
4
1
x
2 5 2
11 
2 5
 x − 5 x + dx =
2
3 3
3 3

4
4
11 1 2 
2 5 2 7 2
11 
−2 5
 52
32
x − 2 x 5 2 + x3 2  =
 x − 5 x + x dx =

2
7
3
3 3
63 3

1
1 
and conclude that g is given by
14  1  22  1
2 5 2 5 2
11 

g ( x) =
⋅
 ( 2 x − 5)  −
 x − 5x + 

+
2
3 3  3  45  3
 63 3 3 3 
14 44 x
22
20 2 100
110
20 2 1424
310
=
+
−
−
= −
.
x +
x −
x +
x +
9
135
27 567
567
567
567
2835
567
(b)
1.5
f
g
0
0
1
76. (a) The standard basis for P2 is {1, x, x 2}. Applying the Gram-Schmidt orthonormalization process produces the orthonormal
 1 2 3 6 5 2 π 2 
basis B = {w1 , w 2 , w 3} = 
, 3 2 x, 5 2  x −
 .
π 
12 
 π π
The least squares approximating function is then given by g ( x) = f , w w1 + f , w 2 w 2 + f , w3 w3.
Find the inner products
f , w1 =
π 2
1
−π 2
π
π 2
cos x dx =
sin x 
=
π  −π 2
2
π
π /2
f , w2 =
 2 3 cos x
2 3 x sin x 
+
= 0
x cos x dx = 

32
−π 2 π 3 2
π
π3 2

 −π 2
f , w3 =
12 5 x cos x
6 5 2 π 2 
cos x dx = 
+
x −


3
2
−π 2 π
12 
π5 2


π 2
2 3
π 2
π /2
5 (12 x 2 − π 2 − 24)sin x 
2 5 (π 2 − 12)

=
52
2π
π5 2

−π 2
and conclude that
g ( x) = f , w1 w1 + f , w 2 w 2 + f , w 3 w 3
 2 3   2 5 (π 2 − 12)  6 5  2 π 2  
 2  1 

= 
x −


 + (0) 3 2 x  + 
 π 5 2 
12  
π5 2
 π  π 
π
 

=
2
π
+
2
60π 2 − 720  2 π 2   60(π − 12)  2
60 − 3π 2

x +
x
−
=
≈ −0.4177 x 2 + 0.9802.


5
5

π
π
π3
12  



78. The fourth order Fourier approximation of f ( x) = π − x is of the form
g ( x) =
a0
+ a1 cos x + b1 sin x + a2 cos 2 x + b2 sin 2 x + a3 cos 3 x + b3 sin 3 x + a4 cos 4 x + b4 sin 4 x.
2
In Exercise 67, you determined a0 and the general form of the coefficients a j and b j .
a₀ = 0
aⱼ = 0, j = 1, 2, 3, …
bⱼ = 2/j, j = 1, 2, 3, …
So, the approximation is g(x) = 2 sin x + sin 2x + (2/3) sin 3x + (1/2) sin 4x.
80. The fourth order Fourier approximation of f(x) = (x − π)² is of the form
g(x) = a₀/2 + a₁ cos x + b₁ sin x + a₂ cos 2x + b₂ sin 2x + a₃ cos 3x + b₃ sin 3x + a₄ cos 4x + b₄ sin 4x.
In Exercise 69, you determined a₀ and the general form of the coefficients aⱼ and bⱼ.
a₀ = 2π²/3
aⱼ = 4/j², j = 1, 2, …
bⱼ = 0, j = 1, 2, …
So, the approximation is g(x) = π²/3 + 4 cos x + cos 2x + (4/9) cos 3x + (1/4) cos 4x.
82. The second order Fourier approximation of f(x) = e^(−x) is of the form
g(x) = a₀/2 + a₁ cos x + b₁ sin x + a₂ cos 2x + b₂ sin 2x.
In Exercise 71, you found that
a₀ = (1 − e^(−2π))/π
a₁ = (1 − e^(−2π))/(2π)
b₁ = (1 − e^(−2π))/(2π).
So, you need to determine a₂ and b₂.
a₂ = (1/π) ∫₀^(2π) f(x) cos 2x dx = (1/π) ∫₀^(2π) e^(−x) cos 2x dx = (1/π)[(1/5)(−e^(−x) cos 2x + 2e^(−x) sin 2x)]₀^(2π) = (1 − e^(−2π))/(5π)
b₂ = (1/π) ∫₀^(2π) f(x) sin 2x dx = (1/π) ∫₀^(2π) e^(−x) sin 2x dx = (1/π)[(1/5)(−e^(−x) sin 2x − 2e^(−x) cos 2x)]₀^(2π) = 2(1 − e^(−2π))/(5π)
So, the approximation is
g(x) = (1 − e^(−2π))/(2π) + [(1 − e^(−2π))/(2π)] cos x + [(1 − e^(−2π))/(2π)] sin x + [(1 − e^(−2π))/(5π)] cos 2x + [2(1 − e^(−2π))/(5π)] sin 2x
      = (1/(10π))(1 − e^(−2π))(5 + 5 cos x + 5 sin x + 2 cos 2x + 4 sin 2x).
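The Fourier coefficients above can also be computed by numerical integration. The sketch below (SciPy assumed; not part of the printed solution) evaluates the second order coefficients for f(x) = e^(−x) on [0, 2π] and matches the closed forms found in Exercise 82.

import numpy as np
from scipy.integrate import quad

f = lambda x: np.exp(-x)

a = lambda j: quad(lambda x: f(x) * np.cos(j * x), 0, 2 * np.pi)[0] / np.pi
b = lambda j: quad(lambda x: f(x) * np.sin(j * x), 0, 2 * np.pi)[0] / np.pi

print(a(0) / 2, a(1), b(1))   # each approx (1 - e^(-2*pi))/(2*pi) ~ 0.1589
print(a(2))                   # approx (1 - e^(-2*pi))/(5*pi) ~ 0.0636
print(b(2))                   # approx 2*(1 - e^(-2*pi))/(5*pi) ~ 0.1271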
84. The second order Fourier approximation of f ( x) = e−2 x is of the form
g ( x) =
a0
+ a1 cos x + b1 sin x + a2 cos 2 x + b2 cos 2 x.
2
In Exercise 73, you found that
a0 =
1 − e −4π
2π
 1 − e −4π 
a1 = 2

 5π 
b1 =
1 − e −4π
.
5π
So, you need to determine a 2 and b2 .
a2 =
b2 =
1
2π
π
0
1
2π
π
0
f ( x) cos 2 x dx =
f ( x) sin 2 x dx =
1
2π
π
0
1
2π
π
0
2π
1 −2 x
1

e −2 x cos 2 x dx =  e −2 x sin 2 x −
e cos 2 x
4π
 4π
0
=
1 − e −4π
4π
=
1 − e −4π
4π
2π
1 −2 x
 1

e −2 x sin 2 x dx = − e −2 x sin 2 x −
e cos 2 x
π
π
4
4

0
So, the approximation is
g ( x) =
 1 − e −4π 
1 − e −4π
1 − e −4π
1 − e −4π
1 − e −4π
+ 2
sin x +
cos 2 x +
sin 2 x
 cos x +
4π
5π
4π
4π
 5π 
 1 − e −4π 
 1 − e −4π 
 1 − e −4π 
 1 − e −4π 
 1 − e −4π 
= 5
 + 8
 cos x + 4
 sin x + 5
 cos 2 x + 5
 sin 2 x
 20π 
 20π 
 20π 
 20π 
 20π 
 1 − e −4π 
= 
(5 + 8 cos x + 4 sin x + 5 cos 2 x + 5 sin 2 x).
 20π 
86. The fourth order Fourier approximation of f ( x) = 1 + x is of the form
g ( x) =
a0
+ a1 cos x + b1 sin x + a2 cos 2 x + b2 sin 2 x + a3 cos 3 x + b3 sin 3 x + a4 cos 4 x + b4 sin 4 x.
2
In Exercise 71, you found that
a₀ = 2 + 2π
aⱼ = 0, j = 1, 2, …
bⱼ = −2/j, j = 1, 2, …
So, the approximation is g(x) = (1 + π) − 2 sin x − sin 2x − (2/3) sin 3x − (1/2) sin 4x.
88. Because f(x) = sin²x = 1/2 − (1/2) cos 2x, you see that the fourth order Fourier approximation is simply g(x) = 1/2 − (1/2) cos 2x.
90. Because
a₀ = 2π²/3, aⱼ = 4/j² (j = 1, 2, …), bⱼ = 0 (j = 1, 2, …),
the nth order Fourier approximation is
g(x) = π²/3 + 4 cos x + cos 2x + (4/9) cos 3x + (4/16) cos 4x + ⋯ + (4/n²) cos nx.
92. (a) If u = (u₁, u₂, u₃) and v = (v₁, v₂, v₃), then the cross product of u and v is the vector
u × v = (u₂v₃ − u₃v₂, u₃v₁ − u₁v₃, u₁v₂ − u₂v₁).
(b) For a continuous function f on [a, b] and a finite-dimensional subspace W of C[a, b], the least squares approximating function of f with respect to W is given by g = ⟨f, w₁⟩w₁ + ⟨f, w₂⟩w₂ + ⋯ + ⟨f, wₙ⟩wₙ, where B = {w₁, w₂, …, wₙ} is an orthonormal basis for W.
(c) On the interval [0, 2π], the least squares approximation of a continuous function f with respect to the vector space spanned by {1, cos x, …, cos nx, sin x, …, sin nx} is
g(x) = a₀/2 + a₁ cos x + ⋯ + aₙ cos nx + b₁ sin x + ⋯ + bₙ sin nx,
where the Fourier coefficients a₀, a₁, …, aₙ, b₁, …, bₙ are
a₀ = (1/π) ∫₀^(2π) f(x) dx
aⱼ = (1/π) ∫₀^(2π) f(x) cos jx dx, j = 1, 2, …, n
bⱼ = (1/π) ∫₀^(2π) f(x) sin jx dx, j = 1, 2, …, n.
Review Exercises for Chapter 5
2. (a) ‖u‖ = √((−1)² + 2²) = √5
(b) ‖v‖ = √(2² + 3²) = √13
(c) u ⋅ v = −1(2) + 2(3) = 4
(d) d(u, v) = ‖u − v‖ = ‖(−3, −1)‖ = √((−3)² + (−1)²) = √10
4. (a)
u =
( − 3) + 22 + (− 2)
2
(b)
v =
12 + 32 + 52 =
35
2
=
10. The norm of v is
17
v =
2
2
2
=
u =
12 + ( −2) + 22 + 02 =
9 = 3
(b)
v =
22 + ( −1) + 02 + 22 =
9 = 3
2
2.
1
4
1 

= −
,−
,
.
3
2
3
2
3
2

66
6. (a)
2
2
1
1
v =
(−1, − 4, 1)
v
3 2
u =
(− 4, −1, − 7)
(− 4) + (−1) + (− 7)
=
2
So, a unit vector in the direction of v is
(c) u ⋅ v = − 3(1) + 2(3) + ( − 2)(5) = − 7
(d) d (u, v ) = u − v =
( −1) + (−4) + 12 = 3
12. The norm of v is
v =
02 + 22 + ( −1)
2
=
5.
(c) u ⋅ v = 1( 2) + ( −2)( −1) + 2(0) + (0)( 2) = 4
So, a unit vector in the direction of v is
(d) d (u, v) = u − v
u =
(−1, −1, 2, − 2)
=
(−1) + (−1) + 22 + (−2)
2
=
2
8. (a)
u =
12 + ( −1) + 02 + 12 + 12 =
(b)
v =
0 + 1 + ( −2) + 2 + 1 =
2
2
2
2
2
2
1
2 −1 

,
(0, 2, −1) =  0,
.
5
5
5

14. Solve the equation for c as follows.
=
10
c( 2, 2, −1) = 3
c
4 = 2
2
1
v =
v
c
10
(2, 2, −1) = 3
22 + 22 + ( −1)
2
= 3
c 3 = 3  c = ±1
(c) u ⋅ v = 1(0) + ( −1)(1) + 0( −2) + 1( 2) + 1(1) = 2
(d) d (u, v) = u − v
= (1, − 2, 2, −1, 0)
=
12 + ( −2) + 22 + ( −1)
2
2
=
10
16. The cosine of the angle θ between u and v is given by
cos θ =
u⋅v
=
u v
1(0) + ( −1)(1)
1 + ( −1)
2
2
0 +1
2
2
=
−1
−1
=
2 1
2
 −1  3π
which implies that θ = cos −1 
 = 4 radians (135°).
 2
18. The cosine of the angle θ between u and v is given by
u⋅v
=
cos θ =
u v
π
5π
5π
+ sin sin
6
6
6
6
=
π
π
π
5
2
2
2
2 5π
+ sin
+ sin
cos
cos
6
6
6
6
cos
π
3
3  1 1 
 −
+  
2  2  2  2 
cos
2
2
 3
1

 +  
 2
 2 
2
2

3
1
 −
 +  
 2
 2 
=
1
1
2
= −
2
1⋅ 1
−
2π
 1
radians (120°).
which implies that θ = cos −1  −  =
3
 2
20. The cosine of the angle θ between u and v is given by
cos θ =
u⋅v
=
u v
4+1
=
17 20
24. A vector v = (v1, v2 , v3 , v4 ) that is orthogonal to u must
satisfy the equation u ⋅ v = 0v1 + v2 + 2v3 − v4 = 0.
85
34
This equation has solutions of the form
 85 
which implies that θ = cos −1 
 ≈ 1.18 radians
 34 
(67.7°).
(
numbers.
26. (a)
22. A vector v = (v1, v2 , v3 ) that is orthogonal to u must
satisfy the equation u ⋅ v = v1 − 2v2 + v3 = 0.
)
v = r , s, 12 t − 12 s, t , where r, s, and t are any real
 4
1
u, v = 2(0)  + (3)(1) + 2 ( −3) = 1
 3
 3
(b) d (u, v ) = u − v =
u − v, u − v
2
This equation has solutions of the form
v = ( 2 s − t , s, t ), where s and t are any real numbers.
=
 4
 10 
2 −  + 22 + 2 
 3
3
=
268
2
=
3
3
2
67
28. Verify the Triangle Inequality as follows.
u + v ≤ u + v
8
4
 , 4, −  ≤
3
3
2
1
9 + 2  +
9
 16 
2  + 1 + 18
9
2
 4
 8
2  + 42 + 2 −  ≤ 3.037 + 4.749
3
 
 3
5.812 ≤ 7.786
Verify the Cauchy-Schwarz Inequality as follows.
u, v ≤ u v
(3)(1) + 2 (−3) ≤ (3.037)(4.749)
1
 3
1 ≤ 14.423
30. (a)
1
f , g =  x 4 x 2 dx = x 4  = 1
 0
0
1
(b) The vectors are not orthogonal.
4
1
, verify the
and g =
3
5
Cauchy-Schwarz Inequality as follows
(c) Because
f =
f,g ≤ f
1≤
g
1 4 
≈ 1.0328.
3  5 
34. The projection of u onto v is given by
projv u =
=
u⋅v
v
v⋅v
2(7) + ( −1)(6)
7 2 + 62
(7, 6)
8
(7, 6)
85
 56 48 
=  , .
 85 85 
=
32. The projection of u onto v is given by
u⋅v
v
v⋅v
2(0) + 3( 4)
=
(0, 4)
02 + 42
12
=
(0, 4)
16
= (0, 3).
projvu =
36. The projection of u onto v is given by
projv u =
=
u⋅v
= v
v⋅v
(−1)(4) + 3(0) + 1(5)
4 2 + 0 2 + 52
38. Orthogonalize the vectors in B.
w1 = (3, 4)
(4, 0, 5)
1
=
(4, 0, 5)
41
5
4
=  , 0, .
41 
 41
w 2 = (1, 2) −
11
8 6
(3, 4) =  − , 
25
 25 25 
Then normalize each vector.
1
1
3 4
u1 =
w1 = (3, 4) =  , 
w1
5
5 5
u2 =
1
1  8 6   4 3
w2 =
− ,  = − , 
w2
2 5  25 25   5 5 
 3 4
4 3 
So, an orthonormal basis for R2 is  , ,  − , .
5
5
5
5 




40. Orthogonalize the vectors in B.
w1 = (0, 0, 2)
w 2 = (0, 1, 1) − 24 (0, 0, 2) = (0, 1, 0)
w 3 = (1, 1, 1) − 24 (0, 0, 2) − 11(0, 1, 0) = (1, 0, 0)
Then normalize each vector to obtain the orthonormal basis for R3.
{(0, 0, 1), (0, 1, 0), (1, 0, 0)}.
42. (a) To find x = ( −3, 4, 4) as a linear combination of the vectors in
B = {( −1, 2, 2), (1, 0, 0)} solve the vector equation
c1( −1, 2,2) + c2 (1, 0, 0) = ( −3, 4, 4).
The solution to the corresponding system of equations is c1 = 2 and c2 = −1.
So, [x]B = ( 2, −1), and you can write
(−3, 4, 4) = 2(−1, 2, 2) − (1, 0, 0).
(b) To apply the Gram-Schmidt orthonormalization process, first orthogonalize each vector in B.
w1 = ( −1, 2, 2)
w 2 = (1, 0, 0) −
−1
8 2 2
(−1, 2, 2) =  , , 
9
9 9 9
Then normalize w1 and w2 as follows
u1 =
1
1
 1 2 2
w1 = ( −1, 2, 2) =  − , , 
3
w1
 3 3 3
u2 =
1
1 8 2 2  4
1
1 
,
,
w2 =
 , ,  = 
.
w2
2 2 3 9 9 9   3 2 3 2 3 2 
 1 2 2  4
1
1 
So, B′ =  − , , , 
,
,
.
3
3
3
3
2
3
2
3
2 




(c) The coordinates of x relative to B′ are found by calculating
 1 2 2  19
x, u1 = ( −3, 4, 4) ⋅  − , ,  =
3
 3 3 3
1
1 
−4
 4
x, u 2 = ( −3, 4, 4) ⋅ 
,
,
.
=
3 2 3 2 3 2  3 2
So,
19 1 2 2
4  4
1
1 
,
,
.
(−3, 4, 4) =  − , ,  −
3  3 3 3  3 2  3 2 3 2 3 2 
44. These functions are orthogonal because f , g = 
1
1
0
0
f , g =  f ( x)g ( x) dx = 
46. (a)
(c)
f
= 
f,f
=
f =
1
0

4 1

 −1

(2 x − 2 x3 )dx = x 2 − x2 
−1
1
1
− 4 f , g = − 4 f , g = − 4(0) = 0
= f,f
−1
1 − x 2 2 x 1 − x 2 dx = 
= 0.
( x + 2)(15 x − 8) dx =  0 (15 x 2 + 22 x − 16)dx = 5 x3 + 11x 2 − 16 x 0 = 0
(b)
2
1
( x + 2) dx = 
2
1
1
 x3

19
x 2 + 4 x + 4)dx =  + 2 x 2 + 4 x =
(
0
3
3

0
1
19
3
(d) Because f and g are already orthogonal, you only need to normalize them. You know
f =
19
and so you
3
compute g .
g
2
= g , g =  (15 x − 8) dx =  ( 225 x 2 − 240 x + 64)dx = 75 x 3 − 120 x 2 + 64 x = 19
1
2
0
g =
1
1
0
0
19
So,
u1 =
1
f =
f
u2 =
1
g =
g
1
( x + 2) =
19
3
1
(15 x − 8).
19
3
( x + 2)
19
The orthonormal set is
3   15
 3
B′ = 
x + 2
x −
, 
19
19
  19

8 
.
19 
48. The solution space of the homogeneous system consists of vectors of the form ( −t , s, s, t ), where s and t are any real numbers.
So, a basis for the solution space is B = {( −1, 0, 0, 1), (0, 1, 1, 0)}. Because these vectors are orthogonal, and their length is
2, you normalize them to obtain the orthonormal basis
2
2 
2
2 

, 0, 0,
,
, 0 .
,  0,
 −
2  
2
2

 2
50. u + v
2
+ u − v
2
= (u + v) ⋅ (u + v ) + (u − v) ⋅ (u − v)
= (u ⋅ u + v ⋅ v + 2u ⋅ v ) + (u ⋅ u + v ⋅ v − 2u ⋅ v )
= 2 u
2
+ 2 v
2
52. Use the Triangle Inequality
u + w ≤ u + w with w = v − u
u + w = u + ( v − u) = v ≤ u + v − u
and so, v − u ≤ v − u . By symmetry, you also have u − v ≤ u − v = v − u .
So,
u − v
≤ u − v . To complete the proof, first observe that the Triangle Inequality implies that
u − w ≤ u + −w = u + w . Letting w = u + v, you have
u − w = u − (u + v) = − v = v ≤ u + u + v and so v − u ≤ u + v .
Similarly, u − v ≤ u + v , and
u − v ≤ u + v . In conclusion,
u − v ≤ u± v .
54. Extend the V-basis {(0, 1, 0, 1), (0, 2, 0, 0)} to a basis of R 4 .
B = {(0, 1, 0, 1), (0, 2, 0, 0), (1, 0, 0, 0), (0, 0, 1, 0)}
Now, (1,1,1,1) = (0,1, 0,1) + (1, 0,1, 0) = v + w
where v ∈ V and w is orthogonal to every vector in V.
56. ( x1 + x2 +  + xn ) = ( x1 + x2 +  + xn )( x1 + x2 +  + xn )
2
= ( x1 ,  , xn ) ⋅ ( x1 ,  , xn ) + ( x2 ,  , xn , x1 ) ⋅ ( x1 ,  , xn )
+  + ( xn , x1 ,  , xn −1 ) ⋅ ( x1 ,  , xn )
1
1
≤ ( x12 +  + xn2 ) 2 ( x12 +  + xn2 ) 2
1
1
+ ( x22 +  + xn2 + x12 ) 2 ( x12 +  + xn2 ) 2 + 
1
1
+ ( xn2 + x12 +  + xn2 −1 ) 2 ( x12 +  + xn2 ) 2
= n( x12 +  + xn2 )
58. Let {u1, u2 ,  , un} be a dependent set of vectors, and assume u k is a linear combination of u1 , u 2 ,  , u k − 1 , which are
linearly independent. The Gram-Schmidt process will orthonormalize u1 ,  , u k − 1 , but then u k will be a linear combination of
u1 ,  , u k − 1 .
60. An orthonormal basis for S is
 0   0 

 

− 1   1 

2 ,  2 
 1   1 
 



2   2 
projs v = ( v ⋅ u1 )u1 + ( v ⋅ u 2 )u 2





 0
0




 0
 2  1   2  1 
 
= −
 + −
 =  0.
 −

2 
2 
2  2 

−2
 
 1 
 1 




2

 2
62.
1 −2


 1 1 1 1 1 −1
 4 −2
AT A = 
= 



−
−
2
1
0
1
1
0


−2 6

1 1
2
 
1
1
1
1


 7
 1
AT b = 
  =  
−2 −1 0 1  1
−2
3
1.9
AT Ax = AT b  x =  
0.3
line: y = 0.3x + 1.9
y
3
1
−2
−1
1
2
x
−1
64. Substitute the data points
(6, 15.3), (7, 15.4), (8, 15.1), (9, 15.4), (10, 16.1),
(11, 16.7), (12, 17.9), and (13, 19.3) into the linear
polynomial y = c0 + c1t. You obtain the system of
linear equations
c0 + 6c1 = 15.3
c0 + 7c1 = 15.4
c0 + 8c1 = 15.1
c0 + 9c1 = 15.4
c0 + 10c1 = 16.1
c0 + 11c1 = 16.7
c0 + 12c1 = 17.9
c0 + 13c1 = 19.3.
This produces the least squares problem
At = b
1

1
1

1

1
1

1

1
6
15.3



7
15.4
15.1
8



9 c0 
15.4
  = 
.
10  c1 
16.1
16.7
11



17.9
12



13
19.3
The normal equations are
AT At = AT b
 8 76c0   131.2

  = 

76 764c1  1269.4
and the solution is
c0 
11.2
x =   = 
.
 c1 
0.55
So, the least squares linear equation is
y = 11.2 + 0.55t .
This produces the least squares problem
At = b
1

1
1

1

1
1

1

1
6
7
8
9
10
11
12
13
36
15.3



49
15.4
15.1
64
 c0 


81  
15.4
 c1 = 
.
100  
16.1

c2 
16.7
121



17.9
144



169
19.3
The normal equations are
AT Ax = AT b
76
764c0 
 8
 131.2

 


8056 c1  =  1269.4
 76 764
764 8056 88,292c2 
12,989.2

 


and the solution is
c0 
 22.6 
 


x =  c1  = − 2.01 .
c2 
 0.135
 


The least squares regression quadratic is
y = 22.6 − 2.01t + 0.135t 2 .
2018 (linear):
y = 11.2 + 0.55(18) ≈ 21.1 million
2018 (quadratic):
y = 22.6 − 2.01(18) + 0.135(18) ≈ 30.2 million
2
Because the original data increased from 2006 to 2013,
you expect the production to continue to increase.
Because the predicted value given by the quadratic
polynomial is greater than the actual value for 2013, this
model is more accurate for predicting future petroleum
productions.
Substitute the same data points into the quadratic
polynomial y = c0 + c1t + c2t 2 . You then obtain the
system of linear equations
c0 + 6c1 + 36c2 = 15.3
c0 + 7c1 + 49c2 = 15.4
c0 + 8c1 + 64c2 = 15.1
c0 + 9c1 + 81c2 = 15.4
c0 + 10c1 + 100c2 = 16.1
c0 + 11c1 + 121c2 = 16.7
c0 + 12c1 + 144c2 = 17.9
c0 + 13c1 + 169c2 = 19.3.
66. The cross product is
u × v = det[[i, j, k], [1, −1, 1], [0, 1, 1]] = −2i − j + k = (−2, −1, 1).
Furthermore, u × v is orthogonal to both u and v because
u ⋅ (u × v) = 1(−2) + (−1)(−1) + 1(1) = 0
and
v ⋅ (u × v) = 0(−2) + 1(−1) + 1(1) = 0.

68. The cross product is
u × v = det[[i, j, k], [2, 0, −1], [1, 1, −1]] = i + j + 2k = (1, 1, 2).
Furthermore, u × v is orthogonal to both u and v because
u ⋅ (u × v) = 2(1) + 0(1) + (−1)(2) = 0
and
v ⋅ (u × v) = 1(1) + 1(1) + (−1)(2) = 0.

70. Because v × w = i − j − k = (1, −1, −1), the volume of the parallelepiped is
|u ⋅ (v × w)| = |(1, 2, 1) ⋅ (1, −1, −1)| = |−2| = 2.

72. u ⋅ (v × w) = det[[1, 1, 3], [0, 3, 3], [3, 0, 3]] = −9
Volume = |u ⋅ (v × w)| = |−9| = 9 cubic units

74. Because ‖u × v‖ = ‖u‖ ‖v‖ sin θ, you see that u and v are orthogonal if and only if sin θ = 1, which means ‖u × v‖ = ‖u‖ ‖v‖.

76. (a) The standard basis for P1 is {1, x}. In the interval [0, 2], the Gram-Schmidt orthonormalization process yields the orthonormal basis
{w1, w2} = {1/√2, √(3/2)(x − 1)}.
Because
⟨f, w1⟩ = ∫₀² x³ (1/√2) dx = 4/√2 = 2√2
and
⟨f, w2⟩ = ∫₀² x³ √(3/2)(x − 1) dx = √(3/2) ∫₀² (x⁴ − x³) dx = √(3/2) [x⁵/5 − x⁴/4]₀² = √(3/2)(32/5 − 4) = √(3/2)(12/5),
g is given by
g(x) = ⟨f, w1⟩w1 + ⟨f, w2⟩w2 = (2√2)(1/√2) + √(3/2)(12/5) √(3/2)(x − 1) = 2 + (18/5)(x − 1) = (18/5)x − 8/5.
(b) (The accompanying graph shows f(x) = x³ and g on the interval [0, 2].)
78. (a) The standard basis for P1 is {1, x}. In the interval [0, π] the Gram-Schmidt orthonormalization process yields the orthonormal basis
{w1, w2} = {1/√π, (√3/π^(3/2))(2x − π)}.
Because
⟨f, w1⟩ = ∫₀^π sin x cos x (1/√π) dx = 0
and
⟨f, w2⟩ = ∫₀^π sin x cos x (√3/π^(3/2))(2x − π) dx = −√3/(2π^(1/2)),
g is given by
g(x) = ⟨f, w1⟩w1 + ⟨f, w2⟩w2 = 0(1/√π) + (−√3/(2π^(1/2)))(√3/π^(3/2))(2x − π) = −3x/π² + 3/(2π).
(b) (The accompanying graph shows f(x) = sin x cos x and g on the interval [0, π].)
80. (a) The standard basis for P2 is {1, x, x²}. In the interval [1, 2], the Gram-Schmidt orthonormalization process yields the orthonormal basis
{w1, w2, w3} = {1, 2√3(x − 3/2), 6√5(x² − 3x + 13/6)}.
Because
⟨f, w1⟩ = ∫₁² (1/x) dx = ln 2
⟨f, w2⟩ = ∫₁² (1/x) 2√3(x − 3/2) dx = 2√3 ∫₁² (1 − 3/(2x)) dx = 2√3(1 − (3/2) ln 2)
⟨f, w3⟩ = ∫₁² (1/x) 6√5(x² − 3x + 13/6) dx = 6√5 ∫₁² (x − 3 + 13/(6x)) dx = 6√5((13/6) ln 2 − 3/2),
g is given by
g(x) = ⟨f, w1⟩w1 + ⟨f, w2⟩w2 + ⟨f, w3⟩w3
= ln 2 + 12(1 − (3/2) ln 2)(x − 3/2) + 180((13/6) ln 2 − 3/2)(x² − 3x + 13/6)
≈ 0.3274x² − 1.459x + 2.1175.
(b) (The accompanying graph shows f(x) = 1/x and g on the interval [1, 2].)
82. Find the coefficients as follows.
a0 = (1/π) ∫₋π^π f(x) dx = (1/π) ∫₋π^π x dx = 0
a_j = (1/π) ∫₋π^π x cos(jx) dx = [ (1/(πj²)) cos(jx) + (x/(πj)) sin(jx) ]₋π^π = 0,  j = 1, 2
b_j = (1/π) ∫₋π^π x sin(jx) dx = [ (1/(πj²)) sin(jx) − (x/(πj)) cos(jx) ]₋π^π = −(2/j) cos(πj),  j = 1, 2
So, the approximation is
g(x) = a0/2 + a1 cos x + a2 cos 2x + b1 sin x + b2 sin 2x = 2 sin x − sin 2x.
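The coefficients in Exercise 82 can be verified numerically. The following Python sketch is not part of the printed solution; it uses scipy.integrate.quad to evaluate the integrals.

import numpy as np
from scipy.integrate import quad

f = lambda x: x
a0 = quad(f, -np.pi, np.pi)[0] / np.pi
aj = [quad(lambda x, j=j: f(x) * np.cos(j * x), -np.pi, np.pi)[0] / np.pi for j in (1, 2)]
bj = [quad(lambda x, j=j: f(x) * np.sin(j * x), -np.pi, np.pi)[0] / np.pi for j in (1, 2)]

print(round(a0, 6), [round(v, 6) for v in aj])   # all (numerically) zero
print([round(v, 6) for v in bj])                 # [2.0, -1.0]  ->  g(x) = 2 sin x - sin 2x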
84. (a) True. See note following Theorem 5.17, page 278.
(b) True. See Theorem 5.18, part 3, page 279.
(c) True. See discussion starting on page 285.
Project Solutions for Chapter 5
1  The QR-Factorization

1. (a) A = [[1, 1], [0, 1], [1, 0]] = QR, where
Q = [[.7071, .4082], [0, .8165], [.7071, −.4082]] and R = [[1.4142, 0.7071], [0, 1.2247]].

(b) A = [[1, 0], [0, 0], [1, 1], [1, 2]] = QR, where
Q = [[.5774, −.7071], [0, 0], [.5774, 0], [.5774, .7071]] and R = [[1.7321, 1.7321], [0, 1.4142]].

(c) A = [[1, 0, −1], [1, 2, 0], [1, 2, 0], [1, 0, 0]] = QR, where
Q = [[.5, −.5, −.7071], [.5, .5, 0], [.5, .5, 0], [.5, −.5, .7071]] and R = [[2, 2, −.5], [0, 2, .5], [0, 0, .7071]].

2. The normal equations simplify using A = QR as follows.
AᵀAx = Aᵀb
(QR)ᵀQRx = (QR)ᵀb
RᵀQᵀQRx = RᵀQᵀb
RᵀRx = RᵀQᵀb    (QᵀQ = I)
Rx = Qᵀb
Because R is upper triangular, only back-substitution is needed.
 1 1 .7071 .4082

 
 1.4142 0.7071
3. A = 0 1 = 
0 .8165 
 = QR.
0 1.2247
 1 0 .7071 −.4082 

 

1.4142 0.7071  x1 
Rx = QT b 
 
0 1.2247  x2 

−1
0
.7071  
.7071
−1.4142
= 
  1 = 

.4082 .8165 −.4082  
 0.8165
−1
 x1 
−1.3333
  = 

x
 2
 0.6667
2  Orthogonal Matrices and Change of Basis

1. P⁻¹ = [[−1, 2], [−2, 3]] ≠ Pᵀ, so P is not orthogonal.

2. [[cos θ, −sin θ], [sin θ, cos θ]]⁻¹ = [[cos θ, sin θ], [−sin θ, cos θ]] = [[cos θ, −sin θ], [sin θ, cos θ]]ᵀ

3. If P⁻¹ = Pᵀ, then PᵀP = I, so the columns of P are pairwise orthogonal unit vectors.

4. If P is orthogonal, then P⁻¹ = Pᵀ by the definition of an orthogonal matrix. Then (P⁻¹)⁻¹ = (Pᵀ)⁻¹ = (P⁻¹)ᵀ. The last equality holds because (Aᵀ)⁻¹ = (A⁻¹)ᵀ for any invertible matrix A. So, P⁻¹ is orthogonal.

5. No. For example, [[1, 0], [0, 1]] + [[1, 0], [0, 1]] = [[2, 0], [0, 2]] is not orthogonal. The product of orthogonal matrices is orthogonal: if P⁻¹ = Pᵀ and Q⁻¹ = Qᵀ, then (PQ)⁻¹ = Q⁻¹P⁻¹ = QᵀPᵀ = (PQ)ᵀ.

6. ‖Px‖² = (Px)ᵀ(Px) = xᵀPᵀPx = xᵀx = ‖x‖², so ‖Px‖ = ‖x‖.

7. Let
P = [[−2/√5, 1/√5], [1/√5, 2/√5]]
be the change of basis matrix from B′ to B. Because P is orthogonal, lengths are preserved.
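A quick numerical check of items 6 and 7 (not part of the printed solution): an orthogonal matrix satisfies PᵀP = I and preserves lengths.

import numpy as np

s = 1 / np.sqrt(5)
P = np.array([[-2 * s, s], [s, 2 * s]])           # the change of basis matrix from item 7

print(np.allclose(P.T @ P, np.eye(2)))            # True: P^T P = I, so P is orthogonal
x = np.array([3.0, -4.0])                         # an arbitrary vector
print(np.linalg.norm(x), np.linalg.norm(P @ x))   # equal lengths: ||Px|| = ||x||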
C H A P T E R  6
Linear Transformations
Section 6.1  Introduction to Linear Transformations .......... 207
Section 6.2  The Kernel and Range of a Linear Transformation .......... 212
Section 6.3  Matrices for Linear Transformations .......... 217
Section 6.4  Transition Matrices and Similarity .......... 224
Section 6.5  Applications of Linear Transformations .......... 228
Review Exercises .......... 233
Project Solutions .......... 241
C H A P T E R 6
Linear Transformations
Section 6.1 Introduction to Linear Transformations
2. (a) The image of v is
T (0, 4) = (0, 2( 4) − 0, 4)
= (0, 8, 4).
8. (a) The image of v is
 3

1
T ( 2, 4) = 
(2) − ( 4), 2 − 4, 4 
2
2


(b) If T (v1 , v2 ) = (v1 , 2v2 − v1 , v2 ) = ( 2, 4, 3), then
v1 = 2
=
( 3 − 2, − 2, 4).
 3

1
v1 − v2 , v1 − v2 , v2 
(b) If T (v1 , v2 ) = 
2
 2

2v2 − v1 = 4
v2 = 3
which implies that v1 = 2 and v2 = 3. So, the
preimage of w is ( 2, 3).
4. (a) The image of v is
T ( 2, 3, 0) = (3 − 2, 2 + 3, 2( 2)) = (1, 5, 4).
(b) If T (v1 , v2 , v3 ) = (v2 − v1 , v1 + v2 , 2v1 )
= ( −11, −1, 10),
=
( 3, 2, 0),
then
3
1
v1 − v2 =
2
2
v1 − v2 =
v2 =
3
2
0
which implies that v1 = 2 and v2 = 0. So, the
preimage of w is ( 2, 0).
then
v2 − v1 = −11
v1 + v2 = −1
2v1 = 10
which implies that v1 = 5 and v2 = −6. So, the
10. T is not a linear transformation because it does not
preserve addition nor scalar multiplication.
For example,
T (1, 1) + T (1, 1) = (1, 1) + (1, 1)
= ( 2, 2) ≠ ( 2, 4) = T ( 2, 2).
preimage of w is {(5, − 6, t ) : t is any real number}.
6. (a) The image of v is
T ( 2, 1, 4) = ( 2( 2) + 1, 2 − 1) = (5, 1).
12. T is not a linear transformation because it does not
preserve addition. For example,
T (1, 1, 1) + T (1, 1, 1) = ( 2, 2, 2) + ( 2, 2, 2)
= ( 4, 4, 4)
(b) If T (v1 , v2 , v3 ) = ( 2v1 + v2 , v1 − v2 ) = ( −1, 2), then
≠ (3, 3, 3) = T ( 2, 2, 2).
2v1 + v2 = −1
v1 − v2 =
2,
which implies that v1 = 13 , v2 = − 53 , and v3 = t ,
where t is any real number. So, the preimage of w is
{(
1, − 5, t
3
3
) : t is any real number}.
14. T is not a linear transformation because it does not
preserve addition nor scalar multiplication. For example,
T (1, 1) + T (1, 1) = (1, 1, 1) + (1, 1, 1)
= ( 2, 2, 2) ≠ ( 4, 4, 4) = T ( 2, 2).
16. T preserves addition.
 a1 b1  
 a2 b2  
T ( A1 ) + T ( A2 ) = T  
+ T
 c d  
 c d  
2
  1 1 
 2
= a1 + b1 + c1 + d1 + a2 + b2 + c2 + d 2
= ( a1 + a2 ) + (b1 + b2 ) + (c1 + c2 ) + ( d1 + d 2 ) = T ( A1 + A2 )
T preserves scalar multiplication.
T ( kA) = ka + kb + kc + kd = k ( a + b + c + d ) = kT ( A)
Therefore, T is a linear transformation.
18. T is not a linear transformation. T does not preserve addition.
 a1 b1  
 a2
T ( A1 ) + T ( A2 ) = T  
  + T  
 c1 d1  
 c2
b2  
2
2
2
  = b1 + b2 ≠ (b1 + b2 ) = T ( A1 + A2 )
d 2  
20. Let A and B be two elements of M 3,3 (two 3 × 3 matrices) and let c be a scalar. First
0
0
0
3 0
3 0
3 0






T ( A + B ) = 0 2
0 ( A + B ) = 0 2
0 A + 0 2
0 B = T ( A) + T ( B )
0 0 −10
0 0 −10
0 0 −10






by Theorem 2.3, part 2 and
0
0
3 0
3 0




T (cA) = 0 2
0 (cA) = c 0 2
0 A = cT ( A)
0 0 −10
0 0 −10




by Theorem 2.3, part 4. So, T is a linear transformation.
22. T preserves addition.
T ( a0 + a1 x + a2 x 2 ) + T (b0 + b1x + b2 x 2 ) = ( a1 + 2a2 x) + (b1 + 2b2 x)
= ( a1 + b1 ) + 2( a2 + b2 ) x
= T (( a0 + b0 ) + ( a1 + b1 ) x + ( a2 + b2 ) x 2 )
T preserves scalar multiplication.
(
)
T c( a0 + a1x + a2 x 2 ) = T (ca0 + ca1x + ca2 x 2 ) = ca1 + 2ca2 x = c( a1 + 2a2 x) = cT ( a0 + a1x + a2 x 2 )
Therefore, T is a linear transformation.
24. Because ( 2, 0) = 23 (1, 2) − 43 ( −1, 1), you have
T ( 2, 0) = T  23 (1, 2) − 43 ( −1, 1)
= 23 T (1, 2) − 43 T ( −1, 1)
= 23 (1, 0) − 43 (0, 1)
=
( 23 , − 43 )
Similarly, (0, 3) = (1, 2) + ( −1, 1), which gives
T (0, 3) = T (1, 2) + ( −1, 1)
28. Because ( −2, 4, −1) can be written as
(−2, 4, −1) = −2(1, 0, 0) + 4(0, 1, 0) − 1(0, 0, 1),
you can use Property 4 of Theorem 6.1 to write
T ( −2, 4, −1) = −2T (1, 0, 0) + 4T (0, 1, 0) − T (0, 0, 1)
= −2( 2, 4, −1) + 4(1, 3, − 2) − (0, − 2, 2)
= (0, 6, − 8).
30. Because (0, 2, −1) can be written as
= T (1, 2) + T ( −1, 1)
(0, 2, −1) = 32 (1, 1, 1) − 12 (0, −1, 2) − 23 (1, 0, 1), you can
= (1, 0) + (0, 1)
use Property 4 of Theorem 6.1 to write
= (1, 1).
26. Because ( 2, −1, 0) can be written as
(2, −1, 0) = 2(1, 0, 0) − 1(0, 1, 0) + 0(0, 0, 1),
you can use Property 4 of Theorem 6.1 to write
T ( 2, −1, 0) = 2T (1, 0, 0) − T (0, 1, 0) + 0T (0, 0, 1)
= 2( 2, 4, −1) − (1, 3, − 2) + (0, 0, 0)
= (3, 5, 0).
T (0, 2, −1) = 32 T (1, 1, 1) − 12 T (0, −1, 2) − 32 T (1, 0, 1)
= 32 ( 2, 0, −1) − 12 ( − 3, 2, −1) − 32 (1, 1, 0)
(
)
= 3, − 52 , −1 .
32. Because ( −2, 1, 0) can be written as
(−2, 1, 0) = 2(1, 1, 1) + (0, −1, 2) − 4(1, 0, 1), you can use
Property 4 of Theorem 6.1 to write
T ( −2, 1, 0) = 2T (1, 1, 1) + T (0, −1, 2) − 4T (1, 0, 1)
= 2( 2, 0, −1) + ( − 3, 2, −1) − 4(1, 1, 0)
= ( −3, − 2, − 3).
34. Because the matrix has 2 columns, the dimension of Rⁿ is 2. Because the matrix has 3 rows, the dimension of Rᵐ is 3. So, T: R² → R³.

36. Because the matrix has five columns, the dimension of Rⁿ is 5. Because the matrix has two rows, the dimension of Rᵐ is 2. So, T: R⁵ → R².

38. Because the matrix has five columns, the dimension of Rⁿ is 5. Because the matrix has three rows, the dimension of Rᵐ is 3. So, T: R⁵ → R³.
 1 2
10

 2
 
40. (a) T ( 2, 4) = −2 4   = 12 = (10, 12, 4)
4
−2 2  
 4


 
(b) The preimage of ( −1, 2, 2) is given by solving the equation
 1 2
−1

  v1 
 
T (v1 , v2 ) = −2 4   =  2
v
2


−2 2
 2


 
for v = (v1 , v2 ). The equivalent system of linear equations
v1 + 2v2 = −1
−2v1 + 4v2 =
2
−2v1 + 2v2 =
2
has the solution v1 = −1 and v2 = 0. So, ( −1, 0) is the preimage of ( −1, 2, 2) under T.
(c) Because the system of linear equations represented by the equation
 1 2
1

  v1 

−2 4 v  = 1
2


−2 2
1



has no solution, (1, 1, 1) has no preimage under T.
 1
 
 0
−1 2 1 3 4  
 7
42. (a) T (1, 0, −1, 3, 0) = 
 −1 =   = (7, − 5).
 0 0 2 −1 0
−5
 3
 
 0
 v1 
 
v2 
−
1
2
1
3
4

 
−1
(b) The preimage of ( −1, 8) is determined by solving the equation T (v1 , v2 , v3 , v4 , v5 ) = 
 v3  =  .
 0 0 2 −1 0
 8
v4 
 
v5 
The equivalent system of linear equations has the solution v1 = 5 + 2r + 72 s + 4t , v2 = r , v3 = 4 + 12 s, v4 = s,
and v5 = t , where r, s, and t are any real numbers. So, the preimage is given by the set of vectors
{(5 + 2r + 72 s + 4t, r, 4 + 12 s, s, t ) : r, s, t are real numbers}.
0
 
0 2 0 2 0 1

 
44. (a) T (0, 1, 0, 1, 0) = 1 0 1 0 1 0 = ( 4, 0, 4)
 
1 2 2 2 1 1


 
0
(b) The preimage of (0, 0, 0) is determined by solving the equation as shown.
 v1 
 
0 2 0 2 0 v2 

 
T (v1 , v2 , v3 , v4 , v5 ) = 1 0 1 0 1 v3 = (0, 0, 0)
 
1 2 2 2 1 v 

 4
 
v5 
The equivalent system of linear equations has the solution v1 = − t , v2 = − s, v3 = 0, v4 = s, and v5 = t , where
s and t are any real numbers. So, the preimage is given by the set of vectors {( − t , − s, 0, s, t )}.
(c) The preimage of (1, −1, 2) is determined by solving the equation as shown.
 v1 
 
0 2 0 2 0 v2 


T (v1 , v2 , v3 , v4 , v5 ) = 1 0 1 0 1 v3  = (1, −1, 2)
 
1 2 2 2 1 v 

 4
 
v5 
The equivalent system of linear equations has the solution v1 = − 3 − t , v2 = 12 − s, v3 = 2, v4 = s, and v5 = t ,
where s and t are real numbers. So, the preimage is given by the set of vectors
46. If θ = 45°, then T is given by
T ( x, y ) = ( x cos θ − y sin θ , x sin θ + y cos θ )
 2
= 
x −
 2
2
2
y,
x +
2
2
2 
y .
2 
Solving T ( x, y ) = v = (1, 1), you have
2
x −
2
So, x =
( 2, 0).
2
y =1
2
and
2
x +
2
{(−3 − t, 12 − s, 2, s, t )}.
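Images and preimages such as those in Exercises 40 to 44 can be checked numerically. In this illustrative Python (NumPy) sketch, which is not part of the printed solution, numpy.linalg.lstsq is used only as a convenient solver; a (numerically) zero residual signals that an exact preimage exists.

import numpy as np

# T(v) = Av with the matrix from Exercise 40.
A = np.array([[1.0, 2.0], [-2.0, 4.0], [-2.0, 2.0]])

print(A @ np.array([2.0, 4.0]))                 # image of (2, 4): [10. 12. 4.]

for w in (np.array([-1.0, 2.0, 2.0]), np.array([1.0, 1.0, 1.0])):
    v, res, *_ = np.linalg.lstsq(A, w, rcond=None)
    exact = np.allclose(A @ v, w)
    print(w, np.round(v, 4), "preimage exists" if exact else "no preimage")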
a − b 12
13
48. 
  =  
b
a
5

 
0
You then obtain the following system of equations.
12a − 5b = 13
12b + 5a = 0
2
y = 1.
2
2 and y = 0, and the preimage of v is
Solving the second equation for a gives a =
−12
b.
5
Substituting this back into the first equation produces
 −12 
12
b  − 5b = 13
 5 
−144
b − 5b = 13
5
−169
b = 13
5
−5
b =
.
13
Substituting b =
a =
−5
−12
into a =
b you obtain
13
5
12
.
13
50. If v = (x, y, z) is a vector in R³, then T(v) = (0, y, z). In other words, T maps every vector in R³ to its orthogonal projection in the yz-plane.

52. T is a linear transformation.
T preserves addition.
T ( A + B) = ( A + B) X − X ( A + B)
= AX + BX − XA − XB
= ( AX − XA) + ( BX − XB )
= T ( A) + T ( B )
T preserves scalar multiplication.
T (cA) = (cA) X − X (cA)
= c( AX ) − c( XA)
= c( AX − XA)
= cT ( A)
54. T is not a linear transformation.
Consider A = I n . Then T ( A) = 1, but T ( 2 A) = 2n ≠ 2T ( A).
  1 3 
 1 0
0 1
0 0
0 0
 1 −1
0 2  1 2
3 −1
12 −1
56. T  
  = T 
 + 3T 
 − T
 + 4T 
 = 
 + 3
 − 
 + 4
 = 

0 0
0 0
 1 0
0 1
0 2
 1 1 0 1
1 0
 7 4
 −1 4 
58. This statement is true because Dx is a linear transformation and therefore preserves addition and scalar multiplication.
60. This statement is false because cos
x
1
≠ cos x for all x.
2
2
62. If Dx ( g ( x)) = e x , then g ( x) = e x + C.
64. If Dx ( g ( x )) =
1
, then g ( x) = ln x + C.
x
1
66. Solve the equation  p( x)dx = 1 for p( x) in P2 .
0

2
3 1

x
x
1
1
2
 0 (a0 + a1x + a2 x )dx = 1  a0 x + a1 2 + a2 3  = 1  a0 + 2 a1 + 3 a2 = 1.
1

0
Letting a2 = −3b and a1 = −2a be free variables, a0 = 1 + a + b, and p( x) = (1 + a + b) − 2ax − 3bx 2 .
68. (a) False. This function does not preserve addition nor
scalar multiplication. For example,
f (3x) = 27 x3 ≠ 3 f ( x).
(b) False. If f : R → R is given by f ( x) = ax + b for
some a, b ∈ R, then it preserves addition and scalar
multiplication if and only if b = 0.
70. (a) T ( x, y ) = T  x(1, 0) + y (0, 1)
= xT (1, 0) + yT (0, 1)
= x(0, 1) + y (1, 0) = ( y, x)
(b) T is a reflection about the line y = x.
72. Use the result of Exercise 71(a) as follows.
3 + 4 3 + 4
7 7
T (3, 4) = 
,
 =  , 
2
2


 2 2
7 7
T (T (3, 4)) = T  , 
 2 2
 1  7 7  1  7 7 
7 7
=   + ,  +   =  , 
 2 2
 2  2 2  2  2 2 
T is projection onto the line y = x.
74. To show that T : V → W is a linear transformation,
show that T : V → W preserves addition and scalar
multiplication by using the definition:
(1) T (u + v ) = T (u) + T ( v )
and
(2) T (cu) = cT (u),
where c is any nonzero constant.
76. (a) Because T (0, 0) = ( −h, − k ) ≠ (0, 0), a translation
(c) Because T ( x, y ) = ( x − h, y − k ) = ( x, y ) implies
x − h = x and y − k = y, a translation has no
fixed points.
cannot be a linear transformation.
(b) T (0, 0) = (0 − 2, 0 + 1) = ( −2, 1)
T ( 2, −1) = ( 2 − 2, −1 + 1) = (0, 0)
T (5, 4) = (5 − 2, 4 + 1) = (3, 5)
78. There are many possible examples. For instance, let T : R 3 → R 3 be given by T ( x, y, z ) = (0, 0, 0). Then if {v1 , v 2 , v 3} is
any set of linearly independent vectors, their images T ( v1 ), T ( v 2 ), T ( v 3 ) form a dependent set.
80. Let T be defined by T ( v) = v, v 0 . Then because
T ( v + w ) = v + w , v 0 = v, v 0 + w , v 0 = T ( v ) + T ( w )
and T (cv ) = cv, v 0 = c v, v 0 = cT ( v), T is a linear transformation.
82. Because
T (u + v) = u + v, w1 w1 +  + u + v, w n w n
= ( u, w1 w1 + v, w1 w1 ) +  + ( u, w n w n + v, w n w n )
= ( u, w1 w1 +  + u, w n w n ) + v, w1 w1 +  + v, w n w n = T (u) + T ( v)
and
T (cu) = cu, w1 w1 +  + cu, w n w n = c u, w1 w1 +  + c u, w n w n = c  u, w1 w1 +  + u, w n w n  = cT (u),
T is a linear transformation.
84. Suppose first that T is a linear transformation. Then T ( au + bv) = T ( au) + T (bv) = aT (u) + bT ( v).
Second, suppose T ( au + bv ) = aT (u ) + bT ( v ). Then T (u + v ) = T (1u + 1v ) = T (u) + T ( v)
and T (cu) = T (cu + 0) = cT (u) + T (0) = cT (u).
Section 6.2 The Kernel and Range of a Linear Transformation
2. T : R3 → R3 , T ( x, y, z ) = ( x, 0, z )
8. T : P3 → P2 ,
The kernel consists of all vectors lying on the y-axis.
That is, ker(T ) = {(0, y, 0) : y is a real number}.
4. T : R3 → R3 , T ( x, y, z ) = ( − z, − y, − x)
Solving the equation
T ( x, y, z ) = ( − z , − y, − x) = (0, 0, 0) yields that trivial
solution x = y = z = 0. So, ker(T ) = {(0, 0, 0)}.
(
)
6. T : P2 → R, T a0 + a1 x + a2 x 2 = a0
(
)
Solving the equation T a0 + a1 x + a2 x 2 = a0 = 0
yields solutions of the form a0 = 0 and a1 and a2 are
any real numbers. So,
ker(T ) = a1 x + a2 x 2 : a1 , a2 ∈ R .
{
T ( a0 + a1x + a2 x 2 + a3 x3 ) = a1 + 2a2 x + 3a3 x 2
Solving the equation
T ( a0 + a1 + a2 x 2 + a3 x3 ) = a1 + 2a2 x + 3a3 x 2 = 0
yields solutions of the form a1 = a2 = a3 = 0 and a0
any real number. So, ker (T ) = {a0 : a0 ∈ R}.
10. T : R 2 → R 2 , T ( x, y ) = ( x − y, y − x)
Solving the equation
T ( x, y ) = ( x − y, y − x) = (0, 0) yields solutions of
the form x = y. So, ker (T ) = {( x, x) : x ∈ R}.
}
14. (a) Because
 1 2  x1 
0
12. (a) 
  =  
3
6
x
−
−

  2
0

x1 + 2 x2 = 0
− 3x1 − 6 x2 = 0

 x1 
 1 −2 1  
0
T (x) = 
  x2  =  
0
2
1

 
0
 x3 
x1 + 2 x2 = 0
0 = 0
(
)
Using the parameter t = x2 produces the family of
solutions
has solutions of the form −2t , − 12 t , t where t is any
 x1 
− 2t 
− 2
  =   = t  .
 x2 
 t
 1
ker (T ) = t − 2, − 12 , 1 : t is a real number
real number,
{(
(b) Transpose A and find the equivalent row-echelon
form.
{(
)}
}
= span −2, − 12 , 1 .
So, ker (T ) = {t ( − 2, 1): t is a real number}
= span{( − 2, 1)}.
)
{(−2, − , 1)}.
1
2
(b) Transpose A and find the equivalent reduced
row-echelon form
1 − 3
1 − 3
A = 
  

2 − 6
0 0 
T
A
T
So, range(T ) = {t (1, − 3): t is a real number}.
= span{(1, − 3)}.
 1 0


= −2 2
 1 1



 1 0


0 1
0 0


So, range (T) is {(1, 0), (0, 1)} = R 2 .
16. (a) Because
 1 1
0

  x1 
T ( x) = −1 2   =  
0
 0 1  x2 


has only the trivial solution x1 = x2 = 0, ker(T ) = {(0, 0)}.
(b) Transpose A and find the equivalent reduced row-echelon form.
1 −1 0
AT = 

1 2 1

1 0

0 1

1
3

1
3
{(
)(
)}
So, range(T) = span 1, 0, 13 , 0, 1, 13 .
18. (a) Because
 x1 
 
−1 3 2 1 4  x2 
0


 
T ( x) =  2 3 5 0 0  x3  = 0
 
 2 1 2 1 0  x 
0

 4
 
 
 x5 
has solutions of the form ( −10 s − 4t , −15s − 24t , 13s + 16t , 9s, 9t ),
ker(T ) = span{( −10, −15, 13, 9, 0), ( −4, − 24, 16, 0, 9)}.
(b) Transpose A and find the equivalent reduced row-echelon form.
−1

 3
AT =  2

 1

 4
2 2

3 1
5 2

0 1

0 0

1

0
0

0

0
0 0

1 0
0 1

0 0

0 0
So, range (T) = span{(1, 0, 0), (0, 1, 0), (0, 0, 1)} = R3.
20. (a) The kernel of T is given by the solution to the
equation T ( x) = 0. So,
24. (a) The kernel of T is given by the solution to the
equation T ( x) = 0. So,
ker(T ) = {(5t , t ) : t ∈ R}.
ker(T ) = {( 2t , − 3t ) : t is any real number}.
(b) nullity(T ) = dim(ker (T )) = 1
(b) nullity(T ) = dim( ker(T )) = 1
(c) Transpose A and find its equivalent row-echelon
form.
(c) Transpose T and find the equivalent reduced
row-echelon form.
3 −9
A = 

2 −6
T

 1
AT =  26
− 5
 26
 1 −3


0 0
(d) rank(T ) = dim( range(T )) = 1
(d) rank(T ) = dim( range(T )) = 1
26. (a) The kernel of T is given by the solution to the
x = (0, 0), the kernel of T is {(0, 0)}.
(b) nullity(T ) = dim( ker(T )) = 0
(c) Transpose A and find the equivalent row-echelon
form.
4 0 2
A = 

 1 0 −3
T

 1 −5


0 0

So, range(T ) = {(t , − 5t ) : t ∈ R}.
So, range(T ) = {(t , − 3t ) : t is any real number}.
22. (a) Because T ( x) = 0 has only the trivial solution
5
− 26

25 
26 
 1 0 0


0 0 1
So, range(T ) = {(t , 0, s ) : s, t ∈ R}.
(d) rank(T ) = dim( range(T )) = 2
equation T ( x) = 0. So,
ker(T ) = {(0, t , 0) : t ∈ R}.
(b) nullity(T ) = dim(ker (T )) = 1
(c) Transpose A and find its equivalent row-echelon
form.
T
A
 1 0 0


= 0 0 0
0 0 1



 1 0 0


0 0 1
0 0 0


So, range(T ) = {(t , 0, s ) : s, t ∈ R}.
(d) rank(T ) = dim(range(T )) = 2
28. (a) The kernel of T is given by the solution to the
equation T ( x) = 0. So,
ker(T ) = {( −t , 0, t ): t is any real number}.
(b) nullity(T ) = dim( ker(T )) = 1
(c) Transpose A and find its equivalent reduced row-echelon form.
− 1 2 − 1 
1 0 1
3
 23 31



T
2

A =  3 3
0 1 0
3
− 1 2 − 1 
0 0 0


3
 3 3
So, range(T ) = {( s, 0, s ), (0, t , 0): s and t are any real numbers}.
(d) rank(T ) = dim( range(T )) = 2
30. (a) The kernel of T is given by the solution to the
equation T ( x) = 0. So,
ker(T ) = {(t , − t , s, − s ) : s, t ∈ R}.
(b) nullity(T ) = dim( ker(T )) = 2
(c) Transpose A and find its equivalent row-echelon form.
 1 0
 1 0




1
0
  0 1
AT = 
0 1
0 0




0 1
0 0
So, range(T ) = R 2 .
(d) rank(T ) = dim( range(T )) = 2
32. (a) The kernel of T is given by the solution to the
equation T ( x) = 0. So,
ker(T ) = {(−t − s − 2r , 6t − 2s, r , s, t ) : r , s, t ∈ R}.
(b) nullity(T ) = dim( ker(T )) = 3
(c) Transpose A and find its equivalent row-echelon form.
4 2
 3
17 0 18




3 −3
−2
 0 17 −5
AT =  6
  0 0 0
8 4




 −1 10 −4
 0 0 0




 15 −14 20
 0 0 0
34. Because rank (T ) + nullity(T ) = 3, and you are given
rank (T ) = 1, then nullity(T ) = 2. So, the kernel of T is
a plane, and the range is a line.
36. Because rank (T ) + nullity(T ) = 3, and you are given
rank (T ) = 3, then nullity(T ) = 0. So, the kernel of T is
the single point {(0, 0, 0)}, and the range is all of R 3 .
38. The kernel of T is determined by solving
T ( x, y, z ) = ( − x, y, z ) = (0, 0, 0), which implies that
the kernel is the single point {(0, 0, 0)}. From the
equation rank (T ) + nullity(T ) = 3, you see that the rank
of T is 3. So, the range of T is all of R 3 .
40. The kernel of T is determined by solving
T ( x, y, z ) = ( x, y, 0) = (0, 0, 0), which implies that
x = y = 0. So, the nullity of T is 1, and the kernel is a
line (the z-axis). The range of T is found by observing
that rank (T ) + nullity(T ) = 3. That is, the range of T is
2-dimensional, the xy-plane in R 3 .
So, range(T ) = {(17 s, 17t , 18s − 5t ) : s, t ∈ R}.
(d) rank(T ) = dim( range(T )) = 2
42. rank(T ) + nullity(T ) = dim R 4  nullity(T ) = 4 − 0 = 4
44. rank (T ) + nullity(T ) = dim P3  nullity(T ) = 4 − 2 = 2
46. rank (T ) + nullity(T ) = dim M 3, 3  nullity(T ) = 9 − 6 = 3
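The rank-nullity bookkeeping used in Exercises 42 to 54 is easy to verify numerically. The following Python (NumPy) sketch is not part of the printed solution; it uses the matrix of Exercise 52 below as an example.

import numpy as np

def rank_and_nullity(A):
    A = np.atleast_2d(A)
    rank = np.linalg.matrix_rank(A)
    return rank, A.shape[1] - rank      # rank + nullity = number of columns

A = np.array([[1, -1], [-1, 1]])        # matrix of T(x, y) = (x - y, y - x)
print(rank_and_nullity(A))              # (1, 1): T is neither one-to-one nor onto R^2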
48. Because A = −1 ≠ 0, the homogeneous equation Ax = 0 has only the trivial solution. So, ker (T ) = {(0, 0)} and T is
( )
( )
one-to-one (by Theorem 6.6). Furthermore, because rank(T ) = dim R 2 − nullity(T ) = 2 − 0 = 2 = dim R 2 ,
T is onto (by Theorem 6.7).
50. Because A = −24 ≠ 0, the homogeneous equation Ax = 0 has only the trivial solution. So, ker(T ) = {(0, 0, 0)} and T is
( )
one-to-one (by Theorem 6.6). Furthermore, because rank(T ) = dim R3 − nullity(T ) = 3 − 0 = 3 = dim R3 ,
T is onto (by Theorem 6.7).
 1 −1
52. The matrix representation of T : R 2 → R 2 is given by A = 
.
−1 1
 1 −1
The matrix in row-echelon form is A = 
.
0 0
So, you have the following.
dim(domain ) = 2, rank (T ) = 1, nullity(T ) = 1
Because the rank of T is not equal to the dimension of R 2 , T is not onto. Because ker (T ) ≠ {0}, T is not one-to one.
1 0 0
−1 3 2 1 4



54. A =  2 3 5 0 0  0 1 0

 2 1 2 1 0



0 0 1
4 
9 
8 
3 
10
9
5
3
−13
9
−16 
9 

So, you have the following.
dim(domain ) = 5, rank(T ) = 3, nullity(T ) = 2
Because the rank of T is equal to the dimension of R 3 , T is onto. Because ker(T ) ≠ {0}, T is not one-to-one.
56. The vector spaces isomorphic to R 6 are those whose dimension is six. That is, (a) M 2,3 (d) M 6,1 (e) P5 and
(g) {( x1 , x2 , x3 , 0, x5 , x6 , x7 ) : xi ∈ R} are isomorphic to R 6 .
1
1
0
0
58. Solve the equation T ( p) =  p( x)dx = 
(a0 + a1x + a2 x 2 )dx = 0 yielding a0 + a1 2 + a2 3 = 0.
Letting a2 = −3b, a1 = −2a, you have a0 = − a1 2 − a2 3 = a + b, and ker(T ) =
 1 − 2 1 0
 1 0 5 0




60. A = 0
1 2 3  0 1 2 0
0
0 0 0 1
0 0 1



( )
(b) dim( R3 ) = 3
(a) dim R 4 = 4
(c) x1 + 5 x3 = 0 → x1 = − 5 x3
x2 + 2 x3 = 0 → x2 = − 2 x3
x4 = 0
So, ker(T ) = {( − 5 x3 , − 2 x3 , x3 , 0)} and
dim(ker(T )) = 1.
(d) T is not one-to-one since the ker (T ) ≠ {0}.
(e) rank (T ) = 3
= dim( R 3 )
So, T is onto by Theorem 6.7.
{(a + b) − 2ax − 3bx2 : a, b ∈ R}.
62. If T is onto, then m ≥ n.
If T is one-to-one, then m ≤ n.
64. Theorem 6.9 tells you that if M m ,n and M j ,k are of the
same dimension then they are isomorphic. So, you can
conclude that mn = jk .
66. (a) False. A concept of a dimension of a linear
transformation does not exist.
(b) True. See discussion on page 315 before
Theorem 6.6.
( )
(c) True. Because dim( P1 ) = dim R 2 = 2 and any
two vector spaces of equal finite dimension are
isomorphic (Theorem 6.9 on page 317).
68. From Theorem 6.5,
rank(T ) + nullity(T ) = n = dimension of V. T is
one-to-one if and only if nullity(T ) = 0 if and only if
rank (T ) = dimension of V .
(f) T is not an isomorphism since it is not one-to-one.
70. T −1 (U ) is nonempty because T (0) = 0 ∈ U  0 ∈ T −1 (U ).
Let v1, v 2 ∈ T −1 (U )  T ( v1 ) ∈ U and T ( v 2 ) ∈ U . Because U is a subspace of W ,
T ( v1 ) + T ( v 2 ) = T ( v1 + v 2 ) ∈ U  v1 + v 2 ∈ T −1 (U ).
Let v ∈ T −1 (U ) and c ∈ R  T ( v) ∈ U . Because U is a subspace of W, cT ( v) = T (cv) ∈ U  cv ∈ T −1 (U ).
If U = {0}, then T −1 (U ) is the kernel of T.
Section 6.3 Matrices for Linear Transformations
2. Because
8. Because
 2
  1 
 
T     =  1
 0 
 
−4
 
−3
 0 
 
T     =  1 ,
 1
 
 1
 
and
 2 −3


the standard matrix for T is A =  1 −1.
−4
1

4. Because
5
  1 
 
T     = 0
 0 
 
 
4
 1
 0 
 
T     =  0 ,
 1
 
 
−5
and
the standard matrix for T is
1
5


0 0.
4 −5


 1
 
1
  1 
T   =  
 0 
2
 
 
0
and
 1
 
 0 
−1
T     =  ,
 1
 0
 
 
 2
the standard matrix for T is
 1 1


1 −1
A = 
.
2 0


0 2
 1 1
 0


 
−
1
1
3
   =  6
So, T ( v ) = 
2 0 −3
 6

 
 
0 2
−6
and T (3, − 3) = (0, 6, 6, − 6).
10. T ( x1 , x2 , x3 , x4 ) = ( x1 − x3 , x2 − x4 , x3 − x1 , x2 + x4 )
6. Because
  1 
0
 
 
0
 0 
T     =  ,
0
0
 
 
 0 
0
 
 0 
0
 
 
0
 1
T     =  ,
0
0
 
 
 0 
0
 
 0 
0
 
 
0
 0 
T     =  ,
 1
0
 
 
 0 
0
 
 0 
0
 
 
0
 0 
and T     =  ,
0
0
 
 
  1 
0
 
The standard matrix is
 1

0
A = 
−1

 0
0 −1
0

0 −1
.
1 0

0 1
1
0
1
The image of v is
0

0
the standard matrix for T is A = 
0

0
0 0 0

0 0 0
.
0 0 0

0 0 0
 1

0
Av = 
−1

 0
0 −1
1
0
1
0  1
− 2
 
 
0 −1  2
4
=  .
 2
1 0  3
 
 
0 1 − 2
 0
So, T ( v ) = ( − 2, 4, 2, 0).
0 1
12. (a) The matrix of a reflection in the line y = x, T ( x, y ) = ( y, x), is given by A = T (1, 0) T (0, 1) = 
.
 1 0
(b) The image of v = (3, 4) is given by
0 1 3
4
Av = 
   =  .
1
0
4

 
3
So, T (3, 4) = ( 4, 3).
y
(c)
(3, 4)
4
3
(4, 3)
v
2
T(v)
1
1
2
3
4
x
14. (a) The matrix of a reflection in the x-axis, T ( x, y ) = ( x, − y ), is given by
 1 0
A = T (1, 0) T (0, 1) = 
.
0 −1
(b) The image of v = ( 4, −1) is given by
 1 0  4
4
Av = 
   =  .
0 −1 −1
 1
So, T ( 4, −1) = ( 4, 1).
(c)
y
3
2
1
T(v)
−1
1 v
−2
(4, 1)
5
x
(4, − 1)
−3
16. (a) The counterclockwise rotation of 120° is given by
T ( x, y ) = (cos(120) x − sin (120) y, sin (120) x + cos(120) y )
 1
3
3
1 
=  − x −
y,
x − y .
2
2
2 
 2
So, the matrix is
 1
−
2
A = T (1, 0) T (0, 1) = 
 3

 2
−
3

2 
.
1
− 
2
(b) The image of v = ( 2, 2) is given by
 1
−
2
Av = 
 3

 2
−
3

−1 − 3 
2  2
.
  = 

2
3 − 1

1  

− 
2
(
So, T ( 2, 2) = −1 −
3,
)
3 − 1.
y
(c)
5
4
3
2
120°
T(v)
−3 −2 −1
v
1
2
3
x
18. (a) The clockwise rotation of 30° is given by
T ( x, y ) = (cos( −30) x − sin ( −30) y, sin ( −30) x + cos( −30) y )
 3
1
1
3 
x + y, − x +
y .
= 
2
2
2
2


So, the matrix is
 3

2
A = T (1, 0) T (0, 1) = 
 1
−
 2
1

2
.
3

2 
(b) The image of v = ( 2, 1) is given by
 3

2
Av = 
 1
−
 2
1

1
 2
 3 + 2
2
.
  = 

3
3   1
1
−
+



2 

2 

So, T ( 2, 1) = 

3 +
1
3
, −1 +
.
2
2 
y
(c)
2
1
v
30°
3
T(v)
x
−1
20. (a) The matrix of a reflection through the yz-coordinate plane is given by
−1 0 0


A = T (1, 0, 0) T (0, 1, 0) T (0, 0, 1) =  0 1 0.
 0 0 1


(b) The image of v = ( 2, 3, 4) is given by
−1 0 0 2
−2

 
 
Av =  0 1 0 3 =  3.
 0 0 1 4
 4

 
 
So, T ( 2, 3, 4) = ( −2, 3, 4).
z
(c)
(− 2, 3, 4)
4
3
T(v)
(2, 3, 4) v
1
x
1
y
22. (a) The reflection of a vector v through w is given by
T ( v ) = 2 projw v − v
3x + y
(3, 1) − ( x, y )
10
3 3
4 
4
=  x + y , x − y .
5
5
5
5


T ( x, y ) = 2
The standard matrix for T is
4
5
A = T (1, 0) T (0, 1) = 
3
 5
3
5
.
4
− 
5
(b) The image of v = (1, 4) is
3
4
 16 
5
 5
5   1
  = 
.
Av = 
4  4
3
 13 
−
−
 5
 5 
5 
 16 13 
So, T (1, 4) =  , − .
5
5
y
(c)
4
3
2
1
(1, 4)
−2
−3
3 4 5
T(v)
 1

−1
A = 
 0

 1
0

1 0 0
.
0 2 −1

0 0 0
2 0
(b) The image of v = (0, 1, −1, 1) is
 1

−1
Av = 
 0

 1
0  0
 2
 
 
1 0 0  1
1
=  .
−3
0 2 −1 −1
 
 
0 0 0  1
 0
2 0
So, T (0, 1, −1, 1) = ( 2, 1, − 3, 0).
(c) Using a graphing utility or a computer software
program to perform the multiplication in part (b)
gives the same result.
28. The standard matrices for T1 and T2 are
 1 0 0


A1 = 0 1 0
0 0 1


and
0 0 0


A2 =  1 0 0.
0 0 0


The standard matrix for T = T2
v
−1
26. (a) The standard matrix for T is
x
(165 , − 135 (
24. (a) The standard matrix for T is
2 − 3
1


A = 3 − 5
0.
0
1 − 3

(b) The image of v = (3, 13, 4) is
 1 2 − 3  3
 17

 


Av =  3 − 5
0 13 = − 56.
0
 1
1 − 3  4



So, T (3, 13, 4) is (17, − 56, 1).
(c) Using a graphing utility or a computer software
program to perform the multiplication in part (b)
gives the same results.
T1 is
0 0 0  1 0 0
0 0 0





A = A2 A1 =  1 0 0 0 1 0 =  1 0 0 = A2
0 0 0 0 0 1
0 0 0





and the standard matrix for T ′ = T1
T2 is
 1 0 0 0 0 0 0 0 0


 

A′ = A1 A2 = 0 1 0  1 0 0 =  1 0 0 = A2 .
0 0 1 0 0 0 0 0 0


 

30. The standard matrices for T1 and T2 are
 1 0


A1 = 0 1
0 1


and
0 1 0
A2 = 
.
0 0 1
The standard matrix for T = T2
T1 is
 1 0
0 1 0 
0 1

A2 A1 = 
 0 1 = 

0
0
1


0 1

0
1


and the standard matrix for T ′ = T1
T2 is
 1 0
0 1 0

 0 1 0


A1 A2 = 0 1 
=

0 0 1.
0
0
1


0 1
0 0 1




32. The standard matrix for T is
A = [[1, −2, 0, 0], [0, 1, 0, 0], [0, 0, 1, 1], [0, 0, 1, 0]].
Because |A| = −1 ≠ 0, A is invertible. Calculate A⁻¹ by Gauss-Jordan elimination,
A⁻¹ = [[1, 2, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, −1]],
and conclude that
T⁻¹(x1, x2, x3, x4) = (x1 + 2x2, x2, x4, x3 − x4).

34. The standard matrix for T is
A = [[1, 1], [1, −1]].
Because |A| = −2 ≠ 0, A is invertible, and
A⁻¹ = −(1/2)[[−1, −1], [−1, 1]] = [[1/2, 1/2], [1/2, −1/2]].
So, T⁻¹(x, y) = ((1/2)x + (1/2)y, (1/2)x − (1/2)y).

36. The standard matrix for T is
A = [[2, 0], [0, 0]].
Because |A| = 0, A is not invertible, and so T is not invertible.
38. (a) The standard matrix for T is
 1 −1 0
A′ = 

0 1 −1
and the image of v under T is
2
 1 −1 0  
− 2
′
Av = 
 4 =  .
0
1
−
1

 
− 2
6
So, T ( v) = ( − 2, − 2).
(b) The image of each vector in B is as follows.
T (1, 1, 1) = (0, 0) = 0(1, 1) + 0( 2, 1)
T (1, 1, 0) = (0, 1) = −1(1, 1) + (1, 2)
T (0, 1, 1) = ( −1, 0) = − 2(1, 1) + (1, 2)
− 2
0
−1
So, T (1, 1, 1) B ′ =  , T (1, 1, 0) B ′ =  , and T (0, 1, 1) B′ =   ,
0
1
 
 
 1
0 −1 − 2
which implies that A = 
.
1
0 1
 2
 2
0 −1 − 2  
− 2
 
Then, because [ v]B = −1 , T ( v ) B′ = A[ v]B = 
 −1 =  .
1  
0 1
 0
 1
 
 1
So, T ( v ) = − 2(1, 1) + 0(1, 2) = ( − 2, 2).
40. (a) The standard matrix for T is
 1 1 1 1
A′ = 

−1 0 0 1
and the image of v = ( 4, − 3, 1, 1) under T is
 4
 
 1 1 1 1 −3
 3
A′v = 
   =    T ( v) = (3, − 3).
−
1
0
0
1
1

 
−3
 1
(b) Because
T (1, 0, 0, 1) = ( 2, 0) = 0(1, 1) + ( 2, 0)
T (0, 1, 0, 1) = ( 2, 1)
= (1, 1) + 12 ( 2, 0)
T (1, 0, 1, 0) = ( 2, −1) = −(1, 1) + 32 ( 2, 0)
T (1, 1, 0, 0) = ( 2, −1) = −(1, 1) + 32 ( 2, 0),
0 1 −1 −1
The matrix for T relative to B and B′ is A =  1
3
3 .
2
2
 1 2

Because v = (4, − 3, 1, 1) = 72 (1, 0, 0, 1) − 52 (0, 1, 0, 1) + (1, 0, 1, 0) − 12 (1, 1, 0, 0), you have
 7
 2
0 1 −1 −1 − 52 
−3
T ( v ) B′ = A[v]B =  1
 =  3.
3
3 
1
1

 
 
2
2
2
− 1 
 2
So, T ( v ) = −3(1, 1) + 3( 2, 0) = (3, − 3).
3 −13
42. (a) The standard matrix for T is A′ = 
 and the image of v = ( 4, 8) under T is
1 − 4
3 −13 4
− 92
A′v = 
  = 
  T ( v ) = ( − 92, − 28).
1 − 4 8
− 28
(b) Because
T ( 2, 1) = ( − 7, − 2) = −( 2, 1) − (5, 1)
T (5, 1) = ( 2, 1)
−1 0
the matrix for T relative to B and B′ is A = 
.
−1 1
−1 0  12
−12
Because v = ( 4, 8) = 12( 2, 1) − 4(5, 1), you have T ( v ) B′ = A[v]B = 
  = 
.
−
−
1
1
4

 
−16
So, T ( v ) = −12( 2, 1) − 16(5, 1) = ( − 92, − 28).
( )
44. The image of each vector in B is T (1) = x 2 , T ( x) = x3 , T x 2 = x 4 .
0

0
So, the matrix of T relative to B and B′ is A =  1

0

0
0 0

0 0
0 0.

1 0

0 1
46. The image of each vector in B is as follows.
D ( e 2 x ) = 2e 2 x
D( xe 2 x ) = e 2 x + 2 xe 2 x
D( x 2e 2 x ) = 2 xe 2 x + 2 x 2e2 x
2 1 0


So, the matrix of T relative to B is A = 0 2 2.
0 0 2


( )
(
)
(
)
48. Because 5e2 x − 3xe2 x + x 2e2 x = 5 e2 x − 3 xe2 x + 1 x 2e2 x ,
2 1 0  5
 7

 
 
A[ v]B = 0 2 2 −3 = −4  Dx (5e 2 x − 3 xe 2 x + x 2e 2 x ) = 7e 2 x − 4 xe 2 x + 2 x 2e 2 x .
0 0 2  1
 2

 
 
50. (a) Let T : R n → R m be a linear transformation such that, for the standard basis vectors ei of R n ,
 a11 
 a12 
 a1n 
 
 
 
a21
a22
a2 n
T (e1 ) =  , T (e 2 ) =  ,  , T (en ) =  .
 
 
 
 
 
 
am1 
am 2 
amn 
Then the m × n matrix whose n columns correspond to T (ei )
 a11 a12  a1n 


a21 a22  a2 n 
A = 




am1 am 2  amn 
is such that T ( v ) = Av for every v in R n . A is called the standard matrix of T.
(b) Let T1 : R n → R m and T2 : R m → R p be linear transformations with standard matrices A1 and A2 , respectively. The
composition T : R n → R p , defined by T ( v) = T2 (T1 ( v)), is a linear transformation. Moreover, the standard matrix A for
T is given by the matrix product A = A2 A1.
(c) To find the inverse of a linear transformation T, first find the standard matrix A of T. Then find the inverse of A using the
techniques shown in Section 2.3.
(d) To find the transformation matrix relative to nonstandard basis, first find T ( v1 ), T ( v 2 ),, T ( v n ). Then determine the
coordinate matrices relative to B′. Finally, form the matrix T relative to B and B′ by using the coordinate matrices as
a11 a12  a1n 


a21 a22  a2 n 
columns to produce A = 




an1 am 2  amn 
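Part (b) can be illustrated numerically. The following Python (NumPy) sketch is not part of the printed solution; it uses the standard matrices from Exercise 30 to show that the composition T2 ∘ T1 has standard matrix A2A1, and that the order of composition matters.

import numpy as np

A1 = np.array([[1, 0], [0, 1], [0, 1]], dtype=float)   # T1: R^2 -> R^3
A2 = np.array([[0, 1, 0], [0, 0, 1]], dtype=float)     # T2: R^3 -> R^2

A = A2 @ A1      # standard matrix of T = T2 ∘ T1  (2 x 2)
Ap = A1 @ A2     # standard matrix of T' = T1 ∘ T2  (3 x 3)
print(A)         # [[0. 1.] [0. 1.]]
print(Ap)        # [[0. 1. 0.] [0. 0. 1.] [0. 0. 1.]]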
52. Because T ( v) = kv for all v ∈ R n , the standard matrix
for T is the n × n diagonal matrix
k 0  0


0 k
.

k 0


0  0 k 
54. (a) True. See discussion, under “Composition of Linear
Transformations,” pages 323–324.
(b) False. See Example 3, page 324.
56. (1  2): Let T be invertible. If T ( v1 ) = T ( v 2 ), then T −1 (T ( v1 )) = T −1 (T ( v 2 )) and v1 = v 2 , so T is one-to-one. T is onto
because for any w ∈ R n , T −1( w) = v satisfies T ( v) = w.
(2  1): Let T be an isomorphism. Define T −1 as follows: Because T is onto, for any w ∈ R n , there exists v ∈ R n such
that T ( v) = w. Because T is one-to-one, this v is unique. So, define the inverse of T by T −1 ( w ) = v if and only if T ( v ) = w.
Finally, the corollaries to Theorems 6.3 and 6.4 show that 2 and 3 are equivalent.
If T is invertible, T ( x) = Ax implies that T −1(T ( x)) = x = A−1 ( Ax) and the standard matrix of T −1 is A−1.
58. b is in the range of the linear transformation T : R n → R m given by T ( x) = Ax if and only if b is in the column space of A.
Section 6.4 Transition Matrices and Similarity
2. The standard matrix for T is A = [[2, 1], [1, −2]]. Furthermore, the transition matrix P from B′ to the standard basis B, and its inverse, are
P = [[1, 0], [2, 4]] and P⁻¹ = [[1, 0], [−1/2, 1/4]].
Therefore, the matrix for T relative to B′ is
A′ = P⁻¹AP = [[1, 0], [−1/2, 1/4]] [[2, 1], [1, −2]] [[1, 0], [2, 4]] = [[4, 4], [−11/4, −4]].

4. The standard matrix for T is A = [[1, −2], [4, 0]]. Furthermore, the transition matrix P from B′ to the standard basis B, and its inverse, are
P = [[−2, −1], [1, 1]] and P⁻¹ = [[−1, −1], [1, 2]].
Therefore, the matrix for T relative to B′ is
A′ = P⁻¹AP = [[−1, −1], [1, 2]] [[1, −2], [4, 0]] [[−2, −1], [1, 1]] = [[12, 7], [−20, −11]].
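The change of basis in Exercise 4 can be verified numerically. The following Python (NumPy) sketch is not part of the printed solution; it also checks that similar matrices share the same trace and determinant.

import numpy as np

A = np.array([[1.0, -2.0], [4.0, 0.0]])      # standard matrix of T
P = np.array([[-2.0, -1.0], [1.0, 1.0]])     # transition matrix from B' to B

Aprime = np.linalg.inv(P) @ A @ P            # matrix of T relative to B'
print(Aprime)                                # [[ 12.   7.] [-20. -11.]]
print(np.trace(A), np.trace(Aprime))         # equal traces
print(np.linalg.det(A), np.linalg.det(Aprime))   # equal determinants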
5 4
6. The standard matrix for T is A = 
. Furthermore, the transition matrix P from B′ to the standard basis B, and its
4 5
− 12
13
 12
−1
25
inverse, are P = 
 and P =  13
 25
−13 −12
− 12
25
A′ = P −1 AP =  13
 25

− 13
25
. Therefore, the matrix for T relative to B′ is
12
25 

 5 4  12
− 13
13
 5 − 4
25 
= 

.
12 4 5 −13 −12
5



− 4
25 
0 0 0


8. The standard matrix for T is A = 0 0 0. Furthermore, the transition matrix P from B′ to the standard basis B, and its
0 0 0


 1 1 0
 1 1 −1




−1
1
inverse, are P =  1 0 1 and P = 2  1 −1 1. Therefore, the matrix for T relative to B′ is
0 1 1
−1 1 1




0 0 0


A′ = P AP = 0 0 0.
0 0 0


−1
−1 0 0


10. The standard matrix for T is A =  1 −1 0. Furthermore, the transition matrix P from B′ to the standard basis B, and
 0 1 −1


 3 −1
 0 − 2 1
5
 5


−1
2
0 3 and P = − 52 15
its inverse, are P = −1
 1
4
 2
3 0

 5 15
 3
 5
A′ = P AP = − 52
 1
 5
− 15
−1
2
15
4
15
2  −1
5 

1  1
15  
2  0
15  
2
5

1 . Therefore, the matrix for T relative to B′ is
15 
3
15 
 −7
0  0 − 2 1
 15


−1 0 −1 0 3 = − 15
− 2
1 −1  2
3 0
 15
0
2
5
19
− 15
8
− 15
1

1 .
3
− 13 
1 0 0


12. The standard matrix for T is A = 1 2 0. Furthermore, the transition matrix P from B′ to the standard basis B, and
1 1 3


 1 0 0
1 0 0




its inverse, are P = −1 0 1 and P −1 = 1 1 1. Therefore, the matrix for T relative to B′ is
 0 1 −1
1 1 0




1 0 0 1 0 0  1 0 0
 1 0 0






A′ = P −1 AP = 1 1 1 1 2 0 −1 0
1 = 0 3 0.
1 1 0 1 1 3  0 1 −1
0 0 2






14. (a) The transition matrix P from B′ to B is found by row-reducing [ B B′] to [I P].
1 −2
[B B′] = 
1
3
 1
So, P =  5
− 52

2
5
.
1
5
1 0

−1 1

1 0
[ I P] = 
0 1

1
5
− 52
 1
(b) The coordinate matrix for v relative to B is [v]B = P[v]B′ =  25
− 5
2
5

1
5
2 1
−1
5  
=  .

1 −3
−1
5
 
3 2 −1
−5
Furthermore, the image of v under T relative to B is T ( v ) B = A[ v]B = 
   =  .
−
0
4
1

 
−4
 1 −2 3 2  15
(c) The matrix of T relative to B′ is A′ = P −1 AP = 


1 0 4 − 52
2
2
 3
5
 = 
1

−2
5
0
.
4
 1 −2 −5
 3
(d) The image of v under T relative to B′ is P −1 T ( v) B = 
  = 
.
1 −4
2
−14
 3 0  1
 3
You can also find the image of v under T relative to B′ by A′[ v]B′ = 
  = 
.
−2 4 −3
−14
16. (a) The transition matrix P from B′ to B is found by row-reducing [ B B′] to [I P].
−1 −5
P = 

 0 −3
−1 −5  1
19
(b) The coordinate matrix for v relative to B is [v]B = P[v]B′ = 
   =  .
−
−
0
3
4

 
12
2 1 19
 50
Furthermore, the image of v under T relative to B is T ( v) B = A[ v]B = 
  = 
.
−
0
1
12

 
−12
5 2
−1
1 −1 −5
2 18
3 
(c) The matrix of T relative to B′ is A′ = P −1 AP = 
= 


1 0 −1  0 −3
−
0


0 −1
3


5  50
−1

−70
3 
(d) The image of v under T relative to B′ is P −1 T ( v ) B = 

 = 
.
1
 4
 0 − 3  −12
2 18  1
−70
You can also find the image of v under T relative to B′ by A′[ v]B′ = 
  = 
.
0 −1 −4
 4
18. (a) The transition matrix P from B′ to B is found by row-reducing [B B′] to [I P] .
 1 0 0 12 12 0
 1 1 −1 1 0 0




[B B′] =  1 −1 1 0 1 0  0 1 0 12 0 12  = [I
0 0 1 0 1 1 
−1 1 1 0 0 1


2
2

P]
 12 12 0


So, P =  12 0 12 .
0 1 1 
2
2

3
 12 12 0 2
2
1



(b) The coordinate matrix for v relative to B is [v]B = P[v]B′ =  2 0 12   1 =  23 .
 1
 0 1 1   1
2
2  

 
Furthermore, the image of v under T relative to B is
 3
 2
T ( v ) B = A[ v]B = − 12
 1
 2
 14 
−1 − 12   23 



 
1 3 = 11 .
2
4
2 2
19 
5   1
1
4
2  
(c) The matrix of T relative to B′ is A′ = P −1 AP.
 1 1 −1  32


A′ = P AP =  1 −1 1 − 12
−1 1 1  1

 2
−1
1
−1 − 12   12 12 0
 1
 14

1
1
2
0
=

4


2 2
2

5


5
1
1
1
0 2 2
2 
4
−1 − 54 

2 − 14 

1 15
4
(d) The image of v under T relative to B′ is
− 7 
 1 1 −1  14 
 4

  11
P T ( v) B =  1 −1 1  4  =  94 .
 29 
−1 1 1 19 

 4 
 4
−1
You can also find the image of v under T relative to B′ by
1
4
A′[v]B′ =  14
5
4
− 7 
−1 − 54  2

 4


2 − 14   1 =  94 .
  1
 29 
1 15
4  
 4
20. A is similar to A′ since
 1 12  1 −12  1 −12
 1 −12
A′ = P −1 AP = 


 = 
.
1 0
1
1
0 1 0
0
22. A is similar to A′ since
 1 −1 0 5 0 0  1 1 1
5 2 2






A′ = P AP = 0 1 −1 0 3 0 0 1 1 = 0 3 2 .
0 0 1 0 0 1 0 0 1
0 0 1






−1
24. The transition matrix from B′ to the standard matrix has
columns consisting of the vectors in B′.
 1 1 −1


P =  1 −1 1
−1 1 1


4
 1 0
 1 0
28. Because B = P −1 AP, and A4 = 
 = 
,
0 2
0 16
you have B 4 = P −1 A4 P
 3 −5  1 0 2 5
= 



−1 2 0 16  1 3
and it follows that
−74 −225
= 
.
91
 30
 12 12 0


−1
P =  12 0 12  .
0 1 1 

2 2
30. If B = P −1 AP and A is an idempotent matrix, then
So, the matrix for T relative to B′ is
2
= ( P −1 AP )( P −1 AP)
A′ = P −1 AP
 12 12 0  23


=  12 0 12  − 12
0 1 1   1
2 2  2

B 2 = ( P −1 AP )
−1 − 12   1 1 −1


1  1 −1
2
1
2 
5  −1
1
1 1
2 
 1 0 0


= 0 2 0.
0 0 3


26. First, note that A and B are similar.
−1 −1 2  1 0 0 −1 1 0




−1
P AP =  0 −1 2 0 −2 0  2 1 2
 1 2 −3 0 0 3  1 1 1




7
10
 11


8 10
=  10
−18 −12 −17


Now,
= P −1 A2 P
= P −1 AP
= B,
which shows that B is an idempotent matrix.
32. If Ax = x and B = P −1 AP, then PB = AP and
PBP −1 = A. So, PBP −1x = Ax = x.
34. Because A and B are similar, they represent the same
linear transformation with respect to different bases. So,
the range is the same, and so is the rank.
36. If A is nonsingular, then so is P −1 AP = B, and
B = P −1 AP
B −1 = ( P −1 AP)
−1
= P −1 A−1 ( P −1 )
−1
= P −1 A−1P
which shows that A−1 and B −1 are similar.
7
10
 11


B =  10
8 10
−18 −12 −17


= 11( −16) − 7(10) + 10( 24)
= −6 = A .
38. Because B = P −1 AP, you have AP = PB, as follows.
a11  a1n   p11 




an1  ann   pn1 


p1n 
 p11 


=


 pn1 
pnn 

p1n  b11  0 




pnn   0  bnn 
So,
a11  a1n   p1i 
 p1i 

 
 

   = bii  
an1  ann   pni 
 pni 

 
 
for i = 1, 2,  , n.
40. (a) There are two ways to get from the coordinate matrix [ v]B′ to the coordinate matrix T ( v ) B ′ . One way is direct, using the
matrix A′ to obtain A′[ v]B′ = T ( v) B ′ . The second way is indirect, using the matrices P, A, and P −1 to obtain
P −1 AP[ v]B ′ = T ( v) B ′ .
(b) To determine if two square matrices A and A′ are similar, the equation A′ = P −1 AP must hold true for some invertible
matrix P.
42. (a) True. See discussion, page 330, and note that A′ = P −1 AP  PA′P −1 = PP −1 APP −1 = A.
(b) False. Unless it is a diagonal matrix, see Example 5, page 333.
Section 6.5 Applications of Linear Transformations
−1 0
2. The standard matrix for T is A = 
.
 0 1
 0 −1
4. The standard matrix for T is A = 
.
−1 0
−1 0 5
− 5
(a) 
   =    T (5, 2) = ( − 5, 2)
 0 1 2
 2
 0 −1 −1
−2
(a) 
   =    T ( −1, 2) = ( −2, 1)
−1 0  2
 1
−1 0  −1
 1
(b) 
   =    T ( −1, − 6) = (1, − 6)
 0 1 − 6
− 6
 0 −1 2
−3
(b) 
   =    T ( 2, 3) = ( −3, − 2)
−1 0 3
−2
−1 0 a
−a
(c) 
   =    T ( a, 0) = ( −a, 0)
 0 1 0
 0
 0 −1 a
 0
(c) 
   =    T ( a, 0) = (0, − a)
−1 0 0
−a
−1 0 0
0
(d) 
   =    T (0, b) = (0, b)
0
1
b

 
b
 0 −1 0
−b
(d) 
   =    T (0, b) = ( −b, 0)
−
1
0
b

 
 0
−1 0  c
 −c
(e) 
   =    T ( c, − d ) = ( − c, − d )
 0 1 −d 
−d 
 0 −1  e
 d
(e) 
   =    T ( e, − d ) = ( d , − e )
−1 0 −d 
−e
−1 0  f 
− f 
(f) 
   =    T ( f , g ) = (− f , g )
 0 1  g 
 g
 0 −1 − f 
− g 
(f) 
   =    T (− f , g ) = (− g , f )
−1 0  g 
 f
6. (a) T ( x, y ) = xT (1, 0) + yT (0, 1)
= x(1, 1) + y(0, 1)
= ( x, x + y )
(b) T is vertical shear.
(a) Identify T as a horizontal contraction from its
1

0
standard matrix A =  4 .


 0 1
So, the set of fixed points is {(t , 0) : t is real}
18. The reflection in the line y = − x is given by
T ( x, y ) = ( x, y ) = ( − y, − x) which implies − x = y.
( (
x, y
4
(x, y)
x
10. (a) Identify T as a vertical expansion from its standard
 1 0
matrix A = 
.
0 3
y
So, the set of fixed points is {(t , − t ) : t is real}
k 0
20. A horizontal expansion has the standard matrix 

0 1
where k > 1.
A fixed point of T satisfies the equation
k 0  v1 
kv1 
 v1 
T ( v) = 
  =   =   = v
0
1
v
v

  2
v2 
 2 
So the fixed points of T are
{v = (0, t ): t is a real number}.
22. A vertical shear has the form T ( x, y ) = ( x, y + kx). If
(x, 3y)
(x, y)
x
( x, y ) is a fixed point, then
T ( x, y ) = ( x, y ) = ( x, y + kx) which implies that
x = 0. So the set of fixed points is {(0, t ) : t is real}.
24. Find the image of each vertex under T ( x, y ) = ( y, x).
12. T ( x, y ) = ( x + 4 y, y )
(a) Identify T as a horizontal shear from its standard
 1 4
matrix A = 
.
0 1
(b)
T ( x, y ) = ( x, y ) = ( x, − y ) which implies that y = 0.
T ( x, y ) = ( − y, − x). If ( x, y ) is a fixed point then
y
(b)
229
16. The reflection in the x-axis is given by
T ( x, y ) = ( x, − y ). If ( x, y ) is a fixed point, then
x 
8. T ( x, y ) =  , y 
4 
(b)
Applications of Linear Transformations
T (0, 0) = (0, 0),
T (1, 0) = (0, 1),
T (1, 1) = (1, 1),
T (0, 1) = (1, 0)
y
y
1
T(1, 0)
T(1, 1)
T(0, 1)
(x, y)
(x + 4y, y)
x
14. T ( x, y ) = ( x, 9 x + y )
(a) Identify T as a vertical shear from its matrix
 1 0
A = 
.
9 1
(b)
T(0, 0)
x
1
 y
26. Find the image of each vertex under T ( x, y ) =  x, .
 4
T (0, 0) = (0, 0), T (1, 0) = (1, 0),
 1
T (1, 1) = 1, ,
 4
 1
T (0, 1) =  0, 
 4
y
1
y
(x, 9x + y)
1
2
(0, (
(1, (
(0, 0)
1
1
4
1
4
(1, 0)
(x, y)
x
x
28. Find the image of each vertex under T ( x, y ) = (5 x, y ).
T (0, 0) = (0, 0),
T (1, 0) = (5, 0),
T (1, 1) = (5, 1),
T (0, 1) = (0, 1)
36. Find the image of each vertex under T ( x, y ) = ( 2 x, y ).
T (0, 0) = (0, 0),
T (1, 0) = ( 2, 0),
T (1, 2) = ( 2, 2),
T (0, 2) = (0, 2)
y
y
5
2
(0, 2)
(2, 2)
3
(0, 1)
1
(0, 0)
1
(5, 1)
1
(5, 0)
3
(2, 0)
x
(0, 0)
30. Find the image of each vertex under
T ( x, y ) = ( x, y + 3 x).
T (0, 0) = (0, 0),
T (1, 0) = (1, 3),
T (1, 1) = (1, 4),
T (0, 1) = (0, 1)
38. Find the image of each vertex under
T ( x, y ) = ( x, y + 2 x).
y
4
T(1, 1)
3
T(1, 0)
T (0, 0) = (0, 0),
T (1, 0) = (1, 2),
T (1, 2) = (1, 4),
T (0, 2) = (0, 2)
y
(1, 4)
4
3
2
(0, 2)
T(0, 1)
T(0, 0) 2
4
3
T (0, 0) = (0, 0),
T (1, 0) = (0, 1),
T (1, 2) = ( 2, 1),
T (0, 2) = ( 2, 0)
(1, 2)
1
x
(0, 0) 2
32. Find the image of each vertex under T ( x, y ) = ( y, x).
3
x
4
40. Find the image of each vertex under T ( x, y ) = ( y, x).
(a) T (0, 0) = (0, 0),
T (1, 2) = ( 2, 1),
T (3, 6) =
(6, 3), T (5, 2) = (2, 5)
T (6, 0) = (0, 6)
y
2
1
x
2
y
(0, 1)
(2, 1)
8
(2, 0)
4
6 (0, 6)
(2, 5)
(0, 0)
2
x
(
)
34. Find the image of each vertex under T ( x, y ) = x, 12 y .
T (0, 0) = (0, 0),
T (1, 0) = (1, 0),
T (1, 2) = (1, 1),
T (0, 2) = (0, 1)
(0, 0)
(2, 1)
2
4
6
(b) T (0, 0) = (0, 0),
T (6, 6) =
y
1
(6, 3)
2
x
8
T (0, 6) = (6, 0),
(6, 6), T (6, 0) = (0, 6)
y
(0, 1)
(1, 1)
8
6
(0, 6)
(6, 6)
4
x
1
2
(0, 0)
2
(6, 0)
4
6
8
x
42. Find the image of each vertex under
T ( x, y ) = ( x, x + y ).
(a) T (0, 0) = (0, 0), T (1, 2) = (1, 3), T (3, 6) = (3, 9),
T (5, 2) = (5, 7), T (6, 0) = (6, 6)
y
(6, 6)
(1, 3)
(0, 0)
x
(b) T (0, 0) = (0, 0),
T (0, 6) = (0, 6),
T (6, 6) = (6, 12),
T (6, 0) = (6, 6)
−2
T ( x, y ) =
( 12 x, 2 y ).
(a) T (0, 0) = (0, 0),
(0, 6)
( 32 , 12),
T (3, 6) =
( 12 , 4),
T (5, 2) = ( 52 , 4),
T (1, 2) =
y
1 2 3 4 5 6 7 8 9
12
10
8
6
4
2
44. Find the image of each vertex under
(5, 7)
y
231
T (6, 0) = (3, 0)
(3, 9)
9
8
7
6
5
4
3
2
1
Applications of Linear Transformations
( , 12(
3
2
12
10
8
1
6
,
4
2
4
( (
( , 4(
5
2
(0, 0)
−4 −2
(3, 0)
2 4 6 8
(b) T (0, 0) = (0, 0),
(6, 12)
T (6, 6) = (3, 12),
x
T (0, 6) = (0, 12),
T (6, 0) = (3, 0)
y
(6, 6)
(0, 0)
2 4 6 8 10 12
(0, 12)
x
(3, 12)
10
8
6
4
(0, 0)
(3, 0)
−4 −2
2 4 6 8
x
46. The linear transformation defined by A is a vertical shear.
48. The linear transformation defined by A is a vertical contraction.
50. The linear transformation defined by A is a reflection in the y-axis followed by a horizontal contraction.
 1 0
0 1
52. Because 
 represents a vertical expansion, and 
 represents a reflection in the line x = y , A is a vertical expansion
0 3
 1 0
followed by a reflection in the line x = y.
−1 0
54. (a) The linear transformation of 
 represents a reflection in the y-axis.
 0 1
 1 0
(b) The linear transformation of 
 represents a reflection in the x-axis.
0 −1
0 1
(c) The linear transformation of 
 represents a reflection in the line y = x.
1 0
k 0
(d) The linear transformation of 
 , where k > 1, represents a horizontal expansion.
0 1
k 0
(e) The linear transformation of 
 , where 0 < k < 1, represents a horizontal contraction.
0 1
 1 0
(f) The linear transformation of 
 , where k > 1, represents a vertical expansion.
0 k 
 1 0
(g) The linear transformation of 
 , where 0 < k < 1, is represented by a vertical contraction.
0 k 
1 k 
(h) The linear transformation of 
 represents a horizontal shear.
0 1
 1 0
(i) The linear transformation of 
 represents a vertical shear.
k 1
(j) The linear transformation of [1 0 0; 0 cos θ −sin θ; 0 sin θ cos θ] represents a rotation about the x-axis.
(k) The linear transformation of [cos θ 0 sin θ; 0 1 0; −sin θ 0 cos θ] represents a rotation about the y-axis.
(l) The linear transformation of [cos θ −sin θ 0; sin θ cos θ 0; 0 0 1] represents a rotation about the z-axis.
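The three rotation matrices in parts (j) through (l) are easy to sanity-check numerically. The following NumPy sketch is an added illustration (not part of the original manual): it builds each matrix for a sample angle and confirms it is orthogonal with determinant 1.

import numpy as np

def rot_x(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

t = np.pi / 7   # any sample angle
for R in (rot_x(t), rot_y(t), rot_z(t)):
    # every rotation matrix satisfies R^T R = I and det R = 1
    assert np.allclose(R.T @ R, np.eye(3))
    assert np.isclose(np.linalg.det(R), 1.0)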
56. A rotation of 60° about the x-axis is given by the matrix
A = [1 0 0; 0 cos 60° −sin 60°; 0 sin 60° cos 60°] = [1 0 0; 0 1/2 −√3/2; 0 √3/2 1/2].
58. A rotation of 120° about the x-axis is given by the matrix
A = [1 0 0; 0 cos 120° −sin 120°; 0 sin 120° cos 120°] = [1 0 0; 0 −1/2 −√3/2; 0 √3/2 −1/2].
60. Using the matrix obtained in Exercise 56, you find
T(1, 1, 1) = [1 0 0; 0 1/2 −√3/2; 0 √3/2 1/2] [1; 1; 1] = (1, (1 − √3)/2, (1 + √3)/2).
62. Using the matrix obtained in Exercise 58, you find
T(1, 1, 1) = [1 0 0; 0 −1/2 −√3/2; 0 √3/2 −1/2] [1; 1; 1] = (1, (−1 − √3)/2, (−1 + √3)/2).
64. The indicated tetrahedron is produced by a −90° rotation about the z-axis.
66. The indicated tetrahedron is produced by a 180° rotation about the z-axis.
68. The indicated tetrahedron is produced by a 180° rotation about the x-axis.
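As an added numerical cross-check of Exercises 56 through 62 (not in the original solutions), the sketch below forms the 60° and 120° rotations about the x-axis and applies them to (1, 1, 1).

import numpy as np

def rot_x(deg):
    t = np.radians(deg)
    return np.array([[1, 0, 0],
                     [0, np.cos(t), -np.sin(t)],
                     [0, np.sin(t),  np.cos(t)]])

v = np.array([1.0, 1.0, 1.0])
print(rot_x(60) @ v)    # approx (1, (1 - sqrt(3))/2, (1 + sqrt(3))/2)
print(rot_x(120) @ v)   # approx (1, (-1 - sqrt(3))/2, (-1 + sqrt(3))/2)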

70. The matrix is
[cos 60° 0 sin 60°; 0 1 0; −sin 60° 0 cos 60°] [cos 30° −sin 30° 0; sin 30° cos 30° 0; 0 0 1]
 = [1/2 0 √3/2; 0 1 0; −√3/2 0 1/2] [√3/2 −1/2 0; 1/2 √3/2 0; 0 0 1]
 = [√3/4 −1/4 √3/2; 1/2 √3/2 0; −3/4 √3/4 1/2].
T(1, 1, 1) = ((3√3 − 1)/4, (√3 + 1)/2, (√3 − 1)/4)
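The composite rotation in Exercise 70 can also be verified numerically. This added NumPy sketch reproduces the product of the y-axis and z-axis rotations shown above and the image of (1, 1, 1).

import numpy as np

t60, t30 = np.radians(60), np.radians(30)
Ry = np.array([[np.cos(t60), 0, np.sin(t60)],
               [0, 1, 0],
               [-np.sin(t60), 0, np.cos(t60)]])
Rz = np.array([[np.cos(t30), -np.sin(t30), 0],
               [np.sin(t30),  np.cos(t30), 0],
               [0, 0, 1]])
A = Ry @ Rz
print(A)                              # [[sqrt(3)/4, -1/4, sqrt(3)/2], ...]
print(A @ np.array([1.0, 1.0, 1.0]))  # approx ((3*sqrt(3)-1)/4, (sqrt(3)+1)/2, (sqrt(3)-1)/4)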
72. The matrix is
[cos 135° −sin 135° 0; sin 135° cos 135° 0; 0 0 1] [1 0 0; 0 cos 120° −sin 120°; 0 sin 120° cos 120°]
 = [−√2/2 −√2/2 0; √2/2 −√2/2 0; 0 0 1] [1 0 0; 0 −1/2 −√3/2; 0 √3/2 −1/2]
 = [−√2/2 √2/4 √6/4; √2/2 √2/4 √6/4; 0 √3/2 −1/2].
T(1, 1, 1) = ((√6 − √2)/4, (√6 + 3√2)/4, (√3 − 1)/2)
Review Exercises for Chapter 6
2. (a) T(v) = T(4, −1) = (3, −2)
(b) T(v1, v2) = (v1 + v2, 2v2) = (8, 4)
v1 + v2 = 8, 2v2 = 4, so v1 = 6, v2 = 2.
Preimage of w is (6, 2).
4. (a) T(v) = T(−2, 1, 2) = (−1, 3, 2)
(b) T(v1, v2, v3) = (v1 + v2, v2 + v3, v3) = (0, 1, 2)
v1 + v2 = 0, v2 + v3 = 1, v3 = 2, so v2 = −1, v1 = 1.
Preimage of w is (1, −1, 2).
8. T preserves addition.
T(x1, y1) + T(x2, y2) = (x1 + y1) + (x2 + y2) = (x1 + x2) + (y1 + y2) = T(x1 + x2, y1 + y2)
T preserves scalar multiplication.
cT(x, y) = c(x + y) = (cx) + (cy) = T(cx, cy)
So, T is a linear transformation with standard matrix [1 1].
6. (a) T(v) = T(2, −3) = 7
(b) The preimage of w is given by solving the equation T(v1, v2) = 2v1 − v2 = 4.
The resulting linear equation 2v1 − v2 = 4 has the solutions v1 = (t + 4)/2, where t is any real number. So, the preimage of w is {((t + 4)/2, t) : t is any real number}.
10. T does not preserve addition or scalar multiplication, so T is not a linear transformation. A counterexample is
T(1, 1) + T(1, 0) = (4, 1) + (4, 0) = (8, 1) ≠ (5, 1) = T(2, 1).
12. T(x, y) = (x + y, y)
T(x1, y1) + T(x2, y2) = (x1 + y1, y1) + (x2 + y2, y2) = ((x1 + x2) + (y1 + y2), y1 + y2) = T(x1 + x2, y1 + y2)
So, T preserves addition.
cT(x, y) = c(x + y, y) = (cx + cy, cy) = T(cx, cy)
So, T preserves scalar multiplication.
So, T is a linear transformation with standard matrix A = [1 1; 0 1].
14. T does not preserve addition or scalar multiplication, and so T is not a linear transformation. A counterexample is
−2T(3, −3) = −2(|3|, |−3|) = (−6, −6) ≠ (6, 6) = T(−6, 6) = T(−2(3), −2(−3)).
16. T preserves addition.
T ( x1 , x2 , x3 ) + T ( y1 , y2 , y3 )
= ( x1 − x2 , x2 − x3 , x3 − x1 ) + ( y1 − y2 , y2 − y3 , y3 − y1 )
= ( x1 − x2 + y1 − y2 , x2 − x3 + y2 − y3 , x3 − x1 + y3 − y1 )
= (( x1 + y1 ) − ( x2 + y2 ), ( x2 + y2 ) − ( x3 + y3 ), ( x3 + y3 ) − ( x1 + y1 ))
= T ( x1 + y1 , x2 + y2 , x3 + y3 )
T preserves scalar multiplication.
cT ( x1 , x2 , x3 ) = c( x1 − x2 , x2 − x3 , x3 − x1 )
= (c( x1 − x2 ), c( x2 − x3 ), c( x3 − x1 ))
= (cx1 − cx2 , cx2 − cx3 , cx3 − cx1 )
= T (cx1 , cx2 , cx3 )
 1 −1 0


So, T is a linear transformation with standard matrix A =  0 1 −1.
−1 0 1


18. T preserves addition.
T(x1, y1, z1) + T(x2, y2, z2) = (x1, 0, −y1) + (x2, 0, −y2) = (x1 + x2, 0, −(y1 + y2)) = T(x1 + x2, y1 + y2, z1 + z2)
T preserves scalar multiplication.
cT(x, y, z) = c(x, 0, −y) = (cx, 0, −cy) = T(cx, cy, cz)
So, T is a linear transformation with standard matrix A = [1 0 0; 0 0 0; 0 −1 0].
20. Because (0, 1, 1) = (1, 1, 1) − (1, 0, 0), you have
T (0, 1, 1) = T (1, 1, 1) − T (1, 0, 0)
=1−3
= −2.
22. Because ( 2, 4) = 2(1, −1) + 3(0, 2), you have
T ( 2, 4) = 2T (1, −1) + 3T (0, 2)
= 2( 2, − 3) + 3(0, 8)
= ( 4, − 6) + (0, 24)
= ( 4, 18).
24. (a) Because A is a 2 × 3 matrix, it maps R 3 into
R 2 , ( n = 3, m = 2).
(b) Because T(v) = Av and Av = [1 2 −1; 1 0 1] [5; 2; 2] = [7; 7], it follows that T(5, 2, 2) = (7, 7).
(c) The preimage of w is given by the solution to the equation T(v1, v2, v3) = w = (4, 2).
The equivalent system of linear equations
v1 + 2v2 − v3 = 4
v1 + v3 = 2
has the solution {(2 − t, 1 + t, t) : t is a real number}.
26. (a) Because A is a 2 × 2 matrix, it maps R² into R² (n = 2, m = 2).
(b) Because T(v) = Av and Av = [2 1; 0 1] [8; 4] = [20; 4], it follows that T(8, 4) = (20, 4).
(c) The preimage of w is given by the solution to the equation T(v1, v2) = w = (5, 2).
The equivalent system of linear equations 2v1 + v2 = 5 has the solution v2 = 2, v1 = 3/2. So, the preimage is (3/2, 2).
32. (a) The standard matrix for T is A = [1 2 0; 0 1 2; 2 0 1].
Solving Av = 0 yields the solution v = 0. So, ker(T) = {(0, 0, 0)}.
(b) Because ker(T) has dimension 0, range(T) must be all of R³.
34. (a) The standard matrix for T is A = [1 1 0; 0 1 1; 1 0 −1].
Solving Av = 0 yields the solution {(t, −t, t) : t ∈ R}. So, ker(T) is spanned by {(1, −1, 1)}.
(b) Use Gauss-Jordan elimination to reduce Aᵀ as follows.
Aᵀ = [1 0 1; 1 1 0; 0 1 −1] → [1 0 1; 0 1 −1; 0 0 0]
The nonzero row vectors form a basis for the range of T, {(1, 0, 1), (0, 1, −1)}.
28. (a) Because A is a 3 × 2 matrix, it maps R² into R³ (n = 2, m = 3).
(b) Because T(v) = Av and Av = [−1 0; 0 1; −1 −3] [3; 5] = [−3; 5; −18], it follows that T(3, 5) = (−3, 5, −18).
(c) The preimage of w is given by the solution to the equation T(v1, v2) = w = (5, 2, −1).
The equivalent system of linear equations
−v1 = 5
v2 = 2
−v1 − 3v2 = −1
has the solution v1 = −5 and v2 = 2. So, the preimage is (−5, 2).
30. If you translate the vertex (5, 3) back to the origin (0, 0), then the other vertices (3, 5) and (3, 0) are translated to (−2, 2) and (−2, −3), respectively. The rotation of 90° is given by the matrix in Exercise 29, and you have
[0 −1; 1 0] [−2; 2] = [−2; −2]
[0 −1; 1 0] [−2; −3] = [3; −2].
Translating back to the original coordinate system, the new vertices are (5, 3), (3, 1) and (8, 1).
[Sketch: the triangle with vertices (5, 3), (3, 5), (3, 0) and its image after the 90° rotation.]
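Because Exercise 30 combines a translation, a rotation, and the inverse translation, a short numeric sketch (added here, not part of the original solution) makes the bookkeeping easy to check.

import numpy as np

R90 = np.array([[0, -1], [1, 0]])        # rotation of 90 degrees
center = np.array([5, 3])
vertices = np.array([[5, 3], [3, 5], [3, 0]])

# translate to the origin, rotate, translate back
images = (R90 @ (vertices - center).T).T + center
print(images)   # [[5 3] [3 1] [8 1]]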
36. To find the kernel of T, row reduce A.
A = [−1 2; 0 −1; −2 2] → [1 0; 0 1; 0 0]
(a) ker(T) = {(0, 0)}
(b) dim(ker(T)) = nullity(T) = 0
(c) Aᵀ = [−1 0 −2; 2 −1 2] → [1 0 2; 0 1 2]
range(T) is span{(1, 0, 2), (0, 1, 2)}.
(d) dim(range(T)) = rank(T) = 2
 1 1 −1
 1 0 0




38. A =  1 2 1  0 1 0
0 1 0
0 0 1




(a) ker (T ) = {(0, 0, 0)}
(b) dim( ker(T )) = nullity(T ) = 0
 1 1 0
 1 0 0




(c) AT =  1 2 1  0 1 0
−1 1 0
0 0 1




range(T ) is span {(1, 0, 0), (0, 1, 0) (0, 0, 1)}
(5, 3)
(3, 1)
range(T ) is span {(1, 0, 2), (0, 1, 2)}.
(8, 1)
8
(d) dim( range(T )) = 3
x
40. rank(T) = dim P5 − nullity(T) = 6 − 4 = 2
42. nullity(T) = dim(M3,3) − rank(T) = 9 − 5 = 4
52. The standard matrix for T is A = [1 1 0; 0 1 −1]. Because A is not invertible, T has no inverse.
44. The standard matrix for T is
A = [1 0 0; 0 1 0; 0 0 0].
Therefore, you have A² = [1 0 0; 0 1 0; 0 0 0][1 0 0; 0 1 0; 0 0 0] = [1 0 0; 0 1 0; 0 0 0] = A.
54. (a) Because |A| = 1 ≠ 0, ker(T) = {(0, 0)} and T is one-to-one.
(b) Because rank(A) = 2, T is onto.
(c) The transformation is one-to-one and onto, and is, therefore, invertible.
56. (a) Because |A| = 40 ≠ 0, ker(T) = {(0, 0, 0)}, and T is one-to-one.
(b) Because rank(A) = 3, T is onto.
(c) The transformation is one-to-one and onto, and therefore invertible.
46. The standard matrix for T, relative to B = {1, x, x², x³}, is
A = [0 1 0 0; 0 0 2 0; 0 0 0 3; 0 0 0 0].
Therefore, you have
A² = [0 1 0 0; 0 0 2 0; 0 0 0 3; 0 0 0 0][0 1 0 0; 0 0 2 0; 0 0 0 3; 0 0 0 0] = [0 0 2 0; 0 0 0 6; 0 0 0 0; 0 0 0 0].
48. The standard matrices for T1 and T2 are A1 = [1; 4] and A2 = [3 1].
The standard matrix for T = T1 ∘ T2 is A = A2A1 = [3 1][1; 4] = [7], and the standard matrix for T′ = T2 ∘ T1 is A′ = A1A2 = [1; 4][3 1] = [3 1; 12 4].
50. The standard matrix for T is A = [cos θ −sin θ; sin θ cos θ]. A is invertible and its inverse is given by A⁻¹ = [cos θ sin θ; −sin θ cos θ].
58. (a) The standard matrix for T is A = [0 2; 0 0], so it follows that Av = [0 2; 0 0][−1; 3] = [6; 0], that is, T(v) = (6, 0).
(b) The image of each vector in B is as follows.
T(2, 1) = (2, 0) = −2(−1, 0) + 0(2, 2)
T(−1, 0) = (0, 0) = 0(−1, 0) + 0(2, 2)
Therefore, the matrix for T relative to B and B′ is A′ = [−2 0; 0 0].
Because v = (−1, 3) = 3(2, 1) + 7(−1, 0), [v]B = [3; 7] and A′[v]B = [−2 0; 0 0][3; 7] = [−6; 0].
So, T(v) = −6(−1, 0) + 0(2, 2) = (6, 0).
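For Exercise 58(b), the coordinate computation can be confirmed with a small NumPy sketch. This is an added illustration that assumes the bases B = {(2, 1), (−1, 0)} and B′ = {(−1, 0), (2, 2)} used above.

import numpy as np

A = np.array([[0, 2], [0, 0]])           # standard matrix of T
B = np.column_stack([(2, 1), (-1, 0)])   # basis B as columns
Bp = np.column_stack([(-1, 0), (2, 2)])  # basis B' as columns

v = np.array([-1, 3])
v_B = np.linalg.solve(B, v)              # coordinates of v relative to B: [3, 7]
A_prime = np.linalg.solve(Bp, A @ B)     # matrix of T relative to B and B': [[-2, 0], [0, 0]]
w_Bp = A_prime @ v_B                     # [-6, 0]
print(Bp @ w_Bp)                         # back to standard coordinates: [6, 0] = T(v)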
60. The standard matrix for T is
 1 3 0


A = 3 1 0.
0 0 −2


The transition matrix from B′ to B, the standard matrix, is P
64. (a) Because T ( v) = proju v where u = (4, 3), you have
T ( v) =
 16 12 
T (1, 0) =  ,  and
 25 25 
1 0
 12
2


P −1 =  12 − 12 0.


0 1
 0
The matrix A′ for T relative to B′ is
1 0
 12
 1 3 0  1 1 0
2
1

=  2 − 12 0 3 1 0  1 −1 0


0 1 0 0 −2 0 0 1
 0
4 0 0
= 0 −2 0.
0 0 −2
A =
A and A′ are similar.
1 16 12

.
25 12 9
 1  9 −12 
2
(b) ( I − A) =  
 25 −12 16 

 
2
1  9 −12

 = I − A.
25 −12 16
16 
5
1 16 12 5
Av =

  =  
25 12 9 0
12 
 5 
=
(c)
 9
 5
1  9 −12 5

( I − A) v = 
  = 
25 −12 16 0
 12 
−
 5 
62. Since A′ = P −1 AP
2 0 0


= 0 1 0,
0 0 3


 12 9 
T (0, 1) =  , 
 25 25 
and the standard matrix for T is
Because, A′ = P −1 AP, it follows that A and A′ are similar.
0 0
1  1 0 1  1 2 0
1



1
=  2 0 − 2  −1 3 1 0 1 −1
 1 −1 − 1   0 0 2  1 0 0


2 
2
4x + 3y
( 4, 3).
25
So,
 1 1 0
P =  1 −1 0
0 0 1
A′ = P −1 AP
y
(d)
3
2
1
−1
−2
(165 , 125 )
(4, 3)
Av
v = (5, 0)
(I − A)v
4
5
x
( 95 , − 125 )
66. Suppose b = 0 . Then T ( v ) = Av. T (u + v ) = A(u + v) = Au + Av = T (u) + T ( v )
cT ( v) = c( Av ) = (cA) v = T (cv )
So, T : R 2 → R 2 is a linear transformation.
Suppose T is a linear transformation. Then T (u + v) = A(u + v ) + b and T (u) + T ( v ) = ( Au + b) + ( Av + b).
T (u + v ) = T ( u ) + T ( v )
A(u + v) + b = ( Au + b) + ( Av + b)
Au + Av + b = Au + Av + 2b
b = 2b
0 = b
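A tiny numeric experiment (added; the matrix A and vector b below are arbitrary choices for illustration) shows how a nonzero b breaks additivity, in line with the argument in Exercise 66.

import numpy as np

A = np.array([[1, 2], [3, 4]])   # any matrix (illustrative)
b = np.array([1, -1])            # nonzero shift
T = lambda v: A @ v + b

u, v = np.array([1, 0]), np.array([0, 1])
print(T(u + v))          # A(u + v) + b
print(T(u) + T(v))       # A(u + v) + 2b, which differs from the line above since b != 0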
 1 0
0 0
68. (a) Let S = 
and T = 

.
0 0
0 1
 1 0
Then S + T = 
 and rank ( S + T )
0 1
= rank ( S ) + rank (T ).
 1 0
(b) Let S = T = 
.
0 0
2 0
Then S + T = 
 and rank ( S + T )
0 0
= 1 < 2 = rank ( S ) + rank (T ).
70. (a) Let v ∈ kernel(T), which implies that T(v) = 0. Clearly (S ∘ T)(v) = 0 as well, which shows that v ∈ kernel(S ∘ T).
(b) Let w ∈ W. Because S ∘ T is onto, there exists v ∈ V such that (S ∘ T)(v) = w. So, S(T(v)) = w, and S is onto.
78. (a) T is a horizontal expansion.
(b) [Sketch: a region and its image under the horizontal expansion.]
72. Compute the images of the basis vectors under Dx .
1
Dx (1) = 0
(b)
Dx (sin x) = cos x
0

0
.
0 0 −1

0 1 0
1 0
(x, y)
the kernel is spanned by {1}.
T (1, 0) = ( 2, 0), T (0, 1) = (0, 1). A sketch of the triangle
and its image follows.
y
{
}
74. First compute the effect of T on the basis 1, x, x 2 , x3 .
2
T (1) = 1
T ( x) = 1 + x
2 x + x2
T(1, 0)
3x + x
2
T(0, 0)
3
x
A sketch of the triangle and its image follows.
y
2
nullity(T ) = 0.
T(0, 1)
T(1, 0)
T(0, 0)
1
1
76. (a) T is a horizontal shear.
y
(x, y) T(x, y)
2
x
1 0
86. The transformation is a vertical shear 
 followed
3 1
2
1
2
84. The image of each vertex is
T (0, 0) = (0, 0), T (1, 0) = (1, 2), T (0, 1) = (0, 1).
1 0 0

1 2 0
.
0 1 3

0 0 1
Because the rank ( A) = 4, the rank (T ) = 4 and
1
T(0, 1)
1
The standard matrix for T is
(b)
x
82. The image of each vertex is T (0, 0) = (0, 0),
The range of Dx is spanned by {x, sin x, cos x}, whereas
1

0
A = 
0

0
x
5
3
2
0 0
T (x ) =
4
( x, y + x(
So, the matrix of Dx relative to this basis is
3
3
y
Dx (cos x) = −sin x
T ( x2 ) =
2
80. (a) T is a vertical shear.
Dx ( x) = 1
0

0
0

0
T(x, y)
5
 1 0
by a vertical expansion 
.
0 2
x
88. A rotation of 90° about the x-axis is given by
0
0
1

 1 0 0




A = 0 cos 90° −sin 90° = 0 0 −1.
0 sin 90° cos 90° 
0 1 0




 1 0 0  1
 1

 
 
Because Av = 0 0 −1 −1 = −1 ,
0 1 0  1
−1

 
 
the image of (1, −1, 1) is (1, −1, −1).
92. A rotation of 120° about the y-axis is given by
90. A rotation of 30° about the y-axis is given by
2
2
0
 3
0

 cos 30° 0 sin 30° 
2



A =  0
1
0  =  0 1

−sin 30° 0 cos 30°
 1


 −2 0

Because
 3
0

 2
Av =  0 1

 1
 −2 0


1
3
0
 −

 cos 120° 0 sin 120° 
2
2




A1 =  0
1
0
0 1
0
 = 

−sin 120° 0 cos 120°

3
1


− 2 0 − 2 


while a rotation of 45° about the z-axis is given by
1

2
0.

3
2 
 3
1
1 
+ 
  1

2
2
 2
 
,
0 −1 = 
−1
 


 1
3   1
3
−
+
 2
2 
2 

 3
1
1
3
+ , −1, − +
the image of (1, −1, 1) is 
.
2
2
2 
 2
 2

cos
45
°
−
sin
45
°
0


 2


A2 =  sin 45° cos 45° 0 =  2

 0
0
1
 2

 0
−
2
2

0

.
0

1
So, the pair of rotations is given by
 2

 2
A2 A1 =  2

 2
 0

2
−
 4

2
= −
4


3
−
2

−
−
2
2
2
2
0

1
0  −
0
2

0 1

0 
 − 3 0
1  2
2
2
2
2
6

4 
6
.
4 
1
− 
2
0
3

2 
0

1
− 
2 
94. A rotation of 60° about the x-axis is given by
0
0
1


0
0
1

1
3
0


−
A1 = 0 cos 60° −sin 60° = 
2
2 


0 sin 60° cos 60° 

3
1


0

2
2 
while a rotation of 60° about the z-axis is given by
 1

cos 60° −sin 60° 0
 2


A2 =  sin 60° cos 60° 0 =  3

 0
0
1
 2

 0
−

0

.
1
0
2 
0 1
3
2
So, the pair of rotations is given by
0
0
 1
 1
3
0 


2
1
3
 2
 0
−
A2 A1 =  3

1
2
2 
0 


2
2
3
1

 0
 0
0 1  
2
2 
 1

 2
 3
= 
 2

 0

3
4
1
4
3
2
3

4
3
−
.
4 
1

2
96. The standard matrix for T is
0
0
1

 1 0 0




°
−
°
=
0
cos
90
sin
90


0 0 −1.
0 sin 90° cos 90° 
0 1 0




Therefore, T is given by T ( x, y, z ) = ( x, − z, y ). The
image of each vertex is as follows.
98. The standard matrix for T is
 1
−
cos 120° −sin 120° 0
 2


 3
sin
120
cos
120
0
°
°
=



 0

0
1
 2


 0
−

0

.
1
0
−
2 
0 1 
3
2
T (0, 0, 1) = (0, −1, 0)
Therefore, T is given by
 1

3
3
1
T ( x, y, z ) =  − x −
y,
x − y, z . The image
2
2
2
 2

of each vertex is as follows.
T (1, 1, 1) = (1, −1, 1)
T (0, 0, 0) = (0, 0, 0)
T (1, 0, 0) = (1, 0, 0)
 1
3 
T (1, 0, 0) =  − ,
, 0 
 2 2

T (0, 0, 0) = (0, 0, 0)
T (1, 1, 0) = (1, 0, 1)
T (0, 1, 0) = (0, 0, 1)
T (1, 0, 1) = (1, −1, 0)
 1
3
3
1 
,
T (1, 1, 0) =  − −
− , 0 
2
2
2 
 2
T (0, 1, 1) = (0, −1, 1)

3 1 
, − , 0 
T (0, 1, 0) =  −
2 
 2
T (0, 0, 1) = (0, 0, 1)
 1
3 
, 1
T (1, 0, 1) =  − ,
 2 2 
 1
3
3
1 
,
T (1, 1, 1) =  − −
− , 1
2
2
2
2 


3 1 
, − , 1
T (0, 1, 1) =  −
2
2 

100. (a) True. The statement is true because if T is a reflection T(x, y) = (x, −y), then the standard matrix is [1 0; 0 −1].
(b) True. The statement is true because the linear transformation T(x, y) = (x, ky) has the standard matrix [1 0; 0 k].
102. (a) True. Dx is a linear transformation because it
preserves addition and scalar multiplication. Further, Dx ( Pn ) = Pn −1 because for all natural numbers i ≥ 1,
Dx ( xi ) = ixi −1.
(b) False. If T is a linear transformation V → W , then kernel of T is defined to be a set of v ∈ V , such that T ( v ) = 0W .
(c) True. If T = T2 ∘ T1 and Ai is the standard matrix for Ti, i = 1, 2, then the standard matrix for T is equal to A2A1 by Theorem 6.11 on page 323.
Project Solutions for Chapter 6
1 Reflections in the Plane-I
y
L(x, y)
y
2.
 1 0


0 −1
(x, y)
ax + by = 0
1
y=0
2
(x, y)
−1
x
2
x=0
(−x, y)
(x, −y)
y
3.
−1 0


 0 1
y
1.
x
2
(y, x)
y=x
1
(x, y)
−1
0 1


 1 0
(x, y)
1
2
x
−1
−1
1
4. v = ( 2, 1)
x
B = {v, w}
y
w = ( −1, 2)
L( v ) = v
L( w ) = − w
2
w
 1 0

 = A
0 −1
−2
B′ = {e1 , e 2} standard basis
A is a matrix of L relative to basis B.
x − 2y = 0
v
−1
1
2
x
−2
A′ = P −1 AP matrix of L relative to the standard basis B′.
2 1
2 −1
1
  P = 5

1
2
−


 1 2
[B′ B] → I P −1  P −1 = 
3
2
1  2 1
3 4
2 −1  1 0  2 1
5
1
1
A′ = P −1 AP = 15 


 = 5

 = 5
 = 4
 1 2 0 −1 −1 2
 1 −2 −1 2
4 −3
 5
3
 45
 5
4 2
2
5  
  =  
3
− 5  1 
1 
3
 45
 5
4  −1
1
5   =  
 
 
− 53   2
2
−
 
3
 45
 5
4 5
3
5  
  =  
− 53  0
4
4
5
.
− 53 
5. v = ( −b, a)
w = ( a, b)
ax + by = 0
y
 1 0
A = 

0 −1
−b a
P −1 = 

 a b
P =
x
1  −b + a


a 2 + b 2 + a +b
b 2 − a 2
−b a  1 0
−b − a −b a
1
1
= 2
A′ = P −1 AP = 

P = 

 2
2
2
a + b  −2ab
 a b 0 −1
 a −b  a b a + b
A′ =
6. 3x + 4 y = 0
−2ab 

a − b 2 
2
1  7 −24


32 + 42 −24 −7
−3
1  7 −24 3
1  −75

  =

 =  
25 −24 −7 4
25 −100
−4
−4
1  7 −24 −4
1 −100

  =

 =  
25 −24 −7  3
25  75
 3
 24 
− 5 
1  7 −24 0
1 −24 ⋅ 5


=
=

 


25 −24 −7 5
25  −7 ⋅ 5
 7
−
 5 
2 Reflections in the Plane-II
1. v = (0, 1)
0 0


0 1
2. v = (1, 0)
1 0


0 0
3. v = (2, 1)
B = {v, w}
y
w = (−1, 2)
projv v = v 

projv w = 0
P
−1
2
 1 0
A = 

0 0
w
−2
x − 2y = 0
v
1
2
 54
2 −1  1 0
2 0  2 1 1
= 

P = 

 5 = 2
 1 2 0 0
 1 0 −1 2
 5
2
5

1

5
2 1
2 −1
1
= 
, P = 5 

 1 2
−1 2
−1
x
−2
A′ = P −1 AP = matrix of L relative to standard basis.
 54
2
 5
2 2
2  54
5  
=



 ,  2
1 1
 
1   5
5
 54
2
 5
2 5
4
5  
=  

1 0
2
5
 
2  −1
0
5  
=  

1  2
 
0
5
y
5
4
3
2
1
1
2
3
4
5
x
4. v = ( −b, a)
 1 0
A = 

0 0
w = ( a, b )
−b a
P −1 = 

 a b
A′ = P −1 AP =
5. projv u =
P =
−b a
1


a 2 + b 2  a b
 b 2 − ab
1

2
a + b − ab
a 2 
2
1
(u + L(u))
2

L(u ) = 2projv u − u
1  b 2 − ab 1 0
= 2 2

 −

a + b 2 − ab
a 2  0 1
1   2b 2 −2ab a 2 + b 2
= 2

 − 
a + b 2  −2ab
2a 2   0
1 b 2 − a 2

a 2 + b 2  −2ab
y
L(u)
L = 2 projv − I
=
u


2
2
a + b  
0
x
−2ab 

a − b 2 
2
C H A P T E R 7
Eigenvalues and Eigenvectors
Section 7.1 Eigenvalues and Eigenvectors 245
Section 7.2 Diagonalization 252
Section 7.3 Symmetric Matrices and Orthogonal Diagonalization 256
Section 7.4 Applications of Eigenvalues and Eigenvectors 261
Review Exercises 268
Project Solutions 277
C H A P T E R 7
Eigenvalues and Eigenvectors
Section 7.1 Eigenvalues and Eigenvectors
4 −5 1
−1
1
2. Ax1 = 
   =   = −1  = λ1x1
2 −3 1
−1
1
4 −5 5
10
5
Ax 2 = 
   =   = 2   = λ2 x 2
2
−
3
2
4

 
 
2
−2 2 −3  1
 5
 1

 
 
 
4. Ax1 =  2
1 −6  2 = 10 = 5 2 = λ1x1
 −1 −2 0 −1
−5
−1

 
 
 
−2 2 −3 −2
 6
−2

 
 
 
Ax 2 =  2
1 −6  1 = −3 = −3 1 = λ2 x 2
 −1 −2 0  0
 0
 0

 
 
 
4 −1 3  1
4
 1

 
 
 
6. Ax1 = 0 2 1 0 = 0 = 4 0 = λ1x1
0 0 3 0
0
0

 
 
 
4 −1 3  1
2
 1

 
 
 
Ax 2 = 0 2 1 2 = 4 = 2 2 = λ2 x 2
0 0 3 0
0
0

 
 
 
−2
4 −1 3 −2
−6
 

 
 
Ax3 = 0 2 1  1 =  3 = 3 1 = λ3x3
 1
0 0 3  1
 3
 

 
 
−2 2 −3 3
−9
3

 
 
 
Ax3 =  2
1 −6 0 =  0 = −30 = λ3x3
 −1 −2 0  1
−3
 1

 
 
 
2 − 3  c
− 2
 5c
 c

 


 
8. (a) A(cx1 ) =  2
1 − 6  2c =  10c = 5  2c = 5(cx1 )
 −1 − 2
− 5c
− c
0 − c



 
2 − 3 − 2c
− 2
 6c 
− 2c







(b) A(cx 2 ) =  2
1 − 6  c = − 3c = − 3 c = − 3(cx 2 )
 −1 − 2
 0
 0
0  0





2 − 3 3c
− 2
− 9c
3c

 


 
(c) A(cx3 ) =  2
1 − 6  0 =  0 = − 3 0 = − 3(cx3 )
 −1 − 2
− 3c
 c
0  c



 
−3 10 4
28
4
10. (a) Because Ax = 
   =   = 7 
 5 2 4
28
4
x is an eigenvector of A (with corresponding eigenvalue 7).
−3 10 −8
 64
−8
(b) Because Ax = 
  = 
 = −8 
 5 2  4
−32
 4
x is an eigenvector of A (with corresponding eigenvalue −8 ).
−3 10 −4
 92
−4
(c) Because Ax = 
  =   ≠ λ 
5
2
8
−
4

 
 
 8
x is not an eigenvector of A.
−3 10  5
−45
 5
(d) Because Ax = 
  = 
 ≠ λ 
5
2
−
3
19

 


−3
x is not an eigenvector of A.
 1 0 5  1
 1
 1

 
 
 
12. (a) Because Ax = 0 −2 4  1 = −2 ≠ λ  1 , x is not an eigenvector of A.
 1 −2 9 0
 −1
0

 
 
 
 1 0 5 −5
0
−5

 
 
 
(b) Because Ax = 0 −2 4  2 = 0 = 0  2 , x is an eigenvector (with corresponding eigenvalue 0).
 1 −2 9  1
0
 1

 
 
 
(c) The zero vector is never an eigenvector.
12 + 2 6 
2 6 − 3
 1 0 5  2 6 − 3 







(d) Because Ax = 0 −2 4 −2 6 + 6 =  4 6  = 4 + 2 6 −2 6 + 6 ,





 1 −2 9 
3
3

 


6 6 + 12

(
)
)
x is an eigenvector of A ( with corresponding eigenvalue 4 + 2 6 .
14. Geometrically, multiplying a vector in R 2 by A corresponds to a horizontal shear.
 1 k   x
 x + ky 

  = 

0 1  y 
 y 
The only vectors mapped onto scalar multiples of themselves are those lying on the x-axis.
 1 k   x
 x
 x

   =   = 1 
0 1 0
0
0
So, the only eigenvalue is 1, and the corresponding eigenspace is the x-axis.
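The claim that a horizontal shear has 1 as its only eigenvalue, with the x-axis as its eigenspace, is easy to confirm numerically. The sketch below is an added check with k = 3 chosen arbitrarily.

import numpy as np

k = 3.0
A = np.array([[1.0, k], [0.0, 1.0]])   # horizontal shear
vals, vecs = np.linalg.eig(A)
print(vals)         # [1. 1.]  (1 is the only eigenvalue)
print(vecs[:, 0])   # a multiple of (1, 0), a vector on the x-axis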
16. (a) The characteristic equation is
λI − A =
λ −1
4
2
λ −8
= λ 2 − 9λ = λ (λ − 9) = 0.
(b) The eigenvalues are λ 1 = 0 and λ 2 = 9.
4   x1 
λ 1 − 1
0
For λ1 = 0, 
  =  
2
−
8
λ
x
1
0

  2

 1 −4  x1 
0

   =  .
0
0
x

  2
0
The solution is {( 4t , t ) : t ∈ R}. So, an eigenvector corresponding to λ 1 = 0 is ( 4, 1).
4   x1 
λ 2 − 1
0
For λ2 = 9, 
  =   
λ 2 − 8  x2  0
 2
2 1  x1 
0

   =  .
0 0  x2 
0
The solution is {( −t , 2t ) : t ∈ R}. So, an eigenvector corresponding to λ 2 = 9 is ( −1, 2).
18. (a) The characteristic equation is
λI − A =
λ +2
−4
−1
λ −1
= (λ + 2)(λ − 1) − 4 = (λ + 3)(λ − 2) = 0.
(b) The eigenvalues are λ 1 = − 3 and λ 2 = 2.
− 4   x1 
λ 1 + 2
0
For λ1 = − 3, 
  =   
−
λ
−
1
1
x
2
0

  2
 1 4  x1 
0

   =  .
0
0
x

  2
0
The solution is {( − 4t , t ) : t ∈ R}. So, an eigenvector corresponding to λ 1 = − 3 is ( − 4, 1).
− 4   x1 
λ 2 + 2
0
For λ 2 = 2, 
  =   
−
λ
−
1
1
x
2
0

  2
 1 −1  x1 
0

   =  .
0
0
x

  2
0
The solution is {(t , t ) : t ∈ R}. So, an eigenvector corresponding to λ 2 = 2 is (1, 1).
20. (a) The characteristic equation is
λI − A =
λ − 14 − 14
− 12
λ
(
)(
)
= λ 2 − 14 λ − 18 = λ − 12 λ + 14 = 0.
(b) The eigenvalues are λ 1 = 12 and λ 2 = − 14 .
λ 1 − 14
For λ1 = 12 , 
1
 − 2
− 14   x1 
0
  =  
λ 1   x2 
0

 1 −1  x1 
0

   =  .
0 0  x2 
0
The solution is {(t , t ) : t ∈ R}. So, an eigenvector corresponding to λ 1 = 12 is (1, 1).
λ 2 − 14
For λ2 = − 14 , 
1
 − 2
− 14   x1 
0
  =  
λ 2   x2 
0

 1 12   x1 
0

   =  .
0 0  x2 
0
The solution is {(t , − 2t ) : t ∈ R}. So, an eigenvector corresponding to λ 2 = − 14 is (1, − 2).
λ − 3 −2 −1
22. (a) The characteristic equation is λ I − A =
0
λ
0
−2
−2 = (λ − 3)(λ 2 − 4) = 0.
λ
(b) The eigenvalues are λ1 = −2, λ2 = 2, and λ3 = 3.
λ 1 − 3 −2 −1  x1 
0
−5 −2 −1  x1 
0

 
 

 
 
For λ1 = −2,  0
λ 1 −2  x2  = 0   0 −2 −2  x2  = 0.
 0
0
 0 −2 −2  x3 
0
−2 λ 1   x3 
 

 
 

The solution is {(t , − 5t , 5t ) : t ∈ R}. So, an eigenvector corresponding to λ 1 = −2 is (1, − 5, 5).
λ 2 − 3 −2 −1  x1 
0
−1 −2 −1  x1 
0

 
 

 
 
λ 2 −2  x2  = 0   0 2 −2  x2  = 0.
For λ2 = 2,  0
 0
0
 0 −2 2  x3 
0
−2 λ 2   x3 
 

 
 

The solution is {( −3t , t , t ) : t ∈ R}. So, an eigenvector corresponding to λ 2 = 2 is ( −3, 1, 1).
λ 3 − 3 −2 −1  x1 
0
0 −2 −1  x1 
0

 
 

 
 
λ 3 −2  x2  = 0  0 3 −2  x2  = 0.
For λ3 = 3,  0
 0
0
0 −2
0
−2 λ 3   x3 
3  x3 
 

 

The solution is {(t , 0, 0) : t ∈ R}. So, an eigenvector corresponding to λ 3 = 3 is (1, 0, 0).
24. (a) The characteristic equation is λ I − A =
λ −3
−2
3
3
λ +4
−9
1
2
λ −5
= λ 3 − 4λ 2 + 4λ = λ (λ − 2) = 0.
2
(b) The eigenvalues are λ 1 = 0, λ 2 = 2 (repeated).
λ 1 − 3
−2
3   x1 
1  x1 
0
1 0
0

 
 

 
 
For λ 1 = 0,  3
λ1 + 4
−9   x2  = 0  0 1 −3  x2  = 0.
 1
0 0 0  x3 
0
2
λ 1 − 5  x3  0

 
 

The solution is {( −t , 3t , t ) : t ∈ R}. So, an eigenvector corresponding to λ1 = 0 is ( −1, 3, 1).
λ 2 − 3
−2
3   x1 
0
 1 2 −3  x1 
0

 
 

 
 
λ2 + 4
−9   x2  = 0  0 0 0  x2  = 0.
For λ 2 = 2,  3
 1
0 0 0  x3 
0
2
λ 2 − 5  x3  0

 
 

The solution is {( −2s + 3t , s, t ) : s, t ∈ R}. So, two independent eigenvectors corresponding
to λ 2 = 2 are ( −2, 1, 0) and (3, 0, 1).
λ −1
26. (a) The characteristic equation is λ I − A =
2
− 32
3
2
λ − 13
2
9
2
− 52
10
λ −8
(
)(
= λ − 29
λ − 12
2
) = 0.
2
, λ 2 = 12 (repeated).
(b) The eigenvalues are λ 1 = 29
2
3
λ 1 − 1
− 52   x1 
0
3 0 −1  x1 
0
2

 
 

 
 
29
13
λ1 − 2
10   x2  = 0  0 3 4  x2  = 0.
For λ 1 = 2 ,  2
 −3
9
0 0 0  x3 
0
λ 1 − 8  x3  0

 
 
2
 2
The solution is {(t , − 4t , 3t ) : t ∈ R}. So, an eigenvector corresponding to λ1 = 29
is (1, − 4, 3).
2
For λ 2 =
3
λ 2 − 1
− 52   x1 
0
2

 
 
13
2
λ2 − 2
10   x2  = 0 
 −3
9
λ 2 − 8  x3  0
2
 2
1,
2 
 1 −3 5  x1 
0

 
 
0 0 0  x2  = 0.
0 0 0  x3 
0

 
 
The solution is {(3s − 5t , s, t ) : s, t ∈ R}. So, two eigenvectors corresponding to λ 2 = 12 are (3, 1, 0) and ( −5, 0, 1).
28. (a) The characteristic equation is λ I − A =
λ −5
0
−1
λ −4
0
0
0
0
λ −1
−3
0
0
0
λ −4
0
0
= (λ − 5)(λ − 4) (λ − 1) = 0.
2
(b) The eigenvalues are λ 1 = 5, λ 2 = 4, λ 3 = 1, and λ4 = 4.
0
0
0   x1 
λ1 − 5
0
 1 −1

 
 

−1
λ1 − 4
0
0   x2 
0
0 0


=
 
For λ 1 = 5, 
0
0 0
− 3   x3 
0
0
λ1 − 1

 
 

 0
0
0
λ1 − 4  x4  0
0 0
0 0
 x1 
0

 
 
1 0
x2 
0

=
=  .
 x3 
0
0 1

 
 
0 0
 x4 
0
The solution is {(t , t , 0, 0) : t ∈ R}. So, an eigenvector corresponding to λ1 = 5 is (1, 1, 0, 0).
0
0
0   x1 
λ 2 − 5
0
1

 
 

1
4
0
0
−
λ
−
x
0
2
  2  =    0
For λ 2 = 4, 
0
0
0
0
λ2 − 1
− 3   x3 

 
 

 0
0
0
λ 2 − 4  x4  0
0
0
 x1 
0

 
 
x
0 1 −1
0
2
=   =  .




0 0 0
x
0

 3
 
0 0 0
 x4 
0
0 0
The solution is {(0, s, t, t) : s, t not both zero}. So, an eigenvector corresponding to λ2 = 4 is (0, 1, 1, 1).
0
0
0   x1 
λ 3 − 5
0
1

 
 

1
λ
4
0
0
−
−
3
  x2  = 0  0
For λ 3 = 1, 
0
0
− 3   x3 
0
0
λ3 − 1

 
 

0
0
λ 2 − 4  x4  0
0
 0
0 0 0
 x1 
0

 
 
1 0 0
x2
0
=   =  .
 x3 
0
0 0 1

 
 
0 0 0
 x4 
0
The solution is {(0, 0, t , 0) : t ∈ R}. So, an eigenvector corresponding to λ3 = 1 is (0, 0, 1, 0).
30. Using a graphing utility: λ = −7, 3
32. Using a graphing utility: λ =
2
2
,−
2
2
34. Using a graphing utility: λ = 0, 1, 2
36. Using a graphing utility: λ =
1 7 ± 105
,
5
4
38. Using a graphing utility: λ = 0, 0, 3, 5
40. Using a graphing utility: λ = 0, 1, 1, 4
42. The eigenvalues are the entries on the main diagonal,
− 5, 7, and 3.
44. The eigenvalues are the entries on the main diagonal,
1 , 5 , 0,
and 34 .
2 4
46. (a) The characteristic equation is
λI − A =
λ +8
−16
−1
λ +2
= (λ + 8)(λ + 2) − 16 = λ (λ + 10) = 0
The eigenvalues are λ1 = 0 and λ2 = −10.
 8 −16  x1 
0
 1 − 2  x1 
0
(b) For λ1 = 0, 
  =    
   =  .
−
1
2
0
0
0
x
x

  2
 

  2
0
The solution is {( 2t , t ) : t ∈ R}. So, a basis for the eigenspace is B1 = {( 2, 1)}.
− 2 −16  x1 
0
 1 8  x1 
0
For λ 2 = −10, 
  =    
   =  .
−
1
−
8
0
0
0
x
x

  2
 

  2
0
The solution is {( − 8t , t ) : t ∈ R}. So, a basis for the eigenspace is B2 = {( − 8, 1)}.
0
0
(c) A′ = 

0 −10
48. (a) The characteristic equation is λ I − A =
λ −3
−1
−4
−2
λ −4
0
−5
−5
λ −6
= λ 3 − 13λ 2 + 32λ − 20
= (λ − 1)(λ − 2)(λ − 10).
The eigenvalues are λ1 = 1, λ2 = 2, and λ3 = 10.
3  x1 
− 2 −1 − 4  x1 
0
1 0
0

 
 

 
 
(b) For λ1 = 1, − 2 − 3
0  x2  = 0  0 1 − 2  x2  = 0.
 − 5 − 5 −5  x3 
0
0 0
0
0  x3 

 
 

 
The solution is {( − 3t , 2t , t ) : t ∈ R}. So, a basis for the eigenspace is B1 = {( − 3, 2, 1)}.
 −1 −1 − 4  x1 
0
 1 1 0  x1 
0

 
 

 
 
For λ2 = 2, − 2 − 2
0  x2  = 0  0 0 1  x2  = 0.
 − 5 − 5 −4  x3 
0
0 0 0  x3 
0

 
 

 
 
The solution is {(t , − t , 0) : t ∈ R}. So, a basis for the eigenspace is B2 = {(1, −1, 0)}.
 7 −1 − 4  x1 
0
 1 − 3 0  x1 
0

 
 

 
 
For λ3 = 10, − 2
6
0  x2  = 0  0
5 −1  x2  = 0.
− 5 − 5
0
0
0
4  x3 
0 0  x3 

 

 
The solution is {(3t , t , 5t ) : t ∈ R}. So, a basis for the eigenspace is B3 = {(3, 1, 5)}.
 1 0 0


′
(c) A = 0 2 0
0 0 10


50. The characteristic equation is
λI − A =
λ −6
1
−1
λ −5
= λ 2 − 11λ + 31 = 0.
Because
2
6 −1
6 −1
 1 0
35 −11 66 −11 31 0
0 0
A2 − 11A + 31I = 
 − 11
 + 31
 = 
 − 
 + 
 = 
,
 1 5
 1 5
0 1
11 24  11 55  0 31
0 0
the theorem holds for this matrix.
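Here is a short added NumPy check of the Cayley-Hamilton computation in Exercise 50 for A = [6 −1; 1 5].

import numpy as np

A = np.array([[6, -1], [1, 5]])
# characteristic polynomial: lambda^2 - 11*lambda + 31
print(A @ A - 11 * A + 31 * np.eye(2))   # the zero matrix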
52. The characteristic equation is
λ +3
−1
0
1
λ −3
−2
0
−4
λ −3
λI − A =
= λ 3 − 3λ 2 − 16λ = 0.
Because
3
2
−3 1 0
−3 1 0
−3 1 0






A − 3 A − 16 A =  −1 3 2 − 3 −1 3 2 − 16  −1 3 2
 0 4 3
 0 4 3
 0 4 3






3
2
−24 16 6
 8 0 2
−3 1 0
0 0 0








= −16 96 68 − 3 0 16 12 − 16  −1 3 2 = 0 0 0,
−12 136 99
−4 24 17
 0 4 0
0 0 0








the theorem holds for this matrix.
54. For the n × n matrix A = aij , the sum of the diagonal
n
entries, or the trace, of A is given by
Exercise 24: λ1 = 0, λ2 = 2, λ3 = 2
3
aii .
(a)
i =1
i =1
(a)
2
λi = 9 =
i =1
(b)
aii
1 −4
−2
Exercise 18: λ1 = − 3, and λ 2 = 2
2
i =1
(b)
−2 4
1 1
Exercise 20: λ1 =
2
(a)
λi =
i =1
1
(b)
A = 14
2
1
4
=
= − 6 = ( − 3)( 2) = λ1 ⋅ λ 2
1, λ
2 2
= − 14
(a)
A = −2
1
4
0
4
(a)
λi = 14 =
i =1
aii
λi = 3 =
aii
i =1
5
2
−10 = 29
= 29
⋅ 12 ⋅ 12 = λ1 ⋅ λ2 ⋅ λ3
8
2
8
4
aii
i =1
5 0 0 0
( )
= − 18 = 12 − 14 = λ1 ⋅ λ2
3
13
2
− 92
3
Exercise 28: λ1 = 5, λ 2 = 4, λ3 = 1, λ4 = 4
i =1
i =1
(b)
A =
4 4 0 0
0 0 1 1
0 0 0 4
= 80 = 5 ⋅ 4 ⋅ 1 ⋅ 4 = λ1 ⋅ λ 2 ⋅ λ 3 ⋅ λ4
aii
i =1
3 2
(b)
λi = 31
=
2
3
2
2
Exercise 22: λ1 = −2, λ2 = 2, λ3 = 3
3
i =1
(b)
i =1
A =
3
(a)
1 − 32
aij
5
, λ2 = 12 , λ3 = 12
Exercise 26: λ1 = 29
2
2
λi = − 2 =
9 = 0 = 0 ⋅ 2 ⋅ 2 = λ1 ⋅ λ2 ⋅ λ3
−1 −2
= 0 = 0 ⋅ 9 = λ1 ⋅ λ2
8
2 −3
A = − 3 −4
i =1
A =
(a)
3
(b)
aii
i =1
Exercise 16: λ1 = 0, λ2 = 9
2
3
λi = 4 =
1
A = 0 0 2 = −12 = −2 ⋅ 2 ⋅ 3 = λ1 ⋅ λ2 ⋅ λ3
0 2 0
56. λ = 0 is an eigenvalue of
A ⇔ 0I − A = 0 ⇔ A = 0.
58. Observe that λ I − AT = (λ I − A)
T
u12
u1
(u1, u2 )
+ u22
the standard matrix A of T is A =
= λI − A .
Because the characteristic equations of A and AT are the
same, A and AT must have the same eigenvalues.
However, the eigenspaces are not the same.
60. Let u = (u1, u2 ) be the fixed vector in R 2 , and v = (v1, v2 ). Then proju v =
Because T (1, 0) =
T (0, 1) =
and
u12
u1v1 + u2v2
(u1 , u2 ).
u12 + u22
u2
(u1, u2 ),
+ u22
 u12 u1u2 
1

.
u12 + u22 u1u2
u22 
Now,
 u1 (u12 + u22 )
2
2 u
 u12 u1u2   u1 
 u13 + u1u22 
1
1
1

 = u1 + u2  1  = 1u
=
=
Au = 2






u1 + u22 u1u2
u12 + u22 u12u2 + u23 
u12 + u22 u2 (u12 + u22 )
u12 + u22 u2 
u22  u2 


and
 u12
 u2 
1
A  = 2

u1 + u22 u1u2
−u1 
u12u2 − u12u2 
u1u2   u2 
0
 u2 
1
1
=
= 2
= 0  .




2
2
2 
2
2
2
+
+
u
u
u
u
u2  −u1 
 u1u2 − u1u2 
1
2 
1
2 0
−u1 
So, λ1 = 1 and λ2 = 0 are the eigenvalues of A.
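A numeric instance of Exercise 60 (added; the vector u = (3, 4) is an arbitrary choice) illustrates that a projection matrix has exactly the eigenvalues 1 and 0.

import numpy as np

u = np.array([3.0, 4.0])                 # fixed vector (illustrative)
A = np.outer(u, u) / (u @ u)             # standard matrix of proj_u
print(A @ u)                             # equals u, so lambda = 1
print(A @ np.array([-u[1], u[0]]))       # the zero vector, so lambda = 0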
62. Let A2 = O and consider Ax = λ x. Then O = A2 x = A(λ x) = λ Ax = λ 2 x which implies λ = 0.
64. (a) − 2, 1, 3 (repeated)
(b) There are four roots of the characteristic equation, so A has order 4.
(c) When λ = − 2, 1, or 3, λ I − A is singular.
(d) No. Zero is not an eigenvalue of A, so A is nonsingular.
66. The characteristic equation of A is λ I − A =
λ −1
1
λ
= λ 2 + 1 = 0 which has no real solution.
68. (a) True. Ax = λ x and λ x is parallel to x for any real number λ . See discussion on page 348.
(b) False. The set of eigenvectors corresponding to λ together with the zero vector (which is never an eigenvector for any eigenvalue) forms a subspace of Rⁿ. (Theorem 7.1 on page 350).
70. Substituting the value λ = 3 yields the system
−1
0   x1 
λ − 3
0
0 1 0  x1 
0

 
 

 
 
λ −3
0   x2  = 0  0 0 0  x2  = 0.
 0
 0
0
0 0 0  x3 
0
λ − 3  x3 
0

 

 
 
So, 3 has two linearly independent eigenvectors and the dimension of the eigenspace is 2.
72. Substituting the value λ = 3 yields the system
−1
−1   x1 
λ − 3
0
0 1 0  x1 
0

 
 

 
 
−1   x2  = 0  0 0 1  x2  = 0.
λ −3
 0
 0
0
0 0 0  x3 
0
0
λ − 3  x3 

 

 
 
So, 3 has one linearly independent eigenvector, and the dimension of the eigenspace is 1.
74. Because T (e−2 x ) =
d −2 x
e  = −2e−2 x , the eigenvalue corresponding to f ( x) = e−2 x is −2.
dx 
76. The standard matrix for T is
2 1 −1


A = 0 −1 2.
0 0 −1


The characteristic equation of A is
λI − A =
λ −2
−1
1
0
λ +1
−2
0
0
λ +1
= (λ − 2)(λ + 1) .
2
The eigenvalues are λ1 = 2 and λ2 = −1 (repeated). The corresponding eigenvectors are found by solving
−1
1  a0 
λi − 2
0

 
 
λi + 1 −2   a1  = 0
 0
 0
λi + 1 a2  0
0

for each λi . So, p1( x) = 1 corresponds to λ1 = 2, and p2 ( x) = 1 − 3x corresponds to λ2 = −1.
78. The characteristic equation of A is
λ − cos θ
sin θ
−sin θ
λ − cos θ
= λ 2 − 2 cos θ λ + (cos 2 θ + sin 2 θ ) = λ 2 − 2 cos θ λ + 1.
There are real eigenvalues if the discriminant of this quadratic equation in λ is nonnegative:
b 2 − 4ac = 4 cos 2 θ − 4 = 4(cos 2 θ − 1) ≥ 0 
cos 2 θ = 1  θ = 0, π .
The only rotations that send vectors to multiples of themselves are the identity (θ = 0) and the 180° –rotation (θ = π ).
80. 0 is the only eigenvalue of a nilpotent matrix. For if Ax = λx, then A2 x = Aλ x = λ 2 x.
So,
Ak x = λ k x = 0

λ k = 0  λ = 0.
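As a concrete instance of Exercise 80 (added; the matrix below is an arbitrary nilpotent example, not taken from the text), a nonzero matrix with A² = O has 0 as its only eigenvalue.

import numpy as np

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # A != O but A @ A = O, so A is nilpotent
print(A @ A)                             # the zero matrix
print(np.linalg.eigvals(A))              # [0. 0.], zero is the only eigenvalue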
Section 7.2 Diagonalization
 1
2. (a) P −1 AP =  12
− 2
− 12   1 3 3 1
2 0
= 


3 −1 5 1 1



0 4
2 
(b) λ1 = 2, λ 2 = 4
0.25 0.25 0.25  0.80
 0.25


−
0.25
−
0.25 0.25 0.25  0.10
6. (a) P −1 AP = 
0
0
0.5 − 0.5 0.05


 0.5
− 0.5
0
0  0.05
− 2
4. (a) P −1 AP =  31
 3
5 4
3 

1
− 3  2
− 5 1 5
−1 0

 = 

− 3 1 2
 0 2
(b) λ1 = −1, λ 2 = 2
0.10 0.05 0.05 1 −1 0 1 


0.80 0.05 0.05 1 −1 0 −1
0.05 0.80 0.10 1 1 1 0 


0.05 0.10 0.80 1 1 −1 0 
= 1 0
0
0


0
0 0.8 0
0 0 0.7 0 


0 0.7
0 0
(b) λ1 = −1, λ 2 = 0.8, λ 3 = 0.7, λ 4 = 0.7
8. The eigenvalues of A are λ1 = 1/2, λ2 = −1/4 (see Exercise 20, Section 7.1). The corresponding eigenvectors (1, 1) and (1, −2) are used to form the columns of P. So,
P = [1 1; 1 −2], P⁻¹ = [2/3 1/3; 1/3 −1/3],
and
P⁻¹AP = [2/3 1/3; 1/3 −1/3] [1/4 1/4; 1/2 0] [1 1; 1 −2] = [1/2 0; 0 −1/4].
10. The eigenvalues of A are λ1 = −2, λ2 = 2, λ3 = 3. From Exercise 22, Section 7.1, the corresponding eigenvectors
(1, − 5, 5), (−3, 1, 1) and (1, 0, 0) are used to form the columns of P. So,
 1 −3 1
0 −0.1 0.1




−1
P = −5
1 0  P = 0 0.5 0.5
 5
 1 1.6 1.4
1 0



and
0 −0.1 0.1 3 2 1  1 −3 1
−2 0 0






P −1 AP = 0 0.5 0.5 0 0 2 −5
1 0 =  0 2 0.
 1 1.6 1.4 0 2 0  5
 0 0 3
1 0





12. The eigenvalues of A are λ1 = 0 and λ2 = 2 (repeated). From Exercise 24, Section 7.1, the corresponding eigenvectors
(−1, 3, 1), (3, 0, 1) and (−2, 1, 0) are used to form the columns of P. So,
 1
−1 3 −2
 2


−1
P =  3 0
1  P = − 12
− 3
 1 1 0


 2
 1
 2
P AP = − 12
− 3
 2
−1
1 − 23 

5 , and
−1
2
9
−2
2
1 − 23   3 2 −3 −1 3 −2
0 0 0





5
−1
−3 −4 9  3 0
1 = 0 2 0.
2 
9   −1 −2
0 0 2
−2
5  1 1 0


2 
14. The eigenvalues of A are λ1 = 2 and λ 2 = 4 (repeated).
Furthermore, there are just two linearly independent
eigenvectors of A, x1 = (0, 0, 1) and x2 = (1, − 2, 4).
So, A is not diagonalizable.
16. The matrix A has only one eigenvalue, λ = 0, and a
basis for the eigenspace is {(1, − 2)}, So, A does not
satisfy Theorem 7.5 and is not diagonalizable.
18. A is triangular, so the eigenvalues are simply the entries
on the main diagonal. So, the only eigenvalue is λ = 1,
20. For eigenvalue λ1 = 3, the corresponding eigenvector
is [1, 0, 0] . For the repeated eigenvalue λ 2 = − 2, the
T
corresponding eigenvector is [2, − 5, 0] . So, A does not
T
satisfy Theorem 7.5 (it does not have three linearly
independent eigenvectors) and is not diagonalizable.
22. From Exercise 40, Section 7.1, you know that A has only
three linearly independent eigenvectors. So, A does not
satisfy Theorem 7.5 and is not diagonalizable.
and a basis for the eigenspace is {(0, 1)}.
24. The eigenvalue of A is λ = 2 (repeated). Because A
does not have two distinct eigenvalues, Theorem 7.6
does not guarantee that A is diagonalizable.
Because A does not have two linearly independent
eigenvectors, it does not satisfy Theorem 7.5 and it is not
diagonalizable.
26. The eigenvalues of A are λ1 = 4, λ2 = 1, λ3 = −2.
Because A has three distinct eigenvalues, it is
diagonalizable by Theorem 7.6.
28. The standard matrix for T is
30. The standard matrix for T is
−2 2 −3


A =  2
1 −6
 −1 −2 0


3 0 1


A = 0 2 3
0 0 1


which has eigenvalues λ1 = 5, λ2 = −3 (repeated), and
which has eigenvalues λ1 = 3, λ2 = 2, and λ3 = 1, and
corresponding eigenvectors (1, 2, −1), (3, 0, 1) and
corresponding eigenvectors (1, 0, 0), (0, 1, 0), and
(−1, − 6, 2). Let B = {(1, x, −1 − 6 x + 2 x2 )} and the
(−2, 1, 0). Let B = {(1, 2, −1), (3, 0, 1), (−2, 1, 0)} and the
matrix of T relative to this basis is
matrix of T relative to this basis is
5 0 0


′
A = 0 −3 0.
0 0 −3


3 0 0


A′ = 0 2 0.
0 0 1


32. Let P be the matrix of eigenvectors corresponding to the n distinct eigenvalues λ1 ,  , λn . Then P −1 AP = D is a diagonal
matrix  A = PDP −1. From Exercise 31, Ak = PD k P −1 , which show that the eigenvalues of Ak are λ1k , λ2k ,  , λnk .
34. The eigenvalues and corresponding eigenvectors of A are λ1 = 3, λ2 = −2 and x1 = (3, 2) and x2 = ( −1, 1). Construct a
nonsingular matrix P from the eigenvectors of A,
 3 −1
P = 

2 1
and find a diagonal matrix B similar to A.
 1
B = P −1 AP =  25
− 5
1 1
5 

3 
5
 2
3  3 −1
3 0

 = 

0 2
1
0 −2
Then,
7
3 −1 3

A7 = PB 7 P −1 = 

2 1  0
0  1
 5
7
(−2)  − 52
1
1261
5
= 
3

 926
5
1389
.
798
36. The eigenvalues and corresponding eigenvectors of A are λ1 = 5, λ 2 = − 4, and λ3 = 0, x1 = ( − 5, 1, 9), x2 = ( −1, 2, 0),
and x3 = (5, − 2, 2). Construct a nonsingular matrix P from the eigenvectors of A.
5
− 5 −1


P =  1 2 − 2
 9 0
2

and find a diagonal matrix B similar to A.
− 2
 45
B = P −1 AP =  92

 15
1
− 45
11
18
1
10
4
45  2

1  − 2
18 

1 − 2

10 
3 − 2 − 5 −1
5
0 0
5




−5
0  1 2 − 2 = 0 − 4 0
0
−1
4  9 0
2
0 0

Then,
A = PB P
8
8
−1
0
0
3353 −177,252
390,625
 72,242

 −1


65,536 0 P =  11,766
71,419
42,004.
= P 0
 0
−156,250 − 78,125
0
0
312,500


38. (a) True. See Theorem 7.5 on page 360.
(b) False. The matrix [2 0; 0 2] is diagonalizable (it is already diagonal) but it has only one eigenvalue λ = 2 (repeated).
1
1


0
1 + 1 + 2! + 3! + 

e 0
I
I
 = 
= I + I +
+
+ = 

2! 3!
1
1


0 e
+
+
+
+

0
1
1
2! 3!


 1 0
40. (a) X = 

0 1

e
1 0
(b) X = 

1 0

0
1 0
 e
1 1 0 1 1 0
eX = I + 
 + 
 + 
 + = 

1 0 2!1 0 3!1 0
e − 1 1
0 1
(c) X = 

 1 0

0 1
1  1 0 1 0 1
1  1 0
eX = I + 
 + 
 + 
 + 
 +
2!
3!
4!
1
0
0
1
1
0






0 1
Because e = 1 + 1 +
2 0
(d) X = 

0 −2

X
1 e + e −1 e − e−1 
1
1
1
1
1
1
and e −1 = 1 − 1 + −
+
− , you see that e X = 
+
+
.
2 3! 4!
2 3! 4!
2 e − e −1 e + e−1 
e 2
0
0
2 0
1 22 0 1 23
eX = I + 
.
 + 
 + = 
 + 
2
3
−2 
2!
3!
0
2
−
e
0
2
0
−
2
0













42. Assume that A is diagonalizable, P −1 AP = D, where D is diagonal. Then
DT = ( P −1 AP) = PT AT ( P −1 ) = PT AT ( PT )
T
T
−1
is diagonal, which shows that AT is diagonalizable.
44. Consider the characteristic equation λ I − A =
λ −a
−b
−c
λ −d
= λ 2 − ( a + d )λ + ( ad − bc ) = 0.
This equation has real and unequal roots if and only if ( a + d ) − 4( ad − bc) > 0, which is equivalent
2
to ( a − d ) > −4bc. So, A is diagonalizable if −4bc < ( a − d ) , and not diagonalizable if −4bc > ( a − d ) .
2
2
2
46. From Exercise 80, Section 7.1, you know that zero is the only eigenvalue of the nilpotent matrix A. If A were diagonalizable,
then there would exist an invertible matrix P, such that P −1 AP = D, where D is the zero matrix. So, A = PDP −1 = O,
which is impossible.
48. (a) A is diagonalizable when it is similar to a diagonal matrix D.
(b) A is diagonalizable when it has n linearly independent eigenvectors.
(c) A is diagonalizable when it has n distinct eigenvalues.
50. The only eigenvalue is λ = 0, and a basis for the eigenspace is {(0, 1)}. Since the matrix does not have two linearly
independent eigenvectors, the matrix is not diagonalizable.
Section 7.3 Symmetric Matrices and Orthogonal Diagonalization
2. Because
2 0

0 11
3 0

5 −2
5

0 −2
5 0

0
1
3
T
2 0

0 11
= 
3 0

5 −2
5

0 −2
5 0

0
1
3
the matrix is symmetric.
4. The characteristic equation of A is
λ −a
λI − A = − a
0
(
)(
)
λ −a = λ λ − a 2 λ + a 2 .
λ
0 −a
The eigenvalues are λ1 = − a 2, λ2 = 0, and λ3 = a 2. Since the eigenvalues are real, A is diagonalizable. The
(
)


P = −


1
1
2
0
−
)
2, 1 , respectively. So,
1

2  and
1
1 −1
1

4
1
P −1 AP = 
2
1

4
(
2, 1 , (1, 0, −1), and 1,
corresponding eigenvectors are 1, −
2
4
0
2
4
1 

4  0 a 0  1
1 

−  a 0 a − 2
2
0 a 0  1

1 

4
1
0
−1
− a 2 0
1 
0 



2 =  0
0
0 .


1 
0 a 2 
 0
6. The characteristic equation of A is
λI − A =
λ −a
−a
−a
−a
λ −a
−a
−a
−a
λ −a
= λ 2 (λ − 3a ).
The eigenvalues are λ1 = 0 and λ2 = 3a. Since the eigenvalues are real, A is diagonalizable. The corresponding eigenvectors
are ( −1, 1, 0) and ( −1, 0,1) for λ1 and (1, 1, 1) for λ2 . So,
− 13
−1 −1 1



−1
P =  1 0 1 and P AP = − 13
 1
 0 1 1


 3
2
3
− 13
1
3
− 13  a a a −1 −1 1
0 0 0




2 a a a  1
0 1 = 0 0 0.

3 
1  a a a  0
0 0 3a
1 1



3 
8. The characteristic equation of A is
λI − A =
λ −3
0
0
λ −3
10. The characteristic equation of A is
= (λ − 3) = 0.
2
Therefore, the eigenvalue is λ = 3. The multiplicity of
λ = 3 is 2, so the dimension of the corresponding
eigenspace is 2 (by Theorem 7.7).
λI − A =
λ −2
−1
−1
λ −2
−1
−1
−1
λ −2
−1
= (λ − 1) (λ − 4) = 0.
2
Therefore, the eigenvalues are λ1 = 1 and λ2 = 4. The
multiplicity of λ1 = 1 is 2, so the dimension of the
corresponding eigenspace is 2 (by Theorem 7.7). The
dimension of the eigenspace corresponding to λ2 = 4 is 1.
12. The characteristic equation of A is
λ
−4
−4
λ I − A = −4 λ − 2
−4
4
20. Because PPT =  9
 94

0
= (λ − 6)(λ + 6)λ = 0.
λ3 = 0. The dimension of the eigenspace corresponding
of each eigenvalue is 1.
14. The characteristic equation of A is
λI − A =
1
1
1
λ −2
1
1
1
λ −2
= λ (λ − 3) = 0.
2
Therefore, the eigenvalues are λ1 = 0 and λ2 = 3. The
dimension of the eigenspace corresponding to λ1 = 0 is
1. The multiplicity of λ2 = 3 is 2, so the dimension of
the corresponding eigenspace is 2 (by Theorem 7.7).
16. The characteristic equation of A is
λI − A =
−2
−2
λ +1
0
0
0
0
λ +1
−2
−2
λ +1
0
0
3 1

2  2
1  3

2  2
−
3

1 0
2 
= 
,
1
0 1

2
p1 ⋅ p2 = 0 and p1 = p2 = 1. So, { p1, p2} is an
orthonormal set.
T
 1 0 0  1 0 0
 1 0 0





= 0 1 0 0 1 0 = 0 1 0 ,
0 0 1 0 0 1
0 0 1





PT = P −1 and P is orthogonal. Letting
 1
0
0
 
 
 
p1 = 0 , p2 = 1 , and p3 = 0 produces
0
0
 1
 
 
 
2
The eigenvalues are λ1 = 1 and λ2 = − 3. The
p1 ⋅ p2 = p1 ⋅ p3 = p2 ⋅ p3 = 0 and
dimension of the corresponding eigenspace of each
eigenvalue is 2 (by Theorem 7.7).
orthonormal set.
18. The characteristic equation of A is
λI − A =
λ −1
1
1
0
I2,
PT = P −1 and P is orthogonal. Letting
1

 3



2
 and p2 =  2  produces
p1 = 

 1
3
−



 2 
 2
0
= (λ − 1) (λ + 3) .
2

1

2
T

PP =

3
−
 2
24. Because PP
λ +1
0
4
81 
≠
25 
81 
22. Because
Therefore, the eigenvalues are λ1 = 6, λ2 = −6 and
λ −2
4
 32
9
 =  81
3
4
9
 81
P is not orthogonal.
λ +2
0
− 94   94

3  4
−
9  9
257
p1 = p2 = p3 = 1. So, { p1, p2 , p3} is an
26. Because
0
0
0
λ −1
0
0
0
0
λ −1
0
0
0
0
0
λ −1
1
0
0
0
1
λ −1
= λ 2 (λ − 2) (λ − 1).
2
The eigenvalues are λ1 = 0, λ2 = 2, and λ3 = 1. The
dimensions of the corresponding eigenspaces are 2, 2,
and 1, respectively (by Theorem 7.7).
− 54 0 53  − 54 0 53 



PPT =  0 1 0  0 1 0 = I 3 , PT = P −1 and P
 3 0 4  3 0 4
5  5
5
 5
is orthogonal.
 3
− 54 
0
 5
 
 
Letting p1 =  0, p2 = 1, and p3 =  0 produces
4
 3
0
 5 
 
 5
p1 ⋅ p2 = p1 ⋅ p3 = p2 ⋅ p3 = 0 and
p1 = p2 = p3 = 1.
So, { p1, p2 , p3} is an orthonormal set.
−1 1
 4 −1 − 4  4
33 64 4 





28. Because PPT = −1 0 −17  −1
0 4 = 64 290 16 ≠ I 3 ,
 1 4
 4 16 18
−1 − 4 −17 −1



P is not orthogonal.
 2

 3

30. Because PPT =  0


− 2
 6
0
2 5
5
5
5
−
5 2

2  3

0  0

1   5
2   2
0
2 5
5
0
2



6 


5
 = 
−
5 

9
1 

2 

−
53
36
0
0
4
5
5 −4
36
−
2
5
9 5 − 4

36 
2 
−
 ≠ I3 ,
5 
91 

180 
P is not orthogonal.
3
3
1
 1

10 10 0 0 − 10 10   10 10 0 0 10 10 



0 1
0
0
0 1
0 
 0

T
−1
32. Because PPT = 

 = I 4 , P = P and P is orthogonal. Letting
0
1
0
0
0
1
0
0



3
 3

1
1
10 0 0
10  −
10 0 0
10 

10
10
10
  10

1

 3

10 10 
− 10 10 
0
0




 
 
0
 0 

0 , p = 1 , and p = 
=
p1 = 
p
,
4
 2

 produces
1 3
0
0
0




 
 
3

 1

0
0


10 
10 


10

 10

p1 ⋅ p2 = p1 ⋅ p3 = p1 ⋅ p4 = p2 ⋅ p3 = p2 ⋅ p4 = p3 ⋅ p4 = 0 and p1 = p2 = p3 = p4 = 1. So, { p1, p2, p3 , p4}
is an orthonormal set.
34. The characteristic polynomial of A is
λI − A =
λ +1
2
2
λ −2
= (λ + 2)(λ − 3).
The eigenvalues are λ1 = − 2 and λ2 = 3. Every
eigenvector corresponding to λ1 = − 2 is of the form
x1 = ( 2t , t ), and every eigenvector corresponding to
λ2 = 3 is of the form x2 = ( s, − 2s).
x1 ⋅ x2 = 2st − 2st = 0
So, x1 and x2 are orthogonal.
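Theorem 7.9's orthogonality of eigenvectors belonging to distinct eigenvalues can be seen numerically. The sketch below is added; the matrix A = [−1 −2; −2 2] is recovered from the characteristic polynomial in Exercise 34, so treat it as an assumption.

import numpy as np

A = np.array([[-1.0, -2.0], [-2.0, 2.0]])   # symmetric
vals, vecs = np.linalg.eigh(A)
print(vals)                                  # [-2.  3.]
print(vecs[:, 0] @ vecs[:, 1])               # 0 (up to round-off): the eigenvectors are orthogonal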
36. The matrix is diagonal, so the eigenvalues are
λ1 = 3, λ2 = − 3, and λ3 = 2. Every eigenvector
corresponding to λ1 = 3 is of the form x1 = (t , 0, 0),
every eigenvector corresponding to λ2 = − 3 is of the
form x2 = (0, s, 0), and every eigenvector
38. The characteristic polynomial of A is
λI − A =
λ −1
0
−1
0
λ −1
0
−1
0
λ +1
= λ 2 (λ − 1)
The eigenvalues are λ1 = 0 and λ2 = 1. Every
eigenvector corresponding to λ1 = 0 is of the form
x1 = (0, 0, 0) and x2 = (0, 0, 0), and every eigenvector
corresponding to λ 2 = 1 is of the form x3 = (0, t , 0).
x1 ⋅ x2 = x1 ⋅ x3 = x2 ⋅ x3 = 0
So, {x1, x2 , x3} is an orthogonal set.
40. The matrix is not symmetric, so it is not orthogonally
diagonalizable.
42. The matrix is symmetric, so it is orthogonally
diagonalizable.
corresponding to λ3 = 2 is of the form x3 = (0, 0, u).
x1 ⋅ x2 = x1 ⋅ x3 = x2 ⋅ x3 = 0
So, {x1, x2 , x3} is an orthogonal set.
44. The eigenvalues of A are λ1 = 2 and λ2 = 6, with corresponding eigenvectors (1, −1) and (1, 1), respectively. Normalize each
eigenvector to form the columns of P. Then
 2

2
P = 

2
−
 2
 2
2


2 
2
and PT AP = 
 2
2


 2
2 
−
 2
2
 4 2 
2 
2


2  2 4 
2

−
2 
 2
2

2 0
2 
= 
.
2
0 6

2 
46. The eigenvalues of A are λ1 = −1 (repeated) and λ2 = 2, with corresponding eigenvectors ( −1, 0, 1), ( −1, 1, 0) and (1, 1, 1),
respectively. Use the Gram–Schmidt orthonormalization process to orthonormalize the two eigenvectors corresponding to λ1 = −1.
(−1, 0, 1) → (−√2/2, 0, √2/2)
(−1, 1, 0) − (1/2)(−1, 0, 1) = (−1/2, 1, −1/2) → (−√6/6, √6/3, −√6/6)
Normalizing the third eigenvector corresponding to λ2 = 2, you can form the columns of P. So,
 1

 3
 1
P = 
 3
 1

 3
−
1 

6
2 
0 −

6
1
1 

2
6
1
2
and
 1

3


1
PT AP = −
2

 1

6

1
3
0
−
2
6
1 
 1
3  0 1 1  3


1 
 1
 1 0 1 
2
3
1 1 0 
 1
1 


6
 3
−
1 
6 
2 0 0
2 


0 −
 = 0 −1 0.
6
0 0 −1


1
1 

2
6
1
2
48. The eigenvalues of A are λ1 = 5, λ2 = 0, λ3 = −5, with corresponding eigenvectors (3, 5, 4), ( −4, 0, 3) and (3, − 5, 4)
respectively. Normalize each eigenvector to form the columns of P. Then
3 2

2

4
2

1
P = 10
5
3 2

0 −5 2 

6 4 2 
−8
and
P
T
3 2
5 2 4 2  0 3 0 3 2


1
0
6 3 0 4 10
−8
5 2


 4 2
3
2
5
2
4
2
0
4
0
−
 
 

1
AP = 10

3 2
5 0 0



0 −5 2  = 0 0 0.

0 0 −5
6 4 2 


−8
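These orthogonal diagonalizations can be verified numerically. A short NumPy sketch (assuming the matrix A = [0 3 0; 3 0 4; 0 4 0] recovered above) checks that P has orthonormal columns and that PᵀAP is the stated diagonal matrix.

import numpy as np

A = np.array([[0, 3, 0],
              [3, 0, 4],
              [0, 4, 0]], dtype=float)
s = np.sqrt(2)
P = np.array([[3*s, -8, 3*s],
              [5*s,  0, -5*s],
              [4*s,  6, 4*s]]) / 10

print(np.allclose(P.T @ P, np.eye(3)))   # True: the columns are orthonormal
print(np.round(P.T @ A @ P, 10))         # diag(5, 0, -5)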
50. The characteristic polynomial of A, λI − A = (λ − 8)(λ + 4)², yields the eigenvalues λ1 = 8 and λ2 = −4. λ1 has a multiplicity of 1 and λ2 has a multiplicity of 2. An eigenvector for λ1 is v1 = (1, 1, 2), which normalizes to
u1 = v1/‖v1‖ = (√6/6, √6/6, √6/3).
Two eigenvectors for λ2 are v 2 = ( −1, 1, 0) and v3 = ( −2, 0, 1). Note that v1 is orthogonal to v2 and v3 , as guaranteed by
Theorem 7.9. The eigenvectors v2 and v3 , however, are not orthogonal to each other. To find two orthonormal eigenvectors
for λ2 , use the Gram-Schmidt process as follows.
w2 = v2 = (−1, 1, 0)
w3 = v3 − (v3 ⋅ w2 / w2 ⋅ w2)w2 = (−1, −1, 1)
These vectors normalize to
u2 = w2/‖w2‖ = (−√2/2, √2/2, 0)
u3 = w3/‖w3‖ = (−√3/3, −√3/3, √3/3).
The matrix P has u1 , u 2 , and u3 as its column vectors.
 6

 6
 6
P = 
 6

 6
 3
−

3
6


3 
6


2
3
2
T
 and P AP = −
−
2
3 
 2


3
− 3
0
 3
3 
2
2
6
6
−
2
2
−
3
3
 6
6


3 
6
−2 2 4 


6

0  2 −2 4 

 6
4 4 4 
3  
 6
 3
3 
−
3

3 
8 0 0
2
3


 = 0 −4 0.
−
2
3 
0 0 −4


3 
0
3 
2
2
−
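The Gram–Schmidt step is the only part of this computation that is easy to get wrong by hand, so a small sketch may help. It assumes NumPy and simply repeats the projection formula used above before normalizing; the helper name gram_schmidt is illustrative.

import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a list of vectors with the classical Gram-Schmidt process."""
    basis = []
    for v in vectors:
        w = np.array(v, dtype=float)
        for u in basis:
            w -= (w @ u) * u          # subtract the projection onto each earlier vector
        basis.append(w / np.linalg.norm(w))
    return np.column_stack(basis)

# Eigenvectors for lambda = -4 in Exercise 50
Q = gram_schmidt([(-1, 1, 0), (-2, 0, 1)])
print(np.round(Q, 4))          # columns u2 and u3 (up to sign)
print(np.round(Q.T @ Q, 10))   # identity: the columns are orthonormal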
52. The eigenvalues of A are λ1 = 0 (repeated) and λ2 = 2 (repeated). The eigenvectors corresponding to λ1 = 0 are (1, −1, 0, 0) and (0, 0, 1, −1), while those corresponding to λ2 = 2 are (1, 1, 0, 0) and (0, 0, 1, 1). Normalizing these eigenvectors to form P, you have
P = [√2/2 0 √2/2 0; −√2/2 0 √2/2 0; 0 √2/2 0 √2/2; 0 −√2/2 0 √2/2]
and
PᵀAP = [√2/2 −√2/2 0 0; 0 0 √2/2 −√2/2; √2/2 √2/2 0 0; 0 0 √2/2 √2/2][1 1 0 0; 1 1 0 0; 0 0 1 1; 0 0 1 1][√2/2 0 √2/2 0; −√2/2 0 √2/2 0; 0 √2/2 0 √2/2; 0 −√2/2 0 √2/2] = [0 0 0 0; 0 0 0 0; 0 0 2 0; 0 0 0 2].
54. (a) False. The fact that a matrix P is invertible does not imply P⁻¹ = Pᵀ, only that P⁻¹ exists. The definition of orthogonal matrix (page 370) requires that a matrix P is invertible and P⁻¹ = Pᵀ. For example,
[1 3; 3 4]
is invertible (|A| ≠ 0) but A⁻¹ ≠ Aᵀ.
(b) True. See Theorem 7.10, page 373.
56. Suppose P⁻¹AP = D is diagonal, with λ the only eigenvalue. Then
A = PDP⁻¹ = P(λI)P⁻¹ = λI.
58. (a) Yes. A = Aᵀ
(b) and (c) Yes, by Theorem 7.7, page 368.
(d) The multiplicity of each eigenvalue is 1, so the dimensions of the corresponding eigenspaces are 1.
(e) No. The columns do not form an orthonormal set.
(f) Yes, by Theorem 7.9, page 372.
(g) Yes, by Theorem 7.10, page 373.
60. AᵀA = [1 4; −3 −6; 2 1][1 −3 2; 4 −6 1] = [17 −27 6; −27 45 −12; 6 −12 5]
AAᵀ = [1 −3 2; 4 −6 1][1 4; −3 −6; 2 1] = [14 24; 24 53]
Both products are symmetric.
Section 7.4 Applications of Eigenvalues and Eigenvectors
0
2. x 2 = Lx1 =  1
16
4 160
640
  =  
0 160
 10
0
x 3 = Lx 2 =  1
16
4 640
40
  =  
0  10
40
The eigenvalues are
1
2
and − 12 . Choosing the positive
eigenvalue, λ = 12 , you find the corresponding
eigenvector by row-reducing λ I − L = 12 I − L.
 1
 12
− 16
−4

1

2

 1 −8


0 0
So, an eigenvector is (8, 1) , and the stable age
8
distribution vector is x = t  .
1
0 2 0 8
16

 
 
4. x2 = Lx1 =  12 0 0 8 =  4 
0 1 0 8
4
 
2

 
0 2 0 16
8
1
 
 
x3 = Lx2 =  2 0 0  4  = 8
0 1 0  4 
2
 
2

 
The eigenvalues of L are 0, 1, and −1 . Choosing the
positive eigenvalue, let λ = 1. A corresponding
eigenvector is found by row-reducing 1I − L.
1
 1
− 2
0

− 2 0
1 0 − 4



1 0  0 1 − 2
0 0 0 
− 12 1


So, an eigenvector is ( 4, 2, 1) and a stable age
4
 
distribution vector is x = t 2.
 1
 
0
1
2
6. x2 = Lx1 = 0

0

0
6 4 0 0 24
240

 
0 0 0 0 24
 12 



1 0 0 0 24 =  24 
 
 
0 12 0 0 24
 12 
 
 
1
0 0 2 0 24
 12 
0
1
2
x3 = Lx2 = 0

0

0
6 4 0 0 240
168

 
0 0 0 0  12 
120



1 0 0 0 24 =  12 
 
 
0 12 0 0  12 
 12 
 
 
1
0 0 2 0  12 
 6 
The eigenvalues of L are −1, 0, and 2. Choosing the
positive eigenvalue, let λ = 2. A corresponding
eigenvector is found by row-reducing 2 I − L.
2
 1
− 2
0

0

 0
− 6 − 4 0 0
1


2
0 0 0
0
−1 2 0 0  0


0 − 12 2 0
0


0
0 12 2
0
0 0 0 −128

1 0 0 − 32 
0 1 0 −16 

0 0 1 −4 

0 0 0
0 
So, an eigenvector is (128, 32, 16, 4, 1) and a stable age
distribution vector is
128
 
 32 
x = t  16  .
 
 4 
 
 1 
8. Construct the age transition matrix.
A = [3 6 3; 0.8 0 0; 0 0.25 0]
The current age distribution vector is
x1 = [120; 120; 120].
In 1 year, the age distribution vector will be
x2 = Ax1 = [3 6 3; 0.8 0 0; 0 0.25 0][120; 120; 120] = [1440; 96; 30].
In 2 years, the age distribution vector will be
x3 = Ax2 = [3 6 3; 0.8 0 0; 0 0.25 0][1440; 96; 30] = [4986; 1152; 24].

10. The eigenvalues of A are λ1 = 1 and λ2 = −1, with
corresponding eigenvectors (2, 1) and (−2, 1), respectively. Then A can be diagonalized as follows.
P⁻¹AP = [1/4 1/2; −1/4 1/2][0 2; 1/2 0][2 −2; 1 1] = [1 0; 0 −1] = D.
So, A = PDP⁻¹ and Aⁿ = PDⁿP⁻¹.
If n is even, Dⁿ = I and Aⁿ = I. If n is odd, Dⁿ = D and Aⁿ = PDP⁻¹ = [0 2; 1/2 0] = A. So, Aⁿx1 does not
approach a limit as n approaches infinity.
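A few matrix powers make this oscillation concrete. The sketch below (NumPy assumed) shows Aⁿ alternating between A and I, so Aⁿx1 cannot settle to a limit.

import numpy as np

A = np.array([[0.0, 2.0],
              [0.5, 0.0]])

M = np.eye(2)
for n in range(1, 7):
    M = M @ A
    print(n, M.ravel())   # alternates between A (n odd) and I (n even)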
12. The solution to the differential equation y′ = ky is y = Ce^(kt). So, y1 = C1e^(−5t) and y2 = C2e^(4t).
14. The solution to the differential equation y′ = ky is y = Ce^(kt). So, y1 = C1e^(t/2) and y2 = C2e^(t/8).
16. The solution to the differential equation y′ = ky is y = Ce^(kt). So, y1 = C1e^(5t), y2 = C2e^(−2t), and y3 = C3e^(−3t).
18. The solution to the differential equation y′ = ky is y = Ce^(kt). So, y1 = C1e^(−2t/3), y2 = C2e^(−3t/5), and y3 = C3e^(−8t).
20. The solution to the differential equation y′ = ky is y = Ce^(kt). So, y1 = C1e^(−0.1t), y2 = C2e^(−7t/4), y3 = C3e^(−2πt), and y4 = C4e^(√5 t).
22. This system has the matrix form
y′ = [y1′; y2′] = [1 −4; −2 8][y1; y2] = Ay.
The eigenvalues of A are λ1 = 0 and λ2 = 9, with
corresponding eigenvectors ( 4, 1) and ( −1, 2),
respectively. So, you can diagonalize A using a matrix P
whose columns are the eigenvectors of A.
4 −1
0 0
−1
P = 
 and P AP = 

 1 2
0 9
The solution of the system w′ = P −1 APw is w1 = C1
and w2 = C2e9t . Return to the original system by
applying the substitution y = Pw.
y = [y1; y2] = [4 −1; 1 2][w1; w2] = [4w1 − w2; w1 + 2w2]
So, the solution is
y1 = 4C1 − C2e9t
y2 = C1 + 2C2e9t .
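For these differential-equation exercises the algebra can be spot-checked numerically. The sketch below (NumPy assumed; the constants C1 and C2 are arbitrary test values) confirms the diagonalization for Exercise 22 and compares a finite-difference derivative of the stated solution with Ay.

import numpy as np

A = np.array([[1.0, -4.0],
              [-2.0, 8.0]])
P = np.array([[4.0, -1.0],
              [1.0,  2.0]])

print(np.round(np.linalg.inv(P) @ A @ P, 10))    # diag(0, 9)

# Check the stated solution y1 = 4*C1 - C2*exp(9t), y2 = C1 + 2*C2*exp(9t)
C1, C2, t, h = 1.3, -0.7, 0.4, 1e-6
y  = lambda t: np.array([4*C1 - C2*np.exp(9*t), C1 + 2*C2*np.exp(9*t)])
dy = (y(t + h) - y(t - h)) / (2*h)               # numerical derivative
print(np.allclose(dy, A @ y(t), rtol=1e-4))      # True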
24. This system has the matrix form
y′ = [y1′; y2′] = [1 −1; 2 4][y1; y2] = Ay.
The eigenvalues of A are λ1 = 2 and λ2 = 3, with corresponding eigenvectors (1, −1) and (−1, 2), respectively. So, you can diagonalize A using a matrix P whose columns are the eigenvectors of A.
P = [1 −1; −1 2] and P⁻¹AP = [2 0; 0 3]
The solution of the system w′ = P⁻¹APw is w1 = C1e^(2t) and w2 = C2e^(3t). Return to the original system by applying the substitution y = Pw.
y = [y1; y2] = [1 −1; −1 2][w1; w2] = [w1 − w2; −w1 + 2w2]
So, the solution is
y1 = C1e^(2t) − C2e^(3t)
y2 = −C1e^(2t) + 2C2e^(3t).
26. This system has the matrix form
y′ = [y1′; y2′; y3′] = [2 1 1; 1 1 0; 1 0 1][y1; y2; y3] = Ay.
The eigenvalues of A are λ1 = 0, λ2 = 1 and λ3 = 3, with corresponding eigenvectors (−1, 1, 1), (0, 1, −1) and (2, 1, 1), respectively. So, you can diagonalize A using a matrix P whose columns are the eigenvectors.
P = [−1 0 2; 1 1 1; 1 −1 1] and P⁻¹AP = [0 0 0; 0 1 0; 0 0 3]
The solution of the system w′ = P⁻¹APw is w1 = C1, w2 = C2e^t and w3 = C3e^(3t). Return to the original system by applying the substitution y = Pw.
y = [y1; y2; y3] = [−1 0 2; 1 1 1; 1 −1 1][w1; w2; w3] = [−w1 + 2w3; w1 + w2 + w3; w1 − w2 + w3]
So, the solution is
y1 = −C1 + 2C3e^(3t)
y2 = C1 + C2e^t + C3e^(3t)
y3 = C1 − C2e^t + C3e^(3t).
28. This system has the matrix form
y′ = [y1′; y2′; y3′] = [−2 0 1; 0 3 4; 0 0 1][y1; y2; y3] = Ay.
The eigenvalues of A are λ1 = −2, λ2 = 3 and λ3 = 1, with corresponding eigenvectors (1, 0, 0), (0, 1, 0) and (1, −6, 3), respectively. So, you can diagonalize A using a matrix P whose columns are the eigenvectors of A.
P = [1 0 1; 0 1 −6; 0 0 3] and P⁻¹AP = [−2 0 0; 0 3 0; 0 0 1]
The solution of the system w′ = P⁻¹APw is w1 = C1e^(−2t), w2 = C2e^(3t) and w3 = C3e^t. Return to the original system by applying the substitution y = Pw.
y = [y1; y2; y3] = [1 0 1; 0 1 −6; 0 0 3][w1; w2; w3] = [w1 + w3; w2 − 6w3; 3w3]
So, the solution is
y1 = C1e^(−2t) + C3e^t
y2 = C2e^(3t) − 6C3e^t
y3 = 3C3e^t.
30. Because
y′ = [y1′; y2′] = [1 −1; 1 1][y1; y2] = Ay
the system represented by y′ = Ay is
y1′ = y1 − y2
y2′ = y1 + y2.
Note that
y1′ = C1e^t cos t − C1e^t sin t + C2e^t sin t + C2e^t cos t = y1 − y2
and
y2′ = −C2e^t cos t + C2e^t sin t + C1e^t sin t + C1e^t cos t = y1 + y2.
32. Because
y′ = [y1′; y2′; y3′] = [0 1 0; 0 0 1; 1 −3 3][y1; y2; y3] = Ay
the system represented by y′ = Ay is
y1′ = y2
y2′ = y3
y3′ = y1 − 3y2 + 3y3.
Note that
y1′ = C1e^t + C2te^t + C2e^t + C3t²e^t + 2C3te^t = y2
y2′ = (C1 + C2)e^t + (C2 + 2C3)te^t + (C2 + 2C3)e^t + C3t²e^t + 2C3te^t = y3
y3′ = (C1 + 2C2 + 2C3)e^t + (C2 + 4C3)te^t + (C2 + 4C3)e^t + C3t²e^t + 2C3te^t
= (C1e^t + C2te^t + C3t²e^t) − 3((C1 + C2)e^t + (C2 + 2C3)te^t + C3t²e^t) + 3((C1 + 2C2 + 2C3)e^t + (C2 + 4C3)te^t + C3t²e^t)
= y1 − 3y2 + 3y3.
34. The matrix of the quadratic form is
A = [a b/2; b/2 c] = [1 −2; −2 1].
36. The matrix of the quadratic form is
A = [a b/2; b/2 c] = [1/2 −5/2; −5/2 0].
38. The matrix of the quadratic form is
A = [a b/2; b/2 c] = [16 −2; −2 20].
40. The matrix of the quadratic form is
A = [a b/2; b/2 c] = [5 −1; −1 5].
The eigenvalues of A are λ1 = 4 and λ2 = 6, with corresponding eigenvectors x1 = (1, 1) and x2 = (−1, 1), respectively. Using unit vectors in the direction of x1 and x2 to form the columns of P, you have
P = [√2/2 −√2/2; √2/2 √2/2] and PᵀAP = [4 0; 0 6].
42. The matrix of the quadratic form is
A = [a b/2; b/2 c] = [3 −√3; −√3 1].
The eigenvalues of A are λ1 = 0 and λ2 = 4, with corresponding eigenvectors x1 = (1, √3) and x2 = (−√3, 1), respectively. Using unit vectors in the direction of x1 and x2 to form the columns of P, you have
P = [1/2 −√3/2; √3/2 1/2] and PᵀAP = [0 0; 0 4].
44. The matrix of the quadratic form is
A = [a b/2; b/2 c] = [17 16; 16 −7].
The eigenvalues of A are λ1 = −15 and λ2 = 25, with corresponding eigenvectors x1 = (1, −2) and x2 = (2, 1), respectively. Using unit vectors in the direction of x1 and x2 to form the columns of P, you have
P = [√5/5 2√5/5; −2√5/5 √5/5] and PᵀAP = [−15 0; 0 25].
46. The matrix of the quadratic form is
A = [a b/2; b/2 c] = [1 2; 2 1].
This matrix has eigenvalues of −1 and 3, and corresponding unit eigenvectors (√2/2, −√2/2) and (√2/2, √2/2), respectively. So, let
P = [√2/2 √2/2; −√2/2 √2/2] and PᵀAP = [−1 0; 0 3].
This implies that the rotated conic is a hyperbola with equation −(x′)² + 3(y′)² = 9.
48. The matrix of the quadratic form is
A = [a b/2; b/2 c] = [7 16; 16 −17].
This matrix has eigenvalues of −25 and 15, with corresponding unit eigenvectors (√5/5, −2√5/5) and (2√5/5, √5/5), respectively. Let
P = [√5/5 2√5/5; −2√5/5 √5/5] and PᵀAP = [−25 0; 0 15].
This implies that the rotated conic is a hyperbola with equation −25(x′)² + 15(y′)² − 50 = 0.
50. The matrix of the quadratic form is
A = [a b/2; b/2 c] = [8 4; 4 8].
This matrix has eigenvalues of 4 and 12, and corresponding unit eigenvectors (√2/2, −√2/2) and (√2/2, √2/2), respectively. So, let
P = [√2/2 √2/2; −√2/2 √2/2] and PᵀAP = [4 0; 0 12].
This implies that the rotated conic is an ellipse. Furthermore,
[d e]P = [10√2 26√2][√2/2 √2/2; −√2/2 √2/2] = [−16 36] = [d′ e′],
so the equation in the x′y′-coordinate system is 4(x′)² + 12(y′)² − 16x′ + 36y′ + 31 = 0.
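The [d e]P step that carries the linear terms into the rotated system is also easy to script. This sketch (NumPy assumed, using the matrix and coefficients recovered for Exercise 50) reproduces [d′ e′] = [−16 36].

import numpy as np

# Exercise 50: 8x^2 + 8xy + 8y^2 + 10*sqrt(2)x + 26*sqrt(2)y + 31 = 0
A = np.array([[8.0, 4.0],
              [4.0, 8.0]])
d_e = np.array([10*np.sqrt(2), 26*np.sqrt(2)])

r = np.sqrt(2) / 2
P = np.array([[r, r],
              [-r, r]])             # columns are the unit eigenvectors used above

print(np.round(P.T @ A @ P, 10))    # diag(4, 12): coefficients of (x')^2 and (y')^2
print(np.round(d_e @ P, 10))        # [-16. 36.]: the new linear coefficients [d' e']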
52. The matrix of the quadratic form is
A = [a b/2; b/2 c] = [5 −1; −1 5].
The eigenvalues of A are 4 and 6, with corresponding unit eigenvectors (√2/2, √2/2) and (−√2/2, √2/2), respectively. So, let
P = [√2/2 −√2/2; √2/2 √2/2] and PᵀAP = [4 0; 0 6].
This implies that the rotated conic is an ellipse. Furthermore,
[d e]P = [10√2 0][√2/2 −√2/2; √2/2 √2/2] = [10 −10] = [d′ e′],
so the equation in the x′y′-coordinate system is 4(x′)² + 6(y′)² + 10x′ − 10y′ = 0.
2 1 1


54. The matrix of the quadratic form is A =  1 2 1.
 1 1 2


The eigenvalues of A are 1, 1 and 4, with corresponding
1
2   1
1
 1

,
,−
,
, 0
unit eigenvectors 
,  −
6
6 
2
2 
 6
1
1 
 1
,
,
and 
, respectively. Then let
3
3
3

 1

6

 1
P = 
6

 2
−
6

−
1 

3
 1 0 0
1 


T
 and P AP = 0 1 0.
3
0 0 4


1 

3
1
2
1
2
0
So, the equation of the rotated quadratic surface is
( x′) + ( y′) + 4( z′) − 1 = 0 (ellipsoid).
2
2
2
 1 1 0


56. The matrix of the quadratic form is A =  1 1 0.
0 0 1


The eigenvalues of A are 0, 1, and 2, with corresponding
eigenvectors ( −1, 1, 0), (0, 0, 1), and (1, 1, 0),
respectively.
Then let

2
−
2

P = 
2

2

 0
0
0
1
2

2 
2

2 
0 
0 0 0


and P AP = 0 1 0.
0 0 2


T
So, the equation of the rotated quadratic surface is
( y′) + 2( z′) − 8 = 0.
2
2
58. The quadratic form f can be written using matrix notation as
f(x1, x2) = xᵀAx = [x1 x2][11 0; 0 4][x1; x2].
Verify that the eigenvalues of A = [11 0; 0 4] are λ1 = 11 and λ2 = 4, with corresponding eigenvectors [1; 0] and [0; 1].
So, the constrained maximum of 11 occurs when
( x1, x2 ) = (1, 0) and the constrained minimum of 4
occurs when ( x1, x2 ) = (0, 1).
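The Constrained Optimization Theorem used in Exercises 58 through 66 says the extreme values of xᵀAx on ‖x‖ = 1 are the largest and smallest eigenvalues of A. A sketch (NumPy assumed) checks this for Exercise 58 both by eigenvalues and by brute force on the unit circle.

import numpy as np

A = np.array([[11.0, 0.0],
              [0.0,  4.0]])

vals, vecs = np.linalg.eigh(A)
print(vals[-1], vecs[:, -1])    # 11.0, attained at (1, 0): constrained maximum
print(vals[0],  vecs[:, 0])     # 4.0,  attained at (0, 1): constrained minimum

# Brute-force check around the unit circle
t = np.linspace(0, 2*np.pi, 100001)
x = np.vstack([np.cos(t), np.sin(t)])
q = np.einsum('ij,ij->j', x, A @ x)     # x^T A x on the unit circle
print(q.max(), q.min())                 # approximately 11 and 4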
60. To find the maximum and minimum values of z = −5x² + 9y² subject to the constraint x² + 9y² = 9, you cannot use the Constrained Optimization Theorem directly because the constraint is not ‖x‖² = 1. However, with the change of variables
x = 3x′ and y = y′,
the problem becomes finding the maximum and minimum values of
z = −45(x′)² + 9(y′)²
subject to the constraint (x′)² + (y′)² = 1. Verify that the maximum value of 9 occurs when (x′, y′) = (0, 1), or (x, y) = (0, 1), and the minimum value of −45 occurs when (x′, y′) = (1, 0), or (x, y) = (3, 0).
62. The quadratic form f can be written using matrix notation as
f(x1, x2) = xᵀAx = [x1 x2][5 6; 6 0][x1; x2].
Verify that the eigenvalues of A = [5 6; 6 0] are λ1 = 9 and λ2 = −4, with corresponding eigenvectors [3; 2] and [−2; 3].
So, the constrained maximum of 9 occurs when (x1, x2) = (1/√13)(3, 2) = (3/√13, 2/√13) and the constrained minimum of −4 occurs when (x1, x2) = (1/√13)(−2, 3) = (−2/√13, 3/√13).
64. To find the maximum and minimum values of z = 9xy subject to the constraint 9x² + 16y² = 144, you cannot use the Constrained Optimization Theorem directly because the constraint is not ‖x‖² = 1. However, with the change of variables
x = 4x′ and y = 3y′,
the problem becomes finding the maximum and minimum values of
z = 108x′y′
subject to the constraint (x′)² + (y′)² = 1. Verify that the maximum value of 54 occurs when (x′, y′) = (√2/2, √2/2), or (x, y) = (2√2, 3√2/2), and the minimum value of −54 occurs when (x′, y′) = (−√2/2, √2/2), or (x, y) = (−2√2, 3√2/2).
66. The quadratic form f can be written using matrix notation as
f(x, y, z) = xᵀAx = [x y z][2 2 −2; 2 −1 4; −2 4 −1][x; y; z].
Verify that the eigenvalues of A are λ1 = 3 (repeated) and λ2 = −6, with corresponding eigenvectors
[−2; 0; 1], [2; 1; 0], and [1; −2; 2].
So, the constrained maximum of 3 occurs when (x, y, z) = (1/√5)(−2, 0, 1) = (−2/√5, 0, 1/√5) and (x, y, z) = (1/√5)(2, 1, 0) = (2/√5, 1/√5, 0), and the minimum of −6 occurs when (x, y, z) = (1/3)(1, −2, 2) = (1/3, −2/3, 2/3).
68. (a) To model population growth, use the average
number of offspring for each age class and the
probabilities of surviving to the next age class to
form the age transition matrix A. The initial age
distribution vector x1 is used to find x2 by the
formula xn = Axn−1. An eigenvector corresponding
to a positive eigenvalue of A is a stable age
distribution vector.
(b) To solve a system of first order linear differential
equations find the coefficient matrix A for the
system, then find a matrix P that diagonalizes A.
Solve the system w′ = P −1 APw to find w , and then
Pw is the solution of the original system.
(c) To use the Principal Axes Theorem to perform a
rotation of axes, find the matrix A of the quadratic
form of the conic or quadric surface. The
eigenvalues of A are the coefficients of the squared
terms in the rotated system.
(d) Write the quadratic form then apply the Constrained
Optimization Theorem.
Review Exercises for Chapter 7
2. (a) The characteristic equation of A is given by
λI − A = |λ−2 −1; 4 λ+2| = λ² = 0.
(b) The eigenvalue of A is λ = 0 (repeated).
(c) To find the eigenvectors corresponding to λ = 0, solve the matrix equation (λI − A)x = 0. Row-reducing the augmented matrix,
[−2 −1 : 0; 4 2 : 0] → [2 1 : 0; 0 0 : 0]
you see that a basis for the eigenspace is {(−1, 2)}.
4. (a) The characteristic equation of A is given by
λI − A = |λ−1 0 −4; 0 λ−1 2; −1 0 λ+2| = (λ + 3)(λ − 1)(λ − 2) = 0.
(b) The eigenvalues of A are λ1 = −3, λ2 = 1, and λ3 = 2.
(c) To find the eigenvectors corresponding to λ1 = −3, solve the matrix equation (λ1I − A)x = 0. Row-reducing the augmented matrix,
[−4 0 −4 : 0; 0 −4 2 : 0; −1 0 −1 : 0] → [1 0 1 : 0; 0 1 −1/2 : 0; 0 0 0 : 0]
you can see that a basis for the eigenspace of λ1 = −3 is {(−2, 1, 2)}. Similarly, solve (λ2I − A)x = 0 for λ2 = 1, and see that {(0, 1, 0)} is a basis for the eigenspace of λ2 = 1. Finally, solve (λ3I − A)x = 0 for λ3 = 2, and see that {(4, −2, 1)} is a basis for its eigenspace.
6. (a) The characteristic equation of A is given by
λI − A = |λ+4 −1 −2; 0 λ−1 −1; 0 0 λ−3| = (λ + 4)(λ − 1)(λ − 3) = 0.
(b) The eigenvalues of A are λ1 = −4, λ2 = 1, and λ3 = 3.
(c) To find the eigenvectors corresponding to λ1 = −4, solve the matrix equation (λ1I − A)x = 0. Row-reducing the augmented matrix,
[0 −1 −2 : 0; 0 −5 −1 : 0; 0 0 −7 : 0] → [0 1 0 : 0; 0 0 1 : 0; 0 0 0 : 0]
you see that a basis for the eigenspace of λ1 = −4 is {(1, 0, 0)}. Similarly, solve (λ2I − A)x = 0 for λ2 = 1, and see that {(1, 5, 0)} is a basis for the eigenspace of λ2 = 1. Finally, solve (λ3I − A)x = 0 for λ3 = 3, and determine that {(5, 7, 14)} is a basis for its eigenspace.
8. (a) λI − A = (λ − 1)(λ − 2)(λ − 4)² = 0
(b) λ1 = 1, λ2 = 2, λ3 = 4 (repeated)
(c) A basis for the eigenspace of λ1 = 1 is
{(−1, 0, 1, 0)}.
A basis for the eigenspace of λ2 = 2 is {( −2, 1, 1, 0)}.
A basis for the eigenspace of λ3 = 4 is
{(2, 3, 1, 0), (0, 0, 0, 1)}.
10. The eigenvalues of A are λ1 = 1/2 and λ2 = −1/3, and the corresponding eigenvectors (3, 4) and (−1, 2) are used to form the columns of P. So,
P = [3 −1; 4 2], P⁻¹ = [1/5 1/10; −2/5 3/10], and
P⁻¹AP = [1/5 1/10; −2/5 3/10][1/6 1/4; 2/3 0][3 −1; 4 2] = [1/2 0; 0 −1/3].
12. The eigenvalues of A are the solutions of
λI − A = |λ−3 2 −2; 2 λ 1; −2 1 λ| = (λ + 1)²(λ − 5) = 0.
Therefore, the eigenvalues are −1 (repeated) and 5. The corresponding eigenvectors are solutions of (λI − A)x = 0. So, (1, 1, −1) and (2, 5, 1) are eigenvectors corresponding to λ1 = −1, while (2, −1, 1) corresponds to λ2 = 5. Now form P from these eigenvectors and note that
P = [1 2 2; 1 5 −1; −1 1 1] and P⁻¹AP = [−1 0 0; 0 −1 0; 0 0 5].
14. The eigenvalues of A are the solutions of
λI − A = |λ−2 1 −1; 2 λ−3 2; 1 −1 λ| = (λ − 1)²(λ − 3) = 0.
Therefore, the eigenvalues are λ1 = 1 (repeated) and λ2 = 3. The corresponding eigenvectors are solutions of (λI − A)x = 0. So, (−1, 0, 1) and (1, 1, 0) are eigenvectors corresponding to λ1 = 1, while (−1, 2, 1) corresponds to λ2 = 3. Now form P from these eigenvectors and note that
P = [−1 1 −1; 0 1 2; 1 0 1] and P⁻¹AP = [1 0 0; 0 1 0; 0 0 3].
16. Consider the characteristic equation
λI − A = |λ−cos θ sin θ; −sin θ λ−cos θ| = λ² − 2 cos θ ⋅ λ + 1 = 0.
The discriminant of this quadratic equation in λ is b² − 4ac = 4 cos²θ − 4 = −4 sin²θ.
Because 0 < θ < π, this discriminant is always negative, and the characteristic equation has no real roots.
18. The eigenvalue is λ = −1 (repeated). To find its corresponding eigenspace, solve (λ I − A)x = 0 with λ = −1.
[λ+1 −2 : 0; 0 λ+1 : 0] → [0 −2 : 0; 0 0 : 0] → [0 1 : 0; 0 0 : 0]
Because the eigenspace is only one-dimensional, the matrix A is not diagonalizable.
20. The eigenvalues are λ = −2 (repeated) and λ = 4. Because the eigenspace corresponding to λ = −2 is only one-dimensional,
the matrix is not diagonalizable.
22. The eigenvalues of B are 5 and 3 with corresponding eigenvectors ( −1, 1) and ( −1, 2), respectively. Form the columns of P
from the eigenvectors of B. So,
−1 −1
P = 
 and
1 2
−2 −1  7 2 −1 −1
5 0
P −1BP = 


 = 
 = A.
 1 1 −4 1  1 2
0 3
Therefore, A and B are similar.
24. The eigenvalues of B are 1 and −2 (repeated) with corresponding eigenvectors ( −1, −1, 1), (1, 1, 0), and (1, 0, 1), respectively.
Form the columns of P from the eigenvectors of B. So,
−1 1 1


P = −1 1 0 and
 1 0 1


−1 1 1  1 −3 −3 −1 1 1
 1 0 0






P BP = −1 2 1  3 −5 −3 −1 1 0 = 0 −2 0 = A.
 1 −1 0 −3 3 1  1 0 1
0 0 −2






−1
Therefore, A and B are similar.
26. Because Aᵀ = A, A is symmetric. Because the column vectors of A do not form an orthonormal set, A is not orthogonal.
28. Because
Aᵀ = [0 0 1; 0 1 0; 1 0 1] = A,
A is symmetric. However, column 3 is not a unit vector, so A is not orthogonal.
30. Because
Aᵀ = [4/5 0 −3/5; 0 1 0; 3/5 0 4/5] ≠ A,
A is not symmetric. However, the column vectors form an orthonormal set, so A is orthogonal.
32. Because Aᵀ = A, A is symmetric. Furthermore, the column vectors of A form an orthonormal set. So, A is both symmetric and orthogonal.
34. The characteristic polynomial of A is
λI − A = |λ−4 2; 2 λ−1| = λ(λ − 5).
The eigenvalues are λ1 = 0 and λ2 = 5. Every eigenvector corresponding to λ1 = 0 is of the form x1 = (t, 2t), and every eigenvector corresponding to λ2 = 5 is of the form x2 = (2s, −s).
x1 ⋅ x2 = 2st − 2st = 0
So, x1 and x2 are orthogonal.
36. The matrix is diagonal, so the eigenvalues are λ1 = 2
and λ2 = 5. Every eigenvector corresponding to
λ1 = 2 is of the form x1 = (t1, t2 , 0), and every
eigenvector corresponding to λ2 = 5 is of the form
x2 = (0, 0, s).
x1 ⋅ x2 = 0
So, x1 and x2 are orthogonal.
38. The matrix is not symmetric, so it is not orthogonally diagonalizable.
40. The matrix is symmetric, so it is orthogonally diagonalizable.
 5
,
42. The eigenvalues of A are 17 and −17, with corresponding unit eigenvectors 
 34
Form the columns of P from the eigenvectors of A.


P = 



5
34
3
34
−
 5
 34
P AP = 
 3
−
34

T
3 
3

,
 and  −
34
34 

5 
, respectively.
34 
3 
34 
5 

34 
3 
 5
34   8 15   34


5  15 − 8  3


34 
 34
−
3 
17 0 
34 
= 

5 
 0 −17

34 
44. The eigenvalues of A are −3, 0, and 6, with corresponding unit eigenvectors (0, 1, 0), (1/√2, 0, 1/√2), and (1/√2, 0, −1/√2). Form the columns of P from the eigenvectors of A.
P = [0 1/√2 1/√2; 1 0 0; 0 1/√2 −1/√2]
PᵀAP = [0 1 0; 1/√2 0 1/√2; 1/√2 0 −1/√2][3 0 −3; 0 −3 0; −3 0 3][0 1/√2 1/√2; 1 0 0; 0 1/√2 −1/√2] = [−3 0 0; 0 0 0; 0 0 6]
46. The eigenvalues of A are 3, −1, and 5, with corresponding eigenvectors
(1/√2, 1/√2, 0), (1/√2, −1/√2, 0), (0, 0, 1).
Form the columns of P from the eigenvectors of A.
P = [1/√2 1/√2 0; 1/√2 −1/√2 0; 0 0 1]
PᵀAP = [1/√2 1/√2 0; 1/√2 −1/√2 0; 0 0 1][1 2 0; 2 1 0; 0 0 5][1/√2 1/√2 0; 1/√2 −1/√2 0; 0 0 1] = [3 0 0; 0 −1 0; 0 0 5]
48. The eigenvalues of A are −1/2 and 1. The eigenvectors corresponding to λ = 1 are x = t(2, 1). By choosing t = 1/3, you find the steady state probability vector for A to be v = (2/3, 1/3). Note that
Av = [1/2 1; 1/2 0][2/3; 1/3] = [2/3; 1/3] = v.
50. The eigenvalues of A are 1/5 and 1. The eigenvectors corresponding to λ = 1 are x = t(1, 3). By choosing t = 1/4, you can find the steady state probability vector for A to be v = (1/4, 3/4). Note that
Av = [0.4 0.2; 0.6 0.8][1/4; 3/4] = [1/4; 3/4] = v.
52. The eigenvalues of A are −0.2060, 0.5393 and 1. The eigenvectors corresponding to λ = 1 are x = t(2, 1, 2). By choosing t = 1/5, find the steady state probability vector for A to be v = (2/5, 1/5, 2/5). Note that
Av = [1/3 2/3 1/3; 1/3 1/3 0; 1/3 0 2/3][2/5; 1/5; 2/5] = [2/5; 1/5; 2/5] = v.
54. The eigenvalues of A are 1/10, 1/5, and 1. The eigenvectors corresponding to λ = 1 are x = t(3, 1, 5). By choosing t = 1/9, you can find the steady state probability vector for A to be v = (1/3, 1/9, 5/9). Note that
Av = [0.3 0.1 0.4; 0.2 0.4 0.0; 0.5 0.5 0.6][1/3; 1/9; 5/9] = [1/3; 1/9; 5/9] = v.
56. Show by induction that for the n × n matrix λIn − A,
|λIn − A| = |λ −1 0 ⋯ 0; 0 λ −1 ⋯ 0; ⋮ ⋮ ⋱ ⋱ ⋮; 0 0 0 ⋯ −1; a0 a1 a2 ⋯ λ+an−1| = λ^n + an−1λ^(n−1) + ⋯ + a1λ + a0.
For n = 1, |λI1 − A| = λ + a0, and for n = 2,
|λI2 − A| = |λ −1; a0 λ+a1| = λ² + a1λ + a0.
Assuming the property holds for n, expand |λIn+1 − A| along its first column. The minor of the (1, 1) entry λ is a determinant of the same form with coefficients a1, a2, …, an, and the cofactor of the entry a0 in the lower-left corner is (−1)^n times an upper triangular determinant with −1 on its diagonal, which equals 1. So
|λIn+1 − A| = λ(λ^n + anλ^(n−1) + ⋯ + a1) + a0 = λ^(n+1) + anλ^n + ⋯ + a1λ + a0,
showing the property is valid for n + 1. You can now evaluate the characteristic equation of A as
|λIn − A| = λ^n + an−1λ^(n−1) + an−2λ^(n−2) + ⋯ + a1λ + a0.
58. From the form p(λ) = a3λ³ + a2λ² + a1λ + a0, you have a3 = 2, a2 = −7, a1 = −120, and a0 = 189. This implies that the companion matrix of p is
A = [0 1 0; 0 0 1; −189/2 60 7/2].
The eigenvalues of A are 3/2, 9, and −7, the zeros of p.
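Exercise 58 uses the companion-matrix form proved in Exercise 56, and it is easy to reproduce numerically. In the sketch below (NumPy assumed; the helper name companion is illustrative), the matrix is built for any monic polynomial and the zeros of p are recovered as eigenvalues.

import numpy as np

def companion(coeffs):
    """Companion matrix of lambda^n + a_{n-1} lambda^{n-1} + ... + a_1 lambda + a_0,
    given coeffs = [a_0, a_1, ..., a_{n-1}]."""
    n = len(coeffs)
    C = np.zeros((n, n))
    C[:-1, 1:] = np.eye(n - 1)                 # superdiagonal of ones
    C[-1, :] = -np.array(coeffs, dtype=float)  # last row carries the coefficients
    return C

# p(lambda) = 2*lambda^3 - 7*lambda^2 - 120*lambda + 189; divide by 2 to make it monic
C = companion([189/2, -120/2, -7/2])
print(np.round(np.sort(np.linalg.eigvals(C).real), 4))   # [-7.  1.5  9.]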
60. The characteristic equation of A is λI − A = λ³ − 20λ² + 128λ − 256 = 0.
Using A³ − 20A² + 128A − 256I3 = 0, you can find the powers of A by the process below.
A³ = 20A² − 128A + 256I3
= 20[76 48 −36; −24 −32 72; −12 −48 100] − 128[9 4 −3; −2 0 6; −1 −4 11] + 256[1 0 0; 0 1 0; 0 0 1]
= [1520 960 −720; −480 −640 1440; −240 −960 2000] − [1152 512 −384; −256 0 768; −128 −512 1408] + [256 0 0; 0 256 0; 0 0 256]
= [624 448 −336; −224 −384 672; −112 −448 848]
A⁴ = 20A³ − 128A² + 256A
= 20[624 448 −336; −224 −384 672; −112 −448 848] − 128[76 48 −36; −24 −32 72; −12 −48 100] + 256[9 4 −3; −2 0 6; −1 −4 11]
= [12,480 8960 −6720; −4480 −7680 13,440; −2240 −8960 16,960] − [9728 6144 −4608; −3072 −4096 9216; −1536 −6144 12,800] + [2304 1024 −768; −512 0 1536; −256 −1024 2816]
= [5056 3840 −2880; −1920 −3584 5760; −960 −3840 6976]
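The Cayley–Hamilton computation above is easy to double-check by comparing the recursion against direct matrix powers. A sketch (NumPy assumed):

import numpy as np

A = np.array([[9.0, 4.0, -3.0],
              [-2.0, 0.0, 6.0],
              [-1.0, -4.0, 11.0]])
I = np.eye(3)

A2 = A @ A
A3 = 20*A2 - 128*A + 256*I          # Cayley-Hamilton replacement for A^3
A4 = 20*A3 - 128*A2 + 256*A

print(np.allclose(A3, np.linalg.matrix_power(A, 3)))   # True
print(np.allclose(A4, np.linalg.matrix_power(A, 4)))   # True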

62. ( A + cI )x = Ax + cIx = λx + cx = (λ + c)x. So, x is an eigenvector of ( A + cI ) with eigenvalue (λ + c).
64. (a) The eigenvalues of A are 3 and 1, with corresponding eigenvectors (1, 1) and (1, −1). Letting these eigenvectors form the columns of P, you can diagonalize A.
P = [1 1; 1 −1] and P⁻¹AP = [3 0; 0 1] = D
So, A = PDP⁻¹ = P[3 0; 0 1]P⁻¹. Letting B = P[√3 0; 0 1]P⁻¹ = (1/2)[√3+1 √3−1; √3−1 √3+1],
you have B² = (P[√3 0; 0 1]P⁻¹)(P[√3 0; 0 1]P⁻¹) = P[3 0; 0 1]P⁻¹ = A.
(b) In general, let A = PDP⁻¹, D diagonal with positive eigenvalues on the diagonal. Let D′ be the diagonal matrix consisting of the square roots of the diagonal entries of D. Then if B = PD′P⁻¹,
B² = (PD′P⁻¹)(PD′P⁻¹) = P(D′)²P⁻¹ = PDP⁻¹ = A.
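Part (b) is a recipe that can be coded directly. The sketch below (NumPy assumed; sqrtm_diag is an illustrative name) builds B = PD′P⁻¹ and confirms B² = A for the matrix of part (a).

import numpy as np

def sqrtm_diag(A):
    """Square root of a diagonalizable matrix with positive eigenvalues,
    built as B = P * sqrt(D) * P^{-1}."""
    vals, P = np.linalg.eig(A)
    D_sqrt = np.diag(np.sqrt(vals))
    return P @ D_sqrt @ np.linalg.inv(P)

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])          # eigenvalues 3 and 1, as in part (a)
B = sqrtm_diag(A)
print(np.round(B, 6))
print(np.allclose(B @ B, A))        # True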
66. The eigenvalues of A are a + b and a − b, with corresponding unit eigenvectors (1/√2, 1/√2) and (−1/√2, 1/√2), respectively. So, P = [1/√2 −1/√2; 1/√2 1/√2]. Note that
P⁻¹AP = [1/√2 1/√2; −1/√2 1/√2][a b; b a][1/√2 −1/√2; 1/√2 1/√2] = [a+b 0; 0 a−b].
68. (a) A is diagonalizable if and only if a = b = c = 0.
(b) If exactly two of a, b, and c are zero, then the
eigenspace of 2 has dimension 3. If exactly one of a,
b, c is zero, then the dimension of the eigenspace is
2. If none of a, b, c is zero, the eigenspace is
dimension 1.
70. (a) True. See Theorem 7.2 on page 432.
(b) False. See remark after the “Definitions of
Eigenvalue and Eigenvector” on page 426. If x = 0
is allowed to be an eigenvector, then the definition
of eigenvalue would be meaningless, because
A0 = λ 0 for all real numbers λ .
(c) True. See page 459.
72. The population after one transition is
x2 = [0 1; 3/4 0][32; 32] = [32; 24]
and after two transitions is
x3 = [0 1; 3/4 0][32; 24] = [24; 24].
The eigenvalues of A are ±√3/2. Choose the positive eigenvalue and find the corresponding eigenvector to be (2, √3), and the stable age distribution vector is x = t[2; √3].
74. The population after one transition is
x2 = [0 2 2; 1/2 0 0; 0 0 0][240; 240; 240] = [960; 120; 0]
and after two transitions is
x3 = [0 2 2; 1/2 0 0; 0 0 0][960; 120; 0] = [240; 480; 0].
The positive eigenvalue 1 has corresponding eigenvector (2, 1, 0), and the stable distribution vector is x = t[2; 1; 0].
76. Construct the age transition matrix.
A = [4 8 2; 0.75 0 0; 0 0.6 0]
The current age distribution vector is x1 = [120; 120; 120].
In one year, the age distribution vector will be
x2 = Ax1 = [4 8 2; 0.75 0 0; 0 0.6 0][120; 120; 120] = [1680; 90; 72].
In two years, the age distribution vector will be
x3 = Ax2 = [4 8 2; 0.75 0 0; 0 0.6 0][1680; 90; 72] = [7584; 1260; 54].
78. The matrix corresponding to the system y′ = Ay is
A = [0 1; 1 0].
This matrix has eigenvalues 1 and −1, with corresponding eigenvectors (1, 1) and (1, −1). So, a matrix P that diagonalizes A is
P = [1 1; 1 −1] and P⁻¹AP = [1 0; 0 −1].
The system represented by w′ = P⁻¹APw has solutions w1 = C1e^t and w2 = C2e^(−t). Substitute y = Pw and obtain
y = [y1; y2] = [1 1; 1 −1][w1; w2] = [C1e^t + C2e^(−t); C1e^t − C2e^(−t)]
which yields the solutions
y1 = C1e^t + C2e^(−t)
y2 = C1e^t − C2e^(−t).
80. The matrix corresponding to the system y′ = Ay is
A = [6 −1 2; 0 3 −1; 0 0 1].
The eigenvalues of A are 6, 3, and 1, with corresponding eigenvectors (1, 0, 0), (1, 3, 0), and (−3, 5, 10). So, you can diagonalize A by forming P.
P = [1 1 −3; 0 3 5; 0 0 10] and P⁻¹AP = [6 0 0; 0 3 0; 0 0 1].
The system represented by w′ = P⁻¹APw has solutions w1 = C1e^(6t), w2 = C2e^(3t), and w3 = C3e^t. Substitute y = Pw and obtain
y = [y1; y2; y3] = [1 1 −3; 0 3 5; 0 0 10][w1; w2; w3] = [w1 + w2 − 3w3; 3w2 + 5w3; 10w3]
which yields the solution
y1 = C1e^(6t) + C2e^(3t) − 3C3e^t
y2 = 3C2e^(3t) + 5C3e^t
y3 = 10C3e^t.
82. (a) The matrix of the quadratic form is
A = [a b/2; b/2 c] = [1 −√3/2; −√3/2 2].
(b) The eigenvalues are 1/2 and 5/2, with corresponding unit eigenvectors (√3/2, 1/2) and (−1/2, √3/2). Use these eigenvectors to form the columns of P.
P = [√3/2 −1/2; 1/2 √3/2] and PᵀAP = [1/2 0; 0 5/2]
(c) This implies that the equation of the rotated conic is (1/2)(x′)² + (5/2)(y′)² = 10, an ellipse.
(d) The graph shows the ellipse together with the x′- and y′-axes, rotated 30° from the x- and y-axes.
84. (a) The matrix of the quadratic form is
A = [a b/2; b/2 c] = [9 −12; −12 16].
(b) The eigenvalues are 0 and 25, with corresponding unit eigenvectors (4/5, 3/5) and (−3/5, 4/5). Use these eigenvectors to form the columns of P.
P = [4/5 −3/5; 3/5 4/5] and PᵀAP = [0 0; 0 25]
This implies that the rotated conic is a parabola.
(c) Furthermore,
[d e]P = [−400 −300][4/5 −3/5; 3/5 4/5] = [−500 0] = [d′ e′]
so the equation in the x′y′-coordinate system is 25(y′)² − 500x′ = 0.
(d) The graph shows the parabola together with the x′- and y′-axes, rotated 36.87° from the x- and y-axes.
86. To find the maximum and minimum values of z = x1x2 subject to the constraint 25x1² + 4x2² = 100, you cannot use the Constrained Optimization Theorem directly because the constraint is not ‖x‖² = 1. However, with the change of variables
x1 = 2x and x2 = 5y,
the problem becomes finding the maximum and minimum values of
z = 10xy
subject to the constraint x² + y² = 1. Verify that the maximum value of 5 occurs when (x, y) = (√2/2, √2/2), or (x1, x2) = (√2, 5√2/2), and the minimum value of −5 occurs when (x, y) = (−√2/2, √2/2), or (x1, x2) = (−√2, 5√2/2).
88. The quadratic form f can be written using matrix notation as
f(x, y) = xᵀAx = [x y][−11 5; 5 −11][x; y].
Verify that the eigenvalues of A = [−11 5; 5 −11] are λ1 = −16 and λ2 = −6, with corresponding eigenvectors [−1; 1] and [1; 1].
So, the constrained maximum of −6 occurs when (x, y) = (1/√2)(1, 1) = (1/√2, 1/√2), and the constrained minimum of −16 occurs when (x, y) = (1/√2)(−1, 1) = (−1/√2, 1/√2).
Project Solutions for Chapter 7
1 Population Growth and Dynamical Systems (I)
1. A = [0.5 0.6; −0.4 3.0], λ1 = 0.6, w1 = [6; 1]
λ2 = 2.9, w2 = [1; 4]
P = [6 1; 1 4], P⁻¹ = (1/23)[4 −1; −1 6], P⁻¹AP = [0.6 0; 0 2.9]
w1 = C1e^(0.6t), w2 = C2e^(2.9t), y = Pw
y = [y1; y2] = [6 1; 1 4][C1e^(0.6t); C2e^(2.9t)] = [6C1e^(0.6t) + C2e^(2.9t); C1e^(0.6t) + 4C2e^(2.9t)]
y1(0) = 36: 6C1 + C2 = 36
y2(0) = 121: C1 + 4C2 = 121
So, C1 = 1, C2 = 30 and
y1 = 6e^(0.6t) + 30e^(2.9t)
y2 = e^(0.6t) + 120e^(2.9t).
2. No, neither species disappears. As t → ∞, y1 → 30e^(2.9t) and y2 → 120e^(2.9t).
3. A graph of y1 and y2 for 0 ≤ t ≤ 3 (with values up to about 150,000) shows both populations growing, with y2 above y1.
4. As t → ∞, y1 → 30e^(2.9t), y2 → 120e^(2.9t), and y2/y1 → 4.
5. The population y2 ultimately disappears around t = 1.6.
2 The Fibonacci Sequence
1. x1 = 1, x2 = 1, x3 = 2, x4 = 3, x5 = 5, x6 = 8, x7 = 13, x8 = 21, x9 = 34, x10 = 55, x11 = 89, x12 = 144
1 1  xn −1 
 xn −1 + xn − 2 
 xn 
 xn −1 
2. 


 = 
 = 
. xn generated from 
x
x
x
1
0
n −1
 xn − 2 

  n−2


 n −1 
1
 x2 
 x3 
2
3. A  = A  =   =  
1
x
x

 1
 2
1
1
3
 x4 
A2   =   =  
1
2
 x3 
1
1
 xn + 2 
 xn 
n−2  
In general, An   = 
 or A   = 
.
1
 xn +1 
1
 xn −1 
4. |λ−1 −1; −1 λ| = λ² − λ − 1 = 0, λ = (1 ± √5)/2
λ1 = (1 + √5)/2, eigenvector: [2; −1 + √5]
λ2 = (1 − √5)/2, eigenvector: [2; −1 − √5]
P = [2 2; −1+√5 −1−√5], P⁻¹ = (1/(4√5))[1+√5 2; −1+√5 −2], P⁻¹AP = [λ1 0; 0 λ2]
5. P⁻¹AP = D implies P⁻¹A^(n−2)P = D^(n−2), so
A^(n−2) = PD^(n−2)P⁻¹
= (1/(4√5))[2 2; −1+√5 −1−√5][λ1^(n−2) 0; 0 λ2^(n−2)][1+√5 2; −1+√5 −2]
= (1/(4√5))[2λ1^(n−2) 2λ2^(n−2); (−1+√5)λ1^(n−2) (−1−√5)λ2^(n−2)][1+√5 2; −1+√5 −2].
The first row of A^(n−2)[1; 1] = [xn; xn−1] then gives
xn = (1/(4√5))[2(1+√5)λ1^(n−2) + 2(−1+√5)λ2^(n−2) + 4λ1^(n−2) − 4λ2^(n−2)]
= (1/(4√5))[(1+√5)²λ1^(n−2) − (1−√5)²λ2^(n−2)]
= (1/(4√5))[4λ1²λ1^(n−2) − 4λ2²λ2^(n−2)]
= (1/√5)(λ1^n − λ2^n)
xn = (1/√5)[((1+√5)/2)^n − ((1−√5)/2)^n]
x1 = (1/√5)(√5) = 1
x2 = (1/√5)[(6+2√5)/4 − (6−2√5)/4] = 1
x3 = (1/√5)[((6+2√5)/4)((1+√5)/2) − ((6−2√5)/4)((1−√5)/2)] = (1/√5)[(16+8√5)/8 − (16−8√5)/8] = 2

6. x10 = 55, x20 = 6765
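The closed form derived in part 5 can be compared against the recurrence directly. A short sketch (Python, with NumPy only for the square root) checks the first twenty terms and reproduces the values quoted in part 6.

import numpy as np

def fib_binet(n):
    s5 = np.sqrt(5)
    return ((1 + s5)/2)**n / s5 - ((1 - s5)/2)**n / s5

def fib_iter(n):
    a, b = 1, 1
    for _ in range(n - 1):
        a, b = b, a + b
    return a

for n in range(1, 21):
    assert round(fib_binet(n)) == fib_iter(n)
print(fib_iter(10), fib_iter(20))   # 55 6765, matching part 6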
7. For example, x20/x19 = 6765/4181 ≈ 1.618. The quotients seem to be approaching a fixed value near 1.618.
8. Let the limit be lim xn/xn−1 = b. Then for large n,
xn/xn−1 = (xn−1 + xn−2)/xn−1 ≈ 1 + 1/b,
so b = 1 + 1/b, which gives b² − b − 1 = 0 and b = (1 ± √5)/2.
Taking the positive value, b = (1 + √5)/2 ≈ 1.618.