
Contemporary Linear Algebra, Student Solutions Manual ( PDFDrive )

CHAPTER 2
Systems of Linear Equations
EXERCISE SET 2.1
1. (a) and (c) are linear. (b) is not linear due to the x1x3 term. (d) is not linear due to the x1^(3/2) term.
2. (a) and (d) are linear. (b) is not linear because of the xyz term. (c) is not linear because of the x^(3/5) term.
3. (a) is linear. (b) is linear if k ≠ 0. (c) is linear only if k = 1.

4. (a) is linear. (b) is linear if m ≠ 0. (c) is linear only if m = 1.
5. (a), (d), and (e) are solutions; these sets of values satisfy all three equations. (b) and (c) are not solutions.
6. (b), (d), and (e) are solutions; these sets of values satisfy all three equations. (a) and (c) are not solutions.
7. The three lines intersect at the point (1, 0) (see figure). The values x = 1, y = 0 satisfy all three equations, and this is the unique solution of the system.

[Figure: three lines, among them 3x - 3y = 3, meeting at the point (1, 0).]
The augmented matrix of the system is

[ 1  2 | 1 ]
[ 2  1 | 2 ]
[ 3 -3 | 3 ]

Add -2 times row 1 to row 2 and add -3 times row 1 to row 3:

[ 1  2 | 1 ]
[ 0 -3 | 0 ]
[ 0 -9 | 0 ]

Multiply row 2 by -1/3 and add 9 times the new row 2 to row 3:

[ 1  2 | 1 ]
[ 0  1 | 0 ]
[ 0  0 | 0 ]

From the last row we see that the system is redundant (reduces to only two equations). From the second row we see that y = 0 and, from back substitution, it follows that x = 1 - 2y = 1.
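The elimination above can be replayed in a few lines of code. This is a sketch (the helper names are my own, not from the text), using exact rational arithmetic so the operations match the hand computation:

```python
from fractions import Fraction

# Augmented matrix of the system x + 2y = 1, 2x + y = 2, 3x - 3y = 3
M = [[Fraction(1), Fraction(2), Fraction(1)],
     [Fraction(2), Fraction(1), Fraction(2)],
     [Fraction(3), Fraction(-3), Fraction(3)]]

def add_multiple(M, src, dst, k):
    # elementary row operation: row dst += k * row src
    M[dst] = [a + k * b for a, b in zip(M[dst], M[src])]

def scale(M, row, k):
    # elementary row operation: row *= k
    M[row] = [k * a for a in M[row]]

add_multiple(M, 0, 1, Fraction(-2))   # add -2 times row 1 to row 2
add_multiple(M, 0, 2, Fraction(-3))   # add -3 times row 1 to row 3
scale(M, 1, Fraction(-1, 3))          # multiply row 2 by -1/3
add_multiple(M, 1, 2, Fraction(9))    # add 9 times the new row 2 to row 3

# back substitution: y = 0, then x = 1 - 2y = 1
y = M[1][2]
x = M[0][2] - 2 * y
print(x, y)
```

The last row comes out identically zero, confirming the redundancy noted above.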
8. The three lines do not intersect in a common point (see figure). This system has no solution.

The last row of the reduced row echelon form of the augmented matrix (details omitted) corresponds to the equation 0 = 1, so the system is inconsistent.
9. (a) The solution set of the equation 7x - 5y = 3 can be described parametrically by (for example) solving the equation for x in terms of y and then making y into a parameter. This leads to x = (3 + 5t)/7, y = t, where -∞ < t < ∞.

(b) The solution set of 3x1 - 5x2 + 4x3 = 7 can be described by solving the equation for x1 in terms of x2 and x3, then making x2 and x3 into parameters. This leads to x1 = (7 + 5s - 4t)/3, x2 = s, x3 = t, where -∞ < s, t < ∞.

(c) The solution set of -8x1 + 2x2 - 5x3 + 6x4 = 1 can be described by (for example) solving the equation for x2 in terms of x1, x3, and x4, then making x1, x3, and x4 into parameters. This leads to x1 = r, x2 = (1 + 8r + 5s - 6t)/2, x3 = s, x4 = t, where -∞ < r, s, t < ∞.

(d) The solution set of 3v - 8w + 2x - y + 4z = 0 can be described by (for example) solving the equation for y in terms of the other variables, and then making those variables into parameters. This leads to v = t1, w = t2, x = t3, y = 3t1 - 8t2 + 2t3 + 4t4, z = t4, where -∞ < t1, t2, t3, t4 < ∞.
10. (a) x = 2 - 10t, y = t, where -∞ < t < ∞.
(b) x1 = 3 - 3s + 12t, x2 = s, x3 = t, where -∞ < s, t < ∞.
(c) x1 = r, x2 = s, x3 = t, x4 = 20 - 4r - 2s - 3t, where -∞ < r, s, t < ∞.
(d) v = t1, w = t2, x = -t1 - t2 + 5t3 - 7t4, y = t3, z = t4, where -∞ < t1, t2, t3, t4 < ∞.

11. (a) If the solution set is described by the equations x = 5 + 2t, y = t, then on replacing t by y in the first equation we have x = 5 + 2y, or x - 2y = 5. This is a linear equation with the given solution set.

(b) The solution set can also be described by solving the equation for y in terms of x, and then making x into a parameter. This leads to the equations x = t, y = (1/2)t - 5/2.
12. (a) If x1 = -3 + t and x2 = 2t, then t = x1 + 3 and so x2 = 2x1 + 6, or -2x1 + x2 = 6. This is a linear equation with the given solution set.

(b) The solution set can also be described by solving the equation for x2 in terms of x1, and then making x1 into a parameter. This leads to the equations x1 = t, x2 = 2t + 6.
13. We can find parametric equations for the line of intersection by (for example) solving the given equations for x and y in terms of z, then making z into a parameter:

x + y = 3 + z
2x + y = 4 - 3z

Subtracting the first equation from the second gives x = 1 - 4z. From the above it follows that y = 3 + z - x = 3 + z - 1 + 4z = 2 + 5z, and this leads to the parametric equations

x = 1 - 4t,  y = 2 + 5t,  z = t

for the line of intersection. The corresponding vector equation is

(x, y, z) = (1, 2, 0) + t(-4, 5, 1)
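As a quick sanity check, the parametric equations above can be substituted back into both plane equations for several parameter values:

```python
# Check that the line (x, y, z) = (1, 2, 0) + t(-4, 5, 1) satisfies
# x + y = 3 + z and 2x + y = 4 - 3z for several values of t.
for t in [-2, 0, 1, 3.5]:
    x, y, z = 1 - 4 * t, 2 + 5 * t, t
    assert x + y == 3 + z
    assert 2 * x + y == 4 - 3 * z
print("line of intersection verified")
```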
14. We can find parametric equations for the line of intersection by (for example) solving the given equations for x and y in terms of z, then making z into a parameter:

x + 2y = 1 - 3z
3x - 2y = 2 - z

Adding the two equations gives 4x = (1 - 3z) + (2 - z) = 3 - 4z, so x = (3 - 4z)/4. From the above it follows that y = (1 - 8z)/8. This leads to the parametric equations

x = 3/4 - t,  y = 1/8 - t,  z = t

and the corresponding vector equation is

(x, y, z) = (3/4, 1/8, 0) + t(-1, -1, 1)
15. If k ≠ 6, then the equations x - y = 3, 2x - 2y = k represent nonintersecting parallel lines and so the system of equations has no solution. If k = 6, the two lines coincide and so there are infinitely many solutions: x = 3 + t, y = t, where -∞ < t < ∞.

16. No solutions if k ≠ 3; infinitely many solutions if k = 3.
17. The augmented matrix of the system is

[ 3 -2 | -1 ]
[ 4  5 |  3 ]
[ 7  3 |  2 ]

18. The augmented matrix is [entries illegible in this copy].
19. The augmented matrix of the system is [entries illegible in this copy].

20. The augmented matrix is [entries illegible in this copy].
21. A system of equations corresponding to the given augmented matrix is:

2x1 = 0
3x1 - 4x2 = 0
x2 = 1
22. A system of equations corresponding to the given augmented matrix is:

3x1 - 2x3 = 5
7x1 + x2 + 4x3 = -3
-2x2 + x3 = 7
23. A system of equations corresponding to the given augmented matrix is:

7x1 + 2x2 + x3 + 3x4 = 5
x1 + 2x2 + 4x3 = 1

24. x1 = 7, x2 = -2, x3 = 3, x4 = 4

25. (a) B is obtained from A by adding 2 times the first row to the second row. A is obtained from B by adding -2 times the first row to the second row.
(b) B is obtained from A by multiplying the first row by 1/2. A is obtained from B by multiplying the first row by 2.

26. (a) B is obtained from A by interchanging the first and third rows. A is obtained from B by interchanging the first and third rows.
(b) B is obtained from A by multiplying the third row by 5. A is obtained from B by multiplying the third row by 1/5.
27. 2x + 3y + z = 7
2x + y + 3z = 9
4x + 2y + 5z = 16

28. 2x + 3y + 12z = 4
8x + 9y + 6z = 8
6x + 6y + 12z = 7

29. x + y + 2z = 5
2x - z = -1
x + y + z = 12

30. x + y + z = 3
y + z = 10
-y + z = 6

31. (a) 3c1 + c2 + 2c3 - c4 = 5
c2 + 3c3 + 2c4 = 6
-c1 + c2 + 5c4 = 5
2c1 + c2 + 2c3 = 5

(b) 3c1 + c2 + 2c3 - c4 = 8
c2 + 3c3 + 2c4 = 3
-c1 + c2 + 5c4 = -2
2c1 + c2 + 2c3 = 6
(c) 3c1 + c2 + 2c3 - c4 = 4
c2 + 3c3 + 2c4 = 4
-c1 + c2 + 5c4 = 6
2c1 + c2 + 2c3 = 2

32. (a), (b), (c): [the three systems of equations are illegible in this copy]
DISCUSSION AND DISCOVERY
D1. (a) There is no common intersection point.
(b) There is exactly one common point of intersection.
(c) The three lines coincide.
D2. A consistent system has at least one solution; moreover, it either has exactly one solution or it
has infinitely many solutions.
If the system has exactly one solution, then there are two possibilities. If the three lines are
all distinct but have a common point of intersection, then any one of the three equations can be
discarded without altering the solution set. On the other hand, if two of the lines coincide, then
one of the corresponding equations can be discarded without altering the solution set.
If the system has infinitely many solutions, then the three lines coincide. In this case any one
(in fact any two) of the equations can be discarded without altering the solution set.
D3. Yes. If B can be obtained from A by multiplying a row by a nonzero constant, then A can be
obtained from B by multiplying the same row by the reciprocal of that constant. If B can be
obtained from A by interchanging two rows, then A can be obtained from B by interchanging the
same two rows. Finally, if B can be obtained from A by adding a multiple of a row to another
row, then A can be obtained from B by subtracting the same multiple of that row from the other
row.
D4. If k = l = m = 0, then x = y = 0 is a solution of all three equations and so the system is consistent.
If the system has exactly one solution then the three lines intersect at the origin.
D5. The parabola y = ax^2 + bx + c will pass through the points (1, 1), (2, 4), and (-1, 1) if and only if

a + b + c = 1
4a + 2b + c = 4
a - b + c = 1

Since there is a unique parabola passing through any three non-collinear points, one would expect this system to have exactly one solution.
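The system above can be solved mechanically. The sketch below (the `solve` helper is my own, not from the text) applies Gauss-Jordan elimination with exact arithmetic and confirms that the unique parabola is y = x^2:

```python
from fractions import Fraction

def solve(aug):
    # Gauss-Jordan elimination on a square augmented matrix;
    # returns the solution column. A minimal sketch, not library code.
    n = len(aug)
    for i in range(n):
        p = next(r for r in range(i, n) if aug[r][i] != 0)
        aug[i], aug[p] = aug[p], aug[i]
        aug[i] = [v / aug[i][i] for v in aug[i]]
        for r in range(n):
            if r != i and aug[r][i] != 0:
                aug[r] = [u - aug[r][i] * v for u, v in zip(aug[r], aug[i])]
    return [row[-1] for row in aug]

# rows a*x^2 + b*x + c = y for the three interpolation points
pts = [(1, 1), (2, 4), (-1, 1)]
aug = [[Fraction(x * x), Fraction(x), Fraction(1), Fraction(y)] for x, y in pts]
a, b, c = solve(aug)
print(a, b, c)   # 1 0 0, i.e. the parabola is y = x^2
```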
D6. The parabola y = ax^2 + bx + c passes through the points (x1, y1), (x2, y2), and (x3, y3) if and only if

a x1^2 + b x1 + c = y1
a x2^2 + b x2 + c = y2
a x3^2 + b x3 + c = y3

i.e. if and only if a, b, and c satisfy the linear system whose augmented matrix is

[ x1^2  x1  1 | y1 ]
[ x2^2  x2  1 | y2 ]
[ x3^2  x3  1 | y3 ]
D7. To say that the equations have the same solution set is the same thing as to say that they represent the same line. From the first equation the x1-intercept of the line is x1 = c, and from the second equation the x1-intercept is x1 = d; thus c = d. If the line is vertical then k = l = 0. If the line is not vertical, then the two equations determine the same slope, and it follows that k = l. In summary, we conclude that c = d and k = l; thus the two equations are identical.

D8. (a) True. If there are n ≥ 2 columns, then the first n - 1 columns correspond to the coefficients of the variables that appear in the equations and the last column corresponds to the constants that appear on the right-hand side of the equal sign.
(b) False. Referring to Example 6: the sequence of linear systems appearing in the left-hand column all have the same solution set, but the corresponding augmented matrices appearing in the right-hand column are all different.
(c) False. Multiplying a row of the augmented matrix by zero corresponds to multiplying both sides of the corresponding equation by zero. But this is equivalent to discarding one of the equations!
(d) True. If the system is consistent, one can solve for two of the variables in terms of the third or (if further redundancy is present) for one of the variables in terms of the other two. In any case, there is at least one "free" variable that can be made into a parameter in describing the solution set of the system. Thus if the system is consistent, it will have infinitely many solutions.
D9. (a) True. A plane in 3-space corresponds to a linear equation in three variables. Thus a set of four planes corresponds to a system of four linear equations in three variables. If there is enough redundancy in the equations so that the system reduces to a system of two independent equations, then the solution set will be a line. For example, four vertical planes each containing the z-axis and intersecting the xy-plane in four distinct lines.
(b) False. Interchanging the first two columns corresponds to interchanging the coefficients of the first two variables. This results in a different system with a different solution set. [It is okay to interchange rows since this corresponds to interchanging equations and therefore does not alter the solution set.]
(c) False. If there is enough redundancy so that the system reduces to a system of only two (or fewer) equations, and if these equations are consistent, then the original system will be consistent.
(d) True. Such a system will always have the trivial solution x1 = x2 = ··· = xn = 0.
EXERCISE SET 2.2
1. The matrices (a), (c), and (d) are in reduced row echelon form. The matrix (b) does not satisfy property 4 of the definition, and the matrix (e) does not satisfy property 2.

2. The matrices (c), (d), and (e) are in reduced row echelon form. The matrix (a) does not satisfy property 3 of the definition, and the matrix (b) does not satisfy property 4.

3. The matrices (a) and (b) are in row echelon form. The matrix (c) does not satisfy property 1 or property 3 of the definition.

4. The matrices (a) and (b) are in row echelon form. The matrix (c) does not satisfy property 2.

5. The matrices (a) and (c) are in reduced row echelon form. The matrix (b) does not satisfy property 3 and thus is not in row echelon form or reduced row echelon form.

6. The matrix (c) is in reduced row echelon form. The matrix (a) is in row echelon form but does not satisfy property 4. The matrix (b) does not satisfy property 3 and thus is not in row echelon form or reduced row echelon form.
7. The possible 2 by 2 reduced row echelon forms are

[ 1 0 ]   [ 1 * ]   [ 0 1 ]   [ 0 0 ]
[ 0 1 ],  [ 0 0 ],  [ 0 0 ],  [ 0 0 ]

with any real number substituted for the *.

8. The possible 3 by 3 reduced row echelon forms are

[ 1 0 0 ]   [ 1 0 * ]   [ 1 * 0 ]   [ 0 1 0 ]
[ 0 1 0 ],  [ 0 1 * ],  [ 0 0 1 ],  [ 0 0 1 ]
[ 0 0 1 ]   [ 0 0 0 ]   [ 0 0 0 ]   [ 0 0 0 ]

and

[ 1 * * ]   [ 0 1 * ]   [ 0 0 1 ]   [ 0 0 0 ]
[ 0 0 0 ],  [ 0 0 0 ],  [ 0 0 0 ],  [ 0 0 0 ]
[ 0 0 0 ]   [ 0 0 0 ]   [ 0 0 0 ]   [ 0 0 0 ]

with any real numbers substituted for the *'s.
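The enumeration above can be checked against an actual reduction routine. The `rref` function below is a minimal sketch of reduced row echelon reduction (my own helper, not library code); each example lands on one of the listed forms:

```python
from fractions import Fraction

def rref(rows):
    # Reduce a matrix to reduced row echelon form (a sketch).
    M = [[Fraction(v) for v in row] for row in rows]
    m, n = len(M), len(M[0])
    r = 0
    for c in range(n):
        piv = next((i for i in range(r, m) if M[i][c] != 0), None)
        if piv is None:
            continue                       # no pivot in this column
        M[r], M[piv] = M[piv], M[r]        # bring pivot row up
        M[r] = [v / M[r][c] for v in M[r]] # make the leading entry 1
        for i in range(m):
            if i != r and M[i][c] != 0:    # clear the rest of the column
                M[i] = [u - M[i][c] * v for u, v in zip(M[i], M[r])]
        r += 1
    return M

assert rref([[2, 4], [1, 2]]) == [[1, 2], [0, 0]]   # the [1 *; 0 0] form
assert rref([[0, 3], [0, 7]]) == [[0, 1], [0, 0]]   # the [0 1; 0 0] form
assert rref([[1, 1], [1, 2]]) == [[1, 0], [0, 1]]   # the identity form
print("all reductions match the listed forms")
```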
9. The given matrix corresponds to the system

x1 = -3
x2 = 0
x3 = 7

which clearly has the unique solution x1 = -3, x2 = 0, x3 = 7.
10. The given matrix corresponds to the system

x1 + 2x2 + 2x4 = -1
x3 + 3x4 = 4

Solving these equations for the leading variables (x1 and x3) in terms of the free variables (x2 and x4) results in x1 = -1 - 2x2 - 2x4 and x3 = 4 - 3x4. Thus, by assigning arbitrary values to x2 and x4, the solution set of the system can be represented by the parametric equations

x1 = -1 - 2s - 2t,  x2 = s,  x3 = 4 - 3t,  x4 = t

where -∞ < s, t < ∞. The corresponding vector form is

(x1, x2, x3, x4) = (-1, 0, 4, 0) + s(-2, 1, 0, 0) + t(-2, 0, -3, 1)
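The vector form above can be verified by substituting back into the two equations of the system for a few parameter values (a sketch; the helper name is mine):

```python
# Every point of the form (-1, 0, 4, 0) + s(-2, 1, 0, 0) + t(-2, 0, -3, 1)
# should satisfy x1 + 2*x2 + 2*x4 = -1 and x3 + 3*x4 = 4.
def point(s, t):
    base = (-1, 0, 4, 0)
    vs, vt = (-2, 1, 0, 0), (-2, 0, -3, 1)
    return tuple(b + s * u + t * v for b, u, v in zip(base, vs, vt))

for s in (-1, 0, 2):
    for t in (-3, 0, 1):
        x1, x2, x3, x4 = point(s, t)
        assert x1 + 2 * x2 + 2 * x4 == -1
        assert x3 + 3 * x4 == 4
print("parametric solution verified")
```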
11. The given matrix corresponds to the system

x1 - 6x2 + 3x5 = -2
x3 + 4x5 = 7
x4 + 5x5 = 8

where the equation corresponding to the zero row has been omitted. Solving these equations for the leading variables (x1, x3, and x4) in terms of the free variables (x2 and x5) results in x1 = -2 + 6x2 - 3x5, x3 = 7 - 4x5, and x4 = 8 - 5x5. Thus, assigning arbitrary values to x2 and x5, the solution set can be represented by the parametric equations

x1 = -2 + 6s - 3t,  x2 = s,  x3 = 7 - 4t,  x4 = 8 - 5t,  x5 = t

where -∞ < s, t < ∞. The corresponding vector form is

(x1, x2, x3, x4, x5) = (-2, 0, 7, 8, 0) + s(6, 1, 0, 0, 0) + t(-3, 0, -4, -5, 1)
12. The given matrix corresponds to the system

x1 - 3x2 = 0
x3 = 0
0 = 1

which is clearly inconsistent since the last equation is not satisfied for any values of x1, x2, and x3.
13. The given matrix corresponds to the system

x1 - 7x4 = 8
x2 + 3x4 = 2
x3 + x4 = -5

Solving these equations for the leading variables in terms of the free variable results in x1 = 8 + 7x4, x2 = 2 - 3x4, and x3 = -5 - x4. Thus, making x4 into a parameter, the solution set of the system can be represented by the parametric equations

x1 = 8 + 7t,  x2 = 2 - 3t,  x3 = -5 - t,  x4 = t

where -∞ < t < ∞. The corresponding vector form is

(x1, x2, x3, x4) = (8, 2, -5, 0) + t(7, -3, -1, 1)
14. The given matrix corresponds to the single equation x1 + 2x2 + 2x4 - x5 = 3, in which x3 does not appear. Solving for x1 in terms of the other variables results in x1 = 3 - 2x2 - 2x4 + x5. Thus, making x2, x3, x4, and x5 into parameters, the solution set of the equation is given by

x1 = 3 - 2s - 2u + v,  x2 = s,  x3 = t,  x4 = u,  x5 = v

where -∞ < s, t, u, v < ∞. The corresponding (column) vector form is

[x1]   [3]    [-2]    [0]    [-2]    [1]
[x2]   [0]    [ 1]    [0]    [ 0]    [0]
[x3] = [0] + s[ 0] + t[1] + u[ 0] + v[0]
[x4]   [0]    [ 0]    [0]    [ 1]    [0]
[x5]   [0]    [ 0]    [0]    [ 0]    [1]
15. The system of equations corresponding to the given matrix is

x1 - 3x2 + 4x3 = 7
x2 + 2x3 = 2
x3 = 5

Starting with the last equation and working up, it follows that x3 = 5, x2 = 2 - 2x3 = 2 - 10 = -8, and x1 = 7 + 3x2 - 4x3 = 7 - 24 - 20 = -37.

Alternate solution via Gauss-Jordan (starting from the original matrix and reducing further):

[ 1 -3 4 | 7 ]
[ 0  1 2 | 2 ]
[ 0  0 1 | 5 ]

Add -2 times row 3 to row 2. Add -4 times row 3 to row 1.

[ 1 -3 0 | -13 ]
[ 0  1 0 |  -8 ]
[ 0  0 1 |   5 ]

Add 3 times row 2 to row 1.

[ 1 0 0 | -37 ]
[ 0 1 0 |  -8 ]
[ 0 0 1 |   5 ]

From this we conclude (as before) that x1 = -37, x2 = -8, and x3 = 5.
16. The system of equations corresponding to the given matrix is

x1 + 8x3 - 5x4 = 6
x2 + 4x3 - 9x4 = 3
x3 + x4 = 2

Starting with the last equation and working up, we have x3 = 2 - x4, x2 = 3 - 4x3 + 9x4 = 3 - 4(2 - x4) + 9x4 = -5 + 13x4, and x1 = 6 - 8x3 + 5x4 = 6 - 8(2 - x4) + 5x4 = -10 + 13x4. Finally, assigning an arbitrary value to x4, the solution set can be described by the parametric equations x1 = -10 + 13t, x2 = -5 + 13t, x3 = 2 - t, x4 = t.

Alternate solution via Gauss-Jordan (starting from the original matrix and reducing further):

[ 1 0 8 -5 | 6 ]
[ 0 1 4 -9 | 3 ]
[ 0 0 1  1 | 2 ]

Add -4 times row 3 to row 2. Add -8 times row 3 to row 1.

[ 1 0 0 -13 | -10 ]
[ 0 1 0 -13 |  -5 ]
[ 0 0 1   1 |   2 ]

From this we conclude (as before) that x1 = -10 + 13t, x2 = -5 + 13t, x3 = 2 - t, x4 = t.
17. The corresponding system of equations is

x1 + 7x2 - 2x3 - 8x5 = -3
x3 + x4 + 6x5 = 5
x4 + 3x5 = 9

Starting with the last equation and working up, it follows that x4 = 9 - 3x5, x3 = 5 - x4 - 6x5 = 5 - (9 - 3x5) - 6x5 = -4 - 3x5, and x1 = -3 - 7x2 + 2x3 + 8x5 = -3 - 7x2 + 2(-4 - 3x5) + 8x5 = -11 - 7x2 + 2x5. Finally, assigning arbitrary values to x2 and x5, the solution set can be described by

x1 = -11 - 7s + 2t,  x2 = s,  x3 = -4 - 3t,  x4 = 9 - 3t,  x5 = t

18. The corresponding system

x1 - 3x2 + 7x3 = 1
x2 + 4x3 = 0
0 = 1

is inconsistent since there are no values of x1, x2, and x3 which satisfy the third equation.
19. The corresponding system is

x1 + x2 - 3x3 + 2x4 = 1
x2 + 4x3 = 3
x4 = 2

Starting with the last equation, we have x4 = 2, x2 = 3 - 4x3, and x1 = 1 - x2 + 3x3 - 2x4 = 1 - (3 - 4x3) + 3x3 - 2(2) = -6 + 7x3. Thus, making x3 into a parameter, the solution set can be described by the parametric equations

x1 = -6 + 7t,  x2 = 3 - 4t,  x3 = t,  x4 = 2
20. The corresponding system is

x1 + 5x3 + 3x4 = 2
x2 - 2x3 + 4x4 = -7
x3 + x4 = 3

Thus x4 is a free variable and, setting x4 = t, we have x3 = 3 - t, x2 = -7 + 2(3 - t) - 4t = -1 - 6t, and x1 = 2 - 5(3 - t) - 3t = -13 + 2t.

21. Starting with the first equation and working down, we have x1 = 2, x2 = (1/3)(5 - x1) = (1/3)(5 - 2) = 1, and x3 = (1/4)(12 - 3x1 - 2x2) = (1/4)(12 - 6 - 2) = 1.

22. x1 = -1, x2 = (4 - 2x1)/3 = (4 + 2)/3 = 2, x3 = 5 - x1 - 4x2 = 5 + 1 - 8 = -2.

23. The augmented matrix of the system is

[  1  1 2 |  8 ]
[ -1 -2 3 |  1 ]
[  3 -7 4 | 10 ]

Add row 1 to row 2. Add -3 times row 1 to row 3.

[ 1   1  2 |   8 ]
[ 0  -1  5 |   9 ]
[ 0 -10 -2 | -14 ]

Multiply row 2 by -1. Add 10 times the new row 2 to row 3.

[ 1 1   2 |    8 ]
[ 0 1  -5 |   -9 ]
[ 0 0 -52 | -104 ]

Multiply row 3 by -1/52.

[ 1 1  2 |  8 ]
[ 0 1 -5 | -9 ]
[ 0 0  1 |  2 ]

Add 5 times row 3 to row 2. Add -2 times row 3 to row 1. Add -1 times row 2 to row 1.

[ 1 0 0 | 3 ]
[ 0 1 0 | 1 ]
[ 0 0 1 | 2 ]

Thus the solution is x1 = 3, x2 = 1, x3 = 2.
24. The augmented matrix of the system is

[  3 3 3 |  0 ]
[ -2 5 2 |  1 ]
[  8 1 4 | -1 ]

Multiply row 1 by 1/3. Add 2 times the new row 1 to row 2. Add -8 times the new row 1 to row 3.

[ 1  1  1 |  0 ]
[ 0  7  4 |  1 ]
[ 0 -7 -4 | -1 ]

Multiply row 2 by 1/7. Add 7 times the new row 2 to row 3.

[ 1 1   1 |   0 ]
[ 0 1 4/7 | 1/7 ]
[ 0 0   0 |   0 ]

Add -1 times row 2 to row 1.

[ 1 0 3/7 | -1/7 ]
[ 0 1 4/7 |  1/7 ]
[ 0 0   0 |    0 ]

Finally, assigning an arbitrary value to the free variable x3, the solution set is represented by the parametric equations

x1 = -1/7 - (3/7)t,  x2 = 1/7 - (4/7)t,  x3 = t
25. The augmented matrix of the system is

[  1 -1  2 -1 | -1 ]
[  2  1 -2 -2 | -2 ]
[ -1  0  0  1 |  1 ]
[  3  0  0 -3 | -3 ]

Add -2 times row 1 to row 2. Add row 1 to row 3. Add -3 times row 1 to row 4.

[ 1 -1  2 -1 | -1 ]
[ 0  3 -6  0 |  0 ]
[ 0 -1  2  0 |  0 ]
[ 0  3 -6  0 |  0 ]

Multiply row 2 by 1/3. Add the new row 2 to row 3. Add -3 times the new row 2 to row 4. Add row 2 to row 1.

[ 1 0  0 -1 | -1 ]
[ 0 1 -2  0 |  0 ]
[ 0 0  0  0 |  0 ]
[ 0 0  0  0 |  0 ]

Thus, setting z = s and w = t, the solution set of the system is represented by the parametric equations

x = -1 + t,  y = 2s,  z = s,  w = t
26. The augmented matrix of the system is

[ 0 -2  3 |  2 ]
[ 3  6 -3 | -2 ]
[ 6  6  3 |  5 ]

Interchange rows 1 and 2. Multiply the new row 1 by 1/3. Add -6 times the new row 1 to row 3.

[ 1  2 -1 | -2/3 ]
[ 0 -2  3 |    2 ]
[ 0 -6  9 |    9 ]

Multiply row 2 by -1/2. Add 6 times the new row 2 to row 3.

[ 1 2   -1 | -2/3 ]
[ 0 1 -3/2 |   -1 ]
[ 0 0    0 |    3 ]

It is now clear from the last row that the system is inconsistent.
27. Multiply row 3 of the augmented matrix by 2, then add -1 times row 1 to row 2 and -3 times row 1 to the new row 3. The last two rows then correspond to the incompatible equations 4x2 = 3 and 13x2 = 8; thus the system is inconsistent.
28. The reduced row echelon form of the augmented matrix of the system (details omitted) shows that the system is inconsistent.
29. As an intermediate step in Exercise 23, the augmented matrix of the system was reduced to

[ 1 1  2 |  8 ]
[ 0 1 -5 | -9 ]
[ 0 0  1 |  2 ]

Starting with the last row and working up, it follows that x3 = 2, x2 = -9 + 5x3 = -9 + 10 = 1, and x1 = 8 - x2 - 2x3 = 8 - 1 - 4 = 3.
30. As an intermediate step in Exercise 24, the augmented matrix of the system was reduced to

[ 1 1   1 |   0 ]
[ 0 1 4/7 | 1/7 ]
[ 0 0   0 |   0 ]

Starting with the last equation and working up, it follows that x2 = 1/7 - (4/7)x3, and x1 = -x2 - x3 = -(1/7 - (4/7)x3) - x3 = -1/7 - (3/7)x3. Finally, assigning an arbitrary value to x3, the solution set can be described by the parametric equations

x1 = -1/7 - (3/7)t,  x2 = 1/7 - (4/7)t,  x3 = t
31. As an intermediate step in Exercise 25, the augmented matrix of the system was reduced to

[ 1 -1  2 -1 | -1 ]
[ 0  1 -2  0 |  0 ]
[ 0  0  0  0 |  0 ]
[ 0  0  0  0 |  0 ]

It follows that y = 2z and x = -1 + y - 2z + w = -1 + w. Thus, setting z = s and w = t, the solution set of the system is represented by the parametric equations x = -1 + t, y = 2s, z = s, w = t.
32. As in Exercise 26, the augmented matrix of the system can be reduced to

[ 1 2   -1 | -2/3 ]
[ 0 1 -3/2 |   -1 ]
[ 0 0    0 |    3 ]

and from this we can immediately conclude that the system has no solution.
33. (a) There are more unknowns than equations in this homogeneous system. Thus, by Theorem 2.2.3, there are infinitely many nontrivial solutions.
(b) From back substitution it is clear that x1 = x2 = x3 = 0. This system has only the trivial solution.

34. (a) There are more unknowns than equations in this homogeneous system; thus there are infinitely many nontrivial solutions.
(b) The second equation is a multiple of the first. Thus the system reduces to only one equation in two unknowns, and there are infinitely many solutions.
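Theorem 2.2.3 can be illustrated concretely. The 2x4 coefficient matrix below is my own illustrative example (not taken from the exercises); one elimination step exposes the free variables, from which a nontrivial solution is read off:

```python
from fractions import Fraction

# A homogeneous system with more unknowns (4) than equations (2)
# must have nontrivial solutions (Theorem 2.2.3). Example matrix is mine.
A = [[Fraction(v) for v in row] for row in ([1, 2, -1, 3], [2, 4, 1, 0])]

# one elimination step: row 2 -= 2 * row 1
A[1] = [u - 2 * v for u, v in zip(A[1], A[0])]
# A is now [[1, 2, -1, 3], [0, 0, 3, -6]]; x2 and x4 are free variables.

# choose x2 = 1, x4 = 0, then back-substitute:
x2, x4 = Fraction(1), Fraction(0)
x3 = (6 * x4) / 3                    # from 3*x3 - 6*x4 = 0
x1 = -2 * x2 + x3 - 3 * x4           # from x1 + 2*x2 - x3 + 3*x4 = 0
solution = (x1, x2, x3, x4)

# the nontrivial solution satisfies every (row-equivalent) equation
assert all(sum(c * x for c, x in zip(row, solution)) == 0 for row in A)
print("nontrivial solution:", solution)
```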
35. The augmented matrix of the homogeneous system is

[ 2 1 3 | 0 ]
[ 1 2 0 | 0 ]
[ 0 1 2 | 0 ]

Interchange rows 1 and 2. Add -2 times the new row 1 to the new row 2.

[ 1  2 0 | 0 ]
[ 0 -3 3 | 0 ]
[ 0  1 2 | 0 ]

Multiply row 2 by -1/3. Add -1 times row 2 to row 3. Multiply the new row 3 by 1/3.

[ 1 2  0 | 0 ]
[ 0 1 -1 | 0 ]
[ 0 0  1 | 0 ]

The last row of this matrix corresponds to x3 = 0 and, from back substitution, it follows that x2 = x3 = 0 and x1 = -2x2 = 0. This system has only the trivial solution.
36. The augmented matrix of the homogeneous system is

[ 3  1 1  1 | 0 ]
[ 5 -1 1 -1 | 0 ]

Multiply row 2 by 3. Add -5 times row 1 to the new row 2, then multiply this last row 2 by -1/2.

[ 3 1 1 1 | 0 ]
[ 0 4 1 4 | 0 ]

Let x3 = 4s, x4 = t. Then, using back substitution, we have 4x2 = -x3 - 4x4 = -4s - 4t and 3x1 = -x2 - x3 - x4 = s + t - 4s - t = -3s. Thus the solution set of the system can be described by the parametric equations x1 = -s, x2 = -s - t, x3 = 4s, x4 = t.
37. The augmented matrix of the homogeneous system is reduced as follows: interchange rows 1 and 2; add 2 times the new row 1 to row 3; multiply row 2 by 1/2; add -1 times the new row 2 to row 3; multiply the new row 3 by -1/10; add -2 times row 3 to row 2; add 3 times row 3 to row 1. The result is the reduced row echelon form

[ 1 0 -1 0 | 0 ]
[ 0 1  1 0 | 0 ]
[ 0 0  0 1 | 0 ]

From this we see that y (the third variable) is a free variable and, on setting y = t, the solution set of the system can be described by the parametric equations w = t, x = -t, y = t, z = 0.
38. The augmented matrix of the homogeneous system is

[  2 -1 -3 | 0 ]
[ -1  2 -3 | 0 ]
[  1  1  4 | 0 ]

and the reduced row echelon form of this matrix is

[ 1 0 0 | 0 ]
[ 0 1 0 | 0 ]
[ 0 0 1 | 0 ]

Thus this system has only the trivial solution x = y = z = 0.
39. The reduced row echelon form of the augmented matrix of this homogeneous system is

[ 1 0 -7/2  5/2 | 0 ]
[ 0 1    3   -2 | 0 ]
[ 0 0    0    0 | 0 ]

Thus, setting w = 2s and x = 2t, the solution set of the system can be described by the parametric equations u = 7s - 5t, v = -6s + 4t, w = 2s, x = 2t.
40. The reduced row echelon form of the augmented matrix of the homogeneous system is

[ 1 0 0 0 | 0 ]
[ 0 1 0 0 | 0 ]
[ 0 0 1 0 | 0 ]
[ 0 0 0 1 | 0 ]
[ 0 0 0 0 | 0 ]

Thus the system reduces to only four equations, and these equations have only the trivial solution x1 = x2 = x3 = x4 = 0.
41. We will solve the system by Gaussian elimination, i.e. by reducing the augmented matrix of the system to a row echelon form: interchange rows 1 and 2; add -2 times the new row 1 to the new row 2; add -3 times the new row 1 to row 3; add -2 times the new row 1 to row 4; multiply row 2 by -1; add 3 times the new row 2 to row 3; add -1 times the new row 2 to row 4; multiply row 3 by -1/14; add -15 times the new row 3 to row 4; finally, scale the new row 4 so that its leading entry is 1. The result is the row echelon form

[ 1 0 -2  7 | 0 ]
[ 0 1 -7 10 | 0 ]
[ 0 0  1 -1 | 0 ]
[ 0 0  0  1 | 0 ]

From the last row we conclude that I4 = 0, and from back substitution it follows that I3 = I2 = I1 = 0 also. This system has only the trivial solution.
42. The reduced row echelon form of the augmented matrix of the homogeneous system is

[ 1 1 0 0 1 | 0 ]
[ 0 0 1 0 1 | 0 ]
[ 0 0 0 1 0 | 0 ]
[ 0 0 0 0 0 | 0 ]

From this we conclude that the second and fifth variables are free variables, and that the solution set of the system can be described by the parametric equations

z1 = -s - t,  z2 = s,  z3 = -t,  z4 = 0,  z5 = t
43. The augmented matrix of the system is

[ 1  2   3 |     4 ]
[ 3 -1   5 |     2 ]
[ 4  1 -14 | a + 2 ]

Add -3 times row 1 to row 2. Add -4 times row 1 to row 3.

[ 1  2   3 |      4 ]
[ 0 -7  -4 |    -10 ]
[ 0 -7 -26 | a - 14 ]

Multiply row 2 by -1. Add the new row 2 to row 3.

[ 1 2   3 |     4 ]
[ 0 7   4 |    10 ]
[ 0 0 -22 | a - 4 ]

From the last row we conclude that z = (4 - a)/22 and, from back substitution, it is clear that y and x are uniquely determined as well. This system has exactly one solution for every value of a.
44. Add -2 times the first row of the augmented matrix to the second row, and add -1 times the first row to the third row. The last row of the resulting matrix corresponds to the equation (a^2 - 4)z = a - 2. If a = -2, this becomes 0 = -4 and so the system is inconsistent. If a = 2, the last equation becomes 0 = 0; thus the system reduces to only two equations and, with z serving as a free variable, has infinitely many solutions. If a ≠ ±2, the system has a unique solution.
45. The augmented matrix of the system is

[ 1       2 |     3 ]
[ 2 a^2 - 5 | a + 3 ]

Add -2 times row 1 to row 2.

[ 1       2 |     3 ]
[ 0 a^2 - 9 | a - 3 ]

If a = 3, then the last row corresponds to 0 = 0 and the system has infinitely many solutions. If a = -3, the last row corresponds to 0 = -6 and the system is inconsistent. If a ≠ ±3, then y = (a - 3)/(a^2 - 9) = 1/(a + 3) and, from back substitution, x is uniquely determined as well; the system has exactly one solution in this case.
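The three-way case analysis can be expressed as a small classifier driven by the last reduced row (a sketch; the function name is mine):

```python
# Classify the system of Exercise 45 by its last reduced row,
# which corresponds to the equation (a^2 - 9) * y = a - 3.
def classify(a):
    coeff, rhs = a * a - 9, a - 3
    if coeff != 0:
        return "unique"          # y = (a - 3)/(a^2 - 9) = 1/(a + 3)
    return "infinitely many" if rhs == 0 else "inconsistent"

assert classify(3) == "infinitely many"   # last row reads 0 = 0
assert classify(-3) == "inconsistent"     # last row reads 0 = -6
assert classify(5) == "unique"            # y = 1/8
print("case analysis verified")
```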
46. The augmented matrix of the system reduces to a row echelon form whose last row corresponds to the equation (a^2 - 9)z = 3a + 9. If a = -3 this becomes 0 = 0, and the system will have infinitely many solutions. If a = 3, then the last row corresponds to 0 = 18; the system is inconsistent. If a ≠ ±3, then z = (3a + 9)/(a^2 - 9) = 3/(a - 3) and, from back substitution, y and x are uniquely determined as well; the system has exactly one solution.
47. (a) If x + y + z = 1, then 2x + 2y + 2z = 2 ≠ 4; thus the system has no solution. The planes represented by the two equations do not intersect (they are parallel).
(b) If x + y + z = 0, then 2x + 2y + 2z = 0 also; thus the system is redundant and has infinitely many solutions. Any set of values of the form x = -s - t, y = s, and z = t will satisfy both equations. The planes represented by the equations coincide.
48. To reduce the given matrix to reduced row echelon form without introducing fractions: Add -1 times row 1 to row 3. Interchange rows 1 and 3. Add -2 times row 1 to row 3. Add -3 times row 2 to row 3. Interchange rows 2 and 3. Add 2 times row 2 to row 3. Scale row 3 so that its leading entry is 1. Add 22 times row 3 to row 2. Add -2 times row 3 to row 1. Add -3 times row 2 to row 1. This is the reduced row echelon form.
49. The system is linear in the variables x = sin α, y = cos β, and z = tan γ:

2x - y + 3z = 3
4x + 2y - 2z = 2
6x - 3y + z = 9

We solve the system by performing the indicated row operations on the augmented matrix

[ 2 -1  3 | 3 ]
[ 4  2 -2 | 2 ]
[ 6 -3  1 | 9 ]

Add -2 times row 1 to row 2. Add -3 times row 1 to row 3.

[ 2 -1  3 |  3 ]
[ 0  4 -8 | -4 ]
[ 0  0 -8 |  0 ]

From this we conclude that tan γ = z = 0 and, from back substitution, that cos β = y = -1 and sin α = x = 1. Thus α = π/2, β = π, and γ = 0.
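The substitution trick can be checked numerically: the linear system pins down x, y, z, and the angles are then recovered with the inverse trigonometric functions on their principal ranges:

```python
import math

# Exercise 49 after substitution: the linear system in x = sin(alpha),
# y = cos(beta), z = tan(gamma) has the unique solution x = 1, y = -1, z = 0.
x, y, z = 1, -1, 0
assert 2 * x - y + 3 * z == 3
assert 4 * x + 2 * y - 2 * z == 2
assert 6 * x - 3 * y + z == 9

# recover the angles on the principal ranges
alpha = math.asin(x)    # pi/2
beta = math.acos(y)     # pi
gamma = math.atan(z)    # 0
assert (alpha, beta, gamma) == (math.pi / 2, math.pi, 0.0)
print("angles recovered")
```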
50. This system is linear in the variables X = x^2, Y = y^2, and Z = z^2. The reduced row echelon form of its augmented matrix is

[ 1 0 0 | 1 ]
[ 0 1 0 | 3 ]
[ 0 0 1 | 2 ]

It follows from this that X = 1, Y = 3, and Z = 2; thus x = ±1, y = ±√3, and z = ±√2.
51. This system is homogeneous, and its augmented matrix depends on the parameter λ.

If λ = 1, the reduced row echelon form of the augmented matrix is

[ 1 0 0 | 0 ]
[ 0 1 0 | 0 ]
[ 0 0 1 | 0 ]

Thus x = y = z = 0, i.e. the system has only the trivial solution.

If λ = 2, the reduced row echelon form of the augmented matrix is

[ 1 0 1/2 | 0 ]
[ 0 1   0 | 0 ]
[ 0 0   0 | 0 ]

Thus the system has infinitely many solutions: x = -(1/2)t, y = 0, z = t, where -∞ < t < ∞.
52. The augmented matrix

[ 1 1 2 | a ]
[ 1 0 1 | b ]
[ 2 1 3 | c ]

reduces to the following row echelon form:

[ 1 1 2 |         a ]
[ 0 1 1 |     a - b ]
[ 0 0 0 | c - a - b ]

Thus the system is consistent if and only if c - a - b = 0.
53. (a) Starting with the given system and proceeding as directed, we have

0.0001x + 1.000y = 1.000
1.000x - 1.000y = 0.000

1.000x + 10000y = 10000
1.000x - 1.000y = 0.000

1.000x + 10000y = 10000
        -10000y = -10000

which results in y ≈ 1.000 and x ≈ 0.000.

(b) If we first interchange rows and then proceed as directed, we have

1.000x - 1.000y = 0.000
0.0001x + 1.000y = 1.000

1.000x - 1.000y = 0.000
         1.000y = 1.000

which results in y ≈ 1.000 and x ≈ 1.000.
(c) The exact solution is x = 100000/49999 and y = 49997/49999.

The approximate solution without using partial pivoting is

0.00002x + 1.000y = 1.000
1.000x + 1.000y = 3.000

1.000x + 50000y = 50000
1.000x + 1.000y = 3.000

1.000x + 50000y = 50000
        -50000y = -50000

which results in y ≈ 1.000 and x ≈ 0.000.

The approximate solution using partial pivoting is

1.000x + 1.000y = 3.000
0.00002x + 1.000y = 1.000

1.000x + 1.000y = 3.000
         1.000y = 1.000

which results in y ≈ 1.000 and x ≈ 2.000.
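The effect of partial pivoting can be reproduced by rounding every intermediate result to three significant digits, mimicking the hand computation. This is a sketch (`round3` and `solve2` are my own names, and the rounding model only approximates the textbook's arithmetic):

```python
def round3(v):
    # round to 3 significant digits, mimicking the hand computation
    return float(f"{v:.3g}")

def solve2(row1, row2):
    # eliminate x from row2 using row1 as pivot, then back-substitute,
    # rounding after every arithmetic step
    a1, b1, c1 = row1
    a2, b2, c2 = row2
    m = round3(a2 / a1)            # elimination multiplier
    b2p = round3(b2 - m * b1)
    c2p = round3(c2 - m * c1)
    y = round3(c2p / b2p)
    x = round3(round3(c1 - b1 * y) / a1)
    return x, y

# Exercise 53(c): 0.00002*x + 1.000*y = 1.000 and 1.000*x + 1.000*y = 3.000
no_pivot = solve2((0.00002, 1.0, 1.0), (1.0, 1.0, 3.0))   # tiny pivot first
pivoted = solve2((1.0, 1.0, 3.0), (0.00002, 1.0, 1.0))    # rows interchanged
print(no_pivot, pivoted)
```

Without pivoting the computed x collapses to 0.0; with the rows interchanged it comes out as 2.0, close to the exact value 100000/49999 ≈ 2.00004.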
(a) Solving the system a..-; directed. we have
+ 0.33y = 0.54
+ 0.24y = 0.94
0.70x + 0.24y = 0.94
0.21x
0.70x
+ 0.33y = 0.54
l.OOx + 0.34y = 1.34
0.21x
0.2lx
+ 0.33y
= 0.54
l.OOx
+ 0.34y
= 1.34
0.26
0.26y
resulting in y ::::: 1.00 and x
:=:::
=
1.00. The exact solution is x
=
1, y = 1.
(b) Solving the system as directed, we have
O.llx1 - 0.13x:~ + 0.20xs = -0.02
O.lOxt + 0.36x2 + 0.45xa = 0.25
0.50xt- O.Olx2 + 0.30xs = -0.70
O.SOx1 - O.Olx2 + 0.30xs = -0.70
O.IOx1 + 0.36x:J + 0.45xs = 0.25
O.Uz:t - 0 . 1 3x:~ + 0.20xa = -0:02
l.OOx1 - 0.02x2 + 0.60x3
+ 0.36x:z + 0.45x3
O.ll x 1 - 0.13x2 + 0.20xs
O.lOx1
= -1.40
= 0.25
= -0.02
Discussion and Discovery
43
0.02x2
0.36x2
0.13x2
0.02x2
l.OOxz
0.13x2
O.OZx2
l.OOx2
resulting in x3 ;:::,; 1.00,
X3 = 1.
x2;:::,;
+
+
+
+
+
+
+
+
0 .60XJ
0 .39x3
0.13x3
0.60x3
i.08x3
0.13x3
=
-
0.60x3
=
=
1.08XJ
0.27x3
=
-1.40
0.39
0.13
-1.40
1 .08
0.13
-1.40
1.08
0 .27
0.00, and x1;:::,; - 2.00. The exact solution is x 1 = - 2, x 2 = 0,
DISCUSSION AND DISCOVERY
D1. If the homogeneous system has only the trivial solution, then the non-homogeneous system will either be inconsistent or have exactly one solution.
D2. (a) All three lines pass through the origin and at least two of them do not coincide.
(b) If the system has nontrivial solutions then the lines must coincide and pass through the origin.
D3. (a) Yes. If ax0 + by0 = 0 then a(kx0) + b(ky0) = k(ax0 + by0) = 0. Similarly for the other equation.
(b) Yes. If ax0 + by0 = 0 and ax1 + by1 = 0 then a(x0 + x1) + b(y0 + y1) = (ax0 + by0) + (ax1 + by1) = 0, and similarly for the other equation.
(c) Yes in both cases. These statements are not true for non-homogeneous systems.
D4. The first system may be inconsistent, but the second system always has (at least) the trivial solution. If the first system is consistent then the solution sets will be parallel objects (points, lines, or the entire plane) with the second containing the origin.
D5. (a) At most three (the number of rows in the matrix).
(b) At most five (if B is the zero matrix). If B is not the zero matrix, then there are at most 4 free variables (5 - r, where r is the number of nonzero rows in a row echelon form).
(c) At most three (the number of rows in the matrix).

D6. (a) At most three (the number of columns).
(b) At most three (if B is the zero matrix). If B is not the zero matrix, then there are at most 2 free variables (3 - r, where r is the number of nonzero rows in a row echelon form).
(c) At most three (the number of columns).
D7. (a) False. For example, x + y + z = 0 and x + y + z = 1 are inconsistent.
(b) False. If there is more than one solution then there are infinitely many solutions.
(c) False. If the system is consistent then, since there is at least one free variable, there will be infinitely many solutions.
(d) True. A homogeneous system always has (at least) the trivial solution.
D8. (a) True. A matrix can have more than one row echelon form; different sequences of row operations applied to the same matrix can produce different row echelon forms.
(b) False. The reduced row echelon form of a matrix is unique.
(c) False. The appearance of a row of zeros means that there was some redundancy in the system. But the remaining equations may be inconsistent, have exactly one solution, or have infinitely many solutions. All of these are possible.
(d) False. There may be redundancy in the system. For example, the system consisting of the equations x + y = 1, 2x + 2y = 2, and 3x + 3y = 3 has infinitely many solutions.

D9. The system is linear in the variables x = sin α, y = cos β, z = tan γ, and this system has only the trivial solution x = y = z = 0. Thus sin α = 0, cos β = 0, tan γ = 0. It follows that α = 0, π, or 2π; β = π/2 or 3π/2; and γ = 0, π, or 2π. There are eighteen possible combinations in all. This does not contradict Theorem 2.1.1 since the equations are not linear in the variables α, β, γ.
WORKING WITH PROOFS

P1. (a) If a ≠ 0, then the reduction can be accomplished as follows: multiply row 1 by 1/a, add -c times row 1 to row 2, multiply row 2 by a/(ad - bc), and then add -b/a times row 2 to row 1. If a = 0, then b ≠ 0 and c ≠ 0, so the reduction can be carried out as follows: interchange rows 1 and 2, multiply row 1 by 1/c, multiply row 2 by 1/b, and then add -d/c times row 2 to row 1.

(b) If [a b; c d] can be reduced to [1 0; 0 1], then the corresponding row operations on the augmented matrix [a b | k; c d | l] will reduce it to a matrix of the form [1 0 | K; 0 1 | L], and from this it follows that the system has the unique solution x = K, y = L.
CHAPTER 3
Matrices and Matrix Algebra
EXERCISE SET 3.1
1. Since two matrices are equal if and only if their corresponding entries are equal, we have a - b = 8, b + a = 1, 3d + c = 7, and 2d - c = 6. Adding the first two equations we see that 2a = 9; thus a = 9/2 and it follows that b = 1 - a = -7/2. Adding the second two equations we see that 5d = 13; thus d = 13/5 and c = 7 - 3d = -4/5.

2. For the two matrices to be equal, we must have a = 4, d - 2c = 3, d + 2c = -1, and a + b = -2. From the first and fourth equations, we see immediately that a = 4 and b = -2 - a = -6. Adding the second and third equations we see that 2d = 2; thus d = 1 and c = (-1 - d)/2 = -1.
3. (a) A has size 3 × 4; B^T has size 4 × 2.
(b) a32 = 3, a23 = 11
(c) aij = 3 if and only if (i, j) = (1, 1), (3, 1), or (3, 2)
(d) c1(A^T) is the first row of A written as a column vector.

4. (a) B has size 2 × 4; A^T has size 4 × 3.
(b) b21 = 4
(c) bij = 1 if and only if (i, j) = (2, 2) or (2, 3)

5. (a) A + 2B, (c) 4D - 3C^T, (d) D - D^T, and (e) G + (2F^T) are computed entrywise (details omitted).
(b) A - B^T is not defined.
(f) (7A - B) + E is not defined.
6. (a) 3C + D, (b) E - A^T, (c) 4D - 5D^T, (d) F - F^T, and (e) B + (4E^T) are computed entrywise (details omitted).
(f) (7C - D) + B is not defined.

7. (a) CD, (b) AE, (c) FG, (d) B^T B, and (e) BB^T are computed using the row-column rule (details omitted).
(f) GE is not defined.

8. (a) GA, (b) FB, (c) GB, (d) A^T G, and (e) EE^T are computed using the row-column rule (details omitted).
(f) DA is not defined.

9.-10. In each case Ax is computed by multiplying the matrix A into the column vector x (details omitted).

11.-14. These are routine translations between matrix equations and the corresponding linear systems (details omitted).

15. (AB)23 = r2(A) · c3(B) = (6)(4) + (5)(3) + (4)(5) = 59

16. (BA)21 = r2(B) · c1(A) = (0)(3) + (1)(6) + (3)(0) = 6

17. (a) r1(AB) = r1(A)B, (b) r3(AB) = r3(A)B, and (c) c1(AB) = Ac1(B); the entries are then computed directly (details omitted).

18. (a) r1(BA) = r1(B)A, (b) r3(BA) = r3(B)A, and (c) c2(BA) = Bc2(A); the entries are then computed directly (details omitted).
19. (a) tr(A) = (A)11 + (A)22 + (A)33 = 3 + 5 + 9 = 17
(b) tr(A^T) = (A^T)11 + (A^T)22 + (A^T)33 = 3 + 5 + 9 = 17
(c) tr(AB) = (AB)11 + (AB)22 + (AB)33 = 67 + 21 + 57 = 145, tr(A) = 17, tr(B) = 12; thus tr(AB) - tr(A)tr(B) = 145 - (17)(12) = 145 - 204 = -59

20. (a) tr(B) = 6 + 1 + 5 = 12
(b) tr(B^T) = 6 + 1 + 5 = 12
(c) tr(BA) = (BA)11 + (BA)22 + (BA)33 = 6 + 17 + 122 = 145, tr(A) = 3 + 5 + 9 = 17; thus tr(BA) - tr(B)tr(A) = 145 - (12)(17) = 145 - 204 = -59

21. (a) u^T v = [-2 3][4; 5] = -8 + 15 = 7
(b) uv^T = [-2; 3][4 5] = [-8 -10; 12 15]
(c) tr(uv^T) = -8 + 15 = 7 = u^T v
(d) v^T u = [4 5][-2; 3] = -8 + 15 = 7 = u^T v
(e) tr(uv^T) = tr(vu^T) = u · v = v · u = u^T v = v^T u = 7

22. (a) u^T v = [3 -4 5][2; 7; 0] = 6 - 28 + 0 = -22
(b) uv^T = [3; -4; 5][2 7 0] = [6 21 0; -8 -28 0; 10 35 0]
(c) tr(uv^T) = 6 - 28 + 0 = -22 = u^T v
(d) v^T u = [2 7 0][3; -4; 5] = 6 - 28 + 0 = -22 = u^T v
(e) tr(uv^T) = tr(vu^T) = u · v = v · u = u^T v = v^T u = -22
23. Carrying out the product gives (k + 1)^2, and (k + 1)^2 = 0 if and only if k = -1.
24. Carrying out the product gives 12 + 2(4 + 3k) + k(6 + k) = k^2 + 12k + 20, and k^2 + 12k + 20 = 0 if and only if k = -2 or k = -10.
25. Let F = C(DE). Then, from Theorem 3.1.7, the entry in the ith row and jth column of F is the dot product of the ith row of C and the jth column of DE; thus (F)23 is the dot product of the second row of C and the third column of DE.

27. Suppose that A is m × n and B is r × s. If AB is defined, then n = r. On the other hand, if BA is defined, then s = m. Thus A is m × n and B is n × m. It follows that AB is m × m and BA is n × n.

28. Suppose that A is m × n and B is r × s. If BA is defined, then s = m and BA is r × n. If, in addition, A(BA) is defined, then n = r. Thus B is an n × m matrix.
29. (a) If the ith row of A is a row of zeros, then (using the row rule) ri(AB) = ri(A)B = 0B = 0, and so the ith row of AB is a row of zeros.
(b) If the jth column of B is a column of zeros, then (using the column rule) cj(AB) = Acj(B) = A0 = 0, and so the jth column of AB is a column of zeros.

30. (a) If B and C have the same jth column, then cj(AB) = Acj(B) = Acj(C) = cj(AC), and so AB and AC have the same jth column.
(b) If B and C have the same ith row, then ri(BA) = ri(B)A = ri(C)A = ri(CA), and so BA and CA have the same ith row.

31. (a) If i ≠ j, then aij has unequal row and column numbers; that is, it is off (above or below) the main diagonal of the matrix [aij]; thus the matrix has zeros in all of the positions that are above or below the main diagonal:

[aij] = [a11 0 0 0 0 0; 0 a22 0 0 0 0; 0 0 a33 0 0 0; 0 0 0 a44 0 0; 0 0 0 0 a55 0; 0 0 0 0 0 a66]

(b) If i > j, then the entry aij has row number larger than column number; that is, it lies below the main diagonal. Thus [aij] has zeros in all of the positions below the main diagonal.
(c) If i < j, then the entry aij has row number smaller than column number; that is, it lies above the main diagonal. Thus [aij] has zeros in all of the positions above the main diagonal.
(d) If |i - j| > 1, then either i - j > 1 or i - j < -1; that is, either i > j + 1 or j > i + 1. The first of these inequalities says that the entry aij lies below the main diagonal and also below the "subdiagonal" consisting of entries immediately below the diagonal entries. The second inequality says that the entry aij lies above the diagonal and also above the entries immediately above the diagonal entries. Thus the matrix A has the following form:

A = [aij] = [a11 a12 0 0 0 0; a21 a22 a23 0 0 0; 0 a32 a33 a34 0 0; 0 0 a43 a44 a45 0; 0 0 0 a54 a55 a56; 0 0 0 0 a65 a66]

32. (a) The entry aij = i + j is the sum of the row and column numbers; thus each entry is one larger than the entry to its left and the entry above it, e.g. the first row is [2 3 4 ...] and the second row is [3 4 5 ...].
(b) The entry aij = (-1)^(i+j) is -1 if i + j is odd and +1 if i + j is even, so the entries alternate in sign in a checkerboard pattern, with +1 in the (1, 1) position.
(c) We have aij = -1 if i = j or i = j ± 1; otherwise aij = 1. Thus the entries on the main diagonal, and those on the subdiagonals immediately above and below the main diagonal, are all -1, whereas the remaining entries are all +1.

33. The components of the product of the quantity matrix and the price vector represent the total expenditures for purchases during each of the first four months of the year. For example, the February expenditures were (5)($1) + (6)($2) + (0)($3) = $17.
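The monthly-expenditure computation is a single matrix-vector product. In the sketch below only the February row (5, 6, 0) and the prices ($1, $2, $3) come from the text; the other rows of the quantity matrix are hypothetical fill-ins for illustration.

```python
import numpy as np

# Rows = Jan..Apr purchases, columns = the three items.
# Only the February row (5, 6, 0) is taken from the exercise.
Q = np.array([[1, 2, 3],
              [5, 6, 0],
              [2, 0, 1],
              [0, 4, 2]])
p = np.array([1, 2, 3])   # unit prices in dollars

totals = Q @ p            # total expenditure for each month
print(totals)             # the February entry is 5*1 + 6*2 + 0*3 = 17
```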
34. (a) The entries of the matrix M + J represent the total units sold in each of the categories during the months of May and June. For example, the total number of medium raincoats sold was M32 + J32 = 40 + 10 = 50.
(b) The entries of the matrix M - J represent the difference between May and June sales in each of the categories. Note that June sales were less than May sales in each case; thus these entries represent decreases.
(c) Let x = [1; 1; 1]. Then the components of Mx represent the total number (all sizes) of shirts, jeans, suits, and raincoats sold in May.
(d) Let y = [1 1 1 1]. Then the components of yM represent the total number of small, medium, and large items sold in May.
(e) The product yMx = 492 represents the total number of items (all sizes and categories) sold in May.
DISCUSSION AND DISCOVERY

D1. If AB has 6 rows and 8 columns, then A must have size 6 × k and B must have size k × 8; thus A has 6 rows and B has 8 columns.

D2. If A = [0 1; 0 0], then AA = [0 1; 0 0][0 1; 0 0] = [0 0; 0 0].
D3. Let A = [1 2; 1 1] and B = [1 1; 3 2]. In the following we illustrate three different methods for computing the product AB.

Method 1. Using Definition 3.1.6. This is the same as what is later referred to as the column rule. Since c1(AB) = Ac1(B) = [7; 4] and c2(AB) = Ac2(B) = [5; 3], we have AB = [7 5; 4 3].

Method 2. Using Theorem 3.1.7 (the dot product rule), each entry of AB is the dot product of a row of A and a column of B, and again AB = [7 5; 4 3].

Method 3. Using the row rule. Since r1(AB) = r1(A)B = [1 2][1 1; 3 2] = [7 5] and r2(AB) = r2(A)B = [1 1][1 1; 3 2] = [4 3], we have AB = [7 5; 4 3].
D4. There is only one 3 × 3 matrix with the stated property. Here is a proof: since xc1(A) + yc2(A) + zc3(A) = A[x; y; z], the required identity must hold for all x, y, and z. Taking x = 1, y = 0, z = 0 determines c1(A); similarly, taking y = 1 and then z = 1 determines c2(A) and c3(A). Thus all three columns of A are determined, and A is unique.

D5. There is no such matrix. Here is a proof (by contradiction): suppose A is a 3 × 3 matrix with the stated property for all x, y, and z. Then two different choices of (x, y, z) corresponding to the same input vector would force A to produce two different outputs on that vector, which is impossible. Thus there is no such matrix.

D6. (a) S1 and S2 are two square roots of the matrix A; squaring each of them gives A.
(b) Choosing the two signs independently gives four different square roots of the matrix B.
(c) Not all matrices have square roots. For example, it is easy to check that the matrix [0 1; 0 0] has no square root.
D7. Yes, the zero matrix A = [0 0 0; 0 0 0; 0 0 0] has the property that AB has three equal rows (rows of zeros) for every 3 × 3 matrix B.

D8. Yes, the matrix A = [2 0 0; 0 2 0; 0 0 2] has the property that AB = 2B for every 3 × 3 matrix B.

D9. (a) False. For example, if A is 2 × 3 and B is 3 × 2, then AB and BA are both defined.
(b) True. If AB and BA are both defined and A is m × n, then B must be n × m; thus AB is m × m and BA is n × n. If, in addition, AB + BA is defined, then AB and BA must have the same size, i.e. m = n.
(c) True. From the column rule, cj(AB) = Acj(B). Thus if B has a column of zeros, then AB will have a column of zeros.
(d) False. For example, the product of two nonzero matrices can be the zero matrix.
(e) True. If A is n × m, then A^T is m × n, AA^T is n × n, and A^T A is m × m. Thus A^T A and AA^T are both square matrices, and so tr(A^T A) and tr(AA^T) are both defined.
(f) False. If u and v are 1 × n row vectors, then u^T v is an n × n matrix.

D10. The second column of AB is also the sum of the first and third columns.

D11. (a) Σ_{k=1}^{s} a_ik b_kj = a_i1 b_1j + a_i2 b_2j + ··· + a_is b_sj
(b) This sum represents (AB)_ij, the ijth entry of the matrix AB.
WORKING WITH PROOFS
P1. Suppose B = [bij] is an s × n matrix and y = [y1 y2 ··· ys]. Then yB is the 1 × n row vector

yB = [y · c1(B)  y · c2(B)  ···  y · cn(B)]

whose jth component is y · cj(B). On the other hand, the jth component of the vector y1r1(B) + y2r2(B) + ··· + ysrs(B) is y1b1j + y2b2j + ··· + ysbsj = y · cj(B). Thus Formula (21) is valid.
P2. Since Ax = x1c1(A) + x2c2(A) + ··· + xncn(A), the linear system Ax = b is equivalent to

x1c1(A) + x2c2(A) + ··· + xncn(A) = b

Thus the system Ax = b is consistent if and only if the vector b can be expressed as a linear combination of the column vectors of A.
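The identity used in P2 — that Ax is the linear combination of the columns of A with the entries of x as coefficients — is easy to check numerically. A minimal sketch with an arbitrary example matrix:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4],
              [5, 6]])
x = np.array([2, -1])

lhs = A @ x                                  # the product Ax
rhs = x[0] * A[:, 0] + x[1] * A[:, 1]        # x1*c1(A) + x2*c2(A)
print(lhs, rhs)                              # identical vectors
```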
EXERCISE SET 3.2
1. (a) (A + B) + C and A + (B + C) are computed entrywise and are equal (details omitted).
(b) (AB)C and A(BC) are computed by the row-column rule and are equal (details omitted).
(c) aC + bC = (a + b)C (details omitted).
(d) a(B - C) = aB - aC (details omitted).

2. (a) a(BC) = (aB)C = B(aC) (details omitted).
(b) (B + C)A = BA + CA (details omitted).

3. (a) (A^T)^T = A
(b) (A + B)^T = A^T + B^T
(c) (3C)^T = 3C^T

4. (b) (B - C)^T = B^T - C^T
(d) (BC)^T = C^T B^T

5. (a) tr(A) = 2 + 4 + 4 = 10 and tr(A^T) = 2 + 4 + 4 = 10; thus tr(A^T) = tr(A).
(b) tr(3A) = 6 + 12 + 12 = 30 = 3tr(A).
(c) tr(A) = 10, tr(B) = 8 + 1 + 6 = 15, and tr(A + B) = 25; thus tr(A + B) = tr(A) + tr(B).
(d) tr(AB) = 28 - 31 + 36 = 33 and tr(BA) = 26 + 6 + 1 = 33; thus tr(AB) = tr(BA).
6. (c) tr(A - B) = -6 + 3 - 2 = -5 = 10 - 15 = tr(A) - tr(B).
(d) tr(BC) = -18 + 17 + 38 = 37 and tr(CB) = 12 - 24 + 49 = 37; thus tr(BC) = tr(CB).

7. (a) A matrix X satisfies the equation tr(B)A
+ 3X = BC if and only if 3X = BC - tr(B)A, in which case we have X = (1/3)(BC - tr(B)A) (the entries are computed directly).

(b) A matrix X satisfies the equation B + (A + X)^T = C if and only if

(A + X)^T = C - B
A + X = ((A + X)^T)^T = (C - B)^T
X = (C - B)^T - A = C^T - B^T - A

Thus X = C^T - B^T - A (the entries are computed directly).
8. These are solved in the same way as Exercise 7; in part (b), X = C^T - B^T (entries omitted).

9. (a) det(A) = 6 - 5 = 1, and A^{-1} = [2 -5; -1 3].
(b) det(A^{-1}) = 1, and (A^{-1})^{-1} = [3 5; 1 2] = A.
(c) A^T = [3 1; 5 2], det(A^T) = 1, and (A^T)^{-1} = [2 -1; -5 3] = (A^{-1})^T.
(d) 2A = [6 10; 2 4], det(2A) = 4, and (2A)^{-1} = (1/4)[4 -10; -2 6] = (1/2)A^{-1}.
Exercise Set 3.2
10. (a)
67
(b) det(B-1)
(c) BT
=[
= _!_
+ 12
= 8
202
(B-I)-1 =
20
T
2 :] , det(BT) = 20
(B )
-a
(d) 3B = [
ll. (a)
1 [ 4
s-1_20 ·-4
det(B)=8+12=20
(AB)
-9]
6
12
-1
12
, det(3B)
= 180
( 3B)
-i
-1
-•!])
20 (_!_
[2
20 4
1 [4
=
~l
20 :;
[~
=
18
-7
= 20 -18
s-lA-• = 2~ [_:
1 [ 12
-12
= 180
9]6 = 601[-44 ~]
10
~] [_~ -~) = ;o [-~~ 1~]
-l
(b) (ABC)_ 1
_
70
122
[
-
45]
79
__ 1 [ 79
- 40 -122
2
12 _ (a} (BC)_ 1
.,.-
.
2
32
u-lc-1 n-1
(C _B)-I
= _!_ [
2
=
AB
36
=~
22
40 -16
~W
-33]
-:] 2~)
aG
[._:
~} = 2~0 [ --~~ -~!]
__!__
4
18
70
12 -111
18J
= __!__ [
[-5 -7] [10 -5] = [-88
6
40 -122
18
= _1 [
240 --32
~ [~ ~] ~ r- ~
3
-5
-11]
12
10 -16
= [36 33] - I
·
14 . X=
12
6 20 -4
r-12 -4]6 20_!__ [ -44 3]2
c-ts-1 =!
(b) (BCD)_ 1
2
11] -l
[18
16
-·45]
70
-4] _!_ [ 4 3] [ 2 -1] = l [ 79 -45]
c-ls-~A- 1 =~[-I
- 7
11
66
4 -::::B
--~) = (IJ-l)T
[10 -5]-l 1[-7 5]
=
-3]
37]
- 29
=
~ 8 -1
16. These inverses are computed directly (details omitted).

19. (a) Given that A^{-1} = [2 -1; 3 5], and noting that det(A^{-1}) = 13, we have A = (A^{-1})^{-1} = (1/13)[5 1; -3 2].
(b) Given (5A)^{-1}, we have 5A = ((5A)^{-1})^{-1}, and so A = (1/5)((5A)^{-1})^{-1} (details omitted).

20. (a) Given (5A^T)^{-1}, we have 5A^T = ((5A^T)^{-1})^{-1}, and so A = (1/5)(((5A^T)^{-1})^{-1})^T (details omitted).
(b) Given (I + 2A)^{-1}, we have I + 2A = ((I + 2A)^{-1})^{-1}, and so A = (1/2)(((I + 2A)^{-1})^{-1} - I) (details omitted).

21. The matrix A is invertible if and only if det(A) ≠ 0, and this holds if and only if c ≠ 0, 1.

22. The matrix A is invertible if and only if det(A) = -c^2 + 1 ≠ 0, i.e. if and only if c ≠ ±1.

23.-24. In each case one example is given, and any matrix of the indicated general form has the required property (details omitted).

25. Let X = [xij]_{3×3}. Then AX = I if and only if the entries of X satisfy the system of nine linear equations obtained by equating corresponding entries of AX and I. Solving this system (details omitted) shows that it has a unique solution; thus A is invertible and A^{-1} = [xij].
26. Let X = [xij]_{3×3}. Then AX = I if and only if

[1 0 0; 0 1 0; 2 0 1][x11 x12 x13; x21 x22 x23; x31 x32 x33] = [1 0 0; 0 1 0; 0 0 1]

i.e. if and only if x11 = 1, x12 = 0, x13 = 0, x21 = 0, x22 = 1, x23 = 0, 2x11 + x31 = 0, 2x12 + x32 = 0, and 2x13 + x33 = 1. This is a very simple system of equations which has the solution

x11 = 1, x12 = 0, x13 = 0, x21 = 0, x22 = 1, x23 = 0, x31 = -2, x32 = 0, x33 = 1

Thus A is invertible and A^{-1} = [1 0 0; 0 1 0; -2 0 1].
28. (a) The matrix uv^T = [-3; 2; 4][1 -1 3] = [-3 3 -9; 2 -2 6; 4 -4 12] has the property that each of its rows is a scalar multiple of v^T (and each of its columns is a scalar multiple of u).
(b) u · Av = -1 - 12 + 63 = 50 and A^T u · v = -6 + 8 + 48 = 50; thus u · Av = A^T u · v.

29. If A is invertible and AB = AC then, using the Associative Law (Theorem 3.2.2(a)), we have B = IB = (A^{-1}A)B = A^{-1}(AB) = A^{-1}(AC) = (A^{-1}A)C = IC = C.

30. If A is invertible and AC = 0, then C = IC = (A^{-1}A)C = A^{-1}(AC) = A^{-1}0 = 0. Similarly, if C is invertible and AC = 0, then A = AI = A(CC^{-1}) = (AC)C^{-1} = 0C^{-1} = 0.

31. (a) If A = [cos θ -sin θ; sin θ cos θ], then det(A) = cos^2 θ + sin^2 θ = 1. Thus A is invertible and

A^{-1} = [cos θ sin θ; -sin θ cos θ]

for every value of θ.
(b) The given system can be written as [x'; y'] = [cos θ -sin θ; sin θ cos θ][x; y], i.e. x' = x cos θ - y sin θ and y' = x sin θ + y cos θ.

32. (a) If A^2 = A, then (I - A)^2 = I - 2A + A^2 = I - 2A + A = I - A; thus I - A is idempotent.
(b) If A^2 = A, then (2A - I)(2A - I) = 4A^2 - 4A + I = 4A - 4A + I = I; thus 2A - I is invertible and (2A - I)^{-1} = 2A - I.

33. (a) If A and B are invertible square matrices of the same size, and if A + B is also invertible, then

A(A^{-1} + B^{-1})B(A + B)^{-1} = (I + AB^{-1})B(A + B)^{-1} = (B + A)(A + B)^{-1} = I

(b) From part (a) it follows that A(A^{-1} + B^{-1})B = A + B, and so A^{-1} + B^{-1} = A^{-1}(A + B)B^{-1}. Thus the matrix A^{-1} + B^{-1} is invertible and (A^{-1} + B^{-1})^{-1} = B(A + B)^{-1}A.

34. Since u^T v = u · v = v · u = v^T u, we have (uv^T)^2 = (uv^T)(uv^T) = u(v^T u)v^T = (v^T u)uv^T = (u^T v)uv^T. Thus, if u^T v ≠ -1, the matrix A = I + uv^T is invertible with A^{-1} = I - (1/(1 + u^T v))uv^T.
35. If A = [a b; c d] and p(x) = x^2 - (a + d)x + (ad - bc), then

p(A) = A^2 - (a + d)A + (ad - bc)I
     = [a^2+bc  ab+bd; ca+dc  cb+d^2] - [a^2+da  ab+db; ac+dc  ad+d^2] + (ad - bc)[1 0; 0 1]
     = [0 0; 0 0]
36. From parts (d) and (e) of Theorem 3.2.12, it follows that

tr(AB - BA) = tr(AB) - tr(BA) = 0

for any two n × n matrices A and B. On the other hand, tr(I_n) = 1 + 1 + ··· + 1 = n. Thus it is not possible to have AB - BA = I_n.
37. The adjacency matrix A and its square A^2 are computed using the row-column rule (entries omitted). The entry in the ijth position of the matrix A^2 represents the number of ways of traveling from i to j with one intermediate stop. For example, there are three such ways of traveling from 1 to 6 (through 2, 3, or 4) and two such ways of traveling from 2 to 7 (through 5 or 6).

In general, the entry in the ijth position of the matrix A^n represents the number of ways of traveling from i to j with exactly n - 1 intermediate stops.
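The walk-counting interpretation of A^2 can be demonstrated on any small graph. The graph below is a hypothetical example (not the one in the exercise):

```python
import numpy as np

# Adjacency matrix of a small directed graph: entry (i, j) is 1
# when there is an arc from vertex i to vertex j.
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 1],
              [0, 0, 0, 1],
              [0, 0, 0, 0]])

A2 = A @ A
# (A^2)[i, j] counts the walks from i to j with exactly one intermediate stop.
print(A2[0, 3])  # vertex 0 reaches vertex 3 through 1 or through 2
```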
DISCUSSION AND DISCOVERY

D1. (a) For the matrices A and B given in the exercise, direct computation shows that (A + B)(A - B) ≠ A^2 - B^2.
(b), (c) Expanding, (A + B)(A - B) = A^2 - AB + BA - B^2; thus (A + B)(A - B) = A^2 - B^2 if and only if AB = BA.

D2. If A is any one of the eight indicated matrices, then A^2 = I.

D3. (a) If A^2 + 2A + I = 0, then I = -A^2 - 2A = A(-A - 2I); thus A is invertible and A^{-1} = -A - 2I.
(b) If p(A) = a_n A^n + a_{n-1} A^{n-1} + ··· + a_1 A + a_0 I = 0, where a_0 ≠ 0, then

A(-(a_n/a_0)A^{n-1} - (a_{n-1}/a_0)A^{n-2} - ··· - (a_1/a_0)I) = I

Thus A is invertible with A^{-1} = -(a_n/a_0)A^{n-1} - (a_{n-1}/a_0)A^{n-2} - ··· - (a_1/a_0)I.

D4. No. First note that if A^3 is defined then A must be square. Thus if A^3 = AA^2 = I, it follows that A is invertible with A^{-1} = A^2.
D5. (a) False. (AB)^2 = (AB)(AB) = A(BA)B. If A and B commute then (AB)^2 = A^2B^2, but if BA ≠ AB then this will not in general be true.
(b) True. Expanding both expressions, we have (A - B)^2 = A^2 - AB - BA + B^2 and (B - A)^2 = B^2 - BA - AB + A^2; thus (A - B)^2 = (B - A)^2.
(c) True. The basic fact (from Theorem 3.2.11) is that (A^T)^{-1} = (A^{-1})^T, and from this it follows that (A^{-n})^T = ((A^n)^{-1})^T = ((A^n)^T)^{-1} = ((A^T)^n)^{-1} = (A^T)^{-n}.
(d) False. For example, if A = [1 1; 0 1] and B = [1 0; 1 1], then tr(AB) = tr([2 1; 1 1]) = 3, whereas tr(A)tr(B) = (2)(2) = 4.
(e) False. For example, if B = -A then A + B = 0 is not invertible (whether A is invertible or not).

D6. (a)
If A is invertible, then the system Ax = b has a unique solution for every vector b in R^3, namely x = A^{-1}b. Let x1, x2, and x3 be the solutions of Ax = e1, Ax = e2, and Ax = e3 respectively, and let B be the matrix having these vectors as its columns: B = [x1 x2 x3]. Then we have AB = A[x1 x2 x3] = [Ax1 Ax2 Ax3] = [e1 e2 e3] = I. Thus A^{-1} = B = [x1 x2 x3].
(b) From part (a), the columns of the matrix A^{-1} are the solutions of Ax = e1, Ax = e2, and Ax = e3; each can be found by row reducing the corresponding augmented matrix (details omitted).

D7. The matrices [1 0; 0 1], [1 1; 0 1], and [1 0; 1 1] have determinant equal to 1, and the matrices [0 1; 1 0], [0 1; 1 1], and [1 1; 1 0] have determinant equal to -1. These six matrices are invertible. The other ten matrices have determinant equal to 0, and thus are not invertible.
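The construction in D6 — building A^{-1} column by column from the solutions of Ax = e_j — can be carried out directly. A minimal sketch with an arbitrary invertible matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [5.0, 3.0]])

# Solve Ax = e_j for each standard basis vector; the solutions are the
# columns of the inverse.
cols = [np.linalg.solve(A, e) for e in np.eye(2)]
B = np.column_stack(cols)

print(np.allclose(B, np.linalg.inv(A)))  # B really is A^{-1}
```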
D8. True. If AB = BA, then B^{-1}A^{-1} = (AB)^{-1} = (BA)^{-1} = A^{-1}B^{-1}.
WORKING WITH PROOFS

P1. We proceed as in the proof of part (b) given in the text. It is clear that the matrices (ab)A and a(bA) have the same size, and comparing corresponding entries, ((ab)A)_ij = (ab)a_ij = a(b a_ij) = (a(bA))_ij, shows that they are equal.

P2. As in P1, a comparison of corresponding entries shows that the two sides are equal.

P3. The argument that the matrices A(B - C) and AB - AC must have the same size is the same as in the proof of part (b) given in the text. The following shows that corresponding column vectors are equal:

cj[A(B - C)] = Acj(B - C) = A(cj(B) - cj(C)) = Acj(B) - Acj(C) = cj(AB) - cj(AC) = cj(AB - AC)

P4. These three matrices clearly have the same size. The following shows that corresponding column vectors are equal:

cj[a(BC)] = acj(BC) = a(Bcj(C)) = (aB)cj(C) = cj[(aB)C]
cj[a(BC)] = acj(BC) = a(Bcj(C)) = B(acj(C)) = B(cj(aC)) = cj[B(aC)]

P5. If cA = 0 and c ≠ 0 then, using Theorem 3.2.1(c), we have A = 1A = ((1/c)c)A = (1/c)(cA) = (1/c)0 = 0.

P6. (a) If A is invertible, then AA^{-1} = I and A^{-1}A = I; thus A^{-1} is invertible and (A^{-1})^{-1} = A.
(b) If A is invertible then, from Theorem 3.2.8, A^2 is invertible and (A^2)^{-1} = A^{-1}A^{-1} = (A^{-1})^2. It then follows that A^3 is invertible and (A^3)^{-1} = (A^2)^{-1}A^{-1} = (A^{-1})^2 A^{-1} = (A^{-1})^3, etc. In general, from the remark following Theorem 3.2.8, (A^n)^{-1} = A^{-1}···A^{-1} = (A^{-1})^n.
P7. (a) With A^{-1} = (1/(ad - bc))[d -b; -c a], direct computation gives

AA^{-1} = [a b; c d]((1/(ad - bc))[d -b; -c a]) = (1/(ad - bc))[ad - bc  0; 0  ad - bc] = I

and similarly A^{-1}A = I.

(b) Let A = [a b; c d] where ad - bc = 0. Then A is invertible if and only if there are scalars e, f, g, and h such that

[a b; c d][e f; g h] = [1 0; 0 1]

i.e. if and only if the following system of equations is consistent:

ae + bg = 1
ce + dg = 0
af + bh = 0
cf + dh = 1

Multiply the first equation by d, multiply the second equation by b, and subtract. This leads to

(da - bc)e = d

and from this we conclude that d = 0 is a necessary condition in order for the system to be consistent. It then follows (since ad - bc = 0) that bc = 0 and so either b = 0 or c = 0. Let us assume that b = 0 (the case c = 0 can be handled similarly). Then the equations reduce to

ae = 1
ce = 0
af = 0
cf = 1

and these equations are easily seen to be inconsistent. Why? From ae = 1 we conclude that e ≠ 0; and from ce = 0 we conclude that c = 0 or e = 0. From this it follows that c must be equal to 0. But this is inconsistent with cf = 1! In summary, we have shown that if ad - bc = 0, then the system of equations has no solution and so the matrix A is not invertible.
P8. tr(AB) = Σ_{k=1}^{n} (AB)_{kk} = Σ_{k=1}^{n} Σ_{l=1}^{n} a_{kl} b_{lk} = Σ_{l=1}^{n} Σ_{k=1}^{n} b_{lk} a_{kl} = Σ_{l=1}^{n} (BA)_{ll} = tr(BA)
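The identity tr(AB) = tr(BA) is easy to spot-check numerically on random matrices. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# The two traces agree even though AB and BA are generally different matrices.
print(np.isclose(np.trace(A @ B), np.trace(B @ A)))
```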
EXERCISE SET 3.3
1. (a) This matrix is elementary; it is obtained from I_2 by adding -5 times the first row to the second row.
(b) This matrix is not elementary since two row operations are needed to obtain it from I_2 (follow the one in part (a) by interchanging the rows).
(c) This matrix is elementary; it is obtained from I_3 by interchanging the first and third rows.
(d) This matrix is not invertible, and therefore not elementary, since it has a row of zeros.

2. (c) is elementary; (a), (b), and (d) are not elementary.
3. (a) Add 3 times the first row to the second row.
(b) Multiply the third row by 1/2.
(c) Interchange the first and fourth rows.
(d) Add 1/2 times the third row to the first row.

4. (a) Add -2 times the second row to the first row.
(b) Multiply the first row by -1.
(c) Interchange the first and third rows.
(d) Add -12 times the second row to the fourth row.
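An elementary matrix is obtained by applying a single row operation to the identity, and multiplying by it on the left performs that same operation. A minimal sketch (the 3 × 3 example is our own):

```python
import numpy as np

# The elementary matrix for "add 3 times row 1 to row 2":
# apply the operation to the identity.
E = np.eye(3)
E[1, :] += 3 * E[0, :]

A = np.arange(9.0).reshape(3, 3)
B = A.copy()
B[1, :] += 3 * B[0, :]          # the row operation applied to A directly

print(np.array_equal(E @ A, B))  # left-multiplying by E performs the operation
```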
5.-6. In each case the inverse is the elementary matrix that corresponds to the inverse row operation (entries omitted).
7. (a) B is obtained from A by interchanging the first and third rows; thus EA = B where E = [0 0 1; 0 1 0; 1 0 0].
(b) EB = A where E is the same as in part (a).
(c) C is obtained from A by adding -2 times the first row to the third row; thus EA = C where E = [1 0 0; 0 1 0; -2 0 1].
(d) EC = A where E is the inverse of the matrix in part (c), i.e. E = [1 0 0; 0 1 0; 2 0 1].

8. (a)-(d) The elementary matrices are found exactly as in Exercise 7 (entries omitted).
9. Using the method of Example 3, we start with the partitioned matrix [A | I] and perform row operations aimed at reducing the left side to I; these same row operations performed simultaneously on the right side will produce the matrix A^{-1}.

[1  5 | 1 0]
[2 20 | 0 1]

Add -2 times the first row to the second row.

[1  5 |  1 0]
[0 10 | -2 1]

Multiply the second row by 1/10, then add -5 times the new second row to the first row.

[1 0 |    2 -1/2]
[0 1 | -1/5 1/10]

Thus A^{-1} = [2 -1/2; -1/5 1/10]. On the other hand, using the formula from Theorem 3.2.7 and the fact that det(A) = 10, we have

A^{-1} = (1/10)[20 -5; -2 1] = [2 -1/2; -1/5 1/10]
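The [A | I] reduction can be sketched as a short routine. This is an illustration only (it does no pivot search, so it assumes the pivots encountered are nonzero); it is applied here to a sample 2 × 2 matrix.

```python
import numpy as np

def invert(A):
    """Invert A by row-reducing the partitioned matrix [A | I]."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])
    for i in range(n):
        M[i] /= M[i, i]                   # scale the pivot row
        for k in range(n):
            if k != i:
                M[k] -= M[k, i] * M[i]    # clear the rest of the column
    return M[:, n:]                       # the right block is now A^{-1}

A = np.array([[1.0, 5.0],
              [2.0, 20.0]])
print(invert(A))   # matches (1/10)[[20, -5], [-2, 1]]
```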
10. Proceeding as in Exercise 9, we find A^{-1} = (1/14)[1 3; -4 2].
Start with the partitioned matrix [A
I I J.
~]
3l0
1
-1 : 1
0
0
~]
I
I
I
J
0
0
5 - 4l0
[!
Interchange rows 1 and 2.
0
1
0
- 1 l1
4
[~
0
4
I
5 - 4l0
Add - 3 times row 1 to row 2. Add - 2 times row 1 to row 3.
[~
3:o
1
4 - 10:I 1
5 -10: 0
-3
-2
0
~]
Add - 1 t imes row 2 to row 3.
[~
3l
0
4
-3
0: -1
1
- 10:
I
1
~]
1
0
1
Add -4 times row 3 to row 2, then interchange rows 2 and 3.
[~
Multiply row 3 by -
1
,
10
0
3:
0
l
Q
'
--1
0 - 10:
5
I
I
1 0]
~
__
:
then add -3 times the new row 3 to row 1.
0 :I ~~
0 : -1
0
1
: _l
I
-- ~5 ]
_!l
10
2
1
1
7
10
S
From this we conclude that A is i nvertiblc , and that A-
2
1
-
[
~;
11
-TO
l
7
10
(b)
Start with the partitioned matrix [A
- ~] ·
'2
6
I IJ.
[-~
3 -4
-4
-9
4
2
I
l
0
0
0
1
0
~]
Multiply row 1 by -1. Add - 2 times the new row 1 to row 2; add 4 times the new row 1 to
row 3.
1
-3
0
10
[ ()
- 10
0
1
0
~]
77
EXERCISE SET 3.3
Add row 2 to row 3. At this point, since we have obtained a row of zeros on the left side, we conclude that the matrix A is not invertible.
(c) Start with the partitioned matrix [A | I]. Add -1 times row 1 to row 3; add -1 times row 2 to row 3, then multiply the new row 3 by -1/2; finally, add -1 times row 3 to rows 2 and 1. The left side reduces to I, so we conclude that A is invertible, and A^{-1} is the matrix produced in the right-hand block.
12. (a), (b), (c) In each part the inversion algorithm reduces [A | I] to [I | A^{-1}]; thus A is invertible, and A^{-1} is the matrix appearing in the right-hand block.
13. As in the inversion algorithm, we start with the partitioned matrix [A | I] and perform row operations aimed at reducing the left side to its reduced row echelon form R. If A is invertible, then R will be the identity matrix and the matrix produced on the right side will be A^{-1}. In the more general situation the reduced matrix will have the form [R | B] = [BA | BI], where the matrix B on the right has the property that BA = R. [Note that B is a product of elementary matrices and thus is always an invertible matrix, whether A is invertible or not.]

[1 2 3 | 1 0 0]
[0 0 1 | 0 1 0]
[1 2 4 | 0 0 1]

Add -1 times row 1 to row 3.

[1 2 3 |  1 0 0]
[0 0 1 |  0 1 0]
[0 0 1 | -1 0 1]

Add -1 times row 2 to row 3.

[1 2 3 |  1  0 0]
[0 0 1 |  0  1 0]
[0 0 0 | -1 -1 1]

Add -3 times row 2 to row 1.

[1 2 0 |  1 -3 0]
[0 0 1 |  0  1 0]
[0 0 0 | -1 -1 1]

The reduced row echelon form of A is R = [1 2 0; 0 0 1; 0 0 0], and the matrix B = [1 -3 0; 0 1 0; -1 -1 1] has the property that BA = R.
14. Proceeding as in Exercise 13, row reduction of the partitioned matrix [A | I] produces [R | B], where R is the reduced row echelon form of A and B satisfies BA = R.
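The [R | B] computation of exercise 13 can be verified with SymPy. The sketch below (not from the manual) applies the same three row operations to [A | I], using the matrix A = [1 2 3; 0 0 1; 1 2 4] as read from the solution:

```python
import sympy as sp

# Sketch of exercise 13: reducing [A | I] by the operations that carry A
# to its reduced row echelon form R leaves [R | B] with BA = R.
A = sp.Matrix([[1, 2, 3], [0, 0, 1], [1, 2, 4]])
M = sp.Matrix.hstack(A, sp.eye(3))

M[2, :] = M[2, :] - M[0, :]      # add -1 times row 1 to row 3
M[2, :] = M[2, :] - M[1, :]      # add -1 times row 2 to row 3
M[0, :] = M[0, :] - 3 * M[1, :]  # add -3 times row 2 to row 1

R, B = M[:, :3], M[:, 3:]        # left block R, right block B
```

Because B is a product of elementary matrices, it is invertible whether or not A is.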
15. If c = 0, then the first row is a row of zeros, so the matrix is not invertible. If c ≠ 0, then after multiplying the first row by 1/c, we have:

[1 1 1]
[1 c 1]
[1 1 c]

Add -1 times the first row to the second row and to the third row.

[1   1   1]
[0 c-1   0]
[0   0 c-1]

If c = 1, then the second and third rows are rows of zeros, and so the matrix is not invertible. If c ≠ 1, then we can divide the second and third rows by c-1, obtaining

[1 1 1]
[0 1 0]
[0 0 1]

and from this it is clear that the reduced row echelon form is the identity matrix. Thus we conclude that the matrix is invertible if and only if c ≠ 0, 1.
16. The matrix is invertible if and only if c ≠ 0, ±√2.
17. The matrix B is obtained by starting with the identity matrix and performing the same sequence of row operations; thus B is the product of the corresponding elementary matrices.

18. B is obtained in the same way as in Exercise 17.
19. If any one of the ki's is 0, then the matrix A has a zero row and thus is not invertible. If the ki's are all nonzero, then multiplying the ith row of the partitioned matrix [A | I] by 1/ki for i = 1, 2, 3, 4, and then reversing the order of the rows, yields [I | A^{-1}]; thus A is invertible, and A^{-1} is the matrix appearing in the right-hand block.

20. The matrix is invertible if and only if k ≠ 0.

21. (a) The identity matrix can be obtained from A by first adding 5 times row 1 to row 3, and then multiplying row 3 by a suitable nonzero scalar. Thus if E1 and E2 are the elementary matrices corresponding to these two operations, then E2E1A = I.
(b) A^{-1} = E2E1, where E1 and E2 are as in part (a).
(c) A = E1^{-1}E2^{-1}, where E1 and E2 are as in part (a).

22. (a) The identity matrix can likewise be obtained from A by a sequence of two row operations; if E1 and E2 are the corresponding elementary matrices, then E2E1A = I.
(b) A^{-1} = E2E1, where E1 and E2 are as in part (a).
(c) A = E1^{-1}E2^{-1}, where E1 and E2 are as in part (a).
23. The identity matrix can be obtained from A by the following sequence of row operations. The corresponding elementary matrices and their inverses are as indicated.

(1) Interchange rows 1 and 3.      E1 = [0 0 1; 0 1 0; 1 0 0],    E1^{-1} = [0 0 1; 0 1 0; 1 0 0]
(2) Add -1 times row 1 to row 2.   E2 = [1 0 0; -1 1 0; 0 0 1],   E2^{-1} = [1 0 0; 1 1 0; 0 0 1]
(3) Add -2 times row 1 to row 3.   E3 = [1 0 0; 0 1 0; -2 0 1],   E3^{-1} = [1 0 0; 0 1 0; 2 0 1]
(4) Add row 2 to row 3.            E4 = [1 0 0; 0 1 0; 0 1 1],    E4^{-1} = [1 0 0; 0 1 0; 0 -1 1]
(5) Multiply row 3 by -1/2.        E5 = [1 0 0; 0 1 0; 0 0 -1/2], E5^{-1} = [1 0 0; 0 1 0; 0 0 -2]
(6) Add row 3 to row 2.            E6 = [1 0 0; 0 1 1; 0 0 1],    E6^{-1} = [1 0 0; 0 1 -1; 0 0 1]
(7) Add -2 times row 3 to row 1.   E7 = [1 0 -2; 0 1 0; 0 0 1],   E7^{-1} = [1 0 2; 0 1 0; 0 0 1]
(8) Add -1 times row 2 to row 1.   E8 = [1 -1 0; 0 1 0; 0 0 1],   E8^{-1} = [1 1 0; 0 1 0; 0 0 1]

It follows that A^{-1} = E8E7E6E5E4E3E2E1 and A = E1^{-1}E2^{-1}E3^{-1}E4^{-1}E5^{-1}E6^{-1}E7^{-1}E8^{-1}.
25. The two systems have the same coefficient matrix. If we augment this matrix with the two columns of constants on the right sides and row reduce, the reduced row echelon form exhibits the solutions of both systems at once. From this we conclude that the first system has the solution x1 = -9, x2 = 4, x3 = 0, and the second system has the solution x1 = 4, x2 = -4, x3 = 4.
26. The coefficient matrix, augmented with the two columns of constants, row reduces to

[1 0 0 | -32 | 11]
[0 1 0 |   8 | -2]
[0 0 1 |   3 | -1]

Thus the first system has the solution x1 = -32, x2 = 8, x3 = 3, and the second system has the solution x1 = 11, x2 = -2, x3 = -1.
27. (a) The two systems can be written in matrix form as Ax = b1 and Ax = b2, where A is the common coefficient matrix. Computing A^{-1}, the solutions are given by x = A^{-1}b1 and x = A^{-1}b2.
(b) Evaluating A^{-1}b1 and A^{-1}b2 reproduces the solutions found in Exercise 25.

28. The two systems can likewise be written in matrix form as Ax = b1 and Ax = b2; computing the inverse of the common coefficient matrix A, the solutions are given by x = A^{-1}b1 and x = A^{-1}b2.
29. The augmented matrix can be row reduced to a matrix whose last row is (0 0 | b1 - 2b2). Thus the system is consistent if and only if b1 - 2b2 = 0, i.e. if and only if b1 = 2b2.

30. The augmented matrix

[ 1 -2  5 | b1]
[ 4 -5  8 | b2]
[-3  3 -3 | b3]

can be row reduced to

[1 -2   5 | b1           ]
[0  3 -12 | b2 - 4b1     ]
[0  0   0 | -b1 + b2 + b3]

Thus the system is consistent if and only if -b1 + b2 + b3 = 0.

31. The augmented matrix, whose first row is (1 -2 -1 | b1), can be row reduced to a matrix whose last row is (0 0 0 | 2b1 - b2 + b3). Thus the system is consistent if and only if 2b1 - b2 + b3 = 0.
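The consistency conditions in exercises 29-31 can be checked numerically: Ax = b is consistent exactly when rank(A) = rank([A | b]). A sketch (not from the manual), using exercise 30's coefficient matrix:

```python
import numpy as np

# Consistency test: Ax = b is consistent iff augmenting b does not
# increase the rank.  Exercise 30's condition is -b1 + b2 + b3 = 0.
A = np.array([[1, -2, 5],
              [4, -5, 8],
              [-3, 3, -3]])

def consistent(A, b):
    aug = np.column_stack([A, b])
    return np.linalg.matrix_rank(A) == np.linalg.matrix_rank(aug)
```

For example, b = (1, 1, 0) satisfies -b1 + b2 + b3 = 0 while b = (1, 1, 1) does not, and the rank test agrees.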
32. The augmented matrix for this system can be row reduced to a matrix whose last two rows are zero on the left-hand side, with right-hand entries -b1 + b3 + b4 and -2b1 + b2 + b4. From this it follows that the system Ax = b is consistent if and only if the components of b satisfy the equations -b1 + b3 + b4 = 0 and -2b1 + b2 + b4 = 0. The general solution of this homogeneous system can be written as b1 = s + t, b2 = 2s + t, b3 = s, b4 = t. Thus the original system Ax = b is consistent if and only if the vector b is of the form

b = s(1, 2, 1, 0) + t(1, 1, 0, 1)
33. The matrix A can be reduced to a row echelon form R by the following sequence of row operations:

A = [ 0  1  7  8]
    [ 1  3  3  8]
    [-2 -5  1 -8]

(1) Interchange rows 1 and 2.

[ 1  3  3  8]
[ 0  1  7  8]
[-2 -5  1 -8]

(2) Add 2 times row 1 to row 3.

[1 3 3 8]
[0 1 7 8]
[0 1 7 8]

(3) Add -1 times row 2 to row 3.

R = [1 3 3 8]
    [0 1 7 8]
    [0 0 0 0]

It follows from this that R = E3E2E1A where E1, E2, E3 are the elementary matrices corresponding to the row operations indicated above. Finally, we have the factorization

A = EFGR

where E = E1^{-1} = [0 1 0; 1 0 0; 0 0 1], F = E2^{-1} = [1 0 0; 0 1 0; -2 0 1], and G = E3^{-1} = [1 0 0; 0 1 0; 0 1 1].
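The factorization A = EFGR in exercise 33 is easy to verify by multiplying out the matrices. The sketch below uses the matrices as read from the solution:

```python
import numpy as np

# Check of exercise 33: E, F, G undo the three row operations that carry
# A to the row echelon form R, so multiplying them back recovers A.
A = np.array([[0, 1, 7, 8],
              [1, 3, 3, 8],
              [-2, -5, 1, -8]])
R = np.array([[1, 3, 3, 8],
              [0, 1, 7, 8],
              [0, 0, 0, 0]])

E = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 1]])   # undo the interchange of rows 1 and 2
F = np.array([[1, 0, 0], [0, 1, 0], [-2, 0, 1]])  # undo "add 2 times row 1 to row 3"
G = np.array([[1, 0, 0], [0, 1, 0], [0, 1, 1]])   # undo "add -1 times row 2 to row 3"
```

Applying G, then F, then E to R reverses the reduction, so E @ F @ G @ R should equal A.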
34. The matrix A is obtained from the identity matrix by adding a times the first row to the third row and adding b times the second row to the third row. If either a = 0 or b = 0 (or both), the result is an elementary matrix. On the other hand, if a ≠ 0 and b ≠ 0, the result is not an elementary matrix, since two elementary row operations are required to produce it. Thus A is elementary if and only if ab = 0.
DISCUSSION AND DISCOVERY
D1. Suppose the matrix A can be reduced to the identity matrix by a known sequence of elementary row operations, and let E1, E2, ..., Ek be the corresponding sequence of elementary matrices. Then we have Ek···E2E1A = I, and so A = E1^{-1}E2^{-1}···Ek^{-1}I. Thus we can find A by applying the inverse row operations to I in the reverse order.
D2. There is not. For example, let b = 1 and a = c = d = 0; then the product in question is not equal to I = [1 0; 0 1].
D3. There is no nontrivial solution. From the last equation we see that x4 = 0 and, from back substitution, it follows immediately that x3 = x2 = x1 = 0 also. The coefficient matrix is invertible.
D4. (a) Yes; a matrix B with the required property exists.
(b) Yes; a different matrix B works just as well, so B is not unique.
(c) The matrix B must be of size 3 × 2; it is not square and therefore not invertible.

D5. (a) False. Only invertible matrices can be expressed as a product of elementary matrices.
(b) False. A product of two elementary matrices generally cannot be obtained from the identity by a single elementary row operation.
(c) True. This row operation is equivalent to multiplying the given matrix by an elementary matrix; and, since any elementary matrix is invertible, the product is still invertible.
(d) True. If A is invertible and AB = 0, then B = IB = (A^{-1}A)B = A^{-1}(AB) = A^{-1}0 = 0.
(e) True. If A is invertible then the homogeneous system Ax = 0 has only the trivial solution; otherwise (if A is singular) there are infinitely many solutions.

D6. All of these statements are true.

D7. No. An invertible matrix cannot have a row of zeros. Thus, for A to be invertible we must have a ≠ 0 and h ≠ 0. But (assuming this) if we add -d/a times row 1 to row 3, and add -e/h times row 5 to row 3, we obtain a matrix with a row of zeros in the third row.
WORKING WITH PROOFS
P1. If AB is invertible then, by Theorem 3.3.8, both A and B are invertible. Thus, if either A or B is singular, then the product AB must be singular.
P2. Suppose that A = BC where B is invertible, and that B is reduced to I by a sequence of elementary row operations corresponding to the elementary matrices E1, E2, ..., Ek. Then I = Ek···E2E1B, and so B^{-1} = Ek···E2E1. From this it follows that Ek···E2E1A = Ek···E2E1BC = B^{-1}BC = C; thus the same sequence of row operations will reduce A to C.
P3. Suppose Ax = 0 iff x = 0. We wish to prove that, for any positive integer k, A^k x = 0 iff x = 0. Our proof is by induction on the exponent k.

Step 1. If k = 1, then A^1 x = Ax = 0 iff x = 0. Thus the statement is true for k = 1.

Step 2 (induction step): Suppose the statement is true for k = j, where j is any fixed integer ≥ 1. Then A^{j+1} x = A^j(Ax) = 0 iff Ax = 0 (by the induction hypothesis), and this is true iff x = 0. This shows that if the statement is true for k = j, it is also true for k = j + 1.

These two steps complete the proof by induction.
P4. From Theorem 3.3.7, the system Ax = 0 has only the trivial solution if and only if A is invertible. But, since B is invertible, A is invertible iff BA is invertible. Thus Ax = 0 has only the trivial solution iff (BA)x = 0 has only the trivial solution.
P5. Let e1, e2, ..., em be the standard unit vectors in R^m. Note that e1, e2, ..., em are the rows of the identity matrix I = Im. Thus, for any m × n matrix A, we have ri(A) = ei A for i = 1, 2, ..., m. Suppose now that E is an elementary matrix that is obtained by performing a single elementary row operation on I. We consider the three types of row operations separately.

Row Interchange. Suppose E is obtained from I by interchanging rows i and j. Then

ri(EA) = ri(E)A = ej A = rj(A)
rj(EA) = rj(E)A = ei A = ri(A)

and rk(EA) = rk(E)A = ek A = rk(A) for k ≠ i, j. Thus EA is the matrix that is obtained from A by interchanging rows i and j.

Row Scaling. Suppose E is obtained from I by multiplying row i by a nonzero scalar c. Then

ri(EA) = ri(E)A = (c ei)A = c ri(A)

and rk(EA) = rk(E)A = ek A = rk(A) for all k ≠ i. Thus EA is the matrix that is obtained from A by multiplying row i by the scalar c.

Row Replacement. Suppose E is obtained from I by adding c times row i to row j (i ≠ j). Then

rj(EA) = rj(E)A = (ej + c ei)A = ej A + c ei A = rj(A) + c ri(A)

and rk(EA) = rk(E)A = ek A = rk(A) for all k ≠ j. Thus EA is the matrix that is obtained from A by adding c times row i to row j.
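P5 is easy to illustrate numerically (a sketch, not part of the proof): left-multiplying an arbitrary matrix by an elementary matrix performs the corresponding row operation.

```python
import numpy as np

# Left multiplication by an elementary matrix = one row operation.
A = np.arange(12.0).reshape(3, 4)   # an arbitrary 3x4 matrix

E_swap = np.eye(3)
E_swap[[0, 2]] = E_swap[[2, 0]]     # E for "interchange rows 1 and 3"

E_scale = np.eye(3)
E_scale[1, 1] = 5.0                 # E for "multiply row 2 by 5"

E_add = np.eye(3)
E_add[2, 0] = 4.0                   # E for "add 4 times row 1 to row 3"
```

Each product E @ A agrees with applying the named operation directly to the rows of A.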
P6. Let A = [a b; c d], and let B = [0 1; 1 0]. Then AB = [b a; d c] and BA = [c d; a b]; thus AB = BA if and only if a = d and b = c. Thus the only matrices that commute with B are those of the form A = [a b; b a].

Suppose now that A is a matrix of this type, and let C = [1 0; 0 2]. Then AC = [a 2b; b 2a] and CA = [a b; 2b 2a], and these are equal if and only if b = 0. Thus the only 2 × 2 matrices that commute with both B and C are those of the form A = [a 0; 0 a] = a[1 0; 0 1], where -∞ < a < ∞. It is easy to see that such a matrix will commute with all other 2 × 2 matrices as well.
P7. Every m × n matrix A can be transformed to reduced row echelon form B by a sequence of elementary row operations. Let E1, E2, ..., Ek be the corresponding sequence of elementary matrices. Then we have B = Ek···E2E1A = CA, where C = Ek···E2E1 is an invertible matrix.
P8. Suppose that A is reduced to I = In by a sequence of elementary row operations, and that E1, E2, ..., Ek are the corresponding elementary matrices. Then Ek···E2E1A = I and so Ek···E2E1 = A^{-1}. It follows that the same sequence of elementary row operations will reduce the matrix [A | B] to the matrix [Ek···E2E1A | Ek···E2E1B] = [I | A^{-1}B].
EXERCISE SET 3.4
1. (a) (x1, x2) = t(1, -1); x1 = t, x2 = -t
(b) (x1, x2, x3) = t(2, 1, -4); x1 = 2t, x2 = t, x3 = -4t
(c) (x1, x2, x3, x4) = t(1, 1, -2, 3); x1 = t, x2 = t, x3 = -2t, x4 = 3t

2. (a) (x1, x2) = t(0, -4); x1 = 0, x2 = -4t
(b) (x1, x2, x3) = t(0, -1, 5); x1 = 0, x2 = -t, x3 = 5t
(c) (x1, x2, x3, x4) = t(2, 0, 5, -4); x1 = 2t, x2 = 0, x3 = 5t, x4 = -4t

3. (a) (x1, x2, x3) = s(4, -4, 2) + t(-3, 5, 7); x1 = 4s - 3t, x2 = -4s + 5t, x3 = 2s + 7t
(b) (x1, x2, x3, x4) = s(1, 2, 1, -3) + t(3, 4, 5, 0); x1 = s + 3t, x2 = 2s + 4t, x3 = s + 5t, x4 = -3s

4. (a) (x1, x2, x3) = s(0, -1, 0) + t(-2, 1, 9); x1 = -2t, x2 = -s + t, x3 = 9t
(b) (x1, x2, x3, x4, x5) = s(1, 5, -1, 4, 2) + t(2, 2, 0, 1, -4); x1 = s + 2t, x2 = 5s + 2t, x3 = -s, x4 = 4s + t, x5 = 2s - 4t
5. (a) u = -2v; thus u is in the subspace span{v}.
(b) u ≠ kv for any scalar k; thus u is not in the subspace span{v}.

6. (a) no   (b) yes

7. (a) Two vectors are linearly dependent if and only if one is a scalar multiple of the other; thus these vectors are linearly dependent.
(b) These vectors are linearly independent.

8. (a) linearly independent   (b) linearly dependent

9. u is a linear combination of v and w; thus the vectors u, v, w are linearly dependent.

10. u = 4v - 2w; thus the vectors u, v, w are linearly dependent.
11. (a) A line (a 1-dimensional subspace) in R^4 that passes through the origin and is parallel to the vector u = (2, -3, 1, 4) = (1/2)(4, -6, 2, 8).
(b) A plane (a 2-dimensional subspace) in R^4 that passes through the origin and is parallel to the vectors u = (3, -2, 2, 5) and v = (6, -4, 4, 0).

12. (a) A plane in R^4 that passes through the origin and is parallel to the vectors u = (6, -2, -4, 8) and v = (3, 0, 2, -4).
(b) A line in R^4 that passes through the origin and is parallel to the vector u = (6, -2, -4, 8).
13. The reduced row echelon form of the augmented matrix of the homogeneous system is

[1 6 0 11 | 0]
[0 0 1 -8 | 0]
[0 0 0  0 | 0]

Thus a general solution of the system can be written in parametric form as

x1 = -6s - 11t,  x2 = s,  x3 = 8t,  x4 = t

or in vector form as

(x1, x2, x3, x4) = s(-6, 1, 0, 0) + t(-11, 0, 8, 1)

This shows that the solution space is span{v1, v2} where v1 = (-6, 1, 0, 0) and v2 = (-11, 0, 8, 1).
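The spanning vectors found above can be recovered directly with SymPy's null-space routine. A sketch, using the reduced matrix as read from the solution (any matrix row-equivalent to A has the same null space):

```python
import sympy as sp

# The solution space of Ax = 0 is the null space of A; SymPy returns
# spanning vectors directly.
R = sp.Matrix([[1, 6, 0, 11],
               [0, 0, 1, -8],
               [0, 0, 0, 0]])
basis = R.nullspace()
```

SymPy sets each free variable to 1 in turn, which reproduces the vectors v1 and v2 of the hand solution.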
14. The reduced row echelon form of the augmented matrix is

[1 -1 0 -1  1 | 0]
[0  0 1  2 -3 | 0]
[0  0 0  0  0 | 0]

thus a general solution of the system is given by

(x1, x2, x3, x4, x5) = r(1, 1, 0, 0, 0) + s(1, 0, -2, 1, 0) + t(-1, 0, 3, 0, 1)
15. The reduced row echelon form of the augmented matrix of the homogeneous system is

[1 2 0 1/3 2/3 | 0]
[0 0 1 2/3 1/3 | 0]

Thus a general solution of the system can be written in parametric form as

x1 = -2r - (1/3)s - (2/3)t,  x2 = r,  x3 = -(2/3)s - (1/3)t,  x4 = s,  x5 = t

or in vector form as

(x1, x2, x3, x4, x5) = r(-2, 1, 0, 0, 0) + s(-1/3, 0, -2/3, 1, 0) + t(-2/3, 0, -1/3, 0, 1)

The vectors v1 = (-2, 1, 0, 0, 0), v2 = (-1/3, 0, -2/3, 1, 0), and v3 = (-2/3, 0, -1/3, 0, 1) span the solution space.
16. The reduced row echelon form of the augmented matrix is

[1 0 -4 0  1 | 0]
[0 1  1 0 -2 | 0]
[0 0  0 1 -3 | 0]

thus a general solution is given by (x1, x2, x3, x4, x5) = s(4, -1, 1, 0, 0) + t(-1, 2, 0, 3, 1).
17. (a) v2 is a scalar multiple of v1 (v2 = -5v1); thus these two vectors are linearly dependent.
(b) Any set of more than 2 vectors in R^2 is linearly dependent (Theorem 3.4.8).

18. (a) Any set containing the zero vector is linearly dependent.
(b) Any set of more than 3 vectors in R^3 is linearly dependent (Theorem 3.4.8).

19. (a) These two vectors are linearly independent since neither is a scalar multiple of the other.
(b) v1 is a scalar multiple of v2 (v1 = -3v2); thus these two vectors are linearly dependent.
(c) These three vectors are linearly independent since the system c1v1 + c2v2 + c3v3 = 0 has only the trivial solution. The coefficient matrix, which is the matrix having the given vectors as its columns, is invertible.
(d) These four vectors are linearly dependent since any set of more than 3 vectors in R^3 is linearly dependent.

20. (a) linearly independent   (b) linearly independent
(c) linearly dependent   (d) linearly dependent

21. (a) The matrix having these vectors as its columns is invertible. Thus the vectors are linearly independent; they do not lie in a plane.
(b) These vectors are linearly dependent (v1 = 2v2 - 3v3); they lie in a plane but not on a line.
(c) These vectors lie on a line; v1 = 2v2 and v3 = 3v2.

22. In parts (a) and (b) the vectors are linearly independent; they do not lie in a plane. In part (c) the vectors are linearly dependent; they lie in a plane but not on a line.
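The test used throughout exercises 17-22 can be sketched numerically: vectors are linearly independent exactly when the matrix having them as columns has rank equal to the number of vectors (for n vectors in R^n, exactly when that matrix is invertible).

```python
import numpy as np

# Independence test: rank of the column matrix = number of vectors.
def independent(*vectors):
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) == len(vectors)
```

For example, the pair from 17(a) with v2 = -5v1 fails the test, and any three vectors in R^2 fail it automatically.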
23. (a) This set of vectors is a subspace; it is closed under scalar multiplication and addition: k(a, 0, 0) = (ka, 0, 0) and (a1, 0, 0) + (a2, 0, 0) = (a1 + a2, 0, 0).
(b) This set of vectors is not a subspace; it is not closed under scalar multiplication.
(c) This set of vectors is a subspace. If b = a + c, then kb = ka + kc. If b1 = a1 + c1 and b2 = a2 + c2, then (b1 + b2) = (a1 + a2) + (c1 + c2).
(d) This set of vectors is not a subspace; it is not closed under addition or scalar multiplication.

24. Sets (a) and (b) are not subspaces. Sets (c) and (d) are subspaces.
25. The set W consists of all vectors of the form x = a(1, 0, 1, 0); thus W = span{v} where v = (1, 0, 1, 0). This corresponds to a line (i.e. a 1-dimensional subspace) through the origin in R^4.

26. W = span{v1, v2} where v1 = (1, 0, 2, 0, -1) and v2 = (0, 1, 0, 3, 0). This corresponds to a plane (i.e. a 2-dimensional subspace) through the origin in R^5.
27. (a) 7v1 - 2v2 + 3v3 = 0
(b) v1 = (2/7)v2 - (3/7)v3,  v2 = (7/2)v1 + (3/2)v3,  v3 = -(7/3)v1 + (2/3)v2

28. If λ = 1/2, then the given vectors are clearly dependent. Otherwise, consider the matrix having these vectors as its columns. Are there any other values of λ for which it is singular? To determine this, it is convenient to start by multiplying all of the rows by 2:

[2λ  1  1]
[ 1 2λ  1]
[ 1  1 2λ]

Assuming λ ≠ 1/2, this matrix can be reduced to a row echelon form whose last row is (0 0 2+2λ), and so it is singular if and only if λ = -1. Thus the given vectors are dependent iff λ = 1/2 or λ = -1.
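Exercise 28 can also be settled with a determinant; the vectors are dependent exactly when the column matrix is singular. A sketch, assuming the row-scaled matrix shown in the solution:

```python
import sympy as sp

# Dependent iff det = 0; solve for the parameter lambda.
lam = sp.symbols('lam')
M = sp.Matrix([[2*lam, 1, 1],
               [1, 2*lam, 1],
               [1, 1, 2*lam]])
roots = sp.solve(sp.Eq(M.det(), 0), lam)
```

The determinant is a cubic in λ whose roots are λ = 1/2 (a double root) and λ = -1, matching the answer found by row reduction.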
29. (a) Suppose S = {v1, v2, v3} is a linearly independent set. Note first that none of these vectors can be equal to 0 (otherwise S would be linearly dependent), and so each of the sets {v1}, {v2}, and {v3} is linearly independent. Suppose then that T is a 2-element subset of S, e.g. T = {v1, v2}. If T is linearly dependent then there are scalars c1 and c2, not both zero, such that c1v1 + c2v2 = 0. But, if this were true, then c1v1 + c2v2 + 0v3 = 0 would be a nontrivial linear relationship among the vectors v1, v2, v3, and so S would be linearly dependent. Thus T = {v1, v2} is linearly independent. The same argument applies to any 2-element subset of S. Thus if S is linearly independent, then each of its nonempty subsets is linearly independent.
(b) If S = {v1, v2, v3} is linearly dependent, then there are scalars c1, c2, and c3, not all zero, such that c1v1 + c2v2 + c3v3 = 0. Thus, for any vector v in R^n, we have c1v1 + c2v2 + c3v3 + 0v = 0, and this is a nontrivial linear relationship among the vectors v1, v2, v3, v. This shows that if S = {v1, v2, v3} is linearly dependent, then so is T = {v1, v2, v3, v} for any v.
30. The arguments used in Exercise 29 can easily be adapted to this more general situation.
3 1.
(u - v ) + (v - w) + (w - u )
penden t set .
= 0 ; t hus the
vectors u - v, v - w, and w - u form a linearly de-··
32. First note that the relationship between the vectors u, v and s, g can be written as (u, v) = M(s, g), where M = [1 0.06; 0.12 1], and so the inverse relationship is given by

(s, g) = M^{-1}(u, v) = (1/0.9928)[1 -0.06; -0.12 1](u, v)

Parts (a), (b), (c) of the problem can now be answered as follows:

(a) s = (1/0.9928)(u - 0.06v)
(b) g = (1/0.9928)(-0.12u + v)
(c) (1/2)s + (1/2)g = (1/1.9856)(0.88u + 0.94v)

33. (a) No. It is not closed under either addition or scalar multiplication.
(b) P876 = 0.38c + 0.59m + 0.73y + 0.07k
    P216 = 0.83m + 0.34y + 0.47k
    P328 = c + 0.47y + 0.30k
(c) (1/2)(P876 + P216) corresponds to the CMYK vector (0.19, 0.71, 0.535, 0.27).

34. (a) k1 = k2 = k3 = 1/3
(b) k1 = k2 = ··· = k7 = 1/7
(c) The components of x = (1/4)c1 + (1/4)c2 + (1/2)c3 represent the average scores for the students if the Test 3 grade (the final exam?) is weighted twice as heavily as Test 1 and Test 2.
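The inverse relationship in exercise 32 is a plain 2×2 matrix inversion; a sketch with NumPy:

```python
import numpy as np

# Exercise 32: (u, v) = M (s, g), so (s, g) = M^{-1} (u, v).
M = np.array([[1.0, 0.06],
              [0.12, 1.0]])
M_inv = np.linalg.inv(M)
det = 1 - 0.12 * 0.06        # = 0.9928, the determinant of M
```

The 2×2 inverse formula gives M^{-1} = (1/0.9928)[1 -0.06; -0.12 1], which is exactly the coefficient pattern in parts (a) and (b).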
35. (a) k1 = k2 = k3 = k4 = 1/4
(b) k1 = k2 = k3 = k4 = k5 = 1/5
(c) The components of x = (1/3)r1 + (1/3)r2 + (1/3)r3 represent the average total population of Philadelphia, Bucks, and Delaware counties in each of the sampled years.

36. v = Σ_{k=1}^{n} v_k
DISCUSSION AND DISCOVERY
D1. (a) Two nonzero vectors will span R^2 if and only if they do not lie on a line.
(b) Three nonzero vectors will span R^3 if and only if they do not lie in a plane.

D2. (a) Two vectors in R^n will span a plane if and only if they are nonzero and not scalar multiples of one another.
(b) Two vectors in R^n will span a line if and only if they are not both zero and one is a scalar multiple of the other.
(c) span{u} = span{v} if and only if one of the vectors u and v is a scalar multiple of the other.
D3. (a) Yes. If three nonzero vectors are mutually orthogonal, then none of them lies in the plane spanned by the other two; thus the three are linearly independent.
(b) Suppose the vectors v1, v2, and v3 are nonzero and mutually orthogonal; thus vi · vi = ‖vi‖^2 > 0 for i = 1, 2, 3 and vi · vj = 0 for i ≠ j. To prove they are linearly independent we must show that if

c1v1 + c2v2 + c3v3 = 0

then c1 = c2 = c3 = 0. This follows from the fact that if c1v1 + c2v2 + c3v3 = 0, then

ci‖vi‖^2 = vi · (c1v1 + c2v2 + c3v3) = vi · 0 = 0

for i = 1, 2, 3.
D4. The vectors in the first figure are linearly independent since none of them lies in the plane spanned by the other two (none of them can be expressed as a linear combination of the other two). The vectors in the second figure are linearly dependent since v3 = v1 + v2.
D5. This set is closed under scalar multiplication, but not under addition. For example, the vectors u = (1, 2) and v = (-2, -1) correspond to points in the set, but u + v = (-1, 1) does not.
D6. (a) False. For example, two of the vectors may lie on a line (so one is a scalar multiple of the other), but the third vector may not lie on this same line and therefore cannot be expressed as a linear combination of the other two.
(b) False. The set of all linear combinations of two vectors can be {0} (if both are 0), a line (if one is a scalar multiple of the other), or a plane (if they are linearly independent).
(c) False. For example, v and w might be linearly dependent (scalar multiples of each other). [But it is true that if {v, w} is a linearly independent set, and if u cannot be expressed as a linear combination of v and w, then {u, v, w} is a linearly independent set.]
(d) True. See Example 9.
(e) True. If c1(kv1) + c2(kv2) + c3(kv3) = 0, then k(c1v1 + c2v2 + c3v3) = 0. Thus, since k ≠ 0, it follows that c1v1 + c2v2 + c3v3 = 0 and so c1 = c2 = c3 = 0.
(a) False. The set {u, ku} is always a linearly dependent set.
(b) False. This statement is true for a homogeneous system (b:::::: 0), but not for a non-homogeneous system. !The solution space of a non-homogeneous linear system is a translated subspace.J
Chapter 3
(c)
True. If W is a subspace, then W is alr~ady closed under scalar multiplication and addition,
and so spa.n(W} = W .
·
(d) False. Forexample,ifSt = {(I ,O),(O, l}}a.nd S2 = {(1, l) ,(O, l}}.thenspan(SI) = span(S2 )=
R2 ,
but. S1
# S2.
D8. Since span(S) is a subspace (already closed under scalar multiplication and addition), we have span(span(S)) = span(S).
WORKING WITH PROOFS
P1. Let θ be the angle between u and w, and let φ be the angle between v and w. We will show that θ = φ. First recall that u · w = ‖u‖‖w‖cos θ, so u · w = k‖w‖cos θ. Similarly, v · w = l‖w‖cos φ. On the other hand we have

u · w = u · (lu + kv) = l(u · u) + k(u · v) = lk^2 + k(u · v)

and so k‖w‖cos θ = u · w = lk^2 + k(u · v), i.e. ‖w‖cos θ = lk + (u · v). A similar calculation shows that ‖w‖cos φ = (v · u) + kl; thus ‖w‖cos θ = ‖w‖cos φ. It follows that cos θ = cos φ and θ = φ.
P2. If x belongs to W1 ∩ W2 and k is a scalar, then kx also belongs to W1 ∩ W2 since both W1 and W2 are subspaces. Similarly, if x1 and x2 belong to W1 ∩ W2, then x1 + x2 belongs to W1 ∩ W2. Thus W1 ∩ W2 is closed under scalar multiplication and addition, i.e. W1 ∩ W2 is a subspace.
P3. First we show that W1 + W2 is closed under scalar multiplication: Suppose z = x + y where x is in W1 and y is in W2. Then, for any scalar k, we have kz = k(x + y) = kx + ky, where kx is in W1 and ky is in W2 (since W1 and W2 are subspaces); thus kz is in W1 + W2. Finally we show that W1 + W2 is closed under addition: Suppose z1 = x1 + y1 and z2 = x2 + y2, where x1 and x2 are in W1 and y1 and y2 are in W2. Then z1 + z2 = (x1 + y1) + (x2 + y2) = (x1 + x2) + (y1 + y2), where x1 + x2 is in W1 and y1 + y2 is in W2 (since W1 and W2 are subspaces); thus z1 + z2 is in W1 + W2.
EXERCISE SET 3.5
1. (a) The reduced row echelon form of the augmented matrix of the homogeneous system leads to a one-parameter general solution (in column vector form).
(c) From (a) and (b), a general solution of the nonhomogeneous system is obtained by adding the particular solution found in (b) to the general solution of the homogeneous system found in (a).
(d) The reduced row echelon form of the augmented matrix of the nonhomogeneous system leads to a general solution that is related to the one in part (c) by the change of variable s1 = s + 1.

2. (a) The reduced row echelon form of the augmented matrix leads to a one-parameter general solution.
(d) The reduced row echelon form of the augmented matrix of the nonhomogeneous system leads to a general solution of the form x = x0 + tv.
3. (a) The reduced row echelon form of the augmented matrix of the given system leads to a general solution in which x2 = s and x3 = t are free and x4 = 1.
(b) A general solution of the associated homogeneous system is obtained from (a) by dropping the constant terms, and a particular solution of the given nonhomogeneous system is obtained by setting s = t = 0.
4. (a) The reduced row echelon form of the augmented matrix of the given system leads to a general solution in which x2 and x4 are free.
(b) A general solution of the associated homogeneous system is obtained from (a) by dropping the constant terms, and a particular solution of the nonhomogeneous system (obtained by setting the free variables to zero) has x2 = 0, x3 = -7, x4 = 0.
5. The vector w can be expressed as a linear combination of v1, v2, and v3 if and only if the system c1v1 + c2v2 + c3v3 = w is consistent. The reduced row echelon form of the augmented matrix of this system is

[1 0 0 | -2]
[0 1 0 |  3]
[0 0 1 |  1]

From this we conclude that the system has a unique solution, and that w = -2v1 + 3v2 + v3.
6. The vector w can be expressed as a linear combination of v1, v2, and v3 if and only if the corresponding system is consistent. The reduced row echelon form of the augmented matrix of this system is

[1 0  3 | 10]
[0 1 -1 | -6]
[0 0  0 |  0]

and from this we conclude that the system has infinitely many solutions, given by c1 = 10 - 3t, c2 = -6 + t, c3 = t. Thus w can be expressed as a linear combination of v1, v2, and v3. In particular, taking t = 0, we have w = 10v1 - 6v2.
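The method of exercises 5-6 amounts to solving a linear system whose columns are the vi. The sketch below uses hypothetical stand-in vectors (the exercise's own data is not reproduced here) chosen so the answer is known:

```python
import numpy as np

# Coefficients of w = c1*v1 + c2*v2 + c3*v3: solve the column system.
v1 = np.array([1.0, 0.0, 1.0])   # hypothetical stand-in vectors
v2 = np.array([0.0, 1.0, 1.0])
v3 = np.array([1.0, 1.0, 0.0])
w = -2 * v1 + 3 * v2 + v3        # built so the coefficients are known

A = np.column_stack([v1, v2, v3])
c = np.linalg.solve(A, w)
```

When the column matrix is invertible the coefficients are unique, as in exercise 5; when it is singular but the system is consistent, there are infinitely many choices, as in exercise 6.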
7. The vector w is in span{v1, v2, v3, v4} if and only if the system c1v1 + c2v2 + c3v3 + c4v4 = w is consistent. The reduced row echelon form of the augmented matrix of this system shows that the system has infinitely many solutions; thus w is in span{v1, v2, v3, v4}.
8. The vector w is in span{v1, v2, v3} if and only if the corresponding system is consistent. The reduced row echelon form of the augmented matrix of this system shows that the system has infinitely many solutions; thus w is in span{v1, v2, v3}.
9. (a) The hyperplane a⊥ consists of all vectors x = (x, y) such that a · x = 0, i.e. -2x + 3y = 0. This corresponds to the line through the origin with parametric equations x = (3/2)t, y = t.
(b) The hyperplane a⊥ consists of all vectors x = (x, y, z) such that a · x = 0, i.e. 4x - 5z = 0. This corresponds to the plane through the origin with parametric equations x = (5/4)t, y = s, z = t.
(c) a⊥ consists of all vectors x = (x1, x2, x3, x4) such that a · x = 0, i.e. x1 + 2x2 - 3x3 + 7x4 = 0. This is a hyperplane in R^4 with parametric equations x1 = -2r + 3s - 7t, x2 = r, x3 = s, x4 = t.

10. (a) x = 4t, y = t
(b) x = s, y = -3s + 6t, z = t
(c) x1 = r + 2s, x2 = r, x3 = s, x4 = t
11. This system reduces to a single equation, x1 + x2 + x3 = 0. Thus a general solution is given by x1 = -s - t, x2 = s, x3 = t; or (in vector form) (x1, x2, x3) = s(-1, 1, 0) + t(-1, 0, 1). The solution space is two-dimensional.
13. The reduced row echelon form of the augmented matrix of the system leaves x3 = r, x4 = s, and x5 = t as free variables, with x1 and x2 expressed in terms of r, s, t (the coefficients are fractions with denominator 7). Writing the general solution in vector form expresses the solution space as the span of three vectors; the solution space is three-dimensional.
15. (a) A general solution is x = 1 - s - t, y = s, z = t; or

[x]   [1]    [-1]    [-1]
[y] = [0] + s[ 1] + t[ 0]
[z]   [0]    [ 0]    [ 1]

(b) The solution space corresponds to the plane which passes through the point P(1, 0, 0) and is parallel to the vectors v1 = (-1, 1, 0) and v2 = (-1, 0, 1).

16. (a) A general solution is x = 1 - t, y = t; or [x; y] = [1; 0] + t[-1; 1].
(b) The solution space corresponds to the line which passes through the point P(1, 0) and is parallel to the vector v = (-1, 1).
17. (a) A vector x = (x, y, z) is orthogonal to a = (1, 1, 1) and b = (-2, 3, 0) if and only if

  x + y + z = 0
-2x + 3y    = 0

(b) The solution space is the line through the origin that is perpendicular to the vectors a and b.
(c) The reduced row echelon form of the augmented matrix of the system gives the general solution x = -(3/5)t, y = -(2/5)t, z = t; or (x, y, z) = t(-3/5, -2/5, 1). Note that the vector v = (-3/5, -2/5, 1) is orthogonal to both a and b.

18. (a) A vector x = (x, y, z) is orthogonal to a = (-3, 2, -1) and b = (0, -2, -2) if and only if

-3x + 2y -  z = 0
     -2y - 2z = 0

(b) The solution space is the line through the origin that is perpendicular to the vectors a and b.
(c) (x, y, z) = t(-1, -1, 1); note that the vector v = (-1, -1, 1) is orthogonal to both a and b.
19. (a) A vector x = (x1, x2, x3, x4) is orthogonal to v1 = (1, 1, 2, 2) and v2 = (5, 4, 3, 4) if and only if
x1 + x2 + 2x3 + 2x4 = 0
5x1 + 4x2 + 3x3 + 4x4 = 0
(b) The solution space is the plane (2-dimensional subspace) in R4 that passes through the origin and is perpendicular to the vectors v1 and v2.
(c) The reduced row echelon form of the augmented matrix of the system is
[1 0 -5 -4 | 0; 0 1 7 6 | 0]
and so a general solution of the system is given by (x1, x2, x3, x4) = s(5, -7, 1, 0) + t(4, -6, 0, 1). Note that the vectors (5, -7, 1, 0) and (4, -6, 0, 1) are orthogonal to both v1 and v2.
20. (a) A vector x = (x1, x2, x3, x4, x5) is orthogonal to the vectors v1 = (1, 3, 4, 4, -1), v2 = (3, 2, 1, 2, -3), and v3 = (1, 5, 0, -2, -4) if and only if
x1 + 3x2 + 4x3 + 4x4 - x5 = 0
3x1 + 2x2 + x3 + 2x4 - 3x5 = 0
x1 + 5x2 - 2x4 - 4x5 = 0
(b) The solution space is the plane (2-dimensional subspace) in R5 that passes through the origin and is perpendicular to the vectors v1, v2, and v3.
(c) The reduced row echelon form of the augmented matrix of the system is
[1 0 0 3/5 -7/10 | 0; 0 1 0 -13/25 -33/50 | 0; 0 0 1 31/25 21/50 | 0]
and so a general solution of the system is given by
(x1, x2, x3, x4, x5) = s(-3/5, 13/25, -31/25, 1, 0) + t(7/10, 33/50, -21/50, 0, 1)
The vectors (-3/5, 13/25, -31/25, 1, 0) and (7/10, 33/50, -21/50, 0, 1) are orthogonal to v1, v2, and v3.
DISCUSSION AND DISCOVERY
D1. The solution set of Ax = b is a translated subspace x0 + W, where W is the solution space of Ax = 0.

D2. If v is orthogonal to every row of A, then Av = 0, and so (since A is invertible) v = 0.

D3. The general solution will have at least 3 free variables. Thus, assuming A ≠ 0, the solution space will be of dimension 3, 4, 5, or 6 depending on how much redundancy there is.

D4. (a) True. The solution set of Ax = b is of the form x0 + W where W is the solution space of Ax = 0.
(b) False. For example, the system x - y = 0, x - y = 1 is inconsistent, but the associated homogeneous system has infinitely many solutions.
(c) True. Each hyperplane corresponds to a single homogeneous linear equation in four variables, and there must be at least four equations in order to have a unique solution.
(d) True. Every plane in R3 corresponds to an equation of the form ax + by + cz = d.
(e) False. A vector x is orthogonal to row(A) if and only if x is a solution of the homogeneous system Ax = 0.
WORKING WITH PROOFS
P1. Suppose that Ax = 0 has only the trivial solution and that Ax = b is consistent. If x1 and x2 are two solutions of Ax = b, then A(x1 - x2) = Ax1 - Ax2 = b - b = 0 and so x1 - x2 = 0, i.e. x1 = x2. Thus, if Ax = 0 has only the trivial solution, the system Ax = b is either inconsistent or has exactly one solution.

P2. Suppose that Ax = 0 has infinitely many solutions and that Ax = b is consistent. Let x0 be any solution of Ax = b. Then, for any solution w of Ax = 0, we have A(x0 + w) = Ax0 + Aw = b + 0 = b. Thus, if Ax = 0 has infinitely many solutions, the system Ax = b is either inconsistent or has infinitely many solutions. Conversely, if Ax = b has at most one solution, then Ax = 0 has only the trivial solution.

P3. If x1 is a solution of Ax = b and x2 is a solution of Ax = c, then A(x1 + x2) = Ax1 + Ax2 = b + c; i.e. x1 + x2 is a solution of Ax = b + c. Thus if Ax = b and Ax = c are consistent systems, then Ax = b + c is also consistent. This argument can easily be adapted to prove the following:

Theorem. If Ax = bj is consistent for each j = 1, 2, ..., r and if b = b1 + b2 + ... + br, then Ax = b is also consistent.

P4. Since (ka) · x = k(a · x) and k ≠ 0, it follows that (ka) · x = 0 if and only if a · x = 0. This proves that (ka)⊥ = a⊥.
EXERCISE SET 3.6
1. (a) This matrix has a row of zeros; it is not invertible.
(b) A diagonal matrix with nonzero entries on the diagonal is invertible, and its inverse is the diagonal matrix of reciprocals; here A = [2 0 0; 0 -1 0; 0 0 3] and A^{-1} = [1/2 0 0; 0 -1 0; 0 0 1/3].
2.–6. In each part the matrices are diagonal, so the computations reduce to operations on the diagonal entries: the product of diagonal matrices is the diagonal matrix of the entrywise products, powers such as A^2 and A^{-2} are obtained by taking the corresponding power of each diagonal entry, and a diagonal matrix is invertible precisely when all of its diagonal entries are nonzero; a matrix with a zero diagonal entry is not invertible (details of the individual matrices omitted).

7. A is invertible if and only if x ≠ 1, -2, 4 (the diagonal entries must be nonzero).

8. Since A is diagonal, A^k is the diagonal matrix whose entries are the kth powers of the diagonal entries of A.

9. Apply the inversion algorithm (see Section 3.3) to find the inverse of A. Starting from the augmented matrix
[1 2 3 | 1 0 0; 0 1 -2 | 0 1 0; 0 0 1 | 0 0 1]
Add 2 times row 3 to row 2. Add -3 times row 3 to row 1:
[1 2 0 | 1 0 -3; 0 1 0 | 0 1 2; 0 0 1 | 0 0 1]
Add -2 times row 2 to row 1:
[1 0 0 | 1 -2 -7; 0 1 0 | 0 1 2; 0 0 1 | 0 0 1]
Thus the inverse of the upper triangular matrix A = [1 2 3; 0 1 -2; 0 0 1] is A^{-1} = [1 -2 -7; 0 1 2; 0 0 1].
10. The given triangular matrix is invertible (its diagonal entries are nonzero), and its inverse is found by the inversion algorithm exactly as in Problem 9 (details omitted).

11.–12. These parts are handled in the same way, by applying the inversion algorithm to the given triangular matrices (details omitted).
13. The matrix A is symmetric if and only if a, b, and c satisfy the following equations:
a - 2b + 2c = 3
2a + b + c = 0
a + c = -2
The augmented matrix of this system is
[1 -2 2 | 3; 2 1 1 | 0; 1 0 1 | -2]
and the reduced row echelon form is
[1 0 0 | 11; 0 1 0 | -9; 0 0 1 | -13]
Thus, in order for the matrix A to be symmetric we must have a = 11, b = -9, and c = -13.
14. There are infinitely many solutions: a = 1 + 10t, b = 7t, c = t, d = 0, where -∞ < t < ∞.

15. Computing A^{-1} directly (details omitted) shows that A^{-1} is symmetric.

16. Likewise, computing A^{-1} directly (details omitted) shows that A^{-1} is symmetric.

17. The product AB is computed by direct multiplication (details omitted).
18. From Theorem 3.6.3, the factors commute if and only if the product is symmetric. Thus the factors in (a) do not commute, whereas the factors in (b) do commute.
19. Since A is diagonal, A^{-5} is the diagonal matrix whose entries are the -5th powers (i.e. the reciprocals of the 5th powers) of the diagonal entries of A; in particular, since the (3,3) entry of A is -1, the (3,3) entry of A^{-5} is (-1)^{-5} = -1.

20. Similarly, A^3 is the diagonal matrix whose entries are the cubes of the diagonal entries of A (details omitted).
21. The given lower triangular matrix A is invertible because of its nonzero diagonal entries. Thus, from Theorem 3.6.5, AA^T and A^T A are also invertible; furthermore, since these matrices are symmetric, their inverses are also symmetric by Theorem 3.6.4. The matrices AA^T and A^T A, together with their inverses, are then computed directly (details omitted).

22. Similarly, the matrices AA^T and A^T A and their inverses are computed directly (details omitted); each of them is symmetric, as expected.
23. The fixed points of the matrix A are the solutions of the homogeneous system (I - A)x = 0. The augmented matrix of this system is
[-1 -1 | 0; -1 -1 | 0]
and from this it is easy to see that the system has the general solution x1 = t, x2 = -t. Thus the fixed points of A are the vectors of the form x = t(1, -1), where -∞ < t < ∞.

24. All vectors of the form x = t(-1, 1), where -∞ < t < ∞.

25. (a) If A = [0 1; 0 0], then A^2 = [0 0; 0 0]. Thus A is nilpotent with nilpotency index 2, and the inverse of I - A = [1 -1; 0 1] is (I - A)^{-1} = I + A = [1 1; 0 1].
(b) The nilpotency index of the given 3 × 3 strictly upper triangular matrix A is 3, and the inverse of I - A is (I - A)^{-1} = I + A + A^2 (details omitted).
26. (a) A = [0 1; 0 0] has nilpotency index 2. Here I - A = [1 -1; 0 1] and (I - A)^{-1} = I + A = [1 1; 0 1].
(b) The given 3 × 3 strictly upper triangular matrix A has nilpotency index 3, and (I - A)^{-1} = I + A + A^2 (details omitted).

27. (a) If A is invertible and skew-symmetric, then (A^{-1})^T = (A^T)^{-1} = (-A)^{-1} = -A^{-1}; thus A^{-1} is skew-symmetric.
(b) If A and B are skew-symmetric, then
(A + B)^T = A^T + B^T = -A - B = -(A + B)
(A - B)^T = A^T - B^T = -A + B = -(A - B)
(kA)^T = kA^T = k(-A) = -kA
thus the matrices A + B, A - B, and kA are also skew-symmetric.

28. (a) If A is any square matrix, then (A + A^T)^T = A^T + A^{TT} = A^T + A = A + A^T, so A + A^T is symmetric. Similarly, (A - A^T)^T = -(A - A^T), so A - A^T is skew-symmetric.
(b) A = (1/2)(A + A^T) + (1/2)(A - A^T)
(c) For the given matrix, A = (1/2)(A + A^T) + (1/2)(A - A^T), with the symmetric and skew-symmetric parts computed directly (details omitted).
29. If H = I_n - 2uu^T then we have H^T = I_n^T - 2(uu^T)^T = I_n - 2uu^T = H; thus H is symmetric.

30. If r1, r2, ..., rm are the row vectors of A, then the (i, j) entry of AA^T is r_i r_j^T = r_i · r_j; thus
AA^T = [r1·r1 r1·r2 ... r1·rm; r2·r1 r2·r2 ... r2·rm; ...; rm·r1 rm·r2 ... rm·rm]

31. Using Formula (9), we have tr(A^T A) = ||a1||^2 + ||a2||^2 + ... + ||an||^2 = 1 + 1 + ... + 1 = n.
DISCUSSION AND DISCOVERY
D1. (a) A = [3a11 5a12 7a13; 3a21 5a22 7a23; 3a31 5a32 7a33] = [a11 a12 a13; a21 a22 a23; a31 a32 a33][3 0 0; 0 5 0; 0 0 7] = BD

(b) There are other such factorizations as well. For example, A = [3a11 5a12 7a13? No: A can also be written as [3a11 a12 a13; 3a21 a22 a23; 3a31 a32 a33][1 0 0; 0 5 0; 0 0 7], absorbing the factor 3 into the first column of B.
D2. (a) A is symmetric since a_ij = j^2 + i^2 = i^2 + j^2 = a_ji for each i, j.
(b) A is not symmetric since a_ij = j - i = -(i - j) = -a_ji for each i, j. In fact, A is skew-symmetric.
(c) A is symmetric since a_ij = 2j + 2i = 2i + 2j = a_ji for each i, j.
(d) A is not symmetric since a_ij = 2j^2 + 2i^3 ≠ 2i^2 + 2j^3 = a_ji if i = 1, j = 2.
In general, the matrix A = [f(i, j)] is symmetric if and only if f(j, i) = f(i, j) for each i, j.
D3. No. In fact, if A and B are commuting skew-symmetric matrices, then
(AB)^T = B^T A^T = (-B)(-A) = BA = AB
and so the product AB is symmetric rather than skew-symmetric.

D4. Using Formula (9), we have tr(A^T A) = ||a1||^2 + ||a2||^2 + ... + ||an||^2. Thus, if A^T A = 0, then it follows that ||a1||^2 = ||a2||^2 = ... = ||an||^2 = 0 and so A = 0.

D5. (a) If A = [d1 0 0; 0 d2 0; 0 0 d3], then A^2 = [d1^2 0 0; 0 d2^2 0; 0 0 d3^2]; thus A^2 = A if and only if d_i^2 = d_i for i = 1, 2, and 3, and this is true iff d_i = 0 or 1 for i = 1, 2, and 3. There are a total of eight such matrices (3 × 3 diagonal matrices whose diagonal entries are either 0 or 1).
(b) There are a total of 2^n such matrices (n × n diagonal matrices whose diagonal entries are either 0 or 1).

D6. If A = [d1 0; 0 d2], then A^2 + 5A + 6I2 = 0 if and only if d_i^2 + 5d_i + 6 = 0 for i = 1 and 2; i.e. if and only if d_i = -2 or -3 for i = 1 and 2. There are a total of four such matrices (any 2 × 2 diagonal matrix whose diagonal entries are either -2 or -3).

D7. If A is both symmetric and skew-symmetric, then A^T = A and A^T = -A; thus A = -A and so A = 0.
D8. In a symmetric matrix, entries that are symmetrically positioned across the main diagonal are equal to each other. Thus a symmetric matrix is completely determined by the entries that lie on or above the main diagonal, and entries that appear below the main diagonal are duplicates of entries that appear above the main diagonal. An n × n matrix has n entries on the main diagonal, n - 1 entries on the diagonal just above the main diagonal, etc. Thus there are a total of
n + (n - 1) + ... + 2 + 1 = n(n + 1)/2
entries that lie on or above the main diagonal. For a symmetric matrix, this is the maximum number of distinct entries the matrix can have.
In a skew-symmetric matrix, the diagonal entries are 0 and entries that are symmetrically positioned across the main diagonal are the negatives of each other. The maximum number of distinct entries can be attained by selecting distinct positive entries for the n(n-1)/2 positions above the main diagonal. The entries in the n(n-1)/2 positions below the main diagonal will then automatically be distinct from each other and from the entries on or above the main diagonal. Thus the maximum number of distinct entries in a skew-symmetric matrix is
n(n-1)/2 + n(n-1)/2 + 1 = n(n - 1) + 1
D9. If D = [d1 0; 0 d2], then AD = [d1a1 d2a2] where a1 and a2 are the columns of A. Thus AD = I = [e1 e2] (where e1 and e2 are the standard unit vectors in R2) if and only if d1a1 = e1 and d2a2 = e2. But this is true if and only if d1 ≠ 0, d2 ≠ 0, a1 = (1/d1)e1, and a2 = (1/d2)e2. Thus
A = [1/d1 0; 0 1/d2]
where d1, d2 ≠ 0. Although described here for the case n = 2, it should be clear that the same argument can be applied to a square matrix of any size. Thus, if AD = I, then the diagonal entries d1, d2, ..., dn of D must be nonzero, and A is the diagonal matrix with diagonal entries 1/d1, 1/d2, ..., 1/dn.

D10. (a) False. If A is not square then A is not invertible; it doesn't matter whether AA^T (which is always square) is invertible or not. [But if A is square and AA^T is invertible, then A is invertible by Theorem 3.6.5.]
(b) False. For example, two non-symmetric matrices can have a sum that is symmetric.
(c) True. If A is both symmetric and triangular, then A must be a diagonal matrix. Thus A is diagonal with entries d1, d2, ..., dn, and so p(A) is the diagonal matrix with entries p(d1), p(d2), ..., p(dn), which is also both symmetric and triangular.
(d) True. For example, in the 3 × 3 case, this can be checked by direct computation (details omitted).
(e) True. If Ax = 0 has only the trivial solution, then A is invertible. But if A is invertible then so is A^T (Theorem 3.2.11); thus A^T x = 0 has only the trivial solution.

D11. (a) False (a 2 × 2 counterexample exists; details omitted).
(b) True. If A is invertible then A^k is invertible, and thus A^k ≠ 0, for every k = 1, 2, 3, .... This shows that an invertible matrix cannot be nilpotent; equivalently, a nilpotent matrix cannot be invertible.
(c) True (assuming A ≠ 0). If A^3 = A, then A^5 = A, A^7 = A, A^9 = A, .... Thus it is not possible to have A^k = 0 for any positive integer k, since this would imply that A^j = 0 for all j ≥ k.
(d) True. See Theorem 3.2.11.
(e) False. For example, I is invertible but I - I = 0 is not invertible.
WORKING WITH PROOFS
P1. If A and B are symmetric, then A^T = A and B^T = B. It follows that
(A^T)^T = A = A^T
(A + B)^T = A^T + B^T = A + B
(A - B)^T = A^T - B^T = A - B
(kA)^T = kA^T = kA
thus the matrices A^T, A + B, A - B, and kA are also symmetric.
P2. Our proof is by induction on the exponent k.
Step 1. We have D^1 = D, the diagonal matrix with diagonal entries d1, d2, ..., dn; thus the statement is true for k = 1.
Step 2 (induction step). Suppose the statement is true for k = j, where j is an integer ≥ 1. Then
D^{j+1} = D^j D = [d1^j 0 ... 0; 0 d2^j ... 0; ...; 0 0 ... dn^j][d1 0 ... 0; 0 d2 ... 0; ...; 0 0 ... dn] = [d1^{j+1} 0 ... 0; 0 d2^{j+1} ... 0; ...; 0 0 ... dn^{j+1}]
and so the statement is also true for k = j + 1.
These two steps complete the proof by induction.
P3. If d1, d2, ..., dn are nonzero, then D = [d1 0 ... 0; 0 d2 ... 0; ...; 0 0 ... dn] is invertible with
D^{-1} = [1/d1 0 ... 0; 0 1/d2 ... 0; ...; 0 0 ... 1/dn]
since DD^{-1} = I. On the other hand, if any one of the diagonal entries is zero, then D has a row of zeros and thus is not invertible.
P4. We will show that if A is symmetric (i.e. if A^T = A), then (A^n)^T = A^n for each positive integer n. Our proof is by induction on the exponent n.
Step 1. Since A is symmetric, we have (A^1)^T = A^T = A = A^1; thus the statement is true for n = 1.
Step 2 (induction step). Suppose the statement is true for n = j, where j is an integer ≥ 1. Then
(A^{j+1})^T = (AA^j)^T = (A^j)^T A^T = A^j A = A^{j+1}
and so the statement is also true for n = j + 1.
These two steps complete the proof by induction.
P5. If A is invertible, then Theorem 3.2.11 implies A^T is invertible; thus the products AA^T and A^T A are invertible as well. On the other hand, if either AA^T or A^T A is invertible, then Theorem 3.3.8 implies that A is invertible. It follows that A, AA^T, and A^T A are either all invertible or all singular.
EXERCISE SET 3.7
1. We will solve Ax = b by first solving Ly = b for y, and then solving Ux = y for x.
The system Ly = b is
3y1 = 0
-2y1 + y2 = 1
from which, by forward substitution, we obtain y1 = 0, y2 = 1.
The system Ux = y is
x1 - 2x2 = 0
x2 = 1
from which, by back substitution, we obtain x1 = 2, x2 = 1. It is easy to check that this is in fact the solution of Ax = b.

2. The solution of Ly = b is y1 = 3 and y2 by forward substitution; back substitution in Ux = y then gives the solution of Ax = b (details omitted).
3. We will solve Ax = b by first solving Ly = b for y, and then solving Ux = y for x.
The system Ly = b is
3y1 = -3
2y1 + 4y2 = -22
-4y1 - y2 + 2y3 = 3
from which, by forward substitution, we obtain y1 = -1, y2 = -5, y3 = -3.
The system Ux = y is
x1 - 2x2 - x3 = -1
x2 + 2x3 = -5
x3 = -3
from which, by back substitution, we obtain x1 = -2, x2 = 1, x3 = -3. It is easy to check that this is in fact the solution of Ax = b.

4. The solution of Ly = b is y1 = 1, y2 = 5, y3 = 11. The solution of Ux = y (and of Ax = b) is x1 = 31/14, x2 = 37/14, x3 = 11/14.
5. The matrix A = [2 8; -1 -1] can be reduced to row echelon form by the following operations:
[2 8; -1 -1] → [1 4; -1 -1] → [1 4; 0 3] → [1 4; 0 1]
The multipliers associated with these operations are 1/2, 1, and 1/3; thus
A = LU = [2 0; -1 3][1 4; 0 1]
is an LU-factorization of A.
To solve the system Ax = b where b = (-2, -2), we first solve Ly = b for y, and then solve Ux = y for x:
The system Ly = b is
2y1 = -2
-y1 + 3y2 = -2
from which we obtain y1 = -1, y2 = -1.
The system Ux = y is
x1 + 4x2 = -1
x2 = -1
from which we obtain x1 = 3, x2 = -1. It is easy to check that this is in fact the solution of Ax = b.
6. An LU-decomposition of the given matrix A is obtained as in Problem 5 (details omitted). The solution of Ly = b is y1 = 2, y2 = -1, and the solution of Ux = y (and of Ax = b) is x1 = 4, x2 = -1.

7. The matrix A = [2 -2 -2; 0 -2 2; -1 5 2] can be reduced to row echelon form by the following sequence of operations:
[2 -2 -2; 0 -2 2; -1 5 2] → [1 -1 -1; 0 -2 2; -1 5 2] → [1 -1 -1; 0 -2 2; 0 4 1] → [1 -1 -1; 0 1 -1; 0 4 1] → [1 -1 -1; 0 1 -1; 0 0 5] → [1 -1 -1; 0 1 -1; 0 0 1] = U
The multipliers associated with these operations are 1/2, 0 (for the second row), 1, -1/2, -4, and 1/5; thus
A = LU = [2 0 0; 0 -2 0; -1 4 5][1 -1 -1; 0 1 -1; 0 0 1]
is an LU-factorization of A.
To solve the system Ax = b where b = (-4, -2, 6), we first solve Ly = b for y, and then solve Ux = y for x:
The system Ly = b is
2y1 = -4
-2y2 = -2
-y1 + 4y2 + 5y3 = 6
from which we obtain y1 = -2, y2 = 1, y3 = 0.
The system Ux = y is
x1 - x2 - x3 = -2
x2 - x3 = 1
x3 = 0
from which we obtain x1 = -1, x2 = 1, x3 = 0. It is easy to check that this is the solution of Ax = b.
8. An LU-decomposition of the given matrix A is found as in Problem 7 (details omitted). The solution of Ly = b is y1 = 0, y2 = 1, y3 = 0. The solution of Ux = y (and of Ax = b) is x1 = -1, x2 = 1, x3 = 0.
9. The matrix A = [-1 0 1 0; 2 3 -2 6; 0 -1 2 0; 0 0 1 5] can be reduced to row echelon form by a sequence of row operations (intermediate steps omitted) whose associated multipliers are -1, -2, 0 (for the third row), 0, 1/3, 1, 0, 1/2, -1, and 1/4; thus
A = LU = [-1 0 0 0; 2 3 0 0; 0 -1 2 0; 0 0 1 4][1 0 -1 0; 0 1 0 2; 0 0 1 1; 0 0 0 1]
is an LU-decomposition of A.
To solve the system Ax = b where b = (5, -1, 3, 7), we first solve Ly = b for y, and then solve Ux = y for x:
The system Ly = b is
-y1 = 5
2y1 + 3y2 = -1
-y2 + 2y3 = 3
y3 + 4y4 = 7
from which we obtain y1 = -5, y2 = 3, y3 = 3, y4 = 1.
The system Ux = y is
x1 - x3 = -5
x2 + 2x4 = 3
x3 + x4 = 3
x4 = 1
from which we obtain x1 = -3, x2 = 1, x3 = 2, x4 = 1. It is easy to check that this is in fact the solution of Ax = b.
10. An LU-decomposition of the given 4 × 4 matrix A is found as in Problem 9 (details omitted). Solving Ly = b by forward substitution and then Ux = y by back substitution gives the solution x1 = -1, x2 = -1/2, x3 = 1/2, x4 = 1/10 of Ax = b.
11. Let e1, e2, e3 be the standard unit vectors in R3. The jth column xj of the matrix A^{-1} is obtained by solving the system Ax = ej for x = xj. Using the given LU-decomposition, we do this by first solving Ly = ej for y = yj, and then solving Ux = yj for x = xj.
(1) Computation of x1: The system Ly = e1 is
3y1 = 1
2y1 + 4y2 = 0
-4y1 - y2 + 2y3 = 0
from which we obtain y1 = 1/3, y2 = -1/6, y3 = 7/12. Then the system Ux = y1 is
x1 - 2x2 - x3 = 1/3
x2 + 2x3 = -1/6
x3 = 7/12
from which we obtain x1 = -7/4, x2 = -4/3, x3 = 7/12. Thus x1 = (-7/4, -4/3, 7/12).
(2) Computation of x2: The system Ly = e2 is
3y1 = 0
2y1 + 4y2 = 1
-4y1 - y2 + 2y3 = 0
from which we obtain y1 = 0, y2 = 1/4, y3 = 1/8. Then the system Ux = y2 is
x1 - 2x2 - x3 = 0
x2 + 2x3 = 1/4
x3 = 1/8
from which we obtain x1 = 1/8, x2 = 0, x3 = 1/8. Thus x2 = (1/8, 0, 1/8).
(3) Computation of x3: The system Ly = e3 is
3y1 = 0
2y1 + 4y2 = 0
-4y1 - y2 + 2y3 = 1
from which we obtain y1 = 0, y2 = 0, y3 = 1/2. Then the system Ux = y3 is
x1 - 2x2 - x3 = 0
x2 + 2x3 = 0
x3 = 1/2
from which we obtain x1 = -3/2, x2 = -1, x3 = 1/2. Thus x3 = (-3/2, -1, 1/2).
Finally, as a result of these computations, we conclude that
A^{-1} = [x1 x2 x3] = [-7/4 1/8 -3/2; -4/3 0 -1; 7/12 1/8 1/2]
12. Let e1, e2, e3 be the standard unit vectors in R3. The jth column xj of the matrix A^{-1} is obtained by solving the system Ax = ej for x = xj. Using the given LU-decomposition, we do this by first solving Ly = ej for y = yj, and then solving Ux = yj for x = xj; the resulting columns assembled side by side form A^{-1} (details omitted).
13. The matrix A can be reduced to row echelon form by a sequence of row operations (details omitted) with associated multipliers 1/2, 2, -2, 1 (for the leading entry in the second row), and -1. This leads to a factorization A = LU. If instead we prefer a lower triangular factor that has 1s on the main diagonal, this can be achieved by shifting the diagonal entries of L to a diagonal matrix D and writing the factorization as A = L'DU, where L' is unit lower triangular.

14. Similarly, the given matrix can be factored as A = L'DU, where L' is unit lower triangular, D is diagonal, and U is unit upper triangular (details omitted).
15. (a) This is a permutation matrix; it is obtained by interchanging the rows of I2.
(b) This is not a permutation matrix; the second row is not a row of I3.
(c) This is a permutation matrix; it is obtained by reordering the rows of I4 (3rd, 2nd, 4th, 1st).

16. (b) is a permutation matrix. (a) and (c) are not permutation matrices.
17. The system Ax = b is equivalent to P^{-1}Ax = P^{-1}b where P^{-1} = P. Using the given decomposition, the system P^{-1}Ax = P^{-1}b can be written as LUx = P^{-1}b with L = [1 0 0; 0 1 0; 3 -5 1]. We solve this by first solving Ly = P^{-1}b for y, and then solving Ux = y for x.
The system Ly = P^{-1}b is
y1 = 1
y2 = 2
3y1 - 5y2 + y3 = 5
from which we obtain y1 = 1, y2 = 2, y3 = 12. Finally, back substitution in the system Ux = y (whose last equation is 17x3 = 12, so that x3 = 12/17) gives the solution of Ax = b (remaining details omitted).

18. The system Ax = b is equivalent to P^{-1}Ax = P^{-1}b where P^{-1} = P. Using the given decomposition, the system P^{-1}Ax = P^{-1}b can be written as LUx = P^{-1}b. The solution of Ly = P^{-1}b is y1 = 3, y2 = 0, y3 = 0; the solution of Ux = y (and of Ax = b) then has x2 = 0 and x3 = 0, with x1 obtained by back substitution (details omitted).
19. If we interchange rows 2 and 3 of A, then the resulting matrix can be reduced to row echelon form without any further row interchanges. This is equivalent to first multiplying A on the left by the corresponding permutation matrix P. The reduction of PA to row echelon form (details omitted) corresponds to an LU-decomposition PA = LU of the matrix PA, or to the PLU-decomposition A = P^{-1}LU of A. Note that, since P^{-1} = P, this decomposition can also be written as A = PLU.
The system Ax = b is equivalent to PAx = LUx = Pb. The solution of Ly = Pb is y1 = -2/3, y2 = 2, y3 = 3, and the solution of Ux = y (and of Ax = b) is then obtained by back substitution, with x3 = 3 (remaining details omitted).

20. If we interchange rows 1 and 2 of A, then the resulting matrix can be reduced to row echelon form without any further row interchanges. This is equivalent to first multiplying A on the left by the corresponding permutation matrix P. The LU-decomposition of the matrix PA (details omitted) gives the corresponding PLU-decomposition of A; since P^{-1} = P, this decomposition can also be written as A = PLU.
The system Ax = b is equivalent to PAx = LUx = Pb. Forward substitution gives the solution of Ly = Pb, with y1 = 5 and y3 = 4; the solution of Ux = y (and of Ax = b) is x1 = -16, x2 = 5, x3 = 4.
21. (a) We have n = 10^5 for the given system and so, from Table 3.7.1, the numbers of gigaflops required for the forward and backward phases are approximately
G_forward = (2/3)n^3 × 10^{-9} = (2/3)(10^5)^3 × 10^{-9} = (2/3) × 10^6 ≈ 670,000
G_backward = n^2 × 10^{-9} = (10^5)^2 × 10^{-9} = 10
Thus if a computer can execute 1 gigaflop per second it will require approximately 66.67 × 10^4 s for the forward phase, and approximately 10 s for the backward phase.
(b) If n = 10^4 then, from Example 4, the total number of gigaflops required for the forward and backward phases is approximately (2/3) × 10^3 + 10^{-1} ≈ 667. Thus, in order to complete the task in less than 0.5 s, a computer must be able to execute at least 667/0.5 = 1334 gigaflops per second.
22. (a) If A has such an LU-decomposition, it can be written as
[a b; c d] = [1 0; w 1][x y; 0 z] = [x y; wx wy + z]
This yields the system of equations
x = a, y = b, wx = c, wy + z = d
and, since a ≠ 0, this system has the unique solution
x = a, y = b, w = c/a, z = (ad - bc)/a
(b) From the above, we have
[a b; c d] = [1 0; c/a 1][a b; 0 (ad - bc)/a]
DISCUSSION AND DISCOVERY
D1. The rows of P are obtained by reordering the rows of the identity matrix I4 (4th, 3rd, 2nd, 1st). Thus P is a permutation matrix. Multiplication of A (on the left) by P results in the corresponding reordering of the rows of A: the rows of PA are the rows of A in the order 4th, 3rd, 2nd, 1st.
CHAPTER 4
Determinants
EXERCISE SET 4.1
1. |3 5; -2 4| = (3)(4) - (5)(-2) = 12 + 10 = 22

2. (4)(2) - (1)(8) = 8 - 8 = 0

3. |-5 7; -7 -2| = (-5)(-2) - (7)(-7) = 10 + 49 = 59

4. (√2)(√3) - (4)(√6) = √6 - 4√6 = -3√6

5. |a-3 5; -3 a-2| = (a - 3)(a - 2) - (5)(-3) = (a^2 - 5a + 6) + 15 = a^2 - 5a + 21

6. (a determinant computed in the same way; details omitted)

7. (-20 - 7 + 72) - (20 + 84 + 6) = 45 - 110 = -65

8. (-8 - 42 + 240) - (18 + 140 + 32) = 190 - 190 = 0

9. (0 - 5 + 42) - (0 + 6 + 35) = 37 - 41 = -4

10. (12 + 0 + 0) - (0 + 135 + 0) = -123

For the determinant involving the parameter c:
(2c - 16c^2 + 6(c - 1)) - (12 + c^3(c - 1) - 16) = -c^4 + c^3 - 16c^2 + 8c - 2

11. (a) {4, 1, 3, 5, 2} is an odd permutation (3 interchanges). The signed product is -a14 a21 a33 a45 a52.
(b) {5, 3, 4, 2, 1} is an odd permutation (3 interchanges). The signed product is -a15 a23 a34 a42 a51.
(c) {4, 2, 5, 3, 1} is an odd permutation (3 interchanges). The signed product is -a14 a22 a35 a43 a51.
(d) {5, 4, 3, 2, 1} is an even permutation (2 interchanges). The signed product is +a15 a24 a33 a42 a51.
(e) {1, 2, 3, 4, 5} is an even permutation (0 interchanges). The signed product is +a11 a22 a33 a44 a55.
(f) {1, 4, 2, 3, 5} is an even permutation (2 interchanges). The signed product is +a11 a24 a32 a43 a55.
12. (a), (b), (c), and (d) are even; (e) and (f) are odd.

13. det(A) = (λ - 2)(λ + 4) + 5 = λ^2 + 2λ - 3 = (λ - 1)(λ + 3). Thus det(A) = 0 if and only if λ = 1 or λ = -3.

14. λ = 1, λ = 3, or λ = -2.

15. det(A) = (λ - 1)(λ + 1). Thus det(A) = 0 if and only if λ = 1 or λ = -1.

16. λ = 2 or λ = 5.

17. We have
|x 1; -3 1-x| = x(1 - x) + 3 = -x^2 + x + 3
and
|1 0 -3; 2 x -6; 1 3 x-5| = (x(x - 5) + 0 - 18) - (-3x - 18 + 0) = x^2 - 2x
Thus the given equation is valid if and only if -x^2 + x + 3 = x^2 - 2x, i.e. if 2x^2 - 3x - 3 = 0. The roots of this quadratic equation are x = (3 ± √33)/4.

18. y = 3
19.–20. In each part the matrix is diagonal or triangular, and so the determinant is the product of the diagonal entries; for example, (1)(2)(3)(4) = 24 and (-3)(2)(-1)(3) = 18 for two of the 4 × 4 matrices, and (1)(-1)(1) = -1 for one of the 3 × 3 matrices (remaining details omitted).

21. M11 = |7 -1; 1 4| = 29, C11 = 29; M12 = |6 -1; -3 4| = 21, C12 = -21; M13 = |6 7; -3 1| = 27, C13 = 27; M21 = |-2 3; 1 4| = -11, C21 = 11; M22 = |1 3; -3 4| = 13, C22 = 13; M23 = |1 -2; -3 1| = -5, C23 = 5; M31 = |-2 3; 7 -1| = -19, C31 = -19; M32 = |1 3; 6 -1| = -19, C32 = 19; M33 = |1 -2; 6 7| = 19, C33 = 19

22. M11 = 6, C11 = 6; M12 = 12, C12 = -12; M13 = 3, C13 = 3; M21 = 2, C21 = -2; M22 = 4, C22 = 4; M23 = 1, C23 = -1; M31 = 0, C31 = 0; M32 = 0, C32 = 0; M33 = 0, C33 = 0

23. (a) M13 = (0 + 0 + 12) - (12 + 0 + 0) = 0, C13 = 0
(b) M21 = (0 + 14 + 18) - (0 + 2 - 42) = 72, C21 = -72
(c) M22 = (0 + 56 + 72) - (0 + 8 + 168) = -48, C22 = -48
(d) M23 = (8 - 56 + 24) - (24 + 56 - 8) = -96, C23 = 96

24. (a) M32 = -30, C32 = 30
(b) M44 = 13, C44 = 13
(c) M41 = -1, C41 = 1
(d) M24 = 0, C24 = 0

25. (a) det(A) = (1)C11 + (-2)C12 + (3)C13 = (1)(29) + (-2)(-21) + (3)(27) = 152
(b) det(A) = (1)C11 + (6)C21 + (-3)C31 = (1)(29) + (6)(11) + (-3)(-19) = 152
(c) det(A) = (6)C21 + (7)C22 + (-1)C23 = (6)(11) + (7)(13) + (-1)(5) = 152
(d) det(A) = (-2)C12 + (7)C22 + (1)C32 = (-2)(-21) + (7)(13) + (1)(19) = 152
(e) det(A) = (-3)C31 + (1)C32 + (4)C33 = (-3)(-19) + (1)(19) + (4)(19) = 152
(f) det(A) = (3)C13 + (-1)C23 + (4)C33 = (3)(27) + (-1)(5) + (4)(19) = 152

26. (a) det(A) = (1)C11 + (1)C12 + (2)C13 = (1)(6) + (1)(-12) + (2)(3) = 0
(b) det(A) = (1)C11 + (3)C21 + (0)C31 = (1)(6) + (3)(-2) + (0)(0) = 0
(c) det(A) = (3)C21 + (3)C22 + (6)C23 = (3)(-2) + (3)(4) + (6)(-1) = 0
(d) det(A) = (1)C12 + (3)C22 + (1)C32 = (1)(-12) + (3)(4) + (1)(0) = 0
(e) det(A) = (0)C31 + (1)C32 + (4)C33 = (0)(0) + (1)(0) + (4)(0) = 0
(f) det(A) = (2)C13 + (6)C23 + (4)C33 = (2)(3) + (6)(-1) + (4)(0) = 0

27. Using column 2: det(A) = (5)(-15 + 7) = -40

28. Using row 2: det(A) = (1)(-18) + (-4)(12) = -66
30. det(A) = k^3 - 8k^2 - 10k + 95

31. Using column 3: det(A) = (-3)(128) - (3)(-48) = -384 + 144 = -240

32. det(A) = 0

33. By expanding along the third column, we have
|sin θ  -cos θ  0; cos θ  sin θ  0; sin θ - cos θ  sin θ + cos θ  1| = (1)|sin θ  -cos θ; cos θ  sin θ| = sin^2 θ + cos^2 θ = 1
for all values of θ.

34. Computing AB and BA directly (details omitted), we have AB = BA if and only if ae + bf = bd + ce, and this is equivalent to the condition that b(d - f) - (a - c)e = 0.

36. If A = [a b; c d], then tr(A) = a + d, A^2 = [a^2 + bc  ab + bd; ca + dc  cb + d^2], and tr(A^2) = a^2 + 2bc + d^2. Thus
(1/2)|tr(A) 1; tr(A^2) tr(A)| = (1/2)((a + d)^2 - (a^2 + 2bc + d^2)) = ad - bc = det(A)
DISCUSSION AND DISCOVERY
D 1.
Sin e~ Lhe
the
~till\
product of integers is always t\n integer , each elemf!ntary product is an integer and so
of the elementa.ry products is an integer
D2. The signed ele mentary p roducts will all be ±(1)(1) · · · (1)
and half equal to -1 . Thus the determinant. will be zero.
D3.
= ±1 , with
half of t hem equal to +1
A 3 x 3 matrix A can have as many as six zeros without having det(A) = 0. For example, let. A
be a diag0nal matrix with nonzero diagonal entries.
D4. If we expand along the first row, the equation

    | x   y   1 |
    | a1  b1  1 |  =  0
    | a2  b2  1 |

becomes (b1 - b2)x - (a1 - a2)y + (a1b2 - a2b1) = 0, and this is an equation for the line through the points P1(a1, b1) and P2(a2, b2).
D5. If uᵀ = (a1, a2, a3) and vᵀ = (b1, b2, b3), then each of the six elementary products of uvᵀ is of the form (a1·b_{j1})(a2·b_{j2})(a3·b_{j3}) where {j1, j2, j3} is a permutation of {1, 2, 3}; thus each of the elementary products is equal to (a1a2a3)(b1b2b3).
Chapter 4
WORKING WITH PROOFS
Xt
Pl. If t he three points lie on a vertical line then XJ = x2 = xa = a and we have
l/1
1
yz 1
:r:s l/3 1
x2
=
a l7l 1
a yz 1
= 0.
a !13 1
Thus, without loss of generality, we may assume that the points do not· lie on a vertical line. In
this case the points are collinear if and only if %,
~
= ~.
The latter condition is equivalent
q -x1
:1!2 - Xl
to (Y3 - Yt)(x2 - xt) = (Y2- Y1)(:r3- x1) wh.ich can be writ ten as:
(X2Y3 - X:;Yz) - (XtY3- XJYl)
+ (XJY2- X2Y1) = 0
On the other hand, expanding along the third column, we have:
·Thus the three points are collinear if and only if
Xl
Yt
x2
v2
1
1
XJ
l/3
1
= 0.
P2. We wish to prove that for each positive integer n, there are n! permutations of a set {j1, j2, ..., jn} of n distinct elements. Our proof is by induction on n.

Step 1. It is clear that there is exactly 1 = 1! permutation of the set {j1}. Thus the statement is true for the case n = 1.

Step 2 (induction step). Suppose that the statement is true for n = k, where k is a fixed integer ≥ 1. Let S = {j1, j2, ..., jk, j_{k+1}}. Then a permutation of the set S is formed by first choosing one of k + 1 positions for the element j_{k+1}, and then choosing a permutation for the remaining k elements in the remaining k positions. There are k + 1 possibilities for the first choice and, by the induction hypothesis, k! possibilities for the second choice. Thus there are a total of (k + 1)k! = (k + 1)! permutations of S. This shows that if the statement is true for n = k it must also be true for n = k + 1. These two steps complete the proof by induction.
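The count established in P2 can be checked directly for small n by enumeration; a quick sketch (not from the text):

```python
import math
from itertools import permutations

# Count the permutations of {j1, ..., jn} by enumeration and compare with n!.
for n in range(1, 8):
    count = sum(1 for _ in permutations(range(1, n + 1)))
    assert count == math.factorial(n)
```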
EXERCISE SET 4.2
1. (a) det(A) = -8 - 3 = -11, and det(Aᵀ) = -8 - 3 = -11.

   (b) det(A) = | 2 -1  3 |
                | 1  2  4 |  = (24 - 20 - 9) - (30 - 6 - 24) = -5 - 0 = -5
                | 5 -3  6 |

       det(Aᵀ) = | 2  1  5 |
                 |-1  2 -3 |  = (24 - 9 - 20) - (30 - 6 - 24) = -5 - 0 = -5
                 | 3  4  6 |

2. (a) det(A) = det(Aᵀ) = -4   (b) det(A) = det(Aᵀ) = 17

3. (b) det(A) = 0 (the first and third columns are proportional)
   (c) det(A) = (3)(5)(-2) = -30 (the matrix is triangular, so the determinant is the product of the diagonal entries)

4. (a) -2   (b) 0 (identical rows)   (c) 0 (proportional rows)   (d) -12

5. With | a b c; d e f; g h i | = -6:

   (a) | d e f; g h i; a b c | = (-1)(-1)(-6) = -6   (two row interchanges)

   (b) | 3a 3b 3c; -d -e -f; 4g 4h 4i | = (3)(-1)(4)(-6) = (-12)(-6) = 72

   (c) | a+g  b+h  c+i; d e f; g h i | = -6   (adding a multiple of one row to another does not change the determinant)

   (d) | -3a -3b -3c; d e f; g-4d  h-4e  i-4f | = (-3)(-6) = 18

6. (a) 6   (b) 0

7. (a) det(2A) = -16 - 24 = -40 = 4(-4 - 6) = 2² det(A)

   (b) det(-2A) = (-160 + 8 - 288) - (-48 - 64 + 120) = -440 - 8 = -448; likewise
       (-2)³ det(A) = (-8)[(20 - 1 + 36) - (6 + 8 - 15)] = (-8)(56) = -448

8. (a) det(-4A) = -224 = (-4)²(-14) = (-4)² det(A)
   (b) det(3A) = -63 = (3)²(-7) = (3)² det(A)

9. If x = 0, the first and third rows of the given matrix are proportional, so det(A) = 0. If x = 2, the first and second rows are proportional, so det(B) = 0.
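The rule used in Exercises 7 and 8 — det(kA) = kⁿ det(A) for an n × n matrix — can be spot-checked numerically. A minimal sketch (the matrices here are random, not the ones from the text):

```python
import random

def det3(M):
    # 3 x 3 determinant by the rule of Sarrus.
    (a, b, c), (d, e, f), (g, h, i) = M
    return a*e*i + b*f*g + c*d*h - c*e*g - b*d*i - a*f*h

random.seed(1)
for _ in range(20):
    A = [[random.randint(-5, 5) for _ in range(3)] for _ in range(3)]
    for k in (2, -2, 3, -4):
        kA = [[k * x for x in row] for row in A]
        assert det3(kA) == k**3 * det3(A)   # n = 3 here
```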
10. If we replace the first row of the matrix by the sum of the first and second rows, we conclude that

    det | b+c  c+a  a+b |       | a+b+c  b+c+a  c+b+a |
        |  a    b    c  | = det |   a      b      c   |  = 0
        |  1    1    1  |       |   1      1      1   |

since the first and third rows of the latter matrix are proportional.
11. We use the properties of determinants stated in Theorem 4.2.2, with row operations as indicated:

    det(A) = | 3  6 -9 |        | 1  2 -3 |   (a common factor of 3 was taken from the first row)
             | 0  0 -2 |  = (3) | 0  0 -2 |
             |-2  1  5 |        |-2  1  5 |

                     | 1  2 -3 |   (2 times the first row was added to the third row)
             =  (3)  | 0  0 -2 |
                     | 0  5 -1 |

                         | 1  2 -3 |   (the second and third rows were interchanged)
             =  (3)(-1)  | 0  5 -1 |
                         | 0  0 -2 |

             = (3)(-1)(-10) = 30

12. det(A) = 5
13. We use the properties of determinants stated in Theorem 4.2.2: 2 times the first row was added to the second row; -5 times the first row was added to the third row; a factor of -2 was taken from the second row; and -13 times the second row was added to the third row. The resulting triangular determinant gives det(A) = 33.

14. det(A) = -17
15. We use the properties of determinants stated in Theorem 4.2.2, with row operations as indicated:

    det(A) = | 1 -2  3  1 |     | 1 -2  3  1 |   (-5 times row 1 was added to row 2; row 1 was added
             | 5 -9  6  3 |  =  | 0  1 -9 -2 |   to row 3; -2 times row 1 was added to row 4)
             |-1  2 -6 -2 |     | 0  0 -3 -1 |
             | 2  8  6  1 |     | 0 12  0 -1 |

                 | 1 -2   3   1 |   (-12 times row 2 was added to row 4)
             =   | 0  1  -9  -2 |
                 | 0  0  -3  -1 |
                 | 0  0 108  23 |

                 | 1 -2   3   1 |   (36 times row 3 was added to row 4)
             =   | 0  1  -9  -2 |
                 | 0  0  -3  -1 |
                 | 0  0   0 -13 |

             = (1)(1)(-3)(-13) = 39

16. det(A) = 6
17. We use the properties of determinants stated in Theorem 4.2.2, with row operations as indicated: the first and second rows were interchanged; a common factor was taken from the first row; multiples of row 1 were added to rows 3 and 4; multiples of row 2 were added to rows 3 and 4; and a common factor was taken from row 3. Multiplying the resulting triangular determinant by the accumulated factors gives det(A).

18. det(A) is evaluated in the same way, by reducing A to triangular form and keeping track of the factors taken out along the way.
19. (a) | a1+b1  b1+c1  c1+a1 |       | a1  b1  c1 |
        | a2+b2  b2+c2  c2+a2 |  = 2  | a2  b2  c2 |
        | a3+b3  b3+c3  c3+a3 |       | a3  b3  c3 |

    (b) | a1+b1  a1-b1  c1 |        | a1  b1  c1 |
        | a2+b2  a2-b2  c2 |  = -2  | a2  b2  c2 |
        | a3+b3  a3-b3  c3 |        | a3  b3  c3 |

    In (b): adding column 2 to column 1 produces a first column of 2a1, 2a2, 2a3, and a factor of 2 is taken from column 1; then adding -1 times column 1 to column 2 and taking a factor of -1 from column 2 yields the result.

20. (a) | a1+b1t  a2+b2t  a3+b3t |              | a1  a2  a3 |
        | a1t+b1  a2t+b2  a3t+b3 |  = (1 - t²)  | b1  b2  b3 |
        |   c1      c2      c3   |              | c1  c2  c3 |

    (-t times row 1 was added to row 2; a factor of (1 - t²) was taken from row 2; then -t times row 2 was added to row 1.)

    (b) | a1  b1+ta1  c1+rb1+sa1 |     | a1  b1  c1 |
        | a2  b2+ta2  c2+rb2+sa2 |  =  | a2  b2  c2 |
        | a3  b3+ta3  c3+rb3+sa3 |     | a3  b3  c3 |

    (Add -t times column 1 to column 2, -s times column 1 to column 3, and -r times column 2 to column 3.)

21. det(A) = | 1  x  x² |     | 1   x     x²    |
             | 1  y  y² |  =  | 0  y-x  y²-x²   |
             | 1  z  z² |     | 0  z-x  z²-x²   |

                            | 1  x   x² |                   | 1  x   x² |
           = (y-x)(z-x)     | 0  1  y+x |  =  (y-x)(z-x)    | 0  1  y+x |  = (y - x)(z - x)(z - y)
                            | 0  1  z+x |                   | 0  0  z-y |

22. det(A) = (a - b)³(a + 3b)
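The factored form in Exercise 21 is the 3 × 3 Vandermonde determinant, and the identity can be verified numerically over a range of integer inputs. A minimal sketch (not from the text):

```python
def det3(M):
    # 3 x 3 determinant by the rule of Sarrus.
    (a, b, c), (d, e, f), (g, h, i) = M
    return a*e*i + b*f*g + c*d*h - c*e*g - b*d*i - a*f*h

# Verify det[[1, x, x^2], [1, y, y^2], [1, z, z^2]] = (y - x)(z - x)(z - y).
for x in range(-3, 4):
    for y in range(-3, 4):
        for z in range(-3, 4):
            V = [[1, x, x*x], [1, y, y*y], [1, z, z*z]]
            assert det3(V) == (y - x) * (z - x) * (z - y)
```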
23. If we add the first row of A to the second row, the result is a matrix B that has two identical rows; thus det(A) = det(B) = 0.

24. If we add each of the first four rows of A to the last row, the result is a matrix B that has a row of zeros; thus det(A) = det(B) = 0.

25. (a) We have det(A) = (k - 3)(k - 2) - 4 = k² - 5k + 2. Thus, from Theorem 4.2.4, the matrix A is invertible if and only if k² - 5k + 2 ≠ 0, i.e. k ≠ (5 ± √17)/2.

    (b) We have det(A) = 8 + 8k. Thus, from Theorem 4.2.4, the matrix A is invertible if and only if 8 + 8k ≠ 0, i.e. k ≠ -1.

26. (a) k ≠ ±2   (b) k ≠ 1

27. (a) det(3A) = 3³ det(A) = (27)(7) = 189
    (b) det(A⁻¹) = 1/det(A) = 1/7
    (c) det(2A⁻¹) = 2³ det(A⁻¹) = 8/7
    (d) det((2A)⁻¹) = 1/det(2A) = 1/(2³ det(A)) = 1/56

28. (a) det(-A) = (-1)⁴ det(A) = det(A) = -2
    (b) det(A⁻¹) = 1/det(A) = -1/2
    (c) det(2Aᵀ) = det(2A) = 2⁴ det(A) = -32
    (d) det(A³) = (det(A))³ = (-2)³ = -8

29. Multiplying the (triangular) matrices and expanding gives det(AB) = 2(6 - 18) = -24. On the other hand, det(A) = (1)(3)(-2) = -6 and det(B) = (2)(1)(2) = 4. Thus det(AB) = det(A) det(B).

30. We have det(AB) = -170. On the other hand, det(A) = 10 and det(B) = -17. Thus det(AB) = det(A) det(B).
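Exercises 29 and 30 illustrate the product rule det(AB) = det(A) det(B); it can be spot-checked on random integer matrices. A quick sketch (not from the text):

```python
import random

def det3(M):
    # 3 x 3 determinant by the rule of Sarrus.
    (a, b, c), (d, e, f), (g, h, i) = M
    return a*e*i + b*f*g + c*d*h - c*e*g - b*d*i - a*f*h

def matmul3(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

random.seed(2)
for _ in range(50):
    A = [[random.randint(-4, 4) for _ in range(3)] for _ in range(3)]
    B = [[random.randint(-4, 4) for _ in range(3)] for _ in range(3)]
    assert det3(matmul3(A, B)) == det3(A) * det3(B)
```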
31. The following sequence of row operations reduces A to an upper triangular form (compare Exercise 15): -5 times row 1 added to row 2; row 1 added to row 3; -2 times row 1 added to row 4; -12 times row 2 added to row 4; 36 times row 3 added to row 4. The corresponding LU-factorization is

    A = | 1 -2  3  1 |     | 1   0   0  0 | | 1 -2  3   1 |
        | 5 -9  6  3 |  =  | 5   1   0  0 | | 0  1 -9  -2 |  = LU
        |-1  2 -6 -2 |     |-1   0   1  0 | | 0  0 -3  -1 |
        | 2  8  6  1 |     | 2  12 -36  1 | | 0  0  0 -13 |

    and from this we conclude that det(A) = det(L) det(U) = (1)(39) = 39.

32. A similar sequence of row operations produces an LU-decomposition A = LU in which L is unit lower triangular, and from this we conclude that det(A) = det(L) det(U) = (1)(6) = 6.
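The row-reduction bookkeeping used in Exercises 11-18 and 31-32 can be automated: row additions leave the determinant unchanged, each row interchange flips its sign, and the result is the signed product of the pivots. A minimal sketch; the 4 × 4 matrix below is the one reconstructed for Exercise 31, so the expected determinant is 39:

```python
from fractions import Fraction

def det_by_elimination(A):
    # Determinant via Gaussian elimination, tracking row interchanges.
    M = [[Fraction(x) for x in row] for row in A]
    n, sign = len(M), 1
    for c in range(n):
        pivot = next((r for r in range(c, n) if M[r][c] != 0), None)
        if pivot is None:
            return Fraction(0)          # zero column -> singular
        if pivot != c:
            M[c], M[pivot] = M[pivot], M[c]
            sign = -sign                # a row swap flips the sign
        for r in range(c + 1, n):
            factor = M[r][c] / M[c][c]
            M[r] = [x - factor * y for x, y in zip(M[r], M[c])]
    result = Fraction(sign)
    for i in range(n):
        result *= M[i][i]               # product of the pivots
    return result

A = [[1, -2, 3, 1],
     [5, -9, 6, 3],
     [-1, 2, -6, -2],
     [2, 8, 6, 1]]
assert det_by_elimination(A) == 39
```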
33. If we add the first row of A to the second row, using the identity sin²θ + cos²θ = 1, we see that

    det(A) = | sin²α  sin²β  sin²γ |     | sin²α  sin²β  sin²γ |
             | cos²α  cos²β  cos²γ |  =  |   1      1      1   |  = 0
             |   1      1      1   |     |   1      1      1   |

    since the resulting matrix has two identical rows. Thus, from Theorem 4.2.4, A is not invertible.
34. (a) λ = 3 or λ = -1   (b) λ = 6 or λ = -1   (c) λ = 2 or λ = -2

35. (a) Since det(Aᵀ) = det(A), we have det(AᵀA) = det(Aᵀ) det(A) = (det(A))² = det(A) det(Aᵀ) = det(AAᵀ).
    (b) Since det(AᵀA) = (det(A))², it follows that det(AᵀA) = 0 if and only if det(A) = 0. Thus, from Theorem 4.2.4, AᵀA is invertible if and only if A is invertible.
36. det(A⁻¹BA) = det(A⁻¹) det(B) det(A) = (1/det(A)) det(B) det(A) = det(B)

37. ||x||² ||y||² - (x·y)² = (x1² + x2² + x3²)(y1² + y2² + y3²) - (x1y1 + x2y2 + x3y3)²
    = x1²y2² - 2x1y2x2y1 + x2²y1² + x1²y3² - 2x1y3x3y1 + x3²y1² + x2²y3² - 2x2y3x3y2 + x3²y2²
    = (x1y2 - x2y1)² + (x1y3 - x3y1)² + (x2y3 - x3y2)²
    = |x1 x2; y1 y2|² + |x1 x3; y1 y3|² + |x2 x3; y2 y3|²

38. (a) The two diagonal blocks have determinants 5 - 4 = 1 and 3 - 6 = -3; thus det(M) = (1)(-3) = -3.
    (b) The two diagonal blocks have determinants (2)(5 - 4) = 2 and (3)(1)(-4) = -12; thus det(M) = (2)(-12) = -24.

39. (a) det(M) = 1
    (b) det(M) = (6 + 4) | 12  12; -4  -13 | = (10)(-108) = -1080
DISCUSSION AND DISCOVERY
D1. The matrices are singular if and only if the corresponding determinants are zero. Setting the two determinants equal to zero gives a system of equations from which the values of s and t follow.

D2. Since det(AB) = det(A) det(B) = det(B) det(A) = det(BA), it is always true that det(AB) = det(BA).

D3. If A or B is not invertible then either det(A) = 0 or det(B) = 0 (or both). It follows that det(AB) = det(A) det(B) = 0; thus AB is not invertible.
D4. For convenience call the given matrix Aₙ. If n = 2 or 3, then Aₙ can be reduced to the identity matrix by interchanging the first and last rows. Thus det(Aₙ) = -1 if n = 2 or 3. If n = 4 or 5, then two row interchanges are required to reduce Aₙ to the identity (interchange the first and last rows, then interchange the second and next-to-last rows). Thus det(Aₙ) = +1 if n = 4 or 5. This pattern continues and can be summarized as follows:

    det(A_{2k}) = det(A_{2k+1}) = -1   for k = 1, 3, 5, ...
    det(A_{2k}) = det(A_{2k+1}) = +1   for k = 2, 4, 6, ...
D5. If A is skew-symmetric, then det(A) = det(Aᵀ) = det(-A) = (-1)ⁿ det(A) where n is the size of A. It follows that if A is a skew-symmetric matrix of odd order, then det(A) = -det(A) and so det(A) = 0.
D6. Let A be an n × n matrix, and let B be the matrix that results when the rows of A are written in reverse order. Then the matrix B can be reduced to A by a series of row interchanges. If n = 2 or 3, then only one interchange is needed and so det(B) = -det(A). If n = 4 or 5, then two interchanges are required and so det(B) = +det(A). This pattern continues:

    det(B) = -det(A)   for n = 2k or 2k + 1 where k is odd
    det(B) = +det(A)   for n = 2k or 2k + 1 where k is even
D7. (a) False. For example, if A = I = I₂, then det(I + A) = det(2I) = 4, whereas 1 + det(A) = 2.
    (b) True. From Theorem 4.2.5 it follows that det(Aⁿ) = (det(A))ⁿ for every n = 1, 2, 3, ....
    (c) False. From Theorem 4.2.3(c), we have det(3A) = 3ⁿ det(A) where n is the size of A. Thus the statement is false except when n = 1 or det(A) = 0.
    (d) True. If det(A) = 0, the matrix is singular and so the system Ax = 0 has infinitely many solutions.
D8. (a) True. If A is invertible, then det(A) ≠ 0. Since det(ABA) = det(A) det(B) det(A), it follows that if A is invertible and det(ABA) = 0, then det(B) = 0.
    (b) True. If A = A⁻¹, then since det(A⁻¹) = 1/det(A), it follows that (det(A))² = 1 and so det(A) = ±1.
    (c) True. If the reduced row echelon form of A has a row of zeros, then A is not invertible.
    (d) True. Since det(Aᵀ) = det(A), it follows that det(AAᵀ) = det(A) det(Aᵀ) = (det(A))² ≥ 0.
    (e) True. If det(A) ≠ 0 then A is invertible, and an invertible matrix can always be written as a product of elementary matrices.
D9. If A = A², then det(A) = det(A²) = (det(A))² and so det(A) = 0 or det(A) = 1. If A = A³, then det(A) = det(A³) = (det(A))³ and so det(A) = 0 or det(A) = ±1.

D10. Each elementary product of this matrix must include a factor that comes from the 3 × 3 block of zeros in the upper right. Thus all of the elementary products are zero. It follows that det(A) = 0, no matter what values are assigned to the starred quantities.

D11. This permutation of the columns of an n × n matrix A can be attained via a sequence of n - 1 column interchanges which successively move the first column to the right by one position (i.e. interchange columns 1 and 2, then interchange columns 2 and 3, etc.). Thus the determinant of the resulting matrix is equal to (-1)ⁿ⁻¹ det(A).
WORKING WITH PROOFS
P1. If x = (x1, ..., xn) and y = (y1, ..., yn) then, using cofactor expansions along the jth column, we have

    det(C) = (x1 + y1)C1j + (x2 + y2)C2j + ··· + (xn + yn)Cnj
           = (x1C1j + x2C2j + ··· + xnCnj) + (y1C1j + y2C2j + ··· + ynCnj) = det(A) + det(B)
P2. Suppose A is a square matrix, and B is the matrix that is obtained from A by adding k times the ith row to the jth row. Then, expanding along the jth row of B, we have

    det(B) = (a_j1 + k·a_i1)C_j1 + (a_j2 + k·a_i2)C_j2 + ··· + (a_jn + k·a_in)C_jn = det(A) + k·det(C)

where C is the matrix obtained from A by replacing the jth row by a copy of the ith row. Since C has two identical rows, it follows that det(C) = 0, and so det(B) = det(A).
EXERCISE SET 4.3
1. Computing the matrix of cofactors C and transposing gives adj(A) = Cᵀ; here det(A) = (2)(-3) + (5)(3) + (5)(-2) = -1, so A⁻¹ = (1/det(A)) adj(A) = -adj(A).

3. The matrix of cofactors likewise gives adj(A); here det(A) = (2)(2) + (-3)(0) + (5)(0) = 4, so A⁻¹ = (1/4) adj(A).

4. A⁻¹ = (1/det(A)) adj(A), computed in the same way.

5. By Cramer's rule, x1 = det(A1)/det(A) and x2 = det(A2)/det(A), where Aj is obtained from the coefficient matrix A by replacing its jth column with the vector of constants.
6. By Cramer's rule: x = -153/-51 = 3 and y = 204/-51 = -4.

7. By Cramer's rule: x = -144/55, y = 61/55, and z = -230/55 = -46/11.

8. By Cramer's rule: x1 = -30/11, x2 = -38/11, and x3 = -40/11.

9. By Cramer's rule: x1 = -2115/-423 = 5, x2 = -1269/-423 = 3, x3 = -3384/-423 = 8, and x4 = 423/-423 = -1.

10. By Cramer's rule: I1 = 2/2 = 1 and I2 = 2/2 = 1.
11. Cramer's rule gives x3 = -2/2 = -1 and x4 = -2/2 = -1.

12. Cramer's rule gives y = -33/41.

13. The matrix of cofactors is C = | cosθ  -sinθ  0; sinθ  cosθ  0; 0  0  1 |, so

    adj(A) = Cᵀ = |  cosθ  sinθ  0 |
                  | -sinθ  cosθ  0 |
                  |   0     0    1 |

    and det(A) = cos²θ + sin²θ = 1. Thus A is invertible for every θ, and A⁻¹ = adj(A).

14. Using the identity 1 + tan²α = sec²α, we have det(A) = 1 for α ≠ π/2 + nπ. Thus, for these values of α, the matrix A is invertible and A⁻¹ = adj(A).

15. The coefficient matrix A has det(A) = k(k - 4). Thus the system has a unique solution if k ≠ 0 and k ≠ 4. In this case the solution is given by:

    x = (k-1)(k-6) / (k(k-4)),   y = -2(k-1) / (k(k-4)),   z = (2k-3)(k-4) / (k(k-4)) = (2k-3)/k

16. Here det(A) = (5k - 3)(3k + 2), and the solution is given by:

    x = (14k-3) / ((5k-3)(3k+2)),   y = -3(k²-k+3) / ((5k-3)(3k+2)),   z = 3k/(5k-3)

17. We have det(A) = y³ - x²y = y(y² - x²). Thus A is invertible if and only if y ≠ 0 and y ≠ ±x, in which case A⁻¹ = (1/(y(y² - x²))) adj(A).
18. We have det(A) = (s² - t²)². Thus A is invertible if and only if s ≠ ±t. The formula for the inverse is

    A⁻¹ = 1/(s² - t²) |  s  0 -t  0 |
                      |  0  s  0 -t |
                      | -t  0  s  0 |
                      |  0 -t  0  s |

19. |det(A)| = |1 + 2| = 3

20. |det(A)| = |0 - 6| = 6

21. |det(A)| = |(0 - 4 + 6) - (0 + 4 + 2)| = |-4| = 4

22. |det(A)| = |(1 + 1 + 0) - (0 - 3 + 2)| = |3| = 3
23. The parallelogram has the vectors P1P2 = (3, 2) and P1P4 = (3, 1) as adjacent sides. Let A = [3 3; 2 1]. Then, from Theorem 4.3.5, the area of the parallelogram is |det(A)| = |3 - 6| = 3.

24. The parallelogram has the vectors P1P2 = (2, 2) and P1P4 = (4, 0) as adjacent sides. Let A = [2 4; 2 0]. Then, from Theorem 4.3.5, the area of the parallelogram is |det(A)| = |0 - 8| = 8.

25. area △ABC = (1/2)(7) = 7/2

26. area △ABC = (1/2)(3) = 3/2

27. V = |det(A)|, where the rows of A are the three edge vectors; thus V = |-16| = 16.

28. V = 45
29. The vectors lie in the same plane if and only if the parallelepiped that they determine is degenerate in the sense that its "volume" is zero. In this example we have V = 16 ≠ 0, and so the vectors do not lie in the same plane.
30. These vectors lie in the same plane.
31. a = ±(1/√5)(0, 2, 1)

32. a = ±(1/√61)(6, -3, 4)

33. u × v = | i  j  k |
            | 2  3 -6 |  = 36i - 24j
            | 2  3  6 |

    sin θ = ||u × v|| / (||u|| ||v||) = √(1296 + 576) / (√49 √49) = √1872/49 = (12√13)/49

34. (a) AB × AC = | i  j  k |
                  |-1  2  2 |  = -4i + j - 3k
                  | 1  1 -1 |

    (b) area △ABC = (1/2)||AB × AC|| = (1/2)√26

    (c) area △ABC = (1/2)||AB|| h = (3/2)h; thus h = √26/3
35. (a) v × w = | i  j  k; 0  2 -3; 2  6  7 | = (14 + 18)i - (0 + 6)j + (0 - 4)k = 32i - 6j - 4k

    (b) u × (v × w) = | i  j  k; 3  2 -1; 32 -6 -4 | = (-8 - 6)i - (-12 + 32)j + (-18 - 64)k = -14i - 20j - 82k

36. (a) u × v = (-6 + 2)i - (-9 + 0)j + (6 - 0)k = -4i + 9j + 6k

    (b) (u × v) × w = (63 - 36)i - (-28 - 12)j + (-24 - 18)k = 27i + 40j - 42k

    Note that u × (v × w) ≠ (u × v) × w.

37. (a) (0, 171, -204)   (b) (-44, 55, -22)   (c) (-8, -3, -8)

38. (a) u × v = 18i + 36j - 18k = (18, 36, -18) is orthogonal to both u and v.
    (b) u × v = -3i + 9j - 3k = (-3, 9, -3) is orthogonal to both u and v.

39. (a) (0, -6, -3)   (b) (2, -6, 12)

The remaining identities — u × (v + w) = (u × v) + (u × w), (ku) × v = k(u × v), and u × v = -(v × u) — are verified by writing out both sides in components.
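The computations in Exercises 35-36, including the failure of associativity, can be checked with a short routine. A minimal sketch, assuming the vectors u = (3, 2, -1), v = (0, 2, -3), w = (2, 6, 7) inferred for those exercises:

```python
def cross(u, v):
    # Cross product of two 3-vectors.
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

u, v, w = (3, 2, -1), (0, 2, -3), (2, 6, 7)
assert cross(v, w) == (32, -6, -4)
assert cross(u, cross(v, w)) == (-14, -20, -82)
# The cross product is not associative:
assert cross(cross(u, v), w) == (27, 40, -42)
# u x v is orthogonal to both of its factors:
uv = cross(u, v)
assert dot(uv, u) == 0 and dot(uv, v) == 0
```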
43. (a) u × v = | i  j  k; 1 -1  2; 0  3  1 | = -7i - j + 3k, so A = ||u × v|| = √(49 + 1 + 9) = √59
    (b) u × v = -6i + 4j + 7k, so A = ||u × v|| = √(36 + 16 + 49) = √101

44. A = ||u × v|| = √114

45. P1P2 × P1P3 = -15i + 7j + 10k, so A = (1/2)||P1P2 × P1P3|| = (1/2)√(225 + 49 + 100) = (1/2)√374

46. A = (1/2)||PQ × PR|| = (1/2)√1140 = √285
47. Recall that the dot product distributes across addition, i.e. (a + d)·u = a·u + d·u. Thus, with u = b × c, it follows that (a + d)·(b × c) = a·(b × c) + d·(b × c).

48. Using properties of cross products stated in Theorem 4.3.8, we have

    (u + v) × (u - v) = u × (u - v) + v × (u - v) = (u × u) + (u × (-v)) + (v × u) + (v × (-v))
                      = 0 - (u × v) + (v × u) - 0 = -(u × v) - (u × v) = -2(u × v)

49. The vector AB × AC = 8i + 4j + 4k is perpendicular to AB and AC, and thus is perpendicular to the plane determined by the points A, B, and C.
50.
(I
1131, - 1111 t1311. 111
vzl) ; thus
(a)
We h uve v x w =
(b)
lu · (v x w )l is equal to the volume of tlte parallelpiped having the vectors u , v , w as adjacent
edges. A proof can be found in any standard calculus text.
51. (a) Computing adj(A) gives A⁻¹ = (1/det(A)) adj(A).
    (b) The reduced row echelon form of [A | I] is [I | A⁻¹], which yields the same matrix A⁻¹.
    (c) The method used in (b) requires much less computation.
52. We have det(Aᵏ) = (det(A))ᵏ. Thus if Aᵏ = 0 for some k, then det(A) = 0 and so A is not invertible.

53. From Theorem 4.3.9, we know that v × w is orthogonal to the plane determined by v and w. Thus a vector lies in the plane determined by v and w if and only if it is orthogonal to v × w. Therefore, since u × (v × w) is orthogonal to v × w, it follows that u × (v × w) lies in the plane determined by v and w.

54. Since (u × v) × w = -w × (u × v), it follows from the previous exercise that (u × v) × w lies in the plane determined by u and v.

55. If A is upper triangular, and if j > i, then the submatrix that remains when the ith row and jth column of A are deleted is upper triangular and has a zero on its main diagonal; thus Cij (the ijth cofactor of A) must be zero if j > i. It follows that the cofactor matrix C is lower triangular, and so adj(A) = Cᵀ is upper triangular. Thus, if A is invertible and upper triangular, then A⁻¹ = (1/det(A)) adj(A) is also upper triangular.

56. If A is lower triangular and invertible, then Aᵀ is upper triangular and so (A⁻¹)ᵀ = (Aᵀ)⁻¹ is upper triangular; thus A⁻¹ is lower triangular.
57. The polynomial p(x) = ax³ + bx² + cx + d passes through the points (0, 1), (1, -1), (2, -1), and (3, 7) if and only if

    d = 1
    a + b + c + d = -1
    8a + 4b + 2c + d = -1
    27a + 9b + 3c + d = 7

Using Cramer's rule, the solution of this system is given by

    a = 12/12 = 1,   b = -24/12 = -2,   c = -12/12 = -1,   d = 12/12 = 1

Thus the interpolating polynomial is p(x) = x³ - 2x² - x + 1.
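The interpolation system in Exercise 57 can be solved and verified with a short exact-arithmetic routine; a minimal sketch (not from the text):

```python
from fractions import Fraction

def solve(A, b):
    # Gauss-Jordan solve of Ax = b over the rationals.
    n = len(A)
    M = [[Fraction(x) for x in row] + [Fraction(b[i])] for i, row in enumerate(A)]
    for c in range(n):
        p = next(r for r in range(c, n) if M[r][c] != 0)
        M[c], M[p] = M[p], M[c]
        M[c] = [x / M[c][c] for x in M[c]]
        for r in range(n):
            if r != c and M[r][c] != 0:
                M[r] = [x - M[r][c] * y for x, y in zip(M[r], M[c])]
    return [M[i][n] for i in range(n)]

points = [(0, 1), (1, -1), (2, -1), (3, 7)]
A = [[x**3, x**2, x, 1] for x, _ in points]
b = [y for _, y in points]
a3, a2, a1, a0 = solve(A, b)
assert (a3, a2, a1, a0) == (1, -2, -1, 1)   # p(x) = x^3 - 2x^2 - x + 1
```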
DISCUSSION AND DISCOVERY
D1. (a) The vector w = v × (u × v) is orthogonal to both v and u × v; thus w is orthogonal to v and lies in the plane determined by u and v.
    (b) Since w is orthogonal to v, we have v·w = 0. On the other hand, u·w = ||u|| ||w|| cos θ = ||u|| ||w|| sin(π/2 - θ) where θ is the angle between u and w. It follows that |u·w| is equal to the area of the parallelogram having u and v as adjacent edges.

D2. No. For example, let u = (1, 0, 0), v = (0, 1, 0), and w = (1, 1, 0). Then u × v = u × w = (0, 0, 1), but v ≠ w.

D3. (u·v) × w does not make sense since the first factor is a scalar rather than a vector.

D4. If either u or v is the zero vector, then u × v = 0. If u and v are nonzero then, from Theorem 4.3.10, we have ||u × v|| = ||u|| ||v|| sin θ where θ is the angle between u and v. Thus if u × v = 0, with u and v not zero, then sin θ = 0 and so u and v are parallel.

D5. The associative law of multiplication is not valid for the cross product; that is, u × (v × w) is not in general the same as (u × v) × w.

D6. Let A be the coefficient matrix; then det(A) = c² + (1 - c)² = 2c² - 2c + 1 ≠ 0 for all values of c. Thus, for every c, the system has a unique solution; Cramer's rule gives, for example, x1 = (7c - 4)/(2c² - 2c + 1).

D7. (c) The solution by Gauss-Jordan elimination requires much less computation.

D8. (a) True. As was shown in the proof of Theorem 4.3.3, we have A adj(A) = det(A)I.
    (b) False. In addition, the determinant of the coefficient matrix must be nonzero.
    (c) True. In fact we have adj(A) = det(A)A⁻¹ and so (adj(A))⁻¹ = (1/det(A))A.
    (d) False.
    (e) True. Both sides are equal to det[u1 u2 u3; v1 v2 v3; w1 w2 w3].
WORKING WITH PROOFS
P1. We have u·v = ||u|| ||v|| cos θ and ||u × v|| = ||u|| ||v|| sin θ; thus tan θ = ||u × v|| / (u·v).

P2. The angle between the vectors is θ = α - β; thus u·v = ||u|| ||v|| cos(α - β), or cos(α - β) = u·v / (||u|| ||v||).

P3. (a) Using properties of cross products from Theorem 4.3.8, we have

    (u + kv) × v = (u × v) + (kv × v) = (u × v) + k(v × v) = (u × v) + k0 = u × v

    (b) Using part (a) of Exercise 50, we have

    u·(v × w) = det[u1 u2 u3; v1 v2 v3; w1 w2 w3] = -det[v1 v2 v3; u1 u2 u3; w1 w2 w3] = -v·(u × w) = -(u × w)·v

P4. If a, b, c, and d all lie in the same plane, then a × b and c × d are both perpendicular to that plane, and thus parallel to each other. It follows that (a × b) × (c × d) = 0.

P5. Let Q1 = (x1, y1, 1), Q2 = (x2, y2, 1), Q3 = (x3, y3, 1), and let T denote the tetrahedron in R³ having the vectors OQ1, OQ2, OQ3 as adjacent edges. The base of this tetrahedron lies in the plane z = 1 and is congruent to the triangle △P1P2P3; thus vol(T) = (1/3) area(△P1P2P3). On the other hand, vol(T) is equal to 1/6 times the volume of the parallelepiped having OQ1, OQ2, OQ3 as adjacent edges and, from part (b) of Exercise 50, the latter is equal to |OQ1 · (OQ2 × OQ3)|. Thus

    area(△P1P2P3) = 3 vol(T) = (1/2) |OQ1 · (OQ2 × OQ3)| = (1/2) |det[x1 y1 1; x2 y2 1; x3 y3 1]|
EXERCISE SET 4.4

1. (a) The matrix A = [1 0; 0 2] has nontrivial fixed points since det(I - A) = |0 0; 0 -1| = 0. The fixed points are the solutions of the system (I - A)x = 0, which can be expressed in vector form as x = t(1, 0) where -∞ < t < ∞.

   (b) The matrix B = [0 1; 1 0] has nontrivial fixed points since det(I - B) = |1 -1; -1 1| = 0. The fixed points are the solutions of the system (I - B)x = 0, which can be expressed in vector form as x = t(1, 1) where -∞ < t < ∞.

2. (a) No nontrivial fixed points.   (b) No nontrivial fixed points.

3. We have Ax = 5x; thus x is an eigenvector of A corresponding to the eigenvalue λ = 5.

4. We have Ax = 0 = 0x; thus x is an eigenvector of A corresponding to the eigenvalue λ = 0.
5. (a) The characteristic equation of A is det(λI - A) = (λ - 3)(λ + 1) = 0. Thus λ = 3 and λ = -1 are eigenvalues of A; each has algebraic multiplicity 1.

   (b) The characteristic equation is (λ - 10)(λ + 2) + 36 = (λ - 4)² = 0. Thus λ = 4 is the only eigenvalue; it has algebraic multiplicity 2.

   (c) The characteristic equation is (λ - 2)² = 0. Thus λ = 2 is the only eigenvalue; it has algebraic multiplicity 2.

6. (a) The characteristic equation is λ² - 16 = 0. Thus λ = ±4 are eigenvalues; each has algebraic multiplicity 1.

   (b) The characteristic equation is λ² = 0. Thus λ = 0 is the only eigenvalue; it has algebraic multiplicity 2.

   (c) The characteristic equation is (λ - 1)² = 0. Thus λ = 1 is the only eigenvalue; it has algebraic multiplicity 2.

7. (a) The characteristic equation is λ³ - 6λ² + 11λ - 6 = (λ - 1)(λ - 2)(λ - 3) = 0. Thus λ = 1, λ = 2, and λ = 3 are eigenvalues; each has algebraic multiplicity 1.

   (b) The characteristic equation is λ³ - 4λ² + 4λ = λ(λ - 2)² = 0. Thus λ = 0 and λ = 2 are eigenvalues; λ = 0 has algebraic multiplicity 1, and λ = 2 has multiplicity 2.

   (c) The characteristic equation is λ³ - λ² - 8λ + 12 = (λ + 3)(λ - 2)² = 0. Thus λ = -3 and λ = 2 are eigenvalues; λ = -3 has multiplicity 1, and λ = 2 has multiplicity 2.

8. (a) The characteristic equation is λ³ + 2λ² + λ = λ(λ + 1)² = 0. Thus λ = 0 is an eigenvalue of multiplicity 1, and λ = -1 is an eigenvalue of multiplicity 2.

   (b) The characteristic equation is λ³ - 6λ² + 12λ - 8 = (λ - 2)³ = 0; thus λ = 2 is an eigenvalue of multiplicity 3.

   (c) The characteristic equation is λ³ - 2λ² - 15λ + 36 = (λ + 4)(λ - 3)² = 0; thus λ = -4 is an eigenvalue of multiplicity 1, and λ = 3 is an eigenvalue of multiplicity 2.
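A factorization like the one in Exercise 7(c) can be sanity-checked by evaluating both sides at many points; a quick sketch (not from the text):

```python
def char_poly_value(lam):
    # The characteristic polynomial from Exercise 7(c): λ³ - λ² - 8λ + 12.
    return lam**3 - lam**2 - 8*lam + 12

# Verify the claimed factorization (λ + 3)(λ - 2)² pointwise.
for lam in range(-10, 11):
    assert char_poly_value(lam) == (lam + 3) * (lam - 2) ** 2

# The roots are the eigenvalues: λ = -3 and λ = 2 (multiplicity 2).
assert char_poly_value(-3) == 0 and char_poly_value(2) == 0
```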
9.
(a) The eigenspace corresponding to >.
yields the general solution x
= 3 is
= t, y = 2t;
found by solving the system [ -~
corrc~ponding
yields tlu~ ~eneral solution x
[~] = t [~] .
to A = - 1 is found by solving the system [
= 0, y = t;
yields the general solution x
=
t(~J.
= (~].
This
=:~) (:] = [~].
This
thus the eigenspace consists of all vectors of the form
Geometrica.lly, this is the line x = 0 (y-a.x.is).
(b) The eigenspace corresponding to .A
[:J
(;]
t.hus the eigenspace consists of all vectors of the form
[:J = t[~J - Geometrically, this is the line y = 2x in the xy-plane.
The eigenspace
~]
=4
=:!] [:] = [~] .
is found by solving the system [
= 3t, y = 2t; thus the eigenspace
Geometrically, this is the line y
= 3x-
This
consist of a ll vectors of the form
E.X ERCISE SET 4.4
141
(x]
0
(c) The eigenspace corresponding to >. = 2 is found by solving the system [ 0 0)
= ( ]. This
-1 0 y
0 .
yields the general solution x = 0, y = t; thus the eigenspace consists of all vectors of the form
[:] = t[~].
11).
Geometrically, t his is the line x
= 0.
(a) The eigenspace corresponding to>. = 4 consists of a ll vectors of the for m (:)
the line y
= ~x.
[:] = t [_~];
= t (~J;
this is
T he eigenspace corresponding to >. = - 4 consists of all vectors of the fo rrri
this is t he line y = -
~x.
(b) The eigenspace corresponding to>.= 0 consists of all vectors of the form
(~)
s [~] + t[~) ; this
=
is the entire xy-plane.
(c)
The eigenspace corresponding to A.
line x
11.
(a)
= 1 consists of all vectors of the form
(:J=
[~]; this is the
t
= 0.
[-3~ o~ -~1] [~z]" - = [ool '.r~~·s
The eigenspace corresponding to >.
= 1 is obta ined
yields the general solution x == 0, y
= t, z = 0; thus the eigenspace consists of all vectors of the
hy solving
f~•m [~] ~ t [:] ; thls conespond• to a Hne thmugh the o•igin (the y-axis) in R
the eigenspace
~ 3 consists of all vccto" of the fo<m [:] ~ t [- :]·
(b) The eigenspace corresponding to λ = 0 is found by solving the system Ax = 0. This yields the general solution x = 5t, y = t, z = 3t. Thus the eigenspace consists of all vectors of the form [x; y; z] = t[5; 1; 3]; this is the line through the origin and the point (5, 1, 3).
(c) The eigenspace corresponding to λ = 2 is found by solving the system (2I − A)x = 0. This yields a two-parameter general solution [x; y; z] = s·v1 + t·v2, which corresponds to a plane through the origin.

The eigenspace corresponding to λ = −3 is found by solving the system (−3I − A)x = 0. This yields a one-parameter general solution [x; y; z] = t·v, which corresponds to a line through the origin. The remaining eigenspace is found in the same way, and it also corresponds to a line through the origin.
Chapter 4
12. (a) The eigenspace corresponding to λ = 0 consists of the scalar multiples of a single eigenvector, as does the eigenspace corresponding to λ = −1; each is a line through the origin.
(b) The eigenspace corresponding to λ = 2 likewise consists of the scalar multiples of a single eigenvector.
(c) The eigenspace corresponding to λ = −4 consists of the scalar multiples of a single eigenvector, as does the eigenspace corresponding to λ = 3.
13. (a) The characteristic polynomial is p(λ) = (λ + 1)(λ − 5). The eigenvalues are λ = −1 and λ = 5.
(b) The characteristic polynomial is p(λ) = (λ − 3)(λ − 7)(λ − 1). The eigenvalues are λ = 3, λ = 7, and λ = 1.
(c) The characteristic polynomial is p(λ) = (λ + 1/2)²(λ − 1)(λ − 1/2). The eigenvalues are λ = −1/2 (with multiplicity 2), λ = 1, and λ = 1/2.
14. Two examples are triangular matrices A and B having the required eigenvalues as their diagonal entries.
15. Using the block diagonal structure, the characteristic polynomial of the given matrix is

p(λ) = |λ − 2   −3  | · |λ + 2   −1  |
       |  1    λ − 6|   | −5    λ − 2|

     = [(λ − 2)(λ − 6) + 3][(λ + 2)(λ − 2) − 5] = (λ² − 8λ + 15)(λ² − 9) = (λ − 5)(λ − 3)²(λ + 3)

Thus the eigenvalues are λ = 5, λ = 3 (with multiplicity 2), and λ = −3.
16. Using the block triangular structure, the characteristic polynomial of the given matrix is p(λ) = λ²(λ + 2)(λ − 1). Thus the eigenvalues of B are λ = 0 (with multiplicity 2), λ = −2, and λ = 1.
17. The characteristic polynomial of A is

p(λ) = det(λI − A) = (λ + 1)(λ − 1)²

thus the eigenvalues are λ = −1 and λ = 1 (with multiplicity 2). The eigenspace corresponding to λ = −1 is obtained by solving the system (−I − A)x = 0, which yields a one-parameter family of solutions [x; y; z] = t·v. Similarly, the eigenspace corresponding to λ = 1 is obtained by solving (I − A)x = 0, which has the general solution x = t, y = −s − t, z = s, or (in vector form)

[x; y; z] = s[0; −1; 1] + t[1; −1; 0]

The eigenvalues of A²⁵ are λ = (−1)²⁵ = −1 and λ = (1)²⁵ = 1. Corresponding eigenvectors are the same as above.
18. The eigenvalues of A are λ = 1, λ = 1/2, λ = 0, and λ = 2. The eigenvalues of A⁹ are λ = (1)⁹ = 1, λ = (1/2)⁹ = 1/512, λ = (0)⁹ = 0, and λ = (2)⁹ = 512. Corresponding eigenvectors are the same as those of A.
19. The characteristic polynomial of A is p(λ) = λ³ − λ² − 5λ − 3 = (λ − 3)(λ + 1)²; thus the eigenvalues are λ1 = 3, λ2 = −1, λ3 = −1. We have det(A) = 3 and tr(A) = 1. Thus det(A) = 3 = (3)(−1)(−1) = λ1λ2λ3 and tr(A) = 1 = (3) + (−1) + (−1) = λ1 + λ2 + λ3.

20. The characteristic polynomial of A is p(λ) = λ³ − 6λ² + 12λ − 8 = (λ − 2)³; thus the eigenvalues are λ1 = 2, λ2 = 2, λ3 = 2. We have det(A) = 8 and tr(A) = 6. Thus det(A) = 8 = (2)(2)(2) = λ1λ2λ3 and tr(A) = 6 = (2) + (2) + (2) = λ1 + λ2 + λ3.
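The determinant/trace identities checked in Exercises 19–20 are easy to illustrate numerically. A minimal Python sketch (the matrix is a hypothetical example, not one from the text; for a triangular matrix the eigenvalues are simply the diagonal entries):

```python
# Sketch: for a triangular matrix the eigenvalues are the diagonal entries,
# so det(A) should equal their product and tr(A) their sum.
from functools import reduce

A = [[3, 1, 4],
     [0, -1, 2],
     [0, 0, -1]]  # hypothetical upper triangular matrix

eigenvalues = [A[i][i] for i in range(3)]           # 3, -1, -1
det_A = reduce(lambda p, x: p * x, eigenvalues, 1)  # product of eigenvalues
tr_A = sum(eigenvalues)                             # sum of eigenvalues

print(det_A, tr_A)  # -> 3 1
```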
21. The eigenvalues are λ = 0 and λ = 5, with associated eigenvectors [−2; 1] and [1; 2] respectively. Thus the eigenspaces correspond to the perpendicular lines y = −(1/2)x and y = 2x.

22. The eigenvalues are λ = 2 and λ = −1, with associated eigenvectors [√2; 1] and [−1; √2] respectively. Thus the eigenspaces correspond to the perpendicular lines y = (1/√2)x and y = −√2 x.
23. The invariant lines, if any, correspond to eigenspaces of the matrix.
(a) The eigenvalues are λ = 2 and λ = 3, with associated eigenvectors [1; 2] and [1; 1] respectively. Thus the lines y = 2x and y = x are invariant under the given matrix.
(b) This matrix has no real eigenvalues, so there are no invariant lines.
(c) The only eigenvalue is λ = 2 (multiplicity 2), with associated eigenvector [1; 0]. Thus the line y = 0 is invariant under the given matrix.
24. The characteristic polynomial of A is p(λ) = λ² − (b + 1)λ + (b − 6a), so A has the stated eigenvalues if and only if p(4) = p(−3) = 0. This leads to the equations

6a − 4b = 12
6a + 3b = 12

from which we conclude that a = 2 and b = 0.

25. The characteristic polynomial of A is p(λ) = λ² − (b + 3)λ + (3b − 2a), so A has the stated eigenvalues if and only if p(2) = p(5) = 0. This leads to the equations

−2a + b = 2
a + b = 5

from which we conclude that a = 1 and b = 4.

26. The characteristic polynomial of A is p(λ) = (λ − 3)(λ² − 2λx + x² − 4). Note that the second factor in this polynomial cannot have a double root (for any value of x) since (−2x)² − 4(x² − 4) = 16 ≠ 0. Thus the only possible repeated eigenvalue of A is λ = 3, and this occurs if and only if λ = 3 is a root of the second factor of p(λ), i.e. if and only if 9 − 6x + x² − 4 = 0. The roots of this quadratic equation are x = 1 and x = 5. For these values of x, λ = 3 is an eigenvalue of multiplicity 2.

27. If A² = I, then A(x + Ax) = Ax + A²x = Ax + x = x + Ax; thus y = x + Ax is an eigenvector of A corresponding to λ = 1. Similarly, z = x − Ax is an eigenvector of A corresponding to λ = −1.
28. According to Theorem 4.4.8, the characteristic polynomial of A can be expressed as

p(λ) = (λ − λ1)^m1 (λ − λ2)^m2 ··· (λ − λk)^mk

where λ1, λ2, ..., λk are the distinct eigenvalues of A and m1 + m2 + ··· + mk = n. The constant term in this polynomial is p(0). On the other hand, p(0) = det(0I − A) = det(−A) = (−1)ⁿ det(A).
29. (a) Using Formula (22), the characteristic equation of A is λ² − (a + d)λ + (ad − bc) = 0. This is a quadratic equation with discriminant

(a + d)² − 4(ad − bc) = a² − 2ad + d² + 4bc = (a − d)² + 4bc

Thus the eigenvalues of A are given by λ = ½[(a + d) ± √((a − d)² + 4bc)].
(b) If (a − d)² + 4bc > 0 then, from (a), the characteristic equation has two distinct real roots.
(c) If (a − d)² + 4bc = 0 then, from (a), there is one real eigenvalue (of multiplicity 2).
(d) If (a − d)² + 4bc < 0 then, from (a), there are no real eigenvalues.
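The case analysis above translates directly into a small function. A minimal Python sketch (the helper name and test matrices are illustrative, not from the text):

```python
# Sketch of Exercise 29: eigenvalues of [[a, b], [c, d]] from the quadratic
# formula, with the discriminant (a - d)^2 + 4bc deciding the three cases.
import math

def real_eigenvalues(a, b, c, d):
    disc = (a - d) ** 2 + 4 * b * c
    if disc < 0:
        return []                      # no real eigenvalues
    root = math.sqrt(disc)
    return [((a + d) + root) / 2, ((a + d) - root) / 2]

print(real_eigenvalues(2, 0, -2, 3))   # two distinct real eigenvalues
print(real_eigenvalues(0, 1, -1, 0))   # rotation-like matrix: none
```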
30. If (a − d)² + 4bc > 0, we have two distinct real eigenvalues λ1 and λ2. The corresponding eigenvectors are obtained by solving the homogeneous system

(λi − a)x1 − b·x2 = 0
−c·x1 + (λi − d)x2 = 0

Since λi is an eigenvalue this system is redundant, and (using the first equation) a general solution is given by

x1 = t,  x2 = (λi − a)t/b

Finally, setting t = −b, we see that [−b; a − λi] is an eigenvector corresponding to λ = λi.
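The eigenvector formula derived above is easy to spot-check numerically; here is a minimal sketch using a hypothetical matrix with b ≠ 0:

```python
# Sketch: for A = [[a, b], [c, d]] with eigenvalue lam, the vector
# (-b, a - lam) from Exercise 30 should satisfy A v = lam v (when b != 0).
a, b, c, d = 1, 2, 3, 2          # hypothetical matrix with real eigenvalues
lam = ((a + d) + ((a - d) ** 2 + 4 * b * c) ** 0.5) / 2   # larger eigenvalue

v = (-b, a - lam)                 # eigenvector candidate
Av = (a * v[0] + b * v[1], c * v[0] + d * v[1])

print(Av, (lam * v[0], lam * v[1]))   # the two pairs agree
```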
31. If the characteristic polynomial of A is p(λ) = λ² + 3λ − 4 = (λ − 1)(λ + 4) then the eigenvalues of A are λ1 = 1 and λ2 = −4.
(a) From Exercise P3 below, A⁻¹ has eigenvalues λ1 = 1 and λ2 = −1/4.
(b) From (a), together with Theorem 4.4.6, it follows that A⁻³ has eigenvalues λ1 = (1)³ = 1 and λ2 = (−1/4)³ = −1/64.
(c) From P4 below, A − 4I has eigenvalues λ1 = 1 − 4 = −3 and λ2 = −4 − 4 = −8.
(d) From P5 below, 5A has eigenvalues λ1 = 5 and λ2 = −20.
(e) From P2(a) below, the eigenvalues of 4Aᵀ + 2I = (4A + 2I)ᵀ are the same as those of 4A + 2I; namely λ1 = 4(1) + 2 = 6 and λ2 = 4(−4) + 2 = −14.
32. If Ax = λx, where x ≠ 0, then (Ax)·x = (λx)·x = λ(x·x) = λ‖x‖², and so λ = (Ax·x)/‖x‖².
33. (a) The characteristic polynomial of the matrix C is p(λ) = det(λI − C), where λI − C has λ down the diagonal, −1 on the subdiagonal, and last column (c0, c1, ..., c_{n−2}, λ + c_{n−1}). Add λ times the second row to the first row, then expand by cofactors along the first column; the first row of the resulting determinant is (λ², 0, ..., 0, c0 + c1λ). Add λ² times the second row to the first row, then expand by cofactors along the first column; the first row becomes (λ³, 0, ..., 0, c0 + c1λ + c2λ²). Continuing in this fashion for n − 2 steps, we obtain

p(λ) = | λ^(n−1)   c0 + c1λ + ··· + c_{n−2}λ^(n−2) |
       |  −1       λ + c_{n−1}                      |

     = c0 + c1λ + c2λ² + ··· + c_{n−1}λ^(n−1) + λⁿ

(b) The matrix

C = [0 0 0 −2]
    [1 0 0  3]
    [0 1 0 −1]
    [0 0 1  5]

has p(λ) = 2 − 3λ + λ² − 5λ³ + λ⁴ as its characteristic polynomial.
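The companion-matrix construction above can be checked numerically. The sketch below (pure Python, with a small Gaussian-elimination determinant; all helper names are illustrative) compares det(xI − C) against p(x) at a few sample points for the polynomial of part (b):

```python
# Sketch: companion matrix of p(x) = c0 + c1*x + ... + c_{n-1}*x^{n-1} + x^n,
# built as in Exercise 33, checked against p via det(xI - C).
def companion(coeffs):              # coeffs = [c0, ..., c_{n-1}]
    n = len(coeffs)
    C = [[0.0] * n for _ in range(n)]
    for i in range(1, n):
        C[i][i - 1] = 1.0           # subdiagonal of 1s
    for i in range(n):
        C[i][n - 1] = -coeffs[i]    # last column -c0, ..., -c_{n-1}
    return C

def det(M):                         # Gaussian elimination with partial pivoting
    M = [row[:] for row in M]
    n, d = len(M), 1.0
    for j in range(n):
        p = max(range(j, n), key=lambda r: abs(M[r][j]))
        if abs(M[p][j]) < 1e-12:
            return 0.0
        if p != j:
            M[j], M[p] = M[p], M[j]
            d = -d
        d *= M[j][j]
        for r in range(j + 1, n):
            f = M[r][j] / M[j][j]
            for k in range(j, n):
                M[r][k] -= f * M[j][k]
    return d

c = [2.0, -3.0, 1.0, -5.0]          # p(x) = 2 - 3x + x^2 - 5x^3 + x^4
C = companion(c)
for x in (0.0, 1.0, 2.0):
    xI_minus_C = [[(x if i == j else 0.0) - C[i][j] for j in range(4)] for i in range(4)]
    p = sum(ci * x ** i for i, ci in enumerate(c)) + x ** 4
    print(round(det(xI_minus_C), 6), round(p, 6))
```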
DISCUSSION AND DISCOVERY
D1. (a) The characteristic polynomial p(λ) has degree 6; thus A is a 6×6 matrix.
(b) Yes. From Theorem 4.4.12, we have det(A) = (1)(3)²(4)³ = 576 ≠ 0; thus A is invertible.

D2. If A is a square matrix all of whose entries are the same, then det(A) = 0; thus Ax = 0 has nontrivial solutions and λ = 0 is an eigenvalue of A.

D3. Using Formula (22), the characteristic polynomial of A is p(λ) = λ² − 4λ + 4 = (λ − 2)². Thus λ = 2 is the only eigenvalue of A (it has multiplicity 2).

D4. The eigenvalues of A (with multiplicity) are 3, 3, and −2, −2, −2. Thus, from Theorem 4.4.12, we have det(A) = (3)(3)(−2)(−2)(−2) = −72 and tr(A) = 3 + 3 − 2 − 2 − 2 = 0.
D5. The matrix A = [a b; c d] satisfies the condition tr(A) = det(A) if and only if a + d = ad − bc. If d = 1 then this equation is satisfied if and only if bc = −1, e.g., A = [1 1; −1 1]. If d ≠ 1, then the equation is satisfied if and only if a = (d + bc)/(d − 1), e.g., A = [1 1; −1 2].
D6. The characteristic polynomial of A factors as p(λ) = (λ − 1)(λ + 2)³; thus the eigenvalues of A are λ = 1 and λ = −2. It follows from Theorem 4.4.6 that the eigenvalues of A² are λ = (1)² = 1 and λ = (−2)² = 4.
D7. (a) False. For example, x = 0 satisfies this condition for any A and λ. The correct statement is that if Ax = λx for some nonzero vector x, then x is an eigenvector of A.
(b) True. If λ is an eigenvalue of A, then λ² is an eigenvalue of A²; thus (λ²I − A²)x = 0 has nontrivial solutions.
(c) False. If λ = 0 is an eigenvalue of A, then the system Ax = 0 has nontrivial solutions; thus A is not invertible and so the row vectors and column vectors of A are linearly dependent. [The statement becomes true if "independent" is replaced by "dependent".]
(d) False. For example, A = [1 1; 0 2] has eigenvalues λ = 1 and λ = 2. [But it is true that a symmetric matrix has real eigenvalues.]
D8. (a) False. For example, the reduced row echelon form of A = [1 0; 0 2] is I = [1 0; 0 1].
(b) True. We have A(x1 + x2) = λ1x1 + λ2x2 and, if λ1 ≠ λ2, it can be shown (since x1 and x2 must be linearly independent) that λ1x1 + λ2x2 ≠ α(x1 + x2) for any value of α.
(c) True. The characteristic polynomial of A is a cubic polynomial, and every cubic polynomial has at least one real root.
(d) True. If p(λ) = λⁿ + 1, then det(A) = (−1)ⁿp(0) = ±1 ≠ 0; thus A is invertible.
WORKING WITH PROOFS
P1. If A = [a b; c d], then

A² = [a² + bc   ab + bd]   and   tr(A)A = (a + d)[a b; c d] = [a² + ad   ab + bd]
     [ca + dc   cb + d² ]                                     [ac + dc   ad + d² ]

thus

A² − tr(A)A = [bc − ad      0   ] = −det(A)I
              [   0      cb − ad]

and so p(A) = A² − tr(A)A + det(A)I = −det(A)I + det(A)I = 0.
P2. (a) Using previously established properties, we have

det(λI − Aᵀ) = det(λIᵀ − Aᵀ) = det((λI − A)ᵀ) = det(λI − A)

Thus A and Aᵀ have the same characteristic polynomial.
(b) The eigenvalues are 2 and 3 in each case. The eigenspace of A corresponding to λ = 2 is obtained by solving the system (2I − A)x = 0, whereas the eigenspace of Aᵀ corresponding to λ = 2 is obtained by solving (2I − Aᵀ)x = 0. Thus the eigenspace of A corresponds to the line y = 2x, whereas the eigenspace of Aᵀ corresponds to y = 0. Similarly, for λ = 3, the eigenspace of A corresponds to x = 0, whereas the eigenspace of Aᵀ corresponds to y = −(1/2)x.
P3. Suppose that Ax = λx where x ≠ 0 and A is invertible. Then x = A⁻¹Ax = A⁻¹λx = λA⁻¹x and, since λ ≠ 0 (because A is invertible), it follows that A⁻¹x = (1/λ)x. Thus 1/λ is an eigenvalue of A⁻¹ and x is a corresponding eigenvector.

P4. Suppose that Ax = λx where x ≠ 0. Then (A − sI)x = Ax − sIx = λx − sx = (λ − s)x. Thus λ − s is an eigenvalue of A − sI and x is a corresponding eigenvector.

P5. Suppose that Ax = λx where x ≠ 0. Then (sA)x = s(Ax) = s(λx) = (sλ)x. Thus sλ is an eigenvalue of sA and x is a corresponding eigenvector.

P6. If the matrix A = [a b; c d] is symmetric, then c = b and so (a − d)² + 4bc = (a − d)² + 4b².

In the case that A has a repeated eigenvalue, we must have (a − d)² + 4b² = 0 and so a = d and b = 0. Thus the only symmetric 2×2 matrices with repeated eigenvalues are those of the form A = aI. Such a matrix has λ = a as its only eigenvalue, and the corresponding eigenspace is R². This proves part (a) of Theorem 4.4.11.

If (a − d)² + 4b² > 0, then A has two distinct real eigenvalues λ1 and λ2, with corresponding eigenvectors x1 and x2, given by:

λ1 = ½[(a + d) + √((a − d)² + 4b²)],   λ2 = ½[(a + d) − √((a − d)² + 4b²)]

The eigenspaces correspond to the lines y = m1x and y = m2x where mi = (λi − a)/b, i = 1, 2. Since

(a − λ1)(a − λ2) = (½[(a − d) + √((a − d)² + 4b²)])(½[(a − d) − √((a − d)² + 4b²)]) = ¼[(a − d)² − (a − d)² − 4b²] = −b²

we have m1m2 = −1; thus the eigenspaces correspond to perpendicular lines. This proves part (b) of Theorem 4.4.11.

Note. It is not possible to have (a − d)² + 4b² < 0; thus the eigenvalues of a 2×2 symmetric matrix must necessarily be real.
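The perpendicularity argument in P6 can be illustrated numerically; a short sketch with a hypothetical symmetric matrix:

```python
# Sketch of P6: for a symmetric 2x2 matrix [[a, b], [b, d]] with b != 0,
# the slopes m_i = (lam_i - a)/b of the two eigenlines multiply to -1.
import math

a, b, d = 2.0, 1.0, -1.0            # hypothetical symmetric matrix
root = math.sqrt((a - d) ** 2 + 4 * b * b)
lam1 = ((a + d) + root) / 2
lam2 = ((a + d) - root) / 2
m1, m2 = (lam1 - a) / b, (lam2 - a) / b

print(m1 * m2)   # approximately -1.0: perpendicular eigenlines
```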
P7. Suppose that Ax = λx and Bx = x. Then we have ABx = A(Bx) = A(x) = λx and BAx = B(Ax) = B(λx) = λBx = λx. Thus λ is an eigenvalue of both AB and BA, and x is a corresponding eigenvector.
CHAPTER 6
Linear Transformations
EXERCISE SET 6.1
1. (a) T_A: R² → R³; domain = R², codomain = R³
(b) T_A: R³ → R²; domain = R³, codomain = R²
(c) T_A: R³ → R³; domain = R³, codomain = R³
3. The domain of T is R², the codomain of T is R³, and T(1, −2) = (−1, 2, 3).

4. The domain of T is R³, the codomain of T is R², and T(0, −1, 4) = (−2, 2).
6. (a) T(x) = [−2x1 + x2 + 4x3]
              [ 3x1 + 5x2 + 7x3]
              [ 6x1 − x3       ]
(b) T(x) is computed in the same way from the matrix in part (b).
7. (a) We have T_A(x) = b if and only if x is a solution of the linear system

[1  2  0] [x1]   [−1]
[2  5 −3] [x2] = [ 1]
[0 −1  3] [x3]   [ 3]

The reduced row echelon form of the augmented matrix of the above system is

[1 0  6 | −3]
[0 1 −3 |  1]
[0 0  0 |  0]

and it follows that the system has the general solution x1 = −3 − 6t, x2 = 1 + 3t, x3 = t. Thus any vector of the form

x = [−3; 1; 0] + t[−6; 3; 1]

will have the property that T_A(x) = b.

(b) We have T_A(x) = b if and only if x is a solution of the linear system Ax = b. The reduced row echelon form of the augmented matrix of this system has bottom row [0 0 0 | 1], and from this row we see that the system is inconsistent. Thus there is no vector x in R³ for which T_A(x) = b.
8. (a) We have T_A(x) = b if and only if x is a solution of the linear system Ax = b. The reduced row echelon form of the augmented matrix of the system shows that the system has the general solution x1 = 2 − 2s + t, x2 = 3 − s − 2t, x3 = s, x4 = t. Thus any vector of the form

x = [2; 3; 0; 0] + s[−2; −1; 1; 0] + t[1; −2; 0; 1]

will have the property that T_A(x) = b.

(b) We have T_A(x) = b if and only if x is a solution of the linear system Ax = b. The reduced row echelon form of the augmented matrix of the system has bottom row [0 0 0 0 | 1]; thus the system is inconsistent, and there is no vector x in R⁴ for which T_A(x) = b.
9. (a), (c), and (d) are linear transformations. (b) is not linear; it is neither homogeneous nor additive.

10. (a) and (c) are linear transformations. (b) and (d) are not linear; neither homogeneous nor additive.

11. (a) and (c) are linear transformations. (b) is not linear; neither homogeneous nor additive.

12. (a) and (c) are linear transformations. (b) is not linear; neither homogeneous nor additive.
13. This transformation can be written in matrix form as w = Ax; thus it is a linear transformation.

14. Not linear. The domain is R² and the codomain is R³.

15. This transformation can be written in matrix form as w = Ax; thus it is a linear transformation.

16. Not linear. The domain is R⁴ and the codomain is R².
17. [T] = [T(e1) T(e2)], the matrix whose columns are the images of the standard unit vectors.

18. [T] = [T(e1) T(e2) T(e3)], formed in the same way.

19. (a) We have T(1, 0) = (−1, 0) and T(0, 1) = (1, 1); thus the standard matrix is [T] = [−1 1; 0 1]. Using the matrix, we have T([−1; 4]) = [−1 1; 0 1][−1; 4] = [5; 4]. This agrees with direct calculation using the given formula: T(−1, 4) = (−(−1) + 4, 4) = (5, 4).

(b) We have T(1, 0, 0) = (2, 0, 0), T(0, 1, 0) = (−1, 1, 0), and T(0, 0, 1) = (1, 1, 0); thus the standard matrix is [T] = [2 −1 1; 0 1 1; 0 0 0]. Using the matrix we have T([2; 1; −3]) = [0; −2; 0]. This agrees with direct calculation using the given formula: T(2, 1, −3) = (2(2) − (1) + (−3), 1 + (−3), 0) = (0, −2, 0).

20. (a), (b) In each case [T] is the matrix whose entries are the coefficients of the variables in the given formulas.

21. (a) The standard matrix of the transformation is [T] = [3 5 −1; 4 −1 1; 3 2 −1].
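The column-by-column recipe used in Exercises 17–21 is easy to code; a minimal sketch using the transformation T(x, y) = (−x + y, y) of Exercise 19(a):

```python
# Sketch: build the standard matrix of T(x, y) = (-x + y, y) from the images
# of the standard unit vectors, then apply it to (-1, 4).
def T(x, y):
    return (-x + y, y)

cols = [T(1, 0), T(0, 1)]                     # images of e1, e2
M = [[cols[0][0], cols[1][0]],                # image vectors become columns
     [cols[0][1], cols[1][1]]]

v = (-1, 4)
Mv = (M[0][0] * v[0] + M[0][1] * v[1],
      M[1][0] * v[0] + M[1][1] * v[1])

print(M, Mv)   # Mv agrees with T(-1, 4)
```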
(b) If x = (−1, 2, 4) then, using the equations, we have

T(x) = (3(−1) + 5(2) − (4), 4(−1) − (2) + (4), 3(−1) + 2(2) − (4)) = (3, −2, −3)

On the other hand, using the matrix, we have [T]x = [3; −2; −3], in agreement.

22. (a) [T] = [2 −3 1; 3 5 −1]
(b) T(x) is obtained by multiplying [T] by the given column vector x.
23.–26. In each case the image of the given vector is obtained by multiplying the indicated 2×2 standard matrix by that vector.

27., 28. In each case the image is obtained by multiplying the appropriate rotation matrix by the given vector, with the entries evaluated in decimal form.
29. The matrix A corresponds to R_θ = [cos θ  −sin θ; sin θ  cos θ] where θ = 3π/4 (135°).

30. The matrix A corresponds to H_θ = [cos 2θ  sin 2θ; sin 2θ  −cos 2θ] where θ = π/8 (22.5°).

31. We have

H_L = [cos 2θ   sin 2θ]   [cos²θ − sin²θ    2 sin θ cos θ ]
      [sin 2θ  −cos 2θ] = [2 sin θ cos θ    sin²θ − cos²θ]

and, since cos θ = 1/√(1 + m²) and sin θ = m/√(1 + m²), it follows that

H_L = 1/(1 + m²) [1 − m²     2m   ]
                 [  2m     m² − 1 ]
32. (a) We have m = 2; thus H = H_L = (1/5)[−3 4; 4 3], and H([x; y]) = (1/5)[−3x + 4y; 4x + 3y].
(b) We have m = 2; thus P = P_L = (1/5)[1 2; 2 4], and P([x; y]) = (1/5)[x + 2y; 2x + 4y].

33. (a) We have m = 3; thus H = H_L = (1/10)[−8 6; 6 8] = (1/5)[−4 3; 3 4], and the reflection of x = [x; y] about the line y = 3x is given by H([x; y]) = (1/5)[−4x + 3y; 3x + 4y].
(b) We have m = 3; thus P = P_L = (1/10)[1 3; 3 9], and P([x; y]) = (1/10)[x + 3y; 3x + 9y].
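The reflection and projection formulas above translate directly into code; a short sketch (the function names are illustrative):

```python
# Sketch: reflection H and projection P matrices for the line y = m*x,
# as derived in Exercises 31-33.
def reflection(m):
    k = 1.0 / (1.0 + m * m)
    return [[k * (1 - m * m), k * 2 * m],
            [k * 2 * m, k * (m * m - 1)]]

def projection(m):
    k = 1.0 / (1.0 + m * m)
    return [[k * 1, k * m],
            [k * m, k * m * m]]

H = reflection(3)     # (1/10) * [[-8, 6], [6, 8]]
P = projection(3)     # (1/10) * [[1, 3], [3, 9]]

# A point already on the line y = 3x should be fixed by both H and P:
x, y = 2.0, 6.0
hx, hy = H[0][0] * x + H[0][1] * y, H[1][0] * x + H[1][1] * y
print(hx, hy)
```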
34. If T is defined by the formula T(x, y) = (0, 0), then T(cx, cy) = (0, 0) = c(0, 0) = cT(x, y) and T(x1 + x2, y1 + y2) = (0, 0) = (0, 0) + (0, 0) = T(x1, y1) + T(x2, y2); thus T is linear.
If T is defined by T(x, y) = (1, 1), then T(2x, 2y) = (1, 1) ≠ 2(1, 1) = 2T(x, y) and T(x1 + x2, y1 + y2) = (1, 1) ≠ (1, 1) + (1, 1) = T(x1, y1) + T(x2, y2); thus T is neither homogeneous nor additive.
36. The given equations can be written in matrix form and solved for x and y; this yields x = −s + 4t, y = 3s − t. Thus the image of the line x + y = 1 corresponds to (−s + 4t) + (3s − t) = 1, i.e. to the line 2s + 3t = 1.

37., 38. In each case the standard matrix [T] is obtained by using the images of the standard unit vectors as its columns.
39. T(x, y) = (−x, 0); thus [T] = [−1 0; 0 0].

40. T(x, y) = (y, −x); thus [T] = [0 1; −1 0].
DISCUSSION AND DISCOVERY
D1. (a) False. For example, T(x, y) = (x², y²) satisfies T(0) = 0 but is not linear.
(b) True. Such a transformation is both homogeneous and additive.
(c) True. This is the transformation with standard matrix [T] = −I.
(d) True. The zero transformation T(x, y) = (0, 0) is the only linear transformation with this property.
(e) False. Such a transformation cannot be linear since T(0) = v0 ≠ 0.

D2. The eigenvalues of A are λ = 1 and λ = −1 with corresponding eigenspaces given (respectively) by t[1; 1] and t[−1; 1], where −∞ < t < ∞.
D3. From familiar trigonometric identities, we have

A = [cos 2θ  −sin 2θ] = R_{2θ}
    [sin 2θ   cos 2θ]

Thus multiplication by A corresponds to rotation about the origin through the angle 2θ.

D4. If R_θ = [cos θ  −sin θ; sin θ  cos θ], then

R_θᵀ = [cos θ  sin θ; −sin θ  cos θ] = [cos(−θ)  −sin(−θ); sin(−θ)  cos(−θ)] = R_{−θ}

Thus multiplication by R_θᵀ corresponds to rotation through the angle −θ.

D5. Since T(0) = x0 ≠ 0, this transformation is not linear. Geometrically, it corresponds to a rotation followed by a translation.

D6. If b = 0, then f is both additive and homogeneous. If b ≠ 0, then f is neither additive nor homogeneous.

D7. Since T is linear, we have T(x0 + tv) = T(x0) + tT(v). Thus, if T(v) ≠ 0, the image of the line x = x0 + tv is the line y = y0 + tw, where y0 = T(x0) and w = T(v). If T(v) = 0, then the image of x = x0 + tv is the single point y0 = T(x0).
EXERCISE SET 6.2
1.–4. In each case a direct computation shows that AᵀA = I; thus A is orthogonal and A⁻¹ = Aᵀ.
5. (a) AᵀA = I; thus A is orthogonal. We have det(A) = 1, and A = R_θ where θ = 3π/4. Thus multiplication by A corresponds to counterclockwise rotation about the origin through the angle 3π/4.
(b) AᵀA = I; thus A is orthogonal. We have det(A) = −1, and A = H_θ where θ = π/8. Thus multiplication by A corresponds to reflection about the line through the origin making an angle of π/8 with the positive x-axis.

6. (a) AᵀA = I; thus A is orthogonal. We have det(A) = 1, and so A = R_θ is a rotation.
(b) AᵀA = I; thus A is orthogonal. We have det(A) = −1, and so A = H_θ is a reflection.
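The orthogonality test and the det(A) = ±1 classification used in Exercises 5–6 can be sketched as follows (the helper `classify` and its test matrices are illustrative):

```python
# Sketch: test whether a 2x2 matrix is orthogonal (A^T A = I) and, if so,
# classify it via det(A): +1 -> rotation, -1 -> reflection.
import math

def classify(A):
    (a, b), (c, d) = A
    ata = [[a * a + c * c, a * b + c * d],
           [a * b + c * d, b * b + d * d]]       # A^T A
    I = [[1.0, 0.0], [0.0, 1.0]]
    if any(abs(ata[i][j] - I[i][j]) > 1e-9 for i in range(2) for j in range(2)):
        return "not orthogonal"
    return "rotation" if a * d - b * c > 0 else "reflection"

s = 1 / math.sqrt(2)
print(classify([[-s, -s], [s, -s]]))   # R_theta with theta = 3*pi/4
print(classify([[s, s], [s, -s]]))     # H_theta with theta = pi/8
```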
7., 8. In each case A is the standard matrix of the indicated expansion, compression, dilation, or shear (a diagonal matrix for the first three; a unit triangular matrix for a shear).

9. (a) Expansion in the x-direction with factor 2.
(b) Contraction with factor 3/4.
(c) Shear in the x-direction with factor 4.
(d) Shear in the y-direction with factor −4.

10. (a) Compression in the y-direction with factor 1/2.
(b) Dilation with factor 8.
(c) Shear in the x-direction with factor −3.
(d) Shear in the y-direction with factor 3.
11.–14. In each case the action of T on the standard unit vectors is read from the figure, and [T] = [Te1 Te2 ···] is the matrix having those image vectors as its columns.

15.–17. (a), (b), (c) [Figures: images of the unit square under the indicated transformations.]
18.–24. In each case the answer is obtained either by multiplying the indicated standard matrix by the given vector, or by writing down the standard matrix [T] (or [R]) of the indicated rotation, reflection, or projection directly.
25. The matrix A is a rotation matrix since it is orthogonal and det(A) = 1. The axis of rotation is found by solving the system (I − A)x = 0; a general solution of this system is x = t[1; 0; 0], so the matrix corresponds to a rotation about the x-axis. Choosing the positive orientation and comparing with Table 6.2.6, we see that the angle of rotation is θ = π/2.

26. The matrix A is a rotation matrix since it is orthogonal and det(A) = 1. The axis of rotation is found by solving the system (I − A)x = 0; a general solution of this system is x = t[1; 1; 1], so the axis of rotation is the line passing through the origin and the point (1, 1, 1). The plane passing through the origin that is perpendicular to this line has equation x + y + z = 0, and w = (−1, 1, 0) is a vector in this plane. Writing w in column vector form, the rotation angle θ relative to the orientation u = w × Aw = (1, 1, 1) is determined by cos θ = (w · Aw)/‖w‖² = −1/2, and so θ = 2π/3 (120°).

27. We have tr(A) = 1, and so Formula (17) reduces to v = Ax + Aᵀx. Taking x = e1, this results in

v = Ax + Aᵀx = (A + Aᵀ)x = [2; 0; 0]

From this we conclude that the x-axis is the axis of rotation. Finally, using Formula (16), the rotation angle is determined by cos θ = (tr(A) − 1)/2 = 0, and so θ = π/2.

28. We have tr(A) = 0, and so Formula (17) reduces to v = Ax + Aᵀx + x. Taking x = e1, this results in

v = Ax + Aᵀx + x = (A + Aᵀ + I)x = [1; 1; 1]

Thus the axis of rotation is the line through the origin and the point (1, 1, 1). Using Formula (16), the rotation angle is determined by cos θ = (tr(A) − 1)/2 = −1/2; thus θ = 2π/3.
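Formula (16) above, cos θ = (tr(A) − 1)/2, can be verified on a simple test case; the rotation matrix below is a hypothetical example (rotation about the z-axis), not one from the exercises:

```python
# Sketch: for a 3x3 rotation matrix, tr(A) = 1 + 2*cos(theta), so the
# rotation angle can be recovered from the trace.
import math

theta = 2 * math.pi / 3
c, s = math.cos(theta), math.sin(theta)
A = [[c, -s, 0.0],
     [s, c, 0.0],
     [0.0, 0.0, 1.0]]          # rotation about the z-axis

tr = A[0][0] + A[1][1] + A[2][2]
recovered = math.acos((tr - 1) / 2)

print(recovered)               # recovers theta = 2*pi/3
```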
29. (a) We have T1([x; y; z]) = [x; 0; 0]; thus T1 is a linear operator and M1 = [1 0 0; 0 0 0; 0 0 0]. Similarly, M2 = [0 0 0; 0 1 0; 0 0 0] and M3 = [0 0 0; 0 0 0; 0 0 1].
(b) If x = (x, y, z), then T1x · (x − T1x) = (x, 0, 0) · (0, y, z) = 0; thus T1x and x − T1x are orthogonal for every vector x in R³. Similarly for T2 and T3.

30. (a) We have S(e1) = S(1, 0, 0) = (1, 0, 0), S(e2) = S(0, 1, 0) = (0, 1, 0), and S(e3) = S(0, 0, 1) = (k, k, 1); thus the standard matrix for S is

[S] = [1 0 k]
      [0 1 k]
      [0 0 1]

(b) The shear in the xz-direction with factor k is defined by S(x, y, z) = (x + ky, y, z + ky); its standard matrix is

[S] = [1 k 0]
      [0 1 0]
      [0 k 1]

Similarly, the shear in the yz-direction with factor k is defined by S(x, y, z) = (x, y + kx, z + kx); its standard matrix is

[S] = [1 0 0]
      [k 1 0]
      [k 0 1]

31. If u = (1, 0, 0) then, on substituting a = 1 and b = c = 0 into Formula (13), we have

R = [1     0        0   ]
    [0   cos θ   −sin θ ]
    [0   sin θ    cos θ ]

The other entries of Table 6.2.6 are obtained similarly.
DISCUSSION AND DISCOVERY
D1. (a) The unit square is mapped onto the segment 0 ≤ x ≤ 2 along the x-axis (y = 0).
(b) The unit square is mapped onto the segment 0 ≤ y ≤ 3 along the y-axis (x = 0).
(c) The unit square is mapped onto the rectangle 0 ≤ x ≤ 2, 0 ≤ y ≤ 3.
(d) The unit square is rotated about the origin through an angle of −π/6 (30 degrees clockwise).

D2. In order for the matrix to be orthogonal its columns must be orthonormal; this forces a = 0 and determines b and c up to a common sign change. These are the only possibilities.

D3. The two columns of this matrix are orthogonal for any values of a and b. Thus, for the matrix to be orthogonal, all that is required is that the column vectors be of length 1. Thus a and b must satisfy (a + b)² + (a − b)² = 1, or (equivalently) 2a² + 2b² = 1.

D4. If A is an orthogonal matrix and Ax = λx, then ‖x‖ = ‖Ax‖ = ‖λx‖ = |λ|‖x‖. Thus the eigenvalues of A (if any) must be of absolute value 1.

D5. (a) Vectors parallel to the line y = x will be eigenvectors corresponding to the eigenvalue λ = 1. Vectors perpendicular to y = x will be eigenvectors corresponding to the eigenvalue λ = −1.
(b) Every nonzero vector is an eigenvector corresponding to the single eigenvalue of the dilation or contraction.

D6. The shear in the x-direction with factor −2; thus T(x, y) = (x − 2y, y) and [T] = [1 −2; 0 1].

D7. From the polarization identity, we have x · y = ¼(‖x + y‖² − ‖x − y‖²) = ¼(16 − 4) = 3.

D8. If ‖x + y‖ = ‖x − y‖ then the parallelogram having x and y as adjacent edges has diagonals of equal length and must therefore be a rectangle.
EXERCISE SET 6.3
1. (a) ker(T) = {(0, y) | −∞ < y < ∞} (the y-axis), ran(T) = {(x, 0) | −∞ < x < ∞} (the x-axis). The transformation T is neither 1-1 nor onto.
(b) ker(T) = {(x, 0, 0) | −∞ < x < ∞} (the x-axis), ran(T) = {(0, y, z) | −∞ < y, z < ∞} (the yz-plane). The transformation T is neither 1-1 nor onto.
(c) ker(T) = {0}, ran(T) = R². The transformation T is both 1-1 and onto.
(d) ker(T) = {0}, ran(T) = R³. The transformation T is both 1-1 and onto.

2. (a) ker(T) = {(x, 0) | −∞ < x < ∞} (the x-axis), ran(T) = {(0, y) | −∞ < y < ∞} (the y-axis). The transformation T is neither 1-1 nor onto.
(b) ker(T) = {(0, y, 0) | −∞ < y < ∞} (the y-axis), ran(T) = {(x, 0, z) | −∞ < x, z < ∞} (the xz-plane). The transformation T is neither 1-1 nor onto.
(c) ker(T) = {0}, ran(T) = R². The transformation T is both 1-1 and onto.
(d) ker(T) = {0}, ran(T) = R³. The transformation T is both 1-1 and onto.
3.–5. In each case the kernel of the transformation is the solution set of the homogeneous system Ax = 0. Reducing the augmented matrix of the system shows that the solution set consists of all vectors of the form x = tv for a fixed vector v, where −∞ < t < ∞; thus the kernel is a line through the origin.

6. The kernel of the transformation T_A: R³ → R⁴ is the solution set of the linear system Ax = 0. The augmented matrix of this system can likewise be reduced by row operations, and the solution set consists of all vectors of the form x = tv, where −∞ < t < ∞.
7. The kernel of T is equal to the solution space of the corresponding homogeneous system. The augmented matrix of this system reduces to the identity matrix augmented by a zero column; thus the system has only the trivial solution, and so ker(T) = {0}.

8. The kernel of T is equal to the solution space of the corresponding homogeneous system. The augmented matrix of this system can be reduced to

[1 0 1 | 0]
[0 1 0 | 0]

thus ker(T) consists of all vectors of the form x = t[−1; 0; 1], where −∞ < t < ∞.
where · oo < t < oo.
9. (a) The vector b is in the column space of A if and only if the linear system Ax = b is consistent. Row reducing the augmented matrix of Ax = b produces a row of the form [0 0 0 | c] with c ≠ 0. From this we conclude that the system is inconsistent; thus b is not in the column space of A.
(b) The augmented matrix of the system Ax = b can be row reduced, and from the reduced row echelon form we conclude that the system is consistent, with a one-parameter family of solutions. Taking t = 0, the vector b can be expressed as a linear combination of the column vectors of A as

    b = (4/3)c1(A) + (1/3)c2(A) + 0c3(A)

10. (a) The vector b is in the column space of A if and only if the linear system Ax = b is consistent. Row reducing the augmented matrix of Ax = b produces a row of the form [0 0 0 0 | 1]. From this we conclude that the system is inconsistent; thus b is not in the column space of A.

(b) The augmented matrix of the system Ax = b is

    [3 -2 1  5 | 4]
    [1  4 5 -3 | 6]
    [0  1 1 -1 | 1]

and from its reduced row echelon form we conclude that the system is consistent, with a two-parameter family of solutions. Taking s = t = 0, the vector b can be expressed as a linear combination of the column vectors of A as b = 2c1(A) + 1c2(A) + 0c3(A) + 0c4(A).
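The column-space tests in Exercises 9 and 10 reduce to a single consistency check: b lies in col(A) exactly when rank(A) = rank([A | b]). A small sketch under that formulation; the `rank` and `in_column_space` helpers and the 3x2 matrix are my own illustration, not the exercise data:

```python
from fractions import Fraction

def rank(M):
    """Rank via Gaussian elimination over the rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(r + 1, rows):
            f = M[i][c] / M[r][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def in_column_space(A, b):
    """b is in col(A) exactly when appending b does not raise the rank."""
    aug = [row + [bi] for row, bi in zip(A, b)]
    return rank(aug) == rank(A)

A = [[1, 0], [0, 1], [1, 1]]          # hypothetical 3x2 matrix
print(in_column_space(A, [1, 2, 3]))  # (1,2,3) = 1*c1(A) + 2*c2(A), so True
print(in_column_space(A, [1, 2, 4]))  # inconsistent system, so False
```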
11. The vector w is in the range of the linear operator T if and only if the linear system

    2x - y     = 3
    x      + z = 3
        y  - z = 0

is consistent. The augmented matrix of this system can be reduced to

    [1 0 0 | 2]
    [0 1 0 | 1]
    [0 0 1 | 1]

Thus the system has a unique solution x = (2, 1, 1), and we have Tx = T(2, 1, 1) = (3, 3, 0) = w.

12. The vector w is in the range of the linear operator T if and only if the linear system

    x - y     =  1
    x + y + z =  2
    x    + 2z = -1

is consistent. The augmented matrix of this system can be reduced to

    [1 0 0 |  7/3]
    [0 1 0 |  4/3]
    [0 0 1 | -5/3]

Thus the system has a unique solution x = (7/3, 4/3, -5/3), and we have Tx = T(7/3, 4/3, -5/3) = (1, 2, -1) = w.

13. The operator can be written in the matrix form w = Ax, which gives the standard matrix A. Since det(A) = 17 ≠ 0, the operator is both 1-1 and onto.
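Exercises 11 and 12 solve small square systems. As a quick check on Exercise 11's numbers, the system can be solved by Cramer's rule and the image T(2, 1, 1) recomputed; the `det3` and `solve3` helpers here are my own sketch, not from the manual:

```python
from fractions import Fraction

def det3(m):
    """3x3 determinant by cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def solve3(A, b):
    """Cramer's rule for a 3x3 system with a nonzero determinant."""
    D = det3(A)
    xs = []
    for j in range(3):
        Aj = [row[:] for row in A]       # replace column j by b
        for i in range(3):
            Aj[i][j] = b[i]
        xs.append(Fraction(det3(Aj), D))
    return xs

# Exercise 11: T(x, y, z) = (2x - y, x + z, y - z), target w = (3, 3, 0)
A = [[2, -1, 0],
     [1,  0, 1],
     [0,  1, -1]]
x = solve3(A, [3, 3, 0])
print([int(v) for v in x])   # -> [2, 1, 1]
Tx = (2 * x[0] - x[1], x[0] + x[2], x[1] - x[2])
print(Tx == (3, 3, 0))       # -> True: w is in the range of T
```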
14. The operator can be written in the matrix form w = Ax, which gives the standard matrix A. Since det(A) = 0, the operator is neither 1-1 nor onto.
15. The operator can be written in the matrix form w = Ax, which gives the standard matrix A. Since det(A) = 0, the operator is neither 1-1 nor onto.
16. The operator can be written in the matrix form w = Ax, which gives the standard matrix A. Since det(A) = -1 ≠ 0, the operator is both 1-1 and onto.
17. The operator can be written as w = T_A x, where A is the given 2x2 matrix. Since det(A) = 0, T_A is not onto. The range of T_A consists of all vectors of the form w = tv, where -∞ < t < ∞; in particular, the stated vector w is not in the range of T_A.
18. The operator can be written as w = T_A x, where

    A = [1 -2 1]
        [5 -1 3]
        [4  1 2]

Since det(A) = 0, T_A is not onto. The range of T_A consists of all vectors w = (w1, w2, w3) for which the linear system Ax = w is consistent. The augmented matrix of this system can be row reduced as follows:

    [1 -2 1 | w1]     [1 -2  1 | w1       ]     [1 -2  1 | w1           ]
    [5 -1 3 | w2]  →  [0  9 -2 | w2 - 5w1 ]  →  [0  9 -2 | w2 - 5w1     ]
    [4  1 2 | w3]     [0  9 -2 | w3 - 4w1 ]     [0  0  0 | w1 - w2 + w3 ]

Thus the system is consistent (w is in the range of T_A) if and only if w1 - w2 + w3 = 0. In particular, the vector w = (1, 1, 1) is not in the range of T_A.
19. (a) The linear transformation T_A : R² → R³ is one-to-one if and only if the linear system Ax = 0 has only the trivial solution. Row reducing the augmented matrix of Ax = 0 to reduced row echelon form, we conclude that Ax = 0 has only the trivial solution, and so T_A is one-to-one.
(b) Row reducing the augmented matrix of Ax = 0, we find that Ax = 0 has a one-parameter family of solutions x = tv. In particular, the system Ax = 0 has nontrivial solutions, and so the transformation T_A : R³ → R² is not one-to-one.
20. (a) The range of the transformation T_A : R² → R³ consists of those vectors w in R³ for which the linear system Ax = w is consistent. The augmented matrix of Ax = w is

    [1 -1 | w1]
    [2  0 | w2]
    [3 -4 | w3]

and this matrix can be row reduced as follows:

    [1 -1 | w1]     [1 -1 | w1       ]     [1 -1 | w1             ]
    [2  0 | w2]  →  [0  2 | w2 - 2w1 ]  →  [0  2 | w2 - 2w1       ]
    [3 -4 | w3]     [0 -1 | w3 - 3w1 ]     [0  0 | 2w3 + w2 - 8w1 ]

From this we conclude that Ax = w is consistent if and only if -8w1 + w2 + 2w3 = 0; thus the transformation T_A is not onto.

(b) The range of the transformation T_A : R³ → R² consists of those vectors w in R² for which the linear system Ax = w is consistent. Row reducing the augmented matrix of Ax = w, we conclude that Ax = w is consistent for every vector w in R²; thus T_A is onto.
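Range-membership conditions like those in Exercises 18 and 20 can be spot-checked with the same rank test used for column spaces. Here is a sketch for the matrix of Exercise 18 and w = (1, 1, 1); the `rank` helper is my own Gaussian-elimination sketch:

```python
from fractions import Fraction

def rank(M):
    """Rank via Gaussian elimination over the rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, len(M)):
            f = M[i][c] / M[r][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

# Exercise 18: w is in ran(T_A) iff w1 - w2 + w3 = 0
A = [[1, -2, 1], [5, -1, 3], [4, 1, 2]]
w = [1, 1, 1]
aug = [row + [wi] for row, wi in zip(A, w)]
print(rank(A), rank(aug))        # ranks differ, so (1, 1, 1) is not in the range
print(w[0] - w[1] + w[2] == 0)   # the consistency condition fails as well
```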
21. (a) The augmented matrix of the system Ax = b can be row reduced, and it follows that Ax = b is consistent if and only if 6b1 + 3b2 - 4b3 = 0.

(b) The range of the transformation T_A consists of vectors b of the form

    b = s(-1/2, 1, 0) + t(2/3, 0, 1)

Note. This is just one possibility; it was obtained by solving for b1 in terms of b2 and b3 and then making b2 and b3 into parameters.

(c) The augmented matrix of the system Ax = 0 can be row reduced to

    [1 0 1  1 | 0]
    [0 1 1 -1 | 0]
    [0 0 0  0 | 0]

Thus the kernel of T_A (i.e. the solution space of Ax = 0) consists of all vectors of the form

    x = (-s - t, -s + t, s, t) = s(-1, -1, 1, 0) + t(-1, 1, 0, 1)
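The kernel basis found in Exercise 21(c) can be verified directly against the two nontrivial rows of the reduced system (a quick pure-Python check; the `dot` helper is mine):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# rows of the reduced system from 21(c): x1 + x3 + x4 = 0 and x2 + x3 - x4 = 0
rows = [(1, 0, 1, 1), (0, 1, 1, -1)]
basis = [(-1, -1, 1, 0), (-1, 1, 0, 1)]

for v in basis:
    print([dot(r, v) for r in rows])   # every entry is 0, so v lies in ker(T_A)
```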
DISCUSSION AND DISCOVERY
D1. (a) True. If T is one-to-one, then Tx = 0 if and only if x = 0; thus T(u - v) = 0 implies u - v = 0 and u = v.
(b) True. If T : Rⁿ → Rⁿ is onto, then (from Theorem 6.3.14) it is one-to-one, and so the argument given in part (a) applies.
(c) True. See Theorem 6.3.15.
(d) True. If T_A is not one-to-one, then the homogeneous linear system Ax = 0 has infinitely many nontrivial solutions.
(e) True. The standard matrix of a shear operator T is of the form A = [1 k; 0 1] or A = [1 0; k 1]. In either case we have det(A) = 1 ≠ 0, and so T = T_A is one-to-one.

D2. No. The transformation is not one-to-one, since T(v) = a × v = 0 for all vectors v that are parallel to a.
D3. The transformation T_A : Rⁿ → Rᵐ is onto.
D4. No (assuming v0 is not a scalar multiple of v). The line x = v0 + tv does not pass through the origin and thus is not a subspace of Rⁿ. It follows from Theorem 6.3.7 that this line cannot be equal to the range of a linear operator.
WORKING WITH PROOFS
P1. If Bx = 0, then (AB)x = A(Bx) = A0 = 0; thus x is in the nullspace of AB.
EXERCISE SET 6.4
1. [T_B ∘ T_A] = BA and [T_A ∘ T_B] = AB; each composition matrix is obtained by multiplying the two standard matrices in the indicated order.

2. [T_B ∘ T_A] = BA and [T_A ∘ T_B] = AB, computed as in Exercise 1.

3. (a) The standard matrices [T1] and [T2] are read off directly from the formulas for T1 and T2.
(b) [T2 ∘ T1] = [T2][T1] and [T1 ∘ T2] = [T1][T2].
(c) Composing directly, T2(T1(x1, x2)) = (3x1 + 3x2, 6x1 - 2x2) and T1(T2(x1, x2)) = (5x1 + 4x2, x1 - 4x2), in agreement with the matrices found in part (b).

4. (a) The standard matrices [T1] and [T2] are read off directly from the formulas for T1 and T2.
(b) [T2 ∘ T1] = [T2][T1] and [T1 ∘ T2] = [T1][T2].
(c) Composing directly, T1(T2(x1, x2, x3)) = (4x1 + 8x2, -2x1 - 4x2 - x3, -x1 - 2x2 + 3x3), in agreement with [T1 ∘ T2] = [T1][T2]; the formula for T2(T1(x1, x2, x3)) likewise agrees with [T2 ∘ T1].
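The pattern in Exercises 1-4, that composing transformations multiplies their standard matrices in right-to-left order, can be demonstrated with any pair of matrices. A sketch with hypothetical 2x2 matrices of my own choosing:

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def apply(A, x):
    """Apply the matrix A to the vector x."""
    return tuple(sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A)))

T1 = [[1, 2], [0, 1]]   # hypothetical standard matrices
T2 = [[3, 0], [1, -1]]
x = (5, 7)
# T2(T1(x)) equals multiplication by the product [T2][T1]
print(apply(T2, apply(T1, x)) == apply(matmul(T2, T1), x))   # -> True
```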
5. (a) The standard matrix for the rotation is A1 = [0 -1; 1 0] and the standard matrix for the reflection is A2 = [0 1; 1 0]. Thus the standard matrix for the rotation followed by the reflection is

    A2 A1 = [0 1; 1 0][0 -1; 1 0] = [1 0; 0 -1]

(b) The standard matrix for the projection followed by the contraction is likewise the product A2 A1 of the two standard matrices.

(c) The standard matrix for the reflection followed by the dilation is

    A2 A1 = [3 0; 0 3][1 0; 0 -1] = [3 0; 0 -3]

6. (a) The standard matrix for the composition is the product A3 A2 A1 of the three standard matrices, taken in right-to-left order.

(b) The standard matrix for the composition is computed in the same way.

(c) The composition corresponds to a counterclockwise rotation of 180°. The standard matrix is

    R_{π/3} R_{7π/12} R_{π/12} = R_{π/12 + 7π/12 + π/3} = R_π = [-1 0; 0 -1]
7. (a) The standard matrix for the reflection followed by the projection is the product A2 A1 of the two standard matrices.

(b) The standard matrix for the rotation followed by the dilation is likewise the product A2 A1.

(c) The standard matrix for the projection followed by the reflection is likewise the product A2 A1.
8. (a) The standard matrix for the reflection followed by the projection is the product A2 A1 of the two standard matrices.

(b) The standard matrix for the rotation followed by the contraction is likewise the product A2 A1.

(c) The standard matrix for the projection followed by the reflection is likewise the product A2 A1.

9. (a) The standard matrix for the composition is the product of the standard matrices of the individual operators, taken in right-to-left order.

(b) The standard matrix for the composition is computed in the same way.

10. (a) The standard matrix for the composition is the product of the standard matrices of the individual operators, taken in right-to-left order.

(b) The standard matrix for the composition is computed in the same way.
11. (a) We have A = [2 0; 0 3] = [1 0; 0 3][2 0; 0 1]; thus multiplication by A corresponds to expansion by a factor of 3 in the y-direction and expansion by a factor of 2 in the x-direction.

(b) We have A = [2 1; 1 0] = [1 2; 0 1][0 1; 1 0]; thus multiplication by A corresponds to reflection about the line y = x followed by a shear in the x-direction with factor 2.

Note. The factorization of A as a product of elementary matrices is not unique. This is just one possibility.

(c) We have A = [0 -2; 4 0] = [0 1; 1 0][4 0; 0 1][1 0; 0 2][1 0; 0 -1]; thus multiplication by A corresponds to reflection about the x-axis, followed by expansion in the y-direction by a factor of 2, expansion in the x-direction by a factor of 4, and reflection about the line y = x.

(d) We have A = [1 -3; 4 6] = [1 0; 4 1][1 0; 0 18][1 -3; 0 1]; thus multiplication by A corresponds to a shear in the x-direction with factor -3, followed by expansion in the y-direction with factor 18, and a shear in the y-direction with factor 4.
12. (a) We have A = E2 E1, a product of two diagonal elementary matrices; thus multiplication by A corresponds to a compression in the x-direction and a compression in the y-direction.

(b) We have A = [0 1; 1 2] = [0 1; 1 0][1 2; 0 1]; thus multiplication by A corresponds to a shear in the x-direction with factor 2, followed by reflection about the line y = x.

(c) We have A = [0 2; -5 0] = [1 0; 0 -1][1 0; 0 5][2 0; 0 1][0 1; 1 0]; thus multiplication by A corresponds to reflection about the line y = x, followed by expansion in the x-direction by a factor of 2, expansion in the y-direction by a factor of 5, and reflection about the x-axis.

(d) We have A = [1 4; 2 9] = [1 0; 2 1][1 4; 0 1]; thus multiplication by A corresponds to a shear in the x-direction with factor 4, followed by a shear in the y-direction with factor 2.
13. (a) Reflection of R² about the x-axis.
(b) Rotation of R² about the origin through an angle of -π/4.
(c) Contraction of R² by a factor of 1/2.
(d) Expansion of R² in the y-direction with factor 2.

14. (a) Reflection about the y-axis.
(b) Rotation about the origin through an angle of π/4.
(c) Dilation by a factor of 5.
(d) Compression in the x-direction with factor 1/2.
15. The standard matrix for the operator T is A = [1 2; 1 1]. Since A is invertible, T is one-to-one, and the standard matrix for T⁻¹ is A⁻¹ = [-1 2; 1 -1]; thus T⁻¹(w1, w2) = (-w1 + 2w2, w1 - w2).

16. The standard matrix for the operator T is invertible; thus T is one-to-one, and the formula for T⁻¹ is obtained by computing A⁻¹.

17. The standard matrix for the operator T is A = [0 -1; 1 0]. Since A is invertible, T is one-to-one, and the standard matrix for T⁻¹ is A⁻¹ = [0 1; -1 0]; thus T⁻¹(w1, w2) = (w2, -w1).

18. The standard matrix for the operator T is not invertible; thus T is not one-to-one.

19. The standard matrix for the operator T is invertible; thus T is one-to-one, and the formula for T⁻¹ is obtained by computing A⁻¹.

20. The standard matrix for the operator T is not invertible; thus T is not one-to-one.

21. The standard matrix for the operator T is invertible; thus T is one-to-one, and the formula for T⁻¹ is obtained by computing A⁻¹.

22. The standard matrix for the operator T is invertible; thus T is one-to-one, and the formula for T⁻¹ is obtained by computing A⁻¹.
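Taking the standard matrix in Exercise 15 to be A = [1 2; 1 1] (consistent with the inverse formula T⁻¹(w1, w2) = (-w1 + 2w2, w1 - w2) given there), the inverse can be checked with the 2x2 adjugate formula; `inv2` is my own helper:

```python
from fractions import Fraction

def inv2(A):
    """Inverse of a 2x2 matrix via the adjugate formula."""
    (a, b), (c, d) = A
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is not invertible; T is not one-to-one")
    return [[Fraction(d, det), Fraction(-b, det)],
            [Fraction(-c, det), Fraction(a, det)]]

A = [[1, 2], [1, 1]]   # standard matrix assumed for Exercise 15
# the result is [[-1, 2], [1, -1]], matching T^{-1}(w1, w2) = (-w1 + 2w2, w1 - w2)
print(inv2(A))
```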
23. (a) It is easy to see directly (from the geometric definitions) that T1 ∘ T2 = 0 = T2 ∘ T1. This also follows from the fact that [T1][T2] = 0 = [T2][T1].
(b) It is easy to see directly that the composition of T1 and T2 (in either order) corresponds to rotation about the origin through the angle θ1 + θ2; thus T1 ∘ T2 = T2 ∘ T1. This also follows from the computation carried out in Example 1.

(c) Computing the matrix products [T1 ∘ T2] = [T1][T2] and [T2 ∘ T1] = [T2][T1] for the two rotation matrices and comparing entries shows that the products are not equal; it follows that T1 ∘ T2 ≠ T2 ∘ T1.
24. (a) It is easy to see directly that T1 ∘ T2 = -I = T2 ∘ T1. This also follows from the fact that [T1][T2] = [T2][T1] = [-1 0; 0 -1].

(b) Computing the products [T1][T2] and [T2][T1] shows that [T1 ∘ T2] ≠ [T2 ∘ T1]; it follows that T1 ∘ T2 ≠ T2 ∘ T1.

(c) We have [T1] = kI; thus [T1][T2] = (kI)[T2] = k[T2] and [T2][T1] = [T2](kI) = k[T2]. It follows that T1 ∘ T2 = T2 ∘ T1 = kT2.
25. We have H_{π/3} = [-1/2 √3/2; √3/2 1/2] and H_{π/6} = [1/2 √3/2; √3/2 -1/2]. Thus the standard matrix for the composition is the product of these two reflection matrices.

26. We have H_{π/4} = [0 1; 1 0] and H_{π/8} = [1/√2 1/√2; 1/√2 -1/√2]. Thus the standard matrix for the composition is the product of these two reflection matrices.
27. The image of the unit square is the parallelogram having the vectors T(e1) and T(e2) as adjacent sides. The area of this parallelogram is |det(A)| = |2 + 1| = 3.

28. The image of the unit square is the parallelogram having the vectors T(e1) and T(e2) as adjacent sides (see figure). The area of this parallelogram is |det(A)| = |8 - 9| = 1.
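Exercises 27 and 28 both use the fact that T maps the unit square to a parallelogram of area |det(A)|. A one-line check on a hypothetical 2x2 matrix of my own choosing:

```python
def det2(A):
    (a, b), (c, d) = A
    return a * d - b * c

# hypothetical operator: T(e1) = (2, 1), T(e2) = (-1, 1) are the columns of A
A = [[2, -1], [1, 1]]
print(abs(det2(A)))   # -> 3, the area of the image of the unit square
```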
DISCUSSION AND DISCOVERY
D1. (a) True. If T1(x) = 0 with x ≠ 0, then T2(T1(x)) = T2(0) = 0; thus T2 ∘ T1 is not one-to-one.
(b) False. If T2(ran(T1)) = Rⁿ then T2 ∘ T1 is onto.
(c) False. If x is in Rⁿ, then T2(T1(x)) = 0 if and only if T1(x) belongs to the kernel of T2; thus if T1 is one-to-one and ran(T1) ∩ ker(T2) = {0}, then the transformation T2 ∘ T1 will be one-to-one.
(d) True. If ran(T2) ≠ Rᵏ then ran(T2 ∘ T1) ⊆ ran(T2) ≠ Rᵏ.
D2. We have

    R_β = [cos β  -sin β]     H_0 = [1  0]     R_β⁻¹ = R_{-β} = [ cos β  sin β]
          [sin β   cos β]           [0 -1]                      [-sin β  cos β]

Thus

    R_β H_0 R_β⁻¹ = [cos²β - sin²β   2 sin β cos β ] = [cos 2β   sin 2β] = H_β
                    [2 sin β cos β   sin²β - cos²β ]   [sin 2β  -cos 2β]

and so multiplication by the matrix R_β H_0 R_β⁻¹ corresponds to reflection about the line L.
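D2's identity R_β H_0 R_β⁻¹ = H_β can be verified numerically for any β (a floating-point sketch; the helpers are mine, and the tolerance absorbs rounding):

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def R(t):
    return [[math.cos(t), -math.sin(t)], [math.sin(t), math.cos(t)]]

beta = 0.7                       # any angle works
H0 = [[1, 0], [0, -1]]
lhs = matmul(R(beta), matmul(H0, R(-beta)))
H_beta = [[math.cos(2 * beta), math.sin(2 * beta)],
          [math.sin(2 * beta), -math.cos(2 * beta)]]
print(all(abs(lhs[i][j] - H_beta[i][j]) < 1e-12 for i in range(2) for j in range(2)))
```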
D3. From Example 2, we have H_{θ1} H_{θ2} = R_{2(θ1 - θ2)}. Since θ = 2(θ - θ/2), it follows that R_θ = H_θ H_{θ/2}. Thus every rotation can be expressed as a composition of two reflections.
CHAPTER 7
Dimension and Structure
EXERCISE SET 7.1
1.
(a)
(b)
(c)
2.
(a)
{b)
(c)
5. (a) Any one of the vectors (1, 2), (1/2, 1), (-1, -2) forms a basis for the line y = 2x.
(b) Any two of the vectors (1, -1, 0), (2, 0, -1), (0, 2, -1) form a basis for the plane x + y + 2z = 0.

6. (a) Any one of the vectors (1, 3), (2, 6), (-1, -3) forms a basis for the line x = t, y = 3t.
(b) Any two of the vectors (1, 1, 3), (1, -1, 2), (2, 0, 5) form a basis for the plane x = t1 + t2, y = t1 - t2, z = 3t1 + 2t2.
7. The augmented matrix of the system can be reduced to reduced row echelon form, and a general solution of the system is

    x = s(-1/4, -1/4, 1, 0) + t(0, -1, 0, 1)

where -∞ < s, t < ∞. The solution space is 2-dimensional with canonical basis {v1, v2}, where v1 = (-1/4, -1/4, 1, 0) and v2 = (0, -1, 0, 1).
8. The augmented matrix of the system can be reduced to

    [1 0 3 | 0]
    [0 1 2 | 0]
    [0 0 0 | 0]

Thus the general solution is x = (-3t, -2t, t) = t(-3, -2, 1), where -∞ < t < ∞. The solution space is 1-dimensional with canonical basis {(-3, -2, 1)}.
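The canonical basis vector found in Exercise 8 can be checked against the reduced equations x1 + 3x3 = 0 and x2 + 2x3 = 0 (the `dot` helper is mine):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

rows = [(1, 0, 3), (0, 1, 2)]   # nonzero rows of the reduced coefficient matrix
v = (-3, -2, 1)                 # canonical basis vector of the solution space
print([dot(r, v) for r in rows])   # -> [0, 0]
```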
9. The augmented matrix of the system can be reduced to reduced row echelon form, and the general solution is

    x = (-s - t, s, -t, 0, t) = s(-1, 1, 0, 0, 0) + t(-1, 0, -1, 0, 1)

where -∞ < s, t < ∞. The solution space is 2-dimensional with canonical basis {v1, v2}, where v1 = (-1, 1, 0, 0, 0) and v2 = (-1, 0, -1, 0, 1).
10. The augmented matrix of the system can be reduced to reduced row echelon form, and the general solution is

    x = (-3s + (1/3)t, s - t, (1/3)t, s, t) = s(-3, 1, 0, 1, 0) + t(1/3, -1, 1/3, 0, 1)

where -∞ < s, t < ∞. The solution space is 2-dimensional with canonical basis {v1, v2}, where v1 = (-3, 1, 0, 1, 0) and v2 = (1/3, -1, 1/3, 0, 1).
11. (a) The hyperplane (1, 2, -3)⊥ consists of all vectors x = (x, y, z) in R³ satisfying the equation x + 2y - 3z = 0. Using y = s and z = t as free variables, the solutions of this equation can be written in the form

    x = (-2s + 3t, s, t) = s(-2, 1, 0) + t(3, 0, 1)

where -∞ < s, t < ∞. Thus the vectors v1 = (-2, 1, 0) and v2 = (3, 0, 1) form a basis for the hyperplane.

(b) The hyperplane (2, -1, 4, 1)⊥ consists of all vectors x = (x1, x2, x3, x4) in R⁴ satisfying the equation 2x1 - x2 + 4x3 + x4 = 0. Using x1 = r, x3 = s, and x4 = t as free variables, the solutions of this equation can be written in the form

    x = (r, 2r + 4s + t, s, t) = r(1, 2, 0, 0) + s(0, 4, 1, 0) + t(0, 1, 0, 1)

where -∞ < r, s, t < ∞. Thus the vectors v1 = (1, 2, 0, 0), v2 = (0, 4, 1, 0), and v3 = (0, 1, 0, 1) form a basis for the hyperplane.
12. (a) The hyperplane (-2, 1, 4)⊥ consists of all vectors x = (x, y, z) in R³ satisfying the equation -2x + y + 4z = 0. Using x = s and z = t as free variables, the solutions of this equation can be written in the form

    x = (s, 2s - 4t, t) = s(1, 2, 0) + t(0, -4, 1)

where -∞ < s, t < ∞. Thus the vectors v1 = (1, 2, 0) and v2 = (0, -4, 1) form a basis for the hyperplane.

(b) The hyperplane (0, -3, 5, 7)⊥ consists of all vectors x = (x1, x2, x3, x4) in R⁴ satisfying the equation -3x2 + 5x3 + 7x4 = 0. Using x1 = r, x3 = s, and x4 = t as free variables, the solutions of this equation can be written in the form

    x = (r, (5/3)s + (7/3)t, s, t) = r(1, 0, 0, 0) + s(0, 5/3, 1, 0) + t(0, 7/3, 0, 1)

where -∞ < r, s, t < ∞. Thus the vectors v1 = (1, 0, 0, 0), v2 = (0, 5/3, 1, 0), and v3 = (0, 7/3, 0, 1) form a basis for the hyperplane.
DISCUSSION AND DISCOVERY
D1. (a) Assuming A ≠ 0, the dimension of the solution space of Ax = 0 is at most n - 1.
(b) A hyperplane in R⁶ has dimension 5.
(c) The subspaces of R⁵ have dimensions 0, 1, 2, 3, 4, or 5.
(d) The vectors v1 = (1, 0, 1, 0), v2 = (1, 1, 0, 0), and v3 = (1, 1, 1, 0) are linearly independent; thus they span a 3-dimensional subspace of R⁴.
D2. Yes, they are linearly independent. If we write the vectors in the order

    v4 = (0, 0, 0, 0, 1, *), v3 = (0, 0, 0, 1, *, *), v2 = (0, 0, 1, *, *, *), v1 = (1, *, *, *, *, *)

then, because of the positions of the leading 1s, it is clear that none of these vectors can be written as a linear combination of the preceding ones in the list.
D3. False. Such a set is linearly dependent since, when the set is written in reverse order, some vector is a linear combination of its predecessors.
D4. The solution space of Ax = 0 has positive dimension if and only if the system has nontrivial solutions, and this occurs if and only if det(A) = 0. Since det(A) = t² + 7t + 12 = (t + 3)(t + 4), it follows that the solution space has positive dimension if and only if t = -3 or t = -4. The solution space has dimension 1 in each case.
WORKING WITH PROOFS
P1. (a) If S = {v1, v2, ..., vk} is a linearly dependent set in Rⁿ, then there are scalars c1, c2, ..., ck (not all zero) such that c1v1 + c2v2 + ··· + ckvk = 0. It follows that the set S' = {v1, v2, ..., vk, w1, ..., wr} is linearly dependent, since c1v1 + c2v2 + ··· + ckvk + 0w1 + ··· + 0wr = 0 is a nontrivial dependency relation among its elements.

(b) If S = {v1, v2, ..., vk} is a linearly independent set in Rⁿ, then there is no nontrivial dependency relation among its elements, i.e., if c1v1 + c2v2 + ··· + ckvk = 0 then c1 = c2 = ··· = ck = 0. It follows that if S' is any nonempty subset of S, then S' must also be linearly independent, since a nontrivial dependency among its elements would also be a nontrivial dependency among the elements of S.

P2. If k ≠ 0, then (ka) · x = k(a · x) = 0 if and only if a · x = 0; thus (ka)⊥ = a⊥.
EXERCISE SET 7.2
1. (a) A basis for R² must contain exactly two vectors (three are too many); any set of three vectors in R² is linearly dependent.
(b) A basis for R³ must contain exactly three vectors (two are not enough); a set of two vectors cannot span R³.
(c) The vectors v1 and v2 are linearly dependent (v2 = 2v1).

2. (a) A basis for R² must contain exactly two vectors (one is not enough).
(b) A basis for R³ must contain exactly three vectors (four are too many).
(c) These vectors are linearly dependent (any set of vectors containing the zero vector is dependent).
3. (a) The vectors v1 = (2, 1) and v2 = (0, 3) are linearly independent since neither is a scalar multiple of the other; thus these two vectors form a basis for R².
(b) The vector v2 = (-7, 8, 0) is not a scalar multiple of v1 = (4, 1, 0), and v3 = (1, 1, 1) is not a linear combination of v1 and v2 since any such linear combination would have 0 in the third component. Thus v1, v2, and v3 are linearly independent, and it follows that these vectors form a basis for R³.
4. (a) The vectors v1 = (7, 5) and v2 = (4, 8) are linearly independent since neither is a scalar multiple of the other; thus these two vectors form a basis for R².
(b) The vector v2 = (0, 8, 0) is not a scalar multiple of v1 = (0, 1, 3), and v3 = (1, 6, 0) is not a linear combination of v1 and v2 since any such linear combination would have 0 in the first component. Thus v1, v2, and v3 are linearly independent, and it follows that these vectors form a basis for R³.
5. (a) The matrix A having v1, v2, and v3 as its column vectors has det(A) ≠ 0; thus the column vectors are linearly independent and hence form a basis for R³.

(b) The matrix A having v1, v2, and v3 as its column vectors has det(A) = 0; thus the column vectors are linearly dependent and hence do not form a basis for R³.

6. (a) The matrix A having v1, v2, and v3 as its column vectors has det(A) = 0; thus the column vectors are linearly dependent and hence do not form a basis for R³.

(b) The matrix A having v1, v2, and v3 as its column vectors has det(A) ≠ 0; thus the column vectors are linearly independent and hence form a basis for R³.

7. (a) An arbitrary vector x = (x, y, z) in R³ can be written as a linear combination of v1, v2, v3, and v4; thus these vectors span R³. On the other hand, any four vectors in R³ are linearly dependent, so v1, v2, v3, and v4 do not form a basis for R³.
(b) The vector equation v = c1v1 + c2v2 + c3v3 + c4v4 is equivalent to a linear system with one free variable. Setting c4 = t, a general solution of the system expresses c1, c2, c3 in terms of t, where -∞ < t < ∞. Thus v can be expressed as a linear combination of v1, v2, v3, v4 in infinitely many ways, one for each value of t; for example, taking t = 0 gives v = v1 + v2 + v3.
8. (a) An arbitrary vector x = (x, y, z) can be written as

    x = z v1 + (y - z)v2 + (x - y)v3 + 0v4

thus the vectors v1, v2, v3, and v4 span R³. On the other hand, since v4 = 2v2 + v3, the vectors v1, v2, v3, and v4 are linearly dependent and do not form a basis for R³.
(b) The vector equation v = c1v1 + c2v2 + c3v3 + c4v4 is equivalent to a linear system whose augmented matrix has reduced row echelon form

    [1 0 0 0 |  3]
    [0 1 0 2 | -1]
    [0 0 1 1 | -1]

Thus, setting c4 = t, a general solution of the system is given by

    c1 = 3,  c2 = -1 - 2t,  c3 = -1 - t,  c4 = t

where -∞ < t < ∞. Thus

    v = 3v1 - (1 + 2t)v2 - (1 + t)v3 + tv4

for any value of t. For example, corresponding to t = 0, t = -1, and t = -1/2, we have v = 3v1 - v2 - v3, v = 3v1 + v2 - v4, and v = 3v1 - (1/2)v3 - (1/2)v4.
9. The vector v2 = (1, -2, -2) is not a scalar multiple of v1 = (-1, 2, 3); thus S = {v1, v2} is a linearly independent set. Let A be the matrix having v1, v2, and e1 as its columns, i.e.,

    A = [-1  1  1]
        [ 2 -2  0]
        [ 3 -2  0]

Then det(A) = 2 ≠ 0, and it follows from this that S' = {v1, v2, e1} is a linearly independent set and hence a basis for R³. [Similarly, it can be shown that {v1, v2, e2} is a basis for R³. On the other hand, the set {v1, v2, e3} is linearly dependent and thus not a basis for R³.]
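The determinant used in Exercise 9 can be recomputed directly by cofactor expansion (the `det3` helper is mine):

```python
def det3(m):
    """3x3 determinant by cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

A = [[-1, 1, 1], [2, -2, 0], [3, -2, 0]]   # columns v1, v2, e1 from Exercise 9
print(det3(A))   # -> 2, nonzero, so {v1, v2, e1} is a basis for R^3
```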
10. The vector v2 = (3, 1, -2) is not a scalar multiple of v1 = (1, -1, 0); thus S = {v1, v2} is a linearly independent set. Let A be the matrix having v1, v2, and e1 as its columns, i.e.,

    A = [ 1  3  1]
        [-1  1  0]
        [ 0 -2  0]

Then det(A) = 2 ≠ 0, and it follows from this that S' = {v1, v2, e1} is a linearly independent set and hence a basis for R³. [Similarly, it can be shown that {v1, v2, e2} and {v1, v2, e3} are bases for R³.]
11.
12. We have v2 = 2v1, and if we delete the vector v2 from the set S then the remaining vectors are linearly independent, since the determinant of the matrix having them as columns is 27 ≠ 0. Thus S' = {v1, v3, v4} is a basis for R³.

13. Since the determinant of the matrix having v1, v2, v3 as its column vectors is nonzero, these vectors form a basis for R³. The vector equation v = c1v1 + c2v2 + c3v3 is equivalent to a triangular linear system which, from back substitution, has the solution c3 = 1, c2 = 5 - c3 = 4, c1 = 2 - c2 - c3 = -3. Thus v = -3v1 + 4v2 + v3.

14. Since the determinant of the matrix having v1, v2, v3 as its column vectors is -2 ≠ 0, these vectors form a basis for R³. The vector v can then be expressed in terms of this basis by solving the linear system c1v1 + c2v2 + c3v3 = v.
15. (a) Since u · n = (1)(2) + (2)(0) + (-1)(1) = 1 ≠ 0, the vector u is not orthogonal to n. Thus V is not contained in W.

(b) The line V is parallel to u = (2, -1, 1), and the plane W has normal vector n = (1, 3, 1). Since u · n = (2)(1) + (-1)(3) + (1)(1) = 0, the vector u is orthogonal to n. Thus V is contained in W.
16. (a) Since u · n = (1)(2) + (1)(1) + (3)(-1) = 0, the vector u is orthogonal to n. Thus V is contained in W.

(b) The line V is parallel to u = (1, 2, -5), and the plane W has normal vector n = (3, 2, 1). Since u · n = (1)(3) + (2)(2) + (-5)(1) = 2 ≠ 0, the vector u is not orthogonal to n. Thus V is not contained in W.
17. (a) The vector equation c1(1, 1, 0) + c2(0, 1, 2) + c3(2, 1, 3) = (3, 2, -1) is equivalent to the linear system with augmented matrix

    [1 0 2 |  3]
    [1 1 1 |  2]
    [0 2 3 | -1]

The reduced row echelon form of this matrix is

    [1 0 0 | 13/5]
    [0 1 0 | -4/5]
    [0 0 1 |  1/5]

Thus (3, 2, -1) = (13/5)(1, 1, 0) - (4/5)(0, 1, 2) + (1/5)(2, 1, 3) and, by linearity, it follows that

    T(3, 2, -1) = (13/5)(2, 1, -1) - (4/5)(1, 0, 2) + (1/5)(4, 1, 0) = (1/5)(26, 14, -21)

(b) The vector equation c1(1, 1, 0) + c2(0, 1, 2) + c3(2, 1, 3) = (a, b, c) is equivalent to the linear system with augmented matrix

    [1 0 2 | a]
    [1 1 1 | b]
    [0 2 3 | c]

Solving this system gives

    (a, b, c) = ((1/5)a + (4/5)b - (2/5)c)(1, 1, 0) + (-(3/5)a + (3/5)b + (1/5)c)(0, 1, 2) + ((2/5)a - (2/5)b + (1/5)c)(2, 1, 3)

and thus

    T(a, b, c) = ((1/5)a + (4/5)b - (2/5)c)(2, 1, -1) + (-(3/5)a + (3/5)b + (1/5)c)(1, 0, 2) + ((2/5)a - (2/5)b + (1/5)c)(4, 1, 0)
               = ((7/5)a + (3/5)b + (1/5)c, (3/5)a + (2/5)b - (1/5)c, -(7/5)a + (2/5)b + (4/5)c)

(c) From the formula above, we have

    [T] = (1/5)[ 7  3  1]
               [ 3  2 -1]
               [-7  2  4]
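The coordinates found in 17(a) can be verified by recombining the basis vectors with exact arithmetic; the `comb` helper is mine, and the image vectors (2, 1, -1), (1, 0, 2), (4, 1, 0) are those given in the exercise:

```python
from fractions import Fraction

def comb(coeffs, vectors):
    """Linear combination sum of c_i * v_i, componentwise."""
    n = len(vectors[0])
    return tuple(sum(c * v[j] for c, v in zip(coeffs, vectors)) for j in range(n))

c = (Fraction(13, 5), Fraction(-4, 5), Fraction(1, 5))
print(comb(c, [(1, 1, 0), (0, 1, 2), (2, 1, 3)]))   # recovers (3, 2, -1)
print(comb(c, [(2, 1, -1), (1, 0, 2), (4, 1, 0)]))  # T(3, 2, -1) = (26/5, 14/5, -21/5)
```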
18. (a) The vector equation c1(1, 1, 1) + c2(1, 1, 0) + c3(1, 0, 0) = (4, 3, 0) is equivalent to a linear system which has solution c1 = 0, c2 = 3, c3 = 1. Thus (4, 3, 0) = 0(1, 1, 1) + 3(1, 1, 0) + 1(1, 0, 0) and, by linearity, we have

    T(4, 3, 0) = 0(3, 2, 0, 1) + 3(2, 1, 3, -1) + 1(5, -2, 1, 0) = (11, 1, 10, -3)

(b) The vector equation c1(1, 1, 1) + c2(1, 1, 0) + c3(1, 0, 0) = (a, b, c) is equivalent to the linear system

    [1 1 1][x1]   [a]
    [1 1 0][x2] = [b]
    [1 0 0][x3]   [c]

which has solution x1 = c, x2 = b - c, x3 = a - b. Thus

    (a, b, c) = c(1, 1, 1) + (b - c)(1, 1, 0) + (a - b)(1, 0, 0)

and so, by linearity, we have

    T(a, b, c) = c(3, 2, 0, 1) + (b - c)(2, 1, 3, -1) + (a - b)(5, -2, 1, 0)
               = (5a - 3b + c, -2a + 3b + c, a + 2b - 3c, -b + 2c)

(c) From the formula above, we have

    [T] = [ 5 -3  1]
          [-2  3  1]
          [ 1  2 -3]
          [ 0 -1  2]
19. Since

    det[ 1  3  4]
       [-7 -2 -3] = -100 ≠ 0
       [-5  8  5]

the vectors v1 = (1, -7, -5), v2 = (3, -2, 8), v3 = (4, -3, 5) form a basis for R³. The vector equation c1v1 + c2v2 + c3v3 = x is equivalent to the linear system with augmented matrix

    [ 1  3  4 | x]
    [-7 -2 -3 | y]
    [-5  8  5 | z]

and the reduced row echelon form of this matrix is

    [1 0 0 | -(7/50)x - (17/100)y + (1/100)z  ]
    [0 1 0 | -(1/2)x - (1/4)y + (1/4)z        ]
    [0 0 1 |  (33/50)x + (23/100)y - (19/100)z]

Thus a general vector x = (x, y, z) can be expressed in terms of the basis vectors v1, v2, v3 as follows:

    x = (-(7/50)x - (17/100)y + (1/100)z)v1 + (-(1/2)x - (1/4)y + (1/4)z)v2 + ((33/50)x + (23/100)y - (19/100)z)v3
DISCUSSION AND DISCOVERY
D1. (a) True. Any set of more than n vectors in Rn is linearly dependent.
(b) True. Any set of fewer than n vectors cannot be a spanning set for Rn.
(c) True. If every vector in Rn can be expressed in exactly one way as a linear combination of the vectors in S, then S is a basis for Rn and thus must contain exactly n vectors.
(d) True. If Ax = 0 has infinitely many solutions, then det(A) = 0 and so the row vectors of A are linearly dependent.
(e) True. If V ⊆ W (or W ⊆ V) and if dim(V) = dim(W), then V = W.

D2. No. If S = {v1, v2, ..., vn} is a linearly dependent set in Rn, then S is not a spanning set for Rn; thus it is not possible to create a basis by forming linear combinations of the vectors in S.

D3. Each such operator corresponds to (and is determined by) a permutation of the vectors in the basis B. Thus there are a total of n! such operators.
D4. Let A be the matrix having the vectors v1 and v2 as its columns. Then

    det(A) = (sin²α - sin²β) - (cos²α - cos²β) = -cos 2α + cos 2β

and det(A) ≠ 0 if and only if cos 2α ≠ cos 2β, i.e., if and only if α ≠ ±β + kπ where k = 0, ±1, ±2, .... For these values of α and β, the vectors v1 and v2 form a basis for R2.
D5. Suppose W is a subspace of Rn and dim(W) = k. If S = {w1, w2, ..., wj} is a spanning set for W, then either S is a basis for W (in which case S contains exactly k vectors) or, from Theorem 7.2.2, a basis for W can be obtained by removing appropriate vectors from S. Thus the number of elements in a spanning set must be at least k, and the smallest possible number is k.
WORKING WITH PROOFS
P1. Let V be a nonzero subspace of Rn, and let v1 be a nonzero vector in V. If dim(V) = 1, then S = {v1} is a basis for V. Otherwise, from Theorem 7.2.2(b), a basis for V can be obtained by adding appropriate vectors from V to the set S.
P2. Let {v1, v2, ..., vn} be a basis for Rn and, for k any integer between 1 and n, let V = span{v1, v2, ..., vk}. Then S = {v1, v2, ..., vk} is a basis for V and so dim(V) = k. The subspace V = {0} has dimension 0.
P3. Let S = {v1, v2, ..., vn}. Since every vector in Rn can be written as a linear combination of vectors in S, we have span(S) = Rn. Moreover, from the uniqueness, if c1v1 + c2v2 + ··· + cnvn = 0 then c1 = c2 = ··· = cn = 0. Thus the vectors v1, v2, ..., vn span Rn and are linearly independent, i.e., S = {v1, v2, ..., vn} is a basis for Rn.
P4. Since we know that dim(Rn) = n, it suffices to show that the vectors T(v1), T(v2), ..., T(vn) are linearly independent. This follows from the fact that if c1T(v1) + c2T(v2) + ··· + cnT(vn) = 0 then

    T(c1v1 + c2v2 + ··· + cnvn) = c1T(v1) + c2T(v2) + ··· + cnT(vn) = 0

and so, since T is one-to-one, we must have c1v1 + c2v2 + ··· + cnvn = 0. Since v1, v2, ..., vn are linearly independent it follows from this that c1 = c2 = ··· = cn = 0. Thus T(v1), T(v2), ..., T(vn) are linearly independent.
P5. Since B = {v1, v2, ..., vn} is a basis for Rn, every vector x in Rn can be expressed as a linear combination x = c1v1 + c2v2 + ··· + cnvn for exactly one choice of scalars c1, c2, ..., cn. Thus it makes sense to define a transformation T: Rn → Rn by setting

    T(x) = T(c1v1 + c2v2 + ··· + cnvn) = c1w1 + c2w2 + ··· + cnwn

It is easy to check that T is linear. For example, if x = c1v1 + ··· + cnvn and y = d1v1 + ··· + dnvn, then

    T(x + y) = T((c1 + d1)v1 + ··· + (cn + dn)vn) = (c1 + d1)w1 + ··· + (cn + dn)wn
             = (c1w1 + ··· + cnwn) + (d1w1 + ··· + dnwn) = T(x) + T(y)

and so T is additive. Finally, the transformation T has the property that T(vj) = wj for each j = 1, ..., n, and it is clear from the defining formula that T is uniquely determined by this property.
P6. (a) Since {u1, u2, u3} has the correct number of elements, we need only show that the vectors are linearly independent. Suppose c1, c2, c3 are scalars such that c1u1 + c2u2 + c3u3 = 0. Then (c1 + c2 + c3)v1 + (c2 + c3)v2 + c3v3 = 0 and, since {v1, v2, v3} is a linearly independent set, we must have c1 + c2 + c3 = c2 + c3 = c3 = 0. It follows that c1 = c2 = c3 = 0, and this shows that u1, u2, u3 are linearly independent.
(b) If {v1, v2, ..., vn} is a basis for Rn, then so is {u1, u2, ..., un} where u1 = v1, u2 = v1 + v2, ..., un = v1 + v2 + ··· + vn.
P7. Suppose x is an eigenvector of A. Then x ≠ 0, and Ax = λx for some scalar λ. It follows that span{x, Ax} = span{x, λx} = span{x} and so span{x, Ax} has dimension 1.
Conversely, suppose that span{x, Ax} has dimension 1. Then the vectors x ≠ 0 and Ax are linearly dependent; thus there exist scalars c1 and c2, not both zero, such that c1x + c2Ax = 0. We note further that c2 ≠ 0, for if c2 = 0 then since x ≠ 0 we would have c1 = 0 also. Thus Ax = λx where λ = -c1/c2.
P8. Suppose S = {v1, v2, ..., vk} is a basis for V, where V ⊆ W and dim(V) = dim(W). Then S is a linearly independent set in W, and it follows that S must be a basis for W. Otherwise, from Theorem 7.2.2, a basis for W could be obtained by adding additional vectors from W to S, and this would violate the assumption that dim(V) = dim(W). Finally, since S is a basis for W and S ⊆ V, we must have W = span(S) ⊆ V and so W = V.
EXERCISE SET 7.3
1. The orthogonal complement of S = {v1, v2} is the solution set of the system

    x +  y + 3z = 0
        2y -  z = 0

A general solution of this system is given by x = -(7/2)t, y = (1/2)t, z = t, or (x, y, z) = t(-7/2, 1/2, 1). Thus S⊥ is the line through the origin that is parallel to the vector (-7/2, 1/2, 1).

A vector that is orthogonal to both v1 and v2 is

                      [i j  k]
    w = v1 × v2 = det [1 1  3] = -7i + j + 2k = (-7, 1, 2)
                      [0 2 -1]

Note the vector w is parallel to the one obtained in our first solution. Thus S⊥ is the line through the origin that is parallel to (-7, 1, 2) or, equivalently, parallel to (-7/2, 1/2, 1).
2. The orthogonal complement of S = {v1, v2} is the solution set of the system

    2x      -  z = 0
     x + y + 5z = 0

A general solution of this system is given by x = (1/2)t, y = -(11/2)t, z = t, or (x, y, z) = t(1/2, -11/2, 1). Thus S⊥ is the line through the origin that is parallel to the vector (1/2, -11/2, 1).

A vector that is orthogonal to both v1 and v2 is

                      [i j  k]
    w = v1 × v2 = det [2 0 -1] = i - 11j + 2k = (1, -11, 2)
                      [1 1  5]

Note the vector w is parallel to the one obtained in our first solution. Thus S⊥ is the line through the origin that is parallel to (1, -11, 2) or, equivalently, parallel to (1/2, -11/2, 1).
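Both exercises rest on the fact that v1 × v2 is orthogonal to v1 and to v2. A minimal check, assuming the Exercise 2 vectors v1 = (2, 0, -1) and v2 = (1, 1, 5) as reconstructed above:

```python
def cross(u, v):
    # 3-D cross product
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(a*b for a, b in zip(u, v))

v1, v2 = (2, 0, -1), (1, 1, 5)   # vectors assumed from Exercise 2
w = cross(v1, v2)
print(w)  # (1, -11, 2)
assert dot(w, v1) == 0 and dot(w, v2) == 0
```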
3. We have u · v1 = (-1)(6) + (1)(2) + (0)(7) + (2)(2) = 0. Similarly, u · v2 = 0 and u · v3 = 0. Thus u is orthogonal to any linear combination of the vectors v1, v2, and v3, i.e., u belongs to the orthogonal complement of W = span{v1, v2, v3}.

4. We have u · v1 = (0)(4) + (2)(1) + (1)(2) + (-2)(2) = 0, u · v2 = 0, and u · v3 = 0. Thus u is orthogonal to any linear combination of the vectors v1, v2, and v3, i.e., u is in the orthogonal complement of W.
5. The line y = 2x corresponds to vectors of the form u = t(1, 2), i.e., W = span{(1, 2)}. Thus W⊥ corresponds to the line y = -(1/2)x or, equivalently, to vectors of the form w = s(2, -1).
6. Here W⊥ is the line which is normal to the plane x - 2y - 3z = 0 and passes through the origin. A normal vector to this plane is n = (1, -2, -3); thus parametric equations for W⊥ are given by: x = t, y = -2t, z = -3t.
7. The line W corresponds to scalar multiples of the vector u = (2, -5, 4); thus a vector w = (x, y, z) is in W⊥ if and only if u · w = 2x - 5y + 4z = 0. Parametric equations for this plane are: x = (5/2)s - 2t, y = s, z = t.
8. Here W is the line of intersection of the planes x + y + z = 0 and x - y + z = 0. Parametric equations for this line are x = -t, y = 0, z = t; thus W corresponds to vectors of the form x = t(-1, 0, 1). It follows that a vector w = (x, y, z) is in W⊥ if and only if x = z, i.e., if and only if w is of the form

    w = (r, s, r) = r(1, 0, 1) + s(0, 1, 0)

Parametric equations for this plane are given by x = r, y = s, z = r.
9. Let A be the matrix having the given vectors as its rows. The reduced row echelon form of A is

    [1 0 -16]
    [0 1 -19]
    [0 0   0]

Thus the vectors w1 = (1, 0, -16) and w2 = (0, 1, -19) form a basis for the row space of A or, equivalently, for the subspace W spanned by the given vectors. We have W⊥ = row(A)⊥ = null(A). Thus, from the above, we conclude that W⊥ consists of all vectors of the form x = t(16, 19, 1), i.e., the vector u = (16, 19, 1) forms a basis for W⊥.
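The row reduction in Exercise 9 can be replayed exactly. The sketch below uses a small fraction-exact RREF helper of our own (not code from the text); the row vectors are the ones that reappear in Exercise 19's augmented matrix, so they are assumptions rather than quotations:

```python
from fractions import Fraction

def rref(M):
    """Reduced row echelon form over exact rationals."""
    A = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(A), len(A[0])
    r = 0
    for c in range(cols):
        p = next((i for i in range(r, rows) if A[i][c] != 0), None)
        if p is None:
            continue                       # no pivot in this column
        A[r], A[p] = A[p], A[r]
        A[r] = [x / A[r][c] for x in A[r]]  # normalize the pivot row
        for i in range(rows):
            if i != r and A[i][c] != 0:
                f = A[i][c]
                A[i] = [x - f * y for x, y in zip(A[i], A[r])]
        r += 1
    return A

A = [[1, -1, 3], [5, -4, -4], [7, -6, 2]]   # rows assumed from Exercise 19
R = [[int(x) for x in row] for row in rref(A)]
print(R)  # [[1, 0, -16], [0, 1, -19], [0, 0, 0]]
```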
10. Let A be the matrix having the given vectors as its rows. This matrix can be row reduced to

    U = [2 0 -1]
        [0 1  0]

Thus the vectors w1 = (2, 0, -1) and w2 = (0, 1, 0) form a basis for the row space of A or, equivalently, for the subspace W spanned by the given vectors. We have W⊥ = row(A)⊥ = null(A) = null(U). Thus, from the above, we conclude that W⊥ consists of all vectors of the form x = ((1/2)t, 0, t), i.e., the vector u = (1/2, 0, 1) forms a basis for W⊥.
11. Let A be the matrix having the given vectors as its rows. The reduced row echelon form of A is

    R = [1 0 0 0]
        [0 1 0 0]
        [0 0 1 1]
        [0 0 0 0]

Thus the vectors w1 = (1, 0, 0, 0), w2 = (0, 1, 0, 0), and w3 = (0, 0, 1, 1) form a basis for the row space of A or, equivalently, for the space W spanned by the given vectors. We have W⊥ = row(A)⊥ = null(A) = null(R). Thus, from the above, we conclude that W⊥ consists of all vectors of the form x = (0, 0, -t, t), i.e., the vector u = (0, 0, -1, 1) forms a basis for W⊥.
12. Let A be the matrix having the given vectors as its rows. The reduced row echelon form of A is

    R = [1 0 0  0 ]
        [0 1 0 3/2]
        [0 0 1  0 ]
        [0 0 0  0 ]

Thus the vectors w1 = (1, 0, 0, 0), w2 = (0, 1, 0, 3/2), and w3 = (0, 0, 1, 0) form a basis for the row space of A or, equivalently, for the space W spanned by the given vectors. We have W⊥ = row(A)⊥ = null(A) = null(R). Thus, from the above, we conclude that W⊥ consists of all vectors of the form x = (0, -(3/2)t, 0, t), i.e., the vector u = (0, -3/2, 0, 1) forms a basis for W⊥.
13. Let A be the matrix having the given vectors as its rows. The reduced row echelon form of A is

    R = [1 0 0 2 0]
        [0 1 0 3 0]
        [0 0 1 4 0]
        [0 0 0 0 1]

Thus the vectors w1 = (1, 0, 0, 2, 0), w2 = (0, 1, 0, 3, 0), w3 = (0, 0, 1, 4, 0), and w4 = (0, 0, 0, 0, 1) form a basis for the row space of A or, equivalently, for the space W spanned by the given vectors. We have W⊥ = row(A)⊥ = null(A) = null(R). Thus, from the above, we conclude that W⊥ consists of all vectors of the form x = (-2t, -3t, -4t, t, 0), i.e., the vector u = (-2, -3, -4, 1, 0) forms a basis for W⊥.
14. Let A be the matrix having the given vectors as its rows. The reduced row echelon form of A is

    R = [1 0 0 0 -243]
        [0 1 0 0   57]
        [0 0 1 0   -7]
        [0 0 0 1    2]

Thus the vectors w1 = (1, 0, 0, 0, -243), w2 = (0, 1, 0, 0, 57), w3 = (0, 0, 1, 0, -7), w4 = (0, 0, 0, 1, 2) form a basis for the row space of A or, equivalently, for the space W spanned by the given vectors. We have W⊥ = row(A)⊥ = null(A) = null(R). Thus, from the above, we conclude that W⊥ consists of all vectors of the form x = (243t, -57t, 7t, -2t, t), i.e., the vector u = (243, -57, 7, -2, 1) forms a basis for W⊥.
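Exercises 9-14 all end the same way: the claimed basis vector for W⊥ must be orthogonal to every basis vector of the row space. For Exercise 14 this is a one-line check:

```python
ws = [(1, 0, 0, 0, -243), (0, 1, 0, 0, 57), (0, 0, 1, 0, -7), (0, 0, 0, 1, 2)]
u = (243, -57, 7, -2, 1)
# each dot product w_i . u should vanish
dots = [sum(a*b for a, b in zip(w, u)) for w in ws]
print(dots)  # [0, 0, 0, 0]
```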
15. In Exercise 9 we found that the vector u = (16, 19, 1) forms a basis for W⊥; thus W = W⊥⊥ consists of the set of all vectors which are orthogonal to u, i.e., vectors x = (x, y, z) satisfying 16x + 19y + z = 0.

16. In Exercise 10 we found that the vector u = (1/2, 0, 1) forms a basis for W⊥; thus W = W⊥⊥ consists of the set of all vectors which are orthogonal to u, i.e., vectors x = (x, y, z) satisfying x + 2z = 0.

17. In Exercise 11 we found that the vector u = (0, 0, -1, 1) forms a basis for W⊥; thus W = W⊥⊥ consists of the set of all vectors which are orthogonal to u, i.e., vectors x = (x1, x2, x3, x4) satisfying -x3 + x4 = 0.

18. In Exercise 12 we found that the vector u = (0, -3/2, 0, 1) forms a basis for W⊥; thus W = W⊥⊥ consists of the set of all vectors which are orthogonal to u, i.e., vectors x = (x1, x2, x3, x4) satisfying 3x2 - 2x4 = 0.
19. Solution 1. Let A be the matrix having the given vectors as its columns. Then a (column) vector b is a linear combination of v1, v2, v3 if and only if the linear system Ax = b is consistent. The augmented matrix of this system is

    [ 1  5  7 | b1]
    [-1 -4 -6 | b2]
    [ 3 -4  2 | b3]

and a row echelon form for this matrix is

    [1 5 7 | b1              ]
    [0 1 1 | b1 + b2         ]
    [0 0 0 | 16b1 + 19b2 + b3]

From this we conclude that b = (b1, b2, b3) lies in the space spanned by the given vectors if and only if 16b1 + 19b2 + b3 = 0.

Solution 2. The matrix

    [v1]   [ 1 -1  3]
    [v2] = [ 5 -4 -4]
    [v3]   [ 7 -6  2]
    [b ]   [b1 b2 b3]

can be row reduced to

    [1  0 -16]
    [0  1 -19]
    [0  0   0]
    [b1 b2 b3]

and then further reduced to

    [1 0 -16             ]
    [0 1 -19             ]
    [0 0 16b1 + 19b2 + b3]
    [0 0   0             ]

From this we conclude that W = span{v1, v2, v3} has dimension 2, and that b = (b1, b2, b3) is in W if and only if 16b1 + 19b2 + b3 = 0.

Solution 3. In Exercise 9 we found that the vector u = (16, 19, 1) forms a basis for W⊥. Thus b = (b1, b2, b3) is in W = W⊥⊥ if and only if u · b = 0, i.e., if and only if 16b1 + 19b2 + b3 = 0.
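Solution 3's membership test is easy to exercise: each given vector must itself satisfy the condition, while a generic vector should not. (The vectors v1, v2, v3 are read off Solution 1's augmented matrix.)

```python
def in_W(b):
    # Solution 3: b lies in W iff it is orthogonal to u = (16, 19, 1)
    return 16*b[0] + 19*b[1] + b[2] == 0

vs = [(1, -1, 3), (5, -4, -4), (7, -6, 2)]
print([in_W(v) for v in vs])  # [True, True, True]
print(in_W((1, 0, 0)))        # False
```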
20. Solution 1. Let A be the matrix having the given vectors as its columns. Then a (column) vector b is a linear combination of v1, v2, v3 if and only if the linear system Ax = b is consistent. Row reducing the augmented matrix of this system (details omitted), one finds that the system is consistent, i.e., that b = (b1, b2, b3) lies in the space spanned by the given vectors, if and only if b1 + 2b3 = 0.

Solution 2. The matrix [v1; v2; v3; b], having the given vectors and b as its rows, can be row reduced to

    [2  0 -1]
    [0  1  0]
    [0  0  0]
    [b1 b2 b3]

and then further reduced to

    [2 0 -1          ]
    [0 1  0          ]
    [0 0 (1/2)b1 + b3]
    [0 0  0          ]

From this we conclude that W = span{v1, v2, v3} has dimension 2, and that b = (b1, b2, b3) is in W if and only if b1 + 2b3 = 0.

Solution 3. In Exercise 10 we found that the vector u = (1/2, 0, 1) forms a basis for W⊥. Thus b = (b1, b2, b3) is in W = W⊥⊥ if and only if u · b = 0, i.e., if and only if b1 + 2b3 = 0.
21. Solution 1. Let A be the matrix having the given vectors as its columns. Then a (column) vector b is a linear combination of v1, v2, v3, v4 if and only if the linear system Ax = b is consistent. Row reducing the augmented matrix [v1 v2 v3 v4 | b] of this system (details omitted), one finds that the last row of a row echelon form corresponds to the equation 0 = -b3 + b4. Thus a vector b = (b1, b2, b3, b4) lies in the space spanned by the given vectors if and only if -b3 + b4 = 0.
Solution 2. The matrix [v1; v2; v3; v4; b], having the given vectors and b as its rows, can be row reduced (details omitted) to

    [1 0 0  0      ]
    [0 1 0  0      ]
    [0 0 1  1      ]
    [0 0 0 -b3 + b4]
    [0 0 0  0      ]

From this we conclude that W = span{v1, v2, v3, v4} has dimension 3, and that b = (b1, b2, b3, b4) is in W if and only if -b3 + b4 = 0.

Solution 3. In Exercise 11 we found that the vector u = (0, 0, -1, 1) forms a basis for W⊥. Thus b = (b1, b2, b3, b4) is in W = W⊥⊥ if and only if u · b = 0, i.e., if and only if -b3 + b4 = 0.
22. Solution 1. Let A be the matrix having the given vectors as its columns. Then a (column) vector b is a linear combination of these vectors if and only if the linear system Ax = b is consistent. Row reducing the augmented matrix of this system (details omitted), one finds that b = (b1, b2, b3, b4) lies in the space spanned by the given vectors if and only if 3b2 - 2b4 = 0.
Solution 2. The matrix [v1; v2; v3; v4; b], having the given vectors and b as its rows, can be row reduced to

    [1 0 0  0 ]
    [0 1 0 3/2]
    [0 0 1  0 ]
    [0 0 0  0 ]
    [b1 b2 b3 b4]

and then further reduced to

    [1 0 0  0           ]
    [0 1 0 3/2          ]
    [0 0 1  0           ]
    [0 0 0 -(3/2)b2 + b4]
    [0 0 0  0           ]

From this we conclude that W = span{v1, v2, v3, v4} has dimension 3, and that b = (b1, b2, b3, b4) is in W if and only if 3b2 - 2b4 = 0.

Solution 3. In Exercise 12 we found that the vector u = (0, -3/2, 0, 1) forms a basis for W⊥. Thus b = (b1, b2, b3, b4) is in W = W⊥⊥ if and only if u · b = -(3/2)b2 + b4 = 0, i.e., if and only if 3b2 - 2b4 = 0.
23. The augmented matrix [v1 v2 v3 v4 | b1 | b2 | b3] can be row reduced (details omitted); the reduced row echelon form shows that the linear systems corresponding to b1 and b2 are consistent, while the system corresponding to b3 is not. From this we conclude that the vectors b1 and b2 lie in span{v1, v2, v3, v4}, but b3 does not.
24. Similarly, the augmented matrix [v1 v2 v3 v4 | b1 | b2 | b3] can be row reduced (details omitted) to a form showing that the systems for b1 and b2 are consistent and the system for b3 is not. Thus the vectors b1 and b2 lie in span{v1, v2, v3, v4}, but b3 does not.
25. The reduced row echelon form of the matrix A is

    R = [1 3 0 4 0 0]
        [0 0 1 2 0 0]
        [0 0 0 0 1 0]
        [0 0 0 0 0 1]
        [0 0 0 0 0 0]

Thus the vectors r1 = (1, 3, 0, 4, 0, 0), r2 = (0, 0, 1, 2, 0, 0), r3 = (0, 0, 0, 0, 1, 0), r4 = (0, 0, 0, 0, 0, 1) form a basis for the row space of A. We also conclude from an inspection of R that the null space of A (solutions of Ax = 0) consists of vectors of the form

    x = s(-3, 1, 0, 0, 0, 0) + t(-4, 0, -2, 1, 0, 0)

Thus the vectors n1 = (-3, 1, 0, 0, 0, 0) and n2 = (-4, 0, -2, 1, 0, 0) form a basis for the null space of A. It is easy to check that ri · nj = 0 for all i, j. Thus row(A) and null(A) are orthogonal subspaces of R6.
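The final claim of Exercise 25 — that every ri is orthogonal to every nj — can be verified mechanically:

```python
rs = [(1, 3, 0, 4, 0, 0), (0, 0, 1, 2, 0, 0),
      (0, 0, 0, 0, 1, 0), (0, 0, 0, 0, 0, 1)]
ns = [(-3, 1, 0, 0, 0, 0), (-4, 0, -2, 1, 0, 0)]
# every row-space basis vector dotted with every null-space basis vector
ok = all(sum(a*b for a, b in zip(r, n)) == 0 for r in rs for n in ns)
print(ok)  # True
```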
26. The reduced row echelon form R of the matrix A (details omitted) has three nonzero rows r1, r2, r3, with leading 1's in the first three columns; these rows form a basis for the row space of A. We also conclude from an inspection of R that the null space of A (solutions of Ax = 0) is two-dimensional: a general solution of Ax = 0 has the form x = s·n1 + t·n2, and the vectors n1, n2 form a basis for the null space of A. It is easy to check that ri · nj = 0 for all i, j. Thus row(A) and null(A) are orthogonal subspaces of R5.
27. The vectors r1 = (1, 3, 0, 4, 0, 0), r2 = (0, 0, 1, 2, 0, 0), r3 = (0, 0, 0, 0, 1, 0), and r4 = (0, 0, 0, 0, 0, 1) form a basis for the row space of A (see Exercise 25).

28. The vectors r1, r2, and r3 found in Exercise 26 form a basis for the row space of A (see Exercise 26).
29. The reduced row echelon forms of A and B (details omitted) have the same nonzero rows. Thus the vectors r1 = (1, 0, 3, 0), r2 = (0, 1, 2, 0), r3 = (0, 0, 0, 1) form a basis for both of the row spaces. It follows that row(A) = row(B).

30. The reduced row echelon forms of A and B (details omitted) likewise have the same nonzero rows r1, r2, r3, which form a basis for both of the row spaces. It follows that row(A) = row(B).
31. Let B be the matrix having the given vectors as its rows. The reduced row echelon form of B is

    R = [1 0 -1 2]
        [0 1 -4 0]

and from this it follows that a general solution of Bx = 0 is given by x = s(1, 4, 1, 0) + t(-2, 0, 0, 1), where -∞ < s, t < ∞. Thus the vectors w1 = (1, 4, 1, 0) and w2 = (-2, 0, 0, 1) form a basis for the null space of B, and so the matrix

    A = [ 1 4 1 0]
        [-2 0 0 1]

has the property that null(A) = row(A)⊥ = null(B)⊥ = row(B)⊥⊥ = row(B).

32. (a) If A = [1 0 0; 0 1 0; 0 0 0] and x = (x, y, z), then Ax = 0 if and only if x = y = 0. Thus the null space of A corresponds to points on the z-axis. On the other hand, the column space of A consists of all vectors of the form Ax = (x, y, 0), which corresponds to points in the xy-plane.
(b) The matrix

    B = [0 0 0]
        [0 0 0]
        [0 0 1]

has the specified null space and column space.
DISCUSSION AND DISCOVERY
D1. (a) False. This statement is true if and only if the nonzero rows of A are linearly independent, i.e., if there are no additional zero rows in an echelon form.
(b) True. If E is an elementary matrix, then E is invertible and so EAx = 0 if and only if Ax = 0.
(c) True. If A has rank n, then there can be no zero rows in an echelon form for A; thus the reduced row echelon form is the identity matrix.
(d) False. For example, if m = n and A is invertible, then row(A) = Rn and null(A) = {0}.
(e) True. This follows from the fact (Theorem 7.3.3) that S⊥ is a subspace of Rn.

D2. (a) True. If A is invertible, then the rows of A and the columns of A each form a basis for Rn; thus row(A) = col(A) = Rn.
(b) False. In fact, the opposite is true: if W is a subspace of V, then V⊥ is a subspace of W⊥, since every vector that is orthogonal to V will also be orthogonal to W.
(c) False. The specified condition implies that row(A) ⊆ row(B), from which it follows that

    null(A) = row(A)⊥ ⊇ row(B)⊥ = null(B)

but in general the inclusion can be proper.
(d) False. For example, A = [1 0; 0 0] and B = [0 0; 1 0] have the same row space but different column spaces.
(e) True. This is in fact true for any invertible matrix E. The rows of EA are linear combinations of the rows of A; thus it is always true that row(EA) ⊆ row(A). If E is invertible, then A = E^(-1)(EA) and so we also have row(A) ⊆ row(EA).
D3. If null(A) is the line 3x - 5y = 0, then row(A) = null(A)⊥ is the line 5x + 3y = 0. Thus each row of A must be a scalar multiple of the vector (3, -5), i.e., A is of the form

    A = [3s -5s]
        [3t -5t]
D4. The null space of A corresponds to the kernel of TA, and the column space of A corresponds to the range of TA.

D5. If W = a⊥, then W⊥ = a⊥⊥ = span{a} is the 1-dimensional subspace spanned by the vector a.

D6. (a) If null(A) is a line through the origin, then row(A) = null(A)⊥ is the plane through the origin that is perpendicular to that line.
(b) If col(A) is a line through the origin, then null(A^T) = col(A)⊥ is the plane through the origin that is perpendicular to that line.

D7. The first two matrices are invertible; thus in each case the null space is {0}. The null space of the third matrix is the line 3x + y = 0, and the null space of the fourth matrix (the zero matrix) is all of R2.
D8. (a) Since S has equation y = 3x, S⊥ has equation y = -(1/3)x, and S⊥⊥ = S has equation y = 3x.
(b) If S = {(1, 2)}, then span(S) has equation y = 2x; thus S⊥ has equation y = -(1/2)x and S⊥⊥ = span(S) has equation y = 2x.

D9. No, this is not possible. The row space of an invertible n × n matrix is all of Rn since its rows form a basis for Rn. On the other hand, the row space of a singular matrix is a proper subspace of Rn since its rows are linearly dependent and do not span all of Rn.
WORKING WITH PROOFS
P1. It is clear, from the definition of row space, that (a) implies (c). Conversely, suppose that (c) holds. Then, since each row of A is a linear combination of the rows of B, it follows that any linear combination of the rows of A can be expressed as a linear combination of the rows of B. This shows that row(A) ⊆ row(B), and a similar argument shows that row(B) ⊆ row(A). Thus (a) holds.

P2. The row vectors of an invertible matrix are linearly independent and so, since there are exactly n of them, they form a basis for Rn.

P3. If P is invertible, then (PA)x = P(Ax) = 0 if and only if Ax = 0; thus the matrices PA and A have the same null space, and so nullity(PA) = nullity(A). From Theorem 7.3.8, it follows that PA and A also have the same row space. Thus rank(PA) = dim(row(PA)) = dim(row(A)) = rank(A).
P4. From Theorem 7.3.4 we have S⊥ = span(S)⊥, and (span(S)⊥)⊥ = span(S) since span(S) is a subspace. Thus (S⊥)⊥ = (span(S)⊥)⊥ = span(S).

P5. We have

              [r1(A)·c1(A^(-1))  r1(A)·c2(A^(-1))  ···  r1(A)·cn(A^(-1))]   [1 0 ··· 0]
    AA^(-1) = [r2(A)·c1(A^(-1))  r2(A)·c2(A^(-1))  ···  r2(A)·cn(A^(-1))] = [0 1 ··· 0] = [δij]
              [       ···               ···                   ···       ]   [   ···    ]
              [rn(A)·c1(A^(-1))  rn(A)·c2(A^(-1))  ···  rn(A)·cn(A^(-1))]   [0 0 ··· 1]

In particular, if i ∈ {1, 2, ..., k} and j ∈ {k + 1, k + 2, ..., n}, then i < j and so ri(A)·cj(A^(-1)) = 0. This shows that the first k rows of A and the last n - k columns of A^(-1) are orthogonal.
EXERCISE SET 7.4
1.
The reduced row echelon form for A is
0 -16]
[~
1
-19
0
0
Thus rank(A) = 2, and a general solution of Ax= 0 is given by x = ( 16t, 19t, t) = t(16, 19, 1). It
foll ows that nullity(A) = 1. Thus rank( A) + nullity(A) = 2 + 1 = 3 the number of columns of A.
2.
T he reduced row echelon form for A .is
[~
~ -~]
0
0
Thus rank(A) = 1, a nd a general solution of Ax= 0 is given by x = (1t, s, t) = s(O, l ,O) + t(~,0, 1 ) .
It. follows t hat. nullity(A) = 2. Thus rank(A) + nullity(A) = 1 + 2 = 3 t he number of columns of A .
3. The reduced row echelon form for A (details omitted) has two nonzero rows. Thus rank(A) = 2, and a general solution of Ax = 0 involves two free parameters, one of the corresponding null space basis vectors being (-1, -1, 1, 0). It follows that nullity(A) = 2. Thus rank(A) + nullity(A) = 2 + 2 = 4, the number of columns of A.
4. The reduced row echelon form for A is

    [1 0 1 2 1]
    [0 1 1 1 2]

Thus rank(A) = 2, and a general solution of Ax = 0 is given by

    x = r(-1, -1, 1, 0, 0) + s(-2, -1, 0, 1, 0) + t(-1, -2, 0, 0, 1)

It follows that nullity(A) = 3. Thus rank(A) + nullity(A) = 2 + 3 = 5, the number of columns of A.

5. The reduced row echelon form for A (details omitted) has three nonzero rows. Thus rank(A) = 3, and a general solution of Ax = 0 involves two free parameters, one of the corresponding null space basis vectors being (-2, 0, 0, 1, 0). It follows that nullity(A) = 2. Thus rank(A) + nullity(A) = 3 + 2 = 5, the number of columns of A.
6. The reduced row echelon form for A (details omitted) has two nonzero rows. Thus rank(A) = 2, and a general solution of Ax = 0 involves five free parameters. It follows that nullity(A) = 5. Thus rank(A) + nullity(A) = 2 + 5 = 7, the number of columns of A.
7. (a) If A is a 5 × 8 matrix having rank 3, then its nullity must be 8 - 3 = 5. Thus there are 3 pivot variables and 5 free parameters in a general solution of Ax = 0.
(b) If A is a 7 × 4 matrix having nullity 2, then its rank must be 4 - 2 = 2. Thus there are 2 pivot variables and 2 free parameters in a general solution of Ax = 0.
(c) If A is a 6 × 6 matrix whose row echelon forms have 2 nonzero rows, then A has rank 2 and nullity 6 - 2 = 4. Thus there are 2 pivot variables and 4 free parameters in a general solution of Ax = 0.

8. (a) If A is a 7 × 9 matrix having rank 5, then its nullity must be 9 - 5 = 4. Thus there are 5 pivot variables and 4 free parameters in a general solution of Ax = 0.
(b) If A is an 8 × 6 matrix having nullity 3, then its rank must be 6 - 3 = 3. Thus there are 3 pivot variables and 3 free parameters in a general solution of Ax = 0.
(c) If A is a 7 × 7 matrix whose row echelon forms have 3 nonzero rows, then A has rank 3 and nullity 7 - 3 = 4. Thus there are 3 pivot variables and 4 free parameters in a general solution of Ax = 0.

9. (a) If A is a 5 × 3 matrix, then the largest possible value for rank(A) is 3 and the smallest possible value for nullity(A) is 0.
(b) If A is a 3 × 5 matrix, then the largest possible value for rank(A) is 3 and the smallest possible value for nullity(A) is 2.
(c) If A is a 4 × 4 matrix, then the largest possible value for rank(A) is 4 and the smallest possible value for nullity(A) is 0.

10. (a) If A is a 6 × 4 matrix, then the largest possible value for rank(A) is 4 and the smallest possible value for nullity(A) is 0.
(b) If A is a 2 × 6 matrix, then the largest possible value for rank(A) is 2 and the smallest possible value for nullity(A) is 4.
(c) If A is a 5 × 5 matrix, then the largest possible value for rank(A) is 5 and the smallest possible value for nullity(A) is 0.
11. Let A be the 2 × 4 matrix having v1 and v2 as its rows. From the reduced row echelon form of A, a general solution of Ax = 0 is given by x = s·w3 + t·w4 (details omitted). Thus the vectors v1, v2, w3, w4 form a basis for R4.
12. Let A be the 3 × 5 matrix having v1, v2, and v3 as its rows. From the reduced row echelon form of A, a general solution of Ax = 0 is given by x = s·w4 + t·w5 (details omitted). Thus the vectors v1, v2, v3, w4, w5 form a basis for R5.
13. (a) This matrix has rank 1 and can be written as an outer product A = uv^T (details omitted).
(b) This matrix has rank 2.
(c) This matrix has rank 1 and can be written as an outer product A = uv^T (details omitted).

14. (a) This matrix has rank 2.
(b) This matrix has rank 1 and can be written as an outer product A = uv^T (details omitted).
(c) This matrix has rank 1 and can be written as an outer product A = uv^T (details omitted).

15. The matrix in question (details omitted) is a symmetric matrix.

16. The matrix in question (details omitted) is a symmetric matrix.
17. The matrix A can be row reduced to upper triangular form as follows:

    [1 1 t]    [1   1      t   ]    [1   1        t        ]
    [1 t 1] →  [0 t - 1  1 - t ] →  [0 t - 1    1 - t      ]
    [t 1 1]    [0 1 - t  1 - t²]    [0   0   (2 + t)(1 - t)]

If t = 1, then the latter has only one nonzero row and so rank(A) = 1. If t = -2, then there are two nonzero rows and rank(A) = 2. If t ≠ 1 or -2, then there are three nonzero rows and rank(A) = 3.
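The three rank cases can be confirmed numerically. The sketch below assumes the matrix A(t) = [[1, 1, t], [1, t, 1], [t, 1, 1]] reconstructed above, and uses a small exact-arithmetic rank helper of our own:

```python
from fractions import Fraction

def rank(M):
    """Rank via fraction-exact forward elimination."""
    A = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(A[0])):
        p = next((i for i in range(r, len(A)) if A[i][c] != 0), None)
        if p is None:
            continue               # no pivot in this column
        A[r], A[p] = A[p], A[r]
        for i in range(r + 1, len(A)):
            f = A[i][c] / A[r][c]
            A[i] = [x - f * y for x, y in zip(A[i], A[r])]
        r += 1
    return r

def A(t):
    # the matrix assumed for Exercise 17
    return [[1, 1, t], [1, t, 1], [t, 1, 1]]

print(rank(A(1)), rank(A(-2)), rank(A(5)))  # 1 2 3
```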
18. The matrix A can be row reduced (details omitted) to an upper triangular form whose third row is (0, 0, -(t - 1)(2t - 3)). If t = 1 or t = 3/2, then the third row of the latter matrix is a zero row and rank(A) = 2. For all other values of t there are no zero rows, and rank(A) = 3.
19. If the matrix

    [x y z]
    [1 x y]

has rank 1, then the first row must be a scalar multiple of the second row. Thus (x, y, z) = t(1, x, y), and so x = t, y = tx = t², z = ty = t³.

20. Let A be the 3 × 3 matrix having v1, v2, v3 as its rows. The reduced row echelon form of A is

    R = [1 0  5]
        [0 1 -4]
        [0 0  0]

thus the vectors w1 = (1, 0, 5) and w2 = (0, 1, -4) form a basis for W = row(A) = row(R). We have Ax = 0 if and only if Rx = 0, and a general solution of the latter is given by

    x = (-5t, 4t, t) = t(-5, 4, 1)

thus the vector x1 = (-5, 4, 1) forms a basis for W⊥ = row(A)⊥. Finally, dim(W) + dim(W⊥) = 2 + 1 = 3.
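Exercise 19's condition can be restated via 2×2 minors: the matrix [[x, y, z], [1, x, y]] has rank at most 1 exactly when all three of its 2×2 minors vanish, and the parametrization (t, t², t³) makes them vanish. A quick check:

```python
def minors(x, y, z):
    # the three 2x2 minors of [[x, y, z], [1, x, y]]
    return (x*x - y, x*y - z, y*y - x*z)

t = 7
print(minors(t, t**2, t**3))  # (0, 0, 0)
print(minors(1, 2, 3))        # (-1, -1, 1)
```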
21. The subspace W, consisting of all vectors of the form x = t(2, -1, -3), has dimension 1. The subspace W⊥ is the plane consisting of all vectors x = (x, y, z) which are orthogonal to (2, -1, -3), i.e., which satisfy the equation

    2x - y - 3z = 0

A general solution of the latter is given by

    x = (s, 2s - 3t, t) = s(1, 2, 0) + t(0, -3, 1)

where -∞ < s, t < ∞. Thus the vectors (1, 2, 0) and (0, -3, 1) form a basis for W⊥, and we have dim(W) + dim(W⊥) = 1 + 2 = 3.
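For Exercise 21, the two claimed basis vectors of W⊥ are visibly independent, so only their orthogonality to the direction of W needs checking:

```python
d = (2, -1, -3)                  # direction vector of W
basis = [(1, 2, 0), (0, -3, 1)]  # claimed basis for the plane W-perp
dots = [sum(a*b for a, b in zip(v, d)) for v in basis]
print(dots)  # [0, 0]
```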
22. If u and v are nonzero column vectors and A = uv^T, then Ax = (uv^T)x = u(v^T x) = (v · x)u. Thus Ax = 0 if and only if v · x = 0, i.e., if and only if x is orthogonal to v. This shows that ker(T) = v⊥. Similarly, the range of T consists of all vectors of the form Ax = (v · x)u, and so ran(T) = span{u}.
23. (a) If B is obtained from A by changing only one entry, then A - B has only one nonzero entry (hence only one nonzero row), and so rank(A - B) = 1.
(b) If B is obtained from A by changing only one column (or one row), then A - B has only one nonzero column (or row), and so rank(A - B) = 1.
(c) If B is obtained from A in the specified manner, then B - A has rank 1. Thus B - A is of the form uv^T and B = A + uv^T.

24. (a) If A = uv^T, then A² = (uv^T)(uv^T) = u(v^T u)v^T = (v · u)uv^T = (u · v)A.
(b) If Ax = λx for some nonzero vector x, then A²x = λ²x. On the other hand, since A² = (u · v)A, we have A²x = (u · v)Ax = (u · v)λx. Thus λ² = (u · v)λ, and if λ ≠ 0 it follows that λ = u · v.
(c) If A is any square matrix, then I - A fails to be invertible if and only if there is a nonzero vector x such that (I - A)x = 0; this is equivalent to saying that λ = 1 is an eigenvalue of A. Thus if A = uv^T is a rank 1 matrix for which I - A is invertible, we must have u · v ≠ 1 and A² ≠ A.
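The identity A² = (u · v)A from part (a) is easy to test on sample vectors (the specific u and v below are our own choices, not from the exercise):

```python
def outer(u, v):
    # rank-1 matrix u v^T
    return [[a*b for b in v] for a in u]

def matmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(len(B))) for j in range(len(B[0]))]
            for i in range(len(A))]

u, v = [1, 2, 3], [2, -1, 4]
A = outer(u, v)
dot_uv = sum(a*b for a, b in zip(u, v))
print(dot_uv)  # 12
A2 = matmul(A, A)
assert A2 == [[dot_uv * x for x in row] for row in A]   # A^2 == (u.v) A
```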
DISCUSSION AND DISCOVERY
D1. (a) True. For example, if A is m × n where m > n (more rows than columns) then the rows of A form a set of m vectors in R^n and must therefore be linearly dependent. On the other hand, if m < n then the columns of A must be linearly dependent.
(b) False. If the additional row is a linear combination of the existing rows, then the rank will not be increased.
(c) False. For example, if m = 1 then rank(A) = 1 and nullity(A) = n - 1.
(d) True. Such a matrix must have rank less than n; thus nullity(A) = n - rank(A) ≥ 1.
(e) False. If Ax = b is inconsistent for some b then A is not invertible and so Ax = 0 has nontrivial solutions; thus nullity(A) ≥ 1.
(f) True. We must have rank(A) + nullity(A) = 3; thus it is not possible to have rank(A) and nullity(A) both equal to 1.
D2. If A is m × n, then A^T is n × m and so, by Theorem 7.4.1, we have rank(A^T) + nullity(A^T) = m.
D3. If A is a 3 × 5 matrix, then the number of leading 1's in the reduced row echelon form is at most 3, and (assuming A ≠ 0) the number of parameters in a general solution of Ax = 0 is at most 4.
D4. If A is a 5 × 3 matrix, then rank(A) ≤ 3 and so the number of leading 1's in the reduced row echelon form of A is at most 3. Assuming A ≠ 0, we have rank(A) ≥ 1 and so the number of free parameters in a general solution of Ax = 0 is at most 2.
D5. If A is a 3 × 5 matrix, then the possible values for rank(A) are 0 (if A = 0), 1, 2, or 3, and the corresponding values for nullity(A) are 5, 4, 3, or 2. If A is a 5 × 3 matrix, then the possible values for rank(A) are 0, 1, 2, or 3, and the corresponding values for nullity(A) are 3, 2, 1, or 0. If A is a 5 × 5 matrix, then the possible values for rank(A) are 0, 1, 2, 3, 4, or 5, and the corresponding values for nullity(A) are 5, 4, 3, 2, 1, or 0.
D6. Assuming u and v are nonzero vectors, the rank of A = uv^T is 1; thus the nullity is n - 1.
D7. Let A be the standard matrix of T. If ker(T) is a line through the origin, then nullity(A) = 1 and so rank(A) = n - 1. It follows that ran(T) = col(A) has dimension n - 1 and thus is a hyperplane in R^n.
D8. If r = 2 and s = 1, then rank(A) = 2. Otherwise, either r - 2 or s - 1 (or both) is ≠ 0, and rank(A) = 3. Note that, since the first and fourth rows are linearly independent, rank(A) can never be 1 (or 0).
D9. If λ ≠ 0, then the reduced row echelon form of A has a leading 1 in each of its three rows (details omitted), and so rank(A) = 3. On the other hand, if λ = 0, then the reduced row echelon form has only two nonzero rows, and so rank(A) = 2. Thus λ = 0 is the value for which the matrix A has lowest rank.
D10. Let A = [1 0; 0 0] and B = [0 1; 0 0]. Then rank(A) = rank(B) = 1, whereas A^2 = [1 0; 0 0] has rank 1 and B^2 = [0 0; 0 0] has rank 0.
D11. If AB = 0 then rank(A) + rank(B) - n ≤ rank(AB) = 0, and so rank(A) + rank(B) ≤ n. Since rank(B) = n - nullity(B), it follows that rank(A) + n - nullity(B) ≤ n; thus rank(A) ≤ nullity(B). Similarly, rank(B) ≤ nullity(A).
WORKING WITH PROOFS
P1. First we note that the matrix A = [a11 a12 a13; a21 a22 a23] fails to be of rank 2 if and only if one of the rows is a scalar multiple of the other (which includes the case where one or both is a row of zeros). Thus it is sufficient to prove that the latter is equivalent to the condition (#) that each of the 2 × 2 determinants formed from the columns of A is equal to zero.
Suppose that one of the rows of A is a scalar multiple of the other. Then the same is true of each of the 2 × 2 matrices that appear in (#), and so each of these determinants is equal to zero.
Suppose, conversely, that the condition (#) holds. Without loss of generality we may assume that the first row of A is not a row of zeros, and further (interchanging columns if necessary) that a11 ≠ 0. We then have a22 = (a21/a11)a12 and a23 = (a21/a11)a13. Thus (a21, a22, a23) = (a21/a11)(a11, a12, a13), and so the second row is a scalar multiple of the first row.
P2. If A is of rank 1, then A = xy^T for some (nonzero) column vectors x and y. If, in addition, A is symmetric, then we also have A = A^T = (xy^T)^T = yx^T. From this it follows that
x(y^Tx) = (xy^T)x = (yx^T)x = y(x^Tx) = ‖x‖²y
Since x and y are nonzero we have x^Ty = y^Tx = y·x ≠ 0 and x = (‖x‖²/(y^Tx))y. If x^Ty > 0, it follows that A = yx^T = (‖x‖²/(x^Ty))yy^T = uu^T where u = (‖x‖/√(x^Ty))y. If x^Ty < 0, then the corresponding formula is A = -uu^T where u = (‖x‖/√(-x^Ty))y.
P3. Writing A in terms of its columns and B in terms of its rows, we have
AB = [c1(A) c2(A) ··· ck(A)][r1(B); r2(B); ···; rk(B)] = c1(A)r1(B) + c2(A)r2(B) + ··· + ck(A)rk(B)
and each of the products cj(A)rj(B) is a rank 1 matrix.
P4. Since the set V ∪ W contains n vectors, it suffices to show that V ∪ W is a linearly independent set. Suppose then that c1, c2, ..., ck and d1, d2, ..., d_{n-k} are scalars with the property that
c1v1 + c2v2 + ··· + ckvk + d1w1 + d2w2 + ··· + d_{n-k}w_{n-k} = 0
Then the vector n = c1v1 + c2v2 + ··· + ckvk = -(d1w1 + d2w2 + ··· + d_{n-k}w_{n-k}) belongs both to V = row(A) and to W = null(A) = row(A)^⊥. It follows that n·n = ‖n‖² = 0 and so n = 0. Thus we simultaneously have c1v1 + c2v2 + ··· + ckvk = 0 and d1w1 + d2w2 + ··· + d_{n-k}w_{n-k} = 0. Since the vectors v1, v2, ..., vk are linearly independent it follows that c1 = c2 = ··· = ck = 0, and since w1, w2, ..., w_{n-k} are linearly independent it follows that d1 = d2 = ··· = d_{n-k} = 0. This shows that V ∪ W is a linearly independent set.
P5. From the inequality rank(A) + rank(B) - n ≤ rank(AB) ≤ rank(A) it follows that
n - rank(A) ≤ n - rank(AB) ≤ 2n - rank(A) - rank(B)
which, using Theorem 7.4.1, is the same as
nullity(A) ≤ nullity(AB) ≤ nullity(A) + nullity(B)
Similarly, nullity(B) ≤ nullity(AB) ≤ nullity(A) + nullity(B).
P6. Suppose 1 + v^TA^{-1}u ≠ 0, let B = A + uv^T, and let X = A^{-1} - (A^{-1}uv^TA^{-1})/(1 + v^TA^{-1}u). Then
BX = (A + uv^T)(A^{-1} - (A^{-1}uv^TA^{-1})/(1 + v^TA^{-1}u))
   = I + uv^TA^{-1} - (uv^TA^{-1} + u(v^TA^{-1}u)v^TA^{-1})/(1 + v^TA^{-1}u)
   = I + uv^TA^{-1} - ((1 + v^TA^{-1}u)uv^TA^{-1})/(1 + v^TA^{-1}u)
   = I + uv^TA^{-1} - uv^TA^{-1} = I
and so B is invertible with B^{-1} = X = A^{-1} - (A^{-1}uv^TA^{-1})/(1 + v^TA^{-1}u).
Note. If v^TA^{-1}u = -1, then BA^{-1}u = (A + uv^T)A^{-1}u = u + u(v^TA^{-1}u) = u - u = 0; thus A^{-1}u ≠ 0 is a nontrivial solution of Bx = 0, and so B is singular.
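The inverse formula established in P6 is the Sherman–Morrison identity. A small numerical check (with a simple invertible A and coordinate vectors u, v chosen purely for illustration, assuming NumPy is available) confirms it:

```python
import numpy as np

A = 2.0 * np.eye(4)                       # a simple invertible matrix
u = np.array([1.0, 0.0, 0.0, 0.0])
v = np.array([0.0, 1.0, 0.0, 0.0])

Ainv = np.linalg.inv(A)
denom = 1.0 + v @ Ainv @ u                # 1 + v^T A^{-1} u; here it equals 1, so B is invertible
B = A + np.outer(u, v)

# Formula from P6: B^{-1} = A^{-1} - (A^{-1} u)(v^T A^{-1}) / (1 + v^T A^{-1} u)
Binv = Ainv - np.outer(Ainv @ u, v @ Ainv) / denom

assert np.allclose(B @ Binv, np.eye(4))
assert np.allclose(Binv, np.linalg.inv(B))
```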
EXERCISE SET 7.5
1. The reduced row echelon form of A has two nonzero rows, and the reduced row echelon form of A^T likewise has two nonzero rows (details omitted). Thus dim(row(A)) = 2 and dim(col(A)) = dim(row(A^T)) = 2. It follows that dim(null(A)) = 5 - 2 = 3, and dim(null(A^T)) = 4 - 2 = 2. Since dim(null(A)) = 3, there are 3 free parameters in a general solution of Ax = 0.

2. The reduced row echelon form of A has three nonzero rows, and the reduced row echelon form of A^T likewise has three nonzero rows (details omitted). Thus dim(row(A)) = 3 and dim(col(A)) = dim(row(A^T)) = 3. It follows that dim(null(A)) = 5 - 3 = 2, and dim(null(A^T)) = 4 - 3 = 1. Since dim(null(A)) = 2, there are 2 free parameters in a general solution of Ax = 0.
Chapter 7
3. The reduced row echelon form of A and the reduced row echelon form of A^T each have two nonzero rows (details omitted); thus rank(A) = 2 and rank(A^T) = 2.

4. The reduced row echelon form of A and the reduced row echelon form of A^T each have three nonzero rows (details omitted); thus rank(A) = 3 and rank(A^T) = 3.

5. (a) dim(row(A)) = dim(col(A)) = rank(A) = 3; dim(null(A)) = 3 - 3 = 0, dim(null(A^T)) = 3 - 3 = 0.
(b) dim(row(A)) = dim(col(A)) = rank(A) = 2; dim(null(A)) = 3 - 2 = 1, dim(null(A^T)) = 3 - 2 = 1.
(c) dim(row(A)) = dim(col(A)) = rank(A) = 1; dim(null(A)) = 3 - 1 = 2, dim(null(A^T)) = 3 - 1 = 2.
(d) dim(row(A)) = dim(col(A)) = rank(A) = 2; dim(null(A)) = 9 - 2 = 7, dim(null(A^T)) = 5 - 2 = 3.
(e) dim(row(A)) = dim(col(A)) = rank(A) = 2; dim(null(A)) = 5 - 2 = 3, dim(null(A^T)) = 9 - 2 = 7.

6. (a) dim(row(A)) = dim(col(A)) = rank(A) = 3; dim(null(A)) = 4 - 3 = 1, dim(null(A^T)) = 3 - 3 = 0.
(b) dim(row(A)) = dim(col(A)) = rank(A) = 2; dim(null(A)) = 3 - 2 = 1, dim(null(A^T)) = 4 - 2 = 2.
(c) dim(row(A)) = dim(col(A)) = rank(A) = 1; dim(null(A)) = 3 - 1 = 2, dim(null(A^T)) = 6 - 1 = 5.
(d) dim(row(A)) = dim(col(A)) = rank(A) = 2; dim(null(A)) = 7 - 2 = 5, dim(null(A^T)) = 5 - 2 = 3.
(e) dim(row(A)) = dim(col(A)) = rank(A) = 2; dim(null(A)) = 4 - 2 = 2, dim(null(A^T)) = 7 - 2 = 5.
7. (a) Since rank(A) = rank[A | b] = 3 the system is consistent. The number of parameters in a general solution is n - r = 3 - 3 = 0; i.e., the system has a unique solution.
(b) Since rank(A) ≠ rank[A | b] the system is inconsistent.
(c) Since rank(A) = rank[A | b] = 1 the system is consistent. The number of parameters in a general solution is n - r = 3 - 1 = 2.
(d) Since rank(A) = rank[A | b] = 2 the system is consistent. The number of parameters in a general solution is n - r = 9 - 2 = 7.
8. (a) Since rank(A) ≠ rank[A | b] the system is inconsistent.
(b) Since rank(A) = rank[A | b] = 2 the system is consistent. The number of parameters in a general solution is n - r = 4 - 2 = 2.
(c) Since rank(A) = rank[A | b] = 3 the system is consistent. The number of parameters in a general solution is n - r = 3 - 3 = 0; i.e., the system has a unique solution.
(d) Since rank(A) = rank[A | b] = 4 the system is consistent. The number of parameters in a general solution is n - r = 7 - 4 = 3.
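The rank test used in Exercises 7 and 8 translates directly into code. The helper below (made-up data, assuming NumPy is available) compares rank(A) with rank[A | b] and reports the number of free parameters:

```python
import numpy as np

def analyze(A, b):
    """Return (consistent, free_parameters) for Ax = b via the rank test."""
    A = np.asarray(A, dtype=float)
    aug = np.column_stack([A, b])
    r = np.linalg.matrix_rank(A)
    if r != np.linalg.matrix_rank(aug):
        return False, None                # rank(A) != rank[A | b]: inconsistent
    return True, A.shape[1] - r           # n - r free parameters

# Consistent: rank(A) = rank[A | b] = 2, so n - r = 3 - 2 = 1 free parameter.
A = [[1.0, 2.0, 3.0],
     [2.0, 4.0, 7.0]]
assert analyze(A, [1.0, 3.0]) == (True, 1)

# Inconsistent: b lies outside the column space of B.
B = [[1.0, 2.0],
     [2.0, 4.0]]
assert analyze(B, [1.0, 3.0]) == (False, None)
```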
9. (a) This matrix has full column rank because the two column vectors are not scalar multiples of each other; it does not have full row rank because any three vectors in R^2 are linearly dependent.
(b) This matrix does not have full row rank because the 2nd row is a scalar multiple of the 1st row; it does not have full column rank because any three vectors in R^2 are linearly dependent.
(c) This matrix has full row rank because the two row vectors are not scalar multiples of each other; it does not have full column rank because any three vectors in R^2 are linearly dependent.
(d) This (square) matrix is invertible; it has full row rank and full column rank.
10. (a) This matrix has full column rank because the two column vectors are not scalar multiples of each other; it does not have full row rank because any three vectors in R^2 are linearly dependent.
(b) This matrix has full row rank because the two row vectors are not scalar multiples of each other; it does not have full column rank because any three vectors in R^2 are linearly dependent.
(c) This matrix does not have full row rank because the 2nd row is a scalar multiple of the 1st row; it does not have full column rank because any three vectors in R^2 are linearly dependent.
(d) This (square) matrix is invertible; it has full row rank and full column rank.
11. (a) det(A^TA) = 149 ≠ 0 and det(AA^T) = 0; thus A^TA is invertible and AA^T is not invertible. This corresponds to the fact (see Exercise 9a) that A has full column rank but not full row rank.
(b) det(A^TA) = 0 and det(AA^T) = 0; thus neither A^TA nor AA^T is invertible. This corresponds to the fact that A has neither full column rank nor full row rank.
(c) det(A^TA) = 0 and det(AA^T) = 66 ≠ 0; thus A^TA is not invertible but AA^T is invertible. This corresponds to the fact that A does not have full column rank but does have full row rank.
(d) det(A^TA) ≠ 0 and det(AA^T) ≠ 0; thus A^TA and AA^T are both invertible. This corresponds to the fact that A has full column rank and full row rank.

12. (a) det(A^TA) = 45 ≠ 0 and det(AA^T) = 0; thus A^TA is invertible and AA^T is not invertible. This corresponds to the fact (see Exercise 10a) that A has full column rank but not full row rank.
(b) det(A^TA) = 0 and det(AA^T) = 1269 ≠ 0; thus A^TA is not invertible but AA^T is invertible. This corresponds to the fact that A does not have full column rank but does have full row rank.
(c) det(A^TA) = 0 and det(AA^T) = 0; thus neither A^TA nor AA^T is invertible. This corresponds to the fact that A has neither full column rank nor full row rank.
(d) det(A^TA) = 1369 ≠ 0 and det(AA^T) = 1369 ≠ 0; thus A^TA and AA^T are both invertible. This corresponds to the fact that A has both full column rank and full row rank.
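Theorem 7.5.10, which underlies Exercises 11 and 12, says that A^TA is invertible exactly when A has full column rank, and AA^T exactly when A has full row rank. A minimal sketch with a made-up 3 × 2 matrix (assuming NumPy is available):

```python
import numpy as np

# A tall matrix with independent columns: full column rank, not full row rank.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])

AtA = A.T @ A            # 2 x 2
AAt = A @ A.T            # 3 x 3

# Full column rank  =>  A^T A invertible (det(AtA) = 3 here).
assert np.linalg.matrix_rank(A) == A.shape[1]
assert abs(np.linalg.det(AtA)) > 1e-10

# Not full row rank  =>  A A^T singular.
assert np.linalg.matrix_rank(A) < A.shape[0]
assert abs(np.linalg.det(AAt)) < 1e-10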
13. The augmented matrix of Ax = b can be row reduced to a form (details omitted) from which it follows that the system Ax = b is either inconsistent (if b1 + b2 ≠ 0) or has exactly one solution (if b1 + b2 = 0). Note that the latter includes the case b1 = b2 = b3 = 0; thus the system Ax = 0 has only the trivial solution.

14. The augmented matrix of Ax = b can be row reduced to a form (details omitted) from which it follows that the system Ax = b is either inconsistent (if b1 - 2b2 + b3 ≠ 0) or has exactly one solution (if b1 - 2b2 + b3 = 0). The latter includes the case b1 = b2 = b3 = 0; thus the system Ax = 0 has only the trivial solution.
15. If A = [1 2; -1 -2; 2 4] then A^TA = [6 12; 12 24]. It is clear from inspection that the rows of A and A^TA are multiples of the single vector u = (1, 2). Thus row(A) = row(A^TA) is the 1-dimensional space consisting of all scalar multiples of u. Similarly, null(A) = null(A^TA) is the 1-dimensional space consisting of all vectors v in R^2 which are orthogonal to u, i.e., all vectors of the form v = s(-2, 1).
16. The reduced row echelon form of A is [1 0 7; 0 1 -6] and the reduced row echelon form of A^TA is [1 0 7; 0 1 -6; 0 0 0]. Thus row(A) = row(A^TA) is the 2-dimensional space consisting of all linear combinations of the vectors u1 = (1, 0, 7) and u2 = (0, 1, -6). Similarly, null(A) = null(A^TA) is the 1-dimensional space consisting of all vectors v in R^3 which are orthogonal to both u1 and u2, i.e., all vectors of the form v = s(-7, 6, 1).
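Theorem 7.5.8 (null(A^TA) = null(A) and row(A^TA) = row(A)) is what Exercises 15 and 16 illustrate. A quick check with a rank-1 matrix whose rows are multiples of u = (1, 2), as in Exercise 15 (assuming NumPy is available):

```python
import numpy as np

# Every row is a multiple of u = (1, 2).
A = np.array([[ 1.0,  2.0],
              [-1.0, -2.0],
              [ 2.0,  4.0]])
G = A.T @ A                      # A^T A

assert np.allclose(G, [[6.0, 12.0], [12.0, 24.0]])
assert np.linalg.matrix_rank(A) == np.linalg.matrix_rank(G) == 1

# v = (-2, 1) spans null(A), and it lies in null(A^T A) as well.
v = np.array([-2.0, 1.0])
assert np.allclose(A @ v, 0.0)
assert np.allclose(G @ v, 0.0)
```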
17. The augmented matrix of the system Ax = b can be reduced to a form (details omitted) from which it follows that the system will be inconsistent unless (b1, b2, b3, b4, b5) satisfies the equations b3 = -3b1 + 4b2, b4 = 2b1 - b2, and b5 = -7b1 + 8b2, where b1 and b2 can assume any values.
18. The augmented matrix of the system Ax = b can be reduced to a form (details omitted) from which it follows that the system is consistent if and only if b3 = b1 + b2 and, in this case, there will be infinitely many solutions.
DISCUSSION AND DISCOVERY
D1. If A is a 7 × 5 matrix with rank 3, then A^T also has rank 3; thus dim(row(A^T)) = dim(col(A^T)) = 3 and dim(null(A^T)) = 7 - 3 = 4.
D2. If A has rank k then, from Theorems 7.5.2 and 7.5.9, we have dim(row(A^TA)) = rank(A^TA) = rank(A^T) = rank(A) = k and dim(row(AA^T)) = rank(AA^T) = rank(A) = k.
D3. If A^Tx = 0 has only the trivial solution then, from Theorem 7.5.11, A has full row rank. Thus, if A is m × n, we must have n ≥ m and dim(row(A)) = dim(col(A)) = m.
D4. (a) False. The row space and column space always have the same dimension.
(b) False. It is always true that rank(A) = rank(A^T), whether A is square or not.
(c) True. Under these assumptions, the system Ax = b is consistent (for any b) and so the matrices A and [A | b] have the same rank.
(d) True. If an m × n matrix A has full row rank and full column rank, then m = dim(row(A)) = rank(A) = dim(col(A)) = n.
(e) True. If A^TA and AA^T are both invertible then, from Theorem 7.5.10, A has full column rank and full row rank; thus A is square.
(f) True. The rank of a 3 × 3 matrix is 0, 1, 2, or 3 and the corresponding nullity is 3, 2, 1, or 0.
D5. (a) The solutions of the system are given by x = (b - s - t, s, t) where -∞ < s, t < ∞. This does not violate Theorem 7.5.7(b).
(b) The solutions can be expressed as (b, 0, 0) + s(-1, 1, 0) + t(-1, 0, 1), where (b, 0, 0) is a particular solution and s(-1, 1, 0) + t(-1, 0, 1) is a general solution of the corresponding homogeneous system.
D6. (a) If A is 3 × 5, then the columns of A are a set of five vectors in R^3 and thus are linearly dependent.
(b) If A is 5 × 3, then the rows of A are a set of 5 vectors in R^3 and thus are linearly dependent.
(c) If A is m × n, with m ≠ n, then either the columns of A are linearly dependent or the rows of A are linearly dependent (or both).
WO RKING WITH PROOFS
P1. From Theorem 7.5.8(a) we have null(A^TA) = null(A). Thus if A is m × n, then A^TA is n × n and so rank(A^TA) = n - nullity(A^TA) = n - nullity(A) = rank(A). Similarly, null(AA^T) = null(A^T) and so rank(AA^T) = m - nullity(AA^T) = m - nullity(A^T) = rank(A^T) = rank(A).
P2. As above, we have rank(A^TA) = n - nullity(A^TA) = n - nullity(A) = rank(A).

P3. (a) Since null(A^TA) = null(A), we have row(A) = null(A)^⊥ = null(A^TA)^⊥ = row(A^TA).
(b) Since A^TA is symmetric, we have col(A^TA) = row(A^TA) = row(A) = col(A^T).

P4. If A is m × n where m < n, then the columns of A form a set of n vectors in R^m and thus are linearly dependent. Similarly, if m > n, then the rows of A form a set of m vectors in R^n and thus are linearly dependent.
P5. If rank(A^2) = rank(A) then dim(null(A^2)) = n - rank(A^2) = n - rank(A) = dim(null(A)) and, since null(A) ⊆ null(A^2), it follows that null(A) = null(A^2).
Suppose now that y belongs to null(A) ∩ col(A). Then y = Ax for some x in R^n and Ay = 0. Since A^2x = Ay = 0, it follows that the vector x belongs to null(A^2) = null(A), and so y = Ax = 0. This shows that null(A) ∩ col(A) = {0}.
P6. First we prove that if A is a nonzero matrix with rank k, then A has at least one invertible k × k submatrix, and all submatrices of larger size are singular. The proof is organized as suggested.
Step 1. If A is an m × n matrix with rank k, then dim(col(A)) = k and so A has k linearly independent columns. Let B be the m × k submatrix of A having these vectors as its columns. This matrix also has rank k and thus has k linearly independent rows. Let C be the k × k submatrix of B having these vectors as its rows. Then C is an invertible k × k submatrix of A.
Step 2. Suppose D is an r × r submatrix of A with r > k. Then, since dim(col(A)) = k < r, the columns of A which contain those of D must be linearly dependent. It follows that the columns of D are linearly dependent, since a nontrivial linear dependence among the containing columns results in a nontrivial linear dependence among the columns of D. Thus D is singular.
Conversely, we prove that if the largest invertible submatrix of A is k × k, then A has rank k.
Step 1. Let C be an invertible k × k submatrix of A. Then the columns of C are linearly independent and so the columns of A that contain the columns of C are also linearly independent. This shows that rank(A) = dim(col(A)) ≥ k.
Step 2. Suppose rank(A) = r > k. Then dim(col(A)) = r, and so A has r linearly independent columns. Let B be the m × r submatrix of A having these vectors as its columns. Then B also has rank r and thus has r linearly independent rows. Let C be the submatrix of B having these vectors as its rows. Then C is a nonsingular r × r submatrix of A. Thus the assumption that rank(A) > k has led to a contradiction. This, together with Step 1, shows that rank(A) = k.
P7. If A is invertible then so is A^T. Thus, using the cited exercise and Theorem 7.5.2, we have
rank(CP) = rank((CP)^T) = rank(P^TC^T) = rank(C^T) = rank(C)
and from this it also follows that nullity(CP) = n - rank(CP) = n - rank(C) = nullity(C).
EXERCISE SET 7.6
1. A row echelon form for A is B (details omitted); since B has two nonzero rows, dim(row(A)) = 2, and the first two columns of A are the pivot columns. Thus the first two column vectors of A form a basis for col(A). A row echelon form for A^T is C (details omitted); from this it follows that the first two rows of A form a basis for row(A).
2. The first column of A forms a basis for col(A), and the first row of A forms a basis for row(A).
3. The matrix A can be row reduced to B (details omitted); thus the first two columns of A are the pivot columns and form a basis for col(A). Similarly, A^T can be row reduced to a row echelon form C, from which it follows that the first two rows of A form a basis for row(A).
4. The reduced row echelon form of A is B and the reduced row echelon form of A^T is C (details omitted). From these we conclude that the first two columns of A form a basis for col(A), and the first two rows of A form a basis for row(A).

5. The reduced row echelon form of A is B and the reduced row echelon form of A^T is C (details omitted). From these we conclude that the first three columns of A form a basis for col(A), and the first three rows of A form a basis for row(A).

6. The reduced row echelon form of A is B and the reduced row echelon form of A^T is C (details omitted). From these we conclude that the 1st, 3rd, and 4th columns of A form a basis for col(A), and the 1st, 2nd, and 4th rows of A form a basis for row(A).
7. Proceeding as in Example 2, we first form the matrix A having the given vectors as its columns. The reduced row echelon form of A (details omitted) shows that all three columns of A are pivot columns; thus the vectors v1, v2, v3 are linearly independent and form a basis for the space which they span (the column space of A).
8. Let A be the matrix having the given vectors as its columns. The reduced row echelon form of A (details omitted) shows that {v1, v2} is a basis for W = col(A), and it also follows that v3 = 2v1 + v2 and v4 = -2v1 + v2.
9. Proceeding as in Example 2, we first form the matrix A having the given vectors as its columns. The reduced row echelon form R of A (details omitted) shows that the 1st and 3rd columns of A are the pivot columns; thus {v1, v3} is a basis for W = col(A). Since c2(R) = 2c1(R) and c4(R) = c1(R) + c3(R), we also conclude that v2 = 2v1 and v4 = v1 + v3.
10. Let A be the matrix having the given vectors as its columns. The reduced row echelon form of A (details omitted) shows that the vectors v1, v2, v4 form a basis for W = col(A), and that v3 = 2v1 - v2 and v5 = -v1 + 3v2 + 2v4.

11. The matrix [A | I3] can be row reduced (and further partitioned) to [R | E], where R is the reduced row echelon form of A (details omitted). The rows of E corresponding to the zero rows of R show that the vectors v1 = (1, -4, 0) and v2 = (0, 0, 1) form a basis for null(A^T).

12. The matrix [A | I3] can be row reduced (and further partitioned) to [R | E] (details omitted). Thus the vector v1 = (-1, 1, 1) forms a basis for null(A^T).
13. The reduced row echelon form of A is R, whose nonzero rows are (1, 0, -1, -4) and (0, 1, -3, -7). From this we conclude that the first two columns of A are the pivot columns, and so these two columns form a basis for col(A). The first two rows of R form a basis for row(A) = row(R). We have Ax = 0 if and only if Rx = 0, and the general solution of the latter is x = (s + 4t, 3s + 7t, s, t) = s(1, 3, 1, 0) + t(4, 7, 0, 1). Thus (1, 3, 1, 0) and (4, 7, 0, 1) form a basis for null(A).
The reduced row echelon form of the partitioned matrix [A | I4] has two zero rows in its left block; the corresponding two rows of the right block, which are of the form (1, 0, ·, ·) and (0, 1, ·, ·), form a basis for null(A^T).
14. The reduced row echelon form of A is R (details omitted). From this we conclude that the 1st, 2nd, and 4th columns of A are the pivot columns, and so these three columns form a basis for col(A). The first three rows of R form a basis for row(A) = row(R). We have Ax = 0 if and only if Rx = 0, and the general solution of the latter consists of the multiples of a single vector of the form (·, ·, 1, 0), which therefore forms a basis for null(A). The reduced row echelon form of the partitioned matrix [A | I4] shows, in the same way, that a single vector of the form (0, 1, ·, ·) forms a basis for null(A^T).
15. (a) The 1st, 3rd, and 4th columns of A are the pivot columns; thus these vectors form a basis for col(A).
(b) The first three rows of A form a basis for row(A).
(c) We have Ax = 0 if and only if Rx = 0, i.e., if and only if x is of the form x = (-4t, t, 0, 0) = t(-4, 1, 0, 0). Thus the vector u = (-4, 1, 0, 0) forms a basis for null(A).
(d) We have A^Tx = 0 if and only if Cx = 0, i.e., if and only if x is of the form x = t(a, 0, 0, 0, 1) + s(0, b, 0, 1, 0) for fixed scalars a and b determined by C. Thus the vectors v1 = (a, 0, 0, 0, 1) and v2 = (0, b, 0, 1, 0) form a basis for null(A^T).
16. The reduced row echelon form of A is R0 (details omitted). The column-row factorization A = CR is obtained by taking C to be the matrix whose columns are the pivot columns of A and R to be the matrix whose rows are the nonzero rows of R0; the corresponding column-row expansion expresses A as the sum of the rank 1 products of each column of C with the corresponding row of R.

17. The reduced row echelon form of A is R0 (details omitted), and the column-row factorization A = CR and the corresponding column-row expansion are obtained in the same way.
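The column-row factorization of Exercises 16 and 17 can be carried out mechanically: row reduce A, keep its pivot columns as C and the nonzero rows of the reduced form as R. A sketch with a made-up rank-2 matrix (assuming NumPy is available; the `rref` helper is a plain Gauss–Jordan elimination written just for this example):

```python
import numpy as np

def rref(A, tol=1e-10):
    """Reduced row echelon form of A and the list of pivot-column indices."""
    R = np.asarray(A, dtype=float).copy()
    pivots, row = [], 0
    for col in range(R.shape[1]):
        if row >= R.shape[0]:
            break
        p = row + np.argmax(np.abs(R[row:, col]))   # partial pivoting
        if abs(R[p, col]) < tol:
            continue
        R[[row, p]] = R[[p, row]]
        R[row] /= R[row, col]
        for i in range(R.shape[0]):
            if i != row:
                R[i] -= R[i, col] * R[row]
        pivots.append(col)
        row += 1
    return R, pivots

# Made-up 3x4 matrix of rank 2 (third row = first + second).
A = np.array([[1.0, 2.0, 0.0,  3.0],
              [2.0, 4.0, 1.0,  7.0],
              [3.0, 6.0, 1.0, 10.0]])
R0, pivots = rref(A)

C = A[:, pivots]            # pivot columns of A
R = R0[:len(pivots), :]     # nonzero rows of the reduced form

assert pivots == [0, 2]
assert np.allclose(C @ R, A)    # the column-row factorization A = CR
```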
DISCUSSION AND DISCOVERY
D1. (a) If A is a 3 × 5 matrix, then the number of leading 1's in a row echelon form of A is at most 3, the number of parameters in a general solution of Ax = 0 is at most 4 (assuming A ≠ 0), the rank of A is at most 3, the rank of A^T is at most 3, and the nullity of A^T is at most 2 (assuming A ≠ 0).
(b) If A is a 5 × 3 matrix, then the number of leading 1's in a row echelon form of A is at most 3, the number of parameters in a general solution of Ax = 0 is at most 2 (assuming A ≠ 0), the rank of A is at most 3, the rank of A^T is at most 3, and the nullity of A^T is at most 4 (assuming A ≠ 0).
(c) If A is a 4 × 4 matrix, then the number of leading 1's in a row echelon form of A is at most 4, the number of parameters in a general solution of Ax = 0 is at most 3 (assuming A ≠ 0), the rank of A is at most 4, the rank of A^T is at most 4, and the nullity of A^T is at most 3 (assuming A ≠ 0).
(d) If A is an m × n matrix, then the number of leading 1's in a row echelon form of A is at most m, the number of parameters in a general solution of Ax = 0 is at most n - 1 (assuming A ≠ 0), the rank of A is at most min{m, n}, the rank of A^T is at most min{m, n}, and the nullity of A^T is at most m - 1 (assuming A ≠ 0).
D2. The pivot columns of a matrix A are those columns that correspond to the columns of a row echelon form R of A which contain a leading 1. In the example shown (details omitted), the 1st, 3rd, and 5th columns of A are the pivot columns.
D3. The vectors v1 = (4, 0, 1, -4, 5) and v2 = (0, 1, 0, 0, 1) form a basis for null(A^T).
D4. (a) {a1, a2, a4} is a basis for col(A).
(b) a3 = 4a1 - 3a2 and a5 = 6a1 + 7a2 + 2a4.
EXERCISE SET 7.7
1. Using Formula (5) we have proj_a x = (a·x/‖a‖²)a = ((a·x)/5)(1, 2). On the other hand, since sin θ = 2/√5 and cos θ = 1/√5, the standard matrix for the projection is P_θ = [1/5 2/5; 2/5 4/5], and the matrix product P_θ x yields the same vector.

2. Using Formula (5) we have proj_a x = (a·x/‖a‖²)a = ((a·x)/29)(-2, 5). On the other hand, since sin θ = 5/√29 and cos θ = -2/√29, the standard matrix for the projection is P_θ = [4/29 -10/29; -10/29 25/29], and P_θ x yields the same vector.

3. A vector parallel to l is a = (1, 2). Thus, using Formula (5), the projection of x on l is given by proj_a x = (a·x/‖a‖²)a = (3/5)(1, 2) = (3/5, 6/5). On the other hand, since sin θ = 2/√5 and cos θ = 1/√5, the standard matrix for the projection is P_θ = [1/5 2/5; 2/5 4/5], and P_θ x yields the same vector.

4. A vector parallel to l is a = (3, -1). Thus, using Formula (5), the projection of x on l is given by proj_a x = (a·x/‖a‖²)a = (11/10)(3, -1) = (33/10, -11/10). On the other hand, since sin θ = -1/√10 and cos θ = 3/√10, the standard matrix for the projection is P_θ = [9/10 -3/10; -3/10 1/10], and P_θ x yields the same vector.
5. The vector component of x along a is proj_a x = (a·x/‖a‖²)a = (1/5)(0, 2, -1) = (0, 2/5, -1/5), and the component orthogonal to a is x - proj_a x = (1, 1, 1) - (0, 2/5, -1/5) = (1, 3/5, 6/5).

6. proj_a x = (x·a/‖a‖²)a = (5/14)(1, 2, 3) = (5/14, 10/14, 15/14), and x - proj_a x = (2, 0, 1) - (5/14, 10/14, 15/14) = (23/14, -10/14, -1/14).

7. The vector component of x along a is proj_a x = (a·x/‖a‖²)a = (2/40)(4, -4, 2, -2) = (1/5, -1/5, 1/10, -1/10), and the component orthogonal to a is x - proj_a x = (2, 1, 1, 2) - (1/5, -1/5, 1/10, -1/10) = (9/5, 6/5, 9/10, 21/10).

8. proj_a x = (6/7)(2, 1, -1, -1) = (12/7, 6/7, -6/7, -6/7), and x - proj_a x = (5, 0, -3, 7) - (12/7, 6/7, -6/7, -6/7) = (23/7, -6/7, -15/7, 55/7).

9. ‖proj_a x‖ = |a·x|/‖a‖ = |2 - 6 + 24|/√(4 + 9 + 36) = 20/√49 = 20/7
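The decomposition used in Exercises 5–8 splits x into a component along a and a component orthogonal to a. Re-running the Exercise 8 data numerically (assuming NumPy is available):

```python
import numpy as np

# Data from Exercise 8.
x = np.array([5.0, 0.0, -3.0, 7.0])
a = np.array([2.0, 1.0, -1.0, -1.0])

proj = (x @ a) / (a @ a) * a        # (x . a / ||a||^2) a = (6/7) a
orth = x - proj                     # component orthogonal to a

assert np.allclose(proj, 6.0 / 7.0 * a)
assert np.isclose(orth @ a, 0.0)    # the remainder is orthogonal to a
assert np.allclose(proj + orth, x)  # the two components reassemble x
```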
10. ‖proj_a x‖ = |a·x|/‖a‖ = |8 - 10 + 4|/√(4 + 4 + 16) = 2/√24 = √6/6

11. ‖proj_a x‖ = |a·x|/‖a‖ = |-8 + 6 - 2 + 15|/√(16 + 4 + 4 + 9) = 11/√33 = √33/3

12. ‖proj_a x‖ = |a·x|/‖a‖ = |35 - 3 + 0 - 1|/√(49 + 1 + 0 + 1) = 31/√51 = 31√51/51
13. If a = (-1, 5, 2) then, from Theorem 7.7.3, the standard matrix for the orthogonal projection of R^3 onto span{a} is given by
P = aa^T/(a^Ta) = (1/30)[1 -5 -2; -5 25 10; -2 10 4]
We note from inspection that P is symmetric. It is also apparent that P has rank 1 since each of its rows is a scalar multiple of a. Finally, it is easy to check that P² = P, and so P is idempotent.

14. If a = (7, -2, 2) then the standard matrix for the orthogonal projection of R^3 onto span{a} is given by
P = aa^T/(a^Ta) = (1/57)[49 -14 14; -14 4 -4; 14 -4 4]
We note from inspection that P is symmetric. It is also apparent that P has rank 1 since each of its rows is a scalar multiple of a. Finally, it is easy to check that P² = P, and so P is idempotent.
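The three properties checked by hand in Exercises 13 and 14 (symmetry, idempotence, rank 1) can be confirmed in a few lines; the sketch below uses the Exercise 13 data (assuming NumPy is available):

```python
import numpy as np

# Exercise 13: projection of R^3 onto span{a} for a = (-1, 5, 2).
a = np.array([-1.0, 5.0, 2.0])
P = np.outer(a, a) / (a @ a)          # P = a a^T / (a^T a); here a^T a = 30

assert np.allclose(30 * P, [[1, -5, -2], [-5, 25, 10], [-2, 10, 4]])
assert np.allclose(P, P.T)            # symmetric
assert np.allclose(P @ P, P)          # idempotent
assert np.linalg.matrix_rank(P) == 1  # rank 1
```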
15. Let M = [3 2; -4 0; 1 3]. Then M^TM = [26 9; 9 13] and, from Theorem 7.7.5, the standard matrix for the orthogonal projection of R^3 onto W = span{a1, a2} is given by
P = M(M^TM)^{-1}M^T = M · (1/257)[13 -9; -9 26] · M^T = (1/257)[113 -84 96; -84 208 56; 96 56 193]
We note from inspection that the matrix P is symmetric. The reduced row echelon form of P has two nonzero rows, and from this we conclude that P has rank 2. Finally, it is easy to check that P² = P, and so P is idempotent.
16. Let M be the 3 × 2 matrix having a1 and a2 as its columns. Then M^TM and the standard matrix P = M(M^TM)^{-1}M^T for the orthogonal projection of R^3 onto W = span{a1, a2} can be computed as in Exercise 15 (details omitted). From inspection we see that P is symmetric. The reduced row echelon form of P has two nonzero rows, and from this we conclude that P has rank 2. Finally, it is easy to check that P² = P.
17. The standard matrix for the orthogonal projection of R^3 onto the xz-plane is P = [1 0 0; 0 0 0; 0 0 1]. This agrees with the following computation using Formula (21): let M = [1 0; 0 0; 0 1]; then M^TM = I2, and M(M^TM)^{-1}M^T = MM^T = [1 0 0; 0 0 0; 0 0 1] = P.

18. The standard matrix for the orthogonal projection of R^3 onto the yz-plane is P = [0 0 0; 0 1 0; 0 0 1]. This agrees with the corresponding computation using Formula (21).

19. We proceed as in Example 6. The general solution of the equation x + y + z = 0 can be written as
x = s(-1, 1, 0) + t(-1, 0, 1)
and so the two column vectors on the right form a basis for the plane. If M is the 3 × 2 matrix having these vectors as its columns, then M^TM = [2 1; 1 2] and the standard matrix of the orthogonal projection onto the plane is
P = M(M^TM)^{-1}M^T = (1/3)[2 -1 -1; -1 2 -1; -1 -1 2]
The orthogonal projection of the vector v = (2, 4, -1) on the plane is Pv = (1/3)(1, 7, -8).
20. The general solution of the equation 2x - y + 3z = 0 can be written as
x = s(1, 2, 0) + t(0, 3, 1)
and so the two column vectors on the right form a basis for the plane. If M is the 3 × 2 matrix having these vectors as its columns, then M^TM = [5 6; 6 10] and the standard matrix of the orthogonal projection onto the plane is
P = M(M^TM)^{-1}M^T = M · (1/14)[10 -6; -6 5] · M^T = (1/14)[10 2 -6; 2 13 3; -6 3 5]
The orthogonal projection of the vector v = (2, 4, -1) on the plane is Pv = (1/14)(34, 53, -5).
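The plane projection of Exercise 20 is a direct application of Formula (21), P = M(M^TM)^{-1}M^T; the sketch below reproduces the numbers above (assuming NumPy is available):

```python
import numpy as np

# Columns (1, 2, 0) and (0, 3, 1) span the plane 2x - y + 3z = 0.
M = np.array([[1.0, 0.0],
              [2.0, 3.0],
              [0.0, 1.0]])
P = M @ np.linalg.inv(M.T @ M) @ M.T

assert np.allclose(14 * P, [[10, 2, -6], [2, 13, 3], [-6, 3, 5]])

v = np.array([2.0, 4.0, -1.0])
w = P @ v                                   # projection of v onto the plane
assert np.allclose(14 * w, [34, 53, -5])

# w lies in the plane: it is orthogonal to the normal vector (2, -1, 3).
assert np.isclose(w @ np.array([2.0, -1.0, 3.0]), 0.0)
```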
21. Let A be the matrix having the given vectors as its columns. The reduced row echelon form of A (details omitted) shows that the vectors a1 and a3 form a basis for the subspace W spanned by the given vectors (the column space of A). Let M be the 4 × 2 matrix having a1 and a3 as its columns. Then M^TM can be computed directly, and the standard matrix for the orthogonal projection of R^4 onto W is P = M(M^TM)^{-1}M^T (details omitted).
22. Let A be the matrix having the given vectors as its columns. The reduced row echelon form of A (details omitted) shows that the vectors a1 and a2 form a basis for the subspace W spanned by the given vectors. Let M be the 4 x 2 matrix having a1 and a2 as its columns. Then
M^T M = [36 24; 24 126]
and the standard matrix for the orthogonal projection of R^4 onto W is given by
M(M^T M)^{-1}M^T = [5 3; -3 -6; -1 9; 1 0](1/660)[21 -4; -4 6][5 -3 -1 1; 3 -6 9 0] = (1/220)[153 -89 -37 31; -89 87 -59 -13; -37 -59 193 -19; 31 -13 -19 7]
23. The reduced row echelon form of the matrix A is R = [1 0 1/2 -1/2; 0 1 1/2 1/2]; thus a general solution of the system Ax = 0 can be expressed as
x = s(-1/2, -1/2, 1, 0) + t(1/2, -1/2, 0, 1)
where the two column vectors on the right-hand side form a basis for the solution space. Let B be the matrix having these two vectors as its columns. Then
B^T B = (3/2)[1 0; 0 1]
and the standard matrix for the orthogonal projection of R^4 onto the solution space is
P = B(B^T B)^{-1}B^T = (2/3)BB^T = (1/3)[1 0 -1 1; 0 1 -1 -1; -1 -1 2 0; 1 -1 0 2]
Thus the orthogonal projection of v = (5, 6, 7, 2) on the solution space is given (in column form) by
Pv = (1/3)[0; -3; 3; 3] = [0; -1; 1; 1]
24. The reduced row echelon form of the matrix A is R = [1 0 0 3/2; 0 1 0 0; 0 0 1 -1/2]; thus the general solution of the system Ax = 0 can be expressed as x = t(-3/2, 0, 1/2, 1) or (to avoid fractions) x = s(-3, 0, 1, 2). In other words, the solution space of Ax = 0 is equal to span{a} where a = (-3, 0, 1, 2). Thus, from Theorem 7.7.2, the orthogonal projection of v = (1, 1, 2, 3) on the solution space is given by
proj_a v = ((a · v)/||a||^2) a = (5/14)(-3, 0, 1, 2) = (-15/14, 0, 5/14, 10/14)
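The rank-one projection formula of Theorem 7.7.2 is simple enough to verify directly. A minimal sketch (Python with NumPy, using a = (-3, 0, 1, 2) and v = (1, 1, 2, 3) from Exercise 24):

```python
import numpy as np

a = np.array([-3.0, 0.0, 1.0, 2.0])
v = np.array([1.0, 1.0, 2.0, 3.0])

# proj_a v = ((a . v) / ||a||^2) a
proj = (a @ v) / (a @ a) * a
print(proj)  # (-15/14, 0, 5/14, 10/14)

assert np.allclose(proj, np.array([-15, 0, 5, 10]) / 14)
# The residual v - proj is orthogonal to a.
assert np.isclose((v - proj) @ a, 0.0)
```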
25. The reduced row echelon form of the matrix A = [1 2 -3 -5; 1 1 -1 -1; 3 2 -1 1] is R = [1 0 1 3; 0 1 -2 -4; 0 0 0 0]. From this we conclude that the first two rows of R form a basis for row(A), and the first two columns of A form a basis for col(A).

Orthogonal projection of R^4 onto row(A): Let B be the 4 x 2 matrix having the first two rows of R as its columns. Then
B^T B = [1 0 1 3; 0 1 -2 -4][1 0; 0 1; 1 -2; 3 -4] = [11 -14; -14 21]
and the standard matrix for the orthogonal projection of R^4 onto row(A) is given by
Pr = B(B^T B)^{-1}B^T = B (1/35)[21 14; 14 11] B^T = (1/35)[21 14 -7 7; 14 11 -8 -2; -7 -8 9 11; 7 -2 11 29]

Orthogonal projection of R^3 onto col(A): Let C be the 3 x 2 matrix having the first two columns of A as its columns. Then
C^T C = [1 1 3; 2 1 2][1 2; 1 1; 3 2] = [11 9; 9 9]
and the standard matrix for the orthogonal projection of R^3 onto col(A) is given by
Pc = C(C^T C)^{-1}C^T = C (1/18)[9 -9; -9 11] C^T = (1/18)[17 4 -1; 4 2 4; -1 4 17]
26. The reduced row echelon form of A is R (details omitted), with two nonzero rows. From this we conclude that the first two rows of R form a basis for row(A), and the first two columns of A form a basis for col(A).

Orthogonal projection onto row(A): Let B be the 5 x 2 matrix having the first two rows of R as its columns. Then B^T B is invertible, and the standard matrix for the orthogonal projection of R^5 onto row(A) is given by Pr = B(B^T B)^{-1}B^T (numerical details omitted).

Orthogonal projection onto col(A): Let C be the 4 x 2 matrix having the first two columns of A as its columns. Then C^T C is invertible, and the standard matrix for the orthogonal projection of R^4 onto col(A) is given by Pc = C(C^T C)^{-1}C^T (numerical details omitted).
27. The reduced row echelon form of the matrix [A | b] is [R | c] (details omitted). From this we conclude that Ax = b is consistent, and from [R | c] we obtain one particular solution x0. Let B be the 4 x 2 matrix having the first two rows of R as its columns, and let C = B^T B. Then the standard matrix for the orthogonal projection of R^4 onto row(R) = row(A) is P = B C^{-1} B^T, and the solution of Ax = b which lies in row(A) is given by
x_row(A) = P x0
(numerical details omitted).
28. The reduced row echelon form of the matrix [A | b] is [R | c] (details omitted). From this we conclude that Ax = b is consistent, and from [R | c] we obtain one particular solution x0. Let B be the 4 x 2 matrix having the first two rows of R as its columns, and let C = B^T B. Then the standard matrix for the orthogonal projection of R^4 onto row(R) = row(A) is P = B C^{-1} B^T, and the solution of Ax = b which lies in row(A) is given by
x_row(A) = P x0
(numerical details omitted).
29. In Exercise 19 we found that the standard matrix for the orthogonal projection of R^3 onto the plane W with equation x + y + z = 0 is
P = (1/3)[2 -1 -1; -1 2 -1; -1 -1 2]
Thus the standard matrix for the orthogonal projection onto W^⊥ (the line perpendicular to W) is
I - P = [1 0 0; 0 1 0; 0 0 1] - (1/3)[2 -1 -1; -1 2 -1; -1 -1 2] = (1/3)[1 1 1; 1 1 1; 1 1 1]
Note that W^⊥ is the 1-dimensional space (line) spanned by the vector a = (1, 1, 1), and so the computation above is consistent with the formula given in Theorem 7.7.3.
30. In Exercise 20 we found that the standard matrix for the orthogonal projection of R^3 onto the plane W with equation 2x - y + 3z = 0 is
P = (1/14)[10 2 -6; 2 13 3; -6 3 5]
Thus the standard matrix for the orthogonal projection onto W^⊥ (the line perpendicular to W) is
I - P = [1 0 0; 0 1 0; 0 0 1] - (1/14)[10 2 -6; 2 13 3; -6 3 5] = (1/14)[4 -2 6; -2 1 -3; 6 -3 9]

31. Let A be the 4 x 2 matrix having the vectors v1 = (1, 3, -2, 0) and v2 = (3, -1, 4, 2) as its columns. Then
A^T A = [1 3 -2 0; 3 -1 4 2][1 3; 3 -1; -2 4; 0 2] = [14 -8; -8 30]
and the standard matrix for the orthogonal projection of R^4 onto the subspace W = span{v1, v2} is
P = A(A^T A)^{-1}A^T = A (1/178)[15 4; 4 7] A^T = (1/89)[51 28 23 25; 28 59 -31 5; 23 -31 54 20; 25 5 20 14]
Thus the orthogonal projection of the vector v = (1, 1, 1, 1) onto W is Pv = (1/89)(127, 61, 66, 64), and the orthogonal projection of v onto W^⊥ is given (in column form) by
v - Pv = (1/89)[-38; 28; 23; 25]
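The fraction-heavy arithmetic in Exercise 31 is a good candidate for a numerical double-check. A sketch (Python with NumPy; the column vectors v1 = (1, 3, -2, 0) and v2 = (3, -1, 4, 2) are an assumption consistent with A^T A = [14 -8; -8 30] in the exercise):

```python
import numpy as np

# Columns v1 and v2 span the subspace W.
A = np.array([[ 1.0,  3.0],
              [ 3.0, -1.0],
              [-2.0,  4.0],
              [ 0.0,  2.0]])
v = np.ones(4)

P = A @ np.linalg.inv(A.T @ A) @ A.T
assert np.allclose(A.T @ A, [[14, -8], [-8, 30]])
assert np.allclose(P, P.T) and np.allclose(P @ P, P)
assert np.isclose(np.trace(P), 2.0)  # rank of the projection

# Projection of v = (1, 1, 1, 1) onto W and onto W-perp.
print(np.round(89 * (P @ v)))      # (127, 61, 66, 64)
print(np.round(89 * (v - P @ v)))  # (-38, 28, 23, 25)
assert np.allclose(89 * (v - P @ v), [-38, 28, 23, 25])
```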
DISCUSSION AND DISCOVERY

D1. (a) The rank of the standard matrix for the orthogonal projection of R^n onto a line through the origin is 1, and onto its orthogonal complement is n - 1.
(b) If n >= 2, then the rank of the standard matrix for the orthogonal projection of R^n onto a plane through the origin is 2, and onto its orthogonal complement is n - 2.

D2. A 5 x 5 matrix P is the standard matrix for an orthogonal projection of R^5 onto some 3-dimensional subspace if and only if it is symmetric, idempotent, and has rank 3.

D3. If x1 = proj_a x and x2 = x - x1, then ||x||^2 = ||x1||^2 + ||x2||^2 and so
||x2||^2 = ||x||^2 - ||x1||^2 = ||x||^2 - ((x · a)/||a||)^2
Thus q = sqrt(||x||^2 - ((x · a)/||a||)^2) = ||x2||, where x2 is the vector component of x orthogonal to a.
D4. If P is the standard matrix for the orthogonal projection of R^n onto a subspace W, then P^2 = P (P is idempotent) and so P^k = P for all k >= 2. In particular, we have P^n = P.

D5. (a) True. Since proj_W u belongs to W and proj_{W^⊥} u belongs to W^⊥, the two vectors are orthogonal.
(b) False. For example, the matrix P = [1 1; 0 0] satisfies P^2 = P but is not symmetric and therefore does not correspond to an orthogonal projection.
(c) True. See the proof of Theorem 7.7.7.
(d) True. Since P^2 = P, we also have (I - P)^2 = I - 2P + P^2 = I - P; thus I - P is idempotent.
(e) False. In fact, since proj_col(A) b belongs to col(A), the system Ax = proj_col(A) b is always consistent.

D6. Since (W^⊥)^⊥ = W (Theorem 7.7.8), it follows that ((W^⊥)^⊥)^⊥ = W^⊥.

D7. The matrix A = [1 1 1; 1 1 1; 1 1 1] is symmetric and has rank 1, but is not the standard matrix of an orthogonal projection. Note that A^2 = 3A, so A is not idempotent.

D8. In this case the row space of A is equal to all of R^n. Thus the orthogonal projection of R^n onto row(A) is the identity transformation, and its matrix is the identity matrix.

D9. Suppose that A is an n x n idempotent matrix, and that λ is an eigenvalue of A with corresponding eigenvector x (x ≠ 0). Then A^2 x = A(Ax) = A(λx) = λ^2 x. On the other hand, since A^2 = A, we have A^2 x = Ax = λx. Since x ≠ 0, it follows that λ^2 = λ and so λ = 0 or 1.
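D9's conclusion is easy to illustrate numerically: every eigenvalue of an idempotent matrix is 0 or 1. A quick sketch (Python with NumPy, using the projection matrix from Exercise 19 as the idempotent example):

```python
import numpy as np

# An idempotent matrix: orthogonal projection onto the plane x + y + z = 0.
A = np.array([[ 2, -1, -1],
              [-1,  2, -1],
              [-1, -1,  2]]) / 3.0
assert np.allclose(A @ A, A)  # idempotent

eigvals = np.linalg.eigvalsh(A)  # A is symmetric, so eigvalsh applies
print(np.round(eigvals, 10))     # each eigenvalue is 0 or 1

assert all(np.isclose(lam, 0) or np.isclose(lam, 1) for lam in eigvals)
# The eigenvalue 1 occurs with multiplicity rank(A) = trace(A) = 2.
assert np.isclose(eigvals.sum(), 2.0)
```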
D10. Using calculus: The reduced row echelon form of [A | b] is [1 0 3 | 7; 0 1 1 | 3; 0 0 0 | 0]; thus the general solution of Ax = b is x = (7 - 3t, 3 - t, t) where -∞ < t < ∞. We have
||x||^2 = (7 - 3t)^2 + (3 - t)^2 + t^2 = 58 - 48t + 11t^2
and so the solution vector of smallest length corresponds to d/dt[||x||^2] = -48 + 22t = 0, i.e., to t = 24/11. We conclude that
x_row = (7 - 72/11, 3 - 24/11, 24/11) = (5/11, 9/11, 24/11)

Using an orthogonal projection: The solution x_row is equal to the orthogonal projection of any solution of Ax = b, e.g., x = (7, 3, 0), onto the row space of A. From the row reduction alluded to above, we see that the vectors v1 = (1, 0, 3) and v2 = (0, 1, 1) form a basis for the row space of A. Let B be the 3 x 2 matrix having these vectors as its columns. Then B^T B = [10 3; 3 2], and the standard matrix for the orthogonal projection of R^3 onto W = row(A) is given by
P = B(B^T B)^{-1}B^T = (1/11)[2 -3 3; -3 10 1; 3 1 10]
Finally, in agreement with the calculus solution, we have
x_row = Px = (1/11)[2 -3 3; -3 10 1; 3 1 10][7; 3; 0] = (1/11)[5; 9; 24]
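The agreement between D10's two approaches can be confirmed in a few lines. A sketch (Python with NumPy; v1 = (1, 0, 3) and v2 = (0, 1, 1) span the row space and x0 = (7, 3, 0) is one particular solution, as in the exercise):

```python
import numpy as np

# Basis of row(A) as the columns of B, and one particular solution x0.
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [3.0, 1.0]])
x0 = np.array([7.0, 3.0, 0.0])

# Projecting x0 onto the row space gives the minimum-norm solution.
P = B @ np.linalg.inv(B.T @ B) @ B.T
x_row = P @ x0
print(11 * x_row)  # (5, 9, 24)/11, scaled to integers

assert np.allclose(x_row, np.array([5, 9, 24]) / 11)

# Calculus check: the minimizer of ||(7-3t, 3-t, t)|| is t = 24/11.
t = 24 / 11
assert np.allclose(x_row, np.array([7 - 3 * t, 3 - t, t]))
```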
D11. The rows of R form a basis for the row space of A, and G = R^T has these vectors as its columns. Thus, from Theorem 7.7.5, G(G^T G)^{-1}G^T is the standard matrix for the orthogonal projection of R^n onto W = row(A).
WORKING WITH PROOFS
P1. If x and y are vectors in R^n and if α and β are scalars, then a · (αx + βy) = α(a · x) + β(a · y). Thus
T(αx + βy) = ((a · (αx + βy))/||a||^2) a = α ((a · x)/||a||^2) a + β ((a · y)/||a||^2) a = αT(x) + βT(y)
which shows that T is linear.

P2. If b = ta, then b^T b = b · b = (ta) · (ta) = t^2 (a · a) = t^2 a^T a and (similarly) b b^T = t^2 a a^T; thus
(1/(b^T b)) b b^T = (1/(t^2 a^T a)) t^2 a a^T = (1/(a^T a)) a a^T

P3. Let P be a symmetric n x n matrix that is idempotent and has rank k. Then W = col(P) is a k-dimensional subspace of R^n. We will show that P is the standard matrix for the orthogonal projection of R^n onto W, i.e., that Px = proj_W x for all x in R^n. To this end, we first note that Px belongs to W and that
x = Px + (x - Px) = Px + (I - P)x
To show that Px = proj_W x it suffices (from Theorem 7.7.4) to show that (I - P)x belongs to W^⊥, and since W = col(P) = ran(P), this is equivalent to showing that Py · (I - P)x = 0 for all y in R^n. Finally, since P^T = P = P^2 (P is symmetric and idempotent), we have P^T(I - P) = P - P^2 = P - P = 0 and so
Py · (I - P)x = (Py)^T (I - P)x = y^T P^T (I - P)x = 0
for every x and y in R^n. This completes the proof.
EXERCISE SET 7.8
1. First we note that the columns of A are linearly independent since they are not scalar multiples of each other; thus A has full column rank. It follows from Theorem 7.8.3(b) that the system Ax = b has a unique least squares solution given by
x = (A^T A)^{-1}A^T b = [21 25; 25 35]^{-1}[20; 20] = (1/11)[20; -8]
The least squares error vector is
b - Ax = [2; -1; 5] - [1 -1; 2 3; 4 5](1/11)[20; -8] = (1/11)[-6; -27; 15]
and it is easy to check that this vector is in fact orthogonal to each of the columns of A. For example,
(b - Ax) · c1(A) = (1/11)[(-6)(1) + (-27)(2) + (15)(4)] = 0
2. The columns of A are linearly independent and so A has full column rank. Thus the system Ax = b has a unique least squares solution given by x = (A^T A)^{-1}A^T b (numerical details omitted). The least squares error vector is b - Ax, and it is easy to check that this vector is orthogonal to each of the columns of A.
3. From Exercise 1, the least squares solution of Ax = b is x = (1/11)[20; -8]; thus
Ax = [1 -1; 2 3; 4 5](1/11)[20; -8] = (1/11)[28; 16; 40]
On the other hand, the standard matrix for the orthogonal projection of R^3 onto col(A) is
P = A(A^T A)^{-1}A^T = [1 -1; 2 3; 4 5][21 25; 25 35]^{-1}[1 2 4; -1 3 5] = (1/220)[212 -36 20; -36 58 90; 20 90 170]
and so we have
proj_col(A) b = Pb = (1/220)[212 -36 20; -36 58 90; 20 90 170][2; -1; 5] = (1/11)[28; 16; 40] = Ax

4. From Exercise 2, the least squares solution of Ax = b is x = (A^T A)^{-1}A^T b, from which Ax can be computed. On the other hand, the standard matrix of the orthogonal projection onto col(A) is P = A(A^T A)^{-1}A^T, and we have proj_col(A) b = Pb = Ax (numerical details omitted).
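The computations in Exercises 1 and 3 can be replayed numerically. A sketch (Python with NumPy, using A = [1 -1; 2 3; 4 5] and b = (2, -1, 5) from Exercise 1):

```python
import numpy as np

A = np.array([[1.0, -1.0],
              [2.0,  3.0],
              [4.0,  5.0]])
b = np.array([2.0, -1.0, 5.0])

# Least squares solution from the normal equations A^T A x = A^T b.
x_normal = np.linalg.solve(A.T @ A, A.T @ b)
# The library routine gives the same solution.
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)
assert np.allclose(x_normal, x_lstsq)
assert np.allclose(x_normal, np.array([20, -8]) / 11)

# The error vector is orthogonal to each column of A (Exercise 1) ...
err = b - A @ x_normal
assert np.allclose(A.T @ err, 0)

# ... and Ax equals the orthogonal projection of b onto col(A) (Exercise 3).
P = A @ np.linalg.inv(A.T @ A) @ A.T
assert np.allclose(P @ b, A @ x_normal)
print(np.round(11 * x_normal))  # (20, -8)
```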
5. The least squares solutions of Ax = b are obtained by solving the associated normal system A^T A x = A^T b, which is
[24 8; 8 6][x1; x2] = [12; 8]
Since the matrix on the left is nonsingular, this system has the unique solution
x = [x1; x2] = [24 8; 8 6]^{-1}[12; 8] = (1/80)[6 -8; -8 24][12; 8] = [1/10; 6/5]
The error vector is b - Ax, and the least squares error is ||b - Ax|| (numerical details omitted).
6. The least squares solutions of Ax = b are obtained by solving the normal system A^T A x = A^T b, which is
[14 42; 42 126][x1; x2] = [12; 36]
This (redundant) system has infinitely many solutions given by
x = [x1; x2] = [6/7 - 3t; t] = [6/7; 0] + t[-3; 1]
The error vector is b - Ax, and the least squares error is ||b - Ax|| (numerical details omitted).
7. The least squares solutions of Ax = b are obtained by solving the normal system A^T A x = A^T b. The augmented matrix of this system reduces to a matrix of the form [1 0 * | *; 0 1 * | *; 0 0 0 | 0]; thus there are infinitely many least squares solutions, given by a one-parameter family (numerical details omitted). The error vector is b - Ax, and the least squares error is ||b - Ax||.

8. The least squares solutions of Ax = b are obtained by solving A^T A x = A^T b. The augmented matrix of this system reduces to a matrix of the form [1 0 * | *; 0 1 * | *; 0 0 0 | 0]; thus there are infinitely many least squares solutions, given by a one-parameter family (numerical details omitted). The error vector is b - Ax, and the least squares error is ||b - Ax||.
9. The linear model for the given data is Mv = y where
M = [1 2; 1 3; 1 5; 1 6] and y = [1; 2; 3; 4]
The least squares solution is obtained by solving the normal system M^T M v = M^T y, which is
[4 16; 16 74][v1; v2] = [10; 47]
Since the matrix on the left is nonsingular, this system has a unique solution given by
[v1; v2] = [4 16; 16 74]^{-1}[10; 47] = (1/20)[37 -8; -8 2][10; 47] = [-3/10; 7/10]
Thus the least squares straight line fit to the given data is y = -3/10 + (7/10)x.
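The straight-line fit in Exercise 9 takes only a few lines of code. A sketch (Python with NumPy; the data points (2, 1), (3, 2), (5, 3), (6, 4) are an assumption consistent with the normal system M^T M v = [10; 47] in the exercise):

```python
import numpy as np

x = np.array([2.0, 3.0, 5.0, 6.0])
y = np.array([1.0, 2.0, 3.0, 4.0])

# Design matrix M = [1 x] and the normal equations M^T M v = M^T y.
M = np.column_stack([np.ones_like(x), x])
v = np.linalg.solve(M.T @ M, M.T @ y)
print(v)  # intercept and slope: y = -3/10 + (7/10) x

assert np.allclose(M.T @ M, [[4, 16], [16, 74]])
assert np.allclose(M.T @ y, [10, 47])
assert np.allclose(v, [-0.3, 0.7])
```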
10. The linear model for the given data is Mv = y. The least squares solution is obtained by solving the normal system M^T M v = M^T y, which is
[4 8; 8 22][v1; v2] = [4; 9]
Since the matrix on the left is nonsingular, this system has a unique solution given by
[v1; v2] = [4 8; 8 22]^{-1}[4; 9] = (1/24)[22 -8; -8 4][4; 9] = (1/24)[16; 4] = [2/3; 1/6]
Thus the least squares straight line fit to the given data is y = 2/3 + (1/6)x.
11. The quadratic least squares model for the given data is Mv = y, where the rows of M have the form (1, x, x^2). The least squares solution is obtained by solving the normal system M^T M v = M^T y, which is
[4 8 22; 8 22 62; 22 62 178][v1; v2; v3] = M^T y
Since the matrix on the left is nonsingular, this system has a unique solution, and the resulting polynomial y = v1 + v2 x + v3 x^2 is the least squares quadratic fit to the given data (numerical details omitted).
12. The quadratic least squares model for the given data is Mv = y where
M = [1 -1 1; 1 0 0; 1 1 1; 1 2 4]
The least squares solution is obtained by solving the normal system M^T M v = M^T y, which is
[4 2 6; 2 6 8; 6 8 18][v1; v2; v3] = M^T y
Since the matrix on the left is nonsingular, this system has a unique solution, and the resulting polynomial y = v1 + v2 x + v3 x^2 is the least squares quadratic fit to the given data (numerical details omitted).
13. The model for the least squares cubic fit to the given data is Mv = y where
M = [1 1 1 1; 1 2 4 8; 1 3 9 27; 1 4 16 64; 1 5 25 125] and y = [4.9; 10.8; 27.9; 60.2; 113.0]
The associated normal system M^T M v = M^T y is
[5 15 55 225; 15 55 225 979; 55 225 979 4425; 225 979 4425 20515][a0; a1; a2; a3] = [216.8; 916.0; 4087.4; 18822.4]
and the solution, written in comma-delimited form, is (a0, a1, a2, a3) ≈ (5.160, -1.864, 0.811, 0.775).
14. The model for the least squares cubic fit to the given data is Mv = y where
M = [1 0 0 0; 1 1 1 1; 1 2 4 8; 1 3 9 27; 1 4 16 64] and y = [0.9; 3.1; 9.4; 24.1; 57.3]
The associated normal system M^T M v = M^T y is
[5 10 30 100; 10 30 100 354; 30 100 354 1300; 100 354 1300 4890][a0; a1; a2; a3] = [94.8; 323.4; 1174.4; 4396.2]
and the solution, written in comma-delimited form, is (a0, a1, a2, a3) ≈ (0.817, 3.586, -2.171, 1.200).
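Exercises 13 and 14 use the same normal-equations recipe with a cubic design matrix. A sketch (Python with NumPy, using Exercise 14's data x = 0, 1, 2, 3, 4 and y = 0.9, 3.1, 9.4, 24.1, 57.3):

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.9, 3.1, 9.4, 24.1, 57.3])

# Design matrix with columns 1, x, x^2, x^3.
M = np.vander(x, 4, increasing=True)
coeffs, *_ = np.linalg.lstsq(M, y, rcond=None)
print(np.round(coeffs, 3))  # (a0, a1, a2, a3)

# np.polyfit returns the same coefficients (highest degree first).
assert np.allclose(coeffs, np.polyfit(x, y, 3)[::-1])
# Solving the normal equations directly agrees as well.
assert np.allclose(coeffs, np.linalg.solve(M.T @ M, M.T @ y))
```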
15. If M = [1 x1; 1 x2; ...; 1 xn] and y = [y1; y2; ...; yn], then
M^T M = [1 1 ... 1; x1 x2 ... xn][1 x1; 1 x2; ...; 1 xn] = [n Σxi; Σxi Σxi^2]
and
M^T y = [1 1 ... 1; x1 x2 ... xn][y1; y2; ...; yn] = [Σyi; Σxiyi]
Thus the normal system M^T M v = M^T y can be written as
[n Σxi; Σxi Σxi^2][a; b] = [Σyi; Σxiyi]
DISCUSSION AND DISCOVERY

D1. (a) The distance from the point P0 = (1, -2, 1) to the plane W with equation x + y - z = 0 is
d = |(1)(1) + (1)(-2) + (-1)(1)| / sqrt((1)^2 + (1)^2 + (-1)^2) = 2/√3 = 2√3/3
and the point in the plane that is closest to P0 is Q = (5/3, -4/3, 1/3). The latter is found by computing the orthogonal projection of the vector b = OP0 onto the plane.
(b) The distance from the point P0 = (1, 2, 0, -1) to the hyperplane x1 - x2 + 2x3 - 2x4 = 0 is
d = |(1)(1) + (-1)(2) + (2)(0) + (-2)(-1)| / sqrt((1)^2 + (-1)^2 + (2)^2 + (-2)^2) = 1/√10 = √10/10
and the point in the hyperplane that is closest to P0 is Q = (9/10, 21/10, -2/10, -8/10). The latter is found by computing the orthogonal projection of the vector b = OP0 onto the hyperplane.
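Both parts of D1 follow one pattern: split the point into its component along the normal vector and its component in the plane. A sketch (Python with NumPy, using part (b)'s point P0 = (1, 2, 0, -1) and hyperplane x1 - x2 + 2x3 - 2x4 = 0):

```python
import numpy as np

p0 = np.array([1.0, 2.0, 0.0, -1.0])
n = np.array([1.0, -1.0, 2.0, -2.0])  # normal vector of the hyperplane

# Distance = |n . p0| / ||n||; the closest point is p0 minus the
# component of p0 along n.
d = abs(n @ p0) / np.linalg.norm(n)
q = p0 - (n @ p0) / (n @ n) * n
print(d, q)

assert np.isclose(d, 1 / np.sqrt(10))
assert np.allclose(q, [0.9, 2.1, -0.2, -0.8])
assert np.isclose(n @ q, 0.0)  # q lies in the hyperplane
```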
D2. (a) The vector in col(A) that is closest to b is proj_col(A) b = A(A^T A)^{-1}A^T b.
(b) The least squares solution of Ax = b is x = (A^T A)^{-1}A^T b.
(c) The least squares error vector is b - A(A^T A)^{-1}A^T b.
(d) The least squares error is ||b - A(A^T A)^{-1}A^T b||.
(e) The standard matrix for the orthogonal projection onto col(A) is P = A(A^T A)^{-1}A^T.
D3. From Theorem 7.8.4, a vector x is a least squares solution of Ax = b if and only if b - Ax belongs to col(A)^⊥. We have A = [1 -1; 2 3; 4 5] and b - Ax = (2, -7, s - 14); thus b - Ax is orthogonal to col(A) if and only if
(1)(2) + (2)(-7) + (4)(s - 14) = 0 and (-1)(2) + (3)(-7) + (5)(s - 14) = 0
that is, if and only if 4s - 68 = 0 and 5s - 93 = 0. These equations are clearly incompatible, and so we conclude that, for any value of s, the vector x is not a least squares solution of Ax = b.
D4. The given data points nearly fall on a straight line; thus it would be reasonable to perform a linear least squares fit and then use the resulting linear formula y = a + bx to extrapolate to x = 45.

D5. The model for this least squares fit is Mv = y, and the corresponding normal system is M^T M v = M^T y. Solving this system yields the coefficients of the best least squares fit by a curve of this type (numerical details omitted).

D6. We have
[A I; 0 A^T][x; r] = [b; 0]
if and only if Ax + r = b and A^T r = 0. Note that A^T r = 0 if and only if r is orthogonal to col(A). It follows that b - Ax belongs to col(A)^⊥ and so, from Theorem 7.8.4, x is a least squares solution of Ax = b and r = b - Ax is the least squares error vector.
WORKING WITH PROOFS
P1. If Ax = b is consistent, then b is in the column space of A and any solution of Ax = b is also a least squares solution (since ||b - Ax|| = 0). If, in addition, the columns of A are linearly independent, then there is only one solution of Ax = b and, from Theorem 7.8.3, the least squares solution is also unique. Thus, in this case, the least squares solution is the same as the exact solution of Ax = b.

P2. If b is orthogonal to the column space of A, then proj_col(A) b = 0. Thus, since the columns of A are linearly independent, we have Ax = proj_col(A) b = 0 if and only if x = 0.

P3. The least squares solutions of Ax = b are the solutions of the normal system A^T A x = A^T b. From Theorem 3.5.1, the solution space of the latter is the translated subspace x̂ + W, where x̂ is any least squares solution and W = null(A^T A) = null(A).

P4. If w is in W and w ≠ proj_W b then, as in the proof of Theorem 7.8.1, we have ||b - w|| > ||b - proj_W b||; thus proj_W b is the only best approximation to b from W.

P5. If a0, a1, a2, ..., am are scalars such that a0 c1(M) + a1 c2(M) + a2 c3(M) + ... + am c_{m+1}(M) = 0, then
a0 + a1 xi + a2 xi^2 + ... + am xi^m = 0
for each i = 1, 2, ..., n. Thus each xi is a root of the polynomial p(x) = a0 + a1 x + ... + am x^m. But such a polynomial (if not identically zero) can have at most m distinct roots. Thus, if n > m and if at least m + 1 of the numbers x1, x2, ..., xn are distinct, then a0 = a1 = a2 = ... = am = 0. This shows that the column vectors of M are linearly independent.

P6. If at least m + 1 of the numbers x1, x2, ..., xn are distinct then, from Exercise P5, the column vectors of M are linearly independent; thus M has full column rank and M^T M is invertible.
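P5 and P6 say that a polynomial design matrix has full column rank as soon as at least m + 1 of the sample points are distinct. A quick numerical illustration (Python with NumPy, for the quadratic case m = 2 with repeated sample points):

```python
import numpy as np

# Quadratic model (m = 2): columns 1, x, x^2; five sample points with
# only three distinct values -- still full column rank.
x = np.array([0.0, 1.0, 1.0, 2.0, 2.0])
M = np.vander(x, 3, increasing=True)
assert np.linalg.matrix_rank(M) == 3  # M^T M is invertible

# With only two distinct values the columns become dependent.
x_bad = np.array([0.0, 0.0, 1.0, 1.0, 1.0])
M_bad = np.vander(x_bad, 3, increasing=True)
assert np.linalg.matrix_rank(M_bad) == 2
print("rank checks passed")
```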
EXERCISE SET 7.9
1. (a) v1 · v2 = (2)(3) + (3)(2) = 12 ≠ 0; thus the vectors v1, v2 do not form an orthogonal set.
(b) v1 · v2 = (-1)(1) + (1)(1) = 0; thus the vectors v1, v2 form an orthogonal set. The corresponding orthonormal set is q1 = v1/||v1|| = (-1/√2, 1/√2), q2 = v2/||v2|| = (1/√2, 1/√2).
(c) We have v1 · v2 = v1 · v3 = v2 · v3 = 0; thus the vectors v1, v2, v3 form an orthogonal set. The corresponding orthonormal set is q1 = v1/||v1||, q2 = v2/||v2||, q3 = v3/||v3||.
(d) Although v1 · v2 = v1 · v3 = 0, we have v2 · v3 = (1)(4) + (2)(-3) + (5)(0) = -2 ≠ 0; thus the vectors v1, v2, v3 do not form an orthogonal set.

2. (a) v1 · v2 = 0; thus the vectors v1, v2 form an orthogonal set. The corresponding orthonormal set is q1 = v1/||v1||, q2 = v2/||v2||.
(b) v1 · v2 ≠ 0; thus the vectors v1, v2 do not form an orthogonal set.
(c) v1 · v2 ≠ 0; thus the vectors v1, v2, v3 do not form an orthogonal set.
(d) We have v1 · v2 = v1 · v3 = v2 · v3 = 0; thus the vectors v1, v2, v3 form an orthogonal set. The corresponding orthonormal set is q1 = v1/||v1||, q2 = v2/||v2||, q3 = v3/||v3||.

3. (a) These vectors form an orthonormal set.
(b) These vectors do not form an orthogonal set since v2 · v3 ≠ 0.
(c) These vectors form an orthogonal set, but not an orthonormal set since ||v3|| = √3 ≠ 1.
4. (a) Yes.
(b) No; ||v1|| ≠ 1 and ||v2|| ≠ 1.
(c) No; v1 · v2 ≠ 0, v2 · v3 ≠ 0, ||v1|| ≠ 1, and ||v3|| ≠ 1.

6. (a) proj_W x = (x · v1)v1 + (x · v2)v2 = (1)v1 + (2)v2 = (1/2, 1/2, 1/2, 1/2) + (1, 1, -1, -1) = (3/2, 3/2, -1/2, -1/2)
(b) proj_W x = (x · v1)v1 + (x · v2)v2 + (x · v3)v3 = (1)v1 + (2)v2 + (0)v3 = (3/2, 3/2, -1/2, -1/2)

7. (a), (b) Computed as in Exercise 6 (numerical details omitted).

8. (a), (b) Computed as in Exercise 6 (numerical details omitted).
13. Using Formula (6), the standard matrix for the orthogonal projection onto W = span{v1, v2} is P = AA^T, where A is the matrix having the orthonormal vectors v1 and v2 as its columns (numerical details omitted).

14. Using Formula (6), we have P = AA^T, where A is the matrix having the given orthonormal vectors as its columns (numerical details omitted).

15. Using the matrix found in Exercise 13, the orthogonal projection of w onto W = span{v1, v2} is Pw. On the other hand, using Formula (7), proj_W w = (w · v1)v1 + (w · v2)v2 gives the same result (numerical details omitted).
16. Using the matrix found in Exercise 14, the orthogonal projection of w onto W = span{v1, v2} is Pw. On the other hand, using Formula (7), proj_W w = (w · v1)v1 + (w · v2)v2 gives the same result (numerical details omitted).

17. We have v1/||v1|| = (1/√3, 1/√3, 1/√3) and v2/||v2|| = (1/√6, 1/√6, -2/√6). Thus the standard matrix for the orthogonal projection of R^3 onto W = span{v1, v2} is given by
P = [1/√3 1/√6; 1/√3 1/√6; 1/√3 -2/√6][1/√3 1/√3 1/√3; 1/√6 1/√6 -2/√6] = [1/2 1/2 0; 1/2 1/2 0; 0 0 1]

18. We have v1/||v1|| = (2/3, 1/3, 2/3) and v2/||v2|| = (1/3, 2/3, -2/3). Thus the standard matrix for the orthogonal projection of R^3 onto W = span{v1, v2} is given by
P = [2/3 1/3; 1/3 2/3; 2/3 -2/3][2/3 1/3 2/3; 1/3 2/3 -2/3] = (1/9)[5 4 2; 4 5 -2; 2 -2 8]

19. Using the matrix found in Exercise 17, the orthogonal projection of w onto W = span{v1, v2} is Pw. On the other hand, using Formula (8), proj_W w = (w · v1)v1 + (w · v2)v2 gives the same result (numerical details omitted).

20. Using the matrix found in Exercise 18, the orthogonal projection of w onto W = span{v1, v2} is Pw. On the other hand, using Formula (8), we obtain the same result (numerical details omitted).
21. From Exercise 17 we have P = [1/2 1/2 0; 1/2 1/2 0; 0 0 1]; thus trace(P) = 1/2 + 1/2 + 1 = 2.

22. From Exercise 18 we have P = (1/9)[5 4 2; 4 5 -2; 2 -2 8]; thus trace(P) = 5/9 + 5/9 + 8/9 = 2.

23. We have P^T = P and it is easy to check that P^2 = P; thus P is the standard matrix of an orthogonal projection. The dimension of the range of the projection is equal to tr(P) = 2.

24. The dimension of the range is equal to tr(P) = 1.

25. We have P^T = P and it is easy to check that P^2 = P; thus P is the standard matrix of an orthogonal projection. The dimension of the range of the projection is equal to tr(P) = 1.

26. The dimension of the range is equal to tr(P) = 2.

27. Let v1 = w1 = (1, -3) and v2 = w2 - ((w2 · v1)/||v1||^2) v1 = (2, 2) - (-4/10)(1, -3) = (12/5, 4/5) = (4/5)(3, 1). Then {v1, v2} is an orthogonal basis for R^2, and the vectors q1 = v1/||v1|| = (1/√10, -3/√10) and q2 = v2/||v2|| = (3/√10, 1/√10) form an orthonormal basis for R^2.

28. Let v1 = w1 = (1, 0) and v2 = w2 - ((w2 · v1)/||v1||^2) v1 = (3, -5) - (3)(1, 0) = (0, -5). Then q1 = v1/||v1|| = (1, 0) and q2 = v2/||v2|| = (0, -1) form an orthonormal basis for R^2.
29. Let v1 = w1 = (1, 1, 1) and v2 = w2 - ((w2 · v1)/||v1||^2) v1 = (-1, 1, 0) - (0/3)(1, 1, 1) = (-1, 1, 0), and let v3 be obtained from w3 in the same way (numerical details omitted). Then {v1, v2, v3} is an orthogonal basis for R^3, and the vectors q1 = v1/||v1||, q2 = v2/||v2||, q3 = v3/||v3|| form an orthonormal basis for R^3.

30. Let v1 = w1 = (1, 0, 0), v2 = w2 - ((w2 · v1)/||v1||^2) v1 = (3, 7, -2) - (3)(1, 0, 0) = (0, 7, -2), and let v3 be obtained from w3 in the same way (numerical details omitted). Then {v1, v2, v3} is an orthogonal basis for R^3, and the vectors
q1 = v1/||v1|| = (1, 0, 0), q2 = v2/||v2|| = (0, 7/√53, -2/√53), q3 = v3/||v3|| = (0, 2/√53, 7/√53)
form an orthonormal basis for R^3.
31. Let v1 = w1 = (0, 2, 1, 0) and v2 = w2 - ((w2 · v1)/||v1||^2) v1 = (1, -1, 0, 0) - (-2/5)(0, 2, 1, 0) = (1, -1/5, 2/5, 0), and let v3 and v4 be obtained from w3 and w4 in the same way (numerical details omitted). Then {v1, v2, v3, v4} is an orthogonal basis for R^4, and the vectors
q1 = v1/||v1|| = (0, 2/√5, 1/√5, 0), q2 = v2/||v2|| = (√30/6, -√30/30, √30/15, 0)
together with q3 = v3/||v3|| and q4 = v4/||v4|| form an orthonormal basis for R^4.

32. Let v1 = w1 = (1, 2, 1, 0) and v2 = w2 - ((w2 · v1)/||v1||^2) v1 = (1, 1, 2, 0) - (5/6)(1, 2, 1, 0) = (1/6, -2/3, 7/6, 0), and let v3 and v4 be obtained in the same way (numerical details omitted). Then {v1, v2, v3, v4} is an orthogonal basis for R^4, and the vectors q_i = v_i/||v_i|| form an orthonormal basis for R^4.
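Exercises 27-32 all run the same algorithm, so it is worth writing it out once. A sketch of classical Gram-Schmidt (Python with NumPy, checked against Exercise 27's data w1 = (1, -3), w2 = (2, 2)):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormal basis for span(vectors); vectors assumed independent."""
    basis = []
    for w in vectors:
        # Subtract the projections onto the vectors found so far.
        v = w - sum((w @ q) * q for q in basis)
        basis.append(v / np.linalg.norm(v))
    return basis

q1, q2 = gram_schmidt([np.array([1.0, -3.0]), np.array([2.0, 2.0])])
print(q1, q2)  # (1, -3)/sqrt(10) and (3, 1)/sqrt(10)

assert np.allclose(q1, np.array([1, -3]) / np.sqrt(10))
assert np.allclose(q2, np.array([3, 1]) / np.sqrt(10))
assert np.isclose(q1 @ q2, 0.0)
```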
33. The vectors w1 = (1/√2, 1/√2, 0), w2 = (1/√2, -1/√2, 0), and w3 = (0, 0, 1) form an orthonormal basis for R^3.
34. Let A be the 2 x 4 matrix having the orthonormal vectors w1 and w2 as its rows. Then row(A) = span{w1, w2} and null(A) = span{w1, w2}^⊥. A basis {w3, w4} for null(A) can be found by solving the linear system Ax = 0 (numerical details omitted), and B = {w1, w2, w3, w4} is then a basis for R^4. Note also that, in addition to being orthogonal to w1 and w2, the vectors w3 and w4 can be chosen to be orthogonal to each other. Thus B is an orthogonal basis for R^4, and application of the Gram-Schmidt process to these vectors amounts to simply normalizing them. This results in an orthonormal basis {q1, q2, q3, q4} with q1 = w1, q2 = w2, q3 = w3/||w3||, and q4 = w4/||w4||.

35. Note that w3 = w1 + w2. Thus the subspace W spanned by the given vectors is 2-dimensional with basis {w1, w2}. Let v1 = w1 = (0, 1, 2) and
v2 = w2 - ((w2 · v1)/||v1||^2) v1 = (-1, 0, 1) - (2/5)(0, 1, 2) = (-1, -2/5, 1/5)
Then {v1, v2} is an orthogonal basis for W, and the vectors
u1 = v1/||v1|| = (0, 1/√5, 2/√5), u2 = v2/||v2|| = (-5/√30, -2/√30, 1/√30)
form an orthonormal basis for W.

36. Note that w4 = w1 - w2 + w3. Thus the subspace W spanned by the given vectors is 3-dimensional with basis {w1, w2, w3}. Let v1 = w1 = (-1, 2, 4, 7), and let
v2 = w2 - ((w2 · v1)/||v1||^2) v1 = (-3, 0, 4, -2) - (5/70)(-1, 2, 4, 7) = (-41/14, -1/7, 26/7, -5/2)
v3 = w3 - ((w3 · v1)/||v1||^2) v1 - ((w3 · v2)/||v2||^2) v2 = (2, 2, 7, -3) - (9/70)(-1, 2, 4, 7) - (383/401)(-41/14, -1/7, 26/7, -5/2) = (9876/2005, 3768/2005, 5891/2005, -3032/2005)
Then {v1, v2, v3} is an orthogonal basis for W, and the vectors
u1 = v1/||v1|| = (-1/√70, 2/√70, 4/√70, 7/√70)
u2 = v2/||v2|| = (-41/√5614, -2/√5614, 52/√5614, -35/√5614)
u3 = v3/||v3|| = (9876/√155630105, 3768/√155630105, 5891/√155630105, -3032/√155630105)
form an orthonormal basis for W.
37. Note that u1 and u2 are orthonormal vectors. Thus the orthogonal projection of w onto the subspace W spanned by these two vectors is given by
w1 = proj_W w = (w · u1)u1 + (w · u2)u2 = (-1)(4/5, 0, -3/5) + (2)(0, 1, 0) = (-4/5, 2, 3/5)
and the component of w orthogonal to W is
w2 = w - w1 = (1, 2, 3) - (-4/5, 2, 3/5) = (9/5, 0, 12/5)
38. First we find an orthonormal basis {q1, q2} for W by applying the Gram-Schmidt process to {u1, u2}. Let v1 = u1 = (-1, 0, 1, 2) and v2 = u2 - ((u2 · v1)/||v1||^2) v1 = (0, 1, 0, 1) - (1/3)(-1, 0, 1, 2) = (1/3, 1, -1/3, 1/3), and let
q1 = v1/||v1|| = (-1/√6, 0, 1/√6, 2/√6), q2 = v2/||v2|| = (1/√12, 3/√12, -1/√12, 1/√12)
Then {q1, q2} is an orthonormal basis for W, and so the orthogonal projection of w = (-1, 2, 6, 0) onto W is given by
w1 = (w · q1)q1 + (w · q2)q2 = (7/√6)q1 + (-1/√12)q2 = (-5/4, -1/4, 5/4, 9/4)
and the component of w orthogonal to W is
w2 = w - w1 = (-1, 2, 6, 0) - (-5/4, -1/4, 5/4, 9/4) = (1/4, 9/4, 19/4, -9/4)

39. If w = (a, b, c), then the vector
u = w/||w|| = (1/sqrt(a^2 + b^2 + c^2))(a, b, c)
is an orthonormal basis for the 1-dimensional subspace W spanned by w. Thus, using Formula (6), the standard matrix for the orthogonal projection of R^3 onto W is
P = (1/(a^2 + b^2 + c^2))[a; b; c][a b c] = (1/(a^2 + b^2 + c^2))[a^2 ab ac; ab b^2 bc; ac bc c^2]
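Exercise 38's two-step computation (orthonormalize, then project) automates directly. A sketch (Python with NumPy, using u1 = (-1, 0, 1, 2), u2 = (0, 1, 0, 1) and w = (-1, 2, 6, 0) from the exercise):

```python
import numpy as np

u1 = np.array([-1.0, 0.0, 1.0, 2.0])
u2 = np.array([0.0, 1.0, 0.0, 1.0])
w = np.array([-1.0, 2.0, 6.0, 0.0])

# Gram-Schmidt on {u1, u2}.
q1 = u1 / np.linalg.norm(u1)
v2 = u2 - (u2 @ q1) * q1
q2 = v2 / np.linalg.norm(v2)

# Orthogonal projection of w onto W = span{q1, q2}, and its complement.
w1 = (w @ q1) * q1 + (w @ q2) * q2
w2 = w - w1
print(4 * w1, 4 * w2)  # integer multiples of 1/4

assert np.allclose(w1, np.array([-5, -1, 5, 9]) / 4)
assert np.allclose(w2, np.array([1, 9, 19, -9]) / 4)
assert np.isclose(w2 @ q1, 0.0) and np.isclose(w2 @ q2, 0.0)
```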
DISCUSSION AND DISCOVERY
D1. If a and b are nonzero, then u1 = (1, 0, a) and u2 = (0, 1, b) form a basis for the plane z = ax + by, and application of the Gram-Schmidt process to these vectors yields an orthonormal basis {q1, q2} (formulas omitted).

D2. (a) span{v1} = span{w1}, span{v1, v2} = span{w1, w2}, and span{v1, v2, v3} = span{w1, w2, w3}.
(b) v3 is orthogonal to span{w1, w2}.

D3. If the vectors w1, w2, ..., wk are linearly dependent, then at least one of the vectors in the list is a linear combination of the previous ones. If wj is a linear combination of w1, w2, ..., w_{j-1}, then, when applying the Gram-Schmidt process at the jth step, the vector vj will be 0.

D4. If A has orthonormal columns, then AA^T is the standard matrix for the orthogonal projection onto the column space of A.

D5. (a) col(M) = col(P).
(b) Find an orthonormal basis for col(P) and use these vectors as the columns of the matrix M.
(c) No. Any orthonormal basis for col(P) can be used to form the columns of M.

D6. (a) True. Any orthonormal set of vectors is linearly independent.
(b) False. An orthogonal set may contain 0. However, it is true that any orthogonal set of nonzero vectors is linearly independent.
(c) False. Strictly speaking, the subspace {0} has no basis, hence no orthonormal basis. However, it is true that any nonzero subspace has an orthonormal basis.
(d) True. The vector q3 is orthogonal to the subspace span{w1, w2}.
WORKING WITH PROOFS

P1. If {v1, v2, ..., vk} is an orthogonal basis for W, then {v1/||v1||, v2/||v2||, ..., vk/||vk||} is an orthonormal basis. Thus, using part (a), the orthogonal projection of a vector x on W can be expressed as

proj_W x = ((x · v1)/||v1||) v1/||v1|| + ((x · v2)/||v2||) v2/||v2|| + ... + ((x · vk)/||vk||) vk/||vk||

P2. If A is symmetric and idempotent, then A is the standard matrix of an orthogonal projection operator; namely the orthogonal projection of R^n onto W = col(A). Thus A = UU^T, where U is any n x k matrix whose column vectors form an orthonormal basis for W.
P3. We must prove that vj is in span{w1, w2, ..., wj} for each j = 1, 2, .... The proof is by induction on j.

Step 1. Since v1 = w1, we have v1 in span{w1}; thus the statement is true for j = 1.

Step 2 (induction step). Suppose the statement is true for integers k which are less than or equal to j, i.e., for k = 1, 2, ..., j. Then, since v1 is in span{w1}, v2 is in span{w1, w2}, ..., and vj is in span{w1, w2, ..., wj}, it follows that v(j+1) is in span{w1, w2, ..., wj, w(j+1)}. Thus if the statement is true for each of the integers k = 1, 2, ..., j, then it is also true for k = j + 1.

These two steps complete the proof by induction.
EXERCISE SET 7.10

1. The column vectors of the matrix A are w1 = (1, 2) and w2 = (-1, 3). Application of the Gram-Schmidt process to these vectors yields

q1 = (1/sqrt(5))(1, 2),  q2 = (1/sqrt(5))(-2, 1)

We have w1 = (w1 · q1)q1 = sqrt(5) q1 and w2 = (w2 · q1)q1 + (w2 · q2)q2 = sqrt(5) q1 + sqrt(5) q2. Thus application of Formula (3) yields the following QR-decomposition of A:

A = [1, -1; 2, 3] = [1/sqrt(5), -2/sqrt(5); 2/sqrt(5), 1/sqrt(5)] [sqrt(5), sqrt(5); 0, sqrt(5)] = QR
254
Chapter 7
2. Application of the Gram-Schmidt process to the column vectors w1 = (1, 0, 1) and w2 = (2, 1, 4) of A yields

q1 = (1/sqrt(2))(1, 0, 1),  q2 = (1/sqrt(3))(-1, 1, 1)

We have w1 = sqrt(2) q1 and w2 = 3 sqrt(2) q1 + sqrt(3) q2. This yields the following QR-decomposition of A:

A = [1, 2; 0, 1; 1, 4] = [1/sqrt(2), -1/sqrt(3); 0, 1/sqrt(3); 1/sqrt(2), 1/sqrt(3)] [sqrt(2), 3 sqrt(2); 0, sqrt(3)] = QR
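The QR computations in these exercises all follow one recipe: orthonormalize the columns of A by Gram-Schmidt to get Q, and read the entries of R off the expansions wj = (wj · q1)q1 + ... + (wj · qj)qj. A minimal sketch in plain Python; the 3x2 matrix is an assumed example, chosen so that R comes out with the simple entries sqrt(2), 3 sqrt(2), sqrt(3):

```python
from math import sqrt

def qr_gram_schmidt(cols):
    """Classical Gram-Schmidt on a list of column vectors.
    Returns (Q_cols, R) with A = Q R and Q having orthonormal columns."""
    n, k = len(cols[0]), len(cols)
    Q, R = [], [[0.0] * k for _ in range(k)]
    for j, w in enumerate(cols):
        v = list(w)
        for i, q in enumerate(Q):
            R[i][j] = sum(q[t] * w[t] for t in range(n))   # r_ij = q_i . w_j
            v = [v[t] - R[i][j] * q[t] for t in range(n)]  # subtract projection
        R[j][j] = sqrt(sum(t * t for t in v))              # length of what's left
        Q.append([t / R[j][j] for t in v])
    return Q, R

# Example columns: w1 = (1, 0, 1), w2 = (2, 1, 4).
cols = [[1.0, 0.0, 1.0], [2.0, 1.0, 4.0]]
Q, R = qr_gram_schmidt(cols)
assert abs(R[0][0] - sqrt(2)) < 1e-12 and abs(R[0][1] - 3 * sqrt(2)) < 1e-12
assert abs(R[1][1] - sqrt(3)) < 1e-12

# Reassemble A = QR column by column.
for j, w in enumerate(cols):
    col = [sum(Q[i][t] * R[i][j] for i in range(len(Q))) for t in range(3)]
    assert all(abs(col[t] - w[t]) < 1e-12 for t in range(3))
```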
3. Application of the Gram-Schmidt process to the column vectors w1 and w2 of A yields the orthonormal vectors q1 and q2. We have w1 = (w1 · q1)q1 = 3 q1 and w2 = (w2 · q1)q1 + (w2 · q2)q2, and these expansions give the entries of R in the QR-decomposition A = QR.
4. Application of the Gram-Schmidt process to the column vectors w1 = (1, 0, 1), w2 = (0, 1, 2), w3 = (2, 1, 0) of A yields

q1 = (1/sqrt(2))(1, 0, 1),  q2 = (1/sqrt(3))(-1, 1, 1),  q3 = (1/sqrt(6))(1, 2, -1)

We have w1 = sqrt(2) q1, w2 = sqrt(2) q1 + sqrt(3) q2, and w3 = sqrt(2) q1 - (1/sqrt(3)) q2 + (2/3)sqrt(6) q3. This yields the following QR-decomposition of A:

A = [1, 0, 2; 0, 1, 1; 1, 2, 0] = [1/sqrt(2), -1/sqrt(3), 1/sqrt(6); 0, 1/sqrt(3), 2/sqrt(6); 1/sqrt(2), 1/sqrt(3), -1/sqrt(6)] [sqrt(2), sqrt(2), sqrt(2); 0, sqrt(3), -1/sqrt(3); 0, 0, (2/3)sqrt(6)] = QR
5. Application of the Gram-Schmidt process to the column vectors w1 = (1, 0, 1), w2 = (2, 3, 1), w3 = (1, 1, 1) of A yields

q1 = (1/sqrt(2))(1, 0, 1),  q2 = (1/sqrt(38))(1, 6, -1),  q3 = (1/sqrt(19))(-3, 1, 3)

We have w1 = sqrt(2) q1, w2 = (3/2)sqrt(2) q1 + (1/2)sqrt(38) q2, and w3 = sqrt(2) q1 + (3/19)sqrt(38) q2 + (1/sqrt(19)) q3. This yields the following QR-decomposition of A:

A = [1, 2, 1; 0, 3, 1; 1, 1, 1] = [1/sqrt(2), 1/sqrt(38), -3/sqrt(19); 0, 6/sqrt(38), 1/sqrt(19); 1/sqrt(2), -1/sqrt(38), 3/sqrt(19)] [sqrt(2), (3/2)sqrt(2), sqrt(2); 0, (1/2)sqrt(38), (3/19)sqrt(38); 0, 0, 1/sqrt(19)] = QR
6. Application of the Gram-Schmidt process to the column vectors w1, w2, w3 of A yields an orthonormal set {q1, q2, q3}. We have w1 = 2 q1 and w2 = -q1 + q2; expanding w3 against q1, q2, q3 in the same way gives the remaining entries of R in the QR-decomposition A = QR.

7. From Exercise 3, we have A = QR. Thus the normal system for Ax = b can be expressed as Rx = Q^T b, and solving this system by back substitution yields the least squares solution.
8. From Exercise 4, we have A = [1, 0, 2; 0, 1, 1; 1, 2, 0] = QR with R = [sqrt(2), sqrt(2), sqrt(2); 0, sqrt(3), -1/sqrt(3); 0, 0, (2/3)sqrt(6)]. Thus the normal system for Ax = b can be expressed as Rx = Q^T b, and this system can be solved by back substitution. Note that, in this example, the system Ax = b is consistent and this is its exact solution.

9. From Exercise 5, we have A = [1, 2, 1; 0, 3, 1; 1, 1, 1] = QR. Thus the normal system for Ax = b can be expressed as Rx = Q^T b. Solving this system by back substitution yields x3 = 16, x2 = -5, x1 = -8. Note that, in this example, the system Ax = b is consistent and this is its exact solution.
[-~ !:] ~ -H ~
10. Prom Exercise 6, we have A=
-1 1 0
I
-~
[:
- I
I
l
O
2 72
ll
= QR. Thus the normal system
v~
for .-\x - b can be expressed as Rx = Qrb, which is
[~ -~ i][=:] [t
=
0
0
~
1
0
X3
72
Solving this system by back substitution yields x 3 = -2, x 2
= 121 , x 1 =
~·
11. The plane 2x - y + 3z = 0 corresponds to a⊥ where a = (2, -1, 3). Thus, writing a as a column vector, the standard matrix for the reflection of R^3 about the plane is

H = I - (2/a^T a) a a^T = I - (1/7)[4, -2, 6; -2, 1, -3; 6, -3, 9] = (1/7)[3, 2, -6; 2, 6, 3; -6, 3, -2]

and the reflection of the vector b = (1, 2, 2) about that plane is given, in column form, by

Hb = (1/7)[3, 2, -6; 2, 6, 3; -6, 3, -2][1; 2; 2] = (1/7)[-5; 20; -4]
lhevlane.x 1-y- lz Ocorrespondstoa 1 wherea=(l,l,-4). Thus,writingaMacolumnvector,
t.he standard matnx for the reflection of R 3 about the plane tS
~
18
[ 1 -4] [ ~ -~
I
-4
- -l
- 4
16
:;
~]
-~
~
~
:1
.:
_l
9
9
o.nd the retlec:ti•>n of the vector b = (1,0,1) about tltat plane is given, in column form, by
Hb
13.-16. In each of these exercises, a is the given vector written in column form, and the standard matrix for the corresponding reflection about a⊥ is computed from the formula H = I - (2/a^T a) a a^T.
17. (a) Let a = v - w = (3, 4) - (5, 0) = (-2, 4). Then

H = I - (2/a^T a) a a^T = [1, 0; 0, 1] - (1/10)[4, -8; -8, 16] = [3/5, 4/5; 4/5, -3/5]

Then H is the Householder matrix for the reflection about a⊥, and Hv = w.

(b) Let a = v - w = (3, 4) - (0, 5) = (3, -1). Then

H = I - (2/a^T a) a a^T = [1, 0; 0, 1] - (1/5)[9, -3; -3, 1] = [-4/5, 3/5; 3/5, 4/5]

Then H is the Householder matrix for the reflection about a⊥, and Hv = w.

(c) Let a = v - w = (3, 4) - ((5/2)sqrt(2), -(5/2)sqrt(2)) = ((6 - 5 sqrt(2))/2, (8 + 5 sqrt(2))/2). Then the appropriate Householder matrix is H = I - (2/a^T a) a a^T.

18. (a) Let a = v - w = (1, 1) - (sqrt(2), 0) = (1 - sqrt(2), 1). Then the appropriate Householder matrix is

H = I - (2/a^T a) a a^T = [1, 0; 0, 1] - (2/(4 - 2 sqrt(2)))[(1 - sqrt(2))^2, 1 - sqrt(2); 1 - sqrt(2), 1] = [sqrt(2)/2, sqrt(2)/2; sqrt(2)/2, -sqrt(2)/2]

(b) Let a = v - w = (1, 1) - (0, sqrt(2)) = (1, 1 - sqrt(2)). Then the appropriate Householder matrix is

H = I - (2/a^T a) a a^T = [-sqrt(2)/2, sqrt(2)/2; sqrt(2)/2, sqrt(2)/2]

(c) Let a = v - w. Then the appropriate Householder matrix is H = I - (2/a^T a) a a^T, computed as in parts (a) and (b).
19. Let w = (||v||, 0, 0) = (3, 0, 0), and a = v - w = (-5, 1, 2). Then

H = I - (2/a^T a) a a^T = [1, 0, 0; 0, 1, 0; 0, 0, 1] - (1/15)[25, -5, -10; -5, 1, 2; -10, 2, 4] = [-2/3, 1/3, 2/3; 1/3, 14/15, -2/15; 2/3, -2/15, 11/15]

is the standard matrix for the Householder reflection of R^3 about a⊥, and Hv = w.
20. Let w = (||v||, 0, 0) = (3, 0, 0), and a = v - w = (-2, -2, 2). Then

H = I - (2/a^T a) a a^T = [1, 0, 0; 0, 1, 0; 0, 0, 1] - (1/6)[4, 4, -4; 4, 4, -4; -4, -4, 4] = [1/3, -2/3, 2/3; -2/3, 1/3, 2/3; 2/3, 2/3, 1/3]

is the standard matrix for the Householder reflection of R^3 about a⊥, and Hv = w.
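Exercises 13-22 all instantiate the single formula H = I - (2/a^T a) a a^T. The sketch below (plain Python) uses the data of Exercise 20, v = (1, -2, 2) and w = (3, 0, 0), and checks both that Hv = w and that H is an involution (H^2 = I), as any reflection must be.

```python
def householder(a):
    """H = I - (2 / a.a) a a^T, the reflection of R^n about the hyperplane a-perp."""
    n = len(a)
    s = 2.0 / sum(x * x for x in a)
    return [[(1.0 if i == j else 0.0) - s * a[i] * a[j] for j in range(n)]
            for i in range(n)]

v = [1.0, -2.0, 2.0]
w = [3.0, 0.0, 0.0]                      # (||v||, 0, 0)
a = [vi - wi for vi, wi in zip(v, w)]    # a = v - w = (-2, -2, 2)
H = householder(a)

# The reflection about a-perp maps v onto w.
Hv = [sum(H[i][j] * v[j] for j in range(3)) for i in range(3)]
assert all(abs(Hv[i] - w[i]) < 1e-12 for i in range(3))

# H is symmetric and orthogonal, so H^2 = I.
HH = [[sum(H[i][k] * H[k][j] for k in range(3)) for j in range(3)] for i in range(3)]
assert all(abs(HH[i][j] - (1.0 if i == j else 0.0)) < 1e-12
           for i in range(3) for j in range(3))
```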
21. Let v = (1, -1), w = (||v||, 0) = (sqrt(2), 0), and a = v - w = (1 - sqrt(2), -1). Then the Householder reflection about a⊥ maps v into w. The standard matrix for this reflection is

H = I - (2/a^T a) a a^T = [1, 0; 0, 1] - (2/(4 - 2 sqrt(2)))[(1 - sqrt(2))^2, -1 + sqrt(2); -1 + sqrt(2), 1] = [sqrt(2)/2, -sqrt(2)/2; -sqrt(2)/2, -sqrt(2)/2]

We have

HA = [sqrt(2)/2, -sqrt(2)/2; -sqrt(2)/2, -sqrt(2)/2][1, 2; -1, 3] = [sqrt(2), -sqrt(2)/2; 0, -(5/2)sqrt(2)] = R

and, setting Q = H^(-1) = H^T = H, this yields the following QR-decomposition of the matrix A:

A = [1, 2; -1, 3] = [sqrt(2)/2, -sqrt(2)/2; -sqrt(2)/2, -sqrt(2)/2][sqrt(2), -sqrt(2)/2; 0, -(5/2)sqrt(2)] = QR
22. Let v = (2, 1), w = (||v||, 0) = (sqrt(5), 0), and a = v - w = (2 - sqrt(5), 1). Then the Householder reflection about a⊥ maps v into w. The standard matrix for this reflection is

Q1 = I - (2/a^T a) a a^T = (1/sqrt(5))[2, 1; 1, -2]

Multiplying the given matrix A on the left by Q1 produces an upper triangular matrix R, and setting Q = Q1^T = Q1 yields the QR-decomposition A = QR.
23. Referring to the construction in Exercise 21, the second entry in the first column of A can be zeroed out by multiplying by an orthogonal (Householder) matrix Q1. Although the resulting matrix is not upper triangular, it can be made so by interchanging the 2nd and 3rd rows, and this can be achieved by interchanging the corresponding rows of Q1. This yields Q2 A = R and finally, setting Q = Q2^(-1) = Q2^T, we obtain the QR-decomposition A = QR.

24. Referring to the construction in Exercise 18(a), the second entry in the first column of A can be zeroed out by multiplying by an orthogonal matrix Q1. From a similar construction, the third entry in the second column of Q1 A can be zeroed out by multiplying by an orthogonal matrix Q2, and from a third such construction, the fourth entry in the third column of Q2 Q1 A can be zeroed out by multiplying by an orthogonal matrix Q3. The result R = Q3 Q2 Q1 A is upper triangular. Finally, setting Q = Q1^T Q2^T Q3^T = Q1 Q2 Q3, we obtain the QR-decomposition A = QR.
25. Since A = QR, the system Ax = b is equivalent to the upper triangular system Rx = Q^T b. Solving this system by back substitution yields x3 = 1, x2 = 1, x1 = 1.
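Back substitution, used here and in Exercises 7-10 to solve the triangular system Rx = Q^T b, can be sketched in a few lines of plain Python. The 3x3 triangular system below is a made-up example, not one from the text:

```python
def back_substitute(R, y):
    """Solve R x = y for upper triangular R with nonzero diagonal."""
    n = len(R)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):  # solve the last equation first
        x[i] = (y[i] - sum(R[i][j] * x[j] for j in range(i + 1, n))) / R[i][i]
    return x

R = [[2.0, 1.0, -1.0],
     [0.0, 3.0,  2.0],
     [0.0, 0.0,  4.0]]
y = [3.0, 7.0, 4.0]
x = back_substitute(R, y)

# Check by multiplying back: R x should reproduce y.
assert all(abs(sum(R[i][j] * x[j] for j in range(3)) - y[i]) < 1e-12
           for i in range(3))
```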
26. (a) Since a a^T x = a(a^T x) = (a^T x)a, we have Hx = (I - (2/a^T a) a a^T)x = Ix - (2/a^T a) a a^T x = x - (2 a^T x / a^T a) a.

(b) Using the formula in part (a), we have Hx = x - (2 a^T x / a^T a) a = (3, 4, 1) - (16/3)(1, 1, 1) = (-7/3, -4/3, -13/3). On the other hand, we have

H = I - (2/a^T a) a a^T = [1, 0, 0; 0, 1, 0; 0, 0, 1] - (2/3)[1, 1, 1; 1, 1, 1; 1, 1, 1] = (1/3)[1, -2, -2; -2, 1, -2; -2, -2, 1]

and multiplying H times x (written as a column vector) gives the same result.
DISCUSSION AND DISCOVERY

D1. The standard matrix for the reflection of R^3 about e1⊥ is (as should be expected)

H = [1, 0, 0; 0, 1, 0; 0, 0, 1] - 2[1, 0, 0; 0, 0, 0; 0, 0, 0] = [-1, 0, 0; 0, 1, 0; 0, 0, 1]

and similarly for the others.

D2. The standard matrix for the reflection of R^2 about the line y = mx is (taking a = (1, m)) given by

H = (1/(1 + m^2))[1 - m^2, 2m; 2m, m^2 - 1]

D3. If s = ±sqrt(53), then ||w|| = ||v|| and the Householder reflection about (v - w)⊥ maps v into w.

D4. Since ||w|| = ||v||, the Householder reflection about (v - w)⊥ maps v into w. We have v - w = (-8, 12), and so (v - w)⊥ is the line -8x + 12y = 0, or y = (2/3)x.

D5. Let a = v - w = (1, 2, 2) - (0, 0, 3) = (1, 2, -1). Then the reflection of R^3 about a⊥ maps v into w, and the plane a⊥ corresponds to x + 2y - z = 0, or z = x + 2y.
WORKING WITH PROOFS

P2. To show that H = I - (2/a^T a) a a^T is orthogonal we must show that H^T H = I. This follows from

H^T H = (I - (2/a^T a) a a^T)(I - (2/a^T a) a a^T) = I - (2/a^T a) a a^T - (2/a^T a) a a^T + (4/(a^T a)^2) a a^T a a^T = I - (4/a^T a) a a^T + (4/a^T a) a a^T = I

where we have used the fact that a a^T a a^T = a(a^T a)a^T = (a^T a) a a^T.

P3. One of the features of the Gram-Schmidt process is that span{q1, q2, ..., qj} = span{w1, w2, ..., wj} for each j = 1, 2, ..., k. Thus, in the expansion of wj in terms of q1, ..., qj, we must have wj · qj not equal to 0, for otherwise wj would be in span{q1, q2, ..., q(j-1)} = span{w1, w2, ..., w(j-1)}, which would mean that {w1, w2, ..., wj} is a linearly dependent set.

P4. If A = QR is a QR-decomposition of A, then Q = AR^(-1). From this it follows that the columns of Q belong to the column space of A. In particular, if R^(-1) = [s_ij], then from Q = AR^(-1) it follows that

c_j(Q) = A c_j(R^(-1)) = s_1j c_1(A) + s_2j c_2(A) + ... + s_kj c_k(A)

for each j = 1, 2, ..., k. Finally, since dim(col(A)) = k and the vectors c1(Q), c2(Q), ..., ck(Q) are linearly independent, it follows that they form a basis for col(A).
EXERCISE SET 7.11
1. (a) The vector equation c1 v1 + c2 v2 = w is equivalent to a 2x2 linear system in c1 and c2; solving this system yields (w)_B = (-2, 5) and [w]_B = [-2; 5].
   (b) We have w = 3 v1 - 7 v2; thus (w)_B = (3, -7) and [w]_B = [3; -7].

2. (a) Solving the corresponding linear system yields (w)_B and the column vector [w]_B.
   (b) (w)_B = (1, 1) and [w]_B = [1; 1].

3. The vector equation c1 v1 + c2 v2 + c3 v3 = w is equivalent to a 3x3 linear system. Solving this system by back substitution yields c3 = 1, c2 = -2, c1 = 3. Thus (w)_B = (3, -2, 1) and [w]_B = [3; -2; 1].

4. The vector equation c1 v1 + c2 v2 + c3 v3 = w is equivalent to a 3x3 linear system. Solving this system by row reduction yields c1 = -2, c2 = 0, c3 = 1. Thus (w)_B = (-2, 0, 1).
5. If (u)_B = (7, -2, 1), then u = 7 v1 - 2 v2 + v3 = 7(1, 0, 0) - 2(2, 2, 0) + (3, 3, 3) = (6, -1, 3).

6. If (u)_B = (8, -5, 4), then u = 8 v1 - 5 v2 + 4 v3 = 8(1, 2, 3) - 5(-1, 5, 6) + 4(7, -8, 9) = (41, -41, 30).

7. Since B is an orthonormal basis, (w)_B = (w · v1, w · v2).

8. (w)_B = (w · v1, w · v2, w · v3) = (0, -2, 1).

9. (w)_B = (w · v1, w · v2, w · v3), with each component computed as a dot product against the corresponding orthonormal basis vector.
11. (a) We have u = v1 + v2 and v = -v1 + 4 v2.
    (b) Using Theorem 7.11.2: ||u|| = ||(u)_B|| = sqrt((1)^2 + (1)^2) = sqrt(2), ||v|| = ||(v)_B|| = sqrt((-1)^2 + (4)^2) = sqrt(17), and u · v = (u)_B · (v)_B = (1)(-1) + (1)(4) = 3. Computing directly with the components of u and v gives the same values.

12. (a) We have u = -2 v1 + v2 + 2 v3 and v = 3 v1 + 0 v2 - 2 v3.
    (b) ||u|| = ||(u)_B|| = sqrt((-2)^2 + (1)^2 + (2)^2) = 3, ||v|| = ||(v)_B|| = sqrt((3)^2 + (0)^2 + (-2)^2) = sqrt(13), and u · v = (u)_B · (v)_B = (-2)(3) + (1)(0) + (2)(-2) = -10. Computing directly with the components of u and v gives the same values.
13. ||u|| = ||(u)_B|| = sqrt((-1)^2 + (2)^2 + (1)^2 + (3)^2) = sqrt(15)
    ||v|| = ||(v)_B|| = sqrt((0)^2 + (-3)^2 + (1)^2 + (5)^2) = sqrt(35)
    ||w|| = ||(w)_B|| = sqrt((-2)^2 + (-4)^2 + (3)^2 + (1)^2) = sqrt(30)
    ||v + w|| = ||(v)_B + (w)_B|| = ||(-2, -7, 4, 6)|| = sqrt((-2)^2 + (-7)^2 + (4)^2 + (6)^2) = sqrt(105)
    ||v - w|| = ||(v)_B - (w)_B|| = ||(2, 1, -2, 4)|| = sqrt((2)^2 + (1)^2 + (-2)^2 + (4)^2) = 5
    v · w = (v)_B · (w)_B = (0)(-2) + (-3)(-4) + (1)(3) + (5)(1) = 20

14. ||u|| = ||(u)_B|| = sqrt((0)^2 + (0)^2 + (-1)^2 + (-1)^2) = sqrt(2)
    ||v|| = ||(v)_B|| = sqrt((5)^2 + (5)^2 + (-2)^2 + (-2)^2) = sqrt(58)
    ||w|| = ||(w)_B|| = sqrt((3)^2 + (0)^2 + (-3)^2 + (0)^2) = sqrt(18) = 3 sqrt(2)
    ||v + w|| = ||(v)_B + (w)_B|| = ||(8, 5, -5, -2)|| = sqrt((8)^2 + (5)^2 + (-5)^2 + (-2)^2) = sqrt(118)
    ||v - w|| = ||(v)_B - (w)_B|| = ||(2, 5, 1, -2)|| = sqrt((2)^2 + (5)^2 + (1)^2 + (-2)^2) = sqrt(34)
    v · w = (v)_B · (w)_B = (5)(3) + (5)(0) + (-2)(-3) + (-2)(0) = 21
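Theorem 7.11.2 — norms and dot products may be computed from coordinates relative to an orthonormal basis — can be checked numerically. A minimal sketch in plain Python; the basis below is an arbitrary rotation of the standard basis of R^2, and the coordinate vectors (1, 1) and (-1, 4) are the ones appearing in Exercise 11:

```python
from math import sqrt, sin, cos

# An orthonormal basis for R^2: rotate e1, e2 by a fixed angle.
t = 0.7
v1 = (cos(t), sin(t))
v2 = (-sin(t), cos(t))

def from_coords(c):
    """Rebuild u from its coordinate vector (u)_B = (c1, c2)."""
    return tuple(c[0] * v1[i] + c[1] * v2[i] for i in range(2))

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

cu, cv = (1.0, 1.0), (-1.0, 4.0)   # (u)_B and (v)_B
u, v = from_coords(cu), from_coords(cv)

# ||u|| = ||(u)_B||, ||v|| = ||(v)_B||, and u.v = (u)_B . (v)_B
assert abs(sqrt(dot(u, u)) - sqrt(2)) < 1e-12
assert abs(sqrt(dot(v, v)) - sqrt(17)) < 1e-12
assert abs(dot(u, v) - 3.0) < 1e-12
```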
15. Let B = {e1, e2} be the standard basis for R^2, and let B' = {v1, v2} be the basis corresponding to the x'y'-system that is described in Figure Ex-15. Then P_{B'→B} = [[v1]_B | [v2]_B], and P_{B→B'} = (P_{B'→B})^(-1). It follows that x'y'-coordinates are related to xy-coordinates by the equations x' = x - y and y' = sqrt(2) y. In particular:
    (a) If (x, y) = (1, 0), then (x', y') = (1, 0).
    (b) If (x, y) = (1, 1), then (x', y') = (0, sqrt(2)).
    (c) If (x, y) = (0, 1), then (x', y') = (-1, sqrt(2)).
    (d) If (x, y) = (a, b), then (x', y') = (a - b, sqrt(2) b).

16. Let B = {i, j} be the standard basis for R^2, and let B' = {u1, u2} be the basis for the x'y'-system as described in Figure Ex-16. Then x'y'-coordinates are related to xy-coordinates by the equations x' = (2/sqrt(3)) x and y' = -(1/sqrt(3)) x + y. In particular:
    (a) If (x, y) = (sqrt(3), 1), then (x', y') = (2, 0).
    (b) If (x, y) = (1, 0), then (x', y') = (2/sqrt(3), -1/sqrt(3)).
    (c) If (x, y) = (0, 1), then (x', y') = (0, 1).
    (d) If (x, y) = (a, b), then (x', y') = ((2/sqrt(3)) a, -(1/sqrt(3)) a + b).
(v2)s)
(b) The row reduced echelon form of [B
0)1 IS. [10
:2 - ::J I
:
[1
4
0
I S'J =
OI
I
= (2l
4
rr
li _J..
I
I I
1];
3
- ).
-1
thus Ps-+B ::::
ll
4
3 ]
IT
Ti
[
(c)
l
2
-n H
.
Note that
-J]
1 = [2
{Ps~s)~
1
4
-
(d) We have w = v 1 - v z; thus [w)s
(e)
We have w
=3e
4 3
.!..
·
li [ -1 'J.] -- PS-tB·
-I -
= [_~] and lwls =
6e2; thus [w]s
1 -
Ps-ts[w}s
=[_~) and [w]D =Ps
,s[w]s
G-!][_ ~]
=
=
[_~]·
[_ ~ ~] [_!] = [=~J.
;]
~
18. (a)
=
0 8
(b) The malrix [B I SJ
:'!:
[~ ~ ~ j~ ~ ~1
0
·40
13
5
I
I
1 0 B10 0 1
form;
01
(J I
l, i
9] as its reduced row echelon
H)
-·.5 ·-·.1
-2 ·-1
Lim~ Ps_.n .,., [-~~ ~~ -~]5 -2 -1
(c)
lt is easy to chec.k tho.t
0]
.t o : t.hus ( Po -.s)[~ ~ ~1 [-~~ ~~ -~] - [~ 0
I 0 8
-2
l 5]
~~ I
(d)
-1
[1
0
()
0
0
w = 5et- 3e,
+ e3;
1
I
thus [wls =
~']
I
and lw] s
[-3l
0
0
I
2.'1!J1
o :.
a - 3 has o
8
(e)
5
as its reduceu row echelon form; thus
)6 -391 [- 35]
= Ps ·-tH[w)s = [-- 40
13 -5
5 -2 -1
19. (a) The row reduced echelon form of !Bt I B2] -=
(22
4_1
- 1
I
I
l
-1] . [)0
3 -l
!S
[11 -~] .
-g
10
2
(b) The row reduced echelon form of [B2I
[ 0
_§)
-2 - ;; .
{c)
·r:
Note that (Ps~_.s,)- 1 = ~1 - ~
[
13
= Ps-+B·
1
77
30
I
1
-11 2
Bd '"" [~ -l
t2
I
~] i:.
-1
[~
ro Lt) = Ps,-.82 ·
~ (-5) ~
10
1
=
lr -239]
77 .
30
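The transition-matrix computations in the surrounding exercises reduce to one identity: if the columns of P1 and P2 hold the B1 and B2 basis vectors in standard coordinates, then P_{B1→B2} = P2^(-1) P1, which is exactly what row reduction of [B2 | B1] produces. A sketch with hypothetical 2x2 bases (not the book's data):

```python
def inv2(M):
    """Inverse of a 2x2 matrix."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Hypothetical bases: columns of P1 are B1 = {(1, 1), (1, 0)};
# columns of P2 are B2 = {(2, 1), (1, 1)}.
P1 = [[1.0, 1.0], [1.0, 0.0]]
P2 = [[2.0, 1.0], [1.0, 1.0]]
P = matmul(inv2(P2), P1)   # P_{B1 -> B2}

# Sanity check: the same vector is recovered through either basis.
c1 = [3.0, -2.0]                                             # [x]_B1
x = [P1[i][0] * c1[0] + P1[i][1] * c1[1] for i in range(2)]  # x in standard coords
c2 = [P[i][0] * c1[0] + P[i][1] * c1[1] for i in range(2)]   # [x]_B2 via P
x2 = [P2[i][0] * c2[0] + P2[i][1] * c2[1] for i in range(2)]
assert all(abs(x[i] - x2[i]) < 1e-12 for i in range(2))
```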
= -;i0u 1 + ~u 2 ; thus [w]B, =
[-11
(d )
w
(e)
We ha,·e w - 4v 1 -7v 2i thus (w]s,
=[=~]and (w)a,
(b) The row reduced echelon form of (B2I Bt]
= [3'
-11 [-11 = [=:].
r~1 -~] r=~l = [-11·
Ps,-.a,I·'~IB, = [_~
and (w]B, =
=Pa,.... a, (w)s, =
01 - 21 - 5]3
1 :•1 2] IS
. (1
0
4 2 3
I
1
I
thus Pa,-+B, =
;
[_~ -~l.
21.
[_~ -~r· = (-1)[-~ -~] =
(c)
Note that (Pe,-.e.)- 1 =
(d )
We have w - 2u1 - u 2; thus [w]B,
(e)
Wf! have
(a)
w
:3v 1
v 2; thus
-
= [_~]and (w)e, ·= Pe, ... a,(w]s, = [_~ -~]
[
-6 -2 -2 -3 -3 1] [Io
=[
I
-6 -6 -3:
0
0
7 I -3 - 1 -1
4
0
3
l
I
I
1
0
:-~4 -u -.!1
1 .I
1'2
I
0
q
6 is
2
[_~]·
0
0
3
=
4
4
i2
17
0
'2
.)
2
3
-H -H
~
u
lfw=(-5,8
[_n = r-~]·
= [_~]and [w]B 1 = Pa, ... e ,[wJo, = [_~ _~] [_~]
The row redured echelon form of [B2l Bt]
~
~
j~l
thus Ps, -BJ = -~
(b )
[w]a 2
Pe, .... s, .
~
5).wehave(w )a 1 =(l, l , l )andso
3
4
-1Jm,HJ
17
-12
2
3
(c)
The row reduced e<'helon form of IB2 I w] =
;
[=: =~ =~ l -~l [~
is
0
4
71-5
(ti,- g, ~).which agrees with the computation in part (b).
22.
(a )
The row reduced echelon form of the matdx (B2I B t]
I
[
0
0
0
1
0
0: 3 2
0:-2 - 3
11.'1
I
~]
-4 ; thus Pa,_.a, =
6
[ 3 2
-2 -3
51
=[
~]
-4 .
6
(b ) Ifw = (-5,~ -5), we have (w )a 1 = (9, -9,-5) and so
~
-S
0
II
!..2]
1
4
I~
o! -H
I
I
I -1
1
I
2
2
I
0
I
I
-1
-3
2
1
I
I
l
; tHus (w )s,""
~ ~ !-:] is [o
1
3
(c) The row reduced echeloq form of [B2 I w) = [
·-5-3
(-t,
23.
2
,},
21 - 5
I
'2
l
0o:•-X
]
¥ ; t hus (w)a~ =
0
l :
()
0
-
0
6), which agrees with the computation in part (b).
The vector equation c 1u 1 + c2u2
+ C3 ll3 =
v1 is equivalent to
[* -t ~] [:] [ ~] ,and
=
'7J
?s•
C3
-76
O
the ·
tTl
solution of this system is c1 =
c2 = ~, c3 = - ~ . Thus (vt) s 1 = (,h, ~~ -~ ). Similarly, we
have (v2)s1 = (~,0, ~)and (v3)s1 = (- 3 ~, 2 ~,
Thus the transition matrix PB 2 -4B, is
Ps1-+B, =
l).
['
_
1
76
~
'l
3
-372
0
21
1
4.
2\/'J
3¥"2
H is <'MY to check th <lt
1
2
0
-~] - [l0 10 0]0
:l,i2
-
0
.!.
6
Thus PFJ 1 4 Bt is an orthogonal matrix, a.nd !lince Pr; ,-+ B~
true of Po, -·•B, .
=
(Pe,-~a ,)"· t
0
= (Pn?-•B,rr ,
1
the same is
24.
25.
(a)
We have Vt = (0, 1) = Oe 1 + le2 and vz
( b) 1f P
= Ps-.s = [~ ~]
= (l, 0) = le\
t Oe2 ; thus
then, since P is orthogonal, we have
pT
Geometrically, this corresponds to the fact that reflection about y
is an orthogonal transformation.
~6.
(a)
Po .... s
= p- 1
=x
= [o1
1} .
0
= (Ps ..... s)- 1 = Ps-+D ·
preserves length and thus
sin28]
= (sin20, -cos2B); thus Pa..... s = [:7~~: - cos28
·
then, since P is orthogonal, we have pr = p - l = (PB_.s)- 1 = Ps ..... b · Geomct-
We have Vt = (cos20,sin28) and Vz
(b) II P = PB .... s
ric~y. this corresponds to the fact that reflection about the line preserves length a.nd thus is
an orthogonal transformation .
27.
If
(a)
(x) = (-2],
lhen [~:] = [ c~ca;>
6
11
-sm(
3; )
y
If
(b)
-72
3
: >] [:r:] = r- 72
[x:)= [5]' then [;r] = [cose4")
-sin(
~
sm( ft)
cos(
ll
28.
(:r] = [-~
sin(34")]
cos( 3; } Y
2
3
4
ll
3 ")
4
ll
rJ [:]-= l-~ rJ [-~] [-~:~]·
-rJ[~:1 [~ -rJ[~l [~ ~l-
If [:] [-~].then[~]= l-~
(a)
z
(b) If
r::J = [~l· then [:J = [~
=
=
=
* nl l
nl,
29. (a) If [:] = then [::] =[-::(;) :11~ ~][~] +~
~] = [~
] [ 1] [x] [cos(~)
(b) rr x'
[~: - ~ lhen ~ ;: sin~fi} -c:i;~~) ~] r~:l [~ ~ ~] ~] =
=
=
'
0
If[~] {i]· then[~:]=
(b) If[::]= Ul· then[~]=
30
(a)
31.
We have [::]
=[
~
1
-
[
0
z'
0
1
-3
[-t].
-3
[t :-:][:] [~ -rl nl [~~~]
u:!l [: ] =u: !l Ul =[~~~]
=
=
:
r;] m [~:] [! :-!] [: ]
and
=
r
Thus
[~::J = [! r -;)[-t ~l m [ -4
~
.:.::]
•I
DISCUSSION AND DISCOVERY
- .J_
I~
[
7
R
l!]
IS
2.! •
- ~~
D2. (a) Let B = {v1, v2, v3}, where v1 = (1, 1, 0), v2 = (1, 0, 2), v3 = (0, 2, 1) correspond to the column vectors of the matrix P. Then, from Theorem 7.11.8, P is the transition matrix from B to the standard basis S = {e1, e2, e3}.
    (b) If P is the transition matrix from S = {e1, e2, e3} to B = {w1, w2, w3}, then e1 = w1 + w2, e2 = w1 + 2 w3, and e3 = 2 w2 + w3. Solving these vector equations for w1, w2, and w3 in terms of e1, e2, and e3 results in w1 = (4/5)e1 + (1/5)e2 - (2/5)e3 = (4/5, 1/5, -2/5), w2 = (1/5)e1 - (1/5)e2 + (2/5)e3 = (1/5, -1/5, 2/5), and w3 = -(2/5)e1 + (2/5)e2 + (1/5)e3 = (-2/5, 2/5, 1/5). Note that w1, w2, w3 correspond to the column vectors of the matrix P^(-1).
D3. If B = {v1, v2, v3} and if P is the transition matrix from B to the given basis, then v1 = (1, 1, 1), v2 = 3(1, 1, 0) + (1, 0, 0) = (4, 3, 0), and v3 = 2(1, 1, 0) + (1, 0, 0) = (3, 2, 0).

D4. If [w]_S = [w]_B holds for every w, then the transition matrix from the standard basis S to the basis B is P_{S→B} = [[e1]_B | [e2]_B | ... | [en]_B] = [e1 | e2 | ... | en] = I_n, and so B = S = {e1, e2, ..., en}.

D5. If [x]_B - [y]_B = 0, then [x]_B = [y]_B and so x = y.

WORKING WITH PROOFS
P1. If c1, c2, ..., ck are scalars, then (c1 v1 + c2 v2 + ... + ck vk)_B = c1 (v1)_B + c2 (v2)_B + ... + ck (vk)_B. Note also that (v)_B = 0 if and only if v = 0. It follows that c1 v1 + c2 v2 + ... + ck vk = 0 if and only if c1 (v1)_B + c2 (v2)_B + ... + ck (vk)_B = 0. Thus the vectors v1, v2, ..., vk are linearly independent if and only if (v1)_B, (v2)_B, ..., (vk)_B are linearly independent.

P2. The vectors v1, v2, ..., vk span R^n if and only if every vector v in R^n can be expressed as a linear combination of them, i.e., there exist scalars c1, c2, ..., ck such that v = c1 v1 + c2 v2 + ... + ck vk. Since (v)_B = c1 (v1)_B + c2 (v2)_B + ... + ck (vk)_B and the coordinate mapping v → (v)_B is onto, it follows that the vectors v1, v2, ..., vk span R^n if and only if (v1)_B, (v2)_B, ..., (vk)_B span R^n.

P3. Since the coordinate map x → [x]_B is onto, we have A[x]_B = C[x]_B for every x in R^n if and only if Ay = Cy for every y in R^n. Thus, using Theorem 3.4.4, we can conclude that A = C if and only if A[x]_B = C[x]_B for every x in R^n.

P4. Suppose B = {u1, u2, ..., un} is a basis for R^n. Then if v = a1 u1 + a2 u2 + ... + an un and w = b1 u1 + b2 u2 + ... + bn un, we have

v + w = (a1 u1 + a2 u2 + ... + an un) + (b1 u1 + b2 u2 + ... + bn un) = (a1 + b1) u1 + (a2 + b2) u2 + ... + (an + bn) un

Thus (cv)_B = (c a1, c a2, ..., c an) = c(a1, a2, ..., an) = c(v)_B and (v + w)_B = (a1 + b1, ..., an + bn) = (a1, a2, ..., an) + (b1, b2, ..., bn) = (v)_B + (w)_B.
CHAPTER 8
Diagonalization
EXERCISE SET 8.1
1. Computing [x]_B and [Tx]_B by expressing x and Tx in terms of the basis vectors, and forming [T]_B = [[Tv1]_B | [Tv2]_B], we note that [Tx]_B = [T]_B [x]_B, which is Formula (7).

2. Similarly, computing [x]_B, [Tx]_B, and [T]_B = [[Tv1]_B | [Tv2]_B] for the given operator and basis, we note that [Tx]_B = [T]_B [x]_B, which is Formula (7).
3. Let P = P_{B→S} where S = {e1, e2} is the standard basis. Then P = [[v1]_S | [v2]_S] = [1, -1; 1, 0] and

P[T]_B P^(-1) = [1, -1; 1, 0][2, -1; 2, 0][0, 1; -1, 1] = [1, -1; 1, 1] = [T]

4. Let P = P_{B→S} where S = {e1, e2} is the standard basis. Then P = [[v1]_S | [v2]_S], and, as in Exercise 3, P[T]_B P^(-1) = [T].

5. For every vector x in R^3, x can be written as a linear combination of v1, v2, v3 whose coefficients are linear functions of x1, x2, x3. Computing [x]_B and [Tx]_B from these expansions and forming [T]_B = [[Tv1]_B | [Tv2]_B | [Tv3]_B], we note that [Tx]_B = [T]_B [x]_B, which is Formula (7).
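The relation [T] = P [T]_B P^(-1) that these exercises verify can also be checked numerically. The sketch below uses a hypothetical basis and operator matrix (not data from the text): the standard matrix is recovered from the B-matrix via the transition matrix P whose columns are the basis vectors.

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(M):
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

# Hypothetical data: basis B = {(1, 1), (1, 0)} and a given [T]_B.
P = [[1.0, 1.0], [1.0, 0.0]]        # columns are v1, v2 in standard coords
TB = [[2.0, -1.0], [2.0, 0.0]]      # matrix of T relative to B
T = matmul(matmul(P, TB), inv2(P))  # standard matrix [T] = P [T]_B P^(-1)

# Consistency check: T v1 in standard coordinates must equal
# P times the first column of [T]_B (i.e., [T v1]_B converted back).
v1 = [1.0, 1.0]
Tv1 = [sum(T[i][j] * v1[j] for j in range(2)) for i in range(2)]
expected = [P[i][0] * TB[0][0] + P[i][1] * TB[1][0] for i in range(2)]
assert all(abs(Tv1[i] - expected[i]) < 1e-12 for i in range(2))
```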
6. For every vector x in R^3, x can be written as a linear combination of v1, v2, v3. Computing [x]_B and [Tx]_B from these expansions and forming [T]_B = [[Tv1]_B | [Tv2]_B | [Tv3]_B], we note that [Tx]_B = [T]_B [x]_B, which is Formula (7).
= [lvt]s
7. Le~ P =- Ps .... a where 5' is the standard basis. T hen P
3
0
P[TJaP-
1
=
I
-2
-2
1
I
ilH t] [-l
l
2
[;
l
1
l
I
1
-2
2
2
2
9. We have T v l
.. 'I nrIy, 1'vI1
Stmt
[T]s =
I~
3
0
_'J.
[()]
1
J
=IT
1
anti [T]s·
= [: _:)
and
4
= [~ -~]·
II
1
P[l]sP
Si1mlarly.
[Tis
P
=
l
= [-82
156]
_~
81i
61 ]
~5
= Pa ... FJ•
=[
GJ -
and [Tis· =
112
3
J
[
_;j
I
Since
Vj
26
I
1798[
") =
15 -1
=;
n
[0 -2 ~]
0
0
1
0
I] = - rrv
l2
= [TJ
-2
9
= 11 vl
4
1
V ?.
= v;- v2,
+ ~~ v2.
19 2 .
+ nv
I
Thus
we have p
=
I
2
2
-2
_86v
-15 1
+
32] =!Tis
-~
IT
II
L798v
and T v 2
4~
2
= [-3]16 -
6l v 90 1
~ v2 •
45
2
P[T is P -
1
= [1o8
- 21(_§§
-¥J
!i:
1
-!] [1:1
2
[tSh
_..!..]
_ = [-¥-7i ¥~]
= p- 1 [Tis•P.
- ~~]
11
3-AJ
- %-~
.!.2
11
i
1
=
[T]s··
T hus, from Exercise 9, we
[]1 1] = p-• [T]s· P
-1
= Pe.-s•.
12. The e>quation P[T]sP
· I=
[Tis· is equivalent to [T]s
= P - 1 [Tl"'•P.
have
86
= [ ~7 :!
45
whf•rt•, ns before.
={Tj
~5 [ =:) = - 321 v; - 72s v2 and Tv2 = [-~] i v; + ;sv;. Thus
31
9]
~ S1me v! = 10v1 r 8v2 and v 2 = - ~v~ - ¥ v2, we hnve
[Tis
[TJs
[-1
and
~] = [A
I
11 . The .-qualion P[T]B p - l = [Tis• is equivalent to [T]a
have
where, as before, P
r v2 -
;] and
_;]
1
0
Tv2 = [~~)
and
= v; + v2
=
- 8
8
an d
2
+ nv
2
TI
'2
lo -~ ]
ancl
[8
1
-M fl] [~
-1
4S
31
2
I
1] [_..:!..
11
~~
=
12
[i
II
~r 222] +
:
Tv~ = [:~] = -
~
-:J
[2] +IT2 r-3] = nv
3
r= ~ TI~]
10. We ha\·e Tvl
8
..1..
= r- l~] =-II r-~]- ~~ [_~] = - ~~ v 1- ~~V?.
=
j]
_ !
8
I I
Ps ... a•
=
n
3-1] [-y
2
-1
J] H
0
=
= [lv 1]s Jv,]s lv>lsl =
8. Let P - Ps ..... a where S is the standa rd basis. Then P
3
[v2Js [vJ]sJ
61]
90
13
-
90
49 - [ ..!.
-;rs
45
P = Po-e··
~]
~
2
[108
Thus, from Exercise
10, we
13. The standard matrix is [T}
= [1 ~ 2]
. ,
o 1
matrices are related by the equation
and, from Exercise 9, we have !T}s
ft
- rr
~].
u
= [~ -~
ll
'l 6
_25
TI
11
22
5
22
22
and, from Exercise 10, we have {T)s
= [~ _~] .
matrices are related by the equation
where P
1 5.
(a)
= [vl I v:J
For ev€r y x
= (:.c,y) in R 2 , we have
a.nd
16.
(b)
Ju agreement wit b l·hrmu Ia. (7), we hn.vr.
{a)
For every x -" (x.y,z) in R 3 , we ha.ve
and
\ .. Tx .,;,
.These
5] [-1~ l!..l [~ ~] = [10 -2]1 = jTJ
-1
P[T] 8 P - 1 = [
5 -3
14. The standard matrix is [T]
= [-
x+ Y + z1
[1]
[ 2y:~ 4z j =(~x+~y+~.z) ~
I
( v,
-?
+(- !x+h+~z)
[-1]~
/
!
) J •
'··
+(4z)
[0]~
These
(b) ln agreement with Formula. (7), we have
17. Fo r every vector x in R'l, we have
x =
[: 1 = (~x1
+ %xz)
[~] + (- {0x1 + -fo:r2) [-~1 = (~xl + ix2)v1 + (- 130xl + -foxz)v2
and
Finally, in agreement with Formula. (26), we h ave
18. For every vector x m R2 , we have
anrl
1
t hus lxls
= l -:tt
~ 4 Xl
; l·
1 X2
1- ;:t:z
jTxJ s'
=
- 2 :r1
1
[
3
z XI
l
+ 3I :r2
~x:z
+
, and ITJ s•s
'2
J%2
Finally, in agreement with Formula (26), we have
19.
For every vector x in R 3 , we have
= I!Tv1J B'
and
+ 2X3]
2x
x · =
[3Xt
Tx =
1
t hUS
[X IB
=
_
4xt
+ ~X2
I
l
[-
zXl l
2 x1
- ~X3] '
1
2 xz + '2:2:3
1
( --·x
5
r 1
2
-
- 1.. xz
14
[TX JB' =
,
-
4
)
;;XJ
,
r-3]
+ (6
2
- x1 1
[-rrt6 - 1
I
+ 2 x:z + 2x3
+ 12
- ;[J ) [1]
4
1
14X2- 'fX3]
3
2 .
7XJ-f4X2+'f:t3
~
3
-x2
1<1
1
ann [T]B'B
=
r-~ -! -!4]
I
'f
Ill
.
Iii
Finally, in agreement with Formula (26), we have
20. For every vector x in R 3 , we hnvc
and
-·~ -!]
lf
ITxl.~:~ •
21.
(a) [TvdB
=
=
rl -
lx1
- .2...t2
7
14'
4
l
7Xl- il.X:.!
[_~]
I·
-
~
14
-
13
Finally,
\.ii
GJ·
2v2 = [ -~]- 2GJ = [- ~]
and !Tv:da =
Tvl
(c)
For every vector x in R 2 , we have
VI -
•
l [-
ZJ
(b)
=
1
2
Md T vz
=
3vl +5v2
= 3 [-~} + s[~] = [}~]-
Thus, using the linearity ofT, it follows t hat
-5) + (-
Tx==(·- .-1 xt+ 52 x2)lf
..,
0
1
Xt 22x
5
~X2]
+ ll5 x 2
1
ix1- ~x2, fjx1 + Jtx2).
Using the f.--:-mt:la obtained in part (c) , we have T(l , l ) = (lri' - ~, ¥ + ¥) = ('5u, 353 ) .
or, in t he comma delimited form , T (xJ> x2) = ( 1
(d)
7] - [ l
2
1 ) [
5 x1+5x2
11
(c)
For every vector x in R 4 , we have
x - (~x1- 4x2- ~x3 + 121 x4)v1
+ (~xt
+ ~X4)v2
+ ~.r2 + !xJ- ~x~)v4
- 2.r2- 4x3
+ (- ~x1 + ~x2- !xJ- !x4)v3 + (-~:r1
I hu:>, using lht
Tx
lin~arity
ofT, it follows
(~ xt - 4x2 - !x3
th:t~
1; x4)Tv1
+ (~Xt - 2x2- !x3 + ~x4)Tv2
+ (-~x1 + ~x2- !x3- !x.a)Tv3 + (-~Xt + ~x2 + !x3- ~x4)Tv4
+
which leads to the following formula forT
(d )
23. 1f T
= (- 31 1 37, 12).
Using Lhe formula obtained in par t (c), we have T(2, 2, 0, 0)
IS
the identity operator then, since T e 1 = e 1 and Te2
[~~]and
[T]a
and Tv 2 = [
IT]s• =
[~ ~J On
the other hand, Tv1
= e2,
= [~]
we have IT]=
=
[~ ~].
Similarly,
H[~] + H[_:] = gv~ + ~~vz
~) = -{,v}- !:v;; thus ITJs•.B = [~ -~l·
~~
24. If Tis the identity operator then IT]
5I
= IT] B = IT] a• =
[~ ~ ~]· On the other hand, we have Tv1 =
0 0 l
3
]
[~
1+
2
=
37 [
36
-~J
19 [
?] -36 [g]~ =
36 - :
II
37 1
Jij vl
19
+ 36v21
1J 1
36 v 3,
T
_
v'2 -
[Q] ~ -
299 1
144vl
209
+ mv2.
1
229 I
144v3,
3
and Tv3 --
[9] =
173
4~
V.1 + .illv'
48 2 -
115
48 v'·
3•
thus [T}a•' D
=
4
25.
37
~
m
.IJ!
36
'109
299
ill]
48
119
48
m
[
'229
ll5
II
- 36 - m -48
.
=
Let B
{v 1, v2 , ,.. , v ,t) and B' = {u 1 , u 2, ... , u m} be ba.ses for R" a nd R m respectively. Then,
if T is the zero transformation , we have
for each i = 1, 2, ... , n. Thus {TJB',B = [0 I 0 I · · · I OJ is the zero matrix.
26. There is a. scalar k > 0 such that T(x) = kx for all x in R". T hus, if B = {v1 , v 2 , •.• , v,} is any
basis for Rn, we have T(v1) = kvi for aJl j = 1, 2, . .. , n and so
[T]a
~~ ~
0 0
k
[T]:
29.
We have Tv1 =
1 0
0 1
=k
[H!]
27 .
[ -~
=
0
0
0 0
28.
(~ ~] [ _ -i~] = [ ~] = -4vl
[TI
=
and Tv2 =
[~ ~J [~] = [ ~] =
6vz; thus {Tla
=
~) . From this we see that the effect of the operator 1' is to stre tch the v 1 component of a
vector by a fac tor of 4 and reverse its direction, and to stretch the v 2 component by a factor of 6.
If the xy-coordinat e axeg are rotated 45 degrees clockwise to produce an x'y'-coordinate system
who::;e axes are aligned with the directions of the vectors v 1 ttnd v 2 , then the effe-.t is ~o st.retch
by a fact or of 4 in t.he x'-dircction, reflect about the y'-axis, and stret ch by a factor of 6 in the
y'-direction.
30.
Wr.
have
ami
0]
[- 3; = - v'3v2 -
T v3 = -~
v3. Thus [1']a
=
[2o -10 -vf:J0] = 2 [10 -!0 - {/0]. From lhis we sec that
0 J3
- 1
0 :,(f
-~
t he effect of the operator 1' is to rotate vectors counterclockwise by an angle of 120 degrees about
the v, axis (looking toward the origin from the tip of v!), l-beu stretch by a factor of 2.
DISCUSSION AND DISCOVERY

D1. Since Tv1 = v2 and Tv2 = v1, the matrix of T with respect to the basis B = {v1, v2} is [T]_B = [0, 1; 1, 0]. On the other hand, since e1 and e2 are scalar multiples of v1 and v2 respectively, we have Te1 = 2 e2 and Te2 = (1/2) e1; thus the standard matrix for T is [T] = [0, 1/2; 2, 0].
276
Chapter 8
D2. The appropriate diagram is:

    (commutative diagram relating [T]_B and [T]_{B'} via the transition matrices P and P^{-1})

D3. The appropriate diagram is:

    (the analogous commutative diagram for the pair of bases B and B')
D4. One reason is that the representation of the operator in some other basis may more clearly reflect the geometric effect of the operator.

D5. (a) True. We have [T1(x)]_{B'} = [T1]_{B',B}[x]_B = [T2]_{B',B}[x]_B = [T2(x)]_{B'}; thus T1(x) = T2(x).

(b) False. For example, the zero operator has the same matrix (the zero matrix) with respect to any basis for R^2.

(c) True. If B = {v1, v2, ..., vn} and [T]_B = I, then T(vk) = vk for each k = 1, 2, ..., n, and it follows from this that T(x) = x for all x.

(d) False. For example, let B = {e1, e2}, B' = {e2, e1}, and T(x, y) = (y, x). Then [T]_{B',B} = I, but T is not the identity operator.
WORKING WITH PROOFS
P1. If x and y are vectors and c is a scalar then, since T is linear, we have

    c[x]_B = [cx]_B  ->  [T(cx)]_B = [cT(x)]_B = c[T(x)]_B

    [x]_B + [y]_B = [x + y]_B  ->  [T(x + y)]_B = [T(x) + T(y)]_B = [T(x)]_B + [T(y)]_B

This shows that the mapping [x]_B -> [T(x)]_B is linear.

P2. If x is in R^n and y is in R^k, then we have [T1(x)]_{B'} = [T1]_{B',B}[x]_B and [T2(y)]_{B''} = [T2]_{B'',B'}[y]_{B'}. Thus

    [T2(T1(x))]_{B''} = [T2]_{B'',B'}[T1(x)]_{B'} = [T2]_{B'',B'}[T1]_{B',B}[x]_B

and from this it follows that [T2 ∘ T1]_{B'',B} = [T2]_{B'',B'}[T1]_{B',B}.
P3. If x is a vector in R^n, then [T]_B[x]_B = [Tx]_B = 0 if and only if Tx = 0. Thus, if T is one-to-one, it follows that [T]_B[x]_B = 0 if and only if [x]_B = 0, i.e., that [T]_B is an invertible matrix. Furthermore, since [T^{-1}]_B[T]_B = [T^{-1} ∘ T]_B = [I]_B = I, we have [T^{-1}]_B = [T]_B^{-1}.
P4. [T]_B = [[T(v1)]_B | [T(v2)]_B | ... | [T(vn)]_B] = [T]_{B,B}
EXERCISE SET 8.2
1. We have tr(A) = 3 and tr(B) = -1; thus A and B are not similar.

2. We have det(A) = 18 and det(B) = 14; thus A and B are not similar.

3. We have rank(A) = 3 and rank(B) = 2; thus A and B are not similar.

4. We have rank(A) = 1 and rank(B) = 2; thus A and B are not similar.

5. (a) The size of the matrix corresponds to the degree of its characteristic polynomial; so in this case we have a 5 × 5 matrix. The eigenvalues of the matrix with their algebraic multiplicities are λ = 0 (multiplicity 1), λ = -1 (multiplicity 2), and λ = 1 (multiplicity 2). The eigenspace corresponding to λ = 0 has dimension 1, and the eigenspaces corresponding to λ = -1 or λ = 1 have dimension 1 or 2.

(b) The matrix is 11 × 11 with eigenvalues λ = -3 (multiplicity 1), λ = -1 (multiplicity 3), and λ = 8 (multiplicity 7). The eigenspace corresponding to λ = -3 has dimension 1; the eigenspace corresponding to λ = -1 has dimension 1, 2, or 3; and the eigenspace corresponding to λ = 8 has dimension 1, 2, 3, 4, 5, 6, or 7.
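The invariant comparisons used in Exercises 1–4 (similar matrices must share trace, determinant, and rank) are easy to automate. A minimal sketch, using illustrative 2 × 2 matrices rather than the book's A and B:

```python
import numpy as np

# Hypothetical matrices (not the textbook's). Similar matrices must agree
# in trace, determinant, and rank, so any mismatch rules similarity out.
A = np.array([[2.0, 1.0],
              [0.0, 1.0]])
B = np.array([[2.0, 0.0],
              [0.0, 2.0]])

same_trace = np.isclose(np.trace(A), np.trace(B))
same_det   = np.isclose(np.linalg.det(A), np.linalg.det(B))
same_rank  = np.linalg.matrix_rank(A) == np.linalg.matrix_rank(B)

# If any invariant differs, A and B cannot be similar.
possibly_similar = bool(same_trace and same_det and same_rank)
print(possibly_similar)
```

Here the traces differ (3 versus 4), so the test immediately reports that the matrices cannot be similar. Note that agreement of all three invariants would not prove similarity; it only fails to rule it out.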
6.
(a) The matrix is 5 × 5 with eigenvalues λ = 0 (multiplicity 1), λ = 1 (multiplicity 1), λ = -2 (multiplicity 1), and λ = 3 (multiplicity 2). The eigenspaces corresponding to λ = 0, λ = 1, and λ = -2 each have dimension 1. The eigenspace corresponding to λ = 3 has dimension 1 or 2.

(b) The matrix is 6 × 6 with eigenvalues λ = 0 (multiplicity 2), λ = 6 (multiplicity 1), and λ = 2 (multiplicity 3). The eigenspace corresponding to λ = 6 has dimension 1; the eigenspace corresponding to λ = 0 has dimension 1 or 2; and the eigenspace corresponding to λ = 2 has dimension 1, 2, or 3.
7. Since A is triangular, its characteristic polynomial is p(λ) = (λ - 1)(λ - 1)(λ - 2) = (λ - 1)²(λ - 2). Thus the eigenvalues of A are λ = 1 and λ = 2, with algebraic multiplicities 2 and 1 respectively. The eigenspace corresponding to λ = 1 is the solution space of the system (I - A)x = 0. The general solution of this system is a one-parameter family x = tv1; thus the eigenspace is 1-dimensional and so λ = 1 has geometric multiplicity 1. The eigenspace corresponding to λ = 2 is the solution space of the system (2I - A)x = 0. The general solution of this system is x = sv2; thus the eigenspace is 1-dimensional and so λ = 2 also has geometric multiplicity 1.
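The algebraic-versus-geometric multiplicity comparison in Exercise 7 can be checked numerically, since the geometric multiplicity of λ is nullity(λI - A) = n - rank(λI - A). The triangular matrix below is a stand-in with the same eigenvalue pattern (1, 1, 2), not the book's matrix:

```python
import numpy as np

# Illustrative triangular matrix with eigenvalues 1, 1, 2 (not the book's A).
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 2.0]])
I = np.eye(3)

# Geometric multiplicity of lam = nullity(lam*I - A) = n - rank(lam*I - A).
geo_mult_1 = 3 - np.linalg.matrix_rank(1.0 * I - A)
geo_mult_2 = 3 - np.linalg.matrix_rank(2.0 * I - A)

# Algebraic multiplicities are 2 and 1, but both geometric multiplicities are 1.
print(geo_mult_1, geo_mult_2)
```

The off-diagonal 1 in the λ = 1 block is what drops the geometric multiplicity below the algebraic one, exactly the situation described in the solution.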
8. The eigenvalues of A are λ = 1, λ = 3, and λ = 5, each with algebraic multiplicity 1 and geometric multiplicity 1.
9. The characteristic polynomial of A is p(λ) = det(λI - A) = (λ - 5)²(λ - 3). Thus the eigenvalues of A are λ = 5 and λ = 3, with algebraic multiplicities 2 and 1 respectively. The eigenspace corresponding to λ = 5 is the solution space of the system (5I - A)x = 0. The general solution of this system is x = tv1; thus the eigenspace is 1-dimensional and so λ = 5 has geometric multiplicity 1. The eigenspace corresponding to λ = 3 is the solution space of the system (3I - A)x = 0. The general solution of this system is x = sv2; thus the eigenspace is 1-dimensional and so λ = 3 also has geometric multiplicity 1.
10. The characteristic polynomial of A is (λ + 1)(λ - 3)². Thus the eigenvalues of A are λ = -1 and λ = 3, with algebraic multiplicities 1 and 2 respectively. The eigenspace corresponding to λ = -1 is 1-dimensional, and so λ = -1 has geometric multiplicity 1. The eigenspace corresponding to λ = 3 is the solution space of the system (3I - A)x = 0. The general solution of this system is x = sv1 + tv2; thus the eigenspace is 2-dimensional, and so λ = 3 has geometric multiplicity 2.
11. The characteristic polynomial of A is p(λ) = λ³ + 3λ² = λ²(λ + 3); thus the eigenvalues are λ = 0 and λ = -3, with algebraic multiplicities 2 and 1 respectively. The rank of the matrix 0I - A = -A is clearly 1 since each of the rows is a scalar multiple of the 1st row. Thus nullity(0I - A) = 3 - 1 = 2, and this is the geometric multiplicity of λ = 0. On the other hand, the matrix -3I - A has rank 2 (its reduced row echelon form has two nonzero rows). Thus nullity(-3I - A) = 3 - 2 = 1, and this is the geometric multiplicity of λ = -3.

12. The characteristic polynomial of A is (λ - 1)(λ² - 2λ + 2); thus λ = 1 is the only real eigenvalue of A. The reduced row echelon form of the matrix 1I - A has two nonzero rows; thus the rank of 1I - A is 2 and the geometric multiplicity of λ = 1 is nullity(1I - A) = 3 - 2 = 1.
13. The characteristic polynomial of A is p(λ) = λ³ - 11λ² + 39λ - 45 = (λ - 5)(λ - 3)²; thus the eigenvalues are λ = 5 and λ = 3, with algebraic multiplicities 1 and 2 respectively. The matrix 5I - A has rank 2, since its reduced row echelon form has two nonzero rows. Thus nullity(5I - A) = 3 - 2 = 1, and this is the geometric multiplicity of λ = 5. On the other hand, the matrix 3I - A has rank 1 since each of the rows is a scalar multiple of the 1st row. Thus nullity(3I - A) = 3 - 1 = 2, and this is the geometric multiplicity of λ = 3. It follows that A has 1 + 2 = 3 linearly independent eigenvectors and thus is diagonalizable.
14. The characteristic polynomial of A is (λ + 2)(λ - 1)²; thus the eigenvalues are λ = -2 and λ = 1, with algebraic multiplicities 1 and 2 respectively. The matrix -2I - A has rank 2, and the matrix 1I - A has rank 1. Thus λ = -2 has geometric multiplicity 1, and λ = 1 has geometric multiplicity 2. It follows that A has 1 + 2 = 3 linearly independent eigenvectors and thus is diagonalizable.
15. The characteristic polynomial of A is p(λ) = λ² - 3λ + 2 = (λ - 1)(λ - 2); thus A has two distinct eigenvalues, λ = 1 and λ = 2. The eigenspace corresponding to λ = 1 is obtained by solving the system (I - A)x = 0. The general solution of this system is x = t[4/5; 1]. Thus, taking t = 5, we see that p1 = [4; 5] is an eigenvector for λ = 1. Similarly, p2 = [3; 4] is an eigenvector for λ = 2. Finally, the matrix P = [p1 | p2] = [[4, 3],[5, 4]] has the property that

P^{-1}AP = [  4  -3 ] [ -14  12 ] [ 4  3 ]   [ 1  0 ]
           [ -5   4 ] [ -20  17 ] [ 5  4 ] = [ 0  2 ]
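The diagonalization in Exercise 15 can be confirmed directly with the numbers above:

```python
import numpy as np

A = np.array([[-14.0, 12.0],
              [-20.0, 17.0]])
P = np.array([[4.0, 3.0],
              [5.0, 4.0]])         # columns are the eigenvectors p1, p2

D = np.linalg.inv(P) @ A @ P       # D is diag(1, 2) up to roundoff
print(np.round(D, 6))
```

Since A has two distinct eigenvalues, any choice of one eigenvector per eigenvalue gives an invertible P; the columns [4, 5] and [3, 4] are simply the integer choices made in the solution.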
16. The characteristic polynomial of A is (λ - 1)(λ + 1); thus A has two distinct eigenvalues, λ1 = 1 and λ2 = -1. Corresponding eigenvectors are p1 and p2, and the matrix P = [p1 | p2] has the property that

P^{-1}AP = [ 1   0 ]
           [ 0  -1 ]
17. The characteristic polynomial of A is p(λ) = λ(λ - 1)(λ - 2); thus A has three distinct eigenvalues, λ = 0, λ = 1, and λ = 2. The eigenspace corresponding to λ = 0 is obtained by solving the system (0I - A)x = 0; its general solution is x = rv1. Similarly, the general solution of (I - A)x = 0 is x = sv2, and the general solution of (2I - A)x = 0 is x = tv3. Thus the matrix P = [v1 | v2 | v3] has the property that

P^{-1}AP = [ 0  0  0 ]
           [ 0  1  0 ]
           [ 0  0  2 ]
18. The characteristic polynomial of A is (λ - 2)(λ - 3)²; thus the eigenvalues of A are λ = 2 and λ = 3. The vector v1 is an eigenvector corresponding to λ = 2, and v2 and v3 are linearly independent eigenvectors corresponding to λ = 3. The matrix P = [v1 | v2 | v3] has the property that

P^{-1}AP = [ 2  0  0 ]
           [ 0  3  0 ]
           [ 0  0  3 ]
19. The characteristic polynomial of A is p(λ) = λ³ - 6λ² + 11λ - 6 = (λ - 1)(λ - 2)(λ - 3); thus A has three distinct eigenvalues, λ1 = 1, λ2 = 2, and λ3 = 3. Corresponding eigenvectors are v1, v2, and v3. Thus A is diagonalizable and P = [v1 | v2 | v3] has the property that

P^{-1}AP = [ 1  0  0 ]
           [ 0  2  0 ]
           [ 0  0  3 ]

Note: The diagonalizing matrix P is not unique; it depends on the choice (and the order) of the eigenvectors. This is just one possibility.
20.
The characteristic polynomial of A is p(λ) = λ³ - 4λ² + 5λ - 2 = (λ - 2)(λ - 1)²; thus A has two eigenvalues, λ = 2 and λ = 1. The general solution of (2I - A)x = 0 is a one-parameter family x = tv1, which shows that the eigenspace corresponding to λ = 2 has dimension 1. Similarly, the general solution of (I - A)x = 0 is x = sv2, which shows that the eigenspace corresponding to λ = 1 also has dimension 1. It follows that the matrix A is not diagonalizable since it has only two linearly independent eigenvectors.
21. The characteristic polynomial of A is p(λ) = (λ - 5)³; thus A has one eigenvalue, λ = 5, which has algebraic multiplicity 3. The eigenspace corresponding to λ = 5 is obtained by solving the system (5I - A)x = 0. The general solution of this system is x = tv1, which shows that the eigenspace has dimension 1, i.e., the eigenvalue has geometric multiplicity 1. It follows that A is not diagonalizable since the sum of the geometric multiplicities of its eigenvalues is less than 3.
22. The characteristic polynomial of A is λ²(λ - 1); thus the eigenvalues of A are λ = 0 and λ = 1. The vector v1 forms a basis for the eigenspace corresponding to λ = 1. The eigenspace corresponding to λ = 0 has dimension 2, and the vectors v2 and v3 form a basis for this space. Thus A is diagonalizable and the matrix P = [v1 | v2 | v3] has the property that

P^{-1}AP = [ 1  0  0 ]
           [ 0  0  0 ]
           [ 0  0  0 ]
23. The characteristic polynomial of A is p(λ) = (λ + 2)²(λ - 3)²; thus A has two eigenvalues, λ = -2 and λ = 3, each of which has algebraic multiplicity 2. The eigenspace corresponding to λ = -2 is obtained by solving the system (-2I - A)x = 0. The general solution of this system is x = rv1 + sv2, which shows that the eigenspace has dimension 2, i.e., that the eigenvalue λ = -2 has geometric multiplicity 2. On the other hand, the general solution of (3I - A)x = 0 is x = tv3, and so λ = 3 has geometric multiplicity 1. It follows that A is not diagonalizable since the sum of the geometric multiplicities of its eigenvalues is less than 4.
24. The characteristic polynomial of A is p(λ) = (λ + 2)²(λ - 3)²; thus A has two eigenvalues, λ = -2 and λ = 3, each of algebraic multiplicity 2. The vectors v1 and v2 form a basis for the eigenspace corresponding to λ = -2, and the vectors v3 and v4 form a basis for the eigenspace corresponding to λ = 3. Thus A is diagonalizable and the matrix P = [v1 | v2 | v3 | v4] has the property that

P^{-1}AP = [ -2   0  0  0 ]
           [  0  -2  0  0 ]
           [  0   0  3  0 ]
           [  0   0  0  3 ]
25. If the matrix A is upper triangular with 1's on the main diagonal, then its characteristic polynomial is p(λ) = (λ - 1)^n and λ = 1 is the only eigenvalue. Thus, in order for A to be diagonalizable, the system (I - A)x = 0 must have n linearly independent solutions. But, if this is true, then (I - A)x = 0 for every vector x in R^n and so I - A is the zero matrix, i.e., A = I.
26. If A is a 3 × 3 matrix with a three-dimensional eigenspace, then A has only one eigenvalue, λ = λ1, which is of geometric multiplicity 3. In other words, the eigenspace corresponding to λ1 is all of R³. It follows that Ax = λ1x for all x in R³, and so A = λ1I is a diagonal matrix.
27. If C is similar to A then there is an invertible matrix P such that C = P^{-1}AP. It follows that if A is invertible, then C is invertible since it is a product of invertible matrices. Similarly, since PCP^{-1} = A, the invertibility of C implies the invertibility of A.
28. If P = [p1 | p2 | ... | pn], then AP = [Ap1 | Ap2 | ... | Apn] and PD = [λ1p1 | λ2p2 | ... | λnpn], where λ1, λ2, ..., λn are the diagonal entries of D. Thus Apk = λkpk for each k = 1, 2, ..., n; i.e., λk is an eigenvalue of A and pk is an eigenvector corresponding to λk.
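The column-by-column reading of AP = PD in Exercise 28 is easy to observe numerically; the matrices below are illustrative, not from the text:

```python
import numpy as np

# Illustrative example: build A = P D P^{-1}, so that AP = PD by construction.
D = np.diag([1.0, 2.0, 3.0])
P = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
A = P @ D @ np.linalg.inv(P)

# Each column p_k of P satisfies A p_k = lambda_k p_k.
for k, lam in enumerate(np.diag(D)):
    assert np.allclose(A @ P[:, k], lam * P[:, k])
print("AP = PD column identity verified")
```

This is exactly the argument of the exercise run in reverse: equality of the matrix products AP and PD is the same statement as the n separate eigenvector equations.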
29. The standard matrix of the linear operator T is A, and the characteristic polynomial of A is p(λ) = λ³ + 6λ² + 9λ = λ(λ + 3)². Thus the eigenvalues of T are λ = 0 and λ = -3, with algebraic multiplicities 1 and 2 respectively. Since λ = 0 has algebraic multiplicity 1, its geometric multiplicity is also 1. The eigenspace associated with λ = -3 is found by solving (-3I - A)x = 0. The general solution of this system is x = sv1 + tv2; thus λ = -3 has geometric multiplicity 2. It follows that T is diagonalizable since the sum of the geometric multiplicities of its eigenvalues is 3.
30. The standard matrix of the operator T is A, and the characteristic polynomial of A is (λ + 2)(λ - 1)². Thus the eigenvalues of T are λ = -2 and λ = 1, with algebraic multiplicities 1 and 2 respectively. Since λ = -2 has algebraic multiplicity 1, its geometric multiplicity is also 1. The eigenspace associated with λ = 1 is found by solving the system (I - A)x = 0. Since the matrix I - A has rank 1, the solution space of (I - A)x = 0 is two-dimensional, i.e., λ = 1 is an eigenvalue of geometric multiplicity 2. It follows that T is diagonalizable since the sum of the geometric multiplicities of its eigenvalues is 3.
31. If x is a vector in R^n and λ is a scalar, then [Tx]_B = [T]_B[x]_B and [λx]_B = λ[x]_B. It follows that Tx = λx if and only if [T]_B[x]_B = [Tx]_B = [λx]_B = λ[x]_B; thus x is an eigenvector of T corresponding to λ if and only if [x]_B is an eigenvector of [T]_B corresponding to λ.
32. The characteristic polynomial of A is p(λ) = det(λI - A) = λ² - (a + d)λ + (ad - bc), and the discriminant of this quadratic polynomial is (a + d)² - 4(ad - bc) = (a - d)² + 4bc.

(a) If (a - d)² + 4bc > 0, then p(λ) has two distinct real roots; thus A is diagonalizable since it has two distinct eigenvalues.

(b) If (a - d)² + 4bc < 0, then p(λ) has no real roots; thus A has no real eigenvalues and is therefore not diagonalizable.
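The algebraic identity behind Exercise 32, (a + d)² - 4(ad - bc) = (a - d)² + 4bc, follows by expanding both sides; a quick randomized spot-check:

```python
import random

# Verify (a + d)^2 - 4(ad - bc) == (a - d)^2 + 4bc on random integers.
# Both sides expand to a^2 - 2ad + d^2 + 4bc, so equality is exact.
for _ in range(1000):
    a, b, c, d = (random.randint(-50, 50) for _ in range(4))
    lhs = (a + d) ** 2 - 4 * (a * d - b * c)
    rhs = (a - d) ** 2 + 4 * b * c
    assert lhs == rhs
print("discriminant identity holds")
```

Since the sides agree exactly for integer inputs, the identity holds for all real a, b, c, d (both sides are the same polynomial).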
DISCUSSION AND DISCOVERY
D1. The matrices are not similar since rank(A) = 1 and rank(B) = 2.

D2. (a) True. We have A = P^{-1}AP where P = I.

(b) True. If A is similar to B and B is similar to C, then there are invertible matrices P1 and P2 such that A = P1^{-1}BP1 and B = P2^{-1}CP2. It follows that A = P1^{-1}(P2^{-1}CP2)P1 = (P2P1)^{-1}C(P2P1); thus A is similar to C.

(c) True. If A = P^{-1}BP, then A^{-1} = (P^{-1}BP)^{-1} = P^{-1}B^{-1}(P^{-1})^{-1} = P^{-1}B^{-1}P.

(d) False. This statement does not guarantee that there are enough linearly independent eigenvectors. For example, the matrix A = [[1, 0, 0],[0, 0, -1],[0, 1, 0]] has only one real eigenvalue, λ = 1, which has algebraic multiplicity 1, but A is not diagonalizable.

D3. (a) False. For example, I = [[1, 0],[0, 1]] is diagonalizable.

(b) False. For example, if P^{-1}AP is a diagonal matrix then so is Q^{-1}AQ where Q = 2P. The diagonalizing matrix (if it exists) is not unique!

(c) True. Vectors from different eigenspaces correspond to different eigenvalues and are therefore linearly independent. In the situation described, {v1, v2, v3} is a linearly independent set.

(d) True. If an invertible matrix A is similar to a diagonal matrix D, then D must also be invertible; thus D has nonzero diagonal entries and D^{-1} is the diagonal matrix whose diagonal entries are the reciprocals of the corresponding entries of D. Finally, if P is an invertible matrix such that P^{-1}AP = D, we have P^{-1}A^{-1}P = (P^{-1}AP)^{-1} = D^{-1}, and so A^{-1} is similar to D^{-1}.

(e) True. The vectors in a basis are linearly independent; thus A has n linearly independent eigenvectors.
D4. (a) A is a 6 × 6 matrix.

(b) The eigenspace corresponding to λ = 1 has dimension 1. The eigenspace corresponding to λ = 3 has dimension 1 or 2. The eigenspace corresponding to λ = 4 has dimension 1, 2, or 3.

(c) If A is diagonalizable, then the eigenspaces corresponding to λ = 1, λ = 3, and λ = 4 have dimensions 1, 2, and 3 respectively.

(d) These vectors must correspond to the eigenvalue λ = 4.
D5. (a) If λ1 has geometric multiplicity 2 and λ2 has geometric multiplicity 3, then λ3 must have geometric multiplicity 1. Thus the sum of the geometric multiplicities is 6 and so A is diagonalizable.

(b) In this case the matrix is not diagonalizable since the sum of the geometric multiplicities of the eigenvalues is less than 6.

(c) The matrix may or may not be diagonalizable. The geometric multiplicity of λ3 must be 1 or 2. If the geometric multiplicity of λ3 is 2, then the matrix is diagonalizable. If the geometric multiplicity of λ3 is 1, then the matrix is not diagonalizable.
WORKING WITH PROOFS
Pl. If A and Bare similar, then there is an invertible matnx P such that A = p - t BP. Thus PA = BP
and so, usira,v the result of t he ci ted Exercise, we have rank( A) = rank(PA) = rank(BP) = rank(B )
and nullity(A) = nullity(PA ) nullity(BP) = nnllity( B).
P2. If A and B are similar then there is an invertible matrix P such that A = P^{-1}BP. Thus, using part (e) of Theorem 3.2.12, we have tr(A) = tr(P^{-1}BP) = tr(P^{-1}(BP)) = tr((BP)P^{-1}) = tr(B).
P3. If x ≠ 0 and Ax = λx then, since P is invertible and CP^{-1} = P^{-1}A, we have CP^{-1}x = P^{-1}Ax = P^{-1}(λx) = λP^{-1}x, with P^{-1}x ≠ 0. Thus P^{-1}x is an eigenvector of C corresponding to λ.
P4. If A and B are similar, then there is an invertible matrix P such that A = P^{-1}BP. We will prove, by induction, that A^k = P^{-1}B^k P (thus A^k and B^k are similar) for every positive integer k.

Step 1. The fact that A^1 = A = P^{-1}BP = P^{-1}B^1 P is given.

Step 2. If A^k = P^{-1}B^k P, where k is a fixed integer ≥ 1, then we have

    A^{k+1} = AA^k = (P^{-1}BP)(P^{-1}B^k P) = P^{-1}B(PP^{-1})B^k P = P^{-1}B^{k+1} P

These two steps complete the proof by induction.
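The conclusion of P4 can be sanity-checked numerically for a few powers (the matrices here are randomly generated stand-ins, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((3, 3))
P = rng.standard_normal((3, 3))      # generically invertible
A = np.linalg.inv(P) @ B @ P         # A is similar to B by construction

# A^k = P^{-1} B^k P for each k, exactly as the induction shows.
for k in range(1, 6):
    Ak = np.linalg.matrix_power(A, k)
    Bk = np.linalg.matrix_power(B, k)
    assert np.allclose(Ak, np.linalg.inv(P) @ Bk @ P)
print("A^k similar to B^k for k = 1..5")
```

The same script with D diagonal in place of B illustrates P5: powers of a diagonalizable matrix stay diagonalizable, with the same P.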
P5. If A is diagonalizable, then there is an invertible matrix P and a diagonal matrix D such that P^{-1}AP = D. We will prove, by induction, that P^{-1}A^k P = D^k for every positive integer k. Since D^k is diagonal, this shows that A^k is diagonalizable.

Step 1. The fact that P^{-1}A^1 P = P^{-1}AP = D = D^1 is given.

Step 2. If P^{-1}A^k P = D^k, where k is a fixed integer ≥ 1, then we have

    P^{-1}A^{k+1} P = P^{-1}AA^k P = (P^{-1}AP)(P^{-1}A^k P) = DD^k = D^{k+1}

These two steps complete the proof by induction.
P6. (a)
Let W be the eigenspace corresponding to λ0. Choose a basis {u1, u2, ..., uk} for W, then extend it to obtain a basis B = {u1, u2, ..., uk, u_{k+1}, ..., un} for R^n.

(b) If P = [u1 | u2 | ... | uk | u_{k+1} | ... | un] = [B1 | B2], then the product AP has the form AP = [λ0u1 | λ0u2 | ... | λ0uk | AB2]. On the other hand, if C is an n × n matrix whose first k columns are those of λ0I and whose remaining columns form a block Z, then PC has the form PC = [λ0u1 | λ0u2 | ... | λ0uk | PZ]. Thus if Z = P^{-1}AB2, we have AP = PC.

(c) Since AP = PC, we have P^{-1}AP = C. Thus A is similar to C, and so A and C have the same characteristic polynomial.

(d) Due to the special block structure of C, its characteristic polynomial has the form p(λ) = (λ - λ0)^k q(λ). Thus the algebraic multiplicity of λ0 as an eigenvalue of C, and of A, is greater than or equal to k.
EXERCISE SET 8.3
1. The characteristic polynomial of A is p(λ) = λ² - 5λ = λ(λ - 5). Thus the eigenvalues of A are λ = 0 and λ = 5, and each of the eigenspaces has dimension 1.

2. The characteristic polynomial of A is p(λ) = λ³ - 27λ - 54 = (λ - 6)(λ + 3)². Thus the eigenvalues of A are λ = 6 and λ = -3. The eigenspace corresponding to λ = 6 has dimension 1, and the eigenspace corresponding to λ = -3 has dimension 2.

3. The characteristic polynomial of A is p(λ) = λ³ - 3λ² = λ²(λ - 3). Thus the eigenvalues of A are λ = 0 and λ = 3. The eigenspace corresponding to λ = 0 has dimension 2, and the eigenspace corresponding to λ = 3 has dimension 1.

4. The characteristic polynomial of A is p(λ) = λ³ - 9λ² + 15λ - 7 = (λ - 7)(λ - 1)². Thus the eigenvalues of A are λ = 7 and λ = 1. The eigenspace corresponding to λ = 7 has dimension 1, and the eigenspace corresponding to λ = 1 has dimension 2.
5. The general solution of the system (0I - A)x = 0 is x = rv1 + sv2; thus the vectors v1 and v2 form a basis for the eigenspace corresponding to λ = 0. Similarly, the vector v3 forms a basis for the eigenspace corresponding to λ = 3. Since v3 is orthogonal to both v1 and v2, it follows that the two eigenspaces are orthogonal.

6. The general solution of (7I - A)x = 0 is x = tv1; thus the vector v1 forms a basis for the eigenspace corresponding to λ = 7. Similarly, the vectors v2 and v3 form a basis for the eigenspace corresponding to λ = 1. Since v1 is orthogonal to both v2 and v3, it follows that the two eigenspaces are orthogonal.
7. The characteristic polynomial of A is p(λ) = λ² - 6λ + 8 = (λ - 2)(λ - 4); thus the eigenvalues of A are λ = 2 and λ = 4. The vector v1 forms a basis for the eigenspace corresponding to λ = 2, and the vector v2 forms a basis for the eigenspace corresponding to λ = 4. These vectors are orthogonal to each other, and the orthogonal matrix P = [v1/||v1|| | v2/||v2||] has the property that

P^T AP = [ 2  0 ]
         [ 0  4 ]  =  D
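For a symmetric matrix like the one in Exercise 7, `numpy.linalg.eigh` produces the orthogonal diagonalization directly. The matrix below is a hypothetical symmetric matrix chosen to have eigenvalues 2 and 4, matching the pattern here; the book's A may differ:

```python
import numpy as np

A = np.array([[ 3.0, -1.0],
              [-1.0,  3.0]])          # symmetric; eigenvalues 2 and 4

eigvals, P = np.linalg.eigh(A)        # columns of P: orthonormal eigenvectors
D = P.T @ A @ P                       # P^T A P = diag(2, 4)

assert np.allclose(P.T @ P, np.eye(2))   # P is orthogonal
assert np.allclose(D, np.diag(eigvals))
print(np.round(eigvals, 6))
```

`eigh` returns the eigenvalues in ascending order and guarantees orthonormal eigenvector columns, which is exactly the P^T AP = D factorization used throughout this exercise set.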
8. The characteristic polynomial of A is (λ - 2)(λ - 7); thus the eigenvalues of A are λ = 2 and λ = 7. Corresponding eigenvectors are v1 and v2 respectively. These vectors are orthogonal to each other, and the orthogonal matrix P = [v1/||v1|| | v2/||v2||] has the property that

P^T AP = [ 2  0 ]
         [ 0  7 ]  =  D

9. The characteristic polynomial of A is p(λ) = λ³ + 6λ² - 32 = (λ - 2)(λ + 4)²; thus the eigenvalues of A are λ = 2 and λ = -4. The general solution of (2I - A)x = 0 is x = tv1, and the general solution of (-4I - A)x = 0 is x = rv2 + sv3. Thus the vector v1 forms a basis for the eigenspace corresponding to λ = 2, and the vectors v2 and v3 form a basis for the eigenspace corresponding to λ = -4. Application of the Gram-Schmidt process to {v1} and to {v2, v3} yields orthonormal bases {u1} and {u2, u3} for the eigenspaces, and the orthogonal matrix P = [u1 | u2 | u3] has the property that

P^T AP = [ 2   0   0 ]
         [ 0  -4   0 ]
         [ 0   0  -4 ]  =  D

Note: The diagonalizing matrix P is not unique. It depends on the choice of bases for the eigenspaces. This is just one possibility.
10. The characteristic polynomial of A is p(λ) = λ³ + 28λ² - 1175λ - 3750 = (λ + 3)(λ - 25)(λ + 50); thus the eigenvalues of A are λ1 = -3, λ2 = 25, and λ3 = -50. Corresponding eigenvectors are v1 = [0; 1; 0], v2 = [4; 0; -3], and v3 = [3; 0; 4]. These vectors are mutually orthogonal, and the orthogonal matrix

P = [v1/||v1|| | v2/||v2|| | v3/||v3||] = [ 0   4/5   3/5 ]
                                          [ 1    0     0  ]
                                          [ 0  -3/5   4/5 ]

has the property that

P^T AP = P^T [  -2   0  -36 ] P = [ -3   0    0 ]
             [   0  -3    0 ]     [  0  25    0 ]
             [ -36   0  -23 ]     [  0   0  -50 ]  =  D
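The orthogonal diagonalization in Exercise 10 can be checked directly with these entries:

```python
import numpy as np

A = np.array([[ -2.0,  0.0, -36.0],
              [  0.0, -3.0,   0.0],
              [-36.0,  0.0, -23.0]])
P = np.array([[0.0,  4/5, 3/5],
              [1.0,  0.0, 0.0],
              [0.0, -3/5, 4/5]])      # columns: unit eigenvectors

assert np.allclose(P.T @ P, np.eye(3))   # P is orthogonal
D = P.T @ A @ P
print(np.round(D, 6))
```

Because A is symmetric, eigenvectors from the three distinct eigenvalues are automatically orthogonal, so normalizing each one is enough to make P orthogonal.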
11. The characteristic polynomial of A is p(λ) = λ³ - 2λ² = λ²(λ - 2); thus the eigenvalues of A are λ = 0 and λ = 2. The general solution of (0I - A)x = 0 is x = rv1 + sv2, and the general solution of (2I - A)x = 0 is x = tv3. Thus the vectors v1 and v2 form a basis for the eigenspace corresponding to λ = 0, and the vector v3 forms a basis for the eigenspace corresponding to λ = 2. These vectors are mutually orthogonal, and the orthogonal matrix P = [v1/||v1|| | v2/||v2|| | v3/||v3||] has the property that

P^T AP = [ 0  0  0 ]
         [ 0  0  0 ]
         [ 0  0  2 ]  =  D

12. The characteristic polynomial of A is p(λ) = λ³ - 6λ² + 9λ = λ(λ - 3)²; thus the eigenvalues of A are λ = 0 and λ = 3. The vector v1 forms a basis for the eigenspace corresponding to λ = 0, and the vectors v2 and v3 form a basis for the eigenspace corresponding to λ = 3. Application of Gram-Schmidt to {v1} and to {v2, v3} yields orthonormal bases {u1} and {u2, u3} for the eigenspaces, and the orthogonal matrix P = [u1 | u2 | u3] has the property that

P^T AP = [ 0  0  0 ]
         [ 0  3  0 ]
         [ 0  0  3 ]  =  D
13. The characteristic polynomial of A is p(λ) = λ⁴ - 6λ³ + 8λ² = λ²(λ - 2)(λ - 4); thus the eigenvalues of A are λ = 0, λ = 2, and λ = 4. The general solution of (0I - A)x = 0 is x = sv1 + tv2; thus the vectors v1 and v2 form a basis for the eigenspace corresponding to λ = 0. The general solution of (2I - A)x = 0 is x = tv3, and the general solution of (4I - A)x = 0 is x = uv4; thus v3 and v4 form bases for the eigenspaces corresponding to λ = 2 and λ = 4 respectively. These vectors are mutually orthogonal, and the orthogonal matrix P = [v1/||v1|| | v2/||v2|| | v3/||v3|| | v4/||v4||] has the property that

P^T AP = [ 0  0  0  0 ]
         [ 0  0  0  0 ]
         [ 0  0  2  0 ]
         [ 0  0  0  4 ]  =  D
14. The characteristic polynomial of A is p(λ) = λ⁴ - 1250λ² + 390625 = (λ - 25)²(λ + 25)²; thus the eigenvalues of A are λ = 25 and λ = -25. The vectors v1 and v2 form a basis for the eigenspace corresponding to λ = 25, and the vectors v3 and v4 form a basis for the eigenspace corresponding to λ = -25. These four vectors are mutually orthogonal, and the orthogonal matrix P = [v1/||v1|| | v2/||v2|| | v3/||v3|| | v4/||v4||] has the property that

P^T AP = [ 25   0    0    0 ]
         [  0  25    0    0 ]
         [  0   0  -25    0 ]
         [  0   0    0  -25 ]  =  D
15. The eigenvalues of the matrix A = [[3, 1],[1, 3]] are λ1 = 2 and λ2 = 4, with corresponding normalized eigenvectors u1 = [1/√2; -1/√2] and u2 = [1/√2; 1/√2]. Thus the spectral decomposition of A is

A = λ1 u1 u1^T + λ2 u2 u2^T = (2) [  1/2  -1/2 ]  +  (4) [ 1/2  1/2 ]
                                  [ -1/2   1/2 ]         [ 1/2  1/2 ]

16. The eigenvalues of A are λ1 = 2 and λ2 = 1, with corresponding normalized eigenvectors u1 and u2. Thus the spectral decomposition of A is A = 2 u1 u1^T + u2 u2^T.
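The spectral decomposition in Exercise 15 can be reassembled numerically: summing λ u uᵀ over the orthonormal eigenpairs must reproduce A.

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0]])

eigvals, U = np.linalg.eigh(A)        # eigvals = [2, 4]; columns of U orthonormal

# Rebuild A as the sum of rank-one projections lambda * u u^T.
A_rebuilt = sum(lam * np.outer(U[:, k], U[:, k])
                for k, lam in enumerate(eigvals))
print(np.round(A_rebuilt, 6))
```

Each term λ u uᵀ is λ times the orthogonal projection onto the corresponding eigenspace, which is exactly what the spectral decomposition asserts.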
17. The eigenvalues of A are λ1, λ2, and λ3, with corresponding orthonormal eigenvectors u1, u2, and u3, and the spectral decomposition of A is

A = λ1 u1 u1^T + λ2 u2 u2^T + λ3 u3 u3^T

Note: The spectral decomposition is not unique. It depends on the choice of bases for the eigenspaces. This is just one possibility.
18. A = [  -2   0  -36 ]      [ 0  0  0 ]        [  16/25  0  -12/25 ]        [  9/25  0  12/25 ]
        [   0  -3    0 ] = -3 [ 0  1  0 ]  + 25  [    0    0     0   ]  - 50  [   0    0    0   ]
        [ -36   0  -23 ]      [ 0  0  0 ]        [ -12/25  0    9/25 ]        [ 12/25  0  16/25 ]

19. The matrix A has eigenvalues λ = -1 and λ = 2, with corresponding eigenvectors [-1; 1] and [-3; 2]. Thus the matrix P = [[-1, -3],[1, 2]] has the property that P^{-1}AP = D = [[-1, 0],[0, 2]]. It follows that

A^{10} = PD^{10}P^{-1} = [ -1  -3 ] [ 1     0  ] [  2   3 ]   [  3070   3069 ]
                         [  1   2 ] [ 0   1024 ] [ -1  -1 ] = [ -2046  -2045 ]
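The power computation in Exercise 19 is quick to confirm with the numbers above:

```python
import numpy as np

P = np.array([[-1.0, -3.0],
              [ 1.0,  2.0]])         # columns: eigenvectors for -1 and 2
D = np.diag([-1.0, 2.0])

A = P @ D @ np.linalg.inv(P)          # reconstruct A from its diagonalization
A10 = P @ np.linalg.matrix_power(D, 10) @ np.linalg.inv(P)

assert np.allclose(A10, np.linalg.matrix_power(A, 10))
print(np.round(A10).astype(int))      # [[3070, 3069], [-2046, -2045]]
```

Raising D to the 10th power only requires raising the diagonal entries, which is the whole point of computing powers through a diagonalization.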
20. The matrix A has eigenvalues λ = 2 and λ = -2, with corresponding eigenvectors v1 and v2. Thus the matrix P = [v1 | v2] has the property that P^{-1}AP = D = [[2, 0],[0, -2]]. It follows that

A^{10} = PD^{10}P^{-1} = P [ 1024    0  ] P^{-1} = [ 1024    0  ]
                           [   0   1024 ]          [   0   1024 ]

21. The matrix A has eigenvalues λ = -1 and λ = 1. The vector v1 forms a basis for the eigenspace corresponding to λ = -1, and the vectors v2 and v3 form a basis for the eigenspace corresponding to λ = 1. Thus the matrix P = [v1 | v2 | v3] has the property that P^{-1}AP = D = diag(-1, 1, 1), and it follows that A^{1000} = PD^{1000}P^{-1} = PIP^{-1} = I.
22. The matrix A has eigenvalues λ = 0, λ = 1, and λ = -1, with corresponding eigenvectors v1, v2, and v3. Thus the matrix P = [v1 | v2 | v3] has the property that P^{-1}AP = D = diag(0, 1, -1), and it follows that A^{1000} = PD^{1000}P^{-1} = P diag(0, 1, 1) P^{-1}.

23.
(a) The characteristic polynomial of A is p(λ) = λ³ - 6λ² + 12λ - 8. Computing successive powers of A gives A² and A³, and substituting these shows that A³ - 6A² + 12A - 8I = 0, i.e., that A satisfies its characteristic equation: p(A) = 0.

(b) Since A³ = 6A² - 12A + 8I, we have A⁴ = 6A³ - 12A² + 8A = 6(6A² - 12A + 8I) - 12A² + 8A = 24A² - 64A + 48I.

(c) Since A³ - 6A² + 12A - 8I = 0, we have A(A² - 6A + 12I) = 8I and A^{-1} = (1/8)(A² - 6A + 12I).
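Exercise 23's use of the Cayley-Hamilton theorem (p(A) = 0, hence A⁻¹ = (1/8)(A² - 6A + 12I)) can be replayed numerically. The matrix below is an illustrative one with characteristic polynomial (λ - 2)³ = λ³ - 6λ² + 12λ - 8, not necessarily the book's:

```python
import numpy as np

# Illustrative matrix with characteristic polynomial (lam - 2)^3.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 2.0]])
I = np.eye(3)

A2 = A @ A
A3 = A2 @ A

# Cayley-Hamilton: A^3 - 6A^2 + 12A - 8I = 0.
assert np.allclose(A3 - 6 * A2 + 12 * A - 8 * I, 0)

# Rearranged: A(A^2 - 6A + 12I) = 8I, so A^{-1} = (A^2 - 6A + 12I)/8.
A_inv = (A2 - 6 * A + 12 * I) / 8
assert np.allclose(A_inv, np.linalg.inv(A))
print("p(A) = 0 and A^{-1} = (A^2 - 6A + 12I)/8")
```

This is the general trick of part (c): whenever the constant term of the characteristic polynomial is nonzero, solving p(A) = 0 for I expresses A⁻¹ as a polynomial in A.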
24. (a) The characteristic polynomial of A is p(λ) = λ³ - λ² - λ + 1. Computing successive powers of A, we have A² = I and A³ = A; thus A³ - A² - A + I = A - I - A + I = 0, which shows that A satisfies its characteristic equation, i.e., that p(A) = 0.

(b) Since A² = I, we have A⁴ = (A²)² = I.

(c) Since A² = I, we have A^{-1} = A.
25. From Exercise 7 we have P^T AP = [[2, 0],[0, 4]] = D. Thus A = PDP^T and e^{tA} = Pe^{tD}P^T, where e^{tD} = [[e^{2t}, 0],[0, e^{4t}]].

27. From Exercise 9 we have P^T AP = diag(2, -4, -4) = D. Thus A = PDP^T and

e^{tA} = Pe^{tD}P^T = (1/6) [ e^{2t} + 5e^{-4t}    2e^{2t} - 2e^{-4t}   e^{2t} - e^{-4t}    ]
                            [ 2e^{2t} - 2e^{-4t}   4e^{2t} + 2e^{-4t}   2e^{2t} - 2e^{-4t}  ]
                            [ e^{2t} - e^{-4t}     2e^{2t} - 2e^{-4t}   e^{2t} + 5e^{-4t}   ]

28. From Exercise 10 we have P^T AP = diag(-3, 25, -50) = D. Thus A = PDP^T and

e^{tA} = Pe^{tD}P^T = [ (16/25)e^{25t} + (9/25)e^{-50t}     0         (-12/25)e^{25t} + (12/25)e^{-50t} ]
                      [ 0                                   e^{-3t}   0                                  ]
                      [ (-12/25)e^{25t} + (12/25)e^{-50t}   0         (9/25)e^{25t} + (16/25)e^{-50t}   ]

29. Note that sin(πA) = P sin(πD) P^T = P diag(sin 2π, sin(-4π), sin(-4π)) P^T = P 0 P^T = 0. Similarly, cos(πA) = P cos(πD) P^T = P diag(cos 2π, cos(-4π), cos(-4π)) P^T = PIP^T = I.

30. Proceeding as in Exercise 27: sin(πA) = P sin(πD) P^T = 0, since sin(-3π) = sin(25π) = sin(-50π) = 0, and

cos(πA) = P cos(πD) P^T = P diag(-1, -1, 1) P^T = [ -7/25    0   24/25 ]
                                                  [   0     -1     0   ]
                                                  [ 24/25    0    7/25 ]

31. If A = [[0, 1, 0],[0, 0, 1],[0, 0, 0]], then A² = [[0, 0, 1],[0, 0, 0],[0, 0, 0]] and A³ = 0. Thus A is nilpotent and

e^A = I + A + (1/2)A² = [ 1  1  1/2 ]
                        [ 0  1   1  ]
                        [ 0  0   1  ]
32. Since A³ = 0, we have sin(πA) = sin(0)I + π cos(0)A - (1/2)π² sin(0)A² = πA, and cos(πA) = cos(0)I - π sin(0)A - (1/2)π² cos(0)A² = I - (1/2)π²A².
33. If P is symmetric and orthogonal, then Pᵀ = P and PᵀP = I; thus P² = PᵀP = I. If λ is an eigenvalue of P, then there is a nonzero vector x such that Px = λx. Since P² = I it follows that λ²x = P²x = Ix = x, thus λ² = 1 and so λ = ±1.
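The result in Exercise 33 is easy to confirm numerically. A minimal sketch, assuming NumPy is available; the Householder reflection used as the test matrix is an illustrative choice, not one from the text:

```python
import numpy as np

# A Householder reflection I - 2uu^T (u a unit vector) is both
# symmetric and orthogonal, so it is a natural test case.
u = np.array([3.0, 4.0]) / 5.0
P = np.eye(2) - 2.0 * np.outer(u, u)

assert np.allclose(P, P.T)               # symmetric
assert np.allclose(P @ P.T, np.eye(2))   # orthogonal
assert np.allclose(P @ P, np.eye(2))     # hence P^2 = I

eigenvalues = np.sort(np.linalg.eigvalsh(P))
# Every eigenvalue satisfies lambda^2 = 1, i.e. lambda = +1 or -1.
assert np.allclose(eigenvalues, [-1.0, 1.0])
```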
DISCUSSION AND DISCOVERY
D1. (a) True. The matrix AAᵀ is symmetric and hence is orthogonally diagonalizable.
(b) False. If A is diagonalizable but not symmetric (therefore not orthogonally diagonalizable), then there is a basis for Rⁿ (but not an orthogonal basis) consisting of eigenvectors of A.
(c) False. An orthogonal matrix need not be symmetric; for example A = [0 -1; 1 0].
(d) True. If A is an invertible orthogonally diagonalizable matrix, then there is an orthogonal matrix P such that Pᵀ AP = D where D is a diagonal matrix with nonzero entries (the eigenvalues of A) on the main diagonal. It follows that Pᵀ A⁻¹P = (Pᵀ AP)⁻¹ = D⁻¹ and that D⁻¹ is a diagonal matrix with nonzero entries (the reciprocals of the eigenvalues) on the main diagonal. Thus the matrix A⁻¹ is orthogonally diagonalizable.
(e) True. If A is orthogonally diagonalizable, then A is symmetric and thus has real eigenvalues.
D2. (a) Yes; take A = PDPᵀ, where P is the orthogonal matrix whose columns are the normalized versions of the given eigenvectors and D is the diagonal matrix whose diagonal entries are the corresponding eigenvalues. The resulting matrix A is symmetric and has the prescribed eigenvalues and eigenvectors.
(b) No. The vectors v2 and v3 correspond to different eigenvalues, but are not orthogonal. Therefore they cannot be eigenvectors of a symmetric matrix.
D3. Yes. Since A is diagonalizable and the eigenspaces are mutually orthogonal, there is an orthonormal basis for Rⁿ consisting of eigenvectors of A. Thus A is orthogonally diagonalizable and therefore must be symmetric.
WORKING WITH PROOFS
P1. We first show that if A and C are orthogonally similar, then there exist orthonormal bases with respect to which they represent the same linear operator. For this purpose, let T be the operator defined by T(x) = Ax. Then A = [T], i.e., A is the matrix of T relative to the standard basis B = {e1, e2, ..., en}. Since A and C are orthogonally similar, there is an orthogonal matrix P such that C = Pᵀ AP. Let B' = {v1, v2, ..., vn}, where v1, v2, ..., vn are the column vectors of P. Then B' is an orthonormal basis for Rⁿ, and P = P_{B'→B}. Thus [T]_B = P[T]_{B'}Pᵀ and [T]_{B'} = Pᵀ[T]_B P = Pᵀ AP = C. This shows that there exist orthonormal bases with respect to which A and C represent the same linear operator.
Conversely, suppose that A = [T]_B and C = [T]_{B'}, where T: Rⁿ → Rⁿ is a linear operator and B, B' are orthonormal bases for Rⁿ. If P = P_{B'→B}, then P is an orthogonal matrix and C = [T]_{B'} = Pᵀ[T]_B P = Pᵀ AP. Thus A and C are orthogonally similar.
P2. Suppose A = c1u1u1ᵀ + c2u2u2ᵀ + ··· + cnununᵀ, where {u1, u2, ..., un} is an orthonormal basis for Rⁿ. Since (uiuiᵀ)ᵀ = uiuiᵀ, it follows that Aᵀ = A; thus A is symmetric. Furthermore, since uiᵀuj = ui · uj = δij, we have
Auj = (c1u1u1ᵀ + c2u2u2ᵀ + ··· + cnununᵀ)uj = Σᵢ ciuiuiᵀuj = cjuj
for each j = 1, 2, ..., n. Thus c1, c2, ..., cn are eigenvalues of A.
P3. The spectral decomposition A = λ1u1u1ᵀ + λ2u2u2ᵀ + ··· + λnununᵀ is equivalent to A = PDPᵀ, where P = [u1 | u2 | ··· | un] and D = diag(λ1, λ2, ..., λn); thus
f(A) = Pf(D)Pᵀ = P diag(f(λ1), f(λ2), ..., f(λn))Pᵀ = f(λ1)u1u1ᵀ + f(λ2)u2u2ᵀ + ··· + f(λn)ununᵀ
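The identity f(A) = Pf(D)Pᵀ proved in P3 can be exercised numerically with f = exp. A minimal sketch, assuming NumPy is available; the symmetric matrix A below is an arbitrary illustration, not one of the exercise matrices:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])  # symmetric, eigenvalues 1 and 3

# Spectral route: f(A) = P f(D) P^T with f = exp.
eigvals, P = np.linalg.eigh(A)
expA_spectral = P @ np.diag(np.exp(eigvals)) @ P.T

# Series route: e^A = sum_k A^k / k!, truncated after enough terms.
expA_series = np.zeros_like(A)
term = np.eye(2)
for k in range(30):
    expA_series += term
    term = term @ A / (k + 1)

assert np.allclose(expA_spectral, expA_series)
```

The same pattern (apply f to the diagonal, conjugate by P) gives the e^{tA}, sin(πA), and cos(πA) computations in Exercises 25 through 30.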
P4. (a) Suppose A is a symmetric matrix, and λ0 is an eigenvalue of A having geometric multiplicity k. Let W be the eigenspace corresponding to λ0. Choose an orthonormal basis {u1, u2, ..., uk} for W, extend it to an orthonormal basis B = {u1, u2, ..., uk, uk+1, ..., un} for Rⁿ, and let P be the orthogonal matrix having the vectors of B as its columns. Then, as shown in Exercise P6(b) of Section 8.2, the product AP can be written as AP = P[λ0I_k X; 0 Y]. Since P is orthogonal, we have Pᵀ AP = [λ0I_k X; 0 Y], and since Pᵀ AP is a symmetric matrix, it follows that X = 0.
(b) Since A is similar to C = [λ0I_k 0; 0 Y], it has the same characteristic polynomial as C, namely (λ - λ0)^k det(λI_{n-k} - Y) = (λ - λ0)^k p_Y(λ), where p_Y(λ) is the characteristic polynomial of Y. We will now prove that p_Y(λ0) ≠ 0 and thus that the algebraic multiplicity of λ0 is exactly k. The proof is by contradiction:
Suppose p_Y(λ0) = 0, i.e., that λ0 is an eigenvalue of the matrix Y. Then there is a nonzero vector y in R^{n-k} such that Yy = λ0y. Let x = [0; y] be the vector in Rⁿ whose first k components are 0 and whose last n - k components are those of y. Then Cx = λ0x, and so x is an eigenvector of C corresponding to λ0. Since AP = PC, it follows that Px is an eigenvector of A corresponding to λ0. But note that e1, ..., ek are also eigenvectors of C corresponding to λ0, and that {e1, ..., ek, x} is a linearly independent set. It follows that {Pe1, ..., Pek, Px} is a linearly independent set of eigenvectors of A corresponding to λ0. But this implies that the geometric multiplicity of λ0 is greater than k, a contradiction!
(c) It follows from part (b) that the sum of the dimensions of the eigenspaces of A is equal to n; thus A is diagonalizable. Furthermore, since A is symmetric, the eigenspaces corresponding to different eigenvalues are orthogonal. Thus we can form an orthonormal basis for Rⁿ by choosing an orthonormal basis for each of the eigenspaces and joining them together. Since the sum of the dimensions is n, this will be an orthonormal basis consisting of eigenvectors of A. Thus A is orthogonally diagonalizable.
EXERCISE SET 8.4
1. (a) 3x1² + 7x2² = [x1 x2][3 0; 0 7][x1; x2]
(b) 4x1² - 9x2² - 6x1x2 = [x1 x2][4 -3; -3 -9][x1; x2]
5. The quadratic form Q = 2x1² + 2x2² - 2x1x2 can be expressed in matrix notation as Q = xᵀAx where A = [2 -1; -1 2]. The matrix A has eigenvalues λ1 = 1 and λ2 = 3, with corresponding eigenvectors v1 = [1; 1] and v2 = [-1; 1] respectively. Thus the matrix P = [1/√2 -1/√2; 1/√2 1/√2] orthogonally diagonalizes A, and the change of variable x = Py eliminates the cross product terms in Q:
Q = xᵀAx = yᵀ(PᵀAP)y = [y1 y2][1 0; 0 3][y1; y2] = y1² + 3y2²
Note that the inverse relationship between x and y is y = Pᵀx = [1/√2 1/√2; -1/√2 1/√2][x1; x2].
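The diagonalization in Exercise 5 can be checked with a few lines of code. A sketch assuming NumPy is available, using the matrix A from Exercise 5:

```python
import numpy as np

A = np.array([[2.0, -1.0],
              [-1.0, 2.0]])     # Q = 2x1^2 + 2x2^2 - 2x1x2

eigvals, P = np.linalg.eigh(A)  # columns of P are orthonormal eigenvectors
D = P.T @ A @ P                 # P^T A P should be diag(1, 3)

assert np.allclose(eigvals, [1.0, 3.0])
assert np.allclose(D, np.diag([1.0, 3.0]), atol=1e-12)

# Spot-check Q(x) = y1^2 + 3*y2^2 with y = P^T x at a random point.
rng = np.random.default_rng(0)
x = rng.standard_normal(2)
y = P.T @ x
assert np.isclose(x @ A @ x, y[0]**2 + 3.0 * y[1]**2)
```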
6. The quadratic form Q = 5x1² + 4x2² + 3x3² + 4x1x2 - 4x2x3 can be expressed in matrix notation as Q = xᵀAx where A = [5 2 0; 2 4 -2; 0 -2 3]. The matrix A has eigenvalues λ1 = 1, λ2 = 4, λ3 = 7, with corresponding (orthogonal) eigenvectors v1 = [-1; 2; 2], v2 = [2; -1; 2], v3 = [2; 2; -1]. Thus the matrix P = (1/3)[-1 2 2; 2 -1 2; 2 2 -1] orthogonally diagonalizes A, and the change of variable x = Py eliminates the cross product terms in Q:
Q = xᵀAx = yᵀ(PᵀAP)y = [y1 y2 y3][1 0 0; 0 4 0; 0 0 7][y1; y2; y3] = y1² + 4y2² + 7y3²
Note that the diagonalizing matrix P is symmetric, and so the inverse relationship between x and y is y = Pᵀx = Px.
8. The given quadratic form can be expressed as Q = xᵀAx where A = [2 2 -2; 2 5 -4; -2 -4 5]. The matrix A has eigenvalues λ = 1 and λ = 10. The vectors v1 = [-2; 1; 0] and v2 = [2; 0; 1] form a basis for the eigenspace corresponding to λ = 1, and v3 = [1; 2; -2] forms a basis for the eigenspace corresponding to λ = 10. Application of the Gram-Schmidt process to {v1, v2, v3} produces orthonormal eigenvectors {p1, p2, p3}; the matrix P = [p1 p2 p3] orthogonally diagonalizes A, and the change of variable x = Py eliminates the cross product terms in Q:
Q = xᵀAx = yᵀ(PᵀAP)y = [y1 y2 y3][1 0 0; 0 1 0; 0 0 10][y1; y2; y3] = y1² + y2² + 10y3²
10.
[~ ~]
(b)
!x
yj
(a)
lx
y] - ~
(b)
!x Yl [;
[:J +
17
-8)
[ 1 -l][x]
~
y -t 15
[:J -
0
10
10y5
Y3
5= 0
8} [:) -3 0
=
~] [;] - 8-0
11. (a) Ellipse
(b) Hyperbola
(c) Parabola
(d) Circle
12. (a) Ellipse
(b) Hyperbola
(c) Parabola
(d) Circle
13. The equation can be written in matrix form as xᵀAx = -8 where A = [2 -2; -2 -1]. The eigenvalues of A are λ1 = 3 and λ2 = -2, with corresponding eigenvectors v1 = [2; -1] and v2 = [1; 2] respectively. Thus the matrix P = [2/√5 1/√5; -1/√5 2/√5] orthogonally diagonalizes A. Note that det(P) = 1, so P is a rotation matrix. The equation of the conic in the rotated x'y'-coordinate system is
[x' y'][3 0; 0 -2][x'; y'] = -8
which can be written as 2y'² - 3x'² = 8; thus the conic is a hyperbola. The angle through which the axes have been rotated is θ = tan⁻¹(-1/2) ≈ -26.6°.
14. The equation can be written in matrix form as xᵀAx = 9 where A = [5 2; 2 5]. The eigenvalues of A are λ1 = 3 and λ2 = 7, with corresponding eigenvectors v1 = [1; -1] and v2 = [1; 1] respectively. Thus the matrix P = [1/√2 1/√2; -1/√2 1/√2] orthogonally diagonalizes A. Note that det(P) = 1, so P is a rotation matrix. The equation of the conic in the rotated x'y'-coordinate system is
[x' y'][3 0; 0 7][x'; y'] = 9
which we can write as x'²/3 + y'²/(9/7) = 1; thus the conic is an ellipse. The angle of rotation corresponds to cos θ = 1/√2 and sin θ = -1/√2; thus θ = -45°.
15. The equation can be written in matrix form as xᵀAx = 15 where A = [11 12; 12 4]. The eigenvalues of A are λ1 = 20 and λ2 = -5, with corresponding eigenvectors v1 = [4; 3] and v2 = [-3; 4] respectively. Thus the matrix P = [4/5 -3/5; 3/5 4/5] orthogonally diagonalizes A. Note that det(P) = 1, so P is a rotation matrix. The equation of the conic in the rotated x'y'-coordinate system is
[x' y'][20 0; 0 -5][x'; y'] = 15
which we can write as 4x'² - y'² = 3; thus the conic is a hyperbola. The angle through which the axes have been rotated is θ = tan⁻¹(3/4) ≈ 36.9°.
16. The equation can be written in matrix form as xᵀAx = 4 where A = [8/3 -4/3; -4/3 8/3]. The eigenvalues of A are λ1 = 4 and λ2 = 4/3, with corresponding eigenvectors v1 = [1; -1] and v2 = [1; 1] respectively. Thus the matrix P = [1/√2 1/√2; -1/√2 1/√2] orthogonally diagonalizes A. Note that det(P) = 1, so P is a rotation matrix. The equation of the conic in the rotated x'y'-coordinate system is
[x' y'][4 0; 0 4/3][x'; y'] = 4
which we can write as x'² + y'²/3 = 1; thus the conic is an ellipse. The angle of rotation corresponds to cos θ = 1/√2 and sin θ = -1/√2; thus θ = -45°.
17. (a) The eigenvalues of A are λ = 1 and λ = 2; thus A is positive definite.
(b) negative definite
(c) indefinite
(d) positive semidefinite
(e) negative semidefinite
18. (a) The eigenvalues of A = [2 0; 0 -5] are λ = 2 and λ = -5; thus A is indefinite.
(b) negative definite
(c) positive definite
(d) negative semidefinite
(e) positive semidefinite
19. We have Q = x1² + x2² > 0 for (x1, x2) ≠ (0, 0); thus Q is positive definite.
20. negative definite
21. We have Q = (x1 - x2)² > 0 for x1 ≠ x2 and Q = 0 for x1 = x2; thus Q is positive semidefinite.
22. negative semidefinite
23. We have Q = x1² - x2² > 0 for x1 ≠ 0, x2 = 0 and Q < 0 for x1 = 0, x2 ≠ 0; thus Q is indefinite.
24. indefinite
25. (a) The eigenvalues of the matrix A = [5 -2; -2 5] are λ = 3 and λ = 7; thus A is positive definite. Since |5| = 5 and det A = 25 - 4 = 21 are positive, we reach the same conclusion using Theorem 8.4.5.
(b) The eigenvalues of A = [2 -1 0; -1 2 0; 0 0 5] are λ = 1, λ = 3, and λ = 5; thus A is positive definite. The determinants of the principal submatrices are |2| = 2, |2 -1; -1 2| = 3, and det A = 15; thus we reach the same conclusion using Theorem 8.4.5.
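Theorem 8.4.5's leading-principal-minors test can be compared directly with the eigenvalue test. A sketch assuming NumPy is available, using the matrix from Exercise 25(b):

```python
import numpy as np

def leading_principal_minors(A):
    """Determinants of the upper-left 1x1, 2x2, ..., nxn submatrices."""
    n = A.shape[0]
    return [np.linalg.det(A[:k, :k]) for k in range(1, n + 1)]

A = np.array([[2.0, -1.0, 0.0],
              [-1.0, 2.0, 0.0],
              [0.0, 0.0, 5.0]])   # matrix from Exercise 25(b)

minors = leading_principal_minors(A)
assert np.allclose(minors, [2.0, 3.0, 15.0])

# All leading principal minors positive <=> positive definite,
# which agrees with the eigenvalue test (eigenvalues 1, 3, 5).
assert all(m > 0 for m in minors)
assert np.all(np.linalg.eigvalsh(A) > 0)
```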
26. (a) The eigenvalues of the matrix A = [2 1; 1 2] are λ = 1 and λ = 3; thus A is positive definite. Since |2| = 2 and det A = 4 - 1 = 3 are positive, we reach the same conclusion using Theorem 8.4.5.
(b) The eigenvalues of A = [3 -1 0; -1 2 -1; 0 -1 3] are λ = 1, λ = 3, and λ = 4; thus A is positive definite. The determinants of the principal submatrices are |3| = 3, |3 -1; -1 2| = 5, and det A = 12; thus we reach the same conclusion using Theorem 8.4.5.
27. (a) The matrix A has eigenvalues λ1 = 3 and λ2 = 7, with corresponding eigenvectors v1 = [1; 1] and v2 = [-1; 1]. Thus the matrix P = [1/√2 -1/√2; 1/√2 1/√2] orthogonally diagonalizes A, and the matrix
B = P[√3 0; 0 √7]Pᵀ = [(√7+√3)/2 (√3-√7)/2; (√3-√7)/2 (√7+√3)/2]
has the property that B² = A.
(b) The matrix A has eigenvalues λ1 = 1, λ2 = 3, λ3 = 5, with corresponding eigenvectors v1 = [1; 1; 0], v2 = [-1; 1; 0], and v3 = [0; 0; 1]. Thus P = [1/√2 -1/√2 0; 1/√2 1/√2 0; 0 0 1] orthogonally diagonalizes A, and
B = P diag(1, √3, √5)Pᵀ = [(1+√3)/2 (1-√3)/2 0; (1-√3)/2 (1+√3)/2 0; 0 0 √5]
has the property that B² = A.
28. (a) The matrix A has eigenvalues λ1 = 1 and λ2 = 3, with corresponding eigenvectors v1 = [-1; 1] and v2 = [1; 1]. Thus the matrix P = [-1/√2 1/√2; 1/√2 1/√2] orthogonally diagonalizes A, and the matrix
B = P diag(1, √3)Pᵀ = [(1+√3)/2 (√3-1)/2; (√3-1)/2 (1+√3)/2]
has the property that B² = A.
(b) The matrix A has eigenvalues λ1 = 1, λ2 = 3, λ3 = 4, with corresponding eigenvectors v1 = [1; 2; 1], v2 = [1; 0; -1], and v3 = [1; -1; 1]. Thus P = [1/√6 1/√2 1/√3; 2/√6 0 -1/√3; 1/√6 -1/√2 1/√3] orthogonally diagonalizes A, and
B = P diag(1, √3, 2)Pᵀ = [5/6+√3/2 -1/3 5/6-√3/2; -1/3 4/3 -1/3; 5/6-√3/2 -1/3 5/6+√3/2]
has the property that B² = A.
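The square-root construction B = PD^{1/2}Pᵀ in Exercises 27 and 28 is straightforward to verify numerically. A sketch assuming NumPy is available, using the matrix from Exercise 28(a):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])   # matrix from Exercise 28(a); eigenvalues 1, 3

eigvals, P = np.linalg.eigh(A)
B = P @ np.diag(np.sqrt(eigvals)) @ P.T   # B = P D^{1/2} P^T

assert np.allclose(B, B.T)     # B is symmetric
assert np.allclose(B @ B, A)   # and B^2 = A

# Entry-by-entry check against the closed form found above.
s = np.sqrt(3.0)
assert np.allclose(B, [[(1 + s) / 2, (s - 1) / 2],
                       [(s - 1) / 2, (1 + s) / 2]])
```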
29. The quadratic form Q = 5x1² + x2² + kx3² + 4x1x2 - 2x1x3 - 2x2x3 can be expressed in matrix notation as Q = xᵀAx where A = [5 2 -1; 2 1 -1; -1 -1 k]. The determinants of the principal submatrices of A are
|5| = 5,  |5 2; 2 1| = 1,  det A = k - 2
Thus Q is positive definite if and only if k > 2.
30. The quadratic form can be expressed in matrix notation as Q = xᵀAx where A = [3 0 -1; 0 1 k; -1 k 2]. The determinants of the principal submatrices of A are
|3| = 3,  |3 0; 0 1| = 3,  det A = 5 - 3k²
Thus Q is positive definite if and only if |k| < √15/3.
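The threshold k > 2 found in Exercise 29 can be probed numerically. A sketch assuming NumPy is available:

```python
import numpy as np

def A_of_k(k):
    # Matrix of Q = 5x1^2 + x2^2 + k*x3^2 + 4x1x2 - 2x1x3 - 2x2x3 (Exercise 29)
    return np.array([[5.0, 2.0, -1.0],
                     [2.0, 1.0, -1.0],
                     [-1.0, -1.0, float(k)]])

# det of the full matrix is k - 2, so k > 2 is the exact threshold.
assert np.isclose(np.linalg.det(A_of_k(3)), 1.0)
assert np.isclose(np.linalg.det(A_of_k(2)), 0.0, atol=1e-9)

assert np.all(np.linalg.eigvalsh(A_of_k(2.5)) > 0)      # positive definite
assert not np.all(np.linalg.eigvalsh(A_of_k(1.5)) > 0)  # not positive definite
```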
31. (a) The matrix A has eigenvalues λ1 = 3 and λ2 = 15, with corresponding eigenvectors v1 and v2. Thus A is positive definite, the matrix P whose columns are the normalized eigenvectors orthogonally diagonalizes A, and the matrix B = P diag(√3, √15)Pᵀ has the property that B² = A.
(b) The LDU-decomposition (pp. 159-160) of the matrix A is A = LDU and, since L = Uᵀ, this can be written as
A = UᵀDU = (D^{1/2}U)ᵀ(D^{1/2}U)
which is a factorization of the required type.
32. (a) T(x + y) = (x + y)ᵀA(x + y) = (xᵀ + yᵀ)A(x + y) = xᵀAx + xᵀAy + yᵀAx + yᵀAy = xᵀAx + 2xᵀAy + yᵀAy = T(x) + 2xᵀAy + T(y)
(b) T(cx) = (cx)ᵀA(cx) = c²(xᵀAx) = c²T(x)
33. We have
(c1x1 + c2x2 + ··· + cnxn)² = Σᵢ cᵢ²xᵢ² + Σᵢ Σ_{j>i} 2cᵢcⱼxᵢxⱼ = xᵀAx
where
A = [c1² c1c2 ··· c1cn; c1c2 c2² ··· c2cn; ··· ; c1cn c2cn ··· cn²]
34. (a) For each i = 1, ..., n we have (xᵢ - x̄)² = xᵢ² - 2xᵢx̄ + x̄². Thus in the quadratic form
s_x² = (1/(n-1))[(x1 - x̄)² + (x2 - x̄)² + ··· + (xn - x̄)²]
the coefficient of xᵢ² is (1/(n-1))[1 - 2/n + 1/n] = 1/n, and the coefficient of xᵢxⱼ for i ≠ j is -2/(n(n-1)). It follows that s_x² = xᵀAx where
A = [1/n -1/(n(n-1)) ··· -1/(n(n-1)); -1/(n(n-1)) 1/n ··· -1/(n(n-1)); ··· ; -1/(n(n-1)) -1/(n(n-1)) ··· 1/n]
(b) We have s_x² = (1/(n-1))[(x1 - x̄)² + (x2 - x̄)² + ··· + (xn - x̄)²] ≥ 0, and s_x² = 0 if and only if x1 = x̄, x2 = x̄, ..., xn = x̄, i.e., if and only if x1 = x2 = ··· = xn. Thus s_x² is positive semidefinite.
35. (a) The quadratic form Q = (8/9)x² + (8/9)y² + (8/9)z² + (4/9)xy + (4/9)xz + (4/9)yz can be expressed in matrix notation as Q = xᵀAx where A = [8/9 2/9 2/9; 2/9 8/9 2/9; 2/9 2/9 8/9]. The matrix A has eigenvalues λ = 2/3 and λ = 4/3. The vectors v1 = [-1; 1; 0] and v2 = [-1; 0; 1] form a basis for the eigenspace corresponding to λ = 2/3, and v3 = [1; 1; 1] forms a basis for the eigenspace corresponding to λ = 4/3. Application of the Gram-Schmidt process to {v1, v2, v3} produces orthonormal eigenvectors {p1, p2, p3}, and the matrix
P = [p1 p2 p3] = [-1/√2 -1/√6 1/√3; 1/√2 -1/√6 1/√3; 0 2/√6 1/√3]
orthogonally diagonalizes A. Thus the change of variable x = Px' converts Q into a quadratic form in the variables x' = (x', y', z') without cross product terms. From this we conclude that the equation Q = 1 corresponds to an ellipsoid with axis lengths 2√(3/2) = √6 in the x' and y' directions, and 2√(3/4) = √3 in the z' direction.
(b) The matrix A must be positive definite.
DISCUSSION AND DISCOVERY
D1. (a) False. For example the matrix A = [1 2; 2 1] has eigenvalues -1 and 3; thus A is indefinite.
(b) False. The term 4x1x2x3 is not quadratic in the variables x1, x2, x3.
(c) True. When expanded, each of the terms of the resulting expression is quadratic (of degree 2) in the variables.
(d) True. The eigenvalues of a positive definite matrix A are strictly positive; in particular, 0 is not an eigenvalue of A and so A is invertible.
(e) False. For example the matrix A = [1 0; 0 0] is positive semidefinite.
(f) True. If the eigenvalues of A are positive, then the eigenvalues of -A are negative.
D2. (a) True. When written in matrix form, we have x · x = xᵀAx where A = I.
(b) True. If A has positive eigenvalues, then so does A⁻¹.
(c) True. See Theorem 8.4.3(a).
(d) True. Both of the principal submatrices of A will have a positive determinant.
(e) False. If A = [1 1; -1 1], then xᵀAx = x² + y² > 0 for (x, y) ≠ (0, 0), yet A is not symmetric and has no real eigenvalues. On the other hand, the statement is true if A is assumed to be symmetric.
(f) False. If c > 0 the graph is an ellipse. If c < 0 the graph is empty.
D3. The eigenvalues of A must be positive and equal to each other; in other words, A must have a positive eigenvalue of multiplicity 2.
WORKING WITH PROOFS
P1. Rotating the coordinate axes through an angle θ corresponds to the change of variable x = Px' where P = [cos θ -sin θ; sin θ cos θ], i.e., x = x' cos θ - y' sin θ and y = x' sin θ + y' cos θ. Substituting these expressions into the quadratic form ax² + 2bxy + cy² leads to Ax'² + Bx'y' + Cy'², where the coefficient of the cross product term is
B = -2a cos θ sin θ + 2b(cos²θ - sin²θ) + 2c cos θ sin θ = (-a + c) sin 2θ + 2b cos 2θ
Thus the resulting quadratic form in the variables x' and y' has no cross product term if and only if (-a + c) sin 2θ + 2b cos 2θ = 0, or (equivalently) cot 2θ = (a - c)/(2b).
P2. From the Principal Axes Theorem (8.4.1), there is an orthogonal change of variable x = Py for which xᵀAx = yᵀDy = λ1y1² + λ2y2², where λ1 and λ2 are the eigenvalues of A. Since λ1 and λ2 are nonnegative, it follows that xᵀAx ≥ 0 for every vector x in Rⁿ.
EXERCISE SET 8.5
1. (a) The first partial derivatives of f are f_x(x, y) = 4y - 4x³ and f_y(x, y) = 4x - 4y³. To find the critical points we set f_x and f_y equal to zero. This yields the equations y = x³ and x = y³. From this we conclude that y = y⁹ and so y = 0 or y = ±1. Since x = y³, the corresponding values of x are x = 0 and x = ±1 respectively. Thus there are three critical points: (0, 0), (1, 1), and (-1, -1).
(b) The Hessian matrix is H(x, y) = [f_xx(x, y) f_xy(x, y); f_yx(x, y) f_yy(x, y)] = [-12x² 4; 4 -12y²]. Evaluating this matrix at the critical points of f yields
H(0, 0) = [0 4; 4 0],  H(1, 1) = [-12 4; 4 -12],  H(-1, -1) = [-12 4; 4 -12]
The eigenvalues of H(0, 0) are λ = ±4; thus the matrix H(0, 0) is indefinite and so f has a saddle point at (0, 0). The eigenvalues of H(1, 1) = H(-1, -1) = [-12 4; 4 -12] are λ = -8 and λ = -16; thus the matrix is negative definite and so f has a relative maximum at (1, 1) and at (-1, -1).
2. (a) The first partial derivatives of f are f_x(x, y) = 3x² - 6y and f_y(x, y) = -6x - 3y². To find the critical points we set f_x and f_y equal to zero. This yields the equations y = x²/2 and x = -y²/2. From this we conclude that y = y⁴/8 and so y = 0 or y = 2. The corresponding values of x are x = 0 and x = -2 respectively. Thus there are two critical points: (0, 0) and (-2, 2).
(b) The Hessian matrix is H(x, y) = [f_xx(x, y) f_xy(x, y); f_yx(x, y) f_yy(x, y)] = [6x -6; -6 -6y]. The eigenvalues of H(0, 0) = [0 -6; -6 0] are λ = ±6; this matrix is indefinite and so f has a saddle point at (0, 0). The eigenvalues of H(-2, 2) = [-12 -6; -6 -12] are λ = -6 and λ = -18; this matrix is negative definite and so f has a relative maximum at (-2, 2).
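The second-derivative test used in Exercises 1 and 2 amounts to classifying the eigenvalues of the Hessian. A sketch assuming NumPy is available, using the Hessian from Exercise 1 (whose function is f(x, y) = 4xy - x⁴ - y⁴):

```python
import numpy as np

# f(x, y) = 4xy - x^4 - y^4 has f_x = 4y - 4x^3 and f_y = 4x - 4y^3.
def hessian(x, y):
    return np.array([[-12.0 * x**2, 4.0],
                     [4.0, -12.0 * y**2]])

for point, expected in [((0, 0), "saddle"),
                        ((1, 1), "relative maximum"),
                        ((-1, -1), "relative maximum")]:
    eigvals = np.linalg.eigvalsh(hessian(*point))
    if np.all(eigvals > 0):
        kind = "relative minimum"
    elif np.all(eigvals < 0):
        kind = "relative maximum"
    else:
        kind = "saddle"
    assert kind == expected, (point, eigvals)
```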
13. The constraint equation 4x² + 8y² = 16 can be rewritten as (x/2)² + (y/√2)² = 1. Thus, with the change of variable (x, y) = (2x', √2 y'), the problem is to find the extreme values of z = xy = 2√2 x'y' subject to x'² + y'² = 1. Note that z = 2√2 x'y' can be expressed as z = x'ᵀAx' where A = [0 √2; √2 0]. The eigenvalues of A are λ1 = √2 and λ2 = -√2, with corresponding (normalized) eigenvectors v1 = [1/√2; 1/√2] and v2 = [-1/√2; 1/√2]. Thus the constrained maximum is z = √2 occurring at (x', y') = (1/√2, 1/√2) or (x, y) = (√2, 1). Similarly, the constrained minimum is z = -√2 occurring at (x', y') = (-1/√2, 1/√2) or (x, y) = (-√2, 1).
14. The constraint x² + 3y² = 16 can be rewritten as (x/4)² + (√3 y/4)² = 1. Thus, setting (x, y) = (4x', (4/√3)y'), the problem is to find the extreme values of z = x² + xy + 2y² = 16x'² + (16/√3)x'y' + (32/3)y'² subject to x'² + y'² = 1. Note that z can be expressed as z = x'ᵀAx' where A = [16 8/√3; 8/√3 32/3]. The eigenvalues of A are λ1 = 56/3 and λ2 = 8, with corresponding (normalized) eigenvectors u1 = [√3/2; 1/2] and u2 = [1/2; -√3/2]. Thus the constrained maximum is z = 56/3 occurring at (x', y') = ±(√3/2, 1/2) or (x, y) = ±(2√3, 2/√3), and the constrained minimum is z = 8 occurring at (x', y') = ±(1/2, -√3/2) or (x, y) = ±(2, -2).
15. The level curve corresponding to the constrained maximum is the hyperbola 5x² - y² = 5; it touches the unit circle at (x, y) = (±1, 0). The level curve corresponding to the constrained minimum is the hyperbola 5x² - y² = -1; it touches the unit circle at (x, y) = (0, ±1).
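The principle used in Exercises 13 and 14 (the extreme values of xᵀAx on the unit circle are the extreme eigenvalues of A) can be checked by sampling. A sketch assuming NumPy is available, using the matrix from Exercise 13:

```python
import numpy as np

# Exercise 13 after the change of variable: z = 2*sqrt(2)*x'y' on the unit circle.
A = np.array([[0.0, np.sqrt(2.0)],
              [np.sqrt(2.0), 0.0]])

eigvals = np.linalg.eigvalsh(A)     # ascending order
z_min, z_max = eigvals[0], eigvals[-1]
assert np.isclose(z_max, np.sqrt(2.0))
assert np.isclose(z_min, -np.sqrt(2.0))

# Sampling the constraint circle never beats the eigenvalue bounds.
t = np.linspace(0.0, 2.0 * np.pi, 1000)
samples = 2.0 * np.sqrt(2.0) * np.cos(t) * np.sin(t)
assert samples.max() <= z_max + 1e-9 and samples.min() >= z_min - 1e-9
```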
16. The level curve corresponding to the constrained maximum is the hyperbola xy = 1/2; it touches the unit circle at (x, y) = ±(1/√2, 1/√2). The level curve corresponding to the constrained minimum is the hyperbola xy = -1/2; it touches the unit circle at (x, y) = ±(1/√2, -1/√2).
17. The area of the inscribed rectangle is z = 4xy, where (x, y) is the corner point that lies in the first quadrant. Our problem is to find the maximum value of z = 4xy subject to the constraints x ≥ 0, y ≥ 0, x² + 25y² = 25. The constraint equation can be rewritten as x'² + y'² = 1 where x = 5x' and y = y'. In terms of the variables x' and y', our problem is to find the maximum value of z = 20x'y' subject to x'² + y'² = 1, x' ≥ 0, y' ≥ 0. Note that z = 20x'y' can be expressed as z = x'ᵀAx' where A = [0 10; 10 0]. The largest eigenvalue of A is λ = 10 with corresponding (normalized) eigenvector [1/√2; 1/√2]. Thus the maximum area is z = 10, and this occurs when (x', y') = (1/√2, 1/√2) or (x, y) = (5/√2, 1/√2).
18. Our problem is to find the extreme values of z = 4x² - 4xy + y² subject to x² + y² = 25. Setting x = 5x' and y = 5y', this is equivalent to finding the extreme values of z = 100x'² - 100x'y' + 25y'² subject to x'² + y'² = 1. Note that z = x'ᵀAx' where A = [100 -50; -50 25]. The eigenvalues of A are λ1 = 125 and λ2 = 0, with corresponding (normalized) eigenvectors v1 = [-2/√5; 1/√5] and v2 = [1/√5; 2/√5]. Thus the maximum temperature encountered by the ant is z = 125, and this occurs at (x', y') = ±(-2/√5, 1/√5) or (x, y) = ±(-2√5, √5). The minimum temperature encountered is z = 0, and this occurs at (x', y') = ±(1/√5, 2/√5) or (x, y) = ±(√5, 2√5).
DISCUSSION AND DISCOVERY
D1. (a) We have f_x(x, y) = 4x³ and f_y(x, y) = 4y³; thus f has a critical point at (0, 0). Similarly, g_x(x, y) = 4x³ and g_y(x, y) = -4y³, and so g has a critical point at (0, 0). The Hessian matrices for f and g are H_f(x, y) = [12x² 0; 0 12y²] and H_g(x, y) = [12x² 0; 0 -12y²] respectively. Since H_f(0, 0) = H_g(0, 0) = [0 0; 0 0], the second derivative test is inconclusive in both cases.
(b) It is clear that f has a relative minimum at (0, 0) since f(0, 0) = 0 and f(x, y) = x⁴ + y⁴ is strictly positive at all other points (x, y). In contrast, we have g(0, 0) = 0, g(x, 0) = x⁴ > 0 for x ≠ 0, and g(0, y) = -y⁴ < 0 for y ≠ 0. Thus g has a saddle point at (0, 0).
D2. The eigenvalues of H = [2 4; 4 2] are λ = 6 and λ = -2. Thus H is indefinite and so the critical points of f (if any) are saddle points. Starting from f_xx(x, y) = f_yy(x, y) = 2 and f_yx(x, y) = f_xy(x, y) = 4 it follows, using partial integration, that the quadratic form f is f(x, y) = x² + 4xy + y². This function has one critical point (a saddle), which is located at the origin.
D3. If x is a unit eigenvector corresponding to λ, then q(x) = xᵀAx = xᵀ(λx) = λ(xᵀx) = λ(1) = λ.
WORKING WITH PROOFS
P1. First note that, as in D3, we have u_mᵀAu_m = m and u_MᵀAu_M = M. On the other hand, since u_m and u_M are orthogonal, we have u_mᵀAu_M = u_mᵀ(Mu_M) = M(u_mᵀu_M) = M(0) = 0 and u_MᵀAu_m = 0. It follows that if
x_c = √((M-c)/(M-m)) u_m + √((c-m)/(M-m)) u_M
then
x_cᵀAx_c = ((M-c)/(M-m)) u_mᵀAu_m + 0 + 0 + ((c-m)/(M-m)) u_MᵀAu_M = ((M-c)/(M-m))m + ((c-m)/(M-m))M = c
EXERCISE SET 8.6
1. The characteristic polynomial of AᵀA = [1 2; 2 4] is λ(λ - 5); thus the eigenvalues of AᵀA are λ1 = 5 and λ2 = 0, and σ1 = √5 is a singular value of A.
2. The eigenvalues of AᵀA are λ1 = 16 and λ2 = 9; thus σ1 = √16 = 4 and σ2 = √9 = 3 are singular values of A.
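The relationship between singular values and the eigenvalues of AᵀA used in Exercises 1 and 2 can be confirmed against a library SVD. A sketch assuming NumPy is available; the matrix A below is a hypothetical example chosen so that AᵀA = [1 2; 2 4]:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 0.0]])   # illustrative choice with A^T A = [[1, 2], [2, 4]]

AtA = A.T @ A
eigvals = np.sort(np.linalg.eigvalsh(AtA))[::-1]   # descending: [5, 0]
assert np.allclose(eigvals, [5.0, 0.0])

# Singular values of A are the square roots of the eigenvalues of A^T A.
assert np.allclose(np.linalg.svd(A, compute_uv=False), np.sqrt(eigvals))
```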
3. The eigenvalues of AᵀA = [2 0; 0 2] are λ = 2 (an eigenvalue of multiplicity 2); thus the singular values of A are σ1 = √2 and σ2 = √2.
4. The eigenvalues of AᵀA = [3 √2; √2 2] are λ1 = 4 and λ2 = 1; thus the singular values of A are σ1 = √4 = 2 and σ2 = √1 = 1.
5. The only eigenvalue of AᵀA is λ = 5 (i.e., λ = 5 is an eigenvalue of multiplicity 2), and the vectors v1 = [1; 0] and v2 = [0; 1] form an orthonormal basis for the eigenspace (which is all of R²). The singular values of A are σ1 = √5 and σ2 = √5. We have u1 = (1/σ1)Av1 and u2 = (1/σ2)Av2, and this results in the singular value decomposition A = [u1 u2][√5 0; 0 √5][v1 v2]ᵀ = UΣVᵀ.
6. The eigenvalues of AᵀA = [9 0; 0 16] are λ1 = 16 and λ2 = 9, with corresponding unit eigenvectors v1 = [0; 1] and v2 = [1; 0] respectively. The singular values of A are σ1 = 4 and σ2 = 3. We have u1 = (1/σ1)Av1 = (1/4)[-3 0; 0 -4][0; 1] = [0; -1] and u2 = (1/σ2)Av2 = (1/3)[-3 0; 0 -4][1; 0] = [-1; 0]. This results in the following singular value decomposition:
A = [-3 0; 0 -4] = [0 -1; -1 0][4 0; 0 3][0 1; 1 0] = UΣVᵀ
7. The eigenvalues of AᵀA = [16 24; 24 52] are λ1 = 64 and λ2 = 4, with corresponding unit eigenvectors v1 = [1/√5; 2/√5] and v2 = [-2/√5; 1/√5] respectively. The singular values of A are σ1 = 8 and σ2 = 2. We have
u1 = (1/σ1)Av1 = (1/8)[4 6; 0 4][1/√5; 2/√5] = [2/√5; 1/√5]  and  u2 = (1/σ2)Av2 = (1/2)[4 6; 0 4][-2/√5; 1/√5] = [-1/√5; 2/√5]
This results in the following singular value decomposition:
A = [4 6; 0 4] = [2/√5 -1/√5; 1/√5 2/√5][8 0; 0 2][1/√5 2/√5; -2/√5 1/√5] = UΣVᵀ
8. The eigenvalues of AᵀA = [18 18; 18 18] are λ1 = 36 and λ2 = 0, with corresponding unit eigenvectors v1 = [1/√2; 1/√2] and v2 = [-1/√2; 1/√2] respectively. The only singular value of A is σ1 = √36 = 6, and we have u1 = (1/σ1)Av1 = (1/6)[3 3; 3 3][1/√2; 1/√2] = [1/√2; 1/√2]. The vector u2 must be chosen so that {u1, u2} is an orthonormal basis for R², e.g., u2 = [-1/√2; 1/√2]. This results in the following singular value decomposition:
A = [3 3; 3 3] = [1/√2 -1/√2; 1/√2 1/√2][6 0; 0 0][1/√2 1/√2; -1/√2 1/√2] = UΣVᵀ
9. The eigenvalues of AᵀA = [9 -9; -9 9] are λ1 = 18 and λ2 = 0, with corresponding unit eigenvectors v1 = [-1/√2; 1/√2] and v2 = [1/√2; 1/√2] respectively. The only singular value of A is σ1 = √18 = 3√2, and we have
u1 = (1/σ1)Av1 = (1/3√2)[-2 2; -1 1; 2 -2][-1/√2; 1/√2] = [2/3; 1/3; -2/3]
We must choose the vectors u2 and u3 so that {u1, u2, u3} is an orthonormal basis for R³, e.g., u2 = [1/3; 2/3; 2/3] and u3 = [2/3; -2/3; 1/3]. This results in the following singular value decomposition:
A = [-2 2; -1 1; 2 -2] = [2/3 1/3 2/3; 1/3 2/3 -2/3; -2/3 2/3 1/3][3√2 0; 0 0; 0 0][-1/√2 1/√2; 1/√2 1/√2] = UΣVᵀ
Note. The singular value decomposition is not unique; it depends on the choice of the (extended) orthonormal basis for R³. This is just one possibility.
10. The eigenvalues of AᵀA = [8 4 -8; 4 2 -4; -8 -4 8] are λ1 = 18 and λ2 = λ3 = 0. The vector v1 = [2/3; 1/3; -2/3] is a unit eigenvector corresponding to λ1 = 18, and the vectors [-1; 2; 0] and [1; 0; 1] form a basis for the eigenspace corresponding to λ = 0; application of the Gram-Schmidt process to these vectors yields orthonormal eigenvectors v2 = (1/√5)[-1; 2; 0] and v3 = (1/(3√5))[4; 2; 5]. The only singular value of A is σ1 = √18 = 3√2, and we have
u1 = (1/σ1)Av1 = (1/3√2)[-2 -1 2; 2 1 -2][2/3; 1/3; -2/3] = [-1/√2; 1/√2]
Choosing u2 = [1/√2; 1/√2] so that {u1, u2} is an orthonormal basis for R², this results in the following singular value decomposition:
A = [-2 -1 2; 2 1 -2] = [-1/√2 1/√2; 1/√2 1/√2][3√2 0 0; 0 0 0][2/3 1/3 -2/3; -1/√5 2/√5 0; 4/(3√5) 2/(3√5) 5/(3√5)] = UΣVᵀ
11. The eigenvalues of AᵀA = [1 1 -1; 0 1 1][1 0; 1 1; -1 1] = [3 0; 0 2] are λ1 = 3 and λ2 = 2, with corresponding unit eigenvectors v1 = [1; 0] and v2 = [0; 1] respectively. The singular values of A are σ1 = √3 and σ2 = √2. We have u1 = (1/√3)Av1 = (1/√3)[1; 1; -1] and u2 = (1/√2)Av2 = (1/√2)[0; 1; 1]. We must choose the vector u3 so that {u1, u2, u3} is an orthonormal basis for R³, e.g., u3 = (1/√6)[2; -1; 1]. This results in the following singular value decomposition:
A = [1 0; 1 1; -1 1] = [1/√3 0 2/√6; 1/√3 1/√2 -1/√6; -1/√3 1/√2 1/√6][√3 0; 0 √2; 0 0][1 0; 0 1] = UΣVᵀ
12. The eigenvalues of AᵀA are λ1 = 64 and λ2 = 4, with corresponding unit eigenvectors v1 and v2 respectively. The singular values of A are σ1 = 8 and σ2 = 2. We have u1 = (1/8)Av1 and u2 = (1/2)Av2, and we choose u3 so that {u1, u2, u3} is an orthonormal basis for R³. This results in the singular value decomposition A = UΣVᵀ with U = [u1 u2 u3] and Σ = [8 0; 0 2; 0 0].
13. Using the singular value decomposition A = [4 6; 0 4] = [2/√5 -1/√5; 1/√5 2/√5][8 0; 0 2][1/√5 2/√5; -2/√5 1/√5] = UΣVᵀ found in Exercise 7, we have the following polar decomposition of A:
A = (UΣUᵀ)(UVᵀ) = PQ where P = UΣUᵀ = (1/5)[34 12; 12 16] and Q = UVᵀ = (1/5)[4 3; -3 4]
14. Using the singular value decomposition A = [3 3; 3 3] = [1/√2 -1/√2; 1/√2 1/√2][6 0; 0 0][1/√2 1/√2; -1/√2 1/√2] = UΣVᵀ found in Exercise 8, we have the following polar decomposition of A:
A = (UΣUᵀ)(UVᵀ) = PQ where P = UΣUᵀ = [3 3; 3 3] and Q = UVᵀ = [1 0; 0 1]
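The polar decompositions in Exercises 13 and 14 follow the recipe A = (UΣUᵀ)(UVᵀ). A sketch assuming NumPy is available, using the matrix A = [4 6; 0 4] from Exercise 7:

```python
import numpy as np

A = np.array([[4.0, 6.0],
              [0.0, 4.0]])   # matrix from Exercises 7 and 13

U, s, Vt = np.linalg.svd(A)
P = U @ np.diag(s) @ U.T     # symmetric positive definite factor
Q = U @ Vt                   # orthogonal factor

assert np.allclose(P @ Q, A)
assert np.allclose(P, P.T) and np.all(np.linalg.eigvalsh(P) > 0)
assert np.allclose(Q @ Q.T, np.eye(2))
assert np.allclose(s, [8.0, 2.0])   # singular values found in Exercise 7
```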
15. In Exercise 11 we found the following singular value decomposition:
A = [1 0; 1 1; -1 1] = [1/√3 0 2/√6; 1/√3 1/√2 -1/√6; -1/√3 1/√2 1/√6][√3 0; 0 √2; 0 0][1 0; 0 1] = UΣVᵀ
Since A has rank 2, the corresponding reduced singular value decomposition is
A = [1/√3 0; 1/√3 1/√2; -1/√3 1/√2][√3 0; 0 √2][1 0; 0 1]
and the reduced singular value expansion is
A = √3 [1/√3; 1/√3; -1/√3][1 0] + √2 [0; 1/√2; 1/√2][0 1]
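The reduced singular value decomposition and expansion of this kind correspond to NumPy's full_matrices=False option. A sketch assuming NumPy is available, using the matrix from Exercises 11 and 15:

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [-1.0, 1.0]])  # rank 2

U, s, Vt = np.linalg.svd(A, full_matrices=False)  # reduced SVD: U is 3x2
assert U.shape == (3, 2) and s.shape == (2,) and Vt.shape == (2, 2)
assert np.allclose(s, [np.sqrt(3.0), np.sqrt(2.0)])

# Reduced singular value expansion: A = s1*u1*v1^T + s2*u2*v2^T.
expansion = sum(s[i] * np.outer(U[:, i], Vt[i, :]) for i in range(2))
assert np.allclose(expansion, A)
```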
16. In Exercise 10 we found the following singular value decomposition:
A = [-2 -1 2; 2 1 -2] = [-1/√2 1/√2; 1/√2 1/√2][3√2 0 0; 0 0 0][2/3 1/3 -2/3; -1/√5 2/√5 0; 4/(3√5) 2/(3√5) 5/(3√5)] = UΣVᵀ
Since A has rank 1, the corresponding reduced singular value decomposition is
A = [-1/√2; 1/√2][3√2][2/3 1/3 -2/3]
and the reduced singular value expansion is
A = 3√2 [-1/√2; 1/√2][2/3 1/3 -2/3]
17. The characteristic polynomial of A is (λ + 1)(λ - 3)²; thus λ = -1 is an eigenvalue of multiplicity 1 and λ = 3 is an eigenvalue of multiplicity 2. The vector v1 = [-1; 1; 0] forms a basis for the eigenspace corresponding to λ = -1, and the vectors v2 = [1; 1; 0] and v3 = [0; 0; 1] form an (orthogonal) basis for the eigenspace corresponding to λ = 3. Thus the matrix P = [-1/√2 1/√2 0; 1/√2 1/√2 0; 0 0 1] orthogonally diagonalizes A, and the eigenvalue decomposition of A is
A = [1 2 0; 2 1 0; 0 0 3] = [-1/√2 1/√2 0; 1/√2 1/√2 0; 0 0 1][-1 0 0; 0 3 0; 0 0 3][-1/√2 1/√2 0; 1/√2 1/√2 0; 0 0 1]ᵀ
The corresponding singular value decomposition of A is obtained by shifting the negative sign from the diagonal factor to the second orthogonal factor:
A = [-1/√2 1/√2 0; 1/√2 1/√2 0; 0 0 1][1 0 0; 0 3 0; 0 0 3][1/√2 -1/√2 0; 1/√2 1/√2 0; 0 0 1]
18. The characteristic polynomial of A is (λ + 1)(λ + 2)(λ - 4); thus the eigenvalues of A are λ1 = -1, λ2 = -2, and λ3 = 4. Corresponding unit eigenvectors are v1 = [1/√5; 0; 2/√5], v2 = [0; 1; 0], and v3 = [2/√5; 0; -1/√5]. The matrix P = [v1 v2 v3] orthogonally diagonalizes A, and the eigenvalue decomposition of A is
A = [3 0 -2; 0 -2 0; -2 0 0] = [1/√5 0 2/√5; 0 1 0; 2/√5 0 -1/√5][-1 0 0; 0 -2 0; 0 0 4][1/√5 0 2/√5; 0 1 0; 2/√5 0 -1/√5]ᵀ
The corresponding singular value decomposition of A is obtained by shifting the negative signs from the diagonal factor to the second orthogonal factor:
A = [1/√5 0 2/√5; 0 1 0; 2/√5 0 -1/√5][1 0 0; 0 2 0; 0 0 4][-1/√5 0 -2/√5; 0 -1 0; 2/√5 0 -1/√5]
19. (a) The vectors u1 and u2 form a basis for col(A), u3 forms a basis for col(A)⊥ = null(Aᵀ), v1 and v2 form a basis for row(A), and v3 forms a basis for row(A)⊥ = null(A).
(b) A = σ1u1v1ᵀ + σ2u2v2ᵀ
20. Since A = UΣVᵀ and V is orthogonal, we have AV = UΣ. Written in column vector form this is
[Av1 ··· Avn] = [σ1u1 ··· σkuk 0 ··· 0]
and, since T(x) = Ax, it follows that [T(vj)]_{B'} = σjej for j = 1, ..., k and [T(vj)]_{B'} = 0 for j = k + 1, ..., n. Thus [T]_{B',B} = Σ.
21. Since AᵀA is positive semidefinite and symmetric, its eigenvalues are nonnegative and its singular values are the same as its nonzero eigenvalues. On the other hand, the singular values of A are the square roots of the nonzero eigenvalues of AᵀA. Thus the singular values of AᵀA are the squares of the singular values of A.
22. If A = UΣVᵀ is a singular value decomposition of A, then

$$AA^T=(U\Sigma V^T)(V\Sigma^TU^T)=U(\Sigma\Sigma^T)U^T=UDU^T$$

where D = ΣΣᵀ is the diagonal matrix having the eigenvalues of AAᵀ (the squares of the singular values of A) on its main diagonal. Thus UᵀAAᵀU = D, i.e., U orthogonally diagonalizes AAᵀ.
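A minimal NumPy check of this argument, using an assumed 3×2 example matrix: UᵀAAᵀU should come out diagonal, with the squared singular values (padded with zeros) on its diagonal.

```python
import numpy as np

# U from the SVD orthogonally diagonalizes A A^T (example matrix assumed).
A = np.array([[1.0, 1.0],
              [1.0, 3.0],
              [2.0, 1.0]])
U, s, Vt = np.linalg.svd(A)

d = np.zeros(A.shape[0])
d[:len(s)] = s**2                   # squared singular values, zero-padded
assert np.allclose(U.T @ (A @ A.T) @ U, np.diag(d))
```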
23. We have

$$Q=\begin{bmatrix}\cos\theta&-\sin\theta\\\sin\theta&\cos\theta\end{bmatrix}\quad\text{where }\theta=330^\circ;$$

thus multiplication by Q corresponds to rotation about the origin through an angle of 330°. The symmetric matrix P has eigenvalues λ = 3 and λ = 1, with corresponding unit eigenvectors u₁ and u₂. Thus V = [u₁ u₂] is a diagonalizing matrix for P:

$$V^TPV=\begin{bmatrix}3&0\\0&1\end{bmatrix}$$

From this we conclude that multiplication by P stretches R² by a factor of 3 in the direction of u₁ and by a factor of 1 in the direction of u₂.
DISCUSSION AND DISCOVERY

D1. (a) If A = UΣVᵀ is a singular value decomposition of an m×n matrix of rank k, then U has size m×m, Σ has size m×n, and V has size n×n.
(b) If A = U₁Σ₁V₁ᵀ is a reduced singular value decomposition of an m×n matrix of rank k, then U₁ has size m×k, Σ₁ has size k×k, and V₁ has size n×k.

D2. If A is an invertible matrix, then its singular values are nonzero. Thus if A = UΣVᵀ is a singular value decomposition of A, then Σ is invertible and A⁻¹ = (Vᵀ)⁻¹Σ⁻¹U⁻¹ = VΣ⁻¹Uᵀ. Note also that the diagonal entries of Σ⁻¹ are the reciprocals of the diagonal entries of Σ, and these are the singular values of A⁻¹. Thus A⁻¹ = VΣ⁻¹Uᵀ is a singular value decomposition of A⁻¹.

D3. If A and B are orthogonally similar matrices, then there is an orthogonal matrix P such that B = PAPᵀ. Thus, if A = UΣVᵀ is a singular value decomposition of A, then

$$B=PAP^T=P(U\Sigma V^T)P^T=(PU)\Sigma(PV)^T$$

is a singular value decomposition of B (note that PU and PV are again orthogonal). It follows that B and A have the same singular values (the nonzero diagonal entries of Σ).

D4. If P is the matrix for the orthogonal projection of Rⁿ onto a subspace W of dimension k, then P² = P and the eigenvalues of P are λ = 1 (with multiplicity k) and λ = 0 (with multiplicity n−k). Thus the singular values of P are σ₁ = 1, σ₂ = 1, …, σ_k = 1.
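D4 can be sanity-checked numerically. The sketch below (the subspace W is chosen arbitrarily) builds the projection P = QQᵀ onto a 2-dimensional subspace of R³ and confirms its singular values are 1, 1, 0.

```python
import numpy as np

# Orthogonal projection of R^3 onto a 2-dimensional subspace W
# (W chosen arbitrarily for illustration).
Q, _ = np.linalg.qr(np.array([[1.0, 0.0],
                              [1.0, 1.0],
                              [0.0, 2.0]]))   # orthonormal basis for W
P = Q @ Q.T                                    # projection matrix

assert np.allclose(P @ P, P)                   # idempotent
assert np.allclose(P, P.T)                     # symmetric
svals = np.linalg.svd(P, compute_uv=False)
assert np.allclose(svals, [1.0, 1.0, 0.0])     # k ones, then zeros
```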
EXERCISE SET 8.7
1. We have AᵀA = [3 4](3, 4)ᵀ = [25]; thus A⁺ = (AᵀA)⁻¹Aᵀ = (1/25)[3 4] = [3/25 4/25].

2. We have

$$A^TA=\begin{bmatrix}1&1&2\\1&3&1\end{bmatrix}\begin{bmatrix}1&1\\1&3\\2&1\end{bmatrix}=\begin{bmatrix}6&6\\6&11\end{bmatrix};$$

thus the pseudoinverse of A is

$$A^+=(A^TA)^{-1}A^T=\tfrac{1}{30}\begin{bmatrix}11&-6\\-6&6\end{bmatrix}\begin{bmatrix}1&1&2\\1&3&1\end{bmatrix}=\begin{bmatrix}\frac16&-\frac{7}{30}&\frac{8}{15}\\0&\frac25&-\frac15\end{bmatrix}$$

3. We have

$$A^TA=\begin{bmatrix}7&0&5\\1&0&5\end{bmatrix}\begin{bmatrix}7&1\\0&0\\5&5\end{bmatrix}=\begin{bmatrix}74&32\\32&26\end{bmatrix};$$

thus the pseudoinverse of A is

$$A^+=(A^TA)^{-1}A^T=\tfrac{1}{900}\begin{bmatrix}26&-32\\-32&74\end{bmatrix}\begin{bmatrix}7&0&5\\1&0&5\end{bmatrix}=\begin{bmatrix}\frac16&0&-\frac{1}{30}\\-\frac16&0&\frac{7}{30}\end{bmatrix}$$

5. (a) $AA^+A=\begin{bmatrix}3\\4\end{bmatrix}\begin{bmatrix}\frac{3}{25}&\frac{4}{25}\end{bmatrix}\begin{bmatrix}3\\4\end{bmatrix}=\begin{bmatrix}3\\4\end{bmatrix}[1]=\begin{bmatrix}3\\4\end{bmatrix}=A$

(b) $A^+AA^+=\begin{bmatrix}\frac{3}{25}&\frac{4}{25}\end{bmatrix}\begin{bmatrix}3\\4\end{bmatrix}\begin{bmatrix}\frac{3}{25}&\frac{4}{25}\end{bmatrix}=[1]\begin{bmatrix}\frac{3}{25}&\frac{4}{25}\end{bmatrix}=A^+$

(c) $AA^+=\begin{bmatrix}3\\4\end{bmatrix}\begin{bmatrix}\frac{3}{25}&\frac{4}{25}\end{bmatrix}=\begin{bmatrix}\frac{9}{25}&\frac{12}{25}\\\frac{12}{25}&\frac{16}{25}\end{bmatrix}$ is symmetric; thus (AA⁺)ᵀ = AA⁺.

(d) $A^+A=\begin{bmatrix}\frac{3}{25}&\frac{4}{25}\end{bmatrix}\begin{bmatrix}3\\4\end{bmatrix}=[1]$ is symmetric; thus (A⁺A)ᵀ = A⁺A.

(e) The eigenvalues of $AA^T=\begin{bmatrix}9&12\\12&16\end{bmatrix}$ are λ₁ = 25 and λ₂ = 0, with corresponding unit eigenvectors v₁ = (3/5, 4/5)ᵀ and v₂ = (−4/5, 3/5)ᵀ respectively. The only singular value of Aᵀ is σ₁ = 5, and we have u₁ = (1/σ₁)Aᵀv₁ = (1/5)[3 4](3/5, 4/5)ᵀ = [1]. This results in the singular value decomposition

$$A^T=\begin{bmatrix}3&4\end{bmatrix}=[1]\begin{bmatrix}5&0\end{bmatrix}\begin{bmatrix}\frac35&\frac45\\-\frac45&\frac35\end{bmatrix}=U\Sigma V^T$$

The corresponding reduced singular value decomposition is Aᵀ = [3 4] = [1][5][3/5 4/5] = U₁Σ₁V₁ᵀ, and from this we obtain

$$(A^T)^+=V_1\Sigma_1^{-1}U_1^T=\begin{bmatrix}\frac35\\\frac45\end{bmatrix}\left[\tfrac15\right][1]=\begin{bmatrix}\frac{3}{25}\\\frac{4}{25}\end{bmatrix}=(A^+)^T$$

(f) The eigenvalues of $(A^+)^TA^+=\tfrac{1}{625}\begin{bmatrix}9&12\\12&16\end{bmatrix}$ are λ₁ = 1/25 and λ₂ = 0, with corresponding unit eigenvectors v₁ = (3/5, 4/5)ᵀ and v₂ = (−4/5, 3/5)ᵀ respectively. The only singular value of A⁺ is σ₁ = 1/5, and we have u₁ = (1/σ₁)A⁺v₁ = 5[3/25 4/25](3/5, 4/5)ᵀ = [1]. This results in the singular value decomposition

$$A^+=\begin{bmatrix}\tfrac{3}{25}&\tfrac{4}{25}\end{bmatrix}=[1]\begin{bmatrix}\tfrac15&0\end{bmatrix}\begin{bmatrix}\frac35&\frac45\\-\frac45&\frac35\end{bmatrix}=U\Sigma V^T$$

The corresponding reduced singular value decomposition is A⁺ = [1][1/5][3/5 4/5] = U₁Σ₁V₁ᵀ, and from this we obtain

$$(A^+)^+=V_1\Sigma_1^{-1}U_1^T=\begin{bmatrix}\frac35\\\frac45\end{bmatrix}[5][1]=\begin{bmatrix}3\\4\end{bmatrix}=A$$
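For matrices with full column rank, Formula (3) and the four conditions verified in Exercise 5 can be checked in a few lines of NumPy; the 3×2 matrix from Exercise 2 is used here.

```python
import numpy as np

# Formula (3) for a full-column-rank matrix (the matrix of Exercise 2),
# plus the four Penrose conditions verified in Exercise 5.
A = np.array([[1.0, 1.0],
              [1.0, 3.0],
              [2.0, 1.0]])
Aplus = np.linalg.inv(A.T @ A) @ A.T           # A+ = (A^T A)^{-1} A^T

assert np.allclose(Aplus, np.linalg.pinv(A))   # agrees with NumPy's pinv
assert np.allclose(A @ Aplus @ A, A)           # (a)
assert np.allclose(Aplus @ A @ Aplus, Aplus)   # (b)
assert np.allclose((A @ Aplus).T, A @ Aplus)   # (c)
assert np.allclose((Aplus @ A).T, Aplus @ A)   # (d)
```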
6. (a) $AA^+A=\begin{bmatrix}1&1\\1&3\\2&1\end{bmatrix}\left(\begin{bmatrix}\frac16&-\frac{7}{30}&\frac{8}{15}\\0&\frac25&-\frac15\end{bmatrix}\begin{bmatrix}1&1\\1&3\\2&1\end{bmatrix}\right)=\begin{bmatrix}1&1\\1&3\\2&1\end{bmatrix}\begin{bmatrix}1&0\\0&1\end{bmatrix}=A$

(b) $A^+AA^+=(A^+A)A^+=I_2A^+=A^+$

(c) $AA^+=\begin{bmatrix}\frac16&\frac16&\frac13\\\frac16&\frac{29}{30}&-\frac{1}{15}\\\frac13&-\frac{1}{15}&\frac{13}{15}\end{bmatrix}$ is symmetric; thus (AA⁺)ᵀ = AA⁺.

(d) $A^+A=\begin{bmatrix}1&0\\0&1\end{bmatrix}$ is symmetric; thus (A⁺A)ᵀ = A⁺A.

(e) The eigenvalues of $AA^T=\begin{bmatrix}2&4&3\\4&10&5\\3&5&5\end{bmatrix}$ are λ₁ = 15, λ₂ = 2, and λ₃ = 0, with corresponding unit eigenvectors v₁ = (1/√195)(5, 11, 7)ᵀ, v₂ = (1/√26)(1, −3, 4)ᵀ, and v₃ = (1/√30)(−5, 1, 2)ᵀ. The singular values of Aᵀ are σ₁ = √15 and σ₂ = √2. Setting

$$\mathbf{u}_1=\tfrac{1}{\sigma_1}A^T\mathbf{v}_1=\tfrac{1}{\sqrt{15}}\cdot\tfrac{1}{\sqrt{195}}\begin{bmatrix}1&1&2\\1&3&1\end{bmatrix}\begin{bmatrix}5\\11\\7\end{bmatrix}=\tfrac{1}{\sqrt{13}}\begin{bmatrix}2\\3\end{bmatrix},\qquad\mathbf{u}_2=\tfrac{1}{\sigma_2}A^T\mathbf{v}_2=\tfrac{1}{\sqrt{13}}\begin{bmatrix}3\\-2\end{bmatrix}$$

we have the reduced singular value decomposition Aᵀ = U₁Σ₁V₁ᵀ with U₁ = [u₁ u₂], Σ₁ = diag(√15, √2), and V₁ = [v₁ v₂], and from this it follows that

$$(A^T)^+=V_1\Sigma_1^{-1}U_1^T=\begin{bmatrix}\frac16&0\\-\frac{7}{30}&\frac25\\\frac{8}{15}&-\frac15\end{bmatrix}=(A^+)^T$$

(f) The eigenvalues of (A⁺)ᵀA⁺ are λ₁ = 1/2, λ₂ = 1/15, and λ₃ = 0, with corresponding unit eigenvectors v₁ = (1/√26)(1, −3, 4)ᵀ, v₂ = (1/√195)(5, 11, 7)ᵀ, and v₃ = (1/√30)(−5, 1, 2)ᵀ. The singular values of A⁺ are σ₁ = 1/√2 and σ₂ = 1/√15. Setting

$$\mathbf{u}_1=\tfrac{1}{\sigma_1}A^+\mathbf{v}_1=\sqrt2\cdot\tfrac{1}{\sqrt{26}}A^+\begin{bmatrix}1\\-3\\4\end{bmatrix}=\tfrac{1}{\sqrt{13}}\begin{bmatrix}3\\-2\end{bmatrix},\qquad\mathbf{u}_2=\tfrac{1}{\sigma_2}A^+\mathbf{v}_2=\tfrac{1}{\sqrt{13}}\begin{bmatrix}2\\3\end{bmatrix}$$

we have the reduced singular value decomposition A⁺ = U₁Σ₁V₁ᵀ with U₁ = [u₁ u₂], Σ₁ = diag(1/√2, 1/√15), and V₁ = [v₁ v₂], and from this it follows that

$$(A^+)^+=V_1\Sigma_1^{-1}U_1^T=\begin{bmatrix}1&1\\1&3\\2&1\end{bmatrix}=A$$

7. The only eigenvalue of AᵀA = [25] is λ₁ = 25, with corresponding unit eigenvector v₁ = [1]. The only singular value of A is σ₁ = 5. We have u₁ = (1/σ₁)Av₁ = (1/5)(3, 4)ᵀ[1] = (3/5, 4/5)ᵀ, and we choose u₂ = (−4/5, 3/5)ᵀ so that {u₁, u₂} is an orthonormal basis for R². This results in the singular value decomposition

$$A=\begin{bmatrix}3\\4\end{bmatrix}=\begin{bmatrix}\frac35&-\frac45\\\frac45&\frac35\end{bmatrix}\begin{bmatrix}5\\0\end{bmatrix}[1]=U\Sigma V^T$$

The corresponding reduced singular value decomposition is

$$A=\begin{bmatrix}\frac35\\\frac45\end{bmatrix}[5][1]=U_1\Sigma_1V_1^T$$

and from this we obtain A⁺ = V₁Σ₁⁻¹U₁ᵀ = [1][1/5][3/5 4/5] = [3/25 4/25].
8. The eigenvalues of $A^TA=\begin{bmatrix}6&6\\6&11\end{bmatrix}$ are λ₁ = 15 and λ₂ = 2, with corresponding unit eigenvectors v₁ = (1/√13)(2, 3)ᵀ and v₂ = (1/√13)(3, −2)ᵀ. The singular values of A are σ₁ = √15 and σ₂ = √2. Setting

$$\mathbf{u}_1=\tfrac{1}{\sigma_1}A\mathbf{v}_1=\tfrac{1}{\sqrt{15}}\cdot\tfrac{1}{\sqrt{13}}\begin{bmatrix}1&1\\1&3\\2&1\end{bmatrix}\begin{bmatrix}2\\3\end{bmatrix}=\tfrac{1}{\sqrt{195}}\begin{bmatrix}5\\11\\7\end{bmatrix},\qquad\mathbf{u}_2=\tfrac{1}{\sigma_2}A\mathbf{v}_2=\tfrac{1}{\sqrt{26}}\begin{bmatrix}1\\-3\\4\end{bmatrix}$$

we have the reduced singular value decomposition

$$A=\begin{bmatrix}1&1\\1&3\\2&1\end{bmatrix}=\begin{bmatrix}\frac{5}{\sqrt{195}}&\frac{1}{\sqrt{26}}\\\frac{11}{\sqrt{195}}&-\frac{3}{\sqrt{26}}\\\frac{7}{\sqrt{195}}&\frac{4}{\sqrt{26}}\end{bmatrix}\begin{bmatrix}\sqrt{15}&0\\0&\sqrt2\end{bmatrix}\begin{bmatrix}\frac{2}{\sqrt{13}}&\frac{3}{\sqrt{13}}\\\frac{3}{\sqrt{13}}&-\frac{2}{\sqrt{13}}\end{bmatrix}=U_1\Sigma_1V_1^T$$

and it follows that

$$A^+=V_1\Sigma_1^{-1}U_1^T=\begin{bmatrix}\frac16&-\frac{7}{30}&\frac{8}{15}\\0&\frac25&-\frac15\end{bmatrix}$$
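The pseudoinverse obtained here from the reduced singular value decomposition should match the one obtained from Formula (3) in Exercise 2; a brief NumPy cross-check:

```python
import numpy as np

# A+ = V1 Sigma1^{-1} U1^T from the reduced SVD (matrix of Exercises 2 and 8).
A = np.array([[1.0, 1.0],
              [1.0, 3.0],
              [2.0, 1.0]])
U1, s, V1t = np.linalg.svd(A, full_matrices=False)
Aplus = V1t.T @ np.diag(1.0 / s) @ U1.T

expected = np.array([[1/6, -7/30, 8/15],
                     [0.0,  2/5, -1/5]])
assert np.allclose(Aplus, expected)
```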
9. The eigenvalues of $A^TA=\begin{bmatrix}74&32\\32&26\end{bmatrix}$ are λ₁ = 90 and λ₂ = 10, with corresponding unit eigenvectors v₁ = (1/√5)(2, 1)ᵀ and v₂ = (1/√5)(1, −2)ᵀ. The singular values of A are σ₁ = √90 = 3√10 and σ₂ = √10. We have

$$\mathbf{u}_1=\tfrac{1}{\sigma_1}A\mathbf{v}_1=\tfrac{1}{3\sqrt{10}}\cdot\tfrac{1}{\sqrt5}\begin{bmatrix}7&1\\0&0\\5&5\end{bmatrix}\begin{bmatrix}2\\1\end{bmatrix}=\begin{bmatrix}\frac{1}{\sqrt2}\\0\\\frac{1}{\sqrt2}\end{bmatrix},\qquad\mathbf{u}_2=\tfrac{1}{\sigma_2}A\mathbf{v}_2=\begin{bmatrix}\frac{1}{\sqrt2}\\0\\-\frac{1}{\sqrt2}\end{bmatrix}$$

and we choose u₃ = (0, 1, 0)ᵀ so that {u₁, u₂, u₃} is an orthonormal basis for R³. This yields the singular value decomposition

$$A=\begin{bmatrix}7&1\\0&0\\5&5\end{bmatrix}=\begin{bmatrix}\frac{1}{\sqrt2}&\frac{1}{\sqrt2}&0\\0&0&1\\\frac{1}{\sqrt2}&-\frac{1}{\sqrt2}&0\end{bmatrix}\begin{bmatrix}3\sqrt{10}&0\\0&\sqrt{10}\\0&0\end{bmatrix}\begin{bmatrix}\frac{2}{\sqrt5}&\frac{1}{\sqrt5}\\\frac{1}{\sqrt5}&-\frac{2}{\sqrt5}\end{bmatrix}=U\Sigma V^T$$

The corresponding reduced singular value decomposition is

$$A=\begin{bmatrix}\frac{1}{\sqrt2}&\frac{1}{\sqrt2}\\0&0\\\frac{1}{\sqrt2}&-\frac{1}{\sqrt2}\end{bmatrix}\begin{bmatrix}3\sqrt{10}&0\\0&\sqrt{10}\end{bmatrix}\begin{bmatrix}\frac{2}{\sqrt5}&\frac{1}{\sqrt5}\\\frac{1}{\sqrt5}&-\frac{2}{\sqrt5}\end{bmatrix}=U_1\Sigma_1V_1^T$$

and from this we obtain

$$A^+=V_1\Sigma_1^{-1}U_1^T=\begin{bmatrix}\frac16&0&-\frac{1}{30}\\-\frac16&0&\frac{7}{30}\end{bmatrix}$$

10. The only eigenvalue of AᵀA = [41] is λ₁ = 41, with corresponding unit eigenvector v₁ = [1]. The only singular value of A is σ₁ = √41. We have u₁ = (1/σ₁)Av₁ = (1/√41)(4, 5)ᵀ[1] = (4/√41, 5/√41)ᵀ, and we choose u₂ = (−5/√41, 4/√41)ᵀ so that {u₁, u₂} is an orthonormal basis for R². This results in the singular value decomposition

$$A=\begin{bmatrix}4\\5\end{bmatrix}=\begin{bmatrix}\frac{4}{\sqrt{41}}&-\frac{5}{\sqrt{41}}\\\frac{5}{\sqrt{41}}&\frac{4}{\sqrt{41}}\end{bmatrix}\begin{bmatrix}\sqrt{41}\\0\end{bmatrix}[1]=U\Sigma V^T$$

The corresponding reduced singular value decomposition is

$$A=\begin{bmatrix}\frac{4}{\sqrt{41}}\\\frac{5}{\sqrt{41}}\end{bmatrix}\left[\sqrt{41}\right][1]=U_1\Sigma_1V_1^T$$

and from this we obtain A⁺ = V₁Σ₁⁻¹U₁ᵀ = [1][1/√41][4/√41 5/√41] = [4/41 5/41].
11. Since A has full column rank, we have A⁺ = (AᵀA)⁻¹Aᵀ, and A⁺ is obtained by carrying out this computation.

12. Since A has full column rank, we have A⁺ = (AᵀA)⁻¹Aᵀ; here A is square and invertible, so the computation yields A⁺ = (AᵀA)⁻¹Aᵀ = A⁻¹.
13. The matrix $A=\begin{bmatrix}1&1\\1&1\end{bmatrix}$ does not have full column rank, so Formula (3) does not apply. The eigenvalues of $A^TA=\begin{bmatrix}2&2\\2&2\end{bmatrix}$ are λ₁ = 4 and λ₂ = 0, with corresponding unit eigenvectors v₁ = (1/√2)(1, 1)ᵀ and v₂ = (1/√2)(−1, 1)ᵀ. The only singular value of A is σ₁ = 2. We have

$$\mathbf{u}_1=\tfrac{1}{\sigma_1}A\mathbf{v}_1=\tfrac12\begin{bmatrix}1&1\\1&1\end{bmatrix}\cdot\tfrac{1}{\sqrt2}\begin{bmatrix}1\\1\end{bmatrix}=\tfrac{1}{\sqrt2}\begin{bmatrix}1\\1\end{bmatrix}$$

and we choose u₂ = (1/√2)(−1, 1)ᵀ so that {u₁, u₂} is an orthonormal basis for R². This results in the singular value decomposition

$$A=\begin{bmatrix}1&1\\1&1\end{bmatrix}=\begin{bmatrix}\frac{1}{\sqrt2}&-\frac{1}{\sqrt2}\\\frac{1}{\sqrt2}&\frac{1}{\sqrt2}\end{bmatrix}\begin{bmatrix}2&0\\0&0\end{bmatrix}\begin{bmatrix}\frac{1}{\sqrt2}&\frac{1}{\sqrt2}\\-\frac{1}{\sqrt2}&\frac{1}{\sqrt2}\end{bmatrix}=U\Sigma V^T$$

The corresponding reduced singular value decomposition is

$$A=\begin{bmatrix}\frac{1}{\sqrt2}\\\frac{1}{\sqrt2}\end{bmatrix}[2]\begin{bmatrix}\frac{1}{\sqrt2}&\frac{1}{\sqrt2}\end{bmatrix}=U_1\Sigma_1V_1^T$$

and from this we obtain

$$A^+=V_1\Sigma_1^{-1}U_1^T=\begin{bmatrix}\frac{1}{\sqrt2}\\\frac{1}{\sqrt2}\end{bmatrix}\left[\tfrac12\right]\begin{bmatrix}\frac{1}{\sqrt2}&\frac{1}{\sqrt2}\end{bmatrix}=\begin{bmatrix}\frac14&\frac14\\\frac14&\frac14\end{bmatrix}$$
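A short NumPy sketch of this rank-deficient case: AᵀA is singular, so Formula (3) fails, but computing A⁺ from the SVD truncated to the nonzero singular values still works.

```python
import numpy as np

# Rank-deficient case of Exercise 13.
A = np.array([[1.0, 1.0],
              [1.0, 1.0]])
U, s, Vt = np.linalg.svd(A)
k = int(np.sum(s > 1e-12))                     # numerical rank (k = 1 here)
Aplus = Vt[:k].T @ np.diag(1.0 / s[:k]) @ U[:, :k].T

assert k == 1
assert np.allclose(Aplus, np.full((2, 2), 0.25))
assert np.allclose(Aplus, np.linalg.pinv(A))
```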
14. The matrix A does not have full column rank, so Formula (3) does not apply. By inspection, however, a singular value decomposition of A is given by A = UΣVᵀ with U = I₃, Σ = A, and V = I₂, since A is itself in "diagonal" form with its only nonzero entry σ₁ in the (1,1) position. The corresponding reduced singular value decomposition is A = U₁Σ₁V₁ᵀ, where U₁ and V₁ consist of the first columns of U and V and Σ₁ = [σ₁]; and from this we obtain A⁺ = V₁Σ₁⁻¹U₁ᵀ, the 2×3 matrix with 1/σ₁ in the (1,1) position and zeros elsewhere.
15. The standard matrix for the orthogonal projection of R³ onto col(A) is

$$AA^+=\begin{bmatrix}1&1\\1&3\\2&1\end{bmatrix}\begin{bmatrix}\frac16&-\frac{7}{30}&\frac{8}{15}\\0&\frac25&-\frac15\end{bmatrix}=\begin{bmatrix}\frac16&\frac16&\frac13\\\frac16&\frac{29}{30}&-\frac{1}{15}\\\frac13&-\frac{1}{15}&\frac{13}{15}\end{bmatrix}$$
16. The standard matrix for the orthogonal projection of R³ onto col(A) is

$$AA^+=\begin{bmatrix}7&1\\0&0\\5&5\end{bmatrix}\begin{bmatrix}\frac16&0&-\frac{1}{30}\\-\frac16&0&\frac{7}{30}\end{bmatrix}=\begin{bmatrix}1&0&0\\0&0&0\\0&0&1\end{bmatrix}$$

17. The given system can be written in matrix form as Ax = b. The matrix A has a reduced singular value decomposition A = U₁Σ₁V₁ᵀ, from which A⁺ = V₁Σ₁⁻¹U₁ᵀ; thus the least squares solution of minimum norm for the system Ax = b is x = A⁺b.

18. The given system can likewise be written as Ax = b, and the least squares solution of minimum norm is x = A⁺b, where A⁺ is the pseudoinverse of A.
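The claim that x = A⁺b is the least squares solution can be cross-checked against a least squares solver. The A and b below are illustrative assumptions, not the systems from Exercises 17 and 18.

```python
import numpy as np

# x = A+ b as the least squares solution (example system assumed).
A = np.array([[1.0, 1.0],
              [1.0, 3.0],
              [2.0, 1.0]])
b = np.array([1.0, -1.0, 2.0])

x = np.linalg.pinv(A) @ b
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)

assert np.allclose(x, x_lstsq)      # same least squares solution
```

Because this A has full column rank the least squares solution is unique; for rank-deficient A, x = A⁺b is the particular least squares solution of minimum norm.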
19. Since Aᵀ has full column rank, we have

$$(A^+)^T=(A^T)^+=(AA^T)^{-1}A=(14)^{-1}\begin{bmatrix}1&2&3\end{bmatrix}=\begin{bmatrix}\tfrac{1}{14}&\tfrac{2}{14}&\tfrac{3}{14}\end{bmatrix}$$

and so A⁺ = (1/14)(1, 2, 3)ᵀ.

20. The matrix Aᵀ has full column rank and, from Exercise 18, its pseudoinverse (Aᵀ)⁺ is known; thus A⁺ = ((Aᵀ)⁺)ᵀ.
DISCUSSION AND DISCOVERY

D1. If A = [c₁ c₂ ⋯ cₙ] is an m×n matrix with orthogonal (nonzero) column vectors, then AᵀA = diag(‖c₁‖², …, ‖cₙ‖²) and so

$$A^+=(A^TA)^{-1}A^T=\begin{bmatrix}\frac{1}{\|\mathbf{c}_1\|^2}&&\\&\ddots&\\&&\frac{1}{\|\mathbf{c}_n\|^2}\end{bmatrix}A^T$$

Note: if the columns of A are orthonormal, then A⁺ = Aᵀ.

D2. (a) If A = σuvᵀ, where u and v are unit vectors, then A⁺ = (1/σ)vuᵀ; thus

$$A^+A=\left(\tfrac{1}{\sigma}\mathbf{v}\mathbf{u}^T\right)(\sigma\mathbf{u}\mathbf{v}^T)=\mathbf{v}(\mathbf{u}^T\mathbf{u})\mathbf{v}^T=\mathbf{v}\mathbf{v}^T\qquad\text{and}\qquad AA^+=(\sigma\mathbf{u}\mathbf{v}^T)\left(\tfrac{1}{\sigma}\mathbf{v}\mathbf{u}^T\right)=\mathbf{u}\mathbf{u}^T$$

(b) AA⁺ = uuᵀ and A⁺A = vvᵀ are the standard matrices of the orthogonal projections onto span{u} and span{v}, respectively.

D3. If c is a nonzero scalar, then (cA)⁺ = (1/c)A⁺.

D5. AA⁺ and A⁺A are the standard matrices of orthogonal projection operators. In particular, using parts (a) and (b) of Theorem 8.7.2, we have (AA⁺)(AA⁺) = A(A⁺AA⁺) = AA⁺ and (A⁺A)(A⁺A) = A⁺(AA⁺A) = A⁺A; thus AA⁺ and A⁺A are idempotent.
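Both D1 and D3 are easy to confirm numerically; the matrix below, with orthogonal columns, is an arbitrary example.

```python
import numpy as np

# D1: for orthogonal (nonzero) columns, A+ = diag(1/||c_j||^2) A^T.
A = np.array([[1.0, 2.0],
              [1.0, -2.0],
              [2.0, 0.0]])
assert np.isclose(A[:, 0] @ A[:, 1], 0.0)      # columns are orthogonal

norms2 = np.sum(A**2, axis=0)                  # squared column norms
assert np.allclose(np.diag(1.0 / norms2) @ A.T, np.linalg.pinv(A))

# D3: (cA)+ = (1/c) A+ for a nonzero scalar c.
c = -3.0
assert np.allclose(np.linalg.pinv(c * A), (1.0 / c) * np.linalg.pinv(A))
```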
WORKING WITH PROOFS

P1. If A has rank k, then we have U₁ᵀU₁ = V₁ᵀV₁ = I_k; thus

$$AA^+A=(U_1\Sigma_1V_1^T)(V_1\Sigma_1^{-1}U_1^T)(U_1\Sigma_1V_1^T)=U_1\Sigma_1\Sigma_1^{-1}\Sigma_1V_1^T=U_1\Sigma_1V_1^T=A$$

P5. Using P4 and P1, we have A⁺AA⁺ = A⁺(A⁺)⁺A⁺ = A⁺.

P6. Using P4 and P2, we have (AA⁺)ᵀ = ((A⁺)⁺A⁺)ᵀ = (A⁺)⁺A⁺ = AA⁺.

P7. First note that, as in Exercise P2, we have AA⁺ = (U₁Σ₁V₁ᵀ)(V₁Σ₁⁻¹U₁ᵀ) = U₁U₁ᵀ. Thus, since the columns of U₁ form an orthonormal basis for col(A), the matrix AA⁺ = U₁U₁ᵀ is the standard matrix of the orthogonal projection of Rᵐ onto col(A).

P8. It follows from Exercise P7 that Aᵀ(Aᵀ)⁺ is the standard matrix of the orthogonal projection of Rⁿ onto col(Aᵀ) = row(A). Furthermore, using parts (d), (e), and (f) of Theorem 8.7.2, we have

$$A^T(A^T)^+=A^T(A^+)^T=(A^+A)^T=A^+A$$

and so A⁺A is the matrix of the orthogonal projection of Rⁿ onto row(A).
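P7 and P8 can be illustrated numerically; the 3×2 example matrix below is an assumption for demonstration.

```python
import numpy as np

# P7/P8: A A+ projects onto col(A); A+ A projects onto row(A).
A = np.array([[1.0, 1.0],
              [1.0, 3.0],
              [2.0, 1.0]])
Aplus = np.linalg.pinv(A)

P_col = A @ Aplus                              # projection of R^3 onto col(A)
assert np.allclose(P_col @ A, A)               # fixes every column of A
assert np.allclose(P_col, P_col.T)             # symmetric
assert np.allclose(P_col @ P_col, P_col)       # idempotent

P_row = Aplus @ A                              # projection of R^2 onto row(A)
assert np.allclose(P_row, np.eye(2))           # rank 2, so row(A) = R^2
```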