Uploaded by sahogf

Elementary Linear Algebra Solution Manual (12th Edition)

advertisement
Elementary
Linear Algebra
Solution
Howard Anton
12th edition
Table of Contents.
ch01····················································································································································································· 1
ch02················································································································································································· 161
ch03················································································································································································· 208
ch04················································································································································································· 241
ch05 ················································································································································································ 353
ch06················································································································································································· 417
ch07················································································································································································· 463
ch08················································································································································································· 524
ch09················································································································································································· 583
1.1 Introduction to Systems of Linear Equations
1.1 Introduction to Systems of Linear Equations
1.
2.
3.
4.
(a)
This is a linear equation in x1 , x2 , and x3 .
(b)
This is not a linear equation in x1 , x2 , and x3 because of the term x1 x3 .
(c)
We can rewrite this equation in the form x1  7 x2  3 x3  0 therefore it is a linear equation in x1 , x2 , and x3 .
(d)
This is not a linear equation in x1 , x2 , and x3 because of the term x12 .
(e)
This is not a linear equation in x1 , x2 , and x3 because of the term x13/5 .
(f)
This is a linear equation in x1 , x2 , and x3 .
(a)
This is a linear equation in x and y .
(b)
This is not a linear equation in x and y because of the terms 2x1/3 and 3 y .
(c)
This is a linear equation in x and y .
(d)
This is not a linear equation in x and y because of the term 7 cos x .
(e)
This is not a linear equation in x and y because of the term xy .
(f)
We can rewrite this equation in the form  x  y  7 thus it is a linear equation in x and y .
(a)
a11 x1
a21 x1
 a12 x2
 a22 x2
 b1
 b2
(b)
a11 x1
a21 x1
a31 x1
 a12 x2
 a22 x2
 a32 x2
 a13 x3
 a23 x3
 a33 x3
 b1
 b2
 b3
(c)
a11 x1
a21 x1
 a12 x2
 a22 x2
 a13 x3
 a23 x3
 a14 x4
 a24 x4
(a)
(c)
(b)
 a11
a
 21
a12
a22
b1 
b2 
 a11
a
 21
 a31
 b1
 b2
a12
a13
a22
a23
a32
a33
b1 
b2 
b3 
 a11
a
 21
a12
a22
a13
a23
a14
a24
b1 
b2 
1
1.1 Introduction to Systems of Linear Equations
5.
(a)
(b)
2 x1
3 x1
6.
5 x1
8.
 4 x2
x2
 0
 0
 1
3 x1
7 x1
 2 x3
 4 x3
x3

x2

 2 x2
(a)
7.
2
 5
 3
 7
(b)
3 x2
 2 x2
(a)
 2 6 
 3 8


 9 3 
(a)
 3 2 1
4
5 3

 7
3 2 

 x4
 3 x4
x3
 1
 6
(b)
6
0

3 x1
4 x1
 x1

x3
 4 x3
 3 x2
 4 x4
x4

 2 x4
x4


3
 3
 9
 2
(c)
1 0
 0 2 0 3
 3 1 1 0 0 1


 6 2 1 2 3 6 
1 3 4 
5 1 1 
(b)
 2 0 2 1
 3 1 4 7 


 6
1 1 0 
(c)
1 0 0 1 
0 1 0 2 


0 0 1 3 
9.
The values in (a), (d), and (e) satisfy all three equations – these 3-tuples are solutions of the system.
The 3-tuples in (b) and (c) are not solutions of the system.
10.
The values in (b), (d), and (e) satisfy all three equations – these 3-tuples are solutions of the system.
The 3-tuples in (a) and (c) are not solutions of the system.
11.
(a)
We can eliminate x from the second equation by adding 2 times the first equation to the second. This yields
the system
3x  2 y  4
0  1
The second equation is contradictory, so the original system has no solutions. The lines represented by the
equations in that system have no points of intersection (the lines are parallel and distinct).
(b)
We can eliminate x from the second equation by adding 2 times the first equation to the second. This yields
the system
2x  4y  1
0  0
1.1 Introduction to Systems of Linear Equations
3
The second equation does not impose any restriction on x and y therefore we can omit it. The lines
represented by the original system have infinitely many points of intersection. Solving the first equation for x
we obtain x  12  2 y . This allows us to represent the solution using parametric equations
x
1
 2t ,
2
yt
where the parameter t is an arbitrary real number.
(c)
We can eliminate x from the second equation by adding 1 times the first equation to the second. This yields
the system
x  2y  0
 2y  8
From the second equation we obtain y  4 . Substituting 4 for y into the first equation results in x  8 .
Therefore, the original system has the unique solution
x  8,
y  4
The represented by the equations in that system have one point of intersection:  8, 4  .
We can eliminate x from the second equation by adding 2 times the first equation to the second. This yields
the system
12.
2 x  3y  a
0  b  2a
If b  2 a  0 (i.e., b  2 a ) then the second equation imposes no restriction on x and y ; consequently, the
system has infinitely many solutions.
If b  2 a  0 (i.e., b  2 a ) then the second equation becomes contradictory thus the system has no solutions.
There are no values of a and b for which the system has one solution.
13.
(a)
Solving the equation for x we obtain x  37  75 y therefore the solution set of the original equation can be
described by the parametric equations
x
3 5
 t,
7 7
yt
where the parameter t is an arbitrary real number.
(b)
Solving the equation for x1 we obtain x1  37  35 x2  34 x3 therefore the solution set of the original equation can
be described by the parametric equations
x1 
7 5
4
 r  s,
3 3
3
x2  r ,
where the parameters r and s are arbitrary real numbers.
x3  s
1.1 Introduction to Systems of Linear Equations
(c)
4
Solving the equation for x1 we obtain x1   81  14 x2  85 x3  43 x4 therefore the solution set of the original
equation can be described by the parametric equations
1 1
5
3
x1    r  s  t ,
8 4
8
4
x2  r ,
x3  s,
x4  t
where the parameters r , s , and t are arbitrary real numbers.
(d)
Solving the equation for v we obtain v  83 w  23 x  13 y  43 z therefore the solution set of the original equation
can be described by the parametric equations
8
2
1
4
v  t1  t2  t3  t 4 ,
3
3
3
3
w  t1 ,
x  t2 ,
y  t3 ,
z  t4
where the parameters t1 , t2 , t3 , and t4 are arbitrary real numbers.
14.
(a)
Solving the equation for x we obtain x  2  10 y therefore the solution set of the original equation can be
described by the parametric equations
x  2  10t,
yt
where the parameter t is an arbitrary real number.
(b)
Solving the equation for x1 we obtain x1  3  3 x2  12 x3 therefore the solution set of the original equation can
be described by the parametric equations
x1  3  3r  12 s,
x2  r ,
x3  s
where the parameters r and s are arbitrary real numbers.
(c)
Solving the equation for x1 we obtain x1  5  12 x2  34 x3  14 x4 therefore the solution set of the original
equation can be described by the parametric equations
1
3
1
x1  5  r  s  t,
2
4
4
x2  r ,
y  s,
zt
where the parameters r , s , and t are arbitrary real numbers.
(d)
Solving the equation for v we obtain v  w  x  5 y  7z therefore the solution set of the original equation
can be described by the parametric equations
v  t1  t2  5t3  7t 4 ,
w  t1 ,
x  t2 ,
y  t3 ,
z  t4
where the parameters t1 , t2 , t3 , and t4 are arbitrary real numbers.
15.
(a)
We can eliminate x from the second equation by adding 3 times the first equation to the second. This yields
the system
2 x  3y  1
0  0
1.1 Introduction to Systems of Linear Equations
5
The second equation does not impose any restriction on x and y therefore we can omit it. Solving the first
equation for x we obtain x  12  23 y . This allows us to represent the solution using parametric equations
x
1 3
 t,
2 2
yt
where the parameter t is an arbitrary real number.
(b)
We can see that the second and the third equation are multiples of the first: adding 3 times the first equation
to the second, then adding the first equation to the third yields the system
x1  3 x2  x3  4
00
00
The last two equations do not impose any restriction on the unknowns therefore we can omit them. Solving the
first equation for x1 we obtain x1  4  3 x2  x3 . This allows us to represent the solution using parametric
equations
x1  4  3r  s,
x2  r ,
x3  s
where the parameters r and s are arbitrary real numbers.
16.
(a)
We can eliminate x1 from the first equation by adding 2 times the second equation to the first. This yields
the system
00
3 x1  x2  4
The first equation does not impose any restriction on x1 and x2 therefore we can omit it. Solving the second
equation for x1 we obtain x1   43  13 x2 . This allows us to represent the solution using parametric equations
4 1
x1    t,
3 3
x2  t
where the parameter t is an arbitrary real number.
(b)
We can see that the second and the third equation are multiples of the first: adding 3 times the first equation
to the second, then adding 2 times the first equation to the third yields the system
2 x  y  2 z  4
00
00
The last two equations do not impose any restriction on the unknowns therefore we can omit them. Solving the
first equation for x we obtain x  2  12 y  z . This allows us to represent the solution using parametric
equations
1.1 Introduction to Systems of Linear Equations
1
x  2  r  s,
2
y  r,
6
zs
where the parameters r and s are arbitrary real numbers.
17.
(a)
 1 7 8 8 
Add 2 times the second row to the first to obtain 2 3 3 2  .
0 2 3 1
(b)
 1 3 8 3
Add the third row to the first to obtain 2 9 3 2 
 1 4 3 3
 1 4 3 3 
(another solution: interchange the first row and the third row to obtain 2 9 3 2  ).
0 1 5 0 
18.
(a)
 1 2 3 4 
Multiply the first row by 12 to obtain  7 1 4 3 .
 5 4 2 7 
(b)
 1 1 3 6 
Add the third row to the first to obtain  3 1 8 1
 6 3 1 4 
 1 2 18 0 
(another solution: add 2 times the second row to the first to obtain  3 1
8 1 ).
 6
3 1 4 
19.
(a)
k
1
Add 4 times the first row to the second to obtain 
0 8  4k
x
4 
which corresponds to the system
18 
ky  4
8  4k  y  18
If k  2 then the second equation becomes 0  18 , which is contradictory thus the system becomes
inconsistent.
If k  2 then we can solve the second equation for y and proceed to substitute this value into the first equation
and solve for x .
Consequently, for all values of k  2 the given augmented matrix corresponds to a consistent linear system.
(b)
k
1
Add 4 times the first row to the second to obtain 
0 8  4 k
1
which corresponds to the system
0 
1.1 Introduction to Systems of Linear Equations
x
7
ky  1
8  4k  y  0
If k  2 then the second equation becomes 0  0 , which does not impose any restriction on x and y therefore
we can omit it and proceed to determine the solution set using the first equation. There are infinitely many
solutions in this set.
If k  2 then the second equation yields y  0 and the first equation becomes x  1 .
Consequently, for all values of k the given augmented matrix corresponds to a consistent linear system.
20.
(a)
k 
 3 4
Add 2 times the first row to the second to obtain 
which corresponds to the system
0 2 k  5 
0
3x  4 y  k
0  2k  5
If k   25 then the second equation becomes 0  0 , which does not impose any restriction on x and y
therefore we can omit it and proceed to determine the solution set using the first equation. There are infinitely
many solutions in this set.
If k   25 then the second equation is contradictory thus the system becomes inconsistent.
Consequently, the given augmented matrix corresponds to a consistent linear system only when k   25 .
(b)
1 2 
 k
Add the first row to the second to obtain 
which corresponds to the system
0 
4  k 0
kx
4  k  x

y
 2

0
If k  4 then the second equation becomes 0  0 , which does not impose any restriction on x and y
therefore we can omit it and proceed to determine the solution set using the first equation. There are infinitely
many solutions in this set.
If k  4 then the second equation yields x  0 and the first equation becomes y  2 .
Consequently, for all values of k the given augmented matrix corresponds to a consistent linear system.
21.
Substituting the coordinates of the first point into the equation of the curve we obtain
y1  ax12  bx1  c
Repeating this for the other two points and rearranging the three equations yields
x12 a  x1b  c  y1
x22 a  x2 b  c  y2
x32 a  x3 b  c  y3
1.1 Introduction to Systems of Linear Equations
 x12

This is a linear system in the unknowns a , b , and c . Its augmented matrix is  x22
 x32

23.
8
x1 1 y1 

x2 1 y2  .
x3 1 y3 
Solving the first equation for x1 we obtain x1  c  kx2 therefore the solution set of the original equation can be
described by the parametric equations
x1  c  kt,
x2  t
where the parameter t is an arbitrary real number.
Substituting these into the second equation yields
c  kt  lt  d
which can be rewritten as
c  kt  d  lt
This equation must hold true for all real values t , which requires that the coefficients associated with the same power
of t on both sides must be equal. Consequently, c  d and k  l .
24.
The system has no solutions if either
(a)

at least two of the three lines are parallel and distinct or

each pair of lines intersects at a different point (without any lines being parallel)
The system has exactly one solution if either
(b)

two lines coincide and the third one intersects them or

all three lines intersect at a single point (without any lines being parallel)
The system has infinitely many solutions if all three lines coincide.
(c)
25.
2 x  3y  z  7
2 x  y  3z  9
4 x  2 y  5z  16
26.
We set up the linear system as discussed in Exercise 21:
12 a  1b  c  1
2 2 a  2b  c  4
 1 a
2

1b  c 
1
a 
i.e.
b  c 
1
4 a  2b  c  4
a

b  c 
1
One solution is expected, since exactly one parabola passes through any three given points  x1 , y1  ,  x2 , y2  ,  x3 , y3 
if x1 , x2 , and x3 are distinct.
27.
x 
y 
2x 
y  2z 
x

z  12
z 
5
1
1.1 Introduction to Systems of Linear Equations
9
True-False Exercises
(a)
True.  0,0,,0  is a solution.
(b)
False. Only multiplication by a nonzero constant is a valid elementary row operation.
(c)
True. If k  6 then the system has infinitely many solutions; otherwise the system is inconsistent.
(d)
True. According to the definition, a1 x1  a2 x2    an xn  b is a linear equation if the a's are not all zero. Let us
assume a j  0 . The values of all x's except for x j can be set to be arbitrary parameters, and the equation can be used
to express x j in terms of those parameters.
(e)
False. E.g. if the equations are all homogeneous then the system must be consistent. (See True-False Exercise (a)
above.)
(f)
False. If c  0 then the new system has the same solution set as the original one.
(g)
True. Adding 1 times one row to another amounts to the same thing as subtracting one row from another.
(h)
False. The second row corresponds to the equation 0  1 , which is contradictory.
1.2 Gaussian Elimination
1.
2.
(a)
This matrix has properties 1-4. It is in reduced row echelon form, therefore it is also in row echelon form.
(b)
This matrix has properties 1-4. It is in reduced row echelon form, therefore it is also in row echelon form.
(c)
This matrix has properties 1-4. It is in reduced row echelon form, therefore it is also in row echelon form.
(d)
This matrix has properties 1-4. It is in reduced row echelon form, therefore it is also in row echelon form.
(e)
This matrix has properties 1-4. It is in reduced row echelon form, therefore it is also in row echelon form.
(f)
This matrix has properties 1-4. It is in reduced row echelon form, therefore it is also in row echelon form.
(g)
This matrix has properties 1-3 but does not have property 4: the second column contains a leading 1 and a
nonzero number ( 7 ) above it. The matrix is in row echelon form but not reduced row echelon form.
(a)
This matrix has properties 1-3 but does not have property 4: the second column contains a leading 1 and a
nonzero number (2) above it. The matrix is in row echelon form but not reduced row echelon form.
(b)
This matrix does not have property 1 since its first nonzero number in the third row (2) is not a 1. The matrix is
not in row echelon form, therefore it is not in reduced row echelon form either.
(c)
This matrix has properties 1-3 but does not have property 4: the third column contains a leading 1 and a
nonzero number (4) above it. The matrix is in row echelon form but not reduced row echelon form.
(d)
This matrix has properties 1-3 but does not have property 4: the second column contains a leading 1 and a
nonzero number (5) above it. The matrix is in row echelon form but not reduced row echelon form.
1.2 Gaussian Elimination
3.
(e)
This matrix does not have property 2 since the row that consists entirely of zeros is not at the bottom of the
matrix. The matrix is not in row echelon form, therefore it is not in reduced row echelon form either.
(f)
This matrix does not have property 3 since the leading 1 in the second row is directly below the leading 1 in
the first (instead of being farther to the right). The matrix is not in row echelon form, therefore it is not in
reduced row echelon form either.
(g)
This matrix has properties 1-4. It is in reduced row echelon form, therefore it is also in row echelon form.
(a)
The first three columns are pivot columns and all three rows are pivot rows. The linear system
 3y  4z  7
x
y
 2z
 2
z
 5
can be rewritten as
x
 7  3y  4z
y
 2  2z
z
 5
and solved by back-substitution:
z5
y  2  2  5   8
x  7  3  8   4  5   37
therefore the original linear system has a unique solution: x  37 , y  8 , z  5 .
(b)
The first three columns are pivot columns and all three rows are pivot rows. The linear system
w

8 y  5z  6
w  6  8 y  5z
x  4 y  9z  3
y 
can be rewritten as
z  2
x
 3  4 y  9z
y
 2z
Let z  t . Then
y 2t
x  3  4  2  t   9t  5  13t
w  6  8  2  t   5t  10  13t
therefore the original linear system has infinitely many solutions:
w  10  13t , x  5  13t , y  2  t , z  t
where t is an arbitrary value.
(c)
Columns 1, 3, and 4 are pivot columns. The first three rows are pivot rows. The linear system
x1
 7 x2
 2 x3
x3

x4
x4
 8 x5
 3
 6 x5
 3 x5
0



5
9
0
can be rewritten: x1  3  7 x2  2 x3  8 x5 , x3  5  x4  6 x5 , x4  9  3 x5 .
Let x2  s and x5  t . Then
10
1.2 Gaussian Elimination
11
x4  9  3t
x3  5   9  3t   6t  4  3t
x1  3  7s  2  4  3t   8t  11  7s  2t
therefore the original linear system has infinitely many solutions:
x1  11  7s  2t , x2  s, x3  4  3t , x4  9  3t , x5  t
where s and t are arbitrary values.
(d)
The first two columns are pivot columns and the first two rows are pivot rows. The system is inconsistent since
the third row of the augmented matrix corresponds to the equation
0 x  0 y  0 z  1.
4.
(a)
The first three columns are pivot columns and all three rows are pivot rows. A unique solution: x  3 , y  0 ,
z7.
(b)
The first three columns are pivot columns and all three rows are pivot rows. Infinitely many solutions:
w  8  7t , x  2  3t , y  5  t , z  t where t is an arbitrary value.
(c)
Columns 1, 3, and 4 are pivot columns. The first three rows are pivot rows. Infinitely many solutions:
v  2  6 s  3t , w  s , x  7  4t , y  8  5t , z  t where s and t are arbitrary values.
(d)
Columns 1 and 3 are pivot columns. The first two rows are pivot rows. The system is inconsistent since the
third row of the augmented matrix corresponds to the equation
0 x  0 y  0 z  1.
5.
 1 1 2 8
 1 2 3 1


 3 7 4 10 
The augmented matrix for the system.
1 2 8
1
 0 1 5 9 


 3 7 4 10 
The first row was added to the second row.
1 2
8
1
0
1 5
9 

 0 10 2 14 
3 times the first row was added to the third row.
1 2
8
1
0
1 5 9 

 0 10 2 14 
The second row was multiplied by 1 .
2
8
1 1
 0 1 5
9 

 0 0 52 104 
10 times the second row was added to the third row.
1.2 Gaussian Elimination
8
1 1 2
 0 1 5 9 


 0 0
1 2 
The third row was multiplied by  521 .
The system of equations corresponding to this augmented matrix in row echelon form is
x1

x2
x2
 2 x3
 5 x3
x3
 8
 9
 2
and can be rewritten as
x1
x2
x3
 8  x2  2 x3
 9  5 x3
 2
Back-substitution yields
x3  2
x 2  9  5  2   1
x1  8  1  2  2   3
The linear system has a unique solution: x1  3 , x2  1 , x3  2 .
6.
 2 2 2 0
 2 5 2
1

 8 1 4 1
The augmented matrix for the system.
 1 1 1 0
 2 5 2
1

 8 1 4 1
The first row was multiplied by 12 .
 1 1 1 0
0 7 4
1

 8 1 4 1
2 times the first row was added to the second row.
1
1 0
1
0
7
4
1

 0 7 4 1
8 times the first row was added to the third row.
1
1 0
1
0
4
1
1
7
7

 0 7 4 1
The second row was multiplied by 17 .
 1 1 1 0
0 1 4 1 
7
7

0 0 0 0 
7 times the second row was added to the third row.
The system of equations corresponding to this augmented matrix in row echelon form is
12
1.2 Gaussian Elimination
x1

x2

x3

x2

4
x3 
7
0 
0
1
7
0
Solve the equations for the leading variables
x1   x2  x3
1 4
x2   x3
7 7
then substitute the second equation into the first
1 3
x1    x3
7 7
1 4
x 2   x3
7 7
If we assign x3 an arbitrary value t , the general solution is given by the formulas
1 3
1 4
x1    t, x2   t,
7 7
7 7
7.
x3  t
 1 1 2 1 1
 2
1 2 2 2 

 1 2 4
1
1


0 3 3 
 3 0
The augmented matrix for the system.
 1 1 2 1 1
 0 3 6 0 0 


 1 2 4
1 1


0 3 3
 3 0
2 times the first row was added to the second row.
 1 1 2 1 1
0 3 6 0 0 


0
1 2 0 0 


0 3 3
3 0
The first row was added to the third row.
 1 1 2 1 1
 0 3 6 0 0 


0
1 2 0 0 


 0 3 6 0 0 
3 times the first row was added to the fourth row.
13
1.2 Gaussian Elimination
 1 1 2 1 1
0
1 2 0 0 

0
1 2 0 0 


 0 3 6 0 0 
The second row was multiplied by 13 .
 1 1 2 1 1
0
1 2 0 0 

0 0
0 0 0


 0 3 6 0 0 
1 times the second row was added to the third row.
 1 1 2 1 1
0
1 2 0 0 

0 0
0 0 0


0 0 0
0 0
3 times the second row was added to the fourth row.
The system of equations corresponding to this augmented matrix in row echelon form is
x  y  2 z  w  1
y  2z

0 
0 
0
0
0
Solve the equations for the leading variables
x  1  y  2 z  w
y  2z
then substitute the second equation into the first
x  1  2 z  2 z  w  1  w
y  2z
If we assign z and w the arbitrary values s and t , respectively, the general solution is given by the formulas
x  1  t, y  2s,
8.
z  s,
wt
3
1
 0 2
 3 6 3 2 


 6
6
3
5
The augmented matrix for the system.
 3 6 3 2 
 0 2
3
1

 6
6
3
5
The first and second rows were interchanged.
14
1.2 Gaussian Elimination
 1 2 1  23 


1
0 2 3
6 6 3
5
The first row was multiplied by 13 .
 1 2 1  23 


1
0 2 3
0 6 9
9
6 times the first row was added to the third row.
 1 2 1  23 


1  23  12 
0
0 6
9
9 
The second row was multiplied by  12 .
 1 2 1  23 

3
1
0 1  2  2 
0 0
0
6 
6 times the second row was added to the third row.
 1 2 1  23 

3
1
0 1  2  2 
0 0
0
1
The third row was multiplied by 61 .
The system of equations corresponding to this augmented matrix in row echelon form
a  2b 
b 
2
3
3
1
c  
2
2
0 
1
c  
is clearly inconsistent.
9.
1 2 8
 1
 1 2 3 1


 3 7 4 10 
The augmented matrix for the system.
1 2 8
1
 0 1 5 9 


 3 7 4 10 
The first row was added to the second row.
1 2
8
1
0
1 5
9 

 0 10 2 14 
3 times the first row was added to the third row.
1 2
8
1
0
1 5 9 

 0 10 2 14 
The second row was multiplied by 1 .
15
1.2 Gaussian Elimination
2
8
1 1
 0 1 5
9 

 0 0 52 104 
8
1 1 2
 0 1 5 9 


 0 0
1 2 
10 times the second row was added to the third row.
The third row was multiplied by  521 .
 1 1 2 8
0 1 0 1


0 0 1 2 
5 times the third row was added to the second row.
 1 1 0 4
0 1 0 1


0 0 1 2 
2 times the third row was added to the first row.
 1 0 0 3
0 1 0 1


0 0 1 2 
1 times the second row was added to the first row.
The linear system has a unique solution: x1  3 , x2  1 , x3  2 .
10.
 2 2 2 0
 2 5 2
1

 8 1 4 1
The augmented matrix for the system.
 1 1 1 0
 2 5 2
1

 8 1 4 1
The first row was multiplied by 12 .
 1 1 1 0
0 7 4
1

 8 1 4 1
2 times the first row was added to the second row.
1
1 0
1
0
7
4
1

 0 7 4 1
8 times the first row was added to the third row.
1
1 0
1
0
4
1
1
7
7

 0 7 4 1
The second row was multiplied by 17 .
16
1.2 Gaussian Elimination
 1 1 1 0
0 1 4 1 
7
7

0 0 0 0 
7 times the second row was added to the third row.
 1 0 37  17 

4
1
7
0 1 7
0 0 0
0 
1 times the second row was added to the first row.
Infinitely many solutions: x1   17  37 t , x2  17  47 t , x3  t where t is an arbitrary value.
11.
 1 1 2 1 1
 2
1 2 2 2 

 1 2 4
1
1


0  3 3 
 3 0
The augmented matrix for the system.
 1 1 2 1 1
 0 3 6 0 0 


 1 2 4
1 1


0  3 3 
 3 0
2 times the first row was added to the second row.
 1 1 2 1 1
0 3 6 0 0 


0
1 2 0 0 


0 3 3
3 0
the first row was added to the third row.
 1 1 2 1 1
 0 3 6 0 0 


0
1 2 0 0 


 0 3 6 0 0 
3 times the first row was added to the fourth row.
 1 1 2 1 1
0
1 2 0 0 

0
1 2 0 0 


 0 3 6 0 0 
The second row was multiplied by 13 .
 1 1 2 1 1
0
1 2 0 0 

0 0
0 0 0


 0 3 6 0 0 
1 times the second row was added to the third row.
 1 1 2 1 1
0
1 2 0 0 

0 0
0 0 0


0 0 0
0 0
3 times the second row was added to the fourth row.
17
1.2 Gaussian Elimination
1
0

0

0
0 1 1
1 2 0 0 
0
0 0 0

0
0 0 0
0
the second row was added to the first row.
The system of equations corresponding to this augmented matrix in row echelon form is
 w  1
x
y  2z

0 
0 
0
0
0
Solve the equations for the leading variables
x  1  w
y  2z
If we assign z and w the arbitrary values s and t , respectively, the general solution is given by the formulas
x  1  t ,
12.
y  2s,
z  s,
wt
3
1
 0 2
 3 6  3 2 


 6
6
3
5
The augmented matrix for the system.
 3 6  3 2 
 0 2
3
1

 6
6
3
5
The first and second rows were interchanged.
 1 2 1  23 


1
0 2 3
6 6 3
5
The first row was multiplied by 13 .
 1 2 1  23 


1
0 2 3
0 6 9
9
6 times the first row was added to the third row.
 1 2 1  23 


1  23  12 
0
0 6
9
9
The second row was multiplied by  12 .
 1 2 1  23 

3
1
0 1  2  2 
0 0
0
6 
6 times the second row was added to the third row.
18
1.2 Gaussian Elimination
 1 2 1  23 

3
1
0 1  2  2 
0 0
0
1
The third row was multiplied by 61 .
 1 2 1  23 


3
0
0 1  2
0 0
0
1
1
2
times the third row was added to the second row.
 1 2 1 0 
0 1  3 0 
2


0 0
0 1
2
3
times the third row was added to the first row.
2 0
1 0
0 1  3 0 
2


0 0
0 1
2 times the second row was added to the first row.
The last row corresponds to the equation
0 a  0b  0 c  1
therefore the system is inconsistent.
(Note: this was already evident after the fifth elementary row operation.)
13.
Since the number of unknowns (4) exceeds the number of equations (3), it follows from Theorem 1.2.2 that this
system has infinitely many solutions. Those include the trivial solution and infinitely many nontrivial solutions.
14.
The system does not have nontrivial solutions.
(The third equation requires x3  0 , which substituted into the second equation yields x2  0. Both of these
substituted into the first equation result in x1  0 .)
15.
We present two different solutions.
Solution I uses Gauss-Jordan elimination
2 1 3 0 
 1 2 0 0


0 1 1 0 
The augmented matrix for the system.
 1 12 23 0 


 1 2 0 0
0 1 1 0 
The first row was multiplied by 12 .
3
0
 1 12
2


3
3
0 2  2 0 
0 1
1 0 
1 times the first row was added to the second row.
19
1.2 Gaussian Elimination
3
0
 1 12
2


0 1 1 0 
0 1 1 0 
The second row was multiplied by 23 .
3
0
 1 12
2


0 1 1 0 
0 0 2 0 
1 times the second row was added to the third row.
3
0
 1 12
2


0 1 1 0 
0 0 1 0 
The third row was multiplied by 12 .
 1 12 0 0 


0 1 0 0 
0 0 1 0 
 1 0 0 0
0 1 0 0 


0 0 1 0 
20
The third row was added to the second row
and  23 times the third row was added to the first row
 12 times the second row was added to the first row.
Unique solution: x1  0 , x2  0 , x3  0 .
Solution II. This time, we shall choose the order of the elementary row operations differently in order to avoid
introducing fractions into the computation. (Since every matrix has a unique reduced row echelon form, the exact
sequence of elementary row operations being used does not matter – see part 1 of the discussion “Some Facts About
Echelon Forms” in Section 1.2)
2 1 3 0 
 1 2 0 0


0 1 1 0 
 1 2 0 0
2 1 3 0 


0 1 1 0 
The augmented matrix for the system.
The first and second rows were interchanged
(to avoid introducing fractions into the first row).
 1 2 0 0


0 3 3 0 
0
1 1 0 
2 times the first row was added to the second row.
 1 2 0 0
0 1 1 0 


0 1 1 0 
The second row was multiplied by  13 .
1.2 Gaussian Elimination
 1 2 0 0
0 1 1 0 


0 0 2 0 
1 times the second row was added to the third row.
 1 2 0 0
0 1 1 0 


0 0 1 0 
The third row was multiplied by 12 .
 1 2 0 0
0 1 0 0 


0 0 1 0 
The third row was added to the second row.
 1 0 0 0
0 1 0 0 


0 0 1 0 
2 times the second row was added to the first row.
Unique solution: x1  0 , x2  0 , x3  0 .
16.
We present two different solutions.
Solution I uses Gauss-Jordan elimination
 2 1 3 0 
 1 2 3 0 


 1 1 4 0 
The augmented matrix for the system.
 1  12  23 0 


 1 2 3 0 
 1
1 4 0 
The first row was multiplied by 12 .
 1  12  23 0 


3
 29 0 
2
0
 1
1 4 0 
The first row was added to the second row.
 1  12

3
2
0
3
0
2
1 times the first row was added to the third row.
 23

9
2
11
2
0

0
0 
 1  12  23 0 


1 3 0 
0
3
11
0
0 
2
2
The second row was multiplied by 23 .
 1  12  23 0 


1 3 0 
0
0
0 10 0 
 23 times the second row was added to the third row.
21
1.2 Gaussian Elimination
 1  12  23 0 


1 3 0 
0
0
0
1 0 
 1  12 0 0 


1 0 0
0
0
0 1 0 
 1 0 0 0
0 1 0 0 


0 0 1 0 
22
The third row was multiplied by 101 .
3 times the third row was added to the second row
and 32 times the third row was added to the first row
1
2
times the second row was added to the first row.
Unique solution: x  0 , y  0 , z  0 .
Solution II. This time, we shall choose the order of the elementary row operations differently in order to avoid
introducing fractions into the computation. (Since every matrix has a unique reduced row echelon form, the exact
sequence of elementary row operations being used does not matter – see part 1 of the discussion “Some Facts
About Echelon Forms” in Section 1.2)
 2 1 3 0 
 1 2 3 0 


 1 1 4 0 
 1 1 4 0
 1 2 3 0 


 2 1 3 0 
The augmented matrix for the system.
The first and third rows were interchanged
(to avoid introducing fractions into the first row).
 1 1 4 0
0 3
1 0 

2 1 3 0 
The first row was added to the second row.
4 0
1 1
0 3
1 0 

0 3 11 0 
2 times the first row was added to the third row.
4 0
1 1
0 3
1 0 

0 0 10 0 
The second row was added to the third row.
 1 1 4 0
0 3 1 0 


0 0 1 0 
The third row was multiplied by  101 .
1.2 Gaussian Elimination
 1 1 4 0
0 3 0 0 


0 0 1 0 
1 times the third row was added to the second row.
 1 1 0 0
0 3 0 0 


0 0 1 0 
4 times the third row was added to the first row.
 1 1 0 0
0 1 0 0 


0 0 1 0 
The second row was multiplied by 13 .
 1 0 0 0
0 1 0 0 


0 0 1 0 
1 times the second row was added to the first row.
23
Unique solution: x  0 , y  0 , z  0 .
17.
3 1 1 1 0 
5 1 1 1 0 


The augmented matrix for the system.
1
0
 1 13 13
3


 5 1 1 1 0 
The first row was multiplied by 13 .
1
1
3

8
0

3


1
3
2
3

1
3
8
3
0

0
5 times the first row was added to the second row.
 1 13

0 1
1
3
1
4
0

1 0
The second row was multiplied by  83 .
1 0

0 1
1
4
1
4
0 0

1 0
 13 times the second row was added to the first row.
1
3
If we assign x3 and x4 the arbitrary values s and t , respectively, the general solution is given by the formulas
1
1
x1   s, x2   s  t, x3  s, x4  t .
4
4
(Note that fractions in the solution could be avoided if we assigned x3  4 s instead, which along with x4  t would
yield x1  s , x2  s  t , x3  4 s , x4  t .)
1.2 Gaussian Elimination
18.
1 3 2
 0
 2
1 4
3

 2
3 2 1

5 4
  4 3
0
0 
0

0
The augmented matrix for the system.
1 4
3
 2
 0
1 3 2

 2
3 2 1

5 4
  4 3
0
0 
0

0
The first and second rows were interchanged.
3
1
2
 1
2
2

1 3 2
 0
 2 3 2 1

5 4
 4 3
0

0
0

0
The first row was multiplied by 12 .
3
 1 12 2
2

0
1
3
2


0 2
6 4

 0 1 3 2
0

0
0

0
3
 1 12 2
2

 0 1 3 2
0 0
0
0

0
0
0 0
0

0
0

0
1

0
0

0
0  72
1
0
0
2 times the first row was added to the third row
and 4 times the first row was added to the fourth row.
 2 times the second row was added to the third row and
the second row was added to the fourth row.
0

3 2 0 
0
0 0

0
0 0
5
2
 12 times the second row was added to the first row.
If we assign w and x the arbitrary values s and t , respectively, the general solution is given by the formulas
7
5
u  s  t,
2
2
19.
 0
 1

 2

 2
2 2
4 0
0 1 3 0 
3 1
1 0

1 3 2 0 
v  3s  2t,
w  s,
xt.
The augmented matrix for the system.
24
1.2 Gaussian Elimination
 1
 0

 2

 2
0  1 3 0 
2 2
4 0 
3 1
1 0

1 3 2 0 
1
0

0

0
0  1 3 0 
2 2
4 0 
3 3 7 0

1 1 8 0 
1
0

0

0
0  1 3 0 
1 1 2 0 
3 3 7 0

1 1 8 0 
1
0

0

0
0 1
1 1
0
0
1
0

0

0
0  1 3 0 
1 1 2 0 
0 0
1 0

0 0 0 0
1
0

0

0
0 1 0 0 
1 1 0 0 
0 0 1 0

0 0 0 0
The first and second rows were interchanged.
2 times the first row was added to the third row
and 2 times the first row was added to the fourth row.
The second row was multiplied by 12 .
3 0 
2 0 
0
1 0

0 10 0 
3 times the second row was added to the third and
1 times the second row was added to the fourth row.
10 times the third row was added to the fourth row.
2 times the third row was added to the second and
3 times the third row was added to the first row.
If we assign y an arbitrary value t the general solution is given by the formulas
w  t,
20.
1 3 0 1
1 4 2 0

0 2 2 1

1 1
2 4
 1 2 1 1
0
0 
0

0
0 
x  t ,
y  t,
z 0.
The augmented matrix for the system.
25
1.2 Gaussian Elimination
3 0 1
1
0
1 2 1

0 2 2 1

1 1
0 10
0 5 1 0
1
0

0

0
0
1
0

0

0
0
1
0

0

0
0

1
0

0

0
0

1
0

0

0
0
3
0
1 2
0 2
0 21
0
9
3
0
1 2
0 1
0 21
0
9
3 0
1 2
0 1
0 0
0 0
3 0
1 2
0 1
0 0
0 0
0
0 
0

0
0 
1 times the first row was added to the second row,
2 times the first row was added to the fourth row,
and 1 times the first row was added to the fifth row.
1 0
1 0 
3 0 

11 0 
5 0 
2 times the second row was added to the third row,
10 times the second row was added to the fourth row,
and 5 times the second row was added to the fifth row.
1 0
1 0 
 32 0 

11 0 
5 0 
The third row was multiplied by 12 .
1 0
1 0 
 32 0 

41
0
2
17
0 
2
21 times the third row was added to the fourth row
and 9 times the third row was added to the fifth row.
1 0
1 0 
 32 0 

1 0
17
0 
2
2
The fourth row was multiplied by 41
.
3 0
1 0
1 2 1 0 
0 1  32 0 

0 0
1 0
0 0
0 0 
 172 times the fourth row was added to the fifth row.
The augmented matrix in row echelon form corresponds to the system
x1
 3 x2
x2
 2 x3


x4
x4
 0
 0
x3

3
x4
2
x4
 0
 0
26
1.2 Gaussian Elimination
Using back-substitution, we obtain the unique solution of this system
x1  0,
 2 1 3 4 9 
 1 0 2 7 11


 3 3
1 5 8


1 4 4 10 
2
21.
 1 0 2 7 11
 2 1 3 4 9 


 3 3
1 5 8


1 4 4 10 
2
7
11
 1 0 2
 0 1 7 10 13


 0 3
7 16 25


1 8 10 12 
0
7
11
 1 0 2
0
1 7
10
13

 0 3
7 16 25


1 8 10 12 
0
2
1
0

0

0
0
11
1 7
10
13
0 14
14 14 

0
15 20 25
1
0

0

0
0 2
1
0

0

0
0 2
11
1 7 10
13
0
1 1 1

0
0 5 10 
1
0

0

0
0 2 7 11
1 7 10 13
0
1 1 1

0
0
1 2
x2  0,
x3  0,
x4  0 .
The augmented matrix for the system.
The first and second rows were interchanged
(to avoid introducing fractions into the first row).
2 times the first row was added to the second row,
3 times the first row was added to the third row,
and 2 times the first row was added to the fourth.
The second row was multiplied by 1 .
7
11
1 7
10
13
0
1 1 1

0 15 20 25
3 times the second row was added to the third row and
1 times the second row was added to the fourth row.
7
The third row was multiplied by  141 .
7
15 times the third row was added to the fourth row.
The fourth row was multiplied by  15 .
27
1.2 Gaussian Elimination
1
0

0

0
0 2 0 3 
1 7 0 7 
0
1 0
1

0
0 1 2
1
0

0

0
0 0 0 1
1 0 0 0 
0 1 0
1

0 0 1 2
The fourth row was added to the third row,
10 times the fourth row was added to the second,
and 7 times the fourth row was added to the first.
7 times the third row was added to the second row,
and 2 times the third row was added to the first row.
Unique solution: I1  1 , I 2  0 , I 3  1 , I 4  2 .
22.
1 1 1
 0 0
  1  1 2 3 1

 1 1  2 0 1

1
 2 2 1 0
0
0 
0

0
 1 1  2 0 1 0 
  1  1 2 3 1 0 


 0 0
1 1 1 0


1 0
 2 2 1 0
1
0

0

0
1 2 0  1 0 
0
0 3 0 0 
0
1 1 1 0

0
3 0 3 0
1
0

0

0
1 2
0
1
1
0

0

0
1 2
0
1
1
0

0

0
1 2
0
1
0
0
0
0
0
0
0 1 0 
1 1 0 
0 3 0 0 

3 0 3 0
0 1 0 
1 1 0 
0 3 0 0 

0 3 0 0 
0 1 0 
1 1 0 
0
1 0 0

0 3 0 0 
The augmented matrix for the system.
The first and third rows were interchanged.
The first row was added to the second row
and 2 times the first row was added to the last row.
The second and third rows were interchanged.
3 times the second row was added to the fourth row.
The third row was multiplied by  13 .
28
1.2 Gaussian Elimination
1
0

0

0
1 2 0 1 0 
0
1 1 1 0 
0
0 1 0 0

0
0 0 0 0
3 times the third row was added to the fourth row.
1
0

0

0
1 2 0 1 0 
0
1 0
1 0 
0
0 1 0 0

0
0 0 0 0
1 times the third row was added to the second row.
1
0

0

0
1 0
1 0 
0 0 1 0 0

0 0 0 0 0
29
1 0 0
0 1 0
2 times the second row was added to the first row.
If we assign Z 2 and Z 5 the arbitrary values s and t , respectively, the general solution is given by the formulas
Z1   s  t ,
23.
24.
Z 2  s,
Z 3  t ,
Z 4  0,
Z5  t .
(a)
The system is consistent; it has a unique solution (back-substitution can be used to solve for all three
unknowns).
(b)
The system is consistent; it has infinitely many solutions (the third unknown can be assigned an arbitrary value
t , then back-substitution can be used to solve for the first two unknowns).
(c)
The system is inconsistent since the third equation 0  1 is contradictory.
(d)
There is insufficient information to decide whether the system is consistent as illustrated by these examples:

1    


For 0 0 0 0  the system is consistent with infinitely many solutions.
0 0 1  

1    
1    


For 0 0 1 0  the system is inconsistent (the matrix can be reduced to 0 0 1 0  ).
0 0 1 1 
0 0 0 1 
(a)
The system is consistent; it has a unique solution (back-substitution can be used to solve for all three
unknowns).
(b)
The system is consistent; it has a unique solution (solve the first equation for the first unknown, then proceed
to solve the second equation for the second unknown and solve the third equation last.)
(c)
(d)
1 0 0 0 


The system is inconsistent (adding 1 times the first row to the second yields 0 0 0 1  ; the second
1    
equation 0  1 is contradictory).
There is insufficient information to decide whether the system is consistent as illustrated by these examples:
1.2 Gaussian Elimination
25.

1 0 0 1


For 1 0 0 1 the system is consistent with infinitely many solutions.
1 0 0 1

1 0 0 2 
1 0 0 2 


1 0 0 1 
For 
 the system is inconsistent (the matrix can be reduced to 0 0 0 1  ).
0 0 0 0 
1 0 0 1 
3
4 
1 2
 3 1
5
2 

 4 1 a 2  14 a  2 
4 
3
1 2
 0 7
10 
14

 0 7 a 2  2 a  14 
30
The augmented matrix for the system.
3 times the first row was added to the second row
and 4 times the first row was added to the third row.
3
4 
1 2
 0 7
10 
14

2
 0 0 a  16 a  4 
1 times the second row was added to the third row.
3
4 
1 2
0 1
10 
2
7


2
 0 0 a  16 a  4 
The second row was multiplied by  17 .
The system has no solutions when a  4 (since the third row of our last matrix would then correspond to a
contradictory equation 0  8 ).
The system has infinitely many solutions when a  4 (since the third row of our last matrix would then correspond
to the equation 0  0 ).
For all remaining values of a (i.e., a  4 and a  4 ) the system has exactly one solution.
26.
1
2
1 2
 2 2
3
1 

 1 2 ( a 2  3) a 
1
2 
1 2
 0 6
1
3 

0 0 a 2  2 a  2 
1
2 
1 2
0 1

1
 61
2


 0 0  a 2  2 a  2 
The augmented matrix for the system.
2 times the first row was added to the second row
and 1 times the first row was added to the third row.
The second row was multiplied by  61 .
1.2 Gaussian Elimination
31
The system has no solutions when a  2 or a   2 (since the third row of our last matrix would then correspond
to a contradictory equation).
For all remaining values of a (i.e., a  2 and a   2 ) the system has exactly one solution.
There is no value of a for which this system has infinitely many solutions.
 1 3 1 a 
1 1
2 b 

 0 2 3 c 
27.
3 1
a 
1
 0 2 3  a  b 


 0
c 
2 3
3
1
 0 2

0
0
1

 a  b 
3
0  a  b  c 
The augmented matrix for the system.
1 times the first row was added to the second row.
a
a
 1 3 1

0 1  3
a
b
 2 
2
2

 0 0
0  a  b  c 
The second row was added to the third row.
The second row was multiplied by  12 .
If  a  b  c  0 then the linear system is consistent. Otherwise (if  a  b  c  0 ) it is inconsistent.
28.
 1 3 1 a
 1 2 1 b 


 3 7 1 c 
1
1 3
0
1 2

 0 2 4
a 
a  b 
3a  c 
a 
1 3 1
0 1 2
a  b 

 0 0 0  a  2b  c 
The augmented matrix for the system.
The first row was added to the second row and
3 times the first row was added to the third row.
2 times the second row was added to the third row.
If  a  2b  c  0 then the linear system is consistent. Otherwise (if  a  2b  c  0 ) it is inconsistent.
29.
2 1 a 
3 6 b 


The augmented matrix for the system.
1 12

3 6
The first row was multiplied by 12 .
1
2
a

b
1.2 Gaussian Elimination
1

0
1
2
9
2


 a  b
1
2
a
3
2
3 times the first row was added to the second row.
1
a 
1 12
2

1
2 
0 1  3 a  9 b 
The third row was multiplied by 29 .
1 0 23 a  19 b 

1
2 
0 1  3 a  9 b 
 12 times the second row was added to the first row.
The system has exactly one solution: x  23 a  19 b and y   13 a  29 b .
30.
1 1 1 a 
2 0 2 b 


0 3 3 c 
1 1
a 
1
 0 2 0 2 a  b 


 0
c 
3 3
a 
1 1 1
0 1 0 a  b 
2

0 3 3
c 
a
1 1 1



b
a2
0 1 0

 0 0 3 3a  23 b  c 
The augmented matrix for the system.
2 times the first row was added to the second row.
The second row was multiplied by  12 .
3 times the second row was added to the third row.
a
1 1 1



b
a2 
0 1 0
 0 0 1  a  b2  3c 
The third row was multiplied by 13 .
1 1 0 2a  b2  3c 


a  b2 
0 1 0
0 0 1 a  b2  3c 
1 times the third row was added to the first row.
a  3c 
1 0 0


a  b2 
0 1 0
0 0 1 a  b2  3c 
1 times the second row was added to the first row.
The system has exactly one solution: x1  a  3c , x2  a  b2 , and x3   a  b2  3c .
31.
1 3
Adding 2 times the first row to the second yields a matrix in row echelon form 
.
0 1
32
1.2 Gaussian Elimination
1 0 
Adding 3 times its second row to the first results in 
 , which is also in row echelon form.
0 1 
32.
1
3
2
 0 2 29 


 3 4
5 
1
3
2
 0 2 29 


 1 3
2 
1 times the first row was added to the third row.
2
1 3
 0 2 29 


 2
1
3 
The first and third rows were interchanged.
2
1 3
 0 2 29 


 0 5
1
2 times the first row was added to the third row.
2
1 3
 0 2 29 


 0
1 86 
3 times the second row was added to the third row.
2
1 3
0
1 86 

 0 2 29 
The second and third rows were interchanged.
2
1 3
0 1 86 


0 0 143 
 1 3 2
0 1 86 


0 0 1
1 3 0 
0 1 0 


0 0 1 
1 0 0 
0 1 0 


0 0 1 
33.
2 times the second row was added to the third row.
1
.
The third row was multiplied by 143
86 times the third row was added to the second row
and 2 times the third row was added to the first row.
3 times the second row was added to the first row.
We begin by substituting x  sin  , y  cos  , and z  tan  so that the system becomes
33
1.2 Gaussian Elimination
x  2 y  3z  0
2 x  5 y  3z  0
 x  5 y  5z  0
 1 2 3 0
 2
5 3 0 

 1 5 5 0 
3 0
1 2
0
1 3 0 

 0 3 8 0 
The augmented matrix for the system.
2 times the first row was added to the second row
and the first row was added to the third row.
3 0
1 2
 0 1 3 0 


 0 0 1 0 
3 times the second row was added to the third row.
3 0
1 2
 0 1 3 0 


 0 0
1 0 
The third row was multiplied by 1 .
 1 2 0 0
0 1 0 0 


0 0 1 0 
 1 0 0 0
0 1 0 0 


0 0 1 0 
3 times the third row was added to the second row and
3 times the third row was added to the first row.
2 times the second row was added to the first row.
This system has exactly one solution x  0, y  0, z  0.
On the interval 0    2 , the equation sin   0 has three solutions:   0 ,    , and   2 .
On the interval 0    2 , the equation cos   0 has two solutions:   2 and   32 .
On the interval 0    2 , the equation tan   0 has three solutions:   0 ,    , and   2 .
Overall, 3  2  3  18 solutions  ,  ,   can be obtained by combining the values of  ,  , and  listed above:
    
 0, 2 ,0  ,   , 2 ,0  , etc.



34.
We begin by substituting x  sin  , y  cos  , and z  tan  so that the system becomes
2x 
y  3z  3
4 x  2 y  2z  2
6 x  3y 
z  9
34
1.2 Gaussian Elimination
 2 1 3 3 
 4 2 2 2 


 6 3
1 9 
The augmented matrix for the system.
2 1 3 3
0 4 8 4 


0 0 8 0 
2 times the first row was added to the second row
and 3 times the first row was added to the third row.
2 1 3 3
0 4 8 4 


0 0
1 0 
The third row was multiplied by  81 .
3
 2 1 0
 0 4 0 4 


 0 0 1 0 
8 times the third row was added to the second row
and 3 times the third row was added to the first row.
 2 1 0 3 
0
1 0 1

 0 0 1 0 
The second row was multiplied by 14 .
2 0 0 2 
 0 1 0 1


 0 0 1 0 
The second row was added to the first row.
1
1 0 0
 0 1 0 1


 0 0 1 0 
The first row was multiplied by 12 .
This system has exactly one solution x  1, y  1, z  0.
The only angles  ,  , and  that satisfy the inequalities 0    2 , 0    2 , 0     and the equations
sin   1,
cos   1,
tan   0
are   2 ,    , and   0 .
35.
We begin by substituting X  x 2 , Y  y 2 , and Z  z 2 so that the system becomes
X
X
2X
 1 1 1 6
 1 1 2 2 


 2
1 1 3 
 Y
 Y
 Y

Z
 2Z
 Z
 6
 2
 3
The augmented matrix for the system.
35
1.2 Gaussian Elimination
1 1 6
1
 0 2
1 4 

 0 1 3 9 
1 1 6
1
 0 1 3 9 


 0 2
1 4 
1 1 6
1
0
1 3 9 

 0 2 1 4 
1 times the first row was added to the second row
and 2 times the first row was added to the third row.
The second and third rows were interchanged
(to avoid introducing fractions into the second row).
The second row was multiplied by 1 .
1 1 1 6 
0 1 3 9 


 0 0 7 14 
2 times the second row was added to the third row.
1 1 1 6 
0 1 3 9 


0 0 1 2 
The third row was multiplied by 17 .
1 1 0 4 
0 1 0 3 


0 0 1 2 
1 0 0 1 
0 1 0 3 


0 0 1 2 
3 times the third row was added to the second row
and 1 times the third row was added to the first row.
1 times the second row was added to the first row.
We obtain
X 1 
x  1
Y 3 
y 3
Z 2  z 2
36.
We begin by substituting a  1x , b  1y , and c  1z so that the system becomes
a  2b 
4c  1
2 a  3b  8c  0
 a  9b  10c  5
 1 2 4 1
 2 3
8 0 

 1 9 10 5 
The augmented matrix for the system.
36
1.2 Gaussian Elimination
1
 1 2 4
0 1 16 2 


0 11 6 6 
2 times the first row was added to the second row
and the first row was added to the third row.
 1 2 4 1
0 1 16 2 


0 11
6 6 
The second row was multiplied by 1 .
1
 1 2 4
 0 1 16
2 

 0 0 182 16 
11 times the second row was added to the third row.
1
 1 2 4


2
0 1 16
0 0
1  918 
1
.
The third row was multiplied by 182
Using back-substitution, we obtain
c
8
91
b  2  16c 
54
91
a  1  2b  4c  
37.
1
91

8
c
1 91
y 
b 54
1
13
x 
7
a
 z

7

13
Each point on the curve yields an equation, therefore we have a system of four equations
equation corresponding to 1,7  :
equation corresponding to  3, 11 :
equation corresponding to  4, 14  :
equation corresponding to  0,10  :
 1 1 1
 27 9 3

64 16 4

 0 0 0
1
7
1 11
1 14 

1 10 
1
1
1
7
1
 0 18 24 26 200 


 0 48 60 63 462 


0
0
1
10 
0
a 
27a 
b  c  d 
9b  3c  d 
7
11
64a  16b  4c  d  14
d  10
The augmented matrix for the system.
27 times the first row was added to the second row
and 64 times the first row was added to the third.
37
1.2 Gaussian Elimination
1
1
1
7
1
0
13
100 
4
1
3
9
9 

 0 48 60 63 462 


0
0
1
10 
0
1
0

0

0
0 4
0 0
1
0

0

0
7
13
100 
9
9 
19
107 
0 1 12 6

0 0 1 10 
The third row was multiplied by 14 .
1
0

0

0
1 1 0 3
1 43 0  103 
0 1 0
2

0 0 1 10 
19
times the fourth row was added to the third row,
 12
1
0

0

0
1 0 0 5 
1 0 0 6 
0 1 0
2

0 0 1 10 
 43 times the third row was added to the second row and
1
0

0

0
0 0 0
1
1 0 0 6 
0 1 0
2

0 0 1 10 
1 times the second row was added to the first row.
1 1
1 43
1 1
1 43
7




1 10 
The second row was multiplied by  181 .
1
13
9
19
3
100
9
214
3
48 times the second row was added to the third row.
1
 139 times the fourth row was added to the second row,
and 1 times the fourth row was added to the first.
1 times the third row was added to the first row.
The linear system has a unique solution: a  1 , b  6 , c  2 , d  10 . These are the coefficient values required
for the curve y  ax 3  bx 2  cx  d to pass through the four given points.
38.
Each point on the curve yields an equation, therefore we have a system of three equations
equation corresponding to  2,7  :
equation corresponding to  4,5  :
equation corresponding to  4, 3  :
53a  2b  7c  d
41a  4b  5c  d
25a  4b  3c  d



0
0
0
 53 2 7 1 0 
The augmented matrix of this system  41 4 5 1 0  has the reduced row echelon form
 25 4 3 1 0 
38
1.2 Gaussian Elimination
1
1 0 0
29

2
0 1 0  29
0 0 1  294
39
0

0
0 
If we assign d an arbitrary value t , the general solution is given by the formulas
a
1
t,
29
b
2
t,
29
c
4
t,
29
d t
(For instance, letting the free variable d have the value 29 yields a  1 , b  2 , and c  4 .)
39.
Since the homogeneous system has only the trivial solution, its augmented matrix must be possible to reduce via a
1 0 0 0 
sequence of elementary row operations to the reduced row echelon form 0 1 0 0  .
0 0 1 0 
Applying the same sequence of elementary row operations to the augmented matrix of the nonhomogeneous system
1 0 0 r 
yields the reduced row echelon form 0 1 0 s  where r , s , and t are some real numbers. Therefore, the
0 0 1 t 
nonhomogeneous system has one solution.
40.
41.
(a)
3 (this will be the number of leading 1's if the matrix has no rows of zeros)
(b)
5 (if all entries in B are 0)
(c)
2 (this will be the number of rows of zeros if each column contains a leading 1)
(a)
There are eight possible reduced row echelon forms:
1 0 0  1 0 r  1 r 0  1 r s  0 1 0  0 1 r  0 0 1 
0 0 0 
0 1 0  , 0 1 s  , 0 0 1  , 0 0 0  , 0 0 1  , 0 0 0  , 0 0 0  , and 0 0 0 
 
 
 




 
 
 
0 0 0 
0 0 1  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0  0 0 0 
where r and s can be any real numbers.
(b)
There are sixteen possible reduced row echelon forms:
1.2 Gaussian Elimination
1
0

0

0
0 0 0  1
1 0 0  0
,
0 1 0  0
 
0 0 1  0
0 0 r  1
1 0 s  0
,
0 1 t  0
 
0 0 0  0
0 r 0  1
1 s 0  0
,
0 0 1  0
 
0 0 0  0
0 r t  1
1 s u  0
,
0 0 0  0
 
0 0 0  0
r 0 0  1
0 1 0  0
,
0 0 1  0
 
0 0 0  0
r 0 s
0 1 t 
,
0 0 0

0 0 0
1
0

0

0
r s 0  1
0 0 1  0
,
0 0 0  0
 
0 0 0  0
r s t  0
0 0 0  0
,
0 0 0  0
 
0 0 0  0
1 0 0  0
0 1 0  0
,
0 0 1  0
 
0 0 0  0
1 0 r  0
0 1 s  0
,
0 0 0  0
 
0 0 0  0
1 r 0  0
0 0 1  0
,
0 0 0  0
 
0 0 0  0
1 r s
0 0 0 
,
0 0 0

0 0 0
0
0

0

0
0 1 0  0
0 0 1  0
,
0 0 0  0
 
0 0 0  0
0 1 r  0
0 0 0  0
,
0 0 0  0
 
0 0 0  0
0 0 1
0

0
0 0 0
, and 
0
0 0 0


0 0 0
0
0 0 0
0 0 0 
.
0 0 0

0 0 0
where r , s , t , and u can be any real numbers.
42.
43.
(a)
Either the three lines properly intersect at the origin, or two of them completely overlap and the other one
intersects them at the origin.
(b)
All three lines completely overlap one another.
(a)
We consider two possible cases: (i) a  0 , and (ii) a  0 .
(i) If a  0 then the assumption ad  bc  0 implies that b  0 and c  0 . Gauss-Jordan elimination yields
0
c

b
d 
We assumed a  0
c
0

d
b 
The rows were interchanged.
 1 dc 


0 1 
The first row was multiplied by 1c and
1 0 
0 1 


 dc times the second row was added to the first row.
the second row was multiplied by 1b . (Note that b, c  0.)
(ii) If a  0 then we perform Gauss-Jordan elimination as follows:
a
c

b
d 
1 ba 


c d 
1

0
b
a
ad  bc
a



The first row was multiplied by 1a .
c times the first row was added to the second row.
40
1.2 Gaussian Elimination
 1 ba 


0 1 
The second row was multiplied by ad a bc .
1 0 
0 1 


 ba times the second row was added to the first row.
(Note that both a and ad  bc are nonzero.)
a
In both cases ( a  0 as well as a  0 ) we established that the reduced row echelon form of 
c
provided that ad  bc  0 .
(b)
41
b
1 0 
is 


d
0 1 
a b k 
Applying the same elementary row operation steps as in part (a) the augmented matrix 
 will be
c d l 
1 0 p 
transformed to a matrix in reduced row echelon form 
 where p and q are some real numbers. We
0 1 q 
conclude that the given linear system has exactly one solution: x  p , y  q .
True-False Exercises
(a)
True. A matrix in reduced row echelon form has all properties required for the row echelon form.
(b)
1 0 
False. For instance, interchanging the rows of 
 yields a matrix that is not in row echelon form.
0 1 
(c)
False. See Exercise 31.
(d)
True. In a reduced row echelon form, the number of nonzero rows equals to the number of leading 1's. The result
follows from Theorem 1.2.1.
(e)
True. This is implied by the third property of a row echelon form (see Section 1.2).
(f)
False. Nonzero entries are permitted above the leading 1's in a row echelon form.
(g)
True. In a reduced row echelon form, the number of nonzero rows equals to the number of leading 1's. From
Theorem 1.2.1 we conclude that the system has n  n  0 free variables, i.e. it has only the trivial solution.
(h)
False. The row of zeros imposes no restriction on the unknowns and can be omitted. Whether the system has
infinitely many, one, or no solution(s) depends solely on the nonzero rows of the reduced row echelon form.
(i)
False. For example, the following system is clearly inconsistent:
x  y  z 1
xyz2
1.3 Matrices and Matrix Operations
1.
(a)
Undefined (the number of columns in B does not match the number of rows in A )
(b)
Defined; 4  4 matrix
1.3 Matrices and Matrix Operations
2.
3.
(c)
Defined; 4  2 matrix
(d)
Defined; 5  2 matrix
(e)
Defined; 4  5 matrix
(f)
Defined; 5  5 matrix
(a)
Defined; 5  4 matrix
(b)
Undefined (the number of columns in D does not match the number of rows in C )
(c)
Defined; 4  2 matrix
(d)
Defined; 2  4 matrix
(e)
Defined; 5  2 matrix
(f)
Undefined ( BAT is a 4  4 matrix, which cannot be added to a 4  2 matrix D )
(a)
5  1 2  3  7 6 5
 1 6
 1  1 0  1 1  2    2 1 3
 

 

 3  4
2  1 4  3  7 3 7 
(b)
5  1 2  3   5 4 1
 1 6
 1  1 0  1 1  2    0 1 1
 

 

 3  4
2  1 4  3  1 1 1
(c)
5  0   15 0 
5  3
 5  1 5  2    5 10 
  
 

 5  1
5  1   5 5
(d)
 7  1 7  4 7  2   7 28 14 
 7  3 7  1 7  5   21 7 35

 

(e)
Undefined (a 2  3 matrix C cannot be subtracted from a 2  2 matrix 2B )
(f)
4  1 4  3  2  1
2  5 2  2   24  2
4  10 12  4 
4  6
 4  1 4  1 4  2   2  1 2  0 2  1    4  2
  4  0 8  2 
  
   
 
 4  4
4  1 4  3  2  3
2  2 2  4   16  6
4  4 12  8 
 22 6 8 
  2 4 6 
 10 0 4 
(g)
  1 5 2  2  6
2 1 2  3  
5  2 2  6
 1  12






3   1 0 1  2   1 2  1 2  2    3  1   2  0  2 1  4 

 3  8
2  1 2  3  
2  2 4  6 
  3 2 4  2  4
3  7 3  8   39 21 24 
 3  13

  3   3  3  2 3  5    9 6 15
 3  11
3  4 3  10   33 12 30 
42
1.3 Matrices and Matrix Operations
(h)
0  0  0 0 
 33
 1  1 2  2   0 0 
 

 

 1  1
1  1  0 0 
(i)
1 0  4  5
(j)
  1 5 2  3  6
  1  18
3 1 3  3 
5  3 2  9 







tr   1 0 1  3   1 3  1 3  2    tr   1   3 0  3 1  6  
  3 2 4  3  4
  3  12
3  1 3  3  
2  3 4  9  
 


  17 2 7  


 tr   2 3 5   17  3  5  25
  9 1 5 


4.
(k)
 7  4 7   1  
 28 7  
4tr  
   4tr  
   4  28  14   4  42  168
  0 14  
 7  0 7  2  
(l)
Undefined (trace is only defined for square matrices)
(a)
 3 1 1 1 4 2   2  3  1 2   1  4 2  1  2  7 2 4 
2




22 1
2  1  5  3 5 7 
0 2 1 3 1 5   2  0  3
(b)
 1 1 3 6 1 4   1  6 1   1 3  4   5 0 1

 5 0 2    1 1 1   5  1
0  1 2  1   4 1 1

 
 
2 1 4   3 2 3 2  3
1  2 4  3  1 1 1
(c)
  1 6
5  1 2  3     5 4 1 
 5 0 1

 





  1   1 0  1 1  2      0 1 1    4 1 1
 34
 1 1 1
2  1 4  3    1 1 1 

(d)
Undefined (a 2  2 matrix BT cannot be added to a 3  2 matrix 5C T )
(e)
 12  1
1
2 4
 12  2
(f)
1  0  0 1
 4 1  4 0   4  4
 0 2    1 2   0   1 2  2   1 0 

 
 

 
(g)
6 1 4 
 1 1 3 2  6 2   1 2  4   3  1 3   1 3  3 

 



2  1 1 1  3  5 0 2    2  1 2  1
2  1   3  5 3  0
3  2
 3 2 3
2 1 4   2  3 2  2
2  3  3  2 3  1
3  4 
T
1
2
1
2
1
2
 3   14  3
 
 1    14  (1)
 5  14  1
1
4
1
4
1
4
 0   12  34
 
 2    2  14
 1   1  14
T
3
2
1
2
5
2
 0    14
 
 21    94
 14   34
12  3 2   3  8  9   9 1 1


 2  15
20
2  6    13 2 4 
 66
43
6  12   0 1 6 



0
9
4
3
2
43
1.3 Matrices and Matrix Operations
 6 1 4 
 1 1 3   2  6 2   1 2  4   3  1 3   1 3  3 
 
 



 
2  1   3  5 3  0
3  2 
 2  1 1 1  3  5 0 2      2  1 2  1
  3 2 3
2
1 4     2  3 2  2
2  3 3  2 3  1
3  4  

 
T
(h)
T
 12  3 2   3  8  9     9 1 1 
 9 13 0 


 

20
2  6      13 2 4     1
2
1
  2  15

 1 4 6 
43
6  12     0 1 6  
 66
T
(i)
1  1   4  1   2  3 

  3  1  1  1   5  3 
T
 6 1 3
1  5   4  0    2  2  1  2    4  1   2  4    1 1 2 
 3  5  1  0    5  2   3  2   1  1   5  4    4 1 3


 6 1 3
 3 9 14  


  1 1 2 
17
25
27

  4 1 3


  3  6    9  1  14  4 

17  6    25  1   27  4 
 3  1   9  1  14  1  3  3   9  2   14  3 
17  1   25  1   27  1 17  3   25  2    27  3
 65 26 69 


185 69 182 
(j)
Undefined (a 2  2 matrix B cannot be multiplied by a 3  2 matrix A )
(k)
  1 5 2  6 1 4  


tr   1 0 1  1 1 1 
  3 2 4   3 2 3 



  1  6    5  1   2  3   1  1   5  1   2  2  1  4    5  1   2  3   


 tr    1  6    0  1  1  3  1  1   0  1  1  2   1  4    0  1  1  3   
   3  6    2  1   4  3    3  1   2  1   4  2   3  4    2  1   4  3   


  17 8 15 


 tr   3 3 1   17  3  26  46
  32 7 26  


5.
(l)
Undefined ( BC is a 2  3 matrix; trace is only defined for square matrices)
(a)
  3  4    0  0    3  1   0  2    12 3

1  1   2  2    4 5
  1  4    2  0 
 1  4   1  0   1  1  1  2    4
1


(b)
Undefined (the number of columns of B does not match the number of rows in A )
44
1.3 Matrices and Matrix Operations
(c)
3 1 3  3  1 5 2
3  6
3  1 3  1 3  2   1 0 1
  


3  4
3  1 3  3   3 2 4 
 18  1   3  1   9  3  18  5    3  0    9  2  18  2    3  1   9  4  


    3  1   3  1   6  3    3  5    3  0    6  2    3  2    3  1   6  4  
 12  1   3  1   9  3  12  5    3  0    9  2  12  2    3  1   9  4  


 42 108 75
  12 3 21
 36 78 63
(d)
  3  4    0  0    3  1   0  2  
 12 3
1 4 2 

 1 4 2  
  4 5 
1  1   2  2   


  1  4    2  0 
3 1 5
3 1 5 
 1  4   1  0   1  1  1  2   


4
1




 12  1   3  3  12  4    3  1 12  2    3  5  


    4  1   5  3    4  4    5  1   4  2    5  5  
  4  1  1  3 
 4  4   1  1  4  2   1  5 

 3 45 9 
 11 11 17 
 7 17 13
(e)
 3 0
 1 2    4  1  1  3 

  0  1   2  3 
 1 1 
 3 0
 4  4   1  1  4  2   1  5    1 2   1 15 3
 0  4    2  1  0  2    2  5  1 1 6 2 10 


  3  1   0  6   3  15    0  2   3  3    0  10    3 45 9 


   1  1   2  6   1  15    2  2   1  3    2  10    11 11 17 
 1  1  1  6 
1  15  1  2  1  3  1  10    7 17 13

(f)
1 3
1 4 2  
 1  1   4  4    2  2 
3 1 5   4 1    3  1  1  4  5  2
  

  2 5    


(g)
  1  3    5  1   2  1 1  0    5  2    2  1  

 0 2 11

   1  3    0  1  1  1  1  0    0  2   1  1    
1 8 
12
   3  3    2  1   4  1  3  0    2  2    4  1  


1  3   4  1   2  5  21 17 
 3  3  1  1   5  5  17 35
T
45
1.3 Matrices and Matrix Operations
(h)
  1 3

 1  4    3  0   1  1   3  2  
 4 1   3 1 1 

  3 1 1

  4 1   0 2   0 2 1    4  4   1  0    4  1  1  2   0 2 1


  2  4  5  0  2 1  5  2  
  2 5 
      




  4  3   5  0    4  1   5  2 
5
 4
 3 1 1 


 16 2  
 16  3   2  0   16  1   2  2 
0 2 1 


 8 8 
  8  3   8  0    8  1   8  2 
 4  1   5  1 
16  1   2  1 
8  1  8  1 
6 9
12

  48 20 14 
24
8 16 
(i)
  1 5 2   1 1 3 


tr   1 0 1  5 0 2  
  3 2 4  2 1 4  



  1  1   5  5    2  2   1  1   5  0    2  1 1  3    5  2    2  4   


 tr    1  1   0  5   1  2 
1  1   0  0   1  1  1  3   0  2   1  4   
   3  1   2  5    4  2    3  1   2  0    4  1  3  3    2  2    4  4   


 30 1 21 


 tr   1 2 1   30  2  29  61
  21 1 29  


(j)
 4  6  1
 6 1 4   1 5 2  
4   1  5 4  4  2  

 



tr  4  1 1 1   1 0 1   tr   4  1   1 4  1  0
4 1  1  
 4  3  3
  3 2 3  3 2 4  
42  2
4  3  4  
 

 

 23 9 14  


 tr   5 4 3   23  4  8  35
  9 6 8 


(k)
 1 3
6 1 4  
 3 1 1



tr   4 1  
 2  1 1 1 

  2 5 0 2 1
 3 2 3 


  1  3    3  0   1  1   3  2 

tr    4  3   1  0    4  1  1  2 
  2  3    5  0    2  1   5  2 

1  1   3  1  2  6 2   1 2  4  

2  1 
 4  1  1  1    2  1 2  1
2  3 
 2  1   5  1  2  3 2  2
  3 5 4  12 2 8  
 15 3 12  







tr  12 2 5   2 2 2    tr  14 0 7    15  0  13  28
 6
 12 12 13 
8 7   6 4 6  



46
1.3 Matrices and Matrix Operations
(l)
   6 1 3  1 3 T  3 0  



tr    1 1 2   4 1    1 2  
   4 1 3  2 5   1 1 
 

 
 

    6  1  1  4    3  2   6  3   1  1   3  5   T  3 0  
 


 tr     1  1  1  4    2  2   1  3   1  1   2  5     1 2  


    4  1  1  4    3  2   4  3   1  1   3  5     1 1 


  16 34  T  3 0  

 3 0 
 

 
 16 7 14  



 tr    7 8    1 2    tr  
1 2  


  14 28    1 1 
 34 8 28   1 1 
 










  16  3    7  1  14  1
 tr  

  34  3    8  1   28  1
16  0    7  2   14  1  

 34  0   8  2    28  1 
  55 28  
 tr  
   55  44  99
 122 44  
6.
(a)
  1 1 3  6 1 3   3 0  2  1  6
2   1  1 2  3  3   3 0 
 







2  2  2   1 2 
 2  5 0 2    1 1 2    1 2   2  5   1 2  0  1



2 1  1
2  4  3   1 1
 2 1 4   4 1 3   1 1 2  2  4
 4 3 3  3 0     4  3    3  1   3  1   4  0    3  2    3  1 


  11 1 2   1 2    11  3   1  1   2  1 11  0   1  2    2  1 
 0
1 5  1 1   0  3   1  1   5  1
 0  0   1  2    5  1 
 6 3
  36 0 
 4 7 
(b)
Undefined (a 2  3 matrix  4 B  C cannot be added to a 2  2 matrix 2B )
(c)
   3  1   0  3   3  4    0  1  3  2    0  5   
 1 1 3
 



    1  1   2  3   1  4    2  1  1  2    2  5     5  5 0 2 
  1  1  1  3 
 2
1 4 
1  4   1  1 1  2   1  5  
 
T
  3 12 6    5  1 5   1 5  3  3 5 4   5 5 15

 

    5 2 8     5  5 5  0
5  2    12 2 5  25 0 10 

5 7    5  2
5 1
5  4   6 8 7  10 5 20 
  4
T
 3  5 5   5  4  15   2 10 11


  12  25
5  10   13
20
2 5
 6  10
8  5
7  20   4 3 13

47
1.3 Matrices and Matrix Operations
(d)
  4 1  3 1 1  2  1 2  4 2  2  




  0 2  0 2 1 2  3 2  1 2  5 
   4  3   1  0    4  1  1  2 

  0  3    2  0    0  1   2  2 

T
 4  1  1  1   2 8 4  

 0  1   2  1 6 2 10  
T
 12 6 3 2 8 4    12  2 6  8


  
42
  0 4 2  6 2 10     0  6
3 4 

2  10  
T
T
 10 6 
T
  10 14 1 



   14 2 

2 8  
  6
 1 8 
(e)

 1 3
 3 0 
 4 0   1 4 2  
 3 1 1 


 1 2   3 1 5   4 1  0 2 1  1 2  


  2 5 
  1 1 





 4 0   1  1   4  4    2  2 

 
 1 2     3  1  1  4    5  2 
1  3   4  1   2  5 
 3  3  1  1   5  5 
  3  3   1  1  1  1

 0  3    2  1  1  1
 3  0   1  2   1  1  

 0  0    2  2   1  1 
 4 0   21 17   11 1   4 0   21  11 17   1   4 0  10 18 




  

35  5   1 2  18 30 
 1 2   17 35  1 5   1 2  17   1
  4  10    0  18   4  18    0  30    40 72 



  1  10    2  18   1  18    2  30    26 42 
(f)
 1 1 3 6 1 4    6 1 3  1 5 2  
 5 0 2   1 1 1    1 1 2   1 0 1 


 


 2
1 4   3 2 3    4 1 3  3 2 4  
 1  6   1  1   3  3  1  1  1  1   3  2 

  5  6    0  1   2  3   5  1   0  1   2  2 
  2  6   1  1   4  3   2  1  1  1   4  2 

T
1  4   1  1   3  3 
 5  4    0  1   2  3
 2  4   1  1   4  3 
   6  1  1  1   3  3   6  5   1  0    3  2   6  2   1  1   3  4   


    1  1  1  1   2  3   1  5   1  0    2  2   1  2   1  1   2  4   
   4  1  1  1   3  3   4  5   1  0    3  2   4  2   1  1   3  4   


T
14 4 12   14 36 25 
14 4 12  14 4 12  0 0 0 





 36 1 26     4 1 7    36 1 26   36 1 26   0 0 0 
 25 7 21  12 26 21 
 25 7 21  25 7 21 0 0 0 
T
48
1.3 Matrices and Matrix Operations
7.
(a)
6 2 4 

1 3
first row of AB  [first row of A ] B  3 2 7 0
 7
7 5
  3  6    2  0    7  7   3  2    2  1   7  7
 3  4    2  3   7  5
 67 41 41
(b)
6 2 4 

1 3
third row of AB  [third row of A ] B   0 4 9 0
 7
7 5
  0  6    4  0    9  7   0  2    4  1   9  7
 0  4    4  3   9  5 
 63 67 57 
(c)
second column of AB  A [second column of B ]
 3 2 7   2     3  2    2  1   7  7    41


 6 5 4   1     6  2    5  1   4  7     21
0 4 9   7     0  2    4  1   9  7   67 
(d)
first column of BA  B [first column of A ]
6 2 4   3   6  3    2  6    4  0    6 


 0 1 3  6     0  3   1  6    3  0     6 
 7 7 5  0    7  3    7  6    5  0   63
(e)
 3 2 7 


third row of AA  [third row of A ] A   0 4 9 6 5 4 
0 4 9 
  0  3   4  6    9  0    0  2    4  5   9  4 
  24 56 97 
(f)
third column of AA  A [third column of A ]
 3 2 7   7    3  7    2  4    7  9   76 


 6
5 4   4    6  7    5  4    4  9     98 
0 4 9   9   0  7    4  4    9  9    97 
8.
(a)
first column of AB  A [first column of B ]
 3 2 7  6    3  6    2  0    7  7   67 


 6
5 4  0    6  6    5  0    4  7    64 
0 4 9   7   0  6    4  0    9  7   63 
 0  7   4  4    9  9 
49
1.3 Matrices and Matrix Operations
(b)
third column of BB  B [third column of B ]
6 2 4   4   6  4    2  3   4  5  38 


 0
1 3  3     0  4   1  3   3  5   18 
 7 7 5  5    7  4    7  3   5  5  74 
(c)
 6 2 4 

1 3
second row of BB  [second row of B ] B   0 1 3  0
 7
7 5
  0  6   1  0    3  7   0  2   1  1   3  7
 0  4   1  3   3  5
  21 22 18
(d)
first column of AA  A [first column of A ]
 3 2 7   3    3  3    2  6    7  0    3


 6
5 4  6    6  3    5  6    4  0     48 
0 4 9  0   0  3    4  6    9  0    24 
(e)
third column of AB  A [third column of B ]
 3 2 7   4    3  4    2  3    7  5    41


 6
5 4   3    6  4    5  3    4  5    59 
0 4 9   5   0  4    4  3    9  5   57 
(f)
 3 2 7 

5 4 
first row of BA  [first row of B ] A  6 2 4   6
 0
4 9 
  6  3   2  6    4  0    6  2    2  5   4  4 
 6
9.
(a)
 6  7   2  4    4  9
6 70 
3
 2 
 7   3




   
first column of AA  3 6   6  5  0  4    48 
0 
 4 
 9   24 
3
 2 
 7  12 




   
second column of AA  2 6   5  5  4  4    29 
 0 
 4 
 9  56 
3
 2 
 7  76 




   
third column of AA  7 6   4  5  9  4    98 
0 
 4 
 9   97 
(b)
6 
 2 
 4  64 




   
first column of BB  6  0   0  1  7  3    21
 7 
 7 
 5  77 
50
1.3 Matrices and Matrix Operations
6   2 
 4  14 




   
second column of BB  2  0   1  1  7  3   22 
 7   7 
 5   28 
6 
 2 
 4   38 




   
third column of BB  4  0   3  1  5  3    18 
 7 
 7 
 5  74 
10.
(a)
3
 2 
 7  67 




   
first column of AB  6 6   0  5  7  4   64 
 0 
 4 
 9  63 
 3   2 
 7   41




   
second column of AB  2 6   1  5  7  4    21
 0   4 
 9  67 
3
 2 
 7   41




   
third column of AB  4 6   3  5  5  4   59 
0 
 4 
 9  57 
(b)
6 
 2 
4  6




   
first column of BA  3 0   6  1  0  3    6 
 7 
 7 
 5  63
6 
 2 
 4   6 




second column of BA  2 0   5  1  4  3   17 
 7 
 7 
 5   41 
6 
 2 
 4   70 




third column of BA  7  0   4  1  9  3    31
 7 
 7 
 5  122 
11.
(a)
(b)
 2 3 5   x1   7 
2 3 5
 x1 
 7






A   9 1 1 , x   x2  , b   1 ; the matrix equation:  9 1 1   x2    1
 1 5 4   x3   0 
 1 5 4 
 x3 
 0 
 4 0 3 1
1  x1   1
 x1 
 4 0 3
 1
x 
 
 5 1 0 8 
5
 3
1 0 8   x2   3
2







, x
A=
, b
; the matrix equation:
 x3 
 2 5 9 1  x3  0 
0 
 2 5 9 1
 

   
 


 0 3 1 7   x4  2 
2 
 x4 
 0 3 1 7 
51
1.3 Matrices and Matrix Operations
12.
13.
(a)
 3 
 1 2 3 
 3
 1 2 3
 x1 
 x1   
 0


2

2
1 0    0
1 0
 
, x   x2  , b =   ; the matrix equation: 
x 
A
0 3 4 
 0 3 4   2   1
 1
 x3 

  x3   


 
1
0
1
5


 5
 1 0 1
 
(b)
3
3  x1   3
3
3
 3 
 3
 3
 x1 







A   1 5 2  , x   x2  , b   3 ; the matrix equation:  1 5 2   x2    3
 0 4
 0 
 0 4
 x3 
1  x3   0 
1
(a)
5 x1
 6 x2
 7 x3
 2
 x1
 2 x2
 3 x3
 0

 3
4 x2
14.
15.
(a)
x3
(b) x  y  z  2
2 x  3y
 2
5 x  3 y  6 z  9
3 x1

x2
 2 x3

4 x1
 3 x2
 7 x3
 1
2 x1

 5 x3

x2
2
4
3w  2 x
 2y
(b) 5w
3w 
x  4y
2 w  5 x 
y
 1 1 0 k 
 k  1
2




 k 1 1  1 0 2  1    k 1 1  k  2   k 2  k  k  2  1  k 2  2k  1   k  1
 0 2 3  1 
 1 
The only value of k that satisfies the equation is k  1 .
16.
52
1 2 0  2 
 6 




2 2 k  2 0 3 2   2 2 k  3k  4   k 2  12k  20   k  10  k  2 
0 3 1   k 
 k  6 
The values of k that satisfy the equation are k  10 and k  2 .
17.
4
 3 
 0 4 8   6  9 3   6  5 5 
 2   0 1 2    1  2 3 1  0 2 4   2 3 1   2 1 3 
 
 

 
 

18.
0 
 2 
 0 0 0   6 0 4   6 0 4 
 4  1 4 1   3   3 0 2    4 16 4   9 0 6   13 16 2 
 
 

 
 

19.
1 
2 
3 
 1 2   6 8  15 18   22 28 
 4  1 2    5  3 4   6   5 6    4 8   15 20   30 36    49 64 
 
 
 

 
 
 

20.
0 
 4
2 
0 0  16 0   2 2  18 2 
 1   2 1   2   4 0    5  1 1  2 1   8 0    5 5    1 6 
 
 
 

 
 
 

 z 
 2z 
 7z 
 6z 
0
0
0
0
1.3 Matrices and Matrix Operations
21.
 x1   3r  4 s  2t   0   3r   4 s   2t   0   3  4   2 
  0   r   0   0   0   1  0   0 
x  
r
                
 2 
  0   0   2 s   0   0   0   2   0 
2 s
 x3  
 =  + + +  =  +r  +s +t  
 =
s
  0   0   s   0   0   0   1  0 
 x4  
  0   0   0   t   0   0   0   1
 x5  
t
 1       1      
  
1
 x6  
  3   0  0  0  3   0  0  0
3
22.
 x1   3r  4 s  2t   3r   4 s   2t 
 3
 4   2 
x  









   
r
 2 
  r   0  0
 1
 0  0
 x3  
  0   2 s   0 
 0
 2   0 
2 s
 


    r  s  t 
s
 x4  
  0  s   0
 0
 1  0 
 x5  
  0  0  t 
 0
 0   1
t
  
 
 
  
 
   
0
  0   0   0 
 0 
 0   0 
 x6  
23.
The given matrix equation is equivalent to the linear system
a4
3  d  2c
1  d  2 c
a  b  2
After subtracting first equation from the fourth, adding the second to the third, and back-substituting, we obtain the
solution: a  4 , b  6 , c  1 , and d  1 .
24.
The given matrix equation is equivalent to the linear system
a  b
a  b
c  3d
 c  2d




8
1
7
6
After subtracting first equation from the second, adding the third to the fourth, and back-substituting, we obtain the
solution: a  29 , b   72 , c   45 , and d  135 .
25.
(a)
If the i th row vector of A is  0  0  then it follows from Formula (9) in Section 1.3 that i th row vector
of AB   0  0  B   0  0 
(b)
0 
If the j th column vector of B is    then it follows from Formula (8) in Section 1.3 that the j th column
0 
0 
0 


vector of AB  A       
0 
0 
53
1.3 Matrices and Matrix Operations
26.
27.
(a)
0
0
0
0
0
 a11
 0 a
0
0
0
0 
22

 0
0 a33
0
0
0


0
0 a44
0
0
 0
 0
0
0
0 a55
0


0
0
0
0 a66 
 0
 a11 a12 a13 a14
 0 a
a23 a24
22

 0
0 a33 a34
(b) 
0
0 a44
 0
 0
0
0
0

0
0
0
 0
(c)
 a11
a
 21
 a31

 a41
 a51

 a61
0
0
0
0
 a11 a12
a
0
0
0 
 21 a22 a23
 0 a32 a33 a34
0
0
(d) 

0 a43 a44 a45
0
 0
 0
0
0 a54 a55 a56 


0
0
0 a65 a66 
 0
0
a22
a32
a42
a52
a62
0
0
0
0
0 
a33
0
0
0

a43 a44
0
0
a53 a54 a55
0

a63 a64 a65 a66 
0
0
0
 x   a11
  
Setting the left hand side A  y    a21
 z   a31
a12
a22
a32
54
a16 
a25 a26 
a35 a36 

a45 a46 
a55 a56 

0 a66 
a15
a13   x   a11 x  a12 y  a13 z 
 x  y





a23   y    a21 x  a22 y  a23 z  equal to  x  y  yields
 0 
a33   z   a31 x  a32 y  a33 z 
a11 x  a12 y  a13 z  x  y
a21 x  a22 y  a23 z  x  y
a31 x  a32 y  a33 z  0
Assuming the entries of A are real numbers that do not depend on x , y , and z , this requires that the coefficients
corresponding to the same variable on both sides of each equation must match. Therefore, the only matrix satisfying
 1 1 0
the given condition is A   1 1 0  .
0 0 0 
28.
 x   a11
Setting the left hand side A  y    a21
 z   a31
a12
a22
a32
a13   x   a11 x  a12 y  a13 z 
 xy 





a23   y    a21 x  a22 y  a23 z  equal to  0  yields
 0 
a33   z   a31 x  a32 y  a33 z 
a11 x  a12 y  a13 z  xy
a21 x  a22 y  a23 z  0
a31 x  a32 y  a33 z  0
Assuming the entries of A are real numbers that do not depend on x , y , and z , it follows that no real numbers a11 ,
a12 , and a13 exist for which the first equation is satisfied for all x , y , and z . Therefore no matrix A with real
number entries can satisfy the given condition.
0
y 0

(Note that if A were permitted to depend on x , y , and z , then solutions do exist e.g., A   z 0  x  .)
 0 z  y 
1.3 Matrices and Matrix Operations
(a)
 1 1
1 1
1 1 and  1 1




(b)
 5 0  5 0  5 0
 5
0
Four square roots can be found: 
, 
 , and 
.
, 
 0 3
 0 3  0 3  0 3
32.
(a)
2
3

4

5
33.
 the total cost of items purchased in January 
 the total cost of items purchased in February 
.
The given matrix product represents 
 the total cost of items purchased in March 


 the total cost of items purchased in April

34.
(a)
The 4  3 matrix M  J represents sales over the two month period.
(b)
The 4  3 matrix M  J represents the decrease in sales of each item from May to June.
(c)
1
x  1
1
(d)
y  1 1 1 1
(e)
The entry in the 11 matrix yMx represents the total number of items sold in May.
29.
3 4 5
4 5 6 
5 6 7

6 7 8
1
1
(b) 
1

1
1
8 
3 9 27 

4 16 64 
1
2
1
4
55
 1 1 1 1
 1 1 1 1

(c) 
 1 1 1 1


 1 1 1 1
True-False Exercises
(a)
True. The main diagonal is only defined for square matrices.
(b)
False. An m  n matrix has m row vectors and n column vectors.
(c)
1 0 
0 0 
0 0 
False. E.g., if A  
and B  
then AB  
 does not equal BA  B .


0 0 
1 0 
0 0 
(d)
False. The i th row vector of AB can be computed by multiplying the i th row vector of A by B .
(e)
True. Using Formula (14),
 A     A    A .
T
T
T
ij
ji
ij
(f)
1 0 
0 0 
0 0 
False. E.g., if A  
and B  
then the trace of AB  


 is 0, which does not equal tr( A)tr( B)  1 .
0 0 
0 1 
0 0 
(g)
1 0 
0 0 
0 0 
0 1 
T
False. E.g., if A  
and B  
then  AB   
does not equal AT BT  


.

0 0 
1 0 
0 0 
0 0 
(h)
True. The main diagonal entries in a square matrix A are the same as those in AT .
1.3 Matrices and Matrix Operations
(i)
True. Since AT is a 4  6 matrix, it follows from BT AT being a 2  6 matrix that BT must be a 2  4 matrix.
Consequently, B is a 4  2 matrix.
(j)
True.
56
  a11  a1n  
  ca11  ca1n  
 




  
tr  c        tr    
 a  a  
 ca  ca  
nn  
nn  
  n1
  n1
  a11  a1n  


 ca11    cann  c  a11    ann   c tr       
 a  a  
nn  
  n1
(k)
True. The equality of the matrices A  C and B  C implies that aij  cij  bij  cij for all i and j . Adding cij to
both sides yields aij  bij for all i and j . Consequently, the matrices A and B are equal.
(l)
1 0 
0 0 
0 0 
False. E.g., if A  
and B  C  
then AC  BC  
 even though A  B .


0 0 
1 0 
0 0 
(m) True. If A is a p  q matrix and B is an r  s matrix then AB being defined requires q  r and BA being defined
requires s  p . For the p  p matrix AB to be possible to add to the q  q matrix BA , we must have p  q .
(n)
0 
True. If the j th column vector of B is    then it follows from Formula (8) in Section 1.3 that
0 
0 
0 


the j th column vector of AB  A        .
 0 
 0 
(o)
1 1
1 0 
False. E.g., if A  
and B  

 then BA  A does not have a column of zeros even though B does.
1 1
1 0 
1.4 Inverses; Algebraic Properties of Matrices
1.
(a)
2
7
A   B  C    A  B  C  

 0 2 
(b)
 34 21
A  BC    AB  C  

 52 28 
1.4 Inverses; Algebraic Properties of Matrices
 12 3

 9 6
(c)
15
14
A  B  C   AB  AC  

 0 18 
(a)
 24 16 
a  BC    aB  C  B  aC   
36 
 64
(b)
5
 16
A  B  C   AB  AC  

 8 6 
(c)
 B  C  A  BA  CA  
(d)
 112 28 
a  bC    ab  C  
56 
 84
3.
(a)
 A   A  2
4.
(a)
 A  B   AT  BT  
5.
The determinant of A , det  A    2  4    3  4   20 , is nonzero. Therefore A is invertible and its inverse is
2.
(d)
 a  b  C  aC  bC  
(b)
 AB   BT AT  
(b)
 aC   aC T  
8
 18

 18 22 
T
3 1
4 

T
3 3 

1 0 
T
 4 3  15
A1  201 
 1
 4 2    5
6.
57
3
20
1
10
 1 4 

10 12 
T
16 12 

 4 8 
T

.

The determinant of B , det  B    3  2   1 5   1 , is nonzero. Therefore B is invertible and its inverse is
 2 1
B 1  
.
 5 3
7.
The determinant of C , det  C    2  3    0  0   6 , is nonzero. Therefore C is invertible and its inverse is
 3 0   12 0 
C 1  61 
.

1
0 2  0 3 
8.
The determinant of D , det  D    6  1   4  2   2 , is nonzero. Therefore D is invertible and its inverse is
 1 4    12 2 
D 1  12 

.
6   1
3
 2
9.


 12 e x  e  x
The determinant of A  
 1 e x  e x
2

  e  e 
,
  e  e 
1
2
x
x
1
2
x
x
   e  e    e  2  e    e  2  e    2  2   1 is nonzero. Therefore A is
  e  e    e  e 
.
invertible and its inverse is A  
  e  e 

e
e

 

det  A   14 e x  e  x
2
1
4
x
x
1
2
x
x
1
2
1
2
2 x
2x
1
4
x
x
x
1
2
1
2
2x
1
4
x
x
x
2 x
1
4
1.4 Inverses; Algebraic Properties of Matrices
10.
The determinant of the matrix is  cos  cos    sin    sin    1  0 . Therefore the matrix is invertible and its
cos
inverse is 
 sin 
11.
 
A1 
 4 3 1  4 3  15
1


 2  4    3 4   4 2  20 4 2    15
 4 3 1  4 3  15
1


 2  4    3 4  4 2  20 4 2   15
 
A1
13.
 15 
1 
10 
1
 4 4  1  4 4   15
1
 2 4
T



A 
; A
 2  4    4  3  3 2  20  3 2   203
 3 4 
T
A1 
12.
 sin  
.
cos 
1

 101
1
 15  101    203   15   15
3
20
1
10
 203  1  101
 5 1
1
5
100  5
T

 1
1
  35
; A
 20

3
20
1
10
 
 15 
1 
10 

;

 203 
 101

20
1
1
5
5
 203   2 3

A
1
4 4 
5
 36 12 
1
1  36 12   103
 18 12 
1


;  ABC  
ABC  



 18  36    12  64  64 18 120 64 18  158
 64 36 
 1  3 0    2 1  1  4 3   21 0   2 1  15
C 1 B1 A1   

 
  
 1
1
 6 0 2    5 3  20  4 2   0 3   5 3   5
3
20
1
10
  103
 8
   15


 
1
10
3
20
14.
 18 12 
 18 64 
 2 0  3 5   2 4   18 64 
T
;  ABC   
; C T B T AT  
ABC  






36 
 64
 0 3  1 2   3 4   12 36 
 12 36 
15.
From part (a) of Theorem 1.4.7 it follows that the inverse of  7A 
Thus 7 A 
16.

From part (a) of Theorem 1.4.7 it follows that the inverse of 5 AT
 2
1  2 1
1  2

. Consequently, A   51



1  5 3  5 3
 5
 is 5 A .
1
 5 2 
1
1  5 2    135


 1 5   2  4  4 1 13 4 1  134
1   5
Consequently, A    134
2   13
2
13
1
13
 1 0     139

    2
 0 1    13
T
1
.
3
5
From part (a) of Theorem 1.4.7 it follows that the inverse of  I  2 A 
Thus I  2 A 
18.
is 7 A .
 2 7  1  2 7  2 7 
1 2 7   27
1
A


.
Consequently,


7  1 3  17
 3 2    7 1  1 3 1  1 3  1 3
Thus 5 AT 
17.
1


 
1
2
13
1
13
is I  2 A .

.



 
1
13
6
13
 5 1



  . Therefore A  131 3 2  
From part (a) of Theorem 1.4.7 we have A  A 1
1
1
10
3
20

5
13
3
13
1
13
2
13

.

1
.
3
7
58
1.4 Inverses; Algebraic Properties of Matrices
(a)
 41 15
A3  AAA  

30 11
(b)
 A    4111 1 1530  30
(c)
 3 1  3 1
 3 1  1 0  11 4  6 2  1 0  6 2 
2
A2  2 A  I  









 2 1  2 1
 2 1  0 1   8 3  4 2  0 1   4 2 
(a)
 8 0
A3  AAA  

 28 1
(b)
 A   8 1 1 0  28  28 8  81 28 8  
(c)
2 0  2 0 
2 0  1 0   4 0   4 0  1 0  1 0 
2
A2  2 A  I  









4 1 4 1
 4 1  0 1  12 1 8 2  0 1   4 0 
21.
(a)
1 1 
A  2I  

 2 1
 20 7 
(b) 2 A2  A  I  

 14 6 
36 13
(c) A3  2 A  I  

26 10 
22.
(a)
0 0 
A  2I  

 4 1
 7 0
(b) 2 A2  A  I  

 20 2 
 5 0
(c) A3  2 A  I  

20 0 
23.
 a b  0 1  0 a 
0 1   a b   c d 
; BA  

AB  






.
 c d  0 0  0 c 
0 0   c d  0 0 
19.
20.
11 15  11 15

41  30
41

3 1
1 0
3 1


1 0




1
8
7
2
0

1
0 a   c d 
The matrices A and B commute if 

 , i.e.
0 c  0 0 
0c
ad
00
c0
a b 
0 1 
Therefore, 
and 

 commute if c  0 and a  d .
c d 
0 0 
If we assign b and d the arbitrary values s and t , respectively, the general solution is given by the formulas
a  t,
24.
b  s,
c  0,
 a b  0 0   b 0 
0 0   a b   0 0 
; CA  
AC  







.
 c d  1 0   d 0 
1 0   c d   a b 
 b 0  0 0 
The matrices A and C commute if 

 , i.e.
d 0  a b 
d t
59
1.4 Inverses; Algebraic Properties of Matrices
b0
00
da
0b
a b 
0 0 
Therefore, 
and 

 commute if b  0 and a  d .
c d 
1 0 
If we assign c and d the arbitrary values s and t , respectively, the general solution is given by the formulas
a  t,
b  0,
c  s,
25.
x1   3 5  2 4  231 ,
 5 1  2  3
x2   3 5  2  4   13
23
26.
x1   1 3  5 1   178 ,
 3 4   51
x2   1 3  5 1  83
27.
x1   6 3 1 4  222   111 ,
 3 0  1 2
x2   6 3 1 4  12
 116
22
28.
24
x1   2 4  21  10
 125 ,
 4  4   2  4 
x2   2 4  2 1  104  25
29.
4
2
p  A   A2  9 I  
,
 8 6 
 3 3  4  1
 11  1 4 
 6  2   4 0 
 2  4  1 4 
1
6 1 
0
, p2  A   A  3I  
p1  A   A  3I  
,

2 4 
 2 2 
30.
31.
d t
4
2
p1  A  p2  A   

 8 6 
p1  A  p2  A    A  3I  A  3I 
(a)
 A  A  3I    3I  A  3I 
Theorem 1.4.1(e)
 ( A2  A  3I )     3I  A   3I  3I  
Theorem 1.4.1(i)
 ( A2  3  AI )   3  IA   9 II 
Theorem 1.4.1(m)
 ( A2  3 A)   3 A  9 I 
Property AI  IA  A on p. 43
 A2  9 I  p  A 
Theorem 1.4.1(b)
1 0 
0 1 
 1 1   1 1  1 1
If A  
and B  
then  A  B  A  B   


 does not equal


0 0 
0 0 
0 0  0 0  0 0 
1 0  0 0  1 0 
A2  B 2  


.
0 0  0 0  0 0 
(b)
Using the properties in Theorem 1.4.1 we can write
 A  B  A  B   A  A  B   B  A  B   A2  AB  BA  B2
60
1.4 Inverses; Algebraic Properties of Matrices
(c)
32.
If the matrices A and B commute (i.e., AB  BA ) then  A  B  A  B   A2  B2 .
 1 0 0   1 0 0   1 0 0   1 0 0 
We can let A be one of the following eight matrices: 0 1 0  ,  0 1 0  , 0 1 0  ,  0 1 0  ,
0 0 1  0 0 1 0 0 1  0 0 1
 1 0 0   1 0 0   1 0 0   1 0 0 
 0 1 0  ,  0 1 0  ,  0 1 0  ,  0 1 0  .

 

 
 
0 0 1  0 0 1 0 0 1  0 0 1
0 1 0 
Note that these eight are not the only solutions - e.g., A can be  1 0 0  , etc.
0 0 1
33.
(a)
We can rewrite the equation
A2  2 A  I  O
A2  2 A  I
 A2  2 A  I
A   A  2I   I
which shows that A is invertible and A1   A  2 I .
(b)
Let p  x   cn x n    c2 x 2  c1 x  c0 with c0  0 . The equation p  A   O can be rewritten as
cn An    c2 A2  c1 A  c0 I  O
cn An    c2 A2  c1 A  c0 I
c
c
c
 n An    c20 A2  c10 A  I
c0


A  c0n An 1    c20 A  c10 I  I
c
c
c
c
c
c
which shows that A is invertible and A1   n An 1    2 A  1 I .
c0
c0
c0
34.
If A3  I then it follows that AA2  I therefore A must be invertible ( A1  A2 ).
35.
If the i th row vector of A is  0  0  then it follows from Formula (9) in Section 1.3 that
i th row vector of AB   0  0  B   0  0  .
Consequently no matrix B can be found to make the product AB  I thus A does not have an inverse.
0 
If the j th column vector of A is    then it follows from Formula (8) in Section 1.3 that
0 
61
1.4 Inverses; Algebraic Properties of Matrices
0  0 
the j th column vector of BA  B        .
0  0 
Consequently no matrix B can be found to make the product BA  I thus A does not have an inverse.
36.
If the i th and j th row vectors of A are equal then it follows from Formula (9) in Section 1.3 that
i th row vector of AB  j th row vector of AB .
Consequently no matrix B can be found to make the product AB  I thus A does not have an inverse.
If the i th and j th column vectors of A are equal then it follows from Formula (8) in Section 1.3 that
the i th column vector of BA  the j th column vector of BA
Consequently no matrix B can be found to make the product BA  I thus A does not have an inverse.
37.
 x11
Letting X   x21
 x31
x12
x22
x32
x13 
x23  , the matrix equation AX  I becomes
x33 
 x11  x31
x  x
21
 11
 x21  x31
x12  x32
x12  x22
x22  x32
x13  x33   1 0 0 
x13  x23   0 1 0 
x23  x33  0 0 1 
Setting the first columns on both sides equal yields the system
x11  x31  1
x11  x21  0
x21  x31  0
Subtracting the second and third equations from the first leads to 2 x21  1 . Therefore x21   12 and (after
substituting this into the remaining equations) x11  x31  12 .
The second and the third columns can be treated in a similar manner to result in
 12

X    12
 12
38.

1
2
1
2
1
2
 12 
 12

1
A1    12
2  . We conclude that A invertible and its inverse is
1
 12
2
 x11
Letting X   x21
 x31
x12
x22
x32

1
2
1
2
1
2
 12 
1
2 .
1
2
x13 
x23  , the matrix equation AX  I becomes
x33 
 x11  x21  x31

x11

 x21  x31
x12  x22  x32
x12
x22  x32
x13  x23  x33   1 0 0 
  0 1 0 
x13
 

x23  x33  0 0 1 
Although this corresponds to a system of nine equations, it is sufficient to examine just the three equations
corresponding to the first column
62
1.4 Inverses; Algebraic Properties of Matrices
63
x11  x21  x31  1
x11  0
x21  x31  0
to see that subtracting the second and third equations from the first leads to a contradiction 0  1 .
We conclude that A is not invertible.
39.
 AB   AC 1  D 1C 1  D 1
1
1

  C   D   D


 ( B 1 A 1 ) AC 1
1
1
1
1
1
Theorem 1.4.6
 ( B1 A1 ) AC 1  CD  D 1



 B1 A1 A C 1C DD 1
40.
Theorem 1.4.7(a)

Theorem 1.4.1(c)
 B1III
Formula (1) in Section 1.4
 B1
Property AI  IA  A in Section 1.4
 AC   AC  AC  AD
1
1
1
1
1
1
  A   AC  C  A  AD
1
 C 1

1
1
1
 
 C  A A  C C  A A  D
1
1
1
Theorem 1.4.6

 CA1 AC 1 CA1 AD1
1
1
1
Theorem 1.4.7(a)
1
Theorem 1.4.1(c)
 CIIID 1
Formula (1) in Section 1.4
 CD 1
Property AI  IA  A in Section 1.4
 c1 
 c1r1  c1rn 




 rn  and C     then CR       and RC  r1c1    rn cn    tr  CR   .
cn 
 cn r1  cn rn 
41.
If R  r1
42.
Yes, it is true. From part (e) of Theorem 1.4.8, it follows that ( A2 )T   AA   AT AT  AT
T
  . This statement can be
2
extended to n factors (see Section 1.4) so that
T
n


T
T
T

A
( An )T   
AA
A   
AT A


 A
n factors
 n factors 
43.
(a)
 
Assuming A is invertible, we can multiply (on the left) each side of the equation by A1 :
AB  AC
1.4 Inverses; Algebraic Properties of Matrices
A1  AB   A1  AC 
Multiply (on the left) each side by A
 A A B   A A C
Theorem 1.4.1(c)
1
(b)
44.
1
1
IB  IC
Formula (1) in Section 1.4
B C
Property AI  IA  A on Section 1.4
If A is not an invertible matrix then AB  AC does not generally imply B  C as evidenced by Example 3.
Invertibility of A implies that A is a square matrix, which is all that is required.
By repeated application of Theorem 1.4.1(m) and (l), we have
kA  kA  kA  kA  kA    kA  kA  kA  k 2 A2   kA  kA  k 3 A3    k n An
 kA   


n
n  2 factors
n factors
45.
(a)


A A1  B 1 B  A  B 

1

 AA1 B  AB 1 B  A  B 
  IB  AI  A  B 
  B  A  A  B 
1
  A  B  A  B 
1
n  3 factors
1
Theorem 1.4.1(d) and (e)
1
Formula (1) in Section 1.4
Property AI  IA  A in Section 1.4
Theorem 1.4.1(a)
I
(b)
Formula (1) in Section 1.4
We can multiply each side of the equality from part (a) on the left by A1 , then on the right by A to obtain
 A  B  B  A  B A  I
1
1
1
which shows that if A , B , and A  B are invertible then so is A1  B1 .

Furthermore, A1  B1
46.
(a)
 I  A
  B  A  B A .
1
1
2
  I  A  I  A 
 II  IA  AI  AA
Theorem 1.4.1(f) and (g)
 I  A  A  A2
Property AI  IA  A in Section 1.4
 I  A A A
A is idempotent so A  A
I A
(b)
 2 A  I  2 A  I 
2
64
1.4 Inverses; Algebraic Properties of Matrices
  2 A  2 A   2 AI  I  2 A   II
Theorem 1.4.1(f) and (g)
 4 A2  2 A  2 A  I
Theorem 1.4.1(l) and (m);
Property AI  IA  A in Section 1.4
 4A  4A  I
A is idempotent so A  A
2
I
47.
Applying Theorem 1.4.1(d) and (g), property AI  IA  A , and the assumption Ak  O we can write
 I  A  I  A  A2    Ak 2  Ak 1 
 I  A  A  A2  A2  A3    Ak 2  Ak 1  Ak 1  Ak
 I  Ak
 I O
I
48.
a b  a b 
a b 
1 0 
A2   a  d  A   ad  bc  I  
 a  d  
  ad  bc  





c d  c d 
c d 
0 1 
 a 2  bc ab  bd   a 2  da ab  bd   ad  bc
0  0 0 




2
2
ad  bc  0 0 
ca  dc cb  d   ac  dc ad  d   0
True-False Exercises
(a)
False. A and B are inverses of one another if and only if AB  BA  I .
(b)
False.  A  B    A  B  A  B   A2  AB  BA  B 2 does not generally equal A2  2 AB  B2 since AB may not
2
equal BA .
(c)
False.  A  B  A  B   A2  AB  BA  B2 does not generally equal A2  B2 since AB may not equal BA .
(d)
False.  AB   B 1 A 1 does not generally equal A1 B1 .
(e)
False.  AB   BT AT does not generally equal AT BT .
(f)
True. This follows from Theorem 1.4.5.
(g)
True. This follows from Theorem 1.4.8.
(h)
True. This follows from Theorem 1.4.9. (The inverse of AT is the transpose of A1 .)
(i)
False. p  I    a0  a1  a2    am  I .
1
T
65
1.4 Inverses; Algebraic Properties of Matrices
(j)
True.
If the i th row vector of A is  0  0  then it follows from Formula (9) in Section 1.3 that
i th row vector of AB   0  0  B   0  0  .
Consequently no matrix B can be found to make the product AB  I thus A does not have an inverse.
0 
If the j th column vector of A is    then it follows from Formula (8) in Section 1.3 that
0 
0  0 
the j th column vector of BA  B        .
0  0 
Consequently no matrix B can be found to make the product BA  I thus A does not have an inverse.
(k)
False. E.g. I and I are both invertible but I    I   O is not.
1.5 Elementary Matrices and a Method for Finding A-1
1.
2.
3.
(a)
Elementary matrix (corresponds to adding 5 times the first row to the second row)
(b)
Not an elementary matrix
(c)
Not an elementary matrix
(d)
Not an elementary matrix
(a)
Elementary matrix (corresponds to multiplying the second row by
(b)
Elementary matrix (corresponds to interchanging the first row and the third row)
(c)
Elementary matrix (corresponds to adding 9 times the third row to the second row)
(d)
Not an elementary matrix
(a)
1 3
Add 3 times the second row to the first row: 

0 1
(b)
  17 0 0 
Multiply the first row by  17 :  0 1 0 
 0 0 1
(c)
 1 0 0
Add 5 times the first row to the third row: 0 1 0 
 5 0 1
3)
66
1.5 Elementary Matrices and a Method for Finding A-1
4.
5.
6.
(d)
0
0
Interchange the first and third rows: 
1

0
0 1 0
1 0 0 
0 0 0

0 0 1
(a)
1 0 
Add 3 times the first row to the second row: 

3 1 
(b)
Multiply the third row by
1
3
(c)
0
0
Interchange the first and fourth rows: 
0

1
(d)
1

0
Add 17 times the third row to the first row: 
0

0
(a)
 3 6 6 6 
Interchange the first and second rows: EA  
5 1
 1 2
(b)
 2 1 0 4 4 
Add 3 times the second row to the third row: EA   1 3 1
5
3
 1 9 4 12 10 
(c)
13 28 
Add 4 times the third row to the first row: EA   2 5
 3 6 
(a)
6 12 30 6 
Multiply the first row by 6 : EA  

 3 6 6 6 
(b)
 2 1 0 4 4 
Add 4 times the first row to the second row: EA   7 1 1 21 19 
 2 0
1 3 1
(c)
 1 4
Multiply the second row by 5 : EA  10 25 
 3 6 
1 0 0 
:  0 1 0 
 0 0 13 
0 0 1
1 0 0 
0 1 0

0 0 0
0

1 0 0
0 1 0

0 0 1
0
1
7
67
1.5 Elementary Matrices and a Method for Finding A-1
7.
8.
9.
(a)
0 0 1 
0 1 0  ( B was obtained from A by interchanging the first row and the third row)


 1 0 0 
(b)
0 0 1 
0 1 0  ( A was obtained from B by interchanging the first row and the third row)


 1 0 0 
(c)
 1 0 0
 0 1 0  ( C was obtained from A by adding 2 times the first row to the third row)


 2 0 1
(d)
1 0 0 
0 1 0  ( A was obtained from C by adding 2 times the first row to the third row)


2 0 1 
(a)
 1 0 0
 0 3 0  ( D was obtained from B by multiplying the second row by 3 )


 0 0 1
(b)
 1 0 0
 0  1 0  ( B was obtained from D by multiplying the second row by  1 )
3
3


 0

0 1
(c)
1 0 0 
0 1 2  ( F was obtained from B by adding 2 times the third row to the second row)


0 0 1 
(d)
0
1 0
 0 1 2  ( B was obtained from F by adding 2 times the third row to the second row)


 0 0
1
(a)
(Method I: using Theorem 1.4.5)
68
The determinant of A , det  A   1 7    4  2   1 , is nonzero. Therefore A is invertible and its inverse is
 7 4   7 4 
.

A1  11 
1  2 1
 2
(Method II: using the inversion algorithm)
 1 4 1 0


2 7 0 1 
The identity matrix was adjoined to the given matrix.
 1 4 1 0


0 1 2 1
2 times the first row was added to the second row.
1.5 Elementary Matrices and a Method for Finding A-1
 1 4 1 0


0 1 2 1
 1 0 7 4 

,
0 1 2 1
69
The second row was multiplied by 1 .
4 times the second row was added to the first row.
 7 4 
The inverse is 
.
 2 1
(b)
(Method I: using Theorem 1.4.5)
The determinant of A , det  A    2  8    4  4   0 . Therefore A is not invertible.
(Method II: using the inversion algorithm)
 2 4 1 0 


8 0 1
 4
The identity matrix was adjoined to the given matrix.
2 4 1 0 


0 0 2 1 
2 times the first row was added to the second row.
A row of zeros was obtained on the left side, therefore A is not invertible.
10.
(a)
(Method I: using Theorem 1.4.5)
The determinant of A , det  A   1 16    5  3   1 , is nonzero. Therefore A is invertible and its inverse
 16 5 16 5
is A1  11 

.
 3 1  3 1
(Method II: using the inversion algorithm)
 1 5 1 0 


3 16 0 1 
The identity matrix was adjoined to the given matrix.
 1 5 1 0 


0 1 3 1
3 times the first row was added to the second row.
 1 5 1 0 


0 1 3 1
The second row was multiplied by 1 .
 1 0 16 5

,
0 1 3 1
5 times the second row was added to the first row.
16 5
The inverse is 
.
 3 1
1.5 Elementary Matrices and a Method for Finding A-1
(b)
(Method I: using Theorem 1.4.5)
The determinant of A , det  A    6  2    4  3   0 . Therefore A is not invertible.
(Method II: using the inversion algorithm)
 6 4 1 0


 3 2 0 1 
The identity matrix was adjoined to the given matrix.
 0 0 1 2


 3 2 0 1 
2 times the second row was added to the first row.
A row of zeros was obtained on the left side, therefore the matrix is not invertible.
11. (a)
1 2 3

2 5 3
1 0 8

1 0 0

0 1 0
0 0 1 
1 2 3 1 0 0


1 3 2 1 0 
0
 0 2 5 1 0 1 


1 2 3 1 0 0 


0 1 3 2 1 0 
0 0 1 5 2 1 


1 2 3 1 0 0


1 0
 0 1 3 2
0 0
1 5 2 1 

 1 2 0 14 6 3 


 0 1 0 13 5 3 
0 0 1
5 2 1 

 1 0 0 40 16 9 


 0 1 0 13 5 3
0 0 1
5 2 1

 40 16 9 
The inverse is  13 5 3 .
 5 2 1
The identity matrix was adjoined to the given matrix.
2 times the first row was added to the second row and
1 times the first row was added to the third row.
2 times the second row was added to the third row.
The third row was multiplied by 1 .
3 times the third row was added to the second row and
3 times the third row was added to the first row.
2 times the second row was added to the first row.
70
1.5 Elementary Matrices and a Method for Finding A-1
 1 3 4 1 0 0 


1 0 1 0
 2 4
 4 2 9 0 0 1 


(b)
 1 3 4 1 0 0 


1 0 1 0
 2 4
 4 2 9 0 0 1 


 1 3 4 1 0 0 


0 10 7 2 1 0 
0 10 7 4 0 1 


 1 3 4 1 0 0 


 0 10 7 2 1 0 
 0 0 0 2 1 1 


The identity matrix was adjoined to the given matrix.
The first row was multiplied by 1 .
2 times the first row was added to the second row and
4 times the first row was added to the third row.
The second row was added to the third row.
A row of zeros was obtained on the left side, therefore the matrix is not invertible.
12. (a)
 15
1
5
1
5

1
5
1
5
4
5
 25
1
10
1
10
1 0 0

0 1 0
0 0 1 
 1 1 2 5 0 0 


1
0 5 0
2
1 1
1
 1 4
0 0 5 
2

 1 1 2 5 0 0 


5
5 5 0 
2
0 0
5
 0 5
5 0 5 
2

The identity matrix was adjoined to the given matrix.
Each row was multiplied by 5 .
1 times the first row was added to the second and
1 times the first row was added to the third row.
 1 1 2 5 0 0 


5
5 0 5 
2
 0 5
5
0 0
5 5 0 
2

The second and third rows were interchanged.
 1 1 2 5 0 0 


1
1 0 1 
0 1 2
0 0
1 2 2 0 

The second row was multiplied by  15 and
1 1 0
1 4 0


 0 1 0 0 1 1 
 0 0 1 2 2 0 


the third row was multiplied by 25 .
1
2
times the third row was added to the second row and
2 times the third row was added to the first row.
71
1.5 Elementary Matrices and a Method for Finding A-1
1 0 0
1 3 1


 0 1 0 0 1 1 
 0 0 1 2 2 0 


1 times the second row was added to the first row.
 1 3 1
The inverse is  0 1 1 .
 2 2 0 
(b)
 15
2
5
1
5


1
5
3
5
4
5
 25
 103
1
10
1 0 0

0 1 0
0 0 1 
 1 1 2 5 0 0 


3
 2 3  2 0 5 0 
1
 1 4
0 0 5 
2

 1 1 2
5 0 0


5
10 5 0 
2
 0 5
5
 0 5
5 0 5 
2

 1 1 2
5 0 0


5
10 5 0 
2
 0 5
0 0 0
5 5 5 

The identity matrix was adjoined to the given matrix.
Each row was multiplied by 5 .
2 times the first row was added to the second and
1 times the first row was added to the third row.
1 times the second row was added to the third row.
A row of zeros was obtained on the left side, therefore the matrix is not invertible.
13.
1 0 1

0 1 1
1 1 0

1 0 0

0 1 0
0 0 1 
The identity matrix was adjoined to the given matrix.
1 0 1 1 0 0


0 1 1 0 1 0
 0 1 1 1 0 1 


1 times the first row was added to the third row.
1 0
1 1 0 0


0 1 1 0 1 0 
0 0 2 1 1 1 


1 times the second row was added to the third row.
1 0 1 1 0
0


0 1 1 0 1 0
 0 0 1 12 12  12 


The third row was multiplied by  12 .
72
1.5 Elementary Matrices and a Method for Finding A-1
 1 0 0 12

1
0 1 0 2
 0 0 1 12

 12
The inverse is   12
 12
14.
 12
1
2
1
2
 12

1
2
1
2
1
2






.

 
1
2
1
2
1
2
1
2
1
2

2 3 2 0 1 0 0


 4 2
2 0 0 1 0


0
0 1 0 0 1


 1 3 0

 4 1 0
 0 0 1

1
2
0
0
1
2
0
0
1
1 3 0
2

 0 13 0 2 2

0
 0 0 1
0
1 3 0

0 1 0

 0 0 1
1 0 0

0 1 0
0 0 1

1 times the third row was added to the second and
1 times the third row was added to the first row
1
2
0
1
2
0
2 2
13
2
26
0
0
2
26
2 2
13
0
The identity matrix was adjoined to the given matrix.
0

0

1
Each of the first two rows was multiplied by
0

0

1

4 times the first row was added to the second row.
0

0

1

The second row was multiplied by 131 .
0

2
0
26
0 1 

1
2
.
 3262
3 times the second row was added to the first row.
 262  3262 0 


2
The inverse is  2132
0 .
26
 0
0 1


15.
2 6 6

2 7 6
2 7 7

1 0 0

0 1 0
0 0 1 
2 6 6
1 0 0


 0 1 0 1 1 0 
 0 1 1 1 0 1 


The identity matrix was adjoined to the given matrix.
1 times the first row was added to the second and
1 times the first row was added to the third row
73
1.5 Elementary Matrices and a Method for Finding A-1
2 6 6 1 0 0 


0 1 0 1 1 0 
0 0 1 0 1 1 


1 times the second row was added to the third row.
 2 6 0 1 6 6 


 0 1 0 1 1 0 
 0 0 1 0 1 1 


6 times the third row was added to the first row
 2 0 0 7 0 6 


 0 1 0 1 1 0 
 0 0 1 0 1 1 


6 times the second row was added to the first row
 1 0 0 27 0 3 


 0 1 0 1 1 0 
 0 0 1 0 1 1 


The first row was multiplied by 12 .
0 3
 27

The inverse is  1 1 0  .
 0 1 1
1

1
1

 1
0 0 0 1 0 0 0

3 0 0 0 1 0 0
3 5 0 0 0 1 0

3 5 7 0 0 0 1 
1

0
0

 0
0 0 0 1 0 0 0

3 0 0 1 1 0 0 
3 5 0 1 0 1 0 

3 5 7 1 0 0 1 
1

0
0

 0
0 0 0 1
3 0 0 1
16.
1

0
0

 0
0 5 0
0 5 7
0
3
0
0
0
0
5
0
0 0 0

1 0 0
0 1 1 0 

0 1 0 1 
0 1 0 0
0 1 1 0
0 0 1 1
7 0 0 1
0

0
0

1 
The identity matrix was adjoined to the given matrix.
1 times the first row was added to each of the remaining
rows.
1 times the second row was added to the third row and
to the fourth row.
1 times the third row was added to the fourth row
74
1.5 Elementary Matrices and a Method for Finding A-1
1

0
0

 0
0 0
0 0 0 1 0

1
1
0 0
1 0 0 3
3
1
0
0 1 0 0  15
5

1
0  7 17 
0 0 1 0
The second row was multiplied by 13 ,
the third row was multiplied by 15 , and
the fourth row was multiplied by 17 .
0 0
 1 0
 1
1
0 0 
3
The inverse is  3
.
1
 0  15
0
5


0  17 17 
 0
17.
 2 4 0 0 1

 1 2 12 0 0
0 0 2 0 0

 0 1 4 5 0
0
1
0
0
 1 2 12 0 0

 2 4 0 0 1
0 0 2 0 0

 0 1 4 5 0
1 0 0

0 0 0
0 1 0

0 0 1 
0
0
1
0
0

0
0

1 
The identity matrix was adjoined to the given
matrix.
The first and second rows were interchanged.
 1 2 12 0 0
1

 0 8 24 0 1 2
0 0
2 0 0 0

 0 1 4 5 0 0
0 0

0 0
1 0

0 1 
2 times the first row was added to the second.
 1 2 12 0 0
1

 0 1 4 5 0 0
0 0
2 0 0 0

 0 8 24 0 1 2
0 0

0 1
1 0

0 0 
The second and fourth rows were interchanged.
 1 2 12

4
0 1
0 0
2

 0 8 24
0 0
1
5 0 0
0 0 0
0 1 2
0 0

0 1 
1 0

0 0 
The second row was multiplied by 1.
1

0
0

 0
0 0
5 0
0

1 
2 0 0 0 1 0

8 40 1 2 0 8 
2 12
1 4
0
0
1 0
0 0
8 times the second row was added to the fourth.
75
1.5 Elementary Matrices and a Method for Finding A-1
1

0
0

 0
0

1 
1 0 0 0 12
0

8 40 1 2 0 8 
2 12
1 4
0
0
0 0
5 0
1 0
0 0
1

0
0

 0
2 12 0
1 4 5
0 1 0
0 0 40
1

0
0

 0
2 12 0 0
1 4 5 0
0 1 0 0
1
0
0
1 401
 201
1

0
0

 0
2 12 0 0
1 4 0  81
0 1 0 0
1
0
1
4
0
1
40
 201

1
2
1
2
1
10
1

0
0

 0
2 0 0 0
1 0 0  81
0 1 0 0
1
6
 23
0
1
40
 201
1

0
0

 0
0 0 0 14
1 0 0  81
0 1 0 0
1
2
1
4
3
 23
0
1
40
 201
1
2
1
10
0
0
0
0 1
0 0 1
0 0 1
0
1 0 0

0 0 0 1 
1
0 0
0
2

1 2 4 8 
1
4
0
0



1
2
1
10
1
2
1
10
0

1 
0

 15 
0

0
0

 15 
0

0
0

 15 
0

0
0

 15 
The third row was multiplied by 12 .
8 times the third row was added
to the fourth row.
The fourth row was multiplied by 401 .
5 times the fourth row was added
to the second row.
4 times the third row was added
to the second row and
12 times the third row was added
to the first row.
2 times the second row was added
to the first row.
1
3
0
 14
2
 1

3
1
0
8
2
4

.
The inverse is
1
 0
0
0
2
 1

1
1
1
 40  20  10  5 
18.
0 0

1 0
 0 1

 2 1
0 1 0 0 0

1 0 1 0 0
3 0 0 0 1 0

5 3 0 0 0 1 
2
0
The identity matrix was adjoined to the given matrix.
76
1.5 Elementary Matrices and a Method for Finding A-1
1 0

0 0
 0 1

 2 1
0
2
1 0

0 0
 0 1

 0 1
0
1 0
1
2 0 1 0
3 0 0 0
5 5 0 2
1 0

 0 1
0 0

 0 1
0
3
1 0 0

0 1 0
2 0 1 0 0 0

5 5 0 2 0 1 
The second and third rows were interchanged.
1

0
0

 0
0 0
1 3
1 0 0

0 1 0 
2 0 1 0 0 0

5 5 0 2 0 1 
The second row was multiplied by 1.
1

0
0

 0
0 0
1 0
1 0
1 3 0 0 0 1
0 2 0 1 0 0
0 8 5 0 2 1
0

0
0

1 
1

0
0

 0
0 0
1 0
1 0
1 3 0 0 0 1
0 2 0 1 0 0
0 0 5 4 2 1
0

0
0

1 
1

0
0

 0
0
0 0 1 0 1 0

1 3 0 0 0 1 0 
0
0
0
1 0 12 0

0 0 1 45 25  15  15 
1

0
0

 0
0 0 0  45
3
1 0 0
2
0
1
1 0 1 0 0

0 1 0 0 0
3 0 0 0 1 0

5 3 0 0 0 1 
0 1 0
0 0 1
0
0
1
0
0

0
0

1 
The first and second rows were interchanged.
2 times the first row was added to the fourth row
and to the fourth row.
1 0
0 0
1 0
0 0
1
2
4
5


0 1 0 
0
0
0

2
1
 5  15 
5
3
5
1
5
1 times the second row was added
to the fourth row.
4 times the third row was added
to the fourth row.
The third row was multiplied by 12 and
the fourth row was multiplied by  15 .
1
5
1 times the fourth row was added to the first row
and
3 times the third row was added to the second.
77
1.5 Elementary Matrices and a Method for Finding A-1
  45
 3
The inverse is  21
 2
 4
 5
19.
(a)
 k1

0
0

 0
1

0

0
0



0 1 0 
.
0
0
0

2
 15  15 
5
3
5
1
5
1
5
0
1
0
0
0
0
1
0
0

0
0

1 
The identity matrix was adjoined to the given matrix.
1
k1
0
0
1 0 0 0
0 1 0 0
0 0 1 0
1
k2
0
The first row was multiplied by 1 / k1 ,
0
1
k3
0
0
0

0

0
1 
k4 
0
0
k2
0
0
0
k3
0
0 1
0 0
0 0
k4 0
0 0 0
the second row was multiplied by 1 / k2 ,
the third row was multiplied by 1 / k3 , and
the fourth row was multiplied by 1 / k4 .
 k11

0
The inverse is 
0
0

(b)
k

0
0

 0
1
1
0
0
0
0
k
0
0
0
1
k2
0
0
1
k3
0
0
0
0
1
1
0

0
.
0
1 
k4 
1
0
0
0
 1 1k 0 0 1k

0 1 0 0 0
 0 0 1 1k 0

 0 0 0 1 0
1

0
0

 0
0

0
0

1 
The identity matrix was adjoined to the given matrix.
0 0 0

1 0 0
0 1k 0 

0 0 1 
First row and third row were both multiplied by 1 / k .
0
1
0
0
0
0
1
0
0 0 0 1k  1k 0
0

1 0 0 0
1 0
0
0 1 0 0
0 1k  1k 

0 0 1 0
0 0
1 
0
 1k  1k 0


0
1 0
0
The inverse is 
.
0
0 1k  1k 


0 0
1
0
 1k times the fourth row was added
to the third row and
 1k times the second row was added
to the first row.
78
1.5 Elementary Matrices and a Method for Finding A-1
20.
(a)
0

0
0

 k4
 k4

0
0

 0
1

0

0
0

k1 1
0 0
0 0
0 0
0
1
0
0
0
0
1
0
0

0
0

1 
The identity matrix was adjoined to the given matrix.
0 0
0 0
0 0
k1 1
0
0
1
0
0
1
0
0
1

0
0

0 
The first and fourth rows were interchanged;
the second and third rows were interchanged.
0 0 0 0
1 0 0 0
0
0
1
k4
0
1
k3
0 1 0 0
0 0 1 1
1
k2
0
0
0


0

0
0 
0
0
1
0
0

0
0

1 
0
0
0
k3
0
k2
0
0
0
0
k3
0
0
0
k2
0
k1
0

0
The inverse is 
0
1
 k1
k

1
0

 0
(b)
0
k
1
0
0
0
0
1
k3
1
k2
0
0
0
0
0
k
1
0
0
0
k
The first row was multiplied by 1 / k4 ,
the second row was multiplied by 1 / k3 ,
the third row was multiplied by 1 / k2 , and
the fourth row was multiplied by 1 / k1 .


0
.
0
0 
1
k4
1
0
0
0
0
1
0
0
The identity matrix was adjoined to the given matrix.
 1 0 0 0 1k 0 0 0 
1

1
 k 1 0 0 0 k 0 0
 0 1k 1 0 0 0 1k 0 


1
1
 0 0 k 1 0 0 0 k 
Each row was multiplied by 1 / k .
1
0 0 0
1 0 0 0
k


1
1
 0 1 0 0  k2 k 0 0 
0 1 1 0
0 0 1k 0 
k


1
0 0 0 1k 
 0 0 k 1
 1k times the first row was added
1

0
0

 0
0 0 0

1
0 0
1 0 0 
k
0 1 0 k13  k12 1k 0 

0 1k 1
0
0 0 1k 
0 0 0
1
k
1
k2
to the second row.
 1k times the second row was added
to the third row.
79
1.5 Elementary Matrices and a Method for Finding A-1
1

0
0

0

1
k
1
k2
0 0 0
1 0 0 
1
0 1 0
k3
 k12
0 0 1  14
k
1
k3
 1k
 1
 k2
The inverse is  1
 k3
  14
 k
21.
0
1
k
0
1
k
 k12
1
k3
0 0

0 0
1
0
k

 k12 1k 
80
 1k times the third row was added
to the fourth row.
0 0

0 0
.
1
0
k

 k12 1k 
It follows from parts (a) and (c) of Theorem 1.5.3 that a square matrix is invertible if and only if its reduced row
echelon form is identity.
c c c 
1 c c 


 1 1 c 
1 1 c 
1 c c 


 c c c 
1
c
1
 0 1  c
0

 0
c  c2
0




The first and third rows were interchanged.
1 times the first row was added to the second row and
c times the first row was added to the third row.
If c  c 2  c 1  c   0 or 1  c  0 , i.e. if c  0 or c  1 the last matrix contains at least one row of zeros, therefore
it cannot be reduced to I by elementary row operations.
Otherwise (if c  0 and c  1 ), multiplying the second row by 11 c and multiplying the third row by c 1c2 would
result in a row echelon form with 1’s on the main diagonal. Subsequent elementary row operations would then lead
to the identity matrix.
We conclude that for any value of c other than 0 and 1 the matrix is invertible.
22.
It follows from parts (a) and (c) of Theorem 1.5.3 that a square matrix is invertible if and only if its reduced row
echelon form is identity.
c 1 0 
1 c 1 


 0 1 c 
1.5 Elementary Matrices and a Method for Finding A-1
1 c 1 
c 1 0 


 0 1 c 
The first and second rows were interchanged.
1 c 1 
0 1 c 


 c 1 0 
The second and third rows were interchanged.
c
1 
1
0
1 c 

 0 1  c 2 c 
c times the first row was added to the third row.
1 
1 c
0 1
c 

 0 0 c3  2c 
c 2  1 times the second row was added to the third.
81
If c 3  2c  c(c 2  2)  0 , i.e. if c  0 , c  2 or c   2 the last matrix contains a row of zeros, therefore it cannot
be reduced to I by elementary row operations.
Otherwise (if c3  2c  0 ), multiplying the last row by c3 1 2 c would result in a row echelon form with 1’s on the main
diagonal. Subsequent elementary row operations would then lead to the identity matrix.
We conclude that for any value of c other than 0 ,
23.
2 and  2 the matrix is invertible.
We perform a sequence of elementary row operations to reduce the given matrix to the identity matrix. As we do so,
we keep track of each corresponding elementary matrix:
 3 1 
A

 2 2 
1 5 
2 2 


2 times the second row was added to the first.
1 2 
E1  

0 1 
1 5 
 0 8 


2 times the first row was added to the second.
 1 0
E2  

 2 1
1 5 
0 1 


The second row was multiplied by  81 .
 1 0
E3  
1
0  8 
1 0 
0 1 


5 times the second row was added to the first.
 1 5 
E4  
1
0
Since E4 E3 E2 E1 A  I , then
1.5 Elementary Matrices and a Method for Finding A-1
82
 1 2   1 0   1 0   1 5 
1
and
A   E4 E3 E2 E1  I  E11 E21 E31 E41  
1  2 1   0 8   0 1
0
 1 5  1 0   1 0   1 2 
.
A1  E4 E3 E2 E1  


1 0  81   2 1  0 1 
0
Note that this answer is not unique since a different sequence of elementary row operations (and the corresponding
elementary matrices) could be used instead.
24.
We perform a sequence of elementary row operations to reduce the given matrix to the identity matrix. As we do so,
we keep track of each corresponding elementary matrix:
 1 0 
A

 5 2 
1 0 
0 2 


5 times the first row was added to the second row.
1 0 
E1  

5 1 
1 0 
0 1 


The second row was multiplied by 12 .
1 0 
E2  
1
0 2 
1 0   1 0 
 1 0  1 0 
1
and A1  E2 E1  
Since E2 E1 A  I , A   E2 E1  I  E11 E21  
.



1
 5 1  0 2 
 0 2   5 1
Note that this answer is not unique since a different sequence of elementary row operations (and the corresponding
elementary matrices) could be used instead.
25.
We perform a sequence of elementary row operations to reduce the given matrix to the identity matrix. As we do so,
we keep track of each corresponding elementary matrix:
 1 0 2 
3 
A   0 4
 0 0
1 
 1 0 2 
0 1
3 
4 

 0 0
1 
The second row was multiplied by 14 .
1 0 0 
E1   0 14 0 
 0 0 1 
 1 0 2 
0 1 0


 0 0
1 

0
1 0

E2   0 1  34 
 0 0
1
1 0 0
0 1 0


 0 0 1 
3
4
times the third row was added to the second.
2 times the third row was added to the first row.
 1 0 2
E3  0 1 0 
0 0 1
1.5 Elementary Matrices and a Method for Finding A-1
Since E3 E2 E1 A  I , we have A   E3 E2 E1 
1
83
 1 0 0   1 0 0   1 0 2 
I  E E E  0 4 0   0 1 34  0 1 0 
0 0 1   0 0 1  0 0
1
1
1
1
2
1
3
0  1 0 0 
 1 0 2  1 0



and A  E3 E2 E1   0 1 0   0 1  43  0 41 0  .
 0 0 1  0 0
1 0 0 1 
Note that this answer is not unique since a different sequence of elementary row operations (and the corresponding
elementary matrices) could be used instead.
1
26.
We perform a sequence of elementary row operations to reduce the given matrix to the identity matrix. As we do so,
we keep track of each corresponding elementary matrix:
1 1 0
A   1 1 1 
 0 1 1 
1 1 0
0 0 1


 0 1 1 
1 1 0
0 1 1


 0 0 1 
1 1 0
0 1 0


 0 0 1 
1 0 0
0 1 0


 0 0 1 
1 times the first row was added to the second row.
 1 0 0
E1   1 1 0 
 0 0 1
The second and third rows were interchanged
1 0 0 
E2  0 0 1 
0 1 0 
1 times the third row was added to the second.
 1 0 0
E3  0 1 1
0 0
1
1 times the second row was added to the first row.
 1 1 0 
E4  0
1 0 
0 0 1
Since E4 E3 E2 E1 A  I , we have
A   E4 E3 E2 E1 
1
 1 0 0  1 0 0   1 0 0   1 1 0 
I  E E E E   1 1 0   0 0 1  0 1 1 0 1 0  and
 0 0 1  0 1 0  0 0 1 0 0 1
1
1
1
2
1
3
1
4
 1 1 0   1 0 0   1 0 0   1 0 0 
A  E4 E3 E2 E1  0
1 0  0 1 1  0 0 1   1 1 0  .
0 0 1 0 0
1  0 1 0   0 0 1
Note that this answer is not unique since a different sequence of elementary row operations (and the corresponding
elementary matrices) could be used instead.
1
27.
Let us perform a sequence of elementary row operations to produce B from A . As we do so, we keep track of each
1.5 Elementary Matrices and a Method for Finding A-1
84
corresponding elementary matrix:
1 2 3
A   1 4 1 
 2 1 9 
3
1 2
 0 2 2 


 2 1 9 
5
1 0
 0 2 2 


 2 1 9 
5
1 0

B   0 2 2 
 1 1 4 
1 times the first row was added to the second row.
 1 0 0
E1   1 1 0 
 0 0 1
1 times the second row was added to the first row.
 1 1 0 
E2   0 1 0 
 0 0 1 
1 times the first row was added to the third row.
 1 0 0
E3   0 1 0 
 1 0 1
Since E3 E2 E1 A  B , the equality CA  B is satisfied by the matrix
 1 0 0   1 1 0   1 0 0   2 1 0 
C  E3 E2 E1   0 1 0  0
1 0   1 1 0    1 1 0  .
 1 0 1 0 0 1  0 0 1  2
1 1
Note that this answer is not unique since a different sequence of elementary row operations (and the corresponding
elementary matrices) could be used instead.
28.
Let us perform a sequence of elementary row operations to produce B from A . As we do so, we keep track of each
corresponding elementary matrix:
 2 1 0
A   1 1 0 
 3 0 1 
1 0
 2
 5  1 0 


 3 0 1 
1 0
 2
 5 1 0 


 1 2 1 
9 4
 6

B   5 1 0 
 1 2 1 
2 times the first row was added to the second.
 1 0 0
E1   2 1 0 
 0 0 1
2 times the first row was added to the third row.
 1 0 0
E2   0 1 0 
 2 0 1
4 times the third row was added to the first row.
 1 0 4 
E3  0 1 0 
0 0
1
1.5 Elementary Matrices and a Method for Finding A-1
85
Since E3 E2 E1 A  B , the equality CA  B is satisfied by the matrix
 1 0 4   1 0 0   1 0 0   9 0 4 
C  E3 E2 E1   0 1 0   0 1 0   2 1 0    2 1 0  .
 0 0
1  2 0 1  0 0 1  2 0
1
Note that a different sequence of elementary row operations (and the corresponding elementary matrices) could be
used instead. (However, since both A and B in this exercise are invertible, C is uniquely determined by the
formula C  BA1 .)
 1 0 0
A   0 1 0  cannot result from interchanging two rows of I 3 (since that would create a nonzero entry above the
 a b c 
main diagonal).
29.
A can result from multiplying the third row of I 3 by a nonzero number c
(in this case, a  b  0, c  0 ).
The other possibilities are that A can be obtained by adding a times the first row to the third (b  0, c  1) or by
adding b times the second row to the third  a  0, c  1 .
In all three cases, at least one entry in the third row must be zero.
30.
Consider three cases:

If a  0 then A has a row of zeros (first row).

If a  0 and h  0 then A has a row of zeros (fifth row).

If a  0 and h  0 then adding  da times the first row to the third, and adding  he times the fifth row to the third
results in the third row becoming a row of zeros.
In all three cases, the reduced row echelon form of A is not I 5 . By Theorem 1.5.3, A is not invertible.
True-False Exercises
(a)
False. An elementary matrix results from performing a single elementary row operation on an identity matrix; a
product of two elementary matrices would correspond to a sequence of two such operations instead, which generally
is not equivalent to a single elementary operation.
(b)
True. This follows from Theorem 1.5.2.
(c)
True. If A and B are row equivalent then there exist elementary matrices E1 ,, E p such that B  E p  E1 A .
Likewise, if B and C are row equivalent then there exist elementary matrices E1* ,, Eq* such that C  Eq*  E1* B .
Combining the two equalities yields C  Eq*  E1* E p  E1 A therefore A and C are row equivalent.
(d)
True. A homogeneous system Ax  0 has either one solution (the trivial solution) or infinitely many solutions. If A
is not invertible, then by Theorem 1.5.3 the system cannot have just one solution. Consequently, it must have
infinitely many solutions.
1.5 Elementary Matrices and a Method for Finding A-1
(e)
86
True. If the matrix A is not invertible then by Theorem 1.5.3 its reduced row echelon form is not I n . However, the
matrix resulting from interchanging two rows of A (an elementary row operation) must have the same reduced row
echelon form as A does, so by Theorem 1.5.3 that matrix is not invertible either.
(f)
True. Adding a multiple of the first row of a matrix to its second row is an elementary row operation. Denoting by E
be the corresponding elementary matrix we can write  EA   A1 E 1 so the resulting matrix EA is invertible if A
1
is.
(g)
 1 0  1 / 2 0   2 0  1 / 3 0   3 0 
False. For instance, 




.
0 1   0 1  0 1   0 1  0 1 
1.6 More on Linear Systems and Invertible Matrices
1.
x 
1 1 
2 
The given system can be written in matrix form as Ax  b , where A  
, x   1  , and b    .

5 6 
9
 x2 
We begin by inverting the coefficient matrix A
1 1 1 0 


5 6 0 1 
The identity matrix was adjoined to the coefficient matrix.
1 1 1 0 


0 1 5 1
5 times the first row was added to the second row.
1 0 6 1


0 1 5 1
1 times the second row was added to the first row.
 6 1
Since A 1  
, Theorem 1.6.2 states that the system has exactly one solution x  A1b :

1
 5
 x1   6 1  2   3
x   
      , i.e., x1  3, x2  1 .
 2   5 1  9   1
2.
x 
 4 3 
 3 
The given system can be written in matrix form as Ax  b , where A  
, x   1  , and b    .

 2 5 
 9
 x2 
We begin by inverting the coefficient matrix A
 4 3 1 0 


 2 5 0 1 
The identity matrix was adjoined to the coefficient matrix.
 2 5 0 1 


 4 3 1 0 
The first and second rows were interchanged.
1.6 More on Linear Systems and Invertible Matrices
2 5 0
1


0 7 1 2 
2 times the first row was added to the second row.
1
 1  25 0

2


1 17  27 
0
The first row was multiplied by 12 and
 1 0 145

1
0 1 7
5
Since A1   141
7
 x1   145
x    1
 2  7
3.
 143 

 27 
the second row was multiplied by 17 .
5
2
times the second row was added to the first row.
 143 
1
 , Theorem 1.6.2 states that the system has exactly one solution x  A b :
 27 
 143   3  3

, i.e., x1  x2  3 .

 27   9   3
1 3 1
 x1 


The given system can be written in matrix form as Ax  b , where A  2 2 1 , x   x2  , and
 x3 
2 3 1
 4
b   1 . We begin by inverting the coefficient matrix A
 3
1 3 1 1 0 0


2 2 1 0 1 0
2 3 1 0 0 1


1 3 1 1 0 0


 0 4 1 2 1 0 
 0 3 1 2 0 1 


The identity matrix was adjoined to the coefficient matrix.
2 times the first row was added to the second and
2 times the first row was added to the third row.
1 3 1 1 0 0 


0 4 1 2 1 0 
0
1 0 0 1 1 

1 times the second row was added to the third row.
1 3 1 1 0 0 


1 0 0 1 1 
0
0 4 1 2 1 0 


The second and third rows were interchanged.
1 3 1 1 0 0 


0 1 0 0 1 1 
0 0 1 2 3 4 


4 times the second row was added to the third row.
87
1.6 More on Linear Systems and Invertible Matrices
1 3 1 1 0 0 


0 1 0 0 1 1 
0 0 1 2 3 4 


The third row was multiplied by 1 .
1 3 0 1 3 4 


0 1 0 0 1 1 
0 0 1 2 3 4 


1 times the third row was added to the first row.
1 0 0 1 0
1


0 1 0 0 1 1 
0 0 1 2 3 4 


3 times the second row was added to the first row.
1
 1 0

Since A   0 1 1 , Theorem 1.6.2 states that the system has exactly one solution x  A1b :
 2 3 4 
1
1  4   1
 x1   1 0
 x    0 1
1  1   4  , i.e., x1  1, x2  4, and x3  7 .
 2 
 x3   2 3 4   3  7 
4.
5 3 2
 x1 


The given system can be written in matrix form as Ax  b , where A   3 3 2  , x   x2  , and
 x3 
 0 1 1 
4 
b   2  . We begin by inverting the coefficient matrix A
 5 
5 3 2 1 0 0


3 3 2 0 1 0
0 1 1 0 0 1


The identity matrix was adjoined to the coefficient matrix.
 2 0 0 1 1 0 


3 3 2 0 1 0
 0 1 1 0 0 1


1 times the second row was added to the first row.
 1 0 0 12  12 0 


1 0
3 3 2 0
0 1 1 0
0 1 

The first row was multiplied by 12 .
1 0 0 12  12 0 


3
5
0
2
0 3 2  2
0 1 1 0
0 1 

3 times the first row was added to the second row.
88
1.6 More on Linear Systems and Invertible Matrices
1
1 0 0
 12 0 
2


0 1
0 1 1 0
5
0 3 2  32
0 
2

1
1 0 0
0
 12
2


0
1
0 1 1 0
5
0 0 1  32
3 
2

1 0 0 12  12 0 


0 1
0 1 1 0
0 0 1 23  25 3 


1 0 0 12

3
0 1 0  2
0 0 1 32

 12
 12

Since A1    23
 23
 12


5
2
5
2
5
2
5
2
 x1   12
  
x  A1b :  x2     32
 x3   32
5.
0

2 
3 
The second and third rows were interchanged.
3 times the second row was added to the third row.
The third row was multiplied by 1 .
1 times the third row was added to the second row.
0

2  , Theorem 1.6.2 states that the system has exactly one solution
3
 12

5
2
5
2
0   4   1

2   2    11 , i.e., x1  1, x2  11, and x3  16 .
3  5   16 
1
 1 1
x


The given system can be written in matrix form as Ax  b , where A   1 1 4  , x   y  , and
 z 
 4 1
1
 5
b  10  . We begin by inverting the coefficient matrix A
 0 
 1 1 11 0 0


 1 1 4 0 1 0 
 4 1 1 0 0 1 


1 1 1 1 0 0


 0 0 5 1 1 0 
 0 5 5 4 0 1


1 1 1 1 0 0 


0 5 5 4 0 1 
0 0 5 1 1 0 


The identity matrix was adjoined to the coefficient matrix.
1 times the first row was added to the second row and
4 times the first row was added to the third row.
The second and third rows were interchanged.
89
1.6 More on Linear Systems and Invertible Matrices
1 1 1 1 0 0 


4
0 15 
0 1 1 5
0 0 1 15  15 0 


 1 1 0 45

3
0 1 0 5
0 0 1 15

1 0 0

0 1 0
0 0 1

 15

Since A1   35
 15
 x   15
 y   3
  5
 z   15
6.
1
5
3
5
1
5

1
5
1
5
1
5
90
The second row was multiplied by 15 and
the third row was multiplied by  15 .
0
1 
5 
0 
1 times the third row was added to the second row
and to the first row.
0  15 
1
1 
5
5 
1
0 
5
1 times the second row was added to the first row.
0  15 
1
1
1
5
5  , Theorem 1.6.2 states that the system has exactly one solution x  A b :
 15
0 
0  15   5  1
1
1
10    5 , i.e., x  1, y  5, and z  1 .
5
5 
 15
0   0   1
 0 1 2 3 
w 
0 


 1

7
x
1 4
4


The given system can be written in matrix form as Ax  b , where A 
, x
, and b    .
y
 1 3
4
7
9
 


 
 1 2 4  6 
6 
z
We begin by inverting the coefficient matrix A
 0 1 2 3 1

 1 1 4 40
 1 3 7 90

 1 2 4 6 0
0
1
0
0
 1 1 4 40

 0 1 2 3 1
 1 3 7 90

 1 2 4 6 0
1 0 0

0 0 0
0 1 0

0 0 1 
1 1 4 40 1

 0 1 2 3 1 0
0 2
3 5 0 1

 0 1 0 2 0 1
0
0
1
0
0

0
0

1 
0 0

0 0
1 0

0 1 
The identity matrix was adjoined to the coefficient
matrix.
The first and second rows were interchanged.
1 times the first row was added to the third row and
the first row was added to the fourth row.
1.6 More on Linear Systems and Invertible Matrices
1 1

0 1
0 2

 0 1
1

0
0

 0
1 0 0

0 0 0
3 5 0 1 1 0 

0 2 0 1 0 1 
4
2
4 0
3 1
1 4 4 0 1
1 2 3 1 0
0 1 1 2 1
0 2 1 1 1
1

0
0

 0
0
0
1
0
0

0
0

1 
0 0

0 0
0 1 1 2 1 1 0 

0 2 1 1 1 0 1 
1 4 4 0 1
1 2 3 1 0
The second row was multiplied by 1 .
2 times the second row was added to the third row
and the second row was added to the fourth.
The third row was multiplied by 1 .
1

0
0

 0
1 4
1 2
4 0
3 1
1
0
0 0

0 0
0 1 1 2 1 1 0 

0 0 1 3 1 2 1 
1

0
0

 0
1
1
0
0
4
2
1
0
4 0
3 1
1 2
1 3
1 0 0

0 0 0
1 1 0 

1 2 1 
1

0
0

 0
1
1
0
0
4
2
1
0
0 12 3 8 4 

0 8 3 6 3 
0 1 0
1 1

1 3 1 2 1 
1 times the last row was added to the third row,
3 times the last row was added to the second row
and 4 times the last row was added to the first.
1

0
0

 0
1 0 0 8 3
1 0 0 6 3
0

1
0
1 1

1 2 1 
2 times the third row was added to the second row
and
4 times the third row was added to the first row.
1

0
0

 0
0 1 0 1
0 0 1 3
0 1 

4 1
0
1 1

1 2 1 
0 0 0 2 0
1 0 0 6 3
0 1 0 1
0 0 1 3
4
4
2 times the third row was added to the fourth.
The fourth row was multiplied by 1 .
1 times the second row was added to the first.
91
1.6 More on Linear Systems and Invertible Matrices
0 1
 2 0
 6 3 4
1
Since A1  
, Theorem 1.6.2 states that the system has exactly one solution
 1 0
1 1


1 2 1
 3
x  A1b :
0 1  0   6 
w   2 0
 x   6 3 4
1  7   1
 
,

y  1 0
1 1  4   10 
  
   
1 2 1  6   7 
 z   3
i.e., w  6 , x  1 , y  10 , and z  7 .
7.
x 
3 5 
The given system can be written in matrix form as Ax  b , where A  
, x   1  , and

1 2 
 x2 
b 
b   1  . We begin by inverting the coefficient matrix A
 b2 
3 5 1 0


1 2 0 1
The identity matrix was adjoined to the coefficient matrix.
1 2 0 1


3 5 1 0
The first and second rows were interchanged.
1 20
1


 0 1 1 3 
3 times the first row was added to the second row.
 1 2 0 1


 0 1 1 3 
The second row was multiplied by 1 .
 1 0 2 5 


 0 1 1 3 
2 times the second row was added to the first row.
 2 5 
1
Since A 1  
 , Theorem 1.6.2 states that the system has exactly one solution x  A b :
1
3



 x1   2 5  b1   2b1  5b2 
 
 , i.e.,
   
 x2   1 3  b2   b1  3b2 
8.
x1  2b1  5b2 , x2  b1  3b2 .
1 2 3
 x1 
 b1 




The given system can be written in matrix form as Ax  b , where A   2 5 5  , x   x2  , and b   b2  . We
 x3 
 3 5 8 
 b3 
begin by inverting the coefficient matrix A
92
1.6 More on Linear Systems and Invertible Matrices
1 2 3 1 0 0


2 5 5 0 1 0
3 5 8 0 0 1


1 2 3 1 0 0


 0 1 1 2 1 0 
 0 1 1 3 0 1 


1 2
3 1 0 0


 0 1 1 2 1 0 
 0 0 2 5 1 1 


1 2 3 1 0
0


1 0
 0 1 1 2
 0 0 1 25  12  12 


 1 2 0  132

1
0 1 0 2
 0 0 1 25

The identity matrix was adjoined to the coefficient matrix.
2 times the first row was added to the second row and
3 times the first row was added to the third row.
The second row was added to the third row.
The third row was multiplied by  12 .

3
2
1
2
1
2


 
 

1
2
1
2
1
2


 
 
  152

Since A1   12
 25

1
2
1
2
1
2


  , Theorem 1.6.2 states that the system has exactly one solution x  A1b :
 
 x1    152
x    1
 2  2
 x3   25
  b1    152 b1  21 b2  25 b3 



  b2    12 b1  12 b2  12 b3  , i.e.,
   b3   25 b1  12 b2  21 b3 
 1 0 0  152

1
0 1 0 2
 0 0 1 25


1
2
1
2
1
2
3
2
1
2
1
2
5
2
1
2
1
2
The third row was added to the second row and
3 times the third row was added to the first row.
2 times the second row was added to the first row.
5
2
1
2
1
2
5
2
1
2
1
2
x1   152 b1  21 b2  25 b3 , x2  12 b1  12 b2  12 b3 , and x3  25 b1  12 b2  12 b3 .
9.
 1 5 1 2 


3 2 4 5
We augmented the coefficient matrix with two columns of
constants on the right hand sides of the systems
(i) and (ii) – refer to Example 2.
 1 5 1 2 


 0 17 1 11 
3 times the first row was added to the second row.
 1 5 1 2 

11 
1 171 17
 0

The second row was multiplied by 171 .
93
1.6 More on Linear Systems and Invertible Matrices
1 0

 0 1
22
17
1
17
21
17
11
17



5 times the second row was added to the first row.
We conclude that the solutions of the two systems are:
(i)
10.
22
x1  17
, x2  171
(ii)
 1 4
1 0 3 


 1 9 2 1 4 
 6 4 8 0 5 


 1 4 1 0 3 


 1 9 2 1 4 
 6 4 8 0 5 


 1 4 1 0
3


1
 0 13 1 1
 0 28 2 0 23 


 1 4 1 0
3

1 
1  131 131
13 
0
 0 28 2 0 23 


21
11
, x2  17
x1  17
We augmented the coefficient matrix with two columns of
constants on the right hand sides of the systems
(i) and (ii) – refer to Example 2.
The first row was multiplied by 1 .
1 times the first row was added to the second row and
6 times the first row was added to the third row.
The second row was multiplied by 131 .
 1 4 1
0
3

1
1
1 
1  13
13
13 
0
28
327 
2
0
0


13
13
13 

28 times the second row was added to the third row.
 1 4 1 0
3

1
1 
1  131
13
13 
0
0 0

1 14  327
2 

The third row was multiplied by 132 .
 1 4 0 14  321

2

25 
1 0 1  2 
0
 0 0 1 14  327

2 

 1 0 0 18  421

2

25 
 0 1 0 1  2 
 0 0 1 14  327

2 

1
13
times the third row was added to the second row
and the third row was added to the first row.
4 times the second row was added to the first row.
94
1.6 More on Linear Systems and Invertible Matrices
We conclude that the solutions of the two systems are:
(i)
x1  18, x2  1 , x3  14
 4 7 0 4 1 5 


 1 2 1 6 3 1
11.
The first and second rows were interchanged.
1
2 1
6
3 1


 0 15 4 28 13 9 
1 0

 0 1
4 times the first row was added to the second row.
1 6 3 1
28 13
3 

15 15
5 
The second row was multiplied by  151 .
4
15
7
15
4
15
34
15
28
15
19
15
13
15
421
25
327
, x2   , x3  
.
2
2
2
We augmented the coefficient matrix with four columns
of constants on the right hand sides of the systems (i),
(ii), (iii), and (iv) – refer to Example 2.
 1 2 1 6 3 1


 4 7 0 4 1 5 
1 2

 0 1
x1  
(ii)
 15 
3 
5 

2 times the second row was added to the first row.
We conclude that the solutions of the four systems are:
12.
(i)
x1  157 , x2  154
(ii)
34
28
, x2  15
x1  15
(iii)
x1  19
, x2  13
15
15
(iv)
x1   15 , x2  35
 1 3 5 1 0 1


 1 2 0 0 1 1
 2
5 4 1 1 0 

 1 3 5 1 0 1


 0 1 5 1 1 2 
 0 1 6 3 1 2 


We augmented the coefficient matrix with three columns
of constants on the right hand sides of the systems
(i), (ii) and (iii) – refer to Example 2.
The first row was added to the second row and
2 times the first row was added to the third row.
 1 3 5 1 0 1


 0 1 5 1 1 2 
 0 0 1 2 2 0 


The second row was added to the third row.
 1 3 5 1 0 1


 0 1 5 1 1 2 
 0 0 1 2 2 0 


The third row was multiplied by 1 .
95
1.6 More on Linear Systems and Invertible Matrices
 1 3 0 9 10 1


 0 1 0 9 11 2 
 0 0 1 2 2 0 


 1 0 0 18 23 5


 0 1 0 9 11 2 
 0 0 1 2 2 0 


5 times the third row was added to the first row
and to the second row.
3 times the second row was added to the first row.
We conclude that the solutions of the three systems are:
x1  18, x2  9 , x3  2
(i)
13.
(ii)
x1  23 , x2  11 , x3  2
(iii)
x1  5, x2  2 , x3  0
 1 3 b1 


 2 1 b2 
b1 
1 3


0 7 2b1  b2 
1 3

0 1


b1  b2 
b1
2
7
1
7
The augmented matrix for the system.
2 times the first row was added to the second row.
The second row was multiplied by 17 .
The system is consistent for all values of b1 and b2 .
14.
 6 4

b1 

 3 2 b 
2


The augmented matrix for the system.
 1  23

3 2
The first row was multiplied by 61 .
1
6
b1 

b2 
1
 1  23

b
6 1


1
0  2 b1  b2 
0
3 times the first row was added to the second row.
The system is consistent if and only if  12 b1  b2  0 , i.e. b1  2b2 .
15.

 1 2

 4 5

 3
3



5 b1 

8 b2 

 3 b3 

The augmented matrix for the system.
96
1.6 More on Linear Systems and Invertible Matrices


b1 
5
 1 2


0
3 12 4b1  b2 


 0 3 12 3b  b 
1
3 





b1
5
 1 2



0
3 12 4b1  b2 


0
0
0  b1  b2  b3 





b1
5
 1 2



0
1 4  43 b1  13 b2 


0
0
0
 b1  b2  b3 



4 times the first row was added to the second row
and 3 times the first row was added to the third row.
The second row was added to the third row.
The second row was multiplied by 13 .
The system is consistent if and only if  b1  b2  b3  0 , i.e. b1  b2  b3 .
16.


 1 2 1 b1 


 4
5 2 b2 


 4
7 4 b3 





b1 
 1 2 1


 0 3 2 4b1  b2 


 0 1 0 4b  b 
1
3 




 1 2

 0 1

 0 3





0 4b1  b3 

 2 4b1  b2 

1
The augmented matrix for the system.
4 times the first row was added to the second row
and to the third row.
b1
The second and third rows were interchanged.
97
1.6 More on Linear Systems and Invertible Matrices

 1 2

0
1

 0 3





0 4b1  b3 

 2 4b1  b2 

1
b1
The second row was multiplied by 1 .
 1 2 1

b1


1 0
4b1  b3 
0
0 0 2 8b1  b2  3b3 


3 times the second row was added to the third row.
 1 2 1

b1


1 0
4b1  b3 
0
0 0 1 4b1  12 b2  23 b3 


The third row was multiplied by  12 .
The system is consistent for all values of b1 , b2 , and b3 .
17.
 1 1

1
 2
 3 2

 4 3
2 b1 

1 b2 
2 1 b3 

1 3 b4 
3
5
 1 1

b1
3 2


 0 1 11 5 2b1  b2 
 0 1 11 5 3b1  b3 


1 11 5 4b1  b4 
 0
 1 1

b1
3 2


1 11 5 2b1  b2 
0
 0 1 11 5 3b1  b3 


1 11 5 4b1  b4 
 0
 1 1

b1
3 2


1 11 5 2b1  b2 
0
0 0
0 0 b1  b2  b3 


0 0 2b1  b2  b4 
 0 0
The augmented matrix for the system.
2 times the first row was added to the second row,
3 times the first row was added to the third row, and
4 times the first row was added to the fourth row.
The second row was multiplied by 1 .
The second row was added to the third row and
1 times the second row was added to the fourth row.
The system is consistent for all values of b1 , b2 , b3 , and b4 that satisfy the equations
b1  b2  b3  0 and 2b1  b2  b4  0 .
98
1.6 More on Linear Systems and Invertible Matrices
99
These equations form a linear system in the variables b1 , b2 , b3 , and b4 whose augmented matrix
 1 0 1 1 0 
 1 1 1 0 0 
has the reduced row echelon form 
 . Therefore the system is consistent if
 2

1 0 1 0
 0 1 2 1 0 

b1  b3  b4 and b2  2b3  b4 .
18.
(a)
The equation Ax  x can be rewritten as A x  Ix , which yields Ax  Ix  0 and
 A  I x  0 .
This is a matrix form of a homogeneous linear system - to solve it, we reduce its augmented matrix to a row
echelon form.
1 1 2 0 


2 1 2 0 
3 1 0 0 


1 1 2 0 


0 1 6 0 
0 2 6 0 


1 1 2 0 


1 6 0
0
0 2 6 0 


The augmented matrix for the homogeneous system
 A  I x  0 .
2 times the first row was added to the second row
and 3 times the first row was added to the third row.
The second row was multiplied by 1 .
1 1 2 0 


0 1 6 0 
0 0 6 0 


2 times the second row was added to the third row.
1 1 2 0 


0 1 6 0 
0 0 1 0 


The third row was multiplied by 61 .
Using back-substitution, we obtain the unique solution: x1  x2  x3  0 .
(b)
As was done in part (a), the equation A x  4x can be rewritten as  A  4 I  x  0 . We solve the latter system
by Gauss-Jordan elimination
 2
1 2 0


 2 2 2 0 
 3
1 3 0 

 2 2 2 0 


1 2 0
 2
 3
1 3 0 

The augmented matrix for the homogeneous system
 A  4I  x  0 .
The first and second rows were interchanged.
1.6 More on Linear Systems and Invertible Matrices
 1 1 1 0 


 2 1 2 0 
 3 1 3 0 


 1 1 1 0 


0 1 0 0 
0 4 0 0 


 1 1 1 0 


0 1 0 0 
0 4 0 0 


 1 0 1 0 


0 1 0 0 
0 0 0 0 


The first row was multiplied by 12 .
2 times the first row was added to the second row and
3 times the first row was added to the third row.
The second row was multiplied by 1 .
4 times the second row was added to the third row and
the second row was added to the first row.
If we assign x3 an arbitrary value t , the general solution is given by the formulas
x1  t , x2  0 , and x3  t .
19.
 1 1 1
X  2 3 0 
0 2 1
1
1
 1 1 1
 2 1 5 7 8 
 4 0 3 0 1 . Let us find 2 3 0  :




0 2 1
 3 5 7 2 1
 1 1 1 1 0 0 


2 3 0 0 1 0 
0 2 1 0 0 1 


The identity matrix was adjoined to the matrix.
 1 1 1 1 0 0 


0 5 2 2 1 0 
0 2 1 0 0 1 


2 times the first row was added to the second row.
 1 1 1 1 0 0 


0 1 0 2 1 2 
0 2 1 0 0
1 

2 times the third row was added to the second row.
 1 1 1 1 0 0 


1 2 
0 1 0 2
0 0 1 4 2
5 

2 times the second row was added to the third row.
100
1.6 More on Linear Systems and Invertible Matrices
 1 1 1 1 0 0 


0 1 0 2 1 2 
0 0 1 4 2 5 


The third row was multiplied by 1 .
 1 1 0 5 2
5


1 2 
0 1 0 2
0 0 1 4 2 5 


1 times the third row was added to the first row.
 1 0 0 3 1 3 


0 1 0 2 1 2 
0 0 1 4 2 5 


The second row was added to the first row.
1
 1 1 1
 3 1 3 


1 2  we obtain
Using 2 3 0    2
0 2 1
 4 2 5
 3 1 3  2 1 5 7 8   11 12 3 27 26 
X   2
1 2   4 0 3 0 1    6 8
1 18 17 
 4 2 5  3 5 7 2 1   15 21 9 38 35
1
20.
1
1  4 3 2 1 
1
 2 0
 2 0





X   0 1 1  6 7 8 9  . Let us find  0 1 1 :
 1 1 4 
 1 1 4   1 3 7 9 
 2 0
11 0 0


 0 1 1 0 1 0 
 1 1 4 0 0 1 


The identity matrix was adjoined to the matrix.
 1 1 4 0 0 1 


 0 1 1 0 1 0 
 2 0
1 1 0 0 

The first and third rows were interchanged.
 1 1 4 0 0 1 


0 1 1 0 1 0 
0 2 7 1 0 2 


2 times the first row was added to the third row.
 1 1 4 0 0 1 


0 1 1 0 1 0 
0 2 7 1 0 2 


The second row was multiplied by 1 .
101
1.6 More on Linear Systems and Invertible Matrices
 1 1 4 0 0 1 


0 1 1 0 1 0 
0 0 9 1 2 2 


2 times the second row was added to the third row.
 1 1 4 0
0
1


0 1 1 0 1 0 
0 0
1  19  29  29 

 1 1 0  49

1
0 1 0 9
0 0 1  19

 89
 79
 29


.
 
 1 0 0  95

1
0 1 0 9
0 0 1  19

 19
 19 
2 
9 
 29 

The third row was multiplied by  19 .
1
9
2
9
2
9
 79
2
9
102
1 times the third row was added to the second row and
4 times the third row was added to the first row.
1 times the second row was added to the first row.
1
  95
 2 0

Using  0 1 1   19
  19
 1 1 4 
1
 19
 79
 29
 19 
2
we obtain
9
2
 9
  95

X   19
  19
 19
 19   4 3 2 1   3  259

2
6 7 8 9   4  409
9
 29  1 3 7 9  2  239
 97
 29
 259
 409
 329
 239 

 449 
 379 
True-False Exercises
(a)
True. By Theorem 1.6.1, if a system of linear equation has more than one solution then it must have infinitely many.
(b)
True. If A is a square matrix such that Ax  b has a unique solution then the reduced row echelon form of A must
be I . Consequently, Ax  c must have a unique solution as well.
(c)
True. Since B is a square matrix then by Theorem 1.6.3(b) AB  I n implies B  A 1 .
Therefore, BA  A 1 A  I n .
(d)
True. Since A and B are row equivalent matrices, it must be possible to perform a sequence of elementary row
operations on A resulting in B . Let E be the product of the corresponding elementary matrices, i.e., EA  B . Note
that E must be an invertible matrix thus A  E 1 B .
Any solution of Ax  0 is also a solution of Bx  0 since Bx  EAx  E 0  0 .
Likewise, any solution of Bx  0 is also a solution of Ax  0 since Ax  E 1 Bx  E 1 0  0 .
(e)


True. If S 1 AS x  b then SS 1 ASx  A  Sx   Sb . Consequently, y  Sx is a solution of Ay  Sb .
1.6 More on Linear Systems and Invertible Matrices
(f)
True. Ax  4 x is equivalent to Ax  4 I n x , which can be rewritten as  A  4 I n  x  0 . By Theorem 1.6.4, this
homogeneous system has a unique solution (the trivial solution) if and only if its coefficient matrix A  4 I n is
invertible.
(g)
True. If AB were invertible, then by Theorem 1.6.5 both A and B would be invertible.
1.7 Diagonal, Triangular, and Symmetric Matrices
1.
2.
(a)
The matrix is upper triangular. It is invertible (its diagonal entries are both nonzero).
(b)
The matrix is lower triangular. It is not invertible (its diagonal entries are zero).
(c)
This is a diagonal matrix, therefore it is also both upper and lower triangular. It is invertible (its diagonal
entries are all nonzero).
(d)
The matrix is upper triangular. It is not invertible (its diagonal entries include a zero).
(a)
The matrix is lower triangular. It is invertible (its diagonal entries are both nonzero).
(b)
The matrix is upper triangular. It is not invertible (its diagonal entries are zero).
(c)
This is a diagonal matrix, therefore it is also both upper and lower triangular. It is invertible (its diagonal
entries are all nonzero).
(d)
The matrix is lower triangular. It is not invertible (its diagonal entries include a zero).
3.
 3 0 0   2 1   3 2 
0 1 0   4 1   1 4


   
0 0 2   2 5   2  2 
 31  6 3
 11   4 1
 2  5  4 10 
4.
 4 0 0 
 1 2 5 
  1 4 
 3 1 0   0 3 0    3 4

  0 0 2    


5.
 5 0 0   3 2 0 4 4    5  3

0 2 0   1 5 3 0
3    2 1


0 0 3  6 2 2 2 2   3  6 
 2  3   5 2    4
 1 3  0  2    12
6 10 
3
0 
 5 2   5 0   5 4   5 4 
 2  5  2  3  2  0   2  3
 3 2   3 2   3 2   3 2 
0 20 20 
 15 10

  2 10
6
0
6 
 18 6 6 6 6 
6.
2 0 0   4 1 3  3 0 0    2  4  3 
0 1 0   1 2 0   0 5 0    1 1 3



     
0 0 4   5 1 2   0 0 2   4  5  3 
 2  1 5  2  3 2  
 1 2  5  1 0  2  
 4 1 5  4  2  2  
103
1.7 Diagonal, Triangular, and Symmetric Matrices
12 
 24 10

  3 10
0 
 60
20 16 
7.
12
A 
 0
8.
 6 2

A2   0
 0

2
0  1 0 

,
2
 2   0 4 
2
0
 6  k

A k   0
 0

9.
 1  2
 2
2
A  0

 0

10.
3 k
0
0  1 0
4


0   0 19

2
 14   0 0
0
 1   k
 2
k
A  0

 0

 2 2

 0
2
A 
 0

 0
 2 2

 0
2
A 
 0

 0
 2  k

 0
k
A 
 0

 0
1
0    6 k

0  0

5 k   0
 
0
 13 
 1 0 

,
2 
 2   0 14 
2
0
k
0
0
2
0
0
 3
0
0
0
 4 
0
0
32
0
2
0
2
0
0
 3
0
0
0
0
 4 
k
2
0
0
 3 
0
0
k
k
1 k

 0
0   361
 
0   0
52   0

 1

k 
 2   0
0
0
1
9
0
0

0 ,
1 
25 
0

0

1

5k 
0

0 ,
1 
16 
 1 2
 2
2
A  0

 0

0
 13 
2
0
0  4 0 0 0 
 

0   0 16 0 0 

,
 

0
0
9
0
0 


22   0 0 0 4 
0
 4 
1
3k
A
0  2 k 0 0 



0    0 3k
0

k

k
 14    0 0 4 
0
 13 
0
0
 6 2

A2   0
 0

0  36 0 0 

0    0 9 0  ,
52   0 0 25

0
3
12
A 
 0
2
0   14
 
0  0
  0
0  

2 2  0
0
1
16
0
0
 1k
0    2 


0   0

0   0
 
2 k   0

0 0

0 0
,
1
0
9

0 14 
0
0
1
0
 4 k
0
0
1
 3k
0
0

0

0

1 
2k 
0  4 0 0

0    0 9 0  ,

2
 14    0 0 16 
0 
1 
 2 k 
104
1.7 Diagonal, Triangular, and Symmetric Matrices
1 2  0 

0


0

 0  5 2 
12.
 1 3  5 

0


0

0
 2  5 2 
0
13.
139

 0
14.
11000

 0
11.
15.
16.
17.
0
0
0
 1
39
  15
0
0 0
 

0
   0 20 0 
 4  7  3  0 0 84 
  1 0


  0 1
0
 1
1000
 1 0 


  0 1 
(a)
 au av 
bw bx 


 cy cz 
(a)
 ua vb 
 wa xb 


 ya zb 
(a)
 0 0 0 
 

0
  0 0 0 
 3 0 1  0 0 0 
0
 2 1
 1 3 


0 3 
3 0 


(b)
 ra
ua

 xa
sb tc 
vb wc 
yb zc 
(b)
 ar as at 
 bu bv bw 


 cx cy cz 
(b)
 1 3 7 2
3
1 8 3

 7 8 0 9 


 2 3 9 0 
(b)
 1 7 3 2 
 7 4
5 7 

 3 5
1 6 


3
 2 7 6
18.
(a)
19.
From part (c) of Theorem 1.7.1, a triangular matrix is invertible if and only if its diagonal entries are all nonzero.
Since this upper triangular matrix has a 0 on its diagonal, it is not invertible.
20.
From part (c) of Theorem 1.7.1, a triangular matrix is invertible if and only if its diagonal entries are all nonzero.
Since this upper triangular matrix has all three diagonal entries nonzero, it is invertible.
21.
From part (c) of Theorem 1.7.1, a triangular matrix is invertible if and only if its diagonal entries are all nonzero.
Since this lower triangular matrix has all four diagonal entries nonzero, it is invertible.
22.
From part (c) of Theorem 1.7.1, a triangular matrix is invertible if and only if its diagonal entries are all nonzero.
Since this lower triangular matrix has a 0 on its diagonal, it is not invertible.
105
1.7 Diagonal, Triangular, and Symmetric Matrices
23.
24.
 3  1

AB   0
 0

 4  6 

AB   
 




  . The diagonal entries of AB are: 3, 5,  6 .
 1 6 

1 5
0
0
 0  5

106


 . The diagonal entries of AB are: 24, 0, 42 .
 7  6 
0
0
25.
The matrix is symmetric if and only if a  5  3 . In order for A to be symmetric, we must have a  8 .
26.
The matrix is symmetric if and only if the following equations must be satisfied
3
a  2b  2 c 
2a  b  c 
0
a
 c  2
We solve this system by Gauss-Jordan elimination
 1 2 2 3 


1 1 0
2
 1 0 1 2 


The augmented matrix for the system.
 1 0 1 2 


1 1 0
2
 1 2 2 3 


The first and third rows were interchanged.
 1 0 1 2 


1 1 4 
0
0 2 1 5 


2 times the first row was added to the second row
and 1 times the first row was added to the third.
 1 0 1 2 


0 1 1 4 
0 0 1 13 


2 times the second row was added to the third row.
 1 0 1 2 


0 1 1 4 
0 0 1 13 


The third row was multiplied by 1 .
 1 0 0 11 


0 1 0 9 
0 0 1 13 


The third row was added to the second row
and 1 times the third row was added to the first.
In order for A to be symmetric, we must have a  11 , b  9 , and c  13 .
27.
From part (c) of Theorem 1.7.1, a triangular matrix is invertible if and only if its diagonal entries are all nonzero.
Therefore, the given upper triangular matrix is invertible for any real number x such that x  1 , x  2 , and x  4 .
1.7 Diagonal, Triangular, and Symmetric Matrices
107
28.
From part (c) of Theorem 1.7.1, a triangular matrix is invertible if and only if its diagonal entries are all nonzero.
Therefore, the given lower triangular matrix is invertible for any real number x such that x  12 , x  13 , and x   14 .
29.
By Theorem 1.7.1, A1 is also an upper triangular or lower triangular invertible matrix. Its diagonal entries must all
be nonzero - they are reciprocals of the corresponding diagonal entries of the matrix A .
30.
By Theorem 1.4.8(e),  AB   BT AT . Therefore we have:
T
 B B  B  B   B B ,
 BB    B  B  BB , and
 B AB    B  AB     AB   B   B A B  B AB since A is symmetric.
T
T
T
T
T
T
T
T
T
T
T
T
T
T
T
T
T
T
T
T
T
T
31.
 1 0 0
A   0 1 0 
 0 0 1
32.
0 0
 13 0 0 
  13 0 0   13




 
1
1
0  , etc.)
For example A  0 2 0  (there are seven other possible answers, e.g.,  0 2 0  , 0  12
 0 0 1 0
0 0 1 
0 1
33.
  1 2    2  0    5  0 

AB    0  2   1 0    3  0 

 0  2    0  0    4  0 
 1 8    2  2    5 0   1 0    2 1   5 3 
 0  8   1 2    3 0   0  0   11   3 3 
 0  8    0  2    4  0   0  0    0 1   4  3
17 
 2 12

 0 2
10  . Since this is an upper triangular matrix, we have verified Theorem 1.7.1(b).
 0 0 12 
34.
(a)
Theorem 1.4.8(e) states that  AB   BT AT (if the multiplication can be performed). Therefore,
T
 A    AA   A A   A 
2
T
T
T
T
T
2

A is
symmetric
A2
which shows that A2 is symmetric.
(b)
2 A  3A  I   2  A   3A  I  2  A   3A  I
2
T
2
T
T
Th.
1.4.8
(b-d)
T
T
Th.
1.4.8
(e)
2
T
T

A and I
are
symmetric
2 A2  3 A  I
which shows that 2 A2  3 A  I is symmetric.
35.
(a)
1
A 
1
(2)(3)  ( 1)( 1)
3 1   35
1 2    1

 5
1
5
2
5

 is symmetric, therefore we verified Theorem 1.7.4.

1.7 Diagonal, Triangular, and Symmetric Matrices
(b)
 1 2 3 1 0 0 


1 7 0 1 0 
 2
 3 7 4 0 0 1 


 1 2 3 1 0 0 


 0 3 1 2 1 0 
 0 1 5 3 0 1 


2 times the first row was added to the second row and
3 times the first row was added to the third row.
 1 2 3 1 0 0 


 0 1 5 3 0 1 
 0 3 1 2 1 0 


The second and third rows were interchanged.
 1 2 3 1 0 0 


1 5 3 0 1 
0
 0 3 1 2 1 0 


The second row was multiplied by 1 .
 1 2 3 1 0 0 


1 5 3 0 1 
0
 0 0 14 11 1 3 


3 times the second row was added to the third row.
 1 2 3 1

1 5 3
0
11
 0 0 1 14

The third row was multiplied by 141 .
19
 1 2 3  14

13
1 0  14
0
11
 0 0 1 14

0

0 1 
1
 143 
14
0
45
13
 14
  14
 13
 145
Since A1    14
11
1
 14
14

9
14
1
14
3
14






11
14
1
14
3
14





 143

45
 1 0 3  14

13
 0 1 0  14
11
 0 0 1 14

36.
The identity matrix was adjoined to the matrix A .
5
14
1
14
13
 14
 145
1
14

11
14
1
14
3
14
5 times the third row was added to the second row and
3 times the third row was added to the first row.
2 times the second row was added to the first row.


 is symmetric, we have verified Theorem 1.7.4

a 0 0
All 3  3 diagonal matrices have a form  0 b 0  .
 0 0 c 
 a 0 0   a 0 0   a 0 0  1 0 0 
A  3 A  4 I   0 b 0   0 b 0   3  0 b 0   4 0 1 0 


 
 

 0 0 c   0 0 c   0 0 c  0 0 1 
2
108
1.7 Diagonal, Triangular, and Symmetric Matrices
a2

0
0

0
b
2
0
109
0   3a 0 0   4 0 0 

0    0 3b 0    0 4 0 
c 2   0 0 3c   0 0 4 
 a 2  3a  4

0
0


2

0
0
b  3b  4

2




0
0
3
4
c
c


 a  4  a  1


0

0

0
 b  4  b  1
0


0

c

4
c

1
   
0
This is a zero matrix whenever the value of a , b , and c is either 4 or 1 . We conclude that the following are all
3  3 diagonal matrices that satisfy the equation:
 4 0 0   4 0 0   4 0 0   1 0 0 
 0 4 0  ,  0 4 0  ,  0 1 0  ,  0 4 0  ,

 
 
 

 0 0 4   0 0 1  0 0 4   0 0 4 
 4 0 0   1 0 0   1 0 0   1 0 0 
 0  1 0  ,  0 4 0  ,  0  1 0  ,  0 1 0 

 
 
 

 0 0 1  0 0 1  0 0 4   0 0 1
37.
(a)
a ji  j 2  i 2  i 2  j 2  aij for all i and j therefore A is symmetric.
(b)
a ji  j 2  i 2 does not generally equal aij  i 2  j 2 for i  j therefore A is not symmetric (unless n  1 ).
(c)
a ji  2 j  2i  2i  2 j  aij for all i and j therefore A is symmetric.
(d)
a ji  2 j 2  2i 3 does not generally equal aij  2i 2  2 j 3 for i  j therefore A is not symmetric (unless n  1 ).
38.
If aij  f  i, j  then A is symmetric if and only if f  i, j   f  j, i  for all values of i and j .
39.
a b
For a general upper triangular 2  2 matrix A  
 we have
0 c 
a b  a b  a b 
A3  



0 c  0 c  0 c 
a2

0
ab  bc   a b   a 3


c2  0 c   0
3
a 2 b   ab  bc  c   a

c3
  0
 a  ac  c  b 
2
2
c3

1.7 Diagonal, Triangular, and Symmetric Matrices
 1 30 
Setting A3  
we obtain the equations a 3  1 , a 2  ac  c2 b  30 , c 3  8 .

 0 8 
The first and the third equations yield a  1, c  2 .


Substituting these into the second equation leads to 1  2  4  b  30 , i.e., b  10 .
 1 30 
 1 10 
We conclude that the only upper triangular matrix A such that A3  
is A  

.
 0 8 
0 2 
40.
(a)
 1 0 0   y1   1
Step 1. Solve  2 3 0   y2    2 
 2 4 1  y3   0 
The first equation is y1  1 .
The second equation  2 1  3 y2  2 yields y2  0 .
The third equation  2 1   4  0   1y3  0 yields y3  2.
 2 1 3   x1   1
Step 2. Solve  0 1 2   x2    0  using back-substitution:
 0 0 4   x3   2 
The third equation 4 x3  2 yields x3   12 .
The second equation 1x2   2    12   0 yields x2  1 .
The first equation 2 x1   11   3    12   1 yields x1  74 .
(b)
0 0   y1   4 
 2

Step 1. Solve  4
1 0   y2    5
 3 2 3  y3   2 
The first equation 2 y1  4 yields y1  2 .
The second equation  4  2   1y2  5 yields y2  13 .
The third equation  3  2    2  13   3 y3  2 yields y3  6 .
 3 5 2   x1   2 
Step 2. Solve  0 4 1   x2    13 using back-substitution:
 0 0 2   x3   6 
The third equation 2 x3  6 yields x3  3 .
The second equation 4 x2  1 3   13 yields x2   25 .
The first equation 3 x1   5    25    2  3   2 yields x1   32 .
 0 0 4
 0 0 1


 4 1 0 
 0 0 8 
(b)  0 0 4 
 8 4
0 
41.
(a)
42.
The condition AT   A is equivalent to the linear system
110
1.7 Diagonal, Triangular, and Symmetric Matrices
2a  3b  c
3a  5b  5c
5a  8b  6c
 2 3 1 0
 3 5 5 0
The augmented matrix 
 5 8 6 0

0 0 0 1



d 
111
2
3
5
0
2
1
0

3
has the reduced row echelon form 
0
5


0
0
0 10 0 1
1 7 0 0 
.
0
0 1 0

0
0 0 0
If we assign c the arbitrary value t , the general solution is given by the formulas
a  1  10t , b  7t , c  t , d  0 .
43.
No. If AB  BA , AT   A , and BT  B then  AB   BT AT    B   A   BA  AB which does not generally
T
equal AB . (The product of skew-symmetric matrices that commute is symmetric.)
44.
1
2
 A  A  is symmetric since   A  A    A   A    A  A  and  A  A  is skew-symmetric since
T
T
1
2
T
T
1
2
T
1
2
T
1
2
T
1
2
T
  A  A   A   A    A  A     A  A  therefore the result follows from the identity
1
2
45.
T
T
1
2
1
2
T
1
2
T
T
1
2
T
1
2
T
A A  A A   A.
(a)
T
T
1
2
A 
T
1
 
 AT
1
   A
Theorem 1.4.9(d)
1
  A1
(b)
The assumption: A is skew-symmetric
Theorem 1.4.7(c)
A 
T
T
A
Theorem 1.4.8(a)
  AT
The assumption: A is skew-symmetric
 A  B
T
 AT  BT
Theorem 1.4.8(b)
 A  B
The assumption: A and B are skew-symmetric
   A  B
Theorem 1.4.1(h)
1.7 Diagonal, Triangular, and Symmetric Matrices
 A  B
T
 AT  BT
Theorem 1.4.8(c)
  A   B
The assumption: A and B are skew-symmetric
   A  B
Theorem 1.4.1(i)
 kA 
T
47.
 kAT
Theorem 1.4.8(d)
 k   A
The assumption: A is skew-symmetric
 kA
Theorem 1.4.1(l)

AT  AT A
  A  A   A A  A therefore A is symmetric; thus we have A  AA  A A  A .
T
T
T
T
T
2
T
True-False Exercises
(a)
True. Every diagonal matrix is symmetric: its transpose equals to the original matrix.
(b)
False. The transpose of an upper triangular matrix is a lower triangular matrix.
(c)
 1 1 1 0   2 1 
False. E.g., 


 is not a diagonal matrix.
 0 1 1 1   1 2 
(d)
True. Mirror images of entries across the main diagonal must be equal - see the margin note next to Example 4.
(e)
True. All entries below the main diagonal must be zero.
(f)
False. By Theorem 1.7.1(d), the inverse of an invertible lower triangular matrix is a lower triangular matrix.
(g)
False. A diagonal matrix is invertible if and only if all or its diagonal entries are nonzero (positive or negative).
(h)
True. The entries above the main diagonal are zero.
(i)
True. If A is upper triangular then AT is lower triangular. However, if A is also symmetric then it follows that
AT  A must be both upper triangular and lower triangular. This requires A to be a diagonal matrix.
(j)
0 1 
0 0 
0 1 
False. For instance, neither A  
nor B  
is symmetric even though A  B  


 is.
1 0 
0 0 
1 0 
(k)
 0 1
0 0 
0 1 
False. For instance, neither A  
nor B  
is upper triangular even though A  B  

 is.

1 0 
0 0 
 1 0 
112
1.7 Diagonal, Triangular, and Symmetric Matrices
(l)
0 0 
0 0 
False. For instance, A  
is not symmetric even though A2  

 is.
0 0 
1 0 
(m) True. By Theorem 1.4.8(d),  kA   kAT . Since kA is symmetric, we also have  kA   kA . For nonzero k the
T
equality of the right hand sides kAT  kA implies AT  A .
1.8 Matrix Transformations
1.
(a)
TA  x   Ax maps any vector x in R2 into a vector w  Ax in R3 .
The domain of TA is R2 ; the codomain is R3 .
(b)
TA  x   Ax maps any vector x in R3 into a vector w  Ax in R2 .
The domain of TA is R3 ; the codomain is R2 .
(c)
TA  x   Ax maps any vector x in R3 into a vector w  Ax in R3 .
The domain of TA is R3 ; the codomain is R3 .
(d)
TA  x   Ax maps any vector x in R6 into a vector w  Ax in R1  R .
The domain of TA is R6 ; the codomain is R .
2.
(a)
TA  x   Ax maps any vector x in R5 into a vector w  Ax in R 4 .
The domain of TA is R5 ; the codomain is R 4 .
(b)
TA  x   Ax maps any vector x in R 4 into a vector w  Ax in R5 .
The domain of TA is R 4 ; the codomain is R5 .
(c)
TA  x   Ax maps any vector x in R 4 into a vector w  Ax in R 4 .
The domain of TA is R 4 ; the codomain is R 4 .
(d)
TA  x   Ax maps any vector x in R1  R into a vector w  Ax in R3 .
The domain of TA is R ; the codomain is R3 .
3.
(a)
The transformation maps any vector x in R2 into a vector w in R2 .
Its domain is R2 ; the codomain is R2 .
(b)
The transformation maps any vector x in R2 into a vector w in R3 .
Its domain is R2 ; the codomain is R3 .
4.
(a)
The transformation maps any vector x in R3 into a vector w in R3 .
Its domain is R3 ; the codomain is R3 .
T
113
1.8 Matrix Transformations
(b)
The transformation maps any vector x in R3 into a vector w in R2 .
Its domain is R3 ; the codomain is R2 .
5.
(a)
The transformation maps any vector x in R3 into a vector in R2 .
Its domain is R3 ; the codomain is R2 .
(b)
The transformation maps any vector x in R2 into a vector in R3 .
Its domain is R2 ; the codomain is R3 .
6.
(a)
The transformation maps any vector x in R2 into a vector in R2 .
Its domain is R2 ; the codomain is R2 .
(b)
The transformation maps any vector x in R3 into a vector in R3 .
Its domain is R3 ; the codomain is R3 .
7.
(a)
The transformation maps any vector x in R2 into a vector in R2 .
Its domain is R2 ; the codomain is R2 .
(b)
The transformation maps any vector x in R3 into a vector in R2 .
Its domain is R3 ; the codomain is R2 .
8.
(a)
The transformation maps any vector x in R 4 into a vector in R2 .
Its domain is R 4 ; the codomain is R2 .
(b)
The transformation maps any vector x in R3 into a vector in R3 .
Its domain is R3 ; the codomain is R3 .
9.
The transformation maps any vector x in R2 into a vector in R3 . Its domain is R2 ; the codomain is R3 .
10.
The transformation maps any vector x in R3 into a vector in R 4 . Its domain is R3 ; the codomain is R 4 .
11.
(a)
 x1 
 w1   2 3 1  
The given equations can be expressed in matrix form as    
  x2 
 w2   3 5 1  x 
 3
1
 2 3
therefore the standard matrix for this transformation is 
5 1
3
(b)
 w1   7 2 8   x1 
The given equations can be expressed in matrix form as  w2    0 1 5  x2 
 w3   4 7 1  x3 
 7 2 8 
therefore the standard matrix for this transformation is  0 1 5 .
 4 7 1
114
1.8 Matrix Transformations
12.
(a)
1
 w1   1
x 



The given equations can be expressed in matrix form as  w2    3 2   1 
x
 w3   5 7   2 
 1 1
therefore the standard matrix for this transformation is  3 2  .
 5 7 
(b)
 w1  1
w  
1
The given equations can be expressed in matrix form as  2   
 w3  1
  
 w4  1
1
1
therefore the standard matrix for this transformation is 
1

1
13.
0 0 0   x1 
 
1 0 0   x2 
1 1 0   x3 
 
1 1 1   x4 
0 0 0
1 0 0 
.
1 1 0

1 1 1
(a)
 x2   0 1
1
 0
 x  



1
   1 0   x1  ; the standard matrix is  1 0 
T  x1 , x2   
 x1  3 x2   1 3  x2 
 1 3

 



 1 1
 x1  x2   1 1
(b)
 x1 
7 x1  2 x2  x3  x4   7 2 1 1  
   0 1 1 0   x2  ;
T  x1 , x2 , x3 , x4   
x 2  x3
 
 x 

  1 0 0 0   3 
 x1
 x4 
 7 2 1 1
the standard matrix is  0 1 1 0 
 1 0 0 0 
14.
(c)
0  0
0  0
  
T  x1 , x2 , x3   0   0
  
0  0
0  0
(d)
 x4   0
 x  
 1  1
T  x1 , x2 , x3 , x4    x3   0

 
 x2   0
 x1  x3   1


(a)
0 0
0
0

0 0   x1 

 
0 0   x2  ; the standard matrix is 0


0 0   x3 
0
0
0 0 
0
0
0
0
0
1
1
0
0 1
0 0
0 0 
0 0

0 0
0 0 
1
0
 x1 
1

0  

x
2
0    ; the standard matrix is 0

  x3 
0  
0
 x4 
 1

0
 2 x  x2  2 1  x1 
 2 1
; the standard matrix is 

T  x1 , x2    1





 1 1
 x1  x2   1 1  x2 
0
0
0
0
0
1
1
0
0 1
1
0 
0

0
0 
115
1.8 Matrix Transformations
15.
(b)
 x   1 0   x1 
1 0 
; the standard matrix is 
T  x1 , x2    1   




0 1 
 x2   0 1   x2 
(c)
1 2 1 
 x1  2 x2  x3   1 2 1   x1 






T  x1 , x2 , x3    x1  5 x2    1 5 0   x2  ; the standard matrix is  1 5 0 

  0 0 1   x3 
0 0 1 
x3
(d)
 4 x1   4 0 0   x1 
4 0 0






T  x1 , x2 , x3    7 x2    0 7 0   x2  ; the standard matrix is  0 7 0 
 8 x3   0 0 8   x3 
 0 0 8 
 w1   3 5 1  x1 
The given equations can be expressed in matrix form as  w2    4 1 1  x2  therefore the standard matrix for
 w3   3 2 1  x3 
 3 5 1
this operator is  4 1 1 .
 3 2 1
By directly substituting  1,2,4  for  x1 , x2 , x3  into the given equation we obtain
w1    3 1   5  2   1 4   3
w2    4 1  1 2   1 4   2
w3    3 1   2  2   1 4   3
 w1   3 5 1  1    31   5 2   1 4    3
  
  
  
By matrix multiplication,  w2    4 1 1  2      4 1  1 2   1 4     2  .
 w3   3 2 1  4     3 1   2  2   1 4    3
16.
116
 x1 
 
3 5 1  x2 
 w  2
The given equations can be expressed in matrix form as  1   
therefore the standard

 w2   1 5 2 3  x3 
 
 x4 
3 5 1
2
matrix for this transformation is 
.
 1 5 2 3 
By directly substituting 1, 1,2,4  for  x1 , x2 , x3 , x4  into the given equation we obtain
w1   2 1   3 1   5  2   1 4   15
w2  11   5 1   2  2    3  4   2
By matrix multiplication,
1.8 Matrix Transformations
 1
 
 w1  2 3 5 1  1   2 1   3 1   5  2   1 4    15



w  

.
 2   1 5 2 3  2  11   5 1   2  2    3  4    2 
 
 4
17.
(a)
  x  x2   1 1  x1 
 1 1
; the standard matrix is 
T  x1 , x2    1





.
 0 1
 x2   0 1  x2 
 1 1  1  11  1 4    5
T x  
    matches T  1,4   1  4,4    5,4  .
   
 0 1  4     0 1  1 4    4 
(b)
2 1 1
2 x1  x2  x3  2 1 1  x1 






T  x1 , x2 , x3    x2  x3   0
1 1  x2  ; the standard matrix is 0
1 1 .

 0 0 0   x3 
0 0 0 
0
2 1 1  2    2  2   11  1 3    0 


T  x   0 1 1  1    0  2   11  1 3    2 
0 0 0   3  0  2    0 1   0  3    0 
matches T  2,1, 3    4  1  3,1  3,0    0, 2,0  .
18.
(a)
 2 x  x2  2 1  x1 
 2 1
; the standard matrix is 
T  x1 , x2    1





.
 1 1
 x1  x2   1 1  x2 
2 1  2     2  2   1 2    6 
T x  
    matches
   
 1 1  2    1 2   1 2    0 
T  2,2    4  2, 2  2    6,0  .
(b)
 x1   1 0 0   x1 
 1 0 0






T  x1 , x2 , x3    x2  x3    0 1 1  x2  ; the standard matrix is 0 1 1 .
 x2   0 1 0   x3 
0 1 0 
 1 0 0   1 11   0  0    0  5    1


T  x   0 1 1 0     0 1  1 0   1 5     5 matches T 1,0,5   1, 5,0  .
0 1 0   5  0 1  1 0    0  5    0 
19.
20.
(a)
1 2   3  1
TA  x   Ax  
    
3 4   2   1
(b)
 1
 1 2 0     3
TA  x   Ax  
  1   
 3 1 5  3 13
 
(a)
 2 1 4   x1   2 x1  x2  4 x3 
TA  x   Ax   3 5 7   x2    3 x1  5 x2  7 x3 
 6 0 1  x3  

6 x1  x3
117
1.8 Matrix Transformations
21.
(b)
 1 1
  x1  x2 
 x1  


TA  x   Ax   2 4      2 x1  4 x2 
 x2   7 x  8 x 
2
 7 8 
 1
(a)
If u   u1 , u2  and v   v1 , v2  then
T  u  v   T  u1  v1 , u2  v2 
  2  u1  v1    u2  v2  ,  u1  v1    u2  v2  
  2u1  u2 , u1  u2    2 v1  v2 , v1  v2 
 T u  T v
and T  ku   T  ku1 , ku2    2 ku1  ku2 , ku1  ku2   k  2u1  u2 , u1  u2   kT  u  .
(b)
If u   u1 , u2 , u3  and v   v1 , v2 , v3  then
T  u  v   T  u1  v1 , u2  v2 , u3  v3 
  u1  v1 , u3  v3 , u1  v1  u2  v2 
  u1 , u3 , u1  u2    v1 , v3 , v1  v2 
 T u  T v
and T  ku   T  ku1 , ku2 , ku3    ku1 , ku3 , ku1  ku2   k  u1 , u3 , u1  u2   kT  u  .
22.
(a)
If u   u1 , u2 , u3  and v   v1 , v2 , v3  then
T  u  v   T  u1  v1 , u2  v2 , u3  v3 
  u1  v1  u2  v2 , u2  v2  u3  v3 , u1  v1 
  u1  u2 , u2  u3 , u1    v1  v2 , v2  v3 , v1 
 T u  T v
and T  ku   T  ku1 , ku2 , ku3    ku1  ku2 , ku2  ku3 , ku1   k  u1  u2 , u2  u3 , u1   kT  u  .
(b)
If u   u1 , u2  and v   v1 , v2  then
T  u  v   T  u1  v1 , u2  v2 
  u2  v2 , u1  v1 
118
1.8 Matrix Transformations
119
  u2 , u1    v2 , v1 
 T u  T v
and T  ku   T  ku1 , ku2    ku2 , ku1   k  u2 , u1   kT  u  .
23.
(a)
The homogeneity property fails to hold since T (kx, ky)  ((kx )2 , ky)  ( k 2 x 2 , ky) does not generally equal

 

kT  x, y   k x 2 , y  kx 2 , ky . (It can be shown that the additivity property fails to hold as well.)
(b)


The homogeneity property fails to hold since T  kx, ky, kz    kx, ky, kxkz   kx, ky, k 2 xz does not generally
equal kT  x, y, z   k  x, y, xz    kx, ky, kxz  . (It can be shown that the additivity property fails to hold as well.)
24.
(a)
The homogeneity property fails to hold since T  kx, ky    kx, ky  1 does not generally equal
kT  x, y   k  x, y  1   kx, ky  k  . (It can be shown that the additivity property fails to hold as well.)
(b)

The homogeneity property fails to hold since T  kx1 , kx2 , kx3   kx1 , kx2 , kx3

 
 does not generally equal

kT  x1 , x2 , x3   k x1 , x2 , x3  kx1 , kx2 , k x3 . (It can be shown that the additivity property fails to hold as
well.)
25.
The homogeneity property fails to hold since for b  0 , f  kx   m  kx   b does not generally equal
kf  x   k  mx  b   kmx  kb . (It can be shown that the additivity property fails to hold as well.)
On the other hand, both properties hold for b  0 : f  x  y   m  x  y   mx  my  f  x   f  y  and
f  kx   m  kx   k  mx   kf  x  .
Consequently, f is not a matrix transformation on R unless b  0
26.
Both properties of Theorem 1.8.2 hold for T  x, y    0,0  :
T   x, y    x , y    T  x  x , y  y    0,0    0,0    0,0   T  x, y   T  x , y 
T  k  x, y    T  kx, ky    0,0   k  0,0   kT  x, y 
On the other hand, neither property holds in general for T  x, y   1,1 , e.g.,
T   x, y    x , y    T  x  x , y  y   1,1 does not equal
T  x, y   T  x, y   1,1  1,1   2,2 
27.
By Formula (13), the standard matrix for T is A   T  e1 
T e2 
1 2    0 1   4  0   2 
4
1 0

  


A   3 0 3  and T  x   Ax   3  2    0 1   3  0    6  .
  0  2   11  1 0   1 
 0 1 1


T  e 3   . Therefore
1.8 Matrix Transformations
28.
By Formula (13), the standard matrix for T is A   T  e1 
T e2 
T  e 3   . Therefore
  2  3    3 2   11   1
 2 3 1

  


A   1 1 0  and T  x   Ax   1 3  1 2    0 1    1 .
 3 3    0  2    2 1  11
 3 0 2 


29.
(a)
 1 0   1  1 
 0 1  2    2 

   
(b)
 1 0   1  1 
 0 1  2    2 

   
(c)
 0 1  1  2 
 1 0   2    1

   
30.
(a)
 1 0  a   a 
 0 1  b     b 

   
(b)
 1 0   a    a 
 0 1  b    b 

   
(c)
 0 1  a   b 
 1 0  b  a 

   
(a)
 1 0 0  2  2
 0 1 0   5   5

   
 0 0 1  3  3
(b)
 1 0 0   2  2 
 0 1 0   5   5

   
 0 0 1  3  3
(c)
 1 0 0   2   2 
 0 1 0   5   5

   
 0 0 1  3  3
32.
(a)
 1 0 0 a   a 
0 1 0   b    b 

   
 0 0 1  c   c 
(b)
 1 0 0 a   a 
 0 1 0   b     b 

   
0 0 1  c   c 
(c)
 1 0 0   a   a 
 0 1 0 b   b

   
 0 0 1  c   c 
33.
(a)
 1 0   2  2 
 0 0   5    0 

   
(b)
0 0   2   0 
 0 1  5    5 

   
34.
(a)
 1 0 a  a 
0 0   b    0 

   
(b)
0 0   a  0 
 0 1  b    b 

   
(a)
 1 0 0   2   2 
0 1 0   1   1

   
0 0 0   3  0 
(b)
 1 0 0   2   2 
0 0 0   1   0 

   
0 0 1  3  3
(c)
 0 0 0   2   0 
 0 1 0   1   1

   
 0 0 1  3  3 
36.
(a)
 1 0 0 a  a 
0 1 0   b    b 

   
 0 0 0   c   0 
(b)
 1 0 0 a  a 
0 0 0   b    0 

   
 0 0 1  c   c 
(c)
0 0 0   a  0 
0 1 0   b   b 

   
 0 0 1  c   c 
37.
(a)
3
 cos30  sin 30  3  2


 sin 30 cos30  4 

    12
 12   3  3 2 3  2   4.60 




3  4 
3
2 
    2  2 3   1.96 
(b)
 cos  60   sin  60    3  12

    3
 sin  60  cos  60    4    2
(c)
2
 cos 45  sin 45  3  2


sin 45
cos 45  4   22

(d)
 cos 90  sin 90   3   0 1  3   4 
 sin 90 cos 90   4    1 0   4    3 

  
   
31.
35.
  3  3  2 3   1.96 

   2 3 3

1
  4    2  2   4.60 
2
3
2
 22   3  7 2 2   4.95




2  4 
     22   0.71
2 
120
1.8 Matrix Transformations
38.
39.
 cos
sin 

(b)
 cos     sin      v1   v1 cos     v2 sin      v1 cos   v2 sin  




cos      v2   v1 sin     v2 cos      v1 sin   v2 cos  
sin   
By Formula (13), the standard matrix for T is A   T  e1 
a
A
b
40.
 sin    v1   v1 cos   v2 sin  
 

cos   v2   v1 sin   v2 cos 
(a)
T  e 2   . Therefore
c
1  a  c 
and T 1,1  A    

.
d
1  b  d 
(a)
a 
 ka 
TA  e1     . Since TA is a matrix transformation, TA  k e1   kTA  e1     .
 kc 
c 
(b)
b
TA  e 2     . Since TA is a matrix transformation,
d 
 k a   l b   ka  l b 
TA  k e1  le2   kTA  e1   lTA  e 2         
.
 kc   l d   k a  l d 
41.
(a)
3
 0
 1




TA  e1    2  , TA  e 2    1 , TA  e 3    2  .
 5 
 4 
 3
(b)
Since TA is a matrix transformation,
 1  3  0  2 
TA  e1  e 2  e 3   TA  e1   TA  e 2   TA  e 3    2    1   2    5 .
 4   5  3  6 
(c)
42.
 0  0
Since TA is a matrix transformation, TA  7e3   7TA  e 3   7  2    14  .
 3   21
 1 0 0   1  1
Orthogonal projection onto the xy -plane: T 1,2,3    0 1 0   2    2  .
 0 0 0   3  0 
 1 0 0   1  1
Orthogonal projection onto the xz -plane: T 1,2,3    0 0 0   2    0  .
 0 0 1  3  3
 0 0 0   1  0 
Orthogonal projection onto the yz -plane: T 1,2,3    0 1 0   2    2  .
 0 0 1  3  3
121
1.8 Matrix Transformations
43.
122
 1 0 0   1  1
Reflection about the xy -plane: T 1,2,3   0 1 0  2    2  .
 0 0 1  3  3
 1 0 0   1  1
Reflection about the xz -plane: T 1,2,3   0 1 0   2    2  .
0 0 1  3  3
 1 0 0   1  1
Reflection about the yz -plane: T 1,2,3    0 1 0  2    2  .
 0 0 1  3  3
44.
 cos 
If A  
 sin 
 sin  
 cos
then AT  

cos  
  sin 
sin    cos     sin    

 (since cos     cos and
cos    
cos  sin   
sin      sin  ). The geometric effect of multiplying AT by x is to rotate the vector through the angle  (i.e.,
to rotate through the angle  clockwise).
45.
The standard matrix for T is A   T  e1 
1 
1  2 
T  e 2   . Observe that    3      . Because
0 
1  3 
 1 2  
 1 
 2  
 1  2   5
TA is a transformation, TA  e1   TA  3        3TA      TA      3       
.
 2   5  11
 1 3  
 1 
 3  
0  2 
1
Likewise,       2   so we obtain
1  3 
1
 2 
 2  
 1   2 
1 
 1  4 
TA  e 2   TA     2     TA      2TA         2      .
1 
 2   9
 3
  3 
 1   5
 5 4 
Therefore, the matrix for TA is A  
.
 11 9 
46.
The standard matrix for T is A   T  e1 

T e2 
T  e 3   , so we need to express the

 1  1
 3




standard basis vectors e1 , e 2 , and e3 as linear combinations of the vectors  0  , 1 , and  1 .
 2 
 2  1
 1 1 3
To do this, we compute the inverse of  0 1 1 .
 2 1 2 
1.8 Matrix Transformations
 1 1 3 1 0 0 


 0 1 1 0 1 0 
2 1 2 0 0 1


 1 1 3 1 0 0 


 0 1 1 0 1 0 
 0 1 8 2 0 1 


The identity matrix was adjoined to the original matrix.
2 times the first row was added to the third row.
 1 1 3 1 0 0 


0 1 1 0 1 0 
0 0 7 2 1 1 


The second row was added to the third row.
 1 1 3 1 0 0 


0 1 1 0 1 0 
0 0
1  27 17 17 

The third row was multiplied by 17 .
 1 1 3 1 0 0 

8
2
1 
0 1 0  7 7 7 
0 0
1  27 17 17 

The third row was added to the second row.
 1 1 0 17

2
0 1 0  7
0 0 1  27

3
7
8
7
1
7
3
7
1
7
1
7





3 times the third row was added to the first row.
 1 0 0 37

2
0 1 0  7
0 0 1  27

 75
2
7
1
7
1
7





1 times the second row was added to the first row.
 37
We obtain   27
  27
 75
8
7
1
7
8
7
1
7
2
7
1
7
1
7
  1   37   37
   2 ,  2
 0     7    7
  0    27    27
 75
8
7
1
7
2
7
1
7
1
7
  0    75 
 37
    8  and  2
 7
 1    7  ,
  27
  0   17 
 75
so that
 1 
1
 3 
 3   2   2  
T  e1   T  7 0   7 1  7  1 
 2 
1
 2  
  
 1  
 1 
  3 
 2
1 
 5 2 
  2   2   3   2   2 
 T  0    7 T  1   7 T   1   7  3  7 3  7  11  1  .
 2  
 1 
  2 
 10 
8 
 7  0 
 
 
 
3
7
8
7
1
7
2
7
1
7
1
7
  0   27 
  1
 0    7 
  1   17 
123
1.8 Matrix Transformations
124
 2
1 
 5  0 
 2
1 
 5  1






8  
1
2
1 
1
Likewise, T  e 2     3  7 3   7  11   4  and T  e 3   7  3  7 3  7  11   2  .
 10 
8 
 7   3
 10 
8 
 7   5
5
7
2 1 0 
Therefore, the standard matrix for T is A   1 4 2  .
0 3 5
47. The terminal point of the vector is first rotated about the origin through the angle  , then it is
translated by the vector x 0 . No, this is not a matrix transformation, for instance it fails the additivity
property: T  u  v   x 0  R  u  v   x 0  R u  R v  x 0  R u  x 0  R v  T  u   T  v  .
 1 0 0
0 0 1


0 1 0 
(b)
0 0 1
0 1 0 


 1 0 0 
(c)
0 1 0 
 1 0 0


0 0 1
48.
(a)
49.
cos  2   sin  2  
Since cos2   sin 2   cos  2  and 2sin  cos  sin  2  , we have A  
 . The geometric
 sin  2  cos  2  
effect of multiplying A by x is to rotate the vector through the angle 2 .
True-False Exercises
(a)
False. The domain of TA is R 3 .
(b)
False. The codomain of TA is R m .
(c)
True. Since the statement requires the given equality to hold for some vector x in R n , we can let x  0 .
(d)
False. (Refer to Theorem 1.8.3.)
(e)
True. The columns of A are T  e i   0 .
(f)
False. The given equality must hold for every matrix transformation since it follows from the homogeneity property.
(g)
False. The homogeneity property fails to hold since T  kx   kx  b does not generally equal
kT  x   k  x  b   kx  kb .
1.9 Compositions of Matrix Transformations
1.9 Compositions of Matrix Transformations
1.
(a)
0 1 
1 0 
From Tables 1 and 3 in Section 1.8, T1   
and T2   

;
1 0 
0 0 
0 0 
0 1 


T1  T2   T1 T2    1 0  ; T2  T1   T2 T1   0 0  .


For these transformations, T1  T2  T2  T1 .
(b)
 1 0
0 1 
and T2   
From Table 1 in Section 1.8, T1   
;

 0 1
1 0 
 0
1


 0 1
.
0 

T1  T2   T1 T2    1 0  ; T2  T1   T2 T1    1
For these transformations, T1  T2  T2  T1 .
2.
(a)
1 0 
0 0 
and T2   
From Table 3 in Section 1.8, T1   
;

0 0 
0 1 
0 0 
0 0 


T1  T2   T1 T2   0 0  ; T2  T1   T2 T1   0 0  .


For these transformations, T1  T2  T2  T1 .
(b)
cos 4
From Tables 5 and 1 in Section 1.8, T1   

 sin 4
  22
T1  T2   T1 T2   
2
  2
 sin 4   22

cos 4   22
 2
 22 
; T2  T1   T2 T1    2

2
2

2 
 2
 22 
 1 0 
and T2   

;
2
0
1



2 

.
2

2 
2
2
For these transformations, T1  T2  T2  T1 .
3.
 1 0 0
0 0 0 


From Tables 2 and 4 in Section 1.8, T1   0 1 0  and T2    0 1 0  ;
0 0 1
 0 0 1 
0 0 0 
0 0 0 


T1  T2   T1 T2   0 1 0  ; T2  T1   T2 T1   0 1 0  .
 0 0 1
 0 0 1
For these transformations, T1  T2  T2  T1 .
4.
1 0 0 
2 x  2 0 0   x 


From Table 4 in Section 1.8, T1    0 1 0  . In vector form, T2    3 y    0 3 0   y  so that
 0 0 0 
 z   0 0 1   z 
125
1.9 Compositions of Matrix Transformations
126
2 0 0 
T2   0 3 0  . Therefore,
 0 0 1 
2 0 0 
2 0 0 


T1  T2   T1 T2   0 3 0  and T2  T1   T2 T1   0 3 0  .
 0 0 0 
 0 0 0 
For these transformations, T1  T2  T2  T1 .
7 
 10
;
 5 10 
5.
TB  TA   TB TA   BA  
6.
0 20 
 40

TB  TA   TB TA   BA   12 9 18  ;
 38 18 43 
7.
(a)
 8
3 


TA  TB   TA TB   AB   13 12 
19 18 22 
TA  TB   TA TB   AB  10 3 16  .
31 33 58 
We are looking for the standard matrix of T  T2  T1 where T1 is a rotation of 90 and T2 is a reflection about
the line y  x . From Tables 5 and 1 in Section 1.8,
0 1 
 1 0
 cos 90  sin 90   0 1
, T2   
. Therefore, T   T2 T1   


.


cos 90   1 0 
 0 1
1 0 

T1   sin 90
(b)
We are looking for the standard matrix of T  T2  T1 where T1 is an orthogonal projection onto the y -axis and
T2 is a rotation of 45 about the origin. From Tables 3 and 5 in Section 1.8,
2
cos 45  sin 45  2
0 0 
T


,



2
 sin 45 cos 45 
1 

  22

T1    0
(c)
0
 22 
. Therefore, T   T2 T1   

2
0
2 


.
2
2 

 2
2
We are looking for the standard matrix of T  T2  T1 where T1 is a reflection about the x -axis and T2 is a
 1 0
rotation of 60 about the origin. From Tables 1 and 5 in Section 1.8, T1   
 and
 0 1
 cos60  sin 60  12

T
 2  sin 60 cos60   3

  2
1
Therefore, T   T2 T1    2
3
 2
8.
(a)
 23 
.
1

2

.
 12 
3
2
We are looking for the standard matrix of T  T3  T2  T1 where T1 is a rotation of 60 , T2 is an orthogonal
projection onto the x -axis, and T3 is a reflection about the line y  x . From Tables 5, 3, and 1 in Section 1.8,
 cos60  sin 60  12

cos60  23

T1   sin 60
 23 
1 0 
0 1 
, and T3   
 , T2   

.
1

0 0 
1 0 
2
0
0
.
Therefore, T   T3 T2 T1    1
3
 2  2 
1.9 Compositions of Matrix Transformations
(b)
127
We are looking for the standard matrix of T  T3  T2  T1 where T1 is an orthogonal projection onto the x-axis,
T2 is a rotation of 45 , and T3 is a reflection about the y -axis. From Tables 3, 5, and 1 in Section 1.8,
2
 cos 45  sin 45  2
1 0 


T
,



2
sin 45
cos 45  22
0 


T1    0
 2
Therefore, T   T3 T2 T1    2
 22
(c)
 22 
 1 0 
 , and T3   
.
2

 0 1
2 
0
.
0 
We are looking for the standard matrix of T  T3  T2  T1 where T1 is a rotation of 15 , T2 is a rotation of
105 , and T3 is a rotation of 60 . The net effect of the three rotations is a single rotation of
15  105  60  180 . From Table 5 in Section 1.8,
 cos180  sin180   1 0 
.

cos180   0 1

T    sin180
9.
(a)
We are looking for the standard matrix of T  T2  T1 where T1 is a reflection about the yz -plane and T2 is an
orthogonal projection onto the xz -plane. From Tables 2 and 4 in Section 1.8,
1 0 0 
 1 0 0 
 1 0 0 




T1    0 1 0  and T2   0 0 0  . Therefore, T   T2 T1    0 0 0  .
 0 0 1 
 0 0 1
 0 0 1
(b)
We are looking for the standard matrix of T  T2  T1 where T1 is a reflection about the xy -plane and T2 is an
orthogonal projection onto the xy -plane. From Tables 2 and 4 in Section 1.8,
 1 0 0
1 0 0 
 1 0 0




T1   0 1 0  and T2   0 1 0  . Therefore, T   T2 T1   0 1 0  .
0 0 1
 0 0 0 
 0 0 0 
(c)
We are looking for the standard matrix of T  T2  T1 where T1 is an orthogonal projection on the xy -plane and
T2 is a reflection about the yz -plane. From Tables 4 and 2 in Section 1.8,
1 0 0 
 1 0 0 
 1 0 0 




T1   0 1 0  , T2    0 1 0  . Therefore, T   T2 T1    0 1 0  .
 0 0 0 
 0 0 1
 0 0 0 
10.
(a)
We are looking for the standard matrix of T  T3  T2  T1 where T1 is a reflection about the xy -plane, T2 is an
orthogonal projection onto the xz -plane, and T3 is the transformation such that T3  x    x.
 1 0 0
1 0 0 


From Tables 2 and 4 in section 1.8, T1   0 1 0  and T2    0 0 0  .
0 0 1
 0 0 1 
1.9 Compositions of Matrix Transformations
128
 1 0 0 
  x1   1 0 0   x1 






In vector form, T3  x1 , x2 , x3     x2    0 1 0   x2  so that T3    0 1 0  .
  x3   0 0 1  x3 
 0 0 1
 1 0 0 
Therefore, T   T3 T2 T1    0 0 0  .
 0 0 1
(b)
We are looking for the standard matrix of T  T3  T2  T1 where T1 is a reflection about the xy -plane, T2 is a
reflection about the xz -plane, and T3 is an orthogonal projection on the yz -plane. From Tables 2 and 4 in
 1 0 0
1 0 0 
0 0 0 




Section 1.8, T1   0 1 0  , T2   0 1 0  , and T3    0 1 0  . Therefore,
0 0 1
0 0 1 
 0 0 1
0 0 0
T   T3 T2 T1    0 1 0  .
 0 0 1 
(c)
We are looking for the standard matrix of T  T3  T2  T1 where T1 is an orthogonal projection onto the yz plane, T2 is the transformation such that T2  x   2 x , and T3 is a reflection about the xy -plane.
0 0 0 
 1 0 0


From Tables 4 and 2 in section 1.8, T1    0 1 0  and T3   0 1 0  .
 0 0 1 
0 0 1
 2 x1   2 0 0   x1 
2 0 0 






In vector form, T2  x1 , x2 , x3   2 x2    0 2 0   x2  so that T2    0 2 0  .
 2 x3   0 0 2   x3 
 0 0 2 
0
0 0

Therefore, T   T3 T2 T1    0 2
0  .
 0 0 2 
11.
(a)
 x  x2  1 1  x1 
1 1

so that T1   
In vector form, T1  x1 , x2    1




.
1 1
 x1  x2  1 1  x2 
 3 x1   3 0   x1 
3 0 

so that T2   
Likewise, T2  x1 , x2   




.
2 4 
2 x1  4 x2  2 4   x2 
(b)
3
 3 0  1 1  3




4  1 1 6 2 

T2  T1   T2 T1   2
1
1  3 0   5
4


4   1 4 

T1  T2   T1 T2   1 1 2

(c)
T1  T2  x1 , x2     5x1  4 x2 , x1  4 x2  ; T2  T1  x1 , x2     3x1  3x2 , 6 x1  2 x2 
1.9 Compositions of Matrix Transformations
12.
(a)
 4 x1   4 0 0   x1 
 4 0 0






In vector form, T1  x1 , x2 , x3    2 x1  x2    2
1 0   x2  so that T1    2
1 0  .
  x1  3 x2   1 3 0   x3 
 1 3 0 
 1 2 0
 x1  2 x2   1 2 0   x1 






Likewise, T2  x1 , x2 , x3     x3    0 0 1  x2  so that T2    0 0 1 .
 4 x1  x3   4 0 1  x3 
 4 0 1
(b)
 1 2 0  4 0 0  0 2 0
T2  T1   T2 T1    0 0 1  2 1 0    1 3 0 
 4 0 1  1 3 0  17 3 0 
8 0
 4 0 0  1 2 0  4





T1  T2   T1 T2    2 1 0   0 0 1   2 4 1
 1 3 0   4 0 1  1 2 3
(c)
T1  T2  x1 , x2 , x3     4 x1  8 x2 , 2 x1  4 x2  x3 ,  x1  2 x2  3x3 
T2  T1  x1 , x2 , x3     2 x2 , x1  3x2 ,17 x1  3x2 
13.
(a)
 1 1
 x1  x2   1 1
 x1 




In vector form, T1  x1 , x2     x1  2 x2    1 2    so that T1    1 2  .
x
 3 x1   3 0   2 
 3 0 
 x1 
 4 x2   0 4 0   
0 4 0 
Likewise, T2  x1 , x2 , x3   

x2  so that T2   


.

1 2 0 
 x1  2 x2   1 2 0   x 
 3
(b)
 1 1
0 4 0  
 4 8 
T2  T1   T2 T1   1 2 0   1 2    1 3

  3 0 



 1 1
 1 2 0 
0 4 0  


T1  T2   T1 T2    1 2  1 2 0    2 0 0 
  0 12 0 
 3 0  


14.
(c)
T1  T2  x1 , x2 , x3      x1  2 x2 , 2 x1 ,12 x2  ; T2  T1  x1 , x2     4 x1  8 x2 ,  x1  3x2 
(a)
 x1 
 
 x1  2 x2  3 x3   1 2 3 0   x2 
In vector form, T1  x1 , x2 , x3 , x4   


 x2  x 4
 0 1 0 1  x3 
 
 x4 
 1 2 3 0
so that T1   
.
 0 1 0 1
129
1.9 Compositions of Matrix Transformations
  x1   1
 0  

 0
Likewise, T2  x1 , x2  
 x1  x2   1

 
 3 x2   0
(b)
 1
 0
T2  T1   T2 T1    1

 0
0
0 
.
1

3
0
 1 2 3 0 

0   1 2 3 0   0 0 0 0 

1 0 1 0 1  1 3 3 1



3
 0 3 0 3
 1

 1 2 3 0  0
T1  T2   T1 T2   0 1 0 1  1



 0
(c)
0
 1
 0

0   x1 

  so that T2   
1
1  x2 


3
 0
130
0
0  2 3

1 0 3

3
T1  T2  x1 , x2     2 x1  3x2 , 3x2 
T2  T1  x1 , x2 , x3 , x4      x1  2 x2  3x3 ,0, x1  3x2  3x3  x4 , 3x2  3x4 
15.
(a)
 y  0 1
0 1
 x   1 0 x
 1 0
 




.

so that T1   
In vector form, T1  x,y  
 x  y   1 1  y 
 1 1

 



 x  y   1 1
 1 1
x
 x  w   1 0 0 1  
 1 0 0 1
y





so that T2    0 1 0 1 .
Likewise, T2  x,y,z,w    y  w   0 1 0 1
z
 z  w  0 0 1 1  
 0 0 1 1
w
16.
(b)
0 1
 1 0 0 1 
1 0 
1 0  



T2  T1   T2 T1   0 1 0 1  1 1  2 1
0 0 1 1 
 2 0 
 1 1
(c)
T1  T2 is not defined because the outputs from T2 are vectors in R3 but the inputs for T1 are vectors in R 2 .
(d)
T2  T1  x,y     x ,2 x  y, 2 x 
(a)
 1 2
 x  2y   1 2 
x




In vector form, T1  x,y    0    0 0    so that T1   0 0  .
y
 2 x  y   2 1  
2 1
1.9 Compositions of Matrix Transformations
 3z   0 0
 x  y   1 1

Likewise, T2  x,y,z   
 3z   0 0

 
  x  y   1 1
17.
3
 0 0
x 

 1 1
0  
so
that
T

y
 2   0 0
3  
 z 

0  
 1 1
131
3
0 
.
3

0
(b)
 0 0
 1 1
T2  T1   T2 T1    0 0

 1 1
(c)
T1  T2 is not defined because the outputs from T2 are vectors in R 4 but the inputs for T1 are vectors in R 2 .
(d)
T2  T1  x1 , x2     6 x  3y, x  2 y,6 x  3y,  x  2 y 
(a)
3
3
 6
 1 2 


0 
   1 2
0
0
  6
3 
3

 
2
1


0
 1 2 
 w1  8 x1  4 x2  8 4   x1 
8 4 
w    2 x  x   
 x  ; the standard matrix is 

 . Using Theorem 1.5.3(c), we attempt to find
2 1 
 2   1 2  2 1   2 
the inverse:
8 4 1 0


2 1 0 1 
The identity matrix was adjoined to the coefficient matrix.
0 0 1 4 


1
2 1 0
4 times the second row was subtracted from the first row.
Since we obtained a row of zeros on the left side, the operator is not one-to-one.
(b)
 w1    x1  3 x2  2 x3   1 3 2   x1 
 1 3 2 
w    2 x  4 x
   2 0 4  x 


1
3
 2 
 
  2  ; the standard matrix is  2 0 4  . Using Theorem 1.5.3(c), we
 w3   x1  3 x2  6 x3   1 3 6   x3 
 1 3 6 
attempt to find the inverse:
 1 3 2 1 0 0 


 2 0 4 0 1 0
 1 3 6 0 0 1


The identity matrix was adjoined to the coefficient matrix.
 1 3 2 1 0 0 


 0 6 8 2 1 0
 0 6 8 1 0 1


2 times the first row was added to the second row and the
first row was added to the third row.
 1 3 2 1 0 0 


 0 6 8 2 1 0
 0 0 0 1 1 1 


The second row was subtracted from the third row.
Since we obtained a row of zeros on the left side, the operator is not one-to-one.
1.9 Compositions of Matrix Transformations
18.
(a)
132
 w1  2 x1  3 x2  2 3  x1 
 2 3
. Using Theorem 1.5.3(c), we attempt to
w    5x  x   
 x  ; the standard matrix is 

1
5
 2   1 2   5 1  2 
find the inverse:
2 3 1 0 


5 1 0 1
The identity matrix was adjoined to the coefficient matrix.
17 0 1 3


 5 1 0 1
3 times the second row was added to the first row.
 1 0 171

5 1 0
3
17


1
The first row was multiplied by 171 .
 1 0 171

5
0 1  17
3
17
2
17



5 times the first row was subtracted from the second row.
Since the reduced row echelon form of the operator’s standard matrix is the identity, the operator is invertible.
(b)
 w1   x1  2 x2  3 x3   1 2 3  x1 
 1 2 3
 w   2 x  5 x  3 x   2 5 3  x 


2
3
 2  1

  2  ; the standard matrix is  2 5 3 .
 w3  
  1 0 8   x3 
 1 0 8 
x1  8 x3
Using Theorem 1.5.3(c), we attempt to find the inverse:
1 2 3 1 0 0


2 5 3 0 1 0
1 0 8 0 0 1


1 2 3 1 0 0


1 3 2 1 0 
0
 0 2 5 1 0 1 


1 2 3 1 0 0


 0 1 3 2 1 0 
 0 0 1 5 2 1 


1 2 3 1 0 0


1 0
 0 1 3 2
0 0
1 5 2 1 

 1 2 0 14 6 3 


 0 1 0 13 5 3 
 0 0 1 5 2 1 


The identity matrix was adjoined to the matrix A .
2 times the first row was added to the second row and
the first row was subtracted from the third row.
2 times the second row was added to the third row.
The second row was multiplied by 1 .
3 times the third row added to the second row and 3
times the third row was subtracted from the first row.
1.9 Compositions of Matrix Transformations
 1 0 0 40 16 9 


 0 1 0 13 5 3 
 0 0 1 5 2 1 


133
2 times the second row was subtracted from the first
row.
Since the reduced row echelon form of the operator’s standard matrix is the identity, the operator is invertible.
19.
(a)
1 2
 w1   x1  2 x2   1 2   x1 
 1 2
; since
 3  0 , it follows from
w    x  x   
 x  ; the standard matrix is 


1 1
 1 1
 2   1 2   1 1  2 
Theorem 1.4.5 that the operator is invertible;
the standard matrix of T
(b)
20.
(a)
1
1 2   13
is 
  1
1 1  3
1
3
 23 
; T 1  w1 , w2    13 w1  23 w2 , 13 w1  13 w2 
1
3
4 6
 w1   4 x1  6 x2   4 6   x1 
 4 6 
; since
 0 , it follows from
 w    2 x  3 x   
 x  ; the standard matrix is 


2
3
3
 2 3  2 
 2
1
2
 2 
Theorem 1.4.5 that the operator is not invertible.
 w1   x1  2 x2  2 x3   1 2 2   x1 
 1 2 2 
 w    2 x  x  x   2



1 1  x2  ; the standard matrix is  2
1 1 ;
2
3 
 2  1

 w3  
  1 1 0   x3 
 1 1 0 
x1  x2
 1 2 2 1 0 0 
1 0 0 1 2 4 




1 1 0 1 0  is 0 1 0 1 2 3 , it follows
since the reduced row echelon form of the matrix 2
 1 1 0 0 0 1
0 0 1 1 3 5




from Theorem 1.5.3(c) that the operator T is invertible. Therefore, the standard matrix of T 1 is
 1 2 4 
 1 2 3  ;


 1 3 5
T 1  w1 , w2 , w3    w1  2 w2  4 w3 , w1  2 w2  3w3 , w1  3w2  5w3 
(b)
 w1   x1  3 x2  4 x3   1 3 4   x1 
 1 3 4 
 w     x  x  x    1



1 1  x2  ; the standard matrix is  1 1 1 ;
2
3 
 2  1

 w3   2 x2  5 x3   0 2 5  x3 
 0 2 5
Adding row 1 to row 2 followed by adding row 2 to row 3 in the reduced row echelon form of the matrix
 1 3 4 1 0 0 
 1 3 4 1 0 0 




 1 1 1 0 1 0  produces 0 2 5 1 1 0  , it follows from Theorem 1.5.3(c) that the operator T
 0 2 5 0 0 1 
0 0 0 1 0 1




is not invertible.
1.9 Compositions of Matrix Transformations
21.
(a)
(b)
134
1 0
 1 0
From Table 1 in Section 1.8, the standard matrix is 
; since
 1  0 , the matrix operator is

0 1
 0 1
invertible. The inverse is also a reflection about the x -axis.
 cos60  sin 60   12
From Table 5 in Section 1.8, the standard matrix is 

cos60   23
 sin 60
 23 
 . Since
1

2 
1
2
 23
3
2
1
2
 1  0,
the matrix operator is invertible. The inverse is a rotation of 60 (equivalent to 300) about the origin.
(c)
22.
(a)
1 0
 1 0
From Table 3 in Section 1.8, the standard matrix is 
; since
 0 , the matrix operator is not

0 0
0 0 
invertible.
0 1
 0 1
From Table 1 in Section 1.8, the standard matrix is 
; since
 1  0 , the

1 0
 1 0
matrix operator is invertible. The inverse is also a reflection about the line y  x .
(b)
(c)
23.
(a)
0 0
0 0 
From Table 3 in Section 1.8, the standard matrix is 
; since
 0 , the matrix operator is not

0 1
 0 1
invertible.
1 0
 1 0 
The standard matrix is 
; since
 1  0 , the matrix operator is invertible. The inverse is also a

0 1
 0 1
reflection about the origin.
Since
1 2
1 1
 1  0 , it follows from Theorem 1.4.5 that the operator TA is invertible;
 1 2  1   3
 1 2   1 2 
. Therefore, TA1  x   
A 1   1 

     .


1  1 1
 1 1 2   1
 1
24.
1 1
 0 , it follows from Theorem 1.4.5 that the operator TA is not invertible.
(b)
Since
(a)
 1 2 0 1 0 0
1 0 0 1 0 0 




Since the reduced row echelon form of the matrix  1 1 1 0 1 0  is 0 1 0 0 1 0  , it follows
2 3 1 0 0 1 
0 0 0 1 1 1




1 1
from Theorem 1.5.3 that the operator TA is not invertible.
(b)
 1 1 0 1 0 0
 1 0 0 12



Since the reduced row echelon form of the matrix 0 1 1 0 1 0  is  0 1 0 12
 0 0 1  12
 1 0 1 0 0 1



from Theorem 1.5.3 that the operator TA is invertible.
 12
1
2
1
2


  , it follows


1
2
1
2
1
2
1.9 Compositions of Matrix Transformations
 12

A1   12
  12
25.
(a)
 12
1
2
1
2

 12


  . Therefore, TA1  x    12

  12
1
2
1
2
1
2
 12
1
2
1
2
135
 1  1 

  2   0  .
 3  2 
1
2
1
2
1
2
 0 1  x    y 
In vector form, TA  x,y   
      . The geometric effect of applying
 1 0   y    x 
this transformation to x is to reflect x about y  x and then to reflect the result about the
origin.
(b)
0 1 
For instance, if B  
 (the standard matrix of the reflection about y  x ) and
1 0 
 1 0 
C
 (the standard matrix of the reflection about the origin) then TA  TC  TB .
 0 1
26.
(a)
Since cos2   sin 2   cos  2  and 2sin  cos  sin  2  , we have
 cos  2   sin  2  
A
 . The geometric effect of applying this transformation to x is to rotate the vector
 sin  2  cos  2  
through the angle 2 .
(b)
 cos
For instance, if B  
 sin 
 sin  
(the standard matrix of the rotation through an angle  ) then TA  TB  TB .
cos 
True-False Exercises
(a)
False. For instance, Example 2 shows two matrix operators on R 2 whose composition is not commutative.
(b)
True. This is stated as Theorem 1.9.1.
(c)
True. This was established in Example 3.
(d)
False. For instance, composition of any reflection operator with itself is the identity operator, which is not a
reflection.
(e)
x
y
x
True. The reflection of a vector   about the line y  x is   so a second reflection yields   .
x
y
 y
(f)
False. This follows from Example 6.
(g)
True. The reflection about the origin is given by the transformation T  x   x so that T is its own inverse.
1.10 Applications of Linear Systems
1.10 Applications of Linear Systems
1.
There are four nodes, which we denote by A , B , C , and D (see the figure on the left).
We determine the unknown flow rates x1 , x2 , and x3 assuming the counterclockwise direction (if any of these
quantities are found to be negative then the flow direction along the corresponding branch will be reversed).
Network node Flow In
Flow Out
A
x2  50 
x1
B
x1

x3  30
C
50

x2  60
D
x3  40

50
This system can be rearranged as follows
 x1

 50
x2
x1
 x3

30
x3


10
10
 x2
By inspection, this system has a unique solution x1  40 , x2  10 , x3  10 . This yields the flow rates and
directions shown in the figure on the right.
2.
(a)
There are five nodes – each of them corresponds to an equation.
Network node
Flow In
Flow Out
top left
top right

200
x3  150 
x1  x3
x 4  x5
bottom left
x1  25

x2
bottom middle
bottom right
x2  x 4
x 5  x6


x6  175
200
This system can be rearranged as follows
136
1.10 Applications of Linear Systems
 x3
 x3
x1
 x1


x4

 200
 150
x5

x2

x2
x4
x5
(b)
137
25

x6
 175

x6
 200
The augmented matrix of the linear system obtained in part (a) has the reduced row echelon form
1
0

0

0
0
0 0 1 0 1 150 
1 0 1 0 1 175
0 1 1 0 1 50  . If we assign x4 and x6 the arbitrary values s and t , respectively, the general

0 0 0 1 1 200 
0 0 0 0 0
0 
solution is given by the formulas
x1  150  s  t , x2  175  s  t , x3  50  s  t , x4  s , x5  200  t , x6  t
(c)
When x4  50 and x6  0 , the remaining flow rates become x1  100 , x2  125 , x3  100 , and x5  200 .
The directions of the flow agree with the arrow orientations in the diagram.
3.
(a)
There are four nodes – each of them corresponds to an equation.
Network node
top left
Flow In
Flow Out
x2  300  x3  400
top right (A)
x3  750 
x4  250
bottom left
x1  100

x2  400
bottom right (B) x4  200 
x1  300
This system can be rearranged as follows
x2
 x3
x3
x1
 x1
(b)

 x4
 x2

x4
100
 500

300

100
1 1 0
100 
 0
 0 0
1 1 500 
The augmented matrix of the linear system obtained in part (a) 
has the reduced row
 1 1 0 0
300 


1 100 
 1 0 0
1
0
echelon form 
0

0
0 0 1 100 
1 0 1 400 
. If we assign x4 the arbitrary value s , the general solution is given by
0 1 1 500 

0 0 0
0
the formulas
x1  100  s , x2  400  s , x3  500  s , x4  s
1.10 Applications of Linear Systems
(c)
138
In order for all xi values to remain positive, we must have s  500 . Therefore, to keep the traffic flowing on
all roads, the flow from A to B must exceed 500 vehicles per hour.
4.
(a)
There are six intersections – each of them corresponds to an equation.
Intersection
Flow In
top left
500  300
top middle
x1  x4
top right
x2  100
x3  x6
bottom left
bottom middle x7  600





Flow Out
x1  x3
x2  200
x5  600
400  350
x4  x6
x5  450

x7  400
bottom right
We rewrite the system as follows

x1
 x1

x2

x2
x3

x4
+
x5
x3
x4
x6

x6

x7
800

200

500

750

600
general solution is given by the formulas
x1  50  s , x2  450  t , x3  750  s ,
0
t

s
t
60
0
1 0 0 0 0 1 0 50 
0 1 0 0 0 0 1 450 


0 0 1 0 0 1 0 750 

 . If we assign x6
0 0 0 1 0 1 1 600 
0 0 0 0 1 0 1 50 


0 
0 0 0 0 0 0 0
and x7 the arbitrary values s and t , respectively, the
750  s  0
x5
 x7 
50
The augmented matrix of the linear system obtained in part (a) has the reduced row echelon form
750  s  0
(b)


0
60

s
t
0
150
50  t  0
50
50  t  0
600 750
s
x4  600  s  t , x5  50  t , x6  s , x7  t subject
to the restriction that all seven values must be nonnegative. Obviously, we need both s  x6  0 and t  x7  0 ,
which in turn imply x1  0 and x2  0 . Additionally imposing the three inequalities x3  750  s  0 ,
x4  600  s  t  0 , and x5  50  t  0 results in the set of allowable s and t values depicted in the grey
region on the graph.
(c)
Setting x1  0 in the general solution obtained in part (b) would result in the negative value s  x6  50
which is not allowed (the traffic would flow in a wrong way along the street marked as x6 .)
5.
From Kirchhoff's current law at each node, we have I1  I 2  I 3  0. Kirchhoff's voltage law yields
1.10 Applications of Linear Systems
Voltage Rises
Voltage Drops
Left Loop (clockwise)
2 I1
2I2  6

Right Loop (clockwise)
2I2  4I3
8

(An equation corresponding to the outer loop is a combination of these two equations.)
The linear system can be rewritten as
I1
2 I1
 I2
 2I2
2I2

I3
 4I3
 0
 6
 8
 1 0 0 135 


Its augmented matrix has the reduced row echelon form 0 1 0  25  .
0 0 1 115 
The solution is I1  2.6A , I 2  0.4A , and I 3  2.2A .
Since I 2 is negative, this current is opposite to the direction shown in the diagram.
6.
From Kirchhoff's current law at each node, we have I1  I 2  I 3  0. Kirchhoff's voltage law yields
Voltage Rises
Voltage Drops

Left Inside Loop (clockwise)
4 I1  6 I 2
1

Right Inside Loop (clockwise)
2I3
2  4 I1
(An equation corresponding to the outer loop is a combination of these two equations.)
The linear system can be rewritten as
I1
4 I1
4 I1
 I2
 6I2

I3
 2I3
 0
 1
 2
 1 0 0  225 

7 
Its augmented matrix has the reduced row echelon form 0 1 0
22  .
6 
0 0 1
11 
5
7
6
The solution is I1   A , I 2 
A , and I 3  A .
22
22
11
Since I1 is negative, this current is opposite to the direction shown in the diagram.
7.
From Kirchhoff's current law, we have
Top Left Node
Current In
Currrent Out

I1
I2  I4
Top Right Node
I4
=
I3  I5
Bottom Left Node
I2  I6
=
I1
Bottom Right Node
I3  I5

I6
Kirchhoff's voltage law yields
139
1.10 Applications of Linear Systems
Voltage Rises
Voltage Drops
Left Loop (clockwise)
10

20 I1  20 I 2
Middle Loop (clockwise)
Right Loop (clockwise)
20 I 2
20 I 3  10
=

20 I 3
20 I 5
(Equations corresponding to the other loops are combinations of these three equations.)
The linear system can be rewritten as
I1

I2

 I1

I3
 I4
 I4
I5

I5
 I6
I2
I3
20 I1

 20 I 2
20 I 2
 20I 3
20I 3
 I6
 20 I 5
1

0
0

Its augmented matrix has the reduced row echelon form  0
0

0
0




0
0
0

0
 10

0
 10
0 0 0 0 0 12 

1 0 0 0 0 0
0 1 0 0 0 0

0 0 1 0 0 12  .
0 0 0 1 0 12 

0 0 0 0 1 12 
0 0 0 0 0 0 
The solution is I1  I 4  I 5  I 6  0.5A , I 2  I 3  0A .
8.
From Kirchhoff's current law at each node, we have I1  I 2  I 3  0. Kirchhoff's voltage law yields
Voltage Rises
Voltage Drops

Top Inside Loop (clockwise)
3 I1  4 I 2
54

Bottom Inside Loop (clockwise)
4  5I 3
3  4I 2
The corresponding linear system can be rewritten as
I1
3 I1
 I2
 4I2
 4I2

I3
 5I 3
 0
 9
 1
1 0 0

Its augmented matrix has the reduced row echelon form 0 1 0
0 0 1
The solution is I1  77
A , I 2  48
A , and I 3  29
A.
47
47
47
9.
We are looking for positive integers x1 , x2 , x3 , and x4 such that
77
47
48
47
29
47


.

140
1.10 Applications of Linear Systems
141
x1  C3 H8   x2  O2   x3  CO2   x4  H 2 O 
The number of atoms of carbon, hydrogen, and oxygen on both sides must equal:
Left Side
Right Side
Carbon
3 x1

x3
Hydrogen
Oxygen
8 x1
2 x2


2 x4
2 x3  x 4
3 x1

The linear system
8 x1
2 x2
 0
x3
 2 x3
 2 x4

x4
 0
 0
 1 0 0  14
has the augmented matrix whose reduced row echelon form is  0 1 0  45
 0 0 1  34
0

0 .
0 
The general solution is x1  14 t , x2  45 t , x3  34 t , x4  t where t is arbitrary. The smallest positive integer values
for the unknowns occur when t  4 , which yields the solution
x1  1 , x2  5 , x3  3 , x4  4 . The balanced equation is
C3 H8  5O2  3CO2  4H 2 O
10.
We are looking for positive integers x1 , x2 , and x3 such that
x1  C6 H12 O6   x2  CO2   x3  C2 H 5 OH 
The number of atoms of carbon, hydrogen, and oxygen on both sides must equal:
Left Side
Right Side
Carbon
6 x1

x 2  2 x3
Hydrogen
12 x1

6 x3
Oxygen
6 x1

2 x 2  x3
x2
 2 x3
 0
 6 x3
 0

 0
The linear system
6 x1

12 x1
6 x1
 2 x2
x3
 1 0  12 0 
has the augmented matrix whose reduced row echelon form is  0 1 1 0  .
 0 0
0 0 
1.10 Applications of Linear Systems
142
The general solution is x1  12 t , x2  t , x3  t where t is arbitrary. The smallest positive integer values for the
unknowns occur when t  2 , which yields the solution x1  1 , x2  2 , x3  2 . The balanced equation is
C6 H12 O6  2CO2  2C2 H 5 OH
11.
We are looking for positive integers x1 , x2 , x3 , and x4 such that
x1  CH 3 COF   x2  H 2 O   x3  CH 3 COOH   x4  HF 
The number of atoms of carbon, hydrogen, oxygen, and fluorine on both sides must equal:
Left Side
Right Side
Carbon
2 x1

2 x3
Hydrogen
3 x1  2 x2

4 x3  x 4
Oxygen
x1  x2

2 x3
Fluorine
x1

x4
2 x1
 2 x3
The linear system
3 x1
 2 x2
 4 x3
x1

 2 x3
x2
 0
 x4
 0
 0
 x4
x1
 0
1
0
has the augmented matrix whose reduced row echelon form is 
0

0
0 0 1 0 
1 0 1 0 
.
0 1 1 0 

0 0 0 0
The general solution is x1  t , x2  t , x3  t , x4  t where t is arbitrary. The smallest positive integer values for the
unknowns occur when t  1 , which yields the solution x1  1 , x2  1 x3  1 , x4  1 . The balanced equation is
CH 3 COF  H 2 O  CH 3 COOH  HF
12.
We are looking for positive integers x1 , x2 , x3 , and x4 such that
x1  CO 2   x2  H 2 O   x3  C6 H12 O6   x4  O 2 
The number of atoms of carbon, hydrogen, and oxygen on both sides must equal:
Left Side
Right Side
Carbon
x1

6 x3
Hydrogen
Oxygen
2 x2
2 x1  x2


12 x3
6 x3  2 x 4
1.10 Applications of Linear Systems
143
The linear system

6 x3
 0
2 x2
 12 x3
 0
x2

x1
2 x1

 2 x4
6 x3
 0
 1 0 0 1 0 
has the augmented matrix whose reduced row echelon form is  0 1 0 1 0  .
 0 0 1  61 0 
The general solution is x1  t , x2  t , x3  61 t , x4  t where t is arbitrary. The smallest positive integer values for
the unknowns occur when t  6 , which yields the solution x1  6 , x2  6 , x3  1 , x4  6 . The balanced equation is
6CO 2  6H 2 O  C6 H12 O6  6O2
13.
We are looking for a polynomial of the form p  x   a0  a1 x  a2 x 2 such that p 1  1, p  2   2 , and p  3   5 . We
obtain a linear system
a0

a2
 1
a0
 2a1
 4 a2
 2
a0
 3a1
 9a2
 5
a1

2
1 0 0

Its augmented matrix has the reduced row echelon form  0 1 0 2  .
 0 0 1
1
There is a unique solution a0  2 , a1  2 , a2  1 .
The quadratic polynomial is p  x   2  2 x  x 2 .
14.
We are looking for a polynomial of the form p  x   a0  a1 x  a2 x 2 such that p  0   0, p  1  1 , and p 1  1 .
We obtain a linear system
 0
a0
a0
 a1
 a2
 1
a0
 a1
 a2
 1
1 0 0 0 
Its augmented matrix has the reduced row echelon form 0 1 0 0  .
0 0 1 1 
There is a unique solution a0  0 , a1  0 , a2  1 . The quadratic polynomial is p  x   x 2 .
15.
We are looking for a polynomial of the form p  x   a0  a1 x  a2 x 2  a3 x 3 such that p  1  1, p  0   1 , p 1  3
and p  4   1 . We obtain a linear system
1.10 Applications of Linear Systems
a0


a1
a2

a3
a0
a0
 1

a0
 a1
 4 a1

a2
 16 a2

a3
 64 a3
1
0
Its augmented matrix has the reduced row echelon form 
0

0
144
1
 3
 1
1

1 0 0
.
0 1 0
0

0 0 1  61 
0 0 0
13
6
There is a unique solution a0  1 , a1  136 , a2  0 , a3   61 .
The cubic polynomial is p  x   1  136 x  61 x 3 .
16.
We are looking for a polynomial of the form p  x   a0  a1 x  a2 x 2  a3 x 3 such that p  0   0, p  2   5 , p  4   8
and p  6   3 . We obtain a linear system
 0
a0
a0
 2 a1

4 a2

8a3
 5
a0
 4 a1
 16 a2

64 a3
 8
a0
 6 a1
 36a2
 216 a3
 3
1
0
Its augmented matrix has the reduced row echelon form 
0

0
0
2 
.
1
0 1 0
2

0 0 1  81 
0 0 0
1 0 0
There is a unique solution a0  0 , a1  2 , a2  12 , a3   81 .
The cubic polynomial is p  x   2 x  12 x 2  81 x 3 .
17.
(a)
We are looking for a polynomial of the form p  x   a0  a1 x  a2 x 2 such that p  0   1 and p 1  2 . We
obtain a linear system
a0
a0
 a1
 a2
 1
 2
 1 0 0 1
Its augmented matrix has the reduced row echelon form 
.
0 1 1 1
The general solution of the linear system is a0  1 , a1  1  t , a2  t where t is arbitrary.
Consequently, the family of all second-degree polynomials that pass through  0,1 and 1,2  can be
represented by p  x   1  1  t  x  tx 2 where t is an arbitrary real number.
1.10 Applications of Linear Systems
145
(b)
True-False Exercises
(a)
False. In general, networks may or may not satisfy the property of flow conservation at each node (although the ones
discussed in this section do).
(b)
False. When a current passes through a resistor, there is a drop in the electrical potential in a circuit.
(c)
True.
(d)
False. A chemical equation is said to be balanced if for each type of atom in the reaction, the same number of atoms
appears on each side of the equation.
(e)
False. By Theorem 1.10.1, this is true if the points have distinct x -coordinates.
1.11 Leontief Input-Output Models
1.
(a)
 0.50 0.25 
C

 0.25 0.10 
(b)
 1 0   0.50 0.25   0.50 0.25
The Leontief matrix is I  C  


;
 0 1   0.25 0.10   0.25 0.90 
 7,000 
the outside demand vector is d  
.
14,000 
The Leontief equation  I  C  x  d leads to the linear system with the augmented matrix
1 0
 0.50 0.25 7,000 
 0.25 0.90 14,000  . Its reduced row echelon form is 0 1



784,000
31
700,000
31
 1 0 25,290.32 

.
 0 1 22,580.65
To meet the consumer demand, M must produce approximately $25,290.32 worth of mechanical work and B
must produce approximately $22,580.65 worth of body work.
2.
(a)
 0.30 0.20 
C

 0.10 0.60 
(b)
 1 0   0.30 0.20   0.70 0.20 
The Leontief matrix is I  C  
;


0.40 
 0 1   0.10 0.60   0.10
1.11 Leontief Input-Output Models
130,000 
the outside demand vector is d  
.
130,000 
The Leontief equation  I  C  x  d leads to the linear system with the augmented matrix
 0.70 0.20 130,000 
1 0 300,000 
 0.10 0.40 130,000  . Its reduced row echelon form is 0 1 400,000  .




To meet the consumer demand, the economy must produce $300,000 worth of food and $400,000 worth of
housing.
3.
(a)
 0.10 0.60 0.40 
C   0.30 0.20 0.30 
 0.40 0.10 0.20 
(b)
 1 0 0   0.10 0.60 0.40   0.90 0.60 0.40 
The Leontief matrix is I  C   0 1 0    0.30 0.20 0.30    0.30
0.80 0.30  ;
 0 0 1   0.40 0.10 0.20   0.40 0.10
0.80 
1930 
the outside demand vector is d  3860  .
5790 
The Leontief equation  I  C  x  d leads to the linear system with the augmented matrix
 0.90 0.60 0.40 1930 
 0.30 0.80 0.30 3860  .


 0.40 0.10 0.80 5790 
 1 0 0 31,500 
Its reduced row echelon form is  0 1 0 26,500  .
 0 0 1 26,300 
 $31,500 
The production vector that will meet the given demand is x  $26,500  .
$26,300 
4.
(a)
 0.40 0.20 0.45 
C   0.30 0.35 0.30 
 0.15 0.10 0.20 
(b)
 1 0 0   0.40 0.20 0.45   0.60 0.20 0.45
The Leontief matrix is I  C   0 1 0    0.30 0.35 0.30    0.30
0.65 0.30  ;
 0 0 1   0.15 0.10 0.20   0.15 0.10
0.80 
5400 
the outside demand vector is d  2700  .
 900 
146
1.11 Leontief Input-Output Models
147
The Leontief equation  I  C  x  d leads to the linear system with the augmented matrix
 0.60 0.20 0.45 5400 
 0.30
0.65 0.30 2700  .

 0.15 0.10 0.80 900 
1 0 0

Its reduced row echelon form is  0 1 0
 0 0 1
9378000
479
7830000
479
3276000
479
  1 0 0 19578.29 
 

   0 1 0 16346.56  .
  0 0 1 6839.25 
 $19578.29 
The production vector that will meet the given demand is x  $16346.56  .
 $6839.25 
5.
 0.9 0.3
I C  
;
 0.5 0.6 
 20
1
x   I  C  d   13
50
 39
6.
1
10
13
30
13
 0.7 0.1
I C  
;
 0.3 0.3
5
1
x   I  C  d   35
3
7.
I  C 
(a)
5
9
35
9
20
100 0.6 0.3  13


39  0.5 0.9  50
39
10
13
30
13



 50   1600
123.08 
13 
     7900   

 60   39  202.56 
I  C 
1
100 0.3 0.1  35

18 0.3 0.7  35
5
9
35
9



 22   400
 44.44 
9 
     820   

 14   9   91.11
 1 0
The Leontief matrix is I  C   2
.
0 0 
 1 0 2
2 
The Leontief equation  I  C  x    leads to the linear system with the augmented matrix  2
 . Its
0 
0 0 0 
 1 0 4
4 
therefore a production vector can be found (namely,   for an
reduced row echelon form is 

0 0 0 
t 
arbitrary nonnegative t ) to meet the demand.
2 
On the other hand, the Leontief equation  I  C  x    leads to the linear system with the augmented matrix
1 
 12 0 2 
 1 0 0

 . Its reduced row echelon form is 
 ; the system is inconsistent, therefore a production
0 0 1
0 0 1
vector cannot be found to meet the demand.
(b)
 1 0   x1   d1 
 12 x1   d1 
Mathematically, the linear system represented by  2
can
be
rewritten
as

   

 .
 0 0   x2   d2 
 0   d2 
Clearly, if d2  0 the system has infinitely many solutions: x1  2d1 ; x2  t where t is an arbitrary
nonnegative number.
If d2  0 the system is inconsistent. (Note that the Leontief matrix is not invertible.)
1.11 Leontief Input-Output Models
148
0 
An economic explanation of the result in part (a) is that c 2    therefore the second sector consumes all of
1 
its own output, making it impossible to meet any outside demand for its products.
8.
 12

I  C    12
  12
 14

7
8
1
4
 14 

 14 
7 
8 
If the open sector demands k dollars worth from each product-producing sector, i.e. the outside demand vector is
k 
d   k  . The Leontief equation  I  C  x  d leads to the linear system with the augmented matrix
 k 
 12
 1
 2
  12
 14

7
8
1
4
 14
 14
7
8
k
 1 0 0 18k 
 . Its reduced row echelon form is  0 1 0 16 k  .
k


 0 0 1 16 k 
k 
We conclude that the first sector must produce the greatest dollar value to meet the specified open sector demand.
9.
From the assumption c21c12  1  c11 , it follows that the determinant of
 1  c11
det  I  C   det  
  c21
c12  
  1  c11  c12 c21 is nonzero. Consequently, the Leontief matrix is invertible; its
1  
c12 
1
1
inverse is  I  C   1 c11 1c12 c21 
 . Since the consumption matrix C has nonnegative entries and
 c21 1  c11 
1  c11  c21c12  0 , we conclude that all entries of  I  C  are nonnegative as well. This economy is productive (see
1
the discussion above Theorem 1.10.1) - the equation x  Cx  d has a unique solution x   I  C  d for every
1
demand vector d .
True-False Exercises
(a)
False. Sectors that do not produce outputs are called open sectors.
(b)
True.
(c)
False. The i th row vector of a consumption matrix contains the monetary values required of the i th sector by the
other sectors for each of them to produce one monetary unit of output.
(d)
True. This follows from Theorem 1.11.1.
(e)
True.
Chapter 1 Supplementary Exercises
1.
The corresponding system of linear equations is
Supplementary Exercises

3 x1
2 x1
 4 x4
 3 x4
x2
 3 x3

1
 1
1
 3 1 0 4
 2 0 3 3 1


The original augmented matrix.
 1 1 3 1 2 
2 0
3 3 1

1 times the second row was added to the first row.
 1 1 3 1 2 
 0 2 9 1 5


2 times the first row was added to the second row.
2
 1 1 3 1
0
9
1
1
 25 
2
2

The second row was multiplied by 12 .
This matrix is in row echelon form. It corresponds to the system of equations
x1
 x2

3 x3

x4
x2

9
x3
2

1
x4
2

2
 
5
2
Solve the equations for the leading variables
x1  x2  3 x3  x4  2
9
1
5
x2   x3  x4 
2
2
2
then substitute the second equation into the first
3
3
1
x1   x3  x4 
2
2
2
9
1
5
x2   x3  x4 
2
2
2
If we assign x3 and x4 the arbitrary values s and t , respectively, the general solution is given by the formulas
3
3 1
x1   s  t  ,
2
2 2
2.
9
1 5
x2   s  t  ,
2
2 2
x3  s,
The corresponding system of linear equations is
x1
2 x1
3 x1

4 x2
 1
 8 x2  2
 12 x2  3
0  0
x4  t
149
Supplementary Exercises
 1 4 1
 2 8 2 


 3 12 3 


0
0
 0
1
0

0

0
150
The original augmented matrix.
4 1
0 0 
0 0

0 0
2 times the first row was added to the second row and 3
times the first row was added to the third row.
This matrix is both in row echelon form and in reduced row echelon form. It corresponds to the system of equations
x1
 4 x2
 1
0 
0 
0 
0
0
0
If we assign x2 an arbitrary value t , the general solution is given by the formulas
x1  1  4t ,
3.
x2  t
The corresponding system of linear equations is
2 x1
4 x1
 4 x2
x2
 x3
 3 x3
 x3
 6
 1
 3
1 6
 2 4
 4
0 3 1

 0
1 1 3
The original augmented matrix.
1
3
 1 2
2


0 3 1
 4
 0
1 1 3
The first row was multiplied by 12 .
1
3
 1 2
2


 0 8 5 11
 0
1 1 3
4 times the first row was added to the second row.
1
3
 1 2
2


1 1 3
0
 0 8 5 11
The second and third rows were interchanged.
Supplementary Exercises
1
3
 1 2
2


1 1 3 
0
 0
0 3 35 
8 times the second row was added to the third row.
1
3
 1 2
2


1 1
3
0
 0
0
1  353 
The third row was multiplied by  13 .
This matrix is in row echelon form. It corresponds to the system of equations
x1
1
x3
2
x3

 2 x2

x2
x3

3

3
35
 
3
Solve the equations for the leading variables
1
x1  2 x2  x3  3
2
x 2  x3  3
x3  
35
3
then finish back-substituting to obtain the unique solution
x1  
4.
17
,
2
26
,
3
x3  

x2
 3 x2
 2 x2
 2
 6

1
x2  
35
3
The corresponding system of linear equations is
3 x1
9 x1
6 x1
1 2 
 3
  9 3 6 


 6 2
1
 3 1 2 
0 0 0 


0 0
5
The original augmented matrix.
3 times the first row was added to the second row and
2 times the first row was added to the third row.
Although this matrix is not in row echelon form yet, clearly it corresponds to an inconsistent linear system
151
Supplementary Exercises
3 x1

152
 2
0 
0
0 
5
x2
since the third equation is contradictory. (We could have performed additional elementary row operations to obtain a
 1 13  23 

matrix in row echelon form  0 0
1 .)
 0 0
0 
 35
4
5
5.
 45
3
5
 1  43
4
3
5
5
 1  43

5
3
0
5
3
x

y
The augmented matrix corresponding to the system.
x

y
The first row was multiplied by 35 .
x

 x  y
5
3
4
3
5
x
 1  43
3 


1  45 x  35 y 
0
3
x  45 y 
1 0
5

3 
4
0 1  5 x  5 y 
 45 times the first row was added to the second row.
The second row was multiplied by 35 .
4
3
times the second row was added to the first row.
The system has exactly one solution: x   35 x  45 y and y   45 x  35 y .
6.
We break up the solution into three cases:
Case I: cos  0 and sin   0
 cos
 sin 

 sin 
cos
sin 
 1  cos


sin
cos



sin 
 1  cos


1
0
cos

yx
x
y 
The augmented matrix corresponding to the system.


y
The first row was multiplied by cos1  .
x
cos
x
cos
sin 
cos



sin 
x
 1  cos

cos 


1 y cos  x sin  
0
 1 0 x cos  y sin  
 0 1 y cos  x sin  


 sin  times the first row was added to the second


( sin
 cos
 cos1  ).
cos
cos
2
2
The second row was multiplied by cos  .
sin 
cos
(
times the second row was added to the first row
x sin 2 
cos
cos 
 cosx   x cos
 x cos  ) .

2
Supplementary Exercises
153
The system has exactly one solution: x  x cos  y sin  and y   x sin   y cos .
Case II: cos  0 which implies sin 2   1 . The original system becomes x   y sin , y  x sin  . Multiplying
both sides of the each equation by sin  yields x  y sin  , y   x sin  .
Case III: sin  0, which implies cos2   1 . The original system becomes x  x cos , y  y cos . Multiplying
both sides of each equation by cos yields x  x cos , y  y cos .
Notice that the solution found in case I
x  x cos  y sin  and y   x sin   y cos .
actually applies to all three cases.
7.
1 1 1 9 
1 5 10 44 


The original augmented matrix.
 1 1 1 9
 0 4 9 35


1 times the first row was added to the second row.
1 1 1
0 1 9
4

35
4
9


The second row was multiplied by 14 .
 1 0  45

9
4
0 1
1
4
35
4



1 times the second row was added to the first row.
If we assign z an arbitrary value t , the general solution is given by the formulas
x
1 5
 t,
4 4
y
35 9
 t,
4 4
zt
The positivity of the three variables requires that 14  45 t  0 , 354  49 t  0 , and t  0 . The first inequality can be
rewritten as t   14 , while the second inequality is equivalent to t  359 . All three unknowns are positive whenever
0  t  359 . There are three integer values of t  z in this interval: 1 , 2 , and 3 . Of those, only z  t  3 yields integer
values for the remaining variables: x  4 , y  2 .
8.
Let x, y, and z denote the number of pennies, nickels, and dimes, respectively. Since there are 13 coins, we must
have
x  y  z  13.
On the other hand, the total value of the coins is 83 cents so that
x  5 y  10 z  83.
Supplementary Exercises
154
1 1 1 13
The resulting system of equations has the augmented matrix 
 whose reduced row echelon form is
1 5 10 83
 1 0  45

9
4
0 1
 92 
35 
2 
If we assign z an arbitrary value t , the general solution is given by the formulas
9 5
x    t,
2 4
y
35 9
 t,
2 4
zt
However, all three unknowns must be nonnegative integers.
The nonnegativity of x requires the inequality  29  45 t  0 , i.e., t  185 .
Likewise for y , 352  94 t  0 yields t  709 .
When 185  t  709 , all three variables are nonnegative. Of the four integer t  z values inside this interval ( 4 , 5 , 6 ,
and 7 ), only t  z  6 yields integer values for x and y .
We conclude that the box has to contain 3 pennies, 4 nickels, and 6 dimes.
a 0 b 2 
a a 4 4


 0 a 2 b 
9.
(a)
The augmented matrix for the system.
b
2
a 0
0 a 4  b 2


 0 a
2
b 
1 times the first row was added to the second row.
2 
b
a 0
0 a 4  b
2 

 0 0 b  2 b  2 
1 times the second row was added to the third row.
the system has a unique solution if a  0 and b  2 (multiplying the rows by 1a , 1a , and b 1 2 , respectively,
1 0
yields a row echelon form of the augmented matrix  0 1
 0 0
(b)
b
a
4b
a
1

 ).

1 
2
a
2
a
the system has a one-parameter solution if a  0 and b  2 (multiplying the first two rows by
 1 0 2a 2a 
reduced row echelon form of the augmented matrix  0 1 2a 2a  ).
 0 0 0 0 
1
yields a
a
Supplementary Exercises
(c)
155
the system has a two-parameter solution if a  0 and b  2
0 0 1 1 
(the reduced row echelon form of the augmented matrix is 0 0 0 0  ).
0 0 0 0 
(d)
the system has no solution if a  0 and b  2
0 0 1 0 


(the reduced row echelon form of the augmented matrix is 0 0 0 1  ).
0 0 0 0 
10.
1
4 
1 1
0 0
1
2  .

0 0 a 2  4 a  2 
4
1 1 1

0 0 1

2


0 0 0 2a 2  a  6 
The augmented matrix for the system.
a 2  4 times the second row was added to the third.
From quadratic formula we have 2 a 2  a  6  2  a  32   a  2  .
The system has no solutions when a  2 and a   32 (since the third row of our last matrix would then correspond to
a contradictory equation).
The system has infinitely many solutions when a  2 or a   23 .
No values of a result in a system with exactly one solution.
11.
a b 
For the product AKB to be defined, K must be a 2  2 matrix. Letting K  
 we can write
c d 
b  4 d b  4 d 
 1 4
 1 4
 2 a  8c
 a b  2 0 0  
 2 a b b  



  2
  4 a  6c 2b  3d 2b  3d  .
ABC   2
3 
3 




c d  0 1 1
2c d  d 
 1 2  
 1 2  
 2 a  4c
b  2 d b  2 d 
The matrix equation AKB  C can be rewritten as a system of nine linear equations
Supplementary Exercises
 8c
2a

8
6
b
 4d

b
 4d
 6

3d


6
1
 3d

1
4 a
 6c
 2b
2b
 4c
2a


156
 4
b
 2d

0
b
 2d

0
which has a unique solution a  0 , b  2 , c  1 , d  1 . (An easy way to solve this system is to first split it into two
smaller systems. The system 2 a  8c  8 , 4 a  6c  6 , 2 a  4c  4 involves a and c only, whereas the
0 2 
remaining six equations involve just b and d .) We conclude that K  
.
1 1 
12.
Substituting the values x 1, y  1 , and z  2 into the original system yields a system of three equations in the
unknowns a, b, and c :
a

b
 2 1 
a


b
  3  1 
 3 2  
2c
2c
3
 1
 3
that can be rewritten as
a  b
a
 3
b  2c  1
 2c  0
 1 0 0 2


The augmented matrix of this system has the reduced row echelon form 0 1 0 1 . We conclude that for the
0 0 1 1
original system to have x  1 , y  1 , and z  2 as its solution, we must let a  2, b  1 , and c  1 .
(Note that it can also be shown that the system with a  2, b  1 , and c  1 has x  1 , y  1 , and z  2 as its only
solution. One way to do that would be to verify that the reduced row echelon form of the coefficient matrix of the
original system with these specific values of a, b and c is the identity matrix.)
13.
(a)
a b
d e
X must be a 2  3 matrix. Letting X  
1
 1 0
a b

X  1 1 0   
d e
 3 1 1 
c
we can write
f 
1
 1 0
c
  a  b  3c
1 1 0   


f
d  e  3 f
 3 1 1 
bc
e f
ac
d  f 
Supplementary Exercises
157
therefore the given matrix equation can be rewritten as a system of linear equations:
 a  b  3c
a
b 

c
c
 d
d
 e  3f

1


2
0
 3
e 
f

1

f

5
1
0

0
The augmented matrix of this system has the reduced row echelon form 
0
0

 0
0 0 0 0 0 1
1 0 0 0 0 3
0 1 0 0 0 1

0 0 1 0 0 6
0 0 0 1 0 0

0 0 0 0 1 1
so the system has a unique solution
 1 3 1
a  1 , b  3 , c  1 , d  6 , e  0 , f  1 and X  
.
1
 6 0
(An alternative to dealing with this large system is to split it into two smaller systems instead: the first three
equations involve a , b , and c only, whereas the remaining three equations involve just d , e , and f .
Since the coefficient matrix for both systems is the same, we can follow the procedure of Example 2 in
Section 1.6; the
 1 1 3 1 3 
 1 0 0 1 6 




1  is  0 1 0 3 0  .)
reduced row echelon form of the matrix  0 1 1 2
 1 0 1 0 5 
 0 0 1 1 1 




Yet another way of solving this problem would be to determine the inverse
1
 1 0 1
 1 1 1
 1 1 0    1 2 1



 using the method introduced in Section 1.5, then multiply both sides of the
 3 1 1
 2 1 1
given matrix equation on the right by this inverse to determine X :
 1 1 1
 1 2 0 
 1 3 1
X
1 2 1  


1
 3 1 5  2 1 1  6 0


(b)
a b 
 we can write
c d 
X must be a 2  2 matrix. Letting X  
 1 1 2   a b   1 1 2   a  3b  a 2 a  b 
X




3 0 1  c d  3 0 1  c  3d c 2c  d 
therefore the given matrix equation can be rewritten as a system of linear equations:
Supplementary Exercises
a  3b
 5
a
2a 


1
0
c  3d

6
c
 3
b

2c 
d

158
7
1
0

0
The augmented matrix of this system has the reduced row echelon form 
0
0

0
0 0 0
1
1 0 0 2 
0 1 0
3
 so the system has
0 0 1
1
0 0 0
0

0 0 0
0 
 1 2 
a unique solution a  1 , b  2 , c  3 , d  1 . We conclude that X  
.
1
3
(An alternative to dealing with this large system is to split it into two smaller systems instead: the first three
equations involve a and b only, whereas the remaining three equations involve just c and d . Since the
coefficient matrix for both systems is the same, we can follow the procedure of Example 2 in Section 1.6;
 1 3 5 6 
1 0
1 3




the reduced row echelon form of the matrix  1 0 1 3  is  0 1 2 1  .)
 2 1 0 7
0 0 0 0




(c)
a b 
 we can write
c d 
X must be a 2  2 matrix. Letting X  
 3 1
 1 4   3 1  a b   a b   1 4 
 1 2  X  X 2 0    1 2   c d    c d  2 0 



 

 


3b  d   a  2b 4 a 
 3a  c



  a  2 c b  2 d   c  2 d 4 c 
 2 a  2b  c

 a  c  2d
4 a  3b  d 
b  4c  2 d 
therefore the given matrix equation can be rewritten as a system of linear equations:
2 a  2b 
c

2
4 a  3b
 d  2
a
 c  2d 
5
 b  4c  2 d  4
Supplementary Exercises
1

0
The augmented matrix of this system has the reduced row echelon form 
0

0
0
1
0
0
0
0
1
0
159
0  113
37 
160 
0  37 
so the

0  20
37

1  46
37 

system has a unique solution a   113
, b   160
, c   20
, d   46
.
37
37
37
37
  113
We conclude that X   3720
  37
14.
(a)
 160
37 
.
46 
 37

By Theorem 1.4.1, the properties AI  IA  A (Section 1.4) and the assumption A4  0 , we have
 I  A   I  A  A 2  A3  
II  IA  IA2  IA3  AI  AA  AA2  AA3
 I  A  A 2  A 3  A  A 2  A3  A 4
 I
This shows that  I  A  I  A  A2  A3 .
1
(b)
By Theorem 1.4.1, the properties AI  IA  A (Section 1.4) and the assumption A n 1  0 , we have
 I  A   I  A  A2    An1  An 
 II  IA  IA2    IAn 1  IAn  AI  AA  AA2    AAn 1  AAn
 I  A  A2    An1  An  A  A2  A3    An  An1
I
15.
We are looking for a polynomial of the form
p  x   ax 2  bx  c
such that p 1  2, p  1  6 , and p  2   3 . We obtain a linear system
a 
b  c  2
 b  c  6
4 a  2b  c  3
a
1
1 0 0
0 1 0 2 
Its augmented matrix has the reduced row echelon form 
.
0 0 1 3
There is a unique solution a  1 , b  2 , c  3 .
16.
Since p  1  0 and p  2   9 we have the equations a  b  c  0 and 4 a  2b  c  9 .
From calculus, the derivative of p  x   ax 2  bx  c is p  x   2 ax  b .
For the tangent to be horizontal, the derivative p  2   4a  b must equal zero. This leads to the equation 4 a  b  0.
We proceed to solve the resulting system of two equations:
Supplementary Exercises
 b  c  0
4 a  2 b  c  9
 0
4a  b
a
1
1 0 0
0 1 0 4 
The reduced row echelon form of the augmented matrix of this system is 
 . Therefore, the values
0 0 1 5
a  1 , b  4 , and c  5 result in a polynomial that satisfies the conditions specified.
17.
When multiplying the matrix J n by itself, each entry in the product equals n . Therefore, J n J n  nJ n .
 I  J n   I  n11 J n 
 I 2  I n11 J n  J n I  J n n11 J n
Theorem 1.4.1(f) and (g)
 I  n11 J n  J n  J n n11 J n
Property AI  IA  A in Section 1.4
I 
1
1
Jn  Jn 
Jn Jn
n 1
n 1
Theorem 1.4.1(m)
 I  n11 J n  J n  nn1 J n
J n J n  nJ n
 I   n11  1  nn1  J n
Theorem 1.4.1(j) and (k)
 I   n11  nn 11  nn1  J n
I
160
2.1 Deterrminants by C
Cofactor Expaansion
CHA
APTER 2: DETE
ERMINA
ANTS
2.1 Deterrminants by
b Cofacto
or Expansion
1 2
1.
6
M11
1 
3
7 1
7 1 
 29
1 4
1 4
1 2
6
M12
1 
3
6
M 21
2 
3
C13   1
M13  M13  27
1 3
C22   1
22
M 22  M 22  13
C23   1
2 3
M 23   M 23  5
3 1
M 31  M 31  19
2 1
M 21   M 21  11
3
3
1 2
 5
7 1 
3
1
1 4
3
2 3
7 1 
 19
7 1
1 4
C31   1
3
1 3
 19
7 1 
6 1
1 4
1 2
6
M33
3 
3
M12   M12  21
1 2
1 3
7 1 
 13
3 4
1 4
1 2
M32
6
3 
3
C12   1
C21   1
1 2
6
M31
3 
3
M11  M11  29
2 3
 11
1 4
1 2
M 23
6
2 
3
3
7 1 
1 4
1 2
6
M 22
2 
3
11
3
6 7
7 1 
 27
3 1
1 4
1 2
C11   1
3
6 1
7 1 
 21
3 4
1 4
1 2
6
M13
1 
3
3
C32   1
3 2
M 32   M 32  19
3 3
M 33  M 33  19
3
1 2
7 1 
 19
6
7
1 4
C33   1
1
Cofactor Expaansion
2.1 Deterrminants by C
1 1 2
3 6
M11  3 3 6 
6
1 4
0 1 4
2.
C11   1
11
M11  M 11  6
1 1 2
3 6
M12  3 3 6 
 12
1
0 4
0 1 4
C12   1
M12   M12  12
C13   1
M13  M13  3
1 2
1 1 2
3 3
M13  3 3 6 
3
0 1
0 1 4
1 3
1 1 2
1 2
M 21  3 3 6 
2
1 4
0 1 4
C21   1
2 1
M 21   M 21  2
C22   1
22
M 22  M 22  4
1 1 2
1 2
M 22  3 3 6 
4
0 4
0 1 4
1 1 2
1 1
M 23  3 3 6 
1
0 1
0 1 4
C23   1
23
M 23   M 23  1
3 1
M 31  M 31  0
1 1 2
1 2
M 31  3 3 6 
0
3 6
0 1 4
C31   1
1 1 2
1 2
M 32  3 3 6 
0
3 6
0 1 4
C32   1
3 2
M 32   M 32  0
3 3
M 33  M 33  0
1 1 2
1 1
M 33  3 3 6 
0
3 3
0 1 4
0 0
3.
(a)
M13
 4
4
3
1 14
4 14
4 1
1 14  0
3
0
4 2
1 2
4 1
1 2
 0  0  3 0  0
C13
   1
1 3
C33   1
M133  M13  0
cofactor expanssion
along the first roow
2
2.1 Determinants by Cofactor Expansion
(b)
M 23
4 1 6
1 14
4 14
4 1
4
1 14  4
  1
6
1 2
4 2
4 1
4
1 2
cofactor expansion
along the first row
 4  12   1 48   6  0   96
(c)
C23
  1
M22
4 1 6
1 6
4 6
4 1
 4 0 14  4
0
 14
3 2
4 2
4 3
4 3 2
2 3
M 23   M 23  96
cofactor expansion
along the second row
 4  16   0  14  8   48
(d)
C22
  1
M21
1 1 6
1 6
1 6
1 1
 1 0 14  1
0
 14
3 2
1 2
1 3
1 3 2
22
M 22  M 22  48
cofactor expansion
along the second row
 1 16   0  14  4   72
4.
(a)
C21
   1
M 32
2 1 1
0 3
3 3
3 0
 3 0 3  2
  1
1
1 4
3 4
3 1
3 1 4
2 1
M 21   M 21  72
cofactor expansion
along the first row
 2  3  1 21  1 3  30
(b)
C32
  1
M44
2
3 1
2 0
3 0
3 2
 3 2 0  2
3
  1
2 1
3 1
3 2
3 2
1
3 2
M 32   M 32  30
 2  2   3  3  1 0   13
C 44
   1
44
M 44  M 44  13
cofactor expansion
along the first row
3
2.1 Deterrminants by C
Cofactor Expaansion
3 1 1
(c)
0 3
2 3
2 0
  1
1
0 3 3
1 0
2 0
2 1
1 0
M41  2
2
coofactor expanssion
allong the first roow
 3  3  1 6   1 2   1
  1
C41
4 1
3 1
2
(d)
M 41   M 41  1
3 1
2 1
3 2
1 2
  1
3
3 1
2 1
3 2
1
M24  3 2
3 2
coofactor expanssion
allong the first roow
 2  0   3  0   1 0   0
   1
C24
2 4
M 24  M 24  0
5.
3 5
 4 5   112
  3 4    5  2   12  10  22  0 . Inverrse: 221 
1
2 4
 2 3   11
6.
4 1
  4  2   1 8   0 ; The matriix is not inverrtible.
8 2
7.
5 7
 2 7   592
1
4  59  0 . Inverse:
  5  2    7  7   10  49
I
7
59 
7 2
 7 5  59
8.
9.

a3
5
3 a  2

2
5 1 2
3 8 4
 5
3
7
1
4
3 5 7
1 6 2
1
3 6



 3

 4
7
59
5
59



1
 6 3 2
 4
2   3 6
5
a3
  a  3 a  2   5  3  a 2  5a  6  15  a 2  5a  21
3 a  2
6
2
11.
6
4
2
10.
 2  3    6   4   6  4 6  3 6  0 . Inverrse:
3
2
5
22
3
22
2
7
6 2
1 2 5
8 4 3
1
4 2
 3 5 7
1 6 2
7
1   8  42  24
40   18  322  140   0
8
1
3 5   20  7  72    20  844  6   65
1 6


1
3 3

1
3
4
2.1 Deterrminants by C
Cofactor Expaansion
1
12.
0
1 1
2 1 1
3
0
 3 0 5 3 0   0  5  42    0  35  6   4
1 7 2 1 7
0
0
 2 1 5
1 9 4
2 1 5
1 9 4
c 4
14.
2
3 0 5
1 7 2
3
13.
1
3
0
2 1  12  0  0    0  135  0   123
1 9
3
c 4
3 c 4
c2  2
2
1
1
c2 2
1  2c  16c 2  6  c  1   12   c  1 c 3  16 
4 c 1 2
4 c 1 2 4 c 1
 2c  16c 2  6c  6  122  c 4  c 3  16
1  c 4  c 3  16c 2  8c  2
15.
det  A  
 2
5
1
  λ  2  λ  4   1 5   λ 2  2λ  3   λ  3  λ  1
4
The determinant is zero iff   3 or   1 .
16.
Calculate thee determinantt by a cofacto
or expansion along
a
the firstt row:
4 0
det  A   0

0
0
2
3  1
   4

2
00
3  1
    4      1  6      4   2    6      4    3   2 
The determinant is zero iff   2 ,   3 , or   4 .
17.
det  A  
 1
2
0
  λ  1 λ  1
 1
The determinant is zero iff   1 or   1 .
18.
or expansion along
a
the thirdd row:
Calculate thee determinantt by a cofacto
4 4
det  A   1 
0
0
0
0
 5
 0  0     5
 4 4
1 
    5    4    4      5   2  4  4      5   2 
2
5
2.1 Determinants by Cofactor Expansion
The determinant is zero if   2 or   5 .
19.
20.
21.
(a)
3
1 5
 0  0  3  41  123
9 4
(b)
3
1 5
0 0
0 0
2
1
 3  41  2  0   1 0   123
9 4
9 4
1 5
(c)
2
(d)
0   1
(e)
1
(f)
05
3 0
3 0
  4 
 5  27   4  3  123
1 9
2 1
(a)
 1
0 5
3 5
3 0
1
2
  1 35  111  2  21  4
7 2
1 2
1 7
(b)
 1
0 5
1 2
1 2
3
1
  1 35   3  12   1 5   4
7 2
7 2
0 5
(c)
3
1 2
1 1
 0   5
 3  12   0  5  8   4
7 2
1 7
(d)
1
3 5
1 2
07
 111  0  7  1  4
1 2
3 5
(e)
1
1 2
1 1
1 2
7
2
 1 5  7  1  2  3   4
0 5
3 5
3 0
(f)
2
3 0
1 1
1 1
  5 
2
 2  21  5  8   2  3   4
1 7
1 7
3 0
0 0
3 0
3 0
  1
5
 2  0   1 12   5  27   123
9 4
1 4
1 9
3 0
3 0
9
 1 12   9 15  123
1 4
2 5
0 0
3 0
3 0
9
  4 
 1 0   9 15  4  3  123
1 5
2 5
2 1
Calculate the determinant by a cofactor expansion along the second column:
0  5
22.
Calculate the determinant by a cofactor expansion along the second row:
1
23.
3 7
 0  5  8   40
1 5
3 1
3 3
 0   4 
 118   0  4  12   66
3 5
1 3
Calculate the determinant by a cofactor expansion along the first column:
6
2.1 Determinants by Cofactor Expansion
1
24.
k k2
k k2
k k2

1

1
 1  0   1 0   1  0   0
k k2
k k2
k k2
Calculate the determinant by a cofactor expansion along the second column:
  k  1
k 1 7
k 1 7
2 4
  k  3
  k  1
k
5 k
5
2
4
   k  1 2 k  20    k  3    k  1 k  35    k  1  4  k  1  14 
 k 3  8k 2  10 k  95
25.
Calculate the determinant by a cofactor expansion along the third column:
3 3
5
3 3
5
det  A   0  0   3  2 2 2  3 2 2 2
2 10
2
4 1 0
Calculate the determinants in the third and fourth terms by a cofactor expansion along the first row:
3 3
5
2 2
2 2
2 2
2 2 2  3
3
5
 3  24   3  8   5 16   128
10
2
2
2
2 10
2 10
2
3 3
5
2 2
2 2
2 2
2 2 2  3
3
5
 3  2   3  8   5  6   48
1 0
4
0
4 1
4 1 0
Therefore det  A   0  0  3 128   3  48   240.
26.
Calculate the determinant by a cofactor expansion along the first row:
3 3 1 0
det  A   4
3 3 3 0
2 4
4 6
2 3
1 2 4 3
 0  0 1
0
2 3
9 4 6 3
2 4
2 3
2 2 4 3
Calculate each of the two determinants by a cofactor expansion along its first row:
3 3 1 0
2 4
4 6
2 4
4 2 3
2 2 3
2 4 3
2 3
 3 6 2 3  3 4 2 3   1 4 6 3  0  3  0   3  0   1 0   0  0
2 3
4 2 3
2 2 3
2 4 3
2 3
3 3 3 0
2 4 3
1 4 3
1 2 3
1 2 4 3
 3 4 6 3  3 9 6 3  3 9 4 3  0  3  0   3  6   3  6   0  0
9 4 6 3
2 4 3
2 4 3
2 2 3
2 2 4 3
Therefore det  A   4  0   0  0  1 0   0.
7
2.1 Determinants by Cofactor Expansion
27.
By Theorem 2.1.2, determinant of a diagonal matrix is the product of the entries on the main diagonal:
det  A   1 11  1.
28.
By Theorem 2.1.2, determinant of a diagonal matrix is the product of the entries on the main diagonal:
det  A    2  2  2   8.
29.
By Theorem 2.1.2, determinant of a lower triangular matrix is the product of the entries on the main diagonal:
det  A    0  2  3 8   0 .
30.
By Theorem 2.1.2, determinant of an upper triangular matrix is the product of the entries on the main diagonal:
det  A   1 2  3 4   24.
31.
By Theorem 2.1.2, determinant of an upper triangular matrix is the product of the entries on the main diagonal:
det  A   11 2  3  6.
32.
By Theorem 2.1.2, determinant of a lower triangular matrix is the product of the entries on the main diagonal:
det  A    3 2  1 3  18 .
33.
(a)
sin 
 cos
cos
  sin   sin     cos   cos   sin 2   cos2   1
sin 
(b)
Calculate the determinant by a cofactor expansion along the third column:
0  0 1
35.
sin 
 cos
cos
 0  0  11  1 (we used the result of part (a))
sin 
The minor M11 in both determinants is
1 f
 1 . Expanding both determinants along the first row yields
0 1
d1    d2 .
37.
If n 1 then the determinant is 1 .
If n  2 then
1 1
0.
1 1
If n  3 then a cofactor expansion will involves minors
1 1
 0 . Therefore the determinant is 0 .
1 1
By induction, we can show that the determinant will be 0 for all n  3 as well.
43.
Calculate the determinant by a cofactor expansion along the first column:
1 x1
x12
1 x2
x22 
1 x3
2
3
x
x2
x22
x1
x12
x1
x12
x3
x3
x3
x3
x2
x22

2

2


 ( x2 x32  x3 x22 )  ( x1 x32  x3 x12 )  ( x1 x22  x2 x12 )   x32  x2  x1   x3 x22  x12   x1 x22  x2 x12 .
8
2.1 Determinants by Cofactor Expansion
Factor out  x2  x1  to get  x2  x1   x32  x2 x3  x1 x3  x1 x2    x2  x1   x32   x2  x1  x3  x1 x2  .
Since x32   x2  x1  x3  x1 x2   x3  x1  x3  x2  , the determinant is  x2  x1  x3  x1  x3  x2  .
True-False Exercises
(a)
False. The determinant is ad  bc .
(b)
False. E.g., det  I 2   det  I 3   1 .
(c)
True. If i  j is even then  1
(d)
a b
True. Let A   b d
 c e
Then C12   1
1 2
b
c
i j
 1 therefore Cij   1
i j
Mij  Mij .
c
e  .
f 
e
c
2 1 b
   bf  ec  and C21   1
   bf  ce  therefore C12  C21 . In the same way,
f
e f
one can show C13  C 31 and C 23  C 32 .
(e)
True. This follows from Theorem 2.1.1.
(f)
True. In formulas (7) and (8), each cofactor Cij is zero.
(g)
False. The determinant of a lower triangular matrix is the product of the entries along the main diagonal.
(h)
False. E.g. det  2 I 2   4  2  2det  I 2  .
(i)
False. E.g., det  I 2  I 2   4  2  det  I 2   det  I 2  .
(j)
  a b  2  a 2  bc ab  bd

True. det  
 a 2  bc bc  d 2   ab  bd  ac  cd 
  c d   ac  cd bc  d 2





 a 2 bc  a 2 d 2  b2 c 2  bcd 2  a 2 bc  abcd  abcd  bcd 2  a 2 d 2  b 2 c 2  2abcd .
2
 a b 2    a b   
   det
  ad  bc   a d  2 adbc  b c therefore det  
 .
  c d      c d   
c d




a b
2
2
2
2
2 2
2.2 Evaluating Determinants by Row Reduction
1.
det  A  
2 3
2 1
  2  4    31  11 ; det AT 
  2  4   1 3  11
1 4
3 4
2.
det  A  
1
6
6 2
  6  2   1 2   10 ; det AT 
  6  2    2 1  10
2 2
1 2
 
 
9
2.2 Evaluating Determinants by Row Reduction
3.
2 1 3
det  A   1 2 4   24  20  9  30  24  6   5 ;
5 3 6
 
det A
T
4.
2 1 5
 1 2 3   24  9  20   30  24  6   5 (we used the arrow technique)
3 4 6
4 2 1
det  A   0 2 3   40  6  0    2  12  0   56 ;
1 1 5
 
det A
T
4 0 1
 2 2
1   40  0  6    2  12  0   56 (we used the arrow technique)
1 3 5
5.
The third row of I 4 was multiplied by 5 . By Theorem 2.2.4, the determinant equals 5.
6.
5 times the first row of I 3 was added to the third row. By Theorem 2.2.4, the determinant equals 1.
7.
The second and the third rows of I 4 were interchanged. By Theorem 2.2.4, the determinant equals 1.
8.
The second row of I 4 was multiplied by  13 . By Theorem 2.2.4, the determinant equals  13 .
9.
3 6
9
1 2
3
2
7 2  3 2
7 2
0
1 5
0
1 5
1 2 3
30
3 4
0
1 5
1 2 3
 3  1 0
1 5
0
3 4
A common factor of 3 from the first row
was taken through the determinant sign.
2 times the first row was added to the
second row.
The second and third rows were
interchanged.
1 2
3
  3  1 0
1
5
0
0 11
3 times the second row was added to the
third row.
1 2 3
  3  1 11 0
1 5
0
0 1
A common factor of 11 from the last row
was taken through the determinant sign.
  3 1 111  33
10
2.2 Evaluating Determinants by Row Reduction
11
Another way to evaluate the determinant would be to use cofactor expansion along the first column after the second
step above:
3 6
9
1 2 3
 3 4

2
 0  0   3 111   33 .
7 2  3 0
3 4  3 1
 1 5

0
1 5
0
1 5
10.
3 6 9
1 2 3
0 0 2  3 0 0 2
2 1 5
2 1 5
1 2 3
 3 0 0 2
0 5 1
1 2 3
 3  1 0 5 1
0 0 2
1 2 3
  3  1 5  0 1  15
0 0 2
1 2 3
  3  1 5  2  0 1  15
0 0
1
A common factor of 3 from the first row
was taken through the determinant sign.
2 times the first row was added to the third
row.
The second and third rows were
interchanged.
A common factor of 5 from the second row
was taken through the determinant sign.
A common factor of 2 from the last row
was taken through the determinant sign.
  3 1 5 2 1  30
Another way to evaluate the determinant would be to use cofactor expansion along the first column after the
second step above:
3 6 9
1 2 3
 0 2

0 0 2  3 0 0 2  3  1
 0  0   3 110    30 .
 5 1

0 5 1
2 1 5
11.
2 1 3 1
1 0 1 1
0 2 1 0
0 1 2 3
  1
1 0 1 1
2 1 3 1
0 2 1 0
0 1 2 3
The first and second rows
were interchanged.
2.2 Evaluating Determinants by Row Reduction
1 0 1 1
0 1 1 1
  1
  1
0 2 1
0 1 2
0
3
1 0
1
0 1 1 1
0 0 1 2
0
  1
1
1
1 0
0 1
2
3
1 1
1 1
0 0 1
0 0
1
2
4
1 0 1
1
  1 1
  1 1
0 1 1 1
0 0 1 2
0 0 1
4
1 0
1
1
0 1 1 1
0 0 1 2
0 0 0
  1 1 6 
6
1 0 1 1
0 1 1 1
0 0 1 2
0 0 0
1
2 times the first row was
added to the second row.
2 times the second row was
added to the third row.
1 times the second row was
added to the fourth row.
A common factor of 1 from
the third row was taken
through the determinant sign.
1 times the third row was
added to the fourth row.
A common factor of 6 from
the third row was taken
through the determinant sign.
  1 1 6 1  6
Another way to evaluate the determinant would be to use cofactor expansions along the first column after the
fourth step above:
2 1 3 1
1 0 1 1
0 2 1 0
0 1 2 3
  1
1 0
0 1
1 1
1 1
0 0 1
0 0 1
1 1 1
1 2
  11 0 1 2   111
2
1 4
0 1 4
4
  111 6   6 .
12.
1 3 0 1 3 0
2
4 1  0 2 1
5 2 2 5 2 2
2 times the first row was
added to the second row.
12
2.2 Evaluating Determinants by Row Reduction
1 3 0
 0 2 1
0 13 2
1 3
0
 2 0
1  12
0 13
2
1 3
0
 2 0
1  12
0
  2  
17
2
0
17
2
1 3
0
 0 1  12
0 0
1
5 times the first row was
added to the third row.
A common factor of 2 from the second
row was taken through the determinant
sign.
13 times the second row was
added to the third row.
A common factor of 172 from the last row
was taken through the determinant sign.
  2   172  1  17
Another way to evaluate the determinant would be to use cofactor expansion along the first column after the
second step above:
1 3 0 1 3 0
2 1
2
 1 17   17 .
4 1  0 2 1  1
13 2
5 2 2 0 13 2
13.
1 3
2 7
0 0
0 0
0
1 5
0 4
1 0
2
1
0 0
1 3
0 1
0 0
0 0
0
1
0
  1 0
0
3
2
1
1
1 1
1
2
1
2
0 0
5
6
0
1
3
8
1
1
1 1
3
1 5 3
1 2 6 8
0
1 0
1
0 2
1 1
0 0
0
2 times the first row was
added to the second row.
1
1
A common factor of 1 from the second row
was taken through the determinant sign.
13
2.2 Evaluating Determinants by Row Reduction
1
0
  1 0
0
3
1 5 3
1 2 6 8
0
1 0
1
0 0
1 1
0 0
1
0
  1 0
0
1
1
3
1 5 3
1 2 6 8
0
1 0
1
0 0
1 1
0 0
1
0
  1 2  0
0
0
0
0
0
0
1 times the fourth row was
added to the fifth row.
2
3
1 5 3
1 2 6 8
0
1 0
1
0
0
1 1
0 0
2 times the third row was
added to the fourth row.
A common factor of 2 from the fifth row
was taken through the determinant sign.
1
  1 2 1  2
Another way to evaluate the determinant would be to use cofactor expansions along the first column after the
third step above:
1 3
2 7
0 0
0 0
0
1 5
0 4
1 0
2
1
0 0
3
1
2
0
1   1 0
1
0
1 1
3
1 5 3
1 2 6 8
1 2 6 8
0
1 0
1
0
1 0
1   11
0
0
1 1
0
0
1 1
0
0
1 1
0 0
0
1 1
1 0
1
1 1
  111 0 1 1   1111
  1111 2   2 .
1 1
0 1 1
14.
1 2
3
1
1 2
3
1
5 9 6
3
0
1 9 2

1 2 6 2 1 2 6 2
2 8 6
1
2 8 6
1
1 2 3
1
0
1 9 2

0 0 3 1
2
8 6
1
5 times the first row was added to the
second row.
The first row was added to the third row.
14
2.2 Evaluating Determinants by Row Reduction
1 2 3
1
0
1 9 2

0 0 3 1
0 12 0 1
1 2
3
1
0
1 9 2

0
0 3 1
0
0 108 23
1 2
3
1
0
1 9 2
 3
1
0 0
1
3
0 0 108 23
1 2
3
1
0
1 9 2
 3
1
0 0
1
3
0 0 0 13
1 2
3
1
0
1 9 2
  3  13 
1
0 0
1
3
0 0 0
1
2 times the first row was added to the
fourth row.
12 times the second row was added to the
fourth row.
A common factor of 3 from the third row
was taken through the determinant sign.
108 times the third row was added to the
fourth row.
A common factor of 13 from the third row
was taken through the determinant sign.
  3 131  39
Another way to evaluate the determinant would be to use cofactor expansions along the first column after the
fourth step above:
1 2
3
1 1 2
3
1
1 9 2
3 1
5 9 6
3 0
1 9 2

 1 0 3 1  11
1 2 6 2 0 0 3 1
108 23
0 108 23
2
8 6
1 0 0 108 23
 111 39   39 .
15.
d e
g h
a b
f
a b
i   1 g h
c
d e
c
i
f
The first and third rows were interchanged.
a b
  1 1 d e
g h
c
f
i
The second and third rows were interchanged.
15
2.2 Evaluating Determinants by Row Reduction
  1 1 6   6
16.
17.
g h
The first and the third rows were interchanged, therefore d e
a b
3a 3b 3c
a b
c
 d e  f  3  d e  f
4 g 4 h 4i
4 g 4 h 4i
c
f    6   6.
i
A common factor of 3 from the first row
was taken through the determinant sign.
a b c
 3  1 d
e f
4 g 4 h 4i
a b
 3  1 4  d e
g h
i
a b
f  d e
c
g h
A common factor of 1 from the second
row was taken through the determinant
sign.
c
f
i
A common factor of 4 from the third row
was taken through the determinant sign.
 3  1 4  6   72
18.
ad be c f
a b
c
d
e
 f   d e  f
g
h
i
g h
i
a b
 1 d e
g h
The second row was added to the first row.
c
f
i
A common factor of 1 from the second
row was taken through the determinant
sign.
  1 6   6
19.
ag bh ci a b
d
e
f d e
g
h
i
g h
c
f
i
1 times the third row was added to the first
row.
 6
20.
a
b
c
a b
c
2d
2e
2 f  2 d 2e 2 f
g  3a h  3b i  3c
g h
i
3 times the first row was added to the last
row.
16
2.2 Evaluating Determinants by Row Reduction
a b
2 d e
g h
c
f
i
A common factor of 2 from the second row
was taken through the determinant sign.
  2  6   12
21.
3a
3b
3c
d
e
f
g  4 d h  4e i  4 f
a
b
c
 3 d
e
f
g  4 d h  4e i  4 f
a b
 3 d e
g h
c
f
i
A common factor of 3 from the first row
was taken through the determinant sign.
4 times the second row was
added to the last row.
  3 6   18
22.
a b c
The third row is proportional to the first row, therefore by Theorem 2.2.5 d
e
f  0.
2 a 2b 2c
(This can also be shown by adding 2 times the first row to the third, then performing a cofactor expansion of the
a b c
resulting determinant d e f along the third row.)
0 0 0
23.
1 1 1
1
a b c  0
a 2 b2 c2 a 2
1
1
ba ca
b2
c2
1
1
 0 ba
0 b2  a 2
1
ca
c2  a2
1
1
1
 0 ba
ca
2
2
0
0
c  a   c  a  b  a 
 1 b  a  c  a  c  a  b  a 
  b  a  c  a  c  b 
a times the first row was added to the
second row.
 a 2 times the first row was added to the third
row.
  b  a  times the second row was added to
the third row.
17
2.2 Evaluating Determinants by Row Reduction
24.
(a)
Interchanging the first row and the third row and applying Theorem 2.1.2 yields
0 a13 
 0
 a31 a32 a33 


det  0 a22 a23    1 det  0 a22 a23   a13 a22 a31
 a31 a32 a33 
 0
0 a13 
(b)
We interchange the first and the fourth row, as well as the second and the third row. Then we use
Theorem 2.1.2 to obtain
0
0 a14 
 0
 a41 a42 a43 a44 
 0

 0 a
0 a23 a24 
a33 a34 
32


  1 1 det
a a a a
det
 0 a32 a33 a34 
 0
0 a23 a24  14 23 32 41




0
0 a14 
 0
 a41 a42 a43 a44 
Generally for any n  n matrix A such that aij  0 if i  j  n we have
det  A    1 a1n a2,n 1  an1 .
n
a1
a2
a3
25.
a1  b1  c1
a2  b2  c2
a3  b3  c3
b1
b2
b3
a1
 a2
a3
b1
b2
b3
b1  c1
b2  c2
b3  c3
1 times the first column was
added to the third column.
b1
b2
b3
1 times the second column was
added to the third column.
a1
 a2
a3
26.

a1  b1t
a1t  b1
a2  b2 t
a2 t  b2
a3  b3 t
a3 t  b3
c1
c2
c3
a1  b1t

 1  t b1
2
a2  b2 t

 1 t

a3  b3 t
1  t  b 1  t  b
2
2
2
c1
2
c1
c2
c3
a1  b1t
b1
c1

3
c2
c3
a2  b2 t
b2
c2
a3  b3 t
b3
c3
 1 t
2

a1
b1
c1
a2
b2
c2
a3
b3
c3
t times the first row was
added to the second row.
A common factor of 1  t 2 from the second
row was taken through the determinant sign.
t times the second row was
added to the first row.
18
2.2 Evaluating Determinants by Row Reduction
27.
28.
a1
a2
a3
a1  b1
a2  b2
a3  b3
a1  b1
a2  b2
a3  b3
c1
c2
c3
a1  b1
 a2  b2
a3  b3
2b1
2b2
2b3
c1
c2
c3
a1  b1
 2 a2  b2
a3  b3
b1
b2
b3
c1
c2
c3
a1
 2 a 2
a3
b1
b2
b3
c1
c2
c3
1 times the first column was added to the
second column.
A common factor of 2 from the second
column was taken through the determinant
sign.
1 times the second column was added to
the first column.
b1  ta1
b2  ta2
b3  ta3
c1  rb1  sa1
c2  rb2  sa2
c3  rb3  sa3
a1
 a2
a3
b1
b2
b3
c1  rb1  sa1
c2  rb2  sa2
c3  rb3  sa3
t times the first column was
added to the second column.
a1
 a2
a3
b1
b2
b3
c1  rb1
c2  rb2
c3  rb3
s times the first column was
added to the third column.
a1
 a2
a3
b1
b2
b3
c1
c2
c3
a1
 b1
c1
a2
b2
c2
a3
b3
c3
r times the second column was
added to the third column.
The matrix was transposed.
(Theorem 2.2.2)
29.
The second column vector is a scalar multiple of the fourth. By Theorem 2.2.5, the determinant is 0.
30.
Adding the second, third, fourth, and fifth rows to the first results in the first row made up of zeros.
31.
1 2 0 3 0
0

1 2 
3 0
det  M   2 5 0 2 1 0   0  0  2
 0  0   4 
   2  12   24
2 5 
2 1

 1 3 2  3 8 4
19
2.2 Evaluating Determinants by Row Reduction
32.
1 2 0
1 2
  111   11   1
det  M   0 1 2
0 1
0 0 1
33.
In order to reverse the order of rows in 2  2 and 3  3 matrix, the first and the last rows can be interchanged, so
det  B   det  A  .
For 4  4 and 5  5 matrices, two such interchanges are needed: the first and last rows can be swapped, then the
second and the penultimate one can follow.
Thus, det  B    1 1 det  A   det  A  in this case.
Generally, to rows in an n  n matrix can be reversed by

interchanging row 1 with row n ,

interchanging row 2 with row n 1 ,



interchanging row n / 2 with row n  n / 2
where x is the greatest integer less than or equal to x (also known as the "floor" of x ).
We conclude that det  B    1
34.
a b b b
b a b b
b b a b
b b b a



det  A  .
a
b
ba ab
ba
ba
0
0
ab
b
b
0
0
a  2b
b
ba ab
b
0
ab
0
ab
0
b
ba ab
0
ba
ab
0
0

n /2
b
0
0
0
ab
b
0
b
0
0
0
0
0
0
ab
0
ab
a  3b
b
b
0
0
0
ab
0
0
ab
0
0
(a)
True. det  B    1 1 det  A   det  A  .
The last column was
added to the first column.
The third column was
added to the first column.
b
0
0
ab
  a  3b  a  b 
True-False Exercises
1 times the first row was added
to each of the remaining rows.
3
The second column was
added to the first column.
20
2.2 Evaluating Determinants by Row Reduction
21
(b)
True. det  B    4   34  det  A   3det  A  .
(c)
False. det  B   det  A  .
(d)
False. det  B   n  n  13  2  1  det  A    n! det  A  .
(e)
True. This follows from Theorem 2.2.5.
(f)
True. Let B be obtained from A by adding the second row to the fourth row, so det  A   det  B  . Since the fourth
row and the sixth row of B are identical, by Theorem 2.2.5 det  B   0 .
2.3 Properties of Determinants; Cramer’s Rule
1.
det  2 A  
2 4
  2  8    4  6   40
6 8
 2  det  A  4
2
2.
det  4 A  
1 2
 4   1 4    2  3    4  10   40
3 4
8 8
  8  8    8  20   224
20 8
 4  det  A   16
2
3.
2 2
 16   2  2    2  5   16  14   224
5 2
We are using the arrow technique to evaluate both determinants.
4
2 6
det  2 A   6 4 2   160  8  288    48  64  120   448
2 8 10
2 1 3
 2  det  A   8 3 2 1   8    20  1  36    6  8  15    8  56   448
1 4 5
3
4.
We are using the cofactor expansion along the first column to evaluate both determinants.
3 3
3
6 9
 3   6  6    9  3     3  63   189
det  3 A   0 6
9 3
3 6
0 3 6
1 1
1
2 3
 27   2  2    3 1    27  7   189
3 det  A   27 0 2
3   27 1
1 2
0 1 2
3
5.
We are using the arrow technique to evaluate the determinants in this problem.
2.3 Properties of Determinants; Cramer’s Rule
9 1 8
det  AB   31 1 17  18  170  0    80  0  62   170 ;
10 0 2
 1 3 6
det  BA   17 11 4   22  120  510    660  20  102   170 ;
10 5 2
3 0 3
det  A  B   10 5 2   45  0  0    75  0  0   30 ;
5 0 3
det  A   16  0  0    0  0  6   10 ;
det  B   1  10  0   15  0  7   17 ;
det  A  B   det  A  det  B 
6.
We are using the arrow technique to evaluate the determinants in this problem.
6 15 26
det  AB   2 4 3   288  90  520    208  180  360   66 ;
2 10 12
5 8 3
det  BA   6 14 7   350  280  36    210  70  240   66 ;
5 2 5
1 7 2
det  A  B   2 1 2  1  28  20    4  10  14   75 ;
1
2 5
det  A    0  16  4    0  2  16   2 ;
det  B    2  0  12    0  18  1  33 ;
det  A  B   det  A  det  B  ;
7.
det  A    6  0  20    10  0  15  1  0 therefore A is invertible by Theorem 2.3.3
8.
det  A    24  0  0    18  0  0   6  0 therefore A is invertible by Theorem 2.3.3
9.
det  A    2 1 2   4  0 therefore A is invertible by Theorem 2.3.3
10.
det  A   0 (second column contains only zeros) therefore A is not invertible by Theorem 2.3.3
11.
det  A    24  24  16    24  16  24   0 therefore A is not invertible by Theorem 2.3.3
12.
det  A   1  0  81   8  36  0   124  0 therefore A is invertible by Theorem 2.3.3
22
2.3 Properties of Determinants; Cramer’s Rule
13.
det  A    2 1 6   12  0 therefore A is invertible by Theorem 2.3.3
14.
det  A   0 (third column contains only zeros) therefore A is not invertible by Theorem 2.3.3
15.
det  A    k  3  k  2    2  2   k 2  5k  2  k  5 2 17

 k 
5  17
2
 . By Theorem 2.3.3, A is invertible if
k  52 17 and k  52 17 .
16.
det  A   k 2  4   k  2  k  2  . By Theorem 2.3.3, A is invertible if k  2 and k  2 .
17.
det  A    2  12k  36    4k  18  12   8  8k  8 1  k  .
By Theorem 2.3.3, A is invertible if k  1 .
18.
det  A   1  0  0    0  2k  2k   1  4k . By Theorem 2.3.3, A is invertible if k  14 .
19.
det  A    6  0  20    10  0  15  1  0 therefore A is invertible by Theorem 2.3.3.
The cofactors of A are:
C11 =
–1 0
4 3
C21 = –
C31 =
= –3
C12 = –
=5
C22 =
=5
C32 = –
5 5
4 3
5 5
–1 0
–1 0
=3
2 3
2 5
2 3
=–4
2 5
–1 0
=–5
C13 =
–1 –1
2
4
2
5
C23 = –
C33 =
2 4
2
=–2
=2
5
=3
–1 –1
5 5
 3 3 2 
 3



The matrix of cofactors is  5 4 2  and the adjoint matrix is adj  A    3 4 5 .
 5 5 3
 2
2
3
5 5   3  5 5 
 3

From Theorem 2.3.6, we have A  det1 A adj  A   11  3 4 5   3 4
5  .
 2
2
3  2 2 3 
1
20.
det  A    24  0  0    18  0  0   6  0 therefore A is invertible by Theorem 2.3.3.
The cofactors of A are:
C11 =
3
0 –4
C21 = –
C31 =
2
0
= –12 C12 = –
3
0 –4
0 3
3 2
=0
= –9
C22 =
0
2
–2 –4
2
3
–2 –4
C32 = –
2 3
0 2
= – 4 C13 =
=–2
=–4
0 3
–2 0
C23 = –
C33 =
=6
2 0
–2 0
2 0
0 3
=6
=0
23
2.3 Properties of Determinants; Cramer’s Rule
0 9 
 12 4 6 
 12



The matrix of cofactors is  0 2 0  and the adjoint matrix is adj  A    4 2 4  .
 9 4 6 
 6
0
6 
3
 12 0 9   2 0
2
 2 1

1
1
1 
2
.
From Theorem 2.3.6, we have A  det  A adj( A)  6  4 2 4    3 3
3


 6 0 6   1 0 1
21.
det  A    2 1 2   4  0 therefore A is invertible by Theorem 2.3.3.
The cofactors of A are:
C11 =
1 3
0
C21 = 
C31 =
2
=2
3 5
0 2
3
5
1 3
C12 = 
0 3
0
2
2 5
=6
C22 =
=4
C32 = 
0 2
2
=0
=4
5
0 3
=6
C13 =
0
1
2 3
C23 = 
C33 =
=0
0 0
0
0
2 3
0
1
=0
=2
2 0 0
2 6 4 


The matrix of cofactors is  6 4 0  and the adjoint matrix is adj  A   0 4 6  .
 4 6 2 
0 0 2 
2 6 4   12 32 1 


 
From Theorem 2.3.6, we have A1  det1 A adj( A)  14 0 4 6   0 1 23  .
0 0 2  0 0 12 
22.
det  A    2 1 6   12 is nonzero, therefore by Theorem 2.3.3, A is invertible.
The cofactors of A are:
C11 
1 0
3 6
C21  
C31 
6
0 0
1 0
8 0
C22 
0
C32  
5 6
2 0
8 0
8 1
 48
C13 
 12
C23  
0
C33 
5 6
2 0
0
3 6
0 0
C12  
5 3
2 0
5 3
2 0
8
 29
1
 6
2
0 0
6 48 29 
 6



The matrix of cofactors is 0 12 6  and the adjoint matrix is adj  A    48 12 0  .
0
 29 6 2 
0 2 
0 0
 6 0 0   12




1
1 0 .
From Theorem 2.3.6, we have A  det1 A adj  A   121  48 12 0    4
29
 29 6 2   12
 12 61 
24
2.3 Properties of Determinants; Cramer’s Rule
23.
1
2
1
1
3
5
3
3
1
2
8
2
1 1 3
2 0 1

9 0 0
2 0 0
1
0
7
1
1
0
8
1
1 3
0 1

0 0
0 0
1
0
1
7
1
0
1
8
The third row and the fourth row were interchanged.
1 3
0 1

0 0
0 0
1
0
1
0
1
0
1
1
7 times the third row was added to the fourth row
2 times the first row was added to the second row; 1
times the first row was added to the third and fourth
rows.
  1 111  1
The determinant of A is nonzero therefore by Theorem 2.3.3, A is invertible.
The cofactors of A are:
5 2 2
C11  3 8 9   80  54  12    48  90  12   4
3 2 2
2 2 2
C12   1 8 9    32  18  4   16  36  4    2
1 2 2
2 5 2
C13  1 3 9  12  45  6    6  54  10   7
1 3 2
2 5 2
C14   1 3 8   12  40  6    6  48  10    6
1 3 2
3 1 1
C21   3 8 9    48  27  6    24  54  6    3
3 2 2
1 1 1
C22  1 8 9  16  9  2    8  18  2   1
1 2 2
1 3 1
C23   1 3 9    6  27  3    3  27  6    0
1 3 2
25
2.3 Properties of Determinants; Cramer’s Rule
1 3 1
C24  1 3 8   6  24  3    3  24  6   0
1 3 2
3 1 1
C31  5 2 2  12  6  10    6  12  10   0
3 2 2
1 1 1
C32   2 2 2    4  2  4    2  4  4    0
1 2 2
1 3 1
C33  2 5 2  10  6  6    5  6  12   1
1 3 2
1 3 1
C34   2 5 2   10  6  6    5  6  12    1
1 3 2
3 1 1
C41   5 2 2    54  6  40    6  48  45    1
3 8 9
1 1 1
C42  2 2 2  18  2  16    2  16  18   0
1 8 9
1 3 1
C43   2 5 2    45  6  6    5  6  54    8
1 3 9
1 3 1
C44  2 5 2   40  6  6    5  6  48   7
1 3 8
 4 2 7 6 
 4 3 0 1
 3 1 0 0 
 2 1 0 0 
.


and the adjoint matrix is adj( A)  
The matrix of cofactors is
 0 0 1 1
 7 0 1 8 




 1 0 8 7 
 6 0 1 7 
 4 3 0 1  4 3 0 1
 2 1 0 0   2 1 0 0 
1
1
1

.
From Theorem 2.3.6, we have A  det  A adj  A   1
 7 0 1 8   7 0 1 8 

 

 6 0 1 7   6 0 1 7 
24.
7 3
7 2 
3 2 
det A
det  A 
26
A
, A1  
, A2  
; x1  det A   13
 1 , x2  det  A   13
2



13
3
1
5
1
3
5






1
2
26
2.3 Properties of Determinants; Cramer’s Rule
25.
4 5 0
det  A   11 1 2   8  10  0    0  40  110   132 ,
1 5 2
2 5 0
det  A1   3 1 2   4  10  0    0  20  30   36 ,
1 5 2
4 2 0
det  A2   11 3 2   24  4  0    0  8  44   24 ,
1 1 2
4 5 2
det  A3   11 1 3   4  15  110    2  60  55   12 ;
1 5 1
det  A 
det  A 
det  A 
36
24
x  det  A1  132
 113 , y  det  A2  132
 112 , z  det  A3  12
  111 .
132
26.
1 4
1
det  A   4 1 2   3  16  8    2  4  48   55 ,
2
2 3
6 4
1
det  A1   1 1 2  18  160  2    20  24  12   144 ,
2 3
20
1
6
1
det  A2   4
1 2   3  24  80    2  40  72   61 ,
2 20 3
1 4
6
det  A3   4 1 1   20  8  48    12  2  320   230 ;
2
2 20
det  A 
det  A 
det  A 
61
46
, y  det  A2  6155   55
, z  det  A3  230
.
x  det  A1  144
  144
 11
55
55
55
27.
1 3
1
det  A   2 1 0   3  0  0    4  0  18   11 ,
4 0 3
4 3
1
4 3
  3  4  6   30 ,
det  A1   2 1 0  3
2 1
0 0 3
27
2.3 Properties of Determinants; Cramer’s Rule
1 4
1
det  A2   2 2 0   6  0  0    8  0  24   38 ,
4
0 3
1 3 4
3 4
  4  6  4   40 ;
det  A3   2 1 2  4
 1 2
4 0
0
det  A 
det  A 
det  A 
38
, x2  det  A2  3811   11
, x3  det  A3  4011   40
.
x1  det  A1  3011   30
11
11
28.
1 4 2
1
2 1 7 9
det  A  
1 1 3
1
1 2 1 4
1 7
9
2 7
9
2 1 9
2 1 7
1  4 1 3
1  2 1
1
1  1 1
1 3
 1 1 3
1 1 4
1 2 4
1 2 1
2 1 4
  12  14  9    54  1  28    4  24  7  9    27  2  28  
2  8  1  18    9  4  4     2  3  14    7  12  1 
= 90  332 +16  17 = 423
32 4 2
1
14 1 7 9
det  A1  
11 1 3
1
4 2 1 4
1 7
9
14 7
9
14 1 9
14 1 7
1  4 11 3
1  2 11
1
1  1 11
1 3
 32 1 3
 2 1 4
4 1 4
4 2 4
4 2 1
 32 12  14  9    54  1  28    4  168  28  99    108  14  308  
2  56  4  198    36  28  44    14  12  154    28  84  11 
= 2880 +1220  460 + 5 = 2115
1 32 2
1
2 14 7 9
det  A2  
1 11 3
1
1 4 1 4
14 7
9
2 7
9
2 14
9
2 14 7
1  32 1 3
1  2 1 11
1  1 1 11 3
 1 11 3
1 1 4
1 4 4
1 4 1
4 1 4
   168  28  99    108  14  308    32  24  7  9    27  2  28  
28
2.3 Properties of Determinants; Cramer’s Rule
2  88  14  36    99  8  56     22  42  28    77  24  14  
= 305  2656  370  53 = 3384
1 4 32
1
2 1 14
9
det  A3  
1 1 11 1
1 2 4 4
1 14
9
2 14
9
2 1 9
2 1 14
1  4 1 11
1  32 1
1
1  1 1
1 11
 1 1 11
1 4 4
1 2 4
1 2 4
2 4 4
   44  28  36    198  4  56    4  88  14  36    99  8  56  
32  8  1  18    9  4  4     8  11  28   14  44  4  
= 230  740  256  43 = 1269
1 4 2 32
2 1 7 14
det  A4  
1 1 3
11
1 2 1 4
1 7 14
2 7 14
2 1 14
2 1 7
1 11  32 1
1 3
 1 1 3 11  4 1 3 11  2 1
1 1 4
1 2 4
1 2 1
 2 1 4
  12  154  14    84  11  28    4  24  77  14    42  22  28  
2  8  11  28   14  44  4    32  2  3  14    7  12  1 
= 5  212 +86 + 544 = 423
det  A 
x1  det  A1  2115
 5,
423
det  A 
x3  det  A3  1269
 3,
423
det  A 
x2  det  A2  3384
8,
423
det  A 
x 4  det  A4  423
 1
423
29.
det  A   0 therefore Cramer’s rule does not apply.
30.
det  A   cos2   sin2   1 is nonzero for all values of  , therefore by Theorem 2.3.3, A is invertible.
The cofactors of A are:
C11  cos
C12  sin 
C13  0
C21   sin 
C31  0
C22  cos
C32  0
C23  0
C33  cos2   sin 2   1
The matrix of cofactors is
 cos sin  0 
  sin  cos 0 



0
0 1
29
2.3 Properties of Determinants; Cramer’s Rule
and the adjoint matrix is
 cos  sin  0 
adj  A    sin 
cos 0 

0
0 1
From Theorem 2.3.6, we have
1
A 
1
det  A 
cos  sin  0   cos  sin  0 
1
adj  A    sin 
cos 0    sin 
cos 0  .
1

0
0 1 
0
0 1
31.
4
3
det  A  
7
1
1 1
7 1
3 5
1 1
4 6
1
1
3 1 1
1
 424 ; det  A2  
7 3 5
8
1 3 1
2
32.
4
3
A
7

1
1 1
7 1
3 5
1 1
1
1
,
8

2
(a)
 6
 1
A1  
 3

 3
1 1
7 1
3 5
1 1
det  A 
1
1
4 6


1
3 1 1
, A2  
 7 3 5
8


2
1 3 1
det  A 
1
1
det A
0
 0 ; y  det A2  424
0
8
2
1
4

3
1
, A3  
7
8


2
1
1 6
7
1
3 3
1 3
det  A 
1
4

3
1
, A4  
7
8


2
1
1 1 6
7 1 1
;
3 5 3

1 1 3
det  A 
0
848
0
x  det  A1  424
 1 , y  det  A2  424
 0 , z  det  A3  424
 2 , w  det  A4  424
0
424
(b)
4
3
The augmented matrix of the system 
7

1
1
0

0

0
33.
0
1
0
0
0
0
1
0
0
0
0
1
1 1
7 1
3 5
1 1
1 6
1 1
has the reduced row echelon form
8 3

2 3
1
0 
therefore the system has only one solution: x 1 , y  0 , z  2 , and w  0 .
2

0
(c)
The method in part (b) requires fewer computations.
(a)
det  3 A  33 det  A    27  7  189 (using Formula (1))
(b)
det A 1  det1 A  17   17 (using Theorem 2.3.5)
 
30
2.3 Properties of Determinants; Cramer’s Rule



1
 
(c)
det 2 A1  23 det A 1  det8 A  87   87 (using Formula (1) and Theorem 2.3.5)
(d)
det  2 A 
(e)
a g
b h
c i

1
det  2 A 
d
a
e b
f
c
d
e
f
 23 det1  A  817   561 (using Theorem 2.3.5 and Formula (1))
g
a b
h d e
i
g h
c
f    7   7 (in the first step we interchanged the last two columns
i
applying Theorem 2.2.3(b); in the second step we transposed the matrix applying Theorem 2.2.2)
34.
35.
(a)
det   A   det   1 A    1 det  A   det  A   2 (using Formula (1))
(b)
det A 1  det1 A  12   12 (using Theorem 2.3.5)
(c)
det 2 AT  2 4 det AT  16det  A   32 (using Formula (1) and Theorem 2.2.2)
(d)
det A3  det  AAA   det  A  det  A  det  A    2   8 (using Theorem 2.3.4)
(a)
det  3 A   33 det  A    27  7   189 (using Formula (1))
(b)
det A1  det1 A  17 (using Theorem 2.3.5)
(c)
det 2 A1  23 det A1  det8 A  87 (using Formula (1) and Theorem 2.3.5)
(d)
det  2 A 
4
 


 
 
3
 



1
 

1
det  2 A 
 23 det1  A  81 7  561 (using Theorem 2.3.5 and Formula (1))
True-False Exercises
(a)
False. By Formula (1), det(2A) = 2³ det(A) = 8 det(A).
(b)
False. E.g., A = [1 0; 0 0] and B = [0 0; 0 1] have det(A) = det(B) = 0, but det(A + B) = 1 ≠ 2 det(A).
(c)
True. By Theorems 2.3.4 and 2.3.5,
det(A⁻¹BA) = det(A⁻¹) det(B) det(A) = (1/det(A)) det(B) det(A) = det(B).
(d)
False. A square matrix A is invertible if and only if det(A) ≠ 0.
(e)
True. This follows from Definition 1.
(f)
True. This is Formula (8).
(g)
True. If det(A) ≠ 0, then by Theorem 2.3.8, Ax = 0 would have only the trivial solution, which contradicts our assumption. Consequently, det(A) = 0.
(h)
True. If the reduced row echelon form of A were I_n, then by Theorem 2.3.8, Ax = b would be consistent for every b, which contradicts our assumption. Consequently, the reduced row echelon form of A cannot be I_n.
(i)
True. Since the reduced row echelon form of E is I, by Theorem 2.3.8, Ex = 0 must have only the trivial solution.
(j)
True. If A is invertible, so is A⁻¹. By Theorem 2.3.8, each system has only the trivial solution.
(k)
True. From Theorem 2.3.6, A⁻¹ = (1/det(A)) adj(A), therefore adj(A) = det(A)A⁻¹. Consequently,
[(1/det(A))A] adj(A) = [(1/det(A))A][det(A)A⁻¹] = AA⁻¹ = I_n, so (adj(A))⁻¹ = (1/det(A))A.
(l)
False. If the kth row of A contains only zeros, then all cofactors C_ij with i ≠ k are zero (since each of them involves the determinant of a matrix with a zero row). This means the matrix of cofactors contains at least one zero row, and therefore adj(A) has a column of zeros.
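Part (k) rests on the identity A·adj(A) = det(A)·I_n. A self-contained NumPy check (the 3 × 3 test matrix is an arbitrary sample):

    import numpy as np

    def adjugate(A):
        """adj(A): transpose of the matrix of cofactors."""
        n = A.shape[0]
        C = np.empty_like(A, dtype=float)
        for i in range(n):
            for j in range(n):
                minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
                C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
        return C.T

    A = np.array([[2., 1., 0.], [1., 3., 1.], [0., 1., 4.]])
    d = np.linalg.det(A)
    assert np.allclose(A @ adjugate(A), d * np.eye(3))         # A adj(A) = det(A) I
    assert np.allclose(np.linalg.inv(adjugate(A)), A / d)      # (adj A)^-1 = A / det(A)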
Chapter 2 Supplementary Exercises
1.
(a)
Cofactor expansion along the first row:
|-4 2; 3 3| = (-4)(3) - (2)(3) = -12 - 6 = -18
(b)
|-4 2; 3 3|
= -|3 3; -4 2|   (The first and second rows were interchanged.)
= -3|1 1; -4 2|   (A common factor of 3 from the first row was taken through the determinant sign.)
= -3|1 1; 0 6|   (4 times the first row was added to the second row.)
= -3(1)(6) = -18   (Use Theorem 2.1.2.)

2.
(a)
Cofactor expansion along the first row:
|7 -1; -2 -6| = (7)(-6) - (-1)(-2) = -42 - 2 = -44
(b)
|7 -1; -2 -6|
= -|-2 -6; 7 -1|   (The first and second rows were interchanged.)
= -(-2)|1 3; 7 -1|   (A common factor of -2 from the first row was taken through the determinant sign.)
= 2|1 3; 0 -22|   (-7 times the first row was added to the second row.)
= 2(1)(-22) = -44   (Use Theorem 2.1.2.)
3.
(a)
Cofactor expansion along the second row:
|-1 5 2; 0 2 -1; -3 1 1| = -0|5 2; 1 1| + 2|-1 2; -3 1| - (-1)|-1 5; -3 1|
= 0 + 2[(-1)(1) - (2)(-3)] + 1[(-1)(1) - (5)(-3)]
= 0 + (2)(5) + (1)(14) = 0 + 10 + 14 = 24
(b)
|-1 5 2; 0 2 -1; -3 1 1|
= (-1)|1 -5 -2; 0 2 -1; -3 1 1|   (A common factor of -1 from the first row was taken through the determinant sign.)
= (-1)|1 -5 -2; 0 2 -1; 0 -14 -5|   (3 times the first row was added to the third row.)
= (-1)|1 -5 -2; 0 2 -1; 0 0 -12|   (7 times the second row was added to the third.)
= (-1)(1)(2)(-12) = 24   (Use Theorem 2.1.2.)
4.
(a)
Cofactor expansion along the first row:
|-1 -2 -3; 4 5 6; 7 8 9| = (-1)|5 6; 8 9| - (-2)|4 6; 7 9| + (-3)|4 5; 7 8|
= (-1)[(5)(9) - (6)(8)] + (2)[(4)(9) - (6)(7)] + (-3)[(4)(8) - (5)(7)]
= (-1)(-3) + (2)(-6) + (-3)(-3) = 3 - 12 + 9 = 0
(b)
|-1 -2 -3; 4 5 6; 7 8 9|
= (-1)|1 2 3; 4 5 6; 7 8 9|   (A common factor of -1 from the first row was taken through the determinant sign.)
= (-1)|1 2 3; 0 -3 -6; 0 -6 -12|   (-4 times the first row was added to the second row and -7 times the first row was added to the third row.)
= (-1)|1 2 3; 0 -3 -6; 0 0 0|   (-2 times the second row was added to the third row.)
= (-1)(0) = 0   (Use Theorem 2.2.1.)

5.
(a)
Cofactor expansion along the first row:
|3 0 -1; 1 1 1; 0 4 2| = 3|1 1; 4 2| - 0|1 1; 0 2| + (-1)|1 1; 0 4|
= (3)[(1)(2) - (1)(4)] - 0 + (-1)[(1)(4) - (1)(0)]
= (3)(-2) - 0 + (-1)(4) = -6 - 4 = -10
(b)
|3 0 -1; 1 1 1; 0 4 2|
= (-1)|1 1 1; 3 0 -1; 0 4 2|   (The first and second rows were interchanged.)
= (-1)|1 1 1; 0 -3 -4; 0 4 2|   (-3 times the first row was added to the second.)
= (-1)|1 1 1; 0 -3 -4; 0 1 -2|   (The second row was added to the third row.)
= (-1)(-1)|1 1 1; 0 1 -2; 0 -3 -4|   (The second and third rows were interchanged.)
= (-1)(-1)|1 1 1; 0 1 -2; 0 0 -10|   (3 times the second row was added to the third.)
= (-1)(-1)(1)(1)(-10) = -10   (Use Theorem 2.1.2.)

6.
(a)
Cofactor expansion along the second row:
|-5 1 4; 3 0 2; 1 -2 2| = -3|1 4; -2 2| + 0|-5 4; 1 2| - 2|-5 1; 1 -2|
= -3[(1)(2) - (4)(-2)] - 2[(-5)(-2) - (1)(1)]
= (-3)(10) - (2)(9) = -30 - 18 = -48
(b)
|-5 1 4; 3 0 2; 1 -2 2|
= -|1 -2 2; 3 0 2; -5 1 4|   (The first and third rows were interchanged.)
= -|1 -2 2; 0 6 -4; 0 -9 14|   (-3 times the first row was added to the second row and 5 times the first row was added to the third row.)
= -6|1 -2 2; 0 1 -2/3; 0 -9 14|   (A common factor of 6 from the second row was taken through the determinant sign.)
= -6|1 -2 2; 0 1 -2/3; 0 0 8|   (9 times the second row was added to the third row.)
= -6(1)(1)(8) = -48   (Use Theorem 2.1.2.)

7.
(a)
We perform cofactor expansions along the first row in the 4 × 4 determinant. In each of the 3 × 3 determinants, we expand along the second row. Carrying out the arithmetic gives
det(A) = (3)(-10) + (-6)(55) + (0)(21) + (1)(31) = -30 - 330 + 0 + 31 = -329
(b)
We first interchange the first and third rows (which changes the sign of the determinant), and then add suitable multiples of each row to the rows below it (-2, 3, and 9 times the first row to the second, third, and fourth rows; then multiples of the second row to the third and fourth rows; finally a multiple of the third row to the fourth) until the matrix is upper triangular. By Theorem 2.1.2,
det(A) = (-1)(1)(3)(-5)(-329/15) = -329

8.
(a)
We perform cofactor expansions along the first row in the 4 × 4 determinant, as well as in each of the 3 × 3 determinants. When the products are collected, every term is cancelled by an equal term of the opposite sign, so that
det = 0 + 0 + 0 + 0 = 0
(b)
Using elementary row operations, adding suitable multiples of the first rows to the remaining rows produces a row of zeros, so the determinant is 0 by Theorem 2.2.1.
9.
Using the arrow technique:
In Exercise 3: |-1 5 2; 0 2 -1; -3 1 1| = (-2 + 15 + 0) - (-12 + 1 + 0) = 13 + 11 = 24
In Exercise 4: |-1 -2 -3; 4 5 6; 7 8 9| = (-45 - 84 - 96) - (-105 - 48 - 72) = -225 + 225 = 0
In Exercise 5: |3 0 -1; 1 1 1; 0 4 2| = (6 + 0 - 4) - (0 + 12 + 0) = 2 - 12 = -10
In Exercise 6: |-5 1 4; 3 0 2; 1 -2 2| = (0 + 2 - 24) - (0 + 20 + 6) = -22 - 26 = -48

10.
(a)
e.g., a 4 × 4 determinant with many zero entries was easy to calculate by cofactor expansions (first, we expanded along the second column, then along the third column), where it reduced to the product (-18)(-15) = 270, but would be more difficult to calculate using elementary row operations.
(b)
e.g., the determinant of Exercise 8 was easy to calculate using elementary row operations, but more difficult using cofactor expansion.
11.
In Exercise 1: |-4 2; 3 3| = -18 ≠ 0, therefore the matrix is invertible.
In Exercise 2: |7 -1; -2 -6| = -44 ≠ 0, therefore the matrix is invertible.
In Exercise 3: |-1 5 2; 0 2 -1; -3 1 1| = 24 ≠ 0, therefore the matrix is invertible.
In Exercise 4: |-1 -2 -3; 4 5 6; 7 8 9| = 0, therefore the matrix is not invertible.

12.
In Exercise 5: |3 0 -1; 1 1 1; 0 4 2| = -10 ≠ 0, therefore the matrix is invertible.
In Exercise 6: |-5 1 4; 3 0 2; 1 -2 2| = -48 ≠ 0, therefore the matrix is invertible.
In Exercise 7: the 4 × 4 determinant equals -329 ≠ 0, therefore the matrix is invertible.
In Exercise 8: the 4 × 4 determinant equals 0, therefore the matrix is not invertible.
13.
|-5 b-3; b-2 3| = (-5)(3) - (b - 3)(b - 2) = -15 - (b² - 2b - 3b + 6) = -b² + 5b - 21

14.
Adding 4 times the second row to the first row and 1 - a times the second row to the last row introduces zeros into the second column. Cofactor expansion along the second column then reduces the determinant to a single lower-order determinant, whose expansion simplifies to a fourth-degree polynomial in a.
15.
|0 0 0 0 -3; 0 0 0 4 0; 0 0 -1 0 0; 0 2 0 0 0; 5 0 0 0 0|
= -|5 0 0 0 0; 0 0 0 4 0; 0 0 -1 0 0; 0 2 0 0 0; 0 0 0 0 -3|   (The first row and the fifth row were interchanged.)
= (-1)(-1)|5 0 0 0 0; 0 2 0 0 0; 0 0 -1 0 0; 0 0 0 4 0; 0 0 0 0 -3|   (The second row and the fourth row were interchanged.)
= (-1)(-1)(5)(2)(-1)(4)(-3) = 120
16.
|x -1; 3 1-x| = x(1 - x) - (-1)(3) = -x² + x + 3.
Adding -2 times the first row to the second row, then performing cofactor expansion along the second row yields
|1 0 -3; 2 x -6; 1 3 x-5| = |1 0 -3; 0 x 0; 1 3 x-5| = x|1 -3; 1 x-5| = x[(x - 5) + 3] = x² - 2x.
Solve the equation
-x² + x + 3 = x² - 2x
2x² - 3x - 3 = 0
From the quadratic formula, x = (3 + √(9 + 24))/4 = (3 + √33)/4 or x = (3 - √(9 + 24))/4 = (3 - √33)/4.
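A quick numerical confirmation of the two roots with NumPy (illustrative only):

    import numpy as np

    roots = np.roots([2, -3, -3])             # 2x^2 - 3x - 3 = 0
    expected = (3 + np.array([1, -1]) * np.sqrt(33)) / 4
    assert np.allclose(np.sort(roots), np.sort(expected))

    # Each root makes the two determinants in the exercise equal.
    for x in roots:
        lhs = np.linalg.det(np.array([[x, -1], [3, 1 - x]]))
        rhs = np.linalg.det(np.array([[1, 0, -3], [2, x, -6], [1, 3, x - 5]]))
        assert np.isclose(lhs, rhs)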
17.
It was shown in the solution of Exercise 1 that |-4 2; 3 3| = -18. The determinant is nonzero, therefore by Theorem 2.3.3, the matrix A = [-4 2; 3 3] is invertible.
The cofactors are:
C11 = 3    C12 = -3
C21 = -2   C22 = -4
The matrix of cofactors is [3 -3; -2 -4] and the adjoint matrix is adj(A) = [3 -2; -3 -4].
From Theorem 2.3.6, we have A⁻¹ = (1/det(A)) adj(A) = (1/(-18))[3 -2; -3 -4] = [-1/6 1/9; 1/6 2/9].

18.
It was shown in the solution of Exercise 2 that |7 -1; -2 -6| = -44. The determinant is nonzero, therefore by Theorem 2.3.3, the matrix A = [7 -1; -2 -6] is invertible.
The cofactors are:
C11 = -6   C12 = 2
C21 = 1    C22 = 7
The matrix of cofactors is [-6 2; 1 7] and the adjoint matrix is adj(A) = [-6 1; 2 7].
From Theorem 2.3.6, we have A⁻¹ = (1/det(A)) adj(A) = (1/(-44))[-6 1; 2 7] = [3/22 -1/44; -1/22 -7/44].

19.
1 5 2
It was shown in the solution of Exercise 3 that 0 2 1  24 . The determinant is nonzero, therefore by
3 1 1
 1 5 2 
Theorem 2.3.3, A   0 2 1 is invertible.
 3 1 1
The cofactors of A are:
40
Supplementary Exercises
C11 =
2 –1
1
C21 = –
C31 =
1
5 2
1 1
5
2
2 –1
=3
C12 = –
= –3 C22 =
= –9
0 –1
–3
1
–1 2
–3 1
C32 = –
=3
=5
–1
2
C13 =
–3 1
C23 = –
= –1 C33 =
0 –1
0 2
=6
–1 5
= –14
–3 1
–1 5
0 2
= –2
6
 3 3
 3 3 9 


The matrix of cofactors is  3 5 14  and the adjoint matrix is adj  A    3
5 1 .
6 14 2 
 9 1 2 
 3 3 9   81

From Theorem 2.3.6, we have A1  det1 A adj  A   241  3
5 1   81
6 14 2   14
20.
 81

 83 

 241  .
 121 
5
24
7
12
1 2 3
It was shown in the solution of Exercise 4 that 4 5 6  0 therefore by Theorem 2.3.3, the matrix is
7 8 9
not invertible.
21.
It was shown in the solution of Exercise 5 that |3 0 -1; 1 1 1; 0 4 2| = -10. The determinant is nonzero, therefore by Theorem 2.3.3, A = [3 0 -1; 1 1 1; 0 4 2] is invertible.
The cofactors of A are:
C11 = |1 1; 4 2| = -2      C21 = -|0 -1; 4 2| = -4     C31 = |0 -1; 1 1| = 1
C12 = -|1 1; 0 2| = -2     C22 = |3 -1; 0 2| = 6       C32 = -|3 -1; 1 1| = -4
C13 = |1 1; 0 4| = 4       C23 = -|3 0; 0 4| = -12     C33 = |3 0; 1 1| = 3
The matrix of cofactors is [-2 -2 4; -4 6 -12; 1 -4 3] and the adjoint matrix is
adj(A) = [-2 -4 1; -2 6 -4; 4 -12 3].
From Theorem 2.3.6, we have
A⁻¹ = (1/det(A)) adj(A) = (1/(-10))[-2 -4 1; -2 6 -4; 4 -12 3] = [1/5 2/5 -1/10; 1/5 -3/5 2/5; -2/5 6/5 -3/10].

22.
5
1 4
It was shown in the solution of Exercise 6 that 3 0 2  48 . The determinant is nonzero, therefore by
1 2 2
1 4
 5

Theorem 2.3.3, A   3 0 2  is invertible.
 1 2 2 
The cofactors of A are:
C11 =
0 2
–2 2
C21 = –
C31 =
=4
1 4
–2 2
1 4
0 2
=2
C12 = –
= –10 C22 =
3 2
1 2
–5 4
1 2
C32 = –
3
= –4
C13 =
= –14
C23 = –
–5 4
3 2
= 22 C33 =
0
1 –2
= –6
–5
1
1 –2
–5 1
3 0
= –9
= –3
 4 4 6 
 4 10 2 


The matrix of cofactors is  10 14 9  and the adjoint matrix is adj  A    4 14 22  .
 2 22 3
 6 9 3
 4 10 2    121

 
From Theorem 2.3.6, we have A1  det1 A adj  A   148  4 14 22    121
 6 9 3  81
23.
It was shown in the solution of Exercise 7 that the determinant of A is -329. The determinant is nonzero, therefore by Theorem 2.3.3, A is invertible.
Each of the sixteen cofactors of A is a 3 × 3 determinant; evaluating them yields the matrix of cofactors
[ -10   55    21    31 ]
[  -2   11   -70    72 ]
[  52  -43   175  -102 ]
[ -27   16    42    15 ]
and the adjoint matrix
adj(A) =
[ -10   -2    52   -27 ]
[  55   11   -43    16 ]
[  21  -70   175    42 ]
[  31   72  -102    15 ]
From Theorem 2.3.6, we have
A⁻¹ = (1/det(A)) adj(A) = -(1/329) adj(A) =
[  10/329    2/329   -52/329    27/329 ]
[ -55/329  -11/329    43/329   -16/329 ]
[  -3/47    10/47    -25/47     -6/47  ]
[ -31/329  -72/329   102/329   -15/329 ]

24.
1 2 3 4
4
3 2
1
 0 therefore by Theorem 2.3.3, the matrix
It was shown in the solution of Exercise 8 that
1 2
3 4
4 3 2 1
is not invertible.
25.
A = [3/5 -4/5; 4/5 3/5], A1 = [x -4/5; y 3/5], A2 = [3/5 x; 4/5 y];
det(A) = (3/5)(3/5) - (-4/5)(4/5) = 9/25 + 16/25 = 1;
x′ = det(A1)/det(A) = (3/5)x + (4/5)y,  y′ = det(A2)/det(A) = (3/5)y - (4/5)x
26.
A = [cos θ -sin θ; sin θ cos θ], A1 = [x -sin θ; y cos θ], A2 = [cos θ x; sin θ y];
det(A) = cos²θ + sin²θ = 1;
x′ = det(A1)/det(A) = (x cos θ + y sin θ)/(cos²θ + sin²θ) = x cos θ + y sin θ,
y′ = det(A2)/det(A) = (y cos θ - x sin θ)/(cos²θ + sin²θ) = y cos θ - x sin θ
27.
The coefficient matrix of the given system is a 3 × 3 matrix A whose entries involve the parameters of the system. Cofactor expansion along the first row expresses det(A) as a polynomial in those parameters. By Theorem 2.3.8, the given system has a nontrivial solution if and only if det(A) = 0.
28.
According to the arrow technique (see Example 7 in Section 2.1), the determinant of a 3 × 3 matrix can be expressed as a sum of six terms:
|a11 a12 a13; a21 a22 a23; a31 a32 a33| = a11a22a33 + a12a23a31 + a13a21a32 - a13a22a31 - a11a23a32 - a12a21a33
If each entry of A is either 0 or 1, then each of the six terms must be either 0 or 1. The largest value, 3, would result from the terms 1 + 1 + 1 - 0 - 0 - 0; however, this is not possible, since making the first three terms all equal 1 would require that all nine matrix entries equal 1, making the determinant 0.
The largest value of the determinant that is actually attainable is 2, e.g., let A = [0 1 1; 1 0 1; 1 1 0].
29.
(a)
We will justify the third equality, a cos β + b cos α = c, by considering three cases:
CASE I: α ≤ π/2 and β ≤ π/2
Referring to the figure, with the altitude from the vertex opposite side c splitting c into segments x and y, we have x = b cos α and y = a cos β. Since x + y = c, we obtain a cos β + b cos α = c.
CASE II: α > π/2 and β ≤ π/2
This time we can write x = b cos(π - α) = -b cos α and y = a cos β, and c = y - x = a cos β + b cos α, therefore once again a cos β + b cos α = c.
CASE III: α ≤ π/2 and β > π/2 (similarly to Case II, c = b cos α - a cos(π - β) = b cos α + a cos β)
The first two equations can be justified in the same manner.
Denoting X = cos α, Y = cos β, and Z = cos γ, we can rewrite the linear system as
cY + bZ = a
cX + aZ = b
bX + aY = c
We have det(A) = |0 c b; c 0 a; b a 0| = (0 + abc + abc) - (0 + 0 + 0) = 2abc and
det(A1) = |a c b; b 0 a; c a 0| = (0 + ac² + ab²) - (0 + a³ + 0) = a(b² + c² - a²), therefore by Cramer's rule
cos α = X = det(A1)/det(A) = a(b² + c² - a²)/(2abc) = (b² + c² - a²)/(2bc).
(b)
Using the results obtained in part (a) along with
det(A2) = |0 a b; c b a; b c 0| = (0 + a²b + bc²) - (b³ + 0 + 0) = b(a² + c² - b²) and
det(A3) = |0 c a; c 0 b; b a c| = (0 + b²c + a²c) - (0 + 0 + c³) = c(a² + b² - c²),
we obtain by Cramer's rule
cos β = Y = det(A2)/det(A) = (a² + c² - b²)/(2ac) and cos γ = Z = det(A3)/det(A) = (a² + b² - c²)/(2ab).
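The system in part (a) can be checked numerically: for any triangle side lengths a, b, c, solving it recovers the law-of-cosines expressions. A small NumPy sketch (the side lengths are arbitrary sample values):

    import numpy as np

    a, b, c = 3.0, 4.0, 6.0                       # any valid triangle sides
    A = np.array([[0, c, b],
                  [c, 0, a],
                  [b, a, 0]], dtype=float)
    X, Y, Z = np.linalg.solve(A, [a, b, c])       # X = cos(alpha), etc.

    assert np.isclose(X, (b**2 + c**2 - a**2) / (2 * b * c))
    assert np.isclose(Y, (a**2 + c**2 - b**2) / (2 * a * c))
    assert np.isclose(Z, (a**2 + b**2 - c**2) / (2 * a * b))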
31.
From Theorem 2.3.6, A⁻¹ = (1/det(A)) adj(A), therefore adj(A) = det(A)A⁻¹. Consequently,
[(1/det(A))A] adj(A) = [(1/det(A))A][det(A)A⁻¹] = AA⁻¹ = I_n, so (adj(A))⁻¹ = (1/det(A))A.
Using Theorem 2.3.5, we can also write adj(A⁻¹) = det(A⁻¹)(A⁻¹)⁻¹ = (1/det(A))A.
33.
The equality A(1, 1, 1)ᵀ = (0, 0, 0)ᵀ means that the homogeneous system Ax = 0 has a nontrivial solution x = (1, 1, 1)ᵀ.
Consequently, it follows from Theorem 2.3.8 that det(A) = 0.
34.
(b)
(1/2)|3 3 1; 4 0 1; -2 -1 1| = -19/2 is the negative of the area of the triangle because its vertices are being traced clockwise; reversing the order of the points would change the orientation to counterclockwise, and thereby result in the positive area:
(1/2)|-2 -1 1; 4 0 1; 3 3 1| = 19/2.
37.
In the special case that n = 3, the augmented matrix for the system (13) of Section 1.10 is
[1 x1 x1² | y1; 1 x2 x2² | y2; 1 x3 x3² | y3]
We apply Cramer's rule to the coefficient matrix A = [1 x1 x1²; 1 x2 x2²; 1 x3 x3²] with
A1 = [y1 x1 x1²; y2 x2 x2²; y3 x3 x3²], A2 = [1 y1 x1²; 1 y2 x2²; 1 y3 x3²], and A3 = [1 x1 y1; 1 x2 y2; 1 x3 y3],
so the coefficients of the desired interpolating polynomial y = a0 + a1x + a2x² are
a0 = det(A1)/det(A), a1 = det(A2)/det(A), and a2 = det(A3)/det(A).
From the result of Exercise 43 of Section 2.1, det(A) = (x2 - x1)(x3 - x1)(x3 - x2). Furthermore,
det(A1) = y3x1x2(x2 - x1) - y2x1x3(x3 - x1) + y1x2x3(x3 - x2),
det(A2) = -y3(x2² - x1²) + y2(x3² - x1²) - y1(x3² - x2²),
det(A3) = y3(x2 - x1) - y2(x3 - x1) + y1(x3 - x2).
Therefore
a0 = [y3x1x2(x2 - x1) - y2x1x3(x3 - x1) + y1x2x3(x3 - x2)] / [(x2 - x1)(x3 - x1)(x3 - x2)],
a1 = [-y3(x2² - x1²) + y2(x3² - x1²) - y1(x3² - x2²)] / [(x2 - x1)(x3 - x1)(x3 - x2)],
a2 = [y3(x2 - x1) - y2(x3 - x1) + y1(x3 - x2)] / [(x2 - x1)(x3 - x1)(x3 - x2)].

38.
No. For instance, T(1, 0, 0, 1) = |1 0; 0 1| = 1, so that T(1, 0, 0, 1) + T(1, 0, 0, 1) = 2,
but T((1, 0, 0, 1) + (1, 0, 0, 1)) = T(2, 0, 0, 2) = |2 0; 0 2| = 4, which shows that additivity fails.
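The formulas of Exercise 37 agree with a direct polynomial fit; a NumPy sketch (the sample points are arbitrary illustrative values):

    import numpy as np

    xs = np.array([1.0, 2.0, 4.0])
    ys = np.array([3.0, -1.0, 5.0])

    V = np.vander(xs, 3, increasing=True)         # rows [1, x_i, x_i^2]
    a0, a1, a2 = np.linalg.solve(V, ys)           # coefficients of y = a0 + a1 x + a2 x^2

    x1, x2, x3 = xs
    y1, y2, y3 = ys
    detA = (x2 - x1) * (x3 - x1) * (x3 - x2)
    a2_cramer = (y3 * (x2 - x1) - y2 * (x3 - x1) + y1 * (x3 - x2)) / detA
    assert np.isclose(a2, a2_cramer)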
CHAPTER 3: EUCLIDEAN VECTOR SPACES
3.1 Vectors in 2-Space, 3-Space, and n-Space
1.
(a)
 4  1,1  5   3,  4 
(b)
 0  2, 0  3, 4  0    2, 3, 4 
2.
(a)
 3  2, 3  3   5, 0 
(b)
 0  3, 4  0, 4  4    3, 4, 0 
3.
(a)

P1 P2   2  3, 8  5    1, 3 
(b)

P1 P2   2  5, 4   2  , 2  1   3, 6,1
4.
(a)

P1 P2   4   6  , 1  2    2, 3 
(b)

P1 P2   1  0, 6  0, 1  0    1, 6,1
5.
(a)

Denote the terminal point by B  b1 , b2  . Since the vector AB   b1  1, b2  1 is to be equivalent to the
vector u  1, 2  , the coordinates of B must satisfy the equations
b1  1  1 and b2  1  2
therefore b1  2 and b2  3 . The terminal point is B  2, 3 .
(b)

Denote the initial point by A  a1 , a2 , a3  . Since the vector AB   1  a1 , 1  a2 , 2  a3  is to be
equivalent to the vector u  1,1, 3 , the coordinates of A must satisfy the equations
1  a1  1,
 1  a2  1,
and
2  a3  3
therefore a1  2 , a2  2 , and a3  1 . The initial point is A  2,  2,  1 .
6.
(a)

Denote the initial point by A  a1 , a2  . Since the vector AB   2  a1 ,0  a2    2  a1 ,  a2  is to be
equivalent to the vector u  1, 2  , the coordinates of A must satisfy the equations
2  a1  1 and  a2  2
therefore a1  1 and a2  2 . The initial point is A 1, 2  .
(b)
Denote the terminal point by B  b1 , b2 , b3  . Since the vector

AB   b1  0, b2  2, b3  0    b1 , b2  2, b3  is to be equivalent to the vector u  1,1, 3 , the
coordinates of B must satisfy the equations
b1  1, b2  2  1, and b3  3
therefore b1  1, b2  3, and b3  3 . The terminal point is B 1, 3, 3 .
7.
(a)
For any positive real number k , the vector u  kv has the same direction as v . For example, letting
k  1 , we have u   4,  2,  1 . If the terminal point is Q  3, 0, 5 then the initial point has
coordinates  3  4, 0   2  , 5   1  , i.e.,  1, 2, 4  .
(b)
For any negative real number k , the vector u  kv is oppositely directed to v . For example, letting
k  1 , we have u   4, 2,1 . If the terminal point is Q  3, 0,  5 then the initial point has
coordinates  3   4  , 0  2,  5  1 , i.e.,  7, 2, 6  .
8.
(a)
For any positive real number k , the vector u  kv has the same direction as v . For example, letting
k  1 , we have u   6, 7, 3 . If the initial point is P  1,3, 5 then the terminal point has
coordinates  1  6, 3  7, 5  3 , i.e.,  5,10, 8  .
(b)
For any negative real number k , the vector u  kv is oppositely directed to v . For example, letting
k  1 , we have u   6, 7, 3 . If the initial point is P  1,3, 5 then the terminal point has
coordinates  1  6, 3  7, 5  3 , i.e.,  7, 4, 2  .
9.
(a)
u  w   4   3  ,  1   3    1,  4 
(b)
v  3u   0, 5   12,  3    0  12, 5   3     12, 8 
(c)
2  u  5w   2  4,  1   15,  15    2 19,14    38, 28 
(d)
3v  2  u  2 w    0,15   2  4,  1   6,  6     0,15   2  2,  7 
  0,15   4,  14    4, 29 
10.
(a)
v  w   4  6, 0   1 , 8   4     2,1, 4 
(b)
6u  2v   18, 6,12    8, 0, 16    10, 6, 4 
(c)
3  v  8w   3  4, 0, 8    48, 8, 32    3  44, 8, 24   132, 24, 72 
(d)
11.
 2u  7w   8v  u    6, 2, 4    42,  7,  28     32, 0, 64    3,1, 2 
  48, 9, 32    29,1, 62    77, 8, 94 
(a)
v  w   4  5, 7   2  , 3  8, 2  1   1, 9,  11,1
(b)
 u   v  4 w    3, 2, 1,0    4, 7,  3, 2    20, 8, 32, 4  
  3, 2, 1,0    16,15, 35, 2    13,13, 36, 2 
(c)
6  u  3v   6  3, 2,1, 0   12, 21, 9, 6    6  15, 19,10, 6    90, 114, 60, 36 
(d)
12.
(a)
v  w   0  7, 4  1, 1  4,1  2, 2  3   7, 5, 5,  1, 5
(b)
3  2 u  v   3  2, 4, 6,10, 0    0, 4, 1,1, 2    3  2, 0, 5, 9, 2    6, 0, 15, 27, 6 
(c)
(d)
13.
 6v  w    4u  v    24, 42, 18, 12    5, 2, 8,1   12, 8, 4, 0    4, 7, 3, 2 
 19, 44, 26,11   8,15,1, 2    27, 29, 27, 9
 3u  v    2u  4w    3, 6, 9,15, 0    0, 4, 1,1, 2     2, 4, 6,10, 0    28, 4, 16, 8,12 
  3, 2, 8,14, 2    30, 8, 22, 2,12    27, 6,14,12, 14 
 w  5v  2u   v  12  7,1, 4, 2,3    0, 20, 5, 5,10    2, 4, 6,10, 0    0, 4, 1,1, 2 
 12  9, 15, 5, 3, 7    0, 4, 1,1, 2    29 ,  27 ,  27 , 25 ,  23 
Solve the vector equation using the properties listed in Theorems 3.1.1 and 3.1.2:
3u + v - 2w = 3x + 2w
(3u + v) + (-4)w = 3x + 0w   [Add -2w to both sides, use parts (b) and (d) of Th. 3.1.1]
(3u + v) + (-4)w = 3x   [Use part (a) of Theorem 3.1.2]
(1/3)[(3u + v) + (-4)w] = (1/3)(3x)   [Multiply both sides by 1/3]
(1/3)[(3u + v) + (-4)w] = x   [Parts (g) and (h) of Theorem 3.1.1]
Therefore x = (1/3)[(-5, 13, 0, 2) + (-20, 8, -32, -4)] = (1/3)(-25, 21, -32, -2) = (-25/3, 7, -32/3, -2/3).
14.
Solve the vector equation using the properties listed in Theorems 3.1.1 and 3.1.2:
2u + (-1)v + x = 7x + w   [Part (c) of Theorem 3.1.2]
2u + (-1)v + 0x = 6x + w   [Add -x to both sides, use parts (b) and (d) of Th. 3.1.1]
2u + (-1)v = 6x + w   [Use part (a) of Theorem 3.1.2]
2u + (-1)v + (-1)w = 6x + 0w   [Add -w to both sides, use parts (b) and (d) of Th. 3.1.1]
2u + (-1)v + (-1)w = 6x   [Use part (a) of Theorem 3.1.2]
(1/6)[2u + (-1)v + (-1)w] = (1/6)(6x)   [Multiply both sides by 1/6]
(1/6)[2u + (-1)v + (-1)w] = x   [Parts (g) and (h) of Theorem 3.1.1]
Therefore x = (1/6)[(2, 4, -6, 10, 0) - (0, 4, -1, 1, 2) - (7, 1, -4, -2, 3)] = (-5/6, -1/6, -1/6, 11/6, -5/6).
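The result of Exercise 14 can be verified componentwise; a NumPy sketch using the vectors as reconstructed above (assumed data):

    import numpy as np

    u = np.array([1., 2., -3., 5., 0.])
    v = np.array([0., 4., -1., 1., 2.])
    w = np.array([7., 1., -4., -2., 3.])

    x = (2 * u - v - w) / 6             # solution of 2u - v + x = 7x + w
    assert np.allclose(2 * u - v + x, 7 * x + w)
    print(x)                            # [-5/6, -1/6, -1/6, 11/6, -5/6]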
15.
Vectors u and v are parallel (collinear) if one of them is a scalar multiple of the other one, i.e. either
u  av for some scalar a or v  bu for some scalar b or both (the two conditions are not equivalent if one
of the vectors is a zero vector, but the other one is not.)
(a)
v   4, 2, 0, 6,10, 2  does not equal ku   2k, k, 0, 3k, 5k, k  for any scalar k; v is not parallel to u
(b)
v   4, 2,0, 6, 10, 2   2u ; v is parallel to u
(c)
v   0, 0, 0, 0, 0, 0   0u ; v is parallel to u
16.
Vectors u and v are parallel (collinear) if one of them is a scalar multiple of the other one, i.e. either
u  av for some scalar a or v  bu for some scalar b or both (the two conditions are not equivalent if one
of the vectors is a zero vector, but the other one is not.)
(a)
Let v   8t, 2  .
u  av

4  8at and  1  2 a

a  12
and
t 1
v  bu

8t  4b and  2  b

b2
and
t 1
Therefore the vector  8t, 2  is parallel to  4, 1 if and only if t  1 .
(b)
Let v   8t,2t  .
u  av

4  8at and  1  2 at
v  bu

8t  4b and 2t  b


1  2 at
b0
and
and
 1  2 at - contradiction
t 0
Therefore the vector  8t, 2t  is parallel to  4, 1 if and only if t  0 .
(c)
 
Let v  1,t 2 .
u  av

4  a and  1  at 2
- contradiction
v  bu

1  4b and t 2  b
- contradiction
 
Therefore the vector 1,t 2 is not parallel to  4, 1 for any real value t .
17.
The vector equation a 1, 1, 3, 5  b  2,1, 0, 3  1, 4, 9,18  is equivalent to the linear system
1a  2b 
1
1a  1b  4
3a  0 b 
9
5a  3b 
18
1
1
 1 2
0
 1 1 4 


whose augmented matrix
has the reduced row echelon form 
0
 3 0
9



0
 5 3 18 
Therefore, the unique solution is a  3 and b  1 .
18.
0 3
1 1
.
0 0

0 0
The vector equation a  2,1, 0,1, 1  b  2, 3,1, 0, 2    8, 8, 3, 1, 7  is equivalent to the linear system
2 a  2b  8
1a  3b 
0 a  1b 
8
3
1a  0b 
1a  2b 
1
7
1
 2 2 8 
0
 1 3 8



whose augmented matrix  0
1 3 has the reduced row echelon form  0



0
 1 0 1
 0
 1 2 7 
0 1
1 3
0 0 .

0 0
0 0 
Therefore, the unique solution is a  1 and b  3 .
19.
The vector equation c1 1, 1, 0   c2  3, 2,1  c3  0,1, 4    1,1,19  is equivalent to the linear system
1c1
 3c2
 0c3
 1
1c1
 2c2


0c1

 4c3
1c2
1c3
1
 19
 1 3 0 1
 1 0 0 2


whose augmented matrix  1 2 1 1 has the reduced row echelon form  0 1 0 1 .
 0 1 4 19 
0 0 1 5
Therefore, the unique solution is c1  2, c2  1, and c3  5 .
20.
The vector equation c1  1, 0, 2   c2  2, 2,  2   c3 1,  2,1   6,12, 4  is equivalent to the linear system
1c1
 2c2

0c1
2c1
 2c2
 2c2
 2c3
 1c3
1c3
 6


12
4
1 6 
 1 2
 1 0 0 6


whose augmented matrix  0 2 2 12  has the reduced row echelon form 0 1 0
2  .
0 0 1 4 
 2 2
1 4 
Therefore, the unique solution is c1  6, c2  2, and c3  4 .
21.
The vector equation c1  2, 9, 6   c2  3, 2,1  c3 1, 7, 5   0, 5, 4  is equivalent to the linear system
2c1
9c1
6c1
 3c2
 2c2
 1c2
 1c3
 7c3
 5c3
 0
 5
 4
 1 0 1 0
 2 3 1 0 


whose augmented matrix  9 2 7 5 has the reduced row echelon form 0 1 1 0  .
0 0 0 1
 6
1 5 4 
The system has no solution.
22.
Equating the second components on both sides yields a contradictory equation 0  2.
23.
(a)
The midpoint of the segment is the terminal point of the vector
 

OM  OP  12 PQ   2, 3, 2   12  7  2, 4  3,1   2     29 ,  21 ,  21 
therefore the midpoint has coordinates  29 ,  12 ,  12  .
(b)
The desired point is the terminal point of the vector
 

ON  OP  34 PQ   2, 3, 2   34  7  2, 4  3,1   2     234 ,  49 , 14 
therefore this point has coordinates  234 ,  94 , 14  .
24.

 
When the vector u  OP1  12 OP2  OP1

 is positioned so its initial point is at the origin, its terminal point
is the midpoint of the line segment connecting the points P1  x1 , y1  and P2  x2 , y2  since
u   x1 , y1   12  x2  x1 , y2  y1  
25.
26.
27.
(a)
u  v  w   5, 5   10, 2    3, 8   2, 5
(b)
u  v  w  10, 7   3, 8    4, 9   3,  8
(a)
u  v  w   5, 5   10, 2    3, 8  18,1
(b)
u  v  w  10, 7   3, 8    4, 9    9, 24 

x1  x2
2
y y
, 12 2

The midpoint of the line segment connecting the points P  x1 , y1 , z1  and Q  x2 , y2 , z2  is
 x1  x2 y1  y2 z1  z2 
 2 , 2 , 2 


Therefore we have
 1  x2 3  y2 7  z2 
 2 , 2 , 2    4, 0, 6  .


This vector equation is equivalent to a system of three linear equations in three unknowns that is easy to
solve:
1  x2
4
2

x2  7
3  y2
0
2

y2  3
7  z2
 6
2

z2  19
We conclude that the point Q is  7, 3, 19 .
28.
Yes. Arranging the three vectors "tip-to-tail" we obtain a triangle since the terminal point of the last vector
is the same as the initial point of the first one.
29.
(a)
We have a  d  b  e  c  f  0 therefore a  b  c  d  e  f  0 .
(b)
The sum is 12  0   0 .
(c)
From part (a), b  c  d  e  f  a .
(d)
From part (a), the sum of any five vectors remaining after one is removed equals to the negative of
the removed vector.
30.
The sum of all radial vectors of a regular n -sided polygon is always 0. When consecutive vectors are
arranged "tip-to-tail", a regular n -sided polygon is obtained. (An argument similar to the one used in 29(a)
could also be used when n is even.)
True-False Exercises
(a)
False. Equivalent vectors have the same length and direction - they may have different initial points.
(b)
False. According to Definition 2, equivalent vectors must have the same number of components.
(c)
False. v and kv are parallel for any k .
(d)
True. This is a consequence of Theorem 3.1.1.
(e)
True. This is a consequence of Theorem 3.1.1.
(f)
False. At least one of the scalars must be nonzero for the vectors to be parallel.
(g)
False. For nonzero vector u , the vectors u and u are collinear and have the same length but are not
equal.
(h)
True.
(i)
False.  k  m  u  v    k  m  u   k  m  v .
(j)
True. x  85 v  12 w .
(k)
False. For instance, if v 2  2v1 then 4v1  2v 2  2v1  3v 2 .
3.2 Norm, Dot Product, and Distance in Rn
1.
(a)
v  2 2  2 2  2 2  12  2 3 ;
1
v
(b)
v  2 1 3  2, 2, 2  
 , , ;  v  
1
3
1
3
1
3
1
v
1
2 3
 2, 2, 2     13 ,  13 ,  13 
v  12  0 2  2 2  12  32  15 ;
1
v
v  115 1,0,2,1,3  

1
15

,0, 215 , 115 , 315 ;

 1v v   115 1,0,2,1,3    115 ,0,  215 ,  115 ,  315
2.
(a)
v  12   1  2 2  6 ;
2
1
v
(b)
v  16 1, 1,2  
v 
1
v
 ,  ,  ;  v   1, 1,2     , ,  
1
6
1
6
2
6
1
v
1
6
1
6
2
2
6
2


v  123  2,3,3, 1   223 , 323 , 323 ,  123 ;

2
23
,  323 ,  323 , 123

(a)
u  v   3, 5, 7  ; u  v  32   5   72  83
(b)
u  v  2 2   2   32  12   3   4 2  17  26
(c)
2u  2v   4, 4, 6    2, 6, 8   2, 2, 2  ;
2
2
2 u  2 v 
(d)
1
6
 2   32  32   1  23 ;
 1v v   123  2,3,3, 1 
3.

2
 2    2   22  12  2 3
2
2
3u  5v  w   6, 6, 9   5, 15, 20    3, 6, 4    4,15, 15 ;
3u  5v  w  4 2  152   15   466
2
4.
(a)
u  v  w   6,1,3 ; u  v  w  62  12  32  46
(b)
u  v  1,1, 1 ; u  v  12  12   1  3
(c)
3v   3, 9,12  ;
2
3v  3 v  32   9   12 2  3 12   3   4 2  234  3 26  0
2
(d)
2
u  v  2 2   2   32  12   3   4 2  17  26
2
2
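Computations like those in Exercises 1-4 are one-liners in NumPy; for instance (illustrative values):

    import numpy as np

    v = np.array([2., 2., 2.])
    print(np.linalg.norm(v))                 # ||v|| = 2*sqrt(3)
    print(v / np.linalg.norm(v))             # unit vector in the direction of v

    u = np.array([3., -1., 7.])
    print(np.linalg.norm(u - v))             # distance between u and v
    print(np.dot(u, v))                      # dot product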
5.
3u  5v  w   6, 3,12,15  15, 5, 25, 35   6, 2,1,1   27, 6, 38, 19 ;
(a)
 27    6   382   19   2570
2
3u  5v  w 
2
2
3u  5 v  w
(b)
 6    3   122  152  5 32  12   5   72   6   22  12  12
2

2
2
2
 414  5 84  42  3 46  10 21  42
u 
(c)
 2    1  42  52  46 ;
2
2
 u v   46 v  46 32  12   5   72  46 84  2 966
2
6.
2v   6, 2,10, 14  , 3w  18, 6, 3, 3
(a)
u  2v  3w
=
 2    1  42  52   6    2   102   14 
2
2
2
 182   6    3    3 
2
2
2
2
2
 46  336  378  46  4 21  3 42
(b)
u  v   5, 2,9, 2  , u  v 
 5    2   92   2   114
2

2
2

u  v w  6 114,2 114, 114, 114 ; || u  v || w  4788  6 133
7.
kv 
 2 k    3k   0 2   6 k   49k 2  7 k 2 ; this quantity equals 5 if k  75
2
2
2
or k   75
8.
kv  k 2  k 2   2 k    3k   k 2  16 k 2  4 k 2 ; this quantity equals 4 if k  1
2
2
or k  1
9.
(a)
u  v   3 2   1 2    4  4   8
u  u   3 3  11   4  4   26
v  v   2  2    2  2    4  4   24
(b)
u  v  1 2   1 2    4  3   6  2   0
u  u  11  11   4  4    6  6   54
v  v   2  2    2  2    3 3   2  2   21
10.
(a)
u  v  1 1  1 0    2  5   31  8
u  u  11  11   2  2    3 3  15
v  v   1 1   0  0    5 5  11  27
(b)
u  v   2 1   1 2   1 2    0  2    2 1  0
u  u   2  2    1 1  11   0  0    2  2   10
v  v  11   2  2    2  2    2  2   11  14
11.
(a)
d  u, v   || u  v || 
cos  ||uu||||vv|| 
(b)
(a)
 31   3 0    3 4 
32  32  32 12  02  42
2
 2715 17  551 ; the angle is acute since u  v  0
2
2
 0  3   2  2    1 4   1 4 
2
 3  2  2 2  4 2  4 2
2
2
2
 6445 ; the angle is obtuse since u  v  0
1  5    2  1   3  2    0   2   
2
2
1 5   2 1   3  2    0  2 
1  22   3   02 52 12  2 2   2 
2
d  u, v  || u  v ||
cos  ||uu||||vv|| 
2
 0   3     2  2    1  4   1  4   59
02   2    1 12
d  u, v   || u  v || 
cos  ||uu||v||v|| 
(b)
2
d  u , v   || u  v || 
cos  ||uu||||vv|| 
12.
 3  1   3  0    3  4   14
2
2
2
2
46
 141 34 ; the angle is acute since u  v  0
 0  2   1  1  1  0   1   1    2  3   10
2
2
 0  2   11  1 0   1 1   2  3 
02 12 12 12  22 22 12  0 2   1  32
2
2
2
2
 76 15 ; the angle is acute since u  v  0
13.
The angle between the two vectors is 30 , so by Formula (1) we have a  b || a || || b || cos30  452 3 .
14.
a  b  0 since the angle between the two vectors is 90
15.
(a)
u   v  w  does not make sense; v  w is a scalar, whereas the dot product is only defined for vectors
(b)
u   v  w  makes sense (the result is a scalar)
(c)
|| u  v || does not make sense; u  v is a scalar, whereas the norm is only defined for vectors
(d)
 u  v   || u || makes sense (the result is a scalar)
16.
(a)
|| u ||  || v || does not make sense: || u || and || v || are scalars, whereas the dot product is only defined
for vectors
(b)
 u  v   w does not make sense: u  v is a scalar so the vector w cannot be subtracted from it
(c)
 u  v   k makes sense (the result is a scalar)
(d)
k  u does not make sense: k is a scalar, whereas the dot product is only defined for vectors
17.
(a)
u  v   3  2   1 1   0  3   7 ;
|| u |||| v ||
 3   12  02 22   1  32  10 14
2
2
Since u  v  7  49  140  10 14  uv , the Cauchy-Schwarz inequality holds.
(b)
u  v   0 1   2 1   2 1  11  5
|| u || || v || 0 2  2 2  2 2  12 12  12  12  12  9 4  6
Since u  v  5  6 || u |||| v || , the Cauchy-Schwarz inequality holds.
18.
(a)
u  v   4 1  1 2   1 3   9 ; || u || || v || 4 2  12  12 12  2 2  32  18 14
Since u  v  9  81  252  18 14 || u || || v || , the Cauchy-Schwarz inequality holds.
(b)
u  v  1 0    2 1  11   2  5    3  2   7
uv  12  2 2  12  2 2  32 0 2  12  12  52   2   19 31
2
Since u  v  7  49  589  19 31 || u || || v || , the Cauchy-Schwarz inequality holds.
21.
22.
23.
We have || i ||  || j ||  || k ||  1 . Therefore
cos  
 v 1   v2  0    v3  0   v1
vi
 1
|| v || || i ||
|| v ||
|| v ||
cos  
 v  0    v2 1   v3  0   v2
v j
 1
|| v || || j ||
|| v ||
|| v ||
cos  
 v  0    v2  0    v3 1  v3
vk
 1
|| v || || k ||
|| v ||
|| v ||
      
cos2   cos2   cos2   ||v1||
v
2
v2
||v||
2
v3 2
||v||
v12  v22  v32
||v ||2
v2  v2  v2
 v12  v22  v32  1
1
2
3
Using the result of Exercise 21, and letting v1   a1 , b1 , c1  and v2   a2 , b2 , c2  , we can have
cos 1 cos  2  cos 1 cos  2  cos  1 cos  2 
a1
a2
b
b2
c
c2
v1  v 2
 1
 1

|| v1 || || v 2 || || v1 || || v 2 || || v1 || || v 2 || || v1 || || v 2 ||
The left-hand side is zero if and only if the right-hand side is zero; this happens if and only if v1 and v 2 are
nonzero orthogonal vectors.
24.
(a)
We have d  1,1,1 and u  1,1, 0  .
cos  ||dd||||uu || 
11  11  1 0 
12 12 12 12 12  02
 32 2 
2
3
therefore   cos 1
2
3
 35 .
(b)
The vectors d and v = (-1, 0, 1) form a right angle since
cos θ = [(1)(-1) + (1)(0) + (1)(1)] / (√(1² + 1² + 1²) √((-1)² + 0² + 1²)) = 0.

25.
Align the edges of the box with the coordinate axes so that the diagonal becomes the vector v = (10, 15, 25).
The length of this vector is ||v|| = √(10² + 15² + 25²) = 5√38, therefore
- the angle between v and the x-axis is cos⁻¹(v·i / (||v|| ||i||)) = cos⁻¹(2/√38) ≈ 71°,
- the angle between v and the y-axis is cos⁻¹(v·j / (||v|| ||j||)) = cos⁻¹(3/√38) ≈ 61°,
- the angle between v and the z-axis is cos⁻¹(v·k / (||v|| ||k||)) = cos⁻¹(5/√38) ≈ 36°.

26.
Let us assume both vectors v and w have the same number of components (otherwise v - w would be undefined).
From Theorem 3.2.5(a), we obtain ||v - w|| ≤ ||v|| + ||-w|| = ||v|| + ||w|| = 5.
The norm ||v - w|| can actually attain this upper bound: if w = -(3/2)v (so that the two vectors have opposite directions), then by Theorem 3.2.1(c),
||v - w|| = ||v + (3/2)v|| = (5/2)||v|| = 5.
Applying Theorem 3.2.5(a) to -w = (v - w) + (-v) yields ||-w|| ≤ ||v - w|| + ||-v||,
thus ||v - w|| ≥ ||w|| - ||v|| = 1.
The norm ||v - w|| attains this lower bound if w = (3/2)v (so that the two vectors have the same direction): by Theorem 3.2.1(c),
||v - w|| = ||v - (3/2)v|| = (1/2)||v|| = 1.
29.
The scalar multiple (m/||v||)v has the same direction as v and its length is (m/||v||)||v|| = m.

31.
We are looking for the force F such that F + (10 cos 60°, 10 sin 60°) + (-8, 0) = (0, 0).
This yields F = -(5, 5√3) + (8, 0) = (3, -5√3). The magnitude of F is √(9 + 75) = √84 ≈ 9.17 lb; the vector forms the angle ≈ -70.9° with the positive x-axis.

32.
We are looking for the force F such that
F + (100, 0) + (150 cos 60°, 150 sin 60°) + (120 cos 135°, 120 sin 135°) = (0, 0). This yields
F = -(100, 0) - (75, 75√3) - (-60√2, 60√2) = (-175 + 60√2, -75√3 - 60√2).
The magnitude of F is ≈ 232.9 lb; the vector forms the angle ≈ -112.8° with the positive x-axis.

True-False Exercises
(a)
True. By Theorem 3.2.1(b), ||-2v|| = |-2| ||v|| = 2||v||.
(b)
True.
(c)
False. The norm can be zero (for the zero vector).
(d)
True. The two vectors are (1/||v||)v and -(1/||v||)v.
(e)
True. This follows from Formula (13).
(f)
False. The first expression does not make sense since the scalar u  v cannot be added to a vector.
(g)
False. For example, let u  1,0  , v   0,1 , and w   0,2  . We have v  w even though u  v  u  w .
(h)
False. For example, for u  1,1   0,0  and v  1, 1   0,0  we have u  v  0 .
(i)
True. Cosine of such angle cannot be positive, therefore neither can u  v .
(j)
True. Applying triangle inequality twice, || u  v  w ||  || u  v ||  || w ||  || u ||  || v ||  || w || .
3.3 Orthogonality
1.
2.
(a)
u  v   6  2   1 0    4  3  0 therefore u and v are orthogonal vectors
(b)
u  v   0 1   0 1   11  1  0 therefore u and v are not orthogonal vectors
(c)
u  v   3 4    2 1  1 3   3 7  4  0 therefore u and v are not orthogonal vectors
(d)
u  v   5 4    4 1   0  3   3 7  3  0 therefore u and v are not orthogonal vectors
(a)
u  v   2  5   3 7  11  0 therefore u and v are not orthogonal vectors
(b)
u  v  1 0   1 0   1 0   0 therefore u and v are orthogonal vectors
(c)
u  v  1 3   5 3   4  3  0 therefore u and v are orthogonal vectors
(d)
u  v   4  1  1 5   2  3   51  0 therefore u and v are orthogonal vectors
3.
2  x   1   1 y  3   1 z   2    0 can be rewritten as 2  x  1   y  3   z  2   0
4.
1 x  1  9  y  1  8  z  4   0 can be rewritten as x  1  9  y  1  8  z  4   0
5.
0  x  2   0  y  0   2  z  0   0 can be rewritten as 2 z  0
6.
1 x  0   2  y  0   3  z  0   0 can be rewritten as x  2 y  3z  0
7.
The plane 4 x  y  2 z  5 has a normal vector  4, 1, 2  .
The plane 7 x  3 y  4 z  8 has a normal vector  7, 3, 4  .
The two normal vectors are not parallel (neither of them can be expressed as a scalar multiple of the other
one) therefore the planes are not parallel either.
8.
The plane x  4 y  3z  2  0 has a normal vector 1, 4, 3 .
The plane 3 x  12 y  9 z  7  0 has a normal vector  3, 12, 9  .
The two normal vectors are parallel:  3, 12, 9   3 1, 4, 3 therefore the planes are parallel as well.
3.3 Orthogonality
9.
15
Rewriting the first plane equation 2 y  8 x  4 z  5 as 8 x  2 y  4 z  5 yields a normal vector  8, 2, 4  .
Rewriting the second plane equation x  12 z  14 y as x  14 y  21 z  0 yields a normal vector 1,  14 ,  12  .
The two normal vectors are parallel:  8, 2, 4   8 1,  14 ,  12  therefore the planes are parallel as well.
10.
The normal vectors of the two planes are parallel:  8, 2, 4   2  4,1,2  therefore the planes are parallel
as well.
11.
Normal vectors of the two planes are not orthogonal:
 3, 1,1  1, 0, 2    31   1 0   1 2   5  0
therefore the given planes are not perpendicular.
12.
Normal vectors of the two planes are orthogonal:
1, 2,3   2,5,4   1 2    2  5   3 4   0
therefore the given planes are perpendicular.
13.
14.
15.
u a
(a)
From Formula (12), proja u  ||a || 
(b)
From Formula (12), proja u  ||a || 
(a)
From Formula (12), proja u  ||a || 
(b)
From Formula (12), proja u  ||a || 
u a
u a
u a
1 4    2  3
 225  25
 4 2   32
 3  2    0  3    4  3 
22  32  32
 5 2    6  1
22   1
2
 1822
 45
 31   2  2    6  7 
12  2 2   7 
2
 4354  3436
u  a   6  3   2  9  0 , a   3    9   90 ,
2
2
2
the vector component of u along a is proja u  ||ua||a2 a  900  3, 9    0, 0  ,
the vector component of u orthogonal to a is u  proja u   6, 2    0, 0    6, 2 
16.
u  a   1 2    2  3  4 , a   2   32  13 ,
2
2
the vector component of u along a is proja u  ||ua||a2 a   134  2,3    138 ,  12
,
13 
21
the vector component of u orthogonal to a is u  proja u   1, 2    138 ,  12
   13
,  14
13 
13 
17.
u  a   31  1 0    7 5  32 , || a ||2  12  02  52  26 ,
the vector component of u along a is
80
proja u  ||ua||a2 a  2632 1,0, 5     32
, 0,  160
   16
, 0,  13
,
26
26 
13
the vector component of u orthogonal to a is
80
u  proja u   3,1, 7     16
, 0,  13
   1355 ,1,  1311 
13
18.
u  a   2 1   0  2   1 3  5 , a  12  2 2  32  14 ,
2
15
the vector component of u along a is proja u  ||ua||a2 a  145 1,2,3    145 , 75 , 14
,
15
the vector component of u orthogonal to a is u  proja u   2,0,1   145 , 75 , 14
   1423 ,  75 ,  141 
19.
u  a   2  4   1 4   1 2    2  2   2 , a  4 2   4   2 2   2   40 ,
2
2
2
the vector component of u along a is proja u  ||ua||a2 a  402  4, 4, 2, 2    15 ,  15 , 101 ,  101  ,
the vector component of u orthogonal to a is
21
u  proja u   2,1,1, 2    15 ,  15 , 101 ,  101    95 , 65 , 109 , 10

20.
u  a   5 2    0 1   3 1   7 1  6 , || a ||2  2 2  12   1   1  7 ,
2
2
the vector component of u along a is proja u  ||ua||a2 a  67  2,1, 1, 1   127 , 67 ,  67 ,  67  ,
the vector component of u orthogonal to a is
u  proja u   5,0, 3,7    127 , 67 ,  67 ,  67    237 ,  67 ,  157 , 557 
21.
From Theorem 3.3.4(a) the distance between the point and the line is D 
 4  3    3 1  4
22.
From Theorem 3.3.4(a) the distance between the point and the line is D 
1 1   3 4   2
23.
From Theorem 3.3.4(a) the distance between the point and the line is D 
 4  2   1 5   2
4 2  32
12  ( 3)2
4 2 12
 525  1
 1110
 117
(the equation of the line had to be rewritten in the form ax  by  c  0 as 4 x  y  2  0 )
24.
From Theorem 3.3.4(a) the distance between the point and the line is D 
 3 1  1 8   5
32 12
 610
(the equation of the line had to be rewritten in the form ax  by  c  0 as 3 x  y  5  0 )
25.
From Theorem 3.3.4(b) the distance between the point and the plane is
D
1 3   2 1   2  2   4
12  2 2   2 
2
 59  35 (the equation of the plane had to be rewritten in the form
ax  by  cz  d  0 as x  2 y  2 z  4  0 )
26.
From Theorem 3.3.4(b) the distance between the point and the plane is D 
 2  1   5 1   6  2   4
22  52   6 
2
 2365 (the
equation of the plane had to be rewritten in the form ax  by  cz  d  0 as 2 x  5 y  6 z  4  0 )
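The point-plane distance formula of Theorem 3.3.4(b) is easy to wrap in a helper; a short sketch (illustrative), reproducing the value 5/3 from Exercise 25:

    import numpy as np

    def point_plane_distance(p, n, d):
        """Distance from point p to the plane n . x + d = 0 (Theorem 3.3.4(b))."""
        return abs(np.dot(n, p) + d) / np.linalg.norm(n)

    # Exercise 25: point (3, 1, -2), plane x + 2y - 2z - 4 = 0
    print(point_plane_distance(np.array([3., 1., -2.]),
                               np.array([1., 2., -2.]), -4.0))   # 5/3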
27.
First, select an arbitrary point in the plane 2 x  y  z  5 by setting x  y  0 ; we obtain P0  0,0, 5 . From
Theorem 3.3.4(b) the distance between P0 and the plane
4 x  2 y  2 z  12  0 is D 
28.
 4  0    2  0    2  5 12
 4  2  2 2  2 2
 2224  116
First, select an arbitrary point in the plane 2 x  y  z  1 by setting x  y  0 ; we obtain P0  0,0,1 . From
Theorem 3.3.4(b) the distance between P0 and the plane 2 x  y  z  1  0 is
3.3 Orthogonality
D
29.
 2  0    1 0   11 1
22   1 12
2
 26
In order for w   a, b, c  to be orthogonal to both 1,0,1 and  0,1,1 , we must have a  c  0 and b  c  0.
1 0 1 0 
These equations form a linear system whose augmented matrix 
 is already in reduced row
0 1 1 0 
echelon form. For arbitrary real number t , the solutions are a  t , b  t , c  t .
Since w is also required to be a unit vector, we must have w 
 t    t   t 2  3t 2  1 .
2
2
This yields t   13 , consequently there are two possible vectors that satisfy the given conditions:
  ,  ,  and  , ,  .
1
3
30.
31.
32.
1
3
1
3
1
3
1
3
1
3
(a)
v  w   a  b    b  a   0 therefore v and w are orthogonal vectors
(b)
 3,2  and  3, 2 
(c)
 4,3 and  4,3


AB   2  1, 0  1, 3  1   3,  1, 2  , AC   3  1,  1  1,1  1   4,  2, 0  ,

BC   3   2  ,  1  0, 1  3    1, 1, 2 
 
AB  BC   3  1   1 1   2  2   0
therefore the points A , B , and C form the vertices of a right triangle


AB   4  3, 3  0, 0  2   1, 3, 2  , AC   8  3,1  0,  1  2    5,1, 3  ,

BC   8  4,1  3, 1  0    4, 2, 1
 
AB  BC  1 4    3  2    2  1  0
therefore the points A , B , and C form the vertices of a right triangle
33.
Assuming v  w1  v  w 2  0 and using Theorem 3.2.2, we have
v   k1w1  k2 w2   v   k1w1   v   k2 w2   k1  v  w1   k2  v  w2    k1  0    k2  0   0 .
34.
Yes.
One possible scenario is when u  a - in this case, proja u  proju a  proju u  u .
Another possibility is to take u and a to be orthogonal vectors, so that proja u  proju a  0 .
35.
By Formula (14), the standard matrix for the reflection H_{π/3} is
H_{π/3} = [cos(2π/3) sin(2π/3); sin(2π/3) -cos(2π/3)] = [-1/2 √3/2; √3/2 1/2].
From [-1/2 √3/2; √3/2 1/2][3; 4] = [-3/2 + 2√3; 3√3/2 + 2] ≈ [1.96; 4.60] we obtain H_{π/3}(3, 4) ≈ (1.96, 4.60).

36.
By Formula (14), the standard matrix for the reflection H_{π/4} is
H_{π/4} = [cos(π/2) sin(π/2); sin(π/2) -cos(π/2)] = [0 1; 1 0].
From [0 1; 1 0][1; 2] = [2; 1] we obtain H_{π/4}(1, 2) = (2, 1).

37.
By Formula (12), the standard matrix for the projection P_{π/3} is
P_{π/3} = [cos²(π/3) sin(π/3)cos(π/3); sin(π/3)cos(π/3) sin²(π/3)] = [1/4 √3/4; √3/4 3/4].
From [1/4 √3/4; √3/4 3/4][3; 4] = [3/4 + √3; 3√3/4 + 3] ≈ [2.48; 4.30] we obtain P_{π/3}(3, 4) ≈ (2.48, 4.30).

38.
By Formula (12), the standard matrix for the projection P_{π/4} is
P_{π/4} = [cos²(π/4) sin(π/4)cos(π/4); sin(π/4)cos(π/4) sin²(π/4)] = [1/2 1/2; 1/2 1/2].
From [1/2 1/2; 1/2 1/2][1; 2] = [3/2; 3/2] we obtain P_{π/4}(1, 2) = (3/2, 3/2).

40.
W = ||F|| ||PQ|| cos(π/3) = 125 ft-lb

41.
W = ||F|| ||PQ|| cos(π/4) = (500)(100)(√2/2) = 25,000√2 ≈ 35,355 N·m
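Formulas (12) and (14) can be packaged as functions; the following NumPy sketch reproduces the values computed in Exercises 35 and 37 (a sketch only, not the text's notation):

    import numpy as np

    def reflection(theta):
        """Standard matrix H_theta for reflection about the line at angle theta (Formula (14))."""
        c, s = np.cos(2 * theta), np.sin(2 * theta)
        return np.array([[c, s], [s, -c]])

    def projection(theta):
        """Standard matrix P_theta for orthogonal projection onto that line (Formula (12))."""
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c * c, s * c], [s * c, s * s]])

    print(reflection(np.pi / 3) @ [3, 4])    # ~ [1.96, 4.60]
    print(projection(np.pi / 3) @ [3, 4])    # ~ [2.48, 4.30]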
True-False Exercises
(a)
True.  3, 1, 2    0, 0, 0   0 .
(b)
True. By Theorem 3.2.2(c) and Theorem 3.2.3(e),  ku    mv    km  u  v    km  0   0 .
(c)
True. This follows from Theorem 3.3.2.
(d)
True. proj_a(proj_b(u)) = proj_a((u·b/||b||²) b) = (u·b/||b||²)(b·a/||a||²) a = 0a = 0
(proj_b(u) has the same direction as b, so it is also orthogonal to a).
(e)
True. proj_a(proj_a(u)) = proj_a((u·a/||a||²) a) = (u·a/||a||²)(a·a/||a||²) a = (u·a/||a||²) a = proj_a(u)
(proj_a(u) = ka for some scalar k, and then proj_a(ka) = ka).
(f)
False. For instance, let u be a nonzero vector orthogonal to a. Then
proj_a(u) = proj_a(2u) = 0 even though u ≠ 2u.
(g)
False. By Theorem 3.2.5(a), || u  v ||  || u ||  || v || . This becomes an equality only when u and v are
collinear vectors in the same direction. (For instance, || (1,0)  (0,1) ||  || (1,1) ||  2 does not equal
|| (1,0) ||  || (0,1) ||  1  1  2 .)
3.4 The Geometry of Linear Systems
1.
The vector equation in Formula (5) can be expressed as  x, y    4,1  t  0, 8  .
This yields the parametric equations x  4 , y  1  8t .
2.
The vector equation in Formula (5) can be expressed as  x, y    2, 1  t  4, 2  .
This yields the parametric equations x  2  4t , y  1  2t .
3.
The vector equation in Formula (5) can be expressed as  x, y, z   t  3, 0,1 .
This yields the parametric equations x  3t , y  0 , z  t .
4.
The vector equation in Formula (5) can be expressed as  x, y, z    9,3,4   t  1,6,0  .
This yields the parametric equations x  9  t , y  3  6t , z  4 .
5.
A point on the line:  3, 6 ; a vector parallel to the line:  5, 1 .
6.
A point on the line:  0,7,4  ; a vector parallel to the line:  4,0,3 .
7.
Rewriting the vector equation as  x, y    4  6t,6  6t  yields a point on the line:  4, 6  and a vector
parallel to the line:  6, 6  .
8.
A point on the line:  0, 5,1 ; a vector parallel to the line:  0,5, 1 .
9.
The vector equation in Formula (6) can be expressed as
 x, y, z    3,1, 0   t1  0, 3,6   t2  5,1, 2  .
This yields the parametric equations x  3  5t2 , y  1  3t1  t2 , z  6t1  2t2 .
10.
The vector equation in Formula (6) can be expressed as
 x, y, z    0,6, 2   t1  0,9, 1  t2  0, 3,0  .
This yields the parametric equations x  0 , y  6  9t1  3t2 , z  2  t1 .
11.
The vector equation in Formula (6) can be expressed as
 x, y, z    1,1, 4   t1  6, 1, 0   t2  1,3,1 .
This yields the parametric equations x  1  6t1  t2 , y  1  t1  3t2 , z  4  t2 .
12.
The vector equation in Formula (6) can be expressed as
 x, y, z    0,5, 4   t1  0,0, 5  t2 1, 3, 2  .
This yields the parametric equations x  t2 , y  5  3t2 , z  4  5t1  2t2 .
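Vector equations like those in Exercises 9-12 are easy to evaluate numerically; this sketch samples points on the plane of Exercise 11 and confirms that they satisfy the parametric equations (the sample parameter values are arbitrary):

    import numpy as np

    x0 = np.array([-1., 1., 4.])
    v1 = np.array([6., -1., 0.])
    v2 = np.array([-1., 3., 1.])

    for t1, t2 in [(0.0, 0.0), (1.0, -2.0), (0.5, 3.0)]:
        p = x0 + t1 * v1 + t2 * v2           # point on the plane
        # its coordinates match the parametric equations
        assert np.allclose(p, [-1 + 6*t1 - t2, 1 - t1 + 3*t2, 4 + t2])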
13.
We find a nonzero vector orthogonal to v, e.g.,  3, 2  . The vector equation of the line passing through
 0,0  and parallel to  3, 2 can be expressed as  x, y   t  3, 2  . Parametric equations are x  3t and
y  2t .
14.
We find a nonzero vector orthogonal to v, e.g.,  4,1 . The vector equation of the line passing through
 0,0  and parallel to  4,1 can be expressed as  x, y   t  4,1 . Parametric equations are x  4t and y  t .
15.
We find two nonparallel nonzero vectors orthogonal to v, e.g.,  5,0,4  and  0,1,0  . The vector equation of
the plane that contains the origin and these two vectors can be expressed as
 x, y, z  = t1  5,0,4  + t2 (0,1,0) . Parametric equations are x  5t1 , y  t2 , and z  4t1 .
16.
We find two nonparallel nonzero vectors orthogonal to v, e.g.,  1,3,0  and  0,6,1 . The vector equation
of the plane that contains the origin and these two vectors can be expressed as
 x, y, z   t1  1,3,0   t2  0,6,1 . Parametric equations are x  t1 , y  3t1  6t2 , and z  t2 .
17.
The augmented matrix of the linear system
[ 1 1 1 | 0 ]
[ 2 2 2 | 0 ]
[ 3 3 3 | 0 ]
has the reduced row echelon form
[ 1 1 1 | 0 ]
[ 0 0 0 | 0 ]
[ 0 0 0 | 0 ].
A general solution of the system, x1 = -s - t, x2 = s, x3 = t, expressed in vector form as x = (-s - t, s, t), is orthogonal to the rows of the coefficient matrix of the original system, r1 = (1, 1, 1), r2 = (2, 2, 2), and r3 = (3, 3, 3), since r1 · x = (1)(-s - t) + (1)(s) + (1)(t) = 0, r2 · x = (2)(-s - t) + (2)(s) + (2)(t) = 0, and r3 · x = (3)(-s - t) + (3)(s) + (3)(t) = 0.
18.
The augmented matrix of the linear system
[ 1 3 -4 | 0 ]
[ 2 6 -8 | 0 ]
has the reduced row echelon form
[ 1 3 -4 | 0 ]
[ 0 0 0 | 0 ].
A general solution of the system, x1 = -3s + 4t, x2 = s, x3 = t, expressed in vector form as x = (-3s + 4t, s, t), is orthogonal to the rows of the coefficient matrix of the original system, r1 = (1, 3, -4) and r2 = (2, 6, -8), since r1 · x = (1)(-3s + 4t) + (3)(s) + (-4)(t) = 0 and r2 · x = (2)(-3s + 4t) + (6)(s) + (-8)(t) = 0.
19.
The augmented matrix of the linear system
[ 1 5 1 2 -1 | 0 ]
[ 1 -2 -1 3 2 | 0 ]
has the reduced row echelon form
[ 1 0 -3/7 19/7 8/7 | 0 ]
[ 0 1 2/7 -1/7 -3/7 | 0 ].
A general solution of the system, x1 = (3/7)r - (19/7)s - (8/7)t, x2 = -(2/7)r + (1/7)s + (3/7)t, x3 = r, x4 = s, x5 = t, expressed in vector form as x = ((3/7)r - (19/7)s - (8/7)t, -(2/7)r + (1/7)s + (3/7)t, r, s, t), is orthogonal to the rows of the coefficient matrix of the original system, r1 = (1, 5, 1, 2, -1) and r2 = (1, -2, -1, 3, 2), since
r1 · x = (1)((3/7)r - (19/7)s - (8/7)t) + (5)(-(2/7)r + (1/7)s + (3/7)t) + (1)(r) + (2)(s) + (-1)(t) = 0 and
r2 · x = (1)((3/7)r - (19/7)s - (8/7)t) + (-2)(-(2/7)r + (1/7)s + (3/7)t) + (-1)(r) + (3)(s) + (2)(t) = 0.
20.
The augmented matrix of the linear system
[ 1 3 -4 | 0 ]
[ 1 2 3 | 0 ]
has the reduced row echelon form
[ 1 0 17 | 0 ]
[ 0 1 -7 | 0 ].
A general solution of the system, x1 = -17t, x2 = 7t, x3 = t, expressed in vector form as x = (-17t, 7t, t), is orthogonal to the rows of the coefficient matrix of the original system, r1 = (1, 3, -4) and r2 = (1, 2, 3), since r1 · x = (1)(-17t) + (3)(7t) + (-4)(t) = 0 and r2 · x = (1)(-17t) + (2)(7t) + (3)(t) = 0.
21.
(a)
Theorem 3.4.3 yields the following homogeneous linear system that satisfies our requirements:
x + y + z = 0
-2x + 3y = 0
(b)
A straight line passing through the origin; this line is parallel to any vector that is orthogonal to both a and b.
(c)
The augmented matrix of the system obtained in part (a) has the reduced row echelon form
[ 1 0 3/5 | 0 ]
[ 0 1 2/5 | 0 ].
A general solution of the system is x = -(3/5)t, y = -(2/5)t, z = t. It can also be expressed in vector form as u = (x, y, z) = (-(3/5)t, -(2/5)t, t). To confirm that Theorem 3.4.3 holds, we verify that u is orthogonal to both a and b:
u · a = (-(3/5)t)(1) + (-(2/5)t)(1) + (t)(1) = 0, u · b = (-(3/5)t)(-2) + (-(2/5)t)(3) + (t)(0) = 0.
22.
(a)
Theorem 3.4.3 yields the following homogeneous linear system that satisfies our requirements:
3x - 2y + z = 0
-2y - 2z = 0
(b)
A straight line passing through the origin; this line is parallel to any vector that is orthogonal to both a and b.
(c)
The augmented matrix of the system obtained in part (a) has the reduced row echelon form
[ 1 0 1 | 0 ]
[ 0 1 1 | 0 ].
A general solution of the system is x = -t, y = -t, z = t. It can also be expressed in vector form as u = (x, y, z) = (-t, -t, t). To confirm that Theorem 3.4.3 holds, we verify that u is orthogonal to both a and b:
u · a = (-t)(3) + (-t)(-2) + (t)(1) = 0, u · b = (-t)(0) + (-t)(-2) + (t)(-2) = 0.
23.
(a)
The image of the line x = x0 + tv under multiplication by A is y = A(x0 + tv) = Ax0 + tAv.
Since x = x0 + tv is a line, v ≠ 0. Observe that this implies Av ≠ 0 since A is invertible. Therefore, by Definition 1, y = Ax0 + tAv is also a line.
(b)
Using part (a) above, the image under A of the line x = (-1, 3) + t(2, 1) is given by
Ax = [ -2 1 ; 3 4 ] ( [ -1 ; 3 ] + t [ 2 ; 1 ] ) = [ (-2)(-1) + (1)(3) ; (3)(-1) + (4)(3) ] + t [ (-2)(2) + (1)(1) ; (3)(2) + (4)(1) ] = [ 5 - 3t ; 9 + 10t ].
Hence, Ax can be expressed as (x, y) = (5, 9) + t(-3, 10). Equating corresponding components on the two sides of this equation yields the parametric equations x = 5 - 3t, y = 9 + 10t.
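Part (b) can be replayed numerically. A small sketch, assuming the matrix A = [ -2 1 ; 3 4 ] recovered in the computation above:

import numpy as np

A  = np.array([[-2, 1],
               [ 3, 4]])   # invertible, since det(A) = -11 is nonzero
x0 = np.array([-1, 3])     # point on the original line
v  = np.array([ 2, 1])     # direction vector of the original line

# The image of x = x0 + t v under A is A x0 + t (A v), again a line.
print(A @ x0)  # [5 9]   -> image point
print(A @ v)   # [-3 10] -> image direction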
24.
In vector form, x = (1 - t)(2, -3, 1) + t(4, 1, -2) = (2 + 2t, -3 + 4t, 1 - 3t), where 0 ≤ t ≤ 1. Computing Ax componentwise yields Ax = (19 - 9t, -5 + 12t, -11 + 17t).
Hence, Ax can be expressed in vector form as (x, y, z) = (19, -5, -11) + t(-9, 12, 17), where 0 ≤ t ≤ 1.
True-False Exercises
(a)
True. This follows from Definition 1.
(b)
False. We need two vectors parallel to the plane that are not collinear.
(c)
True. This follows from Theorem 3.4.1.
(d)
True.
If b = 0 then by Theorem 3.4.3, all solution vectors of Ax = b are orthogonal to the row vectors of A.
If all solution vectors of Ax = b are orthogonal to the row vectors of A, r1, …, rm, then the ith component of the product Ax is ri · x = 0, so we must have b = 0.
(e)
False. By Theorem 3.4.4, the general solution of Ax = b can be obtained by adding any specific solution of Ax = b to the general solution of Ax = 0.
(f)
True. Subtracting Ax2 = b from Ax1 = b yields Ax1 - Ax2 = b - b = 0, i.e., A(x1 - x2) = 0.
3.5 Cross Product
1.
(a)
v × w = (2·7 - (-3)·6, -(0·7 - (-3)·2), 0·6 - 2·2) = (32, -6, -4)
(b)
w × v = (6·(-3) - 7·2, -(2·(-3) - 7·0), 2·2 - 6·0) = (-32, 6, 4)
(c)
(u + v) × w = (3, 4, -4) × (2, 6, 7) = (4·7 - (-4)·6, -(3·7 - (-4)·2), 3·6 - 4·2) = (52, -29, 10)
(d)
Using the result of part (a),
v · (v × w) = (0, 2, -3) · (32, -6, -4) = (0)(32) + (2)(-6) + (-3)(-4) = 0
(e)
v × v = (0, 0, 0)
(f)
(u - 3w) × (u - 3w) = (-3, -16, -22) × (-3, -16, -22) = (0, 0, 0)
2.
(a)
u × v = (2·(-3) - (-1)·2, -(3·(-3) - (-1)·0), 3·2 - 2·0) = (-4, 9, 6)
(b)
Using the result of part (a), -(u × v) = -(-4, 9, 6) = (4, -9, -6)
(c)
u × (v + w) = (3, 2, -1) × (2, 8, 4) = (2·4 - (-1)·8, -(3·4 - (-1)·2), 3·8 - 2·2) = (16, -14, 20)
(d)
Using the result of Exercise 1(b),
w · (w × v) = (2, 6, 7) · (-32, 6, 4) = (2)(-32) + (6)(6) + (7)(4) = 0
(e)
w × w = (0, 0, 0)
(f)
(7v - 3u) × (7v - 3u) = (-9, 8, -18) × (-9, 8, -18) = (0, 0, 0)
3.
By Lagrange's identity (Theorem 3.5.1(c)) and Formula (18) in Section 3.2, we have
|| u × w ||² = || u ||² || w ||² - (u · w)² = (u · u)(w · w) - (u · w)².
u × w = (2·7 - (-1)·6, -(3·7 - (-1)·2), 3·6 - 2·2) = (20, -23, 14)
|| u × w ||² = (20)² + (-23)² + (14)² = 1125
(u · u)(w · w) - (u · w)² = (3² + 2² + (-1)²)(2² + 6² + 7²) - ((3)(2) + (2)(6) + (-1)(7))² = (14)(89) - (11)² = 1246 - 121 = 1125
4.
By Lagrange's identity (Theorem 3.5.1(c)) and Formula (18) in Section 3.2, we have
|| v × u ||² = || v ||² || u ||² - (v · u)² = (v · v)(u · u) - (v · u)².
v × u = (2·(-1) - (-3)·2, -(0·(-1) - (-3)·3), 0·2 - 2·3) = (4, -9, -6)
|| v × u ||² = (4)² + (-9)² + (-6)² = 133
(v · v)(u · u) - (v · u)² = (0² + 2² + (-3)²)(3² + 2² + (-1)²) - ((0)(3) + (2)(2) + (-3)(-1))² = (13)(14) - (7)² = 182 - 49 = 133
5.
v × w = (2·7 - (-3)·6, -(0·7 - (-3)·2), 0·6 - 2·2) = (32, -6, -4)
u × (v × w) = (3, 2, -1) × (32, -6, -4) = (2·(-4) - (-1)·(-6), -(3·(-4) - (-1)·32), 3·(-6) - 2·32) = (-14, -20, -82)
By Theorem 3.5.1(d),
u × (v × w) = (u · w)v - (u · v)w
= ((3)(2) + (2)(6) + (-1)(7))(0, 2, -3) - ((3)(0) + (2)(2) + (-1)(-3))(2, 6, 7)
= 11(0, 2, -3) - 7(2, 6, 7) = (0, 22, -33) - (14, 42, 49) = (-14, -20, -82)
6.
u × v = (2·(-3) - (-1)·2, -(3·(-3) - (-1)·0), 3·2 - 2·0) = (-4, 9, 6)
(u × v) × w = (-4, 9, 6) × (2, 6, 7) = (9·7 - 6·6, -((-4)·7 - 6·2), (-4)·6 - 9·2) = (27, 40, -42)
By Theorem 3.5.1(e),
(u × v) × w = (u · w)v - (v · w)u
= ((3)(2) + (2)(6) + (-1)(7))(0, 2, -3) - ((0)(2) + (2)(6) + (-3)(7))(3, 2, -1)
= 11(0, 2, -3) + 9(3, 2, -1) = (0, 22, -33) + (27, 18, -9) = (27, 40, -42)
7.
u × v = (-6, 4, 2) × (3, 1, 5) = (4·5 - 2·1, -((-6)·5 - 2·3), (-6)·1 - 4·3) = (18, 36, -18) is orthogonal to both u and v.
8.
u × v = (1, 1, -2) × (2, -1, 2) = (1·2 - (-2)·(-1), -(1·2 - (-2)·2), 1·(-1) - 1·2) = (0, -6, -3) is orthogonal to both u and v.
9.
u × v = (1, -1, 2) × (0, 3, 1) = ((-1)·1 - 2·3, -(1·1 - 2·0), 1·3 - (-1)·0) = (-7, -1, 3)
The area of the parallelogram determined by u and v is || u × v || = √((-7)² + (-1)² + 3²) = √59.
10.
u × v = (3, -1, 4) × (6, -2, 8) = ((-1)·8 - 4·(-2), -(3·8 - 4·6), 3·(-2) - (-1)·6) = (0, 0, 0)
The area of the parallelogram determined by u and v is || u × v || = √(0² + 0² + 0²) = 0 (the vectors are parallel: v = 2u).
11.
P1P2 = (3, 2) = P4P3 and P1P4 = (3, 1) = P2P3, so the quadrilateral is a parallelogram. Viewing these as vectors in 3-space, we obtain
P1P2 × P1P4 = (3, 2, 0) × (3, 1, 0) = (2·0 - 0·1, -(3·0 - 0·3), 3·1 - 2·3) = (0, 0, -3).
The area of the parallelogram is || P1P2 × P1P4 || = √(0² + 0² + (-3)²) = 3.
12.
P1P2 = (2, 2) = P4P3 and P1P4 = (4, 0) = P2P3, so the quadrilateral is a parallelogram. Viewing these as vectors in 3-space, we obtain
P1P2 × P1P4 = (2, 2, 0) × (4, 0, 0) = (2·0 - 0·0, -(2·0 - 0·4), 2·0 - 2·4) = (0, 0, -8).
The area of the parallelogram is || P1P2 × P1P4 || = √(0² + 0² + (-8)²) = 8.
13.
We have AB = (1, 4) and AC = (3, -2). Viewing these as vectors in 3-space, we obtain
AB × AC = (1, 4, 0) × (3, -2, 0) = (4·0 - 0·(-2), -(1·0 - 0·3), 1·(-2) - 4·3) = (0, 0, -14).
The area of the triangle is (1/2) || AB × AC || = (1/2) √(0² + 0² + (-14)²) = 7.
14.
We have AB = (1, 1) and AC = (2, -4). Viewing these as vectors in 3-space, we obtain
AB × AC = (1, 1, 0) × (2, -4, 0) = (1·0 - 0·(-4), -(1·0 - 0·2), 1·(-4) - 1·2) = (0, 0, -6).
The area of the triangle is (1/2) || AB × AC || = (1/2) √(0² + 0² + (-6)²) = 3.
15.
P1P2 = (-1, 5, 2), P1P3 = (2, 0, 3)
P1P2 × P1P3 = (5·3 - 2·0, -((-1)·3 - 2·2), (-1)·0 - 5·2) = (15, 7, -10).
The area of the triangle is (1/2) || P1P2 × P1P3 || = (1/2) √(15² + 7² + (-10)²) = √374 / 2.
16.
PQ = (-1, 4, 2), PR = (5, 2, 6)
PQ × PR = (4·6 - 2·2, -((-1)·6 - 2·5), (-1)·2 - 4·5) = (20, 16, -22).
The area of the triangle is (1/2) || PQ × PR || = (1/2) √(20² + 16² + (-22)²) = √285.
17.
From Theorem 3.5.4(b), the volume of the parallelepiped is the absolute value of the determinant of the matrix whose rows are the three given vectors, which equals 16.
18.
From Theorem 3.5.4(b), the volume of the parallelepiped is equal to
| det [ 3 1 2 ; 4 5 1 ; 1 2 4 ] | = | 45 | = 45.
1 2
19.
1
3 0 2  16  0 therefore by Theorem 3.5.5 these vectors do not lie in the same plane when they
5 4 0
have the same initial point.
20.
det [ 5 2 1 ; 4 1 1 ; 1 1 0 ] = 0, therefore by Theorem 3.5.5 these vectors lie in the same plane when they have the same initial point.
21.
From Formula (7), u · (v × w) = -92.
22.
From Formula (7), u · (v × w) = 10.
23.
From Formula (7), u · (v × w) = det [ a 0 0 ; 0 b 0 ; 0 0 c ] = abc.
24.
From Formula (7), i · (j × k) = det [ 1 0 0 ; 0 1 0 ; 0 0 1 ] = 1.
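Formula (7) says the scalar triple product is a 3 × 3 determinant; both routes give the same number. A short sketch (helper name ours), using Exercise 23 with a = 2, b = 3, c = 5 and Exercise 24:

import numpy as np

def triple(u, v, w):
    """Scalar triple product u . (v x w)."""
    return np.dot(u, np.cross(v, w))

print(triple([2, 0, 0], [0, 3, 0], [0, 0, 5]))         # 30 = abc
print(round(np.linalg.det(np.diag([2.0, 3.0, 5.0]))))  # 30, via Formula (7)
print(triple([1, 0, 0], [0, 1, 0], [0, 0, 1]))         # 1 = i . (j x k)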
25.
(a)
u · (w × v) = det [ u1 u2 u3 ; w1 w2 w3 ; v1 v2 v3 ] can be obtained from u · (v × w) = det [ u1 u2 u3 ; v1 v2 v3 ; w1 w2 w3 ] by interchanging the second row and the third row. This reverses the sign of the determinant, therefore u · (w × v) = -3.
(b)
By Theorem 3.2.2(a), (v × w) · u = u · (v × w) = 3.
(c)
w · (u × v) = det [ w1 w2 w3 ; u1 u2 u3 ; v1 v2 v3 ] can be obtained from det [ u1 u2 u3 ; w1 w2 w3 ; v1 v2 v3 ] by interchanging the first row and the second row.
The latter determinant can be obtained from u · (v × w) = det [ u1 u2 u3 ; v1 v2 v3 ; w1 w2 w3 ] by interchanging the second row and the third row.
Overall, we reversed the sign of the determinant twice, therefore w · (u × v) = (-1)(-1)(3) = 3.
26.
(a)
v · (u × w) = det [ v1 v2 v3 ; u1 u2 u3 ; w1 w2 w3 ] can be obtained from u · (v × w) = det [ u1 u2 u3 ; v1 v2 v3 ; w1 w2 w3 ] by interchanging the first row and the second row. This reverses the sign of the determinant, therefore v · (u × w) = -3.
(b)
(u × w) · v = v · (u × w) = -3, as shown in part (a) above.
(c)
v · (w × w) = det [ v1 v2 v3 ; w1 w2 w3 ; w1 w2 w3 ] = 0 since this determinant has two equal rows (this follows from Theorem 2.2.5).
27.
(a)
From AB = (-1, 2, 2) and AC = (1, 1, -1) we obtain
AB × AC = (2·(-1) - 2·1, -((-1)·(-1) - 2·1), (-1)·1 - 2·1) = (-4, 1, -3).
The area of the triangle is (1/2) || AB × AC || = (1/2) √((-4)² + 1² + (-3)²) = √26 / 2.
(b)
Denoting the altitude from C to AB by h, we must have (1/2) || AB || h = √26 / 2.
Since || AB || = √((-1)² + 2² + 2²) = 3, we conclude that h = √26 / 3.
28.
u × v = (3·6 - (-6)·3, -(2·6 - (-6)·2), 2·3 - 3·2) = (36, -24, 0); || u × v || = √(36² + (-24)² + 0²) = 12√13;
|| u || = √(2² + 3² + (-6)²) = 7; || v || = √(2² + 3² + 6²) = 7
From Formula (6), sin θ = || u × v || / (|| u || || v ||) = (12/49)√13.
29.
Using parts (a), (b), (c), and (f) of Theorem 3.5.2, we can write
(u + v) × (u - v) = (u × u) - (u × v) + (v × u) - (v × v) = 0 + (v × u) + (v × u) - 0 = 2(v × u).
30.
The result follows directly from part (b) of Theorem 3.2.3 with u = a, v = d, and w = b + c.
31.
(a) Taking F = 1000(1/√2, 0, 1/√2) = 500√2 (1, 0, 1) and d = PQ = (0, 2, -1) we obtain
F × d = 500√2 (1, 0, 1) × (0, 2, -1) = 500√2 (0·(-1) - 1·2, -(1·(-1) - 1·0), 1·2 - 0·0) = 500√2 (-2, 1, 2).
|| F × d || = 500√2 √((-2)² + 1² + 2²) = 1500√2, therefore the scalar moment of F about the point P is 1500√2 N·m ≈ 2121.32 N·m.
(b)
It was shown in the solution of part (a) that the vector moment of F about the point P is F × d = 500√2 (-2, 1, 2) and its magnitude is 1500√2. The direction angles are
cos⁻¹(-1000√2 / 1500√2) = cos⁻¹(-2/3) ≈ 132°, cos⁻¹(500√2 / 1500√2) = cos⁻¹(1/3) ≈ 71°, and cos⁻¹(1000√2 / 1500√2) = cos⁻¹(2/3) ≈ 48°.
32.
Taking F = 200(cos 72°, sin 72°, 0) and d = (0.2, 0.03, 0) we obtain || F × d || ≈ 36.19 N·m.
33.
(a)
The volume is (1/6) | PQ · (PR × PS) | = (1/6) |-17| = 17/6.
(b)
The volume is (1/6) | PQ · (PR × PS) | = (1/6) |-3| = 1/2.
True-False Exercises
(a)
True. This follows from Formula (6): for nonzero vectors u and v, || u × v || = || u || || v || sin θ is zero if and only if sin θ = 0 (i.e., the vectors are parallel).
(b)
True. The cross product of two nonzero noncollinear vectors in a plane is a nonzero vector perpendicular to both vectors, and therefore to the entire plane.
(c)
False. The scalar triple product is a scalar, rather than a vector.
(d)
True. This follows from Theorem 3.5.3 and from the equality || u × v || = || v × u ||.
(e)
False. These two triple vector products are generally not the same, as evidenced by parts (d) and (e) of Theorem 3.5.1.
(f)
False. For instance, let u = v = i and w = 2i. We have u × v = u × w = 0 even though v ≠ w.
Chapter 3 Supplementary Exercises
1.
(a)
3v - 2u = (9, -3, 18) - (-4, 0, 8) = (13, -3, 10)
(b)
u + v + w = (3, -6, 5); || u + v + w || = √(3² + (-6)² + 5²) = √70
(c)
-3u - (v + 5w) = (6, 0, -12) - [(3, -1, 6) + (10, -25, -25)] = (-7, 26, 7)
d(-3u, v + 5w) = || -3u - (v + 5w) || = √((-7)² + 26² + 7²) = √774 = 3√86
(d)
u · w = (-2)(2) + (0)(-5) + (4)(-5) = -24; || w ||² = 2² + (-5)² + (-5)² = 54;
proj_w u = (u · w / || w ||²) w = (-24/54)(2, -5, -5) = (-8/9, 20/9, 20/9)
(e)
From Formula (7) in Section 3.5, u · (v × w) = det [ -2 0 4 ; 3 -1 6 ; 2 -5 -5 ] = -122
(f)
5v - w = (15, -5, 30) - (2, -5, -5) = (13, 0, 35)
(u · v)w = ((-2)(3) + (0)(-1) + (4)(6)) w = 18w = (36, -90, -90)
(5v - w) × ((u · v)w) = (13, 0, 35) × (36, -90, -90) = (0·(-90) - 35·(-90), -(13·(-90) - 35·36), 13·(-90) - 0·36) = (3150, 2430, -1170)
2.
Rewrite u = (3, -5, 1), v = (-2, 0, 2), and w = (0, -1, 4).
(a)
3v - 2u = (-6, 0, 6) - (6, -10, 2) = (-12, 10, 4)
(b)
u + v + w = (1, -6, 7); || u + v + w || = √(1² + (-6)² + 7²) = √86
(c)
-3u - (v + 5w) = (-9, 15, -3) - [(-2, 0, 2) + (0, -5, 20)] = (-7, 20, -25)
d(-3u, v + 5w) = || -3u - (v + 5w) || = √((-7)² + 20² + (-25)²) = √1074
(d)
u · w = (3)(0) + (-5)(-1) + (1)(4) = 9; || w ||² = 0² + (-1)² + 4² = 17;
proj_w u = (u · w / || w ||²) w = (9/17)(0, -1, 4) = (0, -9/17, 36/17)
(e)
From Formula (7) in Section 3.5, u · (v × w) = det [ 3 -5 1 ; -2 0 2 ; 0 -1 4 ] = -32
(f)
5v - w = (-10, 0, 10) - (0, -1, 4) = (-10, 1, 6)
(u · v)w = ((3)(-2) + (-5)(0) + (1)(2)) w = -4w = (0, 4, -16)
(5v - w) × ((u · v)w) = (-10, 1, 6) × (0, 4, -16) = (1·(-16) - 6·4, -((-10)(-16) - 6·0), (-10)·4 - 1·0) = (-40, -160, -40)
3.
(a)
3v - 2u = (-9, 0, 24, 0) - (-4, 12, 4, 2) = (-5, -12, 20, -2)
(b)
u + v + w = (4, 7, 4, -5); || u + v + w || = √(4² + 7² + 4² + (-5)²) = √106
(c)
-3u - (v + 5w) = (6, -18, -6, -3) - [(-3, 0, 8, 0) + (45, 5, -30, -30)] = (-36, -23, 16, 27)
d(-3u, v + 5w) = || -3u - (v + 5w) || = √((-36)² + (-23)² + 16² + 27²) = √2810
(d)
u · w = (-2)(9) + (6)(1) + (2)(-6) + (1)(-6) = -30;
|| w ||² = 9² + 1² + (-6)² + (-6)² = 154;
proj_w u = (u · w / || w ||²) w = (-30/154)(9, 1, -6, -6) = (-15/77)(9, 1, -6, -6) = (-135/77, -15/77, 90/77, 90/77)
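The projection formula used in part (d) of Exercises 1–3 is one line of numpy. A sketch (function name ours), checked against Exercise 3(d):

import numpy as np

def proj(u, w):
    """Orthogonal projection of u onto w: (u . w / ||w||^2) w."""
    w = np.asarray(w, dtype=float)
    return (np.dot(u, w) / np.dot(w, w)) * w

print(proj([-2, 6, 2, 1], [9, 1, -6, -6]))
# [-1.7532 -0.1948  1.1688  1.1688] = (-135/77, -15/77, 90/77, 90/77)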
4.
(a)
A line through the origin perpendicular to the given vector.
(b)
A plane through the origin perpendicular to the given vector.
(c)
The origin.
(d)
A line through the origin perpendicular to the given vectors (and to the plane containing them).
5.
By Theorem 3.5.5, this set is the plane containing A, B, and C.
6.
By part (b) of Theorem 3.5.2, AB × AD = AB × (AC + CD) = (AB × AC) + (AB × CD). Therefore
AC · (AB × AD) = AC · (AB × AC) + AC · (AB × CD) = 0 (since AB × AC is orthogonal to AC, and AC · (AB × CD) = 0 by hypothesis).
By Theorem 3.5.5, this implies that the four points lie on the same plane.
We assumed that AB × CD ≠ 0, so the line through A and B cannot be parallel to the line through C and D. We conclude that these two coplanar lines must intersect.
7.
Denoting S = (-1, a, b) we have RS = (-6, a - 1, b - 1). For this vector to be parallel to PQ = (3, 1, -2) there must exist a scalar k such that RS = k PQ. The equality of the first components immediately leads to k = -2. Equating the remaining pairs of components yields the equations
a - 1 = (-2)(1), b - 1 = (-2)(-2)
therefore a = -1 and b = 5. We conclude that the point S has coordinates (-1, -1, 5).
8.
Denoting S = (a, b, 6, c) we have RS = (a + 4, b - 1, 2, c). For this vector to be parallel to PQ = (3, 4, 1, -8) there must exist a scalar k such that RS = k PQ. The equality of the third components immediately leads to k = 2. Equating the remaining pairs of components yields the equations
a + 4 = (2)(3), b - 1 = (2)(4), c = (2)(-8)
therefore a = 2, b = 9, and c = -16. We conclude that the point S has coordinates (2, 9, 6, -16).
9.
PQ = (3, 1, -2); PR = (2, 2, -3);
cos θ = (PQ · PR) / (|| PQ || || PR ||) = ((3)(2) + (1)(2) + (-2)(-3)) / (√(3² + 1² + (-2)²) √(2² + 2² + (-3)²)) = 14 / (√14 √17) = √(14/17)
10.
PQ = (3, 4, 1, -8); PR = (-1, 0, 4, -6);
cos θ = (PQ · PR) / (|| PQ || || PR ||) = ((3)(-1) + (4)(0) + (1)(4) + (-8)(-6)) / (√(3² + 4² + 1² + (-8)²) √((-1)² + 0² + 4² + (-6)²)) = 49 / (√90 √53) = 49 / (3√530)
11.
From Theorem 3.3.4(b) the distance between the point and the plane is
D = | (5)(3) + (-3)(-1) + (1)(-3) - 4 | / √(5² + (-3)² + 1²) = 11 / √35
(the equation of the plane had to be rewritten in the form ax + by + cz + d = 0 as 5x - 3y + z - 4 = 0).
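The distance computation of Exercise 11 (and Theorem 3.3.4(b) generally) packages neatly into a helper. A sketch, with our own function name:

import numpy as np

def point_plane_distance(p, n, d):
    """Distance from point p to the plane n[0] x + n[1] y + n[2] z + d = 0."""
    n = np.asarray(n, dtype=float)
    return abs(np.dot(n, p) + d) / np.linalg.norm(n)

# Exercise 11: plane 5x - 3y + z - 4 = 0 and point (3, -1, -3).
print(point_plane_distance((3, -1, -3), (5, -3, 1), -4))  # 1.8593... = 11/sqrt(35)
print(11 / np.sqrt(35))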
12.
The planes are parallel since their normal vectors, (3, -1, 6) and (-6, 2, -12), are parallel: (-6, 2, -12) = -2(3, -1, 6). We select an arbitrary point in the plane 3x - y + 6z = 7 by setting x = z = 0 to obtain P0(0, -7, 0).
From Theorem 3.3.4(b) the distance between P0 and the plane -6x + 2y - 12z - 1 = 0 is
D = | (-6)(0) + (2)(-7) + (-12)(0) - 1 | / √((-6)² + 2² + (-12)²) = 15 / √184 = 15 / (2√46).
13.
A vector equation of the plane that contains the point P(2, 1, -3) and the vectors PQ = (1, -2, -2) and PR = (5, -1, 5) can be expressed as (x, y, z) = (2, 1, -3) + t1(1, -2, -2) + t2(5, -1, 5).
Parametric equations are x = 2 + t1 + 5t2, y = 1 - 2t1 - t2, and z = -3 - 2t1 + 5t2.
14.
Since the line is to be orthogonal to the plane 4x - z = 5, it must be parallel to a normal vector to the plane: (4, 0, -1).
A vector equation of the line can be expressed as (x, y, z) = (-1, 6, 0) + t(4, 0, -1).
This yields parametric equations x = -1 + 4t, y = 6, z = -t.
15.
A vector equation of the line can be expressed as (x, y) = (0, -3) + t(8, -1).
This yields parametric equations x = 8t, y = -3 - t.
16.
Since the plane is to be parallel to the plane 8x + 6y - z = 4, it must be orthogonal to a normal vector to the given plane: (8, 6, -1). To find a vector form and parametric form of the plane equation, we construct two nonzero nonparallel vectors orthogonal to (8, 6, -1), e.g., (1, 0, 8) and (0, 1, 6).
A vector equation of the plane that contains P(2, 1, 0) and these two vectors can be expressed as
(x, y, z) = (2, 1, 0) + t1(1, 0, 8) + t2(0, 1, 6).
Parametric equations are x = 2 + t1, y = 1 + t2, and z = 8t1 + 6t2.
17.
Since the line has slope 3, the vector (1, 3) is parallel to the line (any nonzero scalar multiple of (1, 3) could be used instead).
Substituting an arbitrary number into the line equation for x, we can solve for y to obtain coordinates of a point on the line. For instance, letting x = 0 yields y = -5, resulting in the point (0, -5).
A vector equation of the line can now be expressed as (x, y) = (0, -5) + t(1, 3).
This yields parametric equations x = t, y = -5 + 3t.
18.
To find a vector form and parametric form of the plane equation, we construct two nonzero nonparallel vectors orthogonal to the plane's normal vector (2, -6, 3), e.g., (-3, 0, 2) and (3, 1, 0).
We also need a point on the plane 2x - 6y + 3z = 5, e.g., (1, 0, 1); note that any one of the infinitely many solutions can be used here.
A vector equation of the plane that contains the point (1, 0, 1) and the vectors (-3, 0, 2) and (3, 1, 0) can be expressed as (x, y, z) = (1, 0, 1) + t1(-3, 0, 2) + t2(3, 1, 0).
Parametric equations are x = 1 - 3t1 + 3t2, y = t2, and z = 1 + 2t1.
19.
The given vector equation specifies a point on the plane, (-1, 5, 6), as well as two vectors parallel to the plane. A normal vector can be obtained as a cross product of these two vectors:
(0, -1, 3) × (2, -1, 0) = ((-1)·0 - 3·(-1), -(0·0 - 3·2), 0·(-1) - (-1)·2) = (3, 6, 2)
A point-normal equation for the plane can be written as 3(x + 1) + 6(y - 5) + 2(z - 6) = 0.
20.
Since the plane is to be orthogonal to the line x = 3 - 5t, y = 2t, z = 7, we can use the vector (-5, 2, 0) as a normal vector for the plane. This yields a point-normal equation -5(x - 5) + 2(y + 1) = 0.
21.
Begin by forming two vectors parallel to the plane: PQ = (10, 4, -1) and PR = (-9, -6, 6). A normal vector can be obtained as a cross product of these two vectors:
PQ × PR = (4·6 - (-1)·(-6), -(10·6 - (-1)·(-9)), 10·(-6) - 4·(-9)) = (18, -51, -24)
A point-normal equation for the plane can be written as 18(x - 9) - 51y - 24(z - 4) = 0.
25.
The equation represents a plane through the origin perpendicular to the xy-plane. It intersects the xy-plane along the line Ax + By = 0.
CHAPTER 4: GENERAL VECTOR SPACES
4.1 Real Vector Spaces
1.
(a)
u + v = (-1 + 3, 2 + 4) = (2, 6); ku = (0, 3 · 2) = (0, 6)
(b)
For any u = (u1, u2) and v = (v1, v2) in V, u + v = (u1 + v1, u2 + v2) is an ordered pair of real numbers, therefore u + v is in V. Consequently, V is closed under addition.
For any u = (u1, u2) in V and for any scalar k, ku = (0, ku2) is an ordered pair of real numbers, therefore ku is in V. Consequently, V is closed under scalar multiplication.
(c)
Axioms 1-5 hold for V because they are known to hold for R2.
(d)
Axiom 7: k((u1, u2) + (v1, v2)) = k(u1 + v1, u2 + v2) = (0, k(u2 + v2)) = (0, ku2) + (0, kv2) = k(u1, u2) + k(v1, v2) for all real k, u1, u2, v1, and v2;
Axiom 8: (k + m)(u1, u2) = (0, (k + m)u2) = (0, ku2 + mu2) = (0, ku2) + (0, mu2) = k(u1, u2) + m(u1, u2) for all real k, m, u1, and u2;
Axiom 9: k(m(u1, u2)) = k(0, mu2) = (0, kmu2) = (km)(u1, u2) for all real k, m, u1, and u2.
(e)
Axiom 10 fails to hold: 1(u1, u2) = (0, u2) does not generally equal (u1, u2).
Consequently, V is not a vector space.
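Failures of a single axiom, like Axiom 10 here, are easy to exhibit by direct computation. A minimal sketch of this exercise's nonstandard scalar multiplication (the helper name is ours):

def smul(k, u):
    """Nonstandard scalar multiplication from Exercise 1: k(u1, u2) = (0, k u2)."""
    return (0, k * u[1])

u = (-1, 2)
print(smul(1, u))        # (0, 2), not (-1, 2): Axiom 10 fails
print(smul(1, u) == u)   # False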
2.
(a)
u + v = (0 + 1 + 1, 4 + (-3) + 1) = (2, 2); ku = (2 · 0, 2 · 4) = (0, 8)
(b)
(0, 0) + (u1, u2) = (0 + u1 + 1, 0 + u2 + 1) = (u1 + 1, u2 + 1) ≠ (u1, u2), therefore (0, 0) is not the zero vector 0 required by Axiom 4.
(c)
For all real numbers u1 and u2, we have
(-1, -1) + (u1, u2) = (-1 + u1 + 1, -1 + u2 + 1) = (u1, u2) and
(u1, u2) + (-1, -1) = (u1 - 1 + 1, u2 - 1 + 1) = (u1, u2), therefore Axiom 4 holds for 0 = (-1, -1).
(d)
For any pair of real numbers u = (u1, u2), letting -u = (-2 - u1, -2 - u2) yields
u + (-u) = (u1 + (-2 - u1) + 1, u2 + (-2 - u2) + 1) = (-1, -1) = 0;
since (-u) + u = 0 holds as well, Axiom 5 holds.
(e)
Axiom 7 fails to hold:
k(u + v) = k(u1 + v1 + 1, u2 + v2 + 1) = (ku1 + kv1 + k, ku2 + kv2 + k)
ku + kv = (ku1, ku2) + (kv1, kv2) = (ku1 + kv1 + 1, ku2 + kv2 + 1)
therefore in general k(u + v) ≠ ku + kv.
Axiom 8 fails to hold:
(k + m)u = ((k + m)u1, (k + m)u2) = (ku1 + mu1, ku2 + mu2)
ku + mu = (ku1, ku2) + (mu1, mu2) = (ku1 + mu1 + 1, ku2 + mu2 + 1)
therefore in general (k + m)u ≠ ku + mu.
3.
Let V denote the set of all real numbers.
Axiom 1: x + y is in V for all real x and y;
Axiom 2: x + y = y + x for all real x and y;
Axiom 3: x + (y + z) = (x + y) + z for all real x, y, and z;
Axiom 4: taking 0 = 0, we have 0 + x = x + 0 = x for all real x;
Axiom 5: for each u = x, let -u = -x; then x + (-x) = (-x) + x = 0;
Axiom 6: kx is in V for all real k and x;
Axiom 7: k(x + y) = kx + ky for all real k, x, and y;
Axiom 8: (k + m)x = kx + mx for all real k, m, and x;
Axiom 9: k(mx) = (km)x for all real k, m, and x;
Axiom 10: 1x = x for all real x.
This is a vector space; all axioms hold.
4.
Let V denote the set of all pairs of real numbers of the form (x, 0).
Axiom 1: (x, 0) + (y, 0) = (x + y, 0) is in V for all real x and y;
Axiom 2: (x, 0) + (y, 0) = (x + y, 0) = (y + x, 0) = (y, 0) + (x, 0) for all real x and y;
Axiom 3: (x, 0) + ((y, 0) + (z, 0)) = (x, 0) + (y + z, 0) = (x + y + z, 0) = (x + y, 0) + (z, 0) = ((x, 0) + (y, 0)) + (z, 0) for all real x, y, and z;
Axiom 4: taking 0 = (0, 0), we have (0, 0) + (x, 0) = (x, 0) and (x, 0) + (0, 0) = (x, 0) for all real x;
Axiom 5: for each u = (x, 0), let -u = (-x, 0); then (x, 0) + (-x, 0) = (0, 0) and (-x, 0) + (x, 0) = (0, 0);
Axiom 6: k(x, 0) = (kx, 0) is in V for all real k and x;
Axiom 7: k((x, 0) + (y, 0)) = k(x + y, 0) = (kx + ky, 0) = k(x, 0) + k(y, 0) for all real k, x, and y;
Axiom 8: (k + m)(x, 0) = ((k + m)x, 0) = (kx + mx, 0) = k(x, 0) + m(x, 0) for all real k, m, and x;
Axiom 9: k(m(x, 0)) = k(mx, 0) = (kmx, 0) = (km)(x, 0) for all real k, m, and x;
Axiom 10: 1(x, 0) = (x, 0) for all real x.
This is a vector space; all axioms hold.
5.
Axiom 5 fails whenever x > 0, since it is then impossible to find (x′, y′) with x′ ≥ 0 for which (x, y) + (x′, y′) = (0, 0). (The zero vector from Axiom 4 must be 0 = (0, 0).)
Axiom 6 fails whenever k < 0 and x > 0.
This is not a vector space.
6.
Let V denote the set of all n-tuples of real numbers of the form (x, x, …, x).
Axiom 1: (x, x, …, x) + (y, y, …, y) = (x + y, x + y, …, x + y) is in V for all real x and y;
Axiom 2: (x, x, …, x) + (y, y, …, y) = (x + y, …, x + y) = (y + x, …, y + x) = (y, y, …, y) + (x, x, …, x) for all real x and y;
Axiom 3: (x, x, …, x) + ((y, y, …, y) + (z, z, …, z)) = (x + y + z, …, x + y + z) = ((x, x, …, x) + (y, y, …, y)) + (z, z, …, z) for all real x, y, and z;
Axiom 4: taking 0 = (0, 0, …, 0), we have (0, 0, …, 0) + (x, x, …, x) = (x, x, …, x) and (x, x, …, x) + (0, 0, …, 0) = (x, x, …, x) for all real x;
Axiom 5: for each u = (x, x, …, x), let -u = (-x, -x, …, -x); then u + (-u) = (-u) + u = (0, 0, …, 0);
Axiom 6: k(x, x, …, x) = (kx, kx, …, kx) is in V for all real k and x;
Axiom 7: k((x, x, …, x) + (y, y, …, y)) = (k(x + y), …, k(x + y)) = (kx + ky, …, kx + ky) = k(x, x, …, x) + k(y, y, …, y) for all real k, x, and y;
Axiom 8: (k + m)(x, x, …, x) = ((k + m)x, …, (k + m)x) = (kx + mx, …, kx + mx) = k(x, x, …, x) + m(x, x, …, x) for all real k, m, and x;
Axiom 9: k(m(x, x, …, x)) = k(mx, mx, …, mx) = (kmx, kmx, …, kmx) = (km)(x, x, …, x) for all real k, m, and x;
Axiom 10: 1(x, x, …, x) = (x, x, …, x) for all real x.
This is a vector space; all axioms hold.
7.
Axiom 8 fails to hold:
(k + m)u = ((k + m)²x, (k + m)²y, (k + m)²z)
ku + mu = (k²x, k²y, k²z) + (m²x, m²y, m²z) = ((k² + m²)x, (k² + m²)y, (k² + m²)z)
therefore in general (k + m)u ≠ ku + mu.
This is not a vector space.
8.
Axiom 1 fails since a sum of two 2 × 2 invertible matrices may or may not be invertible; e.g., both [ 1 0 ; 0 1 ] and [ -1 0 ; 0 -1 ] are invertible, but [ 1 0 ; 0 1 ] + [ -1 0 ; 0 -1 ] = [ 0 0 ; 0 0 ] is not invertible.
Axiom 6 fails whenever k = 0.
This is not a vector space.
9.
Let V be the set of all 2 × 2 matrices of the form [ a 0 ; 0 b ] (i.e., all diagonal 2 × 2 matrices).
Axiom 1: the sum of two diagonal 2 × 2 matrices is also a diagonal 2 × 2 matrix;
Axiom 2: follows from part (a) of Theorem 1.4.1;
Axiom 3: follows from part (b) of Theorem 1.4.1;
Axiom 4: taking 0 = [ 0 0 ; 0 0 ], this follows from part (a) of Theorem 1.4.2;
Axiom 5: let the negative of [ a 0 ; 0 b ] be [ -a 0 ; 0 -b ]; this follows from part (c) of Theorem 1.4.2 and Axiom 2;
Axiom 6: a scalar multiple of a diagonal 2 × 2 matrix is also a diagonal 2 × 2 matrix;
Axiom 7: follows from part (h) of Theorem 1.4.1;
Axiom 8: follows from part (j) of Theorem 1.4.1;
Axiom 9: follows from part (l) of Theorem 1.4.1;
Axiom 10: 1 [ a 0 ; 0 b ] = [ a 0 ; 0 b ] for all real a and b.
This is a vector space; all axioms hold.
10.
Let V be the set of all real-valued functions f defined for all real numbers and such that f(1) = 0.
Axiom 1: If f and g are in V then f + g is a function defined for all real numbers and (f + g)(1) = f(1) + g(1) = 0, therefore V is closed under the operation of addition defined by Formula (2).
Axiom 6: If k is a scalar and f is in V then kf is a function defined for all real numbers and (kf)(1) = k(f(1)) = 0, therefore V is closed under the operation of scalar multiplication defined by Formula (3).
Verification of the eight remaining axioms proceeds analogously to Example 6.
This is a vector space; all axioms hold.
11.
Let V denote the set of all pairs of real numbers of the form (1, y).
Axiom 1: (1, y) + (1, y′) = (1, y + y′) is in V for all real y and y′;
Axiom 2: (1, y) + (1, y′) = (1, y + y′) = (1, y′ + y) = (1, y′) + (1, y) for all real y and y′;
Axiom 3: (1, y) + ((1, y′) + (1, y″)) = (1, y) + (1, y′ + y″) = (1, y + y′ + y″) = (1, y + y′) + (1, y″) = ((1, y) + (1, y′)) + (1, y″) for all real y, y′, and y″;
Axiom 4: taking 0 = (1, 0), we have (1, 0) + (1, y) = (1, y) and (1, y) + (1, 0) = (1, y) for all real y;
Axiom 5: for each u = (1, y), let -u = (1, -y); then (1, y) + (1, -y) = (1, 0) and (1, -y) + (1, y) = (1, 0);
Axiom 6: k(1, y) = (1, ky) is in V for all real k and y;
Axiom 7: k((1, y) + (1, y′)) = k(1, y + y′) = (1, ky + ky′) = (1, ky) + (1, ky′) = k(1, y) + k(1, y′) for all real k, y, and y′;
Axiom 8: (k + m)(1, y) = (1, (k + m)y) = (1, ky + my) = (1, ky) + (1, my) = k(1, y) + m(1, y) for all real k, m, and y;
Axiom 9: k(m(1, y)) = k(1, my) = (1, kmy) = (km)(1, y) for all real k, m, and y;
Axiom 10: 1(1, y) = (1, y) for all real y.
This is a vector space; all axioms hold.
12.
Let V be the set of polynomials of the form a + bx.
Axiom 1: (a0 + b0x) + (a1 + b1x) = (a0 + a1) + (b0 + b1)x is in V for all real a0, a1, b0, and b1;
Axiom 2: (a0 + b0x) + (a1 + b1x) = (a0 + a1) + (b0 + b1)x = (a1 + a0) + (b1 + b0)x = (a1 + b1x) + (a0 + b0x) for all real a0, a1, b0, and b1;
Axiom 3: (a0 + b0x) + ((a1 + b1x) + (a2 + b2x)) = (a0 + a1 + a2) + (b0 + b1 + b2)x = ((a0 + b0x) + (a1 + b1x)) + (a2 + b2x) for all real a0, a1, a2, b0, b1, and b2;
Axiom 4: taking 0 = 0 + 0x, we have (0 + 0x) + (a + bx) = a + bx and (a + bx) + (0 + 0x) = a + bx for all real a and b;
Axiom 5: for each u = a + bx, let -u = -a - bx; then (a + bx) + (-a - bx) = (-a - bx) + (a + bx) = 0 + 0x for all real a and b;
Axiom 6: k(a + bx) = ka + (kb)x is in V for all real a, b, and k;
Axiom 7: k((a0 + b0x) + (a1 + b1x)) = k((a0 + a1) + (b0 + b1)x) = k(a0 + b0x) + k(a1 + b1x) for all real a0, a1, b0, b1, and k;
Axiom 8: (k + m)(a + bx) = (k + m)a + (k + m)bx = k(a + bx) + m(a + bx) for all real a, b, k, and m;
Axiom 9: k(m(a + bx)) = k(ma + mbx) = kma + kmbx = (km)(a + bx) for all real a, b, k, and m;
Axiom 10: 1(a + bx) = a + bx for all real a and b.
This is a vector space; all axioms hold.
13.
Axiom 3: follows from part (b) of Theorem 1.4.1 since
u + (v + w) = [ u11 u12 ; u21 u22 ] + ([ v11 v12 ; v21 v22 ] + [ w11 w12 ; w21 w22 ]) = ([ u11 u12 ; u21 u22 ] + [ v11 v12 ; v21 v22 ]) + [ w11 w12 ; w21 w22 ] = (u + v) + w;
Axiom 7: follows from part (h) of Theorem 1.4.1 since
k(u + v) = k([ u11 u12 ; u21 u22 ] + [ v11 v12 ; v21 v22 ]) = k[ u11 u12 ; u21 u22 ] + k[ v11 v12 ; v21 v22 ] = ku + kv;
Axiom 8: follows from part (j) of Theorem 1.4.1 since
(k + m)u = (k + m)[ u11 u12 ; u21 u22 ] = k[ u11 u12 ; u21 u22 ] + m[ u11 u12 ; u21 u22 ] = ku + mu;
Axiom 9: follows from part (l) of Theorem 1.4.1 since
k(mu) = k(m[ u11 u12 ; u21 u22 ]) = (km)[ u11 u12 ; u21 u22 ] = (km)u
15.
Axiom 1: (u1, u2) + (v1, v2) = (u1 + v1, u2 + v2) is in V
Axiom 2: (u1, u2) + (v1, v2) = (u1 + v1, u2 + v2) = (v1 + u1, v2 + u2) = (v1, v2) + (u1, u2)
Axiom 3: (u1, u2) + ((v1, v2) + (w1, w2)) = (u1, u2) + (v1 + w1, v2 + w2) = (u1 + v1 + w1, u2 + v2 + w2) = (u1 + v1, u2 + v2) + (w1, w2) = ((u1, u2) + (v1, v2)) + (w1, w2)
Axiom 4: taking 0 = (0, 0), we have (0, 0) + (u1, u2) = (u1, u2) and (u1, u2) + (0, 0) = (u1, u2)
Axiom 5: for each u = (u1, u2), let -u = (-u1, -u2); then (u1, u2) + (-u1, -u2) = (0, 0) and (-u1, -u2) + (u1, u2) = (0, 0)
Axiom 6: k(u1, u2) = (ku1, 0) is in V
Axiom 7: k((u1, u2) + (v1, v2)) = k(u1 + v1, u2 + v2) = (ku1 + kv1, 0) = (ku1, 0) + (kv1, 0) = k(u1, u2) + k(v1, v2)
Axiom 8: (k + m)(u1, u2) = ((k + m)u1, 0) = (ku1 + mu1, 0) = (ku1, 0) + (mu1, 0) = k(u1, u2) + m(u1, u2)
Axiom 9: k(m(u1, u2)) = k(mu1, 0) = (kmu1, 0) = (km)(u1, u2)
19.
The negative of u is -u = 1/u.
20.
For positive real numbers u, u^k = 1 if and only if k = 0 or u = 1.
21.
uwvw
 u  w    w    v  w    w 
u   w   w    v   w   w  
u0v0
uv
22.
(1)
(2)
(3)
(4)
Axiom 7
Axiom 4
Axiom 5
Axiom 1
Hypothesis
Add  w to both sides
Axiom 3
Axiom 5
Axiom 4
7
8
Chapter 4: General Vector Spaces
(5)
(6)
(7)
Axiom 3
Axiom 5
Axiom 4
True-False Exercises
(a)
True. This is a part of Definition 1.
(b)
False. Example 1 discusses a vector space containing only one vector.
(c)
False. By part (d) of Theorem 4.1.1, if ku = 0 then k = 0 or u = 0.
(d)
False. Axiom 6 fails to hold if k < 0. (Also, Axiom 4 fails to hold.)
(e)
True. This follows from part (c) of Theorem 4.1.1.
(f)
False. This function must have a value of zero at every point in (-∞, ∞).
4.2 Subspaces
1.
(a)
Let W be the set of all vectors of the form (a, 0, 0), i.e., all vectors in R3 with last two components equal to zero.
This set contains at least one vector, e.g., (0, 0, 0).
Adding two vectors in W results in another vector in W: (a, 0, 0) + (b, 0, 0) = (a + b, 0, 0), since the result has zeros as the last two components.
Likewise, a scalar multiple of a vector in W is also in W: k(a, 0, 0) = (ka, 0, 0); the result also has zeros as the last two components.
According to Theorem 4.2.1, W is a subspace of R3.
(b)
Let W be the set of all vectors of the form (a, 1, 1), i.e., all vectors in R3 with last two components equal to one. The set W is not closed under the operation of vector addition since (a, 1, 1) + (b, 1, 1) = (a + b, 2, 2) does not have ones as its last two components, thus it is outside W.
According to Theorem 4.2.1, W is not a subspace of R3.
(c)
Let W be the set of all vectors of the form (a, b, c), where b = a + c.
This set contains at least one vector, e.g., (0, 0, 0). (The condition b = a + c is satisfied when a = b = c = 0.)
Adding two vectors in W results in another vector in W:
(a, a + c, c) + (a′, a′ + c′, c′) = (a + a′, a + c + a′ + c′, c + c′), since in this result the second component is the sum of the first and the third: a + c + a′ + c′ = (a + a′) + (c + c′).
Likewise, a scalar multiple of a vector in W is also in W: k(a, a + c, c) = (ka, k(a + c), kc), since in this result the second component is once again the sum of the first and the third: k(a + c) = ka + kc.
According to Theorem 4.2.1, W is a subspace of R3.
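Closure arguments like the one in part (c) can also be spot-checked numerically; random samples cannot prove closure, but they quickly expose a failure. A minimal sketch (helper name ours):

import random

def in_W(v, tol=1e-9):
    """Membership test for W = {(a, b, c) : b = a + c} from Exercise 1(c)."""
    a, b, c = v
    return abs(b - (a + c)) < tol

for _ in range(1000):
    a1, c1 = random.uniform(-9, 9), random.uniform(-9, 9)
    a2, c2 = random.uniform(-9, 9), random.uniform(-9, 9)
    u, v = (a1, a1 + c1, c1), (a2, a2 + c2, c2)   # two members of W
    k = random.uniform(-9, 9)
    s = tuple(x + y for x, y in zip(u, v))        # u + v should stay in W
    m = tuple(k * x for x in u)                   # k u should stay in W
    assert in_W(s) and in_W(m)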
2.
(a)
Let W be the set of all vectors of the form (a, b, c), where b = a + c + 1. The set W is not closed under the operation of vector addition, since in the result of the following addition of two vectors from W,
(a, a + c + 1, c) + (a′, a′ + c′ + 1, c′) = (a + a′, a + c + a′ + c′ + 2, c + c′),
the second component does not equal the sum of the first, the third, and 1: a + c + a′ + c′ + 2 ≠ (a + a′) + (c + c′) + 1. Consequently, this result is not a vector in W.
According to Theorem 4.2.1, W is not a subspace of R3.
(b)
Let W be the set of all vectors of the form (a, b, 0), i.e., all vectors in R3 with last component equal to zero.
This set contains at least one vector, e.g., (0, 0, 0).
Adding two vectors in W results in another vector in W: (a, b, 0) + (a′, b′, 0) = (a + a′, b + b′, 0), since the result has 0 as the last component.
Likewise, a scalar multiple of a vector in W is also in W: k(a, b, 0) = (ka, kb, 0); the result also has 0 as the last component.
According to Theorem 4.2.1, W is a subspace of R3.
(c)
Let W be the set of all vectors of the form (a, b, c), where a + b = 7. The set W is not closed under the operation of vector addition, since adding two vectors from W we obtain (a, b, c) + (a′, b′, c′) = (a + a′, b + b′, c + c′), where (a + a′) + (b + b′) = (a + b) + (a′ + b′) = 7 + 7 = 14. Consequently, this result is not a vector in W.
According to Theorem 4.2.1, W is not a subspace of R3.
3.
(a)
Let W be the set of all n × n diagonal matrices.
This set contains at least one matrix, e.g., the zero n × n matrix.
Adding two matrices in W results in another n × n diagonal matrix, i.e., a matrix in W:
diag(a11, a22, …, ann) + diag(b11, b22, …, bnn) = diag(a11 + b11, a22 + b22, …, ann + bnn).
Likewise, a scalar multiple of a matrix in W is also in W:
k diag(a11, a22, …, ann) = diag(ka11, ka22, …, kann).
According to Theorem 4.2.1, W is a subspace of Mnn.
(b)
Let W be the set of all n × n matrices whose determinant is zero. We shall show that W is not closed under the operation of matrix addition. For instance, consider the matrices A = [ 1 0 ; 0 0 ] and B = [ 0 0 ; 0 1 ]; both have determinant equal to 0, therefore both matrices are in W. However, A + B = [ 1 0 ; 0 1 ] has nonzero determinant, thus it is outside W.
According to Theorem 4.2.1, W is not a subspace of Mnn.
(c)
Let W be the set of all n  n matrices with zero trace.
This set contains at least one matrix, e.g., the zero n  n matrix is in W .
Let us assume A   aij  and B  bij  are both in W , i.e. tr  A   a11  a22    ann  0 and
tr  B   b11  b22    bnn  0 .
Since tr  A  B    a11  b11    a22  b22      ann  bnn 
 a11  a22    ann  b11  b22    bnn  0  0  0 , it follows that A  B is in W .
A scalar multiple of the same matrix A with a scalar k has tr  kA   ka11  ka22   
kann  k  a11  a22    ann   0 therefore kA is in W as well.
According to Theorem 4.2.1, W is a subspace of M nn .
(d)
Let W be the set of all symmetric n  n matrices (i.e., n  n matrices such that AT  A ).
This set contains at least one matrix, e.g., I n is in W .
Let us assume A and B are both in W , i.e. AT  A and BT  B . By Theorem 1.4.8(b), their sum
satisfies  A  B   AT  BT  A  B therefore W is closed under addition.
T
From Theorem 1.4.8(d), a scalar multiple of a symmetric matrix is also symmetric:  kA   kAT  kA
T
which makes W closed under scalar multiplication.
According to Theorem 4.2.1, W is a subspace of M nn .
4.
(a)
Let W be the set of all n  n matrices such that AT   A .
This set contains at least one matrix, e.g., the zero n  n matrix is in W .
Let us assume A and B are both in W , i.e. AT   A and BT   B . By Theorem 1.4.8(b), their sum
satisfies  A  B   AT  BT   A  B    A  B  therefore W is closed under addition.
T
From Theorem 1.4.8(d), we have  kA   kAT  k   A   kA which makes W closed under scalar
T
multiplication.
According to Theorem 4.2.1, W is a subspace of M nn .
(b)
Let W be the set of n  n matrices for which Ax  0 has only the trivial solution. It follows from
Theorem 1.5.3 that the set W consists of all n  n matrices that are invertible. This set is not closed
under scalar multiplication when the scalar is 0. Consequently, W is not a subspace of M nn .
(c)
Let B be some fixed n  n matrix, and let W be the set of all n  n matrices A such that AB  BA .
This set contains at least one matrix, e.g., I n is in W .
Let us assume A and C are both in W , i.e. AB  BA and CB  BC . By Theorem 1.4.1(d,e), their
sum satisfies  A  C  B  AB  CB  BA  BC  B  A  C  therefore W is closed under addition.
From Theorem 1.4.1(m), we have  kA  B  k  AB   k  BA   B  kA  which makes W closed under
scalar multiplication.
According to Theorem 4.2.1, W is a subspace of M nn .
(d)
Let W be the set of all invertible n  n matrices (i.e., n  n matrices such that A1 exists). This set is
not closed under scalar multiplication when the scalar is 0. Consequently, W is not a subspace of
M nn .
5.
(a)
Let W be the set of all polynomials a0 + a1x + a2x² + a3x³ for which a0 = 0.
This set contains at least one polynomial, 0 + 0x + 0x² + 0x³ = 0.
Adding two polynomials in W results in another polynomial in W:
(0 + a1x + a2x² + a3x³) + (0 + b1x + b2x² + b3x³) = 0 + (a1 + b1)x + (a2 + b2)x² + (a3 + b3)x³.
Likewise, a scalar multiple of a polynomial in W is also in W:
k(0 + a1x + a2x² + a3x³) = 0 + (ka1)x + (ka2)x² + (ka3)x³.
According to Theorem 4.2.1, W is a subspace of P3.
(b)
Let W be the set of all polynomials a0 + a1x + a2x² + a3x³ for which a0 + a1 + a2 + a3 = 0, i.e., all polynomials that can be expressed in the form (-a1 - a2 - a3) + a1x + a2x² + a3x³.
Adding two polynomials in W results in another polynomial in W:
((-a1 - a2 - a3) + a1x + a2x² + a3x³) + ((-b1 - b2 - b3) + b1x + b2x² + b3x³) = (-a1 - a2 - a3 - b1 - b2 - b3) + (a1 + b1)x + (a2 + b2)x² + (a3 + b3)x³
since we have (-a1 - a2 - a3 - b1 - b2 - b3) + (a1 + b1) + (a2 + b2) + (a3 + b3) = 0.
Likewise, a scalar multiple of a polynomial in W is also in W:
k((-a1 - a2 - a3) + a1x + a2x² + a3x³) = (-ka1 - ka2 - ka3) + ka1x + ka2x² + ka3x³
since it meets the condition (-ka1 - ka2 - ka3) + (ka1) + (ka2) + (ka3) = 0.
According to Theorem 4.2.1, W is a subspace of P3.
6.
(a)
Let W be the set of all polynomials a0  a1 x  a2 x 2  a3 x 3 in which a0 , a1 , a2 , and a3 are rational
numbers. The set W is not closed under the operation of scalar multiplication, e.g., the scalar product
of the polynomial x 3 in W by k   is  x 3 , which is not in W .
According to Theorem 4.2.1, W is not a subspace of P3 .
(b)
The set of all polynomials of degree  1 is a subset of P3 . It is also a vector space (called P1 ) with
same operations of addition and scalar multiplication as those defined in P3 . By Definition 1, we
conclude that P1 is a subspace of P3 .
7.
(a)
Let W be the set of all functions f in F  ,   for which f  0   0.
This set contains at least one function, e.g., the constant function f  x   0 .
Assume we have two functions f and g in W , i.e., f  0   g  0   0 . Their sum f  g is also a
function in F  ,   and satisfies  f  g  0   f  0   g  0   0  0  0 therefore W is closed under
addition.
A scalar multiple of a function f in W , kf , is also a function in F  ,   for which
 kf  0   k  f  0    0 making W closed under scalar multiplication.
According to Theorem 4.2.1, W is a subspace of F  ,   .
(b)
Let W be the set of all functions f in F  ,   for which f  0   1.
We will show that W is not closed under addition. For instance, let f  x   1 and g  x   cos x be
two functions in W . Their sum, f  g , is not in W since  f  g  0   f  0   g  0   1  1  2 .
We conclude that W is not a subspace of F  ,   .
8.
(a)
Let W be the set of all functions f in F  ,   for which f   x   f  x  .
This set contains at least one function, e.g., the constant function f  x   0 .
Assume we have two functions f and g in W , i.e., f   x   f  x  and g   x   g  x  . Their sum
f  g is also a function in F  ,   and satisfies  f  g   x   f   x   g   x  
f  x   g  x    f  g  x  therefore W is closed under addition.
A scalar multiple of a function f in W , kf , is also a function in F  ,   for which
 kf   x   k  f   x    k  f  x     kf  x  making W closed under scalar multiplication.
According to Theorem 4.2.1, W is a subspace of F  ,   .
(b)
A sum of two polynomials of degree 2 may be a polynomial of lower degree, e.g.,
1  x    x  x   1  x therefore the set is not closed under addition, and consequently is not a
2
2
subspace of F  ,   .
9.
(a)
Let W be the set of all sequences in R of the form  v,0, v,0, v,0, .
This set contains at least one sequence, e.g.  0, 0, 0, .
Adding two sequences in W results in another sequence in W :
 v,0, v,0, v,0,   w,0, w,0, w,0,   v  w,0, v  w,0, v  w,0, .
Likewise, a scalar multiple of a vector in W is also in W : k  v,0, v,0, v,0,   kv,0, kv,0, kv,0, .
According to Theorem 4.2.1, W is a subspace of R .
(b)
Let W be the set of all sequences in R of the form  v,1, v,1, v,1, .
This set is not closed under addition since
 v,1, v,1, v,1,   w,1, w,1, w,1,   v  w,2, v  w,2, v  w,2, is not in W .
We conclude that W is not a subspace of R .
10.
(a)
Let W be the set of all sequences in R of the form  v,2 v,4 v,8v,16 v, .
This set contains at least one sequence, e.g.  0, 0, 0, .
Adding two sequences in W results in another sequence in W :
 v,2v,4v,8v,16v,   w,2w,4w,8w,16w,
  v  w,2  v  w  ,4  v  w  ,8  v  w  ,16  v  w  , .
Likewise, a scalar multiple of a vector in W is also in W :
k  v,2 v,4 v,8v,16 v,   kv,2 kv,4 kv,8kv,16 kv, .
According to Theorem 4.2.1, W is a subspace of R .
(b)
Let W be the set of all sequences in R whose components are 0 from some point on.
This set contains at least one sequence, e.g.  0, 0, 0, .
Let a sequence u in W have 0 components starting from the i th element; also, let a sequence v in
W have 0 components starting from the j th element. It follows that u  v must have 0 component
starting no later than from the position corresponding to max  i, j  - the larger of the two numbers.
Therefore, u  v is in W .
The scalar product ku must have 0 components starting no later than from the i th element, therefore
ku is also in W .
According to Theorem 4.2.1, W is a subspace of R .
11.
(a)
Let W be the set of all matrices of the form [ a 0 ; b 0 ]. This set contains at least one matrix, e.g., the zero matrix. Adding two matrices in W results in another matrix in W:
[ a 0 ; b 0 ] + [ a′ 0 ; b′ 0 ] = [ a + a′ 0 ; b + b′ 0 ].
Likewise, a scalar multiple of a matrix in W is also in W:
k [ a 0 ; b 0 ] = [ ka 0 ; kb 0 ].
According to Theorem 4.2.1, W is a subspace of M22.
(b)
Let W be the set of all matrices of the form [ a 1 ; b 1 ]. This set is not closed under scalar multiplication when the scalar is 0. Consequently, W is not a subspace of M22.
(c)
Let W be the set of all 2 × 2 matrices A such that A [ 1 ; -1 ] = [ 2 ; 0 ]. This set is not closed under addition, since if A and B are matrices in W then
(A + B) [ 1 ; -1 ] = A [ 1 ; -1 ] + B [ 1 ; -1 ] = [ 2 ; 0 ] + [ 2 ; 0 ] = [ 4 ; 0 ].
Consequently, the matrix A + B is not contained in W. According to Theorem 4.2.1, W is not a subspace of M22.
12.
(a)
Let W be the set of all 2 × 2 matrices A such that A [ 1 ; -1 ] = [ 0 ; 0 ]. This set contains at least one matrix, e.g., the zero matrix. Adding two matrices in W results in another matrix in W:
(A + B) [ 1 ; -1 ] = A [ 1 ; -1 ] + B [ 1 ; -1 ] = [ 0 ; 0 ] + [ 0 ; 0 ] = [ 0 ; 0 ].
Likewise, a scalar multiple of a matrix in W is also in W:
(kA) [ 1 ; -1 ] = k (A [ 1 ; -1 ]) = k [ 0 ; 0 ] = [ 0 ; 0 ].
According to Theorem 4.2.1, W is a subspace of M22.
(b)
Let W be the set of all 2 × 2 matrices A such that A [ 0 2 ; 2 1 ] = [ 0 2 ; 2 1 ] A. This set contains at least one matrix, e.g., the zero matrix. Adding two matrices in W results in another matrix in W:
(A + B) [ 0 2 ; 2 1 ] = A [ 0 2 ; 2 1 ] + B [ 0 2 ; 2 1 ] = [ 0 2 ; 2 1 ] A + [ 0 2 ; 2 1 ] B = [ 0 2 ; 2 1 ] (A + B).
Likewise, a scalar multiple of a matrix in W is also in W:
(kA) [ 0 2 ; 2 1 ] = k (A [ 0 2 ; 2 1 ]) = k ([ 0 2 ; 2 1 ] A) = [ 0 2 ; 2 1 ] (kA).
According to Theorem 4.2.1, W is a subspace of M22.
(c)
Let W be the set of all 2 × 2 matrices A such that det(A) = 0. This set is not closed under addition. For example, the matrices [ 1 1 ; 0 0 ] and [ 0 0 ; 0 1 ] are in W because each has determinant zero, but
det( [ 1 1 ; 0 0 ] + [ 0 0 ; 0 1 ] ) = det [ 1 1 ; 0 1 ] = 1.
According to Theorem 4.2.1, W is not a subspace of M22.
13.
(a)
Let W be the set of all vectors in R4 of the form (a, a², a³, a⁴). This set is not closed under addition. For example, the vector (1, 1, 1, 1) is in W but (1, 1, 1, 1) + (1, 1, 1, 1) = (2, 2, 2, 2) is not. According to Theorem 4.2.1, W is not a subspace of R4.
(b)
Let W be the set of all vectors in R4 of the form (a, 0, b, 0). This set contains at least one vector, e.g., the zero vector. Adding two vectors in W results in another vector in W:
(a, 0, b, 0) + (a′, 0, b′, 0) = (a + a′, 0, b + b′, 0).
Likewise, a scalar multiple of a vector in W is also in W: k(a, 0, b, 0) = (ka, 0, kb, 0). According to Theorem 4.2.1, W is a subspace of R4.
14.
(a)
Let W be the set of all vectors x in R4 such that Ax = [ 0 ; 1 ]. This set is not closed under scalar multiplication when the scalar is 0. Consequently, W is not a subspace of R4.
(b)
Let W be the set of all vectors x in R4 such that Ax = [ 0 ; 0 ]. This set contains at least one vector, e.g., the zero vector. Adding two vectors in W results in another vector in W:
A(x + y) = Ax + Ay = [ 0 ; 0 ] + [ 0 ; 0 ] = [ 0 ; 0 ].
Likewise, a scalar multiple of a vector in W is also in W: A(kx) = kA(x) = k [ 0 ; 0 ] = [ 0 ; 0 ]. According to Theorem 4.2.1, W is a subspace of R4.
15.
(a)
Let W be the set of all polynomials of degree less than or equal to six. This set is not empty; for example, p(x) = x is contained in W. Adding two polynomials in W results in another polynomial in W because the sum of two polynomials of degree at most six is another polynomial of degree at most six. Likewise, a scalar multiple of a polynomial of degree at most six is another polynomial of degree at most six. According to Theorem 4.2.1, W is a subspace of P∞.
(b)
Let W be the set of all polynomials of degree equal to six. This set is not closed under addition. For example, p(x) = x⁶ + x and q(x) = -x⁶ are both polynomials in W but p(x) + q(x) = x⁶ + x - x⁶ = x has degree 1, so it is not contained in W. According to Theorem 4.2.1, W is not a subspace of P∞.
(c)
Let W be the set of all polynomials of degree greater than or equal to six. This set is not closed under addition. For example, p(x) = x⁶ + x and q(x) = -x⁶ are both polynomials in W but p(x) + q(x) = x⁶ + x - x⁶ = x has degree 1, so it is not contained in W. According to Theorem 4.2.1, W is not a subspace of P∞.
16.
(a)
Let W be the set of all polynomials with even coefficients. This set is not empty; for example, the polynomial p(x) = 2x² is contained in W. Adding two polynomials in W results in another polynomial in W because the sum of any two corresponding even coefficients is even. Likewise, a scalar multiple of a polynomial with even coefficients is another polynomial with even coefficients. According to Theorem 4.2.1, W is a subspace of P∞.
(b)
Let W be the set of all polynomials whose coefficients sum to zero. This set is not empty; for example, the polynomial p(x) = x² - x is contained in W. Adding two polynomials in W results in another polynomial in W:
(a0 + a1x + a2x² + ⋯ + anx^n) + (b0 + b1x + b2x² + ⋯ + bmx^m) = (a0 + b0) + (a1 + b1)x + (a2 + b2)x² + ⋯ + (am + bm)x^m + am+1 x^(m+1) + ⋯ + anx^n,
where we assume without loss of generality that n ≥ m.
The sum of this polynomial's coefficients is
(a0 + b0) + (a1 + b1) + ⋯ + (am + bm) + am+1 + ⋯ + an = (a0 + a1 + ⋯ + an) + (b0 + b1 + ⋯ + bm) = 0 + 0 = 0,
which means it is contained in W. Likewise, a scalar multiple of a polynomial whose coefficients sum to zero is another polynomial whose coefficients sum to zero: k(a0 + a1x + a2x² + ⋯ + anx^n) = ka0 + ka1x + ka2x² + ⋯ + kanx^n, so that ka0 + ka1 + ⋯ + kan = k(a0 + a1 + ⋯ + an) = k · 0 = 0. According to Theorem 4.2.1, W is a subspace of P∞.
(c)
Let W be the set of all polynomials of even degree. This set is not empty; for example, the polynomial p(x) = x² is contained in W. Adding two polynomials in W results in another polynomial in W:
(a0 + a1x + a2x² + ⋯ + a2n x^(2n)) + (b0 + b1x + b2x² + ⋯ + b2m x^(2m)) = (a0 + b0) + (a1 + b1)x + (a2 + b2)x² + ⋯ + (a2m + b2m)x^(2m) + a2m+1 x^(2m+1) + ⋯ + a2n x^(2n),
where we assume without loss of generality that n ≥ m.
This polynomial also has even degree, which means it is contained in W. Likewise, a scalar multiple of a polynomial of even degree is another polynomial of even degree:
k(a0 + a1x + a2x² + ⋯ + a2n x^(2n)) = ka0 + ka1x + ka2x² + ⋯ + ka2n x^(2n). According to Theorem 4.2.1, W is a subspace of P∞.
17.
(a)
Let W be the set of all sequences of the form (v1, v2, v3, ...) such that lim_{n→∞} vn = 0. This set is nonempty (e.g. it contains the zero sequence (0, 0, 0, ...)). Adding two sequences (v1, v2, v3, ...) and (w1, w2, w3, ...) in W results in the sequence (v1 + w1, v2 + w2, v3 + w3, ...), which is also in W since lim_{n→∞}(vn + wn) = lim_{n→∞} vn + lim_{n→∞} wn = 0 + 0 = 0. Likewise, a scalar multiple of a sequence (v1, v2, v3, ...) in W is also in W because lim_{n→∞} kvn = k(lim_{n→∞} vn) = 0. (These results both follow because sums and constant multiples of convergent sequences are also convergent.) According to Theorem 4.2.1, W is a subspace of R∞.
(b)
Let W be the set of all sequences of the form (v1, v2, v3, ...) such that lim_{n→∞} vn exists and is finite. This set is nonempty (e.g. it contains the zero sequence (0, 0, 0, ...)). Adding two sequences (v1, v2, v3, ...) and (w1, w2, w3, ...) in W results in the sequence (v1 + w1, v2 + w2, v3 + w3, ...), which is also in W. This follows because both lim_{n→∞} vn and lim_{n→∞} wn exist and are finite, so that lim_{n→∞}(vn + wn) = lim_{n→∞} vn + lim_{n→∞} wn also exists and is finite. Likewise, a scalar multiple of a sequence (v1, v2, v3, ...) in W is also in W because lim_{n→∞} kvn = k(lim_{n→∞} vn). According to Theorem 4.2.1, W is a subspace of R∞.
(c)
Let W be the set of all sequences of the form (v1, v2, v3, ...) such that ∑_{n=1}^∞ vn = 0. This set is nonempty (e.g. it contains the zero sequence (0, 0, 0, ...)). Adding two sequences (v1, v2, v3, ...) and (w1, w2, w3, ...) in W results in the sequence (v1 + w1, v2 + w2, v3 + w3, ...), which is also in W. This follows because both ∑_{n=1}^∞ vn and ∑_{n=1}^∞ wn converge to zero, so that ∑_{n=1}^∞ vn + ∑_{n=1}^∞ wn = ∑_{n=1}^∞ (vn + wn) = 0. Likewise, a scalar multiple of a sequence (v1, v2, v3, ...) in W is also in W because ∑_{n=1}^∞ kvn = k ∑_{n=1}^∞ vn = 0. According to Theorem 4.2.1, W is a subspace of R∞.
(d)
Let W be the set of all sequences of the form (v1, v2, v3, ...) such that ∑_{n=1}^∞ vn converges. This set is nonempty (e.g. it contains the zero sequence (0, 0, 0, ...)). Adding two sequences (v1, v2, v3, ...) and (w1, w2, w3, ...) in W results in the sequence (v1 + w1, v2 + w2, v3 + w3, ...), which is also in W. This follows because both ∑_{n=1}^∞ vn and ∑_{n=1}^∞ wn converge, so ∑_{n=1}^∞ vn + ∑_{n=1}^∞ wn = ∑_{n=1}^∞ (vn + wn) also converges. Likewise, a scalar multiple of a sequence (v1, v2, v3, ...) in W is also in W because ∑_{n=1}^∞ kvn = k ∑_{n=1}^∞ vn. According to Theorem 4.2.1, W is a subspace of R∞.
18.
The line L contains at least one point, e.g. the origin.
If the points (x1, y1, z1) and (x2, y2, z2) are both on L, then there must exist real numbers t1 and t2 such that x1 = at1, y1 = bt1, z1 = ct1, x2 = at2, y2 = bt2, and z2 = ct2.
L is closed under addition since (x1, y1, z1) + (x2, y2, z2) = (a(t1 + t2), b(t1 + t2), c(t1 + t2)).
It is also closed under scalar multiplication because k(x1, y1, z1) = (a(kt1), b(kt1), c(kt1)).
It follows from Theorem 4.2.1 that L is a subspace of R3.
19.
(a)
 1 0 12 


The reduced row echelon form of the coefficient matrix A is 0 1 32  therefore the solution
0 0 0 
are x   12 t , y   32 t , z  t . These are parametric equations of a line through the origin.
(b)
(c)
(d)
 1 0 0


The reduced row echelon form of the coefficient matrix A is 0 1 0  therefore the only solution
0 0 1
is x  y  z  0 - the origin.
 1 3 1


The reduced row echelon form of the coefficient matrix A is 0 0 0  which corresponds to an
0 0 0 
equation of a plane through the origin x  3 y  z  0 .
 1 0 3


The reduced row echelon form of the coefficient matrix A is 0 1 2  therefore the solutions
0 0 0 
are x  3t , y  2t , z  t . These are parametric equations of a line through the origin.
21.
Let W denote the set of all continuous functions f = f(x) on [a, b] such that ∫_a^b f(x) dx = 0.
This set contains at least one function, f(x) = 0.
Let us assume f = f(x) and g = g(x) are functions in W. From calculus,
∫_a^b [f(x) + g(x)] dx = ∫_a^b f(x) dx + ∫_a^b g(x) dx = 0 and ∫_a^b kf(x) dx = k ∫_a^b f(x) dx = 0,
therefore both f + g and kf are in W for any scalar k. According to Theorem 4.2.1, W is a subspace of C[a, b].
23.
Since TA: R3 → Rm, it follows from Theorem 4.2.5 that the kernel of TA must be a subspace of R3. Hence, according to Table 1 the kernel can be one of the following four geometric objects:
• the origin,
• a line through the origin,
• a plane through the origin,
• R3.
25.
Let W be the set of all functions of the form x(t) = c1 cos ωt + c2 sin ωt; W is a subset of C(-∞, ∞).
This set contains at least one function, x(t) = 0.
A sum of two functions in W is also in W:
(c1 cos ωt + c2 sin ωt) + (d1 cos ωt + d2 sin ωt) = (c1 + d1) cos ωt + (c2 + d2) sin ωt.
A scalar product of a function in W by any scalar k is also a function in W:
k(c1 cos ωt + c2 sin ωt) = (kc1) cos ωt + (kc2) sin ωt.
According to Theorem 4.2.1, W is a subspace of C(-∞, ∞).
26.
For example, consider the subsets U = {(x, 2x) | x in R} and V = {(x, 3x) | x in R} of R2, and let W = U ∪ V. Then (1, 2) is in U and (1, 3) is in V, but (1, 2) + (1, 3) = (2, 5) is not contained in W. According to Theorem 4.2.1, W is not a subspace of R2.
True-False Exercises
(a)
True. This follows from Definition 1.
(b)
True.
(c)
False. The set of all nonnegative real numbers is a subset of the vector space R containing 0, but it is not closed under scalar multiplication.
(d)
False. By Theorem 4.2.4, the kernel of TA: Rn → Rm is a subspace of Rn.
(e)
False. The solution set of a nonhomogeneous system is not closed under addition: Ax = b and Ay = b do not imply A(x + y) = b.
(f)
True. This follows from Theorem 4.2.2.
(g)
False. Consider W1 = span{(1, 0)} and W2 = span{(0, 1)}. The union of these sets is not closed under vector addition, e.g. (1, 0) + (0, 1) = (1, 1) is outside the union.
(h)
True. This set contains at least one matrix (e.g., In). A sum of two upper triangular matrices is also upper triangular, therefore the set is closed under addition. A scalar multiple of an upper triangular matrix is also upper triangular, hence the set is closed under scalar multiplication.
4.3 Spanning Sets
1.
(a)
For (2, 2, 2) to be a linear combination of the vectors u and v, there must exist scalars a and b such that
a(0, -2, 2) + b(1, 3, -1) = (2, 2, 2)
Equating corresponding components on both sides yields the linear system
0a + 1b = 2
-2a + 3b = 2
2a - 1b = 2
whose augmented matrix has the reduced row echelon form
[1 0 2]
[0 1 2]
[0 0 0]
The linear system is consistent, therefore (2, 2, 2) is a linear combination of u and v.
(b)
For (0, 4, 5) to be a linear combination of the vectors u and v, there must exist scalars a and b such that
a(0, -2, 2) + b(1, 3, -1) = (0, 4, 5)
Equating corresponding components on both sides yields the linear system
0a + 1b = 0
-2a + 3b = 4
2a - 1b = 5
whose augmented matrix has the reduced row echelon form
[1 0 0]
[0 1 0]
[0 0 1]
The last row corresponds to the equation 0 = 1, which is contradictory. We conclude that (0, 4, 5) is not a linear combination of u and v.
(c)
By inspection, the zero vector (0, 0, 0) is a linear combination of u and v since
0(0, -2, 2) + 0(1, 3, -1) = (0, 0, 0)
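The row reductions in this exercise are easy to confirm with SymPy; the following sketch (an illustration for checking, not part of the manual's method) solves parts (a) and (b) directly.

import sympy as sp

u = sp.Matrix([0, -2, 2])
v = sp.Matrix([1, 3, -1])
a, b = sp.symbols('a b')

# Part (a): a solution exists, so (2, 2, 2) is a linear combination.
print(sp.solve(list(a*u + b*v - sp.Matrix([2, 2, 2])), [a, b]))  # {a: 2, b: 2}

# Part (b): the empty list signals an inconsistent system.
print(sp.solve(list(a*u + b*v - sp.Matrix([0, 4, 5])), [a, b]))  # []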
2.
(a)
For (-9, -7, -15) to be a linear combination of the vectors u, v, and w, there must exist scalars a, b, and c such that
a(2, 1, 4) + b(1, -1, 3) + c(3, 2, 5) = (-9, -7, -15)
Equating corresponding components on both sides yields the linear system
2a + 1b + 3c = -9
1a - 1b + 2c = -7
4a + 3b + 5c = -15
whose augmented matrix has the reduced row echelon form
[1 0 0 -2]
[0 1 0 1]
[0 0 1 -2]
There is only one solution to this system, a = -2, b = 1, c = -2, therefore (-9, -7, -15) = -2u + 1v - 2w.
(b)
For (6, 11, 6) to be a linear combination of the vectors u, v, and w, there must exist scalars a, b, and c such that
a(2, 1, 4) + b(1, -1, 3) + c(3, 2, 5) = (6, 11, 6)
Equating corresponding components on both sides yields the linear system
2a + 1b + 3c = 6
1a - 1b + 2c = 11
4a + 3b + 5c = 6
whose augmented matrix has the reduced row echelon form
[1 0 0 4]
[0 1 0 -5]
[0 0 1 1]
There is only one solution to this system, a = 4, b = -5, c = 1, therefore (6, 11, 6) = 4u - 5v + 1w.
(c)
For (0, 0, 0) to be a linear combination of the vectors u, v, and w, there must exist scalars a, b, and c such that
a(2, 1, 4) + b(1, -1, 3) + c(3, 2, 5) = (0, 0, 0)
Equating corresponding components on both sides yields the linear system
2a + 1b + 3c = 0
1a - 1b + 2c = 0
4a + 3b + 5c = 0
whose augmented matrix has the reduced row echelon form
[1 0 0 0]
[0 1 0 0]
[0 0 1 0]
There is only one solution to this system, a = 0, b = 0, c = 0, therefore (0, 0, 0) = 0u + 0v + 0w.
3.
(a)
For [6 -8; -1 -8] to be a linear combination of A, B, and C, there must exist scalars a, b, and c such that
a[4 0; -2 -2] + b[1 -1; 2 3] + c[0 2; 1 4] = [6 -8; -1 -8]
Equating corresponding entries on both sides yields the linear system
4a + 1b + 0c = 6
0a - 1b + 2c = -8
-2a + 2b + 1c = -1
-2a + 3b + 4c = -8
whose augmented matrix has the reduced row echelon form
[1 0 0 1]
[0 1 0 2]
[0 0 1 -3]
[0 0 0 0]
The linear system is consistent, therefore [6 -8; -1 -8] is a linear combination of A, B, and C.
(b)
The zero matrix [0 0; 0 0] is a linear combination of A, B, and C since 0A + 0B + 0C = [0 0; 0 0].
(c)
For [-1 5; 7 1] to be a linear combination of A, B, and C, there must exist scalars a, b, and c such that
a[4 0; -2 -2] + b[1 -1; 2 3] + c[0 2; 1 4] = [-1 5; 7 1]
Equating corresponding entries on both sides yields the linear system
4a + 1b + 0c = -1
0a - 1b + 2c = 5
-2a + 2b + 1c = 7
-2a + 3b + 4c = 1
whose augmented matrix has the reduced row echelon form
[1 0 0 0]
[0 1 0 0]
[0 0 1 0]
[0 0 0 1]
The last row corresponds to the equation 0 = 1, which is contradictory. We conclude that [-1 5; 7 1] is not a linear combination of A, B, and C.
4.
(a)
An arbitrary vector in P2 has the form p = a + bx + cx^2, so it is in the span of p1 = 2 + x + x^2, p2 = 1 - x^2, and p3 = 1 + 2x if we can solve the equation
k1(2 + x + x^2) + k2(1 - x^2) + k3(1 + 2x) = a + bx + cx^2.
This can be rewritten as
(2k1 + k2 + k3) + (k1 + 2k3)x + (k1 - k2)x^2 = a + bx + cx^2.
Equating coefficients yields a linear system with augmented matrix
[2 1 1 a]
[1 0 2 b]
[1 -1 0 c]
The coefficient matrix has determinant 5 ≠ 0, so we can solve the system for all possible choices of a, b, and c. Therefore,
p = 1 + x is in the span of p1, p2, and p3.
(b)
From part (a), p = 1 + x^2 is in the span of p1, p2, and p3.
(c)
From part (a), p = 1 + x + x^2 is in the span of p1, p2, and p3.
5.
(a)
 1 1
0 1
0 1 
2 0  1 2 
 k2 
 k3 
 k4 
We need to solve the equation k1 




 to express
0 2 
0 1
0 0 
 1 1 2 4 
1 2 
the vector 
 as the desired linear combination. We can rewrite this as
2 4 
 k1  2 k4
 k

4
k1  k2  k3  1 2 

. Equating coefficients produces a linear system whose augmented
2 k1  k2  k4  2 4 
 1
 1
matrix is 
 0

 2
0
1
0
1
0 2
1 0
0
1
0 1
1
1
0

2
. This matrix has reduced row echelon form 
0
2


4
0
0 3
0 12 
0 13

1
2
0
1
0
0
0
0
1
0
0
1
0
1
0 2
1 0
0
1
0 1
 1 1
0 1
0 1 
2 0  1 2 
hence 3 
 12 
 13 
 2




.
0 2 
0 1
0 0 
 1 1 2 4 
(b)
Following part (a), we obtain a linear system whose augmented matrix is
[1 0 0 2 3]
[1 1 -1 0 1]
[0 0 0 -1 1]
[-2 1 0 1 2]
This matrix has reduced row echelon form
[1 0 0 0 5]
[0 1 0 0 13]
[0 0 1 0 17]
[0 0 0 1 -1]
hence 5[1 1; 0 -2] + 13[0 1; 0 1] + 17[0 -1; 0 0] - 1[2 0; -1 1] = [3 1; 1 2].
6.
(a)
For -9 - 7x - 15x^2 to be a linear combination of the vectors p1, p2, and p3, there must exist scalars a, b, and c such that
a(2 + x + 4x^2) + b(1 - x + 3x^2) + c(3 + 2x + 5x^2) = -9 - 7x - 15x^2
holds for all real x values. Grouping the terms according to the powers of x yields
(2a + b + 3c) + (a - b + 2c)x + (4a + 3b + 5c)x^2 = -9 - 7x - 15x^2
Since this equality must hold for every real value x, the coefficients associated with the like powers of x on both sides must match. This results in the linear system
2a + 1b + 3c = -9
1a - 1b + 2c = -7
4a + 3b + 5c = -15
whose augmented matrix has the reduced row echelon form
[1 0 0 -2]
[0 1 0 1]
[0 0 1 -2]
There is only one solution to this system, a = -2, b = 1, c = -2, therefore
-9 - 7x - 15x^2 = -2p1 + 1p2 - 2p3.
(b)
For 6 + 11x + 6x^2 to be a linear combination of the vectors p1, p2, and p3, there must exist scalars a, b, and c such that
a(2 + x + 4x^2) + b(1 - x + 3x^2) + c(3 + 2x + 5x^2) = 6 + 11x + 6x^2
holds for all real x values. Grouping the terms according to the powers of x yields
(2a + b + 3c) + (a - b + 2c)x + (4a + 3b + 5c)x^2 = 6 + 11x + 6x^2
Since this equality must hold for every real value x, the coefficients associated with the like powers of x on both sides must match. This results in the linear system
2a + 1b + 3c = 6
1a - 1b + 2c = 11
4a + 3b + 5c = 6
whose augmented matrix has the reduced row echelon form
[1 0 0 4]
[0 1 0 -5]
[0 0 1 1]
There is only one solution to this system, a = 4, b = -5, c = 1, therefore 6 + 11x + 6x^2 = 4p1 - 5p2 + 1p3.
(c)
By inspection, 0 = 0p1 + 0p2 + 0p3.
(d)
For 7 + 8x + 9x^2 to be a linear combination of the vectors p1, p2, and p3, there must exist scalars a, b, and c such that
a(2 + x + 4x^2) + b(1 - x + 3x^2) + c(3 + 2x + 5x^2) = 7 + 8x + 9x^2
holds for all real x values. Grouping the terms according to the powers of x yields
(2a + b + 3c) + (a - b + 2c)x + (4a + 3b + 5c)x^2 = 7 + 8x + 9x^2
Since this equality must hold for every real value x, the coefficients associated with the like powers of x on both sides must match. This results in the linear system
2a + 1b + 3c = 7
1a - 1b + 2c = 8
4a + 3b + 5c = 9
whose augmented matrix has the reduced row echelon form
[1 0 0 0]
[0 1 0 -2]
[0 0 1 3]
There is only one solution to this system, a = 0, b = -2, c = 3, therefore 7 + 8x + 9x^2 = 0p1 - 2p2 + 3p3.
7.
(a)
The given vectors span R3 if an arbitrary vector b = (b1, b2, b3) can be expressed as a linear combination
(b1, b2, b3) = k1(2, 2, 2) + k2(0, 0, 3) + k3(0, 1, 1)
Equating corresponding components on both sides yields the linear system
2k1 + 0k2 + 0k3 = b1
2k1 + 0k2 + 1k3 = b2
2k1 + 3k2 + 1k3 = b3
By inspection, regardless of the right hand side values b1, b2, b3, the first equation can be solved for k1, then the second equation can be used to obtain k3, and the third would yield k2.
We conclude that v1, v2, and v3 span R3.
(b)
The given vectors span R3 if an arbitrary vector b = (b1, b2, b3) can be expressed as a linear combination
(b1, b2, b3) = k1(2, -1, 3) + k2(4, 1, 2) + k3(8, -1, 8)
Equating corresponding components on both sides yields the linear system
2k1 + 4k2 + 8k3 = b1
-1k1 + 1k2 - 1k3 = b2
3k1 + 2k2 + 8k3 = b3
The determinant of the coefficient matrix of this system is det[2 4 8; -1 1 -1; 3 2 8] = 0, therefore by Theorem 2.3.8 the system cannot be consistent for all right hand side vectors b.
We conclude that v1, v2, and v3 do not span R3.
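The determinant test used in part (b) is easy to reproduce numerically; here is a small NumPy sketch (an aid for checking, not part of the original solution) with the candidate vectors as columns.

import numpy as np

part_a = np.column_stack([(2, 2, 2), (0, 0, 3), (0, 1, 1)])
part_b = np.column_stack([(2, -1, 3), (4, 1, 2), (8, -1, 8)])
print(np.linalg.det(part_a))   # -6.0, nonzero: the vectors span R^3
print(np.linalg.det(part_b))   #  0.0 (up to roundoff): they do not span R^3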
8.
(a)
In order for the vector (2, 3, -7, 3) to be in span{v1, v2, v3}, there must exist scalars a, b, and c such that
a(2, 1, 0, 3) + b(3, -1, 5, 2) + c(-1, 0, 2, 1) = (2, 3, -7, 3)
Equating corresponding components on both sides yields the linear system
2a + 3b - 1c = 2
1a - 1b + 0c = 3
0a + 5b + 2c = -7
3a + 2b + 1c = 3
whose augmented matrix has the reduced row echelon form
[1 0 0 2]
[0 1 0 -1]
[0 0 1 -1]
[0 0 0 0]
This system is consistent (its only solution is a = 2, b = -1, c = -1), therefore (2, 3, -7, 3) is in span{v1, v2, v3}.
(b)
The vector (0, 0, 0, 0) is obviously in span{v1, v2, v3} since
0(2, 1, 0, 3) + 0(3, -1, 5, 2) + 0(-1, 0, 2, 1) = (0, 0, 0, 0)
(c)
In order for the vector (1, 1, 1, 1) to be in span{v1, v2, v3}, there must exist scalars a, b, and c such that
a(2, 1, 0, 3) + b(3, -1, 5, 2) + c(-1, 0, 2, 1) = (1, 1, 1, 1)
Equating corresponding components on both sides yields the linear system
2a + 3b - 1c = 1
1a - 1b + 0c = 1
0a + 5b + 2c = 1
3a + 2b + 1c = 1
whose augmented matrix has the reduced row echelon form
[1 0 0 0]
[0 1 0 0]
[0 0 1 0]
[0 0 0 1]
This system is inconsistent, therefore (1, 1, 1, 1) is not in span{v1, v2, v3}.
(d)
In order for the vector (-4, 6, -13, 4) to be in span{v1, v2, v3}, there must exist scalars a, b, and c such that
a(2, 1, 0, 3) + b(3, -1, 5, 2) + c(-1, 0, 2, 1) = (-4, 6, -13, 4)
Equating corresponding components on both sides yields the linear system
2a + 3b - 1c = -4
1a - 1b + 0c = 6
0a + 5b + 2c = -13
3a + 2b + 1c = 4
whose augmented matrix has the reduced row echelon form
[1 0 0 3]
[0 1 0 -3]
[0 0 1 1]
[0 0 0 0]
This system is consistent (its only solution is a = 3, b = -3, c = 1), therefore (-4, 6, -13, 4) is in span{v1, v2, v3}.
9.
The given polynomials span P2 if an arbitrary polynomial p = a0 + a1x + a2x^2 in P2 can be expressed as a linear combination
a0 + a1x + a2x^2 = k1(1 - x + 2x^2) + k2(3 + x) + k3(5 - x + 4x^2) + k4(-2 - 2x + 2x^2)
Grouping the terms according to the powers of x yields
a0 + a1x + a2x^2 = (k1 + 3k2 + 5k3 - 2k4) + (-k1 + k2 - k3 - 2k4)x + (2k1 + 4k3 + 2k4)x^2
Since this equality must hold for every real value x, the coefficients associated with the like powers of x on both sides must match. This results in the linear system
1k1 + 3k2 + 5k3 - 2k4 = a0
-1k1 + 1k2 - 1k3 - 2k4 = a1
2k1 + 0k2 + 4k3 + 2k4 = a2
whose augmented matrix
[1 3 5 -2 a0]
[-1 1 -1 -2 a1]
[2 0 4 2 a2]
reduces to
[1 0 2 1 (1/4)a0 - (3/4)a1]
[0 1 1 -1 (1/4)a0 + (1/4)a1]
[0 0 0 0 -(1/2)a0 + (3/2)a1 + a2]
therefore the system has no solution if -(1/2)a0 + (3/2)a1 + a2 ≠ 0.
Since polynomials p = a0 + a1x + a2x^2 for which -(1/2)a0 + (3/2)a1 + a2 ≠ 0 cannot be expressed as a linear combination of p1, p2, p3, and p4, we conclude that the polynomials p1, p2, p3, and p4 do not span P2.
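A rank comparison makes the consistency condition above concrete; the sketch below (a SymPy illustration under the same setup, not part of the manual's solution) tests two sample polynomials against the criterion -(1/2)a0 + (3/2)a1 + a2 = 0.

import sympy as sp

A = sp.Matrix([[1, 3, 5, -2],
               [-1, 1, -1, -2],
               [2, 0, 4, 2]])

def in_span(a0, a1, a2):
    aug = A.row_join(sp.Matrix([a0, a1, a2]))
    return A.rank() == aug.rank()

print(in_span(1, 1, 1))   # False: -1/2 + 3/2 + 1 = 2 is nonzero
print(in_span(2, 0, 1))   # True:  -1 + 0 + 1 = 0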
10.
The given polynomials span P2 if an arbitrary polynomial p = a0 + a1x + a2x^2 in P2 can be expressed as a linear combination
a0 + a1x + a2x^2 = k1(1 + x) + k2(1 - x) + k3(1 + x + x^2) + k4(2 + x^2)
Grouping the terms according to the powers of x yields
a0 + a1x + a2x^2 = (k1 + k2 + k3 + 2k4) + (k1 - k2 + k3)x + (k3 + k4)x^2
Since this equality must hold for every real value x, the coefficients associated with the like powers of x on both sides must match. This results in the linear system
1k1 + 1k2 + 1k3 + 2k4 = a0
1k1 - 1k2 + 1k3 + 0k4 = a1
0k1 + 0k2 + 1k3 + 1k4 = a2
whose augmented matrix
[1 1 1 2 a0]
[1 -1 1 0 a1]
[0 0 1 1 a2]
reduces to
[1 0 0 0 (1/2)a0 + (1/2)a1 - a2]
[0 1 0 1 (1/2)a0 - (1/2)a1]
[0 0 1 1 a2]
therefore the system has a solution for every choice of a0, a1, and a2. We conclude that the polynomials p1, p2, p3, and p4 span P2.
11.
(a)
The given matrices span M22 if an arbitrary matrix [a b; c d] can be expressed as a linear combination
k1[1 0; 1 0] + k2[1 1; 0 0] + k3[0 1; 0 1] + k4[0 0; 1 1] = [a b; c d]
We can rewrite this as
[k1 + k2    k2 + k3]
[k1 + k4    k3 + k4]  =  [a b; c d]
Equating entries produces a linear system whose augmented matrix is
[1 1 0 0 a]
[0 1 1 0 b]
[1 0 0 1 c]
[0 0 1 1 d]
The coefficient matrix has determinant
det[1 1 0 0; 0 1 1 0; 1 0 0 1; 0 0 1 1] = 0,
which means the system cannot be consistent for every choice of a, b, c, and d. We conclude that the given matrices do not span M22.
(b)
The given matrices span M22 if an arbitrary matrix [a b; c d] can be expressed as a linear combination
k1[-1 1; 0 1] + k2[0 1; 0 0] + k3[1 1; 1 0] + k4[1 0; 0 1] = [a b; c d]
We can rewrite this as
[-k1 + k3 + k4    k1 + k2 + k3]
[k3               k1 + k4]  =  [a b; c d]
Equating entries produces a linear system whose augmented matrix is
[-1 0 1 1 a]
[1 1 1 0 b]
[0 0 1 0 c]
[1 0 0 1 d]
The coefficient matrix has determinant
det[-1 0 1 1; 1 1 1 0; 0 0 1 0; 1 0 0 1] = -2 ≠ 0,
which means the system is consistent for every choice of a, b, c, and d. We conclude that the given matrices span M22.
(c)
The given matrices span M22 if an arbitrary matrix [a b; c d] can be expressed as a linear combination
k1[1 0; 0 0] + k2[1 1; 0 0] + k3[1 1; 1 0] + k4[1 1; 1 1] = [a b; c d]
We can rewrite this as
[k1 + k2 + k3 + k4    k2 + k3 + k4]
[k3 + k4              k4]  =  [a b; c d]
Equating entries produces a linear system whose augmented matrix is
[1 1 1 1 a]
[0 1 1 1 b]
[0 0 1 1 c]
[0 0 0 1 d]
The coefficient matrix has determinant
det[1 1 1 1; 0 1 1 1; 0 0 1 1; 0 0 0 1] = 1 ≠ 0,
which means the system is consistent for every choice of a, b, c, and d. We conclude that the given matrices span M22.
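Flattening each matrix into a column reduces all three parts to a single determinant; a short SymPy sketch for part (c) (a check, not part of the manual's solution):

import sympy as sp

mats = [sp.Matrix([[1, 0], [0, 0]]),
        sp.Matrix([[1, 1], [0, 0]]),
        sp.Matrix([[1, 1], [1, 0]]),
        sp.Matrix([[1, 1], [1, 1]])]
C = sp.Matrix.hstack(*[m.reshape(4, 1) for m in mats])   # one matrix per column
print(C.det())   # 1, nonzero, so the four matrices span M22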
12.
(a)
The vector u = (-1, 2) is in the span of {TA(e1), TA(e2)} if it is a linear combination of the columns of A = [1 2; 0 -1], since
TA(e1) = [1 2; 0 -1][1; 0] = [1; 0] and TA(e2) = [1 2; 0 -1][0; 1] = [2; -1].
Observe that 3[1; 0] - 2[2; -1] = [-1; 2]. We conclude that u = (-1, 2) is in the span of {TA(e1), TA(e2)}.
(b)
The vector u = (-1, 2) is in the span of {TA(e1), TA(e2)} if it is a linear combination of the columns of A = [1 1; 1 1], since
TA(e1) = [1 1; 1 1][1; 0] = [1; 1] and TA(e2) = [1 1; 1 1][0; 1] = [1; 1].
This means every vector in the span of {TA(e1), TA(e2)} is a scalar multiple of the vector (1, 1). Since u = (-1, 2) is not a scalar multiple of (1, 1), we conclude that it is not in the span of {TA(e1), TA(e2)}.
13.
(a)
The vector u = (1, 1, 1) is in the span of {TA(e1), TA(e2)} if it is a linear combination of the columns of
A = [0 2; 1 2; 1 0]: TA(e1) = [0; 1; 1] and TA(e2) = [2; 2; 0]. So there must exist scalars a and b such that a(0, 1, 1) + b(2, 2, 0) = (1, 1, 1).
Equating corresponding components on both sides leads to the linear system
0 + 2b = 1
a + 2b = 1
a + 0 = 1
which is inconsistent, since subtracting the last equation from the second yields 2b = 0, while the first equation is 2b = 1. We conclude that the vector u = (1, 1, 1) is not in the span of {TA(e1), TA(e2)}.
(b)
The vector u = (1, 1, 1) is in the span of {TA(e1), TA(e2)} if it is a linear combination of the columns of
A = [0 2; 1 1; 2 0]: TA(e1) = [0; 1; 2] and TA(e2) = [2; 1; 0]. So there must exist scalars a and b such that a(0, 1, 2) + b(2, 1, 0) = (1, 1, 1). Observe that
(1/2)(0, 1, 2) + (1/2)(2, 1, 0) = (1, 1, 1). We conclude that the vector u = (1, 1, 1) is in the span of {TA(e1), TA(e2)}.
14.
(a)
It follows from the trigonometric identity cos 2x = cos^2 x - sin^2 x that cos 2x is in span{f, g}.
(b)
In order for 3 + x^2 to be in span{f, g}, there must exist scalars a and b such that
a cos^2 x + b sin^2 x = 3 + x^2
holds for all real x values. When x = 0 the equation becomes a = 3; however, if x = π then it yields a = 3 + π^2, a contradiction. We conclude that 3 + x^2 is not in span{f, g}.
(c)
It follows from the trigonometric identity cos^2 x + sin^2 x = 1 that 1 is in span{f, g}.
(d)
In order for sin x to be in span{f, g}, there must exist scalars a and b such that
a cos^2 x + b sin^2 x = sin x
holds for all real x values. When x = π/2 the equation becomes b = 1; however, if x = -π/2 then it yields b = -1, a contradiction. We conclude that sin x is not in span{f, g}.
(e)
Since 0 cos^2 x + 0 sin^2 x = 0 holds for all real x values, we conclude that 0 is in span{f, g}.
15.
(a)
The solution space W of the homogeneous system Ax = 0, where
A = [1 1 1 1; 1 0 1 0; 0 1 0 1],
is obtained from the reduced row echelon form
[1 0 1 0]
[0 1 0 1]
[0 0 0 0]
The general solution in vector form is (x, y, z, w) = (-s, -t, s, t) = (-s)(1, 0, -1, 0) + (-t)(0, 1, 0, -1), therefore the solution space is spanned by the vectors v1 = (1, 0, -1, 0) and v2 = (0, 1, 0, -1). We conclude that the vectors u = (1, 0, -1, 0) and v = (0, 1, 0, -1) span the solution space W.
(b)
From part (a) and Theorem 4.3.2, we need to show that the vectors u = (1, 0, -1, 0) and v = (1, 1, -1, -1) and the vectors v1 = (1, 0, -1, 0) and v2 = (0, 1, 0, -1) are each contained in the span of the other pair. Observe that u = v1 and v = v1 + v2, and conversely v1 = u and v2 = v - u. We conclude that the vectors u = (1, 0, -1, 0) and v = (1, 1, -1, -1) span the solution space W.
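The two rank computations below confirm part (b) in one stroke (a SymPy sketch for checking, not part of the original argument): u and v lie in W, and appending a basis of W does not raise the rank, so span{u, v} = W.

import sympy as sp

A = sp.Matrix([[1, 1, 1, 1],
               [1, 0, 1, 0],
               [0, 1, 0, 1]])
u = sp.Matrix([1, 0, -1, 0])
v = sp.Matrix([1, 1, -1, -1])
S = sp.Matrix.hstack(u, v)
print(A * u == sp.zeros(3, 1), A * v == sp.zeros(3, 1))   # True True: u, v are in W
print(S.rank())                                           # 2
print(sp.Matrix.hstack(S, *A.nullspace()).rank())         # still 2, so span{u, v} = W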
16.
(a)
The solution space W of the homogeneous system Ax = 0, where
A = [0 1 1 1; 0 2 2 2; 0 3 3 3],
is obtained from the reduced row echelon form
[0 1 1 1]
[0 0 0 0]
[0 0 0 0]
The general solution in vector form is (x, y, z, w) = (r, -s - t, s, t) = r(1, 0, 0, 0) + s(0, -1, 1, 0) + t(0, -1, 0, 1), therefore the solution space is spanned by the vectors v1 = (1, 0, 0, 0), v2 = (0, -1, 1, 0), and v3 = (0, -1, 0, 1). Observe that
(1, 0, 0, 0) = a(1, -1, 1, 0) + b(0, -1, 0, 1) has no solution, so that v1 is not in the span of the vectors u = (1, -1, 1, 0) and v = (0, -1, 0, 1). By Theorem 4.3.2, they do not span the solution space W.
(b)
Using part (a), observe that (1, 0, 0, 0) = a(0, -1, 1, 0) + b(1, 0, -1, 1) has no solution, so that v1 is not in the span of the vectors u = (0, -1, 1, 0) and v = (1, 0, -1, 1). By Theorem 4.3.2, they do not span the solution space W.
17.
(a)
The vectors TA(1, 2) = (-1, 4) and TA(-1, 1) = (2, -2) span R2 if an arbitrary vector b = (b1, b2) can be expressed as a linear combination
(b1, b2) = k1(-1, 4) + k2(2, -2)
Equating corresponding components on both sides yields the linear system
-1k1 + 2k2 = b1
4k1 - 2k2 = b2
The determinant of the coefficient matrix of this system is det[-1 2; 4 -2] = -6 ≠ 0, therefore by Theorem 2.3.8 the system is consistent for all right hand side vectors b.
We conclude that TA(u1) and TA(u2) span R2.
(b)
The vectors TA(1, 2) = (-1, 2) and TA(-1, 1) = (2, -4) span R2 if an arbitrary vector b = (b1, b2) can be expressed as a linear combination
(b1, b2) = k1(-1, 2) + k2(2, -4)
Equating corresponding components on both sides yields the linear system
-1k1 + 2k2 = b1
2k1 - 4k2 = b2
The determinant of the coefficient matrix of this system is det[-1 2; 2 -4] = 0, therefore by Theorem 2.3.8 the system cannot be consistent for all right hand side vectors b.
We conclude that TA(u1) and TA(u2) do not span R2.
18.
(a)
The vectors TA(0, 1, 1) = (1, 0), TA(2, -1, 1) = (1, -2), and TA(1, 1, -2) = (2, 3) span R2 if an arbitrary vector b = (b1, b2) can be expressed as a linear combination
(b1, b2) = k1(1, 0) + k2(1, -2) + k3(2, 3)
Equating corresponding components on both sides yields the linear system
1k1 + 1k2 + 2k3 = b1
0k1 - 2k2 + 3k3 = b2
The reduced row echelon form of the coefficient matrix of this system is
[1 0 7/2]
[0 1 -3/2]
therefore the system is consistent for all right hand side vectors b.
We conclude that TA(u1), TA(u2), and TA(u3) span R2.
(b)
The vectors TA(0, 1, 1) = (1, 4), TA(2, -1, 1) = (-1, 4), and TA(1, 1, -2) = (1, 4) span R2 if an arbitrary vector b = (b1, b2) can be expressed as a linear combination
(b1, b2) = k1(1, 4) + k2(-1, 4) + k3(1, 4)
Equating corresponding components on both sides yields the linear system
1k1 - 1k2 + 1k3 = b1
4k1 + 4k2 + 4k3 = b2
The reduced row echelon form of the coefficient matrix of this system is
[1 0 1]
[0 1 0]
therefore the system is consistent for all right hand side vectors b. We conclude that TA(u1), TA(u2), and TA(u3) span R2.
19.
Using Theorem 4.3.2, we need to show that each of the polynomials q1 = 2x and q2 = 1 + x^2 is in the span of the polynomials p1 = 1 + x^2 and p2 = 1 + x + x^2, and vice versa. Clearly, q2 = p1. Observe that
2x = (-2)(1 + x^2) + 2(1 + x + x^2), so that q1 = (-2)p1 + 2p2. Conversely, p1 = q2 and p2 = q2 + (1/2)q1, so the two sets span the same subspace.
20.
We begin by showing that the vector w1 is a linear combination of the vectors v1, v2, and v3, i.e., that there exist scalars a, b, and c such that
a(1, 6, 4) + b(2, 4, -1) + c(-1, 2, 5) = (1, -2, -5)
Equating corresponding components on both sides leads to the linear system
1a + 2b - 1c = 1
6a + 4b + 2c = -2
4a - 1b + 5c = -5
whose augmented matrix has the reduced row echelon form
[1 0 1 -1]
[0 1 -1 1]
[0 0 0 0]
A general solution of this system is a = -1 - t, b = 1 + t, c = t. E.g., letting t = 0 yields the solution a = -1, b = 1, c = 0.
Applying the same procedure repeatedly to each of the remaining vectors, we can show that
w1 = -1v1 + 1v2 + 0v3
w2 = 2v1 - 1v2 + 0v3
v1 = 1w1 + 1w2
v2 = 2w1 + 1w2
v3 = -1w1 + 0w2
It follows from Theorem 4.3.2 that the sets {v1, v2, v3} and {w1, w2} span the same subspace of R3.
21.
For the vector (3, 5) to be expressed as v + w, where v is in the subspace spanned by (3, 1) and w is in the subspace spanned by (2, 1), we must produce scalars a and b such that
a(3, 1) + b(2, 1) = (3, 5). Equating corresponding components yields a linear system with augmented matrix
[3 2 3]
[1 1 5]
which has reduced row echelon form
[1 0 -7]
[0 1 12]
Therefore v = -7(3, 1) = (-21, -7) and w = 12(2, 1) = (24, 12).
22.
For the vector (1, 0, 1) to be expressed as v + w, where v is in the solution space V of 4x - y + 2z = 0 and w is in the subspace spanned by (1, 1, 1), we first must find vectors that span V. To do this we write the matrix [4 -1 2], which has reduced row echelon form [1 -1/4 1/2]. A general solution is then (x, y, z) = s(1/4, 1, 0) + t(-1/2, 0, 1), so the vectors (1/4, 1, 0) and (-1/2, 0, 1) span V.
We must produce scalars a, b, and c such that a(1/4, 1, 0) + b(-1/2, 0, 1) + c(1, 1, 1) = (1, 0, 1). Equating corresponding components yields a linear system with augmented matrix
[1/4 -1/2 1 1]
[1 0 1 0]
[0 1 1 1]
which has reduced row echelon form
[1 0 0 -6/5]
[0 1 0 -1/5]
[0 0 1 6/5]
Therefore -(6/5)(1/4, 1, 0) - (1/5)(-1/2, 0, 1) + (6/5)(1, 1, 1) = (1, 0, 1), so that
v = -(6/5)(1/4, 1, 0) - (1/5)(-1/2, 0, 1) = (-1/5, -6/5, -1/5) and w = (6/5, 6/5, 6/5).
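The decomposition can be double-checked by solving the final system symbolically; a brief SymPy sketch (an illustration only, not part of the manual's method):

import sympy as sp

a, b, c = sp.symbols('a b c')
v1 = sp.Matrix([sp.Rational(1, 4), 1, 0])    # together with v2, spans V
v2 = sp.Matrix([sp.Rational(-1, 2), 0, 1])
u = sp.Matrix([1, 1, 1])
eqs = list(a*v1 + b*v2 + c*u - sp.Matrix([1, 0, 1]))
print(sp.solve(eqs, [a, b, c]))   # {a: -6/5, b: -1/5, c: 6/5}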
True-False Exercises
(a)
True.
(b)
False. The span of the zero vector is just the zero vector.
(c)
False. For example, the vectors (1, 1, 1) and (2, 2, 2) span a line.
(d)
True.
(e)
True. This follows from part (a) of Theorem 4.2.1.
(f)
False. For any nonzero vector v in a vector space V, both v and 2v span the same subspace of V.
(g)
False. The constant polynomial p(x) = 1 cannot be represented as a linear combination of these polynomials, since at x = 1 all three are zero, whereas p(1) = 1.
4.4 Linear Independence
1.
(a)
Since u2 = 5u1, linear dependence follows from Definition 1.
(b)
A set of 3 vectors in R2 must be linearly dependent by Theorem 4.4.3.
(c)
Since p2 = 2p1, linear dependence follows from Definition 1.
(d)
Since A = (-1)B, linear dependence follows from Definition 1.
2.
(a)
The vector equation a(-3, 0, 4) + b(5, -1, 2) + c(1, 1, 3) = (0, 0, 0) can be rewritten as a homogeneous linear system by equating the corresponding components on both sides
-3a + 5b + 1c = 0
0a - 1b + 1c = 0
4a + 2b + 3c = 0
The augmented matrix of this system has the reduced row echelon form
[1 0 0 0]
[0 1 0 0]
[0 0 1 0]
therefore the system has only the trivial solution a = b = c = 0. We conclude that the given set of vectors is linearly independent.
(b)
A set of 4 vectors in R3 must be linearly dependent by Theorem 4.4.3.
3.
(a)
The vector equation a(3, 8, 7, -3) + b(1, 5, 3, -1) + c(2, -1, 2, 6) + d(4, 2, 6, 4) = (0, 0, 0, 0) can be rewritten as a homogeneous linear system by equating the corresponding components on both sides
3a + 1b + 2c + 4d = 0
8a + 5b - 1c + 2d = 0
7a + 3b + 2c + 6d = 0
-3a - 1b + 6c + 4d = 0
The augmented matrix of this system has the reduced row echelon form
[1 0 0 1 0]
[0 1 0 -1 0]
[0 0 1 1 0]
[0 0 0 0 0]
therefore a general solution of the system is a = -t, b = t, c = -t, d = t.
Since the system has nontrivial solutions, the given set of vectors is linearly dependent.
(b)
The vector equation a(3, 0, -3, 6) + b(0, 2, 3, 1) + c(0, -2, -2, 0) + d(-2, 1, 2, 1) = (0, 0, 0, 0) can be rewritten as a homogeneous linear system by equating the corresponding components on both sides
3a + 0b + 0c - 2d = 0
0a + 2b - 2c + 1d = 0
-3a + 3b - 2c + 2d = 0
6a + 1b + 0c + 1d = 0
The augmented matrix of this system has the reduced row echelon form
[1 0 0 0 0]
[0 1 0 0 0]
[0 0 1 0 0]
[0 0 0 1 0]
therefore the system has only the trivial solution a = b = c = d = 0. We conclude that the given set of vectors is linearly independent.
4.
(a)
The terms in the equation
a(2 - x + 4x^2) + b(3 + 6x + 2x^2) + c(2 + 10x - 4x^2) = 0
can be grouped according to the powers of x:
(2a + 3b + 2c) + (-a + 6b + 10c)x + (4a + 2b - 4c)x^2 = 0 + 0x + 0x^2
For this to hold for all real values of x, the coefficients corresponding to the same powers of x on both sides must match, which leads to the homogeneous linear system
2a + 3b + 2c = 0
-a + 6b + 10c = 0
4a + 2b - 4c = 0
The augmented matrix of this system has the reduced row echelon form
[1 0 0 0]
[0 1 0 0]
[0 0 1 0]
therefore the system has only the trivial solution a = b = c = 0. We conclude that the given set of vectors in P2 is linearly independent.
(b)
The terms in the equation
a(1 + 3x + 3x^2) + b(x + 4x^2) + c(5 + 6x + 3x^2) + d(7 + 2x - x^2) = 0
can be grouped according to the powers of x:
(a + 5c + 7d) + (3a + b + 6c + 2d)x + (3a + 4b + 3c - d)x^2 = 0 + 0x + 0x^2
For this to hold for all real values of x, the coefficients corresponding to the same powers of x on both sides must match, which leads to the homogeneous linear system
a + 5c + 7d = 0
3a + b + 6c + 2d = 0
3a + 4b + 3c - d = 0
The augmented matrix of this system has the reduced row echelon form
[1 0 0 -17/4 0]
[0 1 0 5/4 0]
[0 0 1 9/4 0]
therefore a general solution of the system is a = (17/4)t, b = -(5/4)t, c = -(9/4)t, d = t.
Since the system has nontrivial solutions, the given set of vectors is linearly dependent.
5.
(a)
The matrix equation a[1 0; 1 2] + b[1 2; 2 1] + c[0 1; 2 1] = [0 0; 0 0] can be rewritten as a homogeneous linear system
1a + 1b + 0c = 0
0a + 2b + 1c = 0
1a + 2b + 2c = 0
2a + 1b + 1c = 0
The augmented matrix of this system has the reduced row echelon form
[1 0 0 0]
[0 1 0 0]
[0 0 1 0]
[0 0 0 0]
therefore the system has only the trivial solution a = b = c = 0. We conclude that the given matrices are linearly independent.
(b)
By inspection, the matrix equation a[1 0 0; 0 0 0] + b[0 0 1; 0 0 0] + c[0 0 0; 0 1 0] = [0 0 0; 0 0 0] has only the trivial solution a = b = c = 0. We conclude that the given matrices are linearly independent.
6.
1 0 
 1 0 
2 0  0 0 
 b
c
The matrix equation a 



 can be rewritten as a homogeneous linear
1 k 
 k 1
1 3  0 0 
system
1a 
1b  2c  0
0 a  0b  0c  0
1a  kb  1c  0
ka 
1b  3c  0
Omitting the second equation (which imposes no restrictions on the unknowns), we obtain the coefficient
 1 1 2 


matrix A   1 k 1 . Performing elementary row operations
 k
1 3
38
Chapter 4: General Vector Spaces



add 1 times the first row to the second row,
add k times the first row to the third row, and
add 1 times the second row to the third row
1
2
1

1 . We have det  A   det  B   1  k  4  2 k  therefore by Theorem 2.3.8, the
yields B  0 1  k
0
0 4  2k 
system has only the trivial solution, whenever 1  k  4  2 k   0 .
Consequently, the given matrices are linearly independent for all k values except 1 and 2 .
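The determinant condition on k is quick to verify symbolically; the following SymPy sketch (a check, not part of the manual's derivation) factors the determinant of the coefficient matrix directly.

import sympy as sp

k = sp.symbols('k')
# Coefficient matrix of the homogeneous system above
# (the all-zero second equation is omitted).
A = sp.Matrix([[1, -1, 2],
               [1,  k, 1],
               [k,  1, 3]])
print(sp.factor(A.det()))   # -2*(k - 2)*(k + 1), zero exactly at k = -1 and k = 2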
7.
Three vectors in R3 lie in a plane if and only if they are linearly dependent when they have their initial
points at the origin. (See the discussion following Example 6.)
(a)
The vector equation a(2, -2, 0) + b(6, 1, 4) + c(2, 0, -4) = (0, 0, 0) can be rewritten as a homogeneous linear system by equating the corresponding components on both sides
2a + 6b + 2c = 0
-2a + 1b + 0c = 0
0a + 4b - 4c = 0
The augmented matrix of this system has the reduced row echelon form
[1 0 0 0]
[0 1 0 0]
[0 0 1 0]
therefore the system has only the trivial solution a = b = c = 0. We conclude that the given vectors are linearly independent, hence they do not lie in a plane.
(b)
The vector equation a(-6, 7, 2) + b(3, 2, 4) + c(4, -1, 2) = (0, 0, 0) can be rewritten as a homogeneous linear system by equating the corresponding components on both sides
-6a + 3b + 4c = 0
7a + 2b - 1c = 0
2a + 4b + 2c = 0
The augmented matrix of this system has the reduced row echelon form
[1 0 -1/3 0]
[0 1 2/3 0]
[0 0 0 0]
therefore a general solution of the system is a = (1/3)t, b = -(2/3)t, c = t.
Since the system has nontrivial solutions, the given vectors are linearly dependent, hence they lie in a plane.
8.
(a)
The set v1 , v 3  can be shown to be linearly independent since a  1,2,3   b  3,6,0    0,0,0  has
only the trivial solution a  b  0 . Therefore the three vectors do not lie on the same line (even
though the vectors v1 and v 2 are collinear).
4.4 Linear Independence
(b)
39
Any subset of two vectors chosen from these three vectors can be shown to be linearly independent
(e.g., a  2, 1,4   b  4,2,3    0,0,0  has only the trivial solution a  b  0 ). Therefore the three
vectors do not lie on the same line.
(An alternate way to show this would be to demonstrate that the three vectors form a linearly
independent set, therefore they do not even lie on the same plane, so that they cannot possibly lie on
the same line.)
(c)
Each subset of two vectors chosen from these three vectors can be shown to be linearly dependent
since 1v1  2 v 2  0 , 1v1  2v3  0 , and 1v 2  1v3  0 . Therefore all three vectors lie on the same
line.
9.
(a)
The vector equation a(0, 3, 1, -1) + b(6, 0, 5, 1) + c(4, -7, 1, 3) = (0, 0, 0, 0) can be rewritten as a homogeneous linear system by equating the corresponding components on both sides
0a + 6b + 4c = 0
3a + 0b - 7c = 0
1a + 5b + 1c = 0
-1a + 1b + 3c = 0
The augmented matrix of this system has the reduced row echelon form
[1 0 -7/3 0]
[0 1 2/3 0]
[0 0 0 0]
[0 0 0 0]
therefore a general solution of the system is a = (7/3)t, b = -(2/3)t, c = t.
Since the system has nontrivial solutions, the given set of vectors is linearly dependent.
(b)
From part (a), we have (7/3)t v1 - (2/3)t v2 + t v3 = 0.
Letting t = 3/7, we obtain v1 = (2/7)v2 - (3/7)v3.
Letting t = -3/2, we obtain v2 = (7/2)v1 + (3/2)v3.
Letting t = 1, we obtain v3 = -(7/3)v1 + (2/3)v2.
10.
(a)
The vector equation a(1, 2, 3, 4) + b(0, 1, 0, -1) + c(1, 3, 3, 3) = (0, 0, 0, 0) can be rewritten as a homogeneous linear system by equating the corresponding components on both sides
1a + 0b + 1c = 0
2a + 1b + 3c = 0
3a + 0b + 3c = 0
4a - 1b + 3c = 0
The augmented matrix of this system has the reduced row echelon form
[1 0 1 0]
[0 1 1 0]
[0 0 0 0]
[0 0 0 0]
therefore a general solution of the system is
a = -t, b = -t, c = t
Since the system has nontrivial solutions, the given set of vectors is linearly dependent.
(b)
In the general solution we obtained in part (a), let the parameter t have a nonzero value, e.g., t = 1. Then a = -1, b = -1, and c = 1, so that -v1 - v2 + v3 = 0. This can be solved for each of the three vectors: v1 = v3 - v2, v2 = v3 - v1, and v3 = v1 + v2.
11.
By inspection, when λ = -1/2 the vectors become linearly dependent (since they all become equal). We proceed to find the remaining values of λ.
The vector equation a(λ, -1/2, -1/2) + b(-1/2, λ, -1/2) + c(-1/2, -1/2, λ) = (0, 0, 0) can be rewritten as a homogeneous linear system by equating the corresponding components on both sides
λa - (1/2)b - (1/2)c = 0
-(1/2)a + λb - (1/2)c = 0
-(1/2)a - (1/2)b + λc = 0
The determinant of the coefficient matrix is
det[λ -1/2 -1/2; -1/2 λ -1/2; -1/2 -1/2 λ] = λ^3 - (3/4)λ - 1/4.
This determinant equals zero for exactly those λ values for which the vectors are linearly dependent. Since we already know that λ = -1/2 is one of those values, we can divide λ + 1/2 into λ^3 - (3/4)λ - 1/4 to obtain
λ^3 - (3/4)λ - 1/4 = (λ + 1/2)(λ^2 - (1/2)λ - 1/2) = (λ + 1/2)(λ + 1/2)(λ - 1).
We conclude that the vectors form a linearly dependent set for λ = -1/2 and for λ = 1.
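The cubic and its factorization can be confirmed with SymPy (a sketch for checking, not part of the original solution):

import sympy as sp

lam = sp.symbols('lam')
half = sp.Rational(1, 2)
A = sp.Matrix([[lam, -half, -half],
               [-half, lam, -half],
               [-half, -half, lam]])
print(sp.factor(A.det()))   # (lam - 1)*(2*lam + 1)**2/4, zero at lam = 1 and lam = -1/2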
12.
By part (b) of Theorem 4.4.2, a set with one vector is linearly independent if that vector is not 0 .
13.
(a)
We calculate TA(1, 2) = (-1, 4) and TA(-1, 1) = (2, -2). The vector equation
k1(-1, 4) + k2(2, -2) = (0, 0)
can be rewritten as a homogeneous linear system
-1k1 + 2k2 = 0
4k1 - 2k2 = 0
The determinant of the coefficient matrix of this system is det[-1 2; 4 -2] = -6 ≠ 0, therefore by Theorem 2.3.8 the system has only the trivial solution. We conclude that TA(u1) and TA(u2) form a linearly independent set.
(b)
We calculate TA(1, 2) = (-1, 2) and TA(-1, 1) = (2, -4). Since (2, -4) = -2(-1, 2), it follows by Definition 1 that TA(u1) and TA(u2) form a linearly dependent set.
14.
(a)
We calculate TA(1, 0, 0) = (1, 1, 2), TA(2, -1, 1) = (3, -1, 2), and TA(0, 1, 1) = (3, -3, 2). The vector equation
k1(1, 1, 2) + k2(3, -1, 2) + k3(3, -3, 2) = (0, 0, 0)
can be rewritten as a homogeneous linear system
1k1 + 3k2 + 3k3 = 0
1k1 - 1k2 - 3k3 = 0
2k1 + 2k2 + 2k3 = 0
The determinant of the coefficient matrix of this system is det[1 3 3; 1 -1 -3; 2 2 2] = -8 ≠ 0, therefore by Theorem 2.3.8 the system has only the trivial solution. We conclude that the set {TA(u1), TA(u2), TA(u3)} is linearly independent.
(b)
We calculate TA(1, 0, 0) = (1, 1, 2), TA(2, -1, 1) = (2, -2, 2), and TA(0, 1, 1) = (-2, 2, -2). Since TA(u2) = (-1)TA(u3), it follows that the set {TA(u1), TA(u2), TA(u3)} is linearly dependent.
15.
Three vectors in R3 lie in a plane if and only if they are linearly dependent when they have their initial points at the origin. (See the discussion following Example 6.)
(a)
After the three vectors are moved so that their initial points are at the origin, the resulting vectors do not lie on the same plane. Hence these vectors are linearly independent.
(b)
After the three vectors are moved so that their initial points are at the origin, the resulting vectors lie on the same plane. Hence these vectors are linearly dependent.
16.
(a)
From the identity sin^2 x + cos^2 x = 1 we have (-1)(6) + 2(3 sin^2 x) + 3(2 cos^2 x) = 0 for all real x. Therefore, the set is linearly dependent.
(b)
The equality ax + b cos x = 0 is to hold for all real x. Taking x = 0 yields b = 0, whereas taking x = π/2 implies a = 0. The set is linearly independent.
(c)
The equality a(1) + b sin x + c sin 2x = 0 is to hold for all real x. Taking x = 0 yields a = 0. When x = π/2, we obtain b = 0. Finally, substituting x = π/4 results in c = 0. The set is linearly independent.
(d)
From the identity cos 2x = cos^2 x - sin^2 x we have (1)(cos 2x) + (1)(sin^2 x) + (-1)(cos^2 x) = 0 for all real x. Therefore, the set is linearly dependent.
(e)
Since (3 - x)^2 = 9 - 6x + x^2, we can write (3 - x)^2 - (x^2 - 6x) - 9 = 0, or
(1)(3 - x)^2 + (-1)(x^2 - 6x) + (-9/5)(5) = 0. The set is linearly dependent.
(f)
From Theorem 4.4.2(a), this set is linearly dependent.
17.
The Wronskian is
W(x) = det[x cos x; 1 -sin x] = -x sin x - cos x.
Since W(x) is not identically 0 on (-∞, ∞) (e.g., W(0) = -1 ≠ 0), the functions x and cos x are linearly independent.
18.
The Wronskian is
W(x) = det[sin x cos x; cos x -sin x] = -sin^2 x - cos^2 x = -1.
Since W(x) is not identically 0 on (-∞, ∞), sin x and cos x are linearly independent.
19.
(a)
The Wronskian is
W(x) = det[1 x e^x; 0 1 e^x; 0 0 e^x] = e^x.
Since W(x) is not identically 0 on (-∞, ∞) (e.g., W(0) = 1 ≠ 0), the functions 1, x, and e^x are linearly independent.
(b)
The Wronskian is
W(x) = det[1 x x^2; 0 1 2x; 0 0 2] = 2.
Since W(x) is not identically 0 on (-∞, ∞), the functions 1, x, and x^2 are linearly independent.
20.
The Wronskian is
W(x) = det[e^x, xe^x, x^2 e^x; e^x, e^x + xe^x, 2xe^x + x^2 e^x; e^x, 2e^x + xe^x, 2e^x + 4xe^x + x^2 e^x]
= e^{3x} det[1, x, x^2; 1, 1 + x, 2x + x^2; 1, 2 + x, 2 + 4x + x^2]   (a common factor of e^x was taken out of each row)
= e^{3x} det[1, x, x^2; 0, 1, 2x; 0, 2, 2 + 4x]   (-1 times the first row was added to the second row and to the third row)
= e^{3x} (1) det[1, 2x; 2, 2 + 4x]   (cofactor expansion along the first column)
= e^{3x}(2 + 4x - 4x) = 2e^{3x}
Since W(x) is not identically 0 on (-∞, ∞), f1(x), f2(x), and f3(x) are linearly independent.
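Wronskian computations like this one are easy to mechanize; here is a SymPy sketch (an illustration only, not part of the manual's method) that rebuilds the 3 x 3 Wronskian from the derivatives:

import sympy as sp

x = sp.symbols('x')
f = [sp.exp(x), x*sp.exp(x), x**2*sp.exp(x)]
# Row k holds the k-th derivatives of the three functions.
W = sp.Matrix([[sp.diff(fn, x, k) for fn in f] for k in range(3)])
print(sp.simplify(W.det()))   # 2*exp(3*x), not identically zero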
21.
The Wronskian is
W(x) = det[sin x, cos x, x cos x; cos x, -sin x, cos x - x sin x; -sin x, -cos x, -2 sin x - x cos x]
= det[sin x, cos x, x cos x; cos x, -sin x, cos x - x sin x; 0, 0, -2 sin x]   (the first row was added to the third)
= (-2 sin x) det[sin x, cos x; cos x, -sin x]   (cofactor expansion along the third row)
= (-2 sin x)(-sin^2 x - cos^2 x)
= (-2 sin x)(-1) = 2 sin x
Since W(x) is not identically 0 on (-∞, ∞), f1(x), f2(x), and f3(x) are linearly independent.
True-False Exercises
(a)
False. By part (b) of Theorem 4.4.2, a set containing a single nonzero vector is linearly independent.
(b)
True. This follows directly from Definition 1.
(c)
False. For instance {(1, 1), (2, 2)} is a linearly dependent set that does not contain (0, 0).
(d)
True. If av1 + bv2 + cv3 = 0 has only the solution a = b = c = 0, then
a(kv1) + b(kv2) + c(kv3) = k(av1 + bv2 + cv3) can only equal 0 when a = b = c = 0 as well.
(e)
True. Since the vectors must be nonzero, {v1} must be linearly independent.
Let us begin adding vectors to the set until the set {v1, ..., vk} becomes linearly dependent; by construction, {v1, ..., v_{k-1}} is linearly independent. The equation c1v1 + ... + c_{k-1}v_{k-1} + ckvk = 0 must have a solution with ck ≠ 0, therefore vk = -(c1/ck)v1 - ... - (c_{k-1}/ck)v_{k-1}. Let us assume there exists another representation vk = d1v1 + ... + d_{k-1}v_{k-1}. Subtracting both sides yields
0 = (d1 + c1/ck)v1 + ... + (d_{k-1} + c_{k-1}/ck)v_{k-1}. By linear independence of {v1, ..., v_{k-1}}, we must have d1 = -c1/ck, ..., d_{k-1} = -c_{k-1}/ck, which shows that vk is a unique linear combination of v1, ..., v_{k-1}.
(f)
False. The set {[1 1; 0 0], [0 0; 1 1], [1 0; 1 0], [0 1; 0 1], [1 0; 0 1], [0 1; 1 0]} is linearly dependent since
[1 1; 0 0] = (-1)[0 0; 1 1] + (1)[1 0; 1 0] + (1)[0 1; 0 1].
(g)
True. Requiring that a(x - 1)(x - 2) + bx(x - 2) + cx(x - 1) = 0 holds for all x values implies that the equality must be true for any specific x value. Setting x = 0 yields a = 0. Likewise, x = 1 implies b = 0, and x = 2 implies c = 0. Since a = b = c = 0 is required, we conclude that the three given polynomials are linearly independent.
(h)
False. The functions f1 and f2 are linearly dependent if there exist scalars k1 and k2, not both equal to 0, such that k1 f1(x) + k2 f2(x) = 0 for all real numbers x.
4.5 Coordinates and Basis
1.
Vectors (2, 1) and (3, 0) are linearly independent if the vector equation
c1(2, 1) + c2(3, 0) = (0, 0)
has only the trivial solution. For these vectors to span R2, it must be possible to express every vector b = (b1, b2) in R2 as
c1(2, 1) + c2(3, 0) = (b1, b2)
These two equations can be rewritten as the linear systems
2c1 + 3c2 = 0
1c1 + 0c2 = 0
and
2c1 + 3c2 = b1
1c1 + 0c2 = b2
Since the coefficient matrix of both systems has determinant det[2 3; 1 0] = -3 ≠ 0, it follows from parts (b), (e), and (g) of Theorem 2.3.8 that the homogeneous system has only the trivial solution and the nonhomogeneous system is consistent for all real values b1 and b2. Therefore the vectors (2, 1) and (3, 0) are linearly independent and span R2, so that they form a basis for R2.
2.
Vectors (3, 1, -4), (2, 5, 6), and (1, 4, 8) are linearly independent if the vector equation
c1(3, 1, -4) + c2(2, 5, 6) + c3(1, 4, 8) = (0, 0, 0)
has only the trivial solution. For these vectors to span R3, it must be possible to express every vector b = (b1, b2, b3) in R3 as
c1(3, 1, -4) + c2(2, 5, 6) + c3(1, 4, 8) = (b1, b2, b3)
These two equations can be rewritten as the linear systems
3c1 + 2c2 + 1c3 = 0
1c1 + 5c2 + 4c3 = 0
-4c1 + 6c2 + 8c3 = 0
and
3c1 + 2c2 + 1c3 = b1
1c1 + 5c2 + 4c3 = b2
-4c1 + 6c2 + 8c3 = b3
Since the coefficient matrix of both systems has determinant det[3 2 1; 1 5 4; -4 6 8] = 26 ≠ 0, it follows from parts (b), (e), and (g) of Theorem 2.3.8 that the homogeneous system has only the trivial solution and the nonhomogeneous system is consistent for all real values b1, b2, and b3. Therefore the vectors (3, 1, -4), (2, 5, 6), and (1, 4, 8) are linearly independent and span R3, so that they form a basis for R3.
3.
Polynomials x^2 + 1, x^2 - 1, and 2x + 1 are linearly independent if the equation
c1(x^2 + 1) + c2(x^2 - 1) + c3(2x + 1) = 0
has only the trivial solution. For these polynomials to span P2, it must be possible to express every polynomial a0 + a1x + a2x^2 as
c1(x^2 + 1) + c2(x^2 - 1) + c3(2x + 1) = a0 + a1x + a2x^2
Grouping the terms on the left hand side of both equations as (c1 - c2 + c3) + (2c3)x + (c1 + c2)x^2, these equations can be rewritten as the linear systems
1c1 - 1c2 + 1c3 = 0
0c1 + 0c2 + 2c3 = 0
1c1 + 1c2 + 0c3 = 0
and
1c1 - 1c2 + 1c3 = a0
0c1 + 0c2 + 2c3 = a1
1c1 + 1c2 + 0c3 = a2
Since the coefficient matrix of both systems has determinant det[1 -1 1; 0 0 2; 1 1 0] = -4 ≠ 0, it follows from parts (b), (e), and (g) of Theorem 2.3.8 that the homogeneous system has only the trivial solution and the nonhomogeneous system is consistent for all real values a0, a1, and a2. Therefore the polynomials x^2 + 1, x^2 - 1, and 2x + 1 are linearly independent and span P2, so that they form a basis for P2.
4.
Polynomials 1 + x, 1 - x, 1 - x^2, and 1 - x^3 are linearly independent if the equation
c1(1 + x) + c2(1 - x) + c3(1 - x^2) + c4(1 - x^3) = 0
has only the trivial solution. For these polynomials to span P3, it must be possible to express every polynomial a0 + a1x + a2x^2 + a3x^3 as
c1(1 + x) + c2(1 - x) + c3(1 - x^2) + c4(1 - x^3) = a0 + a1x + a2x^2 + a3x^3
Grouping the terms on the left hand side of both equations as (c1 + c2 + c3 + c4) + (c1 - c2)x - c3x^2 - c4x^3, these equations can be rewritten as the linear systems
1c1 + 1c2 + 1c3 + 1c4 = 0
1c1 - 1c2 + 0c3 + 0c4 = 0
0c1 + 0c2 - 1c3 + 0c4 = 0
0c1 + 0c2 + 0c3 - 1c4 = 0
and
1c1 + 1c2 + 1c3 + 1c4 = a0
1c1 - 1c2 + 0c3 + 0c4 = a1
0c1 + 0c2 - 1c3 + 0c4 = a2
0c1 + 0c2 + 0c3 - 1c4 = a3
Since the coefficient matrix of both systems has determinant
det[1 1 1 1; 1 -1 0 0; 0 0 -1 0; 0 0 0 -1] = -2 ≠ 0,
it follows from parts (b), (e), and (g) of Theorem 2.3.8 that the homogeneous system has only the trivial solution and the nonhomogeneous system is consistent for all real values a0, a1, a2, and a3. Therefore the polynomials 1 + x, 1 - x, 1 - x^2, and 1 - x^3 are linearly independent and span P3, so that they form a basis for P3.
5.
 1 0
3 6   0 1  0 8 
Matrices 
, and 
, 
, 
 are linearly independent if the equation



 1 2 
3 6   1 0   12 4 
3 6 
 0 1
 0 8 
 1 0  0 0 
c1 
 c2 
 c3 
 c4 





3 6 
 1 0 
 12 4 
 1 2  0 0 
has only the trivial solution. For these matrices to span M 22 , it must be possible to express every matrix
 a11
a
 21
a12 
as
a22 
3 6 
 0 1
 0 8 
 1 0   a11
c1 
 c2 
 c3 
 c4 




3 6 
 1 0 
 12 4 
 1 2   a21
a12 
a22 
Equating corresponding entries on both sides yields linear systems
3c1
 0c2

6c1
3c1
6c1
 1c2
 1c2
 0c2
 8c3
 12c3
 4c3
0c3

1c4
 0c4
 1c4
 2c4
 0
3c1
 0c2

 0
and
 0
 0
6c1
3c1
6c1
 1c2
 1c2
 0c2
 8c3
 12c3
 4c3
 0c4
 1c4
 2c4
0
0
1
6 1
8
0
3
Since the coefficient matrix of both systems has determinant
0c3

3 1 12 1
6
0
4
1c4

a11
 a12
 a21
 a22
 48  0 , it follows from
2
parts (b), (e), and (g) of Theorem 2.3.8 that the homogeneous system has only the trivial solution and the
4.6 Dimension
47
nonhomogeneous system is consistent for all real values a11 , a12 , a21 , and a22 . Therefore the matrices
3 6   0 1  0 8 
 1 0
3 6  ,  1 0  ,  12 4  , and  1 2  are linearly independent and span M 22 so that they form a

 
 



basis for M 22 .
6.
 1 0
1 1  1 1 0 1
Matrices 
, 
, 
, and 
 are linearly independent if the equation



0 0 
1 1 0 0   1 0 
1 1
 1 1
0 1
 1 0  0 0 
c1 
 c2 
 c3 
 c4 





1 1
0 0 
 1 0
0 0  0 0 
has only the trivial solution. For these matrices to span M 22 , it must be possible to express every matrix
 a11
a
 21
a12 
as
a22 
1 1
 1 1
0 1
 1 0   a11
c1 
 c2 
 c3 
 c4 




1 1
0 0 
 1 0
0 0   a21
a12 
a22 
Equating corresponding entries on both sides in each equation yields linear systems
1c1
1c1
1c1
1c1
 1c2
 1c2
 0c2
 0c2
 0c3
 1c3
 1c3
 0c3
 1c4
 0 c4
 0 c4
 0 c4




0
0
0
0
and
1c1
1c1
1c1
1c1
 1c2
 1c2
 0c2
 0c2
 0c3
 1c3
 1c3
 0c3
 1c4
 0 c4
 0 c4
 0 c4
1
1
1
Since the coefficient matrix of both systems has determinant
0
1 1 1 0
1
0
1 0
1
0
0 0
 a11
 a12
 a21
 a22
 1  0 , it follows from parts
(b), (e), and (g) of Theorem 2.3.8 that the homogeneous system has only the trivial solution and the
nonhomogeneous system is consistent for all real values a11 , a12 , a21 , and a22 . Therefore the matrices
1 1  1 1 0 1
 1 0
1 1 , 0 0  ,  1 0  , and 0 0  are linearly independent and span M 22 so that they form a basis

 
 



for M 22 .
7.
(a)
Vectors (2, -3, 1), (4, 1, 1), and (0, -7, 1) are linearly independent if the vector equation
c1(2, -3, 1) + c2(4, 1, 1) + c3(0, -7, 1) = (0, 0, 0)
has only the trivial solution. This equation can be rewritten as the linear system
2c1 + 4c2 + 0c3 = 0
-3c1 + 1c2 - 7c3 = 0
1c1 + 1c2 + 1c3 = 0
Since the determinant of the coefficient matrix of this system is det[2 4 0; -3 1 -7; 1 1 1] = 0, it follows from parts (b) and (g) of Theorem 2.3.8 that the homogeneous system has nontrivial solutions. Since the vectors (2, -3, 1), (4, 1, 1), and (0, -7, 1) are linearly dependent, they do not form a basis for R3.
(b)
Vectors (1, 6, 4), (2, 4, -1), and (-1, 2, 5) are linearly independent if the vector equation
c1(1, 6, 4) + c2(2, 4, -1) + c3(-1, 2, 5) = (0, 0, 0)
has only the trivial solution. This equation can be rewritten as the linear system
1c1 + 2c2 - 1c3 = 0
6c1 + 4c2 + 2c3 = 0
4c1 - 1c2 + 5c3 = 0
Since the determinant of the coefficient matrix of this system is det[1 2 -1; 6 4 2; 4 -1 5] = 0, it follows from parts (b) and (g) of Theorem 2.3.8 that the homogeneous system has nontrivial solutions. Since the vectors (1, 6, 4), (2, 4, -1), and (-1, 2, 5) are linearly dependent, they do not form a basis for R3.
8.
Vectors p1  1  3 x  2 x 2 , p2  1  x  4 x 2 , and p3  1  7x are linearly independent if the vector equation
c1p1  c2 p2  c3 p3  0 has only the trivial solution.
By grouping the terms on the left hand side as c1 1  3 x  2 x 2   c2 1  x  4 x 2   c3 1  7 x  
 c1  c2  c3    3c1  c2  7c3  x   2c1  4c2  x 2 this equation can be rewritten as the linear system
c1
3c1
2c1

c2
 c2
 4c2

c3
 0
 7c3
 0
 0
1 1 1
The coefficient matrix of this system has determinant 3 1 7  0 , thus it follows from
2 4 0
parts (b) and (g) of Theorem 2.3.8 that the homogeneous system has nontrivial solutions. Since the vectors
p1 , p 2 , and p3 are linearly dependent, we conclude that they do not form a basis for P2 .
9.
0 1
1 0  2 2  1 1
Matrices 
, 
, 
, and 
 are linearly independent if the equation



1 1   3 2  1 0 
 1 1
1 0 
2 2 
1 1
0 1 0 0 
c1 
 c2 
 c3 
 c4 





1 1 
3 2
1 0 
 1 1 0 0 
4.6 Dimension
49
has only the trivial solution. Equating corresponding entries on both sides yields a linear system
1c1
0c1
1c1
1c1
 2c2
 2c2
 3c2
 2c2


1c3
1c3
 1c3
 0c3
 0 c4
 1c4
 1c4
 1c4
 0
 0
 0
 0
1
Since the determinant of the coefficient matrix of this system is
2
1
0
0 2 1 1
1
3
1
1
1
2
0
1
 0 , it follows from parts
(b) and (g) of Theorem 2.3.8 that the homogeneous system has nontrivial solutions. Since the matrices
0 1
1 0  2 2  1 1
1 1  ,  3 2  , 1 0  , and  1 1 are linearly dependent, we conclude that they do not form a basis



 
 

for M 22 .
10.
(a)
The identity cos2 x  sin 2 x  cos2 x implies that v1 , v 2 , v 3  is linearly dependent, therefore it is not a
basis for V .
(b)
For the equation c1 cos2 x  c2 sin 2 x  0 to hold for all real x values, we must have c1  0 (required
when x  0 ) and c2  0 (required when x  2 ). Therefore the vectors v1  cos2 x and v 2  sin 2 x are
linearly independent.
Any vector v in V can be expressed as v  k1 cos2 x  k2 sin 2 x  k3 cos2 x . However, from the
identity cos2 x  sin 2 x  cos2 x it follows that we can express v as a linear combination of cos2 x
and sin 2 x alone: v  k1 cos2 x  k2 sin 2 x  k3  cos2 x  sin 2 x    k1  k3  cos2 x   k2  k3  sin 2 x . This
proves that the vectors v1  cos2 x and v 2  sin 2 x span V .
We conclude that v1  cos2 x and v 2  sin 2 x form a basis for V .
(Note that v1 , v 3  and v 2 , v 3  are also bases for V .)
11.
(a)
Expressing w as a linear combination of u 1 and u 2 we obtain
1,1  c1  2, 4   c2  3, 8 
Equating corresponding components on both sides yields the linear system
2c1
 3c2
 1
4c1
 8c2
 1
1 0
whose augmented matrix has the reduced row echelon form 
0 1
5
28
3
14

 . The solution of the linear

system is c1  285 , c2  143 , therefore the coordinate vector is  w S   285 , 143  .
50
Chapter 4: General Vector Spaces
(b)
Expressing w as a linear combination of u 1 and u 2 we obtain
 a, b   c1 1,1  c2  0, 2 
Equating corresponding components on both sides yields the linear system
1c1
1c1
 0c2
 2c2
 a
 b
1 0
whose augmented matrix has the reduced row echelon form 
0 1
a
 . The solution of the linear

ba
2
system is c1  a , c2  b 2 a , therefore the coordinate vector is  w S   a, b 2 a  .
12.
(a)
Expressing w as a linear combination of u 1 and u 2 we obtain
1,0   c1 1, 1  c2 1,1
Equating corresponding components on both sides yields the linear system
c1
 c2

c1
 c2
 0
1
1 0
whose augmented matrix has the reduced row echelon form 
0 1
1
2
1
2

 . The solution of the linear

system is c1  12 , c2  12 , therefore the coordinate vector is  w S   12 , 12  .
(b)
Expressing w as a linear combination of u 1 and u 2 we obtain
 0,1  c1 1, 1  c2 1,1
Equating corresponding components on both sides yields the linear system
c1
c1
 c2
 c2
 0
 1
 1 0  12 
. The solution of the linear
whose augmented matrix has the reduced row echelon form 
1
2
0 1
system is c1   12 , c2  12 , therefore the coordinate vector is  w S    12 , 12  .
13.
(a)
Expressing v as a linear combination of v1 , v 2 , and v 3 we obtain
 2, 1, 3   c1 1, 0, 0   c2  2, 2, 0   c3  3, 3, 3
Equating corresponding components on both sides yields the linear system
4.6 Dimension
c1
 2c2
 3c3

2c2
 3c3
3c3
 1
 3
51
2
which can be solved by back-substitution to obtain c3  1 , c2  2 , and c1  3 . The coordinate vector
is  v S   3, 2,1  .
(b)
Expressing v as a linear combination of v1 , v 2 , and v 3 we obtain
 5, 12, 3   c1 1, 2, 3  c2  4, 5, 6   c3  7, 8, 9 
Equating corresponding components on both sides yields the linear system
1c1
 4c2
 7c3

2c1
3c1
 5c2
 6c2
 8c3
 9c3
 12

3
5
 1 0 0 2 


whose augmented matrix has the reduced row echelon form 0 1 0 0  . The solution of the
0 0 1 1
linear system is c1  2 , c2  0 , and c3  1 . The coordinate vector is  v S   2, 0,1  .
14.
(a)
Since p  4 p1   3  p2  1p3 we conclude that the coordinate vector is  p S   4, 3,1 .
(b)
Expressing p as a linear combination of p1 , p 2 , and p3 we obtain



2  x  x 2  c1 1  x   c2 1  x 2  c3 x  x 2

Grouping the terms on the right hand side according to powers of x yields
2  x  x 2   c1  c2    c1  c3  x   c2  c3  x 2
For this equality to hold for all real x , the coefficients associated with the same power of x on both
sides must match. This leads to the linear system
c1
 c2
c1
c2

 c3
 c3
2
 1
 1
 1 0 0 0


whose augmented matrix has the reduced row echelon form 0 1 0 2  . The solution is c1  0 ,
0 0 1 1
c2  2 , c3  1 , therefore the coordinate vector is  p S   0,2, 1 .
15.
Matrices (vectors in M 22 ) A1 , A2 , A3 , and A4 are linearly independent if the equation
52
Chapter 4: General Vector Spaces
k1 A1  k2 A2  k3 A3  k4 A4  0
has only the trivial solution. For these matrices to span M 22 , it must be possible to express every matrix
a b 
B
 as
c d 
k1 A1  k2 A2  k3 A3  k4 A4  B
k1
k1  k2


The left hand side of each of these equations is the matrix 
 . Equating
 k1  k2  k3 k1  k2  k3  k4 
corresponding entries, these two equations can be rewritten as linear systems
k1
k1
 k2
k1
k1
 k2
 k2
 k3
 k3
 k4
 0
 0
 0
 0
and
k1
k1
 k2
k1
k1
 k2
 k2
 a
 b
 k3
 k3
 k4
 c
 d
1 0 0 0
Since the coefficient matrix of both systems has determinant
1 1 0 0
1 1 1 0
 1  0 , it follows from parts (b),
1 1 1 1
(e), and (g) of Theorem 2.3.8 that the homogeneous system has only the trivial solution and the
nonhomogeneous system is consistent for all real values a , b, c and d . Therefore the matrices A1 , A2 ,
A3 , and A4 are linearly independent and span M 22 so that they form a basis for M 22 .
1 0 
To express A  
 as a linear combination of the matrices A1 , A2 , A3 , and A4 , we form the
1 0 
nonhomogeneous system as above, with the appropriate right hand side values

k1
k1
 k2
k1
k1
 k2
 k2
 k3
 k3
 k4
1
 0
 1
 0
which can be solved by forward-substitution to obtain k1  1 , k2  1 , k3  1 , k4  1 .
This allows us to express A  1A1  1A2  1A3  1A4 .
The coordinate vector is  A S  1, 1,1, 1 .
16.
Matrices (vectors in M 22 ) A1 , A2 , A3 , and A4 are linearly independent if the equation
k1 A1  k2 A2  k3 A3  k4 A4  0
4.6 Dimension
has only the trivial solution. For these matrices to span M 22 , it must be possible to express every matrix
a b 
B
 as
c d 
k1 A1  k2 A2  k3 A3  k4 A4  B
k  k  k
The left hand side of each of these equations is the matrix  1 2 3
 k1  k4
entries, these two equations can be rewritten as linear systems
k1
 k2
k2
 k3
 0
 0
 k4
k1
 0
k1
and
 k2
k2
k2 
. Equating corresponding
k3 
 k3
 k4
k1
 0
k3
 a
 b

c
 d
k3
1 1 1 0
Since the coefficient matrix of both systems has determinant
0 1 0 0
1 0 0 1
 1  0 , it follows from parts
0 0 1 0
(b), (e), and (g) of Theorem 2.3.8 that the homogeneous system has only the trivial solution and the
nonhomogeneous system is consistent for all real values a , b, c and d . Therefore the matrices A1 , A2 ,
A3 , and A4 are linearly independent and span M 22 so that they form a basis for M 22 .
6 2 
To express A  
 as a linear combination of the matrices A1 , A2 , A3 , and A4 , we form the
5 3
nonhomogeneous system as above, with the appropriate right hand side values
k1
 k2
k2
 k3
 k4
k1
k3




6
2
5
3
1
0
The augmented matrix of this system has the reduced row echelon form 
0

0
0 0 0 1
1 0 0 2 
therefore the
0 1 0 3

0 0 1 4
solution is k1  1 , k2  2 , k3  3 , k4  4 .
This allows us to express A  1A1  2 A2  3 A3  4 A4 . The coordinate vector is  A S  1, 2, 3, 4  .
17.
Vectors p1 , p 2 , and p3 are linearly independent if the vector equation
c1p1  c2 p2  c3 p3  0
53
54
Chapter 4: General Vector Spaces
has only the trivial solution. For these vectors to span P2 , it must be possible to express every vector
p  a0  a1 x  a2 x 2 in P2 as
c1p1  c2 p2  c3 p3  p
Grouping the terms on the left hand sides as c1 1  x  x 2   c2  x  x 2   c3 x 2  c1   c1  c2  x 
 c1  c2  c3  x 2 these two equations can be rewritten as linear systems
 0
c1
c1
c1
 c2
 c2
 c3
 0
 0
 a0
c1
and
 c2
 c2
c1
c1
 c3
 a1
 a2
1 0 0
Since the coefficient matrix of both systems has determinant 1 1 0  1  0 , it follows from
1 1 1
parts (b), (e), and (g) of Theorem 2.3.8 that the homogeneous system has only the trivial solution and the
nonhomogeneous system is consistent for all real values a0 , a1 , and a2 . Therefore the vectors p1 , p 2 , and
p3 are linearly independent and span P2 so that they form a basis for P2 .
To express p  7  x  2 x 2 as a linear combination of the vectors p1 , p 2 , and p3 , we form the
nonhomogeneous system as above, with the appropriate right hand side values

c1
c1
c1
 c2
 c2
 c3
7
 1
 2
which can be solved by forward-substitution to obtain c1  7 , c2  8 , c3  3 .
This allows us to express p  7p1  8p2  3p3 . The coordinate vector is  p S   7, 8, 3  .
18.
Vectors p1 , p 2 , and p3 are linearly independent if the vector equation
c1p1  c2 p2  c3 p3  0
has only the trivial solution. For these vectors to span P2 , it must be possible to express every vector
p  a0  a1 x  a2 x 2 in P2 as
c1p1  c2 p2  c3 p3  p
Grouping the terms on the left hand sides as c1 1  2 x  x 2   c2  2  9 x   c3  3  3 x  4 x 2  
 c1  2c2  3c3    2c1  9c2  3c3  x   c1  4c3  x 2 these two equations can be rewritten as linear systems
4.6 Dimension
c1
 2c2
 3c3
 0
2c1
c1
 9c2
 3c3
 4c3
 0
 0
and
c1
 2c2

3c3
 a0
2c1
c1
 9c2
 3c3
 4c3
 a1
 a2
55
1 2 3
Since the coefficient matrix of both systems has determinant 2 9 3  1  0 , it follows from
1 0 4
parts (b), (e), and (g) of Theorem 2.3.8 that the homogeneous system has only the trivial solution and the
nonhomogeneous system is consistent for all real values a0 , a1 , and a2 . Therefore the vectors p1 , p 2 , and
p3 are linearly independent and span P2 so that they form a basis for P2 .
To express p  2  17 x  3 x 2 as a linear combination of the vectors p1 , p 2 , and p3 , we form the
nonhomogeneous system as above, with the appropriate right hand side values
c1
 2c2
 3c3

2c1
c1
 9c2
 3c3
 4c3
 17
 3
2
 1 0 0 1


The augmented matrix of this system has the reduced row echelon form 0 1 0 2  therefore the
0 0 1 1
solution is c1  1 , c2  2 , c3  1 . This allows us to express p  1p1  2 p2   1 p3 .
The coordinate vector is  p S  1, 2, 1 .
19.
(a)
The third vector is a sum of the first two. This makes the set linearly dependent, hence it cannot be a
basis for R2 .
(b)
The two vectors generate a plane in R3 , but they do not span all of R3 . Consequently, the set is not a
basis for R3 .
(c)
For instance, the polynomial p  1 cannot be expressed as a linear combination of the given two
polynomials. This means these two polynomials do not span P2 , hence they do not form a basis for
P2 .
(d)
20.
0 1 
For instance, the matrix 
 cannot be expressed as a linear combination of the given four
0 0 
matrices. This means these four matrices do not span M 22 , hence they do not form a basis for M 22 .
If the set contains at least two vectors, then the zero vector can be expressed as a scalar product of any other
vector in the set and zero scalar. According to Definition 1 in Section 4.3, this makes the set linearly
dependent.
A set with only one vector is linearly dependent if and only if the vector is a zero vector (see the margin
note next to Definition 1 in Section 4.3).
56
Chapter 4: General Vector Spaces
21.
(a)
We have TA 1,0,0   1,0, 1 , TA  0,1,0   1,1,2  , and TA  0,0,1  1, 3,0  . The vector equation
k1 1,0, 1  k2 1,1,2   k3 1, 3,0    0, 0, 0 
can be rewritten as a homogeneous linear system
1k1

0k1
1k1
 1k2
 2k2
1k2

1k3
 0
 3k3
 0 k3
 0
 0
The determinant of the coefficient matrix of this system is det  A   10  0 , therefore by Theorem
2.3.8, the system has only the trivial solution. We conclude that the set TA  e1  , TA  e 2  , TA  e 3  is
linearly independent.
(b)
We have TA 1,0,0   1,0, 1 , TA  0,1,0   1,1,2  , and TA  0,0,1   2,1,1 . By inspection,
 2,1,1  1,0, 1  1,1,2 
We conclude that the set TA  e1  , TA  e 2  , TA  e 3  is linearly dependent.
22.
(a)
Expressing TA  u    4, 2, 0  as a linear combination of the vectors in S we obtain
 4, 2, 0   c1 1,1, 0   c2  0,1,1  c3 1,1,1
Equating corresponding components on both sides yields the linear system
1c1
 0c2
 1c3

1c1
0c1


 1c3
 1c3
 2
 0
1c2
1c2
4
 1 0 0 2 


whose augmented matrix has the reduced row echelon form 0 1 0 6  .
0 0 1 6 
The solution of the linear system is c1  2 , c2  6 , and c3  6 .
The coordinate vector is  TA  u  S   2, 6, 6  .
(b)
Expressing TA  u    2, 0, 1 as a linear combination of the vectors in S we obtain
 2, 0, 1  c1 1,1, 0   c2  0,1,1  c3 1,1,1
Equating corresponding components on both sides yields the linear system
1c1
 0c2
 1c3
 2
1c1
0c1


 1c3
 1c3


1c2
1c2
0
1
4.6 Dimension
1
1 0 0


whose augmented matrix has the reduced row echelon form 0 1 0 2  .
0 0 1 3
The solution of the linear system is c1  1 , c2  2 , and c3  3 .
The coordinate vector is  TA  u  S   1, 2, 3  .
23.
We have u1   cos30,sin 30  
(a)
 ,  and u   0,1 .
3
2
1
2
2
By inspection, we can express w 
 3,1 as a linear combination of u and u
1
2
 3,1  2  ,   0  0,1
3
2
1
2
therefore the coordinate vector is  w S   2,0  .
(b)
Expressing w 
 3,1 as a linear combination of u and u we obtain
1
2
1,0   c1  23 , 12   c2  0,1
Equating corresponding components on both sides yields the linear system
3
2
1
2

c1
c1
 c2
1
 0
The first equation yields c1  23 , then the second equation can be solved to obtain c2   13 . The
coordinate vector is  w S 
(c)
 ,  .
2
3
1
3
By inspection, we can express w   0,1 as a linear combination of u 1 and u 2
 0,1  0  23 , 12   1 0,1
therefore the coordinate vector is  w S   0,1 .
(d)
Expressing w   a, b  as a linear combination of u 1 and u 2 we obtain
 a, b   c1  23 , 12   c2  0,1
Equating corresponding components on both sides yields the linear system
3
2
1
2
 a
c1
c1
 c2
 b
57
58
Chapter 4: General Vector Spaces
The first equation yields c1  2 a3 , then the second equation can be solved to obtain c2  b  a3 . The
coordinate vector is  w S 
 ,b   .
2a
3
a
3
24.
(a)
 0, 2  ;
 1, 2  ;
25.
(a)
Polynomials 1 , 2t , 2  4t 2 , and 12t  8t 3 are linearly independent if the equation
1,0  ; (c)
(b)

 a  b, 2b 
(d)



c1 1  c2  2t   c3 2  4t 2  c4 12t  8t 3  0
has only the trivial solution. For these polynomials to span P3 , it must be possible to express every
polynomial a0  a1t  a2 t 2  a3 t 3 as




c1 1  c2  2t   c3 2  4t 2  c4 12t  8t 3  a0  a1t  a2 t 2  a3 t 3
Grouping the terms on the left hand side of both equations as
 c1  2c3    2c2  12c4  t  4c3t 2  8c4 t 3 these equations can be rewritten as linear systems
1c1
0c1
0c1
0c1




0c2
2c2
0c2
0c2




2c3
0c3
4c3
0c3
 0c4
 12c4
 0c4
 8c4




0
0
0
0
1c1
0c1
0c1
0c1
and




0c2
2c2
0c2
0c2




1 0 2
Since the coefficient matrix of both systems has determinant
 0c4
 12c4
 0c4
 8c4
2c3
0c3
4c3
0c3
 a0
 a1
 a2
 a3
0
0 2
0 12
0 0
4
0
0 0
0
8
 64  0 , it follows
from parts (b), (e), and (g) of Theorem 2.3.8 that the homogeneous system has only the trivial
solution and the nonhomogeneous system is consistent for all real values a0 , a1 , a2 , and a3 .
Therefore the polynomials 1 , 2t , 2  4t 2 , and 12t  8t 3 are linearly independent and span P3 so
that they form a basis for P3 .
(b)
To express p  1  4t  8t 2  8t 3 as a linear combination of the four vectors in B , we form the
nonhomogeneous system as was done in part (a), with the appropriate right hand side values
1c1
0c1
0c1
0c1
 0c2
 2c3
 2c2
 0c2
 0c2
 0c3
 4c3
 0c3

0c4
 12c4
 0c4
 8c4

 4
 8
 8
Back-substitution yields c4  1 , c3  2 , c2  4 , and c1  3 .
The coordinate vector is  p  B   3, 4, 2,1 .
1
4.6 Dimension
26.
(b)
 p  B   2, 8, 0,1
27.
(a)
w  6  3,1, 4   1 2, 5, 6   4 1, 4, 8    20,17, 2 
(b)
q  3 x 2  1  0 x 2  1  4  2 x  1  3 x 2  8 x  1
(c)
3 6 
 0 1
 0 8 
 1 0   21 103
B  8 
 7
6
 3




30 
3 6 
 1 0 
 12 4 
 1 2   106

 
59

True-False Exercises
(a)
False. The set must also be linearly independent.
(b)
False. The subset must also span V .
(c)
True. This follows from Theorem 4.5.1.
(d)
True. For any vector v   a1 ,, an  in R n , we have v  a1e1    an e n therefore the coordinate vector of
v with respect to the standard basis S  e1 ,, e n  is  v S   a1 ,, an   v .
(e)
False. For instance, 1  t 4 , t  t 4 , t 2  t 4 , t 3  t 4 , t 4  is a basis for P4 .
4.6 Dimension
1.
 1 1 1 0 


The augmented matrix of the linear system  2 1 2 0  has the reduced row echelon form
 1 0 1 0 
 1 0 1 0 
0 1 0 0  . The general solution is
x1  t , x2  0 , x3  t . In vector form


0 0 0 0 
 x1 , x2 , x3    t, 0, t   t 1, 0,1
therefore the solution space is spanned by a vector v1  1, 0,1 . This vector is nonzero, therefore it forms a
linearly independent set (Theorem 4.4.2(b)). We conclude that v1 forms a basis for the solution space and
that the dimension of the solution space is 1 .
2.
3 1 1 1 0 
The augmented matrix of the linear system 
 has the reduced row echelon form
5 1 1 1 0 
1 0

0 1
1
4
1
4
0 0
1
1
 . The general solution is x1   4 s , x2   4 s  t , x3  s , x4  t . In vector form
1 0
60
Chapter 4: General Vector Spaces
 x1 , x2 , x3 , x4     14 s,  14 s  t, s, t   s   14 ,  14 ,1,0   t  0, 1,0,1
therefore the solution space is spanned by vectors v1    14 ,  14 ,1,0  and v 2   0, 1,0,1 . These vectors are
linearly independent since neither of them is a scalar multiple of the other (Theorem 4.4.2(c)). We conclude
that v1 and v 2 form a basis for the solution space and that the dimension of the solution space is 2 .
3.
2 1 3 0 


The augmented matrix of the linear system  1 0 5 0  has the reduced row echelon form
0 1 1 0 
1 0 0 0 
0 1 0 0  . The only solution is x  x  x  0 .
1
2
3


0 0 1 0 
The solution space has no basis - its dimension is 0.
4.
 1 4 3 1 0 
The augmented matrix of the linear system 
 has the reduced row echelon form
2 8 6 2 0 
 1 4 3 1 0 
0 0 0 0 0  . The general solution is x1  4r  3s  t , x2  r , x3  s , x4  t . In vector form


 x1 , x2 , x3 , x4    4r  3s  t, r, s, t   r  4,1, 0, 0   s  3, 0,1,0   t 1, 0, 0,1
therefore the solution space is spanned by vectors v1   4,1, 0, 0  , v 2   3, 0,1, 0  , and v 3  1, 0, 0,1 . By
inspection, these vectors are linearly independent since rv1  sv 2  tv3  0 implies r  s  t  0 . We
conclude that v1 , v 2 , and v 3 form a basis for the solution space and that the dimension of the solution
space is 3 .
5.
1 3 1 0 


The augmented matrix of the linear system 2 6 2 0  has the reduced row echelon form
 3 9 3 0 
1 3 1 0 
0 0 0 0 

 . The general solution is x1  3s  t , x2  s , x3  t . In vector form
0 0 0 0 
 x1 , x2 , x3    3s  t, s, t   s  3,1,0   t  1,0,1
therefore the solution space is spanned by vectors v1   3,1,0  and v 2   1,0,1 . These vectors are
linearly independent since neither of them is a scalar multiple of the other (Theorem 4.4.2(c)). We conclude
that v1 and v 2 form a basis for the solution space and that the dimension of the solution space is 2 .
4.6 Dimension
6.
1
3
The augmented matrix of the linear system 
4

6
1
0

0

0
61
1 0
2 2 0 
has the reduced row echelon form
3 1 0 

5
1 0
1
0 4 0 
1 5 0 
. The general solution is x  4t , y  5t , z  t . In vector form
0
0 0

0
0 0
 x, y, z    4t, 5t, t   t  4, 5,1
therefore the solution space is spanned by vector v1   4, 5,1 . By Theorem 4.4.2(b), this vector forms a
linearly independent set since it is not the zero vector. We conclude that v1 forms a basis for the solution
space and that the dimension of the solution space is 1 .
7.
(a)
If we let y  s and z  t be arbitrary values, we can solve the plane equation for x : x  23 s  35 t .
Expressing the solution in vector form  x, y, z    23 s  35 t , s, t   s  32 ,1,0   t   35 ,0,1 . By Theorem
4.4.2(c),  23 ,1,0  ,   35 ,0,1 is linearly independent since neither vector in the set is a scalar multiple
of the other. A basis for the subspace is  23 ,1,0  ,   35 ,0,1 . The dimension of the subspace is 2 .
(b)
If we let y  s and z  t be arbitrary values, we can solve the plane equation for x : x  s .
Expressing the solution in vector form  x, y, z    s, s, t   s 1,1,0   t  0,0,1 . By Theorem 4.4.2(c),
1,1,0  ,  0,0,1 is linearly independent since neither vector in the set is a scalar multiple of the
other. A basis for the subspace is 1,1,0  ,  0,0,1 . The dimension of the subspace is 2 .
(c)
In vector form,  x, y, z    2t , t ,4t   t  2, 1, 4  . By Theorem 4.4.2(b), the vector  2, 1,4  forms a
linearly independent set since it is not the zero vector. A basis for the subspace is  2, 1, 4  . The
dimension of the subspace is 1 .
(d)
The subspace contains all vectors  a, a  c, c   a 1,1, 0   c  0,1,1 thus we can express it as as
span  S  where S  1,1, 0  ,  0,1,1 . By Theorem 4.4.2(c), S is linearly independent since neither
vector in the set is a scalar multiple of the other. Consequently, S forms a basis for the given
subspace. The dimension of the subspace is 2 .
8.
(a)
The given subspace can be expressed as span  S  where S  1,0,0,0  ,  0,1,0,0  ,  0,0,1,0  is a set of
linearly independent vectors. Therefore S forms a basis for the subspace, so its dimension is 3 .
(b)
The subspace contains all vectors  a, b, a  b, a  b   a 1,0,1,1  b  0,1,1, 1 thus we can express it
as span  S  where S  1,0,1,1 ,  0,1,1, 1 . By Theorem 4.4.2(c), S is linearly independent since
62
Chapter 4: General Vector Spaces
neither vector in the set is a scalar multiple of the other. Consequently, S forms a basis for the given
subspace. The dimension of the subspace is 2 .
(c)
The subspace contains all vectors  a, a, a, a   a 1,1,1,1 thus we can express it as as span  S  where
S  1,1,1,1 . By Theorem 4.4.2(b), S is linearly independent since it contains a single nonzero
vector. Consequently, S forms a basis for the given subspace. The dimension of the subspace is 1 .
9.
(a)
Let W be the space of all diagonal n  n matrices. We can write
 d1
0



0
0
1 0  0 
0 0  0 
0 0  0 





0 0  0 
0
0 0  0
0 1  0



d
d
   dn 
    
   1      2     







 dn 
0 0  0
0 0  0
0 0  1






0 
d2 

0
A1
A2
An
The matrices A1 ,..., An are linearly independent and they span W ; hence, A1 ,..., An form a basis for
W . Consequently, the dimension of W is n .
(b)
A basis for this space can be constructed by including the n matrices A1 ,..., An from part (a), as well
as  n  1   n  2     3  2  1 
n  n 1
2
matrices Bij (for all i  j ) where all entries are 0 except for
the  i, j  and  j, i  entries, which are both 1.
For instance, for n  3 , such a basis would be:
 1 0 0  0 0 0  0 0 0  0 1 0  0 0 1 0 0 0 
 0 0 0  , 0 1 0  , 0 0 0  ,  1 0 0  , 0 0 0  , 0 0 1

 
 
 
 
 

 0 0 0  0 0 0  0 0 1 0 0 0   1 0 0  0 1 0 
     
A1
A2
The dimension is n 
(c)
n  n 1
2
A3

n  n 1
2
B12
B13
B23
.
A basis for this space can be constructed by including the n matrices A1 ,..., An from part (a), as well
as  n  1   n  2     3  2  1 
n  n 1
2
matrices Cij (for all i  j ) where all entries are 0 except for
the  i, j  entry, which is 1.
For instance, for n  3 , such a basis would be:
 1 0 0  0 0 0  0 0 0  0 1 0  0 0 1 0 0 0 
 0 0 0  , 0 1 0  , 0 0 0  , 0 0 0  , 0 0 0  , 0 0 1

 
 
 
 
 

 0 0 0  0 0 0  0 0 1 0 0 0  0 0 0  0 0 0 
     
A1
The dimension is n 
n  n 1
2
A2

n  n 1
2
A3
.
C12
C13
C23
4.6 Dimension
10.
63
The given subspace can be expressed as span  S  where S   x, x 2 , x 3  is a set of linearly independent
vectors in P3 . Therefore S forms a basis for the subspace. The dimension of the subspace is 3 .
11.
(a)
W is the set of all polynomials a0  a1 x  a2 x 2 for which a0  a1  a2  0 , i.e. all polynomials that
can be expressed in the form a1  a2  a1 x  a2 x 2 .
Adding two polynomials in W results in another polynomial in W
  a  a  a x  a x    b  b  b x  b x 
2
1
2
1
2
2
1
2
1
2
   a1  a2  b1  b2    a1  b1  x   a2  b2  x 2
since we have   a1  a2  b1  b2    a1  b1    a2  b2   0 .
Likewise, a scalar multiple of a polynomial in W is also in W


k  a1  a2  a1 x  a2 x 2   ka1  ka2  ka1 x  ka2 x 2
since it meets the condition   ka1  ka2    ka1    ka2   0 .
According to Theorem 4.2.1, W is a subspace of P2 .
(c)
From part (a), an arbitrary polynomial in W can be expressed in the form

 a1  a2  a1 x  a2 x 2  a1  1  x   a2 1  x 2

therefore, the polynomials 1  x and 1  x 2 span W . Also, a1  1  x   a2  1  x 2   0 implies
a1  a2  0 , so 1  x and 1  x 2 are linearly independent, hence they form a basis for W . The
dimension of W is 2.
12.
(a)
Either 1,0,0  or  0,1,0  can be used since neither is in span v1 , v 2 
1
(e.g., with 1,0,0  , linear independence can be easily shown calculating
1 1
2 2 0  2  0 then
3 2 0
using parts (b) and (g) of Theorem 2.3.8; the set forms a basis by Theorem 4.6.4)
(b)
Any of the three standard basis vector for R3 can be used since none of them is in span v1 , v 2 
1 3 1
1 0  2  0 then
(e.g., with 1,0,0  , linear independence can be easily shown calculating 1
0 2 0
using parts (b) and (g) of Theorem 2.3.8; the set forms a basis by Theorem 4.6.4)
13.
The equation k1v1  k2 v 2  k3 e1  k4 e 2  k5 e 3  k6 e 4  0 can be rewritten as a linear system
 3k2
k1
4 k1
 8 k2
2 k1
 4 k2
3k1
 6 k2
 k3
 0
 k4
 0
 k5
 0
 k6
 0
64
Chapter 4: General Vector Spaces
1
0
whose augmented matrix has the reduced row echelon form 
0

0
0 2 0 0
1 0 
1 1 0 0  13 0 
.
0
0 1 0  43 0 

2
0
0 0 1
0
3
Based on the leading entries in the first, second, fourth, and fifth columns, the vector equation
k1v1  k2 v 2  k4 e 2  k5e 3  0 has only the trivial solution (the corresponding augmented matrix has the
1
0
reduced row echelon form 
0

0
0 0 0 0
1 0 0 0 
). Therefore the vectors v1 , v 2 , e 2 , and e 3 are linearly
0 1 0 0

0 0 1 0
independent. Since dim  R 4   4 , it follows by Theorem 4.6.4 that the vectors v1 , v 2 , e 2 , and e 3 form a
basis for R 4 . (The answer is not unique.)
14.
The equation c1u1  c2 u 2  c3 u3  0 implies c1v1  c2  v1  v 2   c3  v1  v 2  v 3   0 , i.e.,
 c1  c2  c3  v1   c2  c3  v 2  c3 v3  0 , which by linear independence of v1 , v 2 , v 3  requires that
c1  c2  c3  0
c2  c3  0
c3  0
Solving this system by back-substitution yields c1  c2  c3  0 therefore u1 , u 2 , u 3  is linearly
independent. Since the dimension of V is 3 (as its basis v1 , v 2 , v 3  contains three vectors), by
Theorem 4.6.4 u1 , u 2 , u 3  must also be a basis for V .
15.
The equation k1v1  k2 v 2  k3e1  k4 e 2  k5e 3  0 can be rewritten as a linear system
 k3
k1
2 k1
 5k2
3k1
 3k2
 0
 k4
 0
 k5
 0
1
1 0 0
3

1
whose augmented matrix has the reduced row echelon form  0 1 0
3
 0 0 1  13

5
9
2
9
5
9
0

0 .
0 
Based on the leading entries in the first three columns, the vector equation k1v1  k2 v 2  k3e1  0 has only the
 1 0 0 0


trivial solution (the corresponding augmented matrix has the reduced row echelon form 0 1 0 0  ).
0 0 1 0 
4.6 Dimension
65
Therefore the vectors v1 , v 2 , and e1 are linearly independent. Since dim  R 3   3 , it follows by
Theorem 4.6.4 that the vectors v1 , v 2 , and e1 form a basis for R3 . (The answer is not unique.)
16.
One of the infinitely many ways to enlarge the given set to a basis for R 4 is by adding the vectors  0,0,1,0 
and  0,0,0,1 to the set. Since the resulting set contains dim  R 4   4 vectors, by Theorem 4.6.4 we only
need to establish the linear independence of the set to be able to conclude that it forms a basis for R 4 . The
homogeneous equation k1 1,0,0,0   k2 1,1,0,0   k3  0,0,1,0   k4  0,0,0,1   0,0,0,0  can be rewritten as
1
0
a linear system whose coefficient matrix 
0

0
1 0 0
1 0 0 
has determinant 1. Using parts (b) and (g) of
0 1 0

0 0 1
Theorem 2.3.8, we conclude that there is only the trivial solution, therefore the enlarged set of four vectors
is linearly independent (and, consequently, forms a basis for R 4 ).
17.
The equation k1v1  k2 v 2  k3 v 3  k4 v 4  0 can be rewritten as a linear system
1k1

1k2
 2 k3
 0 k4
 0
0k1
 0 k2
 0 k3
 0 k4
 0
0k1



 0
1k2
1k3
1k4
 1 0 1 1 0


whose augmented matrix has the reduced row echelon form 0 1 1 1 0  .
0 0 0 0 0 
For arbitrary values of s and t , we have k1   s  t , k2   s  t , x3  s , k4  t .
Letting s  1 and t  0 allows us to express v 3 as a linear combination of v1 and v 2 : v 3  v1  v 2 .
Letting s  0 and t  1 allows us to express v 4 as a linear combination of v1 and v 2 : v 4  v1  v 2 .
By part (b) of Theorem 4.6.3, span v1 , v 2   span v1 , v 2 , v 3 , v 4  .
Based on the leading entries in the first two columns, the vector equation k1v1  k2 v 2  0 has only the trivial
 1 0 0


solution (the corresponding augmented matrix has the reduced row echelon form 0 1 0  ). Therefore
0 0 0 
the vectors v1 and v 2 are linearly independent. We conclude that the vectors v1 and v 2 form a basis for
span v1 , v 2 , v 3 , v 4  . (The answer is not unique.)
18.
The equation k1v1  k2 v 2  k3 v 3  k4 v 4  0 can be rewritten as a linear system
66
Chapter 4: General Vector Spaces
1k1
1k1
1k1
1k1
 2 k2
 2 k2
 2 k2
 0 k2
 0 k3
 0 k3
 0 k3
 3k3
 3k 4
 3k 4
 3k 4
 4 k4
1
0
whose augmented matrix has the reduced row echelon form 
0

0
 0
 0
 0
 0
4 0
 12 0 
1 
.
0
0
0 0

0
0
0 0
0
3
3
2
For arbitrary values of s and t , we have k1  3s  4t , k2  23 s  21 t , k3  s , k4  t .
Letting s  1 and t  0 allows us to express v 3 as a linear combination of v1 and v 2 : v3  3v1  23 v 2 .
Letting s  0 and t  1 allows us to express v 4 as a linear combination of v1 and v 2 : v 4  4v1  12 v 2 .
By part (b) of Theorem 4.6.3, span v1 , v 2   span v1 , v 2 , v 3 , v 4  .
Based on the leading entries in the first two columns, the vector equation k1v1  k2 v 2  0 has only the trivial
1
0
solution (the corresponding augmented matrix has the reduced row echelon form 
0

0
0 0
1 0 
). Therefore
0 0

0 0
the vectors v1 and v 2 are linearly independent. We conclude that the vectors v1 and v 2 form a basis for
span v1 , v 2 , v 3 , v 4  . (The answer is not unique.)
19.
The space of all vectors x   x1 , x2 , x3  for which TA  x   0 is the solution space of Ax  0 .
(a)
1
1 0


The reduced row echelon form of A is 0 1 1 so x1  t , x2  t , x3  t . In vector form,
0 0 0 
 x1 , x2 , x3    t, t, t   t  1,1,1 . Since  1,1,1 is a basis for the space, the dimension is 1.
(b)
 1 2 0


The reduced row echelon form of A is 0 0 0  so x1  2 s , x2  s , x3  t . In vector form,
0 0 0 
 x1 , x2 , x3    2s, s, t   s  2,1,0   t  0,0,1 . Since  2,1,0  ,  0,0,1 is a basis for the space, the
dimension is 2.
(c)
 1 0 0


The reduced row echelon form of A is 0 1 1 so x1  0 , x2   t , x3  t . In vector form,
0 0 0 
 x1 , x2 , x3    0, t, t   t  0, 1,1 . Since  0, 1,1 is a basis for the space, the dimension is 1.
4.6 Dimension
20.
67
The space of all vectors x   x1 , x2 , x3 , x4  for which TA  x   0 is the solution space of Ax  0 .
(a)
1 0 2 1 
The reduced row echelon form of A is 
so x1  2 s  t , x2   12 s  14 t , x3  s ,
1
1

0
1

2
4
x4  t . In vector form,
 x1 , x2 , x3 , x4    2s  t,  12 s  14 t, s, t   s  2,  12 ,1,0   t 1, 14 ,0,1 .
Since  2,  12 ,1,0  , 1, 14 ,0,1 is a basis for the space, the dimension is 2.
(b)
 1 0 0 1


The reduced row echelon form of A is 0 1 0 1 so x1  x2  x3  t , x4  t . In vector form,
0 0 1 1
 x1 , x2 , x3 , x4    t, t, t, t   t  1, 1, 1,1 .
Since  1, 1, 1,1 is a basis for the space, the dimension is 1.
27.
In parts (a) and (b), we will use the results of Exercises 18 and 19 by working with coordinate vectors with
respect to the standard basis for P2 , S  1, x, x 2  .
(a)
Denote v1  1  x  2 x 2 , v 2  3  3 x  6 x 2 , v3  9 .
Then  v1 S   1,1, 2  ,  v 2 S   3,3,6  ,  v 3 S   9,0,0  .
Setting k1  v1 S  k2  v 2 S  k3  v 3 S  0 we obtain a linear system with augmented matrix
1 0 0 0 
 1 3 9 0 
 1 3 0 0  whose reduced row echelon form is 0 1 0 0  . Since there is only the trivial




0 0 1 0 
 2 6 0 0 
solution, it follows that the three coordinate vectors are linearly independent, and, by the result of
Exercise 22, so are the vectors v1 , v 2 , and v 3 . Because the number of these vector matches
dim  P2   3 , from Theorem 4.6.4 the vectors v1 , v 2 , and v 3 form a basis for P2 .
(b)
Denote v1  1  x , v 2  x 2 , v 3  2  2 x  3 x 2 .
Then  v1 S  1,1,0  ,  v 2 S   0,0,1 ,  v 3 S   2, 2, 3  .
Setting k1  v1 S  k2  v 2 S  k3  v 3 S  0 we obtain a linear system with augmented matrix
 1 0 2 0
1 0 2 0 
 1 0 2 0  whose reduced row echelon form is 0 1 3 0  .




0 0 0 0 
0 1 3 0 
This yields solutions k1  2 t , k2  3t , k3  t . Taking t  1 , we can express  v 3 S as a linear
combination of  v1 S and  v 2 S :  v 3 S  2  v1 S  3  v 2 S - the same relationship holds true for the
vectors themselves: v 3  2v1  3v 2 . By part (b) of Theorem 4.6.3,
span v1 , v 2   span v1 , v 2 , v 3  .
68
Chapter 4: General Vector Spaces
Based on the leading entries in the first two columns, the vector equation
 1 0 0


k1  v1 S  k2  v 2 S  0 has only the trivial solution (the corresponding augmented matrix  1 0 0 
0 1 0 
1 0 0 


has the reduced row echelon form 0 1 0  ). Therefore the coordinate vectors  v1 S and  v 2 S are
0 0 0 
linearly independent and, by the result of Exercise 18, so are the vectors v1 and v 2 .
We conclude that the vectors v1 and v 2 form a basis for span v1 , v 2 , v 3  .
(c)

 

Clearly, 1  x  3 x 2  12 2  2 x  6 x 2  13 3  3 x  9 x 2 therefore from Theorem 4.6.3(b), the
subspace is spanned by 1  x  3 x 2 . By Theorem 4.4.2(b), a set containing a single nonzero vector is
linearly independent.
We conclude that 1  x  3 x 2 forms a basis for this subspace of P2 .
True-False Exercises
(a)
True.
(b)
True. For instance, e1 , , e17 .
(c)
False. This follows from Theorem 4.6.2(b).
(d)
True. This follows from Theorem 4.6.4.
(e)
True. This follows from Theorem 4.6.4.
(f)
True. This follows from Theorem 4.6.5(a).
(g)
True. This follows from Theorem 4.6.5(b).
(h)
 1 0   1 0  0 1   0 1
True. For instance, invertible matrices 
, 
, 
, 
 form a basis for M 22 .
0 1  0 1  1 0   1 0 
(i)
True. The set has n2  1 matrices, which exceeds dim  M nn   n 2 .
(j)
False. This follows from Theorem 4.6.6(c).
(k)
False. For instance, for any constant c , span  x  c, x 2  c 2  is a two-dimensional subspace of P2
consisting of all polynomials in P2 for which p  c   0 . Clearly, there are infinitely many different
subspaces of this type.
4.7 Change of Basis
4.7 Change of Basis
1.
(a)
69
In this part, B is the start basis and B is the end basis:
2
4 1 1 


end basis | start basis   2 1 3 1 
The reduced row echelon form of this matrix is
1 0
13

5
 12 

0 
 I | transition from start to end   0 1 102
 13
The transition matrix is PB B   102
 5
(b)
 12 
.
0
In this part, B is the start basis and B is the end basis:
 1 1 2
4
end basis | start basis   3 1 2 1 


The reduced row echelon form of this matrix is
1 0 0
5
 I | transition from start to end   0 1 2  132 

2

 0  25 
The transition matrix is PB B  
.
13 
 2  2 
(c)
Expressing w as a linear combination of u 1 and u 2 we obtain
 3
2 
 4
 5  c1 2   c2  1
 
 
 
Equating corresponding components on both sides yields the linear system
2c1
2c1
 4c2
 c2
 3
 5
 1 0  17
10 
whose augmented matrix has the reduced row echelon form 
. The solution of the linear
8
5
0 1
  17 
17
, c2  85 , therefore the coordinate vector is  w B   108  .
system is c1   10
 5
17
  4 
 0  25    10
 .
Using Formula (12),  w B  PB B  w B  
8
13  
 2  2   5   7 
(d)
Expressing w as a linear combination of u1 and u 2 we obtain
70
Chapter 4: General Vector Spaces
 3
1 
 1
 5  c1 3  c2  1
 
 
 
Equating corresponding components on both sides yields the linear system
c1
 c2

3c1
 c2
 5
3
 1 0 4 
whose augmented matrix has the reduced row echelon form 
 . The solution of the linear
0 1 7 
 4 
system is c1  4 , c2  7 , therefore the coordinate vector is  w B    . This matches the result
 7 
obtained in part (c).
2.
(a)
In this part, B is the start basis and B is the end basis:
1 0 2 3
   I | transition from start to end 
4

end basis | start basis  0 1 1
 2 3
No row operations were necessary to obtain the transition matrix PB B  
.
 1 4
(b)
In this part, B is the start basis and B is the end basis:
2 3 1 0 

4 0 1

end basis | start basis   1
The reduced row echelon form of this matrix is
1 0
4

11
 I | transition from start to end   0 1  111
 4
The transition matrix is PB B   111
  11
3
11
2
11
3
11
2
11




.

(c)
 114
 3
Clearly,  w B    . Using Formula (12),  w B  PB B  w B   1
 5
  11
(d)
Expressing w as a linear combination of u1 and u 2 we obtain
3
11
2
11
  3   113 
     13  .
  5   11 
 3
2 
 3
 5  c1 1   c2  4 
 
 
 
Equating corresponding components on both sides yields the linear system
2c1
 3c2

c1
 4c2
 5
3
4.7 Change of Basis
71
 1 0  113 
whose augmented matrix has the reduced row echelon form 
. The solution of the linear
13 
0 1  11 
  113 
3
13

c


c


w
, 2
, therefore the coordinate vector is  B  13  .
system is 1
11
11
  11 
This matches the result obtained in part (c).
3.
(a)
In this part, B is the start basis and B is the end basis:
 3
1 1 2 2 1

end basis | start basis   1 1 0 1 1 2 
 5 3 2 1 1 1


The reduced row echelon form of this matrix is
5
1 0 0 3 2

2

1 
 I | transition from start to end   0 1 0 2 3  2 
0 0 1 5
1
6 

5
 3 2
2 


The transition matrix is PB B   2 3  12  .
 5
1
6 
(b)
Expressing w as a linear combination of u 1 , u 2 , and u 3 we obtain
 5
2 
 2
 1
 8   c  1  c  1  c 2 
  1  2  3 
 5
 1
 1
 1
Equating corresponding components on both sides yields the linear system
2c1
 2c2

c1
c1

 2c3


 5

c2
c2
c3
c3
 5
8
 1 0 0 9


whose augmented matrix has the reduced row echelon form 0 1 0 9  . The solution of the
0 0 1 5
 9
 
linear system is c1  9 , c2  9 , c3  5 therefore the coordinate vector is  w B   9  .
 5
5
9    27 
 3 2
2 


 
Using Formula (12),  w B  PB  B  w B   2 3  12   9    232  .
 5
1
6   5  6 
72
Chapter 4: General Vector Spaces
(c)
Expressing w as a linear combination of u1 , u 2 and u3 we obtain
 5
 3
 1
 1
 8   c  1  c  1  c  0 
  1  2  3 
 5
 5
 3
 2 
Equating corresponding components on both sides yields the linear system
3c1

c1
5c1

c2
c2
 3c2

c3
 5

 2c3
8
 5
 1 0 0  27 


whose augmented matrix has the reduced row echelon form  0 1 0 232  .
 0 0 1
6 
The solution of the linear system is c1   27 , c2  232 , c3  6 therefore the coordinate vector is
  72 
 w B   232  , which matches the result we obtained in part (b).
 6 
4.
(a)
In this part, B is the start basis and B  is the end basis:
 6 2 2 3 3 1
end basis | start basis  6 6 3 0 2 6 
 0
4
7 3 1 1

The reduced row echelon form of this matrix is
3
 1 0 0 34
4

 I | transition from start to end    0 1 0  34  1712
2
0 0 1 0
3

3
 34
4

17
The transition matrix is PB B    34  12
2
 0
3
(b)

1
12
17
12
2
3

1
12
17
12
2
3







.

Expressing w as a linear combination of u 1 , u 2 , and u 3 we obtain
 5
 3
 3
 1
 8  c  0   c  2   c  6 
  1  2  3 
 5
 3
 1
 1
Equating corresponding components on both sides yields the linear system
4.7 Change of Basis
3c1
3c1
 3c2


c3
 5
2c2
 6c3

c2

 5
c3
73
8
 1 0 0 1


whose augmented matrix has the reduced row echelon form  0 1 0 1 . The solution of the linear
 0 0 1 1
1
 
system is c1  1 , c2  1 , c3  1 therefore the coordinate vector is  w B  1 .
1
3
 34
4
 3
17
Using Formula (12),  w B  PB  B  w B    4  12
2
 0
3
(c)

1
12
17
12
2
3
19
 1  12

    43 
 1    12  .
 1  43 
Expressing w as a linear combination of u1 , u 2 and u 3 we obtain
 5
 6 
 2 
 2 
 8   c  6   c  6   c  3
  1  2  3 
 5
 0 
 4 
 7 
Equating corresponding components on both sides yields the linear system
6c1
 2c2
 2c3
 5
6c1
 6c2
 3c3

4c2
 7c3
 5
8
19
1 0 0
12 

43 
whose augmented matrix has the reduced row echelon form  0 1 0  12
.
4
 0 0 1

3 
19
43
The solution of the linear system is c1  12
, c2   12
, c3  43 therefore the coordinate vector is
19
 12

 43 
 wB    12  , which matches the result we obtained in part (b).
 43 
5.
(a)
The set f1 , f2  is linearly independent since neither vector is a scalar multiple of the other. Thus
f1 , f2  is a basis for V and dim V   2 .
Likewise, the set g1 , g 2  of vectors in V is linearly independent since neither vector is a scalar
multiple of the other. By Theorem 4.6.4, g1 , g 2  is a basis for V .
(b)
a
2 
0 
Clearly,  g1 B    and  g 2 B    hence PB B   g1 B |  g 2 B    1
3
1 
 a2
b1  2 0 

.
b2  1 3 
74
Chapter 4: General Vector Spaces
(c)
We find the two columns of the transitions matrix PB B   f1 B |  f2 B 
f1  a1g1  a2 g 2
f2  b1g1  b2 g 2
sin x  a1  2sin x  cos x   a2  3cos x 
cos x  b1  2sin x  cos x   b2  3cos x 
equate the coefficients corresponding to the same function on both sides of each equation

2 a1
a1
 3a2
1
2b1
 0
b1
 0
 3b2

1
reduced row echelon form of the augmented matrix of each system
1 0 0 
0 1 1 
3 

1
1 0
2 

1 
0 1  6 
a
We obtain the transition matrix PB B   f1 B |  f2 B    1
 a2
b1   12

b2    61
0
.
1
3
(An alternate way to solve this part is to use Theorem 4.7.1 to yield
1
1
B  B
PB B  P
(d)
 2 0  
 3 0  1  3 0   12
1



 
 1
 2  3  0 1 
 1 2  6  1 2    6
 1 3  
 2
Clearly, the coordinate vector is  h B    .
 5
 1
Using Formula (12), we obtain  h B  PB B  h B   21
6
(e)
0
.)
1
3
0  2  1 
   2  .
1
 
3   5 
By inspection, 2sin x  5cos x   2sin x  cos x   2  3cos x  , hence the coordinate vector is
 1
 pB   2  , which matches the result obtained in part (d).

6.
(a)

We find the two columns of the transitions matrix PB B   q1 B | q 2 B 
q1  a1 p1  a2 p 2
q 2  b1 p1  b2 p 2
2  a1  6  3 x   a2 10  2 x 
3  2 x  b1  6  3 x   b2 10  2 x 
equate the coefficients corresponding to like powers of x on both sides of each equation
6 a1
3a1
 10 a2
 2 a2
 2
 0
6b1
3b1
 10b2
 2b2
reduced row echelon form of the augmented matrix of each system
 3
 2
4.7 Change of Basis
 1 0  29 

1
3
0 1
7
1 0
9

1
0 1  6 
a
We obtain the transition matrix PB B   q1 B |  q 2 B    1
 a2
(b)
75
b1    29

b2   13

.
 
7
9
1
6
We find the two columns of the transitions matrix PB B   p1 B |  p2 B 
p1  a1q1  a2 q 2
p 2  b1q1  b2 q 2
6  3 x  a1  2   a2  3  2 x 
10  2 x  b1  2   b2  3  2 x 
equate the coefficients corresponding to like powers of x on both sides of each equation
2 a1
 3a2
 6
2 a2
 3
 3b2
2b1
2b2
 10

2
reduced row echelon form of the augmented matrix of each system
1 0

0 1
3
4
3
2
1 0 27 


0 1 1 



a
We obtain the transition matrix PB B   p1 B |  p2 B    1
 a2
(c)

.
1
7
2
 1
Since 4  x   6  3 x   10  2 x  , the coordinate vector is  pB    .
 1
3
Using Formula (12), we obtain  pB  PB B  pB   43
2
(d)
b1   34

b2   23
  1   114 
    1  .
1  1  2 
7
2
c 
We are looking for the coordinate vector  pB   1  with c1 and c2 satisfying the equality
c2 
4  x  c1  2   c2  3  2 x 
for all real values x . Equating the coefficients associated with like powers of x on both sides yields
the linear system
2c1
 3c2
2c2
 4

1
which can easily be solved by back-substitution: c2  12 , c1 
  11 
 pB   41  , which matches the result obtained in part (c).

2

 
4  3 12
2
  114 . We conclude that
76
Chapter 4: General Vector Spaces
7.
(a)
In this part, B2 is the start basis and B1 is the end basis:
1 2 1 1
end basis | start basis   2 3 3 4  .


The reduced row echelon form of this matrix is
1 0 3
5


 I | transition from start to end  0 1 1 2  .
5
 3
The transition matrix is PB2  B1  
.
 1 2 
(b)
In this part, B1 is the start basis and B2 is the end basis:
1 1 1 2 
end basis | start basis   3 4 2 3  .


The reduced row echelon form of this matrix is
1 0 2
5


 I | transition from start to end  0 1 1 3  .
5
 2
The transition matrix is PB1  B2  
.
 1 3 
(c)
(d)
5 2
5 1 0 
5  3
5  1 0 
 2
 3

Since 
and 



 it follows that PB2  B1 and




 1 3  1 2  0 1 
 1 2   1 3 0 1 
PB1  B2 are inverses of one another.
Expressing w as a linear combination of u 1 and u 2 we obtain
0 
1 
2 
 1  c1 2   c2  3
 
 
 
Equating corresponding components on both sides yields the linear system
c1
2c1
 2c2
 3c2
 0
 1
1 0 2 
whose augmented matrix has the reduced row echelon form 
 . The solution of the linear
 0 1 1 
 2
system is c1  2 , c2  1 , therefore the coordinate vector is  w B    .
1
 1 
5  2   1
 2
From Formula (12),  w B  PB1  B2  w B  
    .
2
1
 1 3  1  1
4.7 Change of Basis
(e)
77
Expressing w as a linear combination of v1 and v 2 we obtain
2 
1 
 1
 5  c1 3  c2  4 
 
 
 
Equating corresponding components on both sides yields the linear system
1c1

1c2
 2
3c1
 4c2
 5
1 0 3
whose augmented matrix has the reduced row echelon form 
 . The solution of the linear
 0 1 1 
 3
system is c1  3 , c2  1 , therefore the coordinate vector is  w B    .
2
 1 
5  3   4 
 3
From Formula (12),  w B  PB2  B1  w B  
     .
1
2
 1 2   1  1
8.
(a)
2 3
By Theorem 4.7.2, PBS  
.
 1 4
(b)
 2 3 1 0 
In this part, S is the start basis and B is the end basis:  end basis | start basis  
.
 1 4 0 1
The reduced row echelon form of this matrix is
1 0
4

11
 I | transition from start to end   0 1  111
 4
The transition matrix is PS  B   111
  11
(c)
(d)
3
11
2
11
3
11
2
11

.


.

 4 3  2 3 1 0 
2 3  114

Since  111 112  
and
 

 1 4  1

  11
  11 11   1 4  0 1 
are inverses of one another.
3
11
2
11
 1 0 

 it follows that PBS and PS  B
 0 1 
 1
Since  5, 3    2,1   3,4  the coordinate vector is  w B    .
 1
2 3  1  5
From Formula (12),  w S  PB S  w B  
     .
 1 4   1  3
(e)
 4
 3
By inspection,  w S    . From Formula (12),  w B  PS  B  w S   111
 5
  11
3
11
2
11
  3   113 
     13  .
  5   11 
78
9.
Chapter 4: General Vector Spaces
(a)
 1 2 3


By Theorem 4.7.2, PBS  2 5 3 .
1 0 8 
(b)
In this part, S is the start basis and B is the end basis:
1 2 3 1 0 0
end basis | start basis   2 5 3 0 1 0  .
1 0 8 0 0 1


The reduced row echelon form of this matrix is
 1 0 0 40 16 9 
 I | transition from start to end   0 1 0 13 5 3  .
 0 0 1 5 2 1 


 40 16 9 


The transition matrix is PS  B   13 5 3  .
 5 2 1 
(c)
 40 16 9  1 2 3 1 0 0 
1 2 3  40 16 9  1 0 0 


 







Since  13 5 3  2 5 3  0 1 0  and 2 5 3  13 5 3   0 1 0  it
1 0 8   5 2 1  0 0 1 
 5 2 1  1 0 8  0 0 1 
follows that PBS and PS  B are inverses of one another.
(d)
Expressing w as a linear combination of v1 , v 2 , and v 3 we obtain
 5
 1
2 
3 
 3  c 2   c  5  c 3
  1  2  3 
 1
 1
0 
8 
Equating corresponding components on both sides yields the linear system
c1
 2c2
 3c3

2c1
 5c2
 3c3
 3
 8c3

c1
5
1
 1 0 0 239 

77  . The solution of the
whose augmented matrix has the reduced row echelon form 0 1 0
0 0 1
30 
linear system is c1  239 , c2  77 , c3  30 therefore the coordinate vector is
 239
1 2 3  239   5


 wB   77 . From Formula (12),  w S  PBS  wB  2 5 3  77  3 .
1 0 8   30   1
 30 
4.7 Change of Basis
(e)
79
 3
 
By inspection,  w S   5 .
 0 
 40 16 9   3  200 

  

From Formula (12),  w B  PS  B  w S   13 5 3   5   64  .
 5 2 1   0   25
10.
1 
0 
Reflecting e1    about the line y  x results in v1    .
0 
1 
0 
1 
Likewise for e 2    we obtain v 2    .
1 
0 
(a)
0 1 
From Theorem 4.7.5, PBS  
.
1 0 
(b)
0 1 
1
Denoting P  
 , it follows from Theorem 4.7.5 that PS  B  P . In our case, PP  I therefore
1
0


P  P 1 . Furthermore, since P is symmetric, we also have PS  B  P T .
11.
(a)
Clearly, v1   cos  2  ,sin  2   . Referring to the figure on the right, we see that the angle between
the positive x -axis and v 2 is 2  2  2     2  2 . Hence,
v 2   cos  2  2  ,sin  2  2     sin  2  ,  cos  2  
cos  2 
sin  2  
From Theorem 4.7.5, PB  S  
.
 sin  2   cos  2  
(b)
cos  2 
sin  2  
1
Denoting P  
 , it follows from Theorem 4.7.5 that PS  B  P . In our case,
 sin  2   cos  2  
PP  I therefore P  P 1 . Furthermore, since P is symmetric, we also have PS  B  P T .
80
Chapter 4: General Vector Spaces
12.
3 1 
7 2
Since for every vector v in R2 we have  v B  
v B and  v B  


  v B2 , it follows that
2
1
3
5 2 
 4 1
7
2  3 1 


31 11
31 11
v B so that PB1  B3  


.
1
2
 7 2

 v B   4 1 5 2   v B   7
3

1
 2
From Theorem 4.7.1, PB3  B1 is the inverse of this matrix:  157
 15
13.

.
 
11
15
31
15
Since for every vector v we have  v B  P  v B and  v C  Q  v B , it follows that  v C  QP  v B so that
PBC  QP . From Theorem 4.7.1, PC  B   QP   P 1Q 1 .
1
15.
(a)
By Theorem 4.7.2, P is the transition matrix from B  1,1,0  , 1,0,2  ,  0,2,1 to S .
(b)
 45

By Theorem 4.7.1, P 1   15
  25

1
5
1
5
2
5
 25 
2
is the transition matrix from B to S , hence by
5
1
5
Theorem 4.7.2, B   45 , 15 ,  25  ,  15 ,  15 , 25  ,   25 , 25 , 15  .
16.
Let the given basis be denoted as B  v1 , v 2 , v 3  with v1  1,1,1 , v 2  1,1,0  , v 3  1,0,0  and denote
the unknown basis as B  u1 , u 2 , u 3  .
1 0 0 


We have PB B   0 3 2    u1 B  u 2 B  u 3 B  . Equating the respective columns yields
 0 1 1 
1 
 u1 B  0   u1  1v1  0v 2  0v 3  1,1,1
0 
0 
 u 2 B  3   u 2  0v1  3v 2  1v 3   4,3,0 
 1 
0 
 u3 B  2   u 3  0v1  2v 2  1v 3   3,2,0 
 1 
Thus the given matrix is the transition matrix from the basis 1,1,1 ,  4,3,0  ,  3,2,0  .
17.
 2 3
From T 1,0    2,5  , T  0,1   3, 1 , and Theorem 4.7.2 we obtain PBS  
.
 5 1
4.7 Change of Basis
18.
81
From T 1,0,0   1,2,0  , T  0,1,0   1, 1,1 , T  0,0,1   0,4,3  , and Theorem 4.7.2 we obtain
 1 1 0
PB S  2 1 4  .
0 1 3
19.
By Formula (10), the transition matrix from the standard basis S  e1 ,, e n  to B is
PS  B   e1 B   e n B    e1  e n   I n therefore B must be the standard basis.
True-False Exercises
(a)
True. The matrix can be constructed according to Formula (10).
(b)
True. This follows from Theorem 4.7.1.
(c)
True.
(d)
True.
(e)
False. For instance, B1   0,2  ,  3,0  is a basis for R2 made up of scalar multiples of vectors in the
0 3 
standard basis B2  1,0  ,  0,1 . However, PB1  B2  
 (obtained by Theorem 4.7.2) is not a diagonal
2 0 
matrix.
(f)
False. A must be invertible.
4.8 Row Space, Column Space, and Null Space
1.
2.
(a)
 2 3 1   2 
3
 1 4   2   1  1  2  4 

   
 
(b)
 4 0 1  2 
4 
 0
 1
 3 6 2   3  2  3   3  6   5  2 

 
 
 
 
 0 1 4   5
 0 
 1
 4 
(a)
 3 6 2 
 3
 6
 2
 5 4 0   1
 5
 4 
 

  2   1    2    5  0 
 2
 2
 3
 1
3 1  

  5
 
 
 
 1 8 3
 1
 8
 3
(b)
 3
 2 1 5  
2 
1 
 5
6 3 8   0   3 6   0 3  5  8 

  5
 
 
 
 
82
3.
Chapter 4: General Vector Spaces
(a)
(b)
 1 0 1 0


The reduced row echelon form of the augmented matrix of the system Ax  b is 0 1 1 0  , thus
0 0 0 1
Ax  b is inconsistent. By Theorem 4.8.1, b is not in the column space of A .
1
1 0 0


The reduced row echelon form of the augmented matrix of the system Ax  b is 0 1 0 3 , so
0 0 1 1
the system has a unique solution x1  1 , x 2  3 , x3  1 . By Theorem 4.8.1, b is in the column space
 1
 1 1  5


     
of A . By Formula (2), we can write 9   3  3  1   1 .
 1
 1 1  1
4.
(a)
(b)
 1 0 0 0


The reduced row echelon form of the augmented matrix of the system Ax  b is  0 1 1 0  ,
 0 0 0 1
thus Ax  b is inconsistent.
By Theorem 4.8.1, b is not in the column space of A .
The reduced row echelon form of the augmented matrix of the system Ax  b is
0 0 0 26 
1 0 0
13
, so the system has a unique solution x1  26 , x2  13 , x3  7 , x 4  4 . By
0 1 0 7 

0 0 1
4
Theorem 4.8.1, b is in the column space of A .
1
0

0

0
 1
2 
0 
 1  4 
0 
 1
2 
 1  3






By Formula (2), we can write 26
 13
7
 4     .
 1
2 
 1
 3  5
 
 
 
   
0 
 1
2 
2   7 
5.
6.
(a)
 x1 
 5
 2   0 
x 
0 
   
 2   r    s  1  t 0 
 x3 
0 
 1  1
 
 
   
0 
 0   1
 x4 
(a)
 x1 
 3 
 4
x 
 1
 
 2   r    s  1
 x3 
 1
 0
 
 
 
 0
 1
 x4 
(b)
 x1   3
 5
 2  0 
x   


   
 2    0   r 0   s  1  t 0 
 x3   1
0 
 1  1
   
 
   
0 
 0   1
 x4   5 
(b)
 x1   1
 3 
 4
x   


 
 2    2   r  1  s  1
 x3   4 
 1
 0
   
 
 
 0
 1
 x 4   3 
4.8 Row Space, Column Space, and Null Space
7.
(a)
 1 3 1
The reduced row echelon form of the augmented matrix of the system Ax  b is 
 . The
0 0 0 
general solution of this system is x1  1  3t , x2  t ; in vector form,
 x1 , x2   1  3t, t   1,0   t  3,1 .
The vector form of the general solution of Ax  0 is  x1 , x2   t  3,1 .
(b)
 1 0 1 2 


The reduced row echelon form of the augmented matrix of the system Ax  b is 0 1 1 7  .
0 0 0 0 
The general solution of this system is x1  2  t , x2  7  t , x3  t ; in vector form,
 x1 , x2 , x3    2  t,7  t, t    2,7,0   t  1, 1,1 .
The vector form of the general solution of Ax  0 is  x1 , x2 , x3   t  1, 1,1 .
8.
(a)
The reduced row echelon form of the augmented matrix of the system Ax  b is
 1 2 1 2 1
0
0 0 0 0 

. The general solution of this system is x1  1  2r  s  2 t , x2  r ,
0
0 0 0 0


0 0 0 0
0
x3  s , x4  t ; in vector form,  x1 , x2 , x3 , x4    1  2r  s  2t , r , s, t    1,0,0,0  
r  2,1,0,0   s  1,0,1,0   t  2,0,0,1 .
The vector form of the general solution of Ax  0 is
 x1 , x2 , x3 , x4   r  2,1,0,0   s  1,0,1,0   t  2,0,0,1 .
(b)
The reduced row echelon form of the augmented matrix of the system Ax  b is
1

0
0

0
0  75
 15
1 
4
5
3
5
0
0
0
0


 . The general solution of this system is x  6  7 s  1 t , x  7  4 s  3 t ,
1
2
5
5
5
5
5
5
0 0

0 0
6
5
7
5
x3  s , x4  t ; in vector form,  x1 , x2 , x3 , x4    65  75 s  15 t , 75  45 s  35 t , s, t    65 , 75 ,0,0  
s  75 , 45 ,1,0   t  15 ,  35 ,0,1 .
The vector form of the general solution of Ax  0 is
 x1 , x2 , x3 , x4   s  75 , 45 ,1,0   t  15 ,  35 ,0,1 .
9.
(a)
 1 0 16 


The reduced row echelon form of A is 0 1 19  . The reduced row echelon form of the
0 0
0 
augmented matrix of the homogeneous system Ax  0 would have an additional column of zeros
83
84
Chapter 4: General Vector Spaces
appended to this matrix. The general solution of the system x1  16t , x2  19t , x3  t can be written
 x1  16 
16 
 




in the vector form  x2   t 19  therefore the vector 19  forms a basis for the null space of A .
 1
 x3   1
A basis for the row space is formed by the nonzero rows of the reduced row echelon form of A :
1 0 16 and 0 1 19 .
(b)
 1 0  12 


The reduced row echelon form of A is 0 0
0  . The reduced row echelon form of the
0 0
0 
augmented matrix of the homogeneous system Ax  0 would have an additional column of zeros
appended to this matrix. The general solution of the system x1  12 t , x2  s , x3  t can be written in
0 
 12 
 x1 
0   12 
 
 






the vector form  x2   s 1   t 0  therefore the vectors  1  and  0  form a basis for the null space
 1 
0 
 x3 
0   1 
of A .
A basis for the row space is formed by the nonzero row of the reduced row echelon form of A :
1 0  12  .
10.
(a)
 1 0 1  27 

4
The reduced row echelon form of A is  0 1 1
. The reduced row echelon form of the
7
 0 0 0
0 
augmented matrix of the homogeneous system Ax  0 would have an additional column of zeros
appended to this matrix. The general solution of the system x1  s  27 t , x2  s  47 t , x3  s , x4  t
 x1 
 1  27 
 27 
 1
x 
 1   4 
 4
 1

can be written in the vector form  2   s    t  7  therefore the vectors   and  7  form a
 x3 
 1  0 
 0
 1
 
 
 
   
 0
 0   1
 1
 x4 
basis for the null space of A .
A basis for the row space is formed by the nonzero rows of the reduced row echelon form of A :
1 0 1  27  and  0 1 1
(b)
4
7
 .
1 0 1 2 1 
0 1 1 1 2 
 . The reduced row echelon form of the
The reduced row echelon form of A is 
0 0 0 0 0 


0 0 0 0 0 
augmented matrix of the homogeneous system Ax  0 would have an additional column of zeros
appended to this matrix. The general solution of the system
4.8 Row Space, Column Space, and Null Space
85
x1  r  2 s  t , x2  r  s  2 t , x3  r , x 4  s , x5  t can be written in the vector form
 x1 
 1  2 
 1
 1
 2   1
x 
 1  1
 2 
 1
 1  2 
 2
   
 
 
   
 x3   r  1  s  0   t  0  therefore the vectors  1 ,  0  , and  0  form a basis for the null
 
   
 
 
   
 x4 
 0   1
 0
 0
 1  0 
 x5 




 1






 0  0
 0
 0   1
 
space of A .
A basis for the row space is formed by the nonzero rows of the reduced row echelon form of A :
1 0 1 2 1 and 0 1 1 1 2 .
11.
We use Theorem 4.8.4 to obtain the following answers.
(a)
1  2 
   
Columns containing leading 1's form a basis for the column space: 0  ,  1  .
0  0 
Nonzero rows form a basis for the row space: 1 0 2  ,  0 0 1 .
(b)
 1   3
0   1
Columns containing leading 1's form a basis for the column space:   ,   .
0   0 
   
0   0 
Nonzero rows form a basis for the row space: 1 3 0 0  ,  0 1 0 0  .
12.
We use Theorem 4.8.4 to obtain the following answers.
(a)
 1   2   4   5
0   1  3  0 
       
Columns containing leading 1's form a basis for the column space: 0  ,  0  ,  1 ,  3 .
       
0   0   0   1
0   0   0   0 
Nonzero rows form a basis for the row space:
1 2 4 5 , 0 1 3 0 , 0 0 1 3 , 0 0 0 1 .
(b)
 1   2   1  5
0   1  4   3
Columns containing leading 1's form a basis for the column space:   ,   ,   ,   .
0   0   1  7 
       
0   0   0   1
Nonzero rows form a basis for the row space:
1 2 1 5 , 0 1 4 3 , 0 0 1 7 , 0 0 0 1 .
86
13.
Chapter 4: General Vector Spaces
(a)
1
0
The reduced row echelon form of A is B  
0

0
0 11 0 3
1 3 0 0 
.
0 0 1 0

0 0 0 0
By Theorems 4.8.3 and 4.8.4, the nonzero rows of B form a basis for the row space of A :
r1  1 0 11 0 3 , r2   0 1 3 0 0  , and r3   0 0 0 1 0  .
By Theorem 4.8.4, columns of B containing leading 1's form a basis for the column space of B :
 1
0 
0 
0 
 1
0 




, c 
, and c 4    . By Theorem 4.8.5(b), a basis for the column space of A is formed
c1 
0  2 0 
 1
 
 
 
0 
0 
0 
 1
 2 
0 
 2 
 5
0 
by the corresponding columns of A : c1    , c 2    , and c 4    .
 1
 3
 1
 
 
 
 3
 8
 1
(b)
We begin by transposing the matrix A .
 1 2 1 3
 1 0 0 0
 2

0 1 0 1
5 3 8



We obtain AT   5 7 2 9  , whose reduced row echelon form is C  0 0 1 1 . By




1 1
 0 0
0 0 0 0 
 3 6 3 9 
0 0 0 0 
Theorem 4.8.4, columns of C containing leading 1's form a basis for the column space of C :
 1
0 
0 
0 
 1
0 
 
 
 
c1   0  , c2  0  , and c3   1 . By Theorem 4.8.5(b), a basis for the column space of AT is
 
 
 
0 
0 
0 
 0 
0 
0 
 1
 2 
 1
 2 
 5
 3
 
 
 
formed by the corresponding columns of AT : c1   5 , c 2   7  , and c 3   2  .
 
 
 
 0
 0
 1
 3
 6 
 3 
Since columns of AT are rows of A , a basis for the row space of A is formed by
r1  1 2 5 0 3 , r2   2 5 7 0 6  , and r3   1 3 2 1 3 .
4.8 Row Space, Column Space, and Null Space
14.
87
 1 2 2
 1 0 1
 . The reduced row echelon
We construct a matrix whose columns are the given vectors: A  
 4
2 3


 3 2 2 
1 0 0 
0 1 0 
 . By Theorem 4.8.4, the three columns of B form a basis for the column space
form of A is B  
0 0 1 


0 0 0 
of B . By Theorem 4.8.5(b), the three columns of A form a basis for the column space of A . We conclude
that 1,1, 4, 3  ,  2, 0, 2, 2  ,  2, 1, 3, 2  is a basis for the subspace of R 4 spanned by these vectors.
15.
1
1
We construct a matrix whose columns are the given vectors: A  
0

0
0 2
0
1
1
0
0 3
. The reduced row
2 0

2
3
1
0
echelon form of A is B  
0

0
0 0 0
1 0 0 
. By Theorem 4.8.4, the four columns of B form a basis for the
0 1 0

0 0 1
column space of B . By Theorem 4.8.5(b), the four columns of A form a basis for the column space of A .
We conclude that 1,1, 0, 0  ,  0, 0,1,1 ,  2, 0, 2, 2  ,  0, 3, 0, 3  is a basis for the subspace of R 4 spanned by
these vectors.
16.
Construct a matrix whose column vectors are the given vectors v1 , v 2 , v 3 , and v 4 :
 1 3 1 5
0
3 3 3

. Since its reduced row echelon form
A
 1 7 9 5


 1 1 3 1
1
0

0

0

–2 
0
2
1
1
1
0
0
0
0
0
0
 



w1 w 2 w 3 w 4
contains leading 1's in the first two columns, by Theorems 4.8.4 and 4.8.5(b), the vectors v1 and v 2 form a
basis for the column space of A , and for span v1 , v 2 , v 3 , v 4  .
88
Chapter 4: General Vector Spaces
By inspection, the columns of the reduced row echelon form matrix satisfy w3  2w1  w2 and
w 4  2 w1  w 2 . Because elementary row operations preserve dependence relations between column
vectors, we conclude that v3  2v1  v 2 and v 4  2 v1  v 2 .
17.
Construct a matrix whose column vectors are the given vectors v1 , v 2 , v 3 , v 4 , and v 5 :
 1 2 4 0 7 
 1 3 5 4 18 
 . Since its reduced row echelon form
A
 5
1 9 2 2


0 4 3 8 
 2
1
0

0

0
2 0 –1
1 –1 0 3
0 0 1 2

0 0 0 0
    
0
w1 w 2 w 3 w 4 w 5
contains leading 1's in columns 1, 2, and 4, by Theorems 4.8.4 and 4.8.5(b), the vectors v1 , v 2 and v 4 form
a basis for the column space of A , and for span v1 , v 2 , v 3 , v 4 , v 5  .
By inspection, the columns of the reduced row echelon form matrix satisfy w3  2w1  w2 and
w5  w1  3w2  2w 4 . Because elementary row operations preserve dependence relations between
column vectors, we conclude that v3  2v1  v 2 and v 5  v1  3v 2  2v 4 .
18.
We are employing the procedure developed in Example 9.
1
4
The reduced row echelon form of AT  
5

2
2 1
1

0
1 3
is 
0
3 2


0 2
0
1
1 1
. Since the first two columns of the
0 0

0 0
0
reduced row echelon form contain leading 1's, by Theorems 4.8.4 and 4.8.5(b) the first two columns of AT
form a basis for the column space of AT . Consequently, the first two rows of A , 1 4 5 2  and
 2 1 3 0 , form a basis for the row space of A .
19.
We are employing the procedure developed in Example 9.
 1 3 1
 4 2 0

The reduced row echelon form of AT   5
1 1

 6 4 2
 9 1 1
13
2
 1 0  17 14



5 
2
3
 0 1  7 14 
5 is  0 0 0 0  . Since the first two columns



7
0 0 0 0 
 0 0 0 0 
8 
of the reduced row echelon form contain leading 1's, by Theorems 4.8.4 and 4.8.5(b) the first two columns
4.8 Row Space, Column Space, and Null Space
89
of AT form a basis for the column space of AT . Consequently, the first two rows of A , 1 4 5 6 9
and 3 2 1 4 1 , form a basis for the row space of A .
1
20.
Let B  [v1 | v 2 ] 
2
1 0
. We are looking for a matrix A such that AB  O . Taking a transpose on both
3 2
2
4
sides results in BT AT  0T . We proceed to solve the homogeneous linear system BT u  0 . The reduced row
 1 1 3 2 0 
 1 0 1 2 0 
echelon form of its augmented matrix 
is 

 therefore the general
 2 0 2 4 0 
0 1 4 0 0 
 1   2 
 1 2 
4  0
4
0 
 1 4 1 0
solution in the vector form is s    t   . We can take AT  
thus A  
.
1   0 
 1 0
 2 0 0 1
   


1
 0   1
0
21.
Since TA  x   Ax , we are seeking the general solution of the linear system Ax  b .
(a)
8
1 0
1 2 0 0 
3
The reduced row echelon form of the augmented matrix 
is


4
0
1


1
1
4
0


3

0
 . The
0
general solution is x1   83 t , x2  43 t , x3  t . In vector form, x  t   83 , 43 ,1 where t is arbitrary.
(b)
8
1 0
1 2 0 1
3
The reduced row echelon form of the augmented matrix 
is 

4
1 1 4 3
0 1  3

 . The
 
7
3
2
3
general solution is x1  73  83 t , x2   23  43 t , x3  t .
In vector form, x   73 ,  23 , 0   t   83 , 43 ,1 where t is arbitrary.
(c)
8
1 0
1 2 0 1
3
The reduced row echelon form of the augmented matrix 
is


4
0
1


1
1
4
1


3

The general solution is x1  13  83 t , x2   23  43 t , x3  t .
In vector form, x   13 ,  23 , 0   t   83 , 34 ,1 where t is arbitrary.
22.
Since TA  x   Ax , we are seeking the general solution of the linear system Ax  b .
(a)
2
0
The reduced row echelon form of the augmented matrix 
1

2
The only solution is x1  x2  0 . In vector form, x   0, 0  .
0 0
1

0
1 0
is 
0
1 0


0 0
0
0 0
1 0 
.
0 0

0 0

.
 
1
3
2
3
90
Chapter 4: General Vector Spaces
(b)
2
0
The reduced row echelon form of the augmented matrix 
1

2
1
1

0
1 1
is 
0
1 1


0 1
0
0
0 0
1 0 
.
0 1

0 0
The system has no solution; no vector x exists for which TA  x   b .
(c)
2
0
The reduced row echelon form of the augmented matrix 
1

2
0 2
1

0
1 0
is 
0
1 0


0 2
0
0 0
1 0 
.
0 1

0 0
The system has no solution; no vector x exists for which TA  x   b .
23.
(a)
The associated homogeneous system x  y  z  0 has a general solution x   s  t , y  s , z  t . The
original nonhomogeneous system has a general solution x  1  s  t , y  s , z  t , which can be
expressed in vector form as
1,0,0     s  t , s, t 
 x, y, z   1  s  t, s, t   

particular
solution
of the
nonhomogeneous
system
(b)
general
solution
of the
homogeneous
system
Geometrically, the points  x, y, z  corresponding to solutions of x  y  z  1 form a plane passing
through the point 1,0,0  and parallel to the vectors  1,1,0  and  1,0,1 .
24.
(a)
The associated homogeneous system x  y  0 has a general solution x   t , y  t .
The original nonhomogeneous system has a general solution x  1  t , y  t , which can be expressed
in vector form as
1,0 
 x, y   1  t, t   
particular
solution
of the
nonhomogeneous
system
  t , t 

general
solution
of the
homogeneous
system
4.8 Row Space, Column Space, and Null Space
91
y
(b)
Geometrically, the points  x, y, z  corresponding
1
y=
y=
x+
to solutions of x  y  1 form a line
x+
1
0
-1
passing through the point 1,0,  and
x
parallel to the vector  1,1 .
1
-1
25.
(a)
The augmented matrix of the homogeneous system has the reduced row echelon form
 1 23  13 0 


0 0  . A general solution of the system is x1   23 s  13 t , x2  s , x3  t .
0 0
 0 0
0 0 
(b)
(c)
 3 2 1  1 
 2




4 2  0  yields  4  therefore x1  1, x2  0, x3  1 is a solution of the
Multiplying  6
 2 
 3 2
1  1 
nonhomogeneous system.
The vector form of a general solution of the nonhomogeneous system is
1,0,1    23 s  13 t , s, t 
 x1 , x2 , x3   



particular
solution
of the
nonhomogeneous
system
(d)
general
solution
of the
homogeneous
system
The augmented matrix of the homogeneous system has the reduced row echelon form
 1 23  13 32 


0 0  . A general solution of the system is x1  23  23 p  13 q , x 2  p , x3  q .
0 0
 0 0
0 0 
If we let p  s and q  t  1 then this agrees with the solution we obtained in part (c).
26.
(a)
The augmented matrix of the homogeneous system has the reduced row echelon form
 1 0 115 0 


11
2
2
 0 1  5 0  . A general solution of the system is x1   5 t , x2  5 t , x3  t .
 0 0
0 0 
92
Chapter 4: General Vector Spaces
(b)
 1 2 3 1
 2




1 4  1 yields  7 therefore x1  x2  x3  1 is a solution of the
Multiplying 2
 1
 1 7 5 1
nonhomogeneous system.
(c)
The vector form of a general solution of the nonhomogeneous system is
1,1,1
 x1 , x2 , x3   
particular
solution
of the
nonhomogeneous
system
(d)
   115 t , 25 t , t 

general
solution
of the
homogeneous
system
The augmented matrix of the homogeneous system has the reduced row echelon form
 1 0 115

2
0 1  5
 0 0
0


16
3
11
2
 . A general solution of the system is x1  5  5 s , x2  5  5 s , x3  s .
0 
16
5
3
5
If we let s  1  t then this agrees with the solution we obtained in part (c).
27.
 3 4 1 2 3


The augmented matrix of the nonhomogeneous system 6 8 2 5 7  has the reduced row echelon
 9 12 3 10 13
 1 43 13 0 13 


form  0 0 0 1 1  . A general solution of this system
 0 0 0 0 0 
x1  13  43 r  13 s , x2  r , x3  s , x 4  1 can be expressed in vector form as
, 0, 0, 1 +   34 r  13 s, r , s, 0 
 x1 , x2 , x3 , x4  = 13

 
particular
solution
of the
nonhomogeneous
system
28.
general
solution
of the
associated
homogeneous
system
 9 3 5 6 4 


The augmented matrix of the nonhomogeneous system 6 2 3 1 5 has the reduced row echelon
 3 1 3 14 8 
13
 1  13 0  133
3 


form  0
0 1
9 7  . A general solution of this system
 0
0 0
0 0 
x1  133  13 s  133 t ,
x2  s ,
x3  7  9t ,
can be expressed in vector form as
x4  t
4.8 Row Space, Column Space, and Null Space
93
13
13
1
 x1 , x2 , x3 , x4  = 
3 , 0, 7, 0  +  3 s  3 t , s, 9t , t 
 
particular
solution
of the
nonhomogeneous
system
29.
(a)
general
solution
of the
associated
homogeneous
system
 1 0 0


The reduced row echelon form of A is B   0 1 0  . The general solution x   x, y, z  of Ax  0
 0 0 0 
is x  0 , y  0 , z  t ; in vector form, x  t  0,0,1 . This shows that the null space of A consists of all
points on the z -axis.
The column space of A , span 1,0,0  ,  0,1,0  clearly consists of all points in the xy -plane.
(b)
30.
(a)
31.
(a)
0 0 0 
0 1 0  is an example of such a matrix.


0 0 1
1 0 0 


e.g., 0 1 0 
0 0 1 
null space is the origin
(b)
1 0 0 


e.g., 0 1 0 
0 0 0 
null space is the z -axis
(c)
1 0 0 


e.g., 0 0 0 
0 0 0 
null space is the yz -plane
 3 5
By inspection, 
 has the desired null space. In general, this will hold true for all matrices of
0 0 
3a 5a 
the form 
 where a and b are not both zero (if a  b  0 then the null space is the entire
 3b 5b 
plane).
(b)
Only the zero vector forms the null space for both A and B (their determinants are nonzero,
therefore in each case the corresponding homogeneous system has only the trivial solution).
The line 3 x  y  0 forms the null space for C .
The entire plane forms the null space for D .
True-False Exercises
(a)
True.
(b)
False. The column space of A is the space spanned by all column vectors of A .
(c)
False. Those column vectors form a basis for the column space of R .
(d)
False. This would be true if A were in row echelon form.
(e)
1 0 
1 0 
and B  
False. For instance A  

 have the same row space, but different column spaces.
2 0 
3 0 
94
Chapter 4: General Vector Spaces
(f)
True. This follows from Theorem 4.8.3.
(g)
True. This follows from Theorem 4.8.3.
(h)
False. Elementary row operations generally can change the column space of a matrix.
(i)
True. This follows from Theorem 4.8.1.
(j)
False. Let both A and B be n  n matrices. By Theorem 4.8.3, row operations do not change the row space
of a matrix. An invertible matrix can be reduced to I thus its row space is always R n . On the other hand, a
singular matrix cannot be reduced to identity matrix - at least one row in its reduced row echelon form is
made up of zeros. Consequently, its row space is spanned by fewer than n vectors, therefore the dimension
of this space is less than n .
4.9 Rank, Nullity, and the Fundamental Matrix Spaces
1.
(a)
(b)
2.
(a)
1
0
The reduced row echelon form of A is 
0

0

rank  A   1 (the number of leading 1's)

nullity  A   3 (by Theorem 4.9.2).
2 1 1
0 0 0 
. We have
0 0 0

0 0 0
 1 2 0 1 3


The reduced row echelon form of A is 0 0 1 2 2  . We have
0 0 0 0 0 

rank  A   2 (the number of leading 1's)

nullity  A   3 (by Theorem 4.9.2).
1
0
The reduced row echelon form of A is 
0

0
0 2 0
1
0
0

rank  A   3 (the number of leading 1's)

nullity  A   2 (by Theorem 4.9.2).
1
3 0 4 
. We have
0 1 1

0 0
0
4.9 Rank, Nullity, and the Fundamental Matrix Spaces
(b)
3.
4.
5.
6.
7.
0 2 0 
1 1 0 
0 0 1 . We have

0 0 0
0 0 0 

rank  A   3 (the number of leading 1's)

nullity  A   1 (by Theorem 4.9.2).
(a)
rank  A   3 ; nullity  A   0
(b)
rank  A   nullity  A   3  0  3  n  number of columns of A
(c)
3 leading variables; 0 parameters in the general solution (the solution is unique)
(a)
rank  A   2 ; nullity  A   1 ;
(b)
rank  A   nullity  A   2  1  3  n  number of columns of A
(c)
2 leading variables; 1 parameter in the general solution
(a)
rank  A   1 ; nullity  A   2
(b)
rank  A   nullity  A   1  2  3  n  number of columns of A
(c)
1 leading variable; 2 parameters in the general solution
(a)
rank  A   3 ; nullity  A   1 ;
(b)
rank  A   nullity  A   3  1  4  n  number of columns of A
(c)
3 leading variables; 1 parameter in the general solution
(a)
If every column of the reduced row echelon form of a 4  4 matrix A contains a leading 1 then
(b)
(c)
8.
1
0

The reduced row echelon form of A is 0

0
0

the rank of A has its largest possible value: 4

the nullity of A has the smallest possible value: 0
If every row of the reduced row echelon form of a 3  5 matrix A contains a leading 1 then

the rank of A has its largest possible value: 3

the nullity of A has the smallest possible value: 2
If every column of the reduced row echelon form of a 5  3 matrix A contains a leading 1 then

the rank of A has its largest possible value: 3

the nullity of A has the smallest possible value: 0
The largest possible value for the rank of an m  n matrix A is the smaller of the two dimensions of A :
95
96
Chapter 4: General Vector Spaces

n if m  n (when every column of the reduced row echelon form of A contains a leading 1),

m if m  n (when every row of the reduced row echelon form of A contains a leading 1).
The smallest possible value for the nullity of an m  n matrix A is

0 if m  n (when every column of the reduced row echelon form of A contains a leading 1),

n  m if m  n (when every row of the reduced row echelon form of A contains a leading 1).
9.
(a)
(i)
(ii)
(d)
(e)
(f)
(g)
mn
r
33 33 33 5 9 5 9 4  4 6  2
0
3
2
1
2
2
2
rank( A | b)
s
3
3
1
2
3
0
2
dimension of the row space of A
r
3
2
1
2
2
0
2
dimension of the column space of A
r
3
2
1
2
2
0
2
dimension of the null space of A
 nr
0
1
2
7
7
4
0
dimension of the null space of AT
 mr
0
1
2
3
3
4
4
is the system Ax  b consistent?
Is r  s ?
Yes
No
Yes
Yes
No
Yes
Yes
 n  r if
consistent
0
-
2
7
-
4
0
 1 0  67  47 

2
AT is
The reduced row echelon form of A is 0 1 177
7  whereas the reduced row echelon form of
0 0
0
0 
1
0

0

0
11.
(c)
Size of A :
rank( A)
(iii) number of parameters in the
general solution of Ax  b
10.
(b)
1
1 1
. We conclude that rank  A   rank AT  2 .
0 0

0 0
0
 
1 0 


The reduced row echelon form of A is 0 1  . Therefore rank  A   rank  AT   2 . Applying
0 0 
Formula (4) to both A and its transpose yields 2  nullity  A   2 and 2  nullity  AT   3.
  1  4  
  1  0  


It follows that bases for row  A  and row  A  are    ,    and   0  ,  3  respectively. Since
  4  3  


  9   0  
T
 1 0 9
nullity  A   0 the nullspace of A contains only the zero vector. AT  
 has
4 3 0 
4.9 Rank, Nullity, and the Fundamental Matrix Spaces
 1 0 9 
reduced row echelon form 
 . The general solution is
 0 1 12 
x1  9t , x2  12 t , x3  t . In vector form, x  t  9, 12,1 where t
 9 


is arbitrary. So a basis for the left null space (of A ) is   12   .


  1 
T
12.
1 2 4 
T
The reduced row echelon form of A is 
 . Therefore rank  A   rank  A   1 .
0
0
0


1  
  1 


It follows that bases for row  A  and row  A  are   2   and     respectively and a
  2  
4 
  
T
  1  1 


basis for the null space of A is   2  ,  0   .
  0   4  
    
Applying Formula (4) to AT yields 2  nullity  AT   3.
1 2 
 2  


Since A has reduced row echelon form 0 0  , a basis for the left null space (of AT ) is   .
 1 
0 0 
T
13.
1 0 4 


The reduced row echelon form of A is 0 1 4  . Therefore rank  A   rank  AT   2 .
0 0 0 
  0   1 
  0   1 
    


It follows that bases for row  A  and row  A  are   1 ,  0   and   1 ,  0   respectively


  4   4  
  2   3 
    
T
  3 


and a basis for the null space of A is   2   .
  1 
  
Applying Formula (4) to AT yields 2  nullity  AT   3
  4  
 1 0 3




T
Since A has reduced row echelon form 0 1 2  , a basis for the left null space (of A ) is   4   .
  1 
0 0 0 
  
T
97
98
Chapter 4: General Vector Spaces
14.
 3 4

1 5
Since det  
  1 4
 
  1 1
7 

2 2  
 144 , the nullspaces of A and AT only contain the zero vector.

0 3 

2
2  
0
The columns and rows of A form bases for row  A  and row  AT  respectively.
15.
  1  0  
From Exercise 11, a basis for row  A  is    ,    whereas the null space of A contains only the zero
  4  3  
vector. These subspaces clearly satisfy Theorem 4.9.7(a). Bases for row  AT  and the left null
 9 
 1 4
  1  4  
9 




    

space are given by   0  ,  3  and   12   respectively. Since det   0 3 12    678  0,
  9   0  
  1 
  9 0
1 

    


 1  9 
 4   9




  

these three vectors are a basis for R . Moreover  0    12   0 and  3   12   0 . Therefore
 9   1
 0   0 
3
the left null space is the set of all vectors in R3 that are orthogonal to all vectors in row  AT  .
16.
1  
  1  1 
  


From Exercise 12, a basis for row  A  is   2   and a basis for the null space of A is   2  ,  0   .
4 
  0   4  
  
    
  1 1 1 
 1  1


   

3
Since det   2 2 0    24  0, these three vectors are a basis for R . Moreover  2    2   0
  4 0 4  
 4   0 


 1  1
   
and  2    0   0 . Therefore, the null space of A is set of all vectors in R3 that are orthogonal to
 4   4 
  1 
all vectors in row  A  . Similarly, bases for row  AT  and the left null space are given by     and
  2  
  1 2  
 2  
2
   respectively. Since det  
   5  0, these vectors are a basis for R . Moreover,
2
1
1


  
 1  1
2
 2    2   0. Therefore the left null space is the set of all vectors in R that are orthogonal to all
   
4.9 Rank, Nullity, and the Fundamental Matrix Spaces
vectors in row  AT  .
17.
  0   1 
  3 
    


From Exercise 13, a basis for row  A  is   1 ,  0   and a basis for the null space of A is   2   .
  2   3  


    
  1 
  0 1 3  


Since det   1 0 2    14  0, these three vectors are a basis for R 3 . Moreover
  2 3
1 

 0   3
 1  3
 1   2   0 and  0    2   0 . Therefore, the null space of A is the set of all vectors in R3
   
   
 2   1
 3  1
that are orthogonal to all vectors in row  A  . Similarly, bases for row  AT  and the left null space are
  0 1 4  
  4  
  0   1 


  
    
given by   1 ,  0   and   4   respectively. Since det   1 0 4    33  0, these vectors
  1 
  4   4  
  4 4
1 
  

    
 0   4 
 1  4 




   
are a basis for R . Moreover,  1   4   0 and  0    4   0 . Therefore, the left null space is the
 4   1
 4   1
3
set of all vectors in R2 that are orthogonal to all vectors in row  AT  .
18.
From Exercise 14, the bases for row  A  and row  AT  are given by the rows and columns of
A respectively. These are each bases for R 4 . Also, both the null space and the left null space
contain only the zero vector so that both parts of Theorem 4.9.7 are clearly satisfied.
19.
Following Example 5, we find that the reduced row echelon form of the
8 7 1 0 0 
 0 2

4 0 0 1 0 
augmented matrix  2 2
 3 4 2
5 0 0 1
5
1 0 6 0
17

5
is 0 1 4 0
17
0 0 0 1  171
19
17
21
34
3
17
7
17
7
17
2
17
  0   2   7  

      

 . Hence, the column space basis is   2  ,  2  ,  0   , the row space
  3   4   5  

      
99
100
Chapter 4: General Vector Spaces
  0   2   3 
  6  
      
  
  2   2   4  
 4 
,
,
basis is 
 ,the null space basis is     , and the left null space contains only the
  8   4   2  
  1 
  7   0   5 
  0  




zero vector.
20.
Following Example 5, we find that the reduced row echelon form of the
1
2
Augmented matrix 
0

1
3 1 1 1 0 0 0
1
0

8 0 0 2 0 1 0 0
is 
0
4 6 0 1 0 0 1 0 


0
0 0 0 0 0 0 1
0
2
0 0 0 0 0
1
4
1 0 0
0
0 1 0 0 0
0 0
1
2
1
0
1
8
1
12
1
2
1 
1
0  14 
 61  61 

1
0
2
0
Hence, a basis for
  1 2   0   1 
  1  2   3  1 
        
        
 2   8   4  0  
2
8
0
0
        


,
,
the column space basis is  ,
 , the row space basis is   3 , 0  ,  6  , 0   ,the null









0
4
6
0


  1 0   0  0  
  1  0   0  0  
        


  1 2   1 0  
 0 
 1  
 4  


space basis is   0   , and the left null space contains only the zero vector.
 1  
 2  
  1 
21.
(a)
Applying Formula (4) to both A and its transpose yields
 
2  nullity  A   4 and 2  nullity AT  3
therefore
 
nullity  A   nullity AT  1
(b)
Applying Formula (4) to both A and its transpose yields
 
 
rank  A   nullity  A   n and rank AT  nullity AT  m
By Theorem 4.9.4, rank  AT   rank  A  therefore
 
nullity  A   nullity AT  n  m
4.9 Rank, Nullity, and the Fundamental Matrix Spaces
22.
101
1 3
 x1  3 x2  1 3
 x1 




T  x1 , x2    x1  x2   1 1   ; the standard matrix is A  1 1 .
x
 x1  1 0   2 
1 0 
 1 0


Its reduced row echelon form is 0 1 .
0 0 
(a)
23.
rank  A   2
(b)
nullity  A   0
 x1 
 
 x1  x2   1 1 0 0 0   x2 
T  x1 , x2 , x3 , x4 , x5    x2  x3  x4    0 1 1 1 0   x3  ; the standard matrix is
 
 x4  x5   0 0 0 1 1  x4 
 x5 
 
 1 1 0 0 0
 1 0 1 0 0 


A  0 1 1 1 0  . Its reduced row echelon form is 0 1 1 0 1 .
0 0 0 1 1
0 0 0 1 1
24.
(a)
rank  A   3
(a)
The determinant of A is
(b)
1 1 t
1
1
t
1 t 1  0 t 1 1 t
t 1 1 t 1 0 1 t
1 1 t
t
 0
0 1 t
t 1 1 t 1 t
  1  t 
nullity  A   2
1 times the first row was added to the
second row and to the third row.
Last column was added to
the second column.
1 t
1
t 1 1 t
Cofactor expansion along
the second row.
  1  t   1  t   1  t  t  1 
  1  t   2  t 
2
From parts (g) and (n) of Theorem 4.9.8, rank  A   3 when det  A   0 , i.e. for all t values other
than 1 or 2 .
102
Chapter 4: General Vector Spaces
1 1 1 


If t  1 , the matrix has the reduced row echelon form 0 0 0  so that its rank is 1.
0 0 0 
 1 0 1


If t  2 , the matrix has the reduced row echelon form 0 1 1 so that its rank is 2.
0 0 0 
(b)
The determinant of A is
t
3 1
t
3 1
3 6 2  3  2t 0 0
1 3
1 3 t
t
   3  2t 
2 times the first row was added
to the second row.
3 1
3
t
Cofactor expansion along
the second row.
   3  2t  3t  3 
 3  2t  3  t  1
From parts (g) and (n) of Theorem 4.9.8, rank  A   3 when det  A   0 , i.e. for all t values other
than 1 or 23 .
0
1 0


If t  1 , the matrix has the reduced row echelon form 0 1  13  so that its rank is 2.
0 0
0 
1
1 0


If t  , the matrix has the reduced row echelon form 0 1  65  so that its rank is 2.
0 0
0 
3
2
25.
By inspection, there must be leading 1's in the first column (because of the first row) and in the third
column (because of the fourth row) regardless of the values of r and s , therefore the matrix cannot have
rank 1.
It has rank 2 if r  2 and s  1 , since there is no leading 1 in the second column in that case.
26.
(a)
1 0 0 


e.g., A  0 1 0  - the column space is the xy -plane in R3
0 0 0 
(b)
The general solution of Ax  0 is x   0,0, t  . The null space is the z -axis.
(c)
The row space of A is the xy -plane in R3
4.9 Rank, Nullity, and the Fundamental Matrix Spaces
27.
103
No, both row and column spaces of A must be planes through the origin since from nullity  A   1 , it
follows by Formula (4) that rank  A   3  1  2 .
28.
29.
(a)
3; reduced row echelon form of A can contain at most 3 leading 1's when each of its rows is nonzero;
(b)
5; if A is the zero matrix, then the general solution of Ax  0 has five parameters;
(c)
3; reduced row echelon form of AT can contain at most 3 leading 1's when each of its columns has a
leading 1;
(d)
3; if A is the zero matrix, then the general solution of AT x  0 has three parameters;
(a)
3; reduced row echelon form of A can contain at most 3 leading 1's when each of its rows is nonzero;
(b)
5; if A is the zero matrix, then the general solution of Ax  0 has five parameters;
(c)
3; reduced row echelon form of A can contain at most 3 leading 1's when each of its columns has a
leading 1;
(d)
3; if A is the zero matrix, then the general solution of Ax  0 has three parameters;
30.
By part (b) of Theorem 4.9.3, the nullity of A is 0. By Formula(4), rank  A   6  0  6 .
31.
(a)
(b)
32.
By Formula (4), nullity  A   7  4  3 thus the dimension of the solution space of Ax  0 is 3.
No, the column space of A is a subspace of R5 of dimension 4, therefore there exist vectors b in R5
that are outside this column space. For any such vector, the system Ax  b is inconsistent.
The rank of A is 2 if and only if the two row vectors of A are not scalar multiples of one another, i.e. they
are nonparallel nonzero vectors. This is equivalent to the cross product of these vectors being nonzero, i.e.
 a12
 a
 22
33.
a13
a23
,
a11
a13 a11
,
a23 a21
a21
a12 
   0,0,0 
a22 
From the result of Exercise 32, the rank of the matrix being less than 2 implies that
x
y
1
x
 x2  y  0 ,
x
z
1
y
 xy  z  0 ,
y
z
x
y
 y 2  xz  0
therefore y  x 2 and z  xy  x 3 . Letting x  t , we obtain y  t 2 and z  t 3 .
34.
1 0 
0 1 
1 0 
0 0 
and B  
For instance, let A  
. We have A2  
and B2  

 , therefore


0 0 
0 0 
0 0 
0 0 
 
 
rank A2  1  0  rank B 2 even though rank  A   1  rank  B  .
104
35.
Chapter 4: General Vector Spaces
1
0

0
T
The reduced row echelon form of A is 
0
0

0
0 10 0 
1 5 0 
0 0 1
T
 . The general solution of A x  0 has components
0 0 0
0 0 0

0 0 0 
x1  10 t , x2  5t , x3  t , x4  0 , so in vector form x  t  10,5,1,0  . Evaluating dot products of columns
of A and v   10,5,1,0  , which forms a basis for the null space of AT we obtain
c1  v  1,2,0,2    10,5,1,0   1 10    2  5    0 1   2  0   0
c 2  v   3,6,0,6    10,5,1,0    3  10    6  5    0 1   6  0   0
c 3  v   2, 5,5,0    10,5,1,0    2  10    5  5    5 1   2  0   0
c 4  v   0, 2,10,8    10,5,1,0    0  10    2  5   10 1   8  0   0
c 5  v   2,4,0,4    10,5,1,0    2  10    4  5    0 1   4  0   0
c 6  v   0, 3,15,18    10,5,1,0    0  10    3  5   15 1  18  0   0
Since the column space of A is span c1 , c 2 , c 3 , c 4 , c 5 , c 6  and the null space of AT is span v , we conclude
that the two spaces are orthogonal complements in R 4 .
37.
(a)
m  3  2  n so the system is overdetermined. The augmented matrix of the system is row
b1  b3
1 0


 hence the system is inconsistent for all b 's that satisfy
0
1
b3
equivalent to 

0 0 3b1  b2  2b3 
3b1  b2  2b3  0 .
(b)
m  2  3  n so the system is underdetermined. The augmented matrix of the system is row
1
0
b  14 b2 
1 0
2 1
equivalent to 
 hence the system has infinitely many solutions for all b 's
4
1
1
0 1  3  6 b1  12 b2 
(no values of b 's can make this system inconsistent).
(c)
m  2  3  n so the system is underdetermined. The augmented matrix of the system is row
1 0  23  12 b1  23 b2 
equivalent to 
 hence the system has infinitely many solutions for all b 's (no
1
1
1
0 1  2  2 b1  2 b2 
values of b 's can make this system inconsistent).
4.9 Rank, Nullity, and the Fundamental Matrix Spaces
1
0

The augmented matrix of the system is row equivalent to  0

0
0

38.
0
1
0
0
0
2b1  3b2 
b1  b2 
3b1  4b2  b3  . For the system to be

2b1  b2  b4 
7b1  8b2  b5 
consistent, we must have 3b1  4b2  b3  0 , 2 b1  b2  b4  0 , and 7b1  8b2  b5  0 .
For arbitrary s and t , the b 's must satisfy b1  s , b2  t , b3  3s  4t , b4  2 s  t , b5  7s  8t .
True-False Exercises
(a)
1 2 
False. For instance, in, neither row vectors nor column 
 vectors are linearly independent.
2 4 
(b)
True. In an m  n matrix, if m  n then by Theorem 4.6.2(a), the n columns in Rm must be linearly
dependent. If m  n , then by the same theorem, the m rows in R n must be linearly dependent. We
conclude that m  n .
(c)
False. The nullity in an m  n matrix is at most n .
(d)
False. For instance, if the column contains all zeros, adding it to a matrix does not change the rank.
(e)
True. In an n  n matrix A with linearly dependent rows, rank  A   n  1 .
By Formula (4), nullity  A   n  rank  A   1 .
(f)
False. By Theorem 4.9.7, the nullity must be nonzero.
(g)
False. This follows from Theorem 4.9.1.
(h)
False. By Theorem 4.9.4, rank  AT   rank  A  for any matrix A .
(i)
True. Since each of the two spaces has dimension 1, these dimensions would add up to 2 instead of 3 as
required by Formula (4).
False. For instance, if n  3 , V  span i, j (the xy -plane), and W  span i (the x -axis) then
(j)
W   span  j, k (the yz -plane) is not a subspace of V   span k (the z -axis).
(Note that it is true that V  is a subspace of W  .)
Chapter 4 Supplementary Exercises
1.
(a)
u  v   3  1, 2  5, 4  2    4, 3, 2  ;
ku   1  3, 0, 0    3, 0, 0 
(b)
For any u   u1 , u2 , u3  and v   v1 , v2 , v3  in V , u  v   u1  v1 , u2  v2 , u3  v3  is an ordered
triple of real numbers, therefore u  v is in V . Consequently, V is closed under addition.
105
106
Chapter 4: General Vector Spaces
For any u   u1 , u2 , u3  in V and for any scalar k , ku   ku1 , 0, 0  is an ordered triple of real
numbers, therefore ku is in V . Consequently, V is closed under scalar multiplication.
(c)
Axioms 1-5 hold for V because they are known to hold for R3 .
(d)
Axiom 7:
k ((u1 , u2 , u3 )  (v1 , v2 , v3 ))  k (u1  v1 , u2  v2 , u3  v3 )  (k (u1  v1 ), 0, 0)  k (u1 , u2 , u3 )  k (v1 , v2 , v3 )
for all real k , u1 , u2 , u3 , v1 , v2 , and v3 .
Axiom 8:  k  m  u1 , u2 , u3     k  m  u1 , 0, 0    ku1  mu1 , 0, 0   k  u1 , u2 , u3   m  u1 , u2 , u3 
for all real k , m , u1 , u2 , and u3 ;
Axiom 9: k  m  u1 , u2 , u3    k  mu1 , 0, 0    kmu1 , 0, 0    km  u1 , u2 , u3  for all real k , m , u1 ,
u2 , and u3 ;
(e)
Axiom 10 fails to hold: 1 u1 , u2 , u3    u1 , 0, 0  does not generally equal  u1 , u2 , u3  .
Consequently, V is not a vector space.
2.
(a)
The solution space is R3 since all vectors  x, y, z  satisfy the system.
(b)
 1  23 21 0 


The augmented matrix of the system has the reduced row echelon form  0
0 0 0
 0
0 0 0 
therefore the general solution is x  23 s  12 t , y  s , z  t . The solution space is a plane in
R3 ; its equation is 2 x  3 y  z  0 , the first equation in our system (the other two equations
were its multiples).
(c)
 1 2 0 0 


The augmented matrix of the system has the reduced row echelon form 0 0 1 0 
0 0 0 0 
therefore the general solution is x  2t , y  t , z  0 - these form parametric equations for a
line in R3 .
(d)
1 0 0 0 


The augmented matrix of the system has the reduced row echelon form 0 1 0 0 
0 0 1 0 
therefore the homogeneous system has only the trivial solution  0,0,0  - the origin.
3.
1 1 s 
A  1 s 1
 s 1 1
The coefficient matrix of the system
1
s 
1
0 s  1 1  s 


 0 1  s 1  s 2 
1
s
1

0 s  1
1  s 

0
0
2  s  s 2 
1 times the first row was added to the second row and
s times the first row was added to the third row.
The second row was added to the third row.
After factoring 2  s  s 2   2  s 1  s  , we conclude that

the solution space is a plane through the origin if s  1 (the reduced row echelon form becomes
 1 1 1
0 0 0  , so
nullity  A   2 ),


0 0 0 

the solution space is a line through the origin if s  2 (the reduced row echelon form becomes
 1 0 1
0 1 1 , so
nullity  A   1 ),


0 0 0 

the solution space is the origin if s  2 and s  1 (the reduced row echelon form becomes
 1 0 0
0 1 0 

 , so nullity  A   0 ),
0 0 1

4.
(a)
(b)
5.
there are no values of s for which the solution space is R3 .
 4a, a  b, a  2b   a  4,1,1  b  0, 1,2 
 3a  b  3c, a  4b  c,2a  b  2c   a  3, 1,2   b 1,4,1  c  3, 1,2 
  a  c  3, 1,2   b 1,4,1
(c)
 2a  b  4c,3a  c,4b  c   a  2,3,0   b  1,0,4   c  4, 1,1
(a)
Using trigonometric identities we can write
f1  sin  x     sin x cos  cos x sin    cos  f   sin   g
g1  cos  x     cos x cos  sin x sin     sin   f   cos  g
which shows that f1 and g1 are both in W  span f , g .
(b)
The functions f1  sin  x    and f2  cos  x    are linearly independent since neither
function is a scalar multiple of the other. By Theorem 4.6.4, these functions form a basis for
W.
108
6.
Chapter 4: General Vector Spaces
(a)
We are looking for scalars c1 , c2 , and c3 such that c1v1  c2 v 2  c3 v3  v , i.e.,
1c1
1c1
 3c2
 2c3
 c3
 1
 1
 1 0 1 1
The augmented matrix of this system has the reduced row echelon form 
so that
2 
0 1 1 3 
the general solution is c1  1  t , c2  23  t , c3  t .
E.g., letting t  0 yields 1v1  23 v 2  0v 3  v , whereas with t  1 we obtain 0v1  13 v 2  1v 3  v .
(b)
The vectors v1 , v 2 , and v 3 do not form a basis for R2 therefore Theorem 4.5.1 does not
apply here.
7.
Denoting B   v1

v n  we can write AB   Av1
Av n  .

By parts (g) and (h) of Theorem 4.9.7, the columns of AB are linearly independent if and only if
det  AB   0 . This implies that det  A   0 , i.e., the matrix A must be invertible.
8.
9.
No, e.g., x  1 and x  1 form a basis for P1 even though both are of degree 1 .
(a)
(b)
 1 0 1
 1 0 1




The reduced row echelon form of 0 1 0  is 0 1 0  , so the rank is 2 and the nullity
 1 0 1
0 0 0 
is 1.
1
0
The reduced row echelon form of 
1

0
0
1
0
1
1
0
1
0
0
1

0
1
is 
0
0


1
0
0
1
0
0
1
0
0
0
0
1
, so the rank is 2 and the
0

0
nullity is 2.
10.
(c)
For n  1 , the rank is 1 and the nullity is 0.
For n  2 , the reduced row echelon form will always have two nonzero rows; the rank is 2
and the nullity is n  2 .
(a)
Adding 1 times the first row to the third row yields the reduced row echelon form
1 0 1 
0 1 0  ; we conclude that the matrix has rank 2 and nullity 1.


0 0 0 
(b)
Adding 1 times the first row to the fifth row and adding 1 times the second row to the
1
0

fourth row yields the reduced row echelon form  0

0
 0
rank 3 and nullity 2.
(c)
0 0 0 1
1 0 1 0 
0 1 0 0  therefore the matrix has

0 0 0 0
0 0 0 0 
After performing n elementary row operations which follow the same pattern as in parts (a)
and (b):

add 1 times row 1 to row 2n  1 ,

add 1 times row 2 to row 2n ,

add 1 times row 3 to row 2n  1 ,

…

add 1 times row n to row n  2 ,
the reduced row echelon form will be obtained: its top n  1 rows are identical to those in the
original X -matrix, whereas the bottom n rows are completely filled with zeros.
We conclude that the matrix has rank n  1 and nullity n .
11.
(a)
Let W be the set of all polynomials p in Pn for which p   x   p  x  . In order for a
polynomial p  x   a0  a1 x  a2 x 2    an x n to be in W , we must have
p  x   a0  a1 x  a2 x 2    an x n  a0  a1   x   a2   x     an   x   p   x  which
2
n
implies that for all x , 2 a1 x  2 a3 x 3    0 so a1  a3    0 .
Any polynomial of the form p  x   a0  a2 x 2  a4 x 4    a2  n / 2  x 2 n / 2  satisfies
p   x   p  x  (the notation t represents the largest integer less than or equal to t ).
This means W  span{1 , x 2 , x 4 , ..., x 
2 n /2 
} , so W is a subspace of Pn by Theorem 4.3.1(a).
The polynomials in {1, x 2 , x 4 , ..., x 
} are linearly independent (since they form a subset
2 n /2 
of the standard basis for Pn ), consequently they form a basis for W .
(b)
Let W be the set of all polynomials p in Pn for which p  0   p 1 .
In order for a polynomial p  x   a0  a1 x  a2 x 2    an x n to be in W , we must have
p  0   a0  a0  a1  a2    an  p 1 which implies that a1  a2    an  0 .
Therefore any polynomial in W can be expressed as
p  x   a0  a1 x  a2 x 2    an 1 x n 1    a1  a2    an 1  x n






 a0  a1 x  x n  a2 x 2  x n    an 1 x n 1  x n .
110
Chapter 4: General Vector Spaces
This means W  span 1, x  x n , x 2  x n ,, x n 1  x n  , so W is a subspace of Pn by Theorem
4.3.1(a). Since a0  a1  x  x n   a2  x 2  x n     an 1  x n 1  x n   0 implies
a0  a1  a2    an 1  0 , it follows that 1, x  x n , x 2  x n ,, x n 1  x n  is linearly
independent, hence it is a basis for W .
12.
For p  x   a0  a1 x  a2 x 2    an x n to have a horizontal tangent at x  0 , we must have
p  0   0 .
Since p  x   a1  2 a2 x    nan x n 1 it follows that p  0   a1  0 . The set of all polynomials
p  x  for which a1  0 is span 1, x 2 , x 3 ,, x n  and therefore a subspace of Pn .
Since the set 1, x 2 , x 3 ,, x n  is clearly linearly independent and spans the subspace, it forms a
basis for the subspace.
13.
(a)
a b

A general 3  3 symmetric matrix can be expressed as  b d
 c e
c
e 
f 
 1 0 0
0 1 0 
0 0 1
0 0 0 
0 0 0 
0 0 0 










 a 0 0 0   b  1 0 0   c 0 0 0   d 0 1 0   e 0 0 1  f 0 0 0  .
0 0 0 
0 0 0 
 1 0 0 
0 0 0 
0 1 0 
0 0 1
 1 0 0   0 1 0  0 0 1 0 0 0  0 0 0  0 0 0 

 
 
 
 
 

Clearly the matrices  0 0 0  ,  1 0 0  , 0 0 0  , 0 1 0  , 0 0 1 , 0 0 0 
 0 0 0   0 0 0   1 0 0  0 0 0  0 1 0  0 0 1
span the space of all 3  3 symmetric matrices. Also, these matrices are linearly indpendent,
 a b c  0 0 0 

 

since  b d e   0 0 0  requires that all six coefficients in the linear combination
 c e f  0 0 0 
above must be zero. We conclude that the matrices
0 0 0 
 1 0 0  0 1 0  0 0 1 0 0 0  0 0 0 
0 0 0  ,  1 0 0  , 0 0 0  , 0 1 0  , 0 0 1 , and 0 0 0  form a basis




 
 
 
 
0 0 1
0 0 0  0 0 0   1 0 0  0 0 0  0 1 0 
for the space of all 3  3 symmetric matrices.
(b)
A general 3  3 skew-symmetric matrix can be expressed as
 0 a b
 0 1 0
 0 0 1
0 0 0 
 a 0 c   a  1 0 0   b  0 0 0   c 0 0 1 .








 b c 0 
 0 0 0 
 1 0 0 
0 1 0 
 0 1 0   0 0 1 0 0 0 

 
 

Clearly the matrices  1 0 0  ,  0 0 0  , 0 0 1 span the space of all 3  3 skew 0 0 0   1 0 0  0 1 0 
symmetric matrices. Also, these matrices are linearly indpendent, since
 0 a b  0 0 0 
 a 0 c   0 0 0 

 

 b c 0  0 0 0 
requires that all three coefficients in the linear combination above must be zero. We conclude
 0 1 0   0 0 1
0 0 0 






that the matrices  1 0 0  ,  0 0 0  , and 0 0 1 form a basis for the space of all
 0 0 0   1 0 0 
0 1 0 
3  3 skew-symmetric matrices.
14.
(a)
 1 0
A submatrix 
 has nonzero determinant 1 therefore the rank of the original matrix is 2.
2 1
(b)
All three 2  2 submatrices have zero determinant:
1 2
2 4

1 3
2 6

2 3
4 6
 0 . Since
determinant of any 1  1 submatrix of the original matrix is nonzero, the original matrix has
rank 1.
(c)
The original 3  3 matrix has zero determinant. A submatrix
1
0
2 1
has nonzero determinant
1 therefore the rank of the original matrix is 2 .
(d)
15.
17.
The original 3  3 matrix has nonzero determinant 30 therefore the rank of the original
matrix is 3 .
All submatrices of size 3  3 or larger contain at least two rows that are scalar multiples of each
other, so their determinants are 0. Therefore the rank cannot exceed 2. The possible values are:

rank  A   2 , e.g., if a51  a16  1 regardless of the other values,

rank  A   1 , e.g., if a16  a26  a36  a46  0 and a56  1 regardless of the other values, and

rank  A   0 if all entries are 0.
 k 0  cos
The standard matrices for Dk , R , and Sk are 
, 
0 k   sin 
shear in the x -direction).
(a)
 k 0  cos
 0 k   sin 


commute.
 sin    k cos

cos   k sin 
k sin   cos

k cos   sin 
 sin  
1 k 
, and 
 (assuming a

cos 
0 1 
 sin    k 0 
therefore Dk and R
cos  0 k 
112
Chapter 4: General Vector Spaces
(b)
 cos

 sin 
 sin   1 k  cos

cos  0 1   sin 
 1 k   cos
 0 1   sin 


k cos  sin  
does not generally equal
k sin   cos 
 sin    cos  k sin 

cos  
sin 
 sin   k cos 

cos

therefore R and Sk do not commute (same result is obtained if a shear in the y -direction is
taken instead)
(c)
18.
(b)
 k 0  1 k   k k 2  1 k   k 0 

0 k  0 1   

 therefore Dk and Sk commute (same result is


  0 k  0 1   0 k 
obtained if a shear in the y -direction is taken instead)
Every vector  x, y, z  in R3 can be expressed in exactly one way as a sum of a vector
 x, y,0  in U and  0,0, z  in W . Consequently, R3  U  W .
(c)
Every vector  x, y, z  in R3 can be expressed as a sum of a vector in U and a vector in V .
However, in this case, this representation is not unique, for instance,
1,1,0    0,1,3   1,3,0    0, 1,3 
1,2,3  
   


vector in U
vector in V
vector in U
vector in V
We conclude that R3 is not a direct sum of the xy -plane and the yz -plane.
5.1 Eigenvalues and Eigenvectors
CHAPTER 5: EIGENVALUES AND EIGENVECTORS
5.1 Eigenvalues and Eigenvectors
1.
 1 2   1  1
Ax  
       1x therefore x is an eigenvector of A corresponding to the eigenvalue 1 .
3 2   1  1
2.
5 1 1  4 
Ax  
       4x therefore x is an eigenvector of A corresponding to the eigenvalue 4 .
 1 3 1  4 
3.
 4 0 1 1   5
Ax   2 3 2  2   10   5x therefore x is an eigenvector of A corresponding to the eigenvalue 5 .
 1 0 4  1   5
4.
 2 1 1 1 0 
Ax   1 2 1 1  0   0x therefore x is an eigenvector of A corresponding to the eigenvalue 0 .
 1 1 2  1 0 
5.
(a)
det   I  A  
 1
2
4
    1   3   4  2    2  4  5     5    1 .
 3
The characteristic equation is    5   1  0 . The eigenvalues are   5 and   1 .
 4 4 
 1 1
The reduced row echelon form of 5I  A  
is 

 . The general solution of
0 0 
 2 2 
x 
 x2 
t 
t 
1
1
 5I  A x  0 is x1  t , x2  t . In vector form,  1      t   .
A basis for the eigenspace corresponding to   5 is {(1,1)} .
 2 4 
1 2 
The reduced row echelon form of 1I  A  
is 

 . The general solution of
 2 4 
0 0 
x 
 x2 
 2t   2 
 t .
 t   1
 1I  A x  0 is x1  2t , x2  t . In vector form,  1   
A basis for the eigenspace corresponding to   1 is {(2,1)} .
(b)
det   I  A  
 2
1
7
    2    2    7  1   2  3 .
 2
The characteristic equation is  2  3  0 . There are no real eigenvalues.
1
5.1 Eigenvalues and Eigenvectors
(c)
det   I  A  
 1
0
0
2
    1 .
 1
The characteristic equation is    1  0 . The eigenvalue is   1 .
2
0 0 
The matrix I  A  
 is already in reduced row echelon form. The general solution of
0 0 
x 
 x2 
s 
t
 1
0 
0 
 1
 I  A x  0 is x1  s , x2  t . In vector form,  1      s    t   .
A basis for the eigenspace corresponding to   1 is {(1,0),(0,1)} .
(d)
det   I  A  
 1
0
2
2
    1 .
 1
The characteristic equation is    1  0 . The eigenvalue is   1 .
2
0 2 
0 1
The reduced row echelon form of I  A  
is 

 . The general solution of  I  A  x  0 is
0 0 
0 0 
 x   t   1
x1  t , x2  0 . In vector form,  1      t   .
 x2   0   0 
A basis for the eigenspace corresponding to   1 is {(1,0)} .
6.
(a)
det   I  A  
 2
1
1
2
2
    2    1   2  4  3     1   3  .
 2
The characteristic equation is    1   3  0 . The eigenvalues are   1 and   3 .
 1 1
1 1 
The reduced row echelon form of I  A  
is 

 . The general solution of
 1 1
0 0 
x 
 x2 
 t   1
 t .
 t   1
 I  A x  0 is x1  t , x2  t . In vector form,  1   
A basis for the eigenspace corresponding to   1 is {(1,1)} .
 1 1
 1 1
The reduced row echelon form of 3I  A  
is 

 . The general solution of
 1 1
0 0 
x 
 x2 
t 
t 
1
1
3I  A x  0 is x1  t , x2  t . In vector form,  1      t   .
A basis for the eigenspace corresponding to   3 is {(1,1)} .
(b)
det   I  A  
 2
0
3
2
   2 .
 2
The characteristic equation is    2   0 . The eigenvalue is   2 .
2
2
5.1 Eigenvalues and Eigenvectors
0 3
0 1
The reduced row echelon form of 2 I  A  
is 

 . The general solution of
0 0 
0 0 
x 
 x2 
 t
0 
 1
0 
 2I  A x  0 is x1  t , x2  0 . In vector form,  1      t   .
A basis for the eigenspace corresponding to   2 is {(1,0)} .
(c)
det   I  A  
 2
0
0
2
   2 .
 2
The characteristic equation is    2   0 . The eigenvalue is   2 .
2
0 0 
The matrix 2 I  A  
 is already in reduced row echelon form.
0 0 
The general solution of  2I  A x  0 is x1  s , x2  t . In vector form,
 x1   s 
 1 0 
 x      s    t   . A basis for the eigenspace corresponding to   2 is {(1,0),(0,1)} .
0   1
 2  t
(d)
det   I  A  
 1
2
2
    1   1   2  2    2  3 .
 1
The characteristic equation is  2  3  0 . There are no real eigenvalues.
7.
Cofactor expansion along the second column yields det   I  A  
    1
 4
2
 4
2
2
0
1
 1 0
0
 1
1
    1    4    1   1 2       1  2  5  6
 1


=    1   2    3 . The characteristic equation is    1   2    3  0 .
The eigenvalues are   1 ,   2 , and   3 .
 3 0 1
1 0 0 


The reduced row echelon form of I  A   2 0 0  is 0 0 1  . The general solution of
 2 0 0 
0 0 0 
 x1  0  0 
 I  A x  0 is x1  0 , x2  t , x3  0 . In vector form,  x2    t   t 1  . A basis for the eigenspace
 x3  0  0 
corresponding to   1 is {(0,1,0)} .
3
5.1 Eigenvalues and Eigenvectors
1
1 0
 2 0 1
2




The reduced row echelon form of 2 I  A   2 1 0  is 0 1 1 . The general solution of
0 0 0 
 2 0 1
 x1    12 t    12 
 2I  A x  0 is x1   12 t , x2  t , x3  t . In vector form,  x2    t   t  1 . A basis for the
 x3   t   1
eigenspace corresponding to   2 is {(1,2,2)} (scaled by a factor of 2 for convenience).
1
 1 0 1
1 0



The reduced row echelon form of 3I  A   2 2 0  is 0 1 1 . The general solution of
 2 0 2 
0 0 0 
 x1   t   1
3I  A x  0 is x1  t , x2  t , x3  t . In vector form,  x2    t   t  1 . A basis for the eigenspace
 x3   t   1
corresponding to   3 is {(1,1,1)} .
8.
 1 0
Cofactor expansion along the second column yields det   I  A   0

2

 1
2

2
0
0  4

2
     1   4    2  2      2  5   2    5  .
 4
The characteristic equation is  2    5  0 . The eigenvalues are   0 and   5 .
 1 0 2 
 1 0 2 


The reduced row echelon form of 0 I  A   0 0 0  is 0 0
0  . The general solution of
 2 0 4 
0 0
0 
 x1  2t 
0  2 




 0I  A x  0 is x1  2t , x2  s , x3  t . In vector form,  x2    s   s 1   t 0  . A basis for the
 x3   t 
0   1 
eigenspace corresponding to   0 is {(0,1,0),(2,0,1)} .
 1 0 12 
4 0 2


The reduced row echelon form of 5I  A   0 5 0  is 0 1 0  . The general solution of
0 0 0 
 2 0 1 
 x1    12 t    12 
 5I  A x  0 is x1   12 t , x2  0 , x3  t . In vector form,  x2    0   t  0  . A basis for the
 x3   t   1
eigenspace corresponding to   5 is {(1,0,2)} (scaled by a factor of 2 for convenience).
4
5.1 Eigenvalues and Eigenvectors
9.
Cofactor expansion along the second row yields det   I  A  
   2
 6
1
 6
5
3
8
 2
0
0
 3
0
1

8
    2     6    3  8  1     2   2  3  10
 3

=    2    2    5 . The characteristic equation is    2     5  0 .
2
The eigenvalues are   2 and   5 .
 8 3 8 
 1 0 1


The reduced row echelon form of 2 I  A   0 0 0  is 0 1 0  . The general solution of
 1 0 1
0 0 0 
 x1   t   1 
 2I  A x  0 is x1  t , x2  0 , x3  t . In vector form,  x2   0   t 0  . A basis for the eigenspace
 x3   t   1 
corresponding to   2 is {(1,0,1)} .
 1 3 8 
 1 0 8 


The reduced row echelon form of 5I  A   0 7 0  is 0 1 0  . The general solution of
 1 0 8 
0 0 0 
 x1  8t   8
 5I  A x  0 is x1  8t , x2  0 , x3  t . In vector form,  x2    0   t 0  . A basis for the eigenspace
 x3   t   1
corresponding to   5 is {(8,0,1)} .
10.
 1 1
Cofactor expansion along the first row yields det   I  A   1  1
1 1 

 1
1 1
1 
  1
  1
    2  1     1  1   
1 
1 
1 1


     1   1     1     1     1     1  1  1     1  2    2     1   1   2  .
We conclude that the eigenvalues are 1 and 2 .
 1 1 1
1 1 1 


The reduced row echelon form of 1I  A   1 1 1 is 0 0 0  . The general solution of
 1 1 1
0 0 0 
 x1   s  t 
 1  1




 1I  A x  0 is x1  s  t , x2  s , x3  t . In vector form,  x2    s   s  1  t  0  .
 x3  
 0   1
t 
A basis for the eigenspace corresponding to   1 is {(1,1,0),(1,0,1)} .
5.1 Eigenvalues and Eigenvectors
6
 2 1 1
 1 0 1


The reduced row echelon form of 2 I  A   1 2 1 is 0 1 1 .
 1 1 2 
0 0 0 
 x1  t  1
The general solution of  2I  A x  0 is x1  t , x2  t , x3  t . In vector form,  x2   t   t 1 .
 x3  t  1
A basis for the eigenspace corresponding to   2 is {(1,1,1)} .
11.
Cofactor expansion along the second column yields det   I  A  
    3
 4
1
 4
0
1
0
1
 3
0
0
 2


1
3
    3    4    2   1 1     3   2  6  9     3  .
 2
The characteristic equation is    3  0 . The eigenvalue is   3 .
3
 1 0 1
 1 0 1


The reduced row echelon form of 3I  A   0 0 0  is 0 0 0  .
 1 0 1
0 0 0 
The general solution of  3I  A x  0 is x1  t , x2  s , x3  t . In vector form,
 x1   t 
0  1 
 x    s   s  1   t 0  . A basis for the eigenspace corresponding to   3 is {(0,1,0),(1,0,1)} .
 2  
   
 x3   t 
0   1 
 1
12.
We use the arrow technique to evaluate the determinant: det   I  A   3
6
3
3
  5 3
6
 4
    1   5   4   54  54   18    5  18    1  9    4    3  12  16 .
The characteristic equation is  3  12  16  0 .
Following the procedure described in Example 3, we determine that the only integer solutions of the
characteristic equation are 1,  2,  4,  8, and 16 . Successively substituting these into the characteristic
polynomial, we find det  2I  A  0 so that   2 must be a factor of the polynomial. Dividing   2
into  3  12  16 we obtain


det   I  A     2   2  2  8     2    2    4  .
We conclude that the eigenvalues are 2 and 4 .
5.1 Eigenvalues and Eigenvectors
 3 3 3 
 1 1 1


The reduced row echelon form of 2 I  A   3 3 3  is 0 0 0  . The general solution of
 6 6 6 
0 0 0 
 x1   s  t 
 1   1




 2I  A x  0 is x1  s  t , x2  s , x3  t . In vector form,  x2    s   s 1  t  0  .
 x3   t 
0   1
A basis for the eigenspace corresponding to   2 is {(1,1,0),(1,0,1)} .
 1 0  12 
 3 3 3


The reduced row echelon form of 4 I  A   3 9 3 is 0 1  12  . The general solution of
0 0
 6 6 0 
0 
 x1   12 t   12 
 4I  A x  0 is x1  12 t , x2  12 t , x3  t . In vector form,  x2    12 t   t  12  . A basis for the eigenspace
 x3   t  1 
corresponding to   4 is {(1,1,2)} (scaled by a factor of 2 for convenience).
13.
The matrix  I  A is lower triangular, therefore by Theorem 2.1.2 its determinant is the product of the
entries on the main diagonal. Therefore the characteristic equation is    3   7    1  0 .
14.
The matrix  I  A is upper triangular, therefore by Theorem 2.1.2 its determinant is the product of the
entries on the main diagonal. Therefore the characteristic equation is    9   1   3   7   0 .
15.
16.
 x  4 y  1 4  x 
 1 4
; the standard matrix for the operator T is A  
T  x, y   





.
 2 x  3 y   2 3  y 
 2 3
The following results were obtained in the solution of Exercise 5(a). These statements apply to the matrix
A , therefore they also apply to the associated operator T :


the eigenvalues are   5 and   1 ,
a basis for the eigenspace corresponding to   5 is {(1,1)} ,

a basis for the eigenspace corresponding to   1 is {(2,1)} .
 2 x  y  z   2 1 1  x 
 2 1 1






T  x, y, z    x  z    1 0 1  y  ; the standard matrix for T is A   1 0 1 .
  x  y  2 z   1 1 2   z 
 1 1 2 
 2
We use the arrow technique to evaluate the determinant: det   I  A   1
1
1
1

1
1   2
    2      2   1  1       2      2    3  4 2  5  2 .
The characteristic equation is  3  4 2  5  2  0 .
Following the procedure described in Example 3, we determine that the only possible integer solutions of
the characteristic equation are 1 and 2 .
7
5.1 Eigenvalues and Eigenvectors
Since det  I  A  0 ,  1 must be a factor of the characteristic polynomial. Dividing  1 into
 3  4 2  5  2 leads to det   I  A     1   2  3  2      1   1   2  .
We conclude that the eigenvalues are 1 and 2 .
 1 1 1
 1 1 1


The reduced row echelon form of 1I  A   1 1 1 is 0 0 0  . The general solution of
 1 1 1
0 0 0 
 x1   s  t 
1  1 




1I  A x  0 is x1  s  t , x2  s , x3  t . In vector form,  x2    s   s 1   t 0  .
 x3   t 
0  1 
A basis for the eigenspace corresponding to   1 is {(1,1,0),(1,0,1)} .
1 1
 0
1 0 1 


The reduced row echelon form of 2 I  A   1 2 1 is 0 1 1  . The general solution of
 1 1 0 
0 0 0 
 x1   t   1
 2I  A x  0 is x1  t , x2  t , x3  t . In vector form,  x2    t   t  1 .
 x3   t   1
A basis for the eigenspace corresponding to   2 is {(1, 1,1)} .
17.
(a)
The transformation D 2 maps any function f  f  x  in C   ,   into its second derivative, i.e.
D2  f   f   x  . From calculus, we have
d d
d
f  x   g  x     f   x   g  x    f   x   g  x   D 2  f   D 2  g  and

dx dx
dx
d
d
d
D2  kf  
 kf  x   dx  kf   x   kf   x   kD2  f  . We conclude that D2 is linear.
dx dx
D2  f  g  
(b)
Denote f  sin  x and g  cos  x . We have



  



 

d d
d
sin  x 
 cos  x     sin  x   sin  x   f and
dx dx
dx
d d
d
D2  g  
cos  x 
  sin  x   
 cos  x   cos  x  g .
dx dx
dx
D2  f  


It follows that f  sin  x and g  cos  x are eigenvectors of D 2 ;    is the eigenvalue
associated with both of these eigenvectors.
18.
Denote f  sinh  x and g  cosh  x . We have


  cosh  x       sinh  x    sinh  x   f and


  sinh  x       cosh  x    cosh  x  g .
d d
d
sinh  x 
dx dx
dx
d d
d
D2  g  
cosh  x 
dx dx
dx
D2  f  
8
5.1 Eigenvalues and Eigenvectors
9
It follows that f  sinh  x and g  cosh  x are eigenvectors of D 2 ;    is the eigenvalue associated
with both of these eigenvectors.
19.
(a)
The reflection of any vector on the line y  x is the same vector: an eigenvalue   1 corresponds to
the eigenspace span{(1,1)} .
The reflection of any vector perpendicular to the line y  x (i.e., on the line y   x ) is the negative of
the original vector: an eigenvalue   1 corresponds to the eigenspace span{(1,1)} .
(b)
The projection of any vector on the x -axis is the same vector: an eigenvalue   1 corresponds to the
eigenspace span{(1,0)} .
The projection of any vector perpendicular to the x -axis (i.e., on the y -axis) is the zero vector: an
eigenvalue   0 corresponds to the eigenspace span{(0,1)} .
(c)
The result of the rotation through 90 of a nonzero vector is never a scalar multiple of the original
vector. Consequently, this operator has no real eigenvalues.
(d)
The result of the contraction of any vector v is a scalar multiple kv therefore the only eigenvalue is
  k and the corresponding eigenspace is the entire space R 2 .
20.
(e)
The result of the shear applied to any vector on the x -axis is the same vector whereas the result of the
shear applied to a nonzero vector in any other direction is not a scalar multiple of the original vector.
The only eigenvalue is   1 and the corresponding eigenspace is span{(1,0)} .
(a)
The reflection of any vector on the y -axis is the same vector: an eigenvalue   1 corresponds to the
eigenspace span{(0,1)} .
The reflection of any vector perpendicular to the y -axis (i.e., on the x -axis) is the negative of the
original vector: an eigenvalue   1 corresponds to the eigenspace span{(1,0)} .
(b)
The result of the rotation through 180 of any vector is the negative of the original vector: the only
eigenvalue is   1 and the corresponding eigenspace is the entire space R 2 .
(c)
The result of the dilation of any vector v is a scalar multiple kv therefore the only eigenvalue is
  k and the corresponding eigenspace is the entire space R 2 .
(d)
The result of the expansion applied to any vector on the x -axis is the same vector: an eigenvalue
  1 corresponds to the eigenspace span{(1,0)} .
The result of the expansion applied to any vector v on the y -axis is the scalar multiple kv : an
eigenvalue   k corresponds to the eigenspace span{(0,1)} .
(e)
The result of the shear applied to any vector on the y -axis is the same vector whereas the result of the
shear applied to a nonzero vector in any other direction is not a scalar multiple of the original vector.
The only eigenvalue is   1 and the corresponding eigenspace is span{(0,1)} .
21.
(a)
The reflection of any vector on the xy -plane is the same vector: an eigenvalue   1 corresponds to
the eigenspace span{(1,0,0),(0,1,0)} .
5.1 Eigenvalues and Eigenvectors
10
The reflection of any vector perpendicular to the xy -plane (i.e., on the z -axis) is the negative of the
original vector: an eigenvalue   1 corresponds to the eigenspace span{(0,0,1)} .
(b)
The projection of any vector on the xz -plane is the same vector: an eigenvalue   1 corresponds to
the eigenspace span{(1,0,0),(0,0,1)} .
The projection of any vector perpendicular to the xz -plane (i.e., on the y -axis) is the zero vector: an
eigenvalue   0 corresponds to the eigenspace span{(0,1,0)} .
(c)
The result of the rotation applied to any vector on the x -axis is the same vector whereas the result of
the rotation applied to a nonzero vector in any other direction is not a scalar multiple of the original
vector. The only eigenvalue is   1 and the corresponding eigenspace is span{(1,0,0)} .
(d)
The result of the contraction of any vector v is a scalar multiple kv therefore the only eigenvalue is
  k and the corresponding eigenspace is the entire space R 3 .
22.
(a)
The reflection of any vector on the xz -plane is the same vector: an eigenvalue   1 corresponds to
the eigenspace span{(1,0,0),(0,0,1)} .
The reflection of any vector perpendicular to the xz -plane (i.e., on the y -axis) is the negative of the
original vector: an eigenvalue   1 corresponds to the eigenspace span{(0,1,0)} .
(b)
The projection of any vector on the yz -plane is the same vector: an eigenvalue   1 corresponds to
the eigenspace span{(0,1,0),(0,0,1)} .
The projection of any vector perpendicular to the yz -plane (i.e., on the x -axis) is the zero vector: an
eigenvalue   0 corresponds to the eigenspace span{(1,0,0)} .
(c)
The result of the rotation applied to any vector on the y -axis is the same vector: an eigenvalue   1
corresponds to the eigenspace span{(0,1,0)} .
The result of the rotation applied to any vector perpendicular to the y -axis (i.e., on the xz -plane) is
the negative of the original vector: an eigenvalue   1 corresponds to the eigenspace
span{(1,0,0),(0,0,1)} .
(d)
The result of the dilation of any vector v is a scalar multiple kv therefore the only eigenvalue is
  k and the corresponding eigenspace is the entire space R 3 .
23.
A line through the origin in the direction of x  0 is invariant under A if and only if x is an eigenvector of
A.
(a)
det   I  A  
 4
2
1
    4    1  1 2    2  5  6     2    3  .
 1
The characteristic equation is    2    3  0 . The eigenvalues are   2 and   3 .
 1  12 
 2 1
The reduced row echelon form of 2 I  A  
is 
 . The general solution of

0
 2 1
0
 2I  A x  0 is x  12 t , y  t . Therefore y  2 x is an equation of the corresponding invariant line.
5.1 Eigenvalues and Eigenvectors
11
 1 1 
 1 1
The reduced row echelon form of 3I  A  
is 

 . The general solution of
 2 2 
0 0 
3I  A x  0 is x1  t , x2  t . Therefore y  x is an equation of the corresponding invariant line.
(b)
det   I  A  
 1
  2   11   2  1 .
1 
There are no real eigenvalues and no invariant lines.
24.
25.
Since p     det   I  A , it follows that p  0   det   A   1 det  A .
n
(a)
n  3 and p  0   5 therefore det  A  5 .
(b)
n  4 and p  0   7 therefore det  A   7 .
(a)
Since the degree of p    is 6 , A is a 6  6 matrix (see Exercise 37).
(b)
p  0   0 , therefore 0 is not an eigenvalue of A . From parts (a) and (r) of Theorem 5.1.5, A is
invertible.
(c)
27.
A has three eigenspaces since it has three distinct eigenvalues, each corresponding to an eigenspace.
Substituting the given eigenvectors x and the corresponding eigenvalues  into Ax   x yields
 1  1  1
 1
 1  1
 1
 1 0 














A  1  1  1   1 , A  1  1  1   1 , and A  1  0  1  0  .
 1  1  1
0 
0   0 
 0 
 0  0 
 1 1 1  1 1 0 

 

We can combine these three equations into a single equation A  1 1 1   1 1 0  .
 1 0 0  1 0 0

 

 1 1 1
Since the matrix  1 1 1 is invertible, we can multiply both sides on the right by its inverse,
 1 0 0 
0 1
0 1   12  12
1
0
 1 1 0  0
1
  1

1



1
1
1
0     2  2 1 .
0  , resulting in A   1 1 0   2
2
2
2
 12  12 1
 1 0 0   12  12 1  0
0 1
(Note that this exercise could also be solved by assigning nine unknown values to the elements of
a b
A   d e
 g h
Ax   x .)
c
f  , then solving the system of nine equations in nine unknowns resulting from the equation
i 
5.1 Eigenvalues and Eigenvectors
28.
12
a12 
  a11 a12
, we have det   I  A  

a22 
a21   a22
a
Denoting A   11
 a21
 (  a11 )(  a22 )  a12 a21   2  (a11  a22 )  a11a22  a12 a22 .
tr( A )
31.
det( A )
a
It follows from Exercise 28 that if the characteristic polynomial A   11
 a21
a12 
is p      2  c1  c2 then

a22 
c1  tr  A  a11  a22 and c2  det  A  a11a22  a12 a21 . Therefore
p  A  A2  c1 A  c2 I
a
  11
 a21
a12   a11
a22   a21
a12 
a
  a11  a22   11

a22 
 a21
 a2  a a
  11 12 21
 a21a11  a22 a21
a12 
1 0 
  a11a22  a12 a21  


a22 
0 1 
a11a12  a12 a22   a112  a22 a11

2
a21a12  a22
  a11a21  a22 a21
a a  a a
  11 22 12 21
0

a11a12  a22 a12 

2
a11a22  a22

0

a11a22  a12 a21 
0 0 

.
0 0 
33.
By Theorem 5.1.5(r), it follows from A being invertible that A cannot have a zero eigenvalue.
Multiplying both sides of the equation Ax   x by A1 on the left and applying Theorem 1.4.1 yields
A1 Ax   A1x . Since A1 A  I , dividing both sides of the equation by  we obtain 1 x  A1x . This
shows that 1 is an eigenvalue of A1 associated with an eigenvector x .
34.
Subtracting sIx  sx from both sides of the equation Ax   x and applying Theorem 1.4.1 yields
 A  sI  x     s  x . This shows that   s is an eigenvalue of A  sI associated with the eigenvector x .
35.
Multiplying both sides of the equation Ax   x by the scalar s yields s  Ax   s   x  . By Theorem 1.4.1,
the equation can be rewritten as  sA x   s  x . This shows that s is an eigenvalue of sA associated with
the eigenvector x .
36.
det   I  A  
 2
2
3
  3 2   3  6 2  11  6 ;
2   5
2
4
The only possibilities for integer solutions of the characteristic equation are 1,  2,  3,  6 .
Since det 1I  A  0 , we divide  1 into  3  6 2  11  6 to obtain


det   I  A     1  2  5  6     1   2    3 .
5.1 Eigenvalues and Eigenvectors
13
 3 2 3 
 1 0 1


The reduced row echelon form of I  A   2 2 2  is 0 1 0  .
 4 2 4 
0 0 0 
The general solution of  I  A  x  0 is x1  t , x2  0 , x3  t .
A basis for the eigenspace of the matrix A corresponding to   1 is 1,0,1 .
 1  12 0 
 4 2 3 


0 1 .
The reduced row echelon form of 2 I  A   2 1 2  is 0
0
 4 2 3 
0 0 
1
The general solution of  2I  A x  0 is x1  t , x2  t , x3  0 .
2
A basis for the eigenspace of the matrix A corresponding to   2 is {(1,2,0)} .
 5 2 3 
1 0 1


The reduced row echelon form of 3I  A   2 0 2  is 0 1 1 .
 4 2 2 
0 0 0 
The general solution of  3I  A x  0 is x1  t , x2  t , x3  t .
A basis for the eigenspace of the matrix A corresponding to   3 is {(1,1,1)} .
(a)
(b)
(c)
From the result of Exercise 33, the matrix A1 has

eigenvalue 1 with a basis for the corresponding eigenspace {(1,0,1)},

eigenvalue 12 with a basis for the corresponding eigenspace {(1,2,0)} ,

eigenvalue 13 with a basis for the corresponding eigenspace {(1,1,1)} .
From the result of Exercise 34, the matrix A  3I has

eigenvalue 2 with a basis for the corresponding eigenspace {(1,0,1)},

eigenvalue 1 with a basis for the corresponding eigenspace {(1,2,0)} ,

eigenvalue 0 with a basis for the corresponding eigenspace {(1,1,1)} .
From the result of Exercise 34, the matrix A  2I has

eigenvalue 3 with a basis for the corresponding eigenspace {(1,0,1)},

eigenvalue 4 with a basis for the corresponding eigenspace {(1,2,0)} ,

eigenvalue 5 with a basis for the corresponding eigenspace {(1,1,1)} .
True-False Exercises
(a)
(b)
False. The vector x must be nonzero (without that requirement, Ax   x holds true for all n  n matrices
A and all values  by taking x  0 ).
False. If  is an eigenvalue of A then   I  A x  0 must have nontrivial solutions.
5.1 Eigenvalues and Eigenvectors
14
True. Since p  0   1  0 , zero is not an eigenvalue of A . By Theorem 5.1.5(r), we conclude that A is
(c)
invertible.
(d)
False. Every eigenspace must include the zero vector, which is not an eigenvector.
(e)
False. E.g., the only eigenvalue of A  2I is 2 . However, the reduced row echelon form of A is I , whose
only eigenvalue is 1.
(f)
False. By Theorem 5.1.5(h), the set of columns of A must be linearly dependent.
5.2 Diagonalization
1.
1 1
1 0
 1 does not equal
 2 therefore, by Table 1 in Section 5.2, A and B are not
3 2
3 2
similar matrices.
2.
4 1
4 1
 18 does not equal
 14 therefore, by Table 1 in Section 5.2, A and B are not
2 4
2 4
similar matrices.
3.
1 2 3
1 2 0
1 2
0 1 2  111  1 does not equal 12 1 0  0  0  1 1
 0 therefore, by Table 1 in
1
2
0 0 1
0 0 1
Section 5.2, A and B are not similar matrices.
4.
1 0 1 
 1 0 1


The reduced row echelon forms of the matrices A and B are 0 0 0  and 0 1 1 ,
0 0 0 
0 0 0 
respectively. Since rank  A   1 does not equal rank  B   2 , by Table 1 in Section 5.2, A and B
are not similar matrices.
5.
det   I  A  
 1
6
0
    1   1 therefore A has eigenvalues 1 and 1 .
 1
 1  13 
The reduced row echelon form of 1I  A is 
 so that the eigenspace corresponding to
0
0
x 
1 
1
1  1 consists of vectors  1  where x1  t , x2  t . A vector p1    forms a basis for this
3
3 
 x2 
eigenspace.
1 0 
The reduced row echelon form of 1I  A is 
 so that the eigenspace corresponding to
0 0 
5.2 Diagonalization
15
x 
0 
2  1 consists of vectors  1  where x1  0 , x2  t . A vector p2    forms a basis for this
1 
 x2 
eigenspace.
1 0 
We form a matrix P using the column vectors p1 and p 2 : P  
 . (Note that this answer is
3 1 
not unique. Any nonzero multiples of these columns would also form a valid matrix P .
Furthermore, the two columns can be interchanged.)
 1 0  1 0
Calculating P 1  111 0 3 

 and performing matrix multiplications we check
 3 1  3 1
 1 0   1 0  1 0   1 0  1 0 
that P 1 AP  
.




 3 1 6 1 3 1  0 1  0 2 
6.
det   I  A  
  14
20
12
  2  3  2     1   2  therefore A has eigenvalues 1 and 2 .
  17
 1  45 
The reduced row echelon form of 1I  A is 
 so that the eigenspace corresponding to
0
0
x 
4
4
1  1 consists of vectors  1  where x1  t , x2  t . A vector p1    forms a basis for this
5
5
 x2 
eigenspace.
 1  34 
The reduced row echelon form of 2I  A is 
 so that the eigenspace corresponding to
0
0
x 
3
2  2 consists of vectors  1  where x1  34 t , x2  t . A vector p2    forms a basis for this
4
 x2 
eigenspace.
4 3
We form a matrix P using the column vectors p1 and p 2 : P  
 . (Note that this answer is
5 4
not unique. Any nonzero multiples of these columns would also form a valid matrix P .
Furthermore, the two columns can be interchanged.)
 4 3
Calculating P 1  
 and performing matrix multiplications we check that
 5 4 
 4 3  14 12   4 3  1 0  1 0 
P 1 AP  
.




 5 4   20 17   5 4  0 2   0 2 
7.
det   I  A  
 2
0
0
0
2
2
 3
0     2    3  thus A has eigenvalues 2 and 3 (with
0
 3
algebraic multiplicity 2 ).
5.2 Diagonalization
16
0 1 0 
The reduced row echelon form of 2I  A is 0 0 1  so that the eigenspace corresponding to
0 0 0 
 x1 
 1


1  2 contains vectors  x2  where x1  t , x2  0 , x3  0 . A vector p1  0  forms a basis for
 0 
 x3 
this eigenspace.
1 0 2 
The reduced row echelon form of 3I  A is 0 0 0  so that the eigenspace corresponding to
0 0 0 
 x1 
2  3  3 contains vectors  x2  where x1  2t , x2  s , x3  t . We can write
 x3 
 x1   2t 
0   2 
0 
 2 
 x    s   s  1   t  0  therefore vectors p   1  and p   0  form a basis for this
2
3
 2  
   
 
 
 x3   t 
0   1
 0 
 1
eigenspace. Note that the geometric multiplicity of this eigenvalue matches its algebraic
multiplicity.
 1 0 2 
We form a matrix P using the column vectors p1 , p 2 , and p3 : P  0 1 0  . (Note that this
0 0
1
answer is not unique. Any nonzero multiples of these columns would also form a valid matrix P .
Furthermore, the columns can be interchanged.)
To invert the matrix P , we can employ the procedure introduced in Section 1.5: since the reduced
 1 0 2 1 0 0  1 0 0 1 0 2 
row echelon form of the matrix 0 1 0 0 1 0  is 0 1 0 0 1 0  , we have
0 0
1 0 0 1 0 0 1 0 0 1 
1 0 2 
P  0 1 0  .
0 0 1 
1
1 0 2  2 0 2   1 0 2  2 0 0   1
We check that P AP  0 1 0  0 3 0  0 1 0   0 3 0    0
0 0 1  0 0 3  0 0
1 0 0 3   0
1
0
2
0
0
0  .
3 
5.2 Diagonalization
8.
Cofactor expansion along the second column yields det   I  A  
   1
 1
1
 1
0
0
17
0
0
  1 1 
1   1
1
2
    1    1  1     1   2   thus A has eigenvalues 0 , 1 and 2 .


 1
1 0 0 
The reduced row echelon form of 0I  A is 0 1 1  so that the eigenspace corresponding to
0 0 0 
 x1 
 0


1  0 contains vectors  x2  where x1  0 , x2  t , x3  t . A vector p1   1 forms a basis for
 1
 x3 
this eigenspace.
0 1 0 
The reduced row echelon form of 1I  A is 0 0 1  so that the eigenspace corresponding to
0 0 0 
 x1 
1 


2  1 contains vectors  x2  where x1  t , x2  0 , x3  0 . A vector p 2   0  forms a basis for
 0 
 x3 
this eigenspace.
 1 0 0
The reduced row echelon form of 2I  A is 0 1 1 so that the eigenspace corresponding to
0 0 0 
 x1 
0 


3  2 contains vectors  x2  where x1  0 , x2  t , x3  t . A vector p3   1  forms a basis for
 1 
 x3 
this eigenspace.
 0 1 0
We form a matrix P using the column vectors p1 , p 2 , and p3 : P   1 0 1 . (Note that this
 1 0 1
answer is not unique. Any nonzero multiples of these columns would also form a valid matrix P .
Furthermore, the columns can be interchanged.)
5.2 Diagonalization
0  12

Calculating P 1   1 0
0 12


0  and performing matrix multiplications we check that
1
2
1
2
0  12 12  1 0 0   0 1 0  0 0 0   1


P 1 AP   1 0 0  0 1 1   1 0 1  0 1 0    0
1
1
0
0 1 1   1 0 1 0 0 2   0
2
2
9.
(a)
18
0
2
0
0
0  .
3 
Cofactor expansion along the second column yields
 4
det   I  A   2
1
0
1
  4 1
2
2
  3 2     3 
    3     4   1     3     5 


1   4
0
 4
therefore A has eigenvalues 3 (with algebraic multiplicity 2 ) and 5 .
(b)
1 0 1 
The reduced row echelon form of 3I  A is 0 0 0  , consequently rank  3I  A  1 .
0 0 0 
 1 0 1
The reduced row echelon form of 5I  A is 0 1 2  , consequently rank  5I  A  2 .
0 0
0 
(c)
10.
(a)
Based on part (b), the geometric multiplicities of the eigenvalues   3 and   5 are
3  1  2 and 3  2  1 , respectively. Since these are equal to the corresponding algebraic
multiplicities, by Theorem 5.2.4(b) A is diagonalizable.
det   I  A  
 3
0
0
0
0
2
 2
0     3    2  therefore A has eigenvalues 2 (with
1   2
algebraic multiplicity 2 ) and 3 .
(b)
1 0 0 
The reduced row echelon form of 2I  A is 0 1 0  , consequently rank  2I  A  2 .
0 0 0 
0 1 0 
The reduced row echelon form of 3I  A is 0 0 1  , consequently rank  3I  A  2 .
0 0 0 
(c)
Based on part (b) , the geometric multiplicity of the eigenvalue   3 is 3  2  1 , which is
equal to the algebraic multiplicity of this eigenvalue. However, the geometric multiplicity of
the eigenvalue   2 is 3  2  1 , which is less than the corresponding algebraic multiplicity
 2  , therefore by Theorem 5.2.4(b) A is not diagonalizable.
5.2 Diagonalization
11.
19
Cofactor expansion along the second row yields
det   I  A  
 1
3
3
4
2
4
2
 1
2
 4
0  3
   4
1   3
3
 3
1   3
  3  4    3   2  1     4     1   3   2 3   3  6 2  11  6 .
Following the procedure described in Example 3 of Section 5.1, we determine that the only
possibilities for integer solutions of the characteristic equation are 1 , 2 , 3 , and 6 .
Since det 1I  A  0 ,  1 must be a factor of the characteristic polynomial. Dividing  1 into
 3  6 2  11  6 leads to det   I  A     1   2  5  6      1   2    3 .
We conclude that the eigenvalues are 1 , 2 , and 3 - each of them has the algebraic multiplicity 1 .
 1 0 1
The reduced row echelon form of 1I  A is 0 1 1 so that the eigenspace corresponding to
0 0 0 
 x1 
1


1  1 contains vectors  x2  where x1  t , x2  t , x3  t . A vector p1  1 forms a basis for this
 x3 
1
eigenspace. This eigenvalue has geometric multiplicity 1 .
 1 0  23 


The reduced row echelon form of 2I  A is 0 1 1 so that the eigenspace corresponding to
0 0
0 
 x1 
2 


2  2 contains vectors  x2  where x1  23 t , x2  t , x3  t . A vector p 2   3  forms a basis for
 3 
 x3 
this eigenspace. This eigenvalue has geometric multiplicity 1 .
 1 0  14 


The reduced row echelon form of 3I  A is 0 1  34  so that the eigenspace corresponding to
0 0
0 
 x1 
1 


3
1
3  3 contains vectors  x2  where x1  4 t , x2  4 t , x3  t . A vector p3   3  forms a basis for
 4 
 x3 
this eigenspace. This eigenvalue has geometric multiplicity 1 .
Since for each eigenvalue the geometric multiplicity matches the algebraic multiplicity, by
Theorem 5.2.4(b) A is diagonalizable.
5.2 Diagonalization
20
1 2 1
We form a matrix P using the column vectors p1 , p 2 , and p3 : P  1 3 3 . (Note that this
1 3 4 
answer is not unique. Any nonzero multiples of these columns would also form a valid matrix P .
Furthermore, the columns can be interchanged.)
1 0
P AP   0 2
 0 0
1
0  1 0 0 
0   0 2 0  .
3  0 0 3 
  19
12.
Since det   I  A   25
17
9
6
2
  11
9   3  4 2  5  2     1    2  , A has
9
4
eigenvalues   1 with the algebraic multiplicity 2 and   2 with the algebraic multiplicity 1 .
 1 0 1
The reduced row echelon form of 1I  A is 0 1  43  so that   1 has the geometric
0 0
0 
multiplicity 1 .
 1 0  34 


The reduced row echelon form of 2I  A is 0 1  34  so that   2 has the geometric
0 0
0 
multiplicity 1 .
Since for   1 the geometric multiplicity and the algebraic multiplicity are not equal, we conclude
from Theorem 5.2.4(b) that A is not diagonalizable.

13.
0
0
det   I  A   0 
0   2    1 so the eigenvalues are   0 with the algebraic
3 0   1
multiplicity 2 and   1 with the algebraic multiplicity 1 .
 1 0 13 


The reduced row echelon form of 0I  A is 0 0 0  so that the eigenspace corresponding to
0 0 0 
 x1 
1  2  0 contains vectors  x2  where x1   13 t , x2  s , x3  t . We can write
 x3 
 x1    13 t 
0    13 
0 
 1
 x    s   s 1   t  0  therefore vectors p   1  and p   0  form a basis for this
1
2

 2 
   
 
 
 x3   t 
0   1
 0 
 3
eigenspace. This eigenvalue has the geometric multiplicity 2 .
5.2 Diagonalization
1 0 0 
The reduced row echelon form of 1I  A is 0 1 0  so that the eigenspace corresponding to
0 0 0 
 x1 
0 


3  1 contains vectors  x2  where x1  0 , x2  0 , x3  t . A vector p3  0  forms a basis for
 1 
 x3 
this eigenspace. This eigenvalue has the geometric multiplicity 1 .
Since for each eigenvalue the geometric multiplicity matches the algebraic multiplicity, by
Theorem 5.2.4(b) A is diagonalizable.
0 1 0 
We form a matrix P using the column vectors p1 , p 2 , and p3 : P   1 0 0  . (Note that this
0 3 1
answer is not unique.)
1 0
P AP   0 2
 0 0
1
14.
0  0 0 0 
0   0 0 0  .
3  0 0 1 
From Theorem 5.1.2, A has only one eigenvalue   5 with the algebraic multiplicity 3.
1 0 0 
The reduced row echelon form of 5I  A is 0 1 0  so that   5 has the geometric
0 0 0 
multiplicity 1 .
Since the geometric multiplicity and the algebraic multiplicity are not equal, we conclude from
Theorem 5.2.4(b) that A is not diagonalizable.
15.
16.
(a)
The degree of the characteristic polynomial of A is 3 therefore A is a 3  3 matrix.
All three eigenspaces (for   1 ,   3 , and   5 ) must have dimension 1 .
(b)
The degree of the characteristic polynomial of A is 6 therefore A is a 6  6 matrix.
The possible dimensions of the eigenspace corresponding to   0 are 1 or 2 .
The dimension of the eigenspace corresponding to   1 must be 1 .
The possible dimensions of the eigenspace corresponding to   2 are 1 , 2 , or 3 .
(a)
The degree of the characteristic polynomial of A is 5 therefore A is a 5  5 matrix.


In factored form,  3  2  5  6   3    6    1 .
The possible dimensions of the eigenspace corresponding to   0 are 1 , 2 , or 3 .
The dimension of the eigenspace corresponding to   6 must be 1 .
The dimension of the eigenspace corresponding to   1 must be 1 .
21
5.2 Diagonalization
(b)
22
The degree of the characteristic polynomial of A is 3 therefore A is a 3  3 matrix.
In factored form,  3  3 2  3  1     1 .
3
The possible dimensions of the eigenspace corresponding to   1 are 1 , 2 , or 3 .
17.
det   I  A  

3
     1   3 2    2    6     2    3  therefore A has
2   1
eigenvalues 2 and 3 , each with the algebraic multiplicity 1 .
 1  23 
The reduced row echelon form of 2I  A is 
 so that the eigenspace corresponding to
0
0
x 
 x2 
3
2 
  2 consists of vectors  1  where x1  23 t , x2  t . A vector p1    forms a basis for this
eigenspace.
1 1 
The reduced row echelon form of 3I  A is 
 so that the eigenspace corresponding to
0 0 
x 
 1
 2

  3 consists of vectors  1  where x1  t , x2  t . A vector p2    forms a basis for this
x
1

eigenspace.
 3 1
 1 1 1  1 1  15
1
1
We form a matrix P  
and
calculate
P




 31  1 2   2 3 5  2 3   2



  5
2 1
1
5
3
5

 so

2 0 
that P 1 AP  
  D.
0 3
10
 3 1 2
Therefore A10  PD10 P 1  


2 1  0
  15
10  
 3   25
0
1
5
3
5
  3 1 1,024
0   15

 


59,049    25
 2 1  0
1
5
3
5



 24,234 34,815
.

35,839 
 23,210
18.
From Theorem 5.1.2, A has eigenvalues 1 and 2 , each with the algebraic multiplicity 1 .
 1 1
The reduced row echelon form of 1I  A is 
 so that the eigenspace corresponding to   1
0 0 
x 
1
consists of vectors  1  where x1  t , x2  t . A vector p1    forms a basis for this eigenspace.
1
 x2 
1 0 
The reduced row echelon form of 2I  A is 
 so that the eigenspace corresponding to   2
0 0 
x 
0 
consists of vectors  1  where x1  0 , x2  t . A vector p2    forms a basis for this eigenspace.
1 
 x2 
5.2 Diagonalization
23
1 0 
 1 0
1 0 
We form a matrix P  
and calculate P 1  
so that P 1 AP  


  D.
0 2 
1 1 
 1 1
1 0  110
Therefore A10  PD10 P 1  

1 1   0
19.
0 
0   1 0  1


.
10  
2   1 1  1023 1024 
To invert the matrix P , we can employ the procedure introduced in Section 1.5: since the reduced
 1 1 1 1 0 0
 1 0 0 0 5 1


row echelon form of the matrix 0 0 1 0 1 0  is 0 1 0 1 4 1 , we have
 1 0 5 0 0 1
0 0 1 0
1 0 
0 5 1
P   1 4 1 .
0
1 0 
1
0 5 1  1 7 1  1 1 1  2 0 0 
We verify that P AP   1 4 1  0 1 0  0 0 1   0 1 0   D is a diagonal
0
1 0   0 15 2   1 0 5  0 0 1
matrix therefore P diagonalizes A .
1
11
0
0  0 5 1
 1 1 1 (2)




11
11 1
11
A  PD P  0 0 1  0
(1)
0   1 4 1
 1 0 5  0
0
111  0
1 0 
 1 1 1  2,048 0 0  0 5 1  1 10,237 2,047 
 0 0 1 
0 1 0   1 4 1   0
1
0 
 1 0 5 
0 0 1 0
1 0   0 10,245 2,048 
20.
0 1 0 
After calculating P  0 0 1 , we verify that
 1 1 4 
1
0 1 0   1 2 8  1 4 1  1 0 0 
P AP  0 0 1 0 1 0   1 0 0    0 1 0   D is a diagonal matrix therefore
 1 1 4  0 0 1 0
1 0   0 0 1
P diagonalizes A .
1
(a)
1000

 1 4 1  1
A1000  PD1000 P 1   1 0 0   0

0
1 0   0

0
 1
1000
0
0  0 1 0  1 0 0 

0  0 0 1  0 1 0 

11000   1 1 4  0 0 1 

5.2 Diagonalization
21.
(b)
1000
 1 4 1 (1)

A1000  PD 1000 P 1   1 0 0   0
0
1 0   0
(c)
2301
0
0  0 1 0   1 2 8 
 1 4 1 (1)




A2301  PD 2301 P 1   1 0 0   0
(1)2301
0  0 0 1  0 1 0 
0
1 0   0
0
12301   1 1 4  0 0 1
(d)
2301
0
0  0 1 0   1 2 8 
 1 4 1 (1)




2301
2301 1
2301
A
 PD
P   1 0 0  0
(1)
0  0 0 1  0 1 0 
0
1 0   0
0
12301   1 1 4  0 0 1
0
0  0 1 0   1 0 0 

1000
(1)
0  0 0 1  0 1 0 
0
11000   1 1 4  0 0 1 
Cofactor expansion along the first row yields det   I  A  
    3
 2
1
24
 3
1
0
1
0
 2
1
1
 3
1
1
1
1
    3    2    3  1     3 
 3
0  3


    3  2  5  6  1  1     3  2  5  4     1   3   4  ,
therefore A has eigenvalues 1 , 3 , and 4 , each with the algebraic multiplicity 1 .
 1 0 1
The reduced row echelon form of 1I  A is 0 1 2  so that the eigenspace corresponding to
0 0
0 
 x1 
 1


1  1 contains vectors  x2  where x1  t , x2  2t , x3  t . A vector p1  2  forms a basis for
 1
 x3 
this eigenspace.
1 0 1 
The reduced row echelon form of 3I  A is 0 1 0  so that the eigenspace corresponding to
0 0 0 
 x1 
 1


2  3 contains vectors  x2  where x1  t , x2  0 , x3  t . A vector p 2   0  forms a basis for
 x3 
 1
this eigenspace.
 1 0 1
The reduced row echelon form of 4I  A is 0 1 1 so that the eigenspace corresponding to
0 0 0 
5.2 Diagonalization
25
 x1 
 1


3  4 contains vectors  x2  where x1  t , x2  t , x3  t . A vector p3   1 forms a basis for
 x3 
 1
this eigenspace.
 1 1 1
We form a matrix P  2 0 1 and find its inverse using the procedure introduced in
 1 1 1
 1 1 1 1 0 0 
Section 1.5. Since the reduced row echelon form of the matrix 2 0 1 0 1 0  is
 1 1 1 0 0 1
1
1 0 0
6

1
0 1 0  2
1
0 0 1
3
1
3
0

1
3
1
6
1
2
1
3

 61

 1
1
 , we have P    2

 13
1
3
0
 13
1
6
1
2
1
3


.

 1 1 1  1 0 0   61

We conclude that An  PD n P 1  2 0 1 0 3n 0    12
 1 1 1 0 0 4 n   13
 1
22.
1
3
0
 13
1
6
1
2
1
3


.

1
1
  1 1   3  3 2   2    3  , A has eigenvalues   3 with the
1   1
Since det   I  A   1
1
algebraic multiplicity 1 and   0 with the algebraic multiplicity 2.
The geometric multiplicity of 1  3 must be 1 .
1 1 1 
The reduced row echelon form of 0I  A is 0 0 0  so that the geometric multiplicity of
0 0 0 
2  3  0 is 2 .
Since for each eigenvalue the geometric multiplicity matches the algebraic multiplicity, by
Theorem 5.2.4(b) A is diagonalizable, i.e., there exists an invertible matrix P such that
1
P AP  D   0
 0
1
0
2
0
0  3 0 0 
0   0 0 0   B.
3  0 0 0 
Consequently, the matrices A and B are similar.
23.
By inspection, both A and B have rank 1 (both matrices are in reduced row echelon form).
a b 
1 0   a b   a b 
 a b  0 1  0 a 
Let P  
. Then AP  
and PB  








.
c d 
0 0   c d   0 0 
 c d  0 0  0 c 
Setting AP  PB requires that a  0 , b  a , and c  0 .
5.2 Diagonalization
26
0 0 
0 0 
For any value d , the matrix P  
satisfies the equality AP  PB . However, P  


0 d 
0 d 
has zero determinant therefore it is not invertible so that the similarity condition B  P 1 AP cannot
be met.
24.
By inspection, both A and B have only one eigenvalue,   1 .
a b 
1 1  a b   a  c b  d 
a b 
Let P  
. Then AP  
and PB  






.
d 
c d 
0 1  c d   c
c d 
Setting AP  PB requires that a  c  a and b  d  b , consequently c  d  0 .
a b
a b
For any a and b , the matrix P  
satisfies the equality AP  PB . However, P  


0 0
0 0
has zero determinant therefore it is not invertible so that the similarity condition B  P 1 AP cannot
be met.
25.
Since there exist matrices P and Q such that B  P 1 AP and C  Q1 BQ , we can write


C  Q1 P 1 AP Q   PQ  A  PQ  . Consequently, A is similar to C .
26.
27.
1
(a)
Any square matrix A is similar to itself since A  I 1 AI .
(b)
If Onn  P 1 AP then A  POnn P 1  Onn .
(c)
Table 1 lists invertibility as one of the similarity invariants. Consequently, a nonsingular
matrix cannot be similar to a singular matrix.
(a)
The dimension of the eigenspace must be at least 1 , but cannot exceed the algebraic
multiplicity of the corresponding eigenvalue. Since the algebraic multiplicities of the
eigenvalues 1,3 , and 4 are 1 , 2 , and 3 , respectively, we conclude that
(b)
(c)

The dimension of the eigenspace corresponding to   1 must be 1 .

The possible dimensions of the eigenspace corresponding to   3 are 1 or 2 .

The possible dimensions of the eigenspace corresponding to   4 are 1 , 2 , or 3 .
If A is diagonalizable then by Theorem 5.2.4(b) for each eigenvalue the dimension of the
eigenspace must be equal to the algebraic multiplicity. Therefore

The dimension of the eigenspace corresponding to   1 must be 1 .

The dimension of the eigenspace corresponding to   3 must be 2 .

The dimension of the eigenspace corresponding to   4 must be 3 .
If the dimension of the eigenspace were smaller than 3 then by Theorem 4.6.2(a), a set of
three vectors from that eigenspace would have to be linearly dependent. Consequently, for the
set of the three vectors to be linearly independent, the eigenspace containing the set must be
5.2 Diagonalization
27
of dimension at least 3 . This is only possible for the eigenspace corresponding to the
eigenvalue   4 .
29.
 b
Using the result obtained in Exercise 30 of Section 5.1, we can take P  
 a  1
1  12  a  d  

30.
b 
where
a  2 
 a  d   4bc  and 2  12  a  d    a  d   4bc  .
2
2



2 x  x  2 1  x1 
2 1
T  x1 , x2    1 2   
; the standard matrix for the operator T is A  



.
 1 1
 x1  x2   1 1  x2 
det   I  A  
 2
1
1
    2    1  1 1   2  3  3
 1
The discriminant of this quadratic polynomial, b2  4ac   3  4 1 3  3 is negative,
2
therefore the characteristic polynomial has no real zeros. Consequently, A has no real eigenvalues
and it cannot be diagonalized.
31.
  x   0 1  x1 
 0 1
T  x1 , x2    2   
; the standard matrix for the operator T is A  



.
 1 0 
  x1   1 0   x2 
det   I  A  
 1
  2  1     1   1 ; thus, A has eigenvalues 1 and 1 , both with
1 
algebraic multiplicities 1 .
1 1 
The reduced row echelon form of 1I  A is 
 so that the eigenspace corresponding to 1  1
0 0 
x 
 1
consists of vectors  1  where x1  t , x2  t . A vector p1    forms a basis for this
 1
 x2 
eigenspace.
 1 1
The reduced row echelon form of 1I  A is 
 so that the eigenspace corresponding to
0 0 
x 
1
2  1 consists of vectors  1  where x1  t , x2  t . A vector p2    forms a basis for this
1
 x2 
eigenspace.
 1 1
We form a matrix P using the column vectors p1 and p 2 : P  
 . (Note that this answer is
 1 1
not unique. Any nonzero multiples of these columns would also form a valid matrix P .
Furthermore, the two columns can be interchanged.)
5.2 Diagonalization
32.
28
8 x1  3 x2  4 x3   8 3 4   x1 
T  x1 , x2 , x3    3 x1  x2  3 x3    3 1 3  x2  ;
 4 x1  3 x2   4 3 0   x3 
  8 3
4
 8 3 4 


the standard matrix for T is A   3 1 3 . Since det( I  A)  3
  1 3 
 4 3 0 
4
3

(  4)2 (  1) , thus A has eigenvalues   1 with algebraic multiplicity 1 and   4 with
algebraic multiplicity 2.
 1 0 1
The reduced row echelon form of 4I  A is 0 1 0  so that   4 has geometric multiplicity 1 .
0 0 0 
Since the geometric multiplicity and the algebraic multiplicity are not equal, we conclude from
Theorem 5.2.4(b) that A is not diagonalizable.
33.
 3 x1   3 0 0   x1 
3 0 0






T  x1 , x2 , x3    x2   0 1 0   x2  ; the standard matrix for T is A  0 1 0  .
 x1  x2   1 1 0   x3 
 1 1 0 
Since det   I  A  
 3
0
1
0
0
  1 0     3    1  , thus A has eigenvalues 0 , 1 , and 3 , each
1

with algebraic multiplicity 1 .
1 0 0 
The reduced row echelon form of 0I  A is 0 1 0  so that the eigenspace corresponding to
0 0 0 
 x1 
0 


1  0 contains vectors  x2  where x1  0 , x2  0 , x3  t . A vector p1  0  forms a basis for
 1
 x3 
this eigenspace.
1 0 0 
The reduced row echelon form of 1I  A is 0 1 1  so that the eigenspace corresponding to
0 0 0 
 x1 
 0


2  1 contains vectors  x2  where x1  0 , x2  t , x3  t . A vector p 2   1 forms a basis for
 x3 
 1
this eigenspace.
5.2 Diagonalization
29
 1 0 3
The reduced row echelon form of 3I  A is 0 1 0  so that the eigenspace corresponding to
0 0 0 
 x1 
 3


3  3 contains vectors  x2  where x1  3t , x2  0 , x3  t . A vector p3  0  forms a basis for
 1
 x3 
this eigenspace.
0 0 3
We form a matrix P using the column vectors p1 , p 2 , and p3 : P  0 1 0  . (Note that this
 1 1 1
answer is not unique. Any nonzero multiples of these columns would also form a valid matrix P .
Furthermore, the columns can be interchanged.)
True-False Exercises
(a)
False. E.g., A  I 2 has only one eigenvalue   1 , but it is diagonalizable with P  I 2 .
(b)
True. This follows from Theorem 5.2.1.
(c)
True. Multiplying A  P 1 BP on the left by P yields PA  BP .
(d)
False. The matrix P is not unique. For instance, interchanging two columns of P results in a
different matrix which also diagonalizes A .
(e)
True. Since A is invertible, we can take the inverse on both sides of the equality
1
0
1
P AP  D  


0
0
1 / 1

 0
0
1 1
1
obtaining P A P  D  




n 
 0
0
2
0
0 
0 
.


1 / n 
0
1 / 2
0
Consequently, P diagonalizes both A and A1 .
(f)
True. We can transpose both sides of the equality P 1 AP  D obtaining PT AT  PT   DT  D ,
1
i.e.,
 P   A  P   D . Consequently,  P  diagonalizes A .
T
1 1
T
T
1
T
1
T
(g)
True. A basis for R n must be a linearly independent set of n vectors, so by Theorem 5.2.1 A is
diagonalizable.
(h)
True. This follows from Theorem 5.2.2(b).
(i)
True. From Theorem 5.1.5 we have det  A   0 . Since det  A2    det  A    02  0 , it follows
2
from the same theorem that A2 is singular.
5.3 Complex Vector Spaces
5.3 Complex Vector Spaces
1.

||u|| 
2.
3.

u  2  i,4i,1  i   2  i, 4i,1  i  ; Re  u    2,0,1 ; Im  u    1,4,1 ;
2
2
2
2  i  4i  1  i 
2   1   0  4   1  1   5  16  2  23
2
2
2
2
2
2
u   6,1  4i,6  2i  ; Re  u    6,1,6  ; Im  u    0,4, 2  ;
2
2
2
||u|| 
6  1  4i  6  2i  36  17  40  93
(a)
u   3  4i,2  i, 6i    3  4i,2  i,6i   3  4i,2  i, 6i   u
(b)
ku  i  3  4i,2  i, 6i    4  3i, 1  2i, 6    4  3i, 1  2i,6 
ku  i  3  4i,2  i, 6i   i 3  4i,2  i,6i    4  3i, 1  2i,6 
(c)
u  v   4  3i,4,4  6i    4  3i,4,4  6i 
u  v   3  4i,2  i,6i   1  i,2  i,4    4  3i,4,4  6i 
(d)
u  v   2  5i,2i, 4  6i    2  5i, 2i, 4  6i 
u  v   3  4i,2  i,6i   1  i,2  i,4    2  5i, 2i, 4  6i 
4.
(a)
u   6,1  4i,6  2i    6,1  4i,6  2i   6,1  4i,6  2i   u
(b)
ku   i  6,1  4i,6  2i    6i,4  i, 2  6i   6i,4  i, 2  6i 
ku   i  6,1  4i,6  2i   i 6,1  4i,6  2i   6i,4  i, 2  6i 
(c)
u  v  10,4  6i,3  i   10,4  6i,3  i 
u  v   6,1  4i,6  2i    4,3  2i, i  3  10,4  6i,3  i 
(d)
u  v   2, 2  2i,9  3i    2, 2  2i,9  3i 
u  v   6,1  4i,6  2i    4,3  2i, i  3   2, 2  2i,9  3i 
5.
ix  3v  u can be rewritten as ix  3v  u ; multiplying both sides by i and using the fact that
 i  i   1 , we obtain x   i 3v  u    i  3  3i,6  3i,12   3  4i,2  i,6i 
  i  6  7i,8  4i,12  6i    7  6i, 4  8i,6  12i .
6.
1  i  x  2u  v can be rewritten as 1  i  x  v  2u ; multiplying both sides by 1  i and using
the fact that 1  i 1  i   2 , we obtain 2x  1  i  v  2u ; therefore,
x  12 1  i  v  2u   12 1  i   4,3  2i, i  3  12,2  8i,12  4i 
 12 1  i  8,1  10i, 15  3i    4  4i,  29  112 i, 6  9i .
30
5.3 Complex Vector Spaces
7.
 5i
4   5i
4 
0 4 
 5 0 
A
; Re  A   
; Im  A   



;
2 1 
 1 5
2  i 1  5i  2  i 1  5i 
det  A   5i 1  5i    4  2  i   5i  25  8  4i  17  i ; tr  A  5i  1  5i   1
8.
 4i 2  3i 
0 2 
 4 3
; Re  A   
; Im  A   
A


;
1 
2  3i
2 1 
 3 0
det  A   4i 1   2  3i  2  3i   13  4i ; tr  A  1  4i
9.
(a)
 5i
4   5i
4 
A

A
2  i 1  5i  2  i 1  5i 
(b)
A 
(c)
4  1  i    5i 1  i    4  2i  
 5i
From AB  



2  i 1  5i   2i   2  i 1  i   1  5i  2i  
T
T
 5i 2  i  5i 2  i 
4 
T
 5i
5i 2  i 


;  A  




2  i 1  5i 
 4 1  5i 
 4 1  5i   4 1  5i 
5i  5  8i

  5  3i 
 5  3i 
we obtain AB  




.
2  2i  i  1  2i  10   9  i 
 9  i 
4  1  i    5i 1  i    4  2i  
 5i
AB



2  i 1  5i   2i   2  i 1  i   1  5i  2i  
5i  5  8i

  5  3i 



2  2i  i  1  2i  10   9  i 
10.
(a)
 4i 2  3i   4i
2  3i 
A
A

1 
1  2  3i
2  3i
(b)
A 
(c)
2  3i   5i   30  11i 
 4i
 30  11i 

From AB  
we obtain AB  





1  1  4i   14  6i 
2  3i
 14  6i 
T
T
 4i 2  3i   4i
2  3i 
2  3i 
T
 4i 2  3i 
 4i


;  A  



1 
1 
1 
1  2  3i
2  3i
2  3i
2  3i
 4i 2  3i   5i   30  11i 
.
AB

1  1  4i   14  6i 
2  3i
11.
   
u  w   i   2  i    2i   2i    3  5  3i    i  2  i    2i  2i    3 5  3i 
u  v   i   4    2i  2i   3 1  i   i  4    2i  2i   31  i   4i  4  3  3i  1  i
 2i  1  4  15  9i  18  7i
31
5.3 Complex Vector Spaces


 


v  w   4  2  i   2i  2i  1  i  5  3i   4  2  i    2i  2i   1  i  5  3i 
 8  4i  4  5  3i  5i  3  12  6i
 4 
i


T
Since both u v  i 2i 3  2i    1  i  and v u   4 2i 1  i  2i    1  i  are equal to
1  i 
 3 
u  v  1  i , Formula (5) holds.
T
(a)
 
v  u   4   i    2i  2i  1  i   3    4  i    2i  2i   1  i 3
 4i  4  3  3i  1  i  1  i  u  v
(b)





u   v  w    i  4  2  i   2i  2i  2i   3 1  i  5  3i

  i  6  i    2i  0    3 6  4i   6i  1  18  12i  17  6i equals
u  v  u  w  1  i  18  7i  17  6i .
(c)
k  u  v    2i  1  i   2  2i equals
 ku   v   2   4    4   2i   6i  1  i 
=  2  4    4  2i    6i 1  i   8  8i  6i  6  2  2i .
12.
   
u  w  1  i  1  i    4   4i    3i   4  5i   1  i 1  i    4  4i   3i  4  5i   15  2i
v  w   3 1  i    4i   4i    2  3i   4  5i    31  i    4i  4i    2  3i  4  5i 
u  v  1  i   3    4  4i   3i  2  3i  1  i 3   4  4i   3i  2  3i   12  25i
 20  25i
 3 
Since both u v  1  i 4 3i   4i   12  25i  and
2  3i 
T
1  i 
v u  3 4i 2  3i   4   12  25i  are equal to u  v  12  25i , Formula (5) holds.
 3i 
T
(a)
 
 
v  u   3 1  i   4i   4    2  3i  3i  3 1  i    4i  4    2  3i  3i 
 20  25i  20  25i  u  v
(b)





u   v  w   1  i  3  1  i   4  4i  4i  3i  2  3i  4  5i
 1  i  4  i    4  0    3i  6  2i   3  23i equals
u  v  u  w  12  25i  15  2i  3  23i.

32
5.3 Complex Vector Spaces
(c)
33
k  u  v   1  i 12  25i   13  37i equals
 ku   v   2i   3    4  4i   4i    3  3i   2  3i 
  2i  3   4  4i  4i    3  3i  2  3i   13  37i .
13.
u  v   i  4    2i  2i    31  i   4i  4  3  3i  7  7i
 
w  u   2  i   i    2i  2i   5  3i   3    2  i  i    2i  2i    5  3i 3 
 2i  1  4  15  9i  18  7i  18  7i
 u  v   w  u  7  7i  18  7i  11  14i  11  14i
14.
iu  w   1  i 1  i   4i  4i    3 4  5i   28  17i
2
2
||u||  1  i  4  3i 2  2  16  9  27  3 3
||u||v   u   9 3  1  i    12 3i   4   6 3  9 3i   3i   36 3  75 3i
iu  w  ||u||v   u  28  17i  36 3  75 3i  28  36 3  17  75 3  i
15.
det   I  A  
 4 5
    4     5 1   2  4  5
1 
Solving the characteristic equation  2  4  5  0 using the quadratic formula yields

4  42  4  5 
2
 4  2 4  2  i therefore A has eigenvalues   2  i and   2  i .
For the eigenvalue   2  i , the augmented matrix of the homogeneous system   2  i  I  A x  0
5
0
 2  i
is 
. The rows of this matrix must be scalar multiples of each other (see Example
2  i 0 
 1
3 in Section 5.3) therefore it suffices to solve the equation corresponding to the second row, which
yields  x1   2  i  x2  0 . The general solution of this equation (and, consequently, of the entire
2  i 
system) is x1   2  i  t , x2  t . The vector 
 forms a basis for the eigenspace corresponding
 1 
to   2  i .
2  i  2  i 
According to Theorem 5.3.4, the vector 

 forms a basis for the eigenspace
 1   1 
corresponding to   2  i .
16.
The characteristic equation of A is  2  6  13  0 . Solving this equation using the quadratic
formula yields  
  3  2i .
6
 6 2  413
2
 6 216  3  2i; therefore, A has eigenvalues   3  2i and
5.3 Complex Vector Spaces
34
For the eigenvalue   3  2i , the augmented matrix of the homogeneous system
4  2i
3  2i  I  A x  0 is  4
5
0
. The rows of this matrix must be scalar multiples of
4  2i 0 

each other (see Example 3 in Section 5.3) therefore it suffices to solve the equation corresponding
to the second row, which yields x1  1  12 i  x2  0 . The general solution of this equation (and,
 2  i 
consequently, of the entire system) is x1   1  12 i  t , x2  t . The vector 
 forms a basis for
 2 
the eigenspace corresponding to   3  2i .
 2  i   2  i 
According to Theorem 5.3.4, the vector 

 forms a basis for the eigenspace
 2   2 
corresponding to   3  2i .
17.
det   I  A  
 5
1
2
    5   3    2  1   2  8  17 Solving the characteristic
 3
equation  2  8  17  0 using the quadratic formula yields  
8  82  417 
2
 8 2 4  4  i ;
therefore, A has eigenvalues   4  i and   4  i .
For the eigenvalue   4  i , the augmented matrix of the homogeneous system
1  i
 4  i  I  A x  0 is  1
2 0
. The rows of this matrix must be scalar multiples of each
1  i 0 

other (see Example 3 in Section 5.3) therefore it suffices to solve the equation corresponding to the
second row, which yields  x1  1  i  x2  0 . The general solution of this equation (and,
1  i 
consequently, of the entire system) is x1  1  i  t , x2  t . The vector 
 forms a basis for the
 1 
eigenspace corresponding to   4  i .
1  i  1  i 
According to Theorem 5.3.4, the vector 

 forms a basis for the eigenspace
 1   1 
corresponding to   4  i .
18.
The characteristic equation of A is  2  10  34  0 . Solving this equation using the quadratic
formula yields  
10 
 10 2  4 34 
2
 10  2 36  5  3i; therefore, A has eigenvalues   5  3i and
  5  3i .
For the eigenvalue   5  3i , the augmented matrix of the homogeneous system
3  3i 6 0 
. The rows of this matrix must be scalar multiples of
3  3i 0 
 3
each other (see Example 3 in Section 5.3) so it suffices to solve the equation corresponding to the
 5  3i  I  A x  0 is 
5.3 Complex Vector Spaces
35
second row, which yields x1  1  i  x2  0 . The general solution of this equation (and,
1  i 
consequently, of the entire system) is x1   1  i  t , x2  t . The vector 
 forms a basis for the
 1 
eigenspace corresponding to   5  3i .
1  i  1  i 
According to Theorem 5.3.4, the vector 

 forms a basis for the eigenspace
 1   1 
corresponding to   5  3i .
19.
 a b  1 1
 b a   1 1 implies a  b  1 . We have   1  i  1  1  2 .

 

The angle inside the interval   ,  from the positive x -axis to the ray that joins the origin to the
point 1,1 is   4 .
20.
 a b   0 5 
 b a    5 0  implies a  0 and b  5 . We have   0  5i  0  25  5 .

 

The angle inside the interval   ,  from the positive x -axis to the ray that joins the origin to the
point  0, 5 is   
21.
 a b   1
b a   

   3

2
.
3
 implies a  1 and b   3 . We have   1  3i  1  3  2 .
1 
The angle inside the interval   ,  from the positive x -axis to the ray that joins the origin to the


point 1,  3 is   
22.
 a b   2
b a   

   2

3
.
2
 implies a  2 and b   2 . We have  
2 
2  2 i  22 2.
The angle inside the interval   ,  from the positive x -axis to the ray that joins the origin to the
point
23.
 2,  2  is    4 .
det   I  A  
 1
4
5
    1   7    5 4    2  6  13
 7
Solving the characteristic equation  2  6  13  0 using the quadratic formula yields

6  62  413
2
 6 216  3  2i therefore A has eigenvalues   3  2i and   3  2i .
For the eigenvalue   3  2i , the augmented matrix of the homogeneous system
5.3 Complex Vector Spaces
36
4  2i
3  2i  I  A x  0 is  4
5
0
. The rows of this matrix must be scalar multiples of
4  2i 0 

each other (see Example 3 in Section 5.3) so it suffices to solve the equation corresponding to the
second row, which yields x1  1  12 i  x2  0 . The general solution of this equation (and,
 2  i 
consequently, of the entire system) is x1   1  12 i  t , x2  t . Since 
 is an eigenvector
 2 
 2 1
corresponding to   3  2i , it follows from Theorem 5.3.8 that the matrices P  
 and
 2 0
 3 2 
1
C
 satisfy A  PCP .
2
3


24.
The characteristic equation of A is  2  4  5  0 . Solving this equation using the quadratic
formula yields  
4
 4 2  4 5
2
 4  2 4  2  i; therefore, A has eigenvalues   2  i and   2  i .
For the eigenvalue   2  i , the augmented matrix of the homogeneous system   2  i  I  A x  0
5
0
 2  i
is 
 . The rows of this matrix must be scalar multiples of each other (see Example
 1 2  i 0 
3 in Section 5.3) therefore it suffices to solve the equation corresponding to the second row, which
yields x1   2  i  x2  0 . The general solution of this equation (and, consequently, of the entire
2  i 
system) is x1   2  i  t , x2  t . Since 
 is an eigenvector corresponding to   2  i , it
 1 
2 1
2 1
1
follows from Theorem 5.3.8 that the matrices P  
and C  

 satisfy A  PCP .
1
0
1
2




25.
det   I  A  
 8
3
6
    8    2    6  3    2  10  34
 2
Solving the characteristic equation  2  10  34  0 using the quadratic formula yields

10  102  4 34 
2
 10 2 36  5  3i; therefore, A has eigenvalues   5  3i and   5  3i .
For the eigenvalue   5  3i , the augmented matrix of the homogeneous system
3  3i 6 0 
. The rows of this matrix must be scalar multiples of
3  3i 0 
 3
each other (see Example 3 in Section 5.3) so it suffices to solve the equation corresponding to the
 5  3i  I  A x  0 is 
second row, which yields x1  1  i  x2  0 . The general solution of this equation (and,
 1  i 
consequently, of the entire system) is x1   1  i  t , x2  t . Since 
 is an eigenvector
 1 
5.3 Complex Vector Spaces
37
 1 1
corresponding to   5  3i , it follows from Theorem 5.3.8 that the matrices P  
 and
 1 0
5 3
satisfy A  PCP 1 .
C

3 5
26.
The characteristic equation of A is  2  8  17  0 . Solving this equation using the quadratic
formula yields  
8
 82  417
2
 8 2 4  4  i therefore A has eigenvalues   4  i and   4  i .
For the eigenvalue   4  i , the augmented matrix of the homogeneous system   4  i  I  A x  0
 1  i 2 0 
is 
 . The rows of this matrix must be scalar multiples of each other (see Example 3
 1 1  i 0 
in Section 5.3) so it suffices to solve the equation corresponding to the second row, which yields
x1   1  i  x2  0 . The general solution of this equation (and, consequently, of the entire system)
1  i 
is x1  1  i  t , x2  t . Since 
 is an eigenvector corresponding to   4  i , it follows from
 1 
1 1
 4 1
Theorem 5.3.8 that the matrices P  
and C  
satisfy A  PCP 1 .


1 0 
 1 4
27.
(a)
 

Letting k  a  bi we have u  v   2i   i    i  6i   3i  a  bi

  2i  i    i  6i    3i  a  bi   2  6  3ai  3b  8  3b   3a  i . Setting this equal to
zero yields a  0 and b  
8
therefore the only complex scalar which satisfies our
3
requirements is k   83 i .
(b)
 
 
u  v   k   1    k  1  1  i  1  i   k 1   k  1  1  i 1  i   2i  0 ; therefore, no
complex scalar k satisfies our requirements.
True-False Exercises
(a)
False. By Theorem 5.3.4, complex eigenvalues of a real matrix occur in conjugate pairs, so the total
number of complex eigenvalues must be even. Consequently, in a 5  5 matrix at least one
eigenvalue must be real.
(b)
True.  2  tr  A   det  A  0 is the characteristic equation of a 2  2 complex matrix A .
(c)
False. By Theorem 5.3.5, A has two complex conjugate eigenvalues if tr  A  4det  A .
(d)
True. This follows from Theorem 5.3.4.
(e)
 i 0
False. E.g., 
 is symmetric, but its eigenvalue   i is not real.
0 i 
2
5.3 Complex Vector Spaces
(f)
38
False. (This would be true if we assumed   1 .)
5.4 Differential Equations
1.
(a)
1 4 
We begin by diagonalizing the coefficient matrix of the system A  
.
2 3 
The characteristic polynomial of A is
det   I  A  
 1
2
4
    1   3   4  2    2  4  5     5    1
 3
thus the eigenvalues of A are   5 and   1 .
 1 1
The reduced row echelon form of 5I  A is 
 so that the eigenspace corresponding to
0 0 
x 
 x2 
1
1
  5 consists of vectors  1  where x1  t , x2  t . A vector p1    forms a basis for this
eigenspace.
1 2 
The reduced row echelon form of 1I  A is 
 so that the eigenspace corresponding to
0 0 
 2 
 forms a basis
 1
x 
 x2 
  1 consists of vectors  1  where x1  2t , x2  t . A vector p2  
for this eigenspace.
1 2 
5 0
Therefore P  
diagonalizes A and P 1 AP  D  

.
1 1
0 1
5 0
The substitution y  Pu yields the “diagonal system” u  
 u consisting of equations
0 1
u1  5u1 and u2  1u2 . From Formula (2) in Section 5.4, these equations have the solutions
 c e5 x 
u1  c1e5 x , u2  c2 e x , i.e., u   1  x  . From y  Pu we obtain the solution
c2 e 
1 2   c1e5 x  c1e5 x  2c2 e x 
y
thus y1  c1e5 x  2c2 e x and y2  c1e5 x  c2 e x .
  x    5x
x 
1
1
c
e
c
e

c
e

 2   1
2

(b)
Substituting the initial conditions into the general solution obtained in part (a) yields a system
c1e    2c2 e0  0
50
c1e    c2 e0  0
50
which can be rewritten as
5.4 Differential Equations
c1
c1
 2c2
 c2
39
 0
 0.
1 2 0  1 0 0 
The reduced row echelon form of this system’s augmented matrix 
 is 
;
1 1 0  0 1 0 
therefore, c1  0 and c2  0 .
The solution satisfying the given initial conditions can be expressed as y1  0 and y2  0 .
2.
(a)
 1 3
We begin by diagonalizing the coefficient matrix of the system A  
 . The
 4 5
characteristic polynomial of A is det   I  A  
 1
4
3
  2  6  7     7    1
 5
thus the eigenvalues of A are   7 and   1 .
 1  12 
The reduced row echelon form of 7I  A is 
 so that the eigenspace corresponding to
0
0
x 
 x2 
1 
2 
  7 consists of vectors  1  where x1  12 t , x2  t . A vector p1    forms a basis for
this eigenspace.
 1 23 
The reduced row echelon form of 1I  A is 
 so that the eigenspace corresponding to
0 0 
 3
 forms a basis
 2
x 
 x2 
  1 consists of vectors  1  where x1   23 t , x2  t . A vector p2  
for this eigenspace.
 1 3
7 0
Therefore P  
diagonalizes A and P 1 AP  D  

.
2 2 
0 1
7 0
The substitution y  Pu yields the “diagonal system” u  
 u consisting of equations
0 1
u1  7u1 and u2  1u2 . From Formula (2) in Section 5.4, these equations have the solutions
 c e7 x 
u1  c1e7 x , u2  c2 e x , i.e., u   1  x  . From y  Pu we obtain the solution
c2 e 
1 3  c1e7 x   c1e7 x  3c2 e x 
y
. Therefore,
  x   
7x
x 
2 2  c2 e  2c1e  2c2 e 
y1  c1e7 x  3c2 e x and y2  2c1e7 x  2c2 e x .
(b)
Substituting the initial conditions into the general solution obtained in part (a) yields
c1
2c1
 3c2
 2c2
 2
 1.
5.4 Differential Equations
40
 1 3 2 
The reduced row echelon form of this system’s augmented matrix 
 is
2 2 1
7
1 0
8
; therefore, c1  87 and c2   83 .

3
0
1

8

The solution satisfying the given initial conditions can be expressed as
y1  87 e7 x  89 e x and y2  74 e7 x  34 e x .
3.
(a)
 4 0 1
We begin by diagonalizing the coefficient matrix of the system A   2 1 0  . Cofactor
 2 0 1
expansion along the second column yields
det   I  A  
 4
2
2
0
1
  4 1
  1 0     1
2
 1
0
 1


    1    4    1   1 2      1  2  5  6     1   2    3 .
The characteristic equation of A is    1   2    3  0. Thus, the eigenvalues of A are
1 , 2 , and 3 (each with the algebraic multiplicity 1).
1 0 0 
The reduced row echelon form of 1I  A is 0 0 1  so that the eigenspace corresponding to
0 0 0 
 x1 
0 


  1 consists of vectors  x2  where x1  0 , x2  t , x3  0 . A vector p1   1  forms a basis
 0 
 x3 
for this eigenspace.
1
1 0
2


The reduced row echelon form of 2I  A is 0 1 1 so that the eigenspace
0 0 0 
 x1 
corresponding to   2 consists of vectors  x2  where x1   12 t , x2  t , x3  t . A vector
 x3 
 1
p 2   2  forms a basis for this eigenspace.
 2 
5.4 Differential Equations
41
1
1 0

The reduced row echelon form of 3I  A is 0 1 1 so that the eigenspace correspding
0 0 0 
 x1 
 1


to   3 consists of vectors  x2  where x1  t , x2  t , x3  t . A vector p3   1 forms a
 x3 
 1
basis for this eigenspace.
0 1 1
1 0 0 


1
Therefore P   1 2 1 diagonalizes A and P AP  D  0 2 0  .
0 2 1
0 0 3 
1 0 0 
The substitution y  Pu yields the “diagonal system” u  0 2 0  u consisting of
0 0 3 
equations u1  u1 , u2  2u2 , and u3  3u3 . From Formula (2) in Section 5.4, these equations
 c1e x 


have the solutions u1  c1e x , u2  c2 e2 x , u3  c3e3 x i.e., u  c2 e2 x  . From y  Pu we obtain
 c3 e3 x 


x
2x
3x

0 1 1  c1e   c2 e  c3e
 2x   x


2x
3x 
the solution y   1 2 1 c2 e   c1e  2c2 e  c3e  ; thus,
0 2
1  c3e3 x   2c2 e2 x  c3e3 x 
y1  c2 e2 x  c3e3 x , y2  c1e x  2c2 e2 x  c3e3 x , and y3  2c2 e2 x  c3e3 x .
(b)
Substituting the initial conditions into the general solution obtained in part (a) yields a system
c2 e    c3e    1
2 0
30
c1e0  2c2 e    c3e    1
2 0
30
2c2 e    c3e    0
2 0
30
which can be rewritten as
c1
 c2
 2c2
2c2
 c3
 c3
 c3
 1
 1
 0
5.4 Differential Equations
42
0 1 1 1
The reduced row echelon form of this system’s augmented matrix  1 2 1 1 is
0 2 1 0 
 1 0 0 1
0 1 0 1 therefore c  1 , c  1 , and c  2 .
3
1
2


0 0 1 2 
The solution satisfying the given initial conditions can be expressed as
y1  e2 x  2e3 x , y2  e x  2e2 x  2e3 x , and y3  2e2 x  2e3 x .
4.
4 2 2 
We begin by diagonalizing the coefficient matrix of the system A   2 4 2  . The characteristic
 2 2 4 
 4
polynomial of A is det   I  A   2
2
2
2
2
  4 2   3  12 2  36  32     8    2  .
2   4
Thus, the eigenvalues of A are   8 and   2 .
 1 0 1
The reduced row echelon form of 8I  A is 0 1 1 so that the eigenspace corresponding to
0 0 0 
 x1 
1


  8 consists of vectors  x2  where x1  t , x2  t , x3  t . A vector p1  1 forms a basis for
 x3 
1
this eigenspace.
1 1 1 
The reduced row echelon form of 2I  A is 0 0 0  so that the eigenspace corresponding to
0 0 0 
 x1 
 1


  2 consists of vectors  x2  where x1  s  t , x2  s , x3  t . Vectors p 2   1 and
 x3 
 0 
 1
p3   0  form a basis for this eigenspace.
 1
1 1 1
8 0 0 


1
Therefore P  1 1 0  diagonalizes A and P AP  D  0 2 0  .
1 0
0 0 2 
1
5.4 Differential Equations
43
8 0 0 
The substitution y  Pu yields the “diagonal system” u  0 2 0  u consisting of equations
0 0 2 
u1  8u1 , u2  2u2 , and u3  2u3 . From Formula (2) in Section 5.4, these equations have the
 c1e8 x 


solutions u1  c1e8 x , u2  c2 e2 x , u3  c3e2 x i.e., u  c2 e2 x  . From y  Pu we obtain the solution
 c3 e2 x 


8x
8x
2x
2x
1 1 1  c1e  c1e  c2 e  c3e 

 

y  1 1 0  c2 e2 x    c1e8 x  c2 e2 x
 . Thus,
2
x
8
x
2
x

1 0 1   c3e   c1e  c3e

y1  c1e8 x  c2 e2 x  c3e2 x , y2  c1e8 x  c2 e2 x , and y3  c1e8 x  c3e2 x .
5.
Assume y  f  x  is a solution of y  ay so that f   x   af  x  .


We have dxd f  x  e ax  f   x  e ax  f  x  a  e ax  af  x  e ax  af  x  e ax  0 for all x so there
exists a constant c for which f  x  e ax  c , i.e., f  x   ecax  ceax . We conclude that every
solution of y  ay must have the form f  x   ceax .
7.
Substituting y1  y and y2  y allows us to rewrite the equation y  y  6 y  0 as
y2  y2  6 y1  0 . Also, y2  y  y1 so we obtain the system
y1  y2
y2  6 y1  y2 .
0 1
The coefficient matrix of this system is A  
 . The characteristic polynomial of A is
6 1
det   I  A  

1
     1   1 6    2    6     3    2  . Thus, the
6   1
eigenvalues of A are   3 and   2 .
 1  13 
The reduced row echelon form of 3I  A is 
 so that the eigenspace corresponding to
0
0
x 
 x2 
1 
3 
  3 consists of vectors  1  where x1  13 t , x2  t . A vector p1    forms a basis for this
eigenspace.
 1 12 
The reduced row echelon form of 2I  A is 
 so that the eigenspace corresponding to
0 0 
x 
 1
 2

  2 consists of vectors  1  where x1   12 t , x2  t . A vector p2    forms a basis for this
x
2

5.4 Differential Equations
44
eigenspace.
 1 1
3 0
Therefore P  
diagonalizes A and P 1 AP  D  

.
3 2 
0 2 
3 0
The substitution y  Pu yields the “diagonal system” u  
 u consisting of equations
0 2 
u1  3u1 and u2  2u2 . From Formula (2) in Section 5.4, these equations have the solutions
u1  c1e , u2  c2 e
3x
2 x
 c1e3 x 
, i.e., u   2 x  . From y  Pu we obtain the solution
c2 e 
1 1  c1e3 x   c1e3 x  c2 e 2 x 
y
. Thus, y1  c1e3 x  c2 e2 x and y2  3c1e3 x  2c2 e2 x .
  2 x   
3x
2 x 
3
2

 c2 e  3c1e  2c2 e 
We conclude that the original equation y  y  6 y  0 has the solution y  c1e3 x  c2 e2 x .
8.
Substituting y1  y and y2  y yields the system
y1  y2
y2  12 y1  y2
 0 1
The coefficient matrix of this system is A  
 . The characteristic polynomial of A is
12 1
det   I  A  

1
  2    12     3   4  thus the eigenvalues of A are   3 and
12   1
  4 .
 1  13 
The reduced row echelon form of 3I  A is 
 so that the eigenspace corresponding to
0
0
x 
 x2 
1 
3 
  3 consists of vectors  1  where x1  13 t , x2  t . A vector p1    forms a basis for this
eigenspace.
 1 14 
The reduced row echelon form of 4I  A is 
 so that the eigenspace corresponding to
0 0 
x 
 1
 2

  4 consists of vectors  1  where x1   14 t , x2  t . A vector p2    forms a basis for this
x
4

eigenspace.
 1 1
3 0
Therefore P  
diagonalizes A and P 1 AP  D  

.
3 4 
0 4 
3 0
The substitution y  Pu yields the “diagonal system” u  
 u consisting of equations
0 4 
u1  3u1 and u2  4u2 . From Formula (2) in Section 5.4, these equations have the solutions
5.4 Differential Equations
45
 c e3 x 
u1  c1e3 x , u2  c2 e4 x , i.e., u   1 4 x  . From y  Pu we obtain the solution
c2 e 
 1 1  c1e3 x   c1e3 x  c2 e 4 x 
y
thus y1  c1e3 x  c2 e4 x and y2  3c1e3 x  4c2 e4 x .
 4 x   

3x
4 x 
3 4  c2 e  3c1e  4c2 e 
We conclude that the original equation y  y  12 y  0 has the solution y  c1e3 x  c2 e4 x .
9.
Substituting y1  y , y2  y , and y3  y allows us to rewrite the equation y  6 y  11y  6 y  0
as y3  6 y3  11y2  6 y1  0 . With y2  y  y1 and y3  y  y2 we obtain the system
y1  y2
y2  y3
y3  6 y1  11y2  6 y3 .
1 0
0

The coefficient matrix of this system is A  0
0 1  .
6 11 6 

1
0
The characteristic polynomial of A is det   I  A   0 
1
6 11   6


1
0
1
  1
      6   11  6   3  6 2  11  6 .
11   6
6   6
Following the procedure described in Example 3 of Section 5.1, we determine that the only
possibilities for integer solutions of the characteristic equation are 1 , 2 , 3 , and 6 .
Since det 1I  A  0 ,  1 must be a factor of the characteristic polynomial. Dividing  1 into
 3  6 2  11  6 leads to det   I  A     1   2  5  6      1   2    3 .
We conclude that the eigenvalues are 1 , 2 , and 3 - each of them has the algebraic multiplicity 1 .
 1 0 1
The reduced row echelon form of 1I  A is 0 1 1 so that the eigenspace corresponding to
0 0 0 
 x1 
1


1  1 contains vectors  x2  where x1  t , x2  t , x3  t . A vector p1  1 forms a basis for this
 x3 
1
eigenspace.
 1 0  14 
The reduced row echelon form of 2I  A is 0 1  12  so that the eigenspace corresponding to
0 0
0 
5.4 Differential Equations
46
 x1 
1 


2  2 contains vectors  x2  where x1  14 t , x2  12 t , x3  t . A vector p 2   2  forms a basis for
 4 
 x3 
this eigenspace.
 1 0  19 
The reduced row echelon form of 3I  A is 0 1  13  so that the eigenspace corresponding to
0 0
0 
 x1 
1 


1
1
3  3 contains vectors  x2  where x1  9 t , x2  3 t , x3  t . A vector p3  3  forms a basis for
9 
 x3 
this eigenspace.
1 1 1 
1 0 0 


1
Therefore P  1 2 3  diagonalizes A and P AP  D  0 2 0  .
1 4 9 
0 0 3 
1 0 0 
The substitution y  Pu yields the “diagonal system” u  0 2 0  u consisting of equations
0 0 3 
u1  u1 , u2  2u2 , and u3  3u3 . From Formula (2) in Section 5.4, these equations have the solutions
 c1e x 


u1  c1e x , u2  c2 e2 x , and u3  c3e3 x , i.e., u  c2 e2 x  . From y  Pu we obtain the solution
 c3 e3 x 


x
x
2x
3x
1 1 1   c1e   c1e  c2 e  c3e 

 

y  1 2 3  c2 e2 x    c1e x  2c2 e2 x  3c3e3 x  ; thus, y1  c1e x  c2 e2 x  c3e3 x ,
1 4 9   c3e3 x  c1e x  4c2 e2 x  9c3e3 x 
y2  c1e x  2c2 e2 x  3c3e3 x , and y3  c1e x  4c2 e2 x  9c3e3 x .
We conclude that the original equation y  6 y  11y  6 y  0 has the solution
y  c1e x  c2 e2 x  c3e3 x .
10.
From Formula (2) in Section 5.4, the second equation of the system has the solution y2  c2 e x .
With this, the first equation becomes y1  y1  c2 e x . Using the terminology of differential equations,
this is an example of a linear equation (note that the term carries a different meaning compared to
linear algebra). One method that can be applied here involves rewriting the equation as
y1  y1  c2 e x and multiplying by the integrating factor e x to produce the equation y1e x   c2 .


Integrating both sides then yields y1e x  c2 x  c1 so that y1  c1e x  c2 xe x (note that this method is
5.4 Differential Equations
47
typically discussed in differential equations textbooks; it is outside the scope of this text). The
solution of the nondiagonalizable system is y1  c1e x  c2 xe x , y2  c2 e x .
12.
(a)
From (11) in Section 5.4, we have
 y  c e2 x  1 c e3 x  c e2 x    1 c e 3 x 
 1 
1
y   1    1 2 x 4 2 3 x    1 2 x    4 23 x   c1e2 x    c2 e3 x  4 
1
 1
 y2   c1e  c2 e  c1e   c2 e

14.
(a)
Let y and z be functions in C   ,   and let k be a real number. From calculus, we have
L  y  z   dxd 2  y  z   2 dxd  y  z   3  y  z   y  z  2 y  2z  3y  3z  L  y   L  z  and
2
L  ky   dxd 2  ky   2 dxd  ky   3  ky   ky  2ky  3ky  kL  y . Therefore, L is a linear
2
operator.
(b)
Substituting y1  y and y2  y we can rewrite y  2 y  3y  0 as the system
y1  y2
y2  3y1  2 y2
1
y 
0
This system can be expressed in the form y  Ay with y   1  and A  
.
 3 2 
 y2 
The characteristic polynomial of A is det( I  A) 

1
  2  2  3  (  1)(  3) ;
3   2
thus, the eigenvalues of A are   1 and   3 .
 1 1
The reduced row echelon form of 1I  A is 
 so that the eigenspace corresponding to
0 0 
x 
 x2 
1
1
  1 consists of vectors  1  where x1  t , x2  t . A vector p1    forms a basis for this
eigenspace.
 1 13 
The reduced row echelon form of 3I  A is 
 so that the eigenspace corresponding to
0 0 
x 
 1
 2

  3 consists of vectors  1  where x1   13 t , x2  t . A vector p2    forms a basis
x
3

for this eigenspace.
1 1
 1 0
Therefore P  
diagonalizes A and P 1 AP  D  

.
1 3
0 3
 1 0
The substitution y  Pu yields the “diagonal system” u  
 u consisting of equations
0 3
u1  u1 and u2  3u2 . From Formula (2) in Section 5.4, these equations have the solutions
5.4 Differential Equations
48
 c ex 
u1  c1e x , u2  c2 e3 x , i.e., u   1 3 x  . From y  Pu we obtain the solution
c2 e 
1 1  c1e x   c1e x  c2 e 3 x 
y
thus y1  c1e x  c2 e3 x and y2  c1e x  3c2 e3 x .
 3 x    x

3 x 
1 3  c2 e  c1e  3c2 e 
We conclude that the differential equation L  y   0 has the solution y  c1e x  c2 e3 x .
15.
(a)
Let y and z be functions in C   ,   and let k be a real number. From calculus, we have
d3
d2
d
y

z

2
 
 y  z   y  z  2 y  z
dx
dx 3
dx 2
 y  z  2 y  2z  y  z  2 y  2z  L  y   L  z  and
L  y  z 
d3
d2
d
ky

2
ky    ky   2  ky   ky  2ky  ky  2ky  kL  y 

3 
2 
dx
dx
dx
therefore L is a linear operator.
L  ky  
(b)
Substituting y1  y , y2  y , and y3  y we can rewrite y  2 y  y  2 y  0 as the system
y1  y2
y2  y3
y3  2 y1  y2  2 y3
 y1 
 0 1 0


This system can be expressed in the form y  Ay where y   y2  and A   0 0 1 .
 y3 
 2 1 2 

Cofactor expansion along the third column yields det   I  A   0
2

 1
2 1
   2
1
0
 1
1   2
 1
    2      2   2     2    2  1
0 
    2    1   1 .
The characteristic equation is    2    1   1  0. Therefore, the eigenvalues are 2 , 1 ,
and 1  each of them has the algebraic multiplicity 1 .
5.4 Differential Equations
49
 1 0  14 
The reduced row echelon form of 2I  A is 0 1  12  so that the eigenspace
0 0
0 
 x1 
corresponding to 1  2 contains vectors  x2  where x1  14 t , x2  12 t , x3  t . A vector
 x3 
1 
p1   2  forms a basis for this eigenspace.
 4 
 1 0 1
The reduced row echelon form of 1I  A is 0 1 1 so that the eigenspace corresponding
0 0 0 
 x1 
1


to 2  1 contains vectors  x2  where x1  t , x2  t , x3  t . A vector p 2  1 forms a basis
 x3 
1
for this eigenspace.
 1 0 1
The reduced row echelon form of 1I  A is 0 1 1 so that the eigenspace
0 0 0 
 x1 
corresponding to 3  1 contains vectors  x2  where x1  t , x2  t , x3  t . A vector
 x3 
 1
p3   1 forms a basis for this eigenspace.
 1
 1 1 1
2 0 0 


1
Therefore P   2 1 1 diagonalizes A and P AP  D  0 1 0  .
 4 1 1
0 0 1
2 0 0 
The substitution y  Pu yields the “diagonal system” u  0 1 0  u consisting of
0 0 1
equations u1  2u1 , u2  u2 , and u3  u3 . From Formula (2) in Section 5.4, these equations
 c1e2 x 


have the solutions u1  c1e2 x , u2  c2 e x , and u3  c3e x , i.e., u   c2 e x  . From y  Pu we
 c3 e  x 


5.4 Differential Equations
50
2x
2x
x
x
 1 1 1  c1e   c1e  c2 e  c3e 




obtain the solution y   2 1 1  c2 e x    2c1e2 x  c2 e x  c3e  x  . Thus,
 4 1 1 c3 e  x   4c1e2 x  c2 e x  c3e  x 
y1  c1e2 x  c2 e x  c3e x , y2  2c1e2 x  c2 e x  c3e x , and y3  4c1e2 x  c2 e x  c3e x . We
conclude that the differential equation L  y   0 has the solution y  c1e2 x  c2 e x  c3e x .
True-False Exercises
(a)
True. y  0 is always a solution (called the trivial solution).
(b)
False. If a system has a solution x  0 then any for any real number k , y  kx is also a solution.
(c)
True.  cx  dy   cx  dy  c  Ax   d  Ay   A  cx   A  dy   A  cx  dy 
(d)
True. The solution can be obtained by following the four-step procedure preceding Example 2.
(e)
False. If P  Q1 AQ then u  Q1 AQu implies Qu   A Qu  . Generally, u and y  Qu are not
the same.
5.5 Dynamical Systems and Markov Chains
1.
2.
3.
(a)
A is a stochastic matrix: each column vector has nonnegative entries that add up to 1.
(b)
A is not a stochastic matrix since entries in its columns do not add up to 1 .
(c)
A is a stochastic matrix: each column vector has nonnegative entries that add up to 1 .
(d)
A is not a stochastic matrix since  A 23   12 fails to be nonnegative.
(a)
A is a stochastic matrix: each column vector has nonnegative entries that add up to 1.
(b)
A is not a stochastic matrix since entries in its columns do not add up to 1 .
(c)
A is a stochastic matrix: each column vector has nonnegative entries that add up to 1.
(d)
A is not a stochastic matrix since  A 11  1 fails to be nonnegative.
0.5 0.6  0.5 0.55
x1  Px 0  
   

0.5 0.4  0.5 0.45
0.5 0.6  0.55 0.545
x 2  Px1  



0.5 0.4  0.45 0.455
0.5 0.6  0.545 0.5455
x3  Px 2  



0.5 0.4  0.455 0.4545
0.5 0.6  0.5455 0.54545
x 4  Px3  



0.5 0.4  0.4545 0.45455
5.5 Dynamical Systems and Markov Chains
51
0.5455 0.5454 
An alternate approach is to determine P 4  
 then calculate
0.4545 0.4546 
0.54545
x4  P 4 x0  
.
0.45455
4.
0.8 0.5 1  0.8 
x1  Px 0  
    
0.2 0.5 0  0.2 
0.8 0.5 0.8  0.74 
x 2  Px1  
   

0.2 0.5 0.2  0.26 
0.8 0.5 0.74  0.722 
x3  Px 2  



0.2 0.5 0.26   0.278
0.8 0.5 0.722  0.7166 
x 4  Px3  



0.2 0.5 0.278  0.2834 
0.7166 0.7085
0.7166 
An alternate approach is to determine P 4  
then calculate x 4  P 4 x 0  

.
0.2834 
0.2834 0.2915
5.
(a)
P is a stochastic matrix: each column vector has nonnegative entries that add up to 1 ;
since P has all positive entries, it is also a regular matrix.
(b)
By Theorem 1.7.1(b), the product of lower triangular matrices is also lower triangular.
Consequently, for all positive integers k , the matrix P k will have 0 in the first row second
column entry. Therefore P is not a regular matrix.
(c)
P is a stochastic matrix: each column vector has nonnegative entries that add up to 1 ;
21
 25
since P   4
 25
2
6.
(a)

 has all positive entries, we conclude that P is a regular matrix.

P is a stochastic matrix: each column vector has nonnegative entries that add up to 1 ;
3
since P 2   14
4
(b)
1
5
4
5
1
2
1
2

 has all positive entries, we conclude that P is a regular matrix.

By Theorem 1.7.1(b), the product of upper triangular matrices is also upper triangular.
Consequently, for all positive integers k , the matrix P k will have 0 in the second row first
column entry. Therefore P is not a regular matrix.
(c)
7.
P is a stochastic matrix: each column vector has nonnegative entries that add up to 1 ;
since P has all positive entries, it is also a regular matrix.
P is a stochastic matrix: each column vector has nonnegative entries that add up to 1 ;
since P has all positive entries, it is also a regular matrix.
 3
To find the steady-state vector, we solve the system  I  P  q  0 , i.e.,  43
 4
 23   q1  0 

. The
2
q  0 
3 2
 1  89 
reduced row echelon form of the coefficient matrix of this system is 
 thus the general
0
0
solution is q1  89 t , q2  t . For q to be a probability vector, its components must add up to 1:
5.5 Dynamical Systems and Markov Chains
52
q1  q2  1. Solving the resulting equation 89 t  t  1 for t results in t  179 , consequently the
8
steady-state vector is q   179  .
 17 
8.
P is a stochastic matrix: each column vector has nonnegative entries that add up to 1 ;
since P has all positive entries, it is also a regular matrix.
 0.8 0.6   q1  0 
To find the steady-state vector, we solve the system  I  P  q  0 , i.e., 
    .
 0.8 0.6  q2  0 
1 0.75
The reduced row echelon form of the coefficient matrix of this system is 
; thus, the
0 
0
general solution is q1  34 t , q2  t . For q to be a probability vector, its components must add up to
1: q1  q2  1. Solving the resulting equation 34 t  t  1 for t results in t  47 , consequently the
3
steady-state vector is q   74  .
7
9.
P is a stochastic matrix: each column vector has nonnegative entries that add up to 1 ;
 83

since P 2   13
 247
1
2
3
8
1
8
1
6
7
18
4
9


 has all positive entries, we conclude that P is a regular matrix.

0   q1   0 
 12  12

1
1
 13  q2  =  0  .
To find the steady-state vector, we solve the system  I  P  q  0 , i.e.,  4
2
 1
   
1
0
  4
3
  q3  0 
 1 0  43 


The reduced row echelon form of the coefficient matrix of this system is 0 1  43  ; thus, the
0 0
0 
general solution is q1  43 t , q2  43 t , q3  t . For q to be a probability vector, we must have
q1  q2  q3  1. Solving the resulting equation 43 t  43 t  t  1 for t results in t  113 , consequently
 114 
 
the steady-state vector is q   114  .
 113 
10.
P is a stochastic matrix: each column vector has nonnegative entries that add up to 1 ;
 17
45

since P 2   154
 16
45
13
48
9
16
1
6
47
150
19
50
23
75


 has all positive entries, we conclude that P is a regular matrix.

5.5 Dynamical Systems and Markov Chains
53
 23  14  35   q1  0 


1
 25  q2   0  .
To find the steady-state vector, we solve the system  I  P  q  0 , i.e.,  0
4
4 
  23
0
q  0 
5  3
 1 0  65 


The reduced row echelon form of the coefficient matrix of this system is 0 1  85  ; thus, the
0 0
0 
general solution is q1  65 t , q2  85 t , q3  t . For q to be a probability vector, we must have
q1  q2  q3  1. Solving the resulting equation 65 t  85 t  t  1 for t results in t  195 , consequently
 196 
 
the steady-state vector is q   198  .
 195 
11.
(a)
The entry 0.2 represents the probability that the system will stay in state 1 when it is in
state 1 .
(b)
The entry 0.1 represents the probability that the system will move to state 1 when it is in
state 2 .
(c)
(d)
12.
0.2 0.1 1  0.2 
0.8 0.9 0   0.8  . Therefore, if the system is in state 1 initially, there is 0.8

   
probability that it will be in state 2 at the next observation.
0.2 0.1 0.5 0.15
0.8 0.9 0.5  0.85 . Therefore, if the system has a 50% chance of being in state 1

  

initially, it will be in state 2 at the next observation with probability 0.85 .
(a)
The entry 67 represents the probability that the system will stay in state 2 when it is in state 2 .
(b)
The entry 0 represents the probability that the system will stay in state 1 when it is in state 1 .
(c)
0

1
1
7
6
7
 1  0 
      . Therefore, if the system is in state 1 , there is 0 probability that it will
 0  1 
remain in state 1 .
(d)
0

1
1
7
6
7
  12   141 
  1    13  . Therefore, if the system has a 50% chance of being in state 1 initially, it
  2   14 
13
will be in state 2 at the next observation with probability 14
.
good
13.
(a)
The transition matrix is
good  0.95
bad  0.05
bad
0.55 
.
0.45 
5.5 Dynamical Systems and Markov Chains
(b)
(c)
(d)
14.
(a)
(b)
(c)
(d)
0.95 0.55 0.95 0.55 1  0.93
0.05 0.45 0.05 0.45 0   0.07  . Therefore, if the air quality is good today, it will


  

also be good two days from now with probability 0.93 .
0.95 0.55 0.95 0.55 0.95 0.55 0   0.858
0.05 0.45 0.05 0.45 0.05 0.45  1  0.142 . Therefore, if the air quality is bad



  

today, it will also be bad three days from now with probability 0.142 .
0.95 0.55 0.2  0.63
0.05 0.45 0.8   0.37 . Therefore, if there is a 20% chance that air quality will be

  

good today, it will be good tomorrow with probability 0.63 .
The transition matrix is
type I
type II
type I  0.75
type II  0.25
0.5 
.
0.5 
0.75 0.5 0.75 0.5 1  0.6875
0.25 0.5 0.25 0.5 0   0.3125 . Therefore, if the mouse chooses type I today, it will


  

choose the same type two days from now with probability 0.6875 .
0.75 0.5 0.75 0.5 0.75 0.5 0  0.65625
0.25 0.5 0.25 0.5 0.25 0.5  1  0.34375 . Therefore, if the mouse chooses type



  

II today, it will choose the same type three days from now with probability 0.34375 .
0.75 0.5  0.1 0.525
0.25 0.5 0.9  0.475 . Therefore, if there is a 10% chance that the mouse chooses

  

type I today, it will choose type I tomorrow with probability 0.525 .
city
15.
(a)
54
The transition matrix is
city
suburbs
 0.95
 0.05

suburbs
0.03
 P.
0.97
 100,000   0.8 
   represents the fractions of the total population
The initial state vector x 0   125,000
25,000 
 125,000  0.2 
(125,000) living in the city and in the suburbs, respectively.
After one year, the corresponding fractions are contained in the state vector
 0.95 0.03  0.8 0.766 
x1  Px 0  
   
 . To determine the populations living in the city and
 0.05 0.97 0.2  0.234 
in the suburbs at that time, we can calculate the scalar multiple of the state vector:
95,750 
125,000 x1  
.
29,250 
5.5 Dynamical Systems and Markov Chains
55
0.73472 
 , and the
 0.26528 
After the second year, the state vector becomes x 2  Px1  
91,840 
corresponding population counts are 125,000 x 2  
.
33,160 
Repeating this process three more times results in the following:
state vector x k 
city population
suburb population
(b)
initial
state
k 0
after
1 year
k 1
after
2 years
k 2
after
3 years
k 3
after
4 years
k4
 0.8 
 0.2 
 
0.766 
0.234 


0.73472 
 0.26528 


0.705942 
 0.294058


0.679467 0.655110 
 0.320533 0.344890 

 

91,840
33,160
88,243
36,757
84,933
40,067
100,000 95,750
25,000 29,250
after
5 years
k 5
81,889
43,111
Since P is a regular stochastic matrix, there exists a unique steady-state probability vector.
To find the steady-state vector, we solve the system  I  P  q  0 , i.e.,
 0.05 0.03  q1  0 
 0.05 0.03 q   0  . The reduced row echelon form of the coefficient matrix of this

 2  
 1  35 
3
system is 
 . Thus, the general solution is q1  5 t , q2  t . The components of the
0
0


vector q must add up to 1 : q1  q2  1. Solving the resulting equation 35 t  t  1 for t results
5
, consequently over the long term the fractions of the total population living in the
8
city and in the suburbs will approach 35  85  83 and 85 , respectively.
in t 
We conclude that the city population will approach 83  125,000  46,875 and the suburbs
population will approach 85  125,000  78,125 .
station1 station 2
16.
(a)
The transition matrix is
station1
station 2
 0.9
 0.1

0.05
 P.
0.95
0.5
0.475
Multiplying this matrix by the initial state vector x 0    results in x1  Px 0  
.
0.525
0.5
0.454 
After the second year, the state vector becomes x 2  Px1  
 . Repeating this process
0.546 
three more times results in the following:
5.5 Dynamical Systems and Markov Chains
initial
state
x0
market share of station 1
market share of station 2
(b)
0.5
0.5
56
after
1 year
x1
after
2 years
x2
after
3 years
x3
after
4 years
x4
after
5 years
x5
0.475
0.525
0.454
0.546
0.436
0.564
0.420
0.580
0.407
0.593
Since P is a regular stochastic matrix, there exists a unique steady-state probability vector.
To find the steady-state vector, we solve the system  I  P  q  0 , i.e.,
 0.1 0.05  q1  0 
 0.1 0.05 q   0  . The reduced row echelon form of the coefficient matrix of this

 2  
1 0.5
1
system is 
 . Thus, the general solution is q1  2 t , q2  t . For q to be a probability
0
0


vector, its components must add up to 1: q1  q2  1. Solving the resulting equation 12 t  t  1
1
for t results in t  23 , consequently the steady-state vector is q   23  .
3
17.
1
 107 p12
5


3
p23  to be stochastic, each column vector must be a probability
For the matrix P   p21
10
3
3 
 101
5
10 
vector: a vector with nonnegative entries that add up to one. Applying the latter condition to each
column results in three equations, which can be used to solve for the missing entries:
7
1
 p21   1 yields
10
10
3 3
column 2: p12    1 yields
10 5
1
3
column 3:
 p23   1 yields
5
10
column 1:
7 1
2 1
 

10 10 10 5
3 3 1
p12  1   
10 5 10
1 3
5 1
p23  1   

5 10 10 2
p21  1 
 107 101 15 


The resulting transition matrix is P   15 103 12  . Since P is a regular stochastic matrix, there
 101 35 103 
exists a unique steady-state probability vector. To find the steady-state vector, we solve the system
 103
 I  P  q  0 , i.e.,   15
  101
 101
7
10
3
5

 15   q1  0 

 12  q2   0  . The reduced row echelon form of the coefficient
7 
q  0 
10   3 
5.5 Dynamical Systems and Markov Chains
57
 1 0 1
matrix of this system is 0 1 1 . Thus, the general solution is q1  t , q2  t , q3  t . For q to
0 0 0 
be a probability vector, we must have q1  q2  q3  1. Solving the resulting equation t  t  t  1
 13 
 
for t results in t  13 , consequently the steady-state vector is q   13  .
 13 
18.
MP  M since each entry in the product MP is a sum of all entries in a column of P , which must
be 1 .
19.
From Theorem 5.5.1(a), we have Pq  q . Therefore, for any positive integer k ,
P k q  P k 1  Pq   P k 1q  P k 2  Pq   P k 2 q 
20.
21.
q
(a)
From Theorem 5.5.1, for each i  1,2,, n , the sequence Pei , P 2 ei ,, P k ei , approaches q .
(b)
As k   , P k approaches the n  n matrix [ q | q  q ] .
Let A and B be two n  n stochastic matrices, and let B be partitioned into columns:
B  [ b1 | b2  bn ] . Using Formula (6) in Section 1.3, we can now see that the product
AB  A[ b1 | b2  bn   [A b1 | A b2  A bn 
has columns that are probability vectors (since each of them is a product of a stochastic matrix and
a probability vector). We conclude that AB is stochastic.
True-False Exercises
(a)
True. All entries are nonnegative and their sum is 1 .
(b)
True. This is a stochastic matrix since its columns are probability vectors.
2
0.2 1  0.84 0.2 
Furthermore, 
 
 has all positive entries.
0.8 0  0.16 0.8 
(c)
True. By definition, a transition matrix is a stochastic matrix.
(d)
False. For q to be a steady-state vector of a regular Markov chain, it must also be a probability
vector.
(e)
True. (See Exercise 21).
(f)
False. The entries must be nonnegative.
(g)
True. This follows from Theorem 5.5.1(a).
Supplementary Exercises
58
Chapter 5 Supplementary Exercises
1.
(a)
The characteristic polynomial is
det   I  A  
  cos
sin 
2
2
    cos    sin   .
 sin 
  cos
For a real eigenvalue  to exist, we must have   cos and sin  0 . However, the latter
equation has no solutions on the given interval 0     , therefore A has no real
eigenvalues, and consequently no real eigenvectors.
(b)
According to Table 5 in Section 1.8, A is the standard matrix of the rotation in the plane
about the origin through a positive angle  . Unless the angle is an integer multiple of  , no
vector resulting from such a rotation is a scalar multiple of the original nonzero vector.

2.
Since det   I  A   0
k 3
1

3k 2
0
3
1   3  3k 2  3k 2   k 3     k  , A has only one
  3k
eigenvalue:   k .
3.
(a)
 d11
0
If D  


0
0
d22
0
0 
0 
with dii  0 for all i then we can take


dnn 
 d11
0
0 


d22
0 
 0
2
S
 so that S  D holds true. (Note that the answer is not unique:


 0

0
d
nn 

the main diagonal entries of S could be negative square roots instead.)
(b)
From our assumptions it follows that there exists a matrix P such that A  P 1 DA where
1
0
D


0
0
2
0
 1
0


0
 0
with i  0 for all i . Taking R  



 0
n 

R2  D (see part (a) ), we can form the matrix S  PRP 1 so that
S 2  PRP 1PRP 1  PR2 P 1  PDP 1  A .
0
2
0
0 

0 
 so that

n 
Supplementary Exercises
(c)
59
By Theorem 5.1.2, A has eigenvalues 1  1 , 2  4 , and 3  9 .
0 1 0 
The reduced row echelon form of 1I  A is 0 0 1  so that the eigenspace corresponding
0 0 0 
 x1 
 1


to 1  1 contains vectors  x2  where x1  t , x2  0 , x3  0 . A vector p1   0  forms a
 0 
 x3 
basis for this eigenspace.
1 1 0 
The reduced row echelon form of 4I  A is 0 0 1  so that the eigenspace
0 0 0 
 x1 
corresponding to 2  4 contains vectors  x2  where x1  t , x2  t , x3  0 . A vector
 x3 
1 
p 2   1  forms a basis for this eigenspace.
 0 
 1 0  12 


The reduced row echelon form of 9I  A is 0 1 1 so that the eigenspace
0 0
0 
 x1 
corresponding to 3  9 contains vectors  x2  where x1  12 t , x2  t , x3  t . A vector
 x3 
1 
p3   2  forms a basis for this eigenspace.
2 
1 1 1 
1 0 0 


1
Therefore P  0 1 2  diagonalizes A and P AP  D  0 4 0  .
0 0 2 
0 0 9 
 1 0 0 1 1 12 
1 1 1 1 0 0 




Since the reduced row echelon form of 0 1 2 0 1 0  is 0 1 0 0 1 1 , we
1
0 0 1 0 0
0 0 2 0 0 1 
2
 1 1 12 


have P 1  0 1 1 . As described in the solution of part (b) we can let
1
0 0
2
Supplementary Exercises
 1

R 0

 0
60
0  1 0 0 

0   0 2 0  and form

9  0 0 3 
0
4
0
1 1 1  1 0 0   1 1 12  1 1 0 


S  PRP 1  0 1 2  0 2 0  0 1 1  0 2 1  . This matrix satisfies S 2  A .
1
0 0 2  0 0 3  0 0
0 0 3 
2
4.
7.
We assume there exists a matrix P such that B  P 1 AP .

  P A  P   P A  P  therefore A and B are similar.

  P APP AP P AP  P A P therefore A and B are similar.

  P A  P   P A P therefore A and B are similar.
T
1
BT  P 1 AP
(b)
Bk  P 1 AP
(c)
B1  P 1 AP
(a)
The characteristic polynomial is det   I  A  
T
k
1
T
(a)
T
1
1
1
T
1
1
T
T
1
1
1
T
1
1
T
k
k
1
1
 3
1
k
1
6
 5   2 .
 2
2
3 6  3 6 
15 30  15 30  0 0 

 
We verify that 5 A  A2  5 




.
1 2   1 2 
 5 10   5 10  0 0 

(b)
The characteristic polynomial is det   I  A   0
1
1
0
 1  1  3  3 2   3 .
3  3
We verify that
2
1 0
1 0  0
1 0
 1 0 0
0
0







2
3
 I 3  3 A  3 A  A   0 1 0   3 0 0 1  3 0 0 1  0 0 1
0 0 1
 1 3 3
 1 3 3  1 3 3
3 0  0
0 3  1 3 3 0 0 0 
 1 0 0  0





  0 1 0   0 0 3   3 9 9    3 8 6   0 0 0 
0 0 1  3 9 9   9 24 18  6 15 10  0 0 0 
9.
Since det   I  A  
 3
1
3
6
  2  5 , it follows from the Cayley-Hamilton Theorem that
 2
3 6  15 30 
15 30  75 150 
, A3  5 A2  5 
A2  5 A  0 . This yields A2  5 A  5 




,
1 2   5 10 
 5 10  25 50 
75 150  375 750 
375 750  1875 3750 
A4  5 A3  5 

, and A5  5 A4  5 



.
25 50  125 250 
125 250   625 1250 
Supplementary Exercises
10.
61
 0 1 0  0 1 0  0 0 1 
We begin by calculating A  0 0 1  0 0 1   1 3 3  .
1 3 3  1 3 3  3 8 6 
2

1
Since det   I  A   0
1

3
0
1   3  3 2  3  1 , it follows from the Cayley-Hamilton
 3
Theorem that A3  3 A2  3 A  I  0 . This yields
0 0 1 
0 1 0  1 0 0  1 3 3 


A  3 A  3 A  I  3 1 3 3   3 0 0 1   0 1 0   3 8 6  and
 3 8 6 
1 3 3  0 0 1  6 15 10 
3
2
1 0   3 8 6 
 1 3 3
0 0 1 0





A  3 A  3 A  A  3  3 8 6   3  1 3 3  0 0 1   6 15 10  .
6 15 10 
 3 8 6   1 3 3 10 24 15
4
11.
3
2
Method I
 x1 
x 
For  to be an eigenvalue of A associated with a nonzero eigenvector x   2  , we must have
 
 
 xn 
Ax   x ; i.e.
c1 x1  c2 x2 
c x  c x 
 1 1 2 2


c1 x1  c2 x2 
 cn x n 
 x1 

x 
 cn x n 
   2 .

 

 
 cn x n 
 xn 
There are two possibilities:
 cn  tr  A .

If   0 then x1  x2 

If   0 then Ax   x becomes a homogeneous system Ax  0 ; its coefficient matrix A can
c1
0
be reduced to 


0
 xn . This implies   c1 
cn 
0 
. The solution space has dimension of at least n 1 therefore


0
0
  0 is an eigenvalue whose geometric multiplicity is at least n 1 .
c2
0
We conclude that the only eigenvalues of A are 0 and tr  A  .
Method II
Supplementary Exercises
  c1
det   I  A  
c1
c1
c2
  c2
c2
c3
c3
  c3
cn 1
cn 1
cn 1
 cn
 cn
 cn
c1
c1
c2
c2
c3
c3
  cn 1
 cn
  cn
  c1 c2



0
c3
0


0
0
 cn 1  cn
c2

  c1  c2  c3 
0
0

0
0
cn 1
cn 1
0
0
 cn
0
0
0
0

0
0


c3
0
0

cn 1
0
0
 cn
0
0
0
0

0
0
    c1  c2  c3 

0
0

1 times the first
row was added to
each of the
remaining rows.
Each of the columns
from the second to
the last was added to
the first column.
 cn1  cn  λ n1
We conclude that the only eigenvalues of A are 0 and tr  A  c1 
 cn .
0
1
Using the companion matrix formula introduced in part (a) we obtain 
0

0
0
0
1
0
0 1
0 2 
.
0 1

1 3
12.
(b)
13.
By Theorem 5.1.2, all eigenvalues of An  0 are 0 . By Theorem 5.2.3, if A had any eigenvalue
  0 then  n would be an eigenvalue of An . We reached a contradiction, therefore all
eigenvalues of A must be 0 .
15.
 0 1 0
The three given eigenvectors can be used as columns of a matrix P   1 1 1 which
 1 1 1
0 0 0 
diagonalizes A , i.e. P AP  D  0 1 0  . The latter equation is equivalent to A  PDP 1 .
0 0 1
1
62
Supplementary Exercises
63
 1 0 0 1 12  12 
 0 1 0 1 0 0




0 .
The matrix  1 1 1 0 1 0  has the reduced row echelon form 0 1 0 1 0
1
0 0 1 0 12
 1 1 1 0 0 1
2
Therefore,
 1 12  12 


P  1 0
0  . We conclude that a matrix A satisfying the given conditions is
1
0 12
2
1
0
 0 1 0  0 0 0   1 12  12   1 0

 





1
A   1 1 1 0 1 0   1 0
0    1  2  12  .
1
 1  12  12 
 1 1 1 0 0 1 0 12
2
16.
(a)
Since the characteristic polynomial of A is det   I  A     1   2    3   3 , we
have det   A  det  0I  A   1 2  33  18 and det  A   1 det   A  18 .
4
(b)
Expanding the characteristic polynomial obtained in part (a) yields
det   I  A   4   3  11 2  9  18 . Using the result of Exercise 5, tr  A   1 .
17.
By Theorem 5.2.3, if A had any eigenvalue  then  3 is an eigenvalue of A3 corresponding to
the same eigenvector. From A3  A it follows that  3   , so the only possible eigenvalues are
1 , 0 , and 1 .
18.
(a)
1 3 
We begin by diagonalizing the coefficient matrix of the system A  
 . The
2 4 
characteristic polynomial of A is det   I  A  
formula, the eigenvalues of A are  
5
 1
2
 52  4 2 
2
1
The reduced row echelon form of 5 2 33 I  A is 
0
3
  2  5  2 . By the quadratic
 4
 5 2 33 .

 so that the eigenspace
0 
3  33
4
x 
corresponding to   5 2 33 consists of vectors  1  where x1  34 33 t , x2  t .
 x2 
3  33 
A vector p1  
 forms a basis for this eigenspace.
 4 
1
The reduced row echelon form of 5 2 33 I  A is 
0

 so that the eigenspace
0 
3  33
4
x 
corresponding to   5 2 33 consists of vectors  1  where x1  34 33 t , x2  t .
 x2 
Supplementary Exercises
64
3  33 
A vector p2  
 forms a basis for this eigenspace.
 4 
 5 2 33
3  33 3  33 
1
P
AP

D

A
Therefore P  
diagonalizes
and


4 
 0
 4
 5 33
The substitution y  Pu yields the “diagonal system” u   2
 0
0 
.
5  33
2 

0 
 u consisting of
5  33

2 
equations u1  5 2 33 u1 and u2  5 2 33 u2 . From Formula (2) in Section 5.4, these equations
have the solutions u1  c1e
 5 33  x /2 ,
u2  c2 e
 5 33  x /2 , i.e.,
 c e 5 33  x /2 
.
u 1
c e 5 33  x /2 
 2

From y  Pu we obtain the solution




 5 33  x /2   3  33 c e 5 33  x /2  3  33 c e 5 33  x /2 
3  33 3  33   c1e
1
2

 ; thus,
y
  5 33 x /2   





5 33  x /2
5  33  x /2

4

4



 c2 e
4c1e
 4c2 e
 


 
y1  3  33 c1e
 5 33  x /2
y2  4c1e
(b)

5 33 x /2
 4c2 e


 3  33 c2 e
5 33  x /2
and
5 33  x /2 .
Substituting the initial conditions into the general solution obtained in part (a) yields
3  33  c  3  33  c
1
4c1
2

4c2

5
 6.
3  33 3  33 5
The reduced row echelon form of this system’s augmented matrix 
 is
4
6
 4
1 0  34  1913233 

 ; therefore, c1   34  1913233 and c2   34  1913233 .
19 33
3
0 1  4  132 
After simplifying, the solution satisfying the given initial conditions can be expressed as
 5 33  x /2 5 7 33  5 33  x /2 and
y1  25  7 2233 e
 2  22 e


y  3 
 e
2
19.
19 33
33



 3 
 e
5 33 x /2
19 33
33

5 33 x /2
.
Let a and b denote the two unknown eigenvalues. We solve the system a  b  1  6 and
a  b  1  6 . Rewriting the first equation as b  5  a and substituting into the second equation
yields a  5  a   6 , therefore  a  2  a  3  0 . Either a  2 (and b  3 ) or a  3 (and b  2 ).
We conclude that the unknown eigenvalues are 2 and 3 .
6.1 Inner Products
CHAPTER 6: INNER PRODUCT SPACES
6.1 Inner Products
1.
2.
3.
(a)
 u, v  2(1)(3)  3(1)(2)  12
(b)
 kv, w  2((3)(3))(0)  3((3)(2))(1)  18
(c)
 u  v, w  2(1  3)(0)  3(1  2)( 1)  9
(d)
|| v ||   v, v1/2  [2(3)(3)  3(2)(2)]1/2  30
(e)
d  u, v   || u  v ||  (2, 1),(2, 1)1/2  [2(2)(2)  3(1)(1)]1/2  11
(f)
|| u  kv ||  (8, 5),(8, 5)1/2  [2(8)(8)  3(5)(5)]1/2  203
(a)
 u, v  12 (1)(3)  5(1)(2)  232
(b)
 kv, w  12 ((3)(3))(0)  5((3)(2))(1)  30
(c)
 u  v, w  12 (1  3)(0)  5(1  2)(1)  15
(d)
|| v ||   v, v1/2   12 (3)(3)  5(2)(2) 
(e)
d (u, v )  || u  v ||  ( 2, 1),( 2, 1)1/2   12 ( 2)( 2)  5( 1)( 1) 
(f)
|| u  kv ||  ( 8, 5),( 8, 5)1/2   12 ( 8)( 8)  5( 5)( 5) 
(a)
 2 1 1   2 1  3   3  8 
 u, v    
   
           34
 1 1 1   1 1 2   2  5
(b)
 2 1 9    2 1  0   24   1
 kv, w   
   
           39
 1 1 6    1 1  1   15  1
(c)
 2 1  4    2 1  0   11  1
 u  v, w   
   
           18
 1 1  3    1 1  1   7   1
(d)
|| v ||   v, v
(e)
 2 1  2    2 1  2   
d  u, v   || u  v ||   
   
   
  1 1  1    1 1  1  
1/2
1/2

49
2
 72
 2 1  3    2 1  3   
  
   
   
  1 1  2     1 1 2   
1/2
1/2
1/2
 7
 157
1/2
 8  8  
   
 5 5 
1/2
 89
1/2
  5   5  
   
  3   3  
 34
1
2
Chapter 6: Inner Product Spaces
1/2
1/2
(f)
  2 1  8     2 1  8   
|| u  kv ||   
   
   
  1 1  5     1 1  5   
(a)
  1 0  1    1 0   3  1  3 
 u, v    
  
       7
 2 1 1   2 1 2   1  4 
(b)
  1 0  9     1 0   0    9  0 
 kv, w   
   
           12
 2 1 6    2 1  1  12  1 
(c)
  1 0   4     1 0   0    4  0 
 u  v, w   
   
       5
 2 1  3   2 1  1   5  1 
(d)
|| v ||   v, v
(e)
   1 0   2     1 0   2   
d  u, v   u  v   
   
   
 2 1  1   2 1  1  
(f)
   1 0   8     1 0   8   
u  kv    
   
   
  2 1  5     2 1  5  
5.
 2

 0
0 

3 
6.
 12

 0
0 

5 
7.
 4
1  0     4
1 6    3 26 
 u, v    




           24
  2 3  3    2 3 2    9   6 
8.
  2 1  0     2 1 6    3 14 
 u, v    
   
           42
  1 3  3    1 3 2    9   0 
9.
  1 13 
If u  U and v  V then  u, v  tr U T V  tr  
  3 .
 10 2  
10.
  4 18  
If u  U and v  V then  u, v  tr U T V  tr  
   56 .
  8 52  
11.
 p, q   2  4   1 0    3  7   29
12.
 p, q   5  3    2  2   1 4   15
4.
1/2
  21  21 



  13   13  
  1 0   3     1 0   3   
  
   
   
  2 1 2    2 1 2   


1/ 2
1/2
1/2
 3  3  
   
 4  4  
1/2
 25  5
1/ 2
  2   2  
   
  3   3  
 13
1/2
  8   8  



  11  11 


 610
 185
6.1 Inner Products
13.
 3

 0
0 

5 
14.
2

0
15.
 p, q  p  2  q  2   p  1 q  1  p  0  q  0   p 1 q 1
0 

6
  10  5    2  2    0 1   2  2   50
16.
 p, q  p  1 q  1  p  0  q  0   p 1 q 1  p  2  q  2 
  2  2    0 1   2  2   10  5   50
17.
|| u ||   u, u1/2  2  3  3   3  2  2  
1/2
 30
d  u, v   || u  v ||  (4, 5),(4, 5)1/2  2  4  4   3  5  5  
18.
|| u ||   u, u1/2  2  1 1  3  2  2  
1/2
1/2
 107
 14
d  u, v   || u  v ||  ( 3, 3),( 3, 3)1/2  2  3  3   3  3  3  
1/2
3 5
19.
|| p ||   p, p1/2 
 2   12  32  14 ; d  p, q   || p  q ||   6   12  102  137
20.
|| p ||   p, p1/2 
 5  22  12  30 ; d  p, q   || p  q ||   8   02  52  89
21.
  25 26  
If u  U and v  V then || U ||   u, u1/2  tr U T U  tr  
   93 and
 26 68  
2
2
2
2




 25 1 
T
d U ,V   || U  V ||   u  v, u  v1/2  tr U  V  U  V   tr  
   99  3 11 .
  1 74  
22.
  10 13 
If u  U and v  V then || U ||   u, u1/2  tr U T U  tr  
   39 and
  13 29 




 18 21 
T
d U , V   || U  V ||   u  v, u  v1/2  tr U  V  U  V   tr  
   43.
 21 25 
23.
|| p ||   p, p1/2   p  2     p  1    p  0     p 1  
2
2
2
2
 10    2   02  22  6 3
2
2
d  p, q   || p  q ||   p  2   q  2     p  1  q  1    p  0   q  0     p 1  q 1 
2
=
 15   4    1  02  11 2
2
2
2
2
2
2
3
4
Chapter 6: Inner Product Spaces
24.
|| p ||   p, p1/2   p  1    p  0     p 1    p  2   
2
2
2
2
 2   02  22  102  6 3
2
d  p, q   || p  q ||   p  1  q  1    p  0   q  0     p 1  q 1    p  2   q  2  
2
=
25.
2
2
|| u ||   u, u
2
1/ 2
  4 0   1    4 0   1  
  
   
   
  3 5  2     3 5   2   
1/2
1/2
   4   4  
   
  7  7 
|| u ||   u, u
1/2
  1 2   1    1 2   1  
  
   
   
  1 3  2     1 3   2   
1/2
1/2
(a)
 65
1/2
  12   12  



  24   24  
 12 5
1/2
 3  3  
   
 7  7  
   1 2   3     1 2   3   
d  u, v   || u  v ||   
   
   
  1 3  3    1 3   3   
27.
2
 4    1  02  52  42
   4 0   3     4 0   3   
d  u, v   || u  v ||   
   
   
  3 5  3     3 5   3   
26.
2
1/2
 58
1/2
  9   9  
   
  6   6  
 3 13
 2 v  w,3u  2 w   2 v,3u  2 w   w,3u  2 w   2 v,3u    2 v,2 w   w,3u    w,2 w
 6 v, u  4 v, w  3 w, u  2 w, w  6 u, v  4 v, w  3 u, w  2 || w ||2
 6  2   4  6   3  3   2  49   101
(b)
|| u  v ||   u  v, u  v   u, u  v   v, u  v   u, u   u, v   v, u   v, v
 || u ||2  2 u, v  || v ||2  1  2  2   4  3
28.
(a)
 u  v  2 w,4 u  v   u,4 u  v   v, 4 u  v   2 w,4 u  v
  u,4 u    u, v   v,4 u   v, v   2 w, 4 u    2 w, v
 4  u , u    u , v   4  v , u    v , v   8 w, u   2  w, v 
 4 || u ||2 3 u, v  || v ||2  8 u, w  2 v, w  4  3  2   4  8  3   2  6   30
(b)
|| 2 w  v ||  2 w  v,2 w  v  2 w,2 w  v   v,2 w  v
  2w,2 w   2w, v   v,2 w   v, v  4 || w ||2  4 v, w  || v ||2  4  49   4  6   4
 224  4 14
6.1 Inner Products
29.
If u   x, y  then || u ||   u, u1/2 
30.
If u   x, y  then || u ||   u, u1/ 2  2 x 2  y 2 , so the equation of the unit circle is 2 x 2  y 2  1 , which can
1
4
5
2
y
1.
x 2  161 y 2 , so the equation of the unit circle is x4  16
2
2
x
 y1  1 .
be rewritten as 1/2
2
31.
 u, v  19 u1v1  u2 v2 (see Example 3)
32.
 u, v  169 u1v1  u2 v2 (see Example 3)
33.
Axiom 2 does not hold, e.g., with u  v  w  1,0,0  we have  u  v, w  4 but  u, w    v, w  1  1  2 ;
Axiom 3 does not hold either, e.g., with u  v  1,0,0  and k  2 ,  ku, v  4 does not equal k  u, v  2 ;
This is not an inner product on R 3 .
34.
Axiom 4 does not hold, e.g., (0,1,0),(0,1,0)  1  0 ; this is not an inner product on R 3 .
6
Chapter 6: Inner Product Spaces
35.
By Definition 1, Definition 2, and Theorem 6.1.2, we have
 2 v  4 u , u  3v 
36.





 2 v  4 u, u   2 v  4 u,3v
 2 v, u    4 u, u    2 v,3v   4 u,3v
2 v, u   4 u, u   6 v, v  12 u, v
2 u, v  4 u, u   6 v, v  12 u, v
14 u, v  4 || u ||2  6 || v ||2 .
By Definition 1, Definition 2, and Theorem 6.1.2, we have
 5u  6 v,4 v  3u   5u  6 v,4 v   5u  6 v,3u
  5u,4 v  6 v,4 v   5u,3u   6 v,3u 
 20 u, v  24 v, v  15 u, u   18 v, u
 20 u, v  24 v, v  15 u, u   18 u, v
 2 u, v  15 || u ||2  24 || v ||2 .
37.

  
1
1
(a)
 p, q   x 2 dx  x3 
(b)
d  p, q   || p  q ||   1  x 2
(c)
|| p ||   p, p1/2   1 dx
3
1
1
 
1
1

1
1
2
3

2


1/2
dx


1

x5
5
1
 

(b)
d  p, q   || p  q ||  
(c)
|| p ||   p, p1/2  
(d)
|| q ||   q, q1/2   1  x 3
1


1
1
1
1
3x3  1

1/ 2
2
dx

1
1/2
2
2
5
2 x7
7
 2 x  dx   
3
1/2
1

 1
x4
2
3
1
 415
1/2
1
1/2
1
3
16
15
1
|| q ||  q, q1/2   x 4 dx
1

   x   2
1/ 2
   
38. (a)  p, q    2 x 1  x  dx         
 

(d)
1/2
1
3
5
  x  23x  x5  
 1 

4 x7
7
1
1

1/2
 sin 2 x 2 
1
1/2

1
7
4
  9 7x  3 2x  x  
 1 

1

 1
 
 dx     x 
2
4
7
1/2
x4
2
8
7
 2 72

1/2
1
7
 x7  
 1 


16
7
32
7
 4 72
 47
39.
 f , g   cos2 x sin 2 x dx  21
40.
 f , g   xe x dx  xe x  e x   1 (used integration by parts with u  x and dv  e x dx )
41.
Part (a) follows directly from Definition 2 and Axiom 4 of Definition 1.
0
1
0

To prove part (b), write

2
 0
 0 (substituted u  sin 2 x )
1
0
6.1 Inner Products
|| kv || 
Def.2
42.
7

k  v, kv  
k  kv, v  
k 2  v, v   | k | || v ||
 kv, kv  Axiom
3
Axiom 1
Axiom 3
Def.2
Part (c) follows from Definition 2 and part (b) of Theorem 6.1.1, since
d  u, v   || u  v ||  || (1)(v  u ) ||  | 1| || v  u ||  || v  u ||  d (v, u )
Part (d) follows from Definition 2 and Axiom 4 of Definition 1.
43.
(b)
k1 and k2 must both be positive in order for  u , v  to satisfy the positivity axiom. (Refer to the
discussion following Theorem 6.1.1.)
44.
By using Definition 2 and Axioms 1, 2, and 3 of Definition 1, we have
1
1
1
1
 u, v   (u  v)  (u  v), (u  v)  (u  v) 
2
2
2
2
1
1
1
1

 u  v, u  v   u  v, u  v   u  v, u  v   u  v, u  v
4
4
4
4
1
1

|| u  v ||2  || u  v ||2 .
4
4
45.
By using Definition 2 and Axioms 1, 2, and 3 of Definition 1, we have
|| u  v ||2  || u  v ||2
  u  v, u  v    u  v, u  v 
  u , u    u , v    v , u    v, v    u , u    u , v    v , u    v , v 
 2  u, u   2  v, v 
 2 || u ||2 2 || v ||2 .
48.
(b)
T 1,1,1  (1,1,1),(1,0,2)  11  1 0   1 2   3
(c)
T x  x 2   x  x 2 , 1  x    0 1  11  1 0   1
(d)
T x  x 2   x  x 2 , 1  x  1  12 1  1  0  02 1  0   1   1









2
 1  1  4
True-False Exercises
(a)
True. The dot product is the special case of the weighted inner product with all the weights equal to 1.
(b)
False. For example, if  u, v  u  v , u  1,1 , and v   2,1 then  u, v  1 .
(c)
True. This follows from Axioms 1 and 2 of Definition 1 since  u, v  w   v  w, u   v, u    w, u .
(d)
True. This follows from Axiom 3 of Definition 1 as well as part (e) of Theorem 6.1.2.
(e)
False. For example, if  u, v  u  v , u  1,1 , and v   1,1 then  u, v  0 even though both vectors are
nonzero.
(f)
True. By Definition 2, || v ||2   v, v so by Axiom 4 of Definition 1, || v ||2  0 implies v  0 .
(g)
False. A must be invertible; otherwise Av  0 has nontrivial solutions v  0 even though
 v, v  Av  Av  0 which would violate Axiom 4 of Definition 1.
8
Chapter 6: Inner Product Spaces
6.2 Angle and Orthogonality in Inner Product Spaces
1.
2.
(a)
cos  ||uu||,||vv|| 
(b)
cos  ||uu||,||vv|| 
(c)
cos  ||uu||,||vv|| 
(a)
cos  ||uu||,||vv|| 
(b)
cos  ||uu||,||vv|| 
(c)
cos  ||uu||,||vv|| 
1 2    3 4 
12   3 
2
  1010 20   10
  1010 2   12
200
22  42
 1 2    5 4    2  9 
 12  52  22
22  42   9 
2
0
1 3   0  3  1 3   0  3
12  02 12  02
 1 3   0  8 
 12  02
  2 6 36   12
 32   32   32   32
  373
32  8 2
 4 1  1 0    8  3
42 12  82 12  02   3
2
  8120 10   9 2010
 2  4   1 0    7  0    1 0 
22 12  72   1
2
42  02  02  02
 1 2    5 4    2  9 
 4 855  255
3.
cos  ||pp||,||qq|| 
4.
cos  ||pp||,||qq|| 
5.
cos  ||UU||,||VV|| 
6.
cos  ||UU||,||VV|| 
7.
(a)
orthogonal:  u, v  4  6  2  0
(b)
not orthogonal:  u, v  2  2  2  6  0
(c)
orthogonal:  u, v   a  b    b  a   0
(a)
orthogonal:  u, v  0  0  0  0
(b)
not orthogonal:  u, v  8  6  20  9  27  0
(c)
orthogonal:  u, v   ac  0  ac  0
8.
 12  52  22
22  42   9 
 0  7   1 3   1 3
02 12   1
2
72  32  32
2
0
0


 2  3   6  2   11   3 0 

 5019 14  1019 7
tr U U  tr  V V 
2  6 1   3 3  2 1  0
tr U T V
T
2
T


 
tr U T V

tr U T U
tr V T V


2
2
2
2
2
2
2
 2  3   4 1   1 4    3 2 
22  42   1  32
2
 32 12  42  22
9.
 p, q   1 0    1 2    2 1  0
10.
 p, q   2  4    3  2   1 2   0
11.
U ,V    2  3   1 0    1 0    3  2   0
12.
U , V    5 1   1 3    2  1   2  0   0
0
6.2 Angle and Orthogonality in Inner Product Spaces
13.
9
The vectors are not orthogonal with respect to the Euclidean inner product since
 u, v  1 2    3  1  1  0 . Using the weighted inner product instead yields
 u, v  2 1 2   k  3  1  4  3k , so the vectors are orthogonal with respect to this inner product if
k  43 .
14.
The vectors are not orthogonal with respect to the Euclidean inner product since
 u, v   2  0    4  3   12  0 . Using the weighted inner product instead yields
 u, v  2  2  0   k  4  3   12 k . This would equal 0 if k  0 , however, the corresponding formula
 u, v  2u1v1 does not represent an inner product since it violates Axiom 4. Consequently, no k values can
be found for which the vectors are orthogonal.
15.
The orthogonality of the two vectors implies  w1 1 2    w2  2  4   0 . The weights must be positive
numbers such that w1  4 w2 .
16.
 2 1 4 0 
Begin by forming a matrix A whose rows are the given vectors u , v , and w : A   1 1 2 2  . The
 3 2
5 4 
 1 0 0 34
11 


reduced row echelon form of A is  0 1 0 4  therefore the general solution of the homogeneous
 0 0 1 116 
system Ax  0 is x1  
34
6
t , x2  4t , x3   t , x4  t , so that all solution vectors are scalar multiples of
11
11
 34,44, 6,11 . All solution vectors of the homogeneous system are orthogonal to every row vector of the
coefficient matrix (see Example 6 in Section 6.2).
The magnitude of the vector  34,44, 6,11 is
 34   442   6   112  57 .
2
2
We conclude that the two unit vectors that are orthogonal to all three of the vectors u , v , and w are
1
57
17.
 34,44, 6,11    3457 , 4457 ,  192 , 1157  and  571  34,44, 6,11   3457 ,  5744 , 192 ,  5711  .
Orthogonality of p1 and p3 implies  p1 , p3    2 1   k  2    6  3   2 k  20  0 so k  10 . Likewise,
orthogonality of p2 and p3 implies  p2 , p3    l 1   5  2    3  3   l  19 so l  19 .
Substituting the values of k and l obtained above yields the polynomials p1  2  10 x  6 x 2 and
p2  19  5 x  3 x 2 which are not orthogonal since  p1 , p2    2  19    10  5    6  3   70  0 . We
conclude that no scalars k and l exist that make the three vectors mutually orthogonal.
18.
 2 1 3   2 1  5  9   2 
 u, v    
   
       0
 1 1 3   1 1  8   6   3
19.
 p, q  p  2  q  2   p  0  q  0   p  2  q  2    2  4    0  0    2  4   0
20.
For A to be a in the subspace of M22 spanned by U and V , there must exist scalars a and b such that
10
Chapter 6: Inner Product Spaces
 1 1
 4 0   1 1
 b
a


.
3 0 
9 2  0 2
Equating corresponding entries in the second column on both sides yields a  1 and b  1 . However, for
these values, the remaining entries on both sides do not equal (e.g.,  11  1 4   1 ). We conclude
that A is not in the subspace of M 22 spanned by U and V .
21.
|  u, v |  | 2(1)(2)  3(0)(1)  (3)( 1) |  |1 | 1 ;
|| u ||   u, u  2 11  3  0  0    3  3   11 ;
|| v ||   v, v  2  2  2   3 11   1 1  12 ;
since || u || v ||  132  1  |  u, v | , we conclude that the Cauchy-Schwarz inequality holds.
22.
U ,V    11   2  0    6  3   1 3   20  20 ;
|| U ||  U ,U  
 1 1   2  2    6  6   11  42 ;
|| V ||  V , V  
11   0  0    3  3    3  3   19 ;
since || U || || V ||  798  400  20  U ,V  , we conclude that the Cauchy-Schwarz inequality holds.
23.
 p, q   1 2    2  0   1 4   6  6 ;
|| p ||   p, p 
 1 1   2  2   11  6 ;
|| q ||   q, q 
 2  2    0  0    4  4   20 ;
since || p || || q ||  120  36  6   p, q , we conclude that the Cauchy-Schwarz inequality holds.
24.
 2 1 1   2 1  1   3  1
 u, v    
   
       3  3;
  1 1 1    1 1  1  2  0 
 2 1 1   2 1 1 
 3  3 
|| u ||   u, u   

        13 ;






2  2 
 1 1 1   1 1 1 
 2 1  1   2 1  1 
 1  1
|| v ||   v, v   

        1  1;






0  0 
 1 1  1   1 1  1 
since || u || || v ||  13  9  3   u, v , we conclude that the Cauchy-Schwarz inequality holds.
25.
By inspection,  u, w1   2  0 . Since u is not orthogonal to w1 , it is not orthogonal to the subspace.
26.
We have  p, w1    1 2    1 0    2  1   4 1  0 and
 p, w 2    1 0    1 4    2  2    4  2   0 therefore for all scalars a and b
 p, aw1  bw 2   a  p, w1   b p, w 2   0 . We conclude that p is orthogonal to span w1 , w 2  .
6.2 Angle and Orthogonality in Inner Product Spaces
27.
11
Begin by forming a matrix A whose rows are the given vectors:
 1 4 5 2
 1 0 1  27 
4
A   2 1 3 0  has the reduced row echelon form  0 1 1
. The general solution of the
7
 0 0 0
 1 3 2 2 
0 
homogeneous system Ax  0 is x1  s  27 t , x2  s  47 t , x3  s , x4  t , therefore
x  s  1, 1,1,0   t  27 ,  47 ,0,1 .
A basis for the orthogonal complement is formed by vectors  1, 1,1,0  and  27 ,  47 ,0,1 .
28.
Begin by forming a matrix A whose rows are the given vectors:
 1 4 5 6 9
1
 3 2 1 4 1

 has the reduced row echelon form 0
A
0
 1 0 1 2 1



3 5 7 8
 2
0
0 1 2 1
1 1 1 2 
. The general solution of
0 0 0 0

0 0 0 0
the homogeneous system Ax  0 is x1  r  2 s  t , x2  r  s  2t , x3  r , x4  s , x5  t , therefore
x  r  1, 1,1,0,0   s  2, 1,0,1,0   t  1, 2,0,0,1 .
A basis for the orthogonal complement is formed by vectors  1, 1,1,0,0  ,  2, 1,0,1,0  , and
 1, 2,0,0,1 .
29.
(a)
Every vector in W has form  x, y    x,2 x  , i.e., W  span 1,2  . By inspection, all vectors in R 2
orthogonal to 1,2  are scalar multiples of the vector  2, 1 . Eliminating t from
 x, y   t  2, 1   2t, t  we obtain x  2   y  , i.e. W  can be represented using an equation
y   12 x .
(An alternate method of solving this exercise is to follow the procedure of Example 6: letting
x
A  1 2 , the general solution of A    0 is x  2t , y  t . Eliminating t yields y   12 x .)
 y
(b)
W  will have dimension 1. A normal to the plane is u  1, 2, 3  , so W  will consist of all scalar
multiples of u or tu   t, 2t , 3t  so parametric equations for W  are x  t ,
y  2t , z  3t .
30.
(a)
Every vector in W has form  0, y,0  , i.e. W  span  0,1,0  . By inspection, all vectors in R 3
orthogonal to  0,1,0  are linear combinations of 1,0,0  and  0,0,1 i.e. W  is the xz plane.
(b)
Every vector in W has form  0, y, z  , i.e. W  span  0,1,0  ,  0,0,1 . By inspection, all vectors in R 3
orthogonal to W have form 1,0,0  i.e. W  is the x axis.
31.
(a)
1
1
 p, q   x 3 dx  x4   14
0
4
0
12
Chapter 6: Inner Product Spaces
(b)

1
|| p ||   p, p   x dx
1
2
2
0

1
   
1
2
|| q ||  q, q1/2   x 4 dx
32.
33.
0
1
2
1

0
x3
3
1
3
   
1/2
 x5 
5
1 1/2
(a)
Using the results obtained in Exercise 31 we have cos 
(b)
d  p, q   || p  q ||   x  x 2
(a)
 p, q  
(b)
|| p ||   p, p 2  
 
1
0
1
1
 dx     
1/2
2
x3
3
1
2
x4
4
3
1
1/2

1
5
 x5  
0 

1
30
1
x2
2
1

    
|| q ||   q, q     x  1 dx      x  x    
 

1
1
x2  x
1
1
2
1
1
2
2
1
5
4
3
  x5  x2  x3    415
 1 

dx
1/2
2
x3
3
1
35.
x4
2
1
 p, q
 1 4 1  415 .
|| p || || q ||
 5
3
 x  x   x  1 dx    x  x  dx       0
1/2
34.
1
5
0
1/2
1
2
1
8
3
2
2
3
 p, q
0.
|| p || || q ||
(a)
Using the results obtained in Exercise 33 we have cos 
(b)
d  p, q   || p  q ||  
(a)
 p, q    12  x  dx  12 x  x2   0
0
0
(b)
13
|| p  q ||2    23  x  dx  94 x  32x  x3   12
; || p ||2   1 dx  x 0  1 ;
0
0
0
 
1
1

x2  2 x  1

1

2

2


1/2
dx


1/ 2
1
5
3
  x5  x 4  23x  2 x 2  x  
 1 

2
14
15
1

2
1
2
2
3

1
1
1
2
1
13
q2    12  x  dx  4x  x2  x3   121 ; we conclude that || p  q ||2  12
 || p ||2  || q ||2 .
0
0
1
36.
1
3
 x  x  dx      0
(a)
 p, q  
(b)
|| p  q || 2  
1
2
x4
4
3
1
1
1
2
1
1
 x  x  1 dx   
2
2
1
|| p ||2    x  dx  x3   23 ;
1
x2
2
3
1
x5
5

1
26
 x3  x 2  x   15
;
 1
x4
2
3
|| q ||2  
1
1
 x  1 dx   
2
2
x5
5
2 x3
3

1
 x   16
;
 1 15
26
 || p ||2  || q ||2 .
we conclude that || p  q ||2  15
37.
|| u  v ||2   u  v, u  v   u, u  v   v, u  v   u, u   u, v   v, u   v, v  1  0  0  1  2 therefore
|| u  v ||  2 .
38.
Assuming  w, u1    w, u 2   0 , it follows that
6.2 Angle and Orthogonality in Inner Product Spaces
13
 w, k1u1  k2 u 2    w, k1u1    w, k2 u 2   k1  w, u1   k2  w, u 2    k1  0    k2  0   0.
When V is R 3 with the Euclidean inner product this means that a vector perpendicular to two vectors is
also perpendicular to the entire plane spanned by these vectors (assuming they are noncollinear).
39.
Using the trigonometric identity cos cos   12 cos      12 cos     we obtain
fk , fl  12  cos   k  l  x  dx  12  cos   k  l  x  dx where both k  l and k  l are nonzero integers.


0
0
Substituting u   k  l  x in the first integral, and t   k  l  x in the second integral yields
 fk , fl   12
40.
sin   k  l  x 
k l


  1 sin   k  l  x    0  0  0 since sin  m   0 for any integer m .
 0 2 k  l  0
We are looking for positive real numbers w1 and w2 for which the inner product  u, v  w1u1v1  w2 u2 v2
satisfies  u, v  0 , || u || 2   u, u  1 , and || v || 2   v, v  1 when applied to the given two vectors. These
three equations yield a linear system
w1
 3w2
 0
w1
w1
 3w2
 3w2


1
1
 1 0 12 
The augmented matrix of this system has the reduced row echelon form  0 1 61  . Since the system has
 0 0 0 
only one solution w1  12 , w2  61 , we conclude that the weighted Euclidean inner product that satisfies the
given conditions is  u, v  12 u1v1  61 u2 v2 .
41.
span u1 , u 2 , , u r  contains all linear combinations k1u1  k2 u 2    kr u r
where k1 , k2 , , kr are arbitrary scalars. Let v  span u1 , u 2 , , u r  .
 w, v   w, k1u1  k2 u 2    kr u r   k1  w, u1   k2  w, u 2     kr  w, u r   0  0    0  0
Thus if w is orthogonal to each vector u1 , u 2 , , u r , then w must be orthogonal to every vector in
span u1 , u 2 , , u r .
42.
Let w be a vector orthogonal to all of the basis vectors. It must be possible to express w as a linear
combination of the basis vectors: w  k1v1  k2 v 2    kr v r . By the Axioms 1-3 of Definition 1 in
Section 6.1, we can write
 w, w   w, k1v1  k2 v 2    kr v r 
 k1  w, v1   k2  w, v 2     kr  w, v r 
 k1  0  k2  0    kr  0
 0
By Axiom 4 of Definition 1 in Section 6.1, we must have w  0 .
14
43.
Chapter 6: Inner Product Spaces
Suppose that v is orthogonal to every basis vector. Then, as in Exercise 41, v is orthogonal to the span of
the set of basis vectors, which is all of W, hence v is in W  . If v is not orthogonal to every basis vector,
then v clearly cannot be in W  . Thus W  consists of all vectors orthogonal to every basis vector.
44.
This result can be established by induction.
46.
Using the Euclidean inner product, apply the Cauchy-Schwarz inequality to u   a, b  and
v   cos , sin   .
47.
49.
Using the weighted Euclidean inner product of Formula (2) in Section 6.1, the desired inequality follows
from the Cauchy-Schwarz inequality.
Using the inner product  f , g   f  x  g  x  dx , part (a) follows from the Cauchy-Schwarz inequality and
1
0
part (b) follows from the triangle inequality (part (a) of Theorem 6.2.2).
50.
Squaring both sides of the Cauchy-Schwarz inequality and applying Definition 2 of Section 6.1 yields
Formula (4).
51.
(a)
We are looking for all vectors v   a, b  such that  x, v  a  b is equal to
2   a  b 
TA (x), TA (v)     
  2a  2b . The equation a  b  2 a  2b yields a  b  0, i.e. b  a .
0   a  b 
Vectors that satisfy  x, v  TA (x), TA (v ) must have a form a 1, 1 where a is an arbitrary scalar.
(b)
We are looking for all vectors v   a, b  such that  x, v  2 a  3b is equal to
2   a  b 
TA (x ), TA (v )     , 
   4 a  4b . The equation 2a  3b  4a  4b yields 2 a  b  0, i.e.
0    a  b 
b  2a . Vectors that satisfy  x, v  TA (x), TA (v) must have a form a 1, 2  where a is an
arbitrary scalar.
52.
(a)
We are looking for all vectors q  a  bx  cx 2 such that  p, q  a  b is equal to
T (p), T (q)  3,3a  cx 2   9a . The equation a  b  9a yields b  8a . Vectors that satisfy
 p, q  T ( p), T (q) must have a form a  8ax  cx 2 where a and c are arbitrary scalars.
(b)
We are looking for all vectors q  a  bx  cx 2 such that
 p, q  p( 1)q( 1)  p(0)q(0)  p(1)q(1)  (0)( a  b  c)  (1)( a )  (2)( a  b  c )  3a  2b  2c is
equal to T (p), T (q)  3,3a  cx 2   (3)(3a  c)  (3)(3a )  (3)(3a  c)  27a  6c .
The equation 3a  2b  2c  27a  6c yields b  12a  4c . Vectors that satisfy
 p, q  T ( p), T (q) must have a form a  (12 a  4c) x  cx 2 where a and c are arbitrary scalars.
True-False Exercises
(a)
False. If u is orthogonal to every vector of a subspace W, then u is in W  .
(b)
True. W  W   {0}.
6.2 Angle and Orthogonality in Inner Product Spaces
(c)
True. For any vector w in W,  u  v, w   u, w   v, w  0, so u + v is in W  .
(d)
True. For any vector w in W,  ku, w  k  u, w  k (0)  0, so ku is in W  .
(e)
False. If u and v are orthogonal |  u, v |  | 0  0.
(f)
False. If u and v are orthogonal, || u  v ||2  || u ||2  || v ||2 thus u  v  || u ||2  || v ||2  || u ||  || v ||
6.3 Gram-Schmidt Process; QR-Decomposition
1.
(a)
(0, 1), (2, 0)  0  0  0 ;
|| (0,1) ||  1 ; || (2,0) ||  2  1 ;
The set is orthogonal, but is not orthonormal.
(b)
  , ,  ,      0 ;
 ,     1 ;  ,  
1
2
1
2
1
2
1
2
1
2
1
2
1
2
1
2
1
2
1
2
1
2
1
2
1
2
 12  1
The set is orthogonal and orthonormal.
(c)
  ,   ,  ,      1  0 ;
1
2
1
2
1
2
1
2
1
2
1
2
The set is not orthogonal (therefore, it is not orthonormal either).
(d)
(0, 0), (0,1)  0  0  0 ;
|| (0,0) ||  0  1 ; || (0,1) ||  1 ;
The set is orthogonal, but is not orthonormal.
2.
(a)
 , 0,  ,  , ,     0   0 ;  , 0,  ,   , 0,     0   0 ;
 , ,   ,   , 0,        0 ;
1
2
1
2
1
3
1
3
1
3
1
3
1
3
1
3
1
2
1
6
1
2
1
6
1
6
1
2
1
6
1
2
1
2
1
2
1
2
1
2
2
6
The set is not orthogonal (therefore, it is not orthonormal either).
(b)
 23 ,  23 , 13  ,  23 , 13 ,  23   49  29  29  0 ;  23 ,  23 , 13  ,  13 , 23 , 23   29  49  29  0 ;
 23 , 13 ,  23  ,  13 , 23 , 23   29  92  94  0 ;
 23 ,  32 , 13  
4
9
 49  19  1 ;
 23 , 31 ,  23  
4
9
 19  94  1 ;
 13 , 32 , 32  
The set is orthogonal and orthonormal.
(c)
1, 0, 0  ,  0, 12 , 12   0  0  0  0 ;
 0,
1
2
(1, 0, 0), (0, 0,1)  0  0  0  0 ;

, 12 ,  0, 0,1  0  0  12  12  0 ;
The set is not orthogonal (therefore, it is not orthonormal either).
1
9
 94  94  1 ;
15
16
Chapter 6: Inner Product Spaces
(d)
 ,
 ,
1
6
1
6
1
6
1
6
  ,  , 0    0  0 ;
,   
   1 ;  ,  , 0 
,  26 ,
1
2
2
6
1
6
1
2
1
6
1
12
1
12
1
2
4
6
1
2
1
2
 21  0  1
The set is orthogonal and orthonormal.
3.
(a)
 p1 ( x ), p2 ( x )  23  23   23  13   13   23   49  29  29  0 ;
 p1 ( x ), p3 ( x )  23  13   23  23   13  23   29  49  29  0 ;
 p2 ( x ), p3 ( x )  23  13   13  23   32  23   29  92  94  0 ;
The set is orthogonal.
(b)
 p1 ( x ), p2 ( x )  1 0   0
   0    0 ;  p ( x), p ( x)  1 0   0  0   0 1  0 ;
1
2
1
2
1
3
 p2 ( x ), p3 ( x )  0  0   12  0   12 1  12  0 ;
The set is not orthogonal.
4.
(a)
2
0
 0
1 0 
3

B
Denoting the matrices A  
,
, C 2


1
2
0 0 
3  3
 3
2
3
1
3

0
 , and D   2
3

1
3
2
3

 we calculate

 A, B  1 0    0   23    0   13    0    23   0 ;
 A, C   1 0    0   23    0    23    0   13   0 ;
 A, D  1 0    0   13    0   23    0   23   0 ;
 B,C   0  0    23  23    13   23     23  13   0 ;
 B, D   0  0    23  13    13  23     23  23   0 ;
C, D   0  0    23  13     23  23    13  32   0 ;
The set is orthogonal.
(b)
0 1 
0 0 
0 0 
1 0 
Denoting the matrices A  
, B
, C
, and D  


 we calculate

0 0 
1 1 
 1 1
0 0 
 A, B  1 0    0 1   0  0    0  0   0 ;
 A, C   1 0    0  0    0 1   0 1  0 ;
 A, D  1 0    0  0    0 1   0  1  0 ;
 B,C   0  0   1 0    0 1   0 1  0 ;
 B, D   0  0   1 0    0 1   0  1  0 ;
C , D   0  0    0  0   11  1 1  0 ;
The set is orthogonal.
5.
Let us denote the column vectors u1  1, 0, 1 , u 2   2, 0, 2  , and u 3   0, 5, 0  . These vectors are
orthogonal since  u1 , u 2   2  0  2  0 ,  u1 , u 3   0  0  0  0 , and  u 2 , u 3   0  0  0  0 .
It follows from Theorem 6.3.1 that the column vectors are linearly independent, therefore they form an
6.3 Gram-Schmidt Process; QR-Decomposition
17
orthogonal basis for the column space of A . We proceed to normalize each column vector:
u1
u2
 110 1 1, 0, 1  12 , 0,  12 ;
 4 10  4  2, 0, 2   2 2 2 , 0, 2 2 2  12 , 0, 12 ;
|| u1 ||
|| u 2 ||

u3

|| u 3 ||
1
0  25  0
1
2
1
2


 

 0, 5, 0    0, 1, 0  . A resulting orthonormal basis for the column space is
 , 0,   ,  , 0,  ,  0, 1, 0  .
6.
1
2
1
2
Let us denote the column vectors u1   15 , 15 , 15  , u 2    12 , 12 , 0  , and u 3   13 , 13 ,  32  .
These vectors are orthogonal since  u1 , u 2    101  101  0  0 ,  u1 , u 3   151  151  152  0 , and
 u 2 , u 3    61  61  0  0 . It follows from Theorem 6.3.1 that the column vectors are linearly independent,
therefore they form an orthogonal basis for the column space of A . We proceed to normalize each column
vector:
u1
 1 11 1  15 , 15 , 15   53  15 , 15 , 15   13 , 13 , 13 ;
|| u1 ||
 
25 25 25

u2

|| u 2 ||
u3

|| u 3 ||
1
1  1 0
4 4
1
114
9 9 9

  12 , 12 , 0   2   12 , 12 , 0     12 , 12 ,0  ;
 13 , 13 ,  23   36  13 , 13 ,  23    16 , 16 ,  26  .
A resulting orthonormal basis for the column space is
7.
 , , ,   , ,0 ,  , ,   .
1
3
1
3
1
3
1
2
1
2
1
6
1
6
2
6
 v1 , v 2    12
 12
 0  0 ;  v1 , v 3   0  0  0  0 ;  v 2 , v 3   0  0  0  0 ;
25
25
|| v1 || 
9
25
 16
 0  1 ; || v 2 || 
25
16
25
 259  0  1 ; || v3 ||  0  0  1  1 ;
Since this is an orthogonal set of nonzero vectors, it follows from Theorem 6.3.1 that the set is linearly
 
independent. Because the number of vectors in the set matches dim R 3  3 , this set forms a basis for R 3
by Theorem 4.6.4. This basis is orthonormal, so by Theorem 6.3.2(b),
u   u, v1  v1   u, v2  v2   u, v3  v3
   35  85  0  v1   45  65  0  v2   0  0  2  v3
  115 v1  25 v2  2v3 .
8.
It was shown in Exercise 7 that the vectors v1 , v 2 , and v 3 form an orthonormal basis for R 3 . By Theorem
6.3.2(b),
u   u, v1  v1   u, v2  v2   u, v3  v3
   95  285  0  v1   125  215  0  v2   0  0  4  v3
  375 v1  95 v2  4v3 .
9.
 v 1 , v 2   4  2  2  0 ;  v1 , v 3   2  4  2  0 ;  v 2 , v 3   2  2  4  0 ;
Since this is an orthogonal set of nonzero vectors, it follows from Theorem 6.3.1 that the set is linearly
18
Chapter 6: Inner Product Spaces
 
independent. Because the number of vectors in the set matches dim R 3  3 , this set forms a basis for R 3
by Theorem 4.6.4. By Theorem 6.3.2(a),
u 
 u, v 3 
 u, v1 
 u, v 2 
v1 
v2 
v3
2
2
|| v1 ||
|| v 2 ||
|| v 3 ||2
2  0  2
2  0  4
v1 
v 2  v 3
4  4 1
4 1 4
2
1
 0 v1  v 2  v 3 .
3
3

10.
 v1 , v 2   2  2  6  2  0
 v1 , v 3   1  2  0  1  0
 v1 , v 4   1  0  0  1  0
 v 2 , v 3   2  4  0  2  0
 v 2 , v 4   2  0  0  2  0
 v3 , v 4   1  0  0  1  0
Since this is an orthogonal set of nonzero vectors, it follows from Theorem 6.3.1 that the set is linearly
independent. Because the number of vectors in the set matches dim  R 4   4 , this set forms a basis for R 4
by Theorem 4.6.4. We use Theorem 6.3.2(a):
u 
 u, v 3 
 u, v1 
 u, v 2 
 u, v 4 
v 
v2 
v3 
v4
2 1
2
2
|| v1 ||
|| v 2 ||
|| v3 ||
|| v 4 ||2
11 2 1
1 2  0 1
1 0  0 1
2  2  3  2
v1 
v2 
v3 
v4
11 4 1
4494
1 4  0 1
1 0  0 1
1
5
1
v1  v 2  v3  1v 4 .

7
21
3

11.
 u S    115 ,  25 , 2 
12.
 u S    375 ,  95 , 4 
13.
 u S   0,  23 , 13 
14.
 u S   17 , 215 , 31 ,1
15.
(a)
|| v || 
9
25
 16
 1 , so v forms an orthonormal basis for the line W  span v .
25
w1  projW u   u, v v    35  245  35 , 45   215  35 , 45    63
, 84
25
25 
(b)
w 2  u  w1   1, 6    63
, 84
   88
, 66 ;
25
25 
25 25 
264
264
w 2 is orthogonal to the line since  w 2 , v   125
 125
0.
16.
(a)
|| v || 
25
169
 144
 1 , so v forms an orthonormal basis for the line W  span v .
169
36
230 552
w1  projW u   u, v v   10
 13
, 169 
 135 , 1213   1346  135 , 1213    169
13
(b)
230 552
45
w 2  u  w1   2, 3    169
, 169    108
,  169
;
169
540
540
 2197
 0.
w 2 is orthogonal to the line since  w 2 , v  2197
6.3 Gram-Schmidt Process; QR-Decomposition
17.
(a)
w1  projW u  ||uv,||v2 v  2113 1, 1  25 1, 1   25 , 25 
(b)
w2  u  w1   2, 3   25 , 25     12 , 12  ;
19
w 2 is orthogonal to the line since  w 2 , v   12  12  0.
18.
(a)
w1  projW u  ||uv,||v2 v  99164  3, 4   255  3, 4    35 , 45 
(b)
w 2  u  w1   3, 1   35 , 45    125 ,  95  ;
w 2 is orthogonal to the line since  w 2 , v  365  365  0.
19.
(a)
 v1 , v 2   0 and || v1 ||  || v 2 ||  1 , so v1 , v 2  is an orthonormal basis for the plane W  span v1 , v 2  .
w1  projW u   u, v1  v1   u, v 2  v 2  2  13 , 23 ,  23   4  32 , 31 , 32    103 , 83 , 34 
(b)
w 2  u  w1   4, 2,1   103 , 83 , 34    23 ,  23 ,  13  ;
w 2 is orthogonal to the plane since  w 2 , v1   29  49  29  0 and  w 2 , v 2   49  29  29  0 .
20.
(a)
 v1 , v 2   0 and || v1 ||  || v 2 ||  1 , so v1 , v 2  is an orthonormal basis for the plane W  span v1 , v 2  .
w1  projW u   u, v1  v1   u, v 2  v 2   26
 , ,    , , 
1
6
1
6
2
6
4
3
1
3
1
3
1
3
   62 ,  62 , 64    43 , 43 , 43     13 ,  13 , 23    43 , 43 , 43   1,1,2 
(b)
w 2  u  w1   3, 1, 2   1,1,2    2, 2, 0  ;
w 2 is orthogonal to the plane since  w 2 , v1   26  26  0  0 and  w2 , v 2   23  23  0 .
21.
(a)
 v1 , v 2   0 , so v1 , v 2  is an orthogonal basis for the plane W  span v1 , v 2  .
w1  projW u  ||v ||12 v1  ||v ||22 v 2  64 1, 2,1  25  2,1, 0 
 u ,v 
 u ,v 
1
2
22
  23 ,  34 , 32    45 , 25 ,0    15
,  14
,2
15 3 
(b)
22
w2  u  w1  1, 0, 3   15
,  14
, 2    157 , 14
,7 ;
15 3 
15 3 
28
28  35
 37  7 15
 0 and
w 2 is orthogonal to the plane since  w 2 , v1    157  15
 w 2 , v 2    14
 14
00.
15
15
22.
(a)
 v1 , v 2   0 , so v1 , v 2  is an orthogonal basis for the plane W  span v1 , v 2  .
w1  projW u  ||v ||12 v1  ||v ||22 v 2  147  3,1, 2   13  1,1,1   23 , 12 ,1    13 , 13 , 13 
 u ,v 
 u ,v 
1
2
 , , 
7 5 4
6 6 3
(b)
w 2  u  w1  1, 0, 2    67 , 65 , 43     61 ,  65 , 23  ;
w 2 is orthogonal to the plane since  w 2 , v1    63  65  43  0 and  w 2 , v 2   61  65  23  0 .
20
23.
Chapter 6: Inner Product Spaces
projW b  ||v ||12 v1  ||v ||22 v 2  11210112 1,1,1,1  11210112 1,1, 1, 1
 b, v 
 b,v 
1
2
 14 1,1,1,1  45 1,1, 1, 1   23 , 23 , 1, 1
24.
projW b  ||v ||12 v1  ||v ||22 v 2  0012160 21  0,1, 4, 1  3910250112  3,5,1,1
b,v

25.
b,v
1
2
9
2
 0,1, 4, 1   3,5,1,1   1211 , 74 ,  127 , 121 
11
36
projW b   b, v1  v1  b, v 2  v 2  b, v 3  v 3


 0  218  0  118 v1   21  106  0  61  v 2 

1
18

5
 0  0  418 v 3  318 v1  2 v 2  18
v3
20
  0, 183 ,  12
,  183   1, 35 , 13 , 13    185 , 0, 185 ,  18
   1823 , 116 ,  181 ,  1817 
18
26.
projW b   b, v1  v1  b, v 2  v 2  b, v 3  v 3
  12  1  0  12  v1   12  1  0  12  v 2   12  1  0  12  v3  1v1  2v 2  0v3
  12 , 12 , 12 , 12   1, 1, 1, 1   23 , 23 ,  12 ,  12 
27.
v1  u1  1, 3 
v 2  u 2  ||v2 ||21 v1   2, 2   2106 1, 3    2, 2     25 , 65    125 , 45 
 u ,v 
1
An orthonormal basis is formed by the vectors q1  ||v11 ||  110 1, 3  
v
q 2  ||v22 || 
v
1
144  16
25 25
 125 , 45   1  125 , 45   4 510  125 , 45    310 , 110  .
160
25

1
10

,  310 and
6.3 Gram-Schmidt Process; QR-Decomposition
28.
v1  u1  1,0 
v 2  u 2  ||v2 ||21 v1   3, 5   31 0 1, 0    3, 5    3, 0    0, 5 
 u ,v 
1
An orthonormal basis is formed by the vectors q1  ||v11 ||  11 1,0   1,0  and
v
q 2  ||v22 ||  125  0, 5    0, 1 .
v
29.
v1  u1  1,1,1
v 2  u 2  ||v2 ||21 v1   1,1,0   111110 1,1,1   1,1,0   0 1,1,1   1,1,0 
 u ,v 
1
v 3  u 3  ||v ||2 v1  ||v3 ||22 v 2  1,2,1  112111 1,1,1  111200  1,1,0 
 u 3 , v1 
 u ,v 
1
 1,2,1 
4
3
2
1,1,1   1,1,0    61 , 61 ,  13 
1
2
 , , ,
 , ,    , ,  .
An orthonormal basis is formed by the vectors q1  ||v11 ||  13 1,1,1 
v


q 2  ||v 22 ||  12  1,1,0    12 , 12 ,0 , and q3  ||v33 ||  1/1 6
v
30.
v
1 1
6 6
1
3
1
3
1
6
1
3
1
6
1
3
2
6
v1  u1  1,0,0 
v 2  u 2  ||v2 ||21 v1   3,7, 2   310000 1,0,0    3,7, 2   3 1,0,0    0,7, 2 
 u ,v 
1
v 3  u 3  ||v ||2 v1  ||v3 ||22 v 2   0,4,1  010000 1,0,0   004289 24  0,7, 2 
 u 3 , v1 
 u ,v 
1
2
  0,4,1  0 1,0,0   26
0,7, 2    0, 30
, 105
53 
53 53 
An orthonormal basis is formed by the vectors q1  ||v11 ||  11 1,0,0   1,0,0  ,
v




q 2  ||v22 ||  153  0,7, 2   0, 753 ,  253 , and q3  ||v33 ||  15/1 53  0, 30
, 105  0, 253 , 753 .
53 53 
v
31.
v
First, transform the given basis into an orthogonal basis v1 , v 2 , v 3 , v 4 .
v1  u1   0,2,1,0 
v 2  u 2  ||v2 ||21 v1  1,  1, 0, 0   0  2 5 0  0  0, 2, 1,0   1,  1, 0, 0    0, 45 , 25 , 0 
 u ,v 
1
21
22
Chapter 6: Inner Product Spaces
= 1,  15 , 25 , 0 
v3  u3  ||v3 ||21 v1  ||v3 ||22 v2  1, 2, 0,  1  0  4 5 0 0  0, 2, 1, 0  
 u ,v 
 u ,v 
1 25  0  0
1
2
6
5
1,  15 , 25 , 0 
 1, 2, 0,  1   0, 85 , 45 , 0    12 ,  101 , 15 , 0    12 , 12 ,  1,  1
 u ,v 
 u ,v 
 u ,v 
1
2
3
v 4  u 4  ||v4 ||21 v1  ||v4 ||22 v 2  ||v4 ||23 v 3
 1, 0, 0, 1  0 05 0 0 v1  1 06 0 0 1,  15 , 25 , 0   2
1  0  0 1
5
 1, 0, 0, 1  00 5 0 0 v1  10 6 0 0 1,  15 , 25 , 0   2
5
2
1  0  0 1
5
5
2
 12 , 12 ,  1,  1
 21 , 21 ,  1,  1   154 , 154 ,  158 , 45 
An orthonormal basis is formed by the vectors q1  ||v11 || 
v

5
30
,  130 ,
q 4  ||v44 ||  15 15 4 15 5 

1
15
q 2  ||v22 || 
v
v
1,  15 , 25 , 0 
30
5

 4 , 4 ,  8 , 4
15
32.
,
1
15
2
30

 0, 2, 1, 0 
5
 1 , 1 , 1, 1
, 0 , q3  ||v33 ||  2 2 10
,  215 ,
v
2
3
15


 0,

1
10
,
2
5
,
1
10
1
5

, 0 ,

,  210 ,  210 ,
.
0 1 1
We begin by forming a matrix whose columns are the three given vectors:  1 0 1 . The reduced row
2 1 3
1 0 1 
echelon form of this matrix is 0 1 1  . By Theorem 4.8.5(b), vectors u1   0,1,2  and
0 0 0 
u 2   1,0,1 form a basis for the span of the three given vectors (the third vector can be expressed as a
linear combination of the first two).
We proceed to use the Gram-Schmidt process to transform the basis u1 , u 2  to an orthonormal basis.
v1  u1   0,1,2 
v 2  u 2  ||v2 ||21 v1   1,0,1  001042  0,1,2    1,0,1  25  0,1,2    1,  25 , 15 
 u ,v 
1
An orthonormal basis is formed by the vectors




q1  ||v11 ||  15  0,1,2   0, 15 , 25 and q 2  ||v22 ||  61/5  1,  25 , 15    530 ,  230 , 130 .
v
v
33.
From Exercise 23, w1  projW b   32 , 32 ,  1,  1 , so w 2  b  projW b    12 , 12 ,1,  1 .
34.
23 11
, 6 ,  181 ,  17
, so w2  b  w1    185 , 61 , 181 ,  181  .
From Exercise 25, w1  projW b   18
18 
35.
Let W be the plane spanned by the vectors u1 and u 2 .
v1  u1  1,1,1
v 2  u 2  ||v2 ||21 v1   2, 0, 1  211011 1, 1, 1   2, 0, 1   31 , 31 , 13    35 ,  31 ,  34 
 u ,v 
1
w1  projW w  ||v ||12 v1  ||v ||22 v 2  1 233 v1  3 423 3 v2  2 1, 1, 1  149  35 ,  13 ,  34 
 w,v 
 w,v 
5  2  12
1
2
9
6.3 Gram-Schmidt Process; QR-Decomposition
15
13
31
  2, 2, 2    14
,  143 ,  67    14
, 14
, 207 
13
31
w2  w  w1  1,2,3    14
, 14
, 207    141 ,  143 , 17 
36.
The vectors u1 and u 2 are not orthogonal, therefore we begin by using the Gram-Schmidt process to
transform the basis u1 , u 2  to an orthonormal basis.
v1  u1   1,0,1,2 
v 2  u 2  ||v2 ||21 v1   0,1,0,1  01 00 1042  1,0,1,2    0,1,0,1  13  1,0,1,2    13 ,1,  13 , 13 
 u ,v 
1
An orthonormal basis is formed by the vectors




1
q1  ||v11 ||  16  1,0,1,2    16 ,0, 16 , 26 and q 2  ||v22 ||  4/3
 13 ,1,  13 , 31   2 1 3 , 2 3 3 ,  2 1 3 , 2 1 3 .
v
v



w1  projW w   w, q1  q1   w, q 2  q 2  1 0 66  0  16 ,0, 16 , 26  126 36  0 2 1 3 , 2 3 3 ,  2 1 3 , 2 1 3

   67 ,0, 67 , 37     121 ,  41 , 121 ,  121     45 ,  41 , 45 , 49 
w2  w  w1   1,2,6,0     45 ,  14 , 45 , 94    14 , 94 , 194 ,  94  .
37.
First, transform the given basis into an orthogonal basis v1 , v 2 , v 3 .
v1  u1  1, 1, 1
v1   v1 , v1   12  2(1)2  3(1)2  1  2  3  6
v2  u2 
 u 2 , v1 
v1
2
v1  1, 1, 0  
 u 3 , v1 
v1
2
1, 1, 1  1, 1, 0   12 1, 1, 1   12 , 12 ,  12 
 12   2  12   3   12   6  14   26
2
v2  v2 , v2  
v3  u3 
11  2 11  3 0 1
6
v1 
2
 u3 , v2 
v2
2
2
v 2  1, 0, 0   1 06 0 1, 1, 1  2 6
1 00
4
 12 , 12 ,  12 
 1, 0, 0    61 , 61 , 61    61 , 61 ,  61    23 ,  13 , 0 
v3   v3 , v3  
 23   2   13   3(0)2 
2
2
The orthonormal basis is q1  v1 
v
1
 2 ,  13 , 0 
q 3  v3  3
v
3
38.
6
3

1, 1, 1
6

4
9
 ,
1
6
 29  36
1
6
,
1
6
, q 
2
v2
v2
 1, 1,  1 
 2 26 2 
2
 ,
1
6
1
6
 ,  , 0 .
2
6
1
6
Let us denote w1  1,0  and w 2   0,1 . The set is orthogonal because  w1 , w 2   4 1 0    0 1  0 .
w1
w1
  w 1,w  
w
1
1
1
4 11   0  0 
1,0    12 ,0  ;
w2
w2
  w 2,w   4 0 01  1 1  0,1   0,1
w
2
2
     
By normalizing the given orthogonal set, we obtained an orthonormal set  12 ,0  ,  0,1 .
39.

,  16 , and
For example, x 
 , 0  and y   0,  .
1
3
1
2
23
24
40.
Chapter 6: Inner Product Spaces
The line through the origin making 6 angle with the x -axis is a subspace of R 2 spanned by the vector
 ,  . Since this vector has norm 1, the orthogonal projection of x  1,5 onto
W  span u is proj x   x, u u     ,   
,
.
u   cos 6 ,sin 6  
3
2
1
2
3
2
W
41.
3
2
5
2
3 5 3
4
1
2
5 3
4
By inspection, v1  v1  v2 and v2  v1  2v2 so v1 and v2 are in W . The dimension of W is 2
(a)
since v1 , v 2  is an orthogonal basis for W . By Theorem 6.3.1, v1 , v 2  is linearly independent, so
by Theorem 4.6.4 it is a basis for W , hence it spans W .
Calculating projW u using v1 , v 2  we obtain
(b)
 u , v1 
 u ,v 2 
v1 
2
v1
v2
2
v 2  130017 1, 0,1  00 11 00  0,1, 0    2, 0, 2    0,1, 0    2,1, 2  .
Calculating projW u using v1 , v 2  instead yields the same vector:
 u , v1 
42.
 u , v 2 
v1 
2
v1
v 2
2
v2  131117 1,1,1  134217 1, 2,1   35 , 35 , 35    13 ,  32 , 13    2,1, 2  .
The first three Legendre polynomials are q1  x   1 , q2  x   x , and q3  x   12  3 x 2  1   12  23 x 2 .
By Example 8 and the Remark that follows it, these polynomials form an orthogonal set in the vector space
P2 with the inner product  p, q   p  x  q  x  dx .
1
1
We have q1
q3
(a)
2

2
  1 dx  x ]11  2 , q 2
1
2
2
1
1
   x  dx  x3   23 , and
1
2
3
1
1
   x  dx       .
1
3
2
1
2
1
2
2
x
4
x3
2
1
9 x5
20
1
2
5
For p  x   1  x  4 x 2 , Theorem 6.3.2(a) yields
p
 p, q1 
q1
2
q1 
 p, q 2 
q2
2
q2 
 p, q3 
q3
2
q3
 1 x x2 3x3


 6 x 4  dx
  

1
x
x
dx
x
x
x
dx
1
4
4




2


 2 2 2
 q
q1  1
q2 
 1
3
2
2
2
3
5
1

2

1

2
3
1

1
1
1

1
x2 4 x3 
3  x2 x3
5  x x2 x3 3x 4 6 x5 
4
 x 





q
q
x





   
  q3
1
2
2
2
3   1
2 2
3
2 2 4 6
8
5   1
  1
 1  14 
 3  2 
 5  16 
    q1     q 2     q 3
 2  3 
 2  3 
 2  15 
7
8
 q1  1q 2  q3
3
3
(b)
For p  x   2  7 x 2 , Theorem 6.3.2(a) yields
p
 p, q1 
q1
2
q1 
 p, q 2 
q2
2
q2 
 p, q3 
q3
2
q3
6.3 Gram-Schmidt Process; QR-Decomposition

13 x 2 21x 4 



1
 dx
 1 
2  7 x 2 dx
2 x  7 x 3 dx
2
2 


1
1
q1 
q2 
q3

2
2
2
3
5
1



1
1

1
1
1
1
7 x3 
3  2 7 x 4 
5
13 x 3 21x 5  
  2x 






q
q
x
x
 1

 2

  q3
2
3   1
2
4   1
2
6
10   1
 1  2 
3
 5  28 
     q1     0  q 2      q 3
2
3
2
 

 
 2  15 
1
14
  q1  0q 2  q3
3
3
For p  x   4  3 x , Theorem 6.3.2(a) yields
(c)
 p, q1 
p
q1
2
 p, q 2 
q1 
q2
q2 
2
 p, q3 
q3
2
q3

3x
9 x3 
2
x




2
6

 dx
 1 
4 x  3 x dx
 4  3 x  dx
2
2 


1
1
q1 
q2 
q3

2
2
2
3
5
1

1
2
1
1

1
1
3x2 
3
5
3x2
9 x 4 
 1 
2
3
3

2
2
2
   4x 







q
q
x
x
x
x


  q3
1
 1 2 2 
2   1
2
4
8   1
 2 



1
3
5
    8  q1     2  q 2     0  q 3
2
2
2
 4q1  3q 2  0q 3
43.
First transform the basis S  p1 , p2 , p3   1, x, x 2  into an orthogonal basis v1 , v 2 , v 3  .
v1  p1  1
v1   v1 , v1  
1
 1 dx  x  1
1
2
0
0
1
2 1
0
0
 p2 , v1    x  1 dx  12 x
v 2  p2 
v2
2
 p2 , v1 
2
v1
 12
v1  x  12 1  x  12 1   12  x
1
1
2
  v 2 , v 2      12  x  dx  
v2 
1
0
1
12
1
0
  x  x  dx   x  x  x  
2
1
4
1
4
1
2
2
1
3
3
1
0
1
12
 2 13
1
1
0
0
 p3 , v1    x 2  1 dx  13 x 3  13



 p3 , v 2    x 2   21  x  dx    12 x 2  x 3 dx   61 x 3  41 x 4
v 3  p3 
1
1
0
0
 p3 , v1 
v1
2
v1 
 p3 , v 2 
v2
2
 
1
0
1
12
v 2  x 2  13 1  121   21  x   x 2  13  12  x  61  x  x 2
1
1
12
25
26
Chapter 6: Inner Product Spaces
v3
2
 v3 , v3   
1
0
v3 
1
180
2
  x  x  dx   x  x  x  x  x  
2
1
6
1
36
1
6
2
3
4
9
4
1
2
1
5
5
1
0
1
180
 6 15
The orthonormal basis is
q1  v1  11  1;
v
1
q 2  v2 
v
2
q3 
44.
 12  x
1
2 3
 2 3   12  x   3  1  2 x  ;




1  x  x2
v3
 6 1  6 5 61  x  x 2  5 1  6 x  6 x 2 .
6 5
v3
1
0
The reduced row echelon form of the matrix is 
0

0
0 0
1 0 
therefore the three column vectors of the
0 1

0 0
original matrix, u1   6, 2,  2,6  , u 2  1, 1,  2, 8  , and u 3   5, 1, 5,  7  form a basis for the column
space. Applying the Gram-Schmidt process yields an orthogonal basis v1 , v 2 , v 3  :
v1  u1   6, 2, 2,6 
v2  u2 
 u 2 , v1 
v1
2
v1  1,1, 2, 8   366 24444836  6, 2, 2,6   1,1, 2, 8   34  6, 2, 2,6 
   27 ,  12 ,  12 , 72 
v3  u3 
 u 3 , v1 
v1
2
v1 
 u3 ,v2 
v2
2
30  2 10  42
v 2   5,1, 5, 7   36
6, 2, 2,6   492  21  21  492   27 ,  21 ,  21 , 27 
 4  4  36 
35  1  5  49
4
4
4
4
  5,1, 5, 7    6, 2, 2,6   25   72 ,  12 ,  12 , 27     25 , 145 , 145 , 25 
45.
Let u1  1, 2  , u 2   1, 3  , q1 
 ,  , and q    ,  . A QR-decomposition of the matrix A is
1
5
2
5
2
5
2
1
5
formed by the given matrix Q and the matrix
1
4
  u , q   u 2 , q1    5  5

R 1 1

 u 2 , q 2    0
0


46.
Let u1  1, 0,1 , u 2   2,1, 4  , q1 
 15  65   5

2
 35   0
5
5
.
5 
 , 0,  , and q    ,
1
2
1
2
1
3
2
1
3

, 13 . A QR-decomposition of the matrix
A is formed by the given matrix Q and the matrix
1
1
 u1 , q1   u 2 , q1    2  0  2

R
 u 2 , q 2   
0
 0

47.
 0  42   2

 23  13  43   0
3 2
.
3 
2
2
Let u1  1, 0,1 , u 2   0,1, 2  , u 3   2,1, 0  , q1 
 , 0,  , q    ,
1
2
1
2
2
1
3
1
3

, 13 , and
6.3 Gram-Schmidt Process; QR-Decomposition
q3 
 ,
1
6
2
6
27

,  16 . A QR-decomposition of the matrix A is formed by the given matrix Q and the
matrix
1
1
 u1 , q1   u 2 , q1   u 3 , q1    2  0  2
R   0
0
 u 2 , q 2   u 3 , q 2    

 0
 u 3 , q3   
0
0

48.
Let u1  1,1, 0  , u 2   2,1, 3  , u 3  1,1,1 , q1 

q3   319 ,
3
19
,
1
19
0  0  22
0  13  23
0
00   2
 
 23  13  0   0
 
2
 26  0  0
6
 , ,0 , q  
1
2
1
2
2
2
2
2
2 19
2
3
0
2 

 13  .

4
6 


2
,  2 19
, 3 192 , and
 . A QR-decomposition of the matrix A is formed by the given matrix Q and the
matrix
1
 12  0
 u1 , q1   u 2 , q1   u 3 , q1    2
R   0
0
 u 2 , q 2   u 3 , q 2    

0
 u 3 , q 3   
0
 0

 2

 0

0

49.
2
2
2 2
2 19
 12  0
2
 2 19
 9 192
0
 12  0 

2
2
3 2



2 19
2 19
19

 319  319  119 
1
2
2

3 2 
.
19

1 
19 
3
2
19
2
0
 1 0 1
 1 1 1
   u u u . By inspection, u  u  2 u , so the column vectors
In partitioned form, A  
1
2
3
3
1
2
 1 0 1


 1 1 1
of A are not linearly independent and A does not have a QR-decomposition.
51.
The proof of part (a) mirrors the proof of part (b) in the book. By Theorem 6.3.1, an orthogonal set of
nonzero vectors in W is linearly independent. It follows from part (b) of Theorem 4.6.5 that this set can be
enlarged to form a basis for W. Applying the Gram-Schmidt process (without the normalization step) will
yield an enlarged orthogonal set (the original orthogonal set will not be affected).
52.
If v 3  0 then
u3 
 u 3 , v1 

v1
2
v1 
 u 3 , v1 
v1
2
u3 , v2 
u1 
v2
2
v2
 u3 , v2 
v2
2
u2 
 u 3 , v 2   u 2 , v1 
v2
2
v1
2
u1
 u , v  u , v  u , v  
u , v 
2
1
 u1  3 22 u 2
  3 21  3 22
2
 v
v2
v1 
v2
1

28
Chapter 6: Inner Product Spaces
making u 3 a linear combination of u1 and u 2 , which contradicts the assumption of the linear
independence of u1 , u 2 , u 3  .
53.
The diagonal entries of R are  u i , qi  for i = 1, 2, ..., n, where q i  vi is the normalization of a vector v i
v
i
that is the result of applying the Gram-Schmidt process to u1 , u 2 , ..., u n . Thus, v i is ui minus a linear
combination of the vectors v1 , v 2 , ..., v i 1 , so u i  v i  k1v1  k2 v 2    ki 1v i 1 . Thus,
 u i , v i    v i , v i  and  u i , qi   u i ,
vi
vi
 v1  v i , v i   v i . Since each vector v i is nonzero, each
i
diagonal entry of R is nonzero.
55.
(b)
The range of T is W ; the kernel of T is W  .
True-False Exercises
(a)
False. For example, the vectors (1, 0) and (1, 1) in R2 are linearly independent but not orthogonal.
(b)
False. The vectors must be nonzero for this to be true.
(c)
True. A nontrivial subspace of R3 will have a basis, which can be transformed into an orthonormal basis
with respect to the Euclidean inner product.
(d)
True. A nonzero finite-dimensional inner product space will have finite basis which can be transformed into
an orthonormal basis with respect to the inner product via the Gram-Schmidt process with normalization.
(e)
False. projW x is a vector in W.
(f)
True. Every invertible n  n matrix has a QR-decomposition.
6.4 Best Approximation; Least Squares
1.
 1 1
 1 1
 2
 1 2 4 
 21 25
 1 2 4    20 



T
T
A   2 3 ; A A  
1 
2 3  
; A b
;
1 3 5 
1 3 5   20 
25 35 



 4 5
 5
 4 5
21 25  x1  20 
The associated normal equation is 
    .
25 35   x2  20 
6.4 Best Approximation; Least Squares
2.
 2 1 0 
 2 1 0 
 2 3 1 1 
 3 1 2
  15 1 5
3
1
2



T
 ; A A  1 1 4 2 
 
A

  1 4 5   1 22 30  ;
 1 4 5 
 0 2 5 4  


  5 30 45
 1 2 4
 1 2 4
 1
 2 3 1 1    1
0
AT b   1 1 4 2      9  ;
 1
 0 2 5 4     13
 2
 15 1 5  x1   1
The associated normal equation is  1 22 30   x2    9  .
 5 30 45  x3   13
3.
 1 1
 2
 1 2 4    20 
 1 2 4 
 21 25

T
A A
  2 3  
 ; A b   1 3 5  1  20  ;
 1 3 5  4 5  25 35 

  5  


 
T
 21 25  x1  20 
The normal system AT Ax  AT b is 
    .
25 35   x2  20 
20
1 0
11 
.
The reduced row echelon form of the augmented matrix of the normal system is 
8 
0 1  11 
, x2   118 is the unique least squares solution of Ax  b .
The solution of this system x1  20
11
4.
 2 2 
 2
 2 1 3    6 
 2 1 3 
14 0 

T
1 
A A
1  
; A b
;
 1
2 1 1    4 
0 6 
 2 1 1  3


 1
1

T
14 0   x1   6 
The normal system AT Ax  AT b is 
    .
 0 6   x2   4 
3
1 0
7
.
The reduced row echelon form of the augmented matrix of the normal system is 
2
0 1  3 
The solution of this system x1  37 , x2   23 is the unique least squares solution of Ax  b .
5.
1
 1 2 1 1 
2
AT A   0
1 1 1 
1
 1 2 0 1 
1
0 1
 7 4 6 
1 2  
 4
3 3 ;
1 0
  6 3 6 
1 1 
6 
 1 2 1 1    18 
0
AT b   0
1 1 1     12  ;
9 
 1 2 0 1    9 
3 
29
30
Chapter 6: Inner Product Spaces
 7 4 6   x1   18 
The normal system A Ax  A b is  4 3 3  x2    12  .
 6 3 6   x3   9 
T
T
 1 0 0 12 
The reduced row echelon form of the augmented matrix of the normal system is 0 1 0 3 .
0 0 1 9 
The solution of this system x1  12 , x2  3 , x3  9 is the unique least squares solution of Ax  b .
6.
0 1
2
1 2 0 
 2
 9 4 0 
1 2 2  


T

A A   0 2 1 1
  4
6 5 ;
 2 1 0 
 1 2 0 1 
  0 5 6 
1 1 
0
0 
1 2 0    6
 2
6
AT b   0 2 1 1     6  ;
0 
 1 2 0 1    6 
6 
 9 4 0   x1   6 
6 5  x2    6  .
The normal system A Ax  A b is  4
 0 5 6   x3   6 
T
T
 1 0 0 14 
The reduced row echelon form of the augmented matrix of the normal system is  0 1 0 30  .
 0 0 1 26 
The solution of this system x1  14 , x2  30 , x3  26 is the unique least squares solution of Ax  b .
7.
28
   116 
 2   1 1 20
 2   11
 11     16
  27 




Least squares error vector: e  b  Ax   1   2 3  8    1   11     11
;
 11 

40
15
 5  4 5
 5  11   11 
  116 
1
2
4


 27  0 
AT e  

  11     ; therefore the least squares error vector is orthogonal to every vector in the
 1 3 5  15  0 
 11 
column space of A .
Least squares error: b  Ax 
8.
  116     2711    1511   113 110  2.86 .
2
2
2
  214 
 2  2 2  3
 2   46
21 






  
1  27    1    215     16
Least squares error vector: e  b  Ax   1   1
;
21 
3

13
8
 1  3
 1  21   21 
1
6.4 Best Approximation; Least Squares
31
  214 
2
1
3


 16  0 
 21     ; therefore the least squares error vector is orthogonal to every vector in the
AT e  


 2 1 1  8  0 
 21 
column space of A .
Least squares error: b  Ax 
9.
  214     1621    218   421  0.873 .
2
2
6   1
0  2
Least squares error vector: e  b  Ax     
9  1
  
3  1
2
0 1
6   3   3 
 12       

1  2     0   3   3 
;
3 


1 0    9  9   0 
  9      
1 1    3   0   3
 3
 1 2 1 1    0 
3
AT e   0
1 1 1     0  ; therefore the least squares error vector is orthogonal to every vector
 0
 1 2 0 1    0 
 3
in the column space of A .
Least squares error: b  Ax  32   3   0 2  32  3 3  5.196 .
2
10.
0 1
0  2
 0   2   2 
6   1 2 2  14  6   6   0 
 30           ;
Least squares error vector: e  b  Ax     
 0   2  1 0     0   2   2 
  
  26       
1 1   6   4   2 
6  0
 2 
1 2 0    0 
 2
0
AT e   0 2 1 1     0  therefore the least squares error vector is orthogonal to every vector
 2
 1 2 0 1    0 
 2
in the column space of A .
Least squares error: b  Ax 
11.
 2   02  22  22  2 3  3.464 .
2
 2 1
3
 2 4 2    12 
2 4 2  
24 12 

T
A A
  4 2  
 ; A b   1 2 1  2    6  ;
 1 2 1  2 1 12 6 

 1   


 
T
24 12   x1  12 
The normal system AT Ax  AT b is 
    .
12 6   x2   6 
1 12 12 
The reduced row echelon form of the augmented matrix of the normal system is 
.
0 0 0 
The general solution of the normal system is x1  12  12 t , x2  t . All of these are least squares solutions of
Ax  b . The error vector is the same for all solutions:
32
Chapter 6: Inner Product Spaces
 3   2 1 1 1
 3   1 2 
2  2 t      




e  b  Ax   2    4 2  
 2  2  0 .
t       
 1   2 1 
 1   1 2 
12.
 1 3
1 
1 2 3     4 
1 2 3  
14 42 

T
A A
  2 6   
 ; A b  3 6 9  0   12  ;
3 6 9   3 9   42 126 

 1   


 
T
14 42   x1   4 
The normal system AT Ax  AT b is 
     .
 42 126   x2  12 
 1 3 27 
The reduced row echelon form of the augmented matrix of the normal system is 
.
0 0 0 
The general solution of the normal system is x1  27  3t , x2  t . All of these are least squares solutions of
Ax  b . The error vector is the same for all solutions:
1   1 3 2
1   27   75 
3
t



   4 4
e  b  Ax  0    2 6   7
  0     7    7  .
t
 1   6   1 
1   3 9  
   7 7
13.
 1 2 0   1 3 2   5 1 4 
A A   3 1 1  2 1 3   1 11 10  ;
 2 3 1  0 1 1  4 10 14 
T
 1 2 0   7   7 
A b   3 1 1  0    14  ;
 2 3 1  7   7 
T
 5 1 4   x1   7 
The normal system A Ax  A b is  1 11 10   x2    14  .
 4 10 14   x3   7 
T
T
 1 0 1  67 

7
The reduced row echelon form of the augmented matrix of the normal system is 0 1 1
.
6
0 0 0
0 
The general solution of the normal system is x1   67  t , x2  67  t , x3  t . All of these are least squares
solutions of Ax  b . The error vector is the same for all solutions:
 7   1 3 2    67  t   7   143   37 


  

e  b  Ax   0    2 1 3  67  t    0     67    67  .
 7   0 1 1  t   7   67    496 
6.4 Best Approximation; Least Squares
14.
33
1 1 3 2 1  11 12 7 
1 1  2   5
 3
 3







T
A A   2 4 10   1 4
3   12 120 84  ; A b   2 4 10   2    22  ;
 1 3 7   1  15
 1 3 7   1 10 7   7 84
59 
T
 11 12 7   x1   5
The normal system A Ax  A b is  12 120 84   x2    22  .
 7 84
59   x3   15
T
T
1
1 0
7

The reduced row echelon form of the augmented matrix of the normal system is 0 1  75
0 0
0


.
0 
2
7
13
84
The general solution of the normal system is x1  27  17 t , x2  13
 75 t , x3  t . All of these are least squares
84
solutions of Ax  b . The error vector is the same for all solutions:
 2  3 2 1  27  17 t   2   67   65 
   


 75 t    2     31     35  .
e  b  Ax   2    1 4
3  13
84
 1  1 10 7   t   1  116    65 
15.
 1 1
4
 1 3 2     1
 1 3 2  
14 3

T
A A
3 2  
1 
; A b
;
1 2 4    10 
3 21 
4  
 1 2


 2 4 
 3
T
14 3  x1   1
The normal system AT Ax  AT b is 
     .
 3 21  x2  10 
1 0
The reduced row echelon form of the augmented matrix of the normal system is 
0 1
17
95
143
285
, x2  143
is the unique least squares solution of Ax  b .
The solution of this system x1  17
95
285
92
  285

 1 1 17


 439 


95
By Theorem 6.4.2, projW b  Ax   3 2   143    285  .

 2 4   285   94
57 
This matches the result obtained using Theorem 6.4.4:

projW b  A A A
T

1
 1 1
4
1
 14 3   1 3 2   


A b   3 2  
1

3 21   1 2
4   


 2 4 
 3 
T
 1 1
4

 21 3   1 3 2   
1


  3 2 
1
 14  21   3  3   3 14    1 2
4   


 2 4 
 3
 1 1
4
 21 3  1 3 2   
1 


3 2 
1
3 14   1 2
4   
285 
 2 4  
 3

.

34
Chapter 6: Inner Product Spaces
 92 
1 

439 

285
 470 
 92 
  285 


439 


 285 


 94 
 57 
16.
1
 4 
5
 5 1 4     6 
5 1 4  
 42 0 

T
A A
  1 3  
 ; A b   1 3 2   2    4  ;
 1 3 2   4 2   0 14 

  3  


 
T
 42 0   x1   6 
The normal system AT Ax  AT b is 
     .
 0 14   x2   4 
1 0  17 
.
The reduced row echelon form of the augmented matrix of the normal system is 
2
0 1  7 
The solution of this system x1   17 , x2   27 is the unique least squares solution of Ax  b .
1   17   1
5
 

By Theorem 6.4.2, projW b  Ax   1 3     1 .
 4 2    27   0 
This matches the result obtained using Theorem 6.4.4:

projW b  A AT A
 Ab
1
T
1
5
4 
1
  42 0    5 1 4   


  1 3  
1

0 14    1 3 2   


 4 2 
 3
1
5
4 
0  5 1 4   
1 / 42


  1 3 
1
0
1 / 14   1 3 2   

 4 2 
 3
 1
  1
 0 
17.
We follow the procedure of Example 2.
 1 2 
For A   v1 | v 2    2 2  , we have
 1 4 
6.4 Best Approximation; Least Squares
 1 2 
 1
 1 2 1    12 
 1 2 1 
6 6 

T
A A
  2 2  
 and A u   2 2 4   6    6  ;

  1 

 2 2 4   1 4  6 24 
 


T
6 6   x1   12 
The normal system AT Ax  AT u is 
   
 . The reduced row echelon form of the
6 24   x2   6 
 1 0  73 
so that the least squares solution of Ax  u is
augmented matrix of the normal system is 
1
3
0 1
 1 2  7
 3
 3   
  73 


x   1  . Denoting W  span v1 , v 2  we obtain projW u  Ax   2 2   1    4  .
 3
 1 4   3   1
18.
2
1
Let A  
1

1
1 2 
2
1 1 1 
 2

1
0 1
. AT A   1 0 1 1 
1
1 0
 2 1 0 1 


1 1
1
1 2 
 7 4 6 
0 1 
3 3 ;
 4

1 0
  6 3 6 
1 1 
6 
1 1 1    30 
 2
3
AT u   1 0 1 1     21 .
9
 2 1 0 1 6   21
 
 7 4 6   x1   30 
3 3  x2    21 . The reduced row echelon form of the
The normal system A Ax  A u is  4
 6 3 6   x3   21
T
T
 1 0 0 6
augmented matrix of the normal system is 0 1 0 3 so that the least squares solution of Ax  u is
0 0 1 4 
2
6
1


x   3 . Denoting W  span v1 , v 2 , v 3  we obtain projW u  Ax  
1
 4 

1
19.
1
1  
1  
1 
Letting A    , we have P  A AT A AT     1 0    
0  
0  
0 


1
1 2 
7 
6  

0 1    2 
.
3 
1 0    9 
 4  
1 1    5 
1 
1 0  0  1 1 0 
1
 
1 
1 
1 0 
 0  11 0    0  1 0    0 0  . This matches the matrix in Table 3 of Section 1.8.
 
 


35
36
20.
Chapter 6: Inner Product Spaces
1
0  
0  
0 
Letting A    , we have P  A AT A AT      0 1   
1  
1  
1 


1
0 
0 1  1  1 0 1 
1
 
0 
0 
0 0 
 1  1 0 1   1   0 1   0 1  . This matches the matrix in Table 3 of Section 1.8.
 
 


21.
 1 0
 1 0
 1 0 0 
  1 0 


T
0
0
Letting A   0 0  , we have A A  



 
 0 0 1  0 1  0 1 
 0 1


 1 0
 1 0
 1 0 0
1
  1 0   1 0 0 
 1 0 0 



P  A A A A  0 0   
 0 0  
  0 0 0  .
 



0 1  0 0 1
0 0 1
0 1  
 0 1 
 0 0 1
This matches the matrix in Table 4 of Section 1.8.

22.
T

1
T
0 0 
0 0 
0 1 0  
1 0 


T
1 0   
Letting A   1 0  , we have A A  



 0 0 1  0 1  0 1 
 0 1


0 0 
0 0 
0 0 0 
1
  1 0   0 1 0  
0 1 0  




P  A A A A   1 0  
  0 0 1   1 0   0 0 1   0 1 0  .
0
1
 
  0 1 
  0 0 1
0 1  




This matches the matrix in Table 4 of Section 1.8.

T

1
T
 75 15   35
 5  75  0 5  45
1
 45   3   15

3 
5  2 
0
1
35
5
7
  15   17 
  18    18  .
 5   7 
23.
We use Theorem 6.4.6: x  R 1QT b 
24.
 1
 1 10   35 45 0     15 2   5   5 

7 
We use Theorem 6.4.6: x  R Q b 
.
 5 1 0 5  0 0 1  2  0 1  2  2 
 
25.
(a)
1
T
1
If x  s and y  t , then a point on the plane is  s, t , 5s  3t   s 1, 0, 5   t  0,1, 3  .
w1  1, 0,  5  and w 2   0,1, 3  form a basis for W (they are linearly independent since neither of
them is a scalar multiple of the other).
(b)
 1 0
Letting A   0 1 , Formula (11) yields
 5 3
1
 1 0 
 1 0 
1 T
  1 0 5 
  1 0 5 


T
P  A A A A   0 1  
0 1  


0 1 3
0 1 3
 5 3  
 5 3  


6.4 Best Approximation; Least Squares
 1 0
 1 0
1
  26 15   1 0 5 

10 15   1 0 5


  0 1  
  0 1   26 10 1 15 15 
 




15 10   0 1 3
15 26   0 1 3
 5 3  
 5 3 

26.
(a)
1
35
 1 0
 10 15 5
 0 1 10 15  1 0 5  1  15 26
3 .

 15 26  0 1 3 35 


 5 3 
 5 3 34 
W  span  2, 1,4  so that the vector  2, 1,4  forms a basis for W (its linear independence
follows from Theorem 4.4.2(b))
(b)
 2
Letting A   1 , Formula (11) yields
 4 
 2 
 2 
1 T




T
P  A A A A   1   2 1 4   1 
 4  
 4  


1
 2 1 4
8
 2
 4 2
1 
1


1 4  .
  1  21  2 1 4    2
21
 4 
 8 4 16 
27.
The reduced row echelon form of the augmented matrix of the given homogeneous system is
1 0

0 1
1
2
1
2
 12
1
2
0
1
1
1
1
 so that the general solution is x1   2 s  2 t , x2   2 s  2 t , x3  s , x4  t .
0
The solution space W is spanned by vectors  1, 1,2,0  and 1, 1,0,2  .
 1 1
 1 1
 then follow the procedure of
We construct the matrix with these vectors as its columns A  
 2 0


 0 2
Example 2 in Section 6.4.
5
 1 1
 


 1 1 2 0   1 1 6 0 
  1 1 2 0   6   3 
T
T

A A

 ; A u   1 1 0 2   7    3  ;


 
 1 1 0 2   2 0   0 6 
 


2 
 0 2
37
38
Chapter 6: Inner Product Spaces
6 0   x1  3
The normal system AT Ax  AT u is 
      . The reduced row echelon form of the augmented
 0 6   x2   3 
1 0
matrix of the normal system is 
0 1
1
2
1
2

 12 
so
that
the
least
squares
solution
of
is
Ax

u

x
 1  . We

2

 1 1  1   0 
 1 1    1
 2    .
obtained projW u  Ax  
 2 0   1   1

   
 0 2   2   1
28.
a 
Letting A   b  , Formula (11) in Section 6.4 yields
 c 

P  A AT A
 A
1
T
a  
a  




  b    a b c   b  
 c  
 c  
1
a b c
a 
1
  b   a 2  b2  c 2   a b c 
 c 
 a 2 ab ac 
1


ab b 2 bc  .
 2
a  b2  c2 
 ac bc c 2 


29.
Let W be the row space of A . Since W is also the column space of AT , by Formula (11) we have
P  AT
30.
 A  A   A   A  AA  A .
T
T
1
T
T

T
1
 A b   A A  A Ax  x .
1
T
T
1
T
Since b is orthogonal to the column space of A , it follows that AT b  0 . By Theorem 6.4.4, the least

squares solution is x  AT A
32.
T
Multiplying both sides of Ax  b on the left by AT yields AT Ax  AT b . By Theorem 6.4.4, the least
squares solution is AT A
31.
T
 A b   A A 0  0 .
1
T
T
1
Partitioning A into columns A   u1 ||u n  , we have AT A   AT u1 ||AT u n  .
Assume the columns of A are linearly dependent, i.e., there exist scalars k1 ,, kn , not all equal 0, such that
k1u1    kn u n  0.
6.4 Best Approximation; Least Squares
39
Multiplying both sides on the left by AT we obtain
k1 AT u1    kn AT u n  0.
Since scalars k1 ,, kn are not all equal 0, that means the columns of AT A are linearly dependent. However,
this contradicts the assumption of the invertibility of AT A .
Consequently, the columns of A must be linearly independent.
True-False Exercises
(a)
True. AT A is an n  n matrix.
(b)
False. Only square matrices have inverses, but AT A can be invertible when A is not a square matrix.
(c)
True. If A is invertible, so is AT , so the product AT A is also invertible.
(d)
True. Multiplying both sides of Ax  b on the left by AT yields AT Ax  AT b .
(e)
False. By Theorem 6.4.2, the normal system AT Ax  AT b is always consistent.
(f)
True. This follows from Theorem 6.4.2.
(g)
False. There may be more than one least squares solution as shown in Example 2.
(h)
True. This follows from Theorem 6.4.4.
6.5 Mathematical Modeling Using Least Squares
1.
1 0 
1 1 1 
3 3 
We have M  1 1  , M T  
,
, MT M  

3 5 
0 1 2


1 2 
M M  
1
T

v  M M
*
T
1
15  9

1
 5 3  1  5 3 
 3 3   6  3 3  , and




0 
1  5 3 1 1 1      12 
2  
M y 
6  3 3 0 1 2     27 
 7 
T
so the least squares straight line fit to the given data points is y   12  27 x .
2.
1
1
We have M  
1

1
0
1
2 
4 8
 22 8 
, MT M  
,  M T M   241 
,

3
 8 22 
 8 4 

3
40
Chapter 6: Inner Product Spaces
1 
 
1
1  22 8   1 1 1 1  0   23 
and v*  M T M M T y 
   so the least squares straight line fit to the
24  8 4  0 2 3 3  1   61 
 
2 
given data points is


y  23  61 x .
3.
1

1
We have M  
1

1
2
3
5
6
22  1
 
32  1

52  1
 
6 2  1
2 4
3 9 
,
5 25

6 36 
74 
 4 16

M M  16 74 376  ,
74 376 2018 
T
M M
T
1
 1989 1116 135
1 
649 80  , and

1116
90 
 135
80 10 
 0
 1989 1116 135  1 1 1 1  
 2
1
10   
1 



T
T
*

649 80   2 3 5 6 
v  M M M y
1116
 5
 48   
90 
 135
10   4 9 25 36  
80
  3
 76 


so the least squares quadratic fit to the given data points is y  2  5 x  3 x 2 .
4.
1
1
We have M  
1

1
M M
T
1
1
0
1
2
1
4 4 6
0 
T
, M M   4 6 10  ,
1
 6 10 18 

4
1
 1  32
2
 3

9
  2
2  , and
2
 12 2
1
 2 
1
1 1 1 1     1
 1  32
2
1
1  


9
v*  M T M M T y    32
2  1 0 1 2       25  so the least squares quadratic fit to the
2
 0
1
1 1 0 1 4     25 
 2 2
 4


given data points is y  1  25 x  25 x 2 .
5.
With the substitution X  1x , the problem becomes to find a line of the form y  a  b  X that best fits the
data points (1, 7),  13 , 3  ,  61 , 1 .
6.5 Mathematical Modeling Using Least Squares
1 1 
3


We have M  1 13  , M T M   3
2
1 61 

v  M M
*
T
 M y
1
T
1
42
3
2
41
36
41

1
 41 54 
T
1
 ,  M M   42 
 , and
 54 108 

7 
 41 54  1 1 1    215 
5
48
 54 108  1 1 1  3    48  . The line in terms of X is y  21  7 X , so the

  3 6  1   7 
 
required curve is y  215  748x .
6.
With the substitution X  x , the problem becomes to
find a line of the form y  a  b  X that best fits the data
points
 3,  ,  7,  ,  10,3 .
3
2
5
2
1

We have M  1

1
3
 23 

 
7  , y   25  , and

 3
10 


3
MT M  
 3  7  10

v*  M T M
3  7  10 
 . We conduct the remaining computations approximately:
20

0.316
 M y   1.054  . The line in terms of X is approximately y  0.316  1.054 X , so the
1
T


required curve is approximately y  0.316  1.054 x .
7.
The two column vectors of M are linearly independent if and only neither is a multiple of the other. Since
all the entries in the first column are equal, the columns are linearly independent if and only if the second
column has at least two different entries, i.e., if and only if at least two of the numbers x1 , x2 , ..., xn are
distinct.
True-False Exercises
(a)
False. There is only a unique least squares straight line fit if the data points do not all lie on a vertical line.
(b)
True. If the points are not collinear, there is no solution to the system.
(c)
True.
(d)
False. The line minimizes the sum of the squares of the data errors.
6.6 Function Approximation; Fourier Series
1.
1  x  dx  1  x  x2  0  2  2
0
a0  1 
2
2
2
Using integration by parts to integrate both xcos  kx  and xsin  kx  we obtain
42
Chapter 6: Inner Product Spaces
1  x  cos  kx  dx   1kx sin  kx   k 1 cos  kx   0  0 and
0
ak  1 
2
2
2
1  x  sin  kx  dx    1kx cos  kx   k 1 sin  kx    0   2k
0
bk  1 
(a)
2
2
2
1  x  20  a1 cos x  a2 cos  2 x   a3 cos  3 x   b1 sin x  b2 sin  2 x   b3sin  3 x  yields
a
1  x  1    0 cos x  0 cos  2 x   21 sin x  22 sin  2 x   1    2sin x  sin 2 x
(b)
1  x  20  a1 cos x  a2 cos  2 x     an cos  nx   b1 sin x  b2 sin  2 x     bn sin  nx  yields
a
1  x  1    21 sin x  22 sin  2 x     2n sin  nx 
2.
2
a0  1  x 2 dx  3x 
3
0
2
0
 83
2
Using integration by parts twice to integrate both x 2 cos  kx  and x 2 sin  kx  we obtain

2

2
ak  1  x 2 cos  kx  dx  kx sin  kx   k22x cos  kx   k 32 sin  kx    k42 and
0
0
2

2

2
bk  1  x 2 sin  kx  dx   kx cos  kx   k22x sin  kx   k 32 cos  kx     4k
0
0
(a)
2
x 2  20  a1 cos x  a2 cos  2 x   a3 cos  3 x   b1 sin x  b2 sin  2 x   b3sin  3 x  yields
a
x 2  43  4 cos x  cos  2 x   49 cos  3 x   4 sin x  2 sin  2 x   43 sin  3 x 
2
(b)
x 2  20  a1 cos x  a2 cos  2 x     an cos  nx   b1 sin x  b2 sin  2 x     bn sin  nx  yields
a
x 2  43  142 cos x  242 cos  2 x     n42 cos  nx   41 sin x  42 sin  2 x     4n sin  nx 
2
3.
(a)
 
Let us denote W  span 1, e x . Applying the Gram-Schmidt process to the basis u1  1 and u 2  e x
we obtain an orthogonal basis
1
v1  1 , v 2  u 2 
1
1
v1  e x   01 1  e x  10 1  e x   e  11  e x  e  1 .
2
v1
x 0
 01dx
 u 2 , v1 


2
1
ex 
e x dx


Since  e x  e  1 dx   1  2e  e2  2e x  2ee x  e2 x dx
0
0

 x  2ex  e2 x  2e x  2e x 1  12 e2 x
q1  v1 
v
1
1
1
 01dx

1

1
x0
    2e  e   e  1 3  e  , an orthonormal basis is
 1 , q 2  v2 
1
3
2
0
e x  e 1
v
2
1
2
1
2
 e 1 3  e 
2
1
2
.
The least squares approximation to f  x   x from W is
 
1
1





projW f   f , q1  q1   f , q2  q2   x dx   e12 3e  x e x  e  1 dx e x  e  1
0

 
1
0

 12   e 12 3 e  xe x  e x  x2e  x2  e x  e  1  12   e 12 3 e    2e  23  e x  e  1
0
2
2
 12  e ee11  12  ee1  1  ee1  12 .
x
x
x
6.6 Function Approximation; Fourier Series
4.
1
 
(b)
The mean square error is  x  ee1  12
(a)
Let us denote W  span 1, x .
0
 dx 
2
x
7 e 19
12 e 12
 0.00136.
Applying the Gram-Schmidt process to the basis u1  1 and u 2  x we obtain an orthogonal basis
1
x2 
1
2 
 xdx
v1  1 , v 2  u 2  2 v1  x  10 1  x  10  x  12 and an orthonormal basis
v1
x 0
 01dx
 u 2 , v1 
q1  v1 
v
1
1
1
 01dx

x  12
 1 , q 2  vv2 
1

1
x0
1
 0 x  12  dx
2
2
 2 3  x  12  .
x  12

1
 x  12  
3
3

0
The least squares approximation to f  x   e x from W is


projW f   f , q1  q1   f , q2  q2   e x dx   2 3  x  12  e x dx 2 3  x  12 
1
0

1
0

1
 e  1   2 3 xe x  32 e x   2 3  x  12   4e  10  6  3  e  x.
0

5.


(b)
The mean square error is  e x   4e  10  6  3  e  x  dx   572  20e  72e  0.00394.
(a)
Let us denote W  span 1, x, x 2 .
1
0

2
2

Applying the Gram-Schmidt process to the basis u1  1 , u 2  x , and u 3  x 2 we obtain an
1
x2 
1
2 
 xdx
orthogonal basis v1  1 , v 2  u 2  ||v ||2 v1  x  11 1  x  11  x  0  x ,
x 1
1
1
dx
 1
 u 2 , v1 
1
1
1
x3 
1
x4 
x dx
x dx
3 
4 
v 3  u 3  ||v ||2 v1  ||v ||2 v 2  x   11 1   11 2 x  x 2  11  3 11 x  x 2  13 ,
x
1
x 
1
2
 11dx
 1x dx
3 
 1
 u 3 , v1 
 u3 ,v2 
2
and an orthonormal basis q1  ||vv11 || 
q2 
v2
||v 2 ||

x
1
 1x dx
2

x
1
x3 
3 
 1

3
2
2

1
1
 1
1dx
x , q3 

v3
||v3 ||
3
 12 ,
1
x 1
1
x2 
1 
1
3
1
2
 1 x  3  dx
2

1
3
2 2
3 5
x2 
.
The least squares approximation to f  x   sin  x from W is
projW f   f , q1  q1   f , q 2  q 2   f , q 3  q 3

1

1
  x   sin x dx   x  
 14  sin  x dx  32  x sin  x dx x  458 
1

 0  32  x cos  x  sin 2 x
(b)
8.
1
 x0  x 
1
3
2
1
1
1
2

3x

2
1
3
.
The mean square error is   sin πx  3x  dx  1  62  0.392.
1
1
We use Formulas (8) in Section 6.6
  x  dx   2x  
0
a0  1 
2
2
2
0
 0.
2
2
1
3
43
44
Chapter 6: Inner Product Spaces
Using integration by parts to integrate both   x  cos  kx  and   x  sin  kx  we obtain
ak  1 
2
0
  x  cos  kx  dx   kx sin  kx   k 1 cos  kx    0  0 and
2
2
  x  sin  kx  dx    kx cos  kx   k 1 sin  kx   0  2k .
0
2
2
bk  1 
2
The Fourier series for   x over the interval  0,2  is  k 1 2k sin  kx  .

9.
1, 0  x  
Let f  x   
.
0,   x  2
a0  1 
2
ak  1 
2
bk  1 
2
0
0
0

f ( x ) dx  1  dx  1
0

f ( x )cos kx dx  1  cos kx dx  0
0
π
f ( x )sin kx dx  π1  sin kx dx  k1 (1  ( 1)k )
0
So the Fourier series is 12   k 1 k1 (1  ( 1)k )sin kx.

10.
All the coefficients of the series are zero except for b3  1 , i.e., sin  3 x  is its own Fourier series.
True-False Exercises
(a)
False. The area between the graphs is the error, not the mean square error.
(b)
True.
(c)
True.
(d)
False. ||1 ||  1, 1   12 dx  2  1 .
(e)
True.
2
0
Chapter 6 Supplementary Exercises
1.
(a)
Let v   v1 , v2 , v3 , v4  .
 v, u1   v1 ,  v, u 2   v2 ,  v, u 3   v3 ,  v, u 4   v4
If  v, u1    v, u 4   0, then v1  v4  0 and v   0, v2 , v3 , 0  . Since the angle  between u and v
satisfies cos  ||uu||,||vv|| , v making equal angles with u 2 and u 3 means that v2  v3 . In order for the
angle between v and u 3 to be defined || v ||  0. Thus, v = (0, a, a, 0) with a  0 .
(b)
As in part (a), since  x, u1    x, u 4   0, x1  x4  0.
Since || u 2 ||  || u 3 ||  1 and we want || x ||  1, the cosine of the angle between x and u 2 is
cos 2   x, u 2   x2 and, similarly, cos3   x, u 3   x3 , so we want x2  2 x3 , and
x   0, x2 , 2 x2 , 0.
Supplementary Exercises
|| x ||  x22  4 x22  5 x22  x2

If || x ||  1, then x2   15 , so x   0,
3.
u
Recall that if U   1
u3
(a)
u2 
v
and V   1

u4 
 v3
1
5
,
2
5
45
5.

, 0 .
v2 
, then U , V   u1v1  u2 v2  u3 v3  u4 v4 .
v4 
If U is a diagonal matrix, then u2  u3  0 and U , V   u1v1  u4 v4 .
For V to be in the orthogonal complement of the subspace of all diagonal matrices, then it must be the
case that v1  v4  0 and V must have zeros on the main diagonal.
(b)
If U is a symmetric matrix, then u2  u3 and U , V   u1v1  u2  v2  v3   u4 v4 .
Since u1 and u4 can take on any values, for V to be in the orthogonal complement of the subspace of
all symmetric matrices, it must be the case that v1  v4  0 and v2  v3 , thus V must be skewsymmetric.
5.
 a , ..., a  and v   , ..., . By the Cauchy-Schwarz inequality,
 u  v  ( 1

1 )  || u || || v || or n   a    a       .


Let u 
1
1
a1
n
2
2
1
an
2
2
2
1
n
n terms
7.
1
a1
1
an
Let x   x1 , x2 , x3  .
 x, u1   x1  x2  x3
 x, u 2   2 x1  x2  2 x3
 x, u 3    x1  x3
 x, u 3   0   x1  x3  0, so x1  x3 . Then  x, u1   x2 and  x, u 2    x2 , so x2  0 and x   x1 , 0, x1  .
Then || x ||  x12  x12  2 x12  x1
2.
If || x ||  1 then x1   12 and the vectors are 
8.
 , 0, .
1
2
1
2
By inspection, the weighted Euclidean inner product  u, v  u1v1  12 u2 v2  13 u3 v3    1n un vn satisfies
 v1 , v1    v 2 , v 2    v 3 , v 3      v n , v n   1 and  v i , v j   0 for all i  j .
9.
For u   u1 , u2  , v   v1 , v2  in R 2 , let  u, v  au1v1  bu2 v2 be a weighted inner product.
If u = (1, 2) and v = (3, 1) form an orthonormal set, then || u ||2  a(1)2  b(2)2  a  4b  1,
|| v ||2  a(3)2  b(1)2  9a  b  1, and  u, v  a 1 3   b  2  1  3a  2b  0.
 1 4
1 
a   


1    1  .
This leads to the system  9
b
 3 2    0 
46
Chapter 6: Inner Product Spaces
 1 4 1
1 0 0 


1 1 reduces to  0 1 0  , the system is inconsistent and there is no such weighted inner
Since  9
 0 0 1 
 3 2 0 
product.
11.
(a)
Let u1   k, 0, 0, ..., 0  , u 2   0, k, 0, ..., 0  , ..., u n   0, 0, 0, ..., k  be the edges of the ‘cube’
in R n and u = (k, k, k, ..., k) be the diagonal.
 u ,u 
Then || u i ||  k , || u ||  k n , and  u i , u  k 2 , so cos  ||ui i|| ||u||  k kk n  1n .
(b)
13.
As n approaches ,
1
n
2


approaches 0, so  approaches π2 .
Recall that u can be expressed as the linear combination u  a1v1    an v n where ai   u, v i  for

 u, v 
i = 1, ..., n. Since || vi ||  1 , we have cos2 α i  ||u|| ||vii ||
2
2
2
1
2
n
   
2
ai
||u ||
2
ai2
a12  a22  an2
.
 an
Therefore cos2 α1    cos2 α n  aa12  aa22 
 1.
 a 2
15.
To show that (W  )  W , we first show that W  (W  ) . If w is in W, then w is orthogonal to every vector
in W  , so that w is in (W  ) . Thus W  (W  ) .
To show that (W  )  W , let v be in (W  ) . Since v is in V, we have, by the Projection Theorem, that
v  w1  w 2 where w1 is in W and w 2 is in W  . By definition,  v, w 2    w1 , w 2   0. But
 v, w 2    w1  w 2 , w 2    w1 , w 2   w 2 , w 2    w 2 , w 2 
so that  w 2 , w 2   0. Hence w 2  0 and therefore v  w1 , so that v is in W. Thus (W  )  W .
17.
 1 1
1
 1 2 4     4 s  3
 1 2 4
 21 25


T
T
T
1 
A   2 3 , A  
, A A
, A b
1 3 5   5s  2 
1 3 5 
25 35 



s 
 4 5
21 25  x1   4 s  3
The associated normal system is 
   
.
25 35   x2  5s  2 
 21 25  1   71   4 s  3 
If the least squares solution is x1  1 and x2  2, then 
      
.
 25 35   2   95   5s  2 
The resulting equations have solutions s = 17 and s = 18.6, respectively, so no such value of s exists.
18.
Using the trigonometric identity sin  sin   12 cos      12 cos     we obtain
 f , g  12  cos   p  q  x  dx  12  cos   p  q  x  dx where both p  q and p  q are nonzero integers.
2
2
0
0
Substituting u   p  q  x in the first integral, and t   p  q  x in the second integral yields
 f , g  12
sin   p  q  x 
pq
2
2
  1 sin   p  q  x    0  0  0 since sin  m   0 for any integer m .
pq
2
 0
 0
7.1 Orthogonal Matrices
CHAPTER 7: DIAGONALIZATION AND QUADRATIC FORMS
7.1 Orthogonal Matrices
1.
(a)
 1 0  1 0
 1 0  1 0
AAT  
 I and AT A  

  I therefore A is an orthogonal matrix;



 0 1  0 1
 0 1  0 1
 1 0
A 1  A T  

 0 1
(b)
 12
AAT   1
 2
 12   12
 1
1
2
   2

 12
T

I
A
A

and

 1
1

  2
2
1
2
 12
A is an orthogonal matrix; A1  AT   1
  2
2.
(a)
  12
 1
1
2
  2
1
2
 12 
  I therefore
1
2



1
2

1
2
1 0  1 0 
1 0  1 0 
AAT  
 I and AT A  

  I therefore A is an orthogonal matrix;



0 1  0 1 
0 1  0 1 
1 0 
A 1  AT  

0 1 
3.
(b)
 15
AAT   2
 5
  15
 2
1

5
 5
(a)
|| r1 ||  02  12 
(b)
1
 1
2
6

AAT   0  26
 1
1
 2
6
 1 45 
  4
  I therefore A is not an orthogonal matrix.
2
1

5


5
2
5
2
5
  
1
2
2
3
2
 1 so the matrix is not orthogonal.
  1
 2
1
  16
3
 1
1
3
  3
1
3
0
 26
1
3

 1

 2
T
1
and
I
A
A



 16
6

 1
1
 3
3

 1
 2
1
T
therefore A is an orthogonal matrix; A  A   16
 1
 3
4.
(a)
0
 26
1
3
1
  1
6
 2
1
2
0



6
6
 1
1
1
3
6
  2
0
1
2
1
2
 26
1
3


1

6

1
3

1
2
 12
1
AAT  AT A  I therefore A is an orthogonal matrix; A1  AT   21
2
1
 2

1
2
5
6
1
6
1
6

1
2
1
6
1
6
5
6



 


1
2
1
6
5
6
1
6


1
I
3

1
3

1
3
1
2
Chapter 7: Diagonalization and Quadratic Forms
(b)
5.
     
|| r2 || 
1
3
 45  259

4
AT A   0
5
  35  12
25
2
1 2
2
12
25
3
5
16
25
  45
 9
   25
  12
25
row vectors of A , r1   45
7
12
 1 . The matrix is not orthogonal.
 35 

 12
I;
25 
16 
25 
0
4
5
3
5
0  35  , r2    259
12
 12
25 
 , r3   25
4
5
3
5
16
25
 , form an orthonormal set since
r1  r2  r1  r3  r2  r3  0 and || r1 ||  || r2 ||  || r3 ||  1 ;
 45 
0 
  35 
 
 
 
, form an orthonormal set since
column vectors of A , c1    259  , c 2   45  , c 3    12
25 
16
3
12
 25 
 5 
 25 
c1  c 2  c1  c 3  c 2  c 3  0 and || c1 ||  || c 2 ||  || c 3 ||  1 .
6.
 23   31

 13   23
2 2

3 3

row vectors of A , r1   13
2
3
 13

AT A   23
 23

2
3
2
3
1
3

2
3
2
3
1
3
2
3
1
3
2
3
2
3


I

 , r2   23
 23
1
3
 , r3    23
 13
2
3
 , form an orthonormal set since
r1  r2  r1  r3  r2  r3  0 and || r1 ||  || r2 ||  || r3 ||  1 ;
 13 
 23 
 23 
 2
 
 2
column vectors of A , c1   3  , c 2    3  , c 3   13  , form an orthonormal set since
  23 
  13 
 23 
c1  c 2  c1  c 3  c 2  c 3  0 and || c1 ||  || c 2 ||  || c 3 ||  1 .
7.
 45

TA  x     259
 12
25
0
4
5
3
5
 35   2    235 
    18 
3   25  ; || TA (x) || 
 12
25  
16  

5  101
25  
25 
529
25
 324
 10201
 38
625
625
equals || x ||  4  9  25  38
8.
 13

TA  x    23
  23


2
3
2
3
1
3
2
3
1
3
2
3
  0   103 
   2
  1   3  ; || TA (x) || 
  4   37 
100
9
 49  499  17 equals || x ||  0  1  16  17
9.
Yes, by inspection, the column vectors in each of these matrices form orthonormal sets. By Theorem 7.1.1,
these matrices are orthogonal.
10.
No. Each of these matrices contains a zero column. Consequently, the column vectors do not form
orthonormal sets. By Theorem 7.1.1, these matrices are not orthogonal.
11.
 2(a 2  b 2 ) 0

a  b b  a 
, so a and b must satisfy a 2  b2  12 .
Let A  
. Then AT A  

2
2 
2( a  b ) 
a  b b  a 
0
7.1 Orthogonal Matrices
12.
13.
All main diagonal entries must be 1 in order for the column vectors to form an orthonormal set.
(a)
 x
x  1
orthogonal, P 1  PT therefore    P 1     2
 y 
 y    23
14.
  2   1  3 3 
   

1
  6   3  3 
2
 23   5   25  3 

   
1
 2  1  25 3 
2
(b)
(a)
cos 3
Formula (4) in Section 7.1 yields the transition matrix P   34
 sin 4
1
 sin 34    2

cos 34   12

 12 
;
 12 
  12
 x 
1  x 
since P is orthogonal, P  P therefore    P     1
 y 
 y    2
  2   4 2 
   

 12   6   2 2 
(b)
1
x
 x   2
Using the matrix P we obtained in part (a),    P     1
 y
 y  2
 12   5    72 



 12  2   32 
(a)
Following the method of Example 6 in Section 7.1 (also see Table 7 in Section 8.6), we use the
cos 4

orthogonal matrix P   sin 4
 0
1
 x 
x  2
 y   P 1  y     1
 
   2
 z 
 z   0

16.
 23 
 ; since P is
1
2

3
2
x
 x  12
Using the matrix P we obtained in part (a),    P    
 y
 y   23
1
15.
 sin 3   12

cos 3   23
cos 
Formula (4) in Section 7.1 yields the transition matrix P   3
 sin 3
T
 sin 4
cos 4
0
1
0  2
 
0    12
1   0

 12
1
2
0
1
2
0

0  to obtain

1
0   1  12 

 
1
0   2    32 
2

 
0 1  5  5 
1
2
(b)
5
1
1
x
 x   2  2 0   1   2 




 6 
7
1
Using the matrix P we obtained in part (a),  y   P  y    12
0


2
2
 

  3 
 z 
 z   0
0 1    3

(a)
We follow the method of Example 6 in Section 7.1, with the appropriate orthogonal matrix obtained
0
1

from Table 7 in Section 8.6: P  0 cos 34
0 sin 34
0  1
0
0



 sin 34   0  12  12 


1
cos 34  0
 12 
2

0
0   1  1
 x 
x  1

  

 
1  
1
1
 2    32 
2 
 y   P  y   0  2
 z 
 z  0  1  1   5   7 

2
2

 2 
3
4
17.
Chapter 7: Diagonalization and Quadratic Forms
(b)
0
0   1  1
x
 x  1
 y   P  y    0  1  1   6     3 

Using the matrix P we obtained in part (a),  
2
2 
  
  2




1
 12   3  92 
 z 
 z  0
2
(a)
We follow the method of Example 6 in Section 7.1, with the appropriate orthogonal matrix obtained
0 sin 3   12 0
 
1
0  0 1
0 cos 3    3 0
 2
 cos 3

from Table 7 in Section 8.6: P   0
 sin 3
 x
 x   12
 y   P 1  y    0
 
  

 z 
 z   23
(b)
18.


0
1
2
3
2
0  23   1   12  5 2 3 



1 0   2    2 


1   5
0
   23  25 
2 
x
 x   12 0

Using the matrix P we obtained in part (a),  y   P  y    0 1

 z 
 z    23 0
  1  12  3 2 3 



0   6    6 


1   3 
   23  23 
2
3
2
 cos 
If B  u1 , u 2 , u 3  is the standard basis for R and B  u1 , u2 , u3  , then [ u1 ]B  
0  ,
 sin 
3
0 
 sin 
 cos 0 sin 




[ u2 ]B  1  , and [ u 3 ]B  
0  . Thus the transition matrix from B to B is P  
0 1
0  , i.e.,
 0 
 cos 
 sin 0 cos 
x
 x 
 cos 0 sin 
 y   P  y  . Then A  P 1  
0 1
0  .
 
 

 z 
 z 
 sin 0 cos 
19.
1 
If B  u1 , u 2 , u 3  is the standard basis for R and B  u1 , u2 , u3  , then [ u1 ]B   0  ,
 0 
3
0
0
0
0


1





[ u 2 ]B   cos  , and [ u 3 ]B   sin  , so the transition matrix from B to B is P  0 cos sin 
 sin 
 cos 
0 sin cos 
0
0
1

and A  0 cos sin  .
0 sin cos 
7.1 Orthogonal Matrices
20.
5
We obtain the relevant orthogonal matrices using the formulas in Table 7 of Section 8.6:
cos 3
 x
x
 y   P 1  y  with P   sin 
1
3

  1  
 0
 z 
 z 
 cos 4
 x 
 x 
 y   P 1  y  with P   0
2
2  

 
  sin 4
 z 
 z 
 sin 3
cos 3
0
1
0  2


0    23
1   0

 23
1
2
0
1
0 sin 4   2
 
1
0  0
0 cos 4    12

0

0  and
1 



0 .
1 
2
0
1
2
1
0
 x  
x


Therefore, the matrix A such that  y   A  y  is obtained from
 z  
 z 
 12 0  12   12


A  P21 P11   0 1
0    23
1 0
1 
2
 0
 2
21.
1
0  2 2


1
0     23
2

0 1  1
  2 2
3
2 2
3
2
1
2
3
2 2
 12 

0

1

2

(a)
Rotations about the origin, reflections about any line through the origin, and any combination of these
are rigid operators.
(b)
Rotations about the origin, dilations, contractions, reflections about lines through the origin, and
combinations of these are angle preserving.
(c)
All rigid operators on R2 are angle preserving. Dilations and contractions are angle preserving
operators that are not rigid.
22.
No. If A is orthogonal then by part (c) of Theorem 7.1.3, u  v  0 implies Au  Av  0 .
23.
(a)
Denoting p1  p1  x   13 , p 2  p2  x   12 x , and p3  p3  x  
3
2
x2 
we have
2
3
   1     3   
 p, p   p  1 p  1  p  0  p  0   p 1 p 1  1    1 0    3     2
 p, p   p  1 p  1  p  0  p  0   p 1 p 1  1    1      3   
q, p   q  1 p  1  q  0  p  0   q 1 p 1   3     0     1    
 q, p   q  1 p  1  q  0  p  0   q 1 p 1   3      0  0   1    2 2
q, p   q  1 p  1  q  0  p  0   q 1 p 1   3      0      1    
 p, p1   p  1 p1  1  p  0  p1  0   p 1 p1 1  1
1
3
1
3
2
2
2
2
1
2
3
3
3
3
1
6
2
6
1
1
1
1
1
3
1
3
2
2
2
2
1
2
3
3
3
3
1
6
 p S    p, p1 ,  p, p2 ,  p, p3     53 , 2, 23 
 q S   q, p1 , q, p2 ,  q, p3      23 ,2 2,  23 
5
3
1
3
1
2
1
6
1
3
2
3
2
3
1
2
2
6
1
6
2
3
6
Chapter 7: Diagonalization and Quadratic Forms
(b)
|| p || 
   2   
5
3
2
2
2
3
2
25
3
 2  23  11
     2  2 2        2   21
 p, q        2  2 2          4   0
d  p, q  
5
3
5
3
25.


AT A  AAT  I n  xT2x
26.
2
3
2
3
2
3
2
3
2
2
3
49
3
8
3
10
3
2
3
  I   xx   I   x  x  I  xx  A therefore
xx  I  xx   I  xx  xx 
xx xx
 
We have AT  I n  xT2x xxT
 I n  xT4x xxT 
2
2
2
3
T
T
n
2
xT x
2
xT x
T
T
n
T
T
n
2
xT x
T
n
2
xT x
T
T
T
T
n
2
xT x
T
4
xT x
2
xT x
T
T
T
2
  T
xx  I n  x4x xxT  x4x xxT  I n
 
4 xT x
xT x
2
T
T
 cos 
Every unit vector in R2 can be expressed as 
 for some angle  . Thus for a 2  2 matrix A to have
 sin  
 cos
orthonormal columns, we must have A  
 sin 
cos  
for some  and  such that
sin  
 cos   cos  

3
 sin     sin    cos cos   sin  sin   cos      0 so either     2  2k or     2  2k .

 

 cos
Trigonometric identities imply that either A  
 sin 
27.
(a)
 cos
Multiplication by A  
 sin 
sin  
 cos
or A  

 cos 
 sin 
 sin  
.
cos 
 sin  
is a rotation through  .
cos 
In this case, det  A   cos2   sin 2   1 .
 cos
The determinant of A  
 sin 
sin  
is det  A    cos2   sin 2   1 .
 cos 
 cos
We can express this matrix as a product 
 sin 
 cos
 sin 

(b)
sin    cos

 cos   sin 
 sin    1 0 
. Multiplying by
cos   0 1
sin  
is a reflection about the x -axis followed by a rotation through  .
 cos 
 cos
Multiplication by 
 sin 
sin  
is a reflection about the line through the origin that makes the
 cos 
angle 2 with the positive x -axis.
28.
(a)
  12
Multiplication by A   1
  2
 cos 54

 12   sin 54
1
2
 sin 54 
5
 is a rotation through 4 .
cos 54 
7.1 Orthogonal Matrices
(b)
 1
Multiplication by A   2
 23
 cos 23

2
1
  sin 3
2
3
2
7
sin 23 
 is a reflection about the x -axis followed by a
 cos 23 
rotation through 23 . Also, multiplication by A is a reflection about the line through the origin that
makes the angle 3 with the positive x -axis.
29.
Let A and B be 3  3 standard matrices of two rotations in R3 : TA and TB , respectively.
The result stated in this Exercise implies that A and B are both orthogonal and det  A   det  B   1 .
The product AB is a standard matrix of the composition of these rotations TA  TB .
By part (c) of Theorem 7.1.2, AB is an orthogonal matrix.
Furthermore, by Theorem 2.3.4, det  AB   det  A  det  B   1 .
We conclude that TA  TB is a rotation in R3 .
(One can show by induction that a composition of more than two rotations in R3 is also a rotation.)
30.
It follows directly from Definition 1 that the transpose of an orthogonal matrix is orthogonal as well (this is
also stated as part (a) of Theorem 7.1.2). Since rows of A are columns of AT , the equivalence of
statements (a) and (c) follows from the equivalence of statements (a) and (b) which is shown in the book.
True-False Exercises
(a)
False. Only square matrices can be orthogonal.
(b)
False. The row and column vectors are not unit vectors.
(c)
False. Only square matrices can be orthogonal. (The statement would be true if m  n .)
(d)
False. The column vectors must form an orthonormal set.
(e)
True. Since AT A  I for an orthogonal matrix A, A must be invertible (and A1  AT ).
(f)
True. A product of orthogonal matrices is orthogonal, so A2 is orthogonal; furthermore,
 
det A2  (detA)2  (1)2  1.
(g)
True. Since || Ax ||  || x || for an orthogonal matrix.
(h)
True. This follows from Theorem 7.1.3.
7.2 Orthogonal Diagonalization
1.
  1 2
  2  5
2
 4
The characteristic equation is  2  5  0 and the eigenvalues are  = 0 and  = 5.
Both eigenspaces are one-dimensional.
8
Chapter 7: Diagonalization and Quadratic Forms
 1
2.
4
2
4
2
 1
2   3  27  54     6  (  3)2
2
 2
The characteristic equation is  3  27  54  0 and the eigenvalues are  = 6 and  = 3.
The eigenspace for  = 6 is one-dimensional; the eigenspace for  = 3 is two-dimensional.
 1
3.
1
1
1
1
  1 1   3  3 2   2    3 
1   1
The characteristic equation is  3  3 2  0 and the eigenvalues are  = 3 and  = 0.
The eigenspace for  = 3 is one-dimensional; the eigenspace for  = 0 is two-dimensional.
4.
  4 2
2
2
  4 2   3  12 2  36  32     8  (  2)2
2
2
 4
The characteristic equation is  3  12 2  36  32  0 and the eigenvalues are  = 8 and  = 2.
The eigenspace for  = 8 is one-dimensional; the eigenspace for  = 2 is two-dimensional.
 4
5.
4
4
 4
0
0
0
0
0
0
0
0

0
  4  8 3   3    8 
0 
The characteristic equation is  4  8 3  0 and the eigenvalues are  = 0 and  = 8.
The eigenspace for  = 0 is three-dimensional; the eigenspace for  = 8 is one-dimensional.
 2
6.
1
1
 2
0
0
0
0
0
0
0
0
 2
1
 2
1
  4  8 3  22 2  24  9  (  1)2 (  3)2
The characteristic equation is  4  8 3  22 2  24  9  0 and the eigenvalues are  = 1 and  = 3. Both
eigenspaces are two-dimensional.
7.
det   I  A  
  6 2 3
2 3
 7
  2  13  30     3    10  therefore A has eigenvalues 3 and 10 .
1
The reduced row echelon form of 3 I  A is 
 0

 so that the eigenspace corresponding to 1  3
0 
2
3
 2 
x 
consists of vectors  1  where x1   23 t , x2  t . A vector p1    forms a basis for this eigenspace.
 x2 
 3
 1  23 
The reduced row echelon form of 10 I  A is 
 so that the eigenspace
0
0
7.2 Orthogonal Diagonalization
9
 3
x 
corresponding to 2  10 consists of vectors  1  where x1  23 t , x2  t . A vector p2    forms a
 x2 
2 
basis for this eigenspace.
Applying the Gram-Schmidt process to both bases p1 and p2  amounts to simply normalizing the
vectors. This yields the columns of a matrix P that orthogonally diagonalizes A :
 2
7
P
 3
 7
8.


 . We have P 1 AP  P T AP   1
2 
0
7
3
7
det   I  A  
 3
1
0  3 0

.
2  0 10 
1
    2    4  therefore A has eigenvalues 2 and 4 .
 3
1 1 
The reduced row echelon form of 2 I  A is 
 so that the eigenspace corresponding to 1  2 consists
0 0 
x 
 1
of vectors  1  where x1  t , x2  t . A vector p1    forms a basis for this eigenspace.
 1
 x2 
 1 1
The reduced row echelon form of 4 I  A is 
 so that the eigenspace corresponding to 2  4
0 0 
x 
1
consists of vectors  1  where x1  t , x2  t . A vector p2    forms a basis for this eigenspace.
1
 x2 
Applying the Gram-Schmidt process to both bases p1 and p2  amounts to simply normalizing the
vectors. This yields the columns of a matrix P that orthogonally diagonalizes A :
  12
P 1
 2
9.


 . We have P 1 AP  P T AP   1
1
0
2

1
2
0  2 0 

.
2  0 4 
Cofactor expansion along the second row yields det   I  A  
    3
 2
36
36
  23
 2
0
36
0
36
 3
0
  23
0
    25    3    50  therefore A has eigenvalues 25 , 3 , and 50 .
1 0 43 


The reduced row echelon form of 25I  A is 0 1 0  so that the eigenspace corresponding to
0 0 0 
 x1 
 4 


4
1  25 contains vectors  x2  where x1   3 t , x2  0 , x3  t . A vector p1   0  forms a basis for this
 3
 x3 
eigenspace.
10
Chapter 7: Diagonalization and Quadratic Forms
1 0 0 
The reduced row echelon form of 3 I  A is  0 0 1  so that the eigenspace corresponding to
 0 0 0 
 x1 
0 


2  3 contains vectors  x2  where x1  0 , x2  t , x3  0 . A vector p2  1  forms a basis for this
0 
 x3 
eigenspace.
 1 0  34 


The reduced row echelon form of 50 I  A is 0 1 0  so that the eigenspace corresponding to
0 0
0 
 x1 
3


3
3  50 contains vectors  x2  where x1  4 t , x2  0 , x3  t . A vector p3  0  forms a basis for this
 4 
 x3 
eigenspace.
Applying the Gram-Schmidt process to the bases p1 and p3  amounts to simply normalizing the
vectors; the basis p2  is already orthonormal. This yields the columns of a matrix P that orthogonally
  45 0 35 


diagonalizes A : P   0 1 0  .
 35 0 45 
1
We have P AP  P AP   0
 0
1
10.
det   I  A  
T
 6
2
2
 3
0
2
0
0  25 0
0


0    0 3
0  .
3   0 0 50 
    2    7  therefore A has eigenvalues 2 and 7 .
 1  12 
The reduced row echelon form of 2 I  A is 
 so that the eigenspace corresponding to 1  2
0
0
x 
 1
consists of vectors  1  where x1  12 t , x2  t . A vector p1    forms a basis for this eigenspace.
2 
 x2 
1 2 
The reduced row echelon form of 7 I  A is 
 so that the eigenspace corresponding to 2  7 consists
0 0 
x 
 2 
of vectors  1  where x1  2t , x2  t . A vector p 2    forms a basis for this eigenspace.
 1
 x2 
Applying the Gram-Schmidt process to both bases p1 and p2  amounts to simply normalizing the
7.2 Orthogonal Diagonalization
 15
vectors. This yields the columns of a matrix P that orthogonally diagonalizes A : P   2
 5

We have P 1 AP  P T AP   1
0
11.
det   I  A  
 2
1
1
1
 2
1
11
 25 
.
1
5

0  2 0 
.

2  0 7 
1
1   3  6 2  9   (  3)2 therefore A has eigenvalues
 2
3 and 0 .
1 1 1 
The reduced row echelon form of 3I  A is  0 0 0  so that the eigenspace corresponding to
 0 0 0 
 1
 x1 


1  2  3 contains vectors  x2  where x1   s  t , x2  s , x3  t . Vectors p1   1 and
 0 
 x3 
 1
p2   0  form a basis for this eigenspace. We apply the Gram-Schmidt process to find an orthogonal basis
 1
 1
 1   12 
 1
 p ,v 
 
   
for this eigenspace: v1  p1   1 and v 2  p2  ||v2 || 21 v1   0   12  1    12  , then
1
 1
 0   1
 0 
  12 


v
proceed to normalize the two vectors to yield an orthonormal basis: q1  ||v11 ||   12  and


 0
 1 
 6
v2
q 2  ||v2 ||    16  .
 2
 6 
 1 0 1
The reduced row echelon form of 0I  A is  0 1 1 so that the eigenspace corresponding to
 0 0 0 
 x1 
1


3  0 contains vectors  x2  where x1  t , x2  t , x3  t . A vector p3  1 forms a basis for this
1
 x3 
eigenspace.
12
Chapter 7: Diagonalization and Quadratic Forms
Applying the Gram-Schmidt process to p3  amounts to simply normalizing this vector.
 1  1
6
 2
1
1
A matrix P   2  6

2
 0
6


1
 orthogonally diagonalizes A resulting in
3

1
3

 1
P AP  P AP   0
 0
2
0  3 0 0
0   0 3 0  .
3  0 0 0 
 1
1
1
12.
T
0
0
1
3
0
  1 0   2    2  therefore A has eigenvalues 0 and 2 .
0

det   I  A   1
0
1 1 0 
The reduced row echelon form of 0I  A is  0 0 0  so that the eigenspace corresponding to 1  2  0
 0 0 0 
 x1 
 1
0 




contains vectors  x2  where x1   s , x2  s , x3  t . Vectors p1   1 and p2  0  form a basis for this
 0 
 1
 x3 
eigenspace.
 1 1 0 
The reduced row echelon form of 2 I  A is 0 0 1 so that the eigenspace corresponding to
0 0 0 
 x1 
1 


3  2 contains vectors  x2  where x1  t , x2  t , x3  0 . A vector p3  1  forms a basis for this
0 
 x3 
eigenspace.
Applying the Gram-Schmidt process to both bases p1 , p2  and p3  amounts to simply normalizing the
vectors since the three vectors are already orthogonal. This yields the columns of a matrix P that
  12 0

orthogonally diagonalizes A : P   12 0

 0 1
1
We have P AP  P AP   0
 0
1
T
0
2
0


1
.
2

0
1
2
0  0 0 0 
0    0 0 0  .
3   0 0 2 
7.2 Orthogonal Diagonalization
 7
13.
det   I  A  
24
24
 7
0
0
0
0
13
0
0
0
0
 (  25)2 (  25)2 therefore A has eigenvalues 25 and 25 .
  7 24
24   7
 1 43 0 0 


0 0 1 43 

so that the eigenspace corresponding to
The reduced row echelon form of 25I  A is
0 0 0 0 


0 0 0 0 
 x1 
 4 
x 
 3
1  2  25 contains vectors  2  where x1   43 s , x2  s , x3   43 t , x4  t . Vectors p1    and
 x3 
 0
 
 
 0
 x4 
 0
 0
p2    form a basis for this eigenspace.
 4 
 
 3
0
 1  34 0


0
0 1  34 
so that the eigenspace corresponding to
The reduced row echelon form of 25I  A is 
0
0 0
0


0 0
0
0
 x1 
 3
0
x 
0
4
3  4  25 contains vectors  2  where x1  34 s , x2  s , x3  34 t , x4  t . Vectors p3    and p4   
 x3 
 3
0
 
 
 
4
0
 x4 
form a basis for this eigenspace.
Applying the Gram-Schmidt process to the two bases p1 , p2  , p3 , p4  amounts to simply normalizing the
vectors since the four vectors are already orthogonal. This yields the columns of a matrix P that
0 35 0 
  45
 3

0 45 0 
.
orthogonally diagonalizes A : P   5
 0  45 0 35 


3
0 45 
5
 0
0 0 0
1 0 0 0   25
0 


0 0   0 25 0 0 
2
We have P 1 AP  P T AP  
.

 0 0 3 0   0
0 25 0 

 

0 0 25
 0 0 0 4   0
14
Chapter 7: Diagonalization and Quadratic Forms
 3
14.
det   I  A  
1
0
0
1 0
 3 0
0
0

0
0
  2    2    4  therefore A has eigenvalues 0 , 2 , and 4 .
0
0 
1
0
The reduced row echelon form of 0 I  A is 
0

0
0
1
0
0
0
0 
so that the eigenspace corresponding to
0

0
0
0
0
0
 x1 
0 
x 
0 
1  2  0 contains vectors  2  where x1  0 , x2  0 , x3  s , x4  t . Vectors p1    and
 x3 
1 
 
 
0 
 x4 
0 
0 
p2    form a basis for this eigenspace.
0 
 
1 
1
0
The reduced row echelon form of 2 I  A is 
0

0
1
0
0
0
0
0 
so that the eigenspace corresponding to 3  2
1

0
0
1
0
0
 x1 
 1
x 
 1
2

x

0
where x1  t , x2  t , 3
, x4  0 . A vector p3    forms a basis for this
contains vectors
 x3 
 0
 
 
 0
 x4 
eigenspace.
 1 1
0 0
The reduced row echelon form of 4 I  A is 
0 0

0 0
0
1
0
0
0
0 
so that the eigenspace corresponding to
1

0
 x1 
1 
x 
1 
4  4 contains vectors  2  where x1  t , x2  t , x3  0 , x4  0 . A vector p4    forms a basis for
 x3 
0 
 
 
0 
 x4 
this eigenspace.
Applying the Gram-Schmidt process to the three bases p1 , p2  , p3  , and p4  amounts to simply
normalizing the vectors since the four vectors are already orthogonal. This yields the columns of a
7.2 Orthogonal Diagonalization
0

0
matrix P that orthogonally diagonalizes A : P  
1
0
1 0
0 
2
We have P 1 AP  P T AP  
0 0

0 0
15.
det   I  A  
 3
1
0
0
3
0
0  0
0   0

0  0
 
4  0
0  12
0
1
2
0
0
1
0
15



.
0
0 
1
2
1
2
0 0 0
0 0 0 
.
0 2 0

0 0 4
1
    2    4  therefore A has eigenvalues 2 and 4 .
 3
1 1 
The reduced row echelon form of 2 I  A is 
 so that the eigenspace corresponding to 1  2 consists
0 0 
x 
 1
of vectors  1  where x1  t , x2  t . A vector p1    forms a basis for this eigenspace.
 1
 x2 
 1 1
The reduced row echelon form of 4 I  A is 
 so that the eigenspace corresponding to 2  4
0 0 
x 
1
consists of vectors  1  where x1  t , x2  t . A vector p2    forms a basis for this eigenspace.
1
 x2 
Applying the Gram-Schmidt process to both bases p1 and p2  amounts to simply normalizing the
vectors. This yields the columns of a matrix P that orthogonally diagonalizes A :
  12
P 1
 2

 0  2 0 
.

 . We have P 1 AP  P T AP   1
1
0 2   0 4 

2

1
2
Formula (7) of Section 7.2 yields the spectral decomposition of A :
  12 
3 1 
1
1 3  (2)  1    2


 2 
16.
1
2
 12 
  (4)    1
1

 2
 2 
1
2
 1
  (2)  2
1

 2
 15
In the solution of Exercise 10, we have shown that P   2
 5
 12 
 12

(4)
1
1
2
2
1
2
1
2

.

 25 
 orthogonally diagonalizes A :
1
5

2 0 
P T AP  
 . Formula (7) of Section 7.2 yields the spectral decomposition of A :
0 7 
 15 
 6 2 
1
 2
   2   2   5
3
 5 


2
5
  25 

 2
   7   1    5
 5
1
5
1
  2  5
2

5
2
5
4
5

 45

7



 2

 5
 25 
.
1
5
16
Chapter 7: Diagonalization and Quadratic Forms
 3
17.
det   I  A   1
2
1
2
  3 2     4     2  therefore A has eigenvalues 4 and 2 .
2

2
1 1 2 
The reduced row echelon form of 4 I  A is  0 0 0  so that the eigenspace corresponding to   4
 0 0 0 
 x1 
 1
 2 




contains vectors  x2  where x1   s  2 t , x2  s , x3  t . Vectors p1   1 and p2   0  form a basis
 0 
 1
 x3 
for this eigenspace. We apply the Gram-Schmidt process to find an orthogonal basis for this eigenspace:
 2 
 1  1
 1
 p2 , v1 




2 
v1  p1   1 and v 2  p2  ||v || 2 v1   0   2  1   1 , then proceed to normalize the two vectors to
1
 1
 0   1
 0 
 1 
  12 
 3
 1
v1
v2
yield an orthonormal basis: q1  ||v1 ||   2  and q 2  ||v2 ||    13  .
 1


 3 
 0
 1 0  12 


The reduced row echelon form of 2 I  A is 0 1  12  so that the eigenspace corresponding to   2
0 0
0 
 x1 
1 


1
1
contains vectors  x2  where x1  2 t , x2  2 t , x3  t . A vector p3  1  forms a basis for this eigenspace.
2 
 x3 
Applying the Gram-Schmidt process to p3  amounts to simply normalizing this vector.
 1  1
3
 2
A matrix P   12  13

1
3
 0

0 0
 4



T
1
 orthogonally diagonalizes A resulting in P AP  D   0 4 0  .
6

 0
2
0 2 

6
1
6
Formula (7) of Section 7.2 yields the spectral decomposition of A :
  12 
1 2
 3
 1 3 2   4  1    1

   2   2


 2 2 0 
 0
1
2
 1 
 3

0    4    13    13
 1
 3 
 13
 12  12 0 



1
  4    12
0    4   13
2
  13
 0
0 0 
1
3
1
3
1
3
 13 
 61


 13    2   61
1
 13
3

1
6
1
6
1
3
1
3
1
3
2
3


.

 13
1
 6
1 
  2   16   16
3
2
 6 
1
6
2
6


7.2 Orthogonal Diagonalization
18.
det   I  A  
 2
0
36
0
 3
0
36
0
  23
17
    25    3    50  therefore A has eigenvalues 25 ,
3 , and 50 .
1 0 43 


The reduced row echelon form of 25I  A is 0 1 0  so that the eigenspace corresponding to
0 0 0 
 x1 
 4 


4
  25 consists of vectors  x2  where x1   3 t , x2  0 , x3  t . A vector p1   0  forms a basis for this
 x3 
 3
eigenspace.
1 0 0 
The reduced row echelon form of 3 I  A is  0 0 1  so that the eigenspace corresponding to
 0 0 0 
 x1 
0 


  3 consists of vectors  x2  where x1  0 , x2  t , x3  0 . A vector p2   1 forms a basis for this
0 
 x3 
eigenspace.
 1 0  34 


The reduced row echelon form of 50 I  A is 0 1 0  so that the eigenspace corresponding to
0 0
0 
 x1 
 3


3
  50 consists of vectors  x2  where x1  4 t , x2  0 , x3  t . A vector p3   0  forms a basis for this
 x3 
 4 
eigenspace.
Applying the Gram-Schmidt process to the bases p1 and p3  amounts to simply normalizing the
vectors; the basis p2  is already orthonormal.
  45 0 35 
0
25 0



T
A matrix P   0 1 0  orthogonally diagonalizes A resulting in P AP  D   0 3
0  .
 35 0 45 
 0 0 50 
Formula (7) of Section 7.2 yields the spectral decomposition of A :
18
Chapter 7: Diagonalization and Quadratic Forms
  45 
 2 0 36 
 
 0 3

0    25   0    45

 35 
 36 0 23
0
 35 
0 
  3


3
5
   3   1  0 1 0    50  0   5
 45 
0 
0  12
 16
 259 0
0 0 0 
25
25 



  25   0 0
0    3  0 1 0    50   0 0
9 
  12
 12
0 0 0 
0
0
25
25 
25
19.
0
4
5



0.
16 
25 
12
25
The three vectors are orthogonal, and they can be made into orthonormal vectors by a simple normalization.
Forming the columns of a matrix P in this way we obtain an orthogonal matrix
 0

P   12
 1
  2
1
0
0
0

1
 . When the diagonal matrix D contains the corresponding eigenvalues on its main
2

1
2

3 0 0 
 1 0 0 


T
diagonal, D   0 3 0  , then Formula (2) in Section 7.2 yields PDP  A  0 3 4  .
0 4 3 
 0 0 7 
20. According to Theorem 7.2.2(b), for every symmetric matrix, eigenvectors corresponding to distinct eigenvalues must be orthogonal. Since $\langle v_2, v_3 \rangle = 1 \neq 0$, it follows that no symmetric matrix can satisfy the given conditions.
21. Yes. The Gram-Schmidt process will ensure that columns of $P$ corresponding to the same eigenvalue are an orthonormal set. Since eigenvectors from distinct eigenvalues are orthogonal, this means that $P$ will be an orthogonal matrix. Then since $A$ is orthogonally diagonalizable, it must be symmetric.
22. $\det(\lambda I - A) = \begin{vmatrix} \lambda-a & -b \\ -b & \lambda-a \end{vmatrix} = (\lambda - (a+b))(\lambda - (a-b))$, therefore $A$ has eigenvalues $a+b$ and $a-b$.
Assuming $b \neq 0$, the reduced row echelon form of $(a+b)I - A$ is $\begin{bmatrix} 1 & -1 \\ 0 & 0 \end{bmatrix}$ so that the eigenspace corresponding to $\lambda = a+b$ consists of vectors $(x_1, x_2)^T$ where $x_1 = t$, $x_2 = t$. A vector $p_1 = (1, 1)^T$ forms a basis for this eigenspace.
Again assuming $b \neq 0$, the reduced row echelon form of $(a-b)I - A$ is $\begin{bmatrix} 1 & 1 \\ 0 & 0 \end{bmatrix}$ so that the eigenspace corresponding to $\lambda = a-b$ consists of vectors where $x_1 = -t$, $x_2 = t$. A vector $p_2 = (-1, 1)^T$ forms a basis for this eigenspace.
Applying the Gram-Schmidt process to both bases $\{p_1\}$ and $\{p_2\}$ amounts to simply normalizing the vectors. This yields the columns of a matrix $P$ that orthogonally diagonalizes $A$: $P = \begin{bmatrix} \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \end{bmatrix}$.
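As an illustrative symbolic check of Exercise 22 (a sketch assuming SymPy; the matrix $A$ and the choice of $P$ are those from the solution above):

import sympy as sp

a, b = sp.symbols('a b', real=True)
lam = sp.symbols('lam')
A = sp.Matrix([[a, b], [b, a]])

# Characteristic polynomial factors as (lam - (a+b)) * (lam - (a-b)).
charpoly = (lam * sp.eye(2) - A).det()
assert sp.simplify(charpoly - (lam - (a + b)) * (lam - (a - b))) == 0

# P orthogonally diagonalizes A.
P = sp.Matrix([[1, -1], [1, 1]]) / sp.sqrt(2)
assert sp.simplify(P.T * A * P - sp.diag(a + b, a - b)) == sp.zeros(2, 2)
print("Exercise 22 checks pass")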
23.
(a) $\det(\lambda I - A) = \begin{vmatrix} \lambda-1 & -1 \\ -1 & \lambda+1 \end{vmatrix} = \lambda^2 - 2 = (\lambda - \sqrt{2})(\lambda + \sqrt{2})$, therefore $A$ has eigenvalues $\pm\sqrt{2}$.
$A$ is symmetric, so by Theorem 7.2.2(b), eigenvectors from different eigenspaces are orthogonal.
The reduced row echelon form of $\sqrt{2}\,I - A$ is $\begin{bmatrix} 1 & 1-\sqrt{2} \\ 0 & 0 \end{bmatrix}$ so that the eigenspace corresponding to $\lambda = \sqrt{2}$ consists of vectors $(x_1, x_2)^T$ where $x_1 = (\sqrt{2}-1)t$, $x_2 = t$. A vector $(\sqrt{2}-1, 1)^T$ forms a basis for this eigenspace.
The reduced row echelon form of $-\sqrt{2}\,I - A$ is $\begin{bmatrix} 1 & 1+\sqrt{2} \\ 0 & 0 \end{bmatrix}$ so that the eigenspace corresponding to $\lambda = -\sqrt{2}$ consists of vectors where $x_1 = (-\sqrt{2}-1)t$, $x_2 = t$. A vector $(-\sqrt{2}-1, 1)^T$ forms a basis for this eigenspace.
Unit eigenvectors chosen from the two different eigenspaces will meet our desired condition. For instance, let $u_1 = \left(\frac{\sqrt{2}-1}{\sqrt{4-2\sqrt{2}}}, \frac{1}{\sqrt{4-2\sqrt{2}}}\right)^T$ and $u_2 = \left(\frac{-\sqrt{2}-1}{\sqrt{4+2\sqrt{2}}}, \frac{1}{\sqrt{4+2\sqrt{2}}}\right)^T$.
(b) $\det(\lambda I - A) = \begin{vmatrix} \lambda-1 & -2 \\ -2 & \lambda-1 \end{vmatrix} = (\lambda+1)(\lambda-3)$, therefore $A$ has eigenvalues $-1$ and $3$.
$A$ is symmetric, so by Theorem 7.2.2(b), eigenvectors from different eigenspaces are orthogonal.
The reduced row echelon form of $-1I - A$ is $\begin{bmatrix} 1 & 1 \\ 0 & 0 \end{bmatrix}$ so that the eigenspace corresponding to $\lambda = -1$ consists of vectors where $x_1 = -t$, $x_2 = t$. A vector $(-1, 1)^T$ forms a basis for this eigenspace.
The reduced row echelon form of $3I - A$ is $\begin{bmatrix} 1 & -1 \\ 0 & 0 \end{bmatrix}$ so that the eigenspace corresponding to $\lambda = 3$ consists of vectors where $x_1 = t$, $x_2 = t$. A vector $(1, 1)^T$ forms a basis for this eigenspace.
Unit eigenvectors chosen from the two different eigenspaces will meet our desired condition. For instance, let $u_1 = \left(-\frac{1}{\sqrt{2}}, \frac{1}{\sqrt{2}}\right)^T$ and $u_2 = \left(\frac{1}{\sqrt{2}}, \frac{1}{\sqrt{2}}\right)^T$.
24.
(a) $\det(\lambda I - A) = \begin{vmatrix} \lambda-4 & -2 & -2 \\ -2 & \lambda-4 & -2 \\ -2 & -2 & \lambda-4 \end{vmatrix} = \lambda^3 - 12\lambda^2 + 36\lambda - 32 = (\lambda-2)^2(\lambda-8)$, so the eigenvalues are $\lambda = 2$ and $\lambda = 8$.
$A$ is symmetric, so by Theorem 7.2.2(b), eigenvectors from different eigenspaces are orthogonal.
The reduced row echelon form of $2I - A$ is $\begin{bmatrix} 1 & 1 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}$ so that the eigenspace corresponding to $\lambda = 2$ contains vectors $(x_1, x_2, x_3)^T$ where $x_1 = -s - t$, $x_2 = s$, $x_3 = t$. Vectors $(-1, 1, 0)^T$ and $(-1, 0, 1)^T$ form a basis for this eigenspace.
The reduced row echelon form of $8I - A$ is $\begin{bmatrix} 1 & 0 & -1 \\ 0 & 1 & -1 \\ 0 & 0 & 0 \end{bmatrix}$ so that the eigenspace corresponding to $\lambda = 8$ contains vectors where $x_1 = t$, $x_2 = t$, $x_3 = t$. A vector $(1, 1, 1)^T$ forms a basis for this eigenspace.
Unit eigenvectors chosen from two different eigenspaces will meet our desired condition. For instance, let $u_1 = \left(-\frac{1}{\sqrt{2}}, \frac{1}{\sqrt{2}}, 0\right)^T$ and $u_2 = \left(\frac{1}{\sqrt{3}}, \frac{1}{\sqrt{3}}, \frac{1}{\sqrt{3}}\right)^T$.
(b) $\det(\lambda I - A) = \begin{vmatrix} \lambda-1 & 0 & 0 \\ 0 & \lambda-1 & -1 \\ 0 & -1 & \lambda-1 \end{vmatrix} = (\lambda-1)\lambda(\lambda-2)$, so the eigenvalues are $\lambda = 0$, $\lambda = 1$, and $\lambda = 2$.
$A$ is symmetric, so by Theorem 7.2.2(b), eigenvectors from different eigenspaces are orthogonal.
The reduced row echelon form of $0I - A$ is $\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 1 \\ 0 & 0 & 0 \end{bmatrix}$ so that the eigenspace corresponding to $\lambda = 0$ contains vectors where $x_1 = 0$, $x_2 = -t$, $x_3 = t$. A vector $(0, -1, 1)^T$ forms a basis for this eigenspace.
The reduced row echelon form of $1I - A$ is $\begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix}$ so that the eigenspace corresponding to $\lambda = 1$ contains vectors where $x_1 = t$, $x_2 = 0$, $x_3 = 0$. A vector $(1, 0, 0)^T$ forms a basis for this eigenspace.
Unit eigenvectors chosen from these two different eigenspaces will meet our desired condition. For instance, let $u_1 = \left(0, -\frac{1}{\sqrt{2}}, \frac{1}{\sqrt{2}}\right)^T$ and $u_2 = (1, 0, 0)^T$. (Note that it was not necessary to discuss the third eigenspace.)
25. $A^T A$ is a symmetric $n \times n$ matrix since $(A^T A)^T = A^T (A^T)^T = A^T A$. By Theorem 7.2.1 it has an orthonormal set of $n$ eigenvectors.
28.
(b) $A = I - 2vv^T = \begin{bmatrix} 0 & 0 & -1 \\ 0 & 1 & 0 \\ -1 & 0 & 0 \end{bmatrix}$ has the characteristic polynomial $\det(\lambda I - A) = \begin{vmatrix} \lambda & 0 & 1 \\ 0 & \lambda-1 & 0 \\ 1 & 0 & \lambda \end{vmatrix} = (\lambda-1)(\lambda^2-1) = (\lambda-1)^2(\lambda+1)$, therefore $A$ has eigenvalues $1$ and $-1$.
The reduced row echelon form of $1I - A$ is $\begin{bmatrix} 1 & 0 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}$ so that the eigenspace corresponding to $\lambda = 1$ contains vectors $(x_1, x_2, x_3)^T$ where $x_1 = -t$, $x_2 = s$, $x_3 = t$. Vectors $p_1 = (0, 1, 0)^T$ and $p_2 = (-1, 0, 1)^T$ form a basis for this eigenspace.
The reduced row echelon form of $-1I - A$ is $\begin{bmatrix} 1 & 0 & -1 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix}$ so that the eigenspace corresponding to $\lambda = -1$ contains vectors where $x_1 = t$, $x_2 = 0$, $x_3 = t$. A vector $p_3 = (1, 0, 1)^T$ forms a basis for this eigenspace.
Applying the Gram-Schmidt process to both bases $\{p_1, p_2\}$ and $\{p_3\}$ amounts to simply normalizing the vectors since the three vectors are already orthogonal. This yields the columns of a matrix $P$ that orthogonally diagonalizes $A$: $P = \begin{bmatrix} 0 & -\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\ 1 & 0 & 0 \\ 0 & \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \end{bmatrix}$.
29.
By Theorem 7.1.3(b), if $A$ is an orthogonal $n \times n$ matrix then $\|Ax\| = \|x\|$ for all $x$ in $R^n$. Since the eigenvalues of a symmetric matrix must be real numbers, for every such eigenvalue $\lambda$ and a corresponding eigenvector $x$ we have $\|x\| = \|Ax\| = \|\lambda x\| = |\lambda|\,\|x\|$, hence the only possible eigenvalues for an orthogonal symmetric matrix are $1$ and $-1$.
30. No, a non-symmetric matrix $A$ can have eigenvalues that are real numbers. For instance, the eigenvalues of the matrix $\begin{bmatrix} 3 & 0 \\ 8 & -1 \end{bmatrix}$ are $3$ and $-1$.
True-False Exercises
(a) True. For any square matrix $A$, both $AA^T$ and $A^T A$ are symmetric, hence orthogonally diagonalizable.
(b) True. Since $v_1$ and $v_2$ are from distinct eigenspaces of a symmetric matrix, they are orthogonal, so $\|v_1 + v_2\|^2 = \langle v_1 + v_2, v_1 + v_2 \rangle = \langle v_1, v_1 \rangle + 2\langle v_1, v_2 \rangle + \langle v_2, v_2 \rangle = \|v_1\|^2 + 0 + \|v_2\|^2$.
(c) False. An orthogonal matrix is not necessarily symmetric.
(d) True. By Theorem 1.7.4, if $A$ is an invertible symmetric matrix then $A^{-1}$ is also symmetric.
(e) True. By Theorem 7.1.3(b), if $A$ is an orthogonal $n \times n$ matrix then $\|Ax\| = \|x\|$ for all $x$ in $R^n$. For every eigenvalue $\lambda$ and a corresponding eigenvector $x$ we have $\|x\| = \|Ax\| = \|\lambda x\| = |\lambda|\,\|x\|$, hence $|\lambda| = 1$.
(f) True. If $A$ is an $n \times n$ orthogonally diagonalizable matrix, then $A$ has an orthonormal set of $n$ eigenvectors, which form a basis for $R^n$.
(g) True. This follows from part (a) of Theorem 7.2.2.
7.3 Quadratic Forms
1.
(a) $3x_1^2 + 7x_2^2 = \begin{bmatrix} x_1 & x_2 \end{bmatrix} \begin{bmatrix} 3 & 0 \\ 0 & 7 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}$
(b) $4x_1^2 - 9x_2^2 - 6x_1x_2 = \begin{bmatrix} x_1 & x_2 \end{bmatrix} \begin{bmatrix} 4 & -3 \\ -3 & -9 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}$
(c) $9x_1^2 - x_2^2 + 4x_3^2 + 6x_1x_2 - 8x_1x_3 + x_2x_3 = \begin{bmatrix} x_1 & x_2 & x_3 \end{bmatrix} \begin{bmatrix} 9 & 3 & -4 \\ 3 & -1 & \frac{1}{2} \\ -4 & \frac{1}{2} & 4 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}$
2.
(a) $5x_1^2 + 5x_1x_2 = \begin{bmatrix} x_1 & x_2 \end{bmatrix} \begin{bmatrix} 5 & \frac{5}{2} \\ \frac{5}{2} & 0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}$
(b) $-7x_1x_2 = \begin{bmatrix} x_1 & x_2 \end{bmatrix} \begin{bmatrix} 0 & -\frac{7}{2} \\ -\frac{7}{2} & 0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}$
(c) $x_1^2 + x_2^2 - 3x_3^2 - 5x_1x_2 + 9x_1x_3 = \begin{bmatrix} x_1 & x_2 & x_3 \end{bmatrix} \begin{bmatrix} 1 & -\frac{5}{2} & \frac{9}{2} \\ -\frac{5}{2} & 1 & 0 \\ \frac{9}{2} & 0 & -3 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}$
3. $\begin{bmatrix} x & y \end{bmatrix} \begin{bmatrix} 2 & 3 \\ 3 & -5 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = 2x^2 - 5y^2 + 6xy$
4. $\begin{bmatrix} x_1 & x_2 & x_3 \end{bmatrix} \begin{bmatrix} -2 & \frac{7}{2} & 1 \\ \frac{7}{2} & 0 & 6 \\ 1 & 6 & 3 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = -2x_1^2 + 3x_3^2 + 7x_1x_2 + 2x_1x_3 + 12x_2x_3$
5. $Q = x^T A x = \begin{bmatrix} x_1 & x_2 \end{bmatrix} \begin{bmatrix} 2 & -1 \\ -1 & 2 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}$; the characteristic polynomial of the matrix $A$ is $\lambda^2 - 4\lambda + 3 = (\lambda-3)(\lambda-1)$, so the eigenvalues of $A$ are $\lambda = 3, 1$.
The reduced row echelon form of $3I - A$ is $\begin{bmatrix} 1 & 1 \\ 0 & 0 \end{bmatrix}$ so that the eigenspace corresponding to $\lambda = 3$ consists of vectors $(x_1, x_2)^T$ where $x_1 = -t$, $x_2 = t$. A vector $p_1 = (-1, 1)^T$ forms a basis for this eigenspace.
The reduced row echelon form of $1I - A$ is $\begin{bmatrix} 1 & -1 \\ 0 & 0 \end{bmatrix}$ so that the eigenspace corresponding to $\lambda = 1$ consists of vectors where $x_1 = t$, $x_2 = t$. A vector $p_2 = (1, 1)^T$ forms a basis for this eigenspace.
Applying the Gram-Schmidt process to the bases $\{p_1\}$ and $\{p_2\}$ amounts to simply normalizing the vectors. Therefore an orthogonal change of variables $x = Py$ that eliminates the cross product terms in $Q$ is $\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} -\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \end{bmatrix} \begin{bmatrix} y_1 \\ y_2 \end{bmatrix}$. In terms of the new variables, we have
$Q = x^T A x = y^T (P^T A P) y = \begin{bmatrix} y_1 & y_2 \end{bmatrix} \begin{bmatrix} 3 & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} y_1 \\ y_2 \end{bmatrix} = 3y_1^2 + y_2^2$.
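As an illustrative numerical check of Exercise 5 (a sketch assuming NumPy and the quadratic form $Q = 2x_1^2 + 2x_2^2 - 2x_1x_2$ reconstructed above):

import numpy as np

A = np.array([[2.0, -1.0], [-1.0, 2.0]])
P = np.array([[-1.0, 1.0], [1.0, 1.0]]) / np.sqrt(2)

# P^T A P should be diagonal with the eigenvalues 3 and 1.
assert np.allclose(P.T @ A @ P, np.diag([3.0, 1.0]))
print("Exercise 5 check passes")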
6. $Q = x^T A x = \begin{bmatrix} x_1 & x_2 & x_3 \end{bmatrix} \begin{bmatrix} 5 & 2 & 0 \\ 2 & 2 & 0 \\ 0 & 0 & 4 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}$; the characteristic polynomial of the matrix $A$ is $\det(\lambda I - A) = \begin{vmatrix} \lambda-5 & -2 & 0 \\ -2 & \lambda-2 & 0 \\ 0 & 0 & \lambda-4 \end{vmatrix} = (\lambda-1)(\lambda-4)(\lambda-6)$, so the eigenvalues of $A$ are $1$, $4$, and $6$.
The reduced row echelon form of $1I - A$ is $\begin{bmatrix} 1 & \frac{1}{2} & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix}$ so that the eigenspace corresponding to $\lambda = 1$ consists of vectors $(x_1, x_2, x_3)^T$ where $x_1 = -\frac{1}{2}t$, $x_2 = t$, $x_3 = 0$. A vector $p_1 = (-1, 2, 0)^T$ forms a basis for this eigenspace.
The reduced row echelon form of $4I - A$ is $\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix}$ so that the eigenspace corresponding to $\lambda = 4$ consists of vectors where $x_1 = 0$, $x_2 = 0$, $x_3 = t$. A vector $p_2 = (0, 0, 1)^T$ forms a basis for this eigenspace.
The reduced row echelon form of $6I - A$ is $\begin{bmatrix} 1 & -2 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix}$ so that the eigenspace corresponding to $\lambda = 6$ consists of vectors where $x_1 = 2t$, $x_2 = t$, $x_3 = 0$. A vector $p_3 = (2, 1, 0)^T$ forms a basis for this eigenspace.
Applying the Gram-Schmidt process to the bases $\{p_1\}$ and $\{p_3\}$ amounts to simply normalizing the vectors; the basis $\{p_2\}$ is already orthonormal. Therefore an orthogonal change of variables $x = Py$ that eliminates the cross product terms in $Q$ is $\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} -\frac{1}{\sqrt{5}} & 0 & \frac{2}{\sqrt{5}} \\ \frac{2}{\sqrt{5}} & 0 & \frac{1}{\sqrt{5}} \\ 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix}$. In terms of the new variables, we have
$Q = x^T A x = y^T (P^T A P) y = \begin{bmatrix} y_1 & y_2 & y_3 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & 4 & 0 \\ 0 & 0 & 6 \end{bmatrix} \begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix} = y_1^2 + 4y_2^2 + 6y_3^2$.
7. $Q = x^T A x = \begin{bmatrix} x_1 & x_2 & x_3 \end{bmatrix} \begin{bmatrix} 3 & 2 & 0 \\ 2 & 4 & 2 \\ 0 & 2 & 5 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}$; the characteristic polynomial of the matrix $A$ is $\lambda^3 - 12\lambda^2 + 39\lambda - 28 = (\lambda-1)(\lambda-4)(\lambda-7)$, so the eigenvalues of $A$ are $1$, $4$, and $7$.
The reduced row echelon form of $1I - A$ is $\begin{bmatrix} 1 & 0 & -2 \\ 0 & 1 & 2 \\ 0 & 0 & 0 \end{bmatrix}$ so that the eigenspace corresponding to $\lambda = 1$ consists of vectors where $x_1 = 2t$, $x_2 = -2t$, $x_3 = t$. A vector $p_1 = (2, -2, 1)^T$ forms a basis for this eigenspace.
The reduced row echelon form of $4I - A$ is $\begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & \frac{1}{2} \\ 0 & 0 & 0 \end{bmatrix}$ so that the eigenspace corresponding to $\lambda = 4$ consists of vectors where $x_1 = -t$, $x_2 = -\frac{1}{2}t$, $x_3 = t$. A vector $p_2 = (-2, -1, 2)^T$ forms a basis for this eigenspace.
The reduced row echelon form of $7I - A$ is $\begin{bmatrix} 1 & 0 & -\frac{1}{2} \\ 0 & 1 & -1 \\ 0 & 0 & 0 \end{bmatrix}$ so that the eigenspace corresponding to $\lambda = 7$ consists of vectors where $x_1 = \frac{1}{2}t$, $x_2 = t$, $x_3 = t$. A vector $p_3 = (1, 2, 2)^T$ forms a basis for this eigenspace.
Applying the Gram-Schmidt process to the three bases amounts to simply normalizing the vectors. Therefore an orthogonal change of variables $x = Py$ that eliminates the cross product terms in $Q$ is $\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} \frac{2}{3} & -\frac{2}{3} & \frac{1}{3} \\ -\frac{2}{3} & -\frac{1}{3} & \frac{2}{3} \\ \frac{1}{3} & \frac{2}{3} & \frac{2}{3} \end{bmatrix} \begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix}$. In terms of the new variables, we have
$Q = x^T A x = y^T (P^T A P) y = y_1^2 + 4y_2^2 + 7y_3^2$.
8. $Q = x^T A x = \begin{bmatrix} x_1 & x_2 & x_3 \end{bmatrix} \begin{bmatrix} 2 & 2 & -2 \\ 2 & 5 & -4 \\ -2 & -4 & 5 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}$; the characteristic polynomial of the matrix $A$ is $(\lambda-1)^2(\lambda-10)$, so the eigenvalues of $A$ are $1$ and $10$.
The reduced row echelon form of $1I - A$ is $\begin{bmatrix} 1 & 2 & -2 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}$ so that the eigenspace corresponding to $\lambda = 1$ contains vectors where $x_1 = -2s + 2t$, $x_2 = s$, $x_3 = t$. Vectors $p_1 = (-2, 1, 0)^T$ and $p_2 = (2, 0, 1)^T$ form a basis for this eigenspace. We apply the Gram-Schmidt process to find an orthogonal basis for this eigenspace: $v_1 = p_1 = (-2, 1, 0)^T$ and $v_2 = p_2 - \frac{\langle p_2, v_1 \rangle}{\|v_1\|^2}v_1 = (2, 0, 1)^T + \frac{4}{5}(-2, 1, 0)^T = \left(\frac{2}{5}, \frac{4}{5}, 1\right)^T$, then proceed to normalize the two vectors to yield an orthonormal basis: $q_1 = \frac{v_1}{\|v_1\|} = \left(-\frac{2}{\sqrt{5}}, \frac{1}{\sqrt{5}}, 0\right)^T$ and $q_2 = \frac{v_2}{\|v_2\|} = \left(\frac{2}{3\sqrt{5}}, \frac{4}{3\sqrt{5}}, \frac{5}{3\sqrt{5}}\right)^T$.
The reduced row echelon form of $10I - A$ is $\begin{bmatrix} 1 & 0 & \frac{1}{2} \\ 0 & 1 & 1 \\ 0 & 0 & 0 \end{bmatrix}$ so that the eigenspace corresponding to $\lambda = 10$ contains vectors where $x_1 = -\frac{1}{2}t$, $x_2 = -t$, $x_3 = t$. A vector $p_3 = (-1, -2, 2)^T$ forms a basis for this eigenspace.
Applying the Gram-Schmidt process to $\{p_3\}$ amounts to simply normalizing this vector. Therefore an orthogonal change of variables $x = Py$ that eliminates the cross product terms in $Q$ is $\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} -\frac{2}{\sqrt{5}} & \frac{2}{3\sqrt{5}} & -\frac{1}{3} \\ \frac{1}{\sqrt{5}} & \frac{4}{3\sqrt{5}} & -\frac{2}{3} \\ 0 & \frac{5}{3\sqrt{5}} & \frac{2}{3} \end{bmatrix} \begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix}$. In terms of the new variables, we have
$Q = x^T A x = y^T (P^T A P) y = \begin{bmatrix} y_1 & y_2 & y_3 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 10 \end{bmatrix} \begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix} = y_1^2 + y_2^2 + 10y_3^2$.
9.
(a) $2x^2 + xy + x - 6y + 2 = 0$ can be expressed as $\underbrace{\begin{bmatrix} x & y \end{bmatrix} \begin{bmatrix} 2 & \frac{1}{2} \\ \frac{1}{2} & 0 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix}}_{x^T A x} + \underbrace{\begin{bmatrix} 1 & -6 \end{bmatrix}}_{K} \begin{bmatrix} x \\ y \end{bmatrix} + \underbrace{2}_{f} = 0$
(b) $y^2 + 7x - 8y - 5 = 0$ can be expressed as $\begin{bmatrix} x & y \end{bmatrix} \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} + \begin{bmatrix} 7 & -8 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} + (-5) = 0$
10.
(a) $x^2 - xy + 5x + 8y - 3 = 0$ can be expressed as $\begin{bmatrix} x & y \end{bmatrix} \begin{bmatrix} 1 & -\frac{1}{2} \\ -\frac{1}{2} & 0 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} + \begin{bmatrix} 5 & 8 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} + (-3) = 0$
(b) $5xy = 8$ should first be rewritten as $5xy - 8 = 0$, then as $\begin{bmatrix} x & y \end{bmatrix} \begin{bmatrix} 0 & \frac{5}{2} \\ \frac{5}{2} & 0 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} + \begin{bmatrix} 0 & 0 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} + (-8) = 0$
11.
(a) $2x^2 + 5y^2 = 20$ is $\frac{x^2}{10} + \frac{y^2}{4} = 1$, which is an equation of an ellipse.
(b) $x^2 - y^2 - 8 = 0$ is $x^2 - y^2 = 8$, or $\frac{x^2}{8} - \frac{y^2}{8} = 1$, which is an equation of a hyperbola.
(c) $7y^2 - 2x = 0$ is $x = \frac{7}{2}y^2$, which is an equation of a parabola.
(d) $x^2 + y^2 - 25 = 0$ is $x^2 + y^2 = 25$, which is an equation of a circle.
12.
(a) ellipse (rewrite as $\frac{x^2}{1/4} + \frac{y^2}{1/9} = 1$);
(b) hyperbola (rewrite as $\frac{x^2}{5} - \frac{y^2}{4} = 1$);
(c) parabola;
(d) circle (rewrite as $x^2 + y^2 = 3$)
13. We can rewrite the given equation in the matrix form $x^T A x = 8$ with $A = \begin{bmatrix} 2 & -2 \\ -2 & -1 \end{bmatrix}$.
The characteristic polynomial of $A$ is $\det(\lambda I - A) = \begin{vmatrix} \lambda-2 & 2 \\ 2 & \lambda+1 \end{vmatrix} = (\lambda-3)(\lambda+2)$, so $A$ has eigenvalues $3$ and $-2$.
The reduced row echelon form of $3I - A$ is $\begin{bmatrix} 1 & 2 \\ 0 & 0 \end{bmatrix}$ so that the eigenspace corresponding to $\lambda = 3$ consists of vectors $(x_1, x_2)^T$ where $x_1 = -2t$, $x_2 = t$. A vector $p_1 = (-2, 1)^T$ forms a basis for this eigenspace.
The reduced row echelon form of $-2I - A$ is $\begin{bmatrix} 1 & -\frac{1}{2} \\ 0 & 0 \end{bmatrix}$ so that the eigenspace corresponding to $\lambda = -2$ consists of vectors where $x_1 = \frac{1}{2}t$, $x_2 = t$. A vector $p_2 = (1, 2)^T$ forms a basis for this eigenspace.
Applying the Gram-Schmidt process to both bases $\{p_1\}$ and $\{p_2\}$ amounts to simply normalizing the vectors. This yields the columns of a matrix $P$ that orthogonally diagonalizes $A$ - of the two possibilities, $\begin{bmatrix} -\frac{2}{\sqrt{5}} & \frac{1}{\sqrt{5}} \\ \frac{1}{\sqrt{5}} & \frac{2}{\sqrt{5}} \end{bmatrix}$ and $\begin{bmatrix} \frac{1}{\sqrt{5}} & -\frac{2}{\sqrt{5}} \\ \frac{2}{\sqrt{5}} & \frac{1}{\sqrt{5}} \end{bmatrix}$, we choose the latter, i.e., $P = \begin{bmatrix} \frac{1}{\sqrt{5}} & -\frac{2}{\sqrt{5}} \\ \frac{2}{\sqrt{5}} & \frac{1}{\sqrt{5}} \end{bmatrix}$, since its determinant is $1$ so that the substitution $x = Px'$ performs a rotation of axes. In the rotated coordinates, the equation of the conic becomes $\begin{bmatrix} x' & y' \end{bmatrix} \begin{bmatrix} -2 & 0 \\ 0 & 3 \end{bmatrix} \begin{bmatrix} x' \\ y' \end{bmatrix} = 8$, i.e., $3y'^2 - 2x'^2 = 8$; this equation represents a hyperbola.
Solving $P = \begin{bmatrix} \frac{1}{\sqrt{5}} & -\frac{2}{\sqrt{5}} \\ \frac{2}{\sqrt{5}} & \frac{1}{\sqrt{5}} \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}$ we conclude that the angle of rotation is $\theta = \sin^{-1}\frac{2}{\sqrt{5}} \approx 63.4°$.
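As an illustrative numerical check of Exercise 13 (a sketch assuming NumPy and the matrix $A$ reconstructed above):

import numpy as np

A = np.array([[2.0, -2.0], [-2.0, -1.0]])
s5 = np.sqrt(5.0)
P = np.array([[1/s5, -2/s5], [2/s5, 1/s5]])   # det(P) = 1, so P is a rotation

assert np.isclose(np.linalg.det(P), 1.0)
assert np.allclose(P.T @ A @ P, np.diag([-2.0, 3.0]))
print("rotation angle ~", np.degrees(np.arcsin(2/s5)))  # ~63.4 degrees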
5 2
We can rewrite the given equation in the matrix form x T Ax  9 with A  
 . The characteristic
2 5 
polynomial of A is det   I  A  
  5 2
    3    7  so A has eigenvalues 3 and 7 .
2   5
1 1 
The reduced row echelon form of 3I  A is 
 so that the eigenspace corresponding to   3 consists
0 0 
x 
 1
of vectors  1  where x1  t , x2  t . A vector p1    forms a basis for this eigenspace.
 1
 x2 
 1 1
The reduced row echelon form of 7 I  A is 
 so that the eigenspace corresponding to   7
0 0 
x 
1
consists of vectors  1  where x1  t , x2  t . A vector p2    forms a basis for this eigenspace.
1
 x2 
Applying the Gram-Schmidt process to both bases p1 and p2  amounts to simply normalizing the
vectors.
This yields the columns of a matrix P that orthogonally diagonalizes A - of the two possibilities,
  12 12 
 12  12 
 12  12 

P
and
we
choose
the
latter,
i.e.,
1
 , since its determinant is 1 so that the
 1



1
1
1
1

 2



2
2
2
 2

 2
substitution x  Px  performs a rotation of axes. In the rotated coordinates, the equation of the conic
 7 0   x 
2
2
becomes  x  y 
  y   9 , i.e., 7 x  3 y  9 ; this equation represents an ellipse.
0
3

 
 12
Solving P   1
 2
15.
 12  cos

1
  sin 
2
 sin  
we conclude that the angle of rotation is   4 .
cos 
11 12 
We can rewrite the given equation in the matrix form xT Ax  15 with A  
.
12 4 
The characteristic polynomial of A is det   I  A  
  11 12
    20    5  so A has eigenvalues
12   4
20 and 5 .
 1  43 
The reduced row echelon form of 20 I  A is 
 so that the eigenspace corresponding to   20
0
0
x 
4 
consists of vectors  1  where x1  43 t , x2  t . A vector p1    forms a basis for this eigenspace.
 3
 x2 
7.3 Quadratic Forms
29
1 34 
The reduced row echelon form of 5 I  A is 
 so that the eigenspace corresponding to   5
0 0 
x 
 3 
consists of vectors  1  where x1   34 t , x2  t . A vector p 2    forms a basis for this eigenspace.
 4
 x2 
Applying the Gram-Schmidt process to both bases p1 and p2  amounts to simply normalizing the
vectors.
4
This yields the columns of a matrix P that orthogonally diagonalizes A - of the two possibilities,  53
5
 35 
4
5
 3 4 
 4  35 
and  45 53  we choose the former, i.e., P   53
, since its determinant is 1 so that the substitution
4
5
5
 5 5
x  Px  performs a rotation of axes. In the rotated coordinates, the equation of the conic becomes
 x
0   x 
 20
y  
 15 , i.e., 4 x2  y2  3 ; this equation represents a hyperbola.




 0 5   y 
4
Solving P   53
5
16.
 35  cos

4
 sin 
5
 sin  
we conclude that the angle of rotation is   sin 1  35   36.9 .
cos 
1 1 
We can rewrite the given equation in the matrix form xT Ax  12 with A   1 2  . The characteristic
 2 1
polynomial of A is det   I  A  
 1
 12
 12
    12     23  so A has eigenvalues 12 and 23 .
 1
1 1 
1
The reduced row echelon form of 12 I  A is 
 so that the eigenspace corresponding to   2
0
0


x 
 1
consists of vectors  1  where x1  t , x2  t . A vector p1    forms a basis for this eigenspace.
 1
 x2 
 1 1
3
The reduced row echelon form of 32 I  A is 
 so that the eigenspace corresponding to   2
0 0 
x 
1
consists of vectors  1  where x1  t , x2  t . A vector p2    forms a basis for this eigenspace.
1
 x2 
Applying the Gram-Schmidt process to both bases p1 and p2  amounts to simply normalizing the
vectors.
This yields the columns of a matrix P that orthogonally diagonalizes A - of the two possibilities,
  12 12 
 12  12 
 12  12 
 , since its determinant is 1 so that
 1
 and  1
 we choose the latter, i.e., P   1
1
1
1
 2
 2
 2
2
2
2



the substitution x  Px  performs a rotation of axes. In the rotated coordinates, the equation of the
30
Chapter 7: Diagonalization and Quadratic Forms
 3 0   x
conic becomes  x  y  2 1     12 , i.e., 32 x 2  12 y2  12 or equivalently 3 x2  y2  1 . This equation
 0 2   y 
represents an ellipse.
 12
Solving P   1
 2
 12  cos

1
2
  sin 
 sin  
we conclude that the angle of rotation is   4 .
cos 
17.
All matrices in this exercise are diagonal, therefore by Theorem 5.1.2, their eigenvalues are the entries on
the main diagonal. We use Theorem 7.3.2 (including the remark below it).
(a) positive definite
(b) negative definite
(c) indefinite
(d) positive semidefinite
(e) negative semidefinite
18.
All matrices in this exercise are diagonal, therefore by Theorem 5.1.2, their eigenvalues are the entries on
the main diagonal. We use Theorem 7.3.2 (including the remark below it).
(a) indefinite
(b) negative definite
(c) positive definite
(d) negative semidefinite
(e) positive semidefinite
19.
For all  x1 , x2    0,0  , we clearly have x12  x22  0 therefore the form is positive definite
 1 0
(an alternate justification would be to calculate eigenvalues of the associated matrix 
 which are
 0 1
1  2  1 then use Theorem 7.3.2).
20.
For all  x1 , x2    0,0  , we clearly have  x12  3 x22  0 therefore the form is negative definite
 1 0 
(an alternate justification would be to calculate eigenvalues of the associated matrix 
 which are
 0 3 
  1 and   3 then use Theorem 7.3.2).
21.
For all  x1 , x2    0,0  , we clearly have ( x1  x2 )2  0 , but cannot claim ( x1  x2 )2  0 when x1  x2
therefore the form is positive semidefinite
 1 1
(an alternate justification would be to calculate eigenvalues of the associated matrix 
 which are
 1 1
  2 and   0 then use the remark under Theorem 7.3.2).
22.
For all  x1 , x2    0,0  , we clearly have   x1  x2   0 , but cannot claim   x1  x2   0 when x1  x2
2
2
therefore the form is negative semidefinite
 1 1
(an alternate justification would be to calculate eigenvalues of the associated matrix 
 which are
 1 1
  2 and   0 then use the remark under Theorem 7.3.2).
23.
Clearly, the form x12  x22 has both positive and negative values (e.g., 32  12  0 and 2 2  4 2  0 ) therefore
this quadratic form is indefinite
7.3 Quadratic Forms
31
 1 0
(an alternate justification would be to calculate eigenvalues of the associated matrix 
 which are
 0 1
  1 and   1 then use Theorem 7.3.2).
24.
Clearly, the form x1 x2 has both positive and negative values (e.g.,  2  3  0 and  2  3  0 ) therefore
this quadratic form is indefinite
0 1 
(an alternate justification would be to calculate eigenvalues of the associated matrix  1 2  which are
 2 0
   12 and   12 then use Theorem 7.3.2).
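As an illustrative numerical companion to Exercises 19-24, the eigenvalue test of Theorem 7.3.2 can be automated (a sketch assuming NumPy; the tolerance handles floating-point zeros):

import numpy as np

def classify(A, tol=1e-10):
    # Classify a symmetric matrix by the signs of its eigenvalues.
    ev = np.linalg.eigvalsh(np.array(A, dtype=float))
    if np.all(ev > tol): return "positive definite"
    if np.all(ev < -tol): return "negative definite"
    if np.all(ev >= -tol): return "positive semidefinite"
    if np.all(ev <= tol): return "negative semidefinite"
    return "indefinite"

print(classify([[1, 0], [0, 1]]))      # Exercise 19: positive definite
print(classify([[1, 1], [1, 1]]))      # Exercise 21: positive semidefinite
print(classify([[0, 0.5], [0.5, 0]]))  # Exercise 24: indefinite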
25.
(a) $\det(\lambda I - A) = \begin{vmatrix} \lambda-5 & -2 \\ -2 & \lambda-5 \end{vmatrix} = (\lambda-3)(\lambda-7)$; since both eigenvalues $\lambda = 3$ and $\lambda = 7$ are positive, by Theorem 7.3.2, $A$ is positive definite.
The determinant of the first principal submatrix of $A$ is $\det[5] = 5 > 0$. The determinant of the second principal submatrix of $A$ is $\det(A) = 21 > 0$. By Theorem 7.3.4, $A$ is positive definite.
(b) $\det(\lambda I - A) = \begin{vmatrix} \lambda-2 & 1 & 0 \\ 1 & \lambda-2 & 0 \\ 0 & 0 & \lambda-5 \end{vmatrix} = (\lambda-1)(\lambda-3)(\lambda-5)$; since all three eigenvalues $\lambda = 1$, $\lambda = 3$, and $\lambda = 5$ are positive, by Theorem 7.3.2, $A$ is positive definite.
The determinant of the first principal submatrix of $A$ is $\det[2] = 2 > 0$. The determinant of the second principal submatrix of $A$ is $\det\begin{bmatrix} 2 & -1 \\ -1 & 2 \end{bmatrix} = 3 > 0$. The determinant of the third principal submatrix of $A$ is $\det(A) = 15 > 0$. By Theorem 7.3.4, $A$ is positive definite.
26.
(a) $\det(\lambda I - A) = \begin{vmatrix} \lambda-2 & 1 \\ 1 & \lambda-2 \end{vmatrix} = (\lambda-1)(\lambda-3)$; since both eigenvalues $\lambda = 1$ and $\lambda = 3$ are positive, by Theorem 7.3.2, $A$ is positive definite.
The determinant of the first principal submatrix of $A$ is $\det[2] = 2 > 0$. The determinant of the second principal submatrix of $A$ is $\det(A) = 3 > 0$. By Theorem 7.3.4, $A$ is positive definite.
(b) $\det(\lambda I - A) = \begin{vmatrix} \lambda-3 & 1 & 0 \\ 1 & \lambda-2 & 1 \\ 0 & 1 & \lambda-3 \end{vmatrix} = (\lambda-1)(\lambda-3)(\lambda-4)$; since all three eigenvalues $\lambda = 1$, $\lambda = 3$, and $\lambda = 4$ are positive, by Theorem 7.3.2, $A$ is positive definite.
The determinant of the first principal submatrix of $A$ is $\det[3] = 3 > 0$. The determinant of the second principal submatrix of $A$ is $\det\begin{bmatrix} 3 & -1 \\ -1 & 2 \end{bmatrix} = 5 > 0$. The determinant of the third principal submatrix of $A$ is $\det(A) = 12 > 0$. By Theorem 7.3.4, $A$ is positive definite.
27.
(a) The determinant of the first principal submatrix of $A$ is $\det[3] = 3 > 0$. The determinant of the second principal submatrix of $A$ is $\det\begin{bmatrix} 3 & 1 \\ 1 & -1 \end{bmatrix} = -4 < 0$. The determinant of the third principal submatrix of $A$ is $\det(A) = -19 < 0$. By Theorem 7.3.4(c), $A$ is indefinite.
(b) The determinant of the first principal submatrix of $A$ is $\det[-3] = -3 < 0$. The determinant of the second principal submatrix of $A$ is $\det\begin{bmatrix} -3 & 2 \\ 2 & -3 \end{bmatrix} = 5 > 0$. The determinant of the third principal submatrix of $A$ is $\det(A) = -25 < 0$. By Theorem 7.3.4(b), $A$ is negative definite.
28.
(a) The determinant of the first principal submatrix of $A$ is $\det[4] = 4 > 0$. The determinant of the second principal submatrix of $A$ is $\det\begin{bmatrix} 4 & 1 \\ 1 & 2 \end{bmatrix} = 7 > 0$. The determinant of the third principal submatrix of $A$ is $\det(A) = 6 > 0$. By Theorem 7.3.4(a), $A$ is positive definite.
(b) The determinant of the first principal submatrix of $A$ is $\det[-4] = -4 < 0$. The determinant of the second principal submatrix of $A$ is $\det\begin{bmatrix} -4 & 1 \\ 1 & -2 \end{bmatrix} = 7 > 0$. The determinant of the third principal submatrix of $A$ is $\det(A) = -6 < 0$. By Theorem 7.3.4(b), $A$ is negative definite.
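As an illustrative companion to the principal-submatrix arguments of Exercises 25-28 (a sketch assuming NumPy; the demonstration matrix is the one assumed in Exercise 25(b) above):

import numpy as np

def leading_minors(A):
    # Determinants of the leading principal submatrices, as in Theorem 7.3.4.
    A = np.array(A, dtype=float)
    return [np.linalg.det(A[:k, :k]) for k in range(1, A.shape[0] + 1)]

A = [[2, -1, 0], [-1, 2, 0], [0, 0, 5]]
print(leading_minors(A))  # ~[2.0, 3.0, 15.0]: all positive, so positive definite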
29. The quadratic form $Q = 5x_1^2 + x_2^2 + kx_3^2 + 4x_1x_2 - 2x_1x_3 - 2x_2x_3$ can be expressed in matrix notation as $Q = x^T A x$ where $A = \begin{bmatrix} 5 & 2 & -1 \\ 2 & 1 & -1 \\ -1 & -1 & k \end{bmatrix}$. The determinants of the principal submatrices of $A$ are $\det[5] = 5$, $\det\begin{bmatrix} 5 & 2 \\ 2 & 1 \end{bmatrix} = 1$, and $\det(A) = k - 2$. Thus $Q$ is positive definite if and only if $k > 2$.
30. The quadratic form $Q = 3x_1^2 + x_2^2 + 2x_3^2 - 2x_1x_3 + 2kx_2x_3$ can be expressed in matrix notation as $Q = x^T A x$ where $A = \begin{bmatrix} 3 & 0 & -1 \\ 0 & 1 & k \\ -1 & k & 2 \end{bmatrix}$. The determinants of the principal submatrices of $A$ are $\det[3] = 3$, $\det\begin{bmatrix} 3 & 0 \\ 0 & 1 \end{bmatrix} = 3$, and $\det(A) = 5 - 3k^2$. Thus $Q$ is positive definite if and only if $5 - 3k^2 > 0$, i.e., $-\sqrt{\frac{5}{3}} < k < \sqrt{\frac{5}{3}}$.
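As an illustrative symbolic check of Exercise 30 (a sketch assuming SymPy and the matrix $A$ reconstructed above):

import sympy as sp

k = sp.symbols('k', real=True)
A = sp.Matrix([[3, 0, -1], [0, 1, k], [-1, k, 2]])

# Leading principal minors; the last one carries the condition on k.
minors = [A[:1, :1].det(), A[:2, :2].det(), A.det()]
print(minors)                        # [3, 3, 5 - 3*k**2]
print(sp.solve(5 - 3*k**2 > 0, k))   # -sqrt(15)/3 < k < sqrt(15)/3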
31.
(a) We assume $A$ is symmetric so that $x^T A y = (x^T A y)^T = y^T A^T x = y^T A x$. Therefore
$T(x + y) = (x + y)^T A (x + y) = x^T A x + y^T A x + x^T A y + y^T A y = T(x) + 2x^T A y + T(y)$.
(b) $T(cx) = (cx)^T A (cx) = c^2 x^T A x = c^2 T(x)$
32. $(c_1x_1 + c_2x_2 + \cdots + c_nx_n)^2 = c_1^2x_1^2 + c_2^2x_2^2 + \cdots + c_n^2x_n^2 + 2c_1c_2x_1x_2 + \cdots + 2c_1c_nx_1x_n + \cdots$
$= \begin{bmatrix} x_1 & x_2 & \cdots & x_n \end{bmatrix} \begin{bmatrix} c_1^2 & c_1c_2 & \cdots & c_1c_n \\ c_1c_2 & c_2^2 & \cdots & c_2c_n \\ \vdots & \vdots & & \vdots \\ c_1c_n & c_2c_n & \cdots & c_n^2 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}$
33.
(a) For each $i = 1, \ldots, n$ we have
$(x_i - \bar{x})^2 = x_i^2 - 2x_i\bar{x} + \bar{x}^2 = x_i^2 - \frac{2}{n}\sum_{j=1}^{n} x_i x_j + \frac{1}{n^2}\left(\sum_{j=1}^{n} x_j\right)^2 = x_i^2 - \frac{2}{n}\sum_{j=1}^{n} x_i x_j + \frac{1}{n^2}\left(\sum_{j=1}^{n} x_j^2 + 2\sum_{j=1}^{n}\sum_{k=j+1}^{n} x_j x_k\right)$
Thus in the quadratic form $s_x^2 = \frac{1}{n-1}\left[(x_1 - \bar{x})^2 + (x_2 - \bar{x})^2 + \cdots + (x_n - \bar{x})^2\right]$ the coefficient of $x_i^2$ is $\frac{1}{n-1}\left(1 - \frac{2}{n} + \frac{1}{n}\right) = \frac{1}{n}$, and the coefficient of $x_i x_j$ for $i \neq j$ is $\frac{1}{n-1}\left(-\frac{2}{n} - \frac{2}{n} + \frac{2}{n}\right) = -\frac{2}{n(n-1)}$. It follows that
$s_x^2 = x^T A x$ where $A = \begin{bmatrix} \frac{1}{n} & -\frac{1}{n(n-1)} & \cdots & -\frac{1}{n(n-1)} \\ -\frac{1}{n(n-1)} & \frac{1}{n} & \cdots & -\frac{1}{n(n-1)} \\ \vdots & \vdots & & \vdots \\ -\frac{1}{n(n-1)} & -\frac{1}{n(n-1)} & \cdots & \frac{1}{n} \end{bmatrix}$.
(b) We have $s_x^2 = \frac{1}{n-1}\left[(x_1 - \bar{x})^2 + (x_2 - \bar{x})^2 + \cdots + (x_n - \bar{x})^2\right] \geq 0$, and $s_x^2 = 0$ if and only if $x_1 = \bar{x}$, $x_2 = \bar{x}$, ..., $x_n = \bar{x}$, i.e., if and only if $x_1 = x_2 = \cdots = x_n$. Thus $s_x^2$ is a positive semidefinite form.
34.
(a) To simplify the equation, multiply both sides by $2$ so that $x^T A x = \frac{3}{2}$ with $A = \begin{bmatrix} 2 & 1 & 1 \\ 1 & 2 & 1 \\ 1 & 1 & 2 \end{bmatrix}$.
We have $\det(\lambda I - A) = \begin{vmatrix} \lambda-2 & -1 & -1 \\ -1 & \lambda-2 & -1 \\ -1 & -1 & \lambda-2 \end{vmatrix} = (\lambda-1)^2(\lambda-4)$, so the eigenvalues of $A$ are $1$ and $4$.
The reduced row echelon form of $1I - A$ is $\begin{bmatrix} 1 & 1 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}$ so that the eigenspace corresponding to $\lambda = 1$ contains vectors $(x, y, z)^T$ where $x = -s - t$, $y = s$, $z = t$. Vectors $p_1 = (-1, 1, 0)^T$ and $p_2 = (-1, 0, 1)^T$ form a basis for this eigenspace. We apply the Gram-Schmidt process to find an orthogonal basis for this eigenspace: $v_1 = p_1 = (-1, 1, 0)^T$ and $v_2 = p_2 - \frac{\langle p_2, v_1 \rangle}{\|v_1\|^2}v_1 = (-1, 0, 1)^T - \frac{1}{2}(-1, 1, 0)^T = \left(-\frac{1}{2}, -\frac{1}{2}, 1\right)^T$, then proceed to normalize the two vectors to yield an orthonormal basis: $q_1 = \frac{v_1}{\|v_1\|} = \left(-\frac{1}{\sqrt{2}}, \frac{1}{\sqrt{2}}, 0\right)^T$ and $q_2 = \frac{v_2}{\|v_2\|} = \left(-\frac{1}{\sqrt{6}}, -\frac{1}{\sqrt{6}}, \frac{2}{\sqrt{6}}\right)^T$.
The reduced row echelon form of $4I - A$ is $\begin{bmatrix} 1 & 0 & -1 \\ 0 & 1 & -1 \\ 0 & 0 & 0 \end{bmatrix}$ so that the eigenspace corresponding to $\lambda = 4$ contains vectors where $x = t$, $y = t$, $z = t$. A vector $p_3 = (1, 1, 1)^T$ forms a basis for this eigenspace. Applying the Gram-Schmidt process to $\{p_3\}$ amounts to simply normalizing this vector.
Therefore an orthogonal change of variables $x = Py$ that eliminates the cross product terms is $\begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} -\frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{6}} & \frac{1}{\sqrt{3}} \\ \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{6}} & \frac{1}{\sqrt{3}} \\ 0 & \frac{2}{\sqrt{6}} & \frac{1}{\sqrt{3}} \end{bmatrix} \begin{bmatrix} x' \\ y' \\ z' \end{bmatrix}$. In terms of the new variables, we have
$x^T A x = \begin{bmatrix} x' & y' & z' \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 4 \end{bmatrix} \begin{bmatrix} x' \\ y' \\ z' \end{bmatrix} = x'^2 + y'^2 + 4z'^2$, so the original equation is expressed as $x'^2 + y'^2 + 4z'^2 = \frac{3}{2}$, or $\frac{2}{3}x'^2 + \frac{2}{3}y'^2 + \frac{8}{3}z'^2 = 1$. The lengths of the three axes in the $x'$-, $y'$-, and $z'$-directions are $\sqrt{6}$, $\sqrt{6}$, and $\frac{\sqrt{6}}{2}$, respectively.
(b) $A$ must be positive definite.
35. The eigenvalues of $A$ must be positive and equal to each other. That is, $A$ must have a positive eigenvalue of multiplicity 2.
36. We express the quadratic form in the matrix notation $ax^2 + 2bxy + cy^2 = \begin{bmatrix} x & y \end{bmatrix} \begin{bmatrix} a & b \\ b & c \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = x^T A x$.
Rotating a coordinate system through an angle $\theta$ amounts to the change of variables $x = Py'$ where $P = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}$.
The two off-diagonal entries of the matrix $P^T A P = \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} a & b \\ b & c \end{bmatrix} \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}$ are both equal to $(c - a)\sin\theta\cos\theta + b(\cos^2\theta - \sin^2\theta) = \frac{c-a}{2}\sin 2\theta + b\cos 2\theta$, which equals $0$ if $\frac{a-c}{2b} = \frac{\cos 2\theta}{\sin 2\theta} = \cot 2\theta$. Hence the resulting quadratic form $y'^T (P^T A P) y'$ has no cross product terms.
37.
T
If A is an n  n symmetric matrix such that its eigenvalues 1 , ..., n are all nonnegative, then by Theorem
7.3.1 there exists a change of variable y  Px for which xT Ax  1 y12    n yn2 . The right hand side is
always nonnegative, consequently x T Ax  0 for all x in R n .
True-False Exercises
(a)
True. This follows from part (a) of Theorem 7.3.2 and from the margin note next to Definition 1.
(b)
False. The term 4x1 x2 x3 cannot be included.
(c)
True. One can rewrite ( x1  3 x2 )2  x12  6 x1 x2  9 x22 .
(d)
True. None of the eigenvalues will be 0.
(e)
False. A symmetric matrix can also be positive semidefinite or negative semidefinite.
(f)
True.
(g)
True. $x \cdot x = x_1^2 + x_2^2 + \cdots + x_n^2$
(h)
True. Eigenvalues of A1 are reciprocals of eigenvalues of A . Therefore if all eigenvalues of A are
positive, the same is true for all eigenvalues of A1 .
(i)
True.
(j)
True. This follows from part (a) of Theorem 7.3.4.
(k)
True.
(l)
False. If c < 0, x T Ax  c has no graph.
7.4 Optimization Using Quadratic Forms
1. We express the quadratic form in the matrix notation $z = 5x^2 - y^2 = x^T A x = \begin{bmatrix} x & y \end{bmatrix} \begin{bmatrix} 5 & 0 \\ 0 & -1 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix}$.
$\det(\lambda I - A) = \begin{vmatrix} \lambda-5 & 0 \\ 0 & \lambda+1 \end{vmatrix} = (\lambda-5)(\lambda+1)$, therefore the eigenvalues of $A$ are $\lambda = 5$ and $\lambda = -1$.
The reduced row echelon form of $5I - A$ is $\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}$ so that the eigenspace corresponding to $\lambda = 5$ consists of vectors $(x, y)^T$ where $x = t$, $y = 0$. A vector $(1, 0)^T$ forms a basis for this eigenspace - this vector is already normalized.
The reduced row echelon form of $-1I - A$ is $\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}$ so that the eigenspace corresponding to $\lambda = -1$ consists of vectors where $x = 0$, $y = t$. A vector $(0, 1)^T$ forms a basis for this eigenspace - this vector is already normalized.
We conclude that the constrained extrema are
- constrained maximum: $z = 5$ at $(x, y) = (\pm 1, 0)$;
- constrained minimum: $z = -1$ at $(x, y) = (0, \pm 1)$.
2. We express the quadratic form in the matrix notation $z = xy = x^T A x = \begin{bmatrix} x & y \end{bmatrix} \begin{bmatrix} 0 & \frac{1}{2} \\ \frac{1}{2} & 0 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix}$.
$\det(\lambda I - A) = \left(\lambda - \frac{1}{2}\right)\left(\lambda + \frac{1}{2}\right)$, therefore the eigenvalues of $A$ are $\lambda = \frac{1}{2}$ and $\lambda = -\frac{1}{2}$.
The reduced row echelon form of $\frac{1}{2}I - A$ is $\begin{bmatrix} 1 & -1 \\ 0 & 0 \end{bmatrix}$ so that the eigenspace corresponding to $\lambda = \frac{1}{2}$ consists of vectors $(x, y)^T$ where $x = t$, $y = t$. A vector $(1, 1)^T$ forms a basis for this eigenspace. A normalized eigenvector in this eigenspace is $\left(\frac{1}{\sqrt{2}}, \frac{1}{\sqrt{2}}\right)^T$.
The reduced row echelon form of $-\frac{1}{2}I - A$ is $\begin{bmatrix} 1 & 1 \\ 0 & 0 \end{bmatrix}$ so that the eigenspace corresponding to $\lambda = -\frac{1}{2}$ consists of vectors where $x = -t$, $y = t$. A vector $(-1, 1)^T$ forms a basis for this eigenspace. A normalized eigenvector in this eigenspace is $\left(-\frac{1}{\sqrt{2}}, \frac{1}{\sqrt{2}}\right)^T$.
We conclude that the constrained extrema are
- constrained maximum: $z = \frac{1}{2}$ at $(x, y) = \left(\frac{1}{\sqrt{2}}, \frac{1}{\sqrt{2}}\right)$ and $(x, y) = \left(-\frac{1}{\sqrt{2}}, -\frac{1}{\sqrt{2}}\right)$;
- constrained minimum: $z = -\frac{1}{2}$ at $(x, y) = \left(-\frac{1}{\sqrt{2}}, \frac{1}{\sqrt{2}}\right)$ and $(x, y) = \left(\frac{1}{\sqrt{2}}, -\frac{1}{\sqrt{2}}\right)$.
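As an illustrative numerical check of Exercise 2 (a sketch assuming NumPy; by Theorem 7.4.1 the extreme values on the unit circle are the extreme eigenvalues of $A$):

import numpy as np

A = np.array([[0.0, 0.5], [0.5, 0.0]])
ev, V = np.linalg.eigh(A)   # eigenvalues in ascending order
print(ev)                   # [-0.5, 0.5]: constrained minimum and maximum
print(V[:, 1])              # a unit maximizer, ~ +/-(0.707, 0.707)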
3. We express the quadratic form in the matrix notation $z = 3x^2 + 7y^2 = x^T A x = \begin{bmatrix} x & y \end{bmatrix} \begin{bmatrix} 3 & 0 \\ 0 & 7 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix}$.
$\det(\lambda I - A) = (\lambda-3)(\lambda-7)$, therefore the eigenvalues of $A$ are $\lambda = 3$ and $\lambda = 7$.
The reduced row echelon form of $3I - A$ is $\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}$ so that the eigenspace corresponding to $\lambda = 3$ consists of vectors $(x, y)^T$ where $x = t$, $y = 0$. A vector $(1, 0)^T$ forms a basis for this eigenspace - this vector is already normalized.
The reduced row echelon form of $7I - A$ is $\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}$ so that the eigenspace corresponding to $\lambda = 7$ consists of vectors where $x = 0$, $y = t$. A vector $(0, 1)^T$ forms a basis for this eigenspace - this vector is already normalized.
We conclude that the constrained extrema are
- constrained maximum: $z = 7$ at $(x, y) = (0, \pm 1)$;
- constrained minimum: $z = 3$ at $(x, y) = (\pm 1, 0)$.
4. We express the quadratic form in the matrix notation $z = 5x^2 + 5xy = x^T A x = \begin{bmatrix} x & y \end{bmatrix} \begin{bmatrix} 5 & \frac{5}{2} \\ \frac{5}{2} & 0 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix}$.
$\det(\lambda I - A) = \begin{vmatrix} \lambda-5 & -\frac{5}{2} \\ -\frac{5}{2} & \lambda \end{vmatrix} = \lambda^2 - 5\lambda - \frac{25}{4} = \left(\lambda - \frac{5+5\sqrt{2}}{2}\right)\left(\lambda - \frac{5-5\sqrt{2}}{2}\right)$, therefore the eigenvalues of $A$ are $\lambda = \frac{5+5\sqrt{2}}{2}$ and $\lambda = \frac{5-5\sqrt{2}}{2}$.
The reduced row echelon form of $\frac{5+5\sqrt{2}}{2}I - A$ is $\begin{bmatrix} 1 & -(1+\sqrt{2}) \\ 0 & 0 \end{bmatrix}$ so that the eigenspace corresponding to $\lambda = \frac{5+5\sqrt{2}}{2}$ consists of vectors $(x, y)^T$ where $x = (1+\sqrt{2})t$, $y = t$. A vector $(1+\sqrt{2}, 1)^T$ forms a basis for this eigenspace. A normalized eigenvector in this eigenspace is $\left(\frac{1+\sqrt{2}}{\sqrt{4+2\sqrt{2}}}, \frac{1}{\sqrt{4+2\sqrt{2}}}\right)^T$.
The reduced row echelon form of $\frac{5-5\sqrt{2}}{2}I - A$ is $\begin{bmatrix} 1 & -(1-\sqrt{2}) \\ 0 & 0 \end{bmatrix}$ so that the eigenspace corresponding to $\lambda = \frac{5-5\sqrt{2}}{2}$ consists of vectors where $x = (1-\sqrt{2})t$, $y = t$. A vector $(1-\sqrt{2}, 1)^T$ forms a basis for this eigenspace. A normalized eigenvector in this eigenspace is $\left(\frac{1-\sqrt{2}}{\sqrt{4-2\sqrt{2}}}, \frac{1}{\sqrt{4-2\sqrt{2}}}\right)^T$.
We conclude that the constrained extrema are
- constrained maximum: $z = \frac{5+5\sqrt{2}}{2} \approx 6.036$ at $(x, y) = \left(\frac{1+\sqrt{2}}{\sqrt{4+2\sqrt{2}}}, \frac{1}{\sqrt{4+2\sqrt{2}}}\right) \approx (0.924, 0.383)$ and at the negative of this point;
- constrained minimum: $z = \frac{5-5\sqrt{2}}{2} \approx -1.036$ at $(x, y) = \left(\frac{1-\sqrt{2}}{\sqrt{4-2\sqrt{2}}}, \frac{1}{\sqrt{4-2\sqrt{2}}}\right) \approx (-0.383, 0.924)$ and at the negative of this point.
5. We express the quadratic form in the matrix notation $w = 9x^2 + 4y^2 + 3z^2 = x^T A x = \begin{bmatrix} x & y & z \end{bmatrix} \begin{bmatrix} 9 & 0 & 0 \\ 0 & 4 & 0 \\ 0 & 0 & 3 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix}$.
$\det(\lambda I - A) = (\lambda-3)(\lambda-4)(\lambda-9)$, therefore the eigenvalues of $A$ are $\lambda = 3$, $\lambda = 4$, and $\lambda = 9$.
The reduced row echelon form of $3I - A$ is $\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix}$ so that the eigenspace corresponding to $\lambda = 3$ consists of vectors $(x, y, z)^T$ where $x = 0$, $y = 0$, $z = t$. A vector $(0, 0, 1)^T$ forms a basis for this eigenspace - this vector is already normalized.
The reduced row echelon form of $9I - A$ is $\begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix}$ so that the eigenspace corresponding to $\lambda = 9$ consists of vectors where $x = t$, $y = 0$, $z = 0$. A vector $(1, 0, 0)^T$ forms a basis for this eigenspace - this vector is already normalized.
We conclude that the constrained extrema are
- constrained maximum: $w = 9$ at $(x, y, z) = (\pm 1, 0, 0)$;
- constrained minimum: $w = 3$ at $(x, y, z) = (0, 0, \pm 1)$.
6. We express the quadratic form in the matrix notation $w = 2x^2 + y^2 + z^2 + 2xy + 2xz = x^T A x = \begin{bmatrix} x & y & z \end{bmatrix} \begin{bmatrix} 2 & 1 & 1 \\ 1 & 1 & 0 \\ 1 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix}$.
$\det(\lambda I - A) = \begin{vmatrix} \lambda-2 & -1 & -1 \\ -1 & \lambda-1 & 0 \\ -1 & 0 & \lambda-1 \end{vmatrix} = \lambda(\lambda-1)(\lambda-3)$, therefore the eigenvalues of $A$ are $\lambda = 0$, $\lambda = 1$, and $\lambda = 3$.
The reduced row echelon form of $0I - A$ is $\begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & -1 \\ 0 & 0 & 0 \end{bmatrix}$ so that the eigenspace corresponding to $\lambda = 0$ consists of vectors $(x, y, z)^T$ where $x = -t$, $y = t$, $z = t$. A vector $(-1, 1, 1)^T$ forms a basis for this eigenspace. A normalized eigenvector in this eigenspace is $\left(-\frac{1}{\sqrt{3}}, \frac{1}{\sqrt{3}}, \frac{1}{\sqrt{3}}\right)^T$.
The reduced row echelon form of $3I - A$ is $\begin{bmatrix} 1 & 0 & -2 \\ 0 & 1 & -1 \\ 0 & 0 & 0 \end{bmatrix}$ so that the eigenspace corresponding to $\lambda = 3$ consists of vectors where $x = 2t$, $y = t$, $z = t$. A vector $(2, 1, 1)^T$ forms a basis for this eigenspace. A normalized eigenvector in this eigenspace is $\left(\frac{2}{\sqrt{6}}, \frac{1}{\sqrt{6}}, \frac{1}{\sqrt{6}}\right)^T$.
We conclude that the constrained extrema are
- constrained maximum: $w = 3$ at $(x, y, z) = \left(\frac{2}{\sqrt{6}}, \frac{1}{\sqrt{6}}, \frac{1}{\sqrt{6}}\right)$ and $(x, y, z) = \left(-\frac{2}{\sqrt{6}}, -\frac{1}{\sqrt{6}}, -\frac{1}{\sqrt{6}}\right)$;
- constrained minimum: $w = 0$ at $(x, y, z) = \left(-\frac{1}{\sqrt{3}}, \frac{1}{\sqrt{3}}, \frac{1}{\sqrt{3}}\right)$ and $(x, y, z) = \left(\frac{1}{\sqrt{3}}, -\frac{1}{\sqrt{3}}, -\frac{1}{\sqrt{3}}\right)$.
7. The constraint $4x^2 + 8y^2 = 16$ can be rewritten as $\left(\frac{x}{2}\right)^2 + \left(\frac{y}{\sqrt{2}}\right)^2 = 1$. We define new variables $x_1$ and $y_1$ by $x = 2x_1$ and $y = \sqrt{2}\,y_1$. Our problem can now be reformulated to find the maximum and minimum value of
$2\sqrt{2}\,x_1 y_1 = \begin{bmatrix} x_1 & y_1 \end{bmatrix} \begin{bmatrix} 0 & \sqrt{2} \\ \sqrt{2} & 0 \end{bmatrix} \begin{bmatrix} x_1 \\ y_1 \end{bmatrix}$ subject to the constraint $x_1^2 + y_1^2 = 1$. We have
$\det(\lambda I - A) = \lambda^2 - 2 = (\lambda - \sqrt{2})(\lambda + \sqrt{2})$, thus $A$ has eigenvalues $\pm\sqrt{2}$.
The reduced row echelon form of $\sqrt{2}I - A$ is $\begin{bmatrix} 1 & -1 \\ 0 & 0 \end{bmatrix}$ so that the eigenspace corresponding to $\lambda = \sqrt{2}$ consists of vectors $(x_1, y_1)^T$ where $x_1 = t$, $y_1 = t$. A vector $(1, 1)^T$ forms a basis for this eigenspace. A normalized eigenvector in this eigenspace is $\left(\frac{1}{\sqrt{2}}, \frac{1}{\sqrt{2}}\right)^T$. In terms of the original variables, this corresponds to $x = 2x_1 = \sqrt{2}$ and $y = \sqrt{2}\,y_1 = 1$.
The reduced row echelon form of $-\sqrt{2}I - A$ is $\begin{bmatrix} 1 & 1 \\ 0 & 0 \end{bmatrix}$ so that the eigenspace corresponding to $\lambda = -\sqrt{2}$ consists of vectors where $x_1 = -t$, $y_1 = t$. A vector $(-1, 1)^T$ forms a basis for this eigenspace. A normalized eigenvector in this eigenspace is $\left(-\frac{1}{\sqrt{2}}, \frac{1}{\sqrt{2}}\right)^T$. In terms of the original variables, this corresponds to $x = 2x_1 = -\sqrt{2}$ and $y = \sqrt{2}\,y_1 = 1$.
We conclude that the constrained extrema are
- constrained maximum value: $\sqrt{2}$ at $(x, y) = (\sqrt{2}, 1)$ and $(x, y) = (-\sqrt{2}, -1)$;
- constrained minimum value: $-\sqrt{2}$ at $(x, y) = (-\sqrt{2}, 1)$ and $(x, y) = (\sqrt{2}, -1)$.
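As an illustrative numerical check of the extrema reported in Exercise 7 (a sketch assuming the objective $z = xy$ and the constraint reconstructed above):

import numpy as np

# Each reported extremum should satisfy the constraint and give z = +/- sqrt(2).
for x, y in [(np.sqrt(2), 1.0), (-np.sqrt(2), 1.0)]:
    print("z =", x * y, " constraint =", 4 * x**2 + 8 * y**2)  # z = +/-1.414..., constraint = 16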
8. The constraint $x^2 + 3y^2 = 16$ can be rewritten as $\left(\frac{x}{4}\right)^2 + \left(\frac{y}{4/\sqrt{3}}\right)^2 = 1$. We define new variables $x_1$ and $y_1$ by $x = 4x_1$ and $y = \frac{4}{\sqrt{3}}y_1$. Our problem can now be reformulated to find the maximum and minimum value of
$16x_1^2 + \frac{16}{\sqrt{3}}x_1 y_1 + \frac{32}{3}y_1^2 = \begin{bmatrix} x_1 & y_1 \end{bmatrix} \begin{bmatrix} 16 & \frac{8}{\sqrt{3}} \\ \frac{8}{\sqrt{3}} & \frac{32}{3} \end{bmatrix} \begin{bmatrix} x_1 \\ y_1 \end{bmatrix}$ subject to the constraint $x_1^2 + y_1^2 = 1$. We have
$\det(\lambda I - A) = \begin{vmatrix} \lambda-16 & -\frac{8}{\sqrt{3}} \\ -\frac{8}{\sqrt{3}} & \lambda-\frac{32}{3} \end{vmatrix} = (\lambda - 8)\left(\lambda - \frac{56}{3}\right)$, thus $A$ has eigenvalues $8$ and $\frac{56}{3}$.
The reduced row echelon form of $\frac{56}{3}I - A$ is $\begin{bmatrix} 1 & -\sqrt{3} \\ 0 & 0 \end{bmatrix}$ so that the eigenspace corresponding to $\lambda = \frac{56}{3}$ consists of vectors $(x_1, y_1)^T$ where $x_1 = \sqrt{3}\,t$, $y_1 = t$. A vector $(\sqrt{3}, 1)^T$ forms a basis for this eigenspace. A normalized eigenvector in this eigenspace is $\left(\frac{\sqrt{3}}{2}, \frac{1}{2}\right)^T$. In terms of the original variables, this corresponds to $x = 4x_1 = 2\sqrt{3}$ and $y = \frac{4}{\sqrt{3}}y_1 = \frac{2}{\sqrt{3}}$.
The reduced row echelon form of $8I - A$ is $\begin{bmatrix} 1 & \frac{1}{\sqrt{3}} \\ 0 & 0 \end{bmatrix}$ so that the eigenspace corresponding to $\lambda = 8$ consists of vectors where $x_1 = -\frac{1}{\sqrt{3}}t$, $y_1 = t$. A vector $\left(-\frac{1}{\sqrt{3}}, 1\right)^T$ forms a basis for this eigenspace. A normalized eigenvector in this eigenspace is $\left(-\frac{1}{2}, \frac{\sqrt{3}}{2}\right)^T$. In terms of the original variables, this corresponds to $x = 4x_1 = -2$ and $y = \frac{4}{\sqrt{3}}y_1 = 2$.
We conclude that the constrained extrema are
- constrained maximum value: $\frac{56}{3}$ at $(x, y) = \left(2\sqrt{3}, \frac{2}{\sqrt{3}}\right)$ and $(x, y) = \left(-2\sqrt{3}, -\frac{2}{\sqrt{3}}\right)$;
- constrained minimum value: $8$ at $(x, y) = (-2, 2)$ and $(x, y) = (2, -2)$.
9. The following illustration indicates positions of constrained extrema consistent with the solution that was obtained for Exercise 1.
10. The following illustration indicates positions of constrained extrema consistent with the solution that was obtained for Exercise 2.
11.
(a) The first partial derivatives of $f(x, y)$ are $f_x(x, y) = 4y - 4x^3$ and $f_y(x, y) = 4x - 4y^3$.
Since $f_x(0,0) = f_y(0,0) = 0$, $f_x(1,1) = f_y(1,1) = 0$, and $f_x(-1,-1) = f_y(-1,-1) = 0$, $f$ has critical points at $(0,0)$, $(1,1)$, and $(-1,-1)$.
(b) The second partial derivatives of $f(x, y)$ are $f_{xx}(x, y) = -12x^2$, $f_{xy}(x, y) = 4$, and $f_{yy}(x, y) = -12y^2$, therefore the Hessian matrix of $f$ is $H(x, y) = \begin{bmatrix} -12x^2 & 4 \\ 4 & -12y^2 \end{bmatrix}$.
$\det(\lambda I - H(0,0)) = \begin{vmatrix} \lambda & -4 \\ -4 & \lambda \end{vmatrix} = (\lambda-4)(\lambda+4)$, so $H(0,0)$ has eigenvalues $4$ and $-4$; since $H(0,0)$ is indefinite, $f$ has a saddle point at $(0,0)$;
$\det(\lambda I - H(1,1)) = \begin{vmatrix} \lambda+12 & -4 \\ -4 & \lambda+12 \end{vmatrix} = (\lambda+8)(\lambda+16)$, so $H(1,1)$ has eigenvalues $-8$ and $-16$; since $H(1,1)$ is negative definite, $f$ has a relative maximum at $(1,1)$;
$\det(\lambda I - H(-1,-1)) = (\lambda+8)(\lambda+16)$, so $H(-1,-1)$ has eigenvalues $-8$ and $-16$; since $H(-1,-1)$ is negative definite, $f$ has a relative maximum at $(-1,-1)$.
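As an illustrative numerical companion to the classification in Exercise 11 (a sketch assuming NumPy and the Hessian reconstructed above):

import numpy as np

def hessian(x, y):
    # Hessian assumed from the second partials computed above.
    return np.array([[-12 * x**2, 4.0], [4.0, -12 * y**2]])

for pt in [(0.0, 0.0), (1.0, 1.0), (-1.0, -1.0)]:
    print(pt, np.linalg.eigvalsh(hessian(*pt)))
# (0,0): eigenvalues -4, 4  -> indefinite -> saddle point
# (+/-1, +/-1): eigenvalues -16, -8 -> negative definite -> relative maximum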
12.
(a) The first partial derivatives of $f(x, y)$ are $f_x(x, y) = 3x^2 + 6y$ and $f_y(x, y) = 6x + 3y^2$.
Since $f_x(0,0) = f_y(0,0) = 0$ and $f_x(-2,-2) = f_y(-2,-2) = 0$, $f$ has critical points at $(0,0)$ and $(-2,-2)$.
(b) The second partial derivatives of $f(x, y)$ are $f_{xx}(x, y) = 6x$, $f_{xy}(x, y) = 6$, and $f_{yy}(x, y) = 6y$, therefore the Hessian matrix of $f$ is $H(x, y) = \begin{bmatrix} 6x & 6 \\ 6 & 6y \end{bmatrix}$.
$\det(\lambda I - H(0,0)) = (\lambda-6)(\lambda+6)$, so $H(0,0)$ has eigenvalues $6$ and $-6$; since $H(0,0)$ is indefinite, $f$ has a saddle point at $(0,0)$;
$\det(\lambda I - H(-2,-2)) = \begin{vmatrix} \lambda+12 & -6 \\ -6 & \lambda+12 \end{vmatrix} = (\lambda+6)(\lambda+18)$, so $H(-2,-2)$ has eigenvalues $-6$ and $-18$; since $H(-2,-2)$ is negative definite, $f$ has a relative maximum at $(-2,-2)$.
13. The first partial derivatives of $f$ are $f_x(x, y) = 3x^2 + 3y$ and $f_y(x, y) = 3x + 3y^2$. To find the critical points we set $f_x$ and $f_y$ equal to zero. This yields the equations $y = -x^2$ and $x = -y^2$. From this we conclude that $y = -y^4$, and so $y = 0$ or $y = -1$. The corresponding values of $x$ are $x = 0$ and $x = -1$, respectively. Thus there are two critical points: $(0,0)$ and $(-1,-1)$.
The Hessian matrix is $H(x, y) = \begin{bmatrix} f_{xx}(x,y) & f_{xy}(x,y) \\ f_{yx}(x,y) & f_{yy}(x,y) \end{bmatrix} = \begin{bmatrix} 6x & 3 \\ 3 & 6y \end{bmatrix}$.
$\det(\lambda I - H(0,0)) = (\lambda-3)(\lambda+3)$, so $H(0,0)$ has eigenvalues $3$ and $-3$; since $H(0,0)$ is indefinite, $f$ has a saddle point at $(0,0)$;
$\det(\lambda I - H(-1,-1)) = \begin{vmatrix} \lambda+6 & -3 \\ -3 & \lambda+6 \end{vmatrix} = (\lambda+3)(\lambda+9)$, so $H(-1,-1)$ has eigenvalues $-3$ and $-9$; since $H(-1,-1)$ is negative definite, $f$ has a relative maximum at $(-1,-1)$.
14. The first and second partial derivatives of $f$ are:
$f_x(x, y) = 3x^2 - 3y$, $f_y(x, y) = -3x + 3y^2$, $f_{xx}(x, y) = 6x$, $f_{xy}(x, y) = -3$, and $f_{yy}(x, y) = 6y$.
Setting $f_x = 0$ and $f_y = 0$ results in $y = x^2$ and $x = y^2$; substituting the former equation into the latter yields $x = x^4$. Rewriting this equation as $x^4 - x = 0$ then factoring yields $x(x^3 - 1) = 0$ and $x(x-1)(x^2+x+1) = 0$. Thus either $x = 0$ or $x = 1$; from the equation $y = x^2$, the critical points are $(0,0)$ and $(1,1)$.
The Hessian matrix of $f$ is $H(x, y) = \begin{bmatrix} f_{xx}(x,y) & f_{xy}(x,y) \\ f_{yx}(x,y) & f_{yy}(x,y) \end{bmatrix} = \begin{bmatrix} 6x & -3 \\ -3 & 6y \end{bmatrix}$.
$\det(\lambda I - H(0,0)) = (\lambda-3)(\lambda+3)$, so $H(0,0)$ has eigenvalues $3$ and $-3$; since $H(0,0)$ is indefinite, $f$ has a saddle point at $(0,0)$;
$\det(\lambda I - H(1,1)) = \begin{vmatrix} \lambda-6 & 3 \\ 3 & \lambda-6 \end{vmatrix} = (\lambda-3)(\lambda-9)$, so $H(1,1)$ has eigenvalues $3$ and $9$; since $H(1,1)$ is positive definite, $f$ has a relative minimum at $(1,1)$.
15. The first partial derivatives of $f$ are $f_x(x, y) = 2x - 2xy$ and $f_y(x, y) = 4y - x^2$. To find the critical points we set $f_x$ and $f_y$ equal to zero. This yields the equations $2x(1 - y) = 0$ and $y = \frac{1}{4}x^2$. From the first, we conclude that $x = 0$ or $y = 1$. Thus there are three critical points: $(0,0)$, $(2,1)$, and $(-2,1)$.
The Hessian matrix is $H(x, y) = \begin{bmatrix} f_{xx}(x,y) & f_{xy}(x,y) \\ f_{yx}(x,y) & f_{yy}(x,y) \end{bmatrix} = \begin{bmatrix} 2-2y & -2x \\ -2x & 4 \end{bmatrix}$.
$\det(\lambda I - H(0,0)) = \begin{vmatrix} \lambda-2 & 0 \\ 0 & \lambda-4 \end{vmatrix} = (\lambda-2)(\lambda-4)$, so $H(0,0)$ has eigenvalues $2$ and $4$; since $H(0,0)$ is positive definite, $f$ has a relative minimum at $(0,0)$.
$\det(\lambda I - H(2,1)) = \begin{vmatrix} \lambda & 4 \\ 4 & \lambda-4 \end{vmatrix} = \lambda^2 - 4\lambda - 16$, so the eigenvalues of $H(2,1)$ are $2 \pm 2\sqrt{5}$. One of these is positive and one is negative; thus this matrix is indefinite and $f$ has a saddle point at $(2,1)$.
$\det(\lambda I - H(-2,1)) = \lambda^2 - 4\lambda - 16$, so the eigenvalues of $H(-2,1)$ are also $2 \pm 2\sqrt{5}$. One of these is positive and one is negative; thus this matrix is indefinite and $f$ has a saddle point at $(-2,1)$.
16. The first and second partial derivatives of $f$ are:
$f_x(x, y) = 3x^2 - 3$, $f_y(x, y) = 3y^2 - 3$, $f_{xx}(x, y) = 6x$, $f_{xy}(x, y) = 0$, and $f_{yy}(x, y) = 6y$.
Setting $f_x = 0$ and $f_y = 0$ results in four critical points: $(-1,-1)$, $(-1,1)$, $(1,-1)$, and $(1,1)$.
The Hessian matrix of $f$ is $H(x, y) = \begin{bmatrix} f_{xx}(x,y) & f_{xy}(x,y) \\ f_{yx}(x,y) & f_{yy}(x,y) \end{bmatrix} = \begin{bmatrix} 6x & 0 \\ 0 & 6y \end{bmatrix}$. By Theorem 5.1.2, the eigenvalues of the diagonal matrix $H(x, y)$ are its main diagonal entries, therefore
- at the critical point $(-1,-1)$, $H(-1,-1)$ has eigenvalues $-6, -6$, so $f$ has a relative maximum,
- at the critical point $(-1,1)$, $H(-1,1)$ has eigenvalues $-6, 6$, so $f$ has a saddle point,
- at the critical point $(1,-1)$, $H(1,-1)$ has eigenvalues $6, -6$, so $f$ has a saddle point,
- at the critical point $(1,1)$, $H(1,1)$ has eigenvalues $6, 6$, so $f$ has a relative minimum.
17. The problem is to maximize $z = 4xy$ subject to $x^2 + 25y^2 = 25$, or $\left(\frac{x}{5}\right)^2 + \left(\frac{y}{1}\right)^2 = 1$.
Let $x = 5x_1$ and $y = y_1$, so that the problem is to maximize $z = 20x_1y_1$ subject to $\|(x_1, y_1)\| = 1$.
Write $z = x^T A x = \begin{bmatrix} x_1 & y_1 \end{bmatrix} \begin{bmatrix} 0 & 10 \\ 10 & 0 \end{bmatrix} \begin{bmatrix} x_1 \\ y_1 \end{bmatrix}$.
$\det(\lambda I - A) = \lambda^2 - 100 = (\lambda-10)(\lambda+10)$.
The largest eigenvalue of $A$ is $\lambda = 10$, which has positive unit eigenvector $\left(\frac{1}{\sqrt{2}}, \frac{1}{\sqrt{2}}\right)^T$. Thus the maximum value of $z$ is $20\left(\frac{1}{\sqrt{2}}\right)\left(\frac{1}{\sqrt{2}}\right) = 10$, which occurs when $x = 5x_1 = \frac{5}{\sqrt{2}}$ and $y = y_1 = \frac{1}{\sqrt{2}}$, which are the coordinates of one of the corner points of the rectangle.
18.
Since || x ||  1 and Ax  2 x , it follows that xT Ax  xT  2x   2(xT x)  2 || x || 2  2 .
19.
(a) The first partial derivatives of $f(x, y)$ are $f_x(x, y) = 4x^3$ and $f_y(x, y) = 4y^3$. Since $f_x(0,0) = f_y(0,0) = 0$, $f$ has a critical point at $(0,0)$.
The second partial derivatives of $f(x, y)$ are $f_{xx}(x, y) = 12x^2$, $f_{xy}(x, y) = 0$, and $f_{yy}(x, y) = 12y^2$. We have $f_{xx}(0,0)f_{yy}(0,0) - f_{xy}^2(0,0) = 0$, therefore the second derivative test is inconclusive.
The first partial derivatives of $g(x, y)$ are $g_x(x, y) = 4x^3$ and $g_y(x, y) = -4y^3$. Since $g_x(0,0) = g_y(0,0) = 0$, $g$ has a critical point at $(0,0)$.
The second partial derivatives of $g(x, y)$ are $g_{xx}(x, y) = 12x^2$, $g_{xy}(x, y) = 0$, and $g_{yy}(x, y) = -12y^2$. We have $g_{xx}(0,0)g_{yy}(0,0) - g_{xy}^2(0,0) = 0$, therefore the second derivative test is inconclusive.
(b) Clearly, for all $(x, y) \neq (0,0)$, $f(x, y) > f(0,0) = 0$, therefore $f$ has a relative minimum at $(0,0)$.
For all $x \neq 0$, $g(x, 0) > g(0,0) = 0$; however, for all $y \neq 0$, $g(0, y) < g(0,0) = 0$ - consequently, $g$ has a saddle point at $(0,0)$.
20. The general quadratic form on $R^2$, $f(x, y) = a_1x^2 + a_2y^2 + a_3xy$, has first and second partial derivatives
$f_x(x, y) = 2a_1x + a_3y$, $f_y(x, y) = 2a_2y + a_3x$, $f_{xx}(x, y) = 2a_1$, $f_{xy}(x, y) = a_3$, and $f_{yy}(x, y) = 2a_2$. The assumption $H(x, y) = \begin{bmatrix} 2 & 4 \\ 4 & 2 \end{bmatrix}$ implies that $a_1 = 1$, $a_2 = 1$, and $a_3 = 4$.
The equations $f_x = 0$ and $f_y = 0$ become $2x + 4y = 0$ and $4x + 2y = 0$, so the only critical point is $(0,0)$.
$\det(\lambda I - H(0,0)) = \begin{vmatrix} \lambda-2 & -4 \\ -4 & \lambda-2 \end{vmatrix} = (\lambda-6)(\lambda+2)$, so $H(0,0)$ has eigenvalues $6$ and $-2$. We conclude that $f(x, y)$ has a saddle point at $(0,0)$.
21. If $x$ is a unit eigenvector corresponding to $\lambda$, then $q(x) = x^T A x = x^T (\lambda x) = \lambda(x^T x) = \lambda(1) = \lambda$.
22. Let us assume $A$ is a symmetric matrix.
If $m = M$ then $c$ must be equal to $m = M$; taking $x_c = u_m$ we obtain $x_c^T A x_c = u_m^T A u_m = m = c$.
Now, consider the case $m < M$. With the vectors given in the hint, Theorem 7.4.1 yields $Au_M = Mu_M$ and $Au_m = mu_m$. Eigenvectors from different eigenspaces must be orthogonal, so $u_m^T A u_M = u_m^T(Mu_M) = M(u_m^T u_M) = 0$ and $u_M^T A u_m = u_M^T(mu_m) = m(u_M^T u_m) = 0$. We have
$x_c^T A x_c = \left(\sqrt{\tfrac{M-c}{M-m}}\,u_m + \sqrt{\tfrac{c-m}{M-m}}\,u_M\right)^T A \left(\sqrt{\tfrac{M-c}{M-m}}\,u_m + \sqrt{\tfrac{c-m}{M-m}}\,u_M\right)$
$= \frac{M-c}{M-m}\,u_m^T A u_m + \frac{c-m}{M-m}\,u_M^T A u_M = \frac{M-c}{M-m}\,m + \frac{c-m}{M-m}\,M = \frac{Mm - cm + cM - mM}{M-m} = \frac{c(M-m)}{M-m} = c$.
True-False Exercises
(a)
False. If the only critical point of the quadratic form is a saddle point, then it will have neither a maximum
nor a minimum value.
(b)
True. This follows from part (b) of Theorem 7.4.1.
(c)
True.
(d)
False. The second derivative test is inconclusive in this case.
(e)
True. If det(A) < 0, then A will have a negative eigenvalue.
7.5 Hermitian, Unitary, and Normal Matrices
1. $A = \begin{bmatrix} 2i & 1+i \\ 4 & 3-i \\ 5+i & 0 \end{bmatrix}$, therefore $A^* = \bar{A}^T = \begin{bmatrix} -2i & 4 & 5-i \\ 1-i & 3+i & 0 \end{bmatrix}$
2. $A = \begin{bmatrix} -2i & 1+i & 1-i \\ 4 & 5-7i & -i \end{bmatrix}$, therefore $A^* = \bar{A}^T = \begin{bmatrix} 2i & 4 \\ 1-i & 5+7i \\ 1+i & i \end{bmatrix}$
3. $A = \begin{bmatrix} -1 & i & 2-3i \\ -i & 3 & 1 \\ 2+3i & 1 & 2 \end{bmatrix}$ satisfies $A^* = \bar{A}^T = A$, so $A$ is Hermitian.
4. $A = \begin{bmatrix} 2 & 0 & 3-5i \\ 0 & -4 & -i \\ 3+5i & i & 6 \end{bmatrix}$ satisfies $A^* = \bar{A}^T = A$, so $A$ is Hermitian.
5.
(a) $(A)_{13} = 2+3i$ does not equal $(A^*)_{13} = 2-3i$
(b) $(A)_{22} = i$ does not equal $(A^*)_{22} = -i$
6.
(a) $(A)_{12} = 1+i$ does not equal $(A^*)_{12} = 1-i$
(b) $(A)_{33} = 2+i$ does not equal $(A^*)_{33} = 2-i$
7. $\det(\lambda I - A) = \begin{vmatrix} \lambda-3 & -2+3i \\ -2-3i & \lambda+1 \end{vmatrix} = \lambda^2 - 2\lambda - 16 = \left(\lambda - \left(1+\sqrt{17}\right)\right)\left(\lambda - \left(1-\sqrt{17}\right)\right)$, so $A$ has real eigenvalues $1+\sqrt{17}$ and $1-\sqrt{17}$.
For the eigenvalue $\lambda = 1+\sqrt{17}$, the augmented matrix of the homogeneous system $\left(\left(1+\sqrt{17}\right)I - A\right)x = 0$ is $\begin{bmatrix} -2+\sqrt{17} & -2+3i & 0 \\ -2-3i & 2+\sqrt{17} & 0 \end{bmatrix}$. The rows of this matrix must be scalar multiples of each other (see Example 3 in Section 5.3), therefore it suffices to solve the equation corresponding to the second row, which yields $x_1 - \frac{2+\sqrt{17}}{13}(2-3i)x_2 = 0$. The general solution of this equation (and, consequently, of the entire system) is $x_1 = \frac{2+\sqrt{17}}{13}(2-3i)t$, $x_2 = t$. The vector $v_1 = \left(\frac{(2+\sqrt{17})(2-3i)}{13}, 1\right)^T$ forms a basis for the eigenspace corresponding to $\lambda = 1+\sqrt{17}$.
For the eigenvalue $\lambda = 1-\sqrt{17}$, the augmented matrix of the homogeneous system $\left(\left(1-\sqrt{17}\right)I - A\right)x = 0$ is $\begin{bmatrix} -2-\sqrt{17} & -2+3i & 0 \\ -2-3i & 2-\sqrt{17} & 0 \end{bmatrix}$. As before, this yields $x_1 - \frac{2-\sqrt{17}}{13}(2-3i)x_2 = 0$. The general solution of this equation (and, consequently, of the entire system) is $x_1 = \frac{2-\sqrt{17}}{13}(2-3i)t$, $x_2 = t$. The vector $v_2 = \left(\frac{(2-\sqrt{17})(2-3i)}{13}, 1\right)^T$ forms a basis for the eigenspace corresponding to $\lambda = 1-\sqrt{17}$.
We have
$v_1 \cdot v_2 = \frac{(2+\sqrt{17})(2-3i)}{13} \cdot \overline{\frac{(2-\sqrt{17})(2-3i)}{13}} + (1)(1) = \frac{(2+\sqrt{17})(2-\sqrt{17})}{169}(2-3i)(2+3i) + 1 = \frac{(4-17)(13)}{169} + 1 = -1 + 1 = 0$,
therefore the eigenvectors from different eigenspaces are orthogonal.
8. $\det(\lambda I - A) = \begin{vmatrix} \lambda-2 & -2i \\ 2i & \lambda \end{vmatrix} = \lambda^2 - 2\lambda - 4 = \left(\lambda - \left(1+\sqrt{5}\right)\right)\left(\lambda - \left(1-\sqrt{5}\right)\right)$, so $A$ has real eigenvalues $1+\sqrt{5}$ and $1-\sqrt{5}$.
For the eigenvalue $\lambda = 1+\sqrt{5}$, the augmented matrix of the homogeneous system $\left(\left(1+\sqrt{5}\right)I - A\right)x = 0$ is $\begin{bmatrix} -1+\sqrt{5} & -2i & 0 \\ 2i & 1+\sqrt{5} & 0 \end{bmatrix}$. The rows of this matrix must be scalar multiples of each other (see Example 3 in Section 5.3), therefore it suffices to solve the equation corresponding to the second row, which yields $x_1 - \frac{1+\sqrt{5}}{2}i\,x_2 = 0$. The general solution of this equation (and, consequently, of the entire system) is $x_1 = \frac{1+\sqrt{5}}{2}i\,t$, $x_2 = t$. The vector $v_1 = \left(\frac{(1+\sqrt{5})i}{2}, 1\right)^T$ forms a basis for the eigenspace corresponding to $\lambda = 1+\sqrt{5}$.
For the eigenvalue $\lambda = 1-\sqrt{5}$, the augmented matrix of the homogeneous system $\left(\left(1-\sqrt{5}\right)I - A\right)x = 0$ is $\begin{bmatrix} -1-\sqrt{5} & -2i & 0 \\ 2i & 1-\sqrt{5} & 0 \end{bmatrix}$. As before, this yields $x_1 - \frac{1-\sqrt{5}}{2}i\,x_2 = 0$. The general solution is $x_1 = \frac{1-\sqrt{5}}{2}i\,t$, $x_2 = t$. The vector $v_2 = \left(\frac{(1-\sqrt{5})i}{2}, 1\right)^T$ forms a basis for the eigenspace corresponding to $\lambda = 1-\sqrt{5}$.
We have $v_1 \cdot v_2 = \frac{(1+\sqrt{5})i}{2} \cdot \overline{\frac{(1-\sqrt{5})i}{2}} + (1)(1) = \frac{(1+\sqrt{5})(1-\sqrt{5})}{4} + 1 = \frac{1-5}{4} + 1 = -1 + 1 = 0$, therefore the eigenvectors from different eigenspaces are orthogonal.
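As an illustrative numerical companion to Exercises 7-8, a Hermitian matrix has real eigenvalues and orthogonal eigenvectors (a sketch assuming NumPy and the Exercise 7 matrix reconstructed above):

import numpy as np

A = np.array([[3.0, 2 - 3j], [2 + 3j, -1.0]])
ev, V = np.linalg.eigh(A)           # eigh handles complex Hermitian matrices
print(ev)                           # ~[1 - sqrt(17), 1 + sqrt(17)], both real
print(np.vdot(V[:, 0], V[:, 1]))    # ~0: eigenvectors from different eigenspaces are orthogonal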
9. The following computations show that the row vectors of $A$ are orthonormal:
$\|r_1\| = \sqrt{\left(\frac{3}{5}\right)^2 + \left|\frac{4}{5}i\right|^2} = \sqrt{\frac{9}{25} + \frac{16}{25}} = 1$; $\|r_2\| = \sqrt{\left(-\frac{4}{5}\right)^2 + \left|\frac{3}{5}i\right|^2} = \sqrt{\frac{16}{25} + \frac{9}{25}} = 1$;
$r_1 \cdot r_2 = \left(\frac{3}{5}\right)\left(-\frac{4}{5}\right) + \left(\frac{4}{5}i\right)\overline{\left(\frac{3}{5}i\right)} = -\frac{12}{25} + \frac{12}{25} = 0$
By Theorem 7.5.3, $A$ is unitary, and $A^{-1} = A^* = \begin{bmatrix} \frac{3}{5} & -\frac{4}{5} \\ -\frac{4}{5}i & -\frac{3}{5}i \end{bmatrix}$.
10. We will show that the row vectors of $A$, $r_1 = \left(\frac{1}{\sqrt{2}}, \frac{1}{\sqrt{2}}\right)$ and $r_2 = \left(\frac{1}{2}(1+i), -\frac{1}{2}(1+i)\right)$, are orthonormal:
$\|r_1\| = \sqrt{\frac{1}{2} + \frac{1}{2}} = 1$; $\|r_2\| = \sqrt{\left|\frac{1}{2}(1+i)\right|^2 + \left|-\frac{1}{2}(1+i)\right|^2} = \sqrt{\frac{1}{2} + \frac{1}{2}} = 1$;
$r_1 \cdot r_2 = \frac{1}{\sqrt{2}} \cdot \overline{\frac{1}{2}(1+i)} + \frac{1}{\sqrt{2}} \cdot \overline{\left(-\frac{1}{2}(1+i)\right)} = \frac{1}{\sqrt{2}}\left(\frac{1}{2} - \frac{1}{2}i\right) + \frac{1}{\sqrt{2}}\left(-\frac{1}{2} + \frac{1}{2}i\right) = 0$
By Theorem 7.5.3, $A$ is unitary, therefore $A^{-1} = A^* = \begin{bmatrix} \frac{1}{\sqrt{2}} & \frac{1}{2} - \frac{1}{2}i \\ \frac{1}{\sqrt{2}} & -\frac{1}{2} + \frac{1}{2}i \end{bmatrix}$.
11. The following computations show that the column vectors of $A$ are orthonormal:
$\|c_1\| = \sqrt{\left|\frac{\sqrt{3}+i}{2\sqrt{2}}\right|^2 + \left|\frac{1-i\sqrt{3}}{2\sqrt{2}}\right|^2} = \sqrt{\frac{4}{8} + \frac{4}{8}} = 1$;
$\|c_2\| = \sqrt{\left|\frac{1+i\sqrt{3}}{2\sqrt{2}}\right|^2 + \left|\frac{i-\sqrt{3}}{2\sqrt{2}}\right|^2} = \sqrt{\frac{4}{8} + \frac{4}{8}} = 1$;
$c_1 \cdot c_2 = \frac{1}{8}\left[\overline{\left(\sqrt{3}+i\right)}\left(1+i\sqrt{3}\right) + \overline{\left(1-i\sqrt{3}\right)}\left(i-\sqrt{3}\right)\right] = \frac{1}{8}\left(1+i\sqrt{3}\right)\left[\left(\sqrt{3}-i\right) + \left(i-\sqrt{3}\right)\right] = 0$
By Theorem 7.5.3, $A$ is unitary, therefore $A^{-1} = A^* = \frac{1}{2\sqrt{2}}\begin{bmatrix} \sqrt{3}-i & 1+i\sqrt{3} \\ 1-i\sqrt{3} & -i-\sqrt{3} \end{bmatrix}$.
12. We will show that the row vectors of $A$, $r_1 = \left(\frac{1}{\sqrt{3}}(1+i), \frac{1}{\sqrt{3}}\right)$ and $r_2 = \left(\frac{1}{\sqrt{6}}(1-i), \frac{2i}{\sqrt{6}}\right)$, are orthonormal:
$\|r_1\| = \sqrt{\left|\frac{1}{\sqrt{3}}(1+i)\right|^2 + \left(\frac{1}{\sqrt{3}}\right)^2} = \sqrt{\frac{2}{3} + \frac{1}{3}} = 1$; $\|r_2\| = \sqrt{\left|\frac{1}{\sqrt{6}}(1-i)\right|^2 + \left|\frac{2i}{\sqrt{6}}\right|^2} = \sqrt{\frac{2}{6} + \frac{4}{6}} = 1$;
$r_1 \cdot r_2 = \frac{1}{\sqrt{3}}(1+i) \cdot \overline{\frac{1}{\sqrt{6}}(1-i)} + \frac{1}{\sqrt{3}} \cdot \overline{\frac{2i}{\sqrt{6}}} = \frac{(1+i)(1+i)}{\sqrt{18}} - \frac{2i}{\sqrt{18}} = \frac{2i - 2i}{\sqrt{18}} = 0$
By Theorem 7.5.3, $A$ is unitary, therefore $A^{-1} = A^* = \begin{bmatrix} \frac{1}{\sqrt{3}}(1-i) & \frac{1}{\sqrt{6}}(1+i) \\ \frac{1}{\sqrt{3}} & -\frac{2i}{\sqrt{6}} \end{bmatrix}$.
13.
1  i 3   .
 i  3 
det   I  A  
*

.
2

6
1
3
  4 1  i
    3    6  thus A has eigenvalues   3 and   6 .
1  i   5
1 1  i 
The reduced row echelon form of 3I  A is 
 so that the eigenspace corresponding to   3
0 0 
x
 1  i 
consists of vectors   where x   1  i  t , y  t . A vector 
 forms a basis for this eigenspace.
 y
 1 
 1  12  12 i 
The reduced row echelon form of 6I  A is 
 so that the eigenspace corresponding to
0 
0
x
1  i 
  6 consists of vectors   where x   12  12 i  t , y  t . A vector 
 forms a basis for this
 y
 2 
eigenspace.
Applying the Gram-Schmidt process to both bases amounts to simply normalizing the respective vectors.
 13i
Therefore A is unitarily diagonalized by P   1
 3
1i
6
 13i
follows that P AP   1i
 6
1i
6
1
  4 1  i   13i

 1
2
1
5

i

 3


6
1
3

 13i
*
1


P
P
 1i

.
Since
P
is
unitary,
2
 6

6
2
6
 3 0 

.
 0 6 

 . It
2
6

1
3
49
50
14.
Chapter 7: Diagonalization and Quadratic Forms
det   I  A  
 3
i
    2    4  thus A has eigenvalues   2 and   4 .
 3
i
 1 i 
The reduced row echelon form of 2 I  A is 
 so that the eigenspace corresponding to   2
0 0 
x
i
consists of vectors   where x   i  t , y  t . A vector   forms a basis for this eigenspace.
 y
1
1 i 
The reduced row echelon form of 4 I  A is 
 so that the eigenspace corresponding to   4 consists
0 0 
x
 i 
of vectors   where x   i  t , y  t . A vector   forms a basis for this eigenspace.
 y
 1
Applying the Gram-Schmidt process to both bases amounts to simply normalizing the respective vectors.
 12 i  12 i 
Therefore A is unitarily diagonalized by P   1
 . Since P is unitary,
1
 2
2 

  12 i
P 1  P *   1
 2 i
15.
det   I  A  

  12 i
1

P
AP
 . It follows that
 1
1
 2 i
2

 3 i   12 i  12 i  2 0 


 1
.
1
1
3
i




2
2 
 2
 0 4 
1
2
1
2
 6
2  2 i
    2    8  thus A has eigenvalues   2 and   8 .
2  2 i   4
1
The reduced row echelon form of 2 I  A is 
0
1
2
 12 i 
 so that the eigenspace corresponding to   2
0 
x
 1  i 
consists of vectors   where x    12  12 i  t , y  t . A vector 
 forms a basis for this eigenspace.
 y
 2 
 1 1  i 
so that the eigenspace corresponding to   8
The reduced row echelon form of 8I  A is 
0 
0
x
1  i 
consists of vectors   where x  1  i  t , y  t . A vector 
 forms a basis for this eigenspace.
 y
 1 
Applying the Gram-Schmidt process to both bases amounts to simply normalizing the respective vectors.
 16i
Therefore A is unitarily diagonalized by P   2
 6
 16i
follows that P 1 AP   1i
 3
16.
det   I  A  
1 i
 6
2  2i   6


2  1i
1
4   26
3


2
6

 16i
*
1
 . Since P is unitary, P  P   1i
1
 3
3

1 i
3

 . It
1
3

2
6
 2 0 

.
1
0
8



3
1 i
3

3  i
    2    5  thus A has eigenvalues   2 and   5 .
3  i   3
 1  32  12 i 
The reduced row echelon form of 2 I  A is 
 so that the eigenspace corresponding to   2
0 
0
7.5 Hermitian, Unitary, and Normal Matrices
51
 3  1 i
x
consists of vectors   where x   32  12 i  t , y  t . A vector  2 2  forms a basis for this eigenspace.
 y
 1 
1
The reduced row echelon form of 5I  A is 
0
3
5
 15 i 
 so that the eigenspace corresponding to
0 
 3  1 i 
x
  5 consists of vectors   where x    35  15 i  t , y  t . A vector  5 5  forms a basis for this
 y
 1 
eigenspace.
Applying the Gram-Schmidt process to both bases amounts to simply normalizing the respective vectors.
i
 314
Therefore A is unitarily diagonalized by P   2
 14
3  i
35
i
  0 3  i   314



 2
5
35 
 3  i 3   14
3 i
35
i
 314
follows that P 1 AP   3i
 35
17.
2
14

i

 314
*
1
 . Since P is unitary, P  P   3i
5
 35
35 


 . It
5
35 

2
14
 2 0 

.
5
0
5




35 

The characteristic polynomial of A is    5   2    2     2    1   5 ; thus the eigenvalues of A
are 1  2, 2  1, and 3  5. The augmented matrix of the system (2I  A)x = 0 is
0 0
0
0
1 0
 0 
 7 0
0



1 1  i 0  , which can be reduced to  0 1 1  i 0  . Thus v1  1  i  is a basis for the

 0 1  i 2 0 
 1 
 0 0
0
0 
0 
 
eigenspace corresponding to 1  2, and p1   13i  is a unit eigenvector. Similar computations show that
1 
 3 
 0 
1 


 
1 i
p2   6  is a unit eigenvector corresponding to 2  1, and p3  0  is a unit eigenvector
 2 
0 
 6 
corresponding to 3  5. The vectors p1 , p2 , p3  form an orthogonal set, and the unitary matrix
P   p1 p2
0

P*AP  0

 1
p3  diagonalizes the matrix A:
1 i
3
1 i
6
0
 5
0
0  0


0
2
1 1  i   13i
6


0   1
0  0 1  i
 3
1
3
0
1 i
6
2
6
1   2 0 0 

0    0 1 0  .

0   0 0 5
52
18.
Chapter 7: Diagonalization and Quadratic Forms
det   I  A  
  2  12 i
1
2
i
 2
0
 12 i
0
 2
1
2
i
    1   2    3  thus A has eigenvalues 1 , 2 , and 3 .
1 0  2i 


1  so that the eigenspace corresponding to   1
The reduced row echelon form of 1I  A is 0 1
0 0
0 

 x1 
consists of vectors  x2  where x1 
 x3 
 2i 


2i t , x2  t , x3  t . A vector  1 forms a basis for this
 1


 
eigenspace.
 1 0 0
The reduced row echelon form of 2 I  A is 0 1 1 so that the eigenspace corresponding to
0 0 0 
 x1 
0 


  2 consists of vectors  x2  where x1  0 , x2  t , x3  t . A vector  1 forms a basis for this
 x3 
 1
eigenspace.
1 0

The reduced row echelon form of 3I  A is 0 1
0 0

2i 

1  so that the eigenspace corresponding to
0 
  2i 
 x1 




  3 consists of vectors  x2  where x1   2i t , x2  t , x3  t . A vector  1 forms a basis for

 x3 
1



this eigenspace.
Applying the Gram-Schmidt process to the three bases amounts to simply normalizing the respective
1i
 2
P

vectors. Therefore A is unitarily diagonalized by
  12
 1
 2
  1 i  12
 2
*
1
1
P P  0
2
 1
1
 2 i  2


1
 . It follows that
2

1
2 

1
2
0  12 i 

1
1

 . Since P is unitary,
2
2

1
1

2
2
7.5 Hermitian, Unitary, and Normal Matrices
  1 i  12
2

1
1
P AP   0
2
 1
1
 2 i  2
 2

1
   12 i
2
 1
1
2 
  2 i
1
2
19.
i 2  3i 
 0

A i
0
1 
 2  3i 1
4i 
20.
0 3  5i 
 0

0
A 0
i 
 3  5i i
0 
21.
(a)
i  12 i   12 i

2
0    21

0
2   12
1
2
53
0  12 i   1 0 0 
 
1
1

 0 2 0  .
2
2
 
1
1
0 0 3 
2
2
 
  A12  i does not equal  A* 12  i ;
   2  3i
also,   A 13  2  3i does not equal A*
(b)
13
  A11  1 does not equal  A* 11  1 ;
   3  5i and
also,   A 13  3  5i does not equal A*
13
  A23  i does not equal  A 23  i .
*
22.
(a)
  A13  2  3i does not equal  A* 13  2  3i ;
   1  i
also,   A23  1  i does not equal A*
(b)
23
  A13  4  7i does not equal  A* 13  4  7i ;
  1
also,   A33  1 does not equal A*
23.
24.
33
1 i 
 
2
det   I  A   
    i  2     2i    i  ; thus the eigenvalues of A,   2i and   i ,

1


i
i



are pure imaginary numbers.
det   I  A  
 3i
  2  9     3i    3i  ; the eigenvalues of A ,   3i and   3i , are pure

3i
imaginary numbers.
25.
 15 8 8 
 1  2i 2  i 2  i 


*
*
A   2  i 1 i
i  ; we have AA  A A   8 8 7 
 2  i
 8 7 8 
1  i 
i
26.
i
1 i 
4 14 
 11
 2  2i



*
*
A   i
2i
1  3i  ; we have AA  A A   4
15 22 
 14 22
 1  i 1  3i 3  8i 
85
*
*
54
27.
Chapter 7: Diagonalization and Quadratic Forms



   A  A    A  A  B . Similarly, C  C .
 

*
(a)
If B  12 A  A* , then B*  12 A  A*
(b)
We have B  iC  12 A  A*  12 A  A*  A and B  iC  12 A  A*  12 A  A*  A* .
(c)
AA*   B  iC  B  iC   B2  iBC  iCB  C 2 and A*A  B 2  iBC  iCB  C 2 .

*
1
2
**
*
1
2
*

 

Thus AA*  A* A if and only if  iBC  iCB  iBC  iCB , or 2iCB  2iBC .
Thus A is normal if and only if B and C commute i.e., CB = BC.
28.
 
*


*
By Theorem 7.5.1 and Formula (5), Au  v  v* Au  v* A* u  A* v u  u  A* v .
Also, u  Av   Av  u  v* A* u  A* u  v .
*
30.
  i 
AA*  
  i 
  i      i 
  i       i 
  i     2   2   2   2          1 0 

; thus
 
  i           2   2   2   2   0 1
A*  A1 and A is unitary.
31.
 7  11 i 
Ax   5 1 5 2  ;
 5  5 i 
2
|| Ax || 
7
5
2
2
 115 i   15  25 i 
49
25
 121
 251  254  7 equals
25
2
|| x ||  1  i  2  i  1  1  4  1  7 which verifies part (b);
 7  4i 
Ay   5 1 5 3  ;
 5  5 i 
Ax  Ay   75  115 i  75  45 i     15  25 i   15  35 i 
  75  115 i  75  45 i     15  25 i   15  35 i    93
 49
i   257  251 i   4  2i equals
25
25 
x  y  1  i 1   2  i 1  i   1  i 1   2  i 1  i   1  i    3  i   4  2i which verifies part (c).
32.
33.
  12 i 
 12 i 
Eigenvectors u1   1  and u 2   1  which were found in the solution of Exercise 14 have the desired
 2 
 2 
properties.
a 0 0 
 aa


*
*
A   0 0 b  ; AA   0
 0
 0 c 0 
0
cc
0
 2
0  a
0    0

bb   0

0
c
2
0
0 
 aa

*

0 ; A A   0

2
 0
b 

0
bb
0
A is normal if and only if b  c .
34.
From Formulas (3) and (4), such a matrix must be equal to its own inverse.
35.
 12
A i
 2
36.
Applying Theorem 5.3.1 to each column, we have
 i2 
 is both Hermitian and unitary.
 12 
 2
0  a
0    0

cc   0

0
b
2
0
0 

0 

2
c 

7.5 Hermitian, Unitary, and Normal Matrices
Part (b):  A  B  
*
Def.1
Def.1
37.
Def .1
Part (e):  AB  
 
*
Def .1
38.
(*)
kA  k A
(**)
T
T
T
*
T
(*)
*
Def.1
    k A   k A  kA
Part (a):  A*   AT
*
AB A B
 A B  A B  A  B  A  B
Part (d):  kA   kA
*
55
T
T
*
T
(**)
Def.1
T
 
T
 AT 

AT  A

 Th. 5.3.2(a)
Th. 5.3.2(b) 

T

 AB 
T

 A B  B  A  B A
T
Th. 5.3.2(c)
T
T
*
*
Def.1
A is a real skew-Hermitian matrix whenever A*   A , which is equivalent to  A    A :
T
 a11 – b11i a21 – b21i
a – b i a – b i
22
22
 12 12




 a1n – b1n i a2 n – b2 n i
 an1 – bn1i   – a11 – b11i – a12 – b12 i
 an 2 – bn 2 i   – a21 – b21i – a22 – b22 i

 




 
 ann – bnn i   – an1 – bn1i – an 2 – bn 2 i
 – a1n – b1n i 
 – a2 n – b2 n i 




 – ann – bnn i 
Comparing the main diagonal entries on both sides, we must have a11  a22    ann  0 .
    A  ; thus A is also unitary.
*
39.
If A is unitary, then A1  A* and so ( A* )1  A1
40.
If A is skew-Hermitian then B  iA is Hermitian since
B*   iA 
*

Th. 7.5.1(d)
*
*
*
iA*  iA*   i   A   iA  B
For every eigenvalue  of A there must exist a nonzero vector x for which
Ax   x
Multiplying both sides by i yields  iA  x   i  x , i.e. Bx   i  x . By Theorem 7.5.2(a),  i must be real,
consequently,  is either 0 or purely imaginary.
41.
A unitary matrix A has the property that || Ax ||  || x || for all x in C n . Thus if A is unitary and Ax = x
where x  0, we must have |  | || x ||  || Ax ||  || x || and so   1.
42.
P *  ( uu * )*

43.
If H  I  2 uu * , then H *  ( I  2uu* )*  I *  2u** u*  I  2uu*  H ; thus H is Hermitian.
Th. 7.5.1(e)
(u * )* u *

Th. 7.5.1(a)
uu *  P therefore P is Hermitian.
HH *  ( I  2uu* )( I  2uu* )  I  2uu*  2uu*  4uu* uu*  I  4uu*  4u || u ||2 u*  I
so H is unitary.
44.
A* ( A1 )*

Th. 7.5.1(e)
( A1 A)*  I *  I therefore A* is invertible and its inverse is ( A1 )* .
56
45.
Chapter 7: Diagonalization and Quadratic Forms
(a)
This result can be obtained by mathematical induction.
(b)
det( A* )  det(( A)T )  det( A)  det( A) .
True-False Exercises
(a)
0 i 
*
False. Denoting A  
 , we observe that  A12  i does not equal A 12  i.
2
i


(b)
False. For r1    i2
 
i
6
i
3
 
 and r2  0  i
6


r1  r2   i2  0   i6  i6  i3
i
3
,

   0         0
i
3
i
6
2
i
3
2
1
6
1
3
1
6
thus the row vectors do not form an orthonormal set and the matrix is not unitary by Theorem 7.5.3.
 
*
(c)
True. If A is unitary, so A 1  A* , then ( A* )1  A  A* .
(d)
False. Normal matrices that are not Hermitian are also unitarily diagonalizable.
(e)
False. If A is skew-Hermitian, then A2
    A  A     A  A  A   A .
*
*
*
2
2
Chapter 7 Supplementary Exercises
1.
3.
(a)
3
For A   45
5
(b)
 45

For A    259
 12
25
 45 
 35
 1 0
1
T
T


,
so
A
A

,
A
A
 4
 0 1
3


5
 5
0
4
5
3
5
4
5
3
5

.

 35 
 45  259
 1 0 0


4
 12
, AT A  0 1 0  , so A1  AT   0
25 
5
16 
3
12





0 0 1
25 
25
 5
12
25
3
5
16
25


.

 1
0
T
Since A is symmetric, there exists an orthogonal matrix P such that P AP  D  


0
 1

 0
A is positive definite, all  ’s must be positive. Let us form a diagonal matrix C  
 
 0

0
2

0
0
 0 
. Since
 

 n 

0

2 


0

0 

0 
.
 
n 
Then A  PDP T  PCC T P T   PC  PC  . The matrix  PC  is nonsingular (it is a transpose of a product
T
T
of two nonsingular matrices), therefore it generates an inner product on R n :
Supplementary Exercises

57

 u, v   PC  u   PC  v  u T PCC T P T v  u T Av
T
T
 3
4.
The characteristic polynomial of A is det   I  A   2
2
2
2
 3
2     1    7  .
2
 3
2
The eigenvalues of A are   1 and   7 .
1 1 1 
The reduced row echelon form of 1I  A is  0 0 0  so that the eigenspace corresponding to   1
 0 0 0 
 x1 
 1


contains vectors  x2  where x1   s  t , x2  s , x3  t . This eigenspace has dimension 2 (vectors  1
 0 
 x3 
 1
and  0  form its basis).
 1
 1 0 1
The reduced row echelon form of 7 I  A is 0 1 1 so that the eigenspace corresponding to   7
0 0 0 
 x1 
1


contains vectors  x2  where x1  t , x2  t , x3  t . This eigenspace has dimension 1 ( 1 forms its basis).
 x3 
1
5.
The characteristic equation of A is  3  3 2  2      2    1 , so the eigenvalues are  = 0, 2, 1.
  12 
 12 
0 


 
Orthogonal bases for the eigenspaces are  = 0:  0  ;  = 2: 0  ;  = 1: 1  .
 1 
1
0 
 2 
 2
  12

Thus P   0
 1
 2
6.
0
0 0 0 

T
1  orthogonally diagonalizes A, and P AP  0 2 0  .
0 0 1
0

1
2
0
1
2
 4
x2   15
 2
 152   x1 
 xT Ax

16   x2 
(a)
4 x12  16 x22  15 x1 x2   x1
(b)
9 x  x  4 x  6 x1 x2  8 x1 x3  x2 x3   x1
2
1
2
2
2
3
x2
 9 3 4   x1 

1
x3   3 1
x   xT Ax
2 2
1
 4
4   x3 
2
58
7.
Chapter 7: Diagonalization and Quadratic Forms
 1  23   x1 
. The characteristic equation of A is
x2   3

4   x2 
 2
In matrix form, the quadratic form is xT Ax   x1
 2  5  74  0 which has solutions   532 2 or   4.62, 0.38. Since both eigenvalues of A are positive,
the quadratic form is positive definite.
8.
(a)
 3 1  x1 
x2  
   ; the characteristic polynomial of the matrix A is
 1 5   x2 
Q  x T Ax   x1
 3
det   I  A  
1
 
1
   1  17
 5
    1  17  so A has eigenvalues 1  17 .
 1 4  17 
The reduced row echelon form of 1  17 I  A is 
 so that the eigenspace
0 
0


x 
corresponding to   1  17 consists of vectors  1  where x1  4  17 t , x2  t .
 x2 


 4  17 
A vector p1  
 forms a basis for this eigenspace.
1


 1 4  17 
The reduced row echelon form of 1  17 I  A is 
 so that the eigenspace
0 
0


x 
corresponding to   1  17 consists of vectors  1  where x1  4  17 t , x2  t .
 x2 


 4  17 
A vector p2  
 forms a basis for this eigenspace.
1


Applying the Gram-Schmidt process to both bases p1 and p2  amounts to simply normalizing the
vectors.
Therefore an orthogonal change of variables x  Py that eliminates the cross product terms in Q is
 4  17
 x1   34 8 17
x    1
 2
 34 8 17
 y
  1  . In terms of the new variables, we have
1
  y2 
34  8 17 
4  17
34  8 17

1  17
0   y1 
2
2
y2  
    1  17 y1  1  17 y2 .
y
1  17   2 
 0


Q  xT Ax  yT P T AP y   y1
(b)
Q  x Ax   x1
T
x2
5
det   I  A   2
3



 5 2 3  x1 
x3   2 1 0   x2  ; the characteristic polynomial of the matrix A is
 3 0 1  x3 
2
3
 1
0
0
 1
     2    7  so the eigenvalues of A are 0 , 2 , and 7 .
Supplementary Exercises
59
 1 0  13 

2
so that the eigenspace corresponding to
The reduced row echelon form of 0 I  A is 0 1
3
0 0
0 
 x1 
 1


1
2
  0 consists of vectors  x2  where x1  3 t , x2   3 t , x3  t . A vector p1   2  forms a basis
 x3 
 3
for this eigenspace.
 1 0 1
The reduced row echelon form of 2 I  A is 0 1 2  so that the eigenspace corresponding to
0 0
0 
 x1 
 1


  2 consists of vectors  x2  where x1  t , x2  2t , x3  t . A vector p2  2  forms a basis for
 1
 x3 
this eigenspace.
2
1 0

The reduced row echelon form of 7 I  A is 0 1  12  so that the eigenspace corresponding to
0 0
0 
 x1 
 4 


1
  7 consists of vectors  x2  where x1  2t , x2  2 t , x3  t . A vector p3   1 forms a basis
 x3 
 2 
for this eigenspace.
Applying the Gram-Schmidt process to the three bases amounts to simply normalizing the vectors.
Therefore an orthogonal change of variables x  Py that eliminates the cross product terms in Q is
1
 x1   14
 x    2
 2   14
 x3   3
 14
Q  x Ax  y
T
9.
 421   y 
 1
1
 y . In terms of the new variables, we have
21  2 
 
2
y
21 
 3
1
6
2
6
1
6
T
 P AP  y   y
T
1
y2
 0 0 0   y1 
y3   0 2 0   y2   2 y22  7 y32 .
 0 0 7   y3 
(a)
y  x 2  0 or y  x 2 represents a parabola.
(b)
3 x  11y 2  0 or x  113 y 2 represents a parabola.
60
10.
Chapter 7: Diagonalization and Quadratic Forms
det   I  A  
 1
1
0
0
 1
1     2   2    2 thus A has eigenvalues 2 and 12 3i .
1
0
 1


 1 0 1
The reduced row echelon form of 2 I  A is 0 1 1 so that the eigenspace corresponding to   2
0 0 0 
 x1 
1


consists of vectors  x2  where x1  t , x2  t , x3  t . A vector 1 forms a basis for this eigenspace.
1
 x3 
1 0

The reduced row echelon form of 12 3i I  A is 0 1
0 0



1 3i
so that the eigenspace corresponding to
2 

0 

1 3i
2
 12 3i 
 x1 


  12 3i consists of vectors  x2  where x1  12 3i t , x2  12 3i t , x3  t . A vector  12 3i  forms a basis
 1 
 x3 


for this eigenspace.
 12 3i 


By Theorem 5.3.4, a vector  12 3i  forms a basis for the eigenspace corresponding to   12 3i .
 1 


Applying the Gram-Schmidt process to the three bases amounts to simply normalizing the respective
1
 3
vectors. Therefore A is unitarily diagonalized by U   13

 13

11.
 1
 3
1
*
Since U is unitary, U  U   12 33i

 1 3i
 2 3
1 3i
2 3
 1
 3
1
It follows that U AU   12 33i

 1 3i
 2 3

1
 1 1 0   3
1  
0 1 1   13
3 


1  
1 0 1   13
3

1
3
1 3i
2 3
1 3i
2 3
1
3
1 3i
2 3
1
3
1 3i
2 3
1 3i
2 3
1
3


1 3i 
.
2 3

1

3
1 3i
2 3


1 
.
3

1 
3
1
3
1 3i
2 3
1 3i
2 3
1
3
 2
 
1 3i 
 0
2 3
 
1
 0
3
1 3i
2 3
0
1 3i
2
0
0 

0 .
1 3i 
2 
Partitioning U into columns we can write U  [ u1 | u 2 | |u n ] . The given product can be rewritten in
partitioned form as well:
Supplementary Exercises
 z1
0
A U 


0
0
z2

0
0
 z1

0
 0
  u u |u n  

   1 2


 zn 
0

0
z2

0
0
 0 
  z u z u |zn u n 
   1 1 2 2

 zn 

By Theorem 7.5.3, the columns of U form an orthonormal set. Therefore, columns of A must also be

 
orthonormal:  zi u i   z j u j  zi z j
  u  u   0 for all i  j and || z u ||  | z | || u ||  1 for all i .
i
i
j
i
i
i
By Theorem 7.5.3, A is a unitary matrix.
12.
Refer to the solution of Exercise 40 in Section 7.5.
13.
a 
Partitioning the given matrix into columns A   u1 u 2 u 3  , we must find u1   b  such that
 c 
u1  u2  a2  b6  c3  0 , u1  u3   a2  b6  c3  0 , and || u1 || 2  a 2  b2  c2  1 .
Subtracting the second equation from the first one yields a  0 . Therefore c   63 b   b2 .
Substituting into || u1 || 2  1 we obtain b2  b2  1 so that b2  23 .
2
There are two possible solutions:
14.
, c   13 and

a0, b

a0, b
(a)
Negative definite
(b)
Positive definite
(c)
Indefinite
(d)
Indefinite
(e)
Indefinite
(f)
Theorem 7.3.4 is inconclusive
2
3
2
3
, c  13 .
61
8.1 General Linear Transformations
CHAPTER 8: LINEAR TRANSFORMATIONS
8.1 General Linear Transformations
 1 0  
 2 0   2 0   4 0 
T 2
 T 

  
 
 does not equal
 0 1  
 0 2   0 2   0 4 
2
1.
(a)
 1 0  
1 0 
1 0  2 0 
 2
2T  
  2



 so T does not satisfy the homogeneity property.
0 1 
0 1  0 2 
 0 1  
Consequently, T is not a linear transformation.
2
(b)
Let A and B be any 2  2 matrices and let k be any real number. We have
T  kA   tr  kA   ka11  ka22  k  a11  a22   k tr  A   kT  A  and
T  A  B   tr  A  B    a11  b11    a22  b22    a11  a22    b11  b22 
 tr  A   tr  B   T  A   T  B 
therefore T is a linear transformation.
 a b  
a b 
such that T  
The kernel of T consists of all matrices 
   a  d  0 , i.e., d  a .

c d 
 c d  
b
a
We conclude that the kernel of T consists of all matrices of the form 
.
 c a 
(c)
Let A and B be any 2  2 matrices and let k be any real number. We have


T  kA   kA   kA   kA  kAT  k A  AT  kT  A  and
T
T  A  B   A  B   A  B   A  B  AT  BT  A  AT  B  BT  T  A   T  B 
T
therefore T is a linear transformation.
a b 
The kernel of T consists of all matrices 
 such that
c d 
 a b   a b  a b 
 a b   a c   2 a b  c  0 0 
T 









 therefore
 c d   b d  b  c 2 d  0 0 
 c d   c d  c d 
T
a  d  0 and c  b .
 0 b
We conclude that the kernel of T consists of all matrices of the form 
.
 b 0 
2.
(a)
Let A and B be any 2  2 matrices and let k be any real number. We have
T  kA    kA 11  ka11  k  A 11  kT  A  and
1
2
Chapter 8: Linear Transformations
T  A  B    A  B 11  a11  b11   A 11   B 11  T  A   T  B 
therefore T is a linear transformation.
 a b  
a b 
The kernel of T consists of all matrices 
such that T  
  a  0 .

c d 
 c d  
0 b 
We conclude that the kernel of T consists of all matrices of the form 
.
c d
(b)
Let A and B be any 2  2 matrices and let k be any real number. We have
T  kA   O22  kO22  kT  A  and T  A  B   O22  O22  O22  T  A   T  B 
therefore T is a linear transformation.
The kernel of T is M 22 .
(c)
Let A and B be any 2  2 matrices and let k be any real number. We have
T  kA   c  kA   k  cA   kT  A  and T  A  B   c  A  B   cA  cB  T  A   T  B 
therefore T is a linear transformation.
a
The kernel of T consists of all matrices  11
 a21
 a
T   11
  a21
a12 
such that
a22 
a12    ca11 ca12  0 0 

.

a22   ca21 ca22  0 0 
  0 0  
If c  0 then ker  T   M 22 , otherwise ker  T    
 .
  0 0  
3.
4.
For u  0 , T  1u   u  u  T  u   1T  u  , so the mapping is not a linear transformation.
Let u and v be any vectors in R3 and let k be any real number. By properties of cross product listed in
Theorem 3.5.2, we have
T  ku    ku   v 0  k  u  v 0   kT  u  and
T  u  v    u  v   v0   u  v0    v  v0   T  u   T  v 
therefore T is a linear transformation.
If v 0  0 then ker  T   R 3 . Otherwise ker  T   span v 0  .
5.
Let A1 and A2 be any 2  2 matrices and let k be any real number. We have
T  kA1    kA1  B  k  A1 B   kT  A1  and
T  A1  A2    A1  A2  B  A1 B  A2 B  T  A1   T  A2  therefore T is a linear transformation.
The kernel of T consists of all 2  2 matrices whose rows are orthogonal to all columns of B .
8.1 General Linear Transformations
6.
(a)
 a b  
  ka kb  
 a b  
T k
 T 
  3ka  4 kb  kc  kd  k  3a  4b  c  d   kT  



 c d  
  kc kd  
 c d  
  a b   a  b  
  a  a  b  b  

T 
 T 


   3  a  a    4  b  b    c  c    d  d  
  c d   c d   
  c  c d  d   
 a b  
  a  b  
  3 a  4 b  c  d    3a   4 b   c   d    T  
T 


  c d   
 c d  
therefore T is a linear transformation.
a b 
The kernel of T consists of all matrices 
 for which 3a  4b  c  d  0 .
c d 
(b)
7.
 2 0  
 4 0 
 2 0  
T 2
 T 
  16 does not equal 2T  


   2  4   8 therefore T is not a linear
 0 0  
 0 0  
 0 0  
transformation.
Let p  x   a0  a1 x  a2 x 2 and q  x   b0  b1 x  b2 x 2 .
(a)
T  kp  x    ka0  ka1  x  1  ka2 ( x  1)2  kT  p  x  
T  p  x   q  x    a0  b0   a1  b1  x  1   a2  b2  ( x  1)2
 a0  a1  x  1  a2 ( x  1)2  b0  b1  x  1  b2 ( x  1)2  T  p  x    T  q  x  
Thus T is a linear transformation.
The kernel of T consists of all polynomials a0  a1 x  a2 x 2 such that


T a0  a1 x  a2 x 2  a0  a1  x  1  a2 ( x  1)2  0 . This equality requires that
a0  a1  a2  0 therefore ker  T   0 .
(b)


T  kp  x    T ka0  ka1 x  ka2 x 2   ka0  1   ka1  1 x   ka2  1 x 2  kT  p  x   so
T is not a linear transformation.
8.
(a)
Since T maps the zero function f  x   0 to g  x   1 , by Theorem 8.1.1(a), T is not a linear
transformation.
(b)
Let f and g be any functions in F  ,   and let k be any real number. We have
T  kf  x    kf  x  1  kT  f  x   and T  f  x   g  x    f  x  1  g  x  1  T  f  x    T  g  x  
therefore T is a linear transformation.
The kernel of T contains only the zero function.
9.
T  k  a0 , a1 , a2 ,, an ,   T  ka0 , ka1 , ka2 ,, kan ,   0, ka0 , ka1 ,, kan ,
 k  0, a0 , a1 ,, an ,  kT  a0 , a1 , a2 ,, an ,
T   a0 , a1 , a2 ,, an ,   b0 , b1 , b2 ,, bn    T  a0  b0 , a1  b1 , a2  b2 ,, an  bn ,
3
4
Chapter 8: Linear Transformations
  0, a0  b0 , a1  b1 ,, an  bn ,   0, a0 , a1 ,, an ,0    0, b0 , b1 ,, bn ,0 
 T  a0 , a1 , a2 ,, an ,  T  b0 , b1 , b2 ,, bn , therefore T is a linear transformation.
The kernel of T contains only  0,0,0, .
10.
11.
 
 
(a)
T x 2   x  x 2  x 3  0 therefore x 2 is not in ker  T  .
(b)
T  0    x  0   0 therefore 0 is in ker  T  .
(c)
T 1  x    x 1  x   x  x 2  0 therefore 1  x is not in ker  T  .
(d)
T   x    x   x    x 2  0 therefore x is not in ker  T  .
(a)
Since x  x 2  x 1  x  , x  x 2 is in R(T).
(b)
1  x cannot be expressed in the form xp  x  for any polynomial p  x  therefore 1  x is not in
R T  .
(c)
3  x 2 cannot be expressed in the form xp  x  for any polynomial p  x  therefore
3  x 2 is not in R  T  .
12.
13.
14.
15.
(d)
 x  x  1 therefore x is in R  T  .
(a)
The equation 3v  0 implies v  0 therefore ker  T   0 .
(b)
Every vector w in V is an image of 13 w under T . Consequently, R  T   V .
(a)
nullity(T) = 5  rank(T) = 2
(b)
dim  P4   5, so nullity(T) = 5  rank(T) = 4
(c)
Since R  T   R 3 , T has rank 3; dim  M mn   mn so nullity  T   mn rank  T   mn  3
(d)
nullity(T) = 4  rank(T) = 1
(a)
rank  T   7  nullity  T   5
(b)
rank  T   dim  P3   nullity  T   4  1  3
(c)
Since ker  T   P5 , T has nullity 6; rank  T   dim  P5   nullity  T   6  6  0
(d)
rank  T   dim  Pn   nullity  T   n  1  3  n  2
(a)
  1 2 
 1 2  3 6
T 
  3



 4 3  12 9 
  4 3 
8.1 General Linear Transformations
(b)
The only 2  2 matrix A such that 3 A  0 is the zero matrix. Consequently, ker  T   0 so the
nullity of T is 0.
By Theorem 8.1.4, rank  T   dim  M 22   nullity  T   4  0  4 .
16.


(a)
T 1  4 x  8 x 2  14  x  2 x 2
(b)
The only polynomial p in P2 A such that 14 p  0 is the zero polynomial. Consequently,
ker  T   0 so the nullity of T is 0.
By Theorem 8.1.4, rank  T   dim  P2   nullity  T   3  0  3 .
17.
  

(a)
T x 2   1 ,0 2 ,12  1,0,1
(b)
The kernel of T consists of all polynomials p  x   a0  a1 x  a2 x 2 such that
2
 p  1 , p  0  , p 1    a  a  a , a , a  a  a    0,0,0  .
0
1
2
0
0
1
2
Equating the corresponding components we obtain a linear system
 a1
a0
a0
a0

a1
 a2
 0
 a2
 0
 0
1 0 0 
The reduced row echelon form of the coefficient matrix of this system is 0 1 0  hence the
0 0 1 
system has a unique solution a0  a1  a2  0 . We conclude that ker  T   0 .
(c)
It follows from the solution of part (b) that nullity  T   0 .
By Theorem 8.1.4, rank  T   dim  P2   nullity  T   3  0  3 . Consequently, R  T   R 3 .
18.
(a)
T 1  sin x  cos x   1  sin 0  cos0,1  sin   cos  ,1  sin  2   cos  2     2,0,2 
(b)
The kernel of T consists of all functions f  x   c1  c2 sin x  c3 cos x such that
 f  0  , f   , f  2  
=  c  c sin 0  c cos0, c  c sin   c cos  , c  c sin  2   c cos  2  
1
2
3
1
2
3
1
2
3
  c1  c3 , c1  c3 , c1  c3    0,0,0 
Equating the corresponding components yields c1  c3  0 . Since c2 is an arbitrary real number, we
conclude that ker  T   span sin x .
(c)
T  c1  c2 sin x  c3 cos x    c1  c3 , c1  c3 , c1  c3   c1 1,1,1  c3 1, 1,1 .
Consequently, R  T   span 1,1,1 , 1, 1,1 .
5
6
Chapter 8: Linear Transformations
19.
For x   x1 , x2   c1v1  c2 v 2 , we have  x1 , x2   c1 1, 1  c2 1, 0    c1  c2 , c1  or
c1
c1
 c2
 x1
 x2
which has the solution c1  x2 , c2  x1  x2 .
 x1 , x2   x2 1, 1   x1  x2 1, 0   x2 v1   x1  x2  v 2 and
T  x1 , x2   x2T  v1    x1  x2  T  v 2   x2 1,  2    x1  x2  4, 1   4 x1  5 x2 , x1  3 x2 
T(5, 3) = (20  15, 5 + 9) = (35, 14).
20.
We begin by expressing  x1 , x2  as a linear combination of the basis vectors  2,1 and 1,3  :
 x1 , x2   c1  2,1  c2 1,3
Equating the corresponding components we obtain a linear system
2c1
c1
 c2
 3c2
 x1
 x2
which yields c1   73 x1  71 x2 , c2  17 x1  27 x2 , allowing us to write
 x1 , x2     73 x1  17 x2   2,1   71 x1  72 x2  1,3  and
T  x1 , x2     73 x1  71 x2  T  2,1   71 x1  27 x2  T 1,3 
   37 x1  17 x2   1,2,0    17 x1  72 x2   0, 3,5    73 x1  17 x2 ,  79 x1  74 x2 , 75 x1  107 x2  so that
T  2, 3    97 ,  67 ,  207  .
21.
For x   x1 , x2 , x3   c1v1  c2 v 2  c3 v 3 , we have
 x1 , x2 , x3   c1 1, 1, 1  c2 1, 1, 0   c3 1, 0, 0    c1  c2  c3 , c1  c2 , c1  or
c1
 c2
c1
c1
 c2
 c3

x1
 x2
 x3
which has the solution c1  x3 , c2  x2  x3 , c3  x1   x2  x3   x3  x1  x2 .
 x1 , x2 , x3   x3 v1   x2  x3  v 2   x1  x2  v3
T  x1 , x2 , x3   x3T  v1    x2  x3  T  v 2    x1  x2  T  v 3 
 x3  2,  1, 4    x2  x3  3, 0, 1   x1  x2  1, 5, 1
   x1  4 x2  x3 , 5 x1  5 x2  x3 , x1  3 x3 
T  2, 4,  1   2  16  1, 10  20  1, 2  3   15,  9,  1
8.1 General Linear Transformations
22.
7
We begin by expressing  x1 , x2 , x3  as a linear combination of the basis vectors 1,2,1 ,  2,9,0  , and
 3,3,4  :
 x1 , x2 , x3   c1 1,2,1  c2  2,9,0   c3  3,3,4 
Equating the corresponding components we obtain a linear system
c1
 2c2
 3c3

2c1
 9c2
 3c3
 x2
 4c3

c1
x1
x3
which yields c1  36 x1  8 x2  21x3 , c2  5 x1  x2  3 x3 , c3  9 x1  2 x2  5 x3 allowing us to write
 x1 , x2 , x3    36 x1  8 x2  21x3 1,2,1   5 x1  x2  3 x3  2,9,0    9 x1  2 x2  5 x3  3,3,4  and
T  x1 , x2 , x3 
  36 x1  8 x2  21x3  T 1,2,1   5 x1  x2  3 x3  T  2,9,0    9 x1  2 x2  5 x3  T  3,3,4 
  36 x1  8 x2  21x3 1,0    5 x1  x2  3 x3  1,1   9 x1  2 x2  5 x3  0,1
  41x1  9 x2  24 x3 ,14 x1  3 x2  8 x3 
so that T  7,13,7    2,3  .
23.
(a)
The range of T  x   Ax consists of all vectors  y1 , y2 , y3  that are images of at least one vector
 y1   1 1 3  x1 
1 
 1
 3










 x1 , x2 , x3  under this transformation  y2    5 6 4   x2   x1 5  x2  6   x3  4  . Since the
 y3   7 4
 7 
 4 
 2 
2   x3 
14
1 0
1 
 1
11 



19 
reduced row echelon form of A is 0 1  11  , by Theorem 4.8.5 the vectors 5 and  6  are
0 0
7
 4 
0 
1 
 1


linearly independent, and they span R  T  . We conclude that 5 and  6  form a basis for R  T  .
7
 4 
(b)
 1 1 3  x1  0 
The kernel of T consists of all vectors  x1 , x2 , x3  such that  5 6 4   x2   0  . Based on the
 7 4
2   x3  0 
reduced row echelon form of A obtained above, the general solution is x1   14
t , x2  19
t , x3  t .
11
11
Therefore a basis for ker  T  is formed by the vector  14,19,11 .
(c)
From part (a), rank  T   dim  R  T    2 . From part (b), nullity  T   dim  ker  T    1 .
(d)
Based on the reduced row echelon form of A obtained above, rank  A   2 and nullity  A   1 .
8
Chapter 8: Linear Transformations
24.
(a)
The range of T  x   Ax consists of all vectors  y1 , y2 , y3  that are images of at least one vector
 y1   2 0 1  x1 
2
0 
 1










 x1 , x2 , x3  under this transformation  y2    4 0 2   x2   x1  4   x2 0   x3  2  . Since the
 y3   20 0
 20 
 0 
 0 
0   x3 
1 0 0 
2
 1




reduced row echelon form of A is 0 0 1  , by Theorem 4.8.5 the vectors  4  and  2  are
20 
0 0 0 
 0 
2
 1


linearly independent, and they span R  T  . We conclude that  4  and  2  form a basis for R  T  .
20 
 0 
(b)
 2 0 1  x1   0 
The kernel of T consists of all vectors  x1 , x2 , x3  such that  4 0 2   x2    0  . Based on the
 20 0
0   x3   0 
reduced row echelon form of A obtained above, the general solution is x1  0 , x2  t ,
x3  0 . Therefore a basis for ker  T  is formed by the vector  0,1,0  .
25.
(c)
From part (a), rank  T   dim  R  T    2 . From part (b), nullity  T   dim  ker  T    1 .
(d)
Based on the reduced row echelon form of A obtained above, rank  A   2 and nullity  A   1 .
 x1 
 1 2 1 2     0 
x
The kernel of TA consists of all vectors  x1 , x2 , x3 , x4  such that  3 1 3 4   2    0  . Since the
x 
 3 8 4
2   3   0 
 x4 
 1 0 0  107 


reduced row echelon form of A is 0 1 0  27  , the general solution is x1  107 t , x2  27 t ,
0 0 1
0 
x3  0 , x4  t . Therefore a basis for ker  TA  is formed by the vector 10, 2, 0, 7  .
The range of TA  x   Ax consists of all vectors  y1 , y2 , y3  that are images of at least one vector
 x1 , x2 , x3 , x4  under this transformation
 x1 
 y1   1 2 1 2   
 1
2 
 1
 2 
 y    3 1 3 4   x2   x  3  x 1   x  3  x  4  . Based on the reduced row echelon
 2 
 x  1   2   3   4  
 y3   3 8 4
 3
8 
 4 
 2 
2   3 
 x4 
8.1 General Linear Transformations
9
 1 2 
 1




form of A obtained above, by Theorem 4.8.5 the vectors  3 , 1  , and  3 are linearly independent,
 3 8 
 4 
 1 2 
 1




and they span R  TA  . We conclude that  3 , 1  , and  3 form a basis for R  TA  .
 3 8 
 4 
26.
 x1 
 1 1 0 1    0 
x
The kernel of TA consists of all vectors  x1 , x2 , x3 , x4  such that  2 4 2 2   2    0  . Since the
x 
 1 8 3 5  3   0 
 x4 
1 0  13

reduced row echelon form of A is 0 1 13
0 0 0


1
1
 , the general solution is x1  3 s  3 t ,
0 
1
3
2
3
x2   13 s  23 t , x3  s , x4  t . In vector form,  x1 , x2 , x3 , x4   s  13 ,  13 ,1,0   t   31 ,  32 ,0,1 . Therefore a
basis for ker  TA  is formed by the vectors 1,  1, 3, 0  and (1, 2, 0, 3 ).
The range of TA  x   Ax consists of all vectors  y1 , y2 , y3  that are images of at least one vector
 x1 , x2 , x3 , x4  under this transformation
 x1 
 y1   1 1 0 1  
 1
1 
0 
 1
 y    2 4 2 2   x2   x  2   x  4   x 2   x 2  . Based on the reduced row echelon form of
 2 
 x  1   2   3   4  
 y3   1 8 3 5  3 
 1
8 
 3
 5
 x4 
 1
1 


A obtained above, by Theorem 4.8.5 the vectors  2  and  4  are linearly independent, and they span
 1
8 
 1
1 


R  TA  . We conclude that  2  and  4  form a basis for R  TA  .
 1
8 
27.
(a)

   T  ka  ka x  ka x  ka x   5ka  ka x
 k  5a  a x   kT  a  a x  a x  a x 
T a  a x  a x  a x  b  b x  b x  b x 
 T  a  b    a  b  x   a  b  x   a  b  x   5 a  b    a  b  x
  5a  a x    5b  b x   T  a  a x  a x  a x   T  b  b x  b x  b x 
T k a0  a1 x  a2 x 2  a3 x 3
2
0
2
0
2
3
0
2
0
1
1
1
0
1
2
0
1
1
2
2
0
3
3
3
2
3
therefore T is linear.
3
0
2
0
1
2
3
2
0
3
3
2
0
0
3
2
3
2
3
3
2
3
2
3
2
2
0
3
3
3
3
2
0
1
2
3
3
10
Chapter 8: Linear Transformations
(b)
The kernel of T consists of all polynomials p  x   a0  a1 x  a2 x 2  a3 x 3 such that


T a0  a1 x  a2 x 2  a3 x 3  5a0  a3 x 2  0 , which requires that a0  a3  0 .
Therefore every vector in ker  T  can be written in the form a1 x  a2 x 2 , i.e.,




ker  T   span x, x 2 . The set x, x 2 is linearly independent since neither polynomial is a scalar


multiple of the other one. We conclude that x, x 2 is a basis for ker  T  .
(c)






T a0  a1 x  a2 x 2  a3 x 3  5a0  a3 x 2 so R  T   span 5, x 2 . The set 5, x 2 is linearly independent
since neither polynomial is a scalar multiple of the other one.


We conclude that 5, x 2 is a basis for R  T  .
28.
(a)

  T  ka  ka x  ka x   3ka  ka x   ka  ka  x
 k  3a  a x   a  a  x   kT  a  a x  a x 
T a  a x  a x  b  b x  b x 
 T  a  b    a  b  x   a  b  x   3 a  b    a  b  x   a  b  a  b  x
  3a  a x   a  a  x    3b  b x   b  b  x 
 T a  a x  a x   T b  b x  b x 
T k a0  a1 x  a2 x 2
2
0
1
2
2
0
2
0
1
0
1
0
1
2
1
0
2
0
1
1
2
2
2
0
1
2
2
0
0
1
1
2
2
0
0
2
0
1
0
0
1
2
1
1
0
0
1
2
1
2
1
0
1
2
0
1
2
0
1
2
therefore T is linear.
(b)
The kernel of T consists of all polynomials p  x   a0  a1 x  a2 x 2 such that


T a0  a1 x  a2 x 2  3a0  a1 x   a0  a1  x 2  0 , which requires that a0  a1  0 .
 
Therefore every vector in ker  T  can be written in the form a2 x 2 , i.e., ker  T   span x 2 .
 
The set x 2 is linearly independent since x 2 is not the zero polynomial.
 
We conclude that x 2 is a basis for ker  T  .
(c)





T a0  a1 x  a2 x 2  3a0  a1 x   a0  a1  x 2  a0 3  x 2  a1 x  x 2



so R  T   span 3  x 2 , x  x 2 .

 is linearly independent since neither polynomial is a scalar multiple of the
other one. We conclude that 3  x , x  x  is a basis for R  T  .
The set 3  x 2 , x  x 2
2
29.
(a)
2
If p  x   0, then p(x) is a constant, so ker(D) consists of all constant polynomials.
8.1 General Linear Transformations
(b)
11
The kernel of J contains all polynomials a0  a1 x such that   a0  a1 x  dx  0 . By integration, this
1
1


1
a x2
a
a
condition yields a0 x  12   0 , i.e., a0  21  a0  21  0 , or equivalently, a0  0 .
 1
The kernel consists of all polynomials of the form a1 x .
30.
For any functions f and g in C  a, b  and for any real number k we have


T  kf   5kf  x   3 kf  t  dt  k 5 f  x    f  t  dt  kT  f  and
x
a
x
a


T  f  g   5  f  x   g  x    3  f  t   g  t   dt  5 f  x   3 f  t  dt  5g  x   3 g  t  dt
x
a
x
a
x
a

 T  f   T g 
This shows that T is a linear operator.
31.
(a)
If f 4  x   0, then f   x   a for some constant a . Applying Fundamental Theorem of Calculus, we
obtain f   x   ax  b , then f   x   a2 x 2  bx  c , and f  x   6a x 3  b2 x 2  cx  d for constants b ,
c , and d . We conclude that P3 is the kernel of T  f  x    f    x  .
4
(b)
32.
By similar reasoning, T  f  x    f 
n 1
 x  has ker T   Pn .
Since R  T   R , it follows from Theorem 8.1.4 that ker  T  has dimension
dim  M nn   dim  R   n 2  1 .
33.
(a)
R(T) must be a subspace of R3 , thus the possibilities are a line through the origin, a plane through the
origin, the origin only, or all of R 3 .
34.
(b)
The origin, a line through the origin, a plane through the origin, or the entire space R3 .
(a)
For all polynomials p  x  and q  x  in Pn and for every scalar k we have
T  kp  x    kp  x  1  kT  p  x   and
T  p  x   q  x    p  x  1  q  x  1  T  p  x    T  q  x   therefore T is linear.
(b)
Since T maps the zero polynomial p  x   0 to q  x   1 , by Theorem 8.1.1(a), T is not a linear
transformation.
35.
T  2 v1  3v 2  4 v 3   2T  v1   3T  v 2   4T  v 3    2,  2, 4    0, 9, 6    12, 4, 8 
  10,  7, 6 
36.
Let v  c1v1  c2 v 2    cn v n be any vector in V. Then
T  v   c1T  v1   c2T  v 2     cnT  v n   c1 0  c2 0    cn 0  0
Since v was an arbitrary vector in V, T must be the zero transformation.
12
37.
Chapter 8: Linear Transformations
Let v  c1v1  c2 v 2    cn v n be any vector in V. Then
T  v   c1T  v1   c2T  v 2     cnT  v n   c1v1  c2 v 2    cn v n  v
Since v was an arbitrary vector in V, T must be the identity operator.
38.
For every vector v in V , there are unique scalars c1 , c2 , ..., cn such that v  c1v1  c2 v 2    cn v n .
A linear transformation with the desired properties can be defined by
T  v   T  c1v1  c2 v 2    cn v n   c1w1  c2 w 2    cn w n
39.
T  kp  x    kp  q0  x    kT  p  x  
T  p1  x   p2  x    p1  q0  x    p2  q0  x    T  p1  x    T  p2  x  
True-False Exercises
(a)
True. c1  k, c2  0 gives the homogeneity property and c1  c2  1 gives the additivity property.
(b)
False. Every linear transformation will have T(v) = T(v).
(c)
True. Only the zero transformation has this property.
(d)
False. T  0   v 0  0  v 0  0, so T is not a linear transformation.
(e)
True. This follows from part (a) of Theorem 8.1.3.
(f)
True. This follows from part (b) of Theorem 8.1.3.
(g)
False. T does not necessarily have rank 4.
(h)
False. det(A + B)  det(A) + det(B) in general.
(i)
False. nullity(T) = rank(T) = 2
8.2 Compositions and Inverse Transformations
1.
2.
3.
(a)
Not one-to-one (maps distinct vectors with the same x components into the same vector).
(b)
One-to-one (distinct vectors that are reflected have distinct images).
(c)
One-to-one (distinct vectors that are reflected have distinct images).
(a)
One-to-one (distinct vectors that are rotated have distinct images).
(b)
One-to-one (maps distinct vectors with the same x and y components into the same vector).
(c)
Not one-to-one (distinct vectors that are contracted have distinct images).
(a)
By inspection, ker(T) = {0}, so T is one-to-one.
(b)
By inspection, ker(T) = {0}, so T is one-to-one.
8.2 Compositions and Inverse Transformations
(c)
13
(x, y, z) is in ker(T) if both x  y  z  0 and x  y  z  0 , which is x  0 and y  z  0 .
Thus, ker  T   span  0,1,1 and T is not one-to-one.
4.
5.
(a)
 x, y  is in ker(T) if x  y  0 or x  y , so ker T   span 1,1 and T is not one-to-one.
(b)
T  x, y   0 if 2 x  3 y  0 or x   23 y so ker  T   span   32 ,1 and T is not one-to-one.
(c)
 x, y  is in ker(T) only if x  y  0 and x  y  0 , so ker T   0 and T is one-to-one.
(a)
 1 2 
The reduced row echelon form of A is 0 0  , so nullity( A )  1 .
0 0 


Multiplication by A is not one-to-one.
(b)
30 
1 0 0

The reduced row echelon form of A is  0 1 0 10  , so nullity( A )  1 .
 0 0 1
7 
Multiplication by A is not one-to-one.
6.
(a)
(b)
7.
 1 0
The reduced row echelon form of A is 0 1 , so nullity( A )  0 .
0 0 
Multiplication by A is one-to-one.
 1 0 12 0 
The reduced row echelon form of A is  0 1 2 0  , so nullity( A )  1 .
 0 0 0 1
Multiplication by A is not one-to-one.
(a)
Since nullity( T )  0 , T is one-to-one.
(b)
nullity  T   dim V   rank  T   0 therefore T is one-to-one.
(c)
Since rank  T   dim  W   dim V  , we have nullity( T )  dim V   rank  T   0 . We conclude that
T is not one-to-one.
8.
(a)
Since nullity( T )  0 , T is one-to-one; rank  T   dim V   nullity  T   dim V  so T is onto.
(b)
nullity  T   dim V   rank  T   0 so T is not one-to-one; rank  T   dim V  so T is not onto.
(c)
R  T   V so T is onto; nullity  T   dim V   rank  T   0 so T is one-to-one.
14
9.
Chapter 8: Linear Transformations


For example, T 1  x 2   1   1 ,1  12   0,0  .
2
The transformation is onto since for any real numbers a and b , a polynomial p  x  in P2 can be found
such that p  1  a and p 1  b .
10.
Setting a0  a1  x  1  a2 ( x  1)2  0 implies a0  a1  a2  0 . Since ker  T   0 , by Theorem 8.2.1, T
is one-to-one.
rank  T   dim  P2   nullity  T   dim  P2  therefore T is onto.
11.
No; T is not one-to-one because ker(T)  {0} as T(a) = a  a = 0.
12.
Since elementary matrices are invertible, EA  0 yields A  E 1 0  0 therefore ker  T   0 .
Consequently, by Theorem 8.2.1 T is a one-to-one linear operator.
13.
(a) The columns of A are linearly independent by inspection, so multiplication by A is one-to one by
Theorem 8.2.3(a). Since A has only two columns, they do not span R3 . Therefore, by Theorem 8.2.3(b),
multiplication by A is not onto.
(b) The columns of A are linearly dependent since they form a set of four vectors in R3 so multiplication
by A is not one-to-one by Theorem 8.2.3(a). The first, third, and fourth columns of A are linearly
 1 1 1  


independent since det  1 1 0    1  0 so these columns span R3 . By Theorem 8.2.3(b),
 1 0 0  


multiplication by A is onto.
 5 4  
2
(c) Since det  A   det  
  1  0, the columns of A are a basis of R . By Theorem 8.2.3,

 1 1  
multiplication by A is both one-to-one and onto.
  2
1 0 


(d) Since det  a   det   6 3 1   0, the columns of A are linearly dependent which implies they do
  8 4 3 


not span R 3 . By Theorem 8.2.3, multiplication by A is neither one-to-one nor onto.
14.
(a) The columns of A are linearly independent by inspection, so multiplication by A is one-to one by
Theorem 8.2.3(a). Since A has only two columns, they do not span R3 . Therefore, by Theorem 8.2.3(b),
multiplication by A is not onto.
(b) The columns of A are linearly dependent since they form a set of four vectors in R3 so multiplication
by A is not one-to-one by Theorem 8.2.3(a). The second, third, and fourth columns of A are linearly
  3 1 1  


dependent since det   6 0 2    0 . Also, first column is 3 times the third column and the second
  9 1 3  


8.2 Compositions and Inverse Transformations
15
column of A is 3 times the first column. Therefore, A has fewer than three linearly independent columns
so its columns do not span R3 . By Theorem 8.2.3(b), multiplication by A is not onto.
  3 9  
(c) Since det  A   det  
   0, the columns of A are linearly dependent and, therefore, do not
  1 3  
span R 2 . By Theorem 8.2.3, multiplication by A is neither one-to-one nor onto.
 2 3 8  


(d) Since det  A   det  0 1 4    2  0, the columns of A are a basis for R 3 . By Theorem 8.2.3,
 0 0 1 


multiplication by A is both one-to-one and onto.
15.
16.
17.
(a)
The reflection about the x-axis in R 2 is its own inverse.
(b)
The rotation through an angle of  / 4 in R 2 (i.e., the clockwise rotation through an angle  / 4 ) is
the desired inverse.
(a)
The reflection about the yz-plane in R3 is its own inverse.
(b)
The rotation through an angle of 18 about the z-axis is the desired inverse.
(a)
0 1 
The standard matrix of the reflection about the line y  x is A  
.
1 0 
 0 1 0 1 
The inverse is A1   0  0 111 

  A.
 1 0   1 0 
(b)
cos
The standard matrix of the rotation by angle  about the origin is A  
 sin 
 cos
The inverse is A1   cos  cos  1sin    sin  
  sin 
sin   cos     sin    

 which is a rotation
cos   sin    cos    
by angle  about the origin.
18.
(a)
 1 0 
The standard matrix of the reflection about the y -axis is A  
.
 0 1
 1 0   1 0 
The inverse is A1   111 0  0  

  A.
 0 1  0 1
(Reflections about the x-axis can be treated analogously.)
(b)
 0 1
The standard matrix of the reflection about the origin is A  
.
 1 0 
 0 1  0 1
The inverse is A 1   0  0  1 1 1 

  A.
 1 0   1 0 
19.
(a)
T 1  2 x   1  2  0  ,1  2 1   1,  1
 sin  
.
cos 
16
Chapter 8: Linear Transformations
(b)
T  kp  x     kp  0  , kp 1   k  p  0  , p 1   kT  p  x   ;
T  p  x   q  x     p  0   q  0  , p 1  q 1    p  0  , p 1    q  0  , q 1 
 T  p  x   T q  x 
(c)
Let p  x   a0  a1 x, then T  p  x     a0 , a0  a1  so if
T  p  x     0, 0  , then a0  a1  0 and p is the zero polynomial, so ker(T) = {0}.
(d)
Since T  p  x     a0 , a0  a1  , then T 1  2, 3  has a0  2 and a0  a1  3 or a1  1. Thus,
T 1  2, 3   2  x.
20.
(a)
T is not one-to-one, e.g., T  0,,0,1  T  0,,0,2    0,,0,0  .
(b)
ker  T    0,,0  thus T is one-to-one;
T 1  x1 , x2 ,, xn    xn , xn 1 ,, x2 , x1  (i.e., T 1  T )
21.
(c)
ker  T    0,,0  thus T is one-to-one; T 1  x1 , x2 ,, xn    xn , x1 ,, xn 1 
(a)
For T to have an inverse, all the ai 's must be nonzero since otherwise T would have a nonzero
kernel.
(b)
22.

T 1  x1 , x2 , ..., xn   a11 x1 , a12 x2 , ..., a1n xn

 0
 
1 0 2 5  0  1
2
Observe that 
  2   1 so that TA  0,0, 2,1  1,1 . Therefore, any vector v in R for
3
4
1
3


 
 
1
 
 0
 0
which TA  v   1,1 can be expressed in vector form as v     u for some vector u in the kernel of
 2 
 
 1
2 5
1 0
1 0 2 5 
TA . The reduced row echelon form of 
is 
 so any vector in the kernel may

5
3 4 1 3 
0 1  4 3
8.2 Compositions and Inverse Transformations
17
 2 
 5 
 5
 3
be expressed parametrically as u  t  4   s   . Therefore, any vector v in R 2 for which TA  v   1,1
 1
 0
 
 
 0
 1
 0   2 
 5
 0  5 
 3
4



can be written parametrically as v 
t
 s .
 2   1
 0
   
 
 1  0 
 1
23.
T2  T1  x, y   T2  2 x, 3 y    2 x  3 y, 2 x  3 y 
24.
T2  T1  x, y   T2  2 x,  3 y, x  y    2 x  3 y,  3 y  x  y    2 x  3 y, x  2 y 
25.
T2  T1   a0  a1 x  a2 x 2   T2 T1  a0  a1 x  a2 x 2    T2  a0  a1  x  1  a2  x  1 
2

 x a0  a1  x  1  a2  x  1
26.
2
  a x  a x  x  1  a x  x  1
0
1
2
2
T1  T2   p  x    T1 T2  p  x     T1  p  x  1   p   x  1  1  p  x 
T2  T1   p  x    T2 T1  p  x     T2  p  x  1   p   x  1  1  p  x 
27.
28.
29.
 a c  
  a  d
 b d  
(a)
T1  T2  A   T1  AT   tr  
(b)
T2  T1  A  does not exist because T1  A is not a 2  2 matrix.
(a)
T1  T2  A   T1  AT   k  
(b)
T2  T1  A   T2  
  a c    ka kc 
  

 b d    kb kd 
  ka kb    ka kc 
  

  kc kd    kb kd 
T3  T2  T1  x, y   T3 T2 T1  x, y     T3 T2  2 y,3 x, x  2 y    T3  3 x, x  2 y, 2 y 
  3 x  2 y, x  2 y   2 y     3 x  2 y, x 
30.
T3  T2  T1  x, y   T3 T2 T1  x, y     T3 T2  x  y, y,  x    T3  0, x  y  y  x,3 y 
 T3  0,2 y,3 y    4 y,12 y  6 y    4 y,6 y 
31.
(a)
Since T1  p  x    xp  x  , T11  p  x    1x p  x  .
Since T2  p  x    p  x  1 , T21  p  x    p  x  1 .
T  T   p  x    T  p  x  1   p  x  1
1
1
1
2
1
1
1
x
18
Chapter 8: Linear Transformations
(b)


Since  T2  T1   p  x    T2 T1  p  x    T2  xp  x     x  1 p  x  1 , we have
T2  T1   T11  T21   p  x     T2  T1   1x p  x  1    x  1  x11  p  x  1  1  p  x 
32.
(a)
Setting T1  x, y    x  y, x  y    0,0  and T2  x, y    2 x  y, x  2 y    0,0  yields the systems
x 
y  0
2x 
and
x  y  0
x
y  0
 2y  0
each of which has only the trivial solution  x, y    0,0  therefore ker  T1   ker  T2    0,0  . By
Theorem 8.2.1, T1 and T2 are one-to-one.
(b)
The standard matrices of T1 and T2 are
1
1


2
1


T1   1 1 and T2    1 2  , respectively. Both matrices are invertible, and we have

1
1
T1   T1   

1
2
1
2

 25
1
1




and
T
T



1
2
 2 
 
5

 so that
 
1
2
1
2
1
5
2
5
 x 
x  1
T11      T11      12
 y  2
  y 
  x   21 x  21 y 
1
1
1
1
1


 , i.e. T1  x, y    2 x  2 y, 2 x  2 y  and
   y   12 x  12 y 
 x 
 25
1  x 


T      T2      1
 y  5
  y 
  x   25 x  15 y 
, i.e. T21  x, y    25 x  15 y, 15 x  25 y  .
    1
2 
   y  5 x  5 y
1
2
1
2
1
2
1
5
2
5
Since  T2  T1  x, y   T2  T1  x, y    T2  x  y, x  y    3 x  y,  x  3 y  we have
 103
 3 1
1

which
is
an
invertible
matrix
with
T

T


T
T
 2 1  1
 2 1   1 3 


 10
 101 
.
3 
10 
 101   x   103 x  101 y 
1
 , i.e.,
3  
3
10   y 
 10 x  10 y 
3
1  x 
1   x  
Therefore  T2  T1       T2  T1      101
 y   10
  y 
T2  T1   x, y    103 x  101 y, 101 x  103 y  .
1
(c)
T  T   x, y   T T  x, y    T  x  y, x  y 
1
1
1
2
1
1
1
2
1
1
2
5
1
5
1
5
2
5
  12  25 x  15 y   21  15 x  25 y  , 12  25 x  15 y   12  15 x  25 y     103 x  101 y, 101 x  103 y 
  T2  T1 
1
 x, y 
33.
T2  v   14 v, then  T1  T2  v   T1  14 v   4  14 v   v and  T2  T1  v   T2  4v   14  4v   v.
34.
(a)
T2  T1   
(b)
 1 0  
  0 1 
For example,  T2  T1   
   T2  T1   

   1,1,1 .
 0 1  
  0 1 
 a b  
   T2   a  b    c  d  x    a  b, c  d, a  b 
 c d  
8.2 Compositions and Inverse Transformations
(c)
19
For example, 1,2,3  cannot be expressed in the form  a  b, c  d, a  b  .
35.
By inspection, T(x, y, z) = (x, y, 0). Then T(T(x, y, z)) = T(x, y, 0) = (x, y, 0) = T(x, y, z) or T  T  T .
36.
T  kf   kf  0   2 kf   0   3kf  1  kT  f 
T  f  g   f  0   g  0   2  f   0   g  0    3  f  1  g 1   T  f   T  g 
Thus T is a linear transformation.
Let f  f  x   x 2 ( x  1)2 , then f   x   2 x  x  1 2 x  1 so f(0) = 0, f   0   0, and f  1  0.
T  f   f  0   2 f   0   3 f  1  0, so ker(T)  {0} and T is not one-to-one.
37.
(a)
D  kp  x    dxd  kp  x    kp  x   kD  p  x  
D  p  x   q  x    dxd  p  x   q  x    p  x   q  x   D  p  x    D  q  x  
J  kp  x     kp  t  dt  k  p  t  dt  kJ  p  x  
x
x
0
0
J  p  x   q  x      p  t   q  t   dt   p  t  dt   q  t  dt  J  p  x    J  q  x  
x
x
x
0
0
0


   2 x ) so it does not have an inverse.
(b)
D is not one-to-one (e.g., D x  3  D x
(c)
Yes, this can be accomplished by taking D : V  Pn 1 and J : Pn 1  V where V is the set of all
2
2
polynomials p  x  in Pn such that p  0   0 .
38.
39.
(a)
 J  D  f   J  D  x 2  3 x  2    J  2 x  3   0  2t  3  dt   t 2  3t  0  x 2  3 x
(b)
 J  D  f   J  D  sin x    J  cos x   0  cos t  dt  sin t 0  sin x
x
x
x
x
The kernel of J contains all polynomials a0  a1 x such that   a0  a1 x  dx  0 . By integration, this
1
1


1
a x2
a
a
condition yields a0 x  12   0 , i.e., a0  21  a0  21  0 , or equivalently, a0  0 .
 1
The kernel consists of all polynomials of the form a1 x .
Since ker  J   0 , by Theorem 8.2.1, J is not one-to-one.
40.
For every polynomial q  x   a0  a1 x  a2 x 2    an 2 x n 2  an 1 x n 1 in Pn 1 we can construct a polynomial
p  x   a0 x  21 x 2  32 x 3    nn12 x n 1  nn1 x n in Pn for which D  p  x    p  x   q  x  . We conclude that
a
a
a
a
D is onto.
41.
(a)
By parts (g) and (j) of Theorem 8.2.4, the range of T cannot be R n - it must be a proper subset of R n
instead.
1 0 
For instance A  
 is the standard matrix of the orthogonal projection onto the x -axis; the
0 0 
range of this transformation is the x -axis - a proper subset of R 2 .
20
Chapter 8: Linear Transformations
(b)
42.
(a)
By parts (g) and (b) of Theorem 8.2.4, the kernel of T must contain at least one nonzero vector v .
Consequently T maps infinitely many vectors (e.g., scalar multiples kv ) into 0 .
 1 0 
By parts (g) and (j) of Theorem 8.2.4, the range of T is R n . For instance A  
 is the standard
 0 1
matrix of the reflection about the y -axis; the range of this transformation is R 2 .
43.
(b)
By parts (g) and (b) of Theorem 8.2.4, the kernel of T must contain only the zero vector.
Consequently T maps only 0 into 0 .
(a)
Yes. If T1 : R n  R m and T2 : R m  R k are both one-to-one then for any vectors u and v in R n ,
T2  T1  u    T2  T1  v   must imply T1  u   T1  v  (since T2 is one-to-one), which further implies that
u  v (since T1 is one-to-one), therefore the composition T2  T1 : R n  R k is also one-to-one.
(b)
Yes. For instance, T1  x1 , x2    x1 , x2 ,0  is one-to-one but T2  x1 , x2 , x3    x1 , x2  is not. However,
the composition T2  T1  x1 , x2    T2  x1 , x2 ,0    x1 , x2  is obviously one-to-one.
However, if T1 is not one-to-one, then the composition T2  T1 is not one-to-one since there must exist
two vectors u  v such that T1  u   T1  v  leading to T2  T1  u    T2  T1  v   .
44.
For any linear transformation T : V  W , dim V   nullity  T   rank  T   rank  T  .
If T is onto then rank  T   dim  W  . Consequently, dim V   dim  W  .
True-False Exercises
(a)
True. This is equivalent to Definition 1.
(b)
False. If T  u   T  v  , then v is the unique vector in V with image T  v  . Therefore u  v.
(c)
True.
(d)
True. For T to have an inverse, it must be one-to-one.
(e)
False. T 1 does not exist.
(f)
True. If T1 is not one-to-one then there is some nonzero vector v1 with T1  v1   0. Thus
T2  T1  v1   T2  0   0 and ker T2  T1   0.
(g)
(h)
True. This follows from parts (b) and (j) of Theorem 8.2.4.
1 2 
2 1 
False. For instance, TA and TB where A  
and B  
are two matrix operators on R 2 whose


3 4 
3 4 
composition is not commutative.
(i)
True. This is stated following the proof of Theorem 8.2.3.
(j)
True. This follows from parts (t) and (v) of Theorem 8.2.4.
8.3 Isomorphism
8.3 Isomorphism
1.
The transformation is an isomorphism.
2.
The transformation is not an isomorphism(not onto).
3.
The transformation is an isomorphism.
4.
The transformation is not an isomorphism(not a linear transformation).
5.
The transformation is not an isomorphism (not a linear transformation).
6.
The transformation is an isomorphism.
7.
The transformation is an isomorphism.
8.
The transformation is not an isomorphism(not onto).
9.
10.
(a)
a
b
 a b c    

 c
T  b d e     
 c e f    d 
 e

 
 f 
(b)
a 
a 


  a b    c 
  a b   b 
and
T1  
T
.

2 
 

 c d   b 
 c d   c 
 
 
d 
d 
(a)
If p(x) is a polynomial and p(0) = 0, then p(x) has constant term 0.
a 
T ax  bx  cx  b 
c 

(b)
3
2

a 
T  a  bsin  x   ccos  x     b 
 c 
11.
det  A   3  0 , so by Theorem 8.2.4 TA is one-to-one and onto. Consequently, TA is an isomorphism.
12.
det  A   0 , so by Theorem 8.2.4 TA is not one-to-one. Consequently, TA is not an isomorphism.
21
22
13.
Chapter 8: Linear Transformations
1 1 1 1 
The reduced row echelon form of A is 0 0 0 0  . The solution space W contains vectors
0 0 0 0 
 x1 , x2 , x3 , x4  such that x1  r  s  t , x2  r , x3  s , x4  t so
 x1 , x2 , x3 , x4    r  s  t, r, s, t   r  1,1,0,0   s  1,0,1,0   t  1,0,0,1 hence dim W   3 .
 r  s  t, r, s, t    r, s, t  is an isomorphism between W and R3 .
14.
1
0
The reduced row echelon form of A is 
0

0
0 1 0
1 0 1 
. The solution space W contains vectors
0 0 0

0 0 0
 x1 , x2 , x3 , x4  such that x1  s , x2  t , x3  s , x4  t so
 x1 , x2 , x3 , x4    s, t, s, t   s  1,0,1,0   t  0, 1,0,1 hence dim W   2 .
 s, t, s, t    s, t  is an isomorphism between W and R2 .
15.
a b 
Let us denote the given transformation by T . T is a linear transformation, since for any A  
 and
c d 
 a  b 
B
 in M 22 and for any scalar k , we have
 c d  
ka
a








  ka kb   
ka  kb
ab


  kT  A  and


T  kA   T  
k
   ka  kb  kc 


kc
kd
a

b

c






 ka  kb  kc  kd 
a  b  c  d 
a  a






  a  a  b  b   
aa bb

T  A  B  T  
 

a  a   b  b  c  c 
  c  c d  d    


 a  a   b  b  c  c   d  d  
a
a

 

 ab
 

a   b

  T  A  T  B .
=
 a  b  c   a   b  c  

 

 a  b  c  d   a   b  c   d  
a

 0 
 ab
 0 
    implies a  b  c  d  0 , therefore ker  T     0 0   so T is
By inspection, 


 a  b  c  0 
  0 0  

  
 a  b  c  d  0 
one-to-one.
rank  T   dim  M 22   nullity  T   4  0  4 hence T is onto.
We conclude that T is an isomorphism.
8.3 Isomorphism
16.
23
Let us denote the given transformation by T .
0 
  1 1   0 
 1 1
By inspection, T  
therefore 

 is in ker  T  so T is not one-to-one.

0 0 
 0 0   0 
 
0 
We conclude that T is not an isomorphism.
17.
Yes,  a, b    0, a, b  is an isomorphism between R 2 and the yz -plane in R3 .
18.
(a)
M mn is isomorphic to R k if k  mn (since that makes dimensions of both spaces equal).
(b)
M mn is isomorphic to Pk if k  mn 1 (since that makes dimensions of both spaces equal).
19.
0 0 
No. By inspection, T  x 2  x   T  0   
 so T is not one-to-one.
0 0 
20.
a
Let us denote the given transformation by T . T is a linear transformation, since for any A   0
 a2
b
B 0
 b2
a1 
and
a3 
b1 
in M 22 and for any scalar k , we have
b3 
  ka
T  kA   T   0
  ka2
ka1  
2
3
2
3
  ka0  ka1 x  ka2 x  ka3 x  k a0  a1 x  a2 x  a3 x  kT  A 
ka3  
  a  b0
T  A  B   T   0
  a2  b2



a1  b1  
2
3
   a0  b0    a1  b1  x   a2  b2  x   a3  b3  x

a3  b3  
 

 a0  a1 x  a2 x 2  a3 x 3  b0  b1 x  b2 x 2  b3 x3  T  A   T  B  .
  0 0  
By inspection, a0  a1 x  a2 x 2  a3 x 3  0 implies a0  a1  a2  a3  0 so ker  T    
 .
  0 0  
Therefore T is one-to-one.
rank  T   dim  M 22   nullity  T   4  0  4 hence T is onto.
We conclude that T is an isomorphism.
Furthermore, T  A  , T  B   a0 b0  a1b1  a2 b2  a3 b3  A, B , so T is an inner product space isomorphism.
True-False Exercises
 
(a)
False. dim R2  2 while dim  P2   3.
(b)
True. If ker(T) = {0} then rank(T) = 4 so T is one-to-one and onto.
(c)
False. dim  M 33   9 while dim  P9   10.
24
(d)
Chapter 8: Linear Transformations
a 
  a b 0   b 
a b 0
True. For instance, if V consists of all matrices of the form 
then
is an
,
T


 
c d 0
  c d 0   c 
 
d 
isomorphism T : V  R 4 .
(e)
True.
(f)
True. For instance if V consists of all vectors in R n 1 of the form  x1 ,, xn ,0  , then
T  x1 ,, xn    x1 ,, xn ,0  is an isomorphism T : R n  V .
8.4 Matrices for General Linear Transformations
1.
(a)
 
T  u1   T 1  x  v 2 , T  u 2   T  x   x 2  v 3 , and T  u 3   T x 2  x 3  v 4 therefore
0 
0 
0 
0
0 
1
1 
0 
T  u1       , T  u 2       , and T  u 3       , thus T B, B  
B
B
B
0 
1 
0 
0
 
 
 

0 
0 
1 
0
(b)
0
 c0 
1
By inspection,  x B   c1  so T B, B  x B  
0
 c2 

0
0 0
0 0 
.
1 0

0 1
0 0
0 
 c0   

0 0     c0 
.
c1 
1 0     c1 
  c2   
0 0     c2 
0 
c 
On the other hand, T  x ]B   c0 x  c1 x 2  c2 x 3 ]B   0  . Formula (5) is satisfied.
c1 
 
c2 
2.
(a)
 
From the definition of T we have T 1  1 , T  x   1  2 x , and T x 2  3x . By inspection,
1 0
1 
 1
 0
1
T 1      , T  x       , and T x 2     , therefore T B, B  
.
B
B



B
0 
 2 
0 2 3
 3
 
8.4 Matrices for General Linear Transformations
(b)
 c0 
 c0 
1 0     c0  c1 
1


By inspection,  x B   c1  so T B, B  x B  
c1  
.
0 2 3    2c1  3c2 

 c2 
 c2 
 c0  c1 
On the other hand, T  x    c0  c1    2c1  3c2  x and, by inspection, T  x   B  
.
   2c1  3c2  
Formula (5) is satisfied.
3.
(a)
 
T(1) = 1; T  x   x 1  1  x ; T x 2  ( x  1)2  1  2 x  x 2
1
 1 1

Thus the matrix for T relative to B is  0
1 2  .
 0 0
1
(b)
1  a0   a0  a1  a2 
 1 1

T ]B  x]B  0 1 2   a1    a1  2a2  . For x  a0  a1 x  a2 x 2 ,
 0 0

1  a2  
a2
T  x   a0  a1  x  1  a2 ( x  1)2  a0  a1  a2   a1  2 a2  x  a2 x 2 ,
 a0  a1  a2 
so [T  x ]B   a1  2 a2  .


a2
4.
(a)
0 
 1
From the definition of T we have T  u1     and T  u 2     . To find the coordinate vectors,
2 
 1
we solve the systems
a1
a1
 a2
 0
 2
b1
b1
and
 b2
 1
 1
2 
 1
 2 1
so clearly we have T  u1   B    and T  u 2   B    . Therefore T B  
.
2 
 0
2 0 
(b)
x
For an arbitrary x    in R 2 , solving the system
 y
c1
c1
 c2
 x
 y
 y 
 2 1  y   x  y 
yields  x B  
therefore T B  x B  


.

y  x
2 0   y  x   2 y 
 x  y
On the other hand, T  x   
 . Solving the system
 x  y
c1
c1
 c2
 xy
 xy
25
26
Chapter 8: Linear Transformations
 x  y
showing that Formula (8) holds for every x in R 2 .
yields T  x   B  

 2y 
5.
(a)
 7
6 
 1    
  2    
8
1
T  u1   T       1  0 v1  2 v 2  3 v 3 ; T  u 2   T       2   0 v1  v 2  43 v 3
 3    0 
  4   0 
 
 
 0 0


[T ]B, B    12 1
 83 43 
(b)
x
For an arbitrary x    in R 2 , solving the system
 y
c1
3c1
 2c2
 4c2
 x
 y
 0 0 2 x y
 0 
 2 x5 y 
 5  



1
yields  x B   3 x  y  therefore [T ]B, B  x B    2 1  3 x  y     2x  .
 10 
 83 43   10   23 x  23 y 
 x  2 y
On the other hand, T  x     x  . Solving the system
 0 
c1
 2c2
c1
c1
 2c2
 3c3

x  2y


x
0
 0 


yields T  x   B    2x  showing that Formula (5) holds for every x in R 2 .
 23 x  23 y 
6.
(a)
  23 
 12 
 1


 
We have T  v1     1 , T  v 2   B   12  , and T  v3   B   12  .
B
 12 
  12 
 0 
 1  23

1
Therefore T B   1
2
1
 0
2
(b)


.
 
1
2
1
2
1
2
For an arbitrary x   x1 , x2 , x3  in R3 , solving the system
a1
a1
a2
 a2
 a3

x1
 a3


x2
x3
8.4 Matrices for General Linear Transformations
 12  x1  x2  x3  


yields  x B   12   x1  x2  x3   therefore
 12  x1  x2  x3  


 1  23
T B  x B   1 12
1
 0
2
  12  x1  x2  x3    23 x1  x2  12 x3 
  1
 1

1
  2   x1  x2  x3      2 x1  x2  2 x3  .
   12  x1  x2  x3     12 x1  12 x3 
1
2
1
2
1
2
On the other hand, solving the system
b1
b1
b2
 b2
 b3

x1  x2
 b3


x2  x1
x1  x3
 32 x1  x2  12 x3 


yields T  x   B    12 x1  x2  12 x3  showing that Formula (8) holds for every x in R3 .
  12 x1  12 x3 
(c)
The augmented matrix of the homogeneous system
x1

x2
 0
 x1
x1

x2
 0
 0

x3
 1 0 1 0 
has the reduced row echelon form  0 1 1 0  . Since the system has nontrivial solutions
 0 0 0 0 
(e.g., 1,1,1 ) ker  T    0,0,0  so that T is not one-to-one.
7.
(a)
 
We have T 1  1 , T  x   2 x  1  1  2 x , and T x 2  (2 x  1)2  1  4 x  4 x 2 .
1 1 1 
Therefore, [T ]B   0 2 4  .
 0 0 4 
(b)
Step 1.
The coordinate vector of x  2  3 x  4 x 2 with respect to the basis B is
 2
 x B   3 .
 4 
Step 2.
 1 1 1  2   3
[T ]B  x B   0 2 4   3  10   T  x   B .
 0 0 4   4  16 
27
28
Chapter 8: Linear Transformations
Step 3.
Reconstructing T  x  from the coordinate vector obtained in Step 2, we obtain
 
T  x   3 1  10  x   16 x 2  3  10 x  16 x 2 .
8.


(c)
T 2  3 x  4 x 2  2  3  2 x  1  4(2 x  1)2  2  6 x  3  16 x 2  16 x  4  3  10 x  16 x 2
(a)
From the definition of T we have T 1  x , T  x   x 2  3 x , and
 
T x 2  x  x  3  x3  6 x 2  9 x .
2
0 
 0
1 
 3 
The corresponding coordinate vectors are T 1  B    , T  x   B    , and
0 
 1
 
 
0 
 0
0
 0
0 0
 9
 1 3 9 
.
T x 2     , therefore T   
B, B

 B   6 
0
1 6 
 


1
 1
0 0
 
(b)
Step 1.
 1
The coordinate vector of x  1  x  x with respect to the basis B is  x B   1 .
 1
Step 2.
0
0 0
 0
 1 
 1 3

9     11

T B,B  x B  0 1 6   1   7   T  x  B .

  1 

1    1
0 0
Step 3.
Reconstructing T  x  from the coordinate vector obtained in Step 2, we obtain
2
   
T  x   0 1  11 x   7 x 2  1 x 3  11x  7 x 2  x 3 .
9.



  11x  7 x  x .
(c)
T 1  x  x 2  x 1   x  3   x  3
(a)
Since A is the matrix for T relative to B, A  T (v1 )B
2
2
3
T  v 2    .
B
 1
3 
That is, [T  v1 ]B    and [T  v 2 ]B    .
 2 
5
(b)
 1
 1   2   3 
Since [T  v1 ]B    , T  v1   1v1  2 v 2          .
 2 
3  8   5
 3   5   2 
Similarly, T  v 2   3v1  5v 2          .
9   20   29 
8.4 Matrices for General Linear Transformations
(c)
29
x 
1 
 1
If  1   c1v1  c2 v 2  c1    c2   , then
3 
 4
 x2 
x1  c1  c2
x2  3c1  4c2
Solving for c1 and c2 gives c1  47 x1  17 x2 , c2   73 x1  17 x2 , so
x 
 3
 2   187 x1  17 x2 
T   1    c1T  v1   c2T  v 2    47 x1  17 x2       37 x1  17 x2      107

24
 5
 29    7 x1  7 x2 
  x2  
 18
  1077
 7
10.
1
7
24
7
  x1 
  .
  x2 
(d)
 1   18
T       1077
 1    7
(a)
 3
 2 
 1
0 






By Formula (4), T  v1      1 , T  v 2      6  , T  v 3      2  , and T  v 4      1 .
B
B
B
B
 3
 0 
 7 
 1
(b)
1
7
24
7
 1  197 
     83 
 1   7 
Reconstructing the vectors using their coordinate vectors obtained in part (a) yields
0 
 7 
 6   42 
0   7 
 6   11












T  v1   3 8   1  8   3  9    5 T  v 2   2 8   6  8   0  9    32 
8 
 1
 1  10 
8   1
 1 22 
 0   7   6   13
0 
 7 
 6   56 








T  v 3   1 8   2  8   7  9    87  T  v 4   0 8   1  8   1  9    17 
8   1  1  2 
8 
 1
 1  17 
(c)
We follow the procedure of Example 10 in Section 8.1. First of all, we express x   x1 , x2 , x3 , x4  as a
linear combination of the vectors in B . The corresponding linear system
c1
2 c2
 c2
 c3
 4c3
 6 c4
 9c4


x1
x2
c1
c1


 c3
 2c3
 4 c4
 2 c4


x3
x4
c2
c2
has the augmented matrix whose reduced row echelon form is
1

0
0

0
x  4 x2  12 x3  112 x4 

1 0 0
x  2 x2  12 x3  25 x4 
.
0 1 0  x  25 x2  15 x3  15 x4 

0 0 1  x  35 x2  15 x3  45 x4 
0 0 0
9
2 1
5
2 1
2
5 1
3
5 1
This yields c1  92 x1  4 x2  12 x3  112 x4 , c2  25 x1  2 x2  12 x3  25 x4 ,
30
Chapter 8: Linear Transformations
c3   25 x1  25 x2  15 x3  15 x4 , c4   35 x1  35 x2  15 x3  45 x4 . Therefore
T  x1 , x2 , x3 , x4    x1  4 x2  x3 
9
2
1
2
11
2
 11
 42 


5
5
1
x4   5   2 x1  2 x2  2 x3  2 x4   32 
 22 
 10 
 56 
 13


   25 x1  25 x2  15 x3  15 x4   87     35 x1  35 x2  15 x3  45 x4   17 
 17 
 2 
x  495 x2  241
x  229
x
  253
10 1
10 3
10 4 
 115

65
153
  2 x1  39 x2  2 x3  2 x4  .
 66 x1  60 x2  9 x3  91x4 
11.
(d)
 2  
     31
2
T       37
 0  
     12 
 0  
(a)
Since A is the matrix for T relative to B, the columns of A are [T  v1 ]B , [T  v 2 ]B , and [T  v 3 ]B ,
1 
 3
 1




respectively. That is, [T  v1 ]B  2  , [T  v 2 ]B   0  , and [T  v 3 ]B   5 .
6 
 2 
 4 
(b)
1 
Since [T  v1 ]B  2  ,
6 
T  v1   v1  2 v 2  6 v 3  3 x  3 x 2  2  6 x  4 x 2  18  42 x  12 x 2  16  51x  19 x 2 .
Similarly, T  v 2   3v1  2 v 3  9 x  9 x 2  6  14 x  4 x 2  6  5 x  5 x 2 , and
T  v 3    v1  5v 2  4 v 3  3 x  3 x 2  5  15 x  10 x 2  12  28 x  8 x 2
 7  40 x  15 x 2 .
(c)
If a0  a1 x  a2 x 2  c1v1  c2 v 2  c3 v 3






 c1 3 x  3 x 2  c2 1  3 x  2 x 2  c3 3  7 x  2 x 2 then
a0  c2  3c3 , a1  3c1  3c2  7c3 , a2  3c1  2c2  2c3 .
Solving for c1 , c2 , and c3 gives
c1  13  a0  a1  2a2  , c2  81  5a0  3a1  3a2  , c3  81  a0  a1  a2  , so


  a  a  2a  16  51x  19 x    5a  3a  3a   6  5 x  5 x 
T a0  a1 x  a2 x 2  c1T  v1   c2T  v 2   c3T  v3 
1
3
2
0
1
2
1
8
2
0
1
2
8.4 Matrices for General Linear Transformations

 81  a0  a1  a2  7  40 x  15 x 2

(d)
239 a0 161a1  289 a2
24

201a0 111a1  247 a2
8
x

61a0 31a1 107 a2
12
x2 .
In 1  x 2 , a0  1, a1  0, a2  1.


107 2
T 1  x 2  23924289  2018 247 x  6112
x  22  56 x  14 x 2
12.
(a)
T2  T1 1  T2 T1 1   T2  x   2 x  1 ;
T2  T1  x   T2 T1  x    T2  x 2    2 x  1  4 x 2  4 x  1 ;
2
1 
1 
1 1 




 T2  T1 1    2  and  T2  T1  x      4  therefore T2  T1 B, B   2 4  ;
B
B
0 
 4 
 0 4 
 
T2 1  1 ; T2  x   2 x  1 ; T2 x 2   2 x  1  4 x 2  4 x  1 ;
2
1 
1 
1 
1 1 1 






2
T2 1     0  , T2  x      2  , and T2 x    4  therefore T2 B   0 2 4  ;
B
B

 B
 0 
 0 
 0 0 4 
 4 
 
0 
0 
0 0 




T1 1  x ; T1  x   x ; T1 1     1  and T1  x     0  therefore T1 B, B   1 0  .
B
B
 0 
 1 
0 1 
2
13.
(b)
By Theorem 8.4.1, T2  T1 B, B  T2 B T1 B, B .
(c)
1 1 1  0 0  1 1 
T2 B T1 B, B  0 2 4  1 0   2 4   T2  T1 B, B
0 0 4   0 1  0 4 
(a)
0 0 
6 0 
2

;
T

T
1

T
2

6
x
and
T

T
x

T

3
x


9
x
so

T

T
[
]
 2 1   2  
 2 1   2  
2
1 B , B
 0 9 


0 0 
2 0 
T1 1  2 and T1  x   3 x so [T1 ]B, B   0 3 ;
 0 0 
 
T2 1  3 x , T2  x   3 x 2 , and T2 x 2
(b)
T2  T1 B,B  T2 B,B T1 B,B
0
3
 3x 3 so [T2 ]B, B  
0

0
0 0
0 0 
.
3 0

0 3
31
32
Chapter 8: Linear Transformations
(c)
14.
15.
0
3

0

0
0 0
0 0 
2 0  


0 0 
  6 0 

0
3
  0 9 
3 0 
  0 0  

0 3
0 0 
0 
0 
0 
1 
0
1 
0 
1
0 
0 
T  v1      , T  v 2      , T  v 3      , and T  v 4      therefore T B  
B
B
B
B
0
0 
1 
0 
0 

 
 
 
 
0
0 
0 
1 
0 
(a)
1 1
 0 1
2
Since T 1  
, T  x  

 , T x 
1
0
1
1





1
1
0 1 
 


,
we
have

1

T




 B 1 ,
1
0


 
1
 0
0 
1 0
 1
1 
1 1
T  x      , and T x 2     . Consequently, T B, B  
B

 B 1 
 1
1 1
 
 

 0
0 
1 0
 
0
1
.
1

0
1 1
 1 2
, T 1  x   
, and T 1  x 2 
Since T 1  


 0 1
1 1
1
1
1 2 
 

 , we have T 1  B  1 ,
2 1 
 
1
 1
1 
1
2 
2 
1
T 1  x      , and T 1  x 2     . Consequently, T B , B  
B

 B 2 
0 
1
 

 
 1
1 
1

(b)
0 0 1
0 0 0 
1 0 0

0 1 0

1 1
2 2 
.
0 2

1 1
Applying the three-step procedure to the bases B  and B we obtain:
Step 1.
2 
The coordinate vector of x  2  2 x  x with respect to the basis B is  x B   2  .
 1
Step 2.
1 0
1 1
T B,B  x B  1 1

1 0
Step 3.
Reconstructing T  x  from the coordinate vector obtained in Step 2, we obtain
2
0
2 
2   

1    5
 T  x   B .
2 
1    1 
  1  
0    2 
1 0 
0 1  0 0 
0 0  2 5
T x  2 
 5
 1
 2




.
0 0 
0 0  1 0 
0 1  1 2 
8.4 Matrices for General Linear Transformations
Applying the three-step procedure to the bases B and B we obtain:
2 
The coordinate vector of x  2  2 x  x with respect to the basis B is  x B  2  .
 1
2
Step 1.
To find the coordinate vector of x  2  2 x  x 2 with respect to the basis B we
solve the system
c1
 c2
 c3
 2
 c3
 2
 1
c2
 1
Back-substitution yields  x B   2  .
 1
Step 2.
1
1
T B,B  x B  1

1
1 1
2 
 1  

2 2     5
 T  x   B .
2 
0 2     1 
  1  
1 1    2 
Step 3.
Reconstructing T  x  from the coordinate vector obtained in Step 2, we obtain
1 0 
0 1  0 0 
0 0  2 5
 5
 1
 2
T x  2 




.
0 0 
0 0  1 0 
0 1  1 2 
16.
(c)
2 5
T 2  2 x  x2  

1 2 
(a)
 0 0   0 
 1 0  
 0 1  
 0 0   1 
Since T  
 T
 T 
    and T  



     , we have
 0 0  
 0 0  
 1 0   0 
 0 1   1 


  1 0   
  0 1   
  0 0   
  0 0   
0 
1 
and



T
T
T 

T  





     and













   0 0    B    0 0    B    1 0    B 0 
   0 1    B  1 
  1 0   
  0 1   
  0 0   
  0 0   
1
 0
T  
     therefore
   T  
   T  
     and T  




  0 0    B    0 0    B    1 0    B  1
  0 1    B 1
1 1 1 0 
 0


0
0 1
T B, B  0 0 0 1  and T B, B   1 1 1 1 .
(b)


Applying the three-step procedure to the bases B and B we obtain:
Step 1.
 1
2
1 2 
 .
The coordinate vector of x  
with
respect
to
the
basis
is
B

x



B
 3
3
4


 
4
33
34
Chapter 8: Linear Transformations
Step 2.
 1
 
1 1 1 0   2   6 
T
x

 B,B  B 0 0 0 1   3   4   T  x   B .


 
 
4 
Step 3.
Reconstructing T  x  from the coordinate vector obtained in Step 2, we obtain
1 
0   6 
T x  6    4      .
0 
1   4 
Applying the three-step procedure to the bases B and B we obtain:
Step 1.
 1
2
1 2 
 
The coordinate vector of x  
 with respect to the basis B is  x B   3 .
3 4 
 
4
Step 2.
 1
 
 0 0 0 1  2   4 
T B,B  x B   1 1 1 1  3   2   T  x  B .


 
 
4 
Step 3.
Reconstructing T  x  from the coordinate vector obtained in Step 2, we obtain
1
 1  6 
T x  4    2      .
1
 0 4 
17.
(c)
 1 2    6 
T 
   
 3 4    4 
(a)
0 
1 
0 




D  p1   0 ; D  p2   1 ; D  p3   2 x ;  D  p1    0  ,  D  p2    0  , and  D  p3    2 
B
B
B
0 
0 
0 
0 1 0 
therefore  D B   0 0 2  .
 0 0 0 
(b)
 6
Denoting p  6  6 x  24 x , we have  p B   6  ;
 24 
2
 0 1 0   6   6 
 D B  pB  0 0 2   6    48    D  p  B thus D  p   6p1  48p2  0p3  6  48 x .
 0 0 0   24   0 
8.4 Matrices for General Linear Transformations
18.
(a)
0 
D  p1   0 ; D  p2   3 ; D  p3   3  16 x ; by inspection we obtain  D  p1    0  ;
B
0 
to determine the coordinate vectors  D  p2   B and  D  p3   B we solve the systems
2 a1
 2 a2
 2 a3
 3
 3 a2
 3a3

0
8a3

0
2b1
and
 2b2
 2b3
 3
 3b2
 3b3
 16
8b3

0
23
 236 
0  32
  23 
6 
 16 

 
16 
0  3.
obtaining  D  p2   B   0  and  D  p3   B    3  . Therefore  D B  0
 0 
0
 0 
0
0 
(b)
Solving the system
2 a1
 2 a2
 2 a3

 3a2
 3a3
 6
8a3

6
24
 1
we obtain a1  1 , a2  1 , a3  3 so that  pB   1 ;
 3
23
1  13
0  32
6 

16  
 DB  pB  0 0  3  1  16    D  p  B
0
0
0   3  0 


therefore D  p   13p1  16p2  0p3  13  2   16  2  3 x   0 2  3 x  8 x 2  6  48 x
19.
(a)
D  f1   D 1  0 ; D  f2   D  sinx   cosx ; D  f3   D  cosx   sinx
0 0 0 
The matrix for D relative to this basis is 0 0 1 .
0 1 0 
(b)
 2
0 0 0   2   0 


Since  2  3sin x  4 cos x B   3 ,  D  2  3sin x  4 cos x     0 0 1  3   4  .
B
 4 
 0 1 0   4   3
Consequently, D  2  3sin x  4 cos x    0 1  4sin x  3cos x  4sin x  3cos x .
20.
x is in V ;  x B is in R 4 ; T  x   B is in R 7 ; T  x  is in W
21.
(a)
T2  T1 B, B  T2 B,B T1 B,B
(b)
T3  T2  T1 B,B  T3 B,B T2 B,B T1 B,B
35
36
22.
Chapter 8: Linear Transformations
Applying T to each basis vector for V yields 0 (the zero vector in W ). Furthermore, the coordinate
vector of 0 with respect to any basis for W must be the zero vector in R n where n  dim  W  .
It follows from formula (4) that the matrix for T relative to the two bases is a zero matrix.
23.
The matrix for T relative to B is the matrix whose columns are the transforms of the basis vectors in B in
terms of the standard basis. Since B is the standard basis for R n , this matrix is the standard matrix for T.
Also, since B ' is the standard basis for R m , the resulting transformation will give vector components
relative to the standard basis.
True-False Exercises
(a)
False. The conclusion would only be true if T : V  V were a linear operator, i.e., if V = W.
(b)
False.
(c)
True. Since the matrix for T is invertible, by Theorem 8.4.2 ker(T) = {0}.
(d)
False. It follows from Theorem 8.4.1 that the matrix of S  T relative to B is  S ]B  T ]B .
(e)
True. This follows from Theorem 8.4.2.
8.5 Similarity
1.
2.
(a)
det  A   2 does not equal det  B   1
(b)
tr  A   3 does not equal tr  B   2
(a)
det  A   1 does not equal det  B   0
(b)
tr  A   2 does not equal tr  B   0
1
3.
3 2 
 1 2   1 2 
  31 1 2 1 
Since 


,
1 1 
 1 3  1 3
3 2   2 0   1 2   6 10 

3  2
3



T B  PB B T B PB B  1 1  1 1   1
1
4.
 4 5
 1 5  19
1


Since 

 4  1  51  1
4   19
 1 1

1
T B  PB B T B PB B   19
9

,
 
5
9
4
9
  3 2   4 5   19


   1 1  1 1  269
5
9
4
9
 179 
37 
9 
8.5 Similarity
1
5.
3 2 
 1 2   1 2 
  31 1 2 1 
Since 


,
1 1 
 1 3  1 3
 1 2   2 0  3 2   2 2 

3  1 1 1 1   6
5

T B  PB B T B PB B   1
1
6.
 4 5
 1 5  19
1

Since 

  1
 4  1  51 
 1 1
 1 4   9
4
5  3 2   1


T B  PB B T B PB B   1 1  1 1  91
7.
 9

,
 
5
9
4
9
  209

   95
5
9
4
9
 179 
16 
9 
 1
 2 
From the definition of T we have T  u1      T  u1   B and T  u 2      T  u 2   B therefore
0 
 1
4
7
4 7
 1 2 
. Since  v1 B    and  v 2 B    , we have PB B  
.

1 
2 
 1 2

 1
T B   0
 2 7 
The inverse of PB B is PB1 B  
 . Using Theorem 8.5.2, we obtain
 1 4 
 2 7   1 2   4 7   11 20 
.

4  0 1  1 2   6 11

T B  PB1 B T B PB B   1
8.
 16 
 3 
From the definition of T we have T  u1     and T  u 2     . Solving the linear systems
 2 
 16 
2 a1
2 a1
 4 a2
 a2
 16
 2
and
2b1
2b1
 4b2
 b2
 3
 16
4
 61 
 45

therefore
T
yields the coordinate vectors T  u1   B   185  and T  u 2   B   10


 18

19
B
5
5
 5 
From Theorem 8.5.1, PB B   I B, B . Solving the linear systems
2 a1
2 a1
 4 a2
 a2
 18
 8
and
2b1
2b1
 4b2
 b2
 10
 15
 5
3
 5 3
yields the coordinate vectors  v1 B    and  v 2 B    therefore PB B  
.
 1
 2 1
2 
 1 3 
The inverse of PB B is PB1 B  
 . Using Theorem 8.5.2, we obtain
 2 5 
T B  P
1
B  B
 1 3  4
T B PB B   2 5  185

 5
25
  5 3  15
2 
  98

.

   2 1   2 18 
61
10
19
5

.
 
61
10
19
5
37
38
9.
Chapter 8: Linear Transformations
Denoting e1  1,0,0  , e 2   0,1,0  , and e 3   0,0,1 , we have T  e1    2,1,0    T  e1   B ,
 2 1 0 
T  e 2    1,0,1   T  e 2   B , and T  e 3    0,1,0    T  e 3   B therefore T B   1 0 1 .
 0
1 0 
 2 1 0 
Since  v1  B   2,1,0  ,  v 2  B   1,0,1 , and  v 3  B   0,1,0  , we have PB B   1 0 1 .
 0
1 0 
Using Theorem 8.5.2, we obtain
1
 2 1 0   2 1 0   2 1 0   2 1 0 
1
T B  PB B T B PB B   1 0 1  1 0 1  1 0 1   1 0 1 .
 0
1 0   0 1 0   0 1 0   0 1 0 
10.
Denoting e1  1,0,0  , e 2   0,1,0  , and e 3   0,0,1 , we have T  e1   1,0,1   T  e1   B ,
 1 2 1
T  e 2    2, 1,0    T  e 2   B , and T  e 3    1,0,7    T  e 3   B therefore T B   0 1 0  .
 1 0 7 
 1 1 1
Since  v1  B  1,0,0  ,  v 2  B  1,1,0  , and  v 3  B  1,1,1 , we have PB B  0 1 1 .
0 0 1
1
B B
The inverse of PB B is P
T B  P
1
B  B
11.
 1 1 0 
 0
1 1 . Using Theorem 8.5.2, we obtain
0 0
1
3
 1 1 0   1 2 1  1 1 1  1 4







T B PB B  0 1 1 0 1 0  0 1 1   1 2 9 .
 0 0
1  1 0 7   0 0 1  1
1 8 
Denoting e1  1,0  and e 2   0,1 we have T  e1    cos 45, sin 45  

T  e 2     sin 45, cos 45   
Since  v1  B 

1
2
obtain T B  P
1
2
1
2
,

 12
  T  e 2   B therefore T B   1
 2
 12
, 12 and  v 2  B   12 , 12 , we have PB B   1
 2
1
B  B


 12

T
P
 B B   B  1
 2

 12 

1
2

1
 12
1
 2
 12   12
 1
1
2
  2
 ,   T  e   and
1
2
1
2
1
B
 12 
.
1
2

 12 
 . Using Theorem 8.5.2, we
1
2

 12   12
 1
1
2
  2
 12 
.
1
2

8.5 Similarity
12.
Denoting e1  1,0  and e 2   0,1 we have T  e1   1,0    T  e1   B and
1 k 
T  e 2    k,1   T  e 2   B therefore T B  
.
0 1 
k 1
Since  v1  B   k,1 and  v 2  B  1,0  , we have PB B  
.
1 0 
0 1 
The inverse of PB B is PB1 B  
 . Using Theorem 8.5.2, we obtain
1 k 
0
1  1 k   k 1 
1 0 
.
 1
T B  PB1 B T B PB B  1 k  0 1  1 0    k

13.



Denoting p1  1 and p2  x we have T  p1   1  x and T  p2   x . Thus  T  p1   B   1,1 and
1 0
T  p     0,1 so T    1 1 . Since  q   1,1 and  q    1,1 , we have P
2

B
B

 1
1
The inverse of PB B is PB B   21
 2

1

2
1
2
1
2
T B  PB1B T B PBB   21
14.
1
2
1
2
B B
2 B
1 1

.
1 1

 . Using Theorem 8.5.2, we obtain

  1 0  1 1  12


  3
  1 1 1 1  2

.
 
1
2
1
2
We have T  p1   9  3 x and T  p2   12  2 x . Thus  T  p1   B   23 , 12  and  T  p2   B    29 , 43  so
 23
T

 B  1
2
 29 
  29
7
2 1
1
.
Since
and
,
we
have
P

q


,
q

,

 1 B  9 3 
 2 B  9 6 
 1
B  B
4
3
 3
3
PB B is PB1 B   43
2

 . Using Theorem 8.5.2, we obtain
1
3
2
(a)

 . The inverse of
 
7
9
1
6
7
2
T B  PB1 B T B PB B   43
15.
1 B
  23

1  12
7
2
 29    29
4 1
3 3
  1 1
.

  0 1
7
9
1
6
 
We have T 1  5  x 2 , T  x   6  x , and T x 2  2  8 x  2 x 2 so the matrix for T relative to the
2
5 6

standard basis B is [T ]B  0 1 8  . The characteristic polynomial for [T ]B is
 1 0 2 
 3  2 2  15  36     4  (  3)2 so the eigenvalues of T are  = 4 and  = 3.
(b)
 2 
A basis for the eigenspace of [T ]B corresponding to  = 4 is  83  , so the polynomial
 1
39
40
Chapter 8: Linear Transformations
2  83 x  x 2 is a basis in P 2 for the corresponding eigenspace of T .
 5
A basis for the eigenspace of [T ]B corresponding to  = 3 is  2  , so the polynomial 5  2 x  x 2 is a
 1
basis in P 2 for the corresponding eigenspace of T .
16.
(a)
  1 0  0 1  0 0  0 0  
Let B be the standard basis for M 22 , B   
,
,
,
  . We have
 0 0  0 0  1 0  0 1  
 1 0   0 1 
 0 1   0 0 
 0 0    2 1
 0 0   0 0 
T 
, T 
, T 
, T 









  
 so by
  1 0    2 0 
 0 0   0 0 
 0 0   1 0 
 0 1   0 1 
inspection
0 
0 
 2
0 






 
  1 0   






 1  , T  0 1     0  , T   0 0      1 , T   0 0     0 

T  









  0 0    B 0    0 0    B  1     1 0    B  2     0 1    B 0 
 
 
 
 
0 
0 
 0
1 
0
1
therefore T B  
0

0
0
0 
 A.
0

1
The characteristic polynomial of A is

0
2
0
1
1 2
0
0
2
0
1  1
0
2
det   I  A  
    1    1   2  .
0 1   2
0
 1
0 0
0
0
We conclude that the eigenvalues of T are 1 , 1 , and 2 .
(b)
1
0
The reduced row echelon form of I  A is 
0

0
0 2 0 
1 3 0 
so that the eigenspace corresponding to
0
0 0

0
0 0
a 
2 
b 
 3
 1 contains vectors   where a  2s , b  3s , c  s , d  t . Vectors u1    and
c 
 1
 
 
d 
0 
8.5 Similarity
41
0 
0 
u 2    form a basis for this eigenspace of A  T B .
0 
 
 1
1
0
The reduced row echelon form of  I  A is 
0

0
0 2
1 1
0 0
0 0
0
0 
so that the eigenspace corresponding to
1

0
a 
 2 
b 
 1
  1 contains vectors   where a  2t , b  t , c  t , d  0 . A vector u 3    forms a basis
c 
 1
 
 
d 
 0
for this eigenspace of A  T B .
1
0
The reduced row echelon form of 2 I  A is 
0

0
0
1
0
0
1
0
0
0
0
0 
so that the eigenspace corresponding to
1

0
a 
 1
b 
 0
  2 contains vectors   where a  t , b  0 , c  t , d  0 . A vector u 4    forms a basis
c 
 1
 
 
d 
 0
for this eigenspace of A  T B .
Reconstructing vectors in M 22 from their respective coordinate vectors u 1 , u 2 , u 3 , and u 4 yields
18.

 2 3  0 0  
a basis  
,
  for the eigenspace of T corresponding to  1 ,
 1 0  0 1  

  2 1 
a basis  
  for the eigenspace of T corresponding to   1 ,
  1 0  

  1 0  
a basis  
  for the eigenspace of T corresponding to   2 .
  1 0  
1 0 
1 0 
E.g., 
and 

 are not similar since their determinants do not equal.
0 1 
0 0 
42
19.
Chapter 8: Linear Transformations
 3 4 
The matrix of T with respect to the standard basis for R 2 , B  1,0  ,  0,1 , is T B  
.
 1 7 


By Formula (12), det  T   det T B  17 .
The characteristic polynomial of T B is  2  10  17 . The eigenvalues of T are 5  2 2 .
20.
The matrix of T with respect to the standard basis for R3 , B  1,0,0  ,  0,1,0  ,  0,0,1 , is
 1 1 0 
T B   0 1 1 . By Formula (12), det T   det T B  0 .
 1 0
1


The characteristic polynomial of T B is  3  3 2  3 . The eigenvalues of T are 0 and 3  2i 3 .
21.


T a0  a1 x  a2 x 2  a0  a1  x  1  a2  x  1   a0  a1  a2    a1  2 a2  x  a2 x 2
2

hence the matrix of T with respect to the standard basis for P2 , B  1, x, x

2

1
 1 1

, is T B   0
1 2  .
 0 0
1

By Formula (12), det  T   det T B  1 . The characteristic polynomial of T B is    1 .
3
The only eigenvalue of T is 1 .
22.
(a)

T a0  a1 x  a2 x 2  a3 x 3  a4 x 4

 a0  a1  2 x  1  a2  2 x  1  a3  2 x  1  a4  2 x  1
2
3
4
  a0  a1  a2  a3  a4    2 a1  4 a2  6 a3  8a4  x   4 a2  12 a3  24 a4  x 2
+  8a3  32 a4  x 3  16 a4 x 4


hence the matrix of T with respect to the standard basis for P4 , B  1, x, x 2 , x 3 , x 4 , is
1
0

T B   0

0
0
1
2
0
0
0
1 1 1
4 6 8 
4 12 24  . By Theorem 2.1.2, this matrix has nonzero determinant equal to the

0 8 32 
0 0 16 
product of the entries on the main diagonal. It follows from Theorem 8.2.4 that the rank of T is 5 ,
and the nullity of T is 0 .
(b)
23.
Theorem 8.2.4 also implies that T B is invertible, therefore by Theorem 8.4.2, T is one-to-one.
Step (1) follows from the hypothesis (since B  P 1 AP ).
Step (2) follows from Formula (1) in Section 1.4 (since I  P 1 P ) .
Step (3) follows from parts (f), (g), and (m) of Theorem 1.4.1.
Step (4) follows from Theorem 2.3.4.
8.5 Similarity
43
Step (5) follows from commutativity of real number multiplication.
Step (6) follows from Theorem 2.3.4, Formula (1) in Section 1.4, and from Theorem 2.1.2
 


(since det P 1 det  P   det P 1 P  det  I   1 ).
31.
Since C  x ]B  D x ]B for all x in V, then we can let x  v i for each of the basis vectors v1 , ..., v n of V.
Since [ v i ]B  e i for each i where e1 , ..., e n  is the standard basis for R n , this yields Ce i  De i for
i = 1, 2, ..., n. But Ce i and De i are the ith columns of C and D, respectively. Since corresponding columns
of C and D are all equal, C = D.
True-False Exercises
(a)
False. Every matrix is similar to itself since A  I 1 AI .
(b)
True. If A  P 1 BP and B  Q 1CQ, then A  P 1 Q1CQ P  (QP)1 C  QP  .
(c)
True. Invertibility is a similarity invariant.
(d)
True. If A  P 1 BP, then A1  ( P 1 BP )1  P 1 B 1 P.
(e)
True.
(f)
False. For example, if T1 is the zero operator then T1 B with respect to any basis B is a zero matrix.
(g)
True. By Theorem 8.5.2, for any basis B  for R n there exists P such that


T B  P 1 T B P  P 1 IP  P 1 P  I .
(h)
False. If B and B  are different, let [T ]B be given by the matrix PB B .
Then T B, B  PB  B T B  PB B PB B  I n .
8.6 Geometry of Matrix Operators
1.
Coordinates  x, y  are being transformed to coordinates  x, y  according to the equation
1
 x   5 2   x   1 2   x
 x   5 2   x 
so x  x  2 y ' and y  2 x  5 y .
 y    2 1   y  . Thus  y   2 1   y    2
5  y 
  
   
  
 
Substituting these into y  4 x yields 2 x  5 y  4  x  2 y  or equivalently y  136 x  .
2.
Coordinates  x, y  are being transformed to coordinates  x, y  according to the equation
1
 x   4 3   x   2 3  x
 x    4 3   x 
 y    3 2   y  . Thus  y    3 2   y    3 4   y  so x  2 x  3 y and
  
   
 
  
 
y  3 x  4 y . Substituting these into y  4 x  3 yields 3 x  4 y  4  2 x  3y   3 or equivalently
11
y  16
x   163 .
444
33.
Chaptter 8: Linear Transformatio
T
ons
1 3
From Table 5, the staandard matrix
x for the shearr in the x -dirrection by a ffactor 3 is A  
.
0 1
nates  x, y  are
a being transsformed to co
oordinates  x , y  accordinng to the equaation
Coordin
1
 x  1 3  x   1 3  x 
 x  1 3  x 
T
so x  x   3y and y  y . Substitutinng
 y    0 1   y    0
 y   0 1   y  . Thus
1  y 
   
  
 
  
these in
nto y  2 x yieelds y  2  x   3y  or equ
uivalently y  27 x  .
44.
1 0 
From Table 4, the staandard matrix
x for the comp
pression in thhe y -directionn by a factor 12 is A  
.
1
0 2 
Coordin
nates  x, y  are
a being transsformed to co
oordinates  x , y  accordinng to the equaation
1
 x  1 0   x  1 0   x 
 x  1 0   x 
T
 y   0 1   y   0 2   y  so x  x and y  2 y . Suubstituting theese into
 y   0 1   y  . Thus
  
  
 
  
2 
2
2  or equivaalently y  x .
y  2 x yields 2 y  2x
55.
3 1 0  0  3 1  1  3 3 1  0   1
3 1 1  2 
Since 
mage of
 , 
 , 
   , and 



      , the im






1 2  1  1
1 2  0  0  1 2  0  1  1 2   1   2 
the unitt square is a parallelogram with verticess  0,0  ,  3,1 ,  1, 2  , aand  2, 1 .
66.
 2 1 0   0   2 1  1   2   2 1 0   1 
 2 1 1 3
Since 
 , 
 , 
   , and 



      , the imaage of the






 1 2  1  1
 1 2   0   0   1 2   0   1  1 2   1   2 
unit squ
uare is a squarre with vertices  0,0  ,  2,, 1 , 1,2  , aand  3,1 .
8.6 Geeometry of M
Matrix Operatoors
77.
(a)
45
We
W are looking
g for the stand
dard matrix of
o T  T2  T1 where T1 is tthe compressiion by a factoor of 12 in
th
he x-direction
n and T2 is thee expansion by
b a factor of 5 in the y-direection. From Table 4,
 1 0
 12 0 
1 0 
.
Theref
fore,
T

T
T






2
1

 and T2   
.

0 5 
0 5
0 1 
T1    2
(b)
We
W are looking
g for the stand
dard matrix of
o T  T2  T1 where T1 is tthe expansionn by a factor oof 5 in the
1 0 
y-direction
and
d T2 is the sheear by a facto
or of 2 in the y -direction. From Tables 4 and 5, T1   

0 5 
1 0 
 1 0
an
nd T2   
. Therefo
ore, T   T2 
T1   

.

2 5
2 1 
(c)
We
W are looking
g for the stand
dard matrix of
o T  T2  T1 where T1 is tthe reflection about the linee y  x
80 about thee origin. From
an
nd T2 is rotation through an
a angle of 18
m Tables 1 andd 3,
c
  sin
n180  1 0 
cos180
.

co
os180  0 1

0 1 
T1   1 0  and T2    sin180
s



 0 1
Therefore,
T
T   T2 T1    1 0  .


88.
(a)
We
W are looking
g for the stand
dard matrix of
o T  T3  T2  T1 where T1 is the reflecttion about thee y -axis,
T2 is the expannsion by a facctor of 5 in thhe x -directioon, and T3 is tthe reflectionn about the linne y  x .
 1 0 
5 0
0 1 
Frrom Tables 1 and 4, T1   
, T2   
, and T3   


 . Thereffore,
0 1 
1 0 
 0 1
 0
1


T   T3 T2 T1    5 0  .
(b)
We
W are looking
g for the stand
dard matrix of
o T  T3  T2  T1 where T1 is the rotatioon of 30 , T2 is the
sh
hear by a facttor of 2 in th
he y -directio
on, and T3 is tthe expansionn by a factor oof 3 in the
46
Chapter 8: Linear Transformations
3
cos30  sin 30   2

y -direction. From Tables 3,4, and 5, T1   


 sin 30 cos30  12
1 0 


3
 3 3  2
9.
(a)


.
3  3 2 3 
 12
3
2
T3   0 3  . Therefore, T   T3 T2 T1   
 12 
 1 0
, T2   

 , and
3

2
1



2 
 1 0
From Tables 1 and 4, T1 , a reflection about the x -axis has the standard matrix T1   
 and
0 1
T2 , a compression by a factor of 13 in the x -direction has the standard matrix
 1 0
.
0 1
T2    3
0
0
1
 13
;



T
T
T
T





2
1
2
1

.

0 1
0 1
T1  T2   T1 T2    3
Since T1  T2  T2  T1 , these operators commute.
(b)
0 1 
From Tables 1 and 4, T1 , a reflection about the line y  x has the standard matrix T1   
 and
1 0 
2 0 
T2 , an expansion by a factor of 2 in the x -direction has the standard matrix T2   
.
0 1 
0 1 
0 2 


T1  T2   T1 T2   2 0  ; T2  T1   T2 T1   1 0  .


Since T1  T2  T2  T1 , these operators do not commute.
10.
(a)
From Table 5, T1 , a shear in the y -direction with factor of 14 has the standard matrix
1 0 
3
 and T2 , a shear in the y -direction with factor of 5 has the standard matrix
1
4

T1    1
1 0 
 1 0
1
. T1  T2   T1 T2    17
; T2  T1   T2 T1    17


 5 1
 20 1
 20
T2    3
0
.
1 
Since T1  T2  T2  T1 , these operators commute.
(b)
From Table 5, T1 , a shear in the y -direction with factor of 14 has the standard matrix
 1 0
3
 and T2 , a shear in the x -direction with factor of 5 has the standard
1
4

T1    1
23
 20
1 35 
matrix T2   
.
T

T
T
T








1
2
1
2

1
0 1
 4

1
 ; T2  T1   T2 T1    1
1
4
3
5
Since T1  T2  T2  T1 , these operators do not commute.
3
5
23
20

.

8.6 Geometry of Matrix Operators
4
4
11. A  

 0 2 


Multiply
the first
row by 14
1
1
0 2 




Multiply
the second
row by  12
 1 1
0 1




Add 1
times the
second
row to
the first
row
47
 1 0
 0 1


0
1
 1 0
 1 1 
therefore E3 E2 E1 A  I with E1   4
so that
 , E2   0  1  , and E3  
1 
0
0 1
2

 4 0   1 0   1 1
A  E11 E21 E31  


.
 0 1   0 2  0 1
Multiplication by A has the geometric effect of shearing by a factor of 1 in the x -direction, then
reflection about the x -axis, then expanding by a factor of 2 in the y -direction, then expanding by a
factor of 4 in the x -direction.
12.
1 4 
A

2 9 


Add 2
times the
first row
to the
second
row
1 4 
0 1 




Add  4
times the
second
row to
the first
row
1 0 
0 1 


 1 0
 1 4 
1 0  1 4 
therefore E2 E1 A  I with E1  
and E2  
so that A  E11 E21  



.
1
 2 1
0
2 1  0 1 
Multiplication by A has the geometric effect of shearing by a factor of 4 in the x -direction, then shearing
by a factor of 2 in the y -direction.
13.
 0 2 
A

4 0


Interchange
the first row
and the
second row
4 0
 0 2 




Multiply
the first
row by 14
 1 0
0 2 




Multiply
the second
 1 0
0 1


row by  12
 1 0
 1 0
0 1 
therefore E3 E2 E1 A  I with E1  
, E2   4
 , and E3   0  1  so that

1 0 
0 1
2 

0 1   4 0   1 0 
A  E11 E21 E31  


.
 1 0   0 1   0 2 
Multiplication by A has the geometric effect of reflection about the x -axis, then expanding by a factor of
2 in the y -direction, then expanding by a factor of 4 in the x -direction, then reflection about the line
y x.
48
Chapter 8: Linear Transformations
14.
 1 3
A

4 6


Add  4 times
the first row
 1 3
0 18 


to the
second row


Multiply
the second
 1 3
0
1

1
row by 18


Add 3 times
the second row
 1 0
0 1


to the
first row
1
 1 0
therefore E3 E2 E1 A  I with E1  
, E2  

 4 1
0
0
1 3
, and E3  
 so that
1 
0 1
18 
 1 0   1 0   1 3 
.
A  E11 E21 E31  


1
 4 1  0 18   0
Multiplication by A has the geometric effect of shearing by a factor of 3 in the x -direction, then
expanding by a factor of 18 in the y -direction, then shearing by a factor of 4 in the y -direction.
15.
16.
17.
(a)
The unit square is expanded in the x-direction by a factor of 3.
(b)
The unit square is reflected about the x-axis and expanded in the y-direction by a factor of 5.
(a)
The unit square is reflected about the y-axis and expanded in the x-direction by a factor of 2.
(b)
The unit square is reflected about the x-axis, reflected about the y-axis and expanded in the x-direction
by a factor of 3.
(a)
Coordinates  a, b  are being transformed to coordinates  x, y  according to the equation
 x   3 1   a   3a  b 
 y   6 2   b   6 a  2b  . It follows that x  3a  b and y  6 a  2b satisfy the equation of the
  
  

line y  2 x (since 6 a  2b  2  3a  b  ).
(b)
18.
3 1 
 0 so 
 is not invertible; Theorem 8.6.1 applies only to invertible matrices.
6 2
6 2 
3 1
We need a real number k such that the shear of factor k in the x-direction transforms  2,1 into  0,1 , i.e.,
0  1 k  2 
 1    0 1   1  . Equating the corresponding components results in two equations, of which the second
  
 
one 1  1 is satisfied for all k . Solving the first equation 0  2  k for k yields k  2 , therefore the matrix
 1 2 
for the shear is 
.
1
0
19.
Coordinates  x, y  are being transformed to coordinates  x, y  according to the equation
1
 x  3 2   x  1 2   x
 x   3 2   x 
 y   1 1   y  . Thus  y    1 1  y    1 3  y  so x  x  2 y and y   x  3 y .
  
   
 
  
 
Substituting these into y  3 x  1 yields  x  3 y  3  x  2 y   1 or equivalently y  49 x   19 .
Substituting x  x  2 y and y   x  3y into y  3 x  2 yields  x   3 y  3  x   2 y   2 or equivalently
8.6 Geometry of Matrix Operators
y  49 x   29 . Since both lines we obtained in the  x, y  coordinates have the same slope ( 4 / 9 ), we
conclude that the given parallel lines are mapped into parallel lines.
20.
1 2 
The standard matrix of a shear by a factor 2 in the x -direction is 
.
0 1 
1 2  0  0  1 2  1  1 
 1 2   0.5  2.5
We obtain 
 , 
   , and 
     .






0 1   1   1 
0 1  0  0  0 1  0  0 
21.
(a)
1 1  0.5  0.5
1 1  0   0  1 1 1  1
We obtain 
   , and 
 , 



   
.



1 1  1   1.5
1 1  0   0  1 1 0  1
(b)
1 1
A

1 1


Add 1 times
the first row
to the
second row
 1 1
0 2 




Multiply
the second
row by 12
 1 1
0 1




Add
the second row
 1 0
0 1


to the
first row
1 0 
 1 0
 1 1
therefore E3 E2 E1 A  I with E1  
, E2  
, and E3  

 so that
1
 1 1
 0 1
0 2 
1 0   1 0   1 1
.
A  E11 E21 E31  


1
1 1  0 2   0
Shearing by a factor of 1 in the x -direction, then expanding by a factor of 2 in the y -direction,
then shearing by a factor of 1 in the y -direction will produce the same image as in part (a).
22.
(a)
 1 0   1  1
1 0   3  3 
We calculate 
   and 
   therefore the endpoints are 1,1 and  3,2  .
1 
1 
 0 2  2  1
0 2   4  2 
49
50
Chapter 8: Linear Transformations
(b)
3
cos30  sin 30  2

The standard matrix of rotation of 30 about the origin is 


 sin 30 cos30  12
 3
We calculate  2
1
 2
 23
 12  1   23  1 

and




3 2 
1
    12  3 
2 
 2
  1,  3  and 
3
2
23.
1
2
3 3
2

 12 
.
3

2 
 12   3   3 2 3  2 

 therefore the endpoints are

3 4
    23  2 3 
2 
 2, 32  2 3 .
We calculate the positions of corners of the image of the unit square in which the figure is inscribed:
 1 14  1  45 
 1 14  0   0   1 14   1   1   1 14  0   14 
      , and 

     , 
     , 
     .
 0 1  0   0   0 1  0   0   0 1   1  1 
 0 1  1  1
(The following calculations determine positions of the remaining endpoints of segments comprising the
 1 14  0.45  0.45  1 14  0.55  0.55  1 14   0  0.2375
outline of the figure: 




, 

, 

,
 0 1   0   0   0 1   0   0   0 1   0.95 0.95 
 1 14   0.45 0.6875  1 14   0.55  0.7875 
 1 14  1   1.2375 
, 
, and 







 .)




 0 1   0.95  0.95 
 0 1   0.95 0.95   0 1   0.95  0.95 
24.
25.
26.
No. When multiplying by a 2  2 invertible matrix, it follows from part (e) of Theorem 8.6.1, that three
points that are not collinear cannot map onto three collinear points, therefore the image of a square cannot
be a triangle. (By using part (c) of the same theorem as well, we can conclude that the image of a square has
to be a parallelogram instead.)
 2 1  2   2 
 2 1  0   0  2 1 1  1 
Since 
 , 
   , and 
      , the image of the given triangle is the






0 0  0  0 
 0 0   0   0  0 0  1  0 
line segment from (0,0) to (2,0). Theorem 8.6.1 does not apply here because A is singular.
(a)
1 0 k
0 1 k 


0 0 1
8.6 Geometry of Matrix Operators
(b)
A shear in the xz -direction with factor k moves each point  x, y, z  to the new position
 1 k 0
 x  ky, y, z  ky  . The standard matrix for this transformation is 0 1 0  .
0 k 1
A shear in the yz -direction with factor k moves each point  x, y, z  to the new position
 1 0 0
 x, y  kx, z  kx  . The standard matrix for this transformation is  k 1 0  .
 k 0 1
27.
28.
29.
(a)
3
1
cos30  sin 30 0   2  2 0 

 sin 30 cos30 0    1
3
0
2

  2
 0
0
1   0
0 1


(b)
0
1
 1
0
0

 
2
2
 0 cos  45   sin  45     0
 0 sin  45  cos  45    0  2

 
2
(a)
 cos90 0 sin 90   0 0 1
 0
1
0    0 1 0 

  sin 90 0 cos90   1 0 0 
(b)
cos     sin    0 


 sin    cos    0 
 0
0
1 

0

2
2 
2
2 
A unit vector in the direction of  2,2,1 is ||1v|| v   32 , 32 , 31  so in Formula (3), we take
a  b  23 and c  13 . Formula (3) yields the standard matrix
 49 1  cos    cos 
4
1
 9 1  cos    3 sin 
 29 1  cos    23 sin 

30.
1  cos    13 sin 
4
1  cos    cos 
9
2
1  cos    23 sin 
9
4
9
2
9
2
9
1  cos    23 sin     19
1  cos    23 sin     89
1
1  cos    cos    49
9

8
9
1
9
4
9



 
4
9
4
9
7
9
A unit vector in the direction of 1,1,1 is ||1v|| v  13 1,1,1 so in Formula (3), we take
a  b  c  13 . Formula (3) yields the standard matrix
 13 1  cos 2   cos 2

 13 1  cos 2   13 sin 2
1


 3 1  cos 2   13 sin 2
1  cos 2   13 sin 2
1
1  cos 2   cos 2
3
1
1  cos 2   13 sin 2
3
1
3
1  cos 2   13 sin 2   13
 
1
1  cos 2   13 sin 2    13  13
3
 
1
1  cos 2   cos 2   31  13
3
1
3
1
3
 13
1
3
1
3
 13
 13 

1
 13 
3

1

3
1
3
51
52
31.
Chapter 8: Linear Transformations
For the rotation about the x -axis, we take v  1,0,0  , so in Formula (3) we have a 1 , b  c  0 :
 1 1  cos 2   cos 2



 0  1  cos 2    0  sin 2



 0  1  cos 2    0  sin 2
 0  1  cos 2    0  sin 2  0  1  cos 2    0  sin 2   1 0 0 

 0  1  cos 2   cos 2  0  1  cos 2   1 sin 2   0 0 1 .
 0  1  cos 2   1 sin 2  0  1  cos 2   cos 2  0 1 0 
For the rotation about the y -axis, we take v   0,1,0  , so in Formula (3) we have b 1 , a  c  0 :
  0  1  cos 2   cos 2



 0  1  cos 2    0  sin 2



  0  1  cos 2   1 sin 2
 0  1  cos 2    0  sin 2  0  1  cos 2   1 sin 2   0 0 1

1 1  cos 2   cos 2  0  1  cos 2    0  sin 2    0 1 0  .
 0  1  cos 2    0  sin 2
 0  1  cos 2   cos 2   1 0 0 
For the rotation about the z -axis, we take v   0,0,1 , so in Formula (3) we have a  b  0 , c 1 :
  0  1  cos 2   cos 2



  0  1  cos 2   1 sin 2



 0  1  cos 2    0  sin 2
 0  1  cos 2   1 sin 2  0  1  cos 2    0  sin 2  0 1 0 

 0  1  cos 2   cos 2  0  1  cos 2    0  sin 2    1 0 0  .
 0  1  cos 2    0  sin 2
1 1  cos 2   cos 2  0 0 1
True-False Exercises
(a)
False. The image is a parallelogram.
(b)
True. This is a consequence of Theorem 8.6.2.
(c)
True. This is the statement of part (a) of Theorem 8.6.1.
(d)
True. Performing the same reflection twice amounts to no change (identity transformation).
(e)
False. The matrix represents a composition of a reflection and a dilation.
(f)
False. This matrix does not represent a shear in either x or y direction.
(g)
True. This matrix represents an expansion by a factor of 3 in the y -direction.
Chapter 8 Supplementary Exercises
1.
No. T  x1  x 2   A  x1  x 2   B   Ax1  B    Ax 2  B   T  x1   T  x 2 
2.
(a)
cos
A2  
 sin 
 sin   cos
cos   sin 
 sin   cos2   sin 2 

cos   2sin  cos
cos  2   sin  2  


 sin  2  cos  2  
cos
A3  
 sin 
 sin   cos  2   sin  2  


cos   sin  2  cos  2  
2sin  cos 

cos2   sin 2  
Supplementary Exercises
53
 cos cos  2   sin  sin  2   cos sin  2   sin  cos  2  


sin  cos  2   cos sin  2   sin  sin  2   cos cos  2  
cos  3   sin  3  


 sin  3  cos  3  
(b)
(c)
 cos  n   sin  n  
An  

 sin  n  cos  n  
According to Table 3 of Section 4.9, multiplication by A corresponds to a rotation through an angle
.
Consequently, multiplication by An corresponds to a rotation through an angle n , which agrees
with the result in part (b).
3.
For instance let A and B have all zero entries except for the nonzero entries  A 11   B 11 so that their
 1 0 0
2 0 0 


traces are different. E.g., A   0 0 0  and B   0 0 0  are not similar.
 0 0 0 
 0 0 0 
5.
(a)
 1 0 1 1
The matrix for T relative to the standard basis is A   2 1 3 1 . A basis for the range of T is a
 1 0 0 1
1
1 0 0

basis for the column space of A. A reduces to 0 1 0 1 .
0 0 1 0 
Since row operations don’t change the dependency relations among columns, the reduced form of A
indicates that T  e 3  and any two of T  e1  , T  e 2  , T  e 4  form a basis for the range.
The reduced form of A shows that the general solution of Ax = 0 is x = s, x2  s, x3  0, x4  s so a
 1
 1
basis for the null space of A, which is the kernel of T, is   .
 0
 
 1
(b)
6.
(a)
Since R(T) is three-dimensional and ker(T) is one-dimensional, rank(T) = 3 and nullity(T) = 1.
Setting  x1
x2
 1 2 4 
x3   3 0 1   0 0 0  yields the linear system
 2 2 5
 x1
2 x1
4 x1
 3 x2

x2
 2 x3
 2 x3
 5 x3
 0
 0
 0
54
Chapter 8: Linear Transformations
1 0 1 0 
The reduced row echelon form of the augmented matrix of this system is 0 1 1 0  therefore the
0 0 0 0 
kernel of T consists of all vectors  x1
x2
x3  such that x1  t , x2  t , x3  t . A basis for
ker  T  is formed by  1 1 1 .
(b)
The range of T contains all vectors x1  1 2 4   x2 3 0 1  x3  2 2 5 .
Based on the reduced row echelon form obtained in part (a),  1 2 4  and 3 0 1 form a basis
for R  T  .
7.
(a)
 1 1 2 2 
 1 1 4
6 
The matrix for T relative to B is [T ]B  
.
1 2
5 6 


3 2 
3 2
1
0
[T ]B reduces to 
0

0
0 1
1
0
0
2
3 4 
which has rank 2 and nullity 2. Thus, rank(T) = 2 and
0
0

0
0
nullity(T) = 2.
9.
(b)
Since nullity  T   0 , T is not one-to-one.
(a)
Since A  P 1 BP, we have
AT  ( P 1 BP )T  P T BT ( P 1 )T  (( P T )1 )1 BT ( P 1 )T  (( P 1 )T )1 BT ( P 1 )T .
Thus, AT and BT are similar.
(b)
11.
A1  ( P 1 BP )1  P 1 B 1 P thus A1 and B1 are similar.
a b 
 a  c b  d  b b   a  b  c 2b  d 
For X  
.


, T X  

0   d d   d
d 
c d 
 0
T(X) = 0 gives the equations
abc0
2b  d  0
d0
  1 0  
 k 0
Thus b  d  0 and c = a hence X is in ker(T) if it has the form 
, so ker  T   span  


  1 0  
 k 0 
which is one-dimensional. We conclude that nullity(T) = 1 and rank  T   dim  M 22   nullity  T   3.
Supplementary Exercises
13.
1 0 
 0 1
0 0 
0 0 
The standard basis for M 22 is u1  
, u2  
, u3  
, u4  


 . Since L  u1   u1 ,

0 0 
1 0 
 0 1
0 0 
1
0
L  u 2   u 3 , L  u 3   u 2 , and L  u 4   u 4 , the matrix of L relative to the standard basis is 
0

0
14.
(a)
 2 1 3 
P   1 1 4   PB B   v1 B

 0
1 2 
0 0 0
0 1 0 
.
1 0 0

0 0 1
 v 2 B  v3 B  therefore
v1  2 u1  1u 2  0 u 3 , v 2  1u1  1u 2  1u 3 , and v 3  3u1  4 u 2  2 u 3 .
(b)
5 7 
 2

Using the inverse matrix P   2
4 5  PB  B   u1 B
 1 2
3
1
 u 2 B  u 3 B  we obtain
u1  2 v1  2 v 2  1v 3 , u 2  5v1  4 v 2  2 v 3 , and u 3  7v1  5v 2  3v 3 .
15.
9
 1 1 1
 4 0



1
The transition matrix from B ' to B is P   0 1 1 , thus T ]B  P  T ]B P   1 0 2  .
 0 0 1
 0 1
1
16.
 1 1
2 1
2
Denoting A  
and B  

 we have det   I  A   det   I  B     5  5 .

1
4
1
3




Both matrices have eigenvalues 5 2 5 .


5 5
2
5 5
2

I  B  x  0 has solutions x 

t , x t; 
I  A x  0 has solutions x1  32 5 t , x2  t ;
1
 3 5
Letting P   2
 1


5 1
2

 521
 and Q  
1 
 1
3 5
2

 
 
2
5 5
2
5 5
2

I  B  x  0 has solutions x 
I  A x  0 has solutions x1  32 5 t , x2  t ;
1
 52 5

1
 we have P AP  
1 
 0
 5 1
2
B  Q P 1 AP Q 1  QP 1 A PQ 1  PQ 1
 5 1
2
t , x2  t
0 
1
  Q BQ therefore
5 5

2 
 A  PQ  , which shows that A and B are similar.
1
1
 3
  1 2  
1 
det  
  0 does not equal det  

   2 , thus that the two matrices are not similar.
  6 2  
  1 0 
17.
55
 1   1 
 0    1
 0    1
 1 1 1
   
   
   
T  0    0  , T  1     1 , and T  0     0  , thus [T }B   0
1 0  .
 0   1 
 0    0 
 1    1
 1 0 1
   
   
   
Note that this can also be read directly from [T  x ]B .
56
19.
Chapter 8: Linear Transformations
(a)
D  f  g    f  x   g  x    f   x   g  x  and D  kf    kf  x    kf   x  .
(b)
If f is in ker(D), then f has the form f  f  x   a0  a1 x, so a basis for ker(D) is
f  x  1, g  x   x .
(c)
The equation D(f) = f can be rewritten as y  y . Substituting y1  y and y2  y yields the system
y1  y2
y2  y
 0 1
The coefficient matrix of this system is A  
 . The characteristic polynomial of A is
 1 0
det   I  A  
 1
  2  1     1   1 thus the eigenvalues of A are   1 and   1 .
1 
 1 1
The reduced row echelon form of 1I  A is 
 so that the eigenspace corresponding to   1
0 0 
x 
1
consists of vectors  1  where x1  t , x2  t . A vector p1    forms a basis for this eigenspace.
1
 x2 
1 1 
The reduced row echelon form of 1I  A is 
 so that the eigenspace corresponding to   1
0 0 
x 
 1
consists of vectors  1  where x1  t , x2  t . A vector p2    forms a basis for this eigenspace.
 1
 x2 
 1 0
1 1
diagonalizes A and P 1 AP  D  
Therefore P  
.

 0 1
1 1
 1 0
The substitution y  Pu yields the “diagonal system” u   
 u consisting of equations u1  u1
0 1
and u2  u2 . From Formula (2) in Section 5.4, these equations have the solutions u1  c1e x ,
 c ex 
u2  c2 e  x , i.e., u   1  x  . From y  Pu we obtain the solution
 c2 e 
1 1  c1e x   c1e x  c2 e  x 
y
thus y1  c1e x  c2 e  x and y2  c1e x  c2 e  x .
 x    x

x 
1 1  c2 e  c1e  c2 e 
We conclude that the original equation y  y has the solution y  c1e x  c2 e  x .
Thus, f  x   e x and g  x   e x form a basis for the subspace of C 2  ,   containing the functions
satisfying the equation D(f) = f. (Other bases are possible, e.g. {e x , e  x } .)
Supplementary Exercises
20.
(a)
 12  5  1  6   2 


T x 2  5 x  6   02  5  0   6    6 
 12  5 1  6  12 

  
(b)
For any p  x  and q  x  in P2 and for any real number k we have

57

 kp  1 
 p  1 




T  kp  x     kp  0    k  p  0    kT  p  x   and
 kp 1 
 p 1 




 p  1  q  1   p  1  q  1 

 
 

T  p  x   q  x    p 0   q  0    p 0    q  0   T  p  x   T q  x 
 p 1  q 1   p 1   q 1 

 
 

therefore T is a linear transformation.
(c)
(d)
 a0  a1  a2   0 
   0  yields a  a  a  0 thus ker T  0 .
Solving T a0  a1 x  a2 x  
a0
  
0
1
2
  
 a0  a1  a2   0 
Consequently, T is one-to-one.

2

Setting T  a0  a1 x  a2 x 
2
 a0  a1  a2   0 
   3  yields a linear system with augmented matrix
 
a0
  
 a0  a1  a2   0 
3
1 0 0
1 1 1 0 
1 0 0 3 , whose reduced row echelon form is  0 1 0 0  .




 0 0 1 3
1 1 1 0 
Thus T 1  0,3,0   3  3 x 2 .
(e)
21.
(c)
Note that a1 P1  x   a2 P2  x   a3 P3  x  evaluated at x1 , x2 , and x3 gives the values a1 , a2 , and a3 ,
 
respectively, since Pi  xi   1 and Pi x j  0 for i  j .
(d)
From the computations in part (c), the points lie on the graph.
58
22.
Chapter 8: Linear Transformations
For any y  x  and z  x  in V and for any real number k we have
(a)
L  ky  x    dxd 2  ky  x    p  x  dxd  ky  x    q  x   ky  x    k  y  x   p  x  y  x   q  x  y  x  
2
 kL  y  x   and
L  y  x   z  x    dxd 2  y  x   z  x    p  x  dxd  y  x   z  x    q  x   y  x   z  x  
2
  y  x   p  x  y  x   q  x  y  x     z   x   p  x  z   x   q  x  z  x    L  y  x    L  z  x  
therefore L is a linear transformation.
Letting p  x   0 and q  x   1 , we have L  y  x    y  x   y  x  .
(b)
L   x    dxd 2  c1 sin x  c2 cos x   c1 sin x  c2 cos x
2
 dxd  c1 cos x  c2 sin x   c1 sin x  c2 cos x  c1 sin x  c2 cos x  c1 sin x  c2 cos x  0 therefore   x 
is in ker  L  for all real values of c1 and c2 .
23.
D 1  0
D x  1
 
D x2  2 x

 
D x n  nx n 1
This gives the matrix shown.
24.
 
For every integer k  1 , we have D  k !    k ! 
x c
k
k x c
k 1
 x  c k 1
  k 1! ; also, D  x  c   1 and D 1  0 .
0
0

0
Therefore, the matrix of D with respect to the given basis is 

0

 0
25.
0
1

0

The matrix of J with respect to the given bases is  0


0
0

1 0 0  0
0 1 0  0 
0 0 1  0
.
    
0 0 0  1

0 0 0  0 
0 0  0
0 0  0
1
2
0  0
0
1
3
 0
   
0 0  n1
0 0  0
0
0 
0

0 .
 

0
1 
n 1 
Supplementary Exercises
26.
x0 
1 0


y0  since 0 1
0 0
1 
(a)
1 0
The matrix M is  0 1
 0 0
(b)
 1 0 1
Using part (a), the matrix  0 1 3 .
 0 0 1
x0   x   1 x   0  y  ( x0 )1  x  x0 


y0   y    0  x  1 y   y0 1   y  y0  .
1   1    0  x   0  y  11  1 
59
9.1 LU-Decompositions
CHAPTER 9: NUMERICAL METHODS
9.1 LU-Decompositions
1.
 3 0   1 2   x1   0 
Step 1. Rewrite the system as 
 
2 1  0
1  x2  1 




 
L
x
U
b
 y   1 2   x1 
 3 0   y1  0 
and solve 
by forward
Step 2. Define y1 and y2 by  1   
 



y2   0
2 1  y2  1 
1  x2 




 



y
U
x
b
y
L
substitution to obtain y1  0 , y2  1 .
 1 2   x1  0 
Step 3. Solve 
by back substitution to find x1  2 , x2  1 .
 
0
1  x2  1 


 
2.
y
x
U
 3 0 0   1 2 1  x1   3
Step 1. Rewrite the system as  2 4 0   0
1 2   x2    22 
 4 1 2   0 0 1  x3   3
  


 

L
x
U
b
 3 0 0   y1   3
 y1   1 2 1  x1 






1 2   x2  and solve  2 4 0   y2    22  by
Step 2. Define y1 , y2 , and y3 by  y2   0
 4 1 2   y3   3
 y3  0 0 1  x3 



  



y
U
x
L
y
b
forward substitution to obtain y1  1 , y2  5 , y3  3 .
 1 2 1  x1   1
1 2   x2    5 by back substitution to find x1  2 , x2  1 , x3  3 .
Step 3. Solve  0
 0 0 1  x3   3


 
U
3.
A
U
x
y
 2 8
 1 1


 0 
   (we follow the procedure of Example 3)


 1 4   multiplier  12
 1 1


2 0 
  


 1 4
 0 3  multiplier  1


 2 0
 1  


1 4 
0 1   multiplier  1


3
 2 0
L 

 1 3
1
2
Chapter 9: Numerical Methods
 2 0   1 4   x1   2 
Step 1. Rewrite the system as 
 
1 3 0 1  x2   2 


   
L
b
x
U
 y   1 4   x1 
 2 0   y1   2 
Step 2. Define y1 and y2 by  1   
and solve 
by forward substitution to
 



y2   0 1  x2 
1 3  y2   2 




 
 
y
x
U
L
y
b
obtain y1  1 , y2  1 .
 1 4   x1   1
Step 3. Solve 
by back substitution to find x1  3 , x2  1 .
 
0 1  x2   1

 
U
4.
A
U
x
y
 5 10 
 6
5

 0 
   (we follow the procedure of Example 3)


 1 2   multiplier   15
 6 5


 5 0 
  


 1 2
 0 7  multiplier  6


 5 0 
 6 


1 2 
0 1   multiplier   1


7
 5 0 
L 

 6 7 
 5 0   1 2   x1   10 
Step 1. Rewrite the system as 
 
6 7  0 1  x2   19 

   

L
U
x
b
 y   1 2   x1 
 5 0   y1   10 
and solve 
by forward
Step 2. Define y1 and y2 by  1   


 

6 7   y2   19 

 y2   0 1 
 x2 



  
y
y
b
L


 x
U
substitution to obtain y1  2 , y2  1 .
 1 2   x1   2 
by back substitution to find x1  4 , x2  1 .
Step 3. Solve 
 
0 1  x2   1

 
U
5.
A
 2 2 2 
 0 2 2 


 1 5 2 
x
y
 0 0 
  0 


   
9.1 LU-Decompositions
U
 1 1 1  multiplier  12
 0 2 2 


 1 5 2 
2 0 0 
   0


    
 1 1 1
 0 2 2   multiplier  0


 0 4
1  multiplier  1
 2 0 0
 0  0


 1   
 1 1 1
0
1 1  multiplier   12

 0 4
1
0 0
 2
 0 2 0 


 1



 1 1 1
0
1 1

 0 0 5  multiplier  4
0 0
 2
 0 2 0 


 1 4  
 1 1 1 
0
1 1 

 0 0
1   multiplier  15
0 0
 2
 0 2 0 
L 

 1 4 5 
0 0   1 1 1  x1   4 
 2

1 1  x2    2 
Step 1. Rewrite the system as  0 2 0   0
 1 4 5  0 0
1  x3   6 



 

 
L
x
U
b
0 0   y1   4 
 y1   1 1 1  x1 
 2







Step 2. Define y1 , y2 , and y3 by  y2    0 1 1  x2  and solve  0 2 0   y2    2  by
 y3   0 0
 1 4 5  y3   6 
1  x3 



 


 
y
U
x
L
y
forward substitution to obtain y1  2 , y2  1 , y3  0 .
 1 1 1  x1   2 
Step 3. Solve  0 1 1  x2    1 by back substitution to find x1  1 , x2  1 , x3  0 .
 0 0 1  x3   0 


 
U
6.
A
x
y
 3 12 6 
 1 2 2 


 0
1 1
 0 0 
  0 


   
 1 4 2   multiplier   13
 1 2 2 


 0
1 1
 3 0 0 
   0


    
b
3
4
Chapter 9: Numerical Methods
U
 1 4 2 
 0 2 0   multiplier  1


 0
1 1  multiplier  0
 3 0 0 
 1  0


 0   
 1 4 2 
0
1 0   multiplier  12

 0
1 1
 3 0 0 
 1 2 0


 0   
 1 4 2 
0
1 0 

 0
0 1  multiplier  1
 3 0 0 
 1 2 0


 0 1  
 1 4 2 
0
1 0 

 0
0 1   multiplier  1
 3 0 0 


L   1 2 0
 0 1 1
 3 0 0   1 4 2   x1   33
1 0   x2    7 
Step 1. Rewrite the system as  1 2 0  0
 0 1 1 0
0 1  x3   1

  
L
x
U
b
 y1   1 4 2   x1 
 3 0 0   y1   33






1 0   x2  and solve  1 2 0   y2    7  by forward
Step 2. Define y1 , y2 , and y3 by  y2   0
 y3  0 0 1  x3 
 0 1 1  y3   1

 
  
y
U
x
y
L
b
substitution to obtain y1  11 , y2  2 , y3  1 .
 1 4 2   x1   11
1 0   x2    2  by back substitution to find x1  1 , x2  2 , x3  1 .
Step 3. Solve  0
 0 0 1  x3   1
  
U
x
y
(a)
 12 81  487 
 1 0 0



1
5 
L   2 1 0  ; U   0 14
24 
1
 0 0
 1 1 1
6
8.
(a)
 12

L1   32
 145
9.
(a)
Reduce A to upper triangular form.
7.
1
0 0
 1 3 8 

1
1 0  ; U   0
1 3
3
1
 0 0
1
7
7
 481
(b)
 485

A  U L    247
 61
(b)
  87

A1  U 1 L1   37
 145
3
7
2
7
3
7
1
1 1
11
24
1
6

 487 
5 
24 
1
6


 

8
7
3
7
1
7
9.1 LU-Decompositions
5
1 1  1 12  12   1 12  12   1 12  12 
 2
 
 

 2 1 2    2 1
2   0 0
1  0 0
1  U

 
 2
1 0   2
1
0  2 1
0  0 0
1
 2 0 0
The multipliers used were 12 , 2, and 2, which leads to L   2 1 0  where the 1’s on the
 2 0 1
diagonal reflect that no multiplication was required on the 2nd and 3rd diagonal entries.
(b)
To change the 2 on the diagonal of L to a 1, the first column of L is divided by 2 and the diagonal
matrix has a 2 as the 1, 1 entry.
 1 0 0   2 0 0   1 12  12 


A  L1 DU1   1 1 0   0 1 0   0 0
1 is the LDU-decomposition of A.
 1 0 1  0 0 1   0 0
1
(c)
10.
(a)
(b)
 1 0 0  2 1 1
Let U 2  DU , and L2  L1 , then A  L2U 2   1 1 0  0 0
1 .
 1 0 1 0 0
1
ad  0 1 
 a 0  1 d  0 1 
a

Setting 
yields 

 . Equating entries in the first row leads




 b bd  c  1 0 
 b c  0 1  1 0 
to a contradiction, since we cannot simultaneously have a  0 and ad  1 . We conclude that the
matrix has no LU -decomposition. (This matrix cannot be reduced to a row echelon form without
interchanging rows.)
0 1  0 1  1 0  1 0 
By inspection, 
  1 0  0 1  0 1  .
 1 0  

 

 


P
11.
L
U
0 1 0 
1 


1
P   1 0 0  and P b   2  , so the system P 1 Ax  P 1b is
0 0 1 
 5 
1
 1 0 0   1 2 2   x1  1 
0
1 0  0 1 4   x2    2  .

 3 5 1 0 0 17   x3   5 
 1 0 0   y1  1 
0
1 0   y2   2  is

 3 5 1  y3  5 
y1
y2
1
2
3 y1  5 y2  y3  5
6
Chapter 9: Numerical Methods
which has the solution y1  1, y2  2, y3  12.
 1 2 2   x1   1
 0 1 4   x    2  is

 2  
 0 0 17   x3  12 
x1  2 x2  2 x3  1
x 2  4 x3  2
17 x3  12
21
which gives the solution of the original system: x1  17
, x2   14
, x3  12
.
17
17
12.
1 0 0 
1 0 0 
3




1
1
The inverse of P  0 0 1  is P  0 0 1  thus P b  6  .
0 1 0 
0 
0 1 0 
 1 0 0   4 1 2   x1   3 
1 0   0 1 4   x2   6  .
We rewrite Ax  b as P  PLU  x  P b , i.e.,  2
 0 2 1  0 0 9   x3   0 
   
1
1
L
U
x
P 1 b
 1 0 0   y1   3 
1 0   y2   6  by forward substitution yields y1  3 , y2  0 , y3  0 .
Solving  2
 0 2 1  y3  0 
  
L
y
P 1 b
1 2   x1   3
4

Solving  0 1 4   x2   0  by back substitution yields x1  34 , x2  0 , x3  0 .
 0 0 9   x3  0 
  
U
13.
A
U
x
y
2 2
 4 1


 0 
  


 1 1  multiplier  12
 4 1


2 0 
  


 1 1
 0 3  multiplier  4


2 0
4 


1 1 
0 1   multiplier   1


3
2 0
 4 3


9.1 LU-Decompositions
A general 2  2 lower triangular matrix with nonzero main diagonal entries can be factored as
 a11
a
 21
0   1
0   a11


a22   a21 / a11 1   0
0 
 2 0  1 0  2 0 
therefore 



 . We conclude that an
a22 
 4 3 2 1  0 3
 2 2  1 0  2 0   1 1
LDU -decomposition of A is A  



  LDU .
 4 1 2 1  0 3 0 1
14.
Reduce A to upper triangular form:
 3 12 6   1 4 2   1 4 2   1 4 2   1 4 2 
0
2 0   0
2 0   0
2 0    0
1 0   0
1 0   U

 6 28 13 6 28 13 0 4 1  0 4 1 0
0 1
3 0 0
The multipliers used were 13 , 6, 12 , and 4, which leads to L1   0
2 0  .
 6 4 1
 1 0 0  3 0 0 
 1 0 0   3 0 0   1 4 2 




Since L1   0
1 0   0 2 0   0
1 0  is the
1 0  0 2 0  , we conclude that A  0
 2 2 1 0 0 1 
2 2 1  0 0 1   0
0 1
LDU-decomposition of A.
15.
If rows 2 and 3 of A are interchanged, then the resulting matrix has an LU-decomposition.
1 0 0 
 3 1 0 


For P   0 0 1  , PA   0 2 1 . Reduce PA to upper triangular form:
 0 1 0 
 3 1 1
 3 1 0   1  13 0   1  13 0   1  13 0 
 
 

 0 2 1  0
2 1  0
2 1  0
1 12   U

 
 3 1 1  3 1 1 0
0 1 0
0 1
3 0 0 
The multipliers used were , 3, and , so L   0 2 0  . Since P  P 1 , we have
 3 0 1 
1
3
1
2
 1 0 0   3 0 0   1  13 0 


A  PLU  0 0 1   0 2 0   0
1 12  .
0 1 0   3 0 1   0
0 1
 2 
 3 0 0   1  13 0   x1   2 


Since Pb   4  , the system to solve is  0 2 0   0
1 12   x2    4  .
 3 0 1   0
 1
0 1  x3   1
 3 0 0   y1   2 
 0 2 0   y    4  is

 2  
 3 0 1   y3   1
7
8
Chapter 9: Numerical Methods
3 y1
2 y2
3 y1
 2
4
 y3  1
which has the solution y1   23 , y2  2, y3  3.
 1  13 0   x1    23 


 
1 12   x2    2  is
0
 0
0 1  x3   3
1
2
x1  x2  
3
3
1
x 2  x3  2
2
x3  3
which gives the solution to the original system: x1   12 , x2  12 , x3  3.
16.
0 1 0 
As discussed in the last subsection of Section 9.1, we introduce a permutation matrix Q  1 0 0  .
0 0 1 
Multiplying QA results in interchanging the first two rows of A , so that an LU -decomposition can be
found.
QA 
U
 1 1 4   multiplier  1
 0 3 2 


 2 2
5
1 0 0 
  0 


    
 1 1 4
 0 3 2   multiplier  0


 0 0 3  multiplier  2
 1 0 0
0  0 


 2   
4
1 1
 0 1  2   multiplier  1
3
3

 0 0 3
 1 0 0
0 3 0 


 2   
4
1 1
0 1  2 
3

 0 0 3  multiplier  0
 1 0 0
0 3 0 


 2 0  
4 
1 1
0 1  2 
3 

 0 0
1   multiplier   13
 1 0 0


L  0 3 0 
 2 0 3
9.1 LU-Decompositions
0 1 0 
Since P  Q   1 0 0  , we obtain a PLU -decomposition of A :
0 0 1 
1
4 
 0 3 2   0 1 0   1 0 0   1 1







A   1 1 4    1 0 0   0 3 0   0 1  23   PLU .
2 2 5  0 0 1   2 0 3  0 0
1 
4
1 1
 1 0 0 
 5
  x1   5
2




1
1
1
Using P b   7  , we rewrite Ax  b as P Ax  P b , i.e., 0 3 0  0 1    x2    7  .

3
 2 
2 0 3 
 x3   2 
1 

 0 0


 x
L
P 1b
U
 1 0 0   y1   5
Solving 0 3 0   y2    7  by forward substitution yields y1  5 , y2  73 , y3  4 .
2 0 3  y3   2 
  
L
y
P 1b
4   x1   5
1 1

Solving  0 1  23   x2    73  by back substitution yields x1  16 , x2  5 , x3  4 .
 0 0
1  x3   4 


 
U
x
y
17.
Approximately 23 n 3 additions and multiplications are required – see Section 9.3.
18.
(a)
If A has such an LU-decomposition, it can be written as
y 
 a b  1 0   x y   x
c d    w 1  0 z    wx wy  z  which leads to the equations

 

 

xa
yb
wx  c
wy  z  d
Since a  0, the system has the unique solution x = a, y = b, w  ac , and z  d  bca  ad a bc .
Because the solution is unique, the LU-decomposition is also unique.
(b)
 a b  1 0   a
From part (a) the LU-decomposition is 

  c
c d   a 1   0
b 
.

ad  bc
a
True-False Exercises
(a)
False. If the matrix cannot be reduced to row echelon form without interchanging rows, then it does not
have an LU-decomposition.
9
10
Chapter 9: Numerical Methods
(b)
False. If the row equivalence of A and U requires interchanging rows of A, then A does not have an
LU-decomposition.
(c)
True. This follows from part (b) of Theorem 1.7.1.
(d)
True. (Refer to the subsection "LDU-Decompositions" for the relevant result.)
(e)
True. The procedure for obtaining a PLU-decomposition of a matrix A has been described in the
subsection "PLU-Decompositions".
9.2 The Power Method
1.
(a)
3  8 is the dominant eigenvalue since 3  8 is greater than the absolute values of all remaining
eigenvalues
2.
(b)
1  4  5 ; no dominant eigenvalue
(a)
3  3 is the dominant eigenvalue since 3  3 is greater than the absolute values of all remaining
eigenvalues
(b)
3.
1  4  3 ; no dominant eigenvalue
 5 1 1   5
Ax 0  
    
 1 1 0   1
 5  0.98058
Ax
x1  || Ax00 ||  126    

 1  0.19612 
 5 1  0.98058   5.09902 
Ax1  



 1 1  0.19612   0.78446 
 0.98837
Ax
x 2  || Ax11 ||  

 0.15206 
 5 1  0.98837   5.09391
Ax 2  



 1 1  0.15206   0.83631
 0.98679 
Ax
x 3  || Ax22 ||  

 0.16201
 5 1  0.98679  5.09596 
Ax 3  



 1 1  0.16201  0.82478 
 0.98715 
Ax
x 4  || Ax33 ||  

 0.15977 
 1  Ax1  x1   Ax1  x1  5.15385
T
  2   Ax 2  x 2   Ax 2  x 2  5.16185
T
  3  Ax 3  x 3   Ax 3  x 3  5.16226
T
  4  Ax 4  x 4   Ax 4  x 4  5.16228
T
det   I  A  
 5
1
2  10  5.16228 .



1
  2  4  6  λ  2  10 λ  2  10 ; the dominant eigenvalue is
 1
9.2 The Power Method
11
 1 3  10 
The reduced row echelon form of 2  10 I  A is 
 so that the eigenspace corresponding to
0 
0






  2  10 contains vectors  x1 , x2  where x1   3  10 t , x2  t ,. A vector 3  10,1 forms a
basis for this eigenspace. We see that x 4 approximates a unit eigenvector
 3 10, 1   0.98709, 0.16018  and    approximates the dominant eigenvalue
4
1
20  6 10
2  10  5.16228 .
4.
0  1   7 
 7 2

6 2   0    2 
Ax 0    2
 0 2
5  0   0 
 7   0.96152 
 2    0.27472 
  

 0   0.00000 
x1 
Ax 0
|| Ax 0 ||

1
53
0   0.96152   7.28011
 7 2

Ax1   2
6 2   0.27472    3.57137 
 0 2
5  0.00000   0.54944 
x2 
Ax1
|| Ax1 ||

1
8.12752
 7.28011  0.89574 
 3.57137    0.43942 

 

 0.54944   0.06760 
0   0.89574   7.14898 
 7 2

Ax 2   2
6 2   0.43942    4.56318 
 0 2
5  0.06760   1.21685
x3 
Ax 2
|| Ax 2 ||

1
8.56804
 7.14898   0.83438 
 4.56318    0.53258 

 

 1.21685  0.14202 
0   0.83438   6.90581
 7 2

Ax 3   2
6 2   0.53258    5.14829 
 0 2
5  0.14202   1.77527 
x4 
Ax3
|| Ax3 ||

1
8.7947

1

2

 3

 4
 6.90581  0.78522 
 5.14829    0.58539 

 

 1.77527   0.20186 
 0.96152 
 Ax1  x1   Ax1  x1   7.28011 3.57137 0.54944   0.27472   7.98113
 0.00000 
T
 0.89574 
 Ax 2  x 2   Ax 2  x 2   7.14898 4.56318 1.21685  0.43942   8.49100
 0.06760 
T
 0.83438 
 Ax 3  x 3   Ax 3  x 3  6.90581 5.14829 1.77527  0.53258   8.75607
 0.14202 
T
 0.78522 
 Ax 4  x 4   Ax 4  x 4  6.66734 5.48648 2.18006   0.58539   8.88712
 0.20186 
T
12
Chapter 9: Numerical Methods
det   I  A  
 7
2
0
2
 6
2
0
2
 5
    3    6    9  ; the dominant eigenvalue is 9 .
 1 0 2 
The reduced row echelon form of 9I  A is  0 1 2  so that the eigenspace corresponding to
 0 0
0 
  9 contains vectors  x1 , x2 , x3  where x1  2t , x2  2t , x3  t . A vector  2, 2,1 forms a basis for
this eigenspace. We see that x 4 approximates the unit eigenvector
 23 ,  23 , 13    0.66667, 0.66667,0.33333 and    approximates the dominant eigenvalue 9 .
4
5.
 1 3 1  2 
Ax 0  
    
 3 5 1  2 
 1
Ax
x1  max A0x    
0
 1
 1 3  1  4 
Ax1  
    
 3 5  1  8
 0.5
Ax
x 2  max A1x   

1
 1 
 1 3  0.5  3.5
Ax 2  



 3 5  1   6.5
 0.53846 
Ax
x3  max A2x   

2
1


 1 3  0.53846   3.53846 
Ax 3  

   6.61538 
1
 3 5 
 

 0.53488 
Ax
x 4  max A3x   

3
1


det   I  A  
 1
3
 1 
Ax 1  x 1
6
x1  x1
 2 
Ax 2  x 2
 6.6
x2  x2
  3 
Ax 3  x 3
 6.60550
x3  x3
  4 
Ax 4  x 4
 6.60555
x4  x4
3
  2  6  4 , so the eigenvalues of A are   3  13 . The dominant
 5
 2  13   0.53518 
eigenvalue is 3  13  6.60555 with corresponding scaled eigenvector  3   
.
1

 1  
6.
 3 2 2  1  7 
Ax 0  2 2 0  1   4 
2 0 4  1  6 
x1 
Ax 0
max  Ax 0 
 7   1.00000 
  4    0.57143
 6   0.85714 
1
7
9.2 The Power Method
 3 2 2   1.00000  5.85714 
Ax1  2 2 0   0.57143  3.14286 
2 0 4   0.85714   5.42857 

1
5.85714
 5.85714  1.00000 
 3.14286   0.53659 

 

 5.42857   0.92683
x3 
Ax 2
max  Ax 2 

1
5.92683
 5.92683  1.00000 
 3.07317    0.51852 

 

 5.70732   0.96296 
x4 
Ax 3
max  Ax3 

1
5.96296
5.96296   1.00000 
3.03704    0.50932 

 

 5.85185  0.98137 
x2 
Ax1
max  Ax1 
 3 2 2  1.00000   5.92683
Ax 2  2 2 0  0.53659    3.07317 
2 0 4   0.92683 5.70732 
 3 2 2   1.00000   5.96296 
Ax 3  2 2 0   0.51852    3.03704 
2 0 4   0.96296   5.85185

1
Ax  x  Ax1  x1 12.30612
 1 1

 5.97030
x1  x1
x1T x1
2.06122
T
 2 
 3
det   I  A   2
2
T
 3
 Ax3  x3  13.17284  5.99813
Ax  x
 3 3
x3  x3
xT3 x 3
2.19616
 4
Ax 4  x 4  Ax 4  x 4 13.33386



 5.99953
x4  x4
xT4 x 4
2.22248


Ax 2  x 2  Ax 2  x 2 12.86556


 5.99252
x2  x2
2.14694
xT2 x 2
T
T
2
2
 2
0
0
4
     3    6  ; the dominant eigenvalue is 6.
 1 0 1
The reduced row echelon form of 6I  A is  0 1  12  so that the eigenspace corresponding to
 0 0
0 
  6 contains vectors  x1 , x2 , x3  where x1  t , x2  12 t , x3  t . A vector 1, 12 ,1 forms a basis for this
eigenspace. We see that x 4 approximates the eigenvector 1, 12 ,1 and   4  approximates the dominant
eigenvalue 6.
7.
(a)
 2 1 1   2 
Ax 0  
    
 1 2  0   1
 1 
Ax
x1  max A0x   

0
 0.5
 2 1  1  2.5
Ax1  

 
 1 2   0.5 2 
 1 
Ax
x 2  max A1x   

1
 0.8 
13
14
Chapter 9: Numerical Methods
 2 1  1   2.8
Ax 2  



 1 2   0.8  2.6 
(b)
 1  Axx xx  2.8 ;
(c)
det   I  A  
1
1
1
1
 2
1
 1

Ax
x3  max A2x   

2
 0.929 
  2   Axx xx  2.976 ;
2
2
2
2
  3  Axx xx  2.997
3
3
3
3
1
    3   1 ; the dominant eigenvalue is 3 .
 2
1 1 
The reduced row echelon form of 3I  A is 
 so that the eigenspace corresponding to
0 0 
  3 contains vectors  x1 , x2  where x1  t , x2  t . A vector  1,1 forms a basis for this
eigenspace. We see that x 3 approximates the eigenvector 1, 1 and   3 approximates the dominant
eigenvalue 3.
8.
3
(d)
The percentage error is  
(a)
2 1 0  1  3
Ax 0   1 2 0  1   3
0 0 10  1 10 
(b)
 3 2.997
 0.001  0.1% .
3

1
10
 3 0.3
 3  0.3
   
10  1.0 
x2 
Ax1
max  Ax1 

1
10
 0.9  0.09 
 0.9   0.09 

 

10.0  1.00 
x3 
Ax 2
max  Ax 2 

1
10
 0.27   0.027 
 0.27    0.027 

 

10.00  1.000 
x1 
Ax 0
max  Ax 0 
 2 1 0   0.3  0.9 
Ax1   1 2 0   0.3   0.9 
 0 0 10  1.0  10.0 
2 1 0   0.09   0.27 
Ax 2   1 2 0   0.09    0.27 
0 0 10  1.00  10.00 
 1 
Ax1  x1  Ax1  x1 10.54


 8.932
1.18
x1  x1
x1T x1
T

2
Ax 2  x 2  Ax 2  x 2 10.049



 9.889
1.016
x2  x2
xT2 x 2

 3
 Ax3  x3  10.004  9.990
Ax  x
 3 3
1.001
x3  x3
xT3 x 3
T
T
9.2 The Power Method
 2
(c)
det   I  A   1
1
0
 2
0
0
  10
0
15
    1   3    10  ; the dominant eigenvalue is 10 .
1 0 0 
The reduced row echelon form of 10 I  A is  0 1 0  so that the eigenspace corresponding to
 0 0 0 
  10 contains vectors  x1 , x2 , x3  where x1  0 , x2  0 , x3  t . A vector  0, 0,1 forms a basis for
this eigenspace. We see that x 3 approximates the eigenvector  0,0,1 and   3 approximates the
dominant eigenvalue 10 .
(d)
The percentage error in the approximation  3  9.99 of the dominant eigenvalue   10 is
   3

 10 109.99  0.001  0.1%
9.
0.99180 
A5 x
Ax 5  x 5
 5
By Formula (10), x 5  max A50x  
 . Thus   x5 x5  2.99993.
 0  1

10.
 1 
A5 x
Ax  x
5
. Thus     x55x55  2.99993.
By Formula (10), x 5  max A50x  
 0  0.99180 
11.
By inspection, A is symmetric and has a dominant eigenvalue 1 .
Assuming a  0 , the power sequence is
 1 0   a   a 
Ax 0  
    
 0 0 b   0
 a   a / a 
Ax
x1  || Ax00 ||  1a    

 0  0 
 1 0   a / a   a / a 
Ax1  



 0 0  0   0 
a / a  a / a 
Ax
x 2  || Ax11 ||  11 


 0   0 
 1 0   a / a   a / a 
Ax 2  



 0 0  0   0 
 a / a   a / a 
Ax
x 3  || Ax22 ||  11 


 0   0 
 1 0   a / a   a / a 
Ax 3  



 0 0  0   0 
a / a  a / a 
Ax
x 4  || Ax33 ||  11 


 0   0 


The quantity a / a is equal to 1 if a  0 and 1 if a  0 . Since the power sequence continues to oscillate
 1
 1
between   and   , it does not converge.
0 
 0
12.
(a)
0 
E.g., choose x 0   0  .
 1 
16
(b)
Chapter 9: Numerical Methods
x1 
Ax 0
|| Ax 0 ||
 0.28604 
  0.09535
 0.95346 
 1  Ax1  x1  10.86364
x2 
Ax1
|| Ax1 ||
 0.26286 
  0.04381
 0.96384 
  2   Ax 2  x 2  10.90211
  2    1
 2
 0.00353  0.353%
x3 
Ax 2
|| Ax 2 ||
 0.27723
  0.03214 
 0.96027 
  3  Ax 3  x3  10.90765
  3    2 
 3
 0.00051  0.051%
0 
0 
E.g., choose x 0    .
0 
 
1 
 0.12217 
 0.12217 
Ax 0

x1  || Ax0 ||  
 0.12217 


 0.97736 
 1  Ax1  x1  8.46269
 0.14413
 0.12971
Ax

x 2  || Ax11 ||  
 0.17295


 0.96565
  2   Ax 2  x 2  8.50187
  2    1
 2
 0.05467  5.467%
 0.15083
 0.12371
Ax

x 3  || Ax22 ||  
 0.19658 


 0.96089 
 3  Ax3  x3  8.51040
  3    2 
 3
 0.00461  0.461%
 0.15372 
 0.11887 
Ax 3

x 4  || Ax3 ||  
 0.20847 


 0.95853
  4  Ax 4  x 4  8.51272
  4    3
 4
 0.00027  0.027%
9.2 The Power Method
13.
(a)
1 
Starting with x 0  0  , it takes 8 iterations.
0 
 0.229 
1
x1   0.668  ,     7.632
 0.668 
0.507 
2
x 2  0.320  ,     9.968
0.800 
 0.380 
3
x 3   0.197  ,     10.622
 0.904 
 0.344 
4
x 4   0.096  ,     10.827
 0.934 
 0.317 
5
x 5   0.044  ,     10.886
 0.948 
 0.302 
6
x 6   0.016  ,     10.903
 0.953 
 0.294 
7
x 7   0.002  ,     10.908
 0.956 
 0.290 
8
x 8   0.006  ,     10.909
 0.957 
(b)
1 
0 
Starting with x 0    , it takes 8 iterations.
0 
 
0 
17
18
Chapter 9: Numerical Methods
 0.577 
0

 ,  1  6.333
x1  
 0.577 


 0.577 
0.249 
0

 ,   2   8.062
x2  
0.498 


0.830 
 0.193 
 0.041 
 ,  3  8.382
x3  
 0.376 


 0.905 
0.175
0.073
 ,   4   8.476
x4  
0.305


0.933
0.167 
0.091 
 ,   5  8.503
x5  
0.266 


0.945 
 0.162 
 0.101 
 ,   6   8.511
x6  
 0.245 


 0.951 
 0.159 
 0.107 
 ,   7   8.513
x7  
 0.234 


 0.953 
 0.158 
 0.110 
 ,  8  8.513
x8  
 0.228 


 0.954 
14.
(a)
0 
E.g., choose x 0   0  .
 1 
9.2 The Power Method
(b)
x1 
Ax 0
max  Ax 0 
 0.3
  0.1
 1.0 
 1  Axx xx  10.86364
x2 
Ax1
max  Ax1 
 0.27273
  0.04545
 1.00000 
  2   Axx xx  10.90211
  2    1
  2
 0.00353  0.353%
x3 
Ax 2
max  Ax 2 
 0.28870 
  0.03347 
 1.00000 
 3  Axx xx  10.90765
  3    2 
 3
 0.00051  0.051%
1
1
1
1
2
2
2
2
3
3
3
3
0 
0 
E.g., choose x 0    .
0 
 
1 
0.125
0.125
Ax 0

x1  max  Ax   
0
0.125


1.000 
 1  Axx xx  8.46269
 0.14925
 0.13433
Ax

x 2  max  A1 x   
1
 0.17910 


 1.00000 
  2   Axx xx  8.50187
  2    1
 2
 0.05467  5.467%
0.15697 
 0.12875
Ax

x 3  max A2x   
2
0.20459 


1.00000 
 3  Axx xx  8.51040
  3    2 
 3
 0.00461  0.461%
 0.16037 
 0.12401
Ax3

x 4  max Ax   
3
 0.21749 


1.00000 
 4 
  4    3
 4
 0.00027  0.027%
1
1
1
1
2
2
2
2
3
3
3
3
Ax 4  x 4
 8.51272
x4  x4
19
20
Chapter 9: Numerical Methods
9.3 Comparison of Procedures for Solving Linear Systems
1.
(a)
 
For n  1000  103 , the flops for both phases is 23 (103 )3  32 (103 )2  67 103  668,165,500, which is
0.6681655 gigaflops, so it will take 0.6681655  10 1  0.067 second.
(b)
 
n  10,000  10 4 : 23 (10 4 )3  23 (10 4 )2  67 10 4  666,816,655,000 flops or 666.816655 gigaflops. The
time is about 66.68 seconds.
(c)
 
n  100,000  10 5 ; 23 (10 5 )3  23 (105 )2  67 10 5  666,682  109 flops or 666,682 gigaflops. The time
is about 66,668 seconds which is about 18.5 hours.
2.
(a)
The number of gigaflops required is
 n  n  n 10  666.817 . At 100 gigaflops per second,
2
3
3
3
2
2
7
6
9
the time required to solve the system is approximately 6.66817 seconds.
(b)
The number of gigaflops required is
 n  n  n 10  666,681.667 . At 100 gigaflops per
2
3
3
3
2
2
7
6
9
second, the time required to solve the system is approximately 6666.817 seconds (i.e., 1 hour, 51
minutes, and 6.817 seconds).
(c)
The number of gigaflops required is
 n  n  n 10  6.666682  10 . At 100 gigaflops per
2
3
3
3
2
2
7
6
9
8
second, the time required to solve the system is approximately 6,666,682 seconds (i.e., 77 days, 3
hours, 51 minutes, and 22 seconds).
3.
n  10,000  10 4
(a)
2
3


n3  23 1012  666.67  10 9 ;
666.67 gigaflops are required, which will take 666.67
 9.52 seconds.
70
4.
(b)
n2  108  0.1  109 ; 0.1 gigaflop is required, which will take about 0.0014 second.
(c)
This is the same as part (a); about 9.52 seconds.
(d)
2n3  2  1012  2000  109 ;
2000 gigaflops are required, which will take about 28.57 seconds.
(a)
The number of petaflops required is approximately 23 n 3 10 15  0.66667 , therefore the time required
for the forward phase of Gauss-Jordan elimination is approximately 0.041667 seconds.
(b)
The number of petaflops required is approximately n2 10 15  0.00001 , therefore the time required for
the backward phase of Gauss-Jordan elimination is approximately 0.000000625 seconds.
(c)
The number of petaflops required is approximately 23 n 3 10 15  0.66667 , therefore the time required
for the LU -decomposition is approximately 0.041667 seconds.
9.3 Comparison of Procedures for Solving Linear Systems
(d)
21
The number of petaflops required is approximately 2n3 10 15  2 , therefore the time required for the
computation of A1 by reducing [ A | I ] to [ I | A1 ] is approximately 0.125 seconds.
5.
(a)
n  100,000  10 5 ; 23 n3  23  1015  0.667  1015  6.67  10 5  10 9 ;
Thus, the forward phase would require about 6.67  105 seconds.
n2  1010  10  10 9 ; The backward phase would require about 10 seconds.
(b)
n  10,000  10 4 ; 23 n 3  23  1012  0.667  1012  6.67  10 2  10 9 ;
About 667 gigaflops are required, so the computer would have to execute 2(667) = 1334 gigaflops per
second.
6.
The number of teraflops required is approximately 2n3 10 12  2000 . A computer must be able to execute
more than 4000 teraflops per second to be able to find A1 in less than 0.5 seconds.
7.
Multiplying each of the n 2 entries of A by c requires n 2 flops.
8.
n 2 flops are required to compute A  B.
9.
Let C  cij   AB. Computing cij requires first multiplying each of the n entries aik by the corresponding
entry bkj , which requires n flops. Then the n terms aik bkj must be summed, which requires n  1 flops.
Thus, each of the n 2 entries in AB requires 2n  1 flops, for a total of
n 2  2 n  1  2 n 3  n 2 flops. Note that adding two numbers requires 1 flop, adding three numbers requires 2
flops, and in general, n  1 flops are required to add n numbers.
10.
Each diagonal entry can be obtained using k  1 multiplications, so the computation of Ak would involve
n  k  1 flops overall. (Note that the number of flops can be reduced to n log 2 k .)
9.4 Singular Value Decomposition
1.
1 
1 2 0 


The characteristic polynomial of A A   2  1 2 0    2 4 0  is  2    5  ; thus the eigenvalues of
 0 
 0 0 0 
T
AT A are 1  5 and 2  0, and  1  5 and  2  0 are singular values of A.
2.
 9
0
3 0  3 0   9 0 
    9    16  ;
AT A  

; det  I  AT A 




  16
0
0 4  0 4  0 16 


the eigenvalues of AT A are 1  16 and 2  9 therefore the singular values of A are
 1  1  4 and  2  2  3 .
22
3.
Chapter 9: Numerical Methods
 1 2   1 2   5 0 

are 1  5 and 2  5 (i.e.,  = 5 is an eigenvalue of
The eigenvalues of AT A  

1 0 5 
 2 1 2
multiplicity 2); thus the singular value of A is  1  5.
4.
 2
AT A  
 0
1  2

2   1
0  3

2   2
 3  2
2
T
    1   4  ;
 ; det  I  A A 
 2  2
2 


the eigenvalues of AT A are 1  4 and 2  1 therefore the singular values of A are
 1  1  2 and  2  2  1 .
5.
 1 1 1 1 2 0 
1 

is  = 2 (multiplicity 2), and the vectors v1   
The only eigenvalue of AT A  




 1 1 1 1 0 2 
0 
0 
and v 2    form an orthonormal basis for the eigenspace (which is all of R 2 ).
1 
1
1 1 1   2 

The singular values of A are  1  2 and  2  2. We have u1  11 Av1  12 
    1  , and
1 1 0   2 
u 2   2 Av 2 
1
1
2
1
1 1 0    2 
1 1 1    1  .

    2 
This results in the following singular value decomposition of A:
 1
 2
A  U ΣV T  
 1
 2

6.

1 
2   2

1   0
2 
0  1 0 


2   0 1 
 9
0
 3 0   3 0   9 0 
    9    16  ;
AT A  

; det  I  AT A 




  16
0
 0 4   0 4  0 16 


the eigenvalues of AT A are 1  16 and 2  9 therefore the singular values of A are
 1  1  4 and  2  2  3 .
1 0 
The reduced row echelon form of 16 I  AT A is 
 so that the eigenspace corresponding to
0 0 
x 
 x2 
0 
 1
1  16 consists of vectors  1  where x1  0 , x2  t . A vector v1    forms an orthonormal basis for
this eigenspace.
0 1 
The reduced row echelon form of 9I  AT A is 
 so that the eigenspace corresponding to 2  9
0 0 
x 
 1
consists of vectors  1  where x1  t , x2  0 . A vector v 2    forms an orthonormal basis for this
0 
 x2 
eigenspace.
9.4 Singular Value Decomposition
23
0 1 
16 0 
The matrix V   v1 | v 2   
orthogonally diagonalizes AT A : V T AT A V  

.
1 0 
 0 9
From part (d) of Theorem 9.4.4,


 3 0   1  1
 3 0  0   0 
u1  11 Av1  14 
   and u 2  12 Av 2  13 
     .



 0 4  0   0 
 0 4   1  1
A singular value decomposition of A is
 3 0   0 1  4 0  0 1 
 0 4    1 0   0 3  1 0 

 
  


 
 
A
7.
U
Σ
VT
 4 0   4 6  16 24 
The eigenvalues of AT A  


 are 1  64 and 2  4, with corresponding unit
6 4   0 4  24 52 
  25 
 15 
eigenvectors v1   2  and v 2   1  respectively. The singular values of A are  1  8 and  2  2. We
 5 
 5 
1
2
2
1
4 6   5   5 
4 6   5   5 
1
1
have u1  1 Av1  
  2    1  , and u 2   2 Av 2  2 0 4   1    2  .

  5   5 
0 4   5   5 
This results in the following singular value decomposition:
1
1
8
 2

5
A  U ΣV T  
 1

 5
8.
1 
 1
 8 0  
5
 5


2  0 2   2


5
 5

2 

5
1 

5
  18 18
3 3 3 3 18 18 
AT A  

    36   ;
; det  I  AT A 




18   18
3 3 3 3 18 18 


the eigenvalues of AT A are 1  36 and 2  0 therefore the singular values of A are
 1  1  6 and  2  2  0 .
 1 1
The reduced row echelon form of 36 I  AT A is 
 so that the eigenspace corresponding to
0 0 
x 
 x2 
 12 
1  36 consists of vectors  1  where x1  t , x2  t . A vector v1   1  forms an orthonormal basis for
 2 
this eigenspace.
1 1 
The reduced row echelon form of 0 I  AT A is 
 so that the eigenspace corresponding to 2  0
0 0 
  12 
 x1 
consists of vectors   where x1  t , x2  t . A vector v 2   1  forms an orthonormal basis for this
 x2 
 2 
eigenspace.
24
Chapter 9: Numerical Methods
 12  12 
36 0 
T
T
T
V
A
A
V

A
A
The matrix V   v1 | v 2    1
orthogonally
diagonalizes
:

 0 0 .
1
 2



2
From part (d) of Theorem 9.4.4,


1
1
3 3   2   2 
u1  1 Av1  
 1    1  .
3 3  2   2 
1
1
6
To obtain u 2 , we extend the set u1  to an orthonormal basis for R 2 . To simplify the computations, we
consider
x 
1
2 u1    . A vector u 2 orthogonal to this vector must be a solution of 1 1  1    0  .
1
 x2 
  12 
An orthonormal basis for the solution space is formed by u 2   1  .
 2 
A singular value decomposition of A is
1 
1 
 1
 1





3 3
2
2  6 0   2
2


3 3
1  0 0   1
1 

  1

  



A
2 Σ 
2
2
 2
 

VT
U
9.
2
 2
 2 1 2  
 9 9 
The eigenvalues of A A  
are 1  18 and 2  0, with
1  
1


1 2 
9 
9
 2

 2 2 
T
  12 
 12 
corresponding unit eigenvectors v1   1  and v 2   1  respectively. The only nonzero singular value
 2 
 2 
2
 23 
 2
1



2
 
of A is  1  18  3 2, and we have u1  11 Av1  3 1 2  1
1  1    13  . We must choose the
 2 2   2    23 
vectors u 2 and u 3 so that u1 , u 2 , u 3  is an orthonormal basis R3 .
 62 
 12 


 
A possible choice is u 2   0  and u 3    2 3 2  . This results in the following singular value

1
2
 2
  6 
 23

decomposition: A  U ΣV T   13

  23


 3 2
2 2
0  3  0

1
 62   0
2

1
2
2
6
0
1
  2
0  1

0  2


.
1
2

1
2
Note: The singular value decomposition is not unique. It depends on the choice of the (extended)
orthonormal basis for R3 . This is just one possibility.
9.4 Singular Value Decomposition
10.
25
 2 2 
 8 4 8 
 2 1 2  


 4
2 4  ;
A A   1 1 

2
1 2 
 2 1 
 8 4
8 
T
  8 4
8
det   I  A A   4   2
4     18   2 ; the eigenvalues of AT A are 1  18 and
 8
8
4
T
2  3  0 therefore the singular values of A are  1  1  3 2 ,  2  2  0 , and  3  3  0 .
1 0 1 
The reduced row echelon form of 18 I  A A is  0 1 12  so that the eigenspace corresponding to   18
 0 0 0 
T
  23 
 x1 
 


contains vectors  x2  where x1  t , x2   12 t , x3  t . A vector v1    13  forms an orthonormal basis
 23 
 x3 
for this eigenspace.
 1 12 1


The reduced row echelon form of 0 I  AT A is  0 0 0  so that the eigenspace corresponding to   0
 0 0 0 
 x1 
 1
 1




1
contains vectors  x2  where x1   2 s  t , x2  s , x3  t . Vectors p1   2  and p2   0  form a basis for
 x3 
 1
 0 
this eigenspace. We apply the Gram-Schmidt process to find an orthogonal basis for this eigenspace:
 1
 1
 1   45 
 p2 ,q1 
 




q1  p1   2  and q 2  p2  ||q || 2 q1   2   51  0    25  , then proceed to normalize the two vectors to
1
 0 
 1   1
 0 
 4 
  15 
3 5 
 2
q2
q1
yield an orthonormal basis: v 2  ||q1 ||   5  and v3  ||q2 ||   3 2 5  .
 5 


0
 3 5 


  23

The matrix V   v1 v 2 v 3     13
 2
 3
18 0 0 
V A A V   0 0 0  .
 0 0 0 
From part (d) of Theorem 9.4.4,
T

T

 15
2
5
0


2
 orthogonally diagonalizes AT A :
3 5

5
3 5

4
3 5
26
Chapter 9: Numerical Methods
  23 
 12 
2
1
2





1
1
1
u1  1 Av1  3 2 
 

1 2   23    12 
 2
 3 
To obtain u 2 , we extend the set u1  to an orthonormal basis for R 2 . To simplify the computations, we
consider
x 
 1
2u1    . A vector u 2 orthogonal to this vector must be a solution of 1 1  1    0  .
 1
 x2 
 12 

u
An orthonormal basis for the solution space is formed by 2  1  .
 2 
A singular value decomposition of A is
 2
1
2




1 
 1
3
3
3



 2 1 2   2
2
2  3 2 0 0   1


0

 
 2 1 2 
1  0
 1
5
0 0  5


  


 
A

Σ
4
2
5 
2
2






U
3 5 3 5 3 5

VT
11.
 1 0
 1 1 1 
  3 0  are   3 and   2, with corresponding unit
The eigenvalues of A A  
1
1
1
2



 
 0 1 1  1 1  0 2 


T
1 
0 
eigenvectors v1    and v 2    respectively. The singular values of A are  1  3 and
0 
1 
 1
 0
 1 0
 1 0
3


 
1
0






 2  2. We have u1  11 Av1  13  1 1     13  and u 2  12  1 1     12  . We choose
0
1
 1 1     1 
 1 1    1 
 2 
 3
 2
 6
u 3    16  so that u1 , u 2 , u 3  is an orthonormal basis for R3 . This results in the following singular
 1
 6 
 1
 3
value decomposition: A  U ΣV T   13
 1
  3
0
1
2
1
2
 3

 16   0

1
0
6
 
2
6
0
  1 0
2
.
 0 1
0

9.4 Singular Value Decomposition
12.
27
6 4 
6 0 4  
  52 24  ; det  I  AT A    52 24    64   4
0
0
A A






 
24   16
 4 0 0   4 0  24 16 



T

the eigenvalues of AT A are 1  64 and 2  4 therefore the singular values of A are
 1  1  8 and  2  2  2 .
 1 2 
The reduced row echelon form of 64 I  AT A is 
 so that the eigenspace corresponding to
0 0 
 25 
 x1 
1  64 consists of vectors   where x1  2t , x2  t . A vector v1   1  forms an orthonormal basis for
 5 
 x2 
this eigenspace.
1 12 
The reduced row echelon form of 4 I  AT A is 
 so that the eigenspace corresponding to 2  4
0 0 
  15 
x 
consists of vectors  1  where x1   12 t , x2  t . A vector v 2   2  forms an orthonormal basis for this
 5 
 x2 
eigenspace.
 25
The matrix V   v1 | v 2    1
 5
 15 
64 0 
 orthogonally diagonalizes AT A : V T AT A V  
.
2

 0 4
5


From part (d) of Theorem 9.4.4,
 25 
 15 
6 4  2
6 4   1








5
5
u1  11 Av1  81  0 0   1    0  and u 2  12 Av 2  12  0 0   2    0  .
 4 0   5   1 
 4 0   5    2 
 5
 5
To obtain u 3 , we extend the set u1 , u 2  to an orthonormal basis for R 2 . To simplify the computations, we
consider
2 
5u1   0  and
 1 
 1
5u 2   0  . A vector u 3 orthogonal to both of these vectors must be a solution
 2 
 x1 
1   0 
2 0
of the homogeneous linear system 
  x2     .
 1 0 2   x  0 
 3
1 0 0 0 
The augmented matrix of this system has the reduced row echelon form 
 therefore an
0 0 1 0 
0 
orthonormal basis for the solution space is formed by u 3   1  .
 0 
A singular value decomposition of A is
28
Chapter 9: Numerical Methods
2
1
0  8 0 
6 4   5
5
 25 15 

0 0    0

0 1 0 2   1


 
 5 25 



1
2

 4 0 
 5 0 0 0  
5
 


 
VT
Σ
A
U
19.
(b)
In the solution of Exercise 5, we obtained a singular value decomposition
 12   2

1
2
  0
 12
A  U ΣV   1
 2
T
0  1 0


2  0 1
A polar decomposition of A is
A 
U ΣU UV 
T
 1

2
 
 1

 2
 2
 
 0
T

1 

2  2

1   0

2
 1
0  2

2   1

 2

 1

0  2

2    1

 2
1    1
  
2    2
1    1
  
2    2


1 
  1 0 
2

1  0 1 


2

1 

2
1 

2
True-False Exercises
(a)
False. If A is an m  n matrix, then AT is an n  m matrix, and AT A is an n  n matrix.
(b)
True.  AT A   AT  AT   AT A .
(c)
False. AT A may have eigenvalues that are 0.
(d)
False. A would have to be symmetric to be orthogonally diagonalizable.
(e)
True. This follows since AT A is a symmetric n  n matrix.
(f)
False. The eigenvalues of AT A are the squares of the singular values of A.
(g)
True. This follows from Theorem 9.4.3.
T
T
9.5 Data Compression Using Singular Value Decomposition
1.
From Exercise 9 in Section 9.4, A has the singular value decomposition
9.5 Data Compression Using Singular Value Decomposition
 23

A   13

  23


 3 2
2 2
0  3  0

2  0
1

6 
2

2
6
1
2
0
1
  2
0  1

0  2

1
2
1
2

.

 23 
 
Thus the reduced singular value decomposition of A is A  U1Σ1V1T   13  3 2    12


  23 
2.
1
 2 1 2   2 
2
 2 1 2     1  3 2    3



  2
3.
From Exercise 11 in Section 9.4, A has the singular value decomposition
 1
 3
T
A  U ΣV   13
 1
  3
0
1
2
1
2
 3

 16   0

1
0
6
 
2
6
 13
2
3

2
 8 0  5
0 

0 2    15
2 
 5

.


0 
  1 0
2
.
 0 1
0 

 1
 3
T
A

U
Σ
V

 13
Thus the reduced singular value decomposition of A is
1 1 1

  13
4.
2
6 4   5

 
0 0    0
 4 0   1
 5
5.
 23 
 
The reduced singular value expansion of A is 3 2  13    12
  23 
6.
 12 
 2 1 2 
T
3
2

u
v


 1    23
1 1 1
 2 1 2 
  2 


1
5
1
2
0
 3
1

2

 0
1

2


2
5

1
5
 13
7.
The reduced singular value decomposition of A is
8.
 25 
6 4 
0 0    u vT   u vT  8  0   2
  5
1 1 1
2 2 2


 1
 4 0 
 5
2
3
1
2
.


 1
0
 3
 
3  13  1 0  2  12   0 1.
 1
1
 2 
  3 
 15 


1 
1

2
0

   5
5
 2 
 5
2
5


0  1 0

.
2  0 1
29
30
9.
Chapter 9: Numerical Methods
A rank 100 approximation of A requires storage space for 100(200 + 500 + 1) = 70,100 numbers, while A
has 200(500) = 100,000 entries.
True-False Exercises
(a)
True. This follows from the definition of a reduced singular value decomposition.
(b)
True. This follows from the definition of a reduced singular value decomposition.
(c)
False. V1 has size n  k so that V1T has size k  n.
Chapter 9 Supplementary Exercises
1.
Reduce A to upper triangular form:
 6 2   3 1  3 1
 6 0   6 0   0 2  U

 
 

 2 0
 2 0   3 1
The multipliers used were 12 and 2, so L  
and A  


.
 2 1
 2 1  0 2 
2.
A
U
 6 2 
 6 0


 0 
  


 1  13   multiplier   61


0
 6
 6 0 
  


 1  13 


2   multiplier  6
 0
 6 0 
 6 


 1  13 


1   multiplier  12
0
 6 0 
 6 2


A general 2  2 lower triangular matrix with nonzero main diagonal entries can be factored as
 a11
a
 21
0   1
0   a11


a22   a21 / a11 1   0
0 
 6 0   1 0   6 0 
therefore 



 . We conclude that an LDU a22 
 6 2   1 1  0 2 
 6 2   1 0   6 0   1  13 
decomposition of A is A  
  LDU .



1
 6 0   1 1  0 2  0
3.
Reduce A to upper triangular form.
Supplementary Exercises
31
 2 4 6  1 2 3   1 2 3   1 2 3 
 1 4 7   1 4 7   0 2 4   0 2 4 

 
 
 

 1 3 7  1 3 7   1 3 7   0 1 4 
1 2 3  1 2 3  1 2 3 
 0 1 2    0 1 2   0 1 2   U
0 1 4   0 0 2  0 0 1 
The multipliers used were , 1, 1, , 1, and
1
2
4.
1
2
1
2
2 0 0 
2 0 0  1 2 3 


so L  1 2 0  and A   1 2 0  0 1 2  .
 1 1 2  0 0 1 
1 1 2 
2 0 0  1 2 3 
It was shown in the solution of Exercise 3 that A   1 2 0   0 1 2  .
 1 1 2   0 0 1 
A general 3  3 lower triangular matrix with nonzero main diagonal entries can be factored as
 a11
a
 21
 a31
0
a22
a32
0   1
0    a21 / a11
a33   a31 / a11
0
1
a32 / a22
0   a11
0   0
1   0
0
a22
0
0 
0 
a33 
2 0 0   1 0 0  2 0 0 


therefore  1 2 0    12 1 0   0 2 0  . We conclude that an LDU -decomposition of A is
 1 1 2   12 12 1  0 0 2 
2 4 6   1 0 0  2 0 0   1 2 3


A   1 4 7    12 1 0   0 2 0   0 1 2   LDU .
 1 3 7   12 12 1  0 0 2   0 0 1
5.
(a)
The characteristic equation of A is  2  4   3     3    1  0 so the dominant eigenvalue of A is
1  3, with corresponding positive unit eigenvector
 1 


2  0.7071

v
.
 1  0.7071


 2
(b)
2 1  1  2 
Ax 0  
    
1 2  0  1 
32
Chapter 9: Numerical Methods
x1 
Ax 0
1 2  0.8944 

 

|| Ax 0 ||
5 1  0.4472 
x2 
 0.7809 
Ax1

|| Ax1 ||  0.6247 
x3 
 0.7328 
Ax 2

|| Ax 2 ||  0.6805 
x4 
0.7158 
Ax 3

|| Ax 3 || 0.6983 
x5 
 0.7100 
Ax 4

|| Ax 4 ||  0.7042 
0.7100 
0.7071
x5  
as compared to v  

.
0.7042 
0.7071
(c)
2 
Ax 0   
 1
x1 
1 
Ax 0
 
max  Ax 0  0.5
x2 
1 
Ax1
 
max  Ax1  0.8 
x3 
1

Ax 2

max  Ax 2  0.9286 
x4 
1

Ax 3

max  Ax 3   0.9756 
x5 
1

Ax 4

max  Ax 4  0.9918 
1

1
x5  
as compared to the exact eigenvector v    .

0.9918 
1
7.
The Rayleigh quotients will converge to the dominant eigenvalue 4  8.1. However, since the ratio
4
1
8.
 8.1
 1.0125 is very close to 1, the rate of convergence is likely to be quite slow.
8
  2 2
1 1 1 1 2 2 
AT A  

   4  ;
; det  I  AT A 




2   2
1 1 1 1 2 2 


the eigenvalues of AT A are 1  4 and 2  0 therefore the singular values of A are
 1  1  2 and  2  2  0 .
 1 1
The reduced row echelon form of 4 I  AT A is 
 so that the eigenspace corresponding to 1  4
0 0 
Supplementary Exercises
33
 12 
x 
consists of vectors  1  where x1  t , x2  t . A vector v1   1  forms an orthonormal basis for this
 2 
 x2 
eigenspace.
1 1 
The reduced row echelon form of 0 I  AT A is 
 so that the eigenspace corresponding to 2  0
0 0 
  12 
 x1 
consists of vectors   where x1  t , x2  t . A vector v 2   1  forms an orthonormal basis for this
 2 
 x2 
eigenspace.
 12


V
v
|
v
The matrix
 1 2  1
 2
 12 
4 0
 orthogonally diagonalize
Download