
Solutions Manual For Applied Linear Algebra 1st Edition By Peter J. Olver and Cheri Shakiban

Solutions — Chapter 1
1.1.1.
(a) Reduce the system to x − y = 7, 3 y = −4; then use Back Substitution to solve for x = 17/3, y = −4/3.
(b) Reduce the system to 6 u + v = 5, −(5/2) v = 5/2; then use Back Substitution to solve for u = 1, v = −1.
(c) Reduce the system to p + q − r = 0, −3 q + 5 r = 3, − r = 6; then solve for p = 5, q = −11, r = −6.
(d) Reduce the system to 2 u − v + 2 w = 2, −(3/2) v + 4 w = 2, − w = 0; then solve for u = 1/3, v = −4/3, w = 0.
(e) Reduce the system to 5 x1 + 3 x2 − x3 = 9, (1/5) x2 − (2/5) x3 = −2/5, 2 x3 = −2; then solve for x1 = 4, x2 = −4, x3 = −1.
(f) Reduce the system to x + z − 2 w = −3, − y + 3 w = 1, −4 z − 16 w = −4, 6 w = 6; then solve for x = 2, y = 2, z = −3, w = 1.
(g) Reduce the system to 3 x1 + x2 = 1, (8/3) x2 + x3 = 2/3, (21/8) x3 + x4 = 3/4, (55/21) x4 = 5/7; then solve for x1 = 3/11, x2 = 2/11, x3 = 2/11, x4 = 3/11.
1.1.2. Plugging in the given values of x, y and z gives a+2 b− c = 3, a−2− c = 1, 1+2 b+c = 2.
Solving this system yields a = 4, b = 0, and c = 1.
♥ 1.1.3.
(a) With Forward Substitution, we just start with the top equation and work down. Thus
2 x = −6 so x = −3. Plugging this into the second equation gives 12 + 3y = 3, and so
y = −3. Plugging the values of x and y in the third equation yields −3 + 4(−3) − z = 7,
and so z = −22.
(b) We will get a diagonal system with the same solution.
(c) Start with the last equation and, assuming the coefficient of the last variable is nonzero, use the corresponding row operation to eliminate the last variable from all the preceding equations. Then, again assuming the coefficient of the next-to-last variable is nonzero, eliminate it from all but the last two equations, and so on; a short code sketch of this Forward Substitution procedure appears after this exercise.
(d) For the systems in Exercise 1.1.1, the method works in all cases except (c) and (f). Solving the reduced system by Forward Substitution reproduces the same solution (as it must):
(a) The system reduces to (3/2) x = 17/2, x + 2 y = 3.
(b) The reduced system is (15/2) u = 15/2, 3 u − 2 v = 5.
(c) The method doesn’t work since r doesn’t appear in the last equation.
(d) Reduce the system to (3/2) u = 1/2, (7/2) u − v = 5/2, 3 u − 2 w = 1.
(e) Reduce the system to (2/3) x1 = 8/3, 4 x1 + 3 x2 = 4, x1 + x2 + x3 = −1.
(f) Doesn’t work since, after the first reduction, z doesn’t occur in the next-to-last equation.
(g) Reduce the system to (55/21) x1 = 5/7, x1 + (21/8) x2 = 3/4, x2 + (8/3) x3 = 2/3, x3 + 3 x4 = 1.
1.2.1. (a) 3 × 4, (b) 7, (c) 6, (d) ( −2 0 1 2 ), (e) the column vector ( 0, 2, −6 )^T.
0
1
1
2 3
5 6C
1.2.2.
A, (b)
7
8
9
0 1
1
C
(e) B
@ 2 A, (f ) ( 1 ).
3
(a) B
@4
1
1
2
4
0
!
1
3
, (c) B
@4
5
7
2
5
8
3
6
9
1
4
7C
A, (d) ( 1
3
2
3
4 ),
1.2.3. x = − 31 , y = 43 , z = − 13 , w = 23 .
1.2.4.
1
1
6
3
!
!
!
−1
x
7
(a) A =
, x=
, b=
;
2
y
3
!
!
!
1
u
5
(b) A =
, x=
, b=
;
−2
v
5
0
1
0 1
0 1
1
1 −1
p
0
B C
B C
(c) A = B
3C
@ 2 −1
A, x = @ q A , b = @ 3 A ;
−1 −1
0
r
6 1
0
1
0 1
0
3
u
2
1 2
C
B
B C
3 3C
(d) A = B
A, x = @ v A, b = @ −2 A;
@ −1
71
w1
4 −3 1
0
0
0
0
5 3 −1
x1
9
C
B
C
B
C
(e) A = B
@ 3 2 −1 A, x = @ x2 A, b = @ 5 A;
1 1
2
x3 0 1 −1 0
0
1
1
1
0
1 −2
x
−3
B
C
B C
B
C
2 −1
2 −1 C
B yC
B 3C
B
C, x = B C , b = B
C;
(f ) A = B
@ 0 −6 −4
@ zA
@ 2A
2A
1
3
2 1 −1
w
1
0
0
1
0 1
3 1 0 0
x1
1
B
B
C
B C
1 3 1 0C
C
Bx C
B1C
B
C, x = B 2 C, b = B C.
(g) A = B
@0 1 3 1A
@ x3 A
@1A
0 0 1 3
x4
1
1.2.5.
(a) x − y = −1, 2 x + 3 y = −3. The solution is x = − 56 , y = − 15 .
(b) u + w = −1, u + v = −1, v + w = 2. The solution is u = −2, v = 1, w = 1.
(c) 3 x1 − x3 = 1, −2 x1 − x2 = 0, x1 + x2 − 3 x3 = 1.
The solution is x1 = 51 , x2 = − 25 , x3 = − 25 .
(d) x + y − z − w = 0, −x + z + 2 w = 4, x − y + z = 1, 2 y − z + w = 5.
The solution is x = 2, y = 1, z = 0, w = 3.
1.2.6. (a) I = [ 1 0 0 0 ; 0 1 0 0 ; 0 0 1 0 ; 0 0 0 1 ], O = [ 0 0 0 0 ; 0 0 0 0 ; 0 0 0 0 ; 0 0 0 0 ].
(b) I + O = I , I O = O I = O. No, it does not.
0
1
(f ) B
@3
7
11
−12
8
3
−1
6
4
!
0
, (d) undefined, (e) undefined,
2
1
0
1
9
9 −2
14
B
−12 C
6 −17 C
A, (g) undefined, (h) @ −8
A, (i) undefined.
8
12 −3
28
1.2.7. (a) undefined, (b) undefined, (c)
1.2.8. Only the third pair commute.
1.2.9. 1, 6, 11, 16.
0
1
1.2.10. (a) B
@0
0
0
1
2
0
B
C
B0
0 A, (b) B
@0
−1
0
0
0
0
0
−2
0
0
1
0
0
3
0
0
0C
C
C.
0A
−3
1.2.11. (a) True, (b) true.
♥ 1.2.12. (a) Let A = [ x y ; z w ]. Then A D = [ a x, b y ; a z, b w ], while D A = [ a x, a y ; b z, b w ], so if a ≠ b these are equal if and only if y = z = 0. (b) Every 2 × 2 matrix commutes with [ a 0 ; 0 a ] = a I .
(c) Only 3 × 3 diagonal matrices. (d) Any matrix of the form A = [ x 0 0 ; 0 y z ; 0 u v ]. (e) Let D = diag (d1 , . . . , dn ). The (i, j) entry of A D is aij dj . The (i, j) entry of D A is di aij . If di ≠ dj , this requires aij = 0, and hence, if all the di ’s are different, then A is diagonal.
1.2.13. We need A of size m × n and B of size n × m for both products to be defined. Further,
A B has size m × m while B A has size n × n, so the sizes agree if and only if m = n.
1.2.14. B = [ x y ; 0 x ], where x, y are arbitrary.
1.2.15. (a) (A + B)^2 = (A + B)(A + B) = A A + A B + B A + B B = A^2 + 2 A B + B^2, since A B = B A. (b) An example: A = [ 1 2 ; 0 1 ], B = [ 0 0 ; 1 0 ].
1.2.16. If A B is defined and A is an m × n matrix, then B is an n × p matrix and A B is an m × p matrix; on the other hand, if B A is defined we must have p = m, and B A is then an n × n matrix. Now, since A B = B A, we must have p = m = n.
1.2.17. A On×p = Om×p , Ol×m A = Ol×n .
1.2.18. The (i, j) entry of the matrix equation c A = O is c aij = 0. If any aij ≠ 0 then c = 0, so the only possible way that c ≠ 0 is if all aij = 0 and hence A = O.
1.2.19. False: for example, [ 1 0 ; 0 0 ] [ 0 0 ; 1 0 ] = [ 0 0 ; 0 0 ].
1.2.20. False — unless they commute: A B = B A.
1.2.21. Let v be the column vector with 1 in its j th position and all other entries 0. Then A v
is the same as the j th column of A. Thus, the hypothesis implies all columns of A are 0
and hence A = O.
1.2.22. (a) A must be a square matrix. (b) By associativity, A A^2 = A A A = A^2 A = A^3.
(c) The naïve answer is n − 1. A more sophisticated answer is to note that you can compute A^2 = A A, A^4 = A^2 A^2, A^8 = A^4 A^4, and, by induction, A^(2^r) with only r matrix multiplications. More generally, if the binary expansion of n has r + 1 digits, with s nonzero digits, then we need r + s − 1 multiplications. For example, A^13 = A^8 A^4 A since 13 is 1101 in binary, for a total of 5 multiplications: 3 to compute A^2, A^4 and A^8, and 2 more to multiply them together to obtain A^13.
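To make the counting in part (c) concrete, here is a small Python sketch (not from the text) of repeated squaring; the helper mat_mult is a hypothetical name for a plain square-matrix product.

def mat_mult(A, B):
    # Product of two square matrices given as lists of rows.
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_power(A, m):
    # Compute A**m by repeated squaring: form A, A^2, A^4, ... and multiply
    # together the powers corresponding to the 1 bits of m.
    n = len(A)
    result = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
    square = A
    while m > 0:
        if m & 1:
            result = mat_mult(result, square)
        square = mat_mult(square, square)
        m >>= 1
    return result

# A^13 = A^8 A^4 A, since 13 = 1101 in binary.
print(mat_power([[1, 1], [0, 1]], 13))  # [[1, 13], [0, 1]]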
1.2.23. A = [ 0 1 ; 0 0 ].
♦ 1.2.24. (a) If the ith row of A has all zero entries, then the (i, j) entry of A B is ai1 b1j + · · · +
ain bnj = 0 b1j + · · · + 0 bnj = 0, which holds for all j, so the ith row of A B will have all 0’s.
(b) If A = [ 1 1 ; 0 0 ], B = [ 1 2 ; 3 4 ], then B A = [ 1 1 ; 3 3 ].
−1
3
1.2.25. The same solution X =
1.2.26. (a)
4
1
!
5
, (b)
2
5
−2
!
1
−2
!
in both cases.
−1
. They are not the same.
1
!
1
0
1.2.27. (a) X = O. (b) Yes, for instance, A =
2
,B=
1
3
−2
!
2
,X=
−1
1
1
!
0
.
1
1.2.28. A = (1/c) I when c ≠ 0. If c = 0 there is no solution.
♦ 1.2.29.
(a) The ith entry of A z is 1 ai1 +1 ai2 +· · ·+1 ain = ai1 +· · ·+ain , which is the ith row sum.
(b) Each row of W has n − 1 entries equal to 1/n and one entry equal to (1 − n)/n, and so its row sums are (n − 1) (1/n) + (1 − n)/n = 0. Therefore, by part (a), W z = 0. Consequently, the row sums of B = A W are the entries of B z = A W z = A 0 = 0, and the result follows.
0 z1
1
1 2 −1
1
2
C
B
B C
B C
(c) z = B
3C
@ 1 A, and so A z = @ 2 1
A @ 1 A = @ 6 A, while B = A W =
1
−4 5 −1
1
0
1
0
1
0
10
0 1
1
1
2
1
4
5
−
− 3 −3
1
2
−1
0
3
3
3
C
B
3
B
C
B
CB
1
2
1C
B
C, and so B z = B
B
CB
C
=
0C
@
A.
−
2
1
3
0
1
−1
@
A
@
A@
3
3
3A
0
1
1
2
− 4 5 −1
4 −5
1
3
3 −3
♦ 1.2.30. Assume A has size m × n, B has size n × p and C has size p × q. The (k, j) entry of B C is Σ_{l=1}^{p} b_kl c_lj , so the (i, j) entry of A (B C) is
Σ_{k=1}^{n} a_ik ( Σ_{l=1}^{p} b_kl c_lj ) = Σ_{k=1}^{n} Σ_{l=1}^{p} a_ik b_kl c_lj .
On the other hand, the (i, l) entry of A B is Σ_{k=1}^{n} a_ik b_kl , so the (i, j) entry of (A B) C is
Σ_{l=1}^{p} ( Σ_{k=1}^{n} a_ik b_kl ) c_lj = Σ_{k=1}^{n} Σ_{l=1}^{p} a_ik b_kl c_lj .
The two results agree, and so A (B C) = (A B) C. Remark: A more sophisticated, simpler proof can be found in Exercise 7.1.44.
♥ 1.2.31.
(a) We need A B and B A to have the same size, and so this follows from Exercise 1.2.13.
(b) A B − B A = O if and only if A B = B A.
0
1
!
!
0 1 1
−1 2
0 0
B
(c) (i)
, (ii)
, (iii) @ 1 0 1 C
A;
6 1
0 0
−1 1 0
(d) (i) [ c A + d B, C ] = (c A + d B) C − C (c A + d B) = c (A C − C A) + d (B C − C B) = c [ A, C ] + d [ B, C ],
[ A, c B + d C ] = A (c B + d C) − (c B + d C) A = c (A B − B A) + d (A C − C A) = c [ A, B ] + d [ A, C ].
(ii) [ A, B ] = A B − B A = − (B A − A B) = − [ B, A ].
(iii) [ [ A, B ], C ] = (A B − B A) C − C (A B − B A) = A B C − B A C − C A B + C B A,
[ [ C, A ], B ] = (C A − A C) B − B (C A − A C) = C A B − A C B − B C A + B A C,
[ [ B, C ], A ] = (B C − C B) A − A (B C − C B) = B C A − C B A − A B C + A C B.
Summing the three expressions produces O.
♦ 1.2.32. (a) (i) 4, (ii) 0. (b) tr(A + B) = Σ_{i=1}^{n} (a_ii + b_ii) = Σ_{i=1}^{n} a_ii + Σ_{i=1}^{n} b_ii = tr A + tr B.
(c) The diagonal entries of A B are Σ_{j=1}^{n} a_ij b_ji , so tr(A B) = Σ_{i=1}^{n} Σ_{j=1}^{n} a_ij b_ji ; the diagonal entries of B A are Σ_{j=1}^{n} b_ji a_ij , so tr(B A) = Σ_{i=1}^{n} Σ_{j=1}^{n} b_ji a_ij . These double summations are clearly equal.
(d) tr C = tr(A B − B A) = tr A B − tr B A = 0 by parts (b) and (c).
(e) Yes, by the same proof.
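A quick numerical spot check of parts (b)–(d), using numpy with arbitrary random matrices (a sketch, not part of the original solution):

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
# tr(AB) = tr(BA), hence tr(AB - BA) = 0, even though AB and BA differ.
print(np.isclose(np.trace(A @ B), np.trace(B @ A)))  # True
print(np.isclose(np.trace(A @ B - B @ A), 0.0))      # True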
♦ 1.2.33. If b = A x, then bi = ai1 x1 + ai2 x2 + · · · + ain xn for each i. On the other hand,
cj = (a1j , a2j , . . . , anj )T , and so the ith entry of the right hand side of (1.13) is
x1 ai1 + x2 ai2 + · · · + xn ain , which agrees with the expression for bi .
♥ 1.2.34.
(a) This follows by direct computation.
(b) (i)
!
!
!
!
!
!
!
−2 1
1 −2
−2
1
−2
4
1 0
−1
4
=
( 1 −2 ) +
(1 0) =
+
=
.
3 2
1
0
3
2
3 −6
2 0
5 −6
0
1
!
!
!
!
2
5
(ii)
1 −2 0 B
1
−2
0
C
0A =
(2 5) +
( −3 0 ) +
( 1 −1 )
@ −3
−3 −1 2
−3
−1
2
1 −1
=
(iii)
3 −1
B
2
@ −1
1
1
0
10
2
−6
5
−15
1
0
!
+
6
3
1
0
0
!
+
0
2
0
−2
0
1
!
=
8
−1
!
5
.
−17
0
1
1
2
3 0
3
−1
1
B
C
B
C
B
C
B
C
1C
A @ 3 −1 4 A = @ −1 A( 2 3 0 ) + @ 2 A( 3 −1 4 ) + @ 1 A( 0 4 1 )
−5
0
4 1
1
1
−5
0
1
0
1
0
1
0
1
6
9 0
−3
1 −4
0
4
1
3
14 −3
C
B
B
B
=B
8C
4
1C
−1
9C
@ −2 −3 0 A + @ 6 −2
A + @0
A = @4
A.
2
3 0
3 −1
4
0 −20 −5
5 −18 −1
(c) If we set B = x, where x is an n × 1 matrix, then we obtain (1.14).
(d) The (i, j) entry of A B is Σ_{k=1}^{n} a_ik b_kj . On the other hand, the (i, j) entry of c_k r_k equals the product of the i th entry of c_k , namely a_ik , with the j th entry of r_k , namely b_kj . Summing these entries, a_ik b_kj , over k yields the usual matrix product formula.
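The column-times-row decomposition in part (d) can also be checked numerically; the sketch below (not from the text) reuses the matrices of example (b)(i) together with numpy's outer product.

import numpy as np

A = np.array([[-2., 1.], [3., 2.]])
B = np.array([[1., -2.], [1., 0.]])
# A B equals the sum of the outer products c_k r_k, where c_k is the k-th
# column of A and r_k is the k-th row of B.
outer_sum = sum(np.outer(A[:, k], B[k, :]) for k in range(A.shape[1]))
print(np.allclose(A @ B, outer_sum))  # True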
♥ 1.2.35.
!
−2 −8
−1
(a) p(A) = A − 3A + 2 I , q(A) = 2A + I . (b) p(A) =
, q(A) =
4
6
0
3
2
5
3
2
(c) p(A)q(A) = (A − 3A + 2 I )(2A + I ) = 2A − 5A + 4A − 3A + 2 I , while
p(x)q(x) = 2 x5 − 5 x3 + 4 x2 − 3 x + 2.
(d) True, since powers of A mutually commute.
For the particular matrix from (b),
!
2
8
p(A) q(A) = q(A) p(A) =
.
−4 −6
3
2
!
0
.
−1
♥ 1.2.36.
2
0
2
(a) Check that S = A by direct computation. Another example: S =
!
0
. Or, more
2
generally, 2 times any of the matrices in part (c).
(b) S 2 is only defined if S is square.!
!
±1
0
a
b
(c) Any of the matrices
,
, where a is arbitrary and b c = 1 − a2 .
0 ±1
c −a
!
0 −1
(d) Yes: for example
.
1
0
0
1
1 1 −1
B 3 0
1C
B
C
B
C
C. (c) Since matrix addition is
♥ 1.2.37. (a) M has size (i+j)×(k+l). (b) M = B
1
1
3
B
C
@ −2 2
0A
1 1 −1
done entry-wise, adding the entries of each block is the same as adding the blocks. (d) X
has size k × m, Y has size k × n, Z has size l × m, and W has size l × n. Then A X + B Z
will have size i × m. Its (p, q) entry is obtained by multiplying the pth row of M times the
q th column of P , which is ap1 x1q + · · · + api xiq + bp1 z1q + · · · + bpl zlq and equals the
sum of the (p, q) entries of A X and B Z. A similar argument works
for the remaining
three
!
!
0
0 −1
blocks. (e) For example, if X = (1), Y = ( 2 0 ), Z =
, W =
, then
1
1
0
0
1
0
1 −1
0
1
B 4
1 2
0
7
0C
B
C
B
C
B
C
C. The individual block products are
P = @ 0 0 −1 A, and so M P = B
4
5
−1
B
C
@ −2 −4 −2 A
1 1
0
0
1 −1
0
0
4
!
4
1
1
3
=
0
!
(1) +
1
0
1
1
0
−1
1
1
B
C
B
C
B
@ −2 A = @ −2 A (1) + @ 2
0
1
1.3.1.
(a)
1
−2
7
−9
˛
˛
˛
˛
˛
˛
4
2
!
!
1
!
1
0
!
3
C 0
0A
,
1
−1
2R1 +R2
−→
2
!
0
,
1
1
0
7
5
5
−1
1
B
C
−2 C
A = @ −2 A ( 2
−1
1
1
1
1
3
!
−1
0
B
@ −4
˛
˛
˛
˛
˛
!
1
7
=
0
(2
1
0) +
1
0
0
−1
1
1
0) + B
@2
1
!
0
1
1
3
0
0C
A
1
−1
!
−1
,
0
!
−1
.
0
!
4
. Back Substitution yields x2 = 2, x1 = −10.
10
˛
!
−5 ˛˛ −1 − 3 R1 +R2 3 −5 ˛˛ −1
˛ 26 . Back Substitution yields w = 2, z = 3.
˛
(b)
−→
˛
0 13
1 ˛ 8
3
˛
˛
˛
0
1
03
1
1
0
˛
1 ˛˛ 0 23 R2 +R3 1 −2
1 −2
1 ˛ 0 4R1 +R3 1 −2
1 ˛˛ 0
B
B
2 −8 ˛˛˛ 8 C
(c) B
2 −8 ˛˛˛ 8 C
2 −8 ˛˛˛ 8 C
A −→ @ 0
A −→ @ 0
@ 0
A.
˛
˛
−9
0 −3 13
−9
−4
5
9
0
0
1 ˛ 3
Back Substitution˛ yields
˛
1
0
1 z = 3, y
0= 16, x = 29.˛
1
0
4 −2 ˛˛ 1 −3R1 +R3 1
4 −2 ˛˛ 1
1
4 −2 ˛˛ 1 2R1 +R2 1
˛
C
B
˛
C
B
˛
B
8 −7 ˛˛ −5 A −→ @ 0
8 −7 ˛˛ −5 C
(d) @ −2
0 −3 ˛˛ −7 A −→ @ 0
A
3 −2
2 ˛ −1
0 −14
8 ˛ −4
3 −2
2 ˛ −1
˛
1
0
7
1 4
−2 ˛˛
1
R
+R
2
3
4
−→ B
−7 ˛˛˛ −5 C
@0 8
A. Back Substitution yields r = 3, q = 2, p = −1.
˛ − 51
0 0 − 17
4
4
3
2
6
˛
0
1
˛
0
1
1
0 −2
0 ˛˛ −1
1 0 −2
0 ˛˛ −1
˛
C
B
0
1
0 −1 ˛ 2 C
0 −1 ˛˛ 2 C
B0 1
C
˛
C reduces to B
˛
C.
@0 0
0 −3
2
0 ˛˛ 0 A
2 −3 ˛˛ 6 A
−4
0
0
7 ˛ −5
0 0
0 −5 ˛ 15
3
Solution: x4 = −3, x3 = − 2 , x2 = −1, x1 = −4.
˛
˛
0
1
0
1
−1
3 −1
1 ˛˛ −2
−1 3 −1
1 ˛˛ −2
B
B
1 −1
3 −1 ˛˛ 0 C
2
0 ˛˛ −2 C
C
B 0 2
C
B
˛
C reduces to B
˛
C.
(f ) B
@ 0
@ 0 0 −2
1 −1
4 ˛˛ 7 A
4 ˛˛
8A
˛
˛
4 −1
1
0
5
0 0
0 −24
−48
Solution: w = 2, z = 0, y = −1, x = 1.
B
B
(e) B
@
1.3.2.
(a) 3 x + 2 y = 2, − 4 x − 3 y = −1; solution: x = 4, y = −5,
(b) x + 2 y = −3, − x + 2 y + z = −6, − 2 x − 3 z = 1; solution: x = 1, y = −2, z = −1,
(c) 3 x − y + 2 z = −3, −2 y − 5 z = −1, 6 x − 2 y + z = −3; solution: x = 2/3, y = 3, z = −1,
(d) 2 x − y = 0, − x + 2 y − z = 1, − y + 2 z − w = 1, − z + 2 w = 0;
solution: x = 1, y = 2, z = 2, w = 1.
,
y
= − 43 ; (b) u = 1, v = −1; (c) u = 32 , v = − 13 , w = 16 ; (d) x1 =
1.3.3. (a) x = 17
3
11
10
2
2
19
5
1
4
2
3 , x2 = − 3 , x3 = − 3 ; (e) p = − 3 , q = 6 , r = 2 ; (f ) a = 3 , b = 0, c = 3 , d = − 3 ;
(g) x = 31 , y = 76 , z = − 83 , w = 92 .
1.3.4. Solving 6 = a + b + c, 4 = 4 a + 2 b + c, 0 = 9 a + 3 b + c, yields a = −1, b = 1, c = 6, so
y = −x2 + x + 6.
1.3.5.
2
(a) Regular:
1
(b) Not regular.
0
1
4
!
−→
2
0
1
7
2
1
3
!
.
0
1
−2
1
3 −2
1
B
10
4 −3 C
− 83 C
A −→ @ 0
A.
3
3
−2
5
0
0
4
0
1
0
1
1 −2
3
1 −2
3
B
C
B
(d) Not regular: @ −2
4 −1 A −→ @ 0
0
5C
A.
3 −1
2
0
5 −7
(e) Regular:
0
1
0
1
0
1 3 −3 0
1
3 −3 0
1
B
C
B
B
3 −4 2 C
B −1 0 −1 2 C
B0
C
B0
B
C −→ B
C −→ B
@ 3 3 −6 1 A
@ 0 −6
@0
3 1A
2 3 −3 5
0 −3
3 5
0
(c) Regular: B
@ −1
1.3.6.
−i
1− i
(a)
0
i
(b) B
@ 0
−1
(c)
1− i
−i
1+ i
1
0
2i
2i
˛
˛
˛
˛
˛
−1
−3 i
!
−→
−i
0
1+ i
1 − 2i
˛
˛
˛
˛
˛
3
3
0
0
−1
1 − 2i
−3
−4
−5
−1
1
0
0
1
B
2C
C
B0
C −→ B
@0
5A
7
0
3
3
0
0
−3
−4
−5
0
1
0
2C
C
C.
5A
6
!
;
use Back Substitution to obtain the solution y = 1, x = 1 − 2 i .
˛
1
0
˛
1
2i
i
0
1 − i ˛˛
2i
1 − i ˛˛
B
2 C
1 + i ˛˛˛
2 C
1 + i ˛˛˛
A −→ @ 0 2 i
A.
˛ 1 − 2i
0 0 −2 − i ˛ 1 − 2 i
i
solution:
z=
i , y = − 21 − 32 i , x˛ = 1 + i . !
˛
!
˛
2
1− i
2 ˛˛
i
˛ i
˛
˛
−→
;
1 + i ˛ −1
0
2 i ˛ − 32 − 12 i
solution: y = − 14 + 34 i , x = 12 .
˛
0
˛
0
1
1
i 2 + 2 i ˛˛ 0
1+ i i
2 + 2 i ˛˛ 0
B
˛
C
˛ 0 A −→ @ 0
2
i
1 −2 + 3 i ˛˛˛ 0 C
A;
˛
i 3 − 11 i ˛ 6
0
0 −6 + 6 i ˛ 6
solution: z = − 12 − 21 i , y = − 52 + 21 i , x = 52 + 2 i .
1+ i
(d) B
@ 1− i
3 − 3i
1.3.7. (a) 2 x = 3, − y = 4, 3 z = 1, u = 6, 8 v = − 24. (b) x = 3/2, y = − 4, z = 1/3, u = 6, v = − 3. (c) You only have to divide by each coefficient to find the solution.
♦ 1.3.8. 0 is the (unique) solution since A 0 = 0.
♠ 1.3.9.
Back Substitution
start
    set xn = cn / unn
    for i = n − 1 to 1 with increment −1
        set xi = (1/uii) ( ci − Σ_{j=i+1}^{n} uij xj )
    next i
end
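For reference, a direct Python rendering of the pseudocode above (a sketch with our own function name, not part of the original solution; the test data are the reduced system of Exercise 1.1.1(a)):

def back_substitution(U, c):
    # Solve U x = c for an upper triangular U with nonzero diagonal:
    # find x_n first, then work upward exactly as in the pseudocode.
    n = len(c)
    x = [0.0] * n
    x[n - 1] = c[n - 1] / U[n - 1][n - 1]
    for i in range(n - 2, -1, -1):
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (c[i] - s) / U[i][i]
    return x

# x - y = 7, 3 y = -4  gives  x = 17/3, y = -4/3.
print(back_substitution([[1.0, -1.0], [0.0, 3.0]], [7.0, -4.0]))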
1.3.10. Since
[ a11 a12 ; 0 a22 ] [ b11 b12 ; 0 b22 ] = [ a11 b11, a11 b12 + a12 b22 ; 0, a22 b22 ],
[ b11 b12 ; 0 b22 ] [ a11 a12 ; 0 a22 ] = [ a11 b11, a22 b12 + a12 b11 ; 0, a22 b22 ],
the matrices commute if and only if
a11 b12 + a12 b22 = a22 b12 + a12 b11 ,
or
(a11 − a22 ) b12 = a12 (b11 − b22 ).
1.3.11. Clearly, any diagonal matrix is both lower and upper triangular. Conversely, A being
lower triangular requires that aij = 0 for i < j; A upper triangular requires that aij = 0 for
i > j. If A is both lower and upper triangular, aij = 0 for all i ≠ j, which implies A is a
diagonal matrix.
♦ 1.3.12.
(a) Set lij =
0
(
0
(b) L = B
@ 1
−2
♦ 1.3.13.
aij ,
0,
0
0
0
i > j,
,
i ≤ j,
1
0
0C
A,
0
uij =
0
3
D=B
@0
0
0
(
aij ,
0,
0
−4
0
1
i < j,
i ≥ j,
0
0C
A,
5
dij =
0
0
U =B
@0
0
(
1
0
0
aij ,
0,
i = j,
i 6= j.
1
−1
2C
A.
0
1
(a) By direct computation, A^2 = [ 0 0 1 ; 0 0 0 ; 0 0 0 ], and so A^3 = O.
(b) Let A have size n × n. By assumption, aij = 0 whenever i > j − 1. By induction, one
proves that the (i, j) entries of Ak are all zero whenever i > j − k. Indeed, to compute
the (i, j) entry of Ak+1 = A Ak you multiply the ith row of A, whose first i entries are 0,
by the j th column of A^k, whose only possibly non-zero entries are among the first j − k, with all the rest zero according to the induction hypothesis; therefore, if i > j − k − 1, every term in the sum producing this entry is 0, and the induction is complete. In particular, for k = n, every entry of A^n is zero, and so A^n = O.
1
1
(c) The matrix A = [ 1 1 ; −1 −1 ] has A^2 = O.
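Part (b) is easy to test numerically for a particular strictly upper triangular matrix; the following numpy sketch (not part of the original solution) uses an arbitrary 4 × 4 example.

import numpy as np

# A strictly upper triangular 4 x 4 matrix satisfies A^4 = O.
A = np.triu(np.arange(1.0, 17.0).reshape(4, 4), k=1)
print(np.allclose(np.linalg.matrix_power(A, 4), 0))  # True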
1.3.14.
(a) Add −2 times the second row to the first row of a 2 × n matrix.
(b) Add 7 times the first row to the second row of a 2 × n matrix.
(c) Add −5 times the third row to the second row of a 3 × n matrix.
(d) Add 21 times the first row to the third row of a 3 × n matrix.
(e) Add −3 times the fourth row to the second row of a 4 × n matrix.
0
1
B
B0
1.3.15. (a) B
@0
0
0
1
0
0
0
0
1
1
0
1
1.3.16. L3 L2 L1 = B
@2
0
0
1
0
0
1
B
0C
C
B0
C, (b) B
@0
0A
1
0
0
1
− 21
1
0
1
0
0
0
0
1
0
0
0C
A 6= L1 L2 L3 .
1
1
1
0
0
1
B
0C
C
B0
C, (c) B
@0
−1 A
1
0
0
0
1
0
0
0
0
1
0
1
0
3
1
B
0C
C
B0
C, (d) B
@0
0A
1
0
0
1
0
−2
0
0
1
0
1
0
0C
C
C.
0A
1
1
1 0 0
1 0 0
B
1.3.17. E3 E2 E1 = B
1 0C
1 0C
@ −2
A, E1 E2 E3 = @ −2
A. The second is easier to predict
1
1
−2 2 1
−1 2 1
since its entries are the same as the corresponding entries of the Ei .
1.3.18.
(a) Suppose that E adds c ≠ 0 times row i to row j ≠ i, while Ẽ adds d ≠ 0 times row k to row l ≠ k. If r1 , . . . , rn are the rows, then the effect of E Ẽ is to replace
(i) rj by rl + c ri + d rk for j = l;
(ii) rj by rj + c ri and rl by rl + (c d) ri + d rj for j = k;
(iii) rj by rj + c ri and rl by rl + d rk otherwise.
On the other hand, the effect of Ẽ E is to replace
(i) rj by rl + c ri + d rk for j = l;
(ii) rj by rj + c ri + (c d) rk and rl by rl + d rk for i = l;
(iii) rj by rj + c ri and rl by rl + d rk otherwise.
Comparing results, we see that E Ẽ = Ẽ E whenever i ≠ l and j ≠ k.
(b) E1 E2 = E2 E1 , E1 E3 ≠ E3 E1 , and E3 E2 = E2 E3 .
(c) See the answer to part (a).
1.3.19. (a) Upper triangular; (b) both special upper and special lower triangular; (c) lower
triangular; (d) special lower triangular; (e) none of the above.
1.3.20. (a) aij = 0 for all i 6= j; (b) aij = 0 for all i > j; (c) aij = 0 for all i > j and aii = 1
for all i; (d) aij = 0 for all i < j; (e) aij = 0 for all i < j and aii = 1 for all i.
♦ 1.3.21.
(a) Consider the product L M of two lower triangular n × n matrices. The last n − i entries
in the ith row of L are zero, while the first j − 1 entries in the j th column of M are zero.
So if i < j each summand in the product of the ith row times the j th column is zero,
and so all entries above the diagonal in L M are zero.
(b) The ith diagonal entry of L M is the product of the ith diagonal entry of L times the ith
diagonal entry of M .
(c) Special matrices have all 1’s on the diagonal, and so, by part (b), does their product.
!
!
!
!
1 0
1 3
1 0
1
3
1.3.22. (a) L =
, U=
,
(b) L =
, U=
,
−1 1
0 3
3 1
0 −8
0
1
0
1
0
1
0
1
1 0 0
2 0
3
1 0 0
−1 1 −1
1
1C
C
B
B
1 0C
(c) L = B
0C
(d) L = B
@ −1 1 0 A, U = @ 0 2
A,
@2
A, U = @ 0 3 − 2 A,
1
7
1 0 1
0 0
3
0 3 1
0 0
6 1
0
1
0
1
0
1
0
1
0 0
−1
0 0
1 0 0
1 0
−1
B
C
B
B
(e) L = B
1 0C
1 0C
4C
@ −2
A, U = @ 0 −3 0 A, (f ) L = @ 2
A, U = @ 0 3
A,
−1 −1 1
0
0 2
−3 31 1
0 0 − 13
3
1
0
0
1
0
1
1 0 −1
0
1
0 0 0
1
0
0
0
B
B 0 2 −1
B
−1 C
1 0 0C
C
B 0
B
C
−1
1
0 0C
C
C, U = B
C
3
B
C,
(g) L = B
(h) L = B
C
B −1
B
7 C,
1
1
0
@ −2
A
1
1
0
2
A
@
@0 0
A
2
2
1
3
−1
−2
1
0 −2 3 1
0 0
0 −10
1
0
1
0
0
1
2 1
3
1
1
0
0 0
1 1 −2 3
C
B
B1
1C
B0 7
C
B
B
1
0 0C
− 23
1 3C
C
B
B0 3
C
B2
2
2C
C, U = B
C
C,
B
U =B
(i)
L
=
5 C.
B0
B 3 −3
@ 0 0 −4 1 A
1 0C
0 − 22
A
@
@2
7
7
7A
0 0
0 1
35
1
1
5
0 0
0 22
2
7 − 22 1
1.3.23. (a) Add 3 times first row to second row. (b) Add −2 times first row to third row.
(c) Add 4 times second row to third row.
1.3.24. (a) L = [ 1 0 0 0 ; 2 1 0 0 ; 3 4 1 0 ; 5 6 7 1 ].
(b) (1) Add −2 times first row to second row. (2) Add −3 times first row to third row. (3) Add −5 times first row to fourth row. (4) Add −4 times second row to third row. (5) Add −6 times second row to fourth row. (6) Add −7 times third row to fourth row.
(c) Use the order given in part (b).
♦ 1.3.25. See equation (4.51) for the general case.
1
t2
1
Bt
@ 1
t21
1
t2
t22
1
1
B
t3 C
A = @ t1
t21
t23
1
t2
t22
t32
1
1
B
C
C
B
t4 C B t1
C=B 2
Bt
t24 C
A
@ 1
3
t31
t4
0
0
1
B
Bt
B 1
B 2
Bt
@ 1
t31
!
1
t1
1
t3
t23
t33
1
=
1
0
1
t1
0
0
0
1
!
1
0
0
1
t1 + t 2
1
t2 − t1
10
0
1
B
0C
A@0
0
1
0
1
t1 + t 2
t21 + t1 t2 + t22
1
B
B0
B
B
B0
@
0
1
t2 − t1
0
0
!
1
t2 − t1
0
0
0
1
t1 + t 2 + t 3
1
t3 − t 1
(t3 − t1 ) (t3 − t2 )
0
1
1
C
t3 − t 1
A,
(t3 − t1 ) (t3 − t2 )
1
0
C
0C
C
C
0C
A
1
1
1
C
C
t4 − t 1
C
C.
C
(t4 − t1 ) (t4 − t2 )
A
(t4 − t1 ) (t4 − t2 ) (t4 − t3 )
1.3.26. False. For instance [ 1 1 ; 1 0 ] is regular. Only if the zero appears in the (1, 1) position does it automatically preclude regularity of the matrix.
1.3.27. (n − 1) + (n − 2) + · · · + 1 = n (n − 1)/2.
1.3.28. We solve the equation [ 1 0 ; l 1 ] [ u1 u2 ; 0 u3 ] = [ a b ; c d ] for u1 , u2 , u3 , l, where a ≠ 0 since A = [ a b ; c d ] is regular. This matrix equation has a unique solution: u1 = a, u2 = b, u3 = d − b c/a, l = c/a.
♦ 1.3.29. The matrix factorization A = L U is [ 0 1 ; 1 0 ] = [ 1 0 ; a 1 ] [ x y ; 0 z ] = [ x, y ; a x, a y + z ]. This implies x = 0 and a x = 1, which is impossible.
ay + z
♦ 1.3.30.
(a) Let u11 , . . . , unn be the pivots of A, i.e., the diagonal entries of U . Let D be the diagonal matrix whose diagonal entries are dii = sign uii . Then B = A D is the matrix obtained by multiplying each column of A by the sign of its pivot. Moreover, B = L U D = L Ũ , where Ũ = U D, is the L U factorization of B. Each column of Ũ is obtained by multiplying it by the sign of its pivot. In particular, the diagonal entries of Ũ , which are the pivots of B, are uii sign uii = | uii | > 0.
(b) Using the same notation as in part (a), we note that C = D A is the matrix obtained by multiplying each row of A by the sign of its pivot. Moreover, C = D L U . However, D L is not special lower triangular, since its diagonal entries are the pivot signs. But L̂ = D L D is special lower triangular, and so C = D L D D U = L̂ Û , where Û = D U , is the L U factorization of C. Each row of Û is obtained by multiplying it by the sign of its pivot. In particular, the diagonal entries of Û , which are the pivots of C, are uii sign uii = | uii | > 0.
(c)
0
1
0
10
1
−2 2 1
1 0 0
−2 2
1
B
C
B 1
CB
3C
@ 1 0 1 A = @ − 2 1 0 A@ 0 1
2 A,
4 2 3
−2 6 1
0 0 −4
0
10
1
1
0
2 2 −1
2 2 −1
1 0 0
B
CB
C
B 1
3C
@ −1 0 −1 A = @ − 2 1 0 A@ 0 1 − 2 A,
−4 2 −3
−2 6 1
0 0
4
0
1
0
10
1
2 −2 −1
1
0 0
2 −2 −1
B
C
B 1
CB
3C
0
1A = @ 2
1 0 A@ 0
1
@ 1
2 A.
−4 −2 −3
−2 −6 1
0
0
4
1.3.31. (a) x =
−1
2
3
!
,
0
1
1
(b) x = @ 41 A,
4
0
0
1
C
(c) x = B
@ 1 A,
0
0
1
0
B
(d) x = B
@
0
2C
C,
7A
5
17
37
3
1
− 12
2
C
B 35 C
B
C
B 6 C
B
0
B C
C
B − 17 C
B
C
B1C
12 C, (i) x = B 35 C.
(f ) x = B
@ 1 A, (g) x = B C, (h) x = B
B
C
B 1 C
@1A
C
B 1 C
B
−1
@ 7 A
@ 4 A
0
8
2
35
0
1
0
1
4
B − 7C
0
−1
1
C
(e) x = B
@ −1 A,
5
2
1.3.32.
1
−3
0
1
1
(b) L = B
@ −1
1
0
1
0
(a) L =
0
0
!
, U=
−1
0
1
0
!
3
;
11
0
−1
B
0C
A, U = @ 0
1
0
1
0
1
2
0
x1 =
1
−1
0C
A;
3
0
1
5
@ − 11 A
1
2
11
, x2 =
0
1
!
0
1
9
1
A;
, x3 = @ 11
3
1
1 11
0
1
−1
B − 6C
C
B
C
x1 = B
@ 0 A, x 2 = B − 3 C;
2A
@
0
5
0
1
0
3
1
9 −2 −1
0
−2
1
C
B
B
B
B C
1C
2
C, U = B0 −1
C;
−9 C
,
x
=
2
x
=
(c) L = B
A;
@
A
@
0
−
A
@
@
2
1
3
3
3A
5
2
1
−1
3
00 0 − 3
9
3 1
1
1
0
2.0
.3
.4
1
0
0
B
.355
4.94 C
1
0C
(d) L = B
A;
A, U = @ 0
@ .15
0
0
−.2028
.2 1.2394
1
1
0
1
0
1
0
−9.3056
1.1111
.6944
B
C
B
C
B
x1 = @ −1.3889 A, x2 = @ −82.2222 A, x3 = @ 68.6111 C
A
−4.9306
6.1111
.0694
0 51
0
0
1
1 1
0
1
1 0 −1
0
1
0
0 0
4
14
B
B
C
C
B0 2
B 0
B−1 C
B− 5 C
3 −1 C
B
C
1
0 0C
B
B 4C
C
B 14 C
B
C
C, x 2 = B
C;
(e) L = B
, U =B
x1 = B
3
7
7 C;
1 0C
1 C
B 1C
B
@ −1
A
@0 0 −2
2
2A
@ 4A
@ 14 A
1
0 − 2 −1 1
1
1
0 0
0
4
4
2
0
1
0
1
1
0 0 0
1 −2
0
2
B
B
4
1 0 0C
9 −1 −9 C
C
B0
C
B
C, U = B
C;
(f ) L = B
17
1
@ −8 −
A
@0
A
1
0
0
0
9
9
−4 0
−1 1 0 1 0 1 0
0
0
1
0
1
1
1
10
B C
B C
B
C
0C
B1C
B 8C
B C, x 2 = B C, x 3 = B
C.
x1 = B
@4A
@3A
@ 41 A
0
2
4
1
0
1
1.4.1. The nonsingular matrices are (a), (c), (d), (h).
1.4.2. (a) Regular and nonsingular, (b) singular, (c) nonsingular, (d) regular and nonsingular.
1.4.3. (a) x1 = − 35 , x2 = − 10
(b) x1 = 0, x2 = −1, x3 = 2;
3 , x3 = 5;
9
(c) x1 = −6, x2 = 2, x3 = −2;
(d) x = − 13
2 , y = − 2 , z = −1, w = −3;
(e) x1 = −11, x2 = − 10
3 , x3 = −5, x4 = −7.
1.4.4. Solve the equations −1 = 2 b + c, 3 = − 2 a + 4 b + c, −3 = 2 a − b + c, for a = −4, b = −2,
c = 3, giving the plane z = − 4 x − 2 y + 3.
1.4.5.
(a) Suppose A is nonsingular. If a ≠ 0 and c ≠ 0, then we subtract c/a times the first row from the second, producing the (2, 2) pivot entry (a d − b c)/a ≠ 0. If c = 0, then the pivot entry is d and so a d − b c = a d ≠ 0. If a = 0, then c ≠ 0 as otherwise the first column would not contain a pivot. Interchanging the two rows gives the pivots c and b, and so a d − b c = − b c ≠ 0.
(b) Regularity requires a ≠ 0. Proceeding as in part (a), we conclude that a d − b c ≠ 0 also.
1.4.6. True. All regular matrices are nonsingular.
♦ 1.4.7. Since A is nonsingular, we can reduce it to the upper triangular form with nonzero diagonal entries (by applying the operations # 1 and # 2). The rest of argument is the same as
in Exercise 1.3.8.
1.4.8. By applying the operations # 1 and # 2 to the system A x = b we obtain an equivalent upper triangular system U x = c. Since A is nonsingular, uii ≠ 0 for all i, so by Back Substitution each solution component, namely xn = cn / unn and xi = (1/uii) ( ci − Σ_{k=i+1}^{n} uik xk ) for i = n − 1, n − 2, . . . , 1, is uniquely defined.
0
1
1
0
1
0 0 0
0 0 0 1
B
C
0 0 1C
C
B0 1 0 0C
C, (b) P2 = B
C,
@0 0 1 0A
0 1 0A
0 1 0 0
1 0 0 0
(c) No, they do not commute. (d) P1 P2 arranges the rows in the order 4, 1, 3, 2, while
P2 P1 arranges them in the order 2, 4, 3, 1.
B
0
B
1.4.9. (a) P1 = B
@0
0
0
1.4.10. (a) B
@0
1
1
0
0
0
1
0
0
B
C
B0
1 A, (b) B
@1
0
0
0
0
0
1
0
1
0
0
1
0
1
0
B
0C
C
B1
C, (c) B
@0
0A
0
0
1
0
0
0
0
0
0
1
0
1
0
0
B1
B
0C
B
C
C, (d) B 0
B
1A
@0
0
0
0
0
0
1
0
0
0
1
0
0
1
1
0
0
0
0
0
0C
C
C.
0C
C
0A
1
1.4.11. The (i, j) entry of the following Multiplication Table indicates the product P i Pj , where
0
1
P1 = B
@0
0
0
0
P4 = B
@1
0
1
0
1
0
1
0
0
0
0
0C
A,
1
1
0
0C
A,
1
0
P2 = B
@0
1
0
0
P5 = B
@0
1
1
0
0
0
1
0
1
0
0
1C
A,
0
1
1
0C
A,
0
0
P3 = B
@1
0
0
1
P6 = B
@0
0
0
0
1
0
0
1
The commutative pairs are P1 Pi = Pi P1 , i = 1, . . . , 6, and P2 P3 = P3 P2 .
0
1
B
0
B
1.4.12. (a) B
@0
0
P1
P2
P3
P4
P5
P6
P1
P1
P2
P3
P4
P5
P6
P2
P2
P3
P1
P6
P4
P5
P3
P3
P1
P2
P5
P6
P4
P4
P4
P5
P6
P1
P2
P3
P5
P5
P6
P4
P3
P1
P2
P6
P6
P4
P5
P2
P3
P1
0
0
1
0
1
0
B
0C
C B1
C, B
0A @0
0
0
0
1
0
0
0
0
1
0
1
0
0
0
B
0C
C B0
C, B
0A @0
1
1
1
0
0
0
0
0
1
0
1
0
0
0
B
1C
C B1
C, B
0A @0
0
0
0
0
0
1
1
0
1
0
0
0
0
0
1
0
1
1
0C
A,
0
1
0
1C
A.
0
1
0
0
0
B
0C
C B0
C, B
0A @0
1
1
0
1
0
0
0
0
1
0
1
1
0C
C
C,
0A
0
0
1
B
B0
B
@0
0
0
B
B1
B
@0
0
0
0
0
0
1
0
0
1
0
0
0
1
0
0
0
0
1
1
0
0
0 1 0
B
1C
C
B1 0 0
C;
(b) B
@0 0 0
0A
01 0
0 0 11
1
0 1 0 0
B
C
0C
C B0 0 0 1C
C, B
C;
0A @1 0 0 0A
0
0 0 1 0
1
0
0
1
B
0C
C B0
C, B
1A @0
0
0
0
1
B
0
B
(c) B
@0
0
0
0
1
0
0
0
1
0
0
0
0
1
0
1
0
0
1
0
0
0
B
1C
C B0
C, B
0A @1
01 0 0
0
0
B
0C
C B0
C, B
0A @0
1
1
0
1
0
0
0
0
1
0
0
0
0
1
0
1
0
0
1
0
1
1
B
0C
C B0
C, B
0A @0
01
0
1
0C
C
C.
0A
0
0
1
0
0
0
0
0
1
1
0
0C
C
C,
1A
0
1.4.13. (a) True, since interchanging the same pair of rows twice brings you back 0
to where 1
0 0 1
C
you started. (b) False; an example is the non-elementary permutation matrix B
@ 1 0 0 A.
!
0 1 0
−1
0
(c) False; for example P =
is not a permutation matrix. For a complete list of
0 −1
such matrices, see Exercise 1.2.36.
1.4.14. (a) Only when all the entries of v are different; (b) only when all the rows of A are
different.
0
1
1 0 0
C
1.4.15. (a) B
@ 0 0 1 A. (b) True. (c) False — A P permutes the columns of A according to
0 1 0
the inverse (or transpose) permutation matrix P −1 = P T .
♥ 1.4.16.
(a) If P has a 1 in position (π(j), j), then it moves row j of A to row π(j) of P A, which is
enough to establish the correspondence.
1
0
0
1
0
1
0 0 0 0 1
0
1
0 0 0 1
1 0 0 0
C
B
0 1 0
B0 0 0 1 0C
B
C
B
C
C
B
B
C
B0 1 0 0C
B0 0 1 0C
C, (iii) B
C, (iv ) B 0 0 1 0 0 C.
(b) (i) @ 1 0 0 A, (ii) B
C
B
@0 0 1 0A
@0 0 0 1A
@0 1 0 0 0A
0 0 1
1 0 0 0
0 1 0 0
1 0 0 0 0
Cases (i) and !
(ii) are elementary matrices.
!
!
!
1 2 3
1 2 3 4
1 2 3 4
1 2 3 4 5
(c) (i)
, (ii)
, (iii)
, (iv )
.
2 3 1
3 4 1 2
4 1 2 3
2 5 3 1 4
♦ 1.4.17. The first row of an n×n permutation matrix can have the 1 in any of the n positions, so
there are n possibilities for the first row. Once the first row is set, the second row can have
its 1 anywhere except in the column under the 1 in the first row, and so there are n − 1
possibilities. The 1 in the third row can be in any of the n − 2 positions not under either of
the previous two 1’s. And so on, leading to a total of n(n − 1)(n − 2) · · · 2 · 1 = n ! possible
permutation matrices.
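The count n! can also be confirmed by brute-force enumeration; here is a small Python sketch (not from the text) that builds every n × n permutation matrix from the permutations of {0, . . . , n − 1}.

from itertools import permutations

def permutation_matrices(n):
    # One n x n permutation matrix per permutation of the rows.
    return [tuple(tuple(1 if j == perm[i] else 0 for j in range(n))
                  for i in range(n))
            for perm in permutations(range(n))]

for n in range(1, 5):
    print(n, len(permutation_matrices(n)))  # 1, 2, 6, 24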
1.4.18. Let ri , rj denote the rows of the matrix in question. After the first elementary row operation, the rows are ri and rj + ri . After the second, they are ri − (rj + ri ) = − rj and
rj + ri . After the third operation, we are left with − rj and rj + ri + (− rj ) = ri .
1.4.19. (a)
0
1
1
0
!
0
2
1
−1
!
=
1
0
0
1
!
2
0
!
−1
,
1
0
1
5
x = @ 2 A;
3
0
0
(b) B
@0
1
1
0
0
10
0
0
B
1C
A@ 1
0
0
10
0
1
0
1
0
1
0 −4
B
2 3C
A = @0
0
1 7
0
1
0
1
10
1
0
B
0C
A@ 0
0
1
2 3
1 7C
A,
0 −4
1
10
0
1
5
4C
3C
C;
4A
1
− 41
0
B
B
x=B
@
−1
1 0 2
1 0 0
0 1 −3
0 0 1
C
C
CB
B
CB
x=B
3C
(c) B
@ 1 A;
A = @ 0 1 0 A@ 0 1 −3 A,
@ 1 0 0 A@ 0 2
0
0 0109
00 2 1
0 1 010 1 0 2
0
1
1
0
1
1 0 0 0
1 2 −1 0
1 0 0 0
1 2 −1 0
22
B
B
B
B
B
C
0 0 1 0C
6 2 −1 C
0 0C
2C
CB 3
C
CB 0 −1 −6
C
B1 1
B −13 C
B
CB
C=B
CB
C, x = B
C;
(d) B
@ 0 1 0 0 A@ 1
@ −5 A
1 −7 2 A @ 3 0 1 0 A@ 0 0 5 −1 A
21
4
0 0 0 1 10 1 −1 2 11 0 1 3 5 110 0 0 0 − 51
−22
0
0
1
0 0 1 0
0 1 0 0
1
0 0 0
1 4 −1 2
−1
B
B
B
B
B
C
1 0 0 0C
3 1 0C
1 0 0C
0 0C
CB 2
C
B0
CB 0 1
C
B −1 C
B
CB
C=B
CB
C, x = B
C;
(e) B
@ 0 1 0 0 A@ 1
@ 1A
4 −1 2 A @ 2 −5 1 0 A@ 0 0 3 −4 A
0 0 0 1
7 −1 2 3
7 −29 3 1
0 0 0 1
3
10
1
0
1
0
10
1
0
1 4
1 1 1
1
0 0 1 0 0
0 0 2 3 4
1 0 0 0 0
C
B
C
B
C
B 0 1 0 0 0 CB 0 1 −7 2 3 C
B0 1 0
0 0 CB 0 1 −7 2 3 C
B 0C
B
CB
C
B
CB
C
B
B
CB
C
B
B0 0
C.
B1 4
,x=B
1 0 2C
0C
=B
(f ) B
0 0 0 1 0C
1 1 1C
0 0 1 0 0C
CB
C
B
C
B
CB
C
B
@ −1 A
@ 1 0 0 0 0 A@ 0 0
0 3 0A
1 0 2 A @ 0 0 2 1 0 A@ 0 0
0 0
0 0 1
0
0 0 0 0 1
0 0 1 7 3
0 0 1 37 1
1.4.20.
0
1
(a) B
@0
0
0
0
0
1
0
B
B0
(b) B
@1
0
0
1
0
0
1
0
0
0
1
0
B
0
B
(c) B
@0
0
10
0
4
B
1C
A@ −3
0
−3
−4
3
1
0
1
1
2
B
3
1C
A=B
−
@
4
−2
−3
solution: x1 = 54 ,
10
0
1
0
4
x2 = 47 , x3 = 23 .
1
0
10
0
4
CB
B0
0C
A@
1
0
−4
−2
0
2
1
C
− 21 C
A;
5
2
1
10
1 0
0
1 −1
1
1 0 0 0
1 −1
1 −3
B
B
B
0 0C
1
1
0C
0 0C
1
1
0C
CB 0
C
B0 1
C
CB 0
CB
C=B
C;
CB
A
@
A
@
A
@
0 0
1 −1
1 −3
0 1 1 0
0
0 −2
1A
3
0 1
1
2 −1
1
1 3 25 1
0
0
0
2
solution:
x
=
4,
y
=
0,
z
=
1,
w
=
1.
10
1
10
1
0
1 −1
2
1
0 0
1 −1
2
1
1 0
0 0
CB
CB
C
B
3 −3
0C
0 1 CB −1
1 −3
0C B 1 1
0 0 CB 0
C
CB
C;
CB
C=B
0
2 −4 A
1 0 A@ 1 −1
1 −3 A @ 1 0
1 0 A@ 0
1
0
0
0
1
0 0
1
2 −1
1
−1 0 − 2 1
19
5
solution: x = 3 , y = − 3 , z = −3, w = −2.
♦ 1.4.21.
(a) They are all of the form P A = L U , where P is a permutation matrix. In the first case,
we interchange rows 1 and 2, in the second case, we interchange rows 1 and 3, in the
third case, we interchange rows 1 and 3 first and then interchange rows 2 and 3.
(b) Same solution x = 1, y = 1, z = −2 in all cases. Each is done by a sequence of elementary row operations, which do not change the solution.
1.4.22. There are four
0 in all: 1 0
0 1 0
0
B
CB
@1 0 0A@1
0 0 1
1
0
10
0 1 0
0
B
CB
@0 0 1A@1
1 0 0
1
0
10
0 0 1
0
B
CB
@0 1 0A@1
1 0 0
1
1
0
1
1
0
1
1
0
1
1
0
2
1
B
−1 C
A = @0
3
1
1
0
2
1
B
−1 C
A = @1
3
0
1
0
2
1
B
−1 C
A = @1
3
0
0
1
1
0
1
1
10
1
0
1 0 −1
B
0C
2C
A@0 1
A,
1
0 0
2
10
1
0
1 0 −1
B
0C
4C
A@0 1
A,
1
0 0 −2
10
1
0 0
1
1
3
B
C
1 0C
A @ 0 −1 −4 A ,
−1 1
0
0 −2
0
1
10
10
0
1
0
B
0C
A@0
0
1
1
0
0 1
2
0 1
C
B
B
1
0 0C
A @ 1 0 −1 A = @ 0
1 −1
1 1
3
0 1 0
The other two permutation matrices are not regular.
0
B
@1
1
1
0
1
3
2C
A.
−2
1.4.23. The maximum is 6 since there are 6 different 3 × 3 permutation matrices. For example,
0
1
0
10
1
1 0 0
1 0 0
1 0 0
B
C
B
CB
C
@ 1 1 0A = @ 1 1 0A@0 1 0A,
−1 1 1
−1 1 1
0 0 1
0
10
1
0
10
1
1 0 0
1 0 0
1 0 0
1 0
0
B
CB
C
B
CB
1C
@ 0 0 1 A @ 1 1 0 A = @ −1 1 0 A @ 0 1
A,
0 1 0
−1 1 1
1 1 1
0 0 −1
1
10
1
0
10
0
1
1 0
1
0 0
1 0 0
0 1 0
C
B
C
B
CB
B
1 0C
A @ 0 −1 0 A ,
@1 0 0A@ 1 1 0A = @ 1
0
0 1
−1 −2 1
−1 1 1
0 0 1
0
0
B
@0
1
0
B
@0
1
0
0
B
@1
0
0
1
0
0
0
1
0
0
0
1
10
1
0
B
1C
A@ 1
−1
0
10
1
1
B
0C
A@ 1
0
−1
10
1
1
B
0C
A@ 1
0
−1
0
1
1
0
1
1
0
1
1
1
0
1
0
B
0C
A = @ −1
1
1
1
0
0
1
B
0C
A = @ −1
1
−1
1
0
0
1
B
0C
A = @ −1
1
−1
0
1
10
1
0
1 1 0
B
0C
1C
A@0 2
A,
1
1
−2 1
0 0 2
10
1
−1 1 1
0 0
CB
1 0A@ 0 2 1C
A,
1
0 0 21
2 1
10
1
0 0
−1 1
1
CB
1 0A@ 0 1
1C
A.
2 1
0 0 −1
1.4.24. False. Changing the permutation matrix typically changes the pivots.
♠ 1.4.25.
Permuted L U factorization
start
    set P = I , L = I , U = A
    for j = 1 to n
        if ukj = 0 for all k ≥ j, stop; print “A is singular”
        if ujj = 0 but ukj ≠ 0 for some k > j then
            interchange rows j and k of U
            interchange rows j and k of P
            for m = 1 to j − 1 interchange ljm and lkm next m
        for i = j + 1 to n
            set lij = uij / ujj
            add − lij times row j to row i of U
        next i
    next j
end
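A numpy rendering of the pseudocode above, offered only as an illustrative sketch (the function name is ours); it returns P, L, U with P A = L U.

import numpy as np

def permuted_lu(A):
    # Follow the pseudocode: if the (j, j) entry of U is zero, bring up a
    # nonzero entry from below it, recording the swap in P and in the
    # already-computed part of L, then eliminate below the pivot.
    n = A.shape[0]
    P, L, U = np.eye(n), np.eye(n), A.astype(float).copy()
    for j in range(n):
        nonzero = np.flatnonzero(~np.isclose(U[j:, j], 0.0))
        if nonzero.size == 0:
            raise ValueError("A is singular")
        k = j + nonzero[0]
        if k != j:
            U[[j, k], :] = U[[k, j], :]
            P[[j, k], :] = P[[k, j], :]
            L[[j, k], :j] = L[[k, j], :j]
        for i in range(j + 1, n):
            L[i, j] = U[i, j] / U[j, j]
            U[i, :] -= L[i, j] * U[j, :]
    return P, L, U

A = np.array([[0., 1., 2.], [1., 1., 3.], [2., 1., 1.]])
P, L, U = permuted_lu(A)
print(np.allclose(P @ A, L @ U))  # True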
1.5.1.
!
!
!
!
!
2
3
−1 −3
1 0
−1 −3
2
3
(a)
=
=
,
−1 −1
1
2
0 1
1
2
−1 −1
0
10
1
0
1
0
10
1
2 1 1
3 −1 −1
1 0 0
3 −1 −1
2 1 1
CB
B
C
B
B
C
(b) B
2
1C
2
1C
@ 3 2 1 A @ −4
A = @ 0 1 0 A = @ −4
A @ 3 2 1 A,
2 1 2
−1
0
1
0 0 1
−1
0
1
2 1 2
0
1
10
1
0
10
0
−1
1
1
1
1
1
0
0
−1 3
−1 3
2 B −1
B
C
CB
C
B
CB
4
1
3
1
3
4
B
C
C
=
0
1
0
2 2
=
(c) B
2
2
−1
A
@
A@
@
@
7 −7 −7 A
7 −7 −7 A@
6
5
8
6
5
8
0
0
1
−2
1
−2 1
3
−17
− 7 17 0 7
1
10
07
0
7
1 0 0
1
2
0
−5 16
6
−5 16
6
C
B
CB
B
C
1
3C
1.5.2. X = B
A = @ 0 1 0 A.
@ 3 −8 −3 A; X A = @ 3 −8 −3 A @ 0
0 0 1
1 −1 −8
−1
3
1
−1
3
1
1.5.3. (a)
0
1
(d) B
@0
0
1.5.4.
0
1
B
@a
b
!
0
1
!
1
,
0
0
1
0
0
1
0
!
1 0
1 2
(b)
, (c)
,
−5 1
0 1
0
1
0
1
1
0 0 0
0
0
B
B
1 0 0C
C
B0
C
B0
C,
3 A, (e) B
(f ) B
@ 0 −6 1 0 A
@0
1
0
0 0 1
1
10
0
1
B
0C
A @ −a
1
−b
0
1
0
1
0
1
0
1 0
B
0C
A = @0 1
1
0 00
0
1
0
0
0
0
1
0
0
1
2
−1 C
A.
3
1
1
0C
C
C.
0A
0
10
0
1 0 0
1
B
CB
0C
A = @ −a 1 0 A @ a
1
− b 10 1
b
1
0 0
M −1 = B
1 0C
@ −a
A.
ac − b −c 1
0
1
0
1
0
0C
A;
1
1.5.5. The ith row of the matrix multiplied by the ith column of the inverse should equal 1.
This is not possible if all the entries of the ith row are zero; see Exercise 1.2.24.
1.5.6. (a) A−1 =
(b) C =
2
3
0
!
−1
2
!
1
1
3 A.
1
30 3
2
1
3
, B −1 = @
−1
−1
0
1
, C −1 = B −1 A−1 = @
0
1
!
!
1
0
!
!
sin θ
a
x
x cos θ + y sin θ
.
(b)
= Rθ−1
=
.
cos θ
b
y
− x sin θ + y cos θ
!
cos θ − a − sin θ
(c) det(Rθ − a I ) = det
= (cos θ − a)2 + (sin θ)2 > 0
sin θ
cos θ − a
provided sin θ 6= 0, which is valid when 0 < θ < π.
1.5.7. (a) Rθ−1 =
1.5.8.
cos θ
− sin θ
1
1
3 A.
− 23
0
1
0
1
1 0 0
0 1 0
0 0 1
C
C
C
(a) Setting P1 = B
P2 = B
P3 = B
@0 1 0A,
@0 0 1A,
@1 0 0A,
0 0 1
1 0 0
0 1 0
1
0
1
0
1
0
0 0 1
1 0 0
0 1 0
C
C
C
P5 = B
P6 = B
P4 = B
@1 0 0A,
@0 1 0A,
@0 0 1A,
0 0 1
1 0 0
0 1 0
−1
−1
−1
−1
−1
we find P1 = P1 , P2 = P3 , P3 = P2 , P4 = P4 , P5 = P5 , P6−1 = P6 .
(b) P1 , P4 , P5 , P6 are their own inverses.
0
0
B
1
B
(c) Yes: P = B
@0
0
0
0
B
B0
1.5.9. (a) B
@0
1
0
0
1
0
0
1
0
0
1
0
0
0
0
0
0
1
1
1
0
0C
C
C interchanges two pairs of rows.
1A
0
0
1
0
B
0C
C
B1
C, (b) B
@0
0A
0
0
0
0
1
0
0
0
0
1
1
0
1
1
B
0C
C
B0
C, (c) B
@0
0A
0
0
0
0
0
1
0
1
0
0
0
1
1
0
B0
B
0C
C
B
C, (d) B 0
B
1A
@0
0
0
0
0
1
0
0
0
0
0
0
1
0
1
0
0
0
1
0
0C
C
C.
0C
C
1A
0
1.5.10.
(a) If i and j = π(i) are the entries in the ith column of the 2 × n matrix corresponding to
the permutation, then the entries in the j th column of the 2 × n matrix corresponding to
the permutation are j and i = π −1 (j). Equivalently, permute the columns so that the
second row is in order 1, 2, . . . , n and then switch the two rows.
(b) The permutations
correspond to !
!
!
!
1 2 3 4
1 2 3 4
1 2 3 4
1 2 3 4 5
(i)
, (ii)
, (iii)
, (iv )
.
4 3 2 1
4 1 2 3
1 3 4 2
1 4 2 5 3
The inverse permutations
correspond
!
! to
!
!
1 2 3 4
1 2 3 4
1 2 3 4
1 2 3 4 5
(i)
, (ii)
, (iii)
, (iv )
.
4 3 2 1
2 3 4 1
1 4 2 3
1 3 5 2 4
1.5.11. If a = 0 the first row is all zeros, and so A is singular. Otherwise, we make d → 0 by
an elementary row operation. If e = 0 then the resulting matrix has a row of all zeros.
Otherwise, we make h → 0 by another elementary row operation, and the result is a matrix
with a row of all zeros.
2
1.5.12. This is true if and
! only if A =! I , and so, according to Exercise 1.2.36, A is either of
±1
0
a
b
the form
or
, where a is arbitrary and b c = 1 − a2 .
0 ±1
c −a
1.5.13. (3 I − A)A = 3A − A2 = I , so 3 I − A is the inverse of A.
1.5.14. ( (1/c) A^{−1} ) (c A) = (1/c) c A^{−1} A = I .
c
1.5.15. Indeed, (An )−1 = (A−1 )n .
1.5.16. If all the diagonal entries are nonzero, then D −1 D = I . On the other hand, if one of
diagonal entries is zero, then all the entries in that row are zero, and so D is not invertible.
1.5.17. Since U −1 is also upper triangular, the only nonzero summand in the product of the ith
row of U and the ith column of U −1 is the product of their diagonal entries, which must
equal 1 since U U −1 = I .
♦ 1.5.18. (a) A = I −1 A I . (b) If B = S −1 AS, then A = S B S −1 = T −1 B T , where T = S −1 .
(c) If B = S −1 A S and C = T −1 B T , then C = T −1 (S −1 AS)T = (S T )−1 A(S T ).
♥ 1.5.19. (a) Suppose D
!
−1
=
X
Z
Y
W
!
. Then, in view of Exercise 1.2.37, the equation D D −1 =
I O
I =
requires A X = I , A Y = O, B Z = O, B W = I . Thus, X = A−1 , W = B −1
O I
and, since they are invertible, Y = A−1 O = O, Z = B −1 O = O.
0
1
B − 3
2
B
(b) B
3
@
0
1
2
3
− 13
0
0C
C
,
0C
A
1
3
0
1.5.20.
−1
B
B −2
B
@ 0
0
!
1
1
0
0
0
1
0
0
−5
2
0
0C
C
C.
3A
−1
1
!
1 −1
1
1 0 B
1 0
(a) B A =
1C
.
@0
A=
−1 −1 1
0 1
1
1
(b) A X = I does not0have a solution.
Indeed,
the first column of this matrix equation is
0 1
1
!
1
1 −1
x
C
=B
the linear system B
1C
@ 0 A, which has no solutions since x − y = 1, y = 0,
A
@0
y
0
1
1
and x + y = 0 are incompatible.
!
2
3 −1
(c) Yes: for instance, B =
. More generally, B A = I if and only if B =
−1 −1
1
!
1 − z 1 − 2z z
, where z, w are arbitrary.
−w 1 − 2w w
0
1
−2 y 1 − 2 v
1.5.21. The general solution to A X =
y
v C
A, where y, v are arbitrary.
−1
1
Any of these matrices serves as a right inverse. On the other hand, the linear system
Y A = I is incompatible and there is no solution.
I is X = B
@
1.5.22.
„
q «
(a) No. The only solutions are complex, with a = − 21 ± i 23 b, where b 6= 0 is any
nonzero complex number.
!
!
−1 1
1 0
(b) Yes. A simple example is A =
,B =
. The general solution to the
−1 0
0 1
!
x y
2 × 2 matrix equation has the form A = B M , where M =
is any matrix with
z w
tr M = x + w = −1, and det M = x w − y z = 1. To see this, if we set A = B M ,
then ( I + M )−1 = I + M −1 , which is equivalent to I + M + M −1 = O. Writing this
out using the formula (1.38) for the inverse, we find that if det M = x w − y z = 1 then
tr M = x+w = −1, while if det M 6= 1, then y = z = 0 and x+x−1 +1 = 0 = w+w −1 +1,
in which case, as in part (a), there are no real solutions.
0
1
B
B0
1.5.23. E = B
@0
0
0
−1
1.5.24. (a) @
1
0
3
(e) B
@9
1
−2
−7
−1
0
1
0
0
0
0
7
0
1
2
3 A,
1
3
1
−2
−6 C
A,
−1
1
0
0C
C
C,
0A
1
(b)
0
1
B
B0
−1
E
=B
@0
0
0
1
@ − 8
0
3
8
5
B − 8
B
1
(f ) B
@ − 2
7
8
0
1
0
0
1
3
8 A,
− 18
1
8
1
2
− 38
0
0
1
7
0
1
0
0C
C
C.
0A
1
0
1
3
4
5
5 A,
(c) @
− 54 35
1
0
5
−5
8C
B 2
C
,
(g) B
− 12 C
2
@
A
1
2
8
(d) no inverse,
3
2
−1
−1
1
1
2C
−1 C
A,
0
0
0
0
2
−6
−5
2
1
3
1
3
0
1
!
0
1
B
1
B
(h) B
@0
1.5.25.
(a)
(b)
!
!0
1
−2
0
0
1
0
1
0
0
3
1
B
−13
B
(i) B
@ 21
!
!
0
−8
!
1
0 @ 35 0 A
(c)
4
0 1
3 1
(d) not
possible,
10
0
1
1 0 0
CB
(e) B
@3 1 0A@ 0
−2
0 0 1
1
0
1
−3 C
C
C,
−3 A
1
−51
8
2
−3
−1
5
1
0
−2
=
1
!
1 3
=
0 1
1
0
0
5
3
!0
@1
0
10
1
3
1
3
12
3
−5
−1
1
3
1C
C
C.
−1 A
0
!
−2
,
−3
!
3
,
1
1
0
− 43 A @ 35
= 4
1
5
10
1
− 45 A
3
5
,
10
1
1 0
0
1
0 0
1
0 0
0 0
CB
B
B
0C
1 0C
1 0C
A
A @ 0 −1 0 A @ 0 1
A@0
0
0
−1
0
0
1
0
−1
1
0
1
0
10
1
0
1
1 0 −2
1 0
0
1
0 −2
B
B
C
B
0C
0C
@0 1
A @ 0 1 −6 A = @ 3 −1
A,
0
0
1
0
0
1
−2
1
−3
0
10
10
10
10
1
1 0 0
1 0 0
1 0 0
1
0 0
1 0 0
B
CB
CB
CB
CB
(f ) @ 3 1 0 A @ 0 1 0 A @ 0 1 0 A @ 0 −1 0 A @ 0 1 0 C
A
0 0 1 0 2 0 11 0 0 3 11 0 0
0 1
1 00 0 8 1
1 0 3
1 0 0
1 2 0
1 2 3
B
CB
CB
C
B
C
@ 0 1 0 A @ 0 1 4 A @ 0 1 0 A = @ 3 5 5 A,
0 0 1 1 00 0 1 1 00 0 1 1 0 2 1 2 1
0
10
1 0 0
1 0 0
2 0 0
1
0 0
1 0
0
CB
CB
CB
CB
(g) B
0C
@ 0 0 1 A @ 0 1 0 A @ 0 1 0 A @ 0 −1 0 A @ 0 1
A
0 1 0 0 2 0 11 0 0 0 1 1 0
0
0 1 1 00 0 −1 1
1 0 1
1 0
0
2
1 2
1 21 0
B
CB
CB
C
B
2 3C
@ 0 1 0 A @ 0 1 −1 A @ 0
A,
1 0A = @4
0 0 1
0 0
1
0 −1 1
0
0
1
10
10
10
1
0
10
1 0
0 0
2 0 0 0
1
0 0 0
1 0 0 0
1 0 0 0
1
CB
B
CB
C
B
B1
0 0C
0 0 1 0C
CB 0 1 0 0 C B 0 − 2 0 0 C
C B 2 1 0 0 CB 0 1
CB
CB
CB
C
B
CB
(h) B
@ 0 1 0 0 A @ 0 0 1 0 A@ 0 0
1 0 A@ 0 0 1 0 A @ 0
0 1 0A
0 0 0 1
0 0 −2 1
0 0 0 1
0
0 0 1
0 0 0 1
10
10
10
1
0
1
0
1 0 0 0
1 0 0 0
2 1
0
1
1 21 0 0
1 0 0 21
CB
CB
CB
C
B
B
1
3C
B 0 1 0 3 CB 0 1 0 0 CB 0
B0 0
C
B0 1 0
0C
1 0 0C
B
CB
CB
CB
C=B
C,
@0 0 1
0 −1 A
0 A@ 0 0 1 0 A @ 0 0 1 3 A @ 0 0 1 0 A @ 1 0
0 0 −2 −5
0 0 0 110 0 0 0 110 0 0 0 110 0 0 0 10
1
0
1
1 0 0 0
1 0 0 0
1 0 0 0
1 0 0 0
1
0 0 0
B
CB
CB
CB
CB
0 1 0 0 C B 2 1 0 0 CB 0 1 0 0 CB 0 1 0 0 CB 0
1 0 0C
C
B
CB
CB
CB
CB
C
(i) B
@ 0 0 0 1 A @ 0 0 1 0 A@ 0 0 1 0 A@ 0 2 1 0 A@ 0
0 1 0A
00 0 1 0
0100 0 1
3 100 0 1
0 100 0 1
0 −1
0 1
10
1
1 0 0 1
1 0 0
0
1 0 0
0
1 0
0 0
1 0 0
0
CB
CB
CB
B
CB
0C
0 0 CB 0 1 0
0 CB 0 1 0 0 CB 0 1 0 −2 CB 0 1 0
C
B0 1
B
CB
CB
CB
CB
C
@ 0 0 −1 0 A@ 0 0 1
0 A@ 0 0 1 −5 A
0 A@ 0 0 1 0 A@ 0 0 1
0 00 0
1
0 10 0
1
0 0
0
0 1
0 0 0 −1 100 0 0 1
10
1
1 0 1 0
1 0 0 0
1 −2 0 0
1 −2 1 1
B
CB
CB
B
C
1 0 0C
B 0 1 0 0 CB 0 1 1 0 CB 0
C
B 2 −3 3 0 C
B
CB
CB
C=B
C.
@ 0 0 1 0 A@ 0 0 1 0 A@ 0
0 1 0 A @ 3 −7 2 4 A
0 0 0 1
0 0 0 1
0
0 0 1
0
2 1 1
1.5.26. Applying Gaussian Elimination:
0
E1 = @
E2
E3
0
=@
1
− √1
3
1
0
0
√2
=@ 3
0
E4 = @
0
1
0
1
0
√ A
3
2
0
1
0√
3
2
E1 A = B
@
1
0
A,
1
0
0√
3
E 2 E1 A = @ 2
,
0
A,
E 3 E2 E1 A = @
0
−i
−1
1
−1
1 C
A,
−i
1
− 12 A
,
1
− √1
1
0
10
3+ i
(d) B
@ −4 + 4 i
−1 + 2 i
0
√2
3
10 √
3
A@ 2
−1 − i
2− i
1− i
0
1
3 A,
0
1
E 4 E3 E2 E1 A = I =
1 0
1
A@
and hence A = E1−1 E2−1 E3−1 E4−1 = @ √1
1
0
3
0
1
!
1
− 2i
−1
1− i
2
@
A
1.5.27. (a)
, (b)
,
1
i
1+ i
−1
2 −2
0
1
0
i
(c) B
@1 − i
−1
A,
√2
3
0
1
1
√1
3 A,
1
− 12 C
1
0
0
1
10
!
,
− √1
0 A@ 1
1
0
1
3 A.
1
1
−i
2+ i C
A.
1
“
”
1.5.28. No. If they have the same solution, then they both reduce to I | x under elementary
row operations. Thus, by applying the appropriate
“
” elementary row operations to reduce the
augmented matrix of the first system to I | x , and then applying the inverse elementary row operations we arrive at the augmented matrix for second system. Thus, the first
system can be changed into the second by the combined sequence of elementary row operations, proving equivalence. (See also Exercise 2.5.44 for the general case.)
♥ 1.5.29.
e =E E
(a) If A
N N −1 · · · E2 E1 A where E1 , . . . , EN represent the row operations applied to
e =A
eB =E E
A, then C
N N −1 · · · E2 E1 A B = EN EN −1 · · · E2 E1 C, which represents
the same sequence of row operations applied to C.
(b)
0
10
1
0
1
0
10
1
1
2 −1
1 −2
8 −3
1 0 0
8 −3
B
B
C
B
CB
C
(E A) B = B
2C
0C
@ 2 −3
A@ 3
A = @ −9 −2 A = @ 0 1 0 A @ −9 −2 A = E (A B).
−2 −3 −2
−1
1
−9
2
−2 0 1
7 −4
0
1
1
2A
− 14
1
!
!
5
2
2
2
17
17 A
@
=
;
(b)
=
;
3
1
3
12
2
−
4
17
17
0
1
10
1
0
10
1
0 1
5
3
2 −2
3 C B 14 C
9 −15 −8
3
2
2 CB
B
B
CB
C
B C
C
CB − 2 C = B
;
(d)
6
−10
−5
−1
=
3 A;
(c) B
@
@
A
@
A
5A
−1
0 A@
@
@1
A
1
1
−1
2
1
5
0
−2
2
0
2 −2
0
10
1
0
1
0
10
1
0
1
1
0
1
1
4
3
−4
3 1
3
−4
B
CB
C
B
0
0 −1 −1 CB 11 C B 1 C
CB
C
B
C
C
B
CB
C=B
C;
(e) B
(f ) B
@ 2 −1 0 A @ 5 A = @ 1 A;
@ 2 −1 −1
0 A@ −7 A @ 4 A
1
1.5.30. (a) @ 21
4
0
3
−1
1
−2
1
!
−7
0
1
0
1
@ − 2A
−3
2
−1
−1
−1
6
−2
0
1
@
−4
− 21
B
B − 5
2
B
(g) B
B
0
1
1
−2
−3
−1
1
−1
C
(e) B
@ −4 A,
−1
1.5.32.
1
−3
(a)
2
1
!
0 1
1 0
0
2
1
(c) B
4
@2
0 −2
(b)
0
1
(d) B
@0
0
0
0
0
1
!
=
1
10
1
3
2
2
1
2
1
−3
!
0
0
1
!
1
0
0
7
0
1
7
5 A,
(c) @
− 15
1
0
− 21
C
(g) B
@ 0 A,
1
!
1
0
!
10
1
1
1
1
−1
0
5
1
B
−2 C
A = @2
3
1
−3
−1
−1
0
1
2
B1
1C
A=@2
1
2
2
1.5.33.
0
1
− 73 A
,
(a) @
5
7
1.6.1. (a) ( 1
!
0
1
0
10
0
2 0
B
1
0C
A@ 0 2
0 0
1
0
1
0
1 −1
1
2
1
0 0
B
C
B
1
−4
1
5
1
1 0
C
B
B
C=B
(f ) B
@1
2 −1 −1 A @ 1 −1 1
3
1
1
6
3 − 34 1
0
10
1
0
1 0 0 0
1
0
2 −3
1
B
CB
C
0
1C B
B 0 1 0 0 CB 2 −2
B2
CB
C=B
(g) B
@ 0 0 0 1 A@ 1 −2 −2 −1 A
@0
1
0 0 1 0
0
1
1
2
2
(e) B
@1
1
!
2
;
1
0 4
1 0
−7 0
=
−7 2
0 1
0 4
10
1
0
2 0
2
1
0 0
B
B
−1 C
1 0C
A@ 0 3
A = @1
0 0
1
0 − 32 1
0
1
B
1C
A@1
0
2
(b)
5 ),
!
−8
,
3
(b)
1
0
3
− 2C B
CB
C
3 CB
1C
C
B
− 2 CB 3 C B
2C
CB
C.
C=B
C
B
B
3C
−3 C
A@
A
@ − 1A
1
3
2
−2
−2
1
1
(b) @ 41 A,
0 4 1
1
8C
B
C
B − 1 C,
(f ) B
2A
@
5
8
− 13 A
1.5.31. (a) @
,
− 23
0
0
0
1
1
0
1
1
6C
B
C
B − 2 C,
(c) B
3A
@
2
3
1
1
!
0
,
2
(d) singular matrix,
0
4
1
B
B
(i) B
@
3
1
−28
−7 C
C
C.
12 A
3
!
1 − 27
0
1
1
10
0
1
1 21
CB
0 A@ 0 1 −1 C
A;
−1
0 0
1
10
10
0
1
0
0
1
B
B
0C
0C
A@ 0 −3
A@ 0
0
1
0
0 −7
10
1
1
0
1
− 23
5
1
7C
3 A;
1
0
1
1
C
B
0C
A@ 0
1 0 A;
1
0
0 1
10
10
1
0
1
0
0 0
1 −1 1
2
CB
CB
0 CB 0 −3
0 0 CB 0
1 0 −1 C
C
CB
CB
C;
0 A@ 0
0 −2 0 A@ 0
0 1
0A
1
0
0
0 4
0
0 0
1
10
10
1 0 2
1
0
0
0
0 0 0
B
B
0
0C
1 0 0C
CB 0 −2
CB 0 1 2
CB
CB
0 −1
0 A@ 0 0 1
− 21 1 0 A@ 0
1 0 1
0
0
0 −5
0 0 0
0
1
0
1
1
−12
C
B
C
(d) B
@ −2 A, (e) @ −3 A,
0
7
(c)
0
B
−10 C
C
B
C,
(h) B
@ −8 A
1
2
!
2
,
1
0
1
(d) B
@ 2
−1
0
7 1
3
B
C
B 2 C
C,
(f ) B
B
C
@ 5 A
5
−3
0
B
B
(g) B
@
1
−3
− 27 C
C
.
11 C
−2 A
1
1
0
1C
C
C.
0A
−2
1
2
0C
A,
2
1
(f )
3
1.6.2. AT = B
−1
@
−1
1
2C
A,
1
0
(e) B
@
1
2C
A,
−3
3
4
1
0
T
1
2
T
!
−2
2
(A B) = B A =
(g) B
@
−1
2
BT =
T
0
!
5
,
6
0
,
6
2
0
1
2
−1
0
3
2
!
1
1
1C
A.
5
−3
,
4
T
T
(B A) = A B
T
0
=B
@
−1
5
3
1
6
−2
−2
−5
11 C
A.
7
1.6.3. If A has size m × n and B has size n × p, then (A B)T has size p × m. Further, AT has
size n × m and B T has size p × n, and so unless m = p the product AT B T is not defined.
If m = p, then AT B T has size n × n, and so to equal (A B)T , we must have m = n = p,
so the matrices are square. Finally, taking the transpose of both sides, A B = (A T B T )T =
(B T )T (AT )T = B A, and so they must commute.
♦ 1.6.4. The (i, j) entry of C = (A B)^T is the (j, i) entry of A B, so
c_ij = Σ_{k=1}^{n} a_jk b_ki = Σ_{k=1}^{n} b̃_ik ã_kj ,
where ã_ij = a_ji and b̃_ij = b_ji are the entries of A^T and B^T respectively. Thus, c_ij equals the (i, j) entry of the product B^T A^T.
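A quick numerical confirmation of the identity just proved, using numpy (a sketch, not part of the original solution):

import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 3))
B = rng.standard_normal((3, 4))
print(np.allclose((A @ B).T, B.T @ A.T))  # (A B)^T = B^T A^T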
1.6.5. (A B C)T = C T B T AT
1.6.6. False. For example, [ 1 1 ; 0 1 ] does not commute with its transpose.
♦ 1.6.7. If A = [ a b ; c d ], then A^T A = A A^T if and only if b^2 = c^2 and (a − d)(b − c) = 0. So either b = c, or c = − b ≠ 0 and a = d. Thus all normal 2 × 2 matrices are of the form [ a b ; b d ] or [ a b ; −b a ].
1.6.8.
−T
(a) (A B)−T = ((A
B)T )−1 = (B T AT )−1 = !(AT )−1 (B T )−1 = A−T B!
.
!
1 0
1 −2
0 −1
−T
−T
(b) A B =
, so (A B)
=
, while A
=
, B −T =
2 1
0
1
1
1
!
1 −2
−T −T
so A B
=
.
0
1
1
−1
!
−1
,
2
1.6.9. If A is invertible, then so is AT by Lemma 1.32; then by Lemma 1.21 A AT and AT A are
invertible.
1.6.10. No; for example, [ 1 ; 2 ] ( 3 4 ) = [ 3 4 ; 6 8 ], while [ 3 ; 4 ] ( 1 2 ) = [ 3 6 ; 4 8 ].
1.6.11. No. In general, B T A is the transpose of AT B.
♦ 1.6.12.
(a) The ith entry of A ej is the product of the ith row of A with ej . Since all the entries in
ej are zero except the j th entry the product will be equal to aij , i.e., the (i, j) entry of A.
(b) By part (a), ê_i^T A e_j is the product of the row matrix ê_i^T and the j th column of A. Since
all the entries in ê_i^T are zero except the i th entry, multiplication by the j th column of A will produce a_ij .
♦ 1.6.13.
(a) Using Exercise 1.6.12, aij = eT
A ej = eT
ej = bij for all i, j.
i B!
!i
!
1 2
1 1
0 0
(b) Two examples: A =
, B=
; A=
, B=
0 1
1 1
0 0
0
1
!
−1
.
0
♦ 1.6.14.
(a) If pij = 1, then P A maps the j th row of A to its ith row. Then Q = P T has qji = 1,
and so it does the reverse, mapping the ith row of A to its j th row. Since this holds for
all such entries, the result follows.
!
cos θ − sin θ
(b) No. Any rotation matrix
also has this property. See Section 5.3.
sin θ
cos θ
♦ 1.6.15.
(a) Note that (A P T )T = P AT , which permutes the rows of AT , which are the columns of
A, according to the permutation P .
(b) The effect of multiplying P A P T is equivalent to simultaneously permuting rows and
columns of A according to the permutation P . Associativity of matrix multiplication
implies that it doesn’t matter whether the rows or the columns are permuted first.
♥ 1.6.16.
(a) Note that w^T v is a scalar, and so
A A^{−1} = ( I − v w^T )( I − c v w^T ) = I − (1 + c) v w^T + c v (w^T v) w^T = I − (1 + c − c w^T v) v w^T = I
provided c = 1/(v^T w − 1), which works whenever w^T v ≠ 1.
5
1
− 12
2 −2
T
−1
T
1
1
4
(b) A = I − v w =
and c = T
= 4 , so A = I − 4 v w = 3
1 .
3 −5
v w−1
4 −2
(c) If v^T w = 1 then A is singular, since A v = 0 and v ≠ 0, and so the homogeneous system does not have a unique solution.
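The formula in part (a) is easy to verify numerically; the numpy sketch below (not part of the original solution) uses arbitrary vectors with w^T v ≠ 1.

import numpy as np

v = np.array([1.0, 2.0, -1.0])
w = np.array([0.5, 1.0, 3.0])           # here w^T v = -0.5, which is != 1
A = np.eye(3) - np.outer(v, w)          # A = I - v w^T
c = 1.0 / (w @ v - 1.0)
A_inv = np.eye(3) - c * np.outer(v, w)  # claimed inverse I - c v w^T
print(np.allclose(A @ A_inv, np.eye(3)))  # True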
1.6.17. (a) a = 1;
1.6.18. 0
1
(a) B
@0
0
0
1
B
B0
(b) B
@0
0
0
1
B
B0
B
@0
0
0
1
0
0
1
0
0
0
0
0
1
(b) a = −1, b = 2, c = 3;
1 0
0
0 1
B
0C
A, @ 1 0
1
0 0
1 0
0 0
0
B
0 0C
C B1
C, B
1 0A @0
0 11 00
0 0
1
B
0 1C
C B0
C, B
1 0A @0
0 0
0
1 0
(c) a = −2, b = −1, c = −5.
1 0
1
0
0 0 1
1 0 0
B
C B
C
0C
A, @ 0 1 0 A , @ 0 0 1 A.
1
1 10 0 0
0 1 10 0
1 0 0
0 0 1 0
0
B
C B
0 0 0C
C B0 1 0 0C B0
C, B
C, B
0 1 0A @1 0 0 0A @0
0 0 11 00 0 0 11 01
0 0 0
0 1 0 0
0
B
C B
1 0 0C
C B1 0 0 0C B0
C, B
C, B
0 0 1A @0 0 0 1A @1
0 1 0
0 0 1 0
0
1.6.19. True, since (A2 )T = (A A)T = AT AT = A A = A2 .
0
1
0
0
0
0
0
1
0
0
1
0
1
0
0
0
1 0
1
1
B
0C
C B0
C, B
0A @0
01 00
0
0
B
1C
C B0
C, B
0A @0
0
1
0
0
1
0
0
0
1
0
0
1
0
0
0
1
0
0
1
0
0C
C
C,
0A
11
1
0C
C
C.
0A
0
♦ 1.6.20. True. Invert both sides of the equation AT = A, and use Lemma 1.32.
0
1
♦ 1.6.21. False. For example
!
1
0
2
1
1
3
!
!
1
2
=
3
.
1
1.6.22.
(a) If D is a diagonal matrix, then for all i ≠ j we have a_ij = a_ji = 0, so D is symmetric.
(b) If L is lower triangular, then a_ij = 0 for i < j; if it is also symmetric, then a_ji = 0 for i < j, so L is diagonal. Conversely, if L is diagonal, then a_ij = 0 for all i ≠ j, so L is lower triangular and it is symmetric.
1.6.23.
(a) Since A is symmetric we have (A^n)^T = (A A · · · A)^T = A^T A^T · · · A^T = A A · · · A = A^n.
(b) (2 A^2 - 3 A + I)^T = 2 (A^2)^T - 3 A^T + I = 2 A^2 - 3 A + I.
(c) If p(A) = c_n A^n + · · · + c_1 A + c_0 I, then p(A)^T = c_n (A^n)^T + · · · + c_1 A^T + c_0 I^T = c_n (A^T)^n + · · · + c_1 A^T + c_0 I = p(A^T). In particular, if A = A^T, then p(A)^T = p(A^T) = p(A).
1.6.24. If A has size m × n, then AT has size n × m and so both products are defined. Also,
K T = (AT A)T = AT (AT )T = AT A = K and LT = (AAT )T = (AT )T AT = A AT = L.
1.6.25.
(a)
1
1
(b)
−2
3
0
1
4
1
(c) B
@ −1
0
−1
1
B
B −1
(d) B
@ 0
3
1.6.26.
!
=
3
−1
!
−1
3
2
0
1
!
1
0
1
=
− 23
1
0
0
3
0
1
!
−1
1
B
2C
A = @ −1
0
−1
−1
2
2
0
M2 =
1
1
0
2
−1
0
1
1
2
0
1
1
0
!
1
0
0
3
2
0
0
1
1
2
!0
1
B1
B
B2
M4 = B
B 0
@
0
1
1
−2
0
3
1
B
0C
C
B −1
C=B
0A @ 0
1
3
!
1
0
0
!
!
7
2
10
0
1
B
0C
A@ 0
1
0
0
1
2
3
0
0
1
6
5
1
@1
1
2 A,
0
1
0
0
1
0
2
3
0
,
1
3
4
!
− 32 ,
1
10
0
0
1
B
2
0C
A@ 0
0
0 − 32
1
0
10
0
1
B
0C
CB 0
CB
0 A@ 0
1
0
0
1
0
0
0
B
1
1
M3 = B
@2
10
2
0
CB
B
C
0 CB 0
CB
B
0C
A@ 0
1
0
0
3
2
0
0
0
0
0
4
3
0
−1
1
0
0
0
−5
0
0
1
2
3
1
1C
2 A,
1
10
0
1
B
0C
CB 0
CB
0 A@ 0
− 49
0
5
10
2
0
CB
B0
0C
A@
1
0
10
1
0
CB
B
C
0 CB 0
CB
B
0C
A@ 0
5
4
−1
0
1
2
1
0
0
0
3
2
0
0
2
3
1
0
−1
1
0
0
0
2
1
0
10
0 B1
CB
0C
AB
@0
4
0
3
1
0
C
0C
C
1
3
3C
C
.
6C
A
5
1
1
2
1
0
1
0C
2C
C,
3A
1
C.
3C
4A
1
♦ 1.6.27. The matrix is not regular, since after the first set of row operations the (2, 2) entry is 0.
More explicitly, if
L = \begin{pmatrix} 1 & 0 & 0 \\ a & 1 & 0 \\ b & c & 1 \end{pmatrix},  D = \begin{pmatrix} p & 0 & 0 \\ 0 & q & 0 \\ 0 & 0 & r \end{pmatrix},  then  L D L^T = \begin{pmatrix} p & a p & b p \\ a p & a^2 p + q & a b p + c q \\ b p & a b p + c q & b^2 p + c^2 q + r \end{pmatrix}.
Equating this to A, the (1, 1) entry requires p = 1, and so the (1, 2) entry requires a = 2, but the (2, 2) entry then implies q = 0, which is not an allowed diagonal entry for D. Even if we ignore this, the (1, 3) entry would set b = 1, but then the (2, 3) entry says a b p + c q = 2 ≠ -1, which is a contradiction.
♦ 1.6.28. Write A = L D V; then A^T = V^T D L^T = L̃ Ũ, where L̃ = V^T and Ũ = D L^T. Thus, A^T is regular, since the diagonal entries of Ũ, which are the pivots of A^T, are the same as those
of D and U , which are the pivots of A.
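The regularity issue in 1.6.27–1.6.28 can also be explored numerically. The sketch below is our own code (the function and variable names are not from the text): it computes an A = L D V factorization by elimination without pivoting, and fails exactly when a zero pivot appears, i.e., when the matrix is not regular; for a symmetric regular matrix it returns V = L^T, giving the L D L^T factorization discussed above.

import numpy as np

def ldv(A):
    """Return L (unit lower), D (diagonal), V (unit upper) with A = L D V.
    Raises ValueError if A is not regular (a zero pivot appears)."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    L = np.eye(n)
    U = A.copy()
    for j in range(n - 1):
        if U[j, j] == 0:
            raise ValueError("matrix is not regular: zero pivot in column %d" % (j + 1))
        for i in range(j + 1, n):
            L[i, j] = U[i, j] / U[j, j]
            U[i, :] -= L[i, j] * U[j, :]
            U[i, j] = 0.0                 # exact zero below the pivot
    if U[n - 1, n - 1] == 0:
        raise ValueError("matrix is not regular: zero pivot in column %d" % n)
    D = np.diag(np.diag(U))
    V = np.diag(1.0 / np.diag(U)) @ U     # strip the pivots off U
    return L, D, V

# For a symmetric regular matrix, V equals L^T, so A = L D L^T.
A = np.array([[4.0, 2.0, 2.0], [2.0, 3.0, 1.0], [2.0, 1.0, 5.0]])
L, D, V = ldv(A)
print(np.allclose(V, L.T), np.allclose(L @ D @ V, A))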
!
0 1
. (c) No,
−1 0
because the (1, 1) entry is always 0. (d) Invert both sides of the equation J T = − J and
use Lemma 1.32. (e) (J T )T = J = − J T , (J ± K)T = J T ±!K T = − J!∓ K = − (J ± !
K).
0 1
0 1
−1
0
J K is not, in general, skew-symmetric; for instance
=
.
−1 0
−1 0
0 −1
(f ) Since it is a scalar, vT J v = (vT J v)T = vT J T (vT )T = − vT J v equals its own
negative, and so is zero.
♥ 1.6.29. (a) The diagonal entries satisfy jii = − jii and so must be 0. (b)
1.6.30.
(a) Let S = \frac{1}{2}(A + A^T), J = \frac{1}{2}(A - A^T). Then S^T = S, J^T = -J, and A = S + J.
(b) \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} = \begin{pmatrix} 1 & 5/2 \\ 5/2 & 4 \end{pmatrix} + \begin{pmatrix} 0 & -1/2 \\ 1/2 & 0 \end{pmatrix};
\begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix} = \begin{pmatrix} 1 & 3 & 5 \\ 3 & 5 & 7 \\ 5 & 7 & 9 \end{pmatrix} + \begin{pmatrix} 0 & -1 & -2 \\ 1 & 0 & -1 \\ 2 & 1 & 0 \end{pmatrix}.
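A two-line NumPy check of this decomposition, reusing the 3 × 3 example above (our own illustration, not part of the printed solution):

import numpy as np

A = np.arange(1.0, 10.0).reshape(3, 3)   # the matrix with entries 1, ..., 9
S = (A + A.T) / 2                        # symmetric part
J = (A - A.T) / 2                        # skew-symmetric part
print(np.allclose(S, S.T), np.allclose(J, -J.T), np.allclose(S + J, A))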
1.7.1.
(a) The solution is x = -10/7, y = -19/7. Gaussian Elimination and Back Substitution requires 2 multiplications and 3 additions; Gauss–Jordan also uses 2 multiplications and 3 additions; finding A^{-1} = \begin{pmatrix} 1/7 & 2/7 \\ -3/7 & 1/7 \end{pmatrix} by the Gauss–Jordan method requires 2 additions and 4 multiplications, while computing the solution x = A^{-1} b = \begin{pmatrix} -10/7 \\ -19/7 \end{pmatrix} takes another 4 multiplications and 2 additions.
(b) The solution is x = -4, y = -5, z = -1. Gaussian Elimination and Back Substitution requires 17 multiplications and 11 additions; Gauss–Jordan
uses 20 multiplications
1
0 −1 −1
C
and 11 additions; computing A−1 = B
@ 2 −8 −5 A takes 27 multiplications and 12
3
2 −5 −3
additions, while multiplying A−1 b = x takes another 9 multiplications and 6 additions.
(c) The solution is x = 2, y = 1, z = 52 . Gaussian Elimination and Back Substitution
requires 6 multiplications and 5 additions;
Gauss–Jordan
is the same: 6 multiplications
0
1
1
3
3
2
2C
B − 2
1
1C
B − 1
C takes 11 multiplications and 3
and 5 additions; computing A−1 = B
2
2
2A
@
2
1
− 5 0 −5
−1
additions, while multiplying A b = x takes another 8 multiplications and 5 additions.
1.7.2.
(a) For a general matrix A, each entry of A2 requires n multiplications and n − 1 additions,
for a total of n3 multiplications and n3 − n2 additions, and so, when compared with
the efficient version of the Gauss–Jordan algorithm, takes exactly the same amount of
computation.
(b) A3 = A2 A requires a total of 2 n3 multiplications and 2 n3 − 2 n2 additions, and so is
about twice as slow.
(c) You can compute A^4 as A^2 A^2, and so only 2 matrix multiplications are required. In general, if 2^r ≤ k < 2^{r+1} has j ones in its binary representation, then you need r multiplications to compute A^2, A^4, A^8, . . . , A^{2^r}, followed by j - 1 multiplications to form A^k as a product of these particular powers, for a total of r + j - 1 matrix multiplications, and hence a total of (r + j - 1) n^3 multiplications and (r + j - 1) n^2 (n - 1) additions. See
Exercise 1.7.8 and [ 11 ] for more sophisticated ways to speed up the computation.
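The repeated-squaring idea in part (c) can be sketched in a few lines of Python; the function below is our own illustration (the name and the test matrix are not taken from the text).

import numpy as np

def matrix_power(A, k):
    """Compute A**k using the binary representation of k, so only about
    log2(k) matrix multiplications are needed instead of k - 1."""
    result = np.eye(A.shape[0])
    square = A.copy()
    while k > 0:
        if k & 1:                  # this bit of k is set: multiply this power in
            result = result @ square
        square = square @ square   # next power A^(2^r)
        k >>= 1
    return result

A = np.array([[2.0, 1.0], [1.0, 1.0]])
print(np.allclose(matrix_power(A, 13), np.linalg.matrix_power(A, 13)))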
1.7.3. Back Substitution requires about one half the number of arithmetic operations as multiplying a matrix times a vector, and so is twice as fast.
♦ 1.7.4. We begin by proving (1.61). We must show that 1 + 2 + 3 + . . . + (n − 1) = n(n − 1)/2
for n = 2, 3, . . .. For n = 2 both sides equal 1. Assume that (1.61) is true for n = k. Then
1 + 2 + 3 + . . . + (k − 1) + k = k(k − 1)/2 + k = k(k + 1)/2, so (1.61) is true for n = k + 1. Now
the first equation in (1.62) follows if we note that 1 + 2 + 3 + . . . + (n − 1) + n = n(n + 1)/2.
Next we prove the first equation in (1.60), namely 2 + 6 + 12 + · · · + (n - 1) n = n^3/3 - n/3 for n = 2, 3, . . . . For n = 2 both sides equal 2. Assume that the formula is true for n = k. Then 2 + 6 + 12 + · · · + (k - 1) k + k (k + 1) = k^3/3 - k/3 + k^2 + k = (k + 1)^3/3 - (k + 1)/3, so the formula is true for n = k + 1, which completes the induction step. The proof of the second equation is similar, or, alternatively, one can use the first equation and (1.61) to show that
\sum_{j=1}^{n} (n - j)^2 = \sum_{j=1}^{n} (n - j)(n - j + 1) - \sum_{j=1}^{n} (n - j) = \frac{n^3 - n}{3} - \frac{n^2 - n}{2} = \frac{2 n^3 - 3 n^2 + n}{6}.
♥ 1.7.5. We may assume that the matrix is regular, so P = I , since row interchanges have no
effect on the number of arithmetic operations.
(a) First, according to (1.60), it takes n^3/3 - n/3 multiplications and n^3/3 - n^2/2 + n/6 additions to factor A = L U. To solve L c_j = e_j by Forward Substitution, the first j - 1 entries of c_j are automatically 0, the j th entry is 1, and then, for k = j + 1, . . . , n, we need k - j - 1 multiplications and the same number of additions to compute the k th entry, for a total of \frac{1}{2}(n - j)(n - j - 1) multiplications and additions to find c_j. Similarly, to solve U x_j = c_j for the j th column of A^{-1} requires \frac{1}{2} n^2 + \frac{1}{2} n multiplications and, since the first j - 1 entries of c_j are 0, also \frac{1}{2} n^2 - \frac{1}{2} n - j + 1 additions. The grand total is n^3 multiplications and n (n - 1)^2 additions.
(b) Starting with the large augmented matrix M = ( A | I ), it takes \frac{1}{2} n^2 (n - 1) multiplications and \frac{1}{2} n (n - 1)^2 additions to reduce it to triangular form ( U | C ) with U upper triangular and C lower triangular, then n^2 multiplications to obtain the special upper triangular form ( V | B ), and then \frac{1}{2} n^2 (n - 1) multiplications and, since B is upper triangular, \frac{1}{2} n (n - 1)^2 additions to produce the final matrix ( I | A^{-1} ). The grand total is n^3 multiplications and n (n - 1)^2 additions. Thus, both methods take the same amount of work.
1.7.6. Combining (1.60–61), we see that it takes n^3/3 + n^2/2 - 5n/6 multiplications and n^3/3 - n/3 additions to reduce the augmented matrix to upper triangular form ( U | c ). Dividing the j th row by its pivot requires n - j + 1 multiplications, for a total of \frac{1}{2} n^2 + \frac{1}{2} n multiplications to produce the special upper triangular form ( V | e ). To produce the solved form ( I | d ) requires an additional \frac{1}{2} n^2 - \frac{1}{2} n multiplications and the same number of additions, for a grand total of n^3/3 + 3n^2/2 - 5n/6 multiplications and n^3/3 + n^2/2 - 5n/6 additions needed to solve the system.
1.7.7. Less efficient, by, roughly, a factor of 3/2. It takes n^3/2 + n^2 - n/2 multiplications and n^3/2 - n/2 additions.
♥ 1.7.8.
(a) D1 + D3 − D4 − D6 = (A1 + A4 ) (B1 + B4 ) + (A2 − A4 ) (B3 + B4 ) −
− (A1 + A2 ) B4 − A4 (B1 − B3 ) = A1 B1 + A2 B3 = C1 ,
D4 + D7 = (A1 + A2 ) B4 + A1 (B2 − B4 ) = A1 B2 + A2 B4 = C2 ,
D5 − D6 = (A3 + A4 ) B1 − A4 (B1 − B3 ) = A3 B1 + A4 B3 = C3 ,
D1 − D2 − D5 + D7 = (A1 + A4 ) (B1 + B4 ) − (A1 − A3 ) (B1 + B2 ) −
− (A3 + A4 ) B1 + A1 (B2 − B4 ) = A3 B2 + A4 B4 = C4 .
(b) To compute D1 , . . . , D7 requires 7 multiplications and 10 additions; then to compute
C1 , C2 , C3 , C4 requires an additional 8 additions for a total of 7 multiplications and 18
additions. The traditional method for computing the product of two 2 × 2 matrices requires 8 multiplications and 4 additions.
(c) The method requires 7 multiplications and 18 additions of n × n matrices, for a total of 7 n^3 multiplications and 7 n^2 (n - 1) + 18 n^2 ≈ 7 n^3 additions, versus 8 n^3 multiplications and 8 n^2 (n - 1) ≈ 8 n^3 additions for the direct method, so there is a savings by a factor of 8/7.
(d) Let µ_r denote the number of multiplications and α_r the number of additions to compute the product of 2^r × 2^r matrices using Strassen's Algorithm. Then µ_r = 7 µ_{r-1}, while α_r = 7 α_{r-1} + 18 · 2^{2r-2}, where the first factor comes from multiplying the blocks, and the second from adding them. Since µ_1 = 1, α_1 = 0. Clearly, µ_r = 7^r, while an induction proves the formula for α_r = 6 (7^{r-1} - 4^{r-1}), namely
α_{r+1} = 7 α_r + 18 · 4^{r-1} = 6 (7^r - 7 · 4^{r-1}) + 18 · 4^{r-1} = 6 (7^r - 4^r).
Combining the operations, Strassen's Algorithm is faster by a factor of
2 n^3 / (µ_r + α_r) = 2^{3r+1} / (13 · 7^{r-1} - 6 · 4^{r-1}),
which, for r = 10, equals 4.1059, for r = 25, equals 30.3378, and, for r = 100, equals 678,234, which is a remarkable savings — but bear in mind that the matrices have size around 10^{30}, which is astronomical!
(e) One way is to use block matrix multiplication, in the trivial form \begin{pmatrix} A & O \\ O & I \end{pmatrix} \begin{pmatrix} B & O \\ O & I \end{pmatrix} = \begin{pmatrix} C & O \\ O & I \end{pmatrix}, where C = A B. Thus, choosing I to be an identity matrix of the appropriate size, the overall size of the block matrices can be arranged to be a power of 2, and then the reduction algorithm can proceed on the larger matrices. Another approach, trickier to program, is to break the matrix up into blocks of nearly equal size, since the Strassen formulas do not, in fact, require the blocks to have the same size, and even apply to rectangular matrices whose rectangular blocks are of compatible sizes.
1.7.9. 0
1
(a) B
@ −1
0
0
1
B
B −1
(b) B
@ 0
0
2
−1
−2
−1
2
−1
0
1
0
0
1
0
B
1C
1
A = @ −1
3
0 −2
1
0
0 0
1
C
1 0C B
B −1
C=B
4 1A @ 0
−1 6
0
10
1
0
1 2 0
B
C
0C
A @ 0 1 1 A,
1
0 0 5
10
0
0 0
1
B
1
0 0C
CB 0
CB
−1
1 0 A@ 0
0 −1 1
0
0
1
−2
C
x=B
@ 3 A;
0
1
−1 0 0
1 1 0C
C
C,
0 5 1A
0 0 7
0
1
1
B C
0C
B C;
x=B
@1A
2
0
1
B
−1
B
(c) B
@ 0
0
1.7.10. 0
2
(a) B
@ −1
0
2
B
−1
B
B
@ 0
0
0
0
2
B −1
B
B
B 0
B
@ 0
0
(b)
“
1
0
−4
B
2C
C
B
C
x=B
2
B − C.
@ 5A
−1
2
−1
−1
2
−1
0
1
0 0
2 −1
0
0
3
B−1
CB 0
C
1
0
−1
=
−1 C
@ 2
A
A@
A,
2
2
4
2
00 − 3 1
0
0 103
1
1
1
0
0 0
2 −1
0
0
0
0
3
B−1
B
1
0 0C
0C
CB 0
C
B 2
−1
0C
C
2 −1
CB
C ,
C=B
2
4
CB 0
C
1
0
−1
2 −1 A B
0
−
0
A@
A
@
3
3
3
5
−1
2
00 0 − 4 1
0
0 100
4
1
2 −1
0
1
0
0
0 0
0
0
0
3
CB
B 1
1
0
0
0
−1
0
−
C
C
B
B
2
−1
0
0C B 2
CB
2
4
B
B
1
0 0C
0
CB 0
C = B 0 −3
2 −1
0C
3
CB
C
B
3
B
−1
2 −1 A B
0 −4
1 0C
0
0
A@ 0
@ 0
4
0 −1
2
0
0
0 −5 1
0
0
0
”
3 T
3
,
2 , 2, 2
0
1
1
0
0
1
1
0
B
0C
CB 0
CB
0 A@ 0
1
0
0
0
1
− 41
10
( 2, 3, 3, 2 )T ,
“
2
−1
0
0
0
0
4
0
1
0
1
0
0
4
−1
−1
2
−1
0
0
0
1
B
0C
−1
C
B
C=B
−1 A @ 0
−1
0
10
2
−3
−1
0
0
0C
C
C,
−1 A
5
−4
− 35
1
0
0
−1
5
4
0
”
5
9
5 T
.
2 , 4, 2 , 4, 2
1
0
0C
C
C
0C
C;
C
−1 C
A
6
5
(c) The subdiagonal entries in L are l_{i+1,i} = -i/(i + 1) → -1, while the diagonal entries in U are u_{ii} = (i + 1)/i → 1.
♠ 1.7.11. 0
2
(a) B
@ −1
0
0
2
B
B −1
B
@ 0
0
0
2
B1
B
B
B0
B
@0
0
“
1
2
−1
1
2
−1
0
1
2
1
0
0
0
1
2
1
0
1
10
0
1
2 1
0
1
0 0
0
5
B
B 1
1 0C
1C
1C
A@ 0 2
A ,
A = @−2
2
12
2
00 − 5 1
0 0 1
50
1
1
0
0 0
2 1
0 0
B−1
CB 0 5
C
1
0
0
CB
1 0C B
2
2
CB
C=B
2
B
0
2 1A B
1 0C
@ 0 −5
A@ 0
5
−1 2
0
0 − 12 1
0100
0
1
1
0
0
0
0
2
0 0
B 1
CB
−
0
1
0
0
0
B
B
C
C
0 0C B 2
CB
B 0 −2
C
CB 0
1
0
0
CB
1 0C
=B
5
B
C
CB
5
B
2 1A B
1 0C
0 − 12
@ 0
A@ 0
12
1 2
0
0
0 − 29 1
0
”
“
”
“
”
1 1 2 T
8 13 11 20 T
3 2 1 2 7 T
,
,
,
,
,
,
,
,
,
,
,
.
3 3 3
29 29 29 29
10 5 2√5 10
(b)
(c) The subdiagonal√entries in L approach 1 −
U approach 1 + 2 = 2.414214.
1.7.12. Both false. For example,
10
1
0
1 1 0 0
1 1 0 0
2
B
CB
C
B
1
1
1
0
1
1
1
0
B
CB
C
B2
B
CB
C=B
@ 0 1 1 1 A@ 0 1 1 1 A
@1
0 0 1 1
0 0 1 1
0
0
♠ 1.7.13.
0
4
B
B1
@
1
1
4
1
1
0
1C B 1
B1
1C
A=@4
1
4
4
0
1
1
5
2
3
2
1
10
0
4
CB
B0
0C
A@
1
0
1
2
3
2
1
15
4
0
0
0C
C
C ,
1C
A
12
5
0
1
29
12
0
0
0
12
5
0
1
5
2
0
0
0
0
1
29
12
0
1
0
0C
C
C
0C
C;
C
1C
A
70
29
2 = −.414214, and the diagonal entries in
0
0
1C
C
C,
2A
2
1
1
0
1
1
B
B1
B
@0
0
1
1
1
1
1
0
0
1
1
1
1
0
0 −1
1
B
0C
C
B 0
C =B
@ −1
1A
1
1
0
0
1
−1
−1
1
0
0
1
1
−1 C
C
C.
0A
1
3C
C
4A
18
5
0
0
4
B
B
B1
B
B0
@
1
4
B
B1
B
B
B0
B
B
@0
1
1
4
1
0
0
1
4
1
0
0
1
4
1
0
1
0
0
1C B 1
C
B1
1
0C B 4
C=B
4
C
B
1A @ 0
15
1
1
4
− 15
4
0
1
1
0
0 1
B1
C
B
1
0 0C
B4
C
B
C
4
B
=B0
1 0C
15
C
B
C
4 1A B
0
@ 0
1
1
1 4
4 − 15
0
1
4
1
10
1
4
1
0
1
0
CB
1C
15
C
B
0 CB 0 4
1 −4 C
C
CB
C
56
16 C
C
B
0 A@ 0
0 15
15 A
2
24
0
0
0
7 1
7
10
4
1
0
0
0
0 0
CB
15
C
B
0
0 0 CB 0 4
1
0
CB
56
C
B
1
0 0 CB 0
0 15
1
CB
15
209
C
B
1 0 A@ 0
0
0 56
56
1
5
0
0
0
0
56
19 1
0
0
1
For the 6 × 6 version we have
0
0
1
1
0
4 1 0 0 0 1
B1
B
C
B
B1 4 1 0 0 0C
1
B4
B
C
B
B
C
4
B
B0 1 4 1 0 0C
0
15
B
C=B
B
B
C
B 0
B0 0 1 4 1 0C
0
B
B
C
B
B
C
B
@0 0 0 1 4 1A
0
@ 0
1
1
1 0 0 0 1 4
4 − 15
0
0
1
15
56
0
1
56
0
0
0
1
56
209
1
− 209
0
0
0
0
1
7
26
10
4
0
CB
C
B
0 CB 0
CB
B
0C
CB 0
CB
B
0C
CB 0
CB
B
0C
A@ 0
1
0
1
15
4
0
0
0
0
1
1
C
− 41 C
C
1 C
C
15 C
C
55 C
56 A
66
19
0
1
56
15
0
0
0
0
0
1
209
56
0
0
0
0
0
1
780
209
0
1
1
C
− 14 C
C
1 C
C
15 C
C
1 C
− 56
C
C
210 C
209 A
45
13
The pattern is that the only entries lying on the diagonal, the subdiagonal, or the last row of L are nonzero, while the only nonzero entries of U are on its diagonal, superdiagonal, or last column.
♥ 1.7.14.
(a) Assuming regularity, the only row operations required to reduce A to upper triangular form U are, for each j = 1, . . . , n - 1, to add multiples of the j th row to the (j + 1)st and the n th rows. Thus, the only nonzero entries below the diagonal in L are at positions (j + 1, j) and (n, j). Moreover, these row operations only affect zero entries in the last column, leading to the final form of U.
1 −1 −1
1 −1 −1
1 0 0
B
C
B
CB
1 −2 C
(b) @ −1
2 −1 A = @ −1 1 0 A@ 0
A,
0
0
2
−1 −1
3
−1 2 0
1
10
1
0
1
1 −1
0
0 −1
1
0
0
0 0
1 −1
0
0 −1
B
B
B −1
1 −1
0 −1 C
1
0
0 0C
CB 0
C
B −1
2 −1
0
0C
B
C
CB
C
B
B
C
B
C
B
0
2 −1 −1 C
1
0 0 CB 0
B 0 −1
C = B 0 −1
,
3
−1
0
B
C
7
3C
B
C
@ 0
0
0
1 0C
−
0 − 21
0 −1
4 −1 A B
A@ 0
A
@ 0
2
2
13
−1
0
0 −1
5
−1 −1 − 21 − 37 1
0
0
0
0
7
0
1 −1
0
0
0 −1 1
B −1
2 −1
0
0
0C
B
C
B
3 −1
0
0C
C
B 0 −1
B
C=
C
B 0
0
−1
4
−1
0
B
C
@ 0
0
0 −1
5 −1 A
−1
0
0
0 −1
6
0
1
0
0
0
0 0 10 1 −1
0
0
0 −1 1
B −1
B
1
0
0
0 0C
1 −1
0
0 −1 C
B
CB 0
C
B 0 −1
B
1
0
0 0C
0
2 −1
0 −1 C
B
CB 0
C
B
CB
1
7
1C
B 0
CB 0
C.
0
−
1
0
0
0
0
−1
−
2
2
2C
B
CB
B
B
C
C
@ 0
0
0 − 72
0
0
0 33
1 0 A@ 0
− 87 A
7
8
−1 −1 − 21 − 17 − 33
1
0
0
0
0
0 104
33
The 4 × 4 case is a singular matrix.
♥ 1.7.15.
(a) If the matrix A is tridiagonal, then the only nonzero elements in the ith row are a_{i,i-1}, a_{ii}, a_{i,i+1}. So a_ij = 0 whenever | i - j | > 1.
0
0
2 1 1 0 0 01
2 1 1 1 0 01
B1 2 1 1 0 0C
B1 2 1 1 1 0C
B
C
B
C
B
C
B
C
B1 1 2 1 1 0C
B1 1 2 1 1 1C
C has band width 2; B
C has band
(b) For example, B
B0 1 1 2 1 1C
B1 1 1 2 1 1C
B
C
B
C
@0 0 1 1 2 1A
@0 1 1 1 2 1A
0 0 0 1 1 2
0 0 1 1 1 2
width 3.
(c) U is a matrix that results from applying row operations of type #1 to A, so all zero entries in A will produce corresponding zero entries in U. On the other hand, if A is of band width k, then for each column of A we need to perform no more than k row replacements to obtain zeros below the diagonal. Thus L, which reflects these row replacements, will have at most k nonzero entries below the diagonal.
10
1
0
2 1 1 0 0 0
1 0 0 0 0 0
0
1
B
C
C
B
2 1 1 0 0 0
3
1
B
B1
1 0 0 0 0C
1 0 0C
CB 0 2
C
B2
B1 2 1 1 0 0C
2
CB
C
B1
B
C
1
4
2
CB 0
C
B
B
C
1
0
0
0
0
1
0
CB
C
B2
B1 1 2 1 1 0C
3
3
3
CB
C,
C=B
(d) B
1
1
CB 0
C
B 0 2
B0 1 1 2 1 1C
1
0
0
1
0
0
1
CB
C
B
B
C
3
2
2
CB
C
B
@0 0 1 1 2 1A
B
B
0 43 21 1 0 C
0 0 0 1 21 C
A@ 0
A
@ 0
0 0 0 1 1 2
1
3
0 0 0 1 2 1
0 0 0 0 0 4
0
10
1
2 1 1 1 0 0
1 0 0 0 0 0
0
B
C
C
2 1 1 1 0 01 B
3
1
1
B
B1
1 0 0 0 0C
1 0C
B2
B1 2 1 1 1 0C
CB 0 2
C
2
2
B1
B
C
CB
C
1
1
2
4
B
B
C
CB 0
C
1
0
0
0
1
0
1
1
2
1
1
1
B2
B
C
CB
C
3
3
3
3
B
C=B
CB
C
5
1
1
1
3 C.
B1
B1 1 1 2 1 1C
CB 0
0
0
1
0
0
B
B2
B
C
C
C
3
4
4
2
4
B
CB
C
@0 1 1 1 2 1A
1
2
1C
2
4
B
CB
1 0 A@ 0 0 0 0 5 5 A
@ 0 3
2
5
0 0 1 1 1 2
0 0 34 53 14 1
0 0 0 0 0 43
“
”T
“
”T
2 1
1
1 1 2
(e) 13 , 31 , 0, 0, 13 , 13
,
.
3, 3,−3,−3, 3, 3
(f ) For A we still need to compute k multipliers at each stage and update at most 2 k 2 entries, so we have less than (n − 1)(k + 2 k 2 ) multiplications and (n − 1) 2 k 2 additions. For
the right-hand side we have to update at most k entries at each stage, so we have less
than (n − 1)k multiplications and (n − 1)k additions. So we can get by with less than
total (n − 1)(2 k + 2 k 2 ) multiplications and (n − 1)(k + 2 k 2 ) additions.
(g) The inverse of a banded matrix is not necessarily banded. For example, the inverse of \begin{pmatrix} 2 & 1 & 0 \\ 1 & 2 & 1 \\ 0 & 1 & 2 \end{pmatrix} is \begin{pmatrix} 3/4 & -1/2 & 1/4 \\ -1/2 & 1 & -1/2 \\ 1/4 & -1/2 & 3/4 \end{pmatrix}.
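As an illustration of why banded elimination is cheap (compare parts (c) and (f)), here is a short Python sketch of tridiagonal elimination and back substitution; the function name and argument conventions are our own, and no pivoting is performed, so the matrix is assumed to be regular.

import numpy as np

def solve_tridiagonal(a, b, c, d):
    """Solve a tridiagonal system with subdiagonal a, diagonal b,
    superdiagonal c and right-hand side d, using O(n) operations."""
    n = len(b)
    a = np.array(a, dtype=float); b = np.array(b, dtype=float)
    c = np.array(c, dtype=float); d = np.array(d, dtype=float)
    for i in range(1, n):
        m = a[i - 1] / b[i - 1]        # multiplier for row i
        b[i] -= m * c[i - 1]           # update the diagonal entry
        d[i] -= m * d[i - 1]           # update the right-hand side
    x = np.zeros(n)
    x[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):     # back substitution
        x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
    return x

# The 3 x 3 matrix from part (g), with a sample right-hand side (1, 0, 1)^T.
x = solve_tridiagonal([1.0, 1.0], [2.0, 2.0, 2.0], [1.0, 1.0], [1.0, 0.0, 1.0])
print(x)   # agrees with A^{-1} applied to (1, 0, 1)^T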
1.7.16. (a) ( −8, 4 )T , (b) ( −10, −4.1 )T , (c) ( −8.1, −4.1 )T . (d) Partial pivoting reduces
the effect of round off errors and results in a significantly more accurate answer.
1.7.17. (a) x = 11/7 ≈ 1.57143, y = 1/7 ≈ .142857, z = -1/7 ≈ -.142857,
(b) x = 3.357, y = .5, z = −.1429, (c) x = 1.572, y = .1429, z = −.1429.
1.7.18. (a) x = −2, y = 2, z = 3, (b) x = −7.3, y = 3.3, z = 2.9, (c) x = −1.9, y = 2.,
z = 2.9, (d) partial pivoting works markedly better, especially for the value of x.
1.7.19. (a) x = −220., y = 26, z = .91;
(b) x = −190., y = 24, z = .84;
(c) x = −210,
y = 26, z = 1.
(d) The exact solution is x = −213.658, y = 25.6537, z = .858586.
Full pivoting is the most accurate. Interestingly, partial pivoting fares a little worse than
regular elimination.
1
1
0
0 1
0
1
−1
− 32
0
−.9143
B 4C
B 35 C
B 5C
B 19 C
B C
B
C
B− C
B−
−.5429 C
B1C
C
4C
35 C = B
B
C, (d) B
C.
(b) B
C
B 1 C, (c) B
B
@
@
A
A
12
1
−.3429
C
C
B
B−
@ 8 A
@ 35 A
0
−2.1714
1
− 76
4
35
1
0
1
0
4
1
0
!
−.8000
1
B −5 C
−
−.0769
B
B
8 C
C
,
(b) B
=
−.5333 C
1.7.21. (a) @ 813 A =
A,
@
@ − 15 A
.6154
−1.2667
13
− 19
15
0
1
2
0
1
121
1
0
.0165
B
C
B
−.732
38 C
B
C
B
C
.3141 C
C
121 C = B
B
C,
(c) B
(d) B
@ −.002 A.
B
@ .2438 A
59 C
B
C
.508
@ 242 A
−.4628
56
− 121
0
0
1
0
1
6
1.2
B 5 C
B 13 C
B
C
C = @ −2.6 A,
1.7.20. (a) B
@− 5 A
−1.8
− 95
1.7.22. The results are the same.
♠ 1.7.23.
Gaussian Elimination With Full Pivoting
start
for i = 1 to n
set σ(i) = τ (i) = i
next i
for j = 1 to n
if mσ(i),τ(k) = 0 for all i, k ≥ j, stop; print “A is singular”
choose i ≥ j and k ≥ j such that | mσ(i),τ(k) | is maximal
interchange σ(i) ←→ σ(j)
interchange τ(k) ←→ τ(j)
for i = j + 1 to n
set z = mσ(i),τ(j) / mσ(j),τ(j)
set mσ(i),τ(j) = 0
for k = j + 1 to n + 1
set mσ(i),τ(k) = mσ(i),τ(k) − z mσ(j),τ(k)
next k
next i
next j
end
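A direct Python transcription of this pseudocode (0-based indices, with the permutations stored in lists named sigma and tau; this is our own sketch, not code from the text):

import numpy as np

def full_pivot_elimination(M):
    """Reduce the augmented n x (n+1) matrix M to upper triangular form with
    full pivoting, tracking the row permutation sigma and column permutation tau."""
    M = np.array(M, dtype=float)
    n = M.shape[0]
    sigma = list(range(n))
    tau = list(range(n + 1))     # the last entry is the right-hand-side column, never swapped
    for j in range(n):
        piv, i_max, k_max = max((abs(M[sigma[i], tau[k]]), i, k)
                                for i in range(j, n) for k in range(j, n))
        if piv == 0.0:
            raise ValueError("A is singular")
        sigma[j], sigma[i_max] = sigma[i_max], sigma[j]
        tau[j], tau[k_max] = tau[k_max], tau[j]
        for i in range(j + 1, n):
            z = M[sigma[i], tau[j]] / M[sigma[j], tau[j]]
            M[sigma[i], tau[j]] = 0.0
            for k in range(j + 1, n + 1):
                M[sigma[i], tau[k]] -= z * M[sigma[j], tau[k]]
    return M, sigma, tau

# Sample use: solve A x = b by elimination followed by back substitution
# in the permuted order (the example system is our own).
A = np.array([[0.0, 2.0, 1.0], [1.0, 1.0, 1.0], [2.0, 1.0, 3.0]])
b = np.array([3.0, 2.0, 4.0])
M, sigma, tau = full_pivot_elimination(np.column_stack([A, b]))
x = np.zeros(3)
for j in range(2, -1, -1):
    s = sum(M[sigma[j], tau[k]] * x[tau[k]] for k in range(j + 1, 3))
    x[tau[j]] = (M[sigma[j], 3] - s) / M[sigma[j], tau[j]]
print(x, A @ x)   # x satisfies A x = b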
♠ 1.7.24. We let x ∈ R^n be generated using a random number generator, compute b = H_n x and then solve H_n y = b for y. The error is e = x - y and we use e⋆ = max | e_i | as a measure of the overall error. Using Matlab, running Gaussian Elimination with pivoting:

n       e⋆
10      .00097711
20      35.5111
50      318.3845
100     1771.1

Using Mathematica, running regular Gaussian Elimination:

n       e⋆
10      .000309257
20      19.8964
50      160.325
100     404.625

In Mathematica, using the built-in LinearSolve function, which is more accurate since it uses a more sophisticated solution method when confronted with an ill-posed linear system:

n       e⋆
10      .00035996
20      .620536
50      .65328
100     .516865

(Of course, the errors vary a bit each time the program is run due to the randomness of the choice of x.)
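The experiment can be reproduced, up to the randomness noted above, with a few lines of Python; scipy.linalg.hilbert is assumed to be available, otherwise H can be built directly from H[i, j] = 1/(i + j + 1) with 0-based indices.

import numpy as np
from scipy.linalg import hilbert

for n in (10, 20, 50, 100):
    H = hilbert(n)                     # the n x n Hilbert matrix
    x = np.random.rand(n)              # random exact solution
    b = H @ x
    y = np.linalg.solve(H, b)          # LAPACK's pivoted Gaussian Elimination
    print(n, np.max(np.abs(x - y)))    # e* = max |x_i - y_i|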
♠ 1.7.25.
(a) H_3^{-1} = \begin{pmatrix} 9 & -36 & 30 \\ -36 & 192 & -180 \\ 30 & -180 & 180 \end{pmatrix},
H_4^{-1} = \begin{pmatrix} 16 & -120 & 240 & -140 \\ -120 & 1200 & -2700 & 1680 \\ 240 & -2700 & 6480 & -4200 \\ -140 & 1680 & -4200 & 2800 \end{pmatrix},
H_5^{-1} = \begin{pmatrix} 25 & -300 & 1050 & -1400 & 630 \\ -300 & 4800 & -18900 & 26880 & -12600 \\ 1050 & -18900 & 79380 & -117600 & 56700 \\ -1400 & 26880 & -117600 & 179200 & -88200 \\ 630 & -12600 & 56700 & -88200 & 44100 \end{pmatrix}.
(b) The same results are obtained when using floating point arithmetic in either Mathematica or Matlab.
(c) The product K̃_10 H_10, where K̃_10 is the computed inverse, is fairly close to the 10 × 10 identity matrix; the largest error is .0000801892 in Mathematica or .000036472 in Matlab. As for K̃_20 H_20, it is nowhere close to the identity matrix: in Mathematica the diagonal entries range from -1.34937 to 3.03755, while the largest (in absolute value) off-diagonal entry is 4.3505; in Matlab the diagonal entries range from -.4918 to 3.9942, while the largest (in absolute value) off-diagonal entry is -5.1994.
1.8.1.
(a) Unique solution: ( -1/2, -3/4 )^T ;
(b) infinitely many solutions: (1 − 2 z, −1 + z, z)T , where z is arbitrary;
(c) no solutions;
(d) unique solution: (1, −2, 1)T ;
(e) infinitely many solutions: (5 − 2 z, 1, z, 0)T , where z is arbitrary;
(f ) infinitely many solutions: (1, 0, 1, w)T , where w is arbitrary;
(g) unique solution: (2, 1, 3, 1)T .
1.8.2. (a) Incompatible; (b) incompatible; (c) (1, 0)^T; (d) (1 + 3 x_2 - 2 x_3, x_2, x_3)^T, where x_2 and x_3 are arbitrary; (e) (-15/2, 23, -10)^T; (f) (-5 - 3 x_4, 19 - 4 x_4, -6 - 2 x_4, x_4)^T, where x_4 is arbitrary; (g) incompatible.
1.8.3. The planes intersect at (1, 0, 0).
1.8.4. (i) a ≠ b and b ≠ 0; (ii) b = 0, a ≠ -2; (iii) a = b ≠ 0, or a = -2 and b = 0.
1.8.5. (a) b = 2, c ≠ -1 or b = 1/2, c ≠ 2; (b) b ≠ 2, 1/2; (c) b = 2, c = -1, or b = 1/2, c = 2.
1.8.6.
(a) ( 1 + i - \frac{1}{2}(1 + i) y, y, -i )^T, where y is arbitrary;
(b) ( 4 i z + 3 + i, i z + 2 - i, z )^T, where z is arbitrary;
(c) ( 3 + 2 i, -1 + 2 i, 3 i )^T;
(d) ( -z - (3 + 4 i) w, -z - (1 + i) w, z, w )^T, where z and w are arbitrary.
1.8.7. (a) 2, (b) 1, (c) 2, (d) 3, (e) 1, (f ) 1, (g) 2, (h) 2, (i) 3.
1.8.8.
1
1
!
!
!
1 0
1
1
(a)
=
,
1 1
0 −3
!
!
!
2
1
3
1 0
2 1 3
(b)
=
,
−2 −1 −3
−1 1
0 0 0
0
1
0
10
1
1 −1 1
1 0 0
1 −1 1
C
B
CB
(c) B
0 1C
@ 1 −1 2 A = @ 1 1 0 A @ 0
A,
−1
1
0
−1
1
1
0
0
0
10
0
10
1
0
2 −1
1 0 0
2 −1
0
1 0 0
CB
CB
B1
3
(d) B
1 −1 C
@0 0 1A@1
A = @ 2 1 0 A@ 0
2
1 0 1
0
0
0 1
1 0 0 2 −1 10
1 1
0
3
3
1 0 0
C
B C
B
(e) B
0 1 0C
@ 0A = @
A @ 0 A,
2
−2
0
−3 0 1
(f ) ( 0 −1 2 5 ) = ( 1 )( 0 −1 2 5 ),
0
0
B
B1
(g) B
@0
0
0
1
B2
B
B1
(h) B
B
@4
0
0
0
(i) B
@0
1
1
−2
10
10
0
1
1
0 0 0
1 2
1 0 0
0 −3
B
C
B
C
B
0
1 0 0C
0 0 0C
CB 0 7 C
CB 4 −1 C
B
CB
C,
CB
C=B 1
3
−
1
0
A@ 0 0 A
0 1 0 A@ 1
2A @ 4
4
0 0
0 0 1
−1 −5
− 41 − 74 0 1
1
0
10
1
1 −1
2
1
−1
2
1
1 0 0 0 0
B
C
B
C
3 −5 −2 C
1 −1
0 C B 2 1 0 0 0 CB 0
C
C
B
CB
B0
C,
0
0
0C
2 −3 −1 C
=B
1 1 1 0 0C
B
CB
C
C
0
0
0A
−1
3
2 A @ 4 1 0 1 0 A@ 0
0 100 0
0
3 1
−50 −2
0 1 01 0 01
1 0
0 0
0 3
1
1 0 0
1 2 −3
1
B
C
B
CB
0 1C
4 −1
A @ 1 2 −3 1 −2 A = @ 2 1 0 A @ 0 0
0 0
2 4 −2 1 −2
0 0 1
0 0
0
3
x + y = 1,
(b) y + z = 0,
x − z = 1.
x = 1,
1.8.9. (a) y = 0,
z = 0.
1.8.10. (a)
1
1
0
−1 C
A,
1
1
0
0
1
0
!
1
0
, (b) B
@0
0
0
0
1
0
x + y = 1,
(c) y + z = 0,
x − z = 0.
1
0
0
1
B
0C
A, (c) @ 0
0
0
1
0
0
1
B
1C
A, (d) @ 0
0
0
0
1
0
1
−2
2C
A.
1
1
0
0C
A.
1
1.8.11.
(a) x^2 + y^2 = 1, x^2 - y^2 = 2;
(b) y = x^2, x - y + 2 = 0; solutions: x = 2, y = 4 and x = -1, y = 1;
(c) y = x^3, x - y = 0; solutions: x = y = 0, x = y = -1, x = y = 1;
(d) y = sin x, y = 0; solutions: x = k π, y = 0, for k any integer.
1.8.12. That variable does not appear anywhere in the system, and is automatically free (although it doesn’t enter into any of the formulas, and so is, in a sense, irrelevant).
1.8.13. True. For example, take a matrix in row echelon form with r pivots, e.g., the matrix A
with aii = 1 for i = 1, . . . , r, and all other entries equal to 0.
1.8.14. Both false. The zero matrix has no pivots, and hence has rank 0.
♥ 1.8.15.
(a) Each row of A = v wT is a scalar multiple, namely vi w, of the vector w. If necessary,
we use a row interchange to ensure that the first row is non-zero. We then subtract the
appropriate scalar multiple of the first row from all the others. This makes all rows below the first zero, and so the resulting matrix is in row echelon form with a single nonzero row, and hence a single pivot, proving that A has rank 1.
!
!
−8
4
−1 2
2
6 −2
B
C
(b) (i)
, (ii) @ 0
0 A, (iii)
.
−3 6
−3 −9
3
4 −2
(c) The row echelon form of A must have a single nonzero row, say w T . Reversing the elementary row operations that led to the row echelon form, at each step we either interchange rows or add multiples of one row to another. Every row of every matrix obtained
in such a fashion must be some scalar multiple of w T , and hence the original matrix
A = v wT , where the entries vi of the vector v are the indicated scalar multiples.
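The argument in part (c) can be mirrored numerically: for a rank-one matrix, any nonzero row serves as w^T, and the remaining rows determine v. The following Python sketch uses our own function name and example data.

import numpy as np

def rank_one_factors(A):
    """Given a rank-one matrix A, return vectors v, w with A = v w^T."""
    A = np.array(A, dtype=float)
    k = next(i for i in range(A.shape[0]) if np.any(A[i] != 0))  # first nonzero row
    w = A[k]
    v = A @ w / (w @ w)     # v_i is the multiple of w appearing in row i
    return v, w

A = np.outer([2.0, -1.0, 3.0], [1.0, 4.0])   # a 3 x 2 rank-one example
v, w = rank_one_factors(A)
print(np.allclose(np.outer(v, w), A))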
1.8.16. 1.
1.8.17. 2.
1.8.18. Example: A = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, B = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, so A B = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} has rank 1, but B A = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix} has rank 0.
♦ 1.8.19.
(a) Under elementary row operations, the reduced form of C will be ( U  Z ), where U is the row echelon form of A. Thus, C has at least r pivots, namely the pivots in A. Examples:
rank \begin{pmatrix} 1 & 2 & 1 \\ 2 & 4 & 2 \end{pmatrix} = 1 = rank \begin{pmatrix} 1 & 2 \\ 2 & 4 \end{pmatrix}, while rank \begin{pmatrix} 1 & 2 & 1 \\ 2 & 4 & 3 \end{pmatrix} = 2 > 1 = rank \begin{pmatrix} 1 & 2 \\ 2 & 4 \end{pmatrix}.
(b) Applying elementary row operations, we can reduce E to \begin{pmatrix} U \\ W \end{pmatrix}, where U is the row echelon form of A. If we can then use elementary row operations of type #1 to eliminate all entries of W, then the row echelon form of E has the same number of pivots as A, and so rank E = rank A. Otherwise, at least one new pivot appears in the rows below U, and rank E > rank A. Examples:
rank \begin{pmatrix} 1 & 2 \\ 2 & 4 \\ 3 & 6 \end{pmatrix} = 1 = rank \begin{pmatrix} 1 & 2 \\ 2 & 4 \end{pmatrix}, while rank \begin{pmatrix} 1 & 2 \\ 2 & 4 \\ 3 & 5 \end{pmatrix} = 2 > 1 = rank \begin{pmatrix} 1 & 2 \\ 2 & 4 \end{pmatrix}.
♦ 1.8.20. By Proposition 1.39, A can be reduced to row echelon form U by a sequence of elementary row operations. Therefore, as in the proof of the L U decomposition, A = E_1 E_2 · · · E_N U, where E_1^{-1}, . . . , E_N^{-1} are the elementary matrices representing the row operations. If A is singular, then U = Z must have at least one all zero row.
♦ 1.8.21. After row operations, the augmented matrix becomes N = ( U | c ), where the r = rank A
nonzero rows of U contain the pivots of A. If the system is compatible, then the last m − r
entries of c are all zero, and hence N is itself a row echelon matrix with r nonzero rows and
hence rank M = rank N = r. If the system is incompatible, then one or more of the last
m − r entries of c are nonzero, and hence, by one more set of row operations, N is placed
in row echelon form with a final pivot in row r + 1 of the last column. In this case, then,
rank M = rank N = r + 1.
1.8.22. (a) x = z, y = z, where z is arbitrary; (b) x = − 23 z, y = 97 z, where z is arbitrary;
(c) x = y = z = 0; (d) x = 13 z − 32 w, y = 56 z − 61 w, where z and w are arbitrary;
(e) x = 13 z, y = 5 z, w = 0, where z is arbitrary; (f ) x = 32 w, y = 12 w, z = 21 w, where
w is arbitrary.
“
”T
”T
“
1
, where y is arbitrary; (b) − 65 z, 58 z, z , where z is arbitrary;
3 y, y
”T
“
3
2
6
, where z and w are arbitrary; (d) ( z, − 2 z, z )T , where
(c) − 11
5 z + 5 w, 5 z − 5 w, z, w
T
T
T
1.8.23. (a)
z is arbitrary; (e) ( −4 z, 2 z, z ) , where z is arbitrary; (f ) ( 0, 0, 0 ) ; (g) ( 3 z, 3 z, z, 0 ) ,
where z is arbitrary; (h) ( y − 3 w, y, w, w )T , where y and w are arbitrary.
1.8.24. If U has only nonzero entries on the diagonal, it must be nonsingular, and so the only
solution is x = 0. On the other hand, if there is a diagonal zero entry, then U cannot have
n pivots, and so must be singular, and the system will admit nontrivial solutions.
1.8.25. For the homogeneous case x1 = x3 , x2 = 0, where x3 is arbitrary. For the inhomogeneous case x1 = x3 + 14 (a + b), x2 = 12 (a − b), where x3 is arbitrary. The solution to the
homogeneous version is a line going through the origin, while the inhomogeneous solution
“
is a parallel line going through the point 14 (a + b), 0, 21 (a − b)
free variable x3 is the same as in the homogeneous case.
”T
. The dependence on the
1.8.26. For the homogeneous case x1 = − 16 x3 − 61 x4 , x2 = − 23 x3 + 43 x4 , where x3 and x4
are arbitrary. For the inhomogeneous case x1 = − 16 x3 − 61 x4 + 13 a + 16 b, x2 = − 23 x3 +
1
1
4
3 x4 + 3 a + 6 b, where x3 and x4 are arbitrary. The dependence on the free variable x3 is
the same as in the homogeneous case.
1.8.27. (a) k = 2 or k = −2;
(b) k = 0 or k = 12 ;
(c) k = 1.
1.9.1.
(a) Regular matrix, reduces to upper triangular form U =
0
0
1
0
1
!
−1
, so determinant is 2;
1
3
−2 C
A, so determinant is 0;
0 0
1
1 2
3
B
(c) Regular matrix, reduces to upper triangular form U = @ 0 1
2C
A, so determinant is −3;
0 00 −3
1
−2 1
3
B
(d) Nonsingular matrix, reduces to upper triangular form U = @ 0 1 −1 C
A after one row
interchange, so determinant is 6;
0 0
3
(b) Singular matrix, row echelon form U = B
@
−1
0
0
2
0
(e) Upper triangular matrix, so the determinant is a product of0diagonal entries: −180;
1
1 −2
1
4
B
0
2 −1 −7 C
C
B
C after
(f ) Nonsingular matrix, reduces to upper triangular form U = B
@0
0 −2 −8 A
0
0
0 10
one row interchange, so determinant is 40;
0
1
1 −2
1
4 −5
B0
3 −3
−1
2C
B
C
B
C
(g) Nonsingular matrix, reduces to upper triangular form U = B
0
0
4 −12 24 C
B
C
@0
0
0
−5 10 A
0
0
0
0
1
after one row interchange, so determinant is 60.
0
1.9.2. det A = −2, det B = −11 and det A B = det B
@
!
5
1
−2
4
5
10
1
4
1C
A = 22.
1.9.3. (a) A = \begin{pmatrix} 2 & 3 \\ -1 & -2 \end{pmatrix}; (b) By formula (1.82), 1 = det I = det(A^2) = det(A A) = det A det A = (det A)^2, so det A = ±1.
1.9.4. det A^2 = (det A)^2 = det A, and hence det A = 0 or 1.
1.9.5.
(a) True. By Theorem 1.52, A is nonsingular, so, by Theorem 1.18, A^{-1} exists.
(b) False. For A = \begin{pmatrix} 2 & 3 \\ -1 & -2 \end{pmatrix}, we have 2 det A = -2 and det 2A = -4. In general, det(2A) = 2^n det A.
(c) False. For A = \begin{pmatrix} 2 & 3 \\ -1 & -2 \end{pmatrix} and B = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, we have det(A + B) = det \begin{pmatrix} 2 & 4 \\ -1 & -2 \end{pmatrix} = 0 ≠ -1 = det A + det B.
(d) True. det A^{-T} = det (A^{-1})^T = det A^{-1} = 1/det A, where the second equality follows from Proposition 1.56, and the third equality follows from Proposition 1.55.
(e) True. det(A B^{-1}) = det A det B^{-1} = det A / det B, where the first equality follows from formula (1.82) and the second equality follows from Proposition 1.55.
(f) False. If A = \begin{pmatrix} 2 & 3 \\ -1 & -2 \end{pmatrix} and B = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, then det(A + B)(A - B) = det \begin{pmatrix} 0 & -4 \\ 0 & 2 \end{pmatrix} = 0 ≠ det(A^2 - B^2) = det \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = 1. However, if A B = B A, then det(A + B)(A - B) = det(A^2 - A B + B A - B^2) = det(A^2 - B^2).
(g) True. Proposition 1.42 says rank A = n if and only if A is nonsingular, while Theorem 1.52 implies that det A ≠ 0.
(h) True. Since det A = 1 ≠ 0, Theorem 1.52 implies that A is nonsingular, and so B = A^{-1} O = O.
1.9.6. Never — its determinant is always zero.
1.9.7. By (1.82, 83) and commutativity of numeric multiplication,
det B = det(S^{-1} A S) = det S^{-1} det A det S = \frac{1}{det S} det A det S = det A.
1.9.8. Multiplying one row of A by c multiplies its determinant by c. To obtain c A, we must
multiply all n rows by c, and hence the determinant is multiplied by c a total of n times.
1.9.9. By Proposition 1.56, det LT = det L. If L is a lower triangular matrix, then LT is an
upper triangular matrix. By Theorem 1.50, det LT is the product of its diagonal entries
which are the same as the diagonal entries of L.
1.9.10. (a) See Exercise 1.9.8. (b) If n is odd, det(-A) = -det A. On the other hand, if A^T = -A, then det A = det A^T = -det A, and hence det A = 0. (c) A = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}.
♦ 1.9.11. We have
det \begin{pmatrix} a & b \\ c + k a & d + k b \end{pmatrix} = a d + a k b - b c - b k a = a d - b c = det \begin{pmatrix} a & b \\ c & d \end{pmatrix},
det \begin{pmatrix} k a & k b \\ c & d \end{pmatrix} = k a d - k b c = k (a d - b c) = k det \begin{pmatrix} a & b \\ c & d \end{pmatrix},
det \begin{pmatrix} c & d \\ a & b \end{pmatrix} = c b - a d = -(a d - b c) = - det \begin{pmatrix} a & b \\ c & d \end{pmatrix},
det \begin{pmatrix} a & b \\ 0 & d \end{pmatrix} = a d - b · 0 = a d.
♦ 1.9.12.
(a) The product formula holds if A is an elementary matrix; this is a consequence of the
determinant axioms coupled with the fact that elementary matrices are obtained by applying the corresponding row operation to the identity matrix, with det I = 1.
(b) By induction, if A = E1 E2 · · · EN is a product of elementary matrices, then (1.82) also
holds. Proposition 1.25 then implies that the product formula is valid whenever A is
nonsingular.
(c) The first result is in Exercise 1.2.24(a), and so the formula follows by applying Lemma 1.51
to Z and Z B.
(d) According to Exercise 1.8.20, every singular matrix can be written as A = E1 E2 · · · EN Z,
where the Ei are elementary matrices, while Z, its row echelon form, is a matrix with a
row of zeros. But then Z B = W also has a row of zeros, and so A B = E1 E2 · · · EN W
is also singular. Thus, both sides of (1.82) are zero in this case.
1.9.13. Indeed, by (1.82), det A det A−1 = det(A A−1 ) = det I = 1.
♦ 1.9.14. Exercise 1.6.28 implies that, if A is regular, so is AT , and they both have the same pivots. Since the determinant of a regular matrix is the product of the pivots, this implies
det A = det AT . If A is nonsingular, then we use the permuted L U decomposition to write
A = P T L U where P T = P −1 by Exercise 1.6.14. Thus, det A = det P T det U = ± det U ,
while det AT = det(U T LT P ) = det U det P = ± det U where det P −1 = det P = ±1.
Finally, if A is singular, then the same computation holds, with U denoting the row echelon
form of A, and so det A = det U = 0 = ± det AT .
1.9.15.
det \begin{pmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\ a_{21} & a_{22} & a_{23} & a_{24} \\ a_{31} & a_{32} & a_{33} & a_{34} \\ a_{41} & a_{42} & a_{43} & a_{44} \end{pmatrix} =
a11 a22 a33 a44 − a11 a22 a34 a43 − a11 a23 a32 a44 + a11 a23 a34 a42 − a11 a24 a33 a42
+ a11 a24 a32 a43 − a12 a21 a33 a44 + a12 a21 a34 a43 + a12 a23 a31 a44 − a12 a23 a34 a41
+ a12 a24 a33 a41 − a12 a24 a31 a43 + a13 a21 a32 a44 − a13 a21 a34 a42 − a13 a22 a31 a44
+ a13 a22 a34 a41 − a13 a24 a32 a41 + a13 a24 a31 a42 − a14 a21 a32 a43 + a14 a21 a33 a42
+ a14 a22 a31 a43 − a14 a22 a33 a41 + a14 a23 a32 a41 − a14 a23 a31 a42 .
♦ 1.9.16.
(i) Suppose B is obtained from A by adding c times row k to row l, so that
b_ij = a_lj + c a_kj when i = l, while b_ij = a_ij when i ≠ l.
Thus, each summand in the determinantal formula for
det B splits into two terms, and we find that det B = det A + c det C, where C is the
matrix obtained from A by replacing row l by row k. But rows k and l of C are identical, and so, by axiom (ii), if we interchange the two rows det C = − det C = 0. Thus,
det B = det A.
(ii) Let B be obtained from A by interchanging rows k and l. Then each summand in the
formula for det B equals minus the corresponding summand in the formula for det A,
since the permutation has changed sign, and so det B = − det A.
(iii) Let B be obtained from A by multiplying rows k by c. Then each summand in the formula for det B contains one entry from row k, and so equals c times the corresponding
term in det A, hence det B = c det A.
(iv ) The only term in det U that does not contain at least one zero entry lying below the
diagonal is for the identity permutation π(i) = i, and so det U is the product of its diagonal entries.
♦ 1.9.17. If U is nonsingular, then, by Gauss–Jordan elimination, it can be reduced to the identity matrix by elementary row operations of types #1 and #3. Each operation of type #1
doesn’t change the determinant, while operations of type #3 multiply the determinant by
the diagonal entry. Thus, det U = u11 u22 · · · unn det I . On the other hand, U is singular if
and only if one or more of its diagonal entries are zero, and so det U = 0 = u11 u22 · · · unn .
♦ 1.9.18. The determinant of an elementary matrix of type #2 is −1, whereas all elementary matrices of type #1 have determinant +1, and hence so does any product thereof.
♥ 1.9.19.
(a) Since A is regular, a ≠ 0 and a d - b c ≠ 0. Subtracting c/a times the first row from the second row reduces A to the upper triangular matrix \begin{pmatrix} a & b \\ 0 & d + b(-c/a) \end{pmatrix}, and its pivots are a and d - b c/a = \frac{a d - b c}{a} = \frac{det A}{a}.
a
(b) As in part (a) we reduce A to an upper triangular form. First, we subtract c/a times
the first row from
the second row, and g/a
times the first row from third row, resulting
1
0
a
b
e
B
ad − bc af − ce C
C
C. Performing the final row operation reduces
B0
in the matrix B
a
a
@
ah − bg aj − cg A
0
1
0
a
b
e
a
a
B
C
ad − bc
af − ce C
B0
C, whose pivots
the matrix to the upper triangular form U = B
−
@
A
a
a
ad − bc
are a,
, and
0
0
P
a
aj − eg
(a f − c e)(a h − b g)
adj + bf g + ech − af h − bcj − edg
det A
−
=
=
.
a
a (a d − b c)
ad − bc
ad − bc
(c) If A is a regular n × n matrix, then its first pivot is a11 , and its k th pivot, for k =
2, . . . , n, is det Ak /det Ak−1 , where Ak is the k × k upper left submatrix of A with entries aij for i, j = 1, . . . , k. A formal proof is done by induction.
♥ 1.9.20. (a–c) Applying an elementary column operation to a matrix A is the same as applying the elementary row operation to its transpose AT and then taking the transpose of the
result. Moreover, Proposition 1.56 implies that taking the transpose does not affect the determinant, and so any elementary column operation has exactly the same effect as the corresponding elementary row operation.
(d) Apply the transposed version of the elementary row operations required to reduce A T
to upper triangular form. Thus, if the (1, 1) entry is zero, use a column interchange to
place a nonzero pivot in the upper left position. Then apply elementary column operations
of type #1 to make all entries to the right of the pivot zero. Next, make sure a nonzero
pivot is in the (2, 2) position by a column interchange if necessary, and then apply elementary column operations of type #1 to make all entries to the right of the pivot zero. Continuing in this fashion, if the matrix is nonsingular, the result is a lower triangular matrix.
(e) We first interchange the first and second columns, and then use elementary column operations of type #1 to reduce the matrix to lower triangular form:
det \begin{pmatrix} 0 & 1 & 2 \\ -1 & 3 & 5 \\ 2 & -3 & 1 \end{pmatrix} = - det \begin{pmatrix} 1 & 0 & 2 \\ 3 & -1 & 5 \\ -3 & 2 & 1 \end{pmatrix} = - det \begin{pmatrix} 1 & 0 & 0 \\ 3 & -1 & -1 \\ -3 & 2 & 7 \end{pmatrix} = - det \begin{pmatrix} 1 & 0 & 0 \\ 3 & -1 & 0 \\ -3 & 2 & 5 \end{pmatrix} = 5.
♦ 1.9.21. Using the L U factorizations established in Exercise 1.3.25:
(a) det \begin{pmatrix} 1 & 1 \\ t_1 & t_2 \end{pmatrix} = t_2 - t_1,
(b) det \begin{pmatrix} 1 & 1 & 1 \\ t_1 & t_2 & t_3 \\ t_1^2 & t_2^2 & t_3^2 \end{pmatrix} = (t_2 - t_1)(t_3 - t_1)(t_3 - t_2),
(c) det \begin{pmatrix} 1 & 1 & 1 & 1 \\ t_1 & t_2 & t_3 & t_4 \\ t_1^2 & t_2^2 & t_3^2 & t_4^2 \\ t_1^3 & t_2^3 & t_3^3 & t_4^3 \end{pmatrix} = (t_2 - t_1)(t_3 - t_1)(t_3 - t_2)(t_4 - t_1)(t_4 - t_2)(t_4 - t_3).
The general formula is found in Exercise 4.4.29.
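A quick numerical confirmation of these product formulas, with arbitrarily chosen nodes t_i (our own check, not part of the printed solution):

import numpy as np
from itertools import combinations

t = np.array([0.5, 2.0, -1.0, 3.0])
n = len(t)
V = np.vander(t, increasing=True).T     # row r contains t_i^r, matching the matrices above
product = np.prod([t[j] - t[i] for i, j in combinations(range(n), 2)])
print(np.isclose(np.linalg.det(V), product))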
♥ 1.9.22.
(a) By direct substitution:
pd − bq
aq − pc
pd − bq
aq − pc
ax + by = a
+b
= p,
cx + dy = c
+d
= q.
ad − bc
ad − bc
ad − bc
ad − bc
!
!
1
1
13 3
1 13
det
= −2.6, y = −
det
= 5.2;
(b) (i) x = −
0
2
4
0
10
10
!
!
5
1
7
1
4 −2
1
4
(ii) x =
det
= , y=
det
=− .
−2
6
3 −2
12
3
12
6
(c) Proof by direct0substitution,
expanding all the determinants.
1
0
1
3
4
0
1
3
0
1
1
1
7
(d) (i) x = det B
1C
y = det B
1C
,
@2 2
A=− ,
@ 4 2
A=
9
9
9
9
0
1
−1
−1
0
−1
0
1
0
1
1 4 3
1
2 −1
1
8
1
B
C
B
2C
z = det @ 4 2 2 A = ; (ii) x = − det @ 2 −3
A = 0,
9
9
2
−1
1
0
3
−1
1
0
1
0
1
3 1 −1
3
2 1
1
1
B
C
B
y = − det @ 1 2
2 A = 4, z = − det @ 1 −3 2 C
A = 7.
2
2
2 3
1
2 −1 3
(e) Assuming A is nonsingular, the solution to A x = b is xi = det Ai / det A, where Ai
is obtained by replacing the ith column of A by the right hand side b. See [ 60 ] for a
complete justification.
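Part (e) translates directly into code. The sketch below implements Cramer's rule for a nonsingular system (far less efficient than elimination, but it matches the formula); the example system is our own.

import numpy as np

def cramer(A, b):
    """Solve A x = b by Cramer's rule: x_i = det(A_i) / det(A)."""
    A = np.array(A, dtype=float)
    b = np.array(b, dtype=float)
    d = np.linalg.det(A)
    x = np.zeros(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b              # replace the i-th column by the right-hand side
        x[i] = np.linalg.det(Ai) / d
    return x

A = np.array([[1.0, 3.0], [2.0, -1.0]])
b = np.array([5.0, 3.0])
print(cramer(A, b), np.linalg.solve(A, b))   # the two answers agree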
♦ 1.9.23.
(a) We can individually reduce A and B to upper triangular forms U1 and U2 with the
determinants equal to the products of their respective diagonal entries. Applying the
analogous !elementary row operations to D will reduce it to the upper triangular form
U1 O
, and its determinant is equal to the product of its diagonal entries, which
O U2
are the diagonal entries of both U1 and U2 , so det D = det U1 det U2 = det A det B.
(b) The same argument as in part (a) proves the result. The row operations applied to A
are also applied to C, but this doesn’t affect the final upper triangular form.
0
3
(c) (i) det B
@0
0
0
1
B
−3
B
(ii) det B
@ 0
0
0
1
B
−3
B
(iii) det B
@ 0
0
0
5
B
2
B
(iv ) det B
@2
3
1
2
4
3
2
1
0
0
−2
−5 C
A = det(3) det
7
−2
0
1
2
2
1
3
0
0
4
1
0
−1
5
4
−2
0
0
4
9
4
3
1
5
−5 C
C
C = det
3A
−2
1
−3
1
0
4
1
−1 C
C
B
C = det @ −3
8A
0
−3
1
0
0C
C
C = det
−2 A
−5
−5
7
5
2
!
2
1
!
1
2
det
3
−2
!
= 7 · (−8) = −56,
1
2
1
3
−1
5
= 3 · 43 = 129,
0
4C
A det(−3) = (−5) · (−3) = 15,
1
!
det
4
9
−2
−5
!
= 27 · (−2) = −54.