Math 2270 Homework Guide, Fall 2014 Week 1 Week 2

advertisement
Math 2270 Homework Guide, Fall 2014
November 16, 2014
Week 1
1.1 Vectors and Linear Combinations
Exercises: 1, 3, 4, 6, 26, 29, 30
1.2 Lengths and Dot Products
Exercises: 1, 2, 4, 8, 13, 16, 19, 27
1.3 Matrices
Exercises: 1, 2, 6
Week 2
2.1 Vectors and Linear Equations
Exercises: 1, 2, 5, 15, 17, 22, 26, 31
2.2 The Idea of Elimination
Exercises: 1, 2, 3, 4, 5, 8, 11, 13, 21
Week 3
2.3 Elimination Using Matrices
Exercises: 1, 2, 3, 4, 6, 10, 16, 24
2.4 Rules for Matrix Operations
Exercises: 1, 2, 3, 5, 7, 11, 14, 26
2.5 Inverse Matrices
Exercises: 1, 2, 6, 7, 8, 21, 22, 23, 27
Week 4
2.6 Elimination = Factorization: A = LU
Exercise 1. Fill the blanks with `21



1 0
1 1
5
= 1, L =
,
~x =
, A~x = ~b.
1 1
1 2
7

1 0
Exercise 2. The triangular systems are
~c =
1 1


5
3
second when ~c =
yields ~x =
.
2
2


5
1 1
and
~x = ~c. Solving the
7
0 1
2
3
2
32
3
1 0 0
1 0 0
2 1 0
Exercise 5. E = 4 0 1 05, A = 40 1 05 40 4 25.
3 0 1
3 0 1
0 0 5
Exercise 6. E21
Exercise 7. E21
2
3
2
1 0 0
1
4
5
4
2 1 0 , E32 = 0
=
0 0 1
0
2
3
2
32
0 0
1 0 0
1 1
5
4
5
4
1 0 ,A= 2 1 0
0 2
2 1
0 2 1
0 0
3
2
3
2
1 0 0
1 0 0
1
4
5
4
5
4
2 1 0 , E31 = 0 1 0 , E32 = 0
=
0 0 1
3 0 1
0
3
1
3 5.
6
3
2
32
3
0 0
1 0 0
1 0 1
1 0 5 , A = 4 2 1 05 4 0 2 05 .
2 1
3 2 1
0 0 2
Exercise 9. Compute the products on the right sides of the equations and solve
variables. You should get a contradiction!
2
3
2
2 0 0
1
Exercise 11. L = I, D = 40 3 05. The U in LU is A and the U in LDU is 40
0 0 7
0

2
Exercise 15. ~c =
, ~x =
3

5
.
3
for the
3
2 4
1 35 .
0 1
2.7 Transposes and Permutations



3 0
9
1 3
1 3
1 T
T
1
=
, (A ) = 3
, (A ) = 3
9 1
0
1
0

0
c
the first A. For the second A, AT = A, A 1 = 1c2
, (A 1 )T = A 1 , (AT )
c 1
1 9
Exercise 1. A =
,A
0 3
T
1
1
3

Exercise 2. For the proof, B T AT = (AB)T = (BA)T = AT B T .
2
9
for
1
1
= A 1.
Exercise 6. M
T
=
D = DT .

AT C T
. For M to be symmetric, we need A = AT , B = C T , and
B T DT
Exercise 7. (a) is false, (b) is false, (c) is true, (d) is true.


1 3
1 0
=
3 2
3 1
2
32
32
1
0 0
2 0 0
1
4 1 1 0 5 40 3 0 5 40
2
2
2
4
0
1
0
0
0
3
3
2
32
0 1 0
0
4
5
4
1
Exercise 22. 1 0 0
0
0
1
2
2
32
3
1 0 0
1 2 0
4 1 1 05 4 0
1 1 5.
2 0 1
0 0 1
Exercise 20.

1
0
0
7
1
2
0
1
0


1 3
1 b
1 0
,
=
b c
b 1
0 1
3
25
.
3
1
3

2
32
1 1
1 0 0
1 0
5
4
5
4
0 1 = 0 1 0
0 1
3 4
2 3 1
0 0

1
0
0 c b2

2
2
1 b 4
1
,
0 1
0
3 2
32
3
1
1 0 0
1 2 0
1 5 , 4 0 0 15 4 2 4 15 =
1
0 1 0
1 1 1
3.1 Spaces of Vectors
Exercise 1. Conditions (1), (2), and (8) are not satisfied.




0 0 1
1
1
2 2
c c
Exercise 4.
, 2A =
, A=
, all matrices of the form
.
0 0
1
1
2 2
c c

n
Exercise 9. (a) Vectors of the form
, where n is any integer. (b) Take all vectors of the
0


x
0
form
or
(the union of the x-axis and the y-axis).
0
y
Exercise 10. (a) yes, (b) no, (c) no, (d) yes, (e) yes, (f) no.
2 3
2 3
4
0
4
5
4
Exercise 12. 0 and 45 are in P but their sum is not in P .
0
0
Exercise 13. x + y
2z = 0.
Exercise 19. Line, plane, line.


1 0
1 1
Exercise 24. A =
,B=
.
0 1
0 0
Exercise 26. All of R5 because A~x = ~b has a (unique) solution for every ~b in R5 .
3
1
2
1
3
0
15 =
2
Week 5
3.2 The Nullspace of A: Solving A~x = 0
2
1 2 2
4
Exercise 1. (a) U = 0 0 1
0 0 0
variables are2x1 and x33 .
2 4 2
4
(b) U = 0 4 45. The free
0 0 0
2
6
6
Exercise 2. (a) 6
6
4
3
2
17
7
07
7,
05
0
2
3
0
07
7
27
7,
15
0
6
6
6
6
4
2
3
4 6
2 35. The free variables are x2 , x4 , and x5 . The pivot
0 0
variable is x3 . The pivot variables are x1 and x2 .
2
6
6
6
6
4
3
0
2 3
07
1
7
4 15.
37
.
(b)
7
05
1
1
3
2
2
6
17
7
6
7
0 7 + x4 6
6
4
05
0
6
6
Exercise 3. (a) ~x = x2 6
6
4
“free variables”.
3
2
0
6
07
7
6
7
2 7 + x5 6
6
4
15
0
3
0
2 3
07
1
7
7
4
37. (b) ~x = x3
15. Fill in the blank:
05
1
1
Exercise 9. (a) False. Example: the n ⇥ n 0-matrix has n free variables. (b) True. Reason:
elimination produces the full number of pivots. (c) True. Reason: you can’t have two pivots
in the same column. (d) True. Reason: you can’t have two pivots in the same row.
2
0
60
Exercise 11. (a) 6
40
0
2
1
60
Exercise 12. (a) 6
40
0
Exercise 21.

1 0
0 1
1
0
0
0
1
0
0
0
1
0
0
0
2
2
0
1
0
0
1
1
0
0
1
1
0
0
1
1
1
0
1
1
0
0
1
1
1
0
1
1
0
0
3
2
1
1
7
6
17
0
. (b) 6
40
15
0
0
0
0
1
0
1
0
0
0
1
1
0
0
3
2
0
0 1
7
6
07
0 0
. (b) 6
5
4
0
0 0
1
0 0
1
1
0
0
1
1
0
0
1
0
0
0
3
1
3.3 The Rank and the Row Reduced Form
Exercise 1. (a) and (c) are correct.
4
0
1
0
0
1
1
1
0
0
0
1
0
3
2
1
0
7
6
17
0
. (c) 6
40
15
1.
0
1
1
1
0
1
1
1
0
3
1
17
7.
15
0
0
0
0
0
0
0
0
0
1
0
0
0
1
0
0
0
1
1
0
0
3
1
17
7.
05
0
2
3
1 1 1 1
Exercise 2. (a) R = 40 0 0 05, r
0 0 0 0
2
3
1
1 1
1
40 0 0 0 5, r = 1.
0 0 0 0
2
3
2
1 2 4
3
Exercise 8. A = 42 4 8 5, B = 41
4 8 16
2
2
1 0
4
= 1. (b) R = 0 1
0 0
9
3
6
3
9
2
35
,
2
3
M=

a
c
b
bc
a
1
2
0
3
2
3 5, r = 2. (c) R =
0
.
Exercise 9. line, hyperplane (an (n 1)-dimensional subspace), n ⇥ (n 1).
2 3
2 3
2 3
1

3
1
6
2
17
7
Exercise 10. First A: ~u = 415, ~v = 425. Second A: ~u =
, ~v = 6
435.
1
4
2
2

I 0
Exercise 28. The “row-and-column reduced form” has block form
. The identity
0 0
block is r ⇥ r.
3.4 The Complete Solution to A~x = ~b
2
3
2 4 6 4
b1
⇤
b2 b1 5. (2) b2 + b3 2b1 = 0. (3) C(A) is
Exercise 1. (1) U ~c = 40 1 1 2
0 0 0 0 b2 + b3 2b1
the plane spanned by the first and second columns of A. This is the same as the plane given
by all ~b in R3 that
the
b2 + b23 2b
2 satisfy
3
2 equation
3
3 1 = 0. (4) N (A) is the span of the two
1
2
4
6 17
6 27
6 17
7
6 7
6 7
special solutions 6
4 1 5 and 4 0 5. (5) ~xp = 4 0 5. The complete solution is ~x = ~xp + ~xn ,
0
1
02
3
1 0 1
2 4
h
i
15. The 4 and the 1 in
where ~xn is any vector in N (A). (6) R d~ = 40 1 1 2
0 0 0 0
0
the last column show us the first two entries of the particular solution ~xp above.
⇥
Exercise 2. See the book’s solution.
2
3
2
3
1
3 3 1
1 3 0
2
h
i
h
i
6 9 55. Row reduce to get R d~ = 40 0 1 1 5. The
Exercise 3. A ~b = 4 2
1
3 2
3 53
0 0 0 02 3
2
3
last column of R shows that ~xp = 4 0 5. N (A) is the span of the special solution 4 1 5 to
1
0
2 3
2 3
2
3
the equation A~x = ~0, so any solution of A~x = ~b is of the form 4 0 5 + c 4 1 5.
1
0
5
2
3
1 0 0 0 0
Exercise 16. 3, row, (always exists), R3 , A = 40 1 0 0 05.
0 0 1 0 0
2
1
60
6
60
Exercise 17. 4, column, (is unique if it exists), the 0 subspace of R6 , A = 6
60
6
40
0
0
1
0
0
0
0
0
0
1
0
0
0
3
0
07
7
07
7.
17
7
05
0
Exercise 22. If A~x = ~b has infinitely many solutions, then N (A) must be infinite. Thus
~ has either no solution or infinitely many solutions.
A~x = B
Exercise 28. See the book’s solution.
Exercise
34. (a) 3r = 3, complete solution to A~x = ~0 is the line span(~s) in R4 . (b)
2
1 0
2 0
4
3 05. (c) A has full row rank.
R= 0 1
0 0 0 1
6
Week 6
3.5 Independence, Basis, and Dimension
Exercise 1. First put ~v1 , ~v2 , and ~v3 in the columns of A. Then N (A) = {~0} since A has no
free columns, namely A~x = ~0 has only the zero solution ~x = ~0. This proves that ~v1 , ~v2 , and
~v3 are independent. Now2put3all four vectors in A and compute the one special solution to
1
617
7
~
A~x = ~0, which gives ~c = 6
4 45. A~c = 0 proves that the columns of A are linearly dependent.
1
Exercise 5. (a) independent, (b) dependent
Exercise 9. (a) R3 has only dimension 3. (b) one of the vectors is a multiple of the other.
(c) 0v~1 + ~0 = ~0 is a nontrivial linear combination of ~v1 and ~0 that produces ~0.
2 3
2 3
2 3
2 3 2 3
2
3
2
3
1
617
607
6 1 7 607
607
7
6 7
6 7 6 7
6 7
Exercise 10. 6
4 0 5 and 415 are independent, and so are 4 0 5, 415, and 405. Four inde0
0
0
0
1⇥
⇤
3
1 .
pendent vectors would span R4 , but this (3-dimensional) hyperplane is not R4 . 1 2
Exercise 12. A~x = ~b. ~y T A = ~cT . False.
Exercise 13. (a) 2, (b) 2, (c) 2, (d) 2. The row spaces of A and U are the same.
Exercise 15. n; basis; ; invertible.
2 3
2 3 2
1
1
617
6 17 6
7
6 7 6
Exercise 16. (a) 6
415. (b) 4 0 5, 4
1
0
2 3 2 3
0
0
607 607
6 7, 6 7. N (A): no vectors!
415 405
0
1
Exercise 19. r = n; r
3
1
07
7,
15
0
2
3
2 3
1
1
607
617
6 7. (c) 6 7,
405
415
1
0
2
3
2 3
1
1
617
607
6 7. (d) C(A): 6 7,
405
405
1
0
2 3
0
617
6 7,
405
0
m; r = m; independent.
Exercise 21. (a) the columns of A are independent. (b) span.
⇥
⇤

1 1
. (c) True. Reason:
0 0⇥
⇤
dim C(A) = r = dim C(AT ) for every matrix A. (d) False. Example: 1 0 (the columns
might be dependent).
Exercise 24. (a) False. Example: 1 1 . (b) False. Example:
7
2
3 2
3 2
1 0 0
0 0 0
0 0
4
5
4
5
4
Exercise 26. (a) 0 0 0 , 0 1 0 , 0 0
0 30 2 0
0 30 20
0 30
2
3 2
0 0 0
0 1 0
0 0 1
0 0 0
4 0 0 05 , 4 1 0 05 , 4 0 0 05 , 4 0 0 15 ;
1 0 0
0 1 0
20 0 1 3 0 0 0
0 0 0
40 0 15; dimension 3.
0
1 0
Exercise 35. 1, x, x2 , x3 (dimension 4). x
3
2
0
1
5
4
0 ; dimension 3. (b) 0
1
0
2
0 1
4
1 0
dimension 6. (c)
0 0
1, x2
1, x3
3 2
0 0
0
5
4
0 0 , 0
0 30 2 0
0
0
5
4
0 , 0
0
1
0
1
0
0
0
0
3
0
05 ,
03
1
0 5,
0
1 (dimension 3).
3.6 Dimensions of the Four Subspaces
2 3
2 3 2 3

2
4
1
1
T
4
5
4
5
4
1 , 0 5;
Exercise 2. C(A ): 2 ; dimension 1. C(A):
; dimension 1. N (A):
2
4
0
1
2 3 2 3



1
2
2
1
2
T
T
4
5
4
5
dimension 2. N (A ):
; dimension 1. C(B ): 2 , 5 ; dimension 2. C(B):
,
;
1
2
5
4
8
2 3
4
dimension 2. N (B): 4 0 5; dimension 1. N (B T ): no vectors; dimension 0.
1
2
3
1 0
⇥
⇤
Exercise 4. (a) 41 05. (b) Impossible: dim C(A) + dim N (A) = 3. (c) 1 1 . (d)
0 1

9
3
. (e) Impossible: in Section 4.1, we’ll see that N (A) is the orthogonal complement
3
1
of C(AT ) and N (AT ) is the orthogonal complement of C(A). Since C(AT ) = C(A), it follows
that N (A) = N (AT ).

⇥
⇤
1 1 1
2 1 .
Exercise 5. A =
;B= 1
2 1 0
Exercise 16. One of the entries of A~v would be ~v T ~v = 2, so A~v 6= ~0.
Exercise 24. Row space; left nullspace.
4.1 Orthogonality of the Four Subspaces
2
3
1
2
3
3 1 5. (b) Impossible: the row space must be orthogonal to the
Exercise 3. (a) 4 2
3 5
2
nullspace.
(c) Impossible: the column space must be orthogonal to the left nullspace. (d)

1
1
. (e) Impossible: if A is m ⇥ n, then saying that columns add up to ~0 means that
1
1
8
2 3
1
6 .. 7
the n ⇥ 1 vector 4 . 5 is in the nullspace. Saying that the rows add to a row of 1’s means
1
2 3
1
6 .. 7
that 4 . 5 is in the row space. But the nullspace and the row space are orthogonal.
1
Exercise 4. Nullspace; left nullspace. Because if B has rank 2, then the left nullspace of B
has rank 1 and is therefore too small to contain the row space of A.
Exercise 9. Column space; orthogonal.
✓
◆
✓ ◆
✓ ◆
1
1
2
T
Exercise 11. N (A) = span
, C(A ) = span
, C(A) = span
, N (AT ) =
1
2
3
✓
◆
✓ ◆
✓ ◆
✓ ◆
3
0
1
1
T
span
. N (B) = span
, C(B ) = span
, C(B) = span
, N (B T ) =
1
1
0
3
✓
◆
3
span
. For the pictures, you should be drawing these subspaces as pairs of orthogonal
1
lines in R2 .
2 3 2 3 2 3
1
1
1
617 607 607 ⇥
⇤
7 6 7 6 7
Exercise 22. 6
4 0 5, 4 1 5, 4 0 5. 1 1 1 1 .
0
0
1
Exercise 24. Rows 2, 3, . . . , n.
Exercise 25. AT A = I.
Exercise228.3(a) The planes have a common line. (b) The orthogonal complement also
0
607
6 7
7
contains 6
6 4 7, which is not in the span of the first two vectors listed. (c) Two subspaces
4 35
0
✓ ◆
✓ ◆
1
1
can intersect only at ~0 and fail to be orthogonal: consider span
and span
.
2
3
9
Week 7
4.2 Projections
Exercise 1. (a) x
b=
2 3
2 3
1
2
5
5 4 5
1 4
= 3 ; p~ = 3 1 ; ~e = 3 1 5. (b) x
b=
1
1
~aT ~b
~aT ~a

2
3
1
1; p~ = 4 35; ~e = ~0.
1


cos ✓
0
1
Exercise 2. (a) x
b = cos ✓; p~ =
, ~e =
. (b) x
b = 0; p~ = ~0, ~e =
.
0
sin ✓
1
2
3
1 1 1
Exercise 3. (a) P = 13 41 1 15. (b) P =
1 1 1
Exercise 11. (a) x
b=
T

3
1 1 0
P2 = 12 41 1 05.
0 0 2
3
1 3 1
1 4
3 9 35 .
11
1 3 1
2 3
2 3

2
0
1
2
; p~ = 435; ~e = 405. (b) x
b=
; p~ = ~b; ~e = ~0.
3
6
0
4
Exercise 12. (a) (A A)
2
2
1
=

2
1
2
3
1 0 0
1
; P1 = 40 1 05. (b) (AT A)
1
0 0 0
2 3
2
1
1 0
627
60 1
6
7
Exercise 13. p~ = 6
435; P is 4 ⇥ 4; P = 40 0
0
0 0
2 3
2 3
2 3
2
1
1
1 4
3 4 5
4
5
5
Exercise 16. 1 = 2 2 + 2 0 .
1
1
1
0
0
1
0
1
=
1
2

3
2
2
;
2
3
0
07
7.
05
0
Exercise 21. P 2 = (A(AT A) 1 AT )(A(AT A) 1 AT ) = A(AT A) 1 (AT A)(AT A) 1 AT = A(AT A) 1 AT =
P ; itself.
Exercise 22. P T = (A(AT A) 1 AT )T = (AT )T ((AT A) 1 )T AT = A((AT A)T ) 1 AT = A(AT A) 1 AT .
Exercise 23. The column space of A is all of Rm , so projecting onto the column space is
the identity map. The error ~e is always ~0.
Exercise 31. Check whether AT (~b
p~) = ~0.
10
4.3 Least Squares Approximations
2 3
2
1
657
6
1
7; ~e = 6
Exercise 1. x
b=
; p~ = 6
4135
4
4
17

3
1
37
7; E = 44.
55
3
Exercise 9. (The four data points are the same as in Exercise 1.) Equations 0 = C+0D+0E;
82 = C + 3
D + E; 8 2= 3
C + 3D + 9E; 20 = C + 4D + 16E. Unsolvable system A~x =
2
3
2
32 3
2 3
1 0 0
0
C
4
8
26
C
36
61 1 1 7
6 7
6
7 4D5 = 6 8 7. Solvable system is AT Ab
4 8 26 92 5 4D5 = 41125. In
x
=
41 3 9 5
485
E
26 92 338
E
400
1 4 16
20
figure 4.9b you’re projecting ~b onto the three-dimensional subspace of R4 spanned by the
columns of A.
2
3
2 3

1
1
7
C
~
4
5
4
5
Exercise 17. 7 = C D; 7 = C + D; 21 = C + 2D. A = 1 1 ; b = 7 . A
= ~b
D
1 2
21


C
C
9
has no solution, but AT A
= AT~b has the solution
=
.
D
D
4
2 3
5
4
Exercise 18. p~ = 135. Use ~e = ~b
17
p~. Then P~e = P~b
P p~ = p~
p~ = ~0.
Exercise 21. ~e 2 N (AT ); p~ 2 C(A); x
b 2 C(AT ); N (A) = {~0}.


C
1
Exercise 22.
=
; the line is 1 t.
D
1
4.4 Orthogonal Bases and Gram-Schmidt

0
Exercise 1. (a) Only independent;
. (b) Only orthogonal;
1
2 3
1
1
1
0
1
4
Exercise 4. (a) Q =
. (b)
,
. (c) p3 15,
0
1
0
1



2

3
1
p1 4 15,
2
0
.8
. (c) Orthonormal.
.6
2
3
1
p1 4 1 5.
6
2
Exercise 6. (Q1 Q2 )T (Q1 Q2 ) = (QT2 QT1 )(Q1 Q2 ) = QT2 (QT1 Q1 )Q2 = QT2 Q2 = I.
Exercise 10. (a) Dot product of both sides of the equation with ~q1 yields c1 = 0, dot
product with ~q2 yields c2 = 0, and dot product with ~q3 yields c3 = 0.
(b) If Q~x = ~0, then QT (Q~x) = QT ~0, so (QT Q)~x = ~0. Thus I~x = ~0, which proves that
~x = ~0.
11
2 3
2 3
1
7
637
637
6 7
6 7
1 6 7
1 6
7
Exercise 11. (a) ~q1 = 10
647, ~q2 = 10 6 4 7.
455
4 55
7
1
2
3
2 3
2
3
1
7
1
50
63 3 7
607
6 187
6
7
6 7
6
7
1 6
1
T
607 =
6 247. (Tip: when computing p~, multiply
(b) Q = 10 64 4 7
;
p
~
=
QQ
100 6
7
6 7
7
45
405
4 40 5
55
7 1
0
0
T
the column by Q first, then multiply that result by Q. This way you can avoid computing
the 5 ⇥ 5 matrix QQT .)
Exercise 18. See the textbook’s solution.
21
2
61
6
Exercise 21. See the textbook’s solution. Use Q = 6 21
42
1
2
(Tip: multiply QT~b first, then multiply the result by Q.)
12
3
p5
2 13
p1 7
2 13 7
7
p1
2 13 5
p5
2 13
and compute p~ = QQT~b.
Week 8
5.1 Properties of Determinants
Exercise 1. det(2A) = 8; det( A) = 12 ; det(A2 ) = 14 ; det(A 1 ) = 2.

1 0
Exercise 3. (a) False. Counterexample: If A =
, then det(I + A) = 4 6= 1 + det A.
0 1
(b) True.
Reason: |ABC| = |(AB)C| = |AB||C|

 = |A||B||C|.
 (c) False. Counterexample:
0 0
0 1
1 0
A=
. (d) False. Counterexample: A =
,B=
.
0 1
0 1
1 0
Exercise 4. First matrix: exchange row 1 and row 3. Second matrix: exchange row 1 and
row 4, and exchange row 2 and row 3.
Exercise 5. Rule: Jn requires n2 row exchanges when n is even and n 2 1 row exchanges when
n is odd. Since 1012 1 = 50, J101 has determinant ( 1)50 = 1.
2 3
1
6 .. 7
Exercise 10. ~x = 4 . 5 is in N (A). Thus A is singular (not invertible), so det A = 0. If
1
2 3
1
6 .. 7
the entries of each row add to 1, then ~x = 4 . 5 is in N (A I). This does not mean that
1

1 0
det A = 1: a counterexample is A =
.
1 0
Exercise 11. If C and D are n ⇥ n matrices, then taking determinants gives |C||D| =
( 1)n |D||C|, which is di↵erent from |D||C| when n is even. The book’s reasoning is valid
only when n is odd.
✓

◆
d
b
1
ad bc
1
Exercise 12. The correct calculation is det A = det ad bc
= (ad
= ad 1 bc .
bc)2
c a
Exercise 13. det A = 1; det B = 3.
Exercise 18.
1 a a2
1
a
a2
1
a
a2
1 a
2
2
2
1 b b = 0 b a b
a = (b a) 0
1
b + a = (b a) 0 1
1 c c2
1 c a c 2 a2
0 c a c 2 a2
0 0 c2
The entry in the bottom right corner simplifies to (c
(b a)(c a)(c b).
a)(c
a2
a2
b+a
.
(c a)(b + a)
b), so the determinant is
Exercise
28. (a) True. Reason: |AB| = |A||B| 
= 0 det B = 0. (b) False. Counterexample:

0 1
1 0
0 0
A=
. (c) False. Counterexample: A =
, B =
. (d) True. Reason:
1 0
0 0
0
1
|AB| = |A||B| = |B||A| = |BA|.
Exercise 29. |AT A| 1 = |AT1 A| is correct, but this cannot be broken down into |AT1||A| because
AT and A are in general not square and the determinant is only defined for square matrices.
13
5.2 Permutations and Cofactors
Exercise 1.
2
det A = 4
1
1
1
3
2
5+4
= 1 + 12 + 18
2
3
4
6
3
2
25 + 43
3
2
9 = 12.
Similarly, compute det B = 0 and det C =
the rows of B are dependent.
3
2
5+4
1
2
3
2
25 + 43
2
1
3
2
5+4
3
1
3
3
5
1. The rows of A and C are independent, but
Exercise 3. Each of the six terms in the big formula is 0. The cofactors of row 1 are all 0.
The rank of A is 2.
Exercise 4. See the textbook’s solution.
Exercise 11. Typo in the problem: the second sentence should read “Find det B.” instead
of “Find AC and det B.” See the book’s solution.
Exercise 12. Typo in the problem: the second sentence should read “Compare C T with
A 1 .” See the book’s solution.
Exercise 16. See the book’s solution.
5.3 Cramer’s Rule, Inverses, and Volumes
2
3
3
2
. (b) ~x = 14 4 25.
1
1
Exercise 1. (a) ~x =

Exercise 2. (a) y =
c
.
ad bc
Exercise 6. (a) A
2
1
2
(b) y =
3
= 13 40
0
6
4
Exercise 8. C = 3
6
3
1
2
f g di
.
D
3
2
3
2 0
3 2 1
1 05. (b) A 1 = 14 42 4 25.
7 3
1 2 3
3
0
15. AC T = 3I, so det A = 3. Because C13 = 0.
1
Exercise 10. A is n ⇥ n. Taking determinants of both sides of AC T = (det A)I, we get
(det A)(det C T ) = (det A)n . Dividing both sides by det A and using the fact that det C T =
det C, we get det C = (det A)n 1 .
14
Week 9
6.1 Introduction to Eigenvalues
Exercise
1. Eigenvalues: A has 1 and 12 ; A2 has 1 and 14 ; A1 has 1 and 0. (a) Eigenvalues

.2 .7
of
are 1 and 12 . (b) Because the nullspace doesn’t change during elimination.
.8 .3


1
2
Exercise 2. A has eigenvalues 5 and 1 with corresponding eigenvectors
and
. A+
1
1


1
2
I has eigenvalues 6 and 0 with corresponding eigenvectors
and
. Same; increased.
1
1


2
0
Exercise 5. A has eigenvalues 3 and 1 and eigenvectors
and
. B has eigenvalues 1
1
1



1
1
1
and 3 and eigenvectors
and
. A + B has eigenvalues 3 and 5 and eigenvectors
0
2
1

1
and
. Are not equal to.
1
Exercise 8. (a) Compute A~x. (b) Compute the nullspace of A
I.
Exercise 9. (a) Multiply both sides by A. (b) Multiply both sides by 1 A 1 . (c) Add ~x to
both sides.


1
1
2
Exercise 10. A has eigenvalues 1 and 5 , with eigenvectors
and
.
2
1
2 3
2 3
2 3
.2
0
2
4
5
4
5
4
15.
Exercise 12. Two 1-eigenvectors of P are .4 and 0 . A 0-eigenvector of P is
0 2 3 1
0
.2
4
A 1-eigenvector of P with no zero components is .45.
1

1
Exercise 14. A (cos ✓ + i sin ✓)-eigenvector of Q is
and a (cos ✓ i sin ✓)-eigenvector
i

1
of Q is
.
i

0 1
Exercise 21. A
I has the same determinant as its transpose. Example:
has
0
1



1
0 0
0
1-eigenvector
, while
has 1-eigenvector
.
1
1 1
1



0 0
0 1
0 0
Exercise 23.
,
,
.
0 0
0 0
1 0
15
Exercise 25. Both A~x and B~x equal c1 1~x1 + · · · + cn n~xn for every ~x. Two matrices for
which A~x = B~x for all ~x must be equal: one way to see this is to note that (A B)~x = ~0
for all ~x, which implies that A = B (why?).
p
p
Exercise 29. A has eigenvalues 1, 4, 6. B has eigenvalues 2, 3,
3. C has eigenvalues
0, 0, 6.
Exercise 32. (a) A basis of N (A) is ~u. A basis of C(A) is ~v , w.
~ (b) ~xp = 13 ~v + 15 w.
~ The
complete solution is ~x = 13 ~v + 15 w
~ + c~u. (c) ~u.
6.2 Diagonalizing a Matrix


1 2
1
Exercise 1. (a)
=
0 3
0
3
3
1
1
A = S⇤ S and A = S⇤


1 1 2
Exercise 2. A =
0 1 0

1 1 0
1 0 3
1
S 1.

1
0 1
5 0 1




1
1 1
1 1 0 0
.
=
1
3 3
1 3 0 4
1
0
=

1
4

3
1
1
. (b)
1
2 3
.
0 5
Exercise 4. (a) False, (b) True, (c) True, (d) False.


1
2
1
1
k
1 1
Exercise 8. S = 1 2
. S⇤ S
=
1
0
1
1
1
2

k+1
1
k
1
k+1
2
k
2
.
Exercise 11. (a) true, (b) false, (c) false.
Exercise 12. (a) false, (b) true, (c) true.


1 1
1 1
1
Exercise 18. S =
; S
= 2
1
1
1

k
k
1 3
1 1+3
.
2 1
3k 1 + 3 k
Exercise 20. det A = (det S)(det )(det S
1
; S
1
1
1
) = det ⇤ =
AS =
1
···

1 0
; Ak = S⇤k S
0 3
n.
1
=
Diagonalized.
Exercise 26. Every eigenvector is in N (A) or in C(A), and every vector in N (A) is an
eigenvector, but C(A) may contain too few eigenvectors!
16
Week 10
6.3 Applications to Di↵erential Equations
Exercise 1. Two solutions: ~u = e

1
t
2e
.
1
4t

1
and ~u = et
0


1
4t 1
. The combination ~u = 3e
+
1
0

v
u
Exercise 4.
=
+
= 0. Use ~u =
to write the two equations as d~
=
dt
w



1 1
1
1
~u. The eigenvalues are 0 and 2, with eigenvectors
and
. The solution is
1
1
1



 1

1
1
1
1
21.35
20
10
2t
~u(t) = 20
+ 10e
, so ~u(1) = 20
+ e2
⇡
and limt!1 ~u(t) =
.
1
1
1
1
18.65
20


r
6
2
d~
u
Exercise 8. Using ~u =
, the di↵erential equation is dt =
~u. The eigenvalues
w 
2
1


1
2
30
are 2 and 5, with eigenvectors
and
. The solution with initial conditions ~u(0) =
2
1
30


1
2
is ~u(t) = 10e2t
+ 10e5t
. We have r(t) = 10e2t + 20e5t and w(t) = 20e2t + 10e5t , and
1
2
5t
r(t)
since e5t grows much faster than e2t , limt!1 w(t)
= limt!1 20e
= 2, which means that after
10e5t
a long time, the ratio of rabbits to wolves will be about 2 : 1 (with both populations growing
larger and larger). If you want to go further with this problem, think about how these results
depend on the initial conditions. Try starting with more rabbits
 than wolves (for example
40
10
~u(0) =
) or more wolves than rabbits (for example ~u(0) =
) and see what happens!
20
50
In actual population models, the population growth is usually limited by carrying capacity,
which makes the di↵erential equations non-linear (r and w will occur with power greater
u
than 1) and prevents us from writing them in the nice matrix form d~
= A~u.
dt



0 0
1 t
y(t)
2
At
Exercise 11. A =
, so e = I + At =
, which gives the solution 0
=
0 0
0 1
y (t)

y(0) + y 0 (0)t
. (The initial conditions y(0) and y 0 (0) are the y-intercept and slope of the
y 0 (0)
line!)
d(v+w)
dt
dv
dt
dw
dt
Exercise 13. (a) y(t) = cos(3t), y(t) = sin(3t). y(t) = 3 cos(3t) has initial
conditions


1
1
0
y(0) = 3 and y (0) = 0. (b) The eigenvalues are 3i and 3i with eigenvectors
and
.
3i
3i




3
1
1
3 cos(3t)
The solution with ~u(0) =
is ~u(t) = 32 e3it
+ 32 e 3it
=
, which gives
0
3i
3i
9 sin(3t)
i✓
i✓
the same answer as in (a)! (The last equality follows from the identities cos ✓ = e +e
and
2
ei✓ e i✓
i✓
sin ✓ =
, which are equivalent to the identity e = cos ✓ + i sin ✓.)
2i
17
Exercise 21. S =
 t
e 4et 4
.
0
1

1
0
4
1
= S
1
, ⇤ =

6.4 Symmetric Matrices
1 0
. eAt = Se⇤t S
0 0
1
= S

et 0
S
0 1
1
=
2
3
0
Exercise 3. The eigenvalues of A are 0, 2, 4, with corresponding unit eigenvectors ± p12 4 15,
1
2 3
2 3
1
2
± p13 4 1 5, ± p16 415. Note that these unit eigenvectors are orthonormal!
1
1

2
1
Exercise 4. The eigenvalues of A are 5, 10, and a choice of unit eigenvectors is p5
,
1



1
2 1
5 0
p1
. Thus Q = p15
. Computing QAQ 1 =
yields the eigenvalue
5 2
1 2
0 10
matrix ⇤.

0 1
Exercise 8. The eigenvalues of A must be 0. Example: A =
. If A is symmetric,
0 0


0 0
0 0
A = Q⇤QT =
since ⇤ =
.
0 0
0 0


2 0
1 1
1
Exercise 11. For A, we have ⇤ =
and Q = p2
, so use 1 = 2, 2 = 4,
0
4
1
1


1
1
1
1
p
p
~x1 = 2
, ~x2 = 2
. For B, we get the decomposition
1
1



⇤
⇥
⇤
9 12
4 ⇥
1
1 3
4 3 +5· 5
3 4 .
=0· 5
12 16
3
4
Exercise 18. (1) A is symmetric, so C(A) = C(AT ). Since N (A) and C(AT ) are always
orthogonal, this means that N (A) and C(A) are orthogonal when A is symmetric. (2) Note
that A
I is symmetric since A is symmetric. This proof only shows that eigenvectors
corresponding to di↵erent eigenvalues are orthogonal.

0 1
Exercise 21. (a) False. Example:
. (b) True (assuming the n ⇥ n matrix A has
4 0
n independent eigenvectors). Reason: picking orthonormal eigenvectors for the columns of
our matrix S = Q, we get A = Q⇤Q 1 = Q⇤QT . Then AT = (Q⇤QT )T = Q⇤QT = A,
0 1
so A is symmetric. (If A does not have enough eigenvectors, for instance A =
, then
0 0
this claim can be false.) (c) True. Reason: since A is symmetric, we can write A = Q⇤QT ,
0 1
so we see that A 1 = Q⇤ 1 QT is also symmetric. (d) False. Example: A =
and
1 0

1
1
S = Q = p12
.
1 1
18
Exercise 24. b = 1; b =
1; b = 0.
6.5 Positive Definite Matrices
Exercise 2. Only A4 is positive definite. Using ~x =
Exercise 6. A =


0 1
, which has eigenvalues 1 and
1 0
4
, we see that ~xT A1~x =
3
1 < 0.
1.
Exercise 14. First proof: They are the reciprocals of the eigenvalues of A. Second proof:
We know since A is positive definite that a > 0 and det A = ac b2 > 0, which implies that
c > 0. Also, det A 1 = ac 1 b2 > 0, so A 1 passes the determinant tests.
Exercise 18. ~xT A~x = k~xk2 . Since k~xk2
0, if we know that k~xk2 = ~xT A~x > 0 then
dividing both sides by k~xk2 shows that > 0.
Exercise 20. (a) 0 cannot be an eigenvalue since the eigenvalues are positive. (b) The
only projection matrix that is invertible is I. (If P is projection onto a lower-dimensional
subspace, then the orthogonal subspace is in the nullspace of P .) (c) Determinant
test. (d)

1 0
The symmetric matrix could have two negative eigenvalues, for instance
.
0
1
Exercise 21. For A, the upper left determinants are s, s2 16, and (s 8)(s + 4)2 , which
are all positive exactly when s > 8. For B, the upper left determinants are t, (t 3)(t + 3),
and t(t 5)(t + 5), which are all positive exactly when t > 5.
Exercise 23. a =
p1
1
,b=
p1
2
; a = 13 , b = 12 .

1
Exercise 24. The axes are in the direction of
and
1
19

1
, with
1
q
2
3
and
p
2.
Week 11
6.6 Similar Matrices
Exercise 1. M = F G 1 ; B is similar to A.

1 0
Exercise 3. For the first A and B, use the eigenvector matrix M = S =
. For the
1
1


1 0
0 1
second A and B, use M =
. For the third A and B, use M =
.
0
1
1 0
Exercise 6. I will classify the eight families using their Jordan forms, and write how many
of the sixteen matrices are in the family. The first six families consist of matrices that are
diagonalizable, which you can see because the Jordan form is diagonal. The last two families
consist of matrices that have a double eigenvalue but only one independent eigenvector.
Remember that you can always see the eigenvalues on the diagonal of the Jordan form.



0 0
1 0
1 0
J=
, one
J=
, one
J=
, six
0 0
0 1
0 0
" p
#


1+ 5
2 0
1 0
0p
2
J=
, one
J=
, one
J=
, two
1
5
0 0
0
1
0
2


0 1
1 1
J=
, two
J=
, two
0 0
0 1
Exercise 12. Let ~r1T , . . . , ~r4T denote the rows of M , and ~c1 , . . . , ~c4 denote the columns of M .
Then JM has first row ~r2T , third row ~r4T , and other two rows ~0T . M K has second column
~c1 , third column ~c2 , and other two columns ~0. So if JM = M K, then we can deduce that
the second and fourth rows of M each contain three zeros, and hence are dependent, so M
cannot be invertible.
2
3 2
3 2
3 2
3 2
3
0 0 0 0
0 1 0 0
0 1 0 0
0 1 0 0
0 1 0 0
6 0 0 0 07 6 0 0 0 07 6 0 0 0 07 6 0 0 1 07 6 0 0 1 07
7 6
7 6
7 6
7 6
7
Exercise 13. 6
4 0 0 0 05 , 4 0 0 0 05 , 4 0 0 0 15 , 4 0 0 0 05 , 4 0 0 0 15 .
0 0 0 0
0 0 0 0
0 0 0 0
0 0 0 0
0 0 0 0

0 1
1
2
1 2
Exercise 20. (a) If B = M AM , then B = M A M . (b) Example: A =
and
0
0

0 0
B=
. (c) The matrices have the same eigenvalues 3 and 4, so they are both similar
0 0

3 0
to the diagonal eigenvalue matrix
, hence they are similar. (d) These matrices are in
0 4
Jordan form, and the Jordan forms are di↵erent. Also, the first matrix has two independent
eigenvectors, while the second has only one independent eigenvector, so these matrices cannot
be similar. (e) The eigenvalues stay the same because the matrices are similar: M is the
0 1
permutation matrix that switches rows 1 and 2, which in the 2 ⇥ 2 case is M =
.
1 0
20
6.7 Singular Value Decomposition

1
Exercise 1. The normalized eigenvectors of A A are ~v1 =
, with eigenvalue 12 = 50,
2


2
1
1
1
and ~v2 = p5
, with eigenvalue 0. So ~u1 = p10
, and indeed AAT ~u1 = 50~u1 . We
1
3
3
choose ~u2 = p110
. Then the SVD gives
1
T


50 0 p1 1 2
.
2 1
0 0 5
✓  ◆
✓ 
◆
✓
 ◆
1
2
1
Exercise 2. C(AT ) = span p15
, N (A) = span p15
, C(A) = span p110
,
2
1
3
✓

◆
3
N (AT ) = span p110
.
1
2
3
1 1 0
Exercise 6. AT A = 41 2 15 has eigenvalues 3 = 12 , 1 = 22 , and 0, with corresponding
0 12 13
2 3
2 3

1
1
1
2 1
1
1
1
T
4
5
4
5
4
5
1 . AA =
unit eigenvectors ~v1 = p6 2 , ~v2 = p2 0 , and ~v3 = p3
has
1 2
1
1
1

1
1
1
1
eigenvalues 3 and 1, with corresponding unit eigenvectors ~u1 = p2
and ~u2 = p2
.
1
1
The SVD gives
2
3


p
1
2
1
p
p
1 1 0
1 1
3 0 0 p1 4
5
3
0
= p12
p
p3 .
0 1 1
1
1
0 1 0 6 p
2
2
2
2
3
1 2
=
3 6
p1
10

p1
5
1
3
3
1
p
6
Exercise 11. Since A has orthogonal columns, it has rank n. AT A = 4
2
1
..
.
2
n
7
5, so
V = I. Thus ~u1 = 11 w
~ 1 , . . . , ~un = 1n w
~ n , and we choose ~un+1 , . . . , ~um to be an orthonormal
T
basis of N (A ). These ~ui are the columns of U . Finally, ⌃ is the m ⇥ n matrix whose only
nonzero entries are i in position (i, i) for 1  i  n.
7.1 The Idea of a Linear Transformation
Exercise 1. w
~=
~v ; c = 0.
Exercise 2. T (c~v + dw)
~ = T (c~v ) + T (dw)
~ = cT (~v ) + dT (w).
~ Similarly, one can show that
T (c~v + dw
~ + e~u) = cT (~v ) + dT (w)
~ + eT (~u).
Exercise 3. The transformations in (d) and (f) are not linear.
21
✓
◆
✓ ◆
1
1
Exercise 10. (a) kernel(T ) = span
is too big and range(T ) = span
is not all
1
02 3 2 031
✓ ◆
1
0
0
2
3
of R . (b) range(T ) = span @405 , 415A is not all of R . (c) kernel(T ) = span
is
1
1
1
too big.
Exercise 11. (a) V = Rn and W = Rm . (b) The outputs T (v) = Av are exactly all linear
combinations of the columns of A, so the range of T is C(A). (c) The kernel of T is all
vectors v so that T (v) = Av = 0, which is N (A).







1
2
4
3
1
2
1
Exercise 12. (a) T
=T 2
= 2T
=2
=
. (b) T
=T
+
2
1
1
2
4
1
1










2
1
2
2
0
2
1
2
a
b
=T
+T
=
+
=
. (c) T
=
. (d) T
=
.
0
1
0
2
0
2
1
2
b
b
Exercise 20. (a) Vertical lines stay vertical, and horizontal lines stay horizontal. The
house gets stretched/shrunk in the horizontal and vertical directions without any shearing
or rotating. (b) The house
 gets collapsed

onto a line through the origin. (c) Vertical lines
0
0
0
stay vertical because T
=A
=
.
1
1
a22
Exercise 21. I will describe what the resulting house looks like, but you should be drawing
pictures! D stretches the house horizontally by a factor of 2. A collapses the house onto the
.7
line span
. U takes horizontal lines to horizontal lines (it is upper triangular!), but it
.3
shears the house to the right, taking vertical lines to lines with slope 45 .
Exercise 22. (a) I’m not quite sure what the author means by “sit straight up”. To prevent
the house from leaning left or leaning right, we need A to take vertical lines to vertical lines,
namely A should be lower triangular (b = 0). To keep the floor of the house horizontal, we
need A to be upper triangular (c = 0). Together, b = 0 and c = 0 make A diagonal. To
prevent the house from being flipped upside-down, we want d > 0. To prevent the house
from being smashed onto the y-axis, we want a 6= 0, but we probably allow a < 0 since
reflecting the house about the y-axis seems to leave it “sitting straight up”. In summary, we
want a 6= 0, b = c = 0, and d > 0. (b) A = 3I. (c) A should be orthogonal.
Exercise 23. T first rotates the house by 180 and then shifts it to the right by 1. This
transformation T is not linear (see Exercise 1).
Exercise 29. (a) det A = 0. (b) det A > 0. (c) det A = 1.
Exercise 31. Parallel lines go to parallel lines.
22
Week 12
7.2 The Matrix of a Linear Transformation
2
0
60
Exercise 1. S~v1 = 0, S~v2 = 0, S~v3 = 2w
~ 1 , S~v4 = 6w
~ 2. B = 6
40
0
0
0
0
0
2
0
0
0
3
0
67
7.
05
0
Exercise 2. The functions ~v = a + bx have ~v 00 = 0, so they are in the kernel of S and in
the nullspace of B.
Exercise 3. After adding a zero
2
0 0 0
60 0 0
Exercise 4. (a) AB = 6
40 0 0
0 0 0
row to A, we get A2 = B; output basis = input basis.
3
6
07
7. (b) Because our polynomials have degree at most 3.
05
0
Exercise 13. (a) A permutation transformation: T (~v1 ) = ~v2 , T (~v2 ) = ~v1 . (b) A projection
transformation: T (~v1 ) = ~v1 , T (~v2 ) = ~0. (c) If T = T 2 and T were invertible, then T 1 T =
T 1 T 2 , so I = T . But we’re not allowing T = I.


2 1
3
1
Exercise 14. (a)
. (b)
. (c) Because that would contradict linearity.
5 3
5 2
Exercise 17. Permutation; diagonal.


a
cos ✓
Exercise 18.
=
.
b
sin ✓
Exercise 20. (a) w
~ 2 = (x
~y (x) = 4w
~ 1 + 5w
~ 2 + 6w
~ 3.
1)(x + 1) = 1
x2 . (b) w
~ 3 = 12 x(x
1) = 12 (x2
2
0
Exercise 21. The change of basis matrix from the w’s
~ to the ~v ’s is 4 12
2
1
4
change of basis matrix from the ~v ’s to the w’s
~ is 1
1
1
2
1
0
1
x). (c)
0
1
2
3
15
.
2
The
3
1 1
0 05. (These matrices are inverses!)
1 1
Exercise 26. The diagonal matrix with the eigenvalues on the diagonal.
Exercise 27. The vectors w
~ i = T (~vi ) are a basis if and only if T is invertible.



x
x
x
Exercise 29. S T
=S
=
. So ST is the transformation that multiy
y
y
plies every vector by 1, namely ST = I.
Exercise 32. False. We need the n di↵erent nonzero vectors to be independent.
23
Exercise 36. The change of basis matrix is M = V 1 W .
2
3
a 0 b 0
60 a 0 b 7
7
Exercise 37. A = 6
4 c 0 d 0 5.
0 c 0 d
2
3
1 0 0 0
Exercise 38. 40 1 0 05. (Given any transformation T defined by T (~v ) = A~v , where A
0 0 0 0
is an m ⇥ n matrix of rank r, you can choose input and output bases in this manner so that
the matrix representing T in these special bases has r 1’s on the diagonal and is 0 everywhere
else. Exercise 27 is the special case when r = m = n.)
7.3 Diagonalization and the Pseudoinverse

10 20
Exercise 1. (a) A A =
has eigenvalues 50 and 0 with unit eigenvectors ~v1 =
20
40



p
1
2
5 15
1
1
T
p
has eigenvalues 50 and 0
and ~v2 = p5
. 1 = 50. (b) AA =
5 2
1
15 45




p 1
1
3
5
1
1
1
with unit eigenvectors ~u1 = p10
and ~u2 = p10
. (c) A~v1 = p5
= 5
=
3
1
15
3

p
1
1
p
50 10
= 1~u1 . The SVD is
3


p

1 2
1
3
1 2
50
0
1
1
p
= p10
.
5
3 6
3 1
2 1
0 0



1
2
1
1
1
1
T
Exercise 2. (a) Orthonormal bases: p5
for C(A ); p5
for N (A); p10
for C(A);
2
1
3

3
p1
for N (AT ). (b) Since the column space of A is one-dimensional, both columns of
10
1

1
A must be multiples of
. Since the row space of A is one-dimensional, both rows of A
 3
1
must be multiples of
. The only matrices that have these two properties are cA, where c
2
is any nonzero scalar.

 1


p
0 1
1
2
1
3
1 3
1
1
+
50
p
Exercise 4. A = p5
= 50
. Note that N (A) =
10
2 1
3 1
2 6
0 0
T
T
N (A+ ), N (AT ) = N (A+ ), C(A) = C(A+ ), C(AT ) = C(A+ ) (the four subspaces for A+
are always the related to the four subspaces of A in this
 way, which is especially obvious in
1 2
this example since A+ is a multiple of AT ). A+ A = 15
projects vectors onto C(AT ),
2
4

1 3
1
and AA+ = 9
projects vectors onto C(A) .
3 9
T
24
Download