Math 215 Exam #2 Practice Problem Solutions

1. For each of the following statements, say whether it is true or false. If the statement is true, prove it.
If false, give a counterexample.
(a) If Q is an orthogonal matrix, then det Q = 1.
Answer: False. Let
$$Q = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}.$$
Then the columns of Q are orthogonal and each has length 1, so Q is an orthogonal matrix, but
det Q = −1.
However, if the statement had been that | det Q| = 1, then it would have been true (think, for
example, of the box spanned by the columns of Q; since the action of Q preserves lengths, the
volume of this box must be 1).
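As a quick numerical sanity check of both claims (a NumPy sketch using the counterexample above):

```python
import numpy as np

Q = np.array([[0.0, 1.0],
              [1.0, 0.0]])

print(np.allclose(Q.T @ Q, np.eye(2)))  # True: the columns are orthonormal
print(np.linalg.det(Q))                 # -1.0 (up to rounding), so det Q = 1 fails
print(abs(np.linalg.det(Q)))            # 1.0, so |det Q| = 1 holds
```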
(b) Every invertible matrix can be diagonalized.
Answer: False. Consider the matrix
$$A = \begin{bmatrix} 1 & 5 \\ 0 & 1 \end{bmatrix}.$$
Then det A = 1, so A is indeed invertible. Now, if we try to find the eigenvalues of A, we solve
$$0 = \det(A - \lambda I) = \begin{vmatrix} 1 - \lambda & 5 \\ 0 & 1 - \lambda \end{vmatrix} = (1 - \lambda)(1 - \lambda),$$
so λ = 1. Since A has only one eigenvalue, for A to be diagonalizable there must be two linearly independent eigenvectors associated with the eigenvalue 1; otherwise there won't be enough columns to make an invertible eigenvector matrix S.
The eigenvector(s) corresponding to 1 will be in the nullspace of
$$A - I = \begin{bmatrix} 0 & 5 \\ 0 & 0 \end{bmatrix},$$
which is already in reduced echelon form. If (A − I)~x = ~0, then it must be the case that x2 = 0, so eigenvectors are of the form
$$\begin{bmatrix} x_1 \\ 0 \end{bmatrix}.$$
Therefore, there are not 2 linearly independent eigenvectors, so A is not diagonalizable.
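A short NumPy check of this conclusion (a sketch, not part of the written solution):

```python
import numpy as np

A = np.array([[1.0, 5.0],
              [0.0, 1.0]])

print(np.linalg.det(A))        # 1.0: A is invertible
print(np.linalg.eigvals(A))    # [1. 1.]: the single eigenvalue, listed twice

# rank(A - I) = 1, so the eigenspace for 1 is only 1-dimensional:
# there is no second independent eigenvector, hence no invertible S.
print(np.linalg.matrix_rank(A - np.eye(2)))  # 1
```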
(c) Every diagonalizable matrix is invertible.
Answer: False. The matrix
$$A = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}$$
is certainly diagonalizable (it is itself a diagonal matrix), but A is not invertible since it is of rank 1 (alternatively: det A = 0).
(d) If the matrix A is not invertible, then 0 is an eigenvalue of A.
Answer: True. If A is not invertible, then rank A < n. Therefore, since
dim col A + dim nul A = n
and dim col A = rank A, we see that the nullspace of A has dimension ≥ 1, so there is at least one
non-zero vector ~v such that A~v = ~0. Thus, 0 is an eigenvalue of A with corresponding eigenvector
~v .
(e) If ~v and ~w are orthogonal and P is a projection matrix, then P~v and P~w are also orthogonal.
Answer: False. Let ℓ be the line in R^2 through
$$\vec{a} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}.$$
Then, although the vectors
$$\vec{v} = \begin{bmatrix} 1 \\ 0 \end{bmatrix} \quad \text{and} \quad \vec{w} = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$$
are perpendicular, we would expect that P~v and P~w are not only not perpendicular, but in fact should be equal.
To confirm this, note that
$$P = \frac{\vec{a}\vec{a}^T}{\langle \vec{a}, \vec{a} \rangle} = \frac{1}{2}\begin{bmatrix} 1 \\ 1 \end{bmatrix}\begin{bmatrix} 1 & 1 \end{bmatrix} = \frac{1}{2}\begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix} = \begin{bmatrix} 1/2 & 1/2 \\ 1/2 & 1/2 \end{bmatrix}.$$
Therefore,
$$P\vec{v} = \begin{bmatrix} 1/2 & 1/2 \\ 1/2 & 1/2 \end{bmatrix}\begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} 1/2 \\ 1/2 \end{bmatrix} \quad \text{and} \quad P\vec{w} = \begin{bmatrix} 1/2 & 1/2 \\ 1/2 & 1/2 \end{bmatrix}\begin{bmatrix} 0 \\ 1 \end{bmatrix} = \begin{bmatrix} 1/2 \\ 1/2 \end{bmatrix},$$
as expected.
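A short NumPy check of this computation (a sketch):

```python
import numpy as np

a = np.array([1.0, 1.0])
P = np.outer(a, a) / np.dot(a, a)   # projection onto the line through a

v = np.array([1.0, 0.0])
w = np.array([0.0, 1.0])

print(np.dot(v, w))          # 0.0: v and w are orthogonal
print(P @ v, P @ w)          # both [0.5 0.5]: equal, as claimed
print(np.dot(P @ v, P @ w))  # 0.5, not 0, so Pv and Pw are not orthogonal
```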
(f) Suppose A is an n × n matrix and that there exists some k such that A^k = 0 (such matrices are called nilpotent matrices). Then A is not invertible.
Answer: True. We know that the determinant of a product is the product of the determinants,
so
$$0 = \det 0 = \det(A^k) = (\det A)^k.$$
Therefore, it must be the case that det A = 0, which implies that A is not invertible.
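For instance (a sketch with one concrete nilpotent matrix):

```python
import numpy as np

# A strictly upper-triangular matrix is nilpotent: here A^2 = 0.
A = np.array([[0.0, 3.0],
              [0.0, 0.0]])

print(A @ A)              # the zero matrix, so A is nilpotent with k = 2
print(np.linalg.det(A))   # 0.0, confirming A is not invertible
```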
2. Let Q be an n × n orthogonal matrix. Show that if {~v1, . . . , ~vn} is an orthonormal basis for R^n, then so is {Q~v1, . . . , Q~vn}.
Proof. On the last exam, you proved that, for any linear map T , if ~u1 , . . . , ~un are linearly independent,
then T (~u1 ), . . . , T (~un ) are linearly independent as well (actually, you only showed this for n = 3, but
essentially the same proof holds in general).
Therefore, since multiplication by Q (or any matrix) certainly defines a linear transformation, we have
that {Q~v1, . . . , Q~vn} is a linearly independent set. Since this is a set of n linearly independent vectors in R^n, it is necessarily a basis for R^n. Therefore, to see that it is an orthonormal basis, we only need to show that each Q~vi has length 1 and that Q~vi is perpendicular to Q~vj whenever j ≠ i.
Note that, for any i, j ∈ {1, . . . , n},
$$\langle Q\vec{v}_i, Q\vec{v}_j \rangle = (Q\vec{v}_i)^T(Q\vec{v}_j) = \vec{v}_i^T Q^T Q \vec{v}_j.$$
Since Q is an orthogonal matrix, we know that Q^T Q = I, so the above is equal to ~vi^T ~vj, which is, by definition, ⟨~vi, ~vj⟩.
Thus, we have shown that, for any choices of i and j,
$$\langle Q\vec{v}_i, Q\vec{v}_j \rangle = \langle \vec{v}_i, \vec{v}_j \rangle = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{if } i \neq j. \end{cases}$$
This is precisely the condition for {Q~v1 , . . . , Q~vn } to be an orthonormal basis.
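As a numerical illustration (a sketch; the rotation Q and the basis V are arbitrary choices):

```python
import numpy as np

theta = 0.7
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # an orthogonal matrix

# An orthonormal basis of R^2 (as columns), not the standard one:
V = np.array([[1.0,  1.0],
              [1.0, -1.0]]) / np.sqrt(2)

QV = Q @ V   # columns are Q v_1, Q v_2

# The Gram matrix (QV)^T (QV) is the identity iff {Q v_i} is orthonormal.
print(np.allclose(QV.T @ QV, np.eye(2)))  # True
```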
3. Let
$$A = \begin{bmatrix} 2 & 1 \\ 1 & 3 \end{bmatrix}.$$
(a) Let R be the region in the plane enclosed by the unit circle. If T is the linear transformation of
the plane whose matrix is A, what is the area of T (R)?
Answer: As we discussed in class, the amount that a linear transformation distorts volumes is
precisely given by the determinant of the matrix of that linear transformation. Therefore, since
$$\det A = \begin{vmatrix} 2 & 1 \\ 1 & 3 \end{vmatrix} = 2 \cdot 3 - 1 \cdot 1 = 5,$$
the region T (R) should have an area 5 times that of R. Therefore, since the area of R is π, we
see that the area of T (R) is 5π.
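As an independent numerical check (a NumPy sketch; the polygon resolution is an arbitrary choice), we can approximate the unit circle by a fine polygon, push it through T, and measure the image's area with the shoelace formula:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

# Approximate the boundary of the unit disk R by a polygon, map it
# through T, and compute the image's area via the shoelace formula.
t = np.linspace(0, 2 * np.pi, 10_000, endpoint=False)
circle = np.stack([np.cos(t), np.sin(t)])   # shape (2, N)
x, y = A @ circle                           # boundary of T(R)

area = 0.5 * np.abs(np.sum(x * np.roll(y, -1) - np.roll(x, -1) * y))
print(area, 5 * np.pi)                      # both ~ 15.708
```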
(b) Find the matrix for the transformation T −1 without doing elimination.
Answer: The matrix for the transformation T −1 will simply be the inverse of the matrix for T ,
which is A. Therefore, we just need to determine A−1 without resorting to elimination. However,
this is easily done using the cofactor matrix C, as
$$A^{-1} = \frac{1}{\det A} C^T = \frac{1}{\det A}\begin{bmatrix} C_{11} & C_{21} \\ C_{12} & C_{22} \end{bmatrix}.$$
Thus, we just need to compute the cofactors, where M_ij denotes the minor obtained by deleting row i and column j of A:
$$C_{11} = (-1)^{1+1} M_{11} = 3, \qquad C_{12} = (-1)^{1+2} M_{12} = -1,$$
$$C_{21} = (-1)^{2+1} M_{21} = -1, \qquad C_{22} = (-1)^{2+2} M_{22} = 2.$$
Therefore, since det A = 5,
$$A^{-1} = \frac{1}{5}\begin{bmatrix} 3 & -1 \\ -1 & 2 \end{bmatrix} = \begin{bmatrix} 3/5 & -1/5 \\ -1/5 & 2/5 \end{bmatrix}.$$
(Note: for a 2 × 2 matrix, the above reasoning reduces to the familiar formula
$$\begin{bmatrix} a & b \\ c & d \end{bmatrix}^{-1} = \frac{1}{ad - bc}\begin{bmatrix} d & -b \\ -c & a \end{bmatrix}.)$$
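A quick NumPy sanity check of this inverse (a sketch, not part of the written solution):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

A_inv = np.linalg.inv(A)
print(A_inv)                                                          # [[ 0.6 -0.2], [-0.2  0.4]]
print(np.allclose(A_inv, np.array([[3.0, -1.0], [-1.0, 2.0]]) / 5))   # True
print(np.allclose(A @ A_inv, np.eye(2)))                              # True
```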
4. Let ℓ be the line in R^3 through the vector
$$\vec{a} = \begin{bmatrix} 1 \\ -2 \\ 1 \end{bmatrix}.$$
(a) Find a basis for the orthogonal complement of ℓ.
Answer: The space ℓ is clearly equal to the column space of the 3 × 1 matrix
$$\begin{bmatrix} 1 \\ -2 \\ 1 \end{bmatrix}$$
or, equivalently, the row space of the 1 × 3 matrix A = [1 −2 1]. The latter is particularly useful, since the nullspace of a matrix is the orthogonal complement of its row space. Therefore, to find the orthogonal complement of ℓ, we need only find the nullspace of A. In other words, we want to solve A~x = ~0. This is easy enough, though:
$$\vec{0} = A\vec{x} = \begin{bmatrix} 1 & -2 & 1 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = x_1 - 2x_2 + x_3,$$
or, equivalently, x1 = 2x2 − x3 . Therefore, elements of the nullspace take the form
$$x_2\begin{bmatrix} 2 \\ 1 \\ 0 \end{bmatrix} + x_3\begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix}.$$
Hence,
$$\left\{ \begin{bmatrix} 2 \\ 1 \\ 0 \end{bmatrix}, \begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix} \right\}$$
is a basis for the nullspace of A, which is of course equal to ℓ⊥.
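As a numerical check (a NumPy sketch): the two basis vectors should be orthogonal to ~a and linearly independent.

```python
import numpy as np

a = np.array([1.0, -2.0, 1.0])
b1 = np.array([2.0, 1.0, 0.0])
b2 = np.array([-1.0, 0.0, 1.0])

print(np.dot(a, b1), np.dot(a, b2))                      # 0.0 0.0: both solve A x = 0
print(np.linalg.matrix_rank(np.column_stack([b1, b2])))  # 2: they are independent
```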
(b) If
$$\vec{v} = \begin{bmatrix} 1 \\ 3 \\ 1 \end{bmatrix},$$
write ~v as a sum
~v = ~v1 + ~v2 ,
where ~v1 ∈ ℓ and ~v2 ∈ ℓ⊥.
Answer: If we let ~v1 be the orthogonal projection of ~v to ℓ and if we let ~v2 equal ~v − ~v1, then we will have the desired decomposition of ~v (since ~v − ~v1 is necessarily orthogonal to ℓ and, therefore, an element of ℓ⊥).
To find ~v1, recall that the projection of ~v to the line through the vector ~a is given by
$$\vec{v}_1 = \frac{\vec{a}\vec{a}^T}{\langle \vec{a}, \vec{a} \rangle}\vec{v}.$$
Compute piece-by-piece: first, we have that
$$\vec{a}\vec{a}^T = \begin{bmatrix} 1 \\ -2 \\ 1 \end{bmatrix}\begin{bmatrix} 1 & -2 & 1 \end{bmatrix} = \begin{bmatrix} 1 & -2 & 1 \\ -2 & 4 & -2 \\ 1 & -2 & 1 \end{bmatrix}.$$
Next,
$$\langle \vec{a}, \vec{a} \rangle = \vec{a}^T\vec{a} = \begin{bmatrix} 1 & -2 & 1 \end{bmatrix}\begin{bmatrix} 1 \\ -2 \\ 1 \end{bmatrix} = 6.$$
Therefore,
$$\vec{v}_1 = \frac{\vec{a}\vec{a}^T}{\langle \vec{a}, \vec{a} \rangle}\vec{v} = \frac{1}{6}\begin{bmatrix} 1 & -2 & 1 \\ -2 & 4 & -2 \\ 1 & -2 & 1 \end{bmatrix}\begin{bmatrix} 1 \\ 3 \\ 1 \end{bmatrix} = \frac{1}{6}\begin{bmatrix} -4 \\ 8 \\ -4 \end{bmatrix} = \begin{bmatrix} -2/3 \\ 4/3 \\ -2/3 \end{bmatrix}.$$
Thus, we can determine ~v2 as
$$\vec{v}_2 = \vec{v} - \vec{v}_1 = \begin{bmatrix} 1 \\ 3 \\ 1 \end{bmatrix} - \begin{bmatrix} -2/3 \\ 4/3 \\ -2/3 \end{bmatrix} = \begin{bmatrix} 5/3 \\ 5/3 \\ 5/3 \end{bmatrix}.$$
Hence, the desired decomposition of ~v is
$$\vec{v} = \begin{bmatrix} -2/3 \\ 4/3 \\ -2/3 \end{bmatrix} + \begin{bmatrix} 5/3 \\ 5/3 \\ 5/3 \end{bmatrix}.$$
(Notice that we can confirm that ~v2 is indeed in ℓ⊥, since
$$\langle \vec{v}_2, \vec{a} \rangle = \vec{v}_2^T\vec{a} = \begin{bmatrix} 5/3 & 5/3 & 5/3 \end{bmatrix}\begin{bmatrix} 1 \\ -2 \\ 1 \end{bmatrix} = 0.$$
Also, we can write ~v2 in terms of the basis for ℓ⊥ found in (a):
$$\begin{bmatrix} 5/3 \\ 5/3 \\ 5/3 \end{bmatrix} = \vec{v}_2 = \frac{5}{3}\begin{bmatrix} 2 \\ 1 \\ 0 \end{bmatrix} + \frac{5}{3}\begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix}.)$$
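The entire decomposition can be double-checked in a few lines of NumPy (a sketch of the computation above):

```python
import numpy as np

a = np.array([1.0, -2.0, 1.0])
v = np.array([1.0, 3.0, 1.0])

P = np.outer(a, a) / np.dot(a, a)   # projection onto the line l
v1 = P @ v                          # component of v along l
v2 = v - v1                         # component in the orthogonal complement

print(v1)                           # (-2/3, 4/3, -2/3) as decimals
print(v2)                           # (5/3, 5/3, 5/3) as decimals
print(np.dot(v2, a))                # 0.0: v2 is orthogonal to a
print(np.allclose(v1 + v2, v))      # True
```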
5. Find the line C + Dt that best fits the data
(−1, 1), (0, 1), (1, 2).
Answer: If the above data points actually lay on a line C + Dt, then we would have
$$\begin{bmatrix} 1 & -1 \\ 1 & 0 \\ 1 & 1 \end{bmatrix}\begin{bmatrix} C \\ D \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \\ 2 \end{bmatrix}.$$
Letting A be the matrix on the left, we want to find the values of C and D such that
$$A\begin{bmatrix} C \\ D \end{bmatrix}$$
is as close as possible to the vector ~b on the right-hand side of the above equation. The appropriate such C and D will be given by
$$\begin{bmatrix} C \\ D \end{bmatrix} = (A^T A)^{-1} A^T \vec{b}. \qquad (*)$$
Let’s take this in pieces. First,
$$A^T A = \begin{bmatrix} 1 & 1 & 1 \\ -1 & 0 & 1 \end{bmatrix}\begin{bmatrix} 1 & -1 \\ 1 & 0 \\ 1 & 1 \end{bmatrix} = \begin{bmatrix} 3 & 0 \\ 0 & 2 \end{bmatrix},$$
so we see that
$$(A^T A)^{-1} = \begin{bmatrix} 1/3 & 0 \\ 0 & 1/2 \end{bmatrix}.$$
Substituting this into (∗), we see that
$$\begin{bmatrix} C \\ D \end{bmatrix} = \begin{bmatrix} 1/3 & 0 \\ 0 & 1/2 \end{bmatrix}\begin{bmatrix} 1 & 1 & 1 \\ -1 & 0 & 1 \end{bmatrix}\begin{bmatrix} 1 \\ 1 \\ 2 \end{bmatrix} = \begin{bmatrix} 1/3 & 0 \\ 0 & 1/2 \end{bmatrix}\begin{bmatrix} 4 \\ 1 \end{bmatrix} = \begin{bmatrix} 4/3 \\ 1/2 \end{bmatrix},$$
so the best-fit line for the data is
$$\frac{4}{3} + \frac{1}{2}t.$$
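This is the same computation that NumPy's least-squares routine performs; a short check (a sketch, not part of the solution):

```python
import numpy as np

t = np.array([-1.0, 0.0, 1.0])
b = np.array([1.0, 1.0, 2.0])

A = np.column_stack([np.ones_like(t), t])      # columns correspond to C and D
coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
print(coeffs)                                  # [1.3333... 0.5], i.e. C = 4/3, D = 1/2
```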
6. Let ℓ be the line through a vector ~a ∈ R^n and let P be the matrix which projects everything in R^n to ℓ.
(a) Show that the trace of P equals 1.
Proof. Suppose
$$\vec{a} = \begin{bmatrix} a_1 \\ \vdots \\ a_n \end{bmatrix}.$$
The matrix P is given by
$$P = \frac{\vec{a}\vec{a}^T}{\langle \vec{a}, \vec{a} \rangle} = \frac{1}{a_1^2 + \ldots + a_n^2}\begin{bmatrix} a_1 \\ \vdots \\ a_n \end{bmatrix}\begin{bmatrix} a_1 & \cdots & a_n \end{bmatrix} = \frac{1}{a_1^2 + \ldots + a_n^2}\begin{bmatrix} a_1^2 & a_1 a_2 & \cdots & a_1 a_n \\ a_2 a_1 & a_2^2 & \cdots & a_2 a_n \\ \vdots & \vdots & \ddots & \vdots \\ a_n a_1 & a_n a_2 & \cdots & a_n^2 \end{bmatrix}.$$
Hence, since the trace of P is just the sum of the diagonal entries in P , we see that
$$\operatorname{trace}(P) = \frac{a_1^2}{a_1^2 + \ldots + a_n^2} + \ldots + \frac{a_n^2}{a_1^2 + \ldots + a_n^2} = \frac{a_1^2 + \ldots + a_n^2}{a_1^2 + \ldots + a_n^2} = 1,$$
as desired.
(b) What can you say about the eigenvalues of P ?
Answer: First off, recall that the trace of a matrix is equal to the sum of its eigenvalues, so we
know, from part (a), that the sum of the eigenvalues of P is 1.
Moreover, notice that
P~a = ~a
since ~a is already in the line `, so we see that 1 is an eigenvalue of P with corresponding eigenvector
~a. Thus, by the above reasoning, the sum of all the other eigenvalues of P is zero.
In fact, the only other eigenvalue of P is zero. Notice that for any vector ~v in ℓ⊥, P~v = ~0. Hence, 0 is definitely an eigenvalue for P and, since ℓ⊥ is (n − 1)-dimensional, there are n − 1 linearly
independent eigenvectors for the eigenvalue 0.
Since we’ve found n linearly independent eigenvectors for P and since no n × n matrix can have more than n linearly independent eigenvectors, we see that 1 and 0 are the only eigenvalues for P.
7. Suppose A is a 2 × 2 matrix with eigenvalues λ1 and λ2 corresponding to non-zero eigenvectors ~v1 and ~v2, respectively. If λ1 ≠ λ2, show that ~v1 and ~v2 are linearly independent.
Proof. If ~v1 and ~v2 are not linearly independent, then ~v1 is a multiple of ~v2 , so there is some real
number r such that
~v1 = r~v2 .
Now, multiply both sides of the above equation by A to get
A~v1 = rA~v2 .
Using the fact that ~v1 and ~v2 are eigenvectors, the above can be re-written as
λ1~v1 = rλ2~v2 .
Now, substitute r~v2 for ~v1 on the left-hand side to get
rλ1~v2 = rλ2~v2 .
Since ~v1 is non-zero, r ≠ 0, so we may divide both sides by r. This implies that λ1 = λ2, which contradicts the hypothesis that λ1 ≠ λ2.
From this contradiction, then, we conclude that ~v1 and ~v2 must be linearly independent.
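As a numerical illustration of the result (a sketch; the particular matrix is an arbitrary choice):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])         # eigenvalues (5 ± sqrt(5))/2, which are distinct

eigenvalues, V = np.linalg.eig(A)  # columns of V are eigenvectors
print(eigenvalues)                 # two distinct eigenvalues
print(np.linalg.matrix_rank(V))    # 2: the eigenvectors are linearly independent
```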