Math 317: Linear Algebra
Practice Final Exam
Fall 2015
Name:
This practice final emphasizes the latest material covered in class. However, recall that
the final exam is comprehensive. To best prepare for the final, in addition to looking at
this practice final, look over all practice exams and real exams given in class, along with
the final exam topics sheet.
1. Let A = \begin{pmatrix} 2 & 5 \\ 1 & -2 \end{pmatrix}. Calculate A^k for all k ≥ 1.
See HW 11 solutions, Problem 8.
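(For reference, the key computation: A² = 9I, so A^k = 9^(k/2) I for even k and 9^((k−1)/2) A for odd k. Below is a quick numerical check with numpy; this sketch is ours, not part of the HW 11 solutions.)

    import numpy as np

    A = np.array([[2, 5], [1, -2]])

    # A squared collapses to 9I, which drives the closed form for A^k.
    print(np.array_equal(A @ A, 9 * np.eye(2, dtype=int)))   # True

    # Spot-check A^k = 9^(k//2) * (I if k is even, else A) for small k.
    for k in range(1, 6):
        closed = 9 ** (k // 2) * (np.eye(2, dtype=int) if k % 2 == 0 else A)
        print(np.array_equal(np.linalg.matrix_power(A, k), closed))  # True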
2. Let A and B be similar matrices.
(a) Prove that A and B have the same characteristic polynomial.
Proof: Suppose that A and B are similar matrices. Then there is a nonsingular matrix P such that A = P^{-1}BP. So det(A − λI) = det(P^{-1}BP − λP^{-1}P) = det(P^{-1}(B − λI)P) = det(P^{-1}) det(B − λI) det(P) = det(B − λI). So A and B have the same characteristic polynomial.
(b) Prove that A and B have the same eigenvalues.
Proof: Since A and B have the same characteristic polynomial by part (a), and the eigenvalues are exactly the roots of the characteristic polynomial, A and B have the same eigenvalues with the same algebraic multiplicities.
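(A quick numerical illustration of (a) and (b); the matrices and the use of numpy here are our own sketch, not part of the exam solutions.)

    import numpy as np

    rng = np.random.default_rng(0)
    B = rng.standard_normal((3, 3))
    P = rng.standard_normal((3, 3))        # generically nonsingular
    A = np.linalg.inv(P) @ B @ P           # A = P^{-1} B P, so A is similar to B

    # Same characteristic polynomial coefficients (np.poly gives det(tI - X))...
    print(np.allclose(np.poly(A), np.poly(B)))                 # True
    # ...and hence the same eigenvalues.
    print(np.allclose(np.sort_complex(np.linalg.eigvals(A)),
                      np.sort_complex(np.linalg.eigvals(B))))  # True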
3. (a) Let A be an n × n matrix with n distinct eigenvalues. Prove that det A is the product of its eigenvalues.
Proof: Suppose that A is an n × n matrix with n distinct eigenvalues. Then alg mult(λ) = 1 for each eigenvalue λ of A. Thus A is diagonalizable (since every eigenvalue is simple), and so there is a nonsingular matrix P such that A = PDP^{-1}, where D is a diagonal matrix whose entries are the eigenvalues λi of A. Hence det(A) = det(PDP^{-1}) = det(P) det(D) det(P^{-1}) = det(D), which is the product of the diagonal entries of D, that is, the product of the eigenvalues of A.
(b) Suppose that A is a symmetric matrix with real eigenvalues and A(1, 1)^T = (2, 2)^T. Find A if det A = 6.
We begin by finding the eigenvalues of A. We note that A(1, 1)^T = (2, 2)^T implies that Av1 = 2v1 with v1 = (1, 1)^T. Hence λ1 = 2 with corresponding eigenvector v1. Now A is a 2 × 2 matrix and A has real eigenvalues, so det A = λ1λ2 = 6 implies λ2 = 3. To find a corresponding eigenvector v2 for λ2, we recall that if A is a symmetric matrix, then eigenvectors from different eigenvalues are orthogonal to each other; that is, v1 · v2 = 0. Hence (1, 1)^T · (c, d)^T = 0 implies c + d = 0, so c = −d and v2 = (−1, 1)^T is an eigenvector corresponding to λ2 = 3. Now A has 2 simple (alg mult(λ) = 1) eigenvalues and hence A is diagonalizable with
P = \begin{pmatrix} 1 & -1 \\ 1 & 1 \end{pmatrix} and D = \begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix}.
That is,
A = PDP^{-1} = \begin{pmatrix} 1 & -1 \\ 1 & 1 \end{pmatrix} \begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix} \begin{pmatrix} 1 & -1 \\ 1 & 1 \end{pmatrix}^{-1}.
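Carrying out the product (with P^{-1} = (1/2)\begin{pmatrix} 1 & 1 \\ -1 & 1 \end{pmatrix}) gives A = (1/2)\begin{pmatrix} 5 & -1 \\ -1 & 5 \end{pmatrix}, which is symmetric, satisfies A(1, 1)^T = (2, 2)^T, and has det A = 6. A quick numpy check (our own sketch, not part of the original solution):

    import numpy as np

    P = np.array([[1.0, -1.0], [1.0, 1.0]])
    D = np.diag([2.0, 3.0])
    A = P @ D @ np.linalg.inv(P)

    print(A)                                                  # [[ 2.5 -0.5] [-0.5  2.5]]
    print(np.allclose(A @ np.array([1.0, 1.0]), [2.0, 2.0]))  # True
    print(np.isclose(np.linalg.det(A), 6.0))                  # True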
4. Prove that if λ1, . . . , λk are distinct eigenvalues for A with corresponding eigenvectors v1, . . . , vk, then the set {v1, . . . , vk} is linearly independent.
Proof: Following the procedure from class, we prove this by way of contradiction. So, we suppose that {v1, . . . , vk} is a linearly dependent set. Let m be the largest integer such that {v1, . . . , vm} is linearly independent; such an m exists since any single eigenvector is nonzero. Note that m < k. Thus vm+1 can be written as a linear combination of {v1, . . . , vm}. That is, there are scalars c1, c2, . . . , cm such that vm+1 = c1v1 + c2v2 + . . . + cmvm. Multiplying both sides by (A − λm+1 I), we obtain
(A − λm+1 I)vm+1 = (A − λm+1 I)(c1v1 + c2v2 + . . . + cmvm).
Now, (A − λm+1 I)vm+1 = Avm+1 − λm+1 vm+1 = λm+1 vm+1 − λm+1 vm+1 = 0. Thus (A − λm+1 I)(c1v1 + c2v2 + . . . + cmvm) = 0. Furthermore, we have that
(A − λm+1 I)(c1v1 + c2v2 + . . . + cmvm) = c1(A − λm+1 I)v1 + . . . + cm(A − λm+1 I)vm = c1(λ1 − λm+1)v1 + . . . + cm(λm − λm+1)vm.
Since {v1, . . . , vm} forms a linearly independent set, we have that ci(λi − λm+1) = 0 for each i. Since the eigenvalues of A are distinct, λi − λm+1 ≠ 0 for each i, which implies that ci = 0 for each i = 1, . . . , m. This means that vm+1 = 0, which contradicts the fact that vm+1 is an eigenvector and is hence nonzero. Thus the assumption of linear dependence fails, and the set {v1, . . . , vk} is linearly independent.
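(A small numerical illustration of the statement, using our own example matrix: if the eigenvalues are distinct, the matrix whose columns are the eigenvectors has full rank, i.e., the eigenvectors are linearly independent.)

    import numpy as np

    A = np.array([[2.0, 1.0, 0.0],
                  [0.0, 3.0, 1.0],
                  [0.0, 0.0, 5.0]])       # distinct eigenvalues 2, 3, 5

    eigvals, V = np.linalg.eig(A)         # columns of V are eigenvectors
    print(np.linalg.matrix_rank(V) == 3)  # True: the eigenvectors are independent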
5. Let V = {x ∈ R³ : −x1 + x2 + x3 = 0} and let T : R³ → R³ be the linear transformation that projects vectors onto V.
(a) Find an orthonormal basis B = {q1, q2, q3} for R³ so that q1, q2 span V and q3 is orthogonal to V.
We begin by finding a basis for V. Letting x2 and x3 act as free variables, we have that x1 = x2 + x3, x2 = x2, and x3 = x3. Thus we have a basis for V given by v1 = (1, 1, 0)^T, v2 = (1, 0, 1)^T. We now form an orthogonal basis for V by applying the Gram-Schmidt procedure. To begin, we let w1 = v1. Then
w2 = v2 − ((w1 · v2)/‖w1‖²) w1 = (1, 0, 1)^T − (1/2)(1, 1, 0)^T = (1/2, −1/2, 1)^T, or, rescaling, (1, −1, 2)^T.
Next we find a basis for V⊥ so that we can find the vector that is orthogonal to both w1 and w2. We write the matrix A = (−1 1 1) and note that w1, w2 make up a basis for N(A). Thus a basis for V⊥ is made up of vectors from the row space R(A), and hence w3 = (−1, 1, 1)^T. Finally, to make an orthonormal basis, we divide by the magnitudes to obtain:
q1 = (1/√2)(1, 1, 0)^T, q2 = (1/√6)(1, −1, 2)^T, q3 = (1/√3)(−1, 1, 1)^T.
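(A quick orthonormality check with numpy; this sketch and the name Q are ours, not part of the solution.)

    import numpy as np

    q1 = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)
    q2 = np.array([1.0, -1.0, 2.0]) / np.sqrt(6)
    q3 = np.array([-1.0, 1.0, 1.0]) / np.sqrt(3)
    Q = np.column_stack([q1, q2, q3])

    # Orthonormal columns: Q^T Q should be the identity.
    print(np.allclose(Q.T @ Q, np.eye(3)))                    # True
    # q1 and q2 lie in V, i.e., they satisfy -x1 + x2 + x3 = 0.
    n = np.array([-1.0, 1.0, 1.0])
    print(np.isclose(n @ q1, 0.0), np.isclose(n @ q2, 0.0))   # True True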
(b) Let C be the matrix of T with respect to B. Find C.
To find C = [T]_B, where T is the linear transformation which projects vectors onto V, we recall that T(x) = x if x ∈ V and T(x) = 0 if x ∈ V⊥. Thus, we have that T(q1) = q1 = 1q1 + 0q2 + 0q3, T(q2) = q2 = 0q1 + 1q2 + 0q3, and T(q3) = 0 = 0q1 + 0q2 + 0q3. Thus the matrix C is given by:
C = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix}
(c) Let A be the standard matrix of T. Find A.
Here, we use the change of basis formula, which says that A = [T]_stand = P[T]_B P^{-1}, where the columns of P are the vectors of B. Thus,
A = \begin{pmatrix} 1/√2 & 1/√6 & -1/√3 \\ 1/√2 & -1/√6 & 1/√3 \\ 0 & 2/√6 & 1/√3 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} 1/√2 & 1/√6 & -1/√3 \\ 1/√2 & -1/√6 & 1/√3 \\ 0 & 2/√6 & 1/√3 \end{pmatrix}^{-1}.
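(As a sanity check, this product should agree with the standard formula for projection onto a plane with normal n, namely I − nnᵀ/‖n‖² with n = (−1, 1, 1)ᵀ. The numpy sketch below is ours, not part of the original solution.)

    import numpy as np

    n = np.array([-1.0, 1.0, 1.0])
    P = np.column_stack([
        np.array([1.0, 1.0, 0.0]) / np.sqrt(2),    # q1
        np.array([1.0, -1.0, 2.0]) / np.sqrt(6),   # q2
        np.array([-1.0, 1.0, 1.0]) / np.sqrt(3),   # q3
    ])
    C = np.diag([1.0, 1.0, 0.0])

    A = P @ C @ np.linalg.inv(P)                     # change of basis
    A_direct = np.eye(3) - np.outer(n, n) / (n @ n)  # I - n n^T / ||n||^2
    print(np.allclose(A, A_direct))                  # True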
(d) Find the eigenvalues of C. Find the corresponding eigenvectors of C.
Since C is a diagonal matrix, its eigenvalues are given by its diagonal entries. Thus λ = 0 and λ = 1 are the eigenvalues of C. The eigenvectors for λ = 0 come from N(C − 0I) = N(C), and so with x3 being the free variable and x1 = x2 = 0, we have that v1 = (0, 0, 1)^T is an eigenvector for λ = 0. To find the eigenvectors for λ = 1, we find a basis for N(C − 1I). Since
C − I = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & -1 \end{pmatrix},
we have that x1, x2 are free variables and x3 = 0. Thus v2 = (1, 0, 0)^T, v3 = (0, 1, 0)^T are the eigenvectors associated with λ = 1.
(e) Find the eigenvalues of A.
Since A is similar to C, they have the same eigenvalues. Thus λ = 0, 1 are
the eigenvalues for A.
6. Suppose that A is an n × n matrix.
(a) Show by counterexample that Ax = 0, x ≠ 0 does not necessarily imply that A = 0.
Let A = \begin{pmatrix} -1 & 0 \\ 0 & 0 \end{pmatrix} and x = (0, 1)^T. Then Ax = 0 but A ≠ 0.
(b) Prove that if Ax = 0 for all x ≠ 0, then A = 0.
Proof: Suppose that Ax = 0 for all x ≠ 0. Then Aei = 0, where the ei's make up the canonical basis for Rⁿ; that is, ei is the ith column of the identity matrix. By definition of matrix multiplication, Aei = the ith column of A = 0. So each column of A is 0. Thus, A is the zero matrix.
(c) Let p(λ) be the characteristic polynomial for A and suppose that A is diagonalizable. Prove that p(A) = 0. (Note: For example, this means that if p(λ) = λ² + λ + 1, then p(A) = A² + A + I.) Hint: Show that p(A)x = 0 for all x ∈ Rⁿ.
Proof: Suppose that A is diagonalizable with distinct eigenvalues λ1, λ2, . . . , λk. Then we can write p(λ) = (λ − λ1)^{m1}(λ − λ2)^{m2} · · · (λ − λk)^{mk} (up to an overall sign, which does not affect the argument). We also have that the corresponding eigenvectors {v1, v2, . . . , vn} form a basis for Rⁿ (since A is diagonalizable). Hence for any x ∈ Rⁿ we can find scalars c1, c2, . . . , cn such that x = c1v1 + . . . + cnvn. Thus
p(A)x = [(A − λ1I)^{m1}(A − λ2I)^{m2} · · · (A − λkI)^{mk}] x
= [(A − λ1I)^{m1}(A − λ2I)^{m2} · · · (A − λkI)^{mk}] (c1v1 + . . . + cnvn)
= c1 [(A − λ1I)^{m1} · · · (A − λkI)^{mk}] v1 + . . . + cn [(A − λ1I)^{m1} · · · (A − λkI)^{mk}] vn = 0,
since the factors (A − λiI) commute with one another, and for each vj the factor corresponding to its eigenvalue sends vj to 0. So p(A)x = 0 for all x ∈ Rⁿ, and hence p(A) = 0 by part (b).
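(A quick numerical check of this Cayley–Hamilton-style statement on a diagonalizable example of our own; np.poly returns the characteristic polynomial coefficients, highest degree first.)

    import numpy as np

    A = np.array([[2.0, 1.0], [1.0, 2.0]])   # symmetric, hence diagonalizable
    coeffs = np.poly(A)                      # here: [1, -4, 3], i.e. t^2 - 4t + 3

    # Evaluate p(A) by Horner's rule: p(A) = (c0*A + c1*I)*A + c2*I, etc.
    pA = np.zeros_like(A)
    for c in coeffs:
        pA = pA @ A + c * np.eye(2)
    print(np.allclose(pA, 0.0))              # True: p(A) is the zero matrix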
7. If possible, write the vector x = (8, −4, −3)^T as a linear combination of u1 = (1, 0, 1)^T, u2 = (−1, 4, 1)^T, and u3 = (2, 1, −2)^T.
This is equivalent to asking whether there exist c1, c2, c3 such that
c1(1, 0, 1)^T + c2(−1, 4, 1)^T + c3(2, 1, −2)^T = (8, −4, −3)^T.
This of course is equivalent to reducing the following augmented matrix:
[A|b] = \left(\begin{array}{ccc|c} 1 & -1 & 2 & 8 \\ 0 & 4 & 1 & -4 \\ 1 & 1 & -2 & -3 \end{array}\right) → \left(\begin{array}{ccc|c} 1 & -1 & 2 & 8 \\ 0 & 2 & 1/2 & -2 \\ 0 & 0 & -9/2 & -9 \end{array}\right),
which implies that c3 = 2, c2 = −3/2, c1 = 5/2 via back substitution.
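(A quick check with numpy; np.linalg.solve solves the same 3 × 3 system directly. This sketch is ours, not part of the solution.)

    import numpy as np

    U = np.array([[1.0, -1.0, 2.0],
                  [0.0, 4.0, 1.0],
                  [1.0, 1.0, -2.0]])   # columns are u1, u2, u3
    x = np.array([8.0, -4.0, -3.0])

    c = np.linalg.solve(U, x)
    print(c)                           # [ 2.5 -1.5  2. ], i.e. c1 = 5/2, c2 = -3/2, c3 = 2
    print(np.allclose(U @ c, x))       # True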
8. Prove that any orthogonal set of k vectors is linearly independent.
Proof: Let {v1, . . . , vk} be an orthogonal set of vectors. Note that without loss of generality, we can assume that all of the vectors are nonzero, since a zero vector would make the set linearly dependent. By definition of an orthogonal set, we have that vi · vj = 0 for i ≠ j. Suppose that c1v1 + . . . + ckvk = 0. We show that c1 = . . . = ck = 0. For each i, we have vi · (c1v1 + . . . + ckvk) = vi · 0 = 0, while expanding gives vi · (c1v1 + . . . + ckvk) = c1(v1 · vi) + . . . + ci(vi · vi) + . . . + ck(vk · vi) = ci‖vi‖², since vi · vj = 0 for j ≠ i. Thus ci‖vi‖² = 0, and since vi ≠ 0, we get ci = 0 for each i. Hence c1 = · · · = ck = 0, and any orthogonal set of k vectors is linearly independent.
9. Let A = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 2 & -2 & 1 \\ 0 & 2 & 1 & -2 \\ 0 & 1 & 2 & 2 \end{pmatrix}. Find the rank and reduced row echelon form of A.
We begin by finding the reduced row echelon form of A.
A = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 2 & -2 & 1 \\ 0 & 2 & 1 & -2 \\ 0 & 1 & 2 & 2 \end{pmatrix} → \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & -1 & 1/2 \\ 0 & 0 & 3 & -3 \\ 0 & 0 & 3 & 3/2 \end{pmatrix} → \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & -1/2 \\ 0 & 0 & 1 & -1 \\ 0 & 0 & 0 & 9/2 \end{pmatrix} → \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix},
so rank A = 4 and the reduced row echelon form of A is the identity matrix.
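(A quick numerical confirmation of the rank; this numpy sketch is ours. For the reduced row echelon form itself, sympy's Matrix(A).rref() would reproduce the identity.)

    import numpy as np

    A = np.array([[1, 0, 0, 0],
                  [0, 2, -2, 1],
                  [0, 2, 1, -2],
                  [0, 1, 2, 2]])
    print(np.linalg.matrix_rank(A))    # 4, so the RREF of A is the 4 x 4 identity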
10. Determine if the following statements are true or false. If the statement is true,
prove it. If the statement is false, provide a counterexample or explain why the
statement is not true.
(a) If A is invertible, then it’s diagonalizable.
False. A = \begin{pmatrix} 2 & 1 \\ 0 & 2 \end{pmatrix} is nonsingular but is not diagonalizable, as we showed in a previous homework. Check that the algebraic multiplicity of λ = 2 (which is 2) does not equal the geometric multiplicity of λ = 2 (which is 1).
(b) If A is diagonalizable, then it’s invertible.
1 0
False. Every diagonal matrix is diagonalizable and so A =
is diago0 0
nalizable and is not invertible since it is singular.
(c) If A is invertible and diagonalizable, then A^{-1} is diagonalizable.
True. If A is diagonalizable, then there is a nonsingular matrix P such that A = PDP^{-1}. Since A is invertible, 0 is not an eigenvalue of A, and hence D (whose diagonal entries are the eigenvalues of A) is invertible. Thus A^{-1} = (PDP^{-1})^{-1} = PD^{-1}P^{-1}, and so A^{-1} is similar to a diagonal matrix. Hence A^{-1} is diagonalizable.
(d) If A and B have the same eigenvalues, then they have the same eigenvectors.
False. Recall from the last homework that A and AT have the same eigenvalues
but could have different eigenvectors. In particular, consider A =
1 1
. Then A has eigenvalues 0, 1 with associated eigenvectors v1 =
0 0
1
1
0
1
T
, v2 =
. However, A has eigenvectors w1 =
and w2 =
.
−1
0
1
1
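(A quick numerical illustration with numpy; the check below is our own sketch.)

    import numpy as np

    A = np.array([[1.0, 1.0], [0.0, 0.0]])

    # A and A^T have the same eigenvalues...
    print(np.allclose(np.sort(np.linalg.eigvals(A)),
                      np.sort(np.linalg.eigvals(A.T))))          # True

    # ...but v1 = (1, -1)^T is an eigenvector of A for 0, not of A^T:
    v1 = np.array([1.0, -1.0])
    print(np.allclose(A @ v1, 0.0), np.allclose(A.T @ v1, 0.0))  # True False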