Math 2270 Spring 2008
Final Exam
You will have two hours to complete this exam. You may not use a calculator or
any other electronic device on this exam. Be sure to show all your work and your
reasoning in your responses, as you will be graded on your work and not necessarily
on your answer. All matrices and vectors in the problem statements are assumed to
have real entries unless otherwise specified.
1. Find all solutions to the linear system
x + 2y + 3z = 1
3x + 2y + z = 1 .
7x + 2y − 3z = 1
This system can be written as A~x = ~b, where

$$A = \begin{bmatrix} 1 & 2 & 3 \\ 3 & 2 & 1 \\ 7 & 2 & -3 \end{bmatrix} \qquad \text{and} \qquad \vec{b} = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}.$$

The corresponding augmented matrix row reduces as

$$\left[\begin{array}{ccc|c} 1 & 2 & 3 & 1 \\ 3 & 2 & 1 & 1 \\ 7 & 2 & -3 & 1 \end{array}\right] \sim \left[\begin{array}{ccc|c} 1 & 2 & 3 & 1 \\ 0 & -4 & -8 & -2 \\ 0 & -12 & -24 & -6 \end{array}\right] \sim \left[\begin{array}{ccc|c} 1 & 0 & -1 & 0 \\ 0 & 1 & 2 & \frac{1}{2} \\ 0 & 0 & 0 & 0 \end{array}\right].$$

Writing x3 = r for the free variable, the components of a solution satisfy

$$x_1 = r, \qquad x_2 = -2r + \tfrac{1}{2}, \qquad x_3 = r \qquad \Rightarrow \qquad \vec{x} = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = r \begin{bmatrix} 1 \\ -2 \\ 1 \end{bmatrix} + \begin{bmatrix} 0 \\ \frac{1}{2} \\ 0 \end{bmatrix}.$$
All solutions have this form for some choice of r.
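Solutions like this are easy to sanity-check numerically. Here is a minimal sketch with numpy (an outside aid, not something allowed on the exam itself) that checks a few members of the family:

```python
import numpy as np

# Every member of the family ~x = r(1, -2, 1) + (0, 1/2, 0) should satisfy A~x = ~b.
A = np.array([[1.0, 2.0, 3.0],
              [3.0, 2.0, 1.0],
              [7.0, 2.0, -3.0]])
b = np.array([1.0, 1.0, 1.0])
for r in (-2.0, 0.0, 3.5):
    x = r * np.array([1.0, -2.0, 1.0]) + np.array([0.0, 0.5, 0.0])
    assert np.allclose(A @ x, b)
```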
2. Find the inverse of the matrix

$$A = \begin{bmatrix} 1 & 2 & 1 \\ 1 & 3 & 2 \\ 1 & 0 & 1 \end{bmatrix}.$$

We form and row reduce the augmented system

$$\left[\begin{array}{ccc|ccc} 1 & 2 & 1 & 1 & 0 & 0 \\ 1 & 3 & 2 & 0 & 1 & 0 \\ 1 & 0 & 1 & 0 & 0 & 1 \end{array}\right] \sim \left[\begin{array}{ccc|ccc} 1 & 2 & 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & -1 & 1 & 0 \\ 0 & -2 & 0 & -1 & 0 & 1 \end{array}\right] \sim \left[\begin{array}{ccc|ccc} 1 & 0 & -1 & 3 & -2 & 0 \\ 0 & 1 & 1 & -1 & 1 & 0 \\ 0 & 0 & 2 & -3 & 2 & 1 \end{array}\right]$$

$$\sim \left[\begin{array}{ccc|ccc} 1 & 0 & 0 & \frac{3}{2} & -1 & \frac{1}{2} \\ 0 & 1 & 0 & \frac{1}{2} & 0 & -\frac{1}{2} \\ 0 & 0 & 1 & -\frac{3}{2} & 1 & \frac{1}{2} \end{array}\right].$$

This implies that

$$A^{-1} = \begin{bmatrix} \frac{3}{2} & -1 & \frac{1}{2} \\ \frac{1}{2} & 0 & -\frac{1}{2} \\ -\frac{3}{2} & 1 & \frac{1}{2} \end{bmatrix}.$$
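As a quick numerical check (outside aid), the computed inverse should satisfy AA−1 = I:

```python
import numpy as np

# Multiply A by the inverse found above; the result should be the identity.
A = np.array([[1.0, 2.0, 1.0], [1.0, 3.0, 2.0], [1.0, 0.0, 1.0]])
A_inv = np.array([[ 1.5, -1.0,  0.5],
                  [ 0.5,  0.0, -0.5],
                  [-1.5,  1.0,  0.5]])
assert np.allclose(A @ A_inv, np.eye(3))
```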
3. Compute the matrix product

$$\begin{bmatrix} 1 & 0 & -1 \\ 0 & 1 & 1 \\ 1 & -1 & -2 \end{bmatrix} \begin{bmatrix} 1 & 2 & 3 \\ 3 & 2 & 1 \\ 2 & 1 & 3 \end{bmatrix}.$$

We have that

$$\begin{bmatrix} 1 & 0 & -1 \\ 0 & 1 & 1 \\ 1 & -1 & -2 \end{bmatrix} \begin{bmatrix} 1 & 2 & 3 \\ 3 & 2 & 1 \\ 2 & 1 & 3 \end{bmatrix} = \begin{bmatrix} -1 & 1 & 0 \\ 5 & 3 & 4 \\ -6 & -2 & -4 \end{bmatrix}.$$
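A short numerical check of the hand computation (outside aid):

```python
import numpy as np

# Verify the matrix product in Problem 3.
L = np.array([[1.0, 0.0, -1.0], [0.0, 1.0, 1.0], [1.0, -1.0, -2.0]])
R = np.array([[1.0, 2.0, 3.0], [3.0, 2.0, 1.0], [2.0, 1.0, 3.0]])
assert np.allclose(L @ R, np.array([[-1.0, 1.0, 0.0],
                                    [5.0, 3.0, 4.0],
                                    [-6.0, -2.0, -4.0]]))
```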
4. Find bases for the image and kernel of the matrix

$$A = \begin{bmatrix} 1 & 1 & 1 \\ 1 & 2 & 3 \\ 1 & 3 & 5 \end{bmatrix}.$$

First, we row reduce A:

$$\begin{bmatrix} 1 & 1 & 1 \\ 1 & 2 & 3 \\ 1 & 3 & 5 \end{bmatrix} \sim \begin{bmatrix} 1 & 1 & 1 \\ 0 & 1 & 2 \\ 0 & 2 & 4 \end{bmatrix} \sim \begin{bmatrix} 1 & 0 & -1 \\ 0 & 1 & 2 \\ 0 & 0 & 0 \end{bmatrix}.$$

If ~x ∈ ker(A), then its components must satisfy

$$x_1 = x_3, \qquad x_2 = -2x_3.$$

If we write x3 = t, then

$$\vec{x} = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} t \\ -2t \\ t \end{bmatrix} = t \begin{bmatrix} 1 \\ -2 \\ 1 \end{bmatrix}.$$

This implies that a basis for ker(A) is

$$\left\{ \begin{bmatrix} 1 \\ -2 \\ 1 \end{bmatrix} \right\},$$

and a basis for im(A) consists of the columns of A that correspond to columns of rref(A) that contain leading ones. In other words,

$$\left\{ \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}, \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix} \right\}$$

is a basis for im(A).
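sympy can reproduce both bases exactly (an outside aid, not exam work):

```python
from sympy import Matrix

A = Matrix([[1, 1, 1], [1, 2, 3], [1, 3, 5]])
print(A.rref())        # (Matrix([[1, 0, -1], [0, 1, 2], [0, 0, 0]]), (0, 1))
print(A.nullspace())   # [Matrix([[1], [-2], [1]])]
print(A.columnspace()) # [Matrix([[1], [1], [1]]), Matrix([[1], [2], [3]])]
```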
5. Suppose ~v1 , ~v2 , . . . , ~vm is a basis for a vector space V . Let ~x ∈ V . Show that there
exists a unique set of constants c1 , c2 , . . . , cm such that
~x = c1~v1 + c2~v2 + · · · + cm~vm .
Since ~v1 , ~v2 , . . . , ~vm span V , at least one such set of constants exists. To show uniqueness, assume that ~x = c1~v1 + c2~v2 + · · · + cm~vm , but that also ~x = d1~v1 + d2~v2 +
· · · + dm~vm . Subtracting these two equations, we find that
~0 = (c1 − d1 )~v1 + (c2 − d2 )~v2 + · · · + (cm − dm )~vm .
Since ~v1 , ~v2 , . . . , ~vm are linearly independent, it must be true that
(c1 − d1 ) = (c2 − d2 ) = · · · = (cm − dm ) = 0,
i.e. ck = dk for k = 1, 2, . . . , m.
6. Let f (t) = 3 + t − 4t2 . Find the coordinate vector [f ]B of f with respect to the
basis B = {1 − t, t − t2 }.
We want to find constants c1 and c2 such that
$$c_1(1 - t) + c_2(t - t^2) = 3 + t - 4t^2$$
$$\Rightarrow\; c_1 + (-c_1 + c_2)t - c_2 t^2 = 3 + t - 4t^2$$
$$\Rightarrow\; c_1 = 3, \qquad -c_1 + c_2 = 1, \qquad -c_2 = -4.$$
These three equations are consistent, giving c1 = 3 and c2 = 4. We now use c1 and c2 as the components of our coordinate vector:

$$[f]_B = \begin{bmatrix} 3 \\ 4 \end{bmatrix}.$$
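The coefficient matching can be checked symbolically (outside aid):

```python
from sympy import symbols, Poly, solve

# Match coefficients of c1*(1 - t) + c2*(t - t**2) against 3 + t - 4*t**2.
t, c1, c2 = symbols('t c1 c2')
diff = c1*(1 - t) + c2*(t - t**2) - (3 + t - 4*t**2)
print(solve(Poly(diff, t).all_coeffs(), [c1, c2]))  # {c1: 3, c2: 4}
```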
7. Consider the basis

$$B = \left\{ \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}, \begin{bmatrix} 0 & 1 \\ 0 & 1 \end{bmatrix} \right\}$$

of U 2×2 , the space of all 2 × 2 upper triangular matrices, and the mapping T : U 2×2 → U 2×2 defined by

$$T(M) = \begin{bmatrix} 1 & 2 \\ 0 & 3 \end{bmatrix} M.$$

First find the matrix representation of T with respect to the basis B, and using this representation, determine whether or not T is an isomorphism.

In class we derived a formula for the matrix representation of a transformation:

$$B = \begin{bmatrix} [T(v_1)]_B & [T(v_2)]_B & \cdots & [T(v_m)]_B \end{bmatrix},$$

where v1 , v2 , . . . , vm are the elements of B. In our case,

$$T\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 2 \\ 0 & 3 \end{bmatrix}\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 2 \\ 0 & 3 \end{bmatrix},$$
$$T\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} = \begin{bmatrix} 1 & 2 \\ 0 & 3 \end{bmatrix}\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix},$$
$$T\begin{bmatrix} 0 & 1 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 2 \\ 0 & 3 \end{bmatrix}\begin{bmatrix} 0 & 1 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 0 & 3 \\ 0 & 3 \end{bmatrix}.$$

Two of the coordinate vectors are easy to find by inspection:

$$\left[T\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}\right]_B = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}, \qquad \left[T\begin{bmatrix} 0 & 1 \\ 0 & 1 \end{bmatrix}\right]_B = \begin{bmatrix} 0 \\ 0 \\ 3 \end{bmatrix}.$$

To find the other coordinate vector, we solve the system

$$c_1 \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} + c_2 \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} + c_3 \begin{bmatrix} 0 & 1 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 2 \\ 0 & 3 \end{bmatrix}$$

$$\Rightarrow\; \begin{cases} c_1 = 1 \\ c_2 + c_3 = 2 \\ c_1 + c_3 = 3 \end{cases} \;\Rightarrow\; \begin{cases} c_1 = 1 \\ c_3 = 2 \\ c_2 = 0 \end{cases} \;\Rightarrow\; \left[T\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}\right]_B = \begin{bmatrix} 1 \\ 0 \\ 2 \end{bmatrix}.$$

Therefore, the matrix representation of T is

$$B = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 2 & 0 & 3 \end{bmatrix},$$

which is obviously invertible, since it is lower triangular with nonzero diagonal entries. This shows that T is an isomorphism.
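The same computation can be automated: flatten each basis matrix into a column, solve for coordinates, and assemble the columns. A sketch with sympy (coords and P are my own helper names, not notation from class):

```python
from sympy import Matrix

M = Matrix([[1, 2], [0, 3]])
basis = [Matrix([[1, 0], [0, 1]]),
         Matrix([[0, 1], [0, 0]]),
         Matrix([[0, 1], [0, 1]])]
P = Matrix.hstack(*[V.reshape(4, 1) for V in basis])  # 4x3, rank 3

def coords(U):
    # Normal equations give the exact coordinates, since U lies in the span.
    u = U.reshape(4, 1)
    return (P.T * P).inv() * (P.T * u)

B_rep = Matrix.hstack(*[coords(M * V) for V in basis])
print(B_rep)        # Matrix([[1, 0, 0], [0, 1, 0], [2, 0, 3]])
print(B_rep.det())  # 3, nonzero, so T is invertible, i.e. an isomorphism
```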
8. Suppose that ~u1 , ~u2 , . . . , ~um is an orthonormal set of vectors in Rn . Show that
~u1 , ~u2 , . . . , ~um are linearly independent.
Suppose a linear combination of ~u1 , ~u2 , . . . , ~um is equal to the zero vector, i.e.
c1~u1 + c2~u2 + · · · + cm~um = ~0.
For each i = 1, 2, . . . , m, take the dot product of both sides of the equation with ~ui . By orthonormality, ~ui · ~uj = 0 for j ≠ i and ~ui · ~ui = 1, so the left side collapses to ci and we find that
ci = ~ui · ~0 = 0.
This shows that ~u1 , ~u2 , . . . , ~um are linearly independent.
9. Consider an inconsistent system of linear equations A~x = ~b. Explain in what
sense the least squares solution as defined in class gives the best approximation to a
solution to the system.
The least squares solution ~x∗ to the system A~x = ~b satisfies

$$\lVert A\vec{x}^* - \vec{b} \rVert \le \lVert \vec{y} - \vec{b} \rVert$$

for all ~y ∈ im(A). In other words, A~x∗ is the vector in im(A) that is closest to ~b. This means that A~x∗ = proj_im(A)(~b).
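Numerically, numpy's lstsq computes exactly this minimizer. In the sketch below (my own small example), the residual A~x∗ − ~b comes out orthogonal to im(A), which is what characterizes the projection:

```python
import numpy as np

# An inconsistent system: b is not in im(A).
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([1.0, 1.0, 0.0])
x_star, *_ = np.linalg.lstsq(A, b, rcond=None)
# A @ x_star is proj_im(A)(b); the residual is orthogonal to every column of A.
print(A.T @ (A @ x_star - b))  # approximately [0, 0]
```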
10. Compute the determinant of the matrix

$$A = \begin{bmatrix} 1 & 1 & 1 \\ 1 & 2 & 3 \\ 1 & 3 & 6 \end{bmatrix}.$$

Expanding along the first row, we have that

$$\det(A) = 1 \cdot \det\begin{bmatrix} 2 & 3 \\ 3 & 6 \end{bmatrix} - 1 \cdot \det\begin{bmatrix} 1 & 3 \\ 1 & 6 \end{bmatrix} + 1 \cdot \det\begin{bmatrix} 1 & 2 \\ 1 & 3 \end{bmatrix} = (12 - 9) - (6 - 3) + (3 - 2) = 3 - 3 + 1 = 1.$$
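A one-line exact cross-check (outside aid):

```python
from sympy import Matrix
print(Matrix([[1, 1, 1], [1, 2, 3], [1, 3, 6]]).det())  # 1
```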
11. Suppose that A is an n × n invertible matrix. Show that
$$\det(A^{-1}) = \frac{1}{\det(A)}.$$
Since A is invertible, we know that AA−1 = I. Taking the determinant of both sides
of this equation, we obtain
det(AA−1 ) = det(I) = 1,
but det(AA−1 ) = det(A)det(A−1 ), so upon dividing both sides of the equation above by det(A), we find that

$$\det(A^{-1}) = \frac{1}{\det(A)}.$$
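A numerical spot check of the identity on a random matrix (my own example, outside aid):

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.eye(4) + rng.standard_normal((4, 4))  # almost surely invertible
assert np.isclose(np.linalg.det(np.linalg.inv(A)), 1 / np.linalg.det(A))
```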
12. Find all eigenvalues and a basis for each eigenspace of the matrix

$$A = \begin{bmatrix} -1 & 0 & 1 \\ -3 & 0 & 1 \\ -4 & 0 & 3 \end{bmatrix}.$$

Right off the bat we can see that 0 is an eigenvalue of A, since the second column of A is zero, but we’ll still have to use the characteristic polynomial to find the other eigenvalues. We have that

$$\det(A - \lambda I) = \det\begin{bmatrix} -1-\lambda & 0 & 1 \\ -3 & -\lambda & 1 \\ -4 & 0 & 3-\lambda \end{bmatrix} = (-1-\lambda)\bigl(-\lambda(3-\lambda)\bigr) + (-4\lambda)$$

$$= (-1-\lambda)(\lambda^2 - 3\lambda) - 4\lambda = -\lambda^2 + 3\lambda - \lambda^3 + 3\lambda^2 - 4\lambda = -\lambda^3 + 2\lambda^2 - \lambda = \lambda(-\lambda^2 + 2\lambda - 1).$$

This means that either λ = 0, or

$$\lambda^2 - 2\lambda + 1 = 0 \;\Rightarrow\; (\lambda - 1)^2 = 0,$$

so λ = 1 is an eigenvalue of A with algebraic multiplicity 2. Now we must find the corresponding eigenvectors. We have that

$$A - 0I = \begin{bmatrix} -1 & 0 & 1 \\ -3 & 0 & 1 \\ -4 & 0 & 3 \end{bmatrix} \sim \begin{bmatrix} 1 & 0 & -1 \\ 0 & 0 & -2 \\ 0 & 0 & -1 \end{bmatrix} \sim \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix} \;\Rightarrow\; \begin{cases} x_1 = 0 \\ x_2 = r \\ x_3 = 0 \end{cases} \;\Rightarrow\; \vec{v}_1 = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix},$$

$$A - I = \begin{bmatrix} -2 & 0 & 1 \\ -3 & -1 & 1 \\ -4 & 0 & 2 \end{bmatrix} \sim \begin{bmatrix} 1 & 0 & -\frac{1}{2} \\ 0 & 1 & \frac{1}{2} \\ 0 & 0 & 0 \end{bmatrix} \;\Rightarrow\; \begin{cases} x_1 = \frac{1}{2}r \\ x_2 = -\frac{1}{2}r \\ x_3 = r \end{cases} \;\Rightarrow\; \vec{v}_2 = \begin{bmatrix} 1 \\ -1 \\ 2 \end{bmatrix}.$$

Thus, bases for E0 and E1 are

$$\left\{ \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix} \right\} \qquad \text{and} \qquad \left\{ \begin{bmatrix} 1 \\ -1 \\ 2 \end{bmatrix} \right\},$$

respectively.
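sympy confirms the eigenvalues, multiplicities, and eigenspaces exactly (outside aid; eigenvectors are only determined up to scaling):

```python
from sympy import Matrix

A = Matrix([[-1, 0, 1], [-3, 0, 1], [-4, 0, 3]])
for val, mult, vecs in A.eigenvects():
    print(val, mult, [list(v) for v in vecs])
# lambda = 0: multiplicity 1, eigenspace spanned by (0, 1, 0)
# lambda = 1: multiplicity 2, eigenspace spanned by a multiple of (1, -1, 2)
```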
13. Find all eigenvalues and a basis for each eigenspace of the matrix

$$A = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}.$$

First, we must solve the characteristic equation

$$0 = \det(A - \lambda I) = \det\begin{bmatrix} -\lambda & 1 \\ -1 & -\lambda \end{bmatrix} = \lambda^2 + 1 \;\Rightarrow\; \lambda^2 = -1 \;\Rightarrow\; \lambda = \pm i.$$

To find a basis for the eigenspaces, we row reduce:

$$A - iI = \begin{bmatrix} -i & 1 \\ -1 & -i \end{bmatrix} \sim \begin{bmatrix} 1 & i \\ 0 & 0 \end{bmatrix} \;\Rightarrow\; \begin{cases} x = -ir \\ y = r \end{cases} \;\Rightarrow\; \vec{v}_1 = \begin{bmatrix} -i \\ 1 \end{bmatrix},$$

$$A + iI = \begin{bmatrix} i & 1 \\ -1 & i \end{bmatrix} \sim \begin{bmatrix} 1 & -i \\ 0 & 0 \end{bmatrix} \;\Rightarrow\; \begin{cases} x = ir \\ y = r \end{cases} \;\Rightarrow\; \vec{v}_2 = \begin{bmatrix} i \\ 1 \end{bmatrix} = \overline{\vec{v}_1}.$$

Thus, bases for E_i and E_{-i} are

$$\left\{ \begin{bmatrix} -i \\ 1 \end{bmatrix} \right\} \qquad \text{and} \qquad \left\{ \begin{bmatrix} i \\ 1 \end{bmatrix} \right\},$$

respectively.
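A numerical check (outside aid):

```python
import numpy as np

A = np.array([[0.0, 1.0], [-1.0, 0.0]])
vals, vecs = np.linalg.eig(A)
print(vals)  # [0.+1.j  0.-1.j]
# Each column of vecs is an eigenvector, unique only up to a complex scalar,
# so they are multiples of (-i, 1) and (i, 1).
```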
14. Find an invertible matrix S and a diagonal matrix D such that the matrix

$$A = \begin{bmatrix} 1 & 2 \\ 2 & 3 \end{bmatrix}$$

can be written as A = SDS −1 .

First, we find the eigenvalues of A:

$$0 = \det(A - \lambda I) = \det\begin{bmatrix} 1-\lambda & 2 \\ 2 & 3-\lambda \end{bmatrix} = (1-\lambda)(3-\lambda) - 4 = 3 - \lambda - 3\lambda + \lambda^2 - 4 = \lambda^2 - 4\lambda - 1.$$

$$\lambda^2 - 4\lambda - 1 = 0 \;\Rightarrow\; \lambda^2 - 4\lambda + 4 = 1 + 4 \;\Rightarrow\; (\lambda - 2)^2 = 5 \;\Rightarrow\; \lambda - 2 = \pm\sqrt{5} \;\Rightarrow\; \lambda = 2 \pm \sqrt{5}.$$

Now we must find the corresponding eigenvectors. We have that

$$A - (2+\sqrt{5})I = \begin{bmatrix} -1-\sqrt{5} & 2 \\ 2 & 1-\sqrt{5} \end{bmatrix} \sim \begin{bmatrix} 1 & \frac{1-\sqrt{5}}{2} \\ 0 & 0 \end{bmatrix} \;\Rightarrow\; \begin{cases} x = \frac{-1+\sqrt{5}}{2}\, r \\ y = r \end{cases} \;\Rightarrow\; \vec{v}_1 = \begin{bmatrix} -1+\sqrt{5} \\ 2 \end{bmatrix},$$

$$A - (2-\sqrt{5})I = \begin{bmatrix} -1+\sqrt{5} & 2 \\ 2 & 1+\sqrt{5} \end{bmatrix} \sim \begin{bmatrix} 1 & \frac{1+\sqrt{5}}{2} \\ 0 & 0 \end{bmatrix} \;\Rightarrow\; \begin{cases} x = \frac{-1-\sqrt{5}}{2}\, r \\ y = r \end{cases} \;\Rightarrow\; \vec{v}_2 = \begin{bmatrix} -1-\sqrt{5} \\ 2 \end{bmatrix}.$$

This tells us that

$$S = \begin{bmatrix} -1+\sqrt{5} & -1-\sqrt{5} \\ 2 & 2 \end{bmatrix} \qquad \text{and} \qquad D = \begin{bmatrix} 2+\sqrt{5} & 0 \\ 0 & 2-\sqrt{5} \end{bmatrix}.$$
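As a check (outside aid), SDS −1 should rebuild A:

```python
import numpy as np

s5 = np.sqrt(5.0)
S = np.array([[-1 + s5, -1 - s5], [2.0, 2.0]])
D = np.diag([2 + s5, 2 - s5])
assert np.allclose(S @ D @ np.linalg.inv(S), np.array([[1.0, 2.0], [2.0, 3.0]]))
```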
15. Mark the following statements true if they are always true. Mark them false if
they are never true or only true sometimes.
(a) If the matrix A has linearly independent columns, then there exists a vector ~b such that A~x = ~b has infinitely many solutions.

(b) Any 2 × 2 rotation matrix is invertible.

(c) The least squares solution to the system A~x = ~b always exists.

(d) The dimension of P4 is 4.

(e) An n × n matrix A with n distinct eigenvalues is diagonalizable.

(f) The determinant of a square matrix is the sum of its eigenvalues.

(g) The number of eigenvalues of a square matrix A that have nonzero imaginary part must be even.

(h) Given n vectors ~v1 , ~v2 , . . . , ~vn , the Gram–Schmidt process can be performed to produce an orthonormal set of n vectors ~u1 , ~u2 , . . . , ~un such that span(~v1 , ~v2 , . . . , ~vn ) = span(~u1 , ~u2 , . . . , ~un ).

(i) If we factor a square matrix A as A = QR where Q is orthogonal and R is upper triangular with positive diagonal entries, then det(A) = det(R).

(j) Given an n × n matrix A and a vector ~b ∈ Rn , Cramer’s rule can be used to find a solution ~x such that A~x = ~b.
(a) False. If the columns of A are linearly independent, then there is at most one way to express any vector as a linear combination of the columns of A. In other words, if A~x = ~b has a solution ~x, then ~x is the only solution, so no choice of ~b gives infinitely many solutions.
(b) True. If A rotates vectors through θ degrees counterclockwise, then we can construct another rotation B that rotates through 360 − θ degrees counterclockwise. Then AB~x = ~x for all ~x ∈ R2 , so AB = I. In other words, B is the inverse of A.
(c) True. This is the whole point of least squares. The vector ~b is projected onto im(A), and then the system

A~x∗ = proj_im(A)(~b)

is solved. This system must have a solution by the definition of im(A).
(d) False. A basis for P4 is {1, t, t2 , t3 , t4 }, so dim(P4 ) = 5.
(e) True. We proved in class that if we concatenate bases of eigenspaces corresponding to distinct eigenvalues, then the resulting set is linearly independent. Therefore, if A has n distinct eigenvalues, it must have n linearly independent eigenvectors, so it is diagonalizable.
(f) False. We showed in class that the determinant is the product of the eigenvalues (counted with algebraic multiplicity), not their sum; the sum of the eigenvalues is the trace.
(g) True. Since A has real entries, its characteristic polynomial has real coefficients. Therefore, any non-real roots of the characteristic polynomial occur in conjugate pairs, so the number of eigenvalues with nonzero imaginary part must be even.
(h) False. If ~v1 , ~v2 , . . . , ~vn are linearly dependent, the Gram–Schmidt process will fail at some point. In particular, if ~vk can be written as a linear combination of ~v1 , ~v2 , . . . , ~vk−1 , then ~vk⊥ = ~0, and we cannot normalize the zero vector.
(i) False. We will have

det(A) = det(QR) = det(Q)det(R) = ±det(R),

since det(Q) = ±1 for any orthogonal matrix Q. This means that |det(A)| = det(R), but the claimed equality fails whenever det(Q) = −1 and det(A) ≠ 0.
(j) False. Recall that Cramer’s rule says that

$$x_i = \frac{\det(A_{\vec{b},i})}{\det(A)},$$

where A_{~b,i} is the matrix A with its ith column replaced by ~b. If det(A) = 0, then this formula makes no sense, even if A~x = ~b has a solution.
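A minimal sketch of Cramer’s rule (cramer_solve is my own hypothetical helper) that makes the caveat concrete, refusing exactly when det(A) = 0:

```python
import numpy as np

def cramer_solve(A, b):
    d = np.linalg.det(A)
    if np.isclose(d, 0.0):
        raise ValueError("det(A) = 0: Cramer's rule does not apply")
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b                  # replace column i of A with b
        x[i] = np.linalg.det(Ai) / d
    return x

A = np.array([[1.0, 2.0], [2.0, 3.0]])
b = np.array([1.0, 1.0])
print(cramer_solve(A, b))     # [-1.  1.]
print(np.linalg.solve(A, b))  # same answer
```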