Mid term exam 2 (Notes, books, and calculators are not authorized). Show all your work in the
blank space you are given on the exam sheet. Always justify your answer. Answers with
no justification will not be graded.


Question 1: Is A = \begin{pmatrix} 1 & 2 & 3 \\ 0 & 1 & 2 \\ 0 & 0 & 1 \end{pmatrix} diagonalizable? Be very accurate when answering.
Solution 1: The characteristic polynomial is
P_A(λ) = det(A − λI) = (1 − λ)^3.
This means that there is only one eigenvalue of multiplicity three. Let v = (x, y, z) be an eigenvector,
then
2y + 3z = 0
2z = 0
which implies that y = z = 0. The eigenvectors are of the following form: v = (α, 0, 0), α ∈ R and α ≠ 0. This means that the eigenspace E1 is the one-dimensional line E1 := span{(1, 0, 0)}, i.e.,
all the eigenvectors are parallel to (1, 0, 0). As a result, it is not possible to find three independent
eigenvectors. The matrix is not diagonalizable.
Solution 2: We deduce as above that 1 is the only eigenvalue, of multiplicity three. If A were diagonalizable, we would have A = P D P^{−1} where D = I. This would mean A = P I P^{−1} = P P^{−1} = I, i.e.,
A = I, which is obviously wrong. In conclusion, A cannot be diagonalizable.
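As a sanity check (not part of the required exam answer), here is a minimal numpy sketch, assuming numpy is available, confirming that the eigenvalue 1 has geometric multiplicity one, so three independent eigenvectors cannot exist.

```python
import numpy as np

# Upper-triangular matrix from Question 1.
A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 2.0],
              [0.0, 0.0, 1.0]])

print(np.linalg.eigvals(A))                        # -> [1. 1. 1.]

# Geometric multiplicity of 1 = dim Ker(A - I) = 3 - rank(A - I).
print(3 - np.linalg.matrix_rank(A - np.eye(3)))    # -> 1, hence A is not diagonalizable
```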
Question 2: The following linearly independent vectors e1 := (1, −1, 0), e2 := (1, 0, 1), and e3 := (1, 2, 1) are eigenvectors of A = \begin{pmatrix} 4 & 1 & −1 \\ 2 & 5 & −2 \\ 1 & 1 & 2 \end{pmatrix}. Find an invertible matrix P and a diagonal matrix D so that A = P D P^{−1}.
The matrix is clearly diagonalizable since there exist three linearly independent eigenvectors. Let
us compute Ae1 , Ae2 and Ae3 to get the associated eigenvalues. (A second possibility would
be to compute the characteristic polynomial of A, but that would be too long and unnecessarily
complicated since we are already given the eigenvectors.)
Ae1 = (3, −3, 0) = 3e1
Ae2 = (3, 0, 3) = 3e2
Ae3 = (5, 10, 5) = 5e3 .
This means that e1 and e2 are associated with the eigenvalue 3 (the multiplicity of this eigenvalue
is two) and e3 is associated with the eigenvalue 5. Let us now set

D = \begin{pmatrix} 3 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 5 \end{pmatrix},   P = \begin{pmatrix} 1 & 1 & 1 \\ −1 & 0 & 2 \\ 0 & 1 & 1 \end{pmatrix}.

Then A = P D P^{−1}.
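For readers who want to verify the factorization numerically, here is a short numpy sketch (illustrative only; numpy assumed available) checking that A = P D P^{−1}.

```python
import numpy as np

A = np.array([[4.0, 1.0, -1.0],
              [2.0, 5.0, -2.0],
              [1.0, 1.0,  2.0]])

# Columns of P are the given eigenvectors e1, e2, e3.
P = np.array([[ 1.0, 1.0, 1.0],
              [-1.0, 0.0, 2.0],
              [ 0.0, 1.0, 1.0]])
D = np.diag([3.0, 3.0, 5.0])

# A should equal P D P^{-1} up to rounding error.
print(np.allclose(A, P @ D @ np.linalg.inv(P)))   # -> True
```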
Question 3: Find the eigenvalues of A = \begin{pmatrix} 3 & −4 \\ 2 & −6 \end{pmatrix} and B = \begin{pmatrix} 0 & −1 \\ 1 & 0 \end{pmatrix}.

P_A(λ) := det(A − λI) = \begin{vmatrix} 3 − λ & −4 \\ 2 & −6 − λ \end{vmatrix} = (3 − λ)(−6 − λ) + 8 = λ^2 + 3λ − 10,

i.e., P_A(λ) = λ^2 + 3λ − 10 = (λ − 2)(λ + 5). The eigenvalues of A are λ1 = 2 and λ2 = −5.

P_B(λ) := det(B − λI) = \begin{vmatrix} −λ & −1 \\ 1 & −λ \end{vmatrix} = λ^2 + 1,

i.e., P_B(λ) = λ^2 + 1 = (λ + i)(λ − i). The eigenvalues of B are λ1 = i and λ2 = −i.
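A quick numerical cross-check of both computations, as an illustration only (numpy assumed available):

```python
import numpy as np

A = np.array([[3.0, -4.0],
              [2.0, -6.0]])
B = np.array([[0.0, -1.0],
              [1.0,  0.0]])

print(np.linalg.eigvals(A))   # -> [ 2. -5.] (possibly in the other order)
print(np.linalg.eigvals(B))   # -> complex pair +i and -i
```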
Question 4: The following λ1 := −2, v1 := (1, 1, 0), λ2 := −2, v2 := (1, 0, −1), and λ3 := 4, v3 := (1, 1, 2) are eigen-pairs of A = \begin{pmatrix} 1 & −3 & 3 \\ 3 & −5 & 3 \\ 6 & −6 & 4 \end{pmatrix}. (a) What are the fundamental solutions of the ODE system du(t)/dt = Au(t)?
The fundamental solutions have been defined in class and are
e^{−2t} v1,   e^{−2t} v2,   and   e^{4t} v3.
This is all I wanted you to write.
Recall that upon defining the matrix P := [v1^T  v2^T  v3^T], we have A = P D P^{−1} where D is the diagonal matrix composed of the eigenvalues. Then the ODE can be re-written as follows:

d(P^{−1}u)/dt = D P^{−1} u.

This can also be re-written dw/dt = Dw by defining w(t) := P^{−1} u(t). Then w(t) is given by

w(t) = \begin{pmatrix} α1 e^{−2t} \\ α2 e^{−2t} \\ α3 e^{4t} \end{pmatrix} = α1 e^{−2t} \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} + α2 e^{−2t} \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} + α3 e^{4t} \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix},

where α1, α2, α3 are any real numbers. As a result,

u(t) = P w(t) = α1 e^{−2t} P \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} + α2 e^{−2t} P \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} + α3 e^{4t} P \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} = α1 e^{−2t} v1 + α2 e^{−2t} v2 + α3 e^{4t} v3.

In conclusion, u(t) is a linear combination of the fundamental solutions e^{−2t} v1, e^{−2t} v2, e^{4t} v3.
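The defining property of a fundamental solution e^{λt} v is that Av = λv. A small numpy sketch (illustrative, not part of the exam answer; numpy assumed available) verifies this for the three eigen-pairs.

```python
import numpy as np

A = np.array([[1.0, -3.0, 3.0],
              [3.0, -5.0, 3.0],
              [6.0, -6.0, 4.0]])

pairs = [(-2.0, np.array([1.0, 1.0,  0.0])),
         (-2.0, np.array([1.0, 0.0, -1.0])),
         ( 4.0, np.array([1.0, 1.0,  2.0]))]

# u(t) = e^{lambda t} v solves u' = A u exactly when A v = lambda v.
for lam, v in pairs:
    print(np.allclose(A @ v, lam * v))   # -> True, True, True
```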
(b) What is the solution of du(t)/dt = Au(t), with u(0) = (2, 1, −1)?

We know that u(t) is a linear combination of the fundamental solutions:

u(t) = α1 e^{−2t} v1 + α2 e^{−2t} v2 + α3 e^{4t} v3.

The condition u(0) = (2, 1, −1) implies

(2, 1, −1) = α1 v1 + α2 v2 + α3 v3.
The corresponding augmented matrix is

\begin{pmatrix} 1 & 1 & 1 & 2 \\ 1 & 0 & 1 & 1 \\ 0 & −1 & 2 & −1 \end{pmatrix} ∼ \begin{pmatrix} 1 & 1 & 1 & 2 \\ 0 & −1 & 0 & −1 \\ 0 & −1 & 2 & −1 \end{pmatrix} ∼ \begin{pmatrix} 1 & 1 & 1 & 2 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{pmatrix} ∼ \begin{pmatrix} 1 & 0 & 0 & 1 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{pmatrix}.

The solution is (α1, α2, α3) = (1, 1, 0). This means

u(t) = e^{−2t} v1 + e^{−2t} v2 = e^{−2t} (2, 1, −1).
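As an optional check (not required on the exam; numpy assumed available), the sketch below verifies that A u(0) = −2 u(0), which is exactly what makes u(t) = e^{−2t}(2, 1, −1) a solution, and re-solves the linear system for (α1, α2, α3).

```python
import numpy as np

A = np.array([[1.0, -3.0, 3.0],
              [3.0, -5.0, 3.0],
              [6.0, -6.0, 4.0]])
u0 = np.array([2.0, 1.0, -1.0])

# u(t) = e^{-2t} u0 solves u' = A u  iff  A u0 = -2 u0 (and u(0) = u0 trivially).
print(np.allclose(A @ u0, -2.0 * u0))   # -> True

# The coefficients (alpha1, alpha2, alpha3) solve [v1 v2 v3] alpha = u0.
V = np.array([[1.0,  1.0, 1.0],
              [1.0,  0.0, 1.0],
              [0.0, -1.0, 2.0]])        # columns v1, v2, v3
print(np.linalg.solve(V, u0))           # -> approximately [1, 1, 0]
```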
Question 5: Are the following functions f1(t) = 1 − 3 cos(t) + 2 sin(t), f2(t) = 2 − 4 cos(t) − sin(t), f3(t) = 1 − 5 cos(t) + 7 sin(t) linearly independent in V := span{1, cos(t), sin(t)}?

Let X = (x1, x2, x3) ∈ R^3 be so that x1 f1 + x2 f2 + x3 f3 = 0. Then

x1 f1(t) + x2 f2(t) + x3 f3(t) = (x1 + 2x2 + x3) + (−3x1 − 4x2 − 5x3) cos(t) + (2x1 − x2 + 7x3) sin(t) = 0.
This means that X solves

\begin{pmatrix} 1 & 2 & 1 \\ −3 & −4 & −5 \\ 2 & −1 & 7 \end{pmatrix} X = 0.

Let us reduce the matrix of the linear system to echelon form:

\begin{pmatrix} 1 & 2 & 1 \\ −3 & −4 & −5 \\ 2 & −1 & 7 \end{pmatrix} ∼ \begin{pmatrix} 1 & 2 & 1 \\ 0 & 2 & −2 \\ 0 & −5 & 5 \end{pmatrix} ∼ \begin{pmatrix} 1 & 2 & 1 \\ 0 & 2 & −2 \\ 0 & 0 & 0 \end{pmatrix}.
There are only two pivots. There is one free variable. This means that the solution set of the above
linear system is not {0}. There is some nonzero vector X so that x1 f1 + x2 f2 + x3 f3 = 0. This
means that the functions f1 , f2 , f3 are linearly dependent.
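A numerical version of the same argument (illustrative only; numpy assumed available): the coefficient matrix has rank 2, and the echelon form above yields, for instance, the dependence relation −3 f1 + f2 + f3 = 0.

```python
import numpy as np

# Columns hold the coordinates of f1, f2, f3 in the basis {1, cos(t), sin(t)}.
M = np.array([[ 1.0,  2.0,  1.0],
              [-3.0, -4.0, -5.0],
              [ 2.0, -1.0,  7.0]])

print(np.linalg.matrix_rank(M))   # -> 2, so there is a nonzero X with M X = 0

# One dependence relation: the echelon form gives x2 = x3, x1 = -3 x3; take x3 = 1.
X = np.array([-3.0, 1.0, 1.0])
print(np.allclose(M @ X, 0.0))    # -> True, i.e. -3 f1 + f2 + f3 = 0
```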
Question 6: Let C^1(R; R) be the vector space over R of the functions over R that are continuously differentiable. Let S = {1, cos(t), sin(t)} and consider V = span(S). Accept as a fact that V is a three-dimensional subspace of C^1(R; R). Consider the operator D : V −→ V so that Df = df/dt. Find the matrix representation of D relative to the basis S.
Let us denote e1 = 1, e2 = cos(t), e3 = sin(t). The columns of the matrix representation of D are the coordinate vectors [D(e1)]_S, [D(e2)]_S, [D(e3)]_S. Let us then compute D(e1), D(e2), and D(e3):

D(e1) = 0 = 0e1 + 0e2 + 0e3   ⇒   [D(e1)]_S = (0, 0, 0)
D(e2) = − sin(t) = −e3        ⇒   [D(e2)]_S = (0, 0, −1)
D(e3) = cos(t) = e2           ⇒   [D(e3)]_S = (0, 1, 0)
As a result,

[D]_{SS} := [ [D(e1)]_S^T  [D(e2)]_S^T  [D(e3)]_S^T ] = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & −1 & 0 \end{pmatrix}.
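To illustrate how this matrix acts on coordinates (a made-up example, not from the exam; numpy assumed available), differentiate f(t) = 2 + 3 cos(t) − sin(t) by multiplying its coordinate vector by [D]_{SS}.

```python
import numpy as np

# Matrix of D = d/dt in the basis S = {1, cos(t), sin(t)} (columns = images of e1, e2, e3).
D = np.array([[0.0,  0.0, 0.0],
              [0.0,  0.0, 1.0],
              [0.0, -1.0, 0.0]])

# f(t) = 2 + 3 cos(t) - sin(t) has coordinates (2, 3, -1);
# f'(t) = -3 sin(t) - cos(t) has coordinates (0, -1, -3).
f = np.array([2.0, 3.0, -1.0])
print(D @ f)          # -> [ 0. -1. -3.], matching the coordinates of f'
```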
(Note: The original question has been simplified to make the computation simpler. Credit has been given to those who gave the correct Gram-Schmidt formulae.)
Question 7: Consider the four-dimensional real vector space U = span{1/√2, cos(t), cos(2t), cos(3t)} equipped with the inner product ⟨f, g⟩ = (1/π) ∫_0^{2π} f(τ)g(τ) dτ. Consider the linearly independent vectors v1 = 1/√2 + cos(t) + cos(2t) + cos(3t), v2 = 1/√2 + cos(t) + 2 cos(2t) + 4 cos(3t), v3 = 1/√2 + 2 cos(t) − 4 cos(2t) − 3 cos(3t). Use the Gram-Schmidt process to orthogonalize these vectors. (Hint: {1/√2, cos(t), cos(2t), cos(3t)} is an orthonormal basis of U.)
The Gram-Schmidt process gives

w1 = v1,
w2 = v2 − (⟨v2, w1⟩/‖w1‖^2) w1,
w3 = v3 − (⟨v3, w1⟩/‖w1‖^2) w1 − (⟨v3, w2⟩/‖w2‖^2) w2.
We immediately obtain

w1 = v1 = 1/√2 + cos(t) + cos(2t) + cos(3t).

Let us compute ‖w1‖ and ⟨v2, w1⟩. Since {1/√2, cos(t), cos(2t), cos(3t)} is an orthonormal basis of U, we have

‖w1‖^2 = 1 + 1 + 1 + 1 = 4,   by Pythagoras' Theorem,
⟨v2, w1⟩ = 1 + 1 + 2 + 4 = 8.
As a result,

w2 = v2 − (8/4) w1 = v2 − 2v1
   = (1/√2 − 2/√2) + (1 − 2) cos(t) + (2 − 2) cos(2t) + (4 − 2) cos(3t).

In conclusion,

w2 = −1/√2 − cos(t) + 2 cos(3t).
Let us compute ‖w2‖^2, ⟨v3, w1⟩, and ⟨v3, w2⟩:

‖w2‖^2 = 1 + 1 + 4 = 6,   by Pythagoras' Theorem,
⟨v3, w1⟩ = 1 + 2 − 4 − 3 = −4,
⟨v3, w2⟩ = −1 − 2 − 6 = −9.
As a result,

w3 = v3 − (−4/4) w1 − (−9/6) w2 = v3 + w1 + (3/2) w2
   = (1 + 1 − 3/2) (1/√2) + (2 + 1 − 3/2) cos(t) + (−4 + 1) cos(2t) + (−3 + 1 + 2·(3/2)) cos(3t).

In conclusion,

w3 = 1/(2√2) + (3/2) cos(t) − 3 cos(2t) + cos(3t).
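Because the basis is orthonormal, the whole computation can be replayed on coordinate vectors with ordinary dot products. A short numpy sketch (illustrative only; numpy assumed available) reproduces w2 and w3 and checks orthogonality.

```python
import numpy as np

# Coordinates of v1, v2, v3 in the orthonormal basis {1/sqrt(2), cos(t), cos(2t), cos(3t)}.
v1 = np.array([1.0, 1.0,  1.0,  1.0])
v2 = np.array([1.0, 1.0,  2.0,  4.0])
v3 = np.array([1.0, 2.0, -4.0, -3.0])

# Gram-Schmidt with the ordinary dot product (valid because the basis is orthonormal).
w1 = v1
w2 = v2 - (v2 @ w1) / (w1 @ w1) * w1
w3 = v3 - (v3 @ w1) / (w1 @ w1) * w1 - (v3 @ w2) / (w2 @ w2) * w2

print(w2)                          # -> [-1. -1.  0.  2.]
print(w3)                          # -> [ 0.5  1.5 -3.   1. ]
print(w1 @ w2, w1 @ w3, w2 @ w3)   # -> 0.0 0.0 0.0 (pairwise orthogonal)
```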
Question 8: Let G : R^3 −→ P2 be the linear mapping defined by G(x, y, z) = (x + 2y − z) + (y + z)t + (x + y − 2z)t^2. Find the dimension of Ker(G) and the dimension of Im(G).

Let us first characterize Ker(G). Let X = (x, y, z) be a member of Ker(G). Then x + 2y − z + (y + z)t + (x + y − 2z)t^2 = 0. This means that x + 2y − z = 0, y + z = 0, and x + y − 2z = 0. This is a linear system. The matrix associated with this system is

\begin{pmatrix} 1 & 2 & −1 \\ 0 & 1 & 1 \\ 1 & 1 & −2 \end{pmatrix} ∼ \begin{pmatrix} 1 & 2 & −1 \\ 0 & 1 & 1 \\ 0 & −1 & −1 \end{pmatrix} ∼ \begin{pmatrix} 1 & 2 & −1 \\ 0 & 1 & 1 \\ 0 & 0 & 0 \end{pmatrix} ∼ \begin{pmatrix} 1 & 0 & −3 \\ 0 & 1 & 1 \\ 0 & 0 & 0 \end{pmatrix}.
There is one free variable. This means that dim(Ker(G)) = 1. Then the rank Theorem gives dim(Im(G)) = 3 − dim(Ker(G)) = 2. Note in passing that X = (x, y, z) ∈ Ker(G) iff

\begin{pmatrix} x \\ y \\ z \end{pmatrix} = z \begin{pmatrix} 3 \\ −1 \\ 1 \end{pmatrix}.

This means that Ker(G) = span{(3, −1, 1)}.
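The same count can be confirmed numerically (illustration only; numpy assumed available): the matrix of the system has rank 2, and (3, −1, 1) is sent to zero.

```python
import numpy as np

# Matrix of G in the standard bases (rows = coefficients of 1, t, t^2 in G(x, y, z)).
G = np.array([[1.0, 2.0, -1.0],
              [0.0, 1.0,  1.0],
              [1.0, 1.0, -2.0]])

rank = np.linalg.matrix_rank(G)
print(rank, 3 - rank)                                      # -> 2 1 : dim Im(G) = 2, dim Ker(G) = 1
print(np.allclose(G @ np.array([3.0, -1.0, 1.0]), 0.0))    # -> True, (3, -1, 1) spans Ker(G)
```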
Question 9: Let F : P2(t) −→ P2(t) be the linear mapping defined by F(p) = p(0) + p(1)t + p(2)t^2. Show that F is bijective.
Since the domain P2 (t) and the co-domain P2 (t) have the same dimension, it suffices to prove that
F is either injective or surjective (by the rank Theorem). Let us prove that F is injective. Let
p = at^2 + bt + c be a member of P2(t) such that F(p) = 0. Since

p(0) = c,   p(1) = a + b + c,   p(2) = 4a + 2b + c,
we infer that
0 = p(0) + p(1)t + p(2)t^2 = c + (a + b + c)t + (4a + 2b + c)t^2.

This implies that c = 0, a + b + c = 0, and 4a + 2b + c = 0. This is a linear system for a, b and c. The associated matrix is

\begin{pmatrix} 4 & 2 & 1 \\ 1 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix} ∼ \begin{pmatrix} 4 & 2 & 1 \\ 0 & −2 & −3 \\ 0 & 0 & 1 \end{pmatrix}.
The rank is maximum. This means that a = b = c = 0. As a result F is injective. In conclusion F
is bijective.
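An equivalent way to see bijectivity (not the argument asked for on the exam; numpy assumed available) is to write the matrix of F in the basis {1, t, t^2}: its columns are the coordinates of F(1) = 1 + t + t^2, F(t) = t + 2t^2, and F(t^2) = t + 4t^2, and its determinant is nonzero.

```python
import numpy as np

# Columns are the coordinates of F(1), F(t), F(t^2) in the basis {1, t, t^2}.
F = np.array([[1.0, 0.0, 0.0],
              [1.0, 1.0, 1.0],
              [1.0, 2.0, 4.0]])

print(np.linalg.det(F))            # -> 2.0 (up to rounding); nonzero, so F is bijective
print(np.linalg.matrix_rank(F))    # -> 3
```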
Question 10: Let A be a complex-valued square matrix. Assume that all the eigenvalues of A have a modulus larger than π. Prove that A is invertible.

Suppose v is a nonzero vector so that Av = 0. This means that Av = 0v, which implies that 0 is an eigenvalue of A, which contradicts the assumption that all the eigenvalues of A have a modulus larger than π. In conclusion, one cannot find a nonzero vector v so that Av = 0. This implies that A is invertible since A is a square matrix.
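A numerical illustration of the same fact (not a proof; numpy assumed available, and the specific eigenvalues below are made up for the example): a matrix with prescribed eigenvalues of modulus larger than π has a nonzero determinant.

```python
import numpy as np

rng = np.random.default_rng(0)

# Eigenvalues with modulus larger than pi (|3 + 2i| = sqrt(13) > pi).
lams = np.array([4.0, -5.0, 3.0 + 2.0j])
P = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
A = P @ np.diag(lams) @ np.linalg.inv(P)

print(np.min(np.abs(np.linalg.eigvals(A))) > np.pi)   # -> True
print(np.abs(np.linalg.det(A)))                       # -> about 72.1, nonzero, so A is invertible
```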
Question 11: Let M be a real-valued n×n matrix. (a) Prove that M^T M is diagonalizable.

M^T M is diagonalizable since it is a symmetric matrix, as shown by the following computation:

(M^T M)^T = M^T (M^T)^T = M^T M.
(b) Prove that the eigenvalues of M^T M are non-negative. (Hint: v^T M^T M v = ‖Mv‖^2.)

Let (λ, v) be an eigen-pair. Then M^T M v = λv; this means v^T M^T M v = λ v^T v, which can also be written ‖Mv‖^2 = λ‖v‖^2. This proves that λ = ‖Mv‖^2/‖v‖^2 ≥ 0 (note that ‖v‖ ≠ 0 since v is an eigenvector).
(c) Assume that the smallest eigenvalue of M^T M is 1/2. Prove that M is invertible.

Suppose v is a nonzero vector so that Mv = 0. This means that M^T M v = 0v, which implies that 0 is an eigenvalue of M^T M, which contradicts the fact that 1/2 is the smallest eigenvalue. In conclusion, one cannot find a nonzero vector v so that Mv = 0. This implies that M is invertible since M is a square matrix.
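A numpy illustration of parts (a)-(c) for a randomly chosen M (illustrative only; the random matrix, its size, and the seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))
S = M.T @ M

# (a) M^T M is symmetric, hence diagonalizable (spectral theorem).
print(np.allclose(S, S.T))                      # -> True

# (b) Its eigenvalues are non-negative.
eigvals = np.linalg.eigvalsh(S)
print(np.all(eigvals >= -1e-12))                # -> True (up to rounding)

# (c) If the smallest eigenvalue is positive, 0 is not an eigenvalue of M^T M,
#     so M^T M is invertible and hence M is invertible as well.
print(eigvals.min() > 0, np.abs(np.linalg.det(M)) > 0)   # -> True True (for this M)
```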