18.303 Problem Set 1 Solutions
Problem 1: ((5+10+5)+(10+5) points)
(a) Here, we have a (not necessarily Hermitian) m × m matrix A with eigensolutions Axn = λn xn.
(i) We can use the fact from 18.06 that det A = det(AT ), and hence det(A∗ ) is the complex conjugate of det A. In particular, the characteristic polynomial of A∗ is det(A∗ − λI), which is the complex conjugate of det(A − λ̄I) = p(λ̄): its roots are therefore the complex conjugates λ̄n of the roots λn of p(λ) = det(A − λI), so the eigenvalues of A∗ are λ̄n . Given such an eigenvalue, i.e. a λ̄n for which A∗ − λ̄n I is singular, the nullspace of A∗ − λ̄n I must contain at least one vector yn ≠ 0, and this is our “left” eigenvector A∗ yn = λ̄n yn .
(This is a “right” eigenvector of A∗ , but yields an isomorphic “left” eigenvector yn∗ of
A satisfying yn∗ A = λn yn∗ . Whether we call yn or yn∗ the left eigenvector is a matter of
convenience.)
(ii) Suppose λj ≠ λk . Then yj∗ Axk = (yj∗ A)xk = λj yj∗ xk , and also yj∗ Axk = yj∗ (Axk ) = λk yj∗ xk . Hence
(λj − λk )yj∗ xk = 0, and since λj − λk ≠ 0 it follows that yj∗ xk = 0.
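(Not required, but both facts are easy to check numerically. Here is a minimal NumPy sketch; the random complex matrix and its size are made up purely for illustration.)

    import numpy as np

    rng = np.random.default_rng(0)
    m = 5
    A = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))  # a generic non-Hermitian A

    lam, X = np.linalg.eig(A)          # right eigenvectors:  A X = X diag(lam)
    mu, Y = np.linalg.eig(A.conj().T)  # "left" eigenvectors: A* Y = Y diag(mu)

    # match each eigenvalue of A* with the conjugate of an eigenvalue of A
    order = [int(np.argmin(np.abs(mu - np.conj(l)))) for l in lam]
    print(np.allclose(mu[order], np.conj(lam)))                 # (i): eigenvalues of A* are the conjugates
    B = Y[:, order].conj().T @ X                                # B[j, k] = y_j^* x_k
    print(np.allclose(B - np.diag(np.diag(B)), 0, atol=1e-8))   # (ii): off-diagonal entries are ~ 0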
(iii) If A = AT , then A∗ = Ā, and conjugating Axn = λn xn gives Āx̄n = λ̄n x̄n ; comparing to the yn equation, we see that
there exists a solution yn = x̄n . Hence yj∗ xk = xTj xk , and λj ≠ λk implies xTj xk = 0.
An interesting corollary of this fact arises for “defective” matrices. For non-Hermitian
matrices, you can sometimes tweak the matrix entries so that λj → λk while xj → xk ,
that is so that two eigenvalues coalesce into one (algebraic multiplicity 2) but the two
eigenvectors also coalesce (geometric multiplicity 1). This is “defective” because you have
“lost” an eigenvector, and the eigenvectors no longer can form a complete basis. From
the xTj xk = 0 orthogonality relationship, however, it follows that defective eigenvectors
x satisfy xT x = 0: they are “self-orthogonal” under the unconjugated “inner product”
(which is not really a true inner product because it is indefinite).
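(A concrete example, not asked for in the problem: the complex-symmetric matrix A = [[1, i], [i, −1]] has the double eigenvalue 0 but only a single eigenvector x = (1, i), which indeed satisfies xT x = 1 + i² = 0. A quick NumPy check, including the unconjugated orthogonality of a slightly perturbed, non-defective version:)

    import numpy as np

    A = np.array([[1, 1j], [1j, -1]])   # complex symmetric (A = A^T) but not Hermitian
    print(np.linalg.eigvals(A))         # both eigenvalues ~ 0 (a double root)
    print(np.linalg.matrix_rank(A))     # rank 1, so the nullspace is 1d: A is defective
    x = np.array([1, 1j])
    print(A @ x, x @ x)                 # A x = 0 and x^T x = 0: the eigenvector is "self-orthogonal"

    # perturbing A splits the eigenvalues, and the two eigenvectors are then
    # exactly orthogonal under the unconjugated product, as in part (iii):
    lam, X = np.linalg.eig(A + np.diag([0, 1e-3]))
    print(lam, X[:, 0] @ X[:, 1])       # distinct eigenvalues, and x_1^T x_2 ~ 0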
(b) (We solved the similar problem d²x/dt² = Ax in class.)
(i) Since A is necessarily diagonalizable, we can expand x(t) in the eigenvector basis at every t: x(t) = Σk ck(t) xk. Plugging this into d²x/dt² − 2 dx/dt = Ax, we get Σk (c̈k − 2ċk) xk = Σk λk ck xk. Since A is real-symmetric, the eigenvectors are orthogonal, so we can take the dot product of both sides with xn to get¹ an ODE for cn: c̈n − 2ċn = λn cn. This is exactly the ODE whose solutions were given in the problem, with c = λn, so we obtain:

x(t) = Σ_{k=1}^{4} [ αk e^{(1+√(1+λk))t} + βk e^{(1−√(1+λk))t} ] xk
in terms of unknown coefficients αk and βk. To get the coefficients, we use the initial conditions as in class: x(0) = a0 = Σk (αk + βk) xk and ẋ(0) = b0 = Σk (αk + βk + [αk − βk]√(1+λk)) xk, and taking the inner product of both sides with xn (using orthogonality again) gives:
αn + βn = xn∗ a0 / ‖xn‖² ,

[αn − βn] √(1+λn) = xn∗ b0 / ‖xn‖² − (αn + βn) = xn∗ (b0 − a0) / ‖xn‖² .

¹Actually, we don't need to use orthogonality here; this would work even for a nonsymmetric A. We can write x(t) = Xc where X is the matrix whose columns are eigenvectors and c is the vector of the coefficients ck, and then the ODE gives X(c̈ − 2ċ) = XΛc where Λ is the diagonal matrix of eigenvalues (from AX = XΛ). Then, since A is diagonalizable (even if it is nonsymmetric, it still has 4 distinct eigenvalues), X is invertible; multiplying both sides by X⁻¹ on the left gives c̈ − 2ċ = Λc, and the n-th row of this gives c̈n − 2ċn = λn cn as desired.
(Note that you were not given that ‖xn‖ = 1, although of course we could choose that normalization.) We can then solve this 2 × 2 system of equations to obtain the solution
αn = xn∗ [ a0 + (b0 − a0)/√(1+λn) ] / (2‖xn‖²) ,

βn = xn∗ [ a0 − (b0 − a0)/√(1+λn) ] / (2‖xn‖²) ,

which completes the solution.
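(Not required: a numerical sanity check of this recipe with NumPy/SciPy, using a randomly generated real-symmetric 4 × 4 A and random a0, b0 — all made up for illustration, not the data from the problem statement.)

    import numpy as np
    from scipy.integrate import solve_ivp

    rng = np.random.default_rng(1)
    A = rng.standard_normal((4, 4)); A = A + A.T        # a random real-symmetric A
    a0, b0 = rng.standard_normal(4), rng.standard_normal(4)

    lam, X = np.linalg.eigh(A)                          # A x_k = lam_k x_k, with ||x_k|| = 1
    s = np.sqrt(1 + lam + 0j)                           # sqrt(1 + lam_k), possibly complex
    ca, cb = X.T @ a0, X.T @ (b0 - a0)                  # x_n^* a0 and x_n^* (b0 - a0)
    alpha, beta = (ca + cb / s) / 2, (ca - cb / s) / 2  # the coefficient formulas above (||x_n|| = 1 here)

    def x_of_t(t):
        c = alpha * np.exp((1 + s) * t) + beta * np.exp((1 - s) * t)
        return (X @ c).real                             # imaginary parts cancel to roundoff

    def rhs(t, y):                                      # y = [x, x'] for x'' - 2x' = A x
        return np.concatenate([y[4:], A @ y[:4] + 2 * y[4:]])

    sol = solve_ivp(rhs, (0, 1), np.concatenate([a0, b0]), rtol=1e-10, atol=1e-12)
    print(np.max(np.abs(x_of_t(1.0) - sol.y[:4, -1])))  # agree to about the integrator tolerance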
(ii) The fastest-growing term is αk e^{(1+√(1+λk))t} for the largest λk, which is λ4 = 24. After a long time, this term dominates, and we have:

x(t) ≈ [ x4∗ (a0 + (b0 − a0)/5) / (2‖x4‖²) ] e^{6t} x4 .
(The only exception would be the unlikely event in which α4 is exactly zero.)
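(Also not required: a quick check of this dominance, working directly with the coefficients ck(t) for a made-up spectrum and made-up αk, βk; only the largest eigenvalue, λ4 = 24, is taken from the problem.)

    import numpy as np

    lam = np.array([-1.0, 3.0, 8.0, 24.0])   # made-up spectrum; only lam_4 = 24 is from the problem
    s = np.sqrt(1 + lam)                     # growth rates 1 + s are 1, 3, 4, and 6
    rng = np.random.default_rng(2)
    alpha, beta = rng.standard_normal(4), rng.standard_normal(4)

    for t in (1.0, 2.0, 4.0):
        c = alpha * np.exp((1 + s) * t) + beta * np.exp((1 - s) * t)
        print(t, np.max(np.abs(c[:3])) / np.abs(c[3]))   # -> 0 like e^(-2t): next-fastest rate is 4 vs. 6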
Problem 2: ((5+5+5)+5+5 points)
(a) Suppose that we change the boundary conditions to the periodic boundary condition
u(0) = u(L).
(i) As in class, the eigenfunctions are sines, cosines, and exponentials, and it only remains to
apply the boundary conditions. sin(kx) is periodic if k = 2πn/L for n = 1, 2, . . . (excluding n = 0 because we do not allow zero eigenfunctions and excluding n < 0 because they are not linearly independent), and cos(kx) is periodic if k = 2πn/L for n = 0, 1, 2, . . . (excluding n < 0 since they are the same functions). The eigenvalues are −k² = −(2πn/L)².
e^{kx} is periodic only for imaginary k = i2πn/L, but in this case we obtain e^{i2πnx/L} = cos(2πnx/L) + i sin(2πnx/L), which is not linearly independent of the sin and cos eigenfunctions above. Recall from 18.06 that the eigenvectors for a given eigenvalue form a vector space (the null space of A − λI), and when asked for eigenvectors we only want a basis of this vector space. Alternatively, it is acceptable to start with exponentials and call our eigenfunctions e^{i2πnx/L} for all integers n (including n < 0), in which case we wouldn't give sin and cos eigenfunctions separately.
Similarly, sin(φ + 2πnx/L) is periodic for any φ, but this is not linearly independent
since sin(φ + 2πnx/L) = sin φ cos(2πnx/L) + cos φ sin(2πnx/L).
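(Not required: these eigenvalues can also be checked numerically against a second-difference approximation of d²/dx² with periodic boundaries; a short NumPy sketch, with L and N chosen arbitrarily.)

    import numpy as np

    L, N = 1.0, 200
    h = L / N
    # second-difference approximation of d^2/dx^2 with periodic (wrap-around) boundaries
    D2 = np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
    D2[0, -1] = D2[-1, 0] = 1.0
    D2 /= h**2
    ev = np.sort(np.linalg.eigvalsh(D2))[::-1]   # sorted from 0 downward
    print(ev[:5])                                # ~ 0 (the constant), then each -(2*pi*n/L)^2 twice
    print([-(2 * np.pi * n / L)**2 for n in (1, 1, 2, 2)])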
(ii) No, the solution will not be unique, because we now have a nonzero nullspace spanned by the constant function u(x) = 1 (which is periodic): d²(1)/dx² = 0. Equivalently, we have a 0 eigenvalue corresponding to cos(2πnx/L) for n = 0 above.
(iii) As suggested, let us restrict ourselves to f (x) with a convergent Fourier series. That is,
as in class, we are expanding f (x) in terms of the eigenfunctions:
f(x) = Σ_{n=−∞}^{∞} cn e^{i2πnx/L} .
(You could also write out the Fourier series in terms of sines and cosines, but the complex-exponential form is more compact so I will use it here.) Here, the coefficients cn, by the usual orthogonality properties of the Fourier series, are cn = (1/L) ∫_0^L e^{−i2πnx/L} f(x) dx.
In order to solve d²u/dx² = f, as in class we would divide each term by its eigenvalue −(2πn/L)², but we can only do this for n ≠ 0. Hence, we can only solve the equation if the n = 0 term is absent, i.e. c0 = 0. Applying the explicit formula for c0, the equation is solvable (for f with a Fourier series) if and only if:
∫_0^L f(x) dx = 0 .
There are other ways to come to the same conclusion. For example, we could expand
u(x) in a Fourier series (i.e. in the eigenfunction basis), apply d²/dx², and ask what the column space of d²/dx² is. Again, we would find that upon taking the second derivative the n = 0 (constant) term vanishes, and so the column space consists of Fourier series missing a constant term.
The same reasoning works if you write out the Fourier series in terms of sin and cos
sums separately, in which case you find that f must be missing the n = 0 cosine term,
giving the same result.
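(Not required: the c0 = 0 condition shows up very concretely if you solve the periodic Poisson equation numerically by dividing the discrete Fourier coefficients by the eigenvalues −(2πn/L)², e.g. with NumPy's FFT; the mean-zero f below is made up.)

    import numpy as np

    L, N = 2 * np.pi, 256
    x = np.arange(N) * (L / N)
    f = np.cos(x) + 0.5 * np.sin(3 * x)          # integral of f over [0, L] is zero
    fhat = np.fft.fft(f)
    print(abs(fhat[0]) / N)                      # c_0 ~ 0, so the equation is solvable
    k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)   # wavenumbers 2*pi*n/L
    uhat = np.zeros_like(fhat)
    uhat[1:] = fhat[1:] / (-k[1:] ** 2)          # divide by the eigenvalues, skipping n = 0
    u = np.fft.ifft(uhat).real
    print(np.max(np.abs(u - (-np.cos(x) - np.sin(3 * x) / 18))))  # matches the exact mean-zero solution
    # if f had a nonzero mean, c_0 != 0 and there would be no periodic solution to recover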
(b) No. For example, the zero function (which must be in any vector space) does not satisfy those boundary conditions. (Nor is the set closed under adding functions, scaling them by constants, etcetera.)
(c) We merely pick any twice-differentiable function q(x) with q(L) − q(0) = 1, in which case
u(L) − u(0) = [v(L) − v(0)] + [q(L) − q(0)] = −1 + 1 = 0 and u is periodic. Then, plugging
v = u − q into d²v/dx² = f(x), we obtain

d²u/dx² = f(x) + d²q/dx² ,
which is the (periodic-u) Poisson equation for u with a (possibly) modified right-hand side.
For example, the simplest such q is probably q(x) = x/L, in which case d2 q/dx2 = 0 and u
solves the Poisson equation with an unmodified right-hand side.
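(Not required: a tiny symbolic check of this change of variables with SymPy, using the simplest q(x) = x/L and a made-up right-hand side f(x) = cos(2πx/L).)

    import sympy as sp

    x, L = sp.symbols('x L', positive=True)
    f = sp.cos(2 * sp.pi * x / L)                           # example right-hand side (zero mean, so solvable)
    u = -(L / (2 * sp.pi))**2 * sp.cos(2 * sp.pi * x / L)   # a periodic solution of u'' = f (here q'' = 0)
    v = u - x / L                                           # v = u - q with q(x) = x/L
    print(sp.simplify(sp.diff(v, x, 2) - f))                # 0:  v satisfies v'' = f
    print(sp.simplify(v.subs(x, L) - v.subs(x, 0)))         # -1: v satisfies the original boundary condition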
Problem 3: (10+10+5 points)
(a) Since we want to compute u′(x), we should Taylor expand around x, obtaining:

u′(x) ≈ { a [ u + 2∆x u′ + (4∆x²/2) u′′ + (8∆x³/6) u′′′ ] + b [ u + ∆x u′ + (∆x²/2) u′′ + (∆x³/6) u′′′ ] − u } / (c∆x) ,
where all the u terms on the right are evaluated at x. We want this to give u′ plus something proportional to ∆x², so the u and u′′ terms must cancel while the u′ terms must have a coefficient of 1. Writing down the equations for the u, u′, and u′′ coefficients, we get:
0 = a + b − 1
c = 2a + b
0 = 4a + b,
which is 3 equations in 3 unknowns. This is easily solved to obtain a = −1/3, b = 4/3, c = 2/3 .
You were not required to do this, but we can also get an error estimate by looking at the u′′′ term: the error is ≈ (8a + b) ∆x²/(6c) u′′′(x) = −(∆x²/3) u′′′(x).
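(You were also not required to do this, but the ∆x² accuracy is easy to confirm numerically, e.g. with NumPy on the made-up test function u(x) = sin(x) at x = 1.)

    import numpy as np

    a, b, c = -1/3, 4/3, 2/3
    x0 = 1.0
    for dx in (0.1, 0.05, 0.025):
        approx = (a * np.sin(x0 + 2 * dx) + b * np.sin(x0 + dx) - np.sin(x0)) / (c * dx)
        err = approx - np.cos(x0)              # exact u'(x0) = cos(1)
        pred = -(dx**2 / 3) * (-np.cos(x0))    # predicted error -(dx^2/3) u'''(x0), with u''' = -cos
        print(dx, err, err / pred)             # ratio -> 1, confirming second-order accuracy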
(b) See solutions notebook.
(c) See solutions notebook.