18.336 Pset 4 Solutions

Due Thursday, 20 April 2006.

Problem 1: Galerkin warmup

For a basis $b_n(x)$, the matrix $A$ is defined as $A_{mn} = (b_m, P b_n)$. Therefore, for any $c$, the product $c \cdot Ac = \sum_{m,n} (c_m b_m, P c_n b_n) = (u, P u)$, where $u(x) = \sum_n c_n b_n(x)$. If $c \neq 0$ then $u \neq 0$, because we always require our basis functions to be linearly independent, and thus $(u, P u) > 0$ by assumption, and therefore $c \cdot Ac > 0$ and $A$ is positive-definite. Q.E.D.
Problem 2: Galerkin FEM

You will implement a Galerkin finite-element method, with piecewise linear elements, for the Schrödinger eigen-equation in 1d:

$$\left[-\frac{d^2}{dx^2} + V(x)\right]\psi(x) = E\psi(x)$$

with a given potential $V(x)$ and for an unknown eigenvalue $E$ and eigenfunction $\psi(x)$. As usual, we'll use periodic boundary conditions $\psi(x+2) = \psi(x)$ and only solve for $\psi(x)$ in $x \in [-1, 1]$. As in class, we will approximate $\psi(x)$ by its value $c_n$ at $N$ points $x_n$ ($n = 1, 2, \ldots, N$) and linearly interpolate in between. Our basis functions are, as in class, the tent functions $b_n(x)$, which are $= 1$ at $x_n$ and $= 0$ at the other $x_{m \neq n}$, linearly interpolated in between $x_n$ and $x_{n \pm 1}$. The periodic boundary conditions simply mean that $c_{N+1} = c_1$ for $x_{N+1} = x_1 + 2$. We define $\Delta x_n = x_{n+1} - x_n$ as in class.
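(For concreteness, the nodal coefficients $c_n$ and the tent basis just amount to periodic linear interpolation. The helper below is only a sketch of that evaluation; the function name tentinterp and its argument list are my own choices, not anything from the posted code.)

```matlab
function psi = tentinterp(x, c, xq, L)
% Evaluate psi(xq) = sum_n c(n) b_n(xq) for the periodic tent basis:
% linear interpolation between the nodal values c on the grid x (N points),
% with period L (here L = 2), so that c(N+1) = c(1) at x(N+1) = x(1) + L.
  xx  = [x(:); x(1) + L];              % append the wrapped-around node
  cc  = [c(:); c(1)];                  % periodic nodal value c_{N+1} = c_1
  xqw = x(1) + mod(xq - x(1), L);      % map query points into one period
  psi = interp1(xx, cc, xqw, 'linear');
end
```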
(a) By definition, we have $B_{mn} = (b_m, b_n)$ and $A_{mn} = (b_m, [-\frac{d^2}{dx^2} + V(x)] b_n)$. However, we'd like to make this a bit more explicit and do some of the integrals analytically. To start with, let's evaluate $B$: $B_{nn} = \int b_n^2 \, dx = (\Delta x_{n-1} + \Delta x_n)/3$, since in each $\Delta x$ interval the integral looks like $\int_0^{\Delta x} (x/\Delta x)^2 \, dx$. The off-diagonal elements are only non-zero for $B_{n,n\pm 1}$, and come from an integral of the form $\int_0^{\Delta x} x(\Delta x - x)/\Delta x^2 \, dx$, giving $B_{n,n+1} = \Delta x_n/6$ and $B_{n,n-1} = \Delta x_{n-1}/6$.

$A_{mn}$ can be split into two pieces: $A_{mn} = K_{mn} + V_{mn}$, where $K$ is the kinetic energy from the $-\frac{d^2}{dx^2}$ and $V$ is the potential energy from the $V(x)$. $K_{mn}$ we evaluated in class for Poisson's equation (with opposite sign), and is:

$$K_{mn} = \begin{cases} 0 & m \neq n, n \pm 1 \\ \frac{1}{\Delta x_{m-1}} + \frac{1}{\Delta x_m} & m = n \\ -\frac{1}{\Delta x_m} & m = n - 1 \\ -\frac{1}{\Delta x_{m-1}} & m = n + 1 \end{cases}$$
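As a sanity check on these formulas, here is a short assembly sketch for $B$ and $K$ on a periodic, possibly nonuniform grid. This is my own illustration; the function name assembleBK and its interface are not from the posted solution files.

```matlab
function [B, K] = assembleBK(x, L)
% Assemble the piecewise-linear (tent) Galerkin matrices on the periodic
% grid x (vector of N points, period L; here L = 2):
%   B(n,n) = (dx(n-1)+dx(n))/3,  B(n,n+1) = B(n+1,n) = dx(n)/6
%   K(n,n) = 1/dx(n-1)+1/dx(n),  K(n,n+1) = K(n+1,n) = -1/dx(n)
% where dx(n) = x(n+1) - x(n) and index 0 wraps around to N.
  N   = length(x);
  dx  = diff([x(:); x(1) + L]);        % dx(n), with dx(N) wrapping around
  dxm = [dx(N); dx(1:N-1)];            % dx(n-1), with dx(0) = dx(N)
  B = diag((dxm + dx)/3);
  K = diag(1./dxm + 1./dx);
  for n = 1:N
    np = mod(n, N) + 1;                % periodic neighbor n+1
    B(n, np) = dx(n)/6;   B(np, n) = dx(n)/6;
    K(n, np) = -1/dx(n);  K(np, n) = -1/dx(n);
  end
end
```

For a uniform grid with spacing $h$, this reduces to $B_{nn} = 2h/3$, $B_{n,n\pm1} = h/6$ and $K_{nn} = 2/h$, $K_{n,n\pm1} = -1/h$, with the periodic corner entries filled in automatically by the wrap-around index.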




Finally, the $V_{mn}$ terms will involve integrals over $V(x)$, which will have to be performed numerically in general:

$$V_{mn} = \int b_m(x) \, V(x) \, b_n(x) \, dx,$$

which we can make more explicit by breaking up the integral into $\Delta x$ chunks. Let

$$P_n = \int_0^{\Delta x_n} V(x + x_n) \left(\frac{x}{\Delta x_n}\right)^2 dx,$$

$$M_n = \int_0^{\Delta x_n} V(x + x_n) \left(\frac{\Delta x_n - x}{\Delta x_n}\right)^2 dx,$$

$$C_n = \int_0^{\Delta x_n} V(x + x_n) \frac{x(\Delta x_n - x)}{\Delta x_n^2} \, dx.$$

Then:

$$V_{mn} = \begin{cases} 0 & m \neq n, n \pm 1 \\ P_{m-1} + M_m & m = n \\ C_m & m = n - 1 \\ C_{m-1} & m = n + 1 \end{cases}$$

Note that these integrals are properly set up for Gaussian quadrature if $V(x)$ is smooth, since we have broken them up into pieces with smooth integrands. One more note: because of the periodic boundary conditions, we need to set $A_{1,N} = A_{1,0}$ and $A_{N,1} = A_{N,N+1}$.
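For instance, each of $P_n$, $M_n$, and $C_n$ can be evaluated with a fixed Gauss-Legendre rule on $[0, \Delta x_n]$. The sketch below is my own illustration with a hard-coded 3-point rule; the posted code may use a different rule or interface, and V is assumed to be a vectorized function handle.

```matlab
function [P, M, C] = elementVintegrals(V, xn, dxn)
% Approximate the element integrals over [xn, xn + dxn]:
%   P = int_0^dx V(x + xn) (x/dx)^2 dx
%   M = int_0^dx V(x + xn) ((dx - x)/dx)^2 dx
%   C = int_0^dx V(x + xn) x (dx - x)/dx^2 dx
% with a 3-point Gauss-Legendre rule (exact if V is a polynomial of
% degree <= 3 on the interval). V must accept a vector of points.
  t  = [-sqrt(3/5); 0; sqrt(3/5)];   % Gauss-Legendre nodes on [-1, 1]
  w  = [5/9; 8/9; 5/9];              % corresponding weights
  xq = dxn*(t + 1)/2;                % nodes mapped to [0, dxn]
  wq = w*dxn/2;                      % weights scaled by the Jacobian
  Vq = V(xn + xq);                   % potential at the quadrature points
  P  = sum(wq .* Vq .* (xq/dxn).^2);
  M  = sum(wq .* Vq .* ((dxn - xq)/dxn).^2);
  C  = sum(wq .* Vq .* xq .* (dxn - xq) / dxn^2);
end
```

Accumulating $P_{m-1} + M_m$ on the diagonal and $C_m$ on the two off-diagonals (with the periodic wrap-around) then builds $V_{mn}$.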
(b) See the Matlab files posted on the web site. Note that in addition to schrodinger_galerkin, I also implemented a schrodinger_galerkin_sparse function that does the same thing but using Matlab's spdiags function to exploit the sparsity of the matrix and the eigs function to use an iterative method to just find a few eigenvalues (it uses the Arnoldi algorithm, I believe). This way I can calculate the solution for much larger $N$ values if need be.
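The essential idea of the sparse version is sketched below. The variable names (a0, a1, b0, b1, As, Bs) are my own, and I build the matrices with sparse() rather than spdiags to sidestep its column-alignment conventions; the posted schrodinger_galerkin_sparse differs in detail.

```matlab
% a0(n), b0(n): diagonal entries of A and B (columns of length N);
% a1(n), b1(n): the symmetric entries coupling node n to node n+1
% (n = N wraps around to node 1 by periodicity).
n  = (1:N)';
np = [(2:N)'; 1];                        % periodic neighbor of each node
As = sparse([n; n; np], [n; np; n], [a0; a1; a1], N, N);
Bs = sparse([n; n; np], [n; np; n], [b0; b1; b1], N, N);
% Generalized eigenproblem A c = E B c; ask for a few of the smallest
% eigenvalues ('sa' = smallest algebraic, for symmetric A and B).
[C, D] = eigs(As, Bs, 4, 'sa');
E = sort(diag(D));
```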
Figure 1: Wavefunctions $\psi(x)$ for the four lowest eigenvalues, computed with a uniform grid for $N = 100$. (Note that the normalization of $\psi$ is arbitrary.)

Figure 2: Residual function $\sqrt{|r(x)|}$ (blue dots), and a simplified interpolated envelope (red circles/lines) that we'll use for $\rho(x) \sim 1/\sqrt{|r(x)|}$ below.
(c) Here we computed the solutions for $N = 100$:

(i) For uniform spacing, my lowest four $E$ were $-35.1923$, $-3.7363$, $24.1334$, and $42.6923$. The corresponding eigenfunctions are shown in figure 1.

(ii) In order to choose $\rho(x)$ in a reasonable way (not necessarily the best way), I tried to mimic what a real adaptive-grid solver would likely do: I looked at the residual $r$ of the solution and changed the grid density to try to equalize the residual everywhere. In particular, I calculated the residual for the $N = 100$ uniform grid as $r = Ac_0 - EBc_0$, where $c_0$ is the exact solution on the same grid (interpolated from $N = 8000$)[1]; specifically, we'll use the residual for the lowest band, since that's what we want to compute $\rho(x)$ from. For reference, the exact value of $E$ (as computed by a uniform grid for $N = 8000$) should be $-35.1999808$. Now, if we believe that this scheme is quadratically accurate, then in order to cut the residual by 4 we should double the resolution. Thus, if we suppose that we want to equalize the residual everywhere, this would argue for using $\rho(x) \sim 1/\sqrt{|r(x)|}$, where $r(x)$ is the residual function, suitably interpolated from $r$. The residual $r$ is plotted in figure 2; since it is kind of bumpy, we'll just use an $r(x)$ formed by interpolating an upper-bound envelope, as shown by the straight red lines in the figure. Using this $\rho(x)$ (via the makegrid function supplied on the web site), I get eigenvalues of $-35.1944$, $-3.7364$, $24.1668$, and $42.7061$. The $\psi(x)$ plots are indistinguishable and are therefore not shown.

[1] A more realistic adaptive-grid code would normally estimate the residual by comparing with the solution at a coarser resolution.
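A minimal sketch of this residual-based density is given below, assuming A, B, the lowest eigenvalue E0, its interpolated "exact" coefficients c0, and the uniform grid x (a column vector of length N) are already in the workspace; the envelope construction here is my own crude stand-in for the red lines in figure 2, not the procedure actually used.

```matlab
r   = A*c0 - E0*(B*c0);                  % residual r = A c0 - E B c0 (lowest band)
s   = sqrt(abs(r));                      % the bumpy quantity plotted in figure 2
% Crude envelope: maximum of sqrt(|r|) over blocks of 10 grid points,
% linearly interpolated back onto the full grid.
blk = ceil((1:N)'/10);
env = interp1(accumarray(blk, x, [], @mean), ...
              accumarray(blk, s, [], @max), x, 'linear', 'extrap');
rho = 1 ./ max(env, 1e-8);               % rho(x) ~ 1/sqrt(|r(x)|), guarded from 0
% rho (sampled on x) would then be handed to the makegrid routine from the
% web site to generate the nonuniform grid.
```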
(d) The error $\Delta E_N$ for $N = 32, 64, 128, 256, 512$ is plotted in figure 3.

Figure 3: Error $\Delta E_N$ vs. $N$ for a uniform grid and the $\rho(x)$ grid we selected, along with a quadratic $1/N^2$ line for comparison. Both are quadratic, and our $\rho(x)$ helps slightly but not much.

Both are clearly quadratic, and the $\rho(x)$ helps slightly (the error is reduced by about 1.4). Darn. Fitting the last two points, it seems that $\Delta E_N \approx 42\,N^{-1.9993}$. A table of our data follows, where $\Delta E_N = |E_N - E_{2N}|$:

    N      E_N           Delta E_N
    32     -35.146211    0.04023
    64     -35.186436    0.010157
    128    -35.196592    0.00254
    256    -35.199134    0.000636
    512    -35.199770    0.000159
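The quoted exponent comes from a power-law fit in log-log space. A sketch of how to reproduce such a fit from the tabulated uniform-grid data follows; with the rounded values above, the slope comes out to about $-2.0$ and the prefactor to about 42.

```matlab
% Power-law fit  dE ~ a * N^p  to the uniform-grid convergence table above.
Nvals = [32 64 128 256 512]';
dE    = [0.04023 0.010157 0.00254 0.000636 0.000159]';
p     = polyfit(log(Nvals(end-1:end)), log(dE(end-1:end)), 1);  % last two points
order = p(1);                 % slope: about -2 for a second-order scheme
a     = exp(p(2));            % prefactor: about 42 here
fprintf('dE_N ~ %.3g * N^(%.4g)\n', a, order);
```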
Problem 3: Orthogonal polynomials

Let $x_n$ ($n = 1, \ldots, N$) be the roots of the orthogonal polynomial $p_N(x)$, which we showed must lie in $(a, b)$. Suppose that the $k$-th root is repeated, with multiplicity $M > 1$; from this, we will prove a contradiction similar to the proof in class.

In particular, form the new polynomial $S(x) = p_N(x)/(x - x_k)^L$, where $L = M$ if $M$ is even and $L = M - 1$ if $M$ is odd. Clearly, $S(x)$ has smaller degree than $p_N(x)$ since $L > 0$, and therefore we have zero inner product $(S, p_N) = 0$. However, it must also be the case that $S(x)$ has the same sign as $p_N(x)$ everywhere. If $M$ is even, then $p_N(x)$ does not change sign at $x_k$ (which is an extremum), and neither does $S(x)$ since $L = M$ (we removed the $x_k$ root entirely). If $M$ is odd, then $p_N(x)$ changes sign at $x_k$ and so does $S(x)$ (since $L = M - 1$ and thus $S(x)$ contains $x_k$ with multiplicity 1). All the other sign changes are the same since the other roots were unchanged. Therefore, we must have a positive inner product $(S, p_N) > 0$, which contradicts the orthogonality above. Q.E.D.
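One compact way to see the same-sign claim (my own restatement, writing the inner product with a positive weight $w(x)$ on $(a, b)$, which is not spelled out in the solution above): since $L$ is even in both cases,

$$S(x)\, p_N(x) = \frac{p_N(x)^2}{(x - x_k)^L} \ge 0 \quad \text{for all } x \ne x_k,$$

so $(S, p_N) = \int_a^b w(x)\, S(x)\, p_N(x)\, dx > 0$, because the integrand is nonnegative and vanishes only at isolated points, contradicting $(S, p_N) = 0$.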