A New Approach for Numerically Solving Nonlinear
Eigensolution Problems ∗
Changchun Wang† and Jianxin Zhou
Abstract
By considering a constraint on the energy profile, a new implicit approach is developed to solve nonlinear eigensolution problems. A corresponding minimax method is
modified to numerically find eigensolutions in the order of their eigenvalues to a class of
semilinear elliptic eigensolution problems from nonlinear optics and other nonlinear dispersive/diffusion systems. It turns out that the constraint is equivalent to a constraint
on the wave intensity in L^{p+1}-norm. The new approach enables us to establish
some interesting new properties, such as wave intensity preserving/control, bifurcation
identification, etc., and to explore their applications. Numerical results are presented to
illustrate the method.
Keywords. Semilinear elliptic eigensolution problem, implicit approach, minimax method,
order in eigenvalues, Morse index, bifurcation.
AMS subject classification. 35P30, 35J20, 65N25, 58E99
1 Introduction
We consider a semilinear elliptic eigensolution problem (ESP): find eigensolutions (u, λ) ∈ H^1(Ω) × R in an order s.t.

(1.1)    λu(x) = −∆u(x) + βv(x)u(x) − κf(x, |u(x)|)u(x),    x ∈ Ω,

subject to either a zero Dirichlet or Neumann boundary condition (B.C.), where Ω is an open bounded domain in R^n, v(x) ≥ 0 is a potential function, and β, κ are physical parameters. Nonlinear ESP (1.1) arises from many applications and theoretical studies in quantum mechanics,
∗ Department of Mathematics, Texas A&M University, College Station, TX 77843. ccwang81@gmail.com and jzhou@math.tamu.edu. Supported in part by NSF DMS-0713872/0820327/1115384.
† Current affiliation: Department of US Imaging, CGGVeritas Company.
nonlinear optics and other dispersive/diffusion systems [1,2,3,5,6,10,13,14,15,16,19]. For example, in order to study pattern formation, (in)stability and other properties of solutions to
the nonlinear Schrödinger equation (NLS)
(1.2)    i ∂w(x, t)/∂t = −∆w(x, t) + βv(x)w(x, t) − κf(x, |w(x, t)|)w(x, t),

(1.3)    d/dt ∫_{R^n} |w(x, t)|² dx = 0    (a mass conservation law),
standing (solitary) wave solutions of the form w(x, t) = u(x)e^{−iλt} are investigated, where λ is a wave frequency and u(x) is a wave amplitude function. (1.3) is the mass conservation law under which the Schrödinger equation (1.2) can be derived [4] and is physically meaningful. It is clear that a standing wave solution always satisfies the mass conservation law (1.3). Then (1.2) leads to nonlinear ESP (1.1). In (1.2), the parameters β, κ > 0 (β, κ < 0) correspond to a focusing or attractive (defocusing or repulsive) nonlinearity. These two cases are physically and mathematically very different, and have to be solved numerically by two very different methods. Although the new approach developed in this paper is applicable to both cases, here we deal with the focusing case (β = κ = 1) only.
The variational “energy” functional of (1.1) at each (u, λ) is
(1.4)    J(u, λ) = ∫_Ω [ ½(|∇u(x)|² + v(x)u²(x) − λu²(x)) − F(x, u(x)) ] dx
where F_u(x, u) = f(x, |u|)u satisfies certain regularity and growth conditions. Then (u, λ) is an eigensolution to (1.1) if and only if it solves the partial differential equation

(1.5)    J′_u(u, λ) = 0.
Since only a partial derivative of J w.r.t. u is involved in (1.5), the problem is not variational and has one degree of freedom. There are two types of problems related to (1.1) studied in the literature: (1) to formulate a variational or discrete spectrum problem by introducing a scalar equation as a constraint to remove the one degree of freedom and then solve for (u, λ); (2) to formulate a bifurcation problem by identifying the values of λ (bifurcation points) across which their multiplicities change. Accordingly there are two types of numerical methods in the literature for solving (1.1): variational methods with various optimization techniques, and linearization methods including various Newton (continuation) methods [11]. Most variational methods focus on finding the first eigensolution and assume a normalization condition [2,3]

(1.6)    ∫_Ω |u(x)|² dx = 1,
within a Lagrange multiplier approach. Such a normalization condition is physically necessary
in quantum mechanics but stronger than the mass conservation law (1.3). On the other hand,
nonlinear ESP (1.1) may also arise from NLS in nonlinear optics [1] or from applications in
other nonlinear dispersive/diffusion systems [5,6,10,13,14] where such a normalization is not
necessary. Chan-Keller developed a Newton arc-length continuation method in [5, and references therein] to trace bifurcation points, where an arc-length normalization is used instead
of the normalization condition (1.6). In addition, a typical application of nonlinear eigenfunction problems, where at least the first few eigenvalues and their eigenfunctions are required, is the threshold problem: for given F, G ∈ C¹(H, R) and a threshold λ_0, find a largest subspace H_S ⊂ H such that F(u) ≤ λ_0 G(u), ∀u ∈ H_S, subject to a certain constraint C(u) = 0 [8].
Numerical variational methods on finding multiple solutions to ESP (1.1) with the normalization condition and Lagrange multiplier approach are studied in [24].
In this paper we study other variational methods and explore their advantages. To find a better formulation, observe in (1.1) that the nonlinearity of the problem is in the variable u, not λ. So if we denote the Hamiltonian of the wave u by

(1.7)    H(u) = ∫_Ω [ ½(|∇u(x)|² + v(x)u²(x)) − F(x, u(x)) ] dx,

and the intensity of the wave u in L²-norm by

(1.8)    I(u) = ∫_Ω u²(x) dx = ∥u∥²_{L²(Ω)},

then (1.5) becomes a typical nonlinear ESP of the form

(1.9)    H′(u) = λI′(u).
In order to remove the one degree of freedom and form a variational discrete spectrum problem, an easy choice, as suggested in [16,25] and frequently used in the literature, is to enforce a level-set constraint I(u) = C. Then its Lagrange functional is

(1.10)    L(u, λ) = H(u) − λ(I(u) − C).

The problem becomes variational and is to solve for (u, λ) s.t.

    L′(u, λ) = (L′_u(u, λ), L′_λ(u, λ)) = (H′(u) − λI′(u), C − I(u)) = (0, 0).
However, our numerical experience in [23,24] suggests that treating (u, λ) as the variable of the Lagrange functional L may not be a good idea in numerical computation. It causes difficulties in selecting initial guesses (u_0, λ_0) for computing eigensolutions in the order of their eigenvalues, and in analyzing the functional structure (mountain pass or linking), solution instability or bifurcation. A direct analysis shows that L(u, λ) is always degenerate in the variable λ, i.e., [0, µ][L″(u, λ)][0, µ]^T = 0, or ill-conditioned. It provides different information
on variational order or Morse index than the original problem, has no global minimum even when the original problem has one, and has no proper local minimum or saddle points either. Thus to numerically find a constrained local minimum, an augmented Lagrange functional [9] has to be defined. However, such an augmented Lagrange functional will neither change the degeneracy nor work for constrained saddle points, which correspond to eigenfunctions associated with the second or higher eigenvalues. Hence we will not use a Lagrange functional approach in this paper.
It is clear that the level set constraint I(u) = C implies that the intensity of the eigenfunction u in L²-norm is a constant. The normalization condition is a nice invariant property for linear ESPs, i.e., normalizations in different norms or with different constants yield exactly the same eigensolutions. However, the case is quite different for nonlinear ESPs.
An ESP (1.9) is said to be homogeneous if H(tu) = |t|^{k+1}H(u), I(tu) = |t|^{l+1}I(u) for some k, l > 0 and any t ≠ 0. An ESP is said to be iso-homogeneous if it is homogeneous with k = l. It is obvious that a linear ESP is iso-homogeneous and a nonlinear ESP (1.1) is in general not homogeneous. Consequently many nice properties enjoyed by linear ESPs are lost for nonlinear ESPs. In particular, the normalization condition is no longer invariant for non iso-homogeneous nonlinear ESPs, i.e., normalizations in different norms or with different constants may yield different eigensolutions. Thus for nonlinear ESPs in nonlinear optics or other nonlinear diffusion systems, the normalization condition is neither necessary nor invariant. It is known that for analysis and application of an ESP, it is important to find solutions in an order. The best order is the order of their eigenvalues. But the Lagrange functional method (1.10) with a level set constraint and Newton iteration based methods [5,19] fail to do so. To motivate a better method, let us first look at a homogeneous ESP.
We have ⟨H′(u), u⟩ = (k + 1)H(u) and ⟨I′(u), u⟩ = (l + 1)I(u). Then at an eigensolution (u, λ), ⟨H′(u), u⟩ = ⟨λI′(u), u⟩ ⇒ H(u) = ((l+1)/(k+1)) λI(u). Thus the level set constraint I(u) = C is equivalent to the constraint

(1.11)    H(u) = ((l+1)/(k+1)) Cλ    or    J(u, λ) = H(u) − λI(u) = ((l+1)/(k+1) − 1) Cλ,

i.e., the variational energy values of J at (u, λ) are proportional to λ. The level set constraint H(u) = C is equivalent to the constraint

(1.12)    I(u) = ((k+1)/(l+1)) C/λ    or    J(u, λ) = H(u) − λI(u) = (1 − (k+1)/(l+1)) C,

i.e., the variational energy values of J at (u, λ) are fixed at a constant level. If the ESP is iso-homogeneous, k = l, the above reduces to J(u, λ) = 0 and becomes the Rayleigh quotient method λ(u) = H(u)/I(u). However, if the ESP is nonhomogeneous, the above equivalences between the two level set constraints and the two constraints on the energy profile are broken and they lead to different eigensolution sets. Nonhomogeneous ESPs with a level set constraint are studied in [24]. Here we study nonhomogeneous ESPs with constraints on the variational energy profile.
The paper is organized as follows. In Section 2, we develop an implicit minimax method. The method is applied to solve nonlinear ESP (1.1) with a zero Dirichlet B.C. in Section 3.1 and with a zero Neumann B.C. in Section 3.2. In each case, a separation property in a mountain pass structure is verified in order to apply the minimax method. Corresponding numerical results are presented in Sections 3.1.1 and 3.2.1. As advantages of the new approach, we verify some interesting properties, such as wave intensity preserving/control and bifurcation identification (its proof is given in Appendix B). For numerical convergence and other analysis under the new formulation, a new PS condition is established in Appendix A.
2 An Implicit Minimax Method
In many applications, I(u) > 0 for u ≠ 0. Let C be a C² function s.t. C′(λ) ≥ 0 (physically this implies that the energy J is nondecreasing in the eigenvalue or wave frequency λ) and, for each u ≠ 0, there is a unique λ s.t. J(u, λ) = H(u) − λI(u) = C(λ). By the implicit function theorem, a function λ(u) can be implicitly defined from J(u, λ(u)) = H(u) − λ(u)I(u) = C(λ(u)) with λ′(u) = [I(u) + C′(λ(u))]^{−1}[H′(u) − λ(u)I′(u)]. Since I(u) + C′(λ(u)) > 0, we have

(2.13)    λ′(u) = 0 ⇔ H′(u) = λI′(u).
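To make the implicit construction concrete, here is a minimal numerical sketch (our illustration, not part of the original text): for a discretized u it recovers λ(u) from H(u) − λI(u) = C(λ) by a scalar root find and evaluates λ′(u) by the quotient formula above. The helper names and the bracketing strategy are our own choices; for the two cases used in Section 3, C(λ) = C and C(λ) = Cλ, the closed forms λ(u) = (H(u) − C)/I(u) and λ(u) = H(u)/(I(u) + C) are available and no root find is needed.

```python
import numpy as np
from scipy.optimize import brentq

def lambda_of_u(H_u, I_u, C=lambda lam: 0.0):
    """Solve H(u) - lam*I(u) = C(lam) for the scalar lam = lambda(u).
    H_u = H(u) and I_u = I(u) > 0 at the current iterate u."""
    g = lambda lam: H_u - lam * I_u - C(lam)
    lo, hi = -1.0, 1.0
    while g(lo) < 0.0:        # g is strictly decreasing since I(u) + C'(lam) > 0
        lo *= 2.0
    while g(hi) > 0.0:
        hi *= 2.0
    return brentq(g, lo, hi)

def lambda_grad(grad_H, grad_I, lam, I_u, Cp_lam):
    """lambda'(u) = [H'(u) - lam*I'(u)] / (I(u) + C'(lam)), cf. (2.13)."""
    return (grad_H - lam * grad_I) / (I_u + Cp_lam)
```

Note that in the continuous setting the gradient is taken in the H¹ inner product, which requires an additional elliptic solve; the finite-dimensional quotient above only illustrates the algebraic structure.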
We need some notions from critical point theory. Let λ : H → R be a C² functional. A point u* ∈ H is a critical point of λ if λ′(u*) = 0. Let H = H⁺ ⊕ H⁰ ⊕ H⁻ be the spectral decomposition of the linear operator λ″(u*), i.e., H⁺, H⁰, H⁻ are respectively the maximum positive, the null and the maximum negative subspaces of λ″(u*). u* is said to be nondegenerate if H⁰ = {0}. Otherwise u* is degenerate and the integer dim(H⁰) is called the nullity of u*. According to Morse theory, the integer dim(H⁻) is called the Morse index of the critical point u* and is denoted by MI(u*). Since H⁻ contains decreasing directions of λ at u*, MI(u*) is used in the literature to measure the instability of u*, an important piece of information in system design/control and bifurcation analysis.
By (2.13), a local minimax method (LMM) [12,17,18,23,24,26,27,28] can be applied to find critical points u of λ in the order of λ-values. Consequently eigensolutions (u, λ(u)) are found in the order of λ, and Morse theory can be applied to discuss possible bifurcation phenomena, which exhibits a great advantage of our approach over others.
2.1 A Local Minimax Method
In this subsection we briefly introduce LMM, some of its mathematical background and related
results from [12,26,22,17,18,28]. LMM is a 2-level optimization method for finding multiple
critical points of a functional J with a mountain pass structure in the order of J-values and
their Morse indexes.
2.1.1 Solution Characterization and Morse Index
Let H be a Hilbert space and J ∈ C¹(H, R). For a given closed subspace L ⊂ H, denote H = L ⊕ L^⊥ and S_{L^⊥} = {v ∈ L^⊥ : ∥v∥ = 1}. For each v ∈ S_{L^⊥}, denote [L, v] = span{L, v}.

Definition 1. The peak mapping P : S_{L^⊥} → 2^H is defined by P(v) = the set of all local maxima of J on [L, v], ∀v ∈ S_{L^⊥}. A peak selection is a single-valued mapping p : S_{L^⊥} → H s.t. p(v) ∈ P(v), ∀v ∈ S_{L^⊥}. If p is locally defined, then p is called a local peak selection.

J is said to satisfy the Palais-Smale (PS) condition in H if any sequence {u_n} ⊂ H s.t. {J(u_n)} is bounded and J′(u_n) → 0 has a convergent subsequence.
The following theorems provide a mathematical justification for LMM and also give an estimate for the Morse index of a solution found by LMM.

Theorem 1. If p is a local peak selection of J near v_0 ∈ S_{L^⊥} s.t. p is locally Lipschitz continuous at v_0 with p(v_0) ∉ L and v_0 = arg local-min_{v∈S_{L^⊥}} J(p(v)), then u_0 = p(v_0) is a saddle point of J. If p is also differentiable at v_0 and we denote H⁰ = ker(J″(u_0)), then dim(L) + 1 = MI(u_0) + dim(H⁰ ∩ [L, v_0]).
Theorem 2. Let J be C¹ and satisfy the PS condition. If p is a peak selection of J w.r.t. L s.t. (a) p is locally Lipschitz continuous, (b) inf_{v∈S_{L^⊥}} d(p(v), L) > 0 and (c) inf_{v∈S_{L^⊥}} J(p(v)) > −∞, then there is v_0 ∈ S_{L^⊥} s.t. J′(p(v_0)) = 0 and p(v_0) = arg min_{v∈S_{L^⊥}} J(p(v)).
Let M = {p(v) : v ∈ S_{L^⊥}}. Theorem 1 states that local-min_{u∈M} J(u) yields a critical point u*, which is unstable in H but stable on M and can be numerically approximated by, e.g., a steepest descent method. This leads to LMM. For L = {0}, M is called the Nehari manifold in the literature, i.e.,

(2.14)    N = {u ∈ H : u ≠ 0, ⟨J′(u), u⟩ = 0} = M = {p(v) : v ∈ H, ∥v∥ = 1}.

2.1.2 The Numerical Algorithm and Its Convergence
Let w_1, ..., w_{n−1} be the n − 1 previously found critical points and L = [w_1, ..., w_{n−1}]. Given ε > 0 and ℓ > 0, let v^0 ∈ S_{L^⊥} be an ascent-descent direction at w_{n−1}.

Step 1: Let t_0^0 = 1, v_L^0 = 0 and set k = 0;

Step 2: Using the initial guess w = t_0^k v^k + v_L^k, solve for w^k ≡ p(v^k) = arg max_{u∈[L,v^k]} J(u) and denote t_0^k v^k + v_L^k = w^k ≡ p(v^k);
Step 3: Compute the steepest descent vector d^k := −J′(w^k);

Step 4: If ∥d^k∥ ≤ ε then output w_n = w^k and stop; else go to Step 5;

Step 5: Set v^k(s) := (v^k + s d^k)/∥v^k + s d^k∥ ∈ S_{L^⊥} and find

    s^k := max_{m∈N} { ℓ/2^m : (t_0^k ℓ)/2^m > ∥d^k∥,  J(p(v^k(ℓ/2^m))) − J(w^k) ≤ −(ℓ/2^{m+1})∥d^k∥² }.

    The initial guess u = t_0^k v^k(ℓ/2^m) + v_L^k is used to find (track a peak selection) p(v^k(ℓ/2^m)), where t_0^k and v_L^k are found in Step 2.

Step 6: Set v^{k+1} = v^k(s^k), w^{k+1} = p(v^{k+1}), k = k + 1, then go to Step 3.
Remark 1. ℓ > 0 controls the maximum stepsize of each search. The condition v^0 ∈ S_{L^⊥} can actually be relaxed [21]. LMM first starts with n = 1, L = {0} to find a solution w_1. Then LMM starts with n = 2, L = span{w_1} to find another solution w_2. LMM continues in this way with L gradually expanded by previously found solutions.
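To illustrate the two-level structure of the algorithm, here is a small self-contained sketch (our addition, not the authors' code). It uses the support L = {0}, so the peak selection reduces to the one-dimensional maximization p(v) = t_v v, and it replaces the exact step-size rule of Step 5 by a simplified Armijo-type backtracking; the toy functional J(u) = ½uᵀAu − ¼Σu_i⁴ is only a finite-dimensional stand-in with the required mountain pass structure, and all names in the code are our own.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def lmm_first_solution(J, gradJ, v0, eps=1e-8, ell=1.0, max_iter=500):
    """Local minimax iteration with support L = {0}, where p(v) = t_v * v."""
    def peak(v):
        # Step 2: maximize J along the ray {t*v : t > 0}
        res = minimize_scalar(lambda t: -J(t * v), bounds=(1e-8, 1e3), method='bounded')
        return res.x * v
    v = v0 / np.linalg.norm(v0)
    w = peak(v)
    for _ in range(max_iter):
        d = -gradJ(w)                        # Step 3: steepest descent direction
        if np.linalg.norm(d) <= eps:         # Step 4: stopping test
            break
        s = ell                              # Step 5: simplified backtracking
        while True:
            v_new = (v + s * d) / np.linalg.norm(v + s * d)
            w_new = peak(v_new)
            if J(w_new) - J(w) <= -0.5 * s * np.linalg.norm(d) ** 2 or s < 1e-12:
                break
            s *= 0.5
        v, w = v_new, w_new                  # Step 6: update and repeat
    return w

# Toy functional with mountain pass structure: J(u) = 0.5*u^T A u - 0.25*sum(u_i^4).
n = 20
A = np.diag(np.arange(1.0, n + 1.0))
J = lambda u: 0.5 * u @ (A @ u) - 0.25 * np.sum(u ** 4)
gradJ = lambda u: A @ u - u ** 3
w1 = lmm_first_solution(J, gradJ, v0=np.ones(n))
print("J(w1) =", J(w1), " residual =", np.linalg.norm(gradJ(w1)))
```

For the eigensolution problems of Section 3 the functional J is replaced by the implicitly defined λ(u) and the gradient is obtained from a linear elliptic solve, as described there.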
Theorem 3. ([28]) If J is C¹ and satisfies the PS condition, (a) p is locally Lipschitz continuous, (b) d(L, p(v^k)) > α > 0 and (c) inf_{v∈S_{L^⊥}} J(p(v)) > −∞, then v^k → v* ∈ S_{L^⊥} with ∇J(p(v*)) = 0.
3 Solving the Focusing ESP
Let v = 0 and F(x, u(x)) = |u(x)|^{p+1}/(p+1) in (1.4), subject to either a zero Dirichlet or Neumann B.C., where 1 < p < p* and p* is the critical Sobolev exponent [15], i.e., we solve the ESP

(3.1)    −∆u(x) − |u(x)|^{p−1}u(x) = λu(x)
and set its variational energy

(3.2)    J(u, λ) = H(u) − λI(u) = ∫_Ω [ ½|∇u(x)|² − (λ/2)u²(x) − (1/(p+1))|u(x)|^{p+1} ] dx = C(λ),

    H(u) = ∫_Ω [ ½|∇u(x)|² − (1/(p+1))|u(x)|^{p+1} ] dx    and    I(u) = ∫_Ω ½u²(x) dx.
Multiplying (3.1) by u and integrating over Ω gives

(3.3)    ∫_Ω [ |∇u(x)|² − λu²(x) − |u(x)|^{p+1} ] dx = 0.

Taking (3.2) into account, for any eigenfunction u we obtain

(3.4)    C(λ) = ( ½ − 1/(p+1) ) ∫_Ω |u(x)|^{p+1} dx > 0.
So we must have C(λ) > 0. (3.4) shows that the choice of C(λ) decides the way we find eigensolutions under a constraint on the wave intensity in L^{p+1}-norm. We consider two cases in this paper: C(λ) = C and C(λ) = Cλ.

On intensity of eigenfunctions. Case (1): C(λ) = C. We must have C > 0 and

(3.5)    ∥u∥^{p+1}_{L^{p+1}} = (2(p+1)/(p−1)) C

for any eigenfunction u. By the Hölder inequality, there exists C_p > 0 s.t. ∥u∥_{L²} < C_p.
Theorem 4. Let u_k, k = 1, 2, ..., be all the eigenfunctions. Then ∥u_k∥^{p+1}_{L^{p+1}} = C_1 for some constant C_1 > 0 if and only if J(u_k) = H(u_k) − λ(u_k)I(u_k) = (½ − 1/(p+1))C_1.

Proof. By (3.5), we only have to show the “only if” part. When ∥u_k∥^{p+1}_{L^{p+1}} = C_1 > 0, we have

    J(u_k) = H(u_k) − λ(u_k)I(u_k) = ½∥∇u_k∥²_{L²} − (1/(p+1))C_1 − λ(u_k)I(u_k).

(3.3) gives ½∥∇u_k∥²_{L²} − λ(u_k)I(u_k) = ½∥u_k∥^{p+1}_{L^{p+1}} = ½C_1. Thus J(u_k) = (½ − 1/(p+1))C_1 > 0.
Remark 2. ∥u_k∥_{L²} and ∥u_k∥_{L^{p+1}} measure the intensity of u_k in two different norms. When ∥u_k∥_{L²} = C is used as a constraint in the literature, there is no control over the intensity ∥u_k∥_{L^{p+1}}. Theorem 4 states J(u, λ) = C ⇔ ∥u_k∥_{L^{p+1}} = C_1. Under this constraint, by the Hölder inequality, the intensity ∥u_k∥_{L²} is bounded. This shows an advantage of our approach.
Case (2): C(λ) = Cλ. We must have C = C′(λ) > 0, λ > 0. Then (3.4) becomes

(3.6)    ∥u∥^{p+1}_{L^{p+1}} = (2(p+1)/(p−1)) Cλ.

Plugging (3.6) into (3.2), we obtain

    ½∥∇u∥²_{L²} − (λ/2)∥u∥²_{L²} − 2Cλ/(p−1) = Cλ    or    ½∥∇u∥²_{L²} = λ( ½∥u∥²_{L²} + ((p+1)/(p−1))C ).
On the mountain pass structure of λ(u). Since λ(u) is derived from a quotient formulation and differs from the usual ones in the literature, we need to know some of its basic properties. For example, for LMM to be applicable, for each u ≠ 0, λ(tu) needs to attain its local maximum at a certain t_u > t_0 > 0 and lim_{t→+∞} λ(tu) = −∞. Such a structure is called a mountain-pass structure, motivated by the well-known mountain pass lemma [15]. Typically in the mountain pass lemma, a variational functional needs to be C¹.
Case (1): C(λ) = C > 0. We have λ′(u) = [I(u)]^{−1}[H′(u) − λ(u)I′(u)], where I(u) + C′(λ(u)) = I(u) > 0 for all u ≠ 0, and

    λ(u) = [I(u)]^{−1}[H(u) − C] = [ ∫_Ω ½u²(x)dx ]^{−1} ( ∫_Ω [ ½|∇u(x)|² − (1/(p+1))|u(x)|^{p+1} ] dx − C ).
Thus lim_{t→0^+} λ(tu) = −∞ and lim_{t→+∞} λ(tu) = −∞. It is clear that λ(u) has a singularity at u = 0. But u = 0 is not an eigenfunction, thus we still call such a structure a mountain-pass type structure with a singularity at u = 0.

Case (2): C(λ) = Cλ with C > 0. We have λ′(u) = [I(u) + C]^{−1}(H′(u) − λ(u)I′(u)) and

    λ(u) = [I(u) + C]^{−1}H(u) = [ ∫_Ω ½u²(x)dx + C ]^{−1} ∫_Ω [ ½|∇u(x)|² − (1/(p+1))|u(x)|^{p+1} ] dx.

Thus lim_{t→0^+} λ(tu) = 0^+ and lim_{t→+∞} λ(tu) = −∞. This shows a typical mountain pass structure.
In either case, see Fig. 1, for each u ≠ 0 there exists t_u > 0 s.t.

(3.7)    t_u = arg max_{t>0} λ(tu)    or    (d/dt) λ(tu)|_{t=t_u} = ⟨λ′(t_u u), u⟩ = 0.

But we still need to show t_u > t_0 > 0 for LMM to be successful. Then a weaker PS_N condition is verified in Appendix A for the existence of solutions and the convergence of LMM.
Figure 1: The graph of λ(tu) for a fixed u ≠ 0 when C(λ) = C (left) and C(λ) = Cλ (right).
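The two profiles in Figure 1 are easy to reproduce numerically. The following short script (our illustration; the 1D domain, the sample u and the grid size are arbitrary choices) evaluates λ(tu) via the closed forms λ(tu) = (H(tu) − C)/I(tu) and λ(tu) = H(tu)/(I(tu) + C) and locates the single interior maximum t_u in each case.

```python
import numpy as np

# sample u on Omega = (0, 1) with zero Dirichlet B.C., p = 3 (choices are ours)
N = 200
x = np.linspace(0.0, 1.0, N + 2)
h = x[1] - x[0]
u = np.sin(np.pi * x)
p, C = 3, 1.0

def H(w):   # H(w) = int 0.5*|w'|^2 - (1/(p+1))*|w|^(p+1) dx
    return 0.5 * np.sum(np.diff(w) ** 2) / h - np.sum(np.abs(w) ** (p + 1)) * h / (p + 1)

def I(w):   # I(w) = int 0.5*w^2 dx
    return 0.5 * np.sum(w ** 2) * h

ts = np.linspace(0.05, 4.0, 400)
lam1 = np.array([(H(t * u) - C) / I(t * u) for t in ts])   # Case (1): C(lam) = C
lam2 = np.array([H(t * u) / (I(t * u) + C) for t in ts])   # Case (2): C(lam) = C*lam
print("t_u (Case 1) =", ts[lam1.argmax()], " t_u (Case 2) =", ts[lam2.argmax()])
```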
3.1 Focusing ESP with zero Dirichlet B.C.
Theorem 5. For Case (1) or Case (2), there is t_0 > 0 s.t. for all u ∈ H = H_0^1(Ω) with ∥u∥_H = 1, there is a unique t_u > t_0 satisfying (dλ(tu)/dt)|_{t=t_u} = 0. Furthermore, in either case the peak selection p is unique and differentiable when L = {0} and satisfies ∥p(u)∥ > t_0.
Proof. Note ∥u∥_H = ∥∇u∥_{L²} = 1. By the Sobolev inequality, let C_s > 0 be the constant s.t. ∥u∥^s_{L^s} < C_s ∀u ∈ H, ∥u∥_H = 1. For Case (1): C(λ) = C > 0, we have

    λ(tu) = (H(tu) − C)/I(tu) = (1/∥u∥²_{L²}) [ 1 − (2t^{p−1}/(p+1))∥u∥^{p+1}_{L^{p+1}} − 2Ct^{−2} ],

    (d/dt) λ(tu) = (1/∥u∥²_{L²}) [ −(2(p−1)/(p+1)) t^{p−2} ∥u∥^{p+1}_{L^{p+1}} + 4Ct^{−3} ] > (t^{−3}/∥u∥²_{L²}) [ −(2(p−1)/(p+1)) C_{p+1} t^{p+1} + 4C ].

Thus there is t_0 > 0 s.t. when 0 < t < t_0, (d/dt)λ(tu) > 0, so t_u > t_0 ∀u ∈ H, ∥u∥_H = 1.
For Case (2): C(λ) = Cλ, we have

    λ(tu) = [ (t²/2)∥u∥²_{L²} + C ]^{−1} ( t²/2 − (t^{p+1}/(p+1))∥u∥^{p+1}_{L^{p+1}} ),

    (d/dt) λ(tu) = [ (t²/2)∥u∥²_{L²} + C ]^{−2} [ t^{p+2}(1/(p+1) − ½)∥u∥^{p+1}_{L^{p+1}}∥u∥²_{L²} + C(t − t^p∥u∥^{p+1}_{L^{p+1}}) ]
                 > [ (t²/2)∥u∥²_{L²} + C ]^{−2} [ t^{p+2}(1/(p+1) − ½)C_{p+1}C_2 − t^p C C_{p+1} + Ct ].

Thus there is t_0 > 0 s.t. when 0 < t < t_0, it holds that (d/dt)λ(tu) > 0, so t_u > t_0 ∀u ∈ H, ∥u∥_H = 1.
On the other hand, for either Case (1) or Case (2), we have

    (d/dt) λ(tu) = ⟨λ′(tu), u⟩ = [ ⟨H′(tu), u⟩ − λ(tu)⟨I′(tu), u⟩ ][ I(tu) + C′(λ(tu)) ]^{−1}.

At t = t_u > 0, we have (d/dt)λ(tu)|_{t=t_u} = ⟨λ′(t_u u), u⟩ = 0, or ⟨H′(t_u u), u⟩ = λ(t_u u)⟨I′(t_u u), u⟩, and

    (d²/dt²) λ(tu)|_{t=t_u} = ⟨λ″(t_u u)u, u⟩ = [ ⟨H″(t_u u)u, u⟩ − λ(t_u u)⟨I″(t_u u)u, u⟩ ][ I(t_u u) + C′(λ(t_u u)) ]^{−1}.

Note that for any p > 1,

    t_u ⟨H″(t_u u)u, u⟩ = t_u ∫_Ω [ |∇u(x)|² − p t_u^{p−1}|u(x)|^{p+1} ] dx < ∫_Ω [ t_u|∇u(x)|² − t_u^p|u(x)|^{p+1} ] dx
    = ⟨H′(t_u u), u⟩ = λ(t_u u)⟨I′(t_u u), u⟩ = λ(t_u u) ∫_Ω t_u u²(x) dx = t_u λ(t_u u)⟨I″(t_u u)u, u⟩.

Since I(t_u u) + C′(λ(t_u u)) > 0, we conclude that

(3.8)    (d²/dt²) λ(tu)|_{t=t_u} < 0,

which implies that t_u > 0 is unique. Also, in either Case (1) or Case (2), by the implicit function theorem, the peak selection p, when L = {0}, is unique and differentiable, or the Nehari manifold N is differentiable. Since when L = {0}, p(u) = t_u u, we have ∥p(u)∥_H = t_u > t_0 > 0 ∀u ∈ H with ∥u∥_H = 1, or dist(0, N) > t_0.
We have verified the mountain pass structure of λ(u) for applying LMM. The steepest descent direction d = ∇λ at u in Step 3 of LMM is solved in H_0^1(Ω) from

    −∆d(x) = [ H′(u) − λ(u)I′(u) ] / I(u) = [ −∆u(x) − |u(x)|^{p−1}u(x) − λ(u)u(x) ] / ( ½∫_Ω u²(x)dx ).

This is where a numerical linear elliptic solver can be applied, e.g., a finite difference method (FDM), a finite element method (FEM) or a finite boundary element method (FBEM). We use a Matlab subroutine “assempde”, a finite element method, in our numerical computation.
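The original computations call MATLAB's assempde FEM routine. Purely as an illustration of this step (not the authors' code), the sketch below computes the descent direction on Ω = (−0.5, 0.5)² with a five-point finite-difference Laplacian and a zero Dirichlet B.C.; the grid size, the value C = 10 from Case (1), and the sample initial iterate are our own choices.

```python
import numpy as np
from scipy.sparse import kron, identity, diags
from scipy.sparse.linalg import spsolve

def dirichlet_laplacian(n, h):
    """Five-point -Laplacian on an n x n interior grid with zero Dirichlet B.C."""
    T = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)) / h ** 2
    return (kron(identity(n), T) + kron(T, identity(n))).tocsc()

def descent_direction(u, A, h, p=3, C=10.0):
    """Solve -Lap d = [-Lap u - |u|^(p-1) u - lam(u) u] / I(u), cf. Section 3.1."""
    I_u = 0.5 * np.sum(u ** 2) * h ** 2                       # I(u) = 0.5 * int u^2
    H_u = 0.5 * u @ (A @ u) * h ** 2 - np.sum(np.abs(u) ** (p + 1)) * h ** 2 / (p + 1)
    lam = (H_u - C) / I_u                                     # Case (1): C(lam) = C
    rhs = (A @ u - np.abs(u) ** (p - 1) * u - lam * u) / I_u
    return spsolve(A, rhs), lam

n = 49; h = 1.0 / (n + 1)                                     # interior grid of (-0.5, 0.5)^2
xy = np.linspace(-0.5 + h, 0.5 - h, n)
X, Y = np.meshgrid(xy, xy)
A = dirichlet_laplacian(n, h)
u0 = (5.0 * np.cos(np.pi * X) * np.cos(np.pi * Y)).ravel()    # hypothetical initial iterate
d, lam = descent_direction(u0, A, h)
print("lambda(u0) =", lam, " max|d| =", np.abs(d).max())
```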
3.1.1 Numerical Results
Let Ω = (−0.5, 0.5)², p = 3, ε = 10^{−5}, H(u) = ∫_Ω [ ½|∇u(x)|² − ¼u⁴(x) ] dx, I(u) = ∫_Ω ½u²(x) dx and J(u) = H(u) − λI(u) = C(λ). We carry out numerical computations for Case (1): C(λ) = C = 10, 20 and Case (2): C(λ) = Cλ with C = 1, 2. An initial guess u_0 is solved from −∆u + u = f(x) on Ω and u(x) = 0 on ∂Ω, where f(x) = −1, 1 or 0 if one wants u_0(x) to be concave up, concave down or flat at x ∈ Ω. The support is selected as L(u_1) = {0} and L(u_i) = [u_1, ..., u_{i−1}]. All numerical computations went through smoothly. For easy comparison, we put all λ- and ∥u∥-values in Table 1. Due to the symmetry of the problem, for each eigenfunction we present only a representative of its equivalence class in the figures. In order to see the profile and contours of a numerical solution clearly in one figure, we have shifted the profile vertically. Since the first nine eigenfunctions of the four cases appear in exactly the same pattern order and are similar, we present only the figures for C(λ) = 10 and C(λ) = λ and omit the other two.
Table 1: Eigenvalues λ_k and norms ∥u_k∥ of the first nine eigensolutions.

        Case (1): C(λ) = C                          Case (2): C(λ) = Cλ
        C = 10              C = 20                  C = 1               C = 2
 k      λ_k      ∥u_k∥      λ_k       ∥u_k∥         λ_k      ∥u_k∥      λ_k      ∥u_k∥
 1      9.9594   9.1264      5.6939  10.7764        9.9731   9.1205     7.5888  10.0862
 2     38.9635  13.8164     34.5421  16.3379       30.8403  18.1271    25.3741  20.4135
 3     39.7149  14.3249     35.5935  16.9138       31.8415  18.8809    26.4798  21.2664
 4     69.4852  18.2481     65.4870  21.6431       56.1634  27.8533    48.7244  31.8310
 5     88.4921  19.5903     84.1640  23.2262       70.9239  31.5970    61.7317  36.0982
 6     88.5404  19.6912     84.2720  23.4078       71.3244  32.1891    62.4561  37.0861
 7     89.1777  20.1890     85.0800  23.8356       72.0392  32.0738    62.7393  36.3035
 8    118.2005  22.3240    113.8775  26.4678       95.9733  38.6666    84.7947  44.1722
 9    118.9864  23.2559    115.0084  27.6010       98.2867  40.7662    87.8495  46.9151
3.2 Focusing ESP with a zero Neumann B.C.
Since ESP (1.1) is also related to systems in chemotaxis and other chemical or biological diffusion processes [6,10,13,14] where a Neumann B.C. is prescribed, in this section we solve

(3.9)    −∆u(x) − β|u(x)|^{p−1}u(x) = λu(x),    x ∈ Ω,    u ∈ H = H¹(Ω),

satisfying a zero Neumann B.C., where β > 0 is a parameter. It is clear that if (u, λ) is an eigensolution, then so is (−u, λ). Due to the application background, we are interested mainly in the positive eigenfunctions, although mathematically there are sign-changing eigenfunctions, which may interfere with our efforts in computing the positive ones. The variational functional becomes

(3.10)    J(u) = H(u) − λ(u)I(u) = C,    where    H(u) = ½∥∇u∥²_{L²} − (β/(p+1))∥u∥^{p+1}_{L^{p+1}}    and    I(u) = ½∥u∥²_{L²}.
Figure 2: Case 1, C(λ) = 10. Profiles and contours of eigenfunctions u_1, ..., u_9.
Theorem 6. For ESP (3.9), in addition to the results in Theorem 4, it holds that
(a) all one-sign eigenfunctions have λ < 0;
(b) {∥u_k∥_H} is bounded for all eigenfunctions u_k with λ < 0.

Proof. Integrating (3.9) for a one-sign eigenfunction u satisfying the zero Neumann B.C. leads to

    −β ∫_Ω |u(x)|^{p−1}u(x) dx = λ ∫_Ω u(x) dx,

which gives (a): λ < 0 for all one-sign eigenfunctions. But a sign-changing eigenfunction u may still have a positive eigenvalue. Since we need C(λ) > 0 and C′(λ) ≥ 0 in our setting, we consider only the case C(λ) = C > 0. Multiplying (3.9) by u, integrating and comparing with (3.10), we obtain

(3.11)    ( ½ − 1/(p+1) ) β ∥u∥^{p+1}_{L^{p+1}} = C

for all eigenfunctions u, i.e., ∥u∥_{L^{p+1}} = C_p > 0 for some constant C_p > 0. Then ∥u∥_{L²} < C_h for some constant C_h > 0 by the Hölder inequality. Consequently, from (3.9) we have

    ∥∇u∥²_{L²} − λ∥u∥²_{L²} = (2(p+1)/(p−1)) C,

which implies that, for all eigenfunctions u with λ < 0, ∥∇u∥_{L²} and ∥u∥_H are bounded.
Figure 3: Case 2, C(λ) = λ. Profiles and contours of eigenfunctions u_1, ..., u_9.
To check the mountain pass structure for λ(u), we note that the proof of the first part of Theorem 5 utilizes the Sobolev inequality, which is valid for functions in H with a zero Dirichlet B.C. but not for functions in H with a zero Neumann B.C. So we have to use other properties instead. In view of the equality (3.11), without loss of generality, for each fixed C > 0 we may take M_C ≫ C and consider only the closed set

    U = { u ∈ H = H¹(Ω) : ∥u∥^{p+1}_{L^{p+1}} ≤ M_C }.
Theorem 7. Let

    λ(tu) = (H(tu) − C)/I(tu) = [ ∥∇u∥²_{L²} − (2t^{p−1}/(p+1))∥u∥^{p+1}_{L^{p+1}} − 2Ct^{−2} ] / ∥u∥²_{L²}

for each u ∈ U with ∥u∥_H = 1 and t > 0. There is t_0 > 0 s.t. for all u ∈ U with ∥u∥_H = 1, there exists a unique t_u > t_0 satisfying (dλ(tu)/dt)|_{t=t_u} = 0. Furthermore, the peak selection p is unique and differentiable when L = {0} and satisfies ∥p(u)∥ > t_0.
Proof. We have

    (d/dt) λ(tu) = (1/∥u∥²_{L²}) [ −(2(p−1)/(p+1)) t^{p−2} ∥u∥^{p+1}_{L^{p+1}} + 4Ct^{−3} ] ≥ (1/∥u∥²_{L²}) [ −(2(p−1)/(p+1)) t^{p+1} M_C + 4C ] t^{−3},

since u ∈ U, i.e., ∥u∥^{p+1}_{L^{p+1}} ≤ M_C. Thus there is t_0 > 0 s.t. when 0 < t < t_0, we have (d/dt)λ(tu) > 0, so t_u > t_0 ∀u ∈ U, ∥u∥_H = 1. Note that the proof of the inequality (3.8) in Theorem 5 does not involve the Sobolev inequality, so it is still valid in the current situation and the rest of the theorem follows.
It is known that solutions to Neumann boundary value problems exhibit drastically different behavior from their Dirichlet counterparts. The numerical computation in this subsection becomes even more complicated due to the existence of a positive constant eigenfunction u_C. Many of our numerical experiments suggest that when the value of C varies, MI(u_C) changes, resulting in possible bifurcations from u_C to many positive eigenfunctions or multiple branches of eigenfunctions. Such a situation causes tremendous difficulty in setting up the support L in LMM. In order to obtain successful numerical results, we must do more analysis and gain a better understanding of this situation. The following analysis displays a significant advantage of our approach over others in the literature. Let 0 = µ_1 < µ_2 ≤ µ_3 ≤ · · · be the eigenvalues of the linear ESP

(3.12)    −∆u(x) = µu(x), x ∈ Ω,    and    ∂u(x)/∂n = 0, x ∈ ∂Ω.
Theorem 8. Let C be C² with C′(λ) > 0 and let λ(u) be the function implicitly defined from H(u) − λ(u)I(u) = C(λ(u)), where H(u) and I(u) are given in (3.10). Let u_C be a positive constant solution to

    λ′(u) ≡ [ H′(u) − λ(u)I′(u) ] / [ I(u) + C′(λ(u)) ] = 0.

For k = 1, 2, 3, ...,
(a) if µ_k < (p − 1)βu_C^{p−1} < µ_{k+1}, then u_C is nondegenerate with MI(u_C) = k;
(b) if µ_k < (p − 1)βu_C^{p−1} = µ_{k+1} = · · · = µ_{k+r_k} < µ_{k+1+r_k}, then u_C is degenerate with MI(u_C) = k and nullity(u_C) = r_k ≥ 1, and u_C bifurcates to new positive solution(s).
Furthermore, if C(λ) = C > 0, then

(3.13)    u_C = [ 2C(p+1) / (|Ω|β(p−1)) ]^{1/(p+1)},    λ(u_C) = −β [ 2C(p+1) / (|Ω|β(p−1)) ]^{(p−1)/(p+1)},

and u_C is monotonically increasing in C. So C is a bifurcation parameter and there is a (respectively no) bifurcation taking place for u_C under condition (b) (respectively (a)).
Proof. See Appendix B.
Remark 3. From Theorem 8, when uC bifurcates to a positive eigenfunction u∗ , we have
MI(uC ) > MI(u∗ ) and λ(uC ) > λ(u∗ ).
So we conclude that in computing u∗ , we should not put uC in the support L.
Using (3.13), the term in (a) and (b) of Theorem 8 becomes

    (p − 1)βu_C^{p−1} = β^{2/(p+1)} (p − 1) [ 2C(p+1) / (|Ω|(p−1)) ]^{(p−1)/(p+1)}.

Thus β can also be viewed as a bifurcation parameter, i.e., when β increases, u_C bifurcates to positive eigenfunctions. However, in this paper we fix β as a constant and only let C vary.
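Theorem 8 reduces the bifurcation check to simple arithmetic. The sketch below (our addition, not from the paper) evaluates u_C and λ(u_C) from (3.13), lists the Neumann eigenvalues µ_(n,m) = (nπ)² + (mπ)² of (3.12) on the unit square, and locates (p − 1)βu_C^{p−1} among them; with β = 5, p = 3 and |Ω| = 1 it reproduces the predictions quoted in Section 3.2.1 (e.g. for C = 0.25 the critical term 10u_C² ≈ 4.47 lies between µ_1 = 0 and µ_2 ≈ 9.87, so no bifurcation occurs).

```python
import numpy as np

def bifurcation_check(C, beta=5.0, p=3, area=1.0, nmax=6):
    """Evaluate (3.13) and locate (p-1)*beta*u_C^(p-1) among the eigenvalues
    mu_(n,m) = (n*pi)^2 + (m*pi)^2 of the linear Neumann ESP (3.12)."""
    base = 2.0 * C * (p + 1) / (area * beta * (p - 1))
    u_C = base ** (1.0 / (p + 1))
    lam_C = -beta * base ** ((p - 1.0) / (p + 1))
    crit = (p - 1) * beta * u_C ** (p - 1)
    mus = sorted({(n * np.pi) ** 2 + (m * np.pi) ** 2
                  for n in range(nmax) for m in range(nmax)})
    k = sum(mu < crit for mu in mus)             # number of distinct mu's below crit
    degenerate = any(abs(mu - crit) < 1e-8 for mu in mus)
    return u_C, lam_C, crit, mus[k - 1], mus[k], degenerate

for C in (0.25, 1.25, 7.0, 20.0, 25.0, 125.0):   # the C-values used in Section 3.2.1
    u_C, lam_C, crit, mu_lo, mu_hi, deg = bifurcation_check(C)
    print(f"C={C:6.2f}: u_C={u_C:.4f}, lam(u_C)={lam_C:.4f}, "
          f"crit={crit:.4f} in ({mu_lo:.4f}, {mu_hi:.4f}), degenerate={deg}")
```

For instance, for C = 125 the critical term equals 100 and lies between 10π² ≈ 98.70 and 13π² ≈ 128.30, matching the discussion of Fig. 9 below.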
3.2.1 Numerical Results
In (3.9), we choose Ω = (−0.5, 0.5)², β = 5, p = 3, ε = 10^{−5}. So the eigenvalues of the linear ESP (3.12) are given by the formula µ_(n,m) = (nπ)² + (mπ)² for n, m = 0, 1, 2, 3, ..., or, in sequential order, µ_1 = µ_(0,0) = 0, µ_2 = µ_(0,1) = µ_(1,0) = 9.8696, µ_3 = µ_(1,1) = 19.7392, µ_4 = µ_(0,2) = µ_(2,0) = 39.4784, µ_5 = µ_(1,2) = µ_(2,1) = 49.3480, ..., µ_8 = µ_(1,3) = µ_(3,1) = 98.6960, µ_9 = µ_(2,3) = µ_(3,2) = 128.3049. In our numerical computation we deliberately choose C-values and check the inequality µ_k < (p − 1)βu_C^{p−1} = 10u_C² < µ_{k+1} for some k = 1, 2, ..., so that each example represents a typical case and we can apply the bifurcation result, Theorem 8, to predict the existence of certain positive eigenfunctions and to decide the support L in LMM to find them. Since all other positive eigenfunctions bifurcate from u_C, we have λ(u) < λ(u_C) for all positive eigenfunctions u. So when λ(u_C) − λ(u) is very small, it indicates that there is no other positive eigenfunction in between. The steepest descent direction d = ∇λ at a given u in Step 3 of LMM is solved from

    −∆d(x) + d(x) = [ H′(u) − λ(u)I′(u) ] / I(u) = [ −∆u(x) − β|u(x)|^{p−1}u(x) − λ(u)u(x) ] / ( ½∫_Ω u²(x)dx )

in H¹(Ω) satisfying a zero Neumann B.C. In our numerical computation this linear elliptic equation is solved by calling a Matlab subroutine “assempde”, a finite element method.
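Again the paper uses the assempde FEM routine; the sketch below is only an illustrative alternative (our own, with all names and grid choices hypothetical): it assembles −∆ + I on a uniform cell-centered grid with a zero Neumann B.C. and solves for the descent direction given an already-computed right-hand side.

```python
import numpy as np
from scipy.sparse import kron, identity, diags, eye
from scipy.sparse.linalg import spsolve

def neumann_operator(n, h):
    """-Laplacian + I on an n x n cell-centered grid with zero Neumann B.C.
    (boundary rows carry weight 1 instead of 2, so -Lap annihilates constants)."""
    main = 2.0 * np.ones(n)
    main[0] = main[-1] = 1.0
    T = diags([-np.ones(n - 1), main, -np.ones(n - 1)], [-1, 0, 1]) / h ** 2
    lap = kron(identity(n), T) + kron(T, identity(n))
    return (lap + eye(n * n)).tocsc()

def neumann_descent(rhs, n, h):
    """Solve (-Lap + I) d = rhs with a zero Neumann B.C."""
    return spsolve(neumann_operator(n, h), rhs)

# smoke test: for a constant right-hand side the solution is the same constant
n = 50; h = 1.0 / n
d = neumann_descent(np.ones(n * n), n, h)
print(np.allclose(d, 1.0))
```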
For the problem (3.1) with a zero Dirichlet B.C., the peaks of an eigenfunction are always located inside Ω. However, for the problem (3.9), one can see that most eigenfunctions have their peaks located on the boundary of Ω, but occasionally some eigenfunctions may have their peaks located inside Ω.
Note that the problem (3.9) possesses symmetries under rotations by π/2, π, 3π/2, 2π and reflections about the lines x = 0, y = 0, y = x, y = −x. In the following figures, we show only one eigenfunction representing its equivalence class, e.g., when we set L = {u_1} we may actually use an eigenfunction in the equivalence class of u_1. To compute sign-changing eigenfunctions of (3.9), we use their corresponding eigenfunctions of −∆ as initial guesses. Since there is no eigenfunction of −∆ corresponding to a nontrivial positive eigenfunction of (3.9), we first guess an initial u_0 with a certain peak location by setting a suitable f(x) = −1, 1 or 0 if one wants u_0(x) to be concave up, concave down or flat at x ∈ Ω, and then solve u_0 from −∆u + u = f(x) on Ω with a zero Neumann B.C. We denote this process by IPL (initial peak locating).
In Fig. 4, we set C = 0.25, L(u_1) = {0}, L(u_2) = {u_1}, L(u_3) = {u_1}, L(u_4) = {u_1, u_2}, L(u_5) = {u_1, u_2, u_4}, L(u_6) = {u_1, u_2, u_4, u_5}, L(u_7) = {u_1, u_2, u_4, u_5, u_6} and u_0^1 = √5, u_0^2 = (π/2)(sin(x_1π) + sin(x_2π)), u_0^3 = π sin(x_1π), u_0^4 = π² sin(x_1π) sin(x_2π), u_0^5 = 2π cos(2x_1π), u_0^6 = 2π(cos(2x_1π) + cos(2x_2π)), u_0^7 = −4π cos(4x_1π). Since u_C = 0.6687 and 0 = µ_1 < 10u_C² = 4.4715 < µ_2 = 9.8696, no bifurcation takes place. Thus u_C is the only positive eigenfunction and sign-changing eigenfunctions can be computed smoothly.
In Fig. 5, we set C = 1.25, L(u_1) = {0}, L(u_2) = {u_1}, L(u_3) = {0}, L(u_4) = {u_3} and u_0^i = IPL, (π/2)(sin(x_1π) + sin(x_2π)), π sin(x_1π), π² sin(x_1π) sin(x_2π). Since u_C = 1 and 9.8696 = µ_2 < 10u_C² = 10 < µ_3 = 19.7392, u_C bifurcates to u_1 and its rotations by π/2, π, 3π/2. Also λ(u_C) − λ(u_1) is very small, so no other positive eigenfunction is in between.
In Fig. 6, we set C = 7, L(u_1) = L(u_2) = {0}, L(u_3) = {u_1} and u_0^i = IPL. Since u_C = 1.5382 and 19.7392 = µ_3 < 10u_C² = 23.6605 < µ_4 = 39.4784, we find three nontrivial positive eigenfunctions.
In Fig. 7, we set C = 20, L(u_1) = L(u_3) = {0}, L(u_2) = {u_1}, L(u_4) = {u_3} and u_0^i = IPL, (π/2)(sin(x_1π) + sin(x_2π)), 2, π sin(x_1π). Since u_C = 2 and 39.4784 = µ_4 < 10u_C² = 40 < µ_5 = 49.3480, positive and sign-changing eigenfunctions appear mixed in the sequential order. The order becomes much more complicated.
Next we further increase the C-value. Since there are too many sign-changing eigenfunctions in between the positive ones, it is too difficult to follow the whole order to find eigenfunctions. However, we are still able to know the order of the positive eigenfunctions by our bifurcation theorem and their symmetries. So we focus on finding positive eigenfunctions. In order to reduce the dimension of L, we use the symmetry of the problem and apply a Haar projection (HP) in LMM; see [17] for more details. Meanwhile, when C is larger, the peak(s) of an eigenfunction become sharper and narrower. A uniform finite element mesh may lose accuracy. Thus local mesh refinements are used in our numerical computation when necessary. In order to see the profiles and their contours more clearly, the figures presented below are regenerated on a uniform coarse mesh and shifted downward.
In Fig. 8, we set C = 25, L(u_i) = {0}, i = 1, 2, 5, 6, L(u_3) = {u_1}, L(u_6) = {u_2} and all u_0^i = IPL. We have used HP to compute u_6. Since u_C = 2.1147 and 39.4784 = µ_4 < 10u_C² = 44.7212 < µ_5 = 49.3480, more positive eigenfunctions appear due to bifurcations from u_C.
In Fig. 9, we set C = 125, L(u_i) = {0}, i = 1, 2, 4, 7, 8, 9, 10, L(u_3) = L(u_5) = L(u_6) = {u_1} and u_0^i = IPL. Since u_C = 3.1623 and 98.6960 = µ_8 < 10u_C² = 100 < µ_9 = 128.3049, more eigenvalues µ_k are passed. Thus even more positive eigenfunctions appear. We have used HP in LMM with L = {0} to compute u_2, u_6, u_7, u_8, u_{10}. However, since u_1 and u_6 have exactly the same symmetry, when we use LMM with HP we still have to set L = {u_1}. The convergence of this numerical solution is slow compared to the other solutions. We obtained the numerical eigenfunction u_6 shown in the figure with ε < 10^{−2}. It is interesting to note that u_5 is totally asymmetric even though the problem is symmetric, and its equivalence class consists of eight eigenfunctions. Also, when we compare the λ-values and the pattern order of u_2, u_3 in Fig. 8 with u_3, u_4 in Fig. 9, and of u_3, u_4, u_5 in Fig. 8 with u_7, u_8, u_9 in Fig. 9, we see that their λ-values are very close but the pattern orders are changed. This is due to the fact that they are actually on different critical point branches of λ, so we cannot differentiate them in the order of λ-values. From Figs. 8 and 9, we also see that {∥u_k∥} is bounded for all positive eigenfunctions u_k, which confirms the analysis in Theorem 6.
Figure 4: C = 0.25, u_C = 0.6687, λ(u_C) = −2.2361. Profiles and contours of eigenfunctions u_1, ..., u_7 with λ(u_i) = −2.2361, 6.4473, 7.1233, 16.3673, 36.1560, 36.7853, 155.9402 and ∥u_i∥_H = 0.9457, 1.8468, 2.0748, 2.5229, 3.5020, 3.8904, 7.6583.
Figure 5: C = 1.25, u_C = 1, λ(u_C) = −5. Profiles and contours of eigenfunctions u_1, ..., u_4 with λ(u_i) = −5.0308, 1.9666, 3.6925, 12.0753 and ∥u_i∥_H = 1.6135, 2.7125, 3.0923, 3.7324.
Figure 6: C = 7, u_C = 1.5382, λ(u_C) = −11.8322. Profiles and contours of positive eigenfunctions u_1, u_2, u_3 with λ(u_i) = −23.8764, −14.0810, −12.8484 and ∥u_i∥_H = 3.8876, 3.1096, 3.6571.
Figure 7: C = 20, u_C = 2, λ(u_C) = −20. Profiles and contours of eigenfunctions u_1, u_2, u_C, u_4 with λ(u_i) = −67.8699, −33.9949, −20, −15.6955 and ∥u_i∥_H = 6.3951, 6.5205, 6.2655, 2.8284.
Figure 8: C = 25, u_C = 2.1147, λ(u_C) = −22.3606. Profiles and contours of positive eigenfunctions u_1, ..., u_6 with λ(u_i) = −84.6860, −42.7914, −42.5634, −23.4976, −23.4976, −23.4976 and ∥u_i∥_H = 7.1234, 7.1477, 7.2089, 6.2892, 6.2893, 6.2893.
Figure 9: C = 125, u_C = 3.1623, λ(u_C) = −50. Profiles and contours of positive eigenfunctions u_1, ..., u_{10} with λ(u_i) = −423.0702, −212.4282, −212.4279, −209.0124, −143.5454, −142.9573, −106.5328, −105.6491, −105.6487, −56.1127 and ∥u_i∥_H = 15.7391, 15.8416, 15.8416, 15.7213, 15.7097, 15.7737, 15.9291, 15.8651, 15.8651, 14.6269.
Final Remark: By using the implicit function theorem and LMM, a new implicit minimax method is developed to find eigensolutions of nonlinear eigensolution problems in the order of their eigenvalues. We have verified the mountain pass structure and the PS_N-condition of the variational functional λ(u) for applying LMM and established its convergence. The new implicit approach also enables us to establish some interesting properties, such as wave intensity preserving/control, bifurcation identification, etc., which show its significant advantages over the usual approaches in the literature. Numerical examples for the focusing case are carried out and confirm the new approach. The implicit approach also works for defocusing cases; however, LMM needs to be modified, and our current research projects show that this can be done in several different ways. Note that the theorems proved in this paper can actually be verified with a more general H(u) in other eigensolution problems. However, since the main objective of this paper is the computational method and theory for solving nonlinear eigensolution problems, not existence issues, we choose to stay with a relatively simple H(u) for a clearer presentation of our new ideas. Cases with other choices of C(λ) can also be explored by our approach. Although a Newton method can be used to speed up local convergence in the above numerical computations, in order to avoid missing variational information (e.g., MI, order, etc.) on the numerical solutions, it should only be applied after ε < 10^{−2}. Since we want to see the limit of our numerical method, we did not use a Newton method at all in computing the above numerical results.
By our approach presented in this paper, we understand that there are many different
ways to satisfy the mass conservation law. We may consider which way is more physically
meaningful. If there are several physically equally meaningful ways, then we may explore their
individual features and advantages for different application purposes.
Appendix A: Verification of Weaker PSN Condition
For λ(u) defined in Section 3, we verify a weaker PS condition (PS_N), which is crucial for proving the existence of (infinitely many) eigenfunctions and also for the convergence of LMM.
Note that λ(u) may have a singular point. Various PS conditions have been proposed in the literature to prove existence, but they fail to handle such a singularity and are not designed for computational purposes. According to LMM, all computations are carried out only on the Nehari manifold N, see (2.14), where a nice property holds: ⟨λ′(u), u⟩ = 0 for all u ∈ N and dist(N, 0) > t_0 > 0 for some t_0 > 0. So we can restrict our analysis to N and utilize this property to simplify the analysis. This observation motivates us to introduce a new definition.
Definition 2. A C 1 -functional J is said to satisfy PSN condition, if any sequence {uk } ⊂
N = {u ∈ H : u ̸= 0, ⟨J ′ (u), u⟩ = 0} s.t. {J(uk )} is bounded and J ′ (uk ) → 0 has a convergent
subsequence.
It is clear that PS condition implies PSN condition.
Theorem 9. λ(u) defined with C(λ) = C or C(λ) = Cλ in Section 3 satisfies PSN condition.
Proof. Let {u_k} ⊂ N s.t. {λ(u_k)} is bounded and λ′(u_k) → 0. Since C(λ) = C or Cλ, {C(λ(u_k))} and {C′(λ(u_k))} are bounded. Note

(A.1)    H(u) = ½∥∇u∥²_{L²} − (1/(p+1))∥u∥^{p+1}_{L^{p+1}},    I(u) = ½∥u∥²_{L²}.

Thus ⟨I′(u_k), u_k⟩ = 2I(u_k) and ⟨H′(u_k), u_k⟩ − 2H(u_k) = ((1−p)/(p+1))∥u_k∥^{p+1}_{L^{p+1}}. Then u_k ∈ N implies

    0 = ⟨λ′(u_k), u_k⟩ = [I(u_k) + C′(λ(u_k))]^{−1}[⟨H′(u_k), u_k⟩ − λ(u_k)⟨I′(u_k), u_k⟩]

or

    0 = ⟨H′(u_k), u_k⟩ − λ(u_k)⟨I′(u_k), u_k⟩ = ⟨H′(u_k), u_k⟩ − 2H(u_k) + 2H(u_k) − 2λ(u_k)I(u_k) = ((1−p)/(p+1))∥u_k∥^{p+1}_{L^{p+1}} + 2C(λ(u_k)).

When {C(λ(u_k))} is bounded, so is {∥u_k∥^{p+1}_{L^{p+1}}}. By the Hölder inequality {I(u_k)} is bounded. Consequently {I(u_k) + C′(λ(u_k))} is bounded. Thus

    λ′(u_k) ≡ [H′(u_k) − λ(u_k)I′(u_k)] / [I(u_k) + C′(λ(u_k))] → 0  ⇒  H′(u_k) − λ(u_k)I′(u_k) → 0.

From H(u_k) − λ(u_k)I(u_k) = C(λ(u_k)) and (A.1), we see that {∥∇u_k∥²_{L²}} is bounded, i.e., {u_k} is bounded in H = H¹. Next we follow the approach in the proof of Lemma 1.20 in [20]. There is a subsequence, denoted by {u_k} again, and u ∈ H s.t. u_k ⇀ u (weakly) in H. By the Rellich theorem, u_k → u in L² and L^{p+1}. Then

    ∥u_k − u∥²_H = ∫_Ω [ |∇u_k(x) − ∇u(x)|² + |u_k(x) − u(x)|² ] dx
                 = ⟨H′(u_k) − λ(u_k)I′(u_k) − H′(u) + λ(u)I′(u), u_k − u⟩ + ⟨λ(u_k)I′(u_k) − λ(u)I′(u), u_k − u⟩ + ⟨u_k^p − u^p, u_k − u⟩ + ∥u_k − u∥²_{L²} → 0,

where ∥u_k − u∥²_{L²} → 0 is clear; the first term

    ⟨H′(u_k) − λ(u_k)I′(u_k) − H′(u) + λ(u)I′(u), u_k − u⟩ → 0

because H′(u_k) − λ(u_k)I′(u_k) → 0 in H and u_k ⇀ u; the second term

    |⟨λ(u_k)I′(u_k) − λ(u)I′(u), u_k − u⟩| = | ∫_Ω [λ(u_k)u_k(x) − λ(u)u(x)][u_k(x) − u(x)] dx | ≤ ∥λ(u_k)u_k − λ(u)u∥_{L²} ∥u_k − u∥_{L²} → 0

by the Cauchy-Schwarz inequality, the boundedness of λ(u_k) and u_k → u in L²; and finally |⟨u_k^p − u^p, u_k − u⟩| ≤ ∥u_k − u∥^{p+1}_{L^{p+1}} → 0 by u_k → u in L^{p+1}.
So Theorems 2 and 3 still hold when PS condition is replaced by PSN condition.
Appendix B: Proof of Theorem 8: Identification of bifurcation
Proof. We have an expression for the linear operator

    λ″(u) = 1/(I(u) + C′(λ(u)))² [ (H″(u) − λ′(u)I′(u) − λ(u)I″(u))(I(u) + C′(λ(u))) − (I′(u) + C″(λ(u))λ′(u))(H′(u) − λ(u)I′(u)) ].

At each u s.t. λ′(u) = 0, or H′(u) − λ(u)I′(u) = 0, we have

    λ″(u) = [ H″(u) − λ(u)I″(u) ] / [ I(u) + C′(λ(u)) ].
Taking H(u) = ∫_Ω ( ½|∇u(x)|² − (β/(p+1))|u(x)|^{p+1} ) dx and I(u) = ∫_Ω ½u²(x) dx into account, we have

    H′(u) = −∆u − β|u|^{p−1}u,    I′(u) = u,    H″(u)w = −∆w − pβ|u|^{p−1}w,    I″(u)w = w,

and then

(B.1)    λ′(u_C) = 0 ⇔ H′(u_C) − λ(u_C)I′(u_C) = 0 ⇔ −βu_C^p − λ(u_C)u_C = 0 ⇔ λ(u_C) = −βu_C^{p−1}.

Note that u_C > 0 can be solved from

(B.2)    H(u_C) − λ(u_C)I(u_C) = C(λ(u_C))    or    −(β/(p+1))u_C^{p+1}|Ω| + u_C^{p−1}·(β/2)u_C²|Ω| = C(−βu_C^{p−1}).
Let η be an eigenvalue of the linear operator λ″(u_C) and w an associated eigenfunction, i.e.,

    λ″(u_C)w = [ −∆w − pβu_C^{p−1}w + βu_C^{p−1}w ] / [ ½u_C²|Ω| + C′(−βu_C^{p−1}) ] = ηw.

It leads to

    −∆w = [ (p − 1)βu_C^{p−1} + η( ½u_C²|Ω| + C′(−βu_C^{p−1}) ) ] w =: µw.

Then µ = (p − 1)βu_C^{p−1} + η( ½u_C²|Ω| + C′(−βu_C^{p−1}) ) is an eigenvalue of −∆ and w is its associated eigenfunction. So λ″(u_C) and −∆ share exactly the same eigenfunctions. We have

    η = [ µ − (p − 1)βu_C^{p−1} ] / [ ½u_C²|Ω| + C′(−βu_C^{p−1}) ].
It indicates that η and µ have the same multiplicity. Let η_1 < η_2 ≤ η_3 ≤ · · · be all the eigenvalues of λ″(u_C). We obtain that for k = 1, 2, · · ·, (a) if

(B.3)    µ_k < (p − 1)βu_C^{p−1} < µ_{k+1},

then η_i < 0, i = 1, 2, ..., k and η_j > 0, j = k + 1, k + 2, ..., thus u_C is nondegenerate with MI(u_C) = k; and (b) if

(B.4)    µ_k < (p − 1)βu_C^{p−1} = µ_{k+1} = · · · = µ_{k+r_k} < µ_{k+1+r_k},

then η_i < 0, i = 1, 2, ..., k, η_i = 0, i = k + 1, ..., k + r_k, and η_i > 0, i = k + 1 + r_k, k + 2 + r_k, ..., thus u_C is degenerate with MI(u_C) = k, nullity(u_C) = r_k ≥ 1 and u_C bifurcates to new solution(s) [7].
Note that, by the maximum principle, a one-sign solution whose value and derivative are equal to zero at an interior point of Ω, or whose value and normal derivative are equal to zero at a boundary point of Ω, must be identically zero. Since a sign-changing solution has nodal line(s) (where its values equal zero) inside Ω, when a sequence of sign-changing solutions approaches a one-sign solution u*, there are two possibilities: (1) some nodal lines stay inside Ω, so u* attains zero value and zero derivative at an interior point of Ω, or (2) some nodal lines approach the boundary ∂Ω, so u* attains zero value and zero normal derivative (as a solution) at a boundary point of Ω. In either case, u* has to be identically zero. When

    µ_1 = 0 < (p − 1)βu_C^{p−1} < µ_2,

u_C is nondegenerate, so no bifurcation takes place. On the other hand, since u_C > 0 = µ_1 and at each bifurcation point (p − 1)βu_C^{p−1} = µ_{k+1} ≥ µ_2 > 0, u_C must satisfy

    u_C ≥ [ µ_2 / ((p − 1)β) ]^{1/(p−1)} > 0

and can bifurcate only to positive nontrivial solutions.
When C(λ) = C, the equation in (B.2) becomes

    −(β/(p+1))u_C^{p+1}|Ω| + (β/2)u_C^{p+1}|Ω| = C,

which leads to

(B.5)    u_C = [ 2C(p+1) / (|Ω|β(p−1)) ]^{1/(p+1)}    and    λ(u_C) = −β [ 2C(p+1) / (|Ω|β(p−1)) ]^{(p−1)/(p+1)}

from the last equation in (B.1). It is clear that u_C is monotonically increasing in C. When C increases so that the term (p − 1)βu_C^{p−1} increases and crosses each µ_k, the positive constant solution u_C bifurcates to new positive solution(s).
References
[1] G. P. Agrawal, Nonlinear Fiber Optics, fifth edition, Academic Press, Oxford, UK, 2013.
[2] W. Bao, Y. Cai and H. Wang, Efficient numerical methods for computing ground states
and dynamics of dipolar Bose-Einstein condensates, J. Comp. Physics 229 (2010) 7874-7892.
[3] W. Bao and Q. Du, Computing the ground state solution of Bose-Einstein condensates
by a normalized gradient flow, SIAM J. Sci. Comp. 25(2004) 1674-1697.
[4] T. Bodurov, Derivation of the nonlinear Schrödinger equation from first principle, Annales
de la Fondation Louis de Broglie 30(2005) 1-10.
[5] T.F.C. Chan and H.B. Keller, Arc-Length Continuation and Multi-Grid Techniques for
Nonlinear Elliptic Eigenvalue Problems, SIAM J. Sci. Stat. Comput., 3(1982) 173-194.
[6] M. Chipot (editor), Handbook of differential equations: stationary partial differential
equations, Volume 5, Elsevier B.V., Oxford, UK, 2008.
[7] S. N. Chow and R. Lauterbach, A bifurcation theorem for critical points of variational
problems, Nonlinear Anal., 12(1988) 51-61.
[8] Y. Efendiev, J. Galvis, M. Presho and J. Zhou, A Multiscale Enrichment Procedure for Nonlinear Monotone Operators, M2AN, 48 (2014) 475-491.
[9] R. Glowinski and P.L. Tallec, Augmented Lagrangian and Operator-Splitting Methods in
Nonlinear Mechanics, SIAM, 1989.
[10] C. Gui, Multipeak solutions for a semilinear Neumann problem, Duke Math. J., 84(1996)
739-769.
[11] H. B. Keller, Numerical solution of bifurcation and nonlinear eigenvalue problems, in
Applications of Bifurcation Theory, P. Rabinowitz, ed., Academic Press, New York, 1977,
359-384.
[12] Y. Li and J. Zhou, A minimax method for finding multiple critical points and its applications to semilinear elliptic PDEs, SIAM J. Sci. Comp., 23(2001) 840-865.
[13] C. S. Lin, W. M. Ni, and I. Takagi, Large amplitude stationary solutions to a chemotaxis
system, J. Diff. Eqns., (1988) 1-27.
[14] W. M. Ni and I. Takagi, On the Neumann problem for some semilinear elliptic equations
and systems of activator-inhibitor type, Trans. Amer. Math. Soc., 297(1986) 351-368.
[15] P. Rabinowitz, Minimax Method in Critical Point Theory with Applications to Differential Equations, CBMS Regional Conf. Series in Math., No. 65, AMS, Providence, 1986.
[16] E. W. C. Van Groesen and T. Nijmegen, Continuation of solutions of constrained extremum problems and nonlinear eigenvalue problems, Math. Model., 1(1980) 255-270.
[17] Z.Q. Wang and J. Zhou, An efficient and stable method for computing multiple saddle
points with symmetries, SIAM J. Num. Anal., 43(2005) 891-907.
[18] Z.Q. Wang and J. Zhou, A Local Minimax-Newton Method for Finding Critical Points
with Symmetries, SIAM J. Num. Anal., 42(2004) 1745-1759.
[19] T. Watanabe, K. Nishikawa, Y. Kishimoto and H. Hojo, Numerical method for nonlinear
eigenvalue problems, Physica Scripta., T2/1 (1982) 142-146.
[20] M. Willem, Minimax Theorems, Birkhäuser, Boston, 1996.
[21] Z. Xie, Y. Yuan and J. Zhou, On finding multiple solutions to a singularly perturbed
Neumann problem, SIAM J. Sci. Comp. 34(2012) 395-420.
[22] X. Yao and J. Zhou, A minimax method for finding multiple critical points in Banach
spaces and its application to quasilinear elliptic PDE, SIAM J. Sci. Comp., 26(2005)
1796-1809.
[23] X. Yao and J. Zhou, Numerical methods for computing nonlinear eigenpairs. Part I.
iso-homogenous cases, SIAM J. Sci. Comp., 29 (2007) 1355-1374.
[24] X. Yao and J. Zhou, Numerical methods for computing nonlinear eigenpairs. Part II. non
iso-homogenous cases, SIAM J. Sci. Comp., 30(2008) 937-956.
[25] E. Zeidler, Ljusternik-Schnirelman theory on general level sets, Math. Nachr., 129 (1986)
238-259.
[26] J. Zhou, A local min-orthogonal method for finding multiple saddle points, J. Math. Anal.
Appl., 291(2004) 66-81.
[27] J. Zhou, Instability analysis of saddle points by a local minimax method, Math. Comp.,
74(2005) 1391-1411.
[28] J. Zhou, Global sequence convergence of a local minimax method for finding multiple
solutions in Banach spaces, Num. Func. Anal. Optim., 32(2011) 1365-1380.