Hindawi Publishing Corporation
Abstract and Applied Analysis
Volume 2013, Article ID 752953, 12 pages
http://dx.doi.org/10.1155/2013/752953
Research Article
Stationary in Distributions of Numerical Solutions for Stochastic
Partial Differential Equations with Markovian Switching
Yi Shen¹ and Yan Li¹,²
¹ Department of Control Science and Engineering, Huazhong University of Science and Technology, Wuhan 430074, China
² College of Science, Huazhong Agriculture University, Wuhan 430079, China
Correspondence should be addressed to Yi Shen; yeeshen0912@gmail.com
Received 30 December 2012; Accepted 24 February 2013
Academic Editor: Qi Luo
Copyright © 2013 Y. Shen and Y. Li. This is an open access article distributed under the Creative Commons Attribution License,
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
We investigate a class of stochastic partial differential equations with Markovian switching. By using an Euler-Maruyama scheme in both time and space for the mild solutions, we derive sufficient conditions for the existence and uniqueness of the stationary distributions of the numerical solutions. Finally, an example is given to illustrate the theory.
1. Introduction
The theory of numerical solutions of stochastic partial
differential equations (SPDEs) has been well developed by
many authors [1–5]. In [2], Debussche considered the error
of the Euler scheme for the nonlinear stochastic partial
differential equations by using Malliavin calculus. Gyöngy
and Millet [3] discussed the convergence rate of space time
approximations for stochastic evolution equations. Shardlow
[5] investigated the numerical methods of the mild solutions
for stochastic parabolic PDEs derived by space-time white
noise by applying finite difference approach.
On the other hand, the parameters of SPDEs may
experience abrupt changes caused by phenomena such as
component failures or repairs, changing subsystem interconnections, and abrupt environmental disturbances [6–9], and
the continuous-time Markov chains have been used to model
these parameter jumps. An important class of such equations consists of the
SPDEs with Markovian switching
dX(t) = [AX(t) + f(X(t), r(t))] dt + g(X(t), r(t)) dW(t),  t ≥ 0.  (1)
Here the state vector has two components, X(t) and r(t): the first is normally referred to as the state, while the second is regarded as the mode. In operation, the system switches from one mode to another in a random way, and the switching among the modes is governed by the Markov chain r(t).
Since only a few SPDEs with Markovian switching
have explicit formulae, numerical (approximate) schemes of
SPDEs with Markovian switching are becoming more and
more popular. In this paper, we will study the stationary
distribution of numerical solutions of SPDEs with Markovian
switching. Bao et al. [10] investigated the stability in distribution of mild solutions to SPDEs. Bao and Yuan [11] discussed
the numerical approximation of stationary distribution for
SPDEs. For the stationary distribution of numerical solutions of stochastic differential equations in finite-dimensional
space, Mao et al. [12] utilized the Euler-Maruyama scheme
with variable step size to obtain the stationary distribution
and they also proved that the probability measures induced
by the numerical solutions converge weakly to the stationary
distribution of the true solution. However, the mild solutions
of SPDEs with Markovian switching do not have a stochastic
differential, so the Itô formula cannot be applied to them
directly. Consequently, we generalize
the stationary distribution of numerical solutions of the finite
dimensional stochastic differential equations with Markovian
switching to that of infinite dimensional cases.
Motivated by [11–13], we will show in this paper that the
mild solutions of SPDE with Markovian switching (1) have a
unique stationary distribution for sufficiently small step size.
So this paper is organised as follows: in Section 2, we give
necessary notations and define Euler-Maruyama scheme of
mild solutions. In Section 3, we give some lemmas and the
main result in this paper. Finally, we will give an example to
illustrate the theory in Section 4.
2. Statements of the Problem

Throughout this paper, unless otherwise specified, we let (Ω, F, {F_t}_{t≥0}, P) be a complete probability space with a filtration {F_t}_{t≥0} satisfying the usual conditions (i.e., it is increasing and right continuous, while F_0 contains all P-null sets). Let (H, ⟨·,·⟩_H, ‖·‖_H) be a real separable Hilbert space and W(t) an H-valued cylindrical Brownian motion (Wiener process) defined on the probability space. Let I_G be the indicator function of a set G. Denote by (L(H), ‖·‖) and (L_HS(H), ‖·‖_HS) the families of bounded linear operators and Hilbert-Schmidt operators from H into H, respectively. Let r(t), t ≥ 0, be a right-continuous Markov chain on the probability space taking values in a finite state space S = {1, 2, ..., N} with generator Γ = (γ_ij)_{N×N} given by

P{r(t + δ) = j | r(t) = i} =
  { γ_ij δ + o(δ),      if i ≠ j,
    1 + γ_ij δ + o(δ),  if i = j,  (2)

where δ > 0. Here γ_ij > 0 is the transition rate from i to j if i ≠ j, while

γ_ii = −Σ_{j≠i} γ_ij.  (3)

We assume that the Markov chain r(·) is independent of the Brownian motion W(·). It is well known that almost every sample path of r(·) is a right-continuous step function with a finite number of simple jumps in any finite subinterval of R₊ := [0, +∞).

Consider the SPDE with Markovian switching on H

dX(t) = [AX(t) + f(X(t), r(t))] dt + g(X(t), r(t)) dW(t),  t ≥ 0,  (4)

with initial value X(0) = x ∈ H and r(0) = i ∈ S. Here f : H × S → H and g : H × S → L_HS(H). Throughout the paper, we impose the following assumptions.

(A1) (A, D(A)) is a self-adjoint operator on H generating a C₀-semigroup {e^{At}}_{t≥0} such that ‖e^{At}‖ ≤ e^{−αt} for some α > 0. In this case, −A has discrete spectrum 0 < ρ₁ ≤ ρ₂ ≤ ··· with lim_{i→∞} ρ_i = ∞ and corresponding eigenbasis {e_i}_{i≥1} of H.

(A2) Both f and g are globally Lipschitz continuous. That is, there exists a constant L > 0 such that

‖f(x,j) − f(y,j)‖²_H ∨ ‖g(x,j) − g(y,j)‖²_HS ≤ L‖x − y‖²_H,  ∀x, y ∈ H, j ∈ S.  (5)

(A3) There exist μ > 0 and λ_j > 0 (j = 1, 2, ..., N) such that

2λ_j⟨x − y, f(x,j) − f(y,j)⟩_H + λ_j‖g(x,j) − g(y,j)‖²_HS + Σ_{l=1}^{N} γ_jl λ_l ‖x − y‖²_H ≤ −μ‖x − y‖²_H,  ∀x, y ∈ H, j ∈ S.  (6)
It is well known (see [1, 8]) that under (A1)–(A3), (4) has
a unique mild solution 𝑋(𝑡) on 𝑡 ≥ 0. That is, for any 𝑋(0) =
𝑥 ∈ 𝐻 and 𝑟(0) = 𝑖 ∈ S, there exists a unique 𝐻-valued
adapted process 𝑋(𝑡) such that
X(t) = e^{tA}x + ∫₀ᵗ e^{(t−s)A} f(X(s), r(s)) ds + ∫₀ᵗ e^{(t−s)A} g(X(s), r(s)) dW(s).  (7)
Moreover, the pair 𝑍(𝑡) = (𝑋(𝑡), 𝑟(𝑡)) is a time-homogeneous
Markov process.
Remark 1. We observe that (A2) implies the following linear
growth conditions:
‖f(x,j)‖²_H ∨ ‖g(x,j)‖²_HS ≤ L̄(1 + ‖x‖²_H),  ∀x ∈ H, j ∈ S,  (8)

where L̄ = 2 max_{j∈S}(L ∨ ‖f(0,j)‖²_H ∨ ‖g(0,j)‖²_HS).
Remark 2. We also establish another property from (A3):
2λ_j⟨x, f(x,j)⟩_H + λ_j‖g(x,j)‖²_HS + Σ_{l=1}^{N} γ_jl λ_l ‖x‖²_H
 ≤ 2λ_j⟨x, f(x,j) − f(0,j)⟩_H + λ_j‖g(x,j) − g(0,j)‖²_HS + Σ_{l=1}^{N} γ_jl λ_l ‖x‖²_H
  + 2λ_j⟨x, f(0,j)⟩_H + 2λ_j⟨g(x,j) − g(0,j), g(0,j)⟩_HS + λ_j‖g(0,j)‖²_HS
 ≤ −μ‖x‖²_H + (μ/4)‖x‖²_H + (4λ_j²/μ)‖f(0,j)‖²_H + (μ/(4L))‖g(x,j) − g(0,j)‖²_HS + (4Lλ_j²/μ)‖g(0,j)‖²_HS + λ_j‖g(0,j)‖²_HS
 ≤ −μ‖x‖²_H + (μ/4)‖x‖²_H + (μ/4)‖x‖²_H + (4λ_j²/μ)‖f(0,j)‖²_H + (4Lλ_j²/μ)‖g(0,j)‖²_HS + λ_j‖g(0,j)‖²_HS
 ≤ −(μ/2)‖x‖²_H + α₁,  ∀x ∈ H, j ∈ S,  (9)

where α₁ := max_{j∈S}[(4λ_j²/μ)‖f(0,j)‖²_H + (4Lλ_j²/μ)‖g(0,j)‖²_HS + λ_j‖g(0,j)‖²_HS] and ⟨T, S⟩_HS := Σ_{i=1}^{∞} ⟨Te_i, Se_i⟩_H for S, T ∈ L_HS(H).

Denote by Z^{x,i}(t) = (X^{x,i}(t), r^i(t)) the mild solution of (4) starting from (x, i) ∈ H × S. For any A ∈ B(H) and B ⊂ S, let P_t((x,i), A × B) be the probability measure induced by Z^{x,i}(t), t ≥ 0. Namely,

P_t((x,i), A × B) = P(Z^{x,i}(t) ∈ A × B),  (10)

where B(H) is the family of Borel subsets of H.

Denote by P(H × S) the family of all probability measures on H × S. For P₁, P₂ ∈ P(H × S), define the metric d_L as follows:

d_L(P₁, P₂) = sup_{φ∈L} | Σ_{j=1}^{N} ∫_H φ(u,j) P₁(du,j) − Σ_{j=1}^{N} ∫_H φ(u,j) P₂(du,j) |,  (11)

where L = {φ : H × S → R : |φ(u,j) − φ(v,l)| ≤ ‖u − v‖_H + |j − l| and |φ(u,j)| ≤ 1, for u, v ∈ H, j, l ∈ S}.

Remark 3. It is known that the weak convergence of probability measures is a metric concept with respect to classes of test functions. In other words, a sequence of probability measures {P_k}_{k≥1} in P(H × S) converges weakly to a probability measure P₀ ∈ P(H × S) if and only if lim_{k→∞} d_L(P_k, P₀) = 0.

Definition 4. The mild solution Z(t) = (X(t), r(t)) of (4) is said to have a stationary distribution π(· × ·) ∈ P(H × S) if the probability measure P_t((x,i), (· × ·)) converges weakly to π(· × ·) as t → ∞ for every i ∈ S and every x ∈ U, a bounded subset of H; that is,

lim_{t→∞} d_L(P_t((x,i), · × ·), π(· × ·)) = lim_{t→∞} ( sup_{φ∈L} | Eφ(Z^{x,i}(t)) − Σ_{j=1}^{N} ∫_H φ(u,j) π(du,j) | ) = 0.  (12)

By Theorem 3.1 in [10] and Theorem 3.1 in [14], we have the following.

Theorem 5. Under (A1)–(A3), the Markov process Z(t) has a unique stationary distribution π(· × ·) ∈ P(H × S).

For any n ≥ 1, let π_n : H → H_n := Span{e₁, e₂, ..., e_n} be the orthogonal projection. Consider the SPDE with Markovian switching on H_n,

dX_n(t) = [A_n X_n(t) + f_n(X_n(t), r(t))] dt + g_n(X_n(t), r(t)) dW(t),  (13)

with initial data X_n(0) = π_n x = Σ_{i=1}^{n} ⟨x, e_i⟩_H e_i, x ∈ H. Here A_n = π_n A, f_n = π_n f, g_n = π_n g.

Therefore, we can observe that

e^{tA_n}x = e^{tA}x,  A_n x = Ax,  ⟨x, f_n⟩_H = ⟨x, f⟩_H,  ⟨x, g_n⟩_H = ⟨x, g⟩_H,  ∀x ∈ H_n.  (14)

By the property of the projection operator and (A2), we have

‖A_n(x − y)‖²_H ∨ ‖f_n(x,j) − f_n(y,j)‖²_H ∨ ‖g_n(x,j) − g_n(y,j)‖²_HS
 ≤ ρ_n²‖x − y‖²_H ∨ ‖f(x,j) − f(y,j)‖²_H ∨ ‖g(x,j) − g(y,j)‖²_HS ≤ (ρ_n² ∨ L)‖x − y‖²_H,  ∀x, y ∈ H_n, j ∈ S.  (15)

Hence, (13) admits a unique strong solution {X_n(t)}_{t≥0} on H_n (see [8]).
We now introduce an Euler-Maruyama based computational method. The method makes use of the following lemma
(see [15]).
Lemma 6. Given Δ > 0, {r(kΔ), k = 0, 1, 2, ...} is a discrete Markov chain with the one-step transition probability matrix

P(Δ) = (P_{i,j}(Δ))_{N×N} = e^{ΔΓ}.  (16)
Given a fixed step size Δ > 0 and the one-step transition
probability matrix 𝑃(Δ) in (16), the discrete Markov chain
{𝑟(𝑘Δ), 𝑘 = 0, 1, 2, . . .} can be simulated as follows: let
𝑟(0) = 𝑖0 , and compute a pseudorandom number 𝜉1 from the
uniform (0, 1) distribution.
Define

r(Δ) =
  { i,  if i ∈ S − {N} is such that Σ_{j=1}^{i−1} P_{r(0),j}(Δ) ≤ ξ₁ < Σ_{j=1}^{i} P_{r(0),j}(Δ),
    N,  if Σ_{j=1}^{N−1} P_{r(0),j}(Δ) ≤ ξ₁,  (17)
where we set Σ_{j=1}^{0} P_{r(0),j}(Δ) = 0 as usual. Having computed r(0), r(Δ), ..., r(kΔ), we can compute r((k+1)Δ) by drawing a uniform (0,1) pseudorandom number ξ_{k+1} and setting

r((k+1)Δ) =
  { i,  if i ∈ S − {N} is such that Σ_{j=1}^{i−1} P_{r(kΔ),j}(Δ) ≤ ξ_{k+1} < Σ_{j=1}^{i} P_{r(kΔ),j}(Δ),
    N,  if Σ_{j=1}^{N−1} P_{r(kΔ),j}(Δ) ≤ ξ_{k+1}.  (18)
The procedure can be carried out repeatedly to obtain more
trajectories.
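The simulation procedure above can be sketched in code. The following Python snippet is our own illustration, not part of the paper: it approximates the one-step transition matrix P(Δ) = e^{ΔΓ} from Lemma 6 by a truncated Taylor series and then draws successive states by the inverse-CDF rule of (17) and (18). The two-state generator used here happens to be the one of Example 14, chosen only for concreteness.

```python
import random

def transition_matrix(gamma, delta, terms=30):
    """Approximate P(delta) = exp(delta * Gamma) by a truncated Taylor series."""
    n = len(gamma)
    p = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    term = [row[:] for row in p]
    for k in range(1, terms):
        # term <- term @ (delta * Gamma) / k
        term = [[sum(term[i][m] * delta * gamma[m][j] for m in range(n)) / k
                 for j in range(n)] for i in range(n)]
        p = [[p[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return p

def next_state(p, current, xi):
    """Inverse-CDF sampling of the next state, as in (17)-(18)."""
    acc = 0.0
    for j in range(len(p)):
        acc += p[current][j]
        if xi < acc:
            return j
    return len(p) - 1  # last state when xi exceeds all partial sums

# Two-state generator with rows summing to zero (as in Example 14).
gamma = [[-2.0, 2.0], [1.0, -1.0]]
p = transition_matrix(gamma, delta=0.01)
random.seed(0)
path = [0]
for _ in range(5):
    path.append(next_state(p, path[-1], random.random()))
```

For this generator the exact matrix is P(t) = Π + e^{−3t}(I − Π) with stationary rows Π = (1/3, 2/3), so the Taylor approximation can be checked against the closed form.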
We now define the Euler-Maruyama approximation for (13). For a stepsize Δ ∈ (0,1), the discrete approximation Ȳⁿ(kΔ) ≈ X_n(kΔ) is formed by simulating from Ȳⁿ(0) = π_n x, r(0) = r₀, and

Ȳⁿ((k+1)Δ) = e^{ΔA_n} {Ȳⁿ(kΔ) + f_n(Ȳⁿ(kΔ), r(kΔ))Δ + g_n(Ȳⁿ(kΔ), r(kΔ))ΔW_k},  (19)

where ΔW_k = W((k+1)Δ) − W(kΔ).
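To make the one-step map (19) concrete, the sketch below (ours, not the authors') iterates the scheme for a diagonal A_n with eigenvalues −ρ_i, so that e^{ΔA_n} acts coordinate-wise on the Galerkin modes. The drift f_n, the diffusion g_n, the number of modes, and the switching path r(kΔ) are all hypothetical placeholders.

```python
import math
import random

def em_step(y, r_k, delta, rho, f, g, dW):
    """One Euler-Maruyama step (19): y_{k+1} = e^{delta*A_n}(y + f*delta + g*dW)."""
    fy = f(y, r_k)
    gy = g(y, r_k)
    inner = [y[i] + fy[i] * delta + gy[i] * dW for i in range(len(y))]
    # A_n is diagonal with eigenvalues -rho[i], so the semigroup acts coordinate-wise.
    return [math.exp(-rho[i] * delta) * inner[i] for i in range(len(y))]

# Hypothetical data: n = 3 Galerkin modes, scalar noise, two regimes.
rho = [1.0, 4.0, 9.0]                          # eigenvalues of -A for e_k, k = 1, 2, 3
f = lambda y, r: [(-0.5 if r == 0 else -0.2) * v for v in y]
g = lambda y, r: [0.1 * v for v in y]
delta = 0.01
random.seed(1)
y = [1.0, 0.5, 0.25]
for k in range(1000):
    dW = random.gauss(0.0, math.sqrt(delta))   # Delta W_k ~ N(0, delta)
    r_k = 0 if (k // 100) % 2 == 0 else 1      # hypothetical switching path
    y = em_step(y, r_k, delta, rho, f, g, dW)
```

With these dissipative placeholder coefficients the iterates contract toward zero, in line with the boundedness asserted in Lemma 11.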
To carry out our analysis conveniently, we give the continuous Euler-Maruyama approximate solution, which is defined by

Yⁿ(t) = e^{tA_n} π_n x + ∫₀ᵗ e^{(t−⌊s⌋)A_n} f_n(Yⁿ(⌊s⌋), r(⌊s⌋)) ds + ∫₀ᵗ e^{(t−⌊s⌋)A_n} g_n(Yⁿ(⌊s⌋), r(⌊s⌋)) dW(s)
 = e^{tA_n} π_n x + ∫₀ᵗ e^{(t−⌊s⌋)A} f_n(Yⁿ(⌊s⌋), r(⌊s⌋)) ds + ∫₀ᵗ e^{(t−⌊s⌋)A} g_n(Yⁿ(⌊s⌋), r(⌊s⌋)) dW(s),  (20)

where ⌊t⌋ = [t/Δ]Δ, [t/Δ] denotes the integer part of t/Δ, Yⁿ(0) = Ȳⁿ(0) = π_n x, and Yⁿ(kΔ) = Ȳⁿ(kΔ).

It is obvious that Yⁿ(t) coincides with the discrete approximate solution at the gridpoints. For any Borel set A ∈ B(H_n), x ∈ H_n, i, j ∈ S, let Z̄ⁿ(kΔ) = (Ȳⁿ(kΔ), r(kΔ)),

P^{n,Δ}((x,i), A × {j}) := P(Z̄ⁿ(Δ) ∈ A × {j} | Z̄ⁿ(0) = (x,i)),
P_k^{n,Δ}((x,i), A × {j}) := P(Z̄ⁿ(kΔ) ∈ A × {j} | Z̄ⁿ(0) = (x,i)).  (21)

Following the argument of Theorem 5 in [13], we have the following.

Lemma 7. {Z̄ⁿ(kΔ)}_{k≥0} is a homogeneous Markov process with the transition probability kernel P^{n,Δ}((x,i), A × {j}).

To highlight the initial value, we will use the notation {Z̄^{n,(x,i)}(kΔ)}.

Definition 8. For a given stepsize Δ > 0, {Z̄^{n,(x,i)}(kΔ)}_{k≥0} is said to have a stationary distribution π^{n,Δ}(· × ·) ∈ P(H_n × S) if the k-step transition probability kernel P_k^{n,Δ}((x,i), · × ·) converges weakly to π^{n,Δ}(· × ·) as k → ∞ for every (x,i) ∈ H_n × S; that is,

lim_{k→∞} d_L(P_k^{n,Δ}((x,i), · × ·), π^{n,Δ}(· × ·)) = 0.  (22)

We will establish our main result of this paper in Section 3.

Theorem 9. Under (A1)–(A3), for a given stepsize Δ > 0 and arbitrary x ∈ H_n, i ∈ S, {Z̄^{n,(x,i)}(kΔ)}_{k≥0} has a unique stationary distribution π^{n,Δ}(· × ·) ∈ P(H_n × S).

3. Stationary in Distribution of Numerical Solutions

In this section, we shall present some useful lemmas and prove Theorem 9. In what follows, C > 0 is a generic constant whose values may change from line to line.

For any initial value (x,i), let Y^{n,x,i}(t) be the continuous Euler-Maruyama solution of (20) starting from (x,i) ∈ H × S. Let X^{x,i}(t) be the mild solution of (4) starting from (x,i) ∈ H × S.

Lemma 10. Under (A1)–(A3), we have

E‖Y^{n,x,i}(t) − Y^{n,x,i}(⌊t⌋)‖²_H ≤ 3(ρ_n² + 2L̄)Δ(1 + E‖Yⁿ(⌊t⌋)‖²_H).  (23)

Proof. Write Y^{n,x,i}(t) = Yⁿ(t) and Y^{n,x,i}(⌊t⌋) = Yⁿ(⌊t⌋). From (20), we have

Yⁿ(⌊t⌋) = e^{⌊t⌋A} π_n x + ∫₀^{⌊t⌋} e^{(⌊t⌋−⌊s⌋)A} f_n(Yⁿ(⌊s⌋), r(⌊s⌋)) ds + ∫₀^{⌊t⌋} e^{(⌊t⌋−⌊s⌋)A} g_n(Yⁿ(⌊s⌋), r(⌊s⌋)) dW(s).  (24)
Thus,

Yⁿ(t) − Yⁿ(⌊t⌋) = e^{(t−⌊t⌋)A} ( e^{⌊t⌋A} π_n x + ∫₀^{⌊t⌋} e^{(⌊t⌋−⌊s⌋)A} f_n(Yⁿ(⌊s⌋), r(⌊s⌋)) ds + ∫₀^{⌊t⌋} e^{(⌊t⌋−⌊s⌋)A} g_n(Yⁿ(⌊s⌋), r(⌊s⌋)) dW(s) ) − Yⁿ(⌊t⌋)
  + ∫_{⌊t⌋}^{t} e^{(t−⌊s⌋)A} f_n(Yⁿ(⌊s⌋), r(⌊s⌋)) ds + ∫_{⌊t⌋}^{t} e^{(t−⌊s⌋)A} g_n(Yⁿ(⌊s⌋), r(⌊s⌋)) dW(s)
 = (e^{(t−⌊t⌋)A} − 1) Yⁿ(⌊t⌋) + ∫_{⌊t⌋}^{t} e^{(t−⌊s⌋)A} f_n(Yⁿ(⌊s⌋), r(⌊s⌋)) ds + ∫_{⌊t⌋}^{t} e^{(t−⌊s⌋)A} g_n(Yⁿ(⌊s⌋), r(⌊s⌋)) dW(s).  (25)

Then, by the Hölder inequality and the Itô isometry, we obtain

E‖Yⁿ(t) − Yⁿ(⌊t⌋)‖²_H ≤ 3 { E‖(e^{(t−⌊t⌋)A} − 1) Yⁿ(⌊t⌋)‖²_H + E ∫_{⌊t⌋}^{t} ‖f_n(Yⁿ(⌊s⌋), r(⌊s⌋))‖²_H ds + E ∫_{⌊t⌋}^{t} ‖g_n(Yⁿ(⌊s⌋), r(⌊s⌋))‖²_HS ds }.  (26)

From (A1), we have

E‖(e^{(t−⌊t⌋)A} − 1) Yⁿ(⌊t⌋)‖²_H = E‖ Σ_{i=1}^{n} (e^{−ρ_i(t−⌊t⌋)} − 1) ⟨Yⁿ(⌊t⌋), e_i⟩_H e_i ‖²_H
 ≤ (1 − e^{−ρ_n(t−⌊t⌋)})² E‖Yⁿ(⌊t⌋)‖²_H ≤ ρ_n² Δ² E‖Yⁿ(⌊t⌋)‖²_H,  (27)

where we use the fundamental inequality 1 − e^{−a} ≤ a, a > 0. And, by (8), it follows that

E ∫_{⌊t⌋}^{t} ‖f_n(Yⁿ(⌊s⌋), r(⌊s⌋))‖²_H ds + E ∫_{⌊t⌋}^{t} ‖g_n(Yⁿ(⌊s⌋), r(⌊s⌋))‖²_HS ds ≤ 2L̄Δ (1 + E‖Yⁿ(⌊t⌋)‖²_H).  (28)

Substituting (27) and (28) into (26), the desired assertion (23) follows.

Lemma 11. Under (A1)–(A3), if Δ < min{1, 1/(3(ρ_n² + 2L̄)), ((4αp + μ)/(8q + 4qρ_n²L̄ + 4qL̄ + 24qγ̂L̄ + 6qL(ρ_n² + 2L̄)))²}, then there is a constant C > 0 that depends on the initial value x but is independent of Δ such that the continuous Euler-Maruyama solution of (20) satisfies

sup_{t≥0} E‖Y^{n,x,i}(t)‖²_H ≤ C,  (29)

where q = max_{1≤i≤N} λ_i and p = min_{1≤i≤N} λ_i.

Proof. Write Y^{n,x,i}(t) = Yⁿ(t) and r^i(kΔ) = r(kΔ). From (20), we have the following differential form:

dYⁿ(t) = {AYⁿ(t) + e^{(t−⌊t⌋)A} f_n(Yⁿ(⌊t⌋), r(⌊t⌋))} dt + e^{(t−⌊t⌋)A} g_n(Yⁿ(⌊t⌋), r(⌊t⌋)) dW(t),  (30)

with Yⁿ(0) = π_n x.

Let V(x, i) = λ_i‖x‖²_H. By the generalised Itô formula, for any θ > 0, we derive from (30) that

e^{θt} E(λ_{r(t)}‖Yⁿ(t)‖²_H)
 ≤ λ_i‖x‖²_H + E ∫₀ᵗ e^{θs} λ_{r(s)} {θ‖Yⁿ(s)‖²_H + 2⟨Yⁿ(s), AYⁿ(s)⟩_H + 2⟨Yⁿ(s), e^{(s−⌊s⌋)A} f_n(Yⁿ(⌊s⌋), r(⌊s⌋))⟩_H + ‖g_n(Yⁿ(⌊s⌋), r(⌊s⌋))‖²_HS} ds
  + E ∫₀ᵗ e^{θs} Σ_{l=1}^{N} γ_{r(s)l} λ_l ‖Yⁿ(s)‖²_H ds
 ≤ q‖x‖²_H + θq E ∫₀ᵗ e^{θs} ‖Yⁿ(s)‖²_H ds − 2αp E ∫₀ᵗ e^{θs} ‖Yⁿ(s)‖²_H ds
  + E ∫₀ᵗ e^{θs} λ_{r(s)} {2⟨Yⁿ(s), e^{(s−⌊s⌋)A} f(Yⁿ(⌊s⌋), r(⌊s⌋))⟩_H + ‖g(Yⁿ(⌊s⌋), r(⌊s⌋))‖²_HS} ds
  + E ∫₀ᵗ e^{θs} Σ_{l=1}^{N} γ_{r(s)l} λ_l ‖Yⁿ(s)‖²_H ds.  (31)

By the fundamental transformation, we obtain that

⟨Yⁿ(t), e^{(t−⌊t⌋)A} f(Yⁿ(⌊t⌋), r(⌊t⌋))⟩_H = ⟨Yⁿ(t), f(Yⁿ(t), r(t))⟩_H + ⟨Yⁿ(t), (e^{(t−⌊t⌋)A} − 1) f(Yⁿ(t), r(t))⟩_H + ⟨Yⁿ(t), e^{(t−⌊t⌋)A} (f(Yⁿ(⌊t⌋), r(⌊t⌋)) − f(Yⁿ(t), r(t)))⟩_H.  (32)

By the Hölder inequality, we have

‖g(Yⁿ(⌊t⌋), r(⌊t⌋))‖²_HS = ‖g(Yⁿ(t), r(t)) − (g(Yⁿ(t), r(t)) − g(Yⁿ(⌊t⌋), r(⌊t⌋)))‖²_HS
 ≤ (1 + Δ^{1/2}) ‖g(Yⁿ(t), r(t))‖²_HS + (1 + Δ^{−1/2}) ‖g(Yⁿ(t), r(t)) − g(Yⁿ(⌊t⌋), r(⌊t⌋))‖²_HS.  (33)

Then, from (31), using (9), we have

e^{θt} E(λ_{r(t)}‖Yⁿ(t)‖²_H)
 ≤ q‖x‖²_H + θq E ∫₀ᵗ e^{θs} ‖Yⁿ(s)‖²_H ds − 2αp E ∫₀ᵗ e^{θs} ‖Yⁿ(s)‖²_H ds − (μ/2) E ∫₀ᵗ e^{θs} ‖Yⁿ(s)‖²_H ds + α₁ ∫₀ᵗ e^{θs} ds
  + E ∫₀ᵗ e^{θs} λ_{r(s)} {2⟨Yⁿ(s), (e^{(s−⌊s⌋)A} − 1) f(Yⁿ(s), r(s))⟩_H + 2⟨Yⁿ(s), e^{(s−⌊s⌋)A} (f(Yⁿ(⌊s⌋), r(⌊s⌋)) − f(Yⁿ(s), r(s)))⟩_H
   + Δ^{1/2} ‖g(Yⁿ(s), r(s))‖²_HS + (1 + Δ^{−1/2}) ‖g(Yⁿ(s), r(s)) − g(Yⁿ(⌊s⌋), r(⌊s⌋))‖²_HS} ds
 := J₁(t) + J₂(t) + J₃(t) + J₄(t).  (34)

By the elementary inequality 2ab ≤ a²/κ + κb², a, b ∈ R, κ > 0, together with (8) and (27), we obtain that, for Δ < 1,

J₂(t) ≤ E ∫₀ᵗ e^{θs} λ_{r(s)} {Δ^{1/2}‖Yⁿ(s)‖²_H + Δ^{−1/2} ‖(e^{(s−⌊s⌋)A} − 1) f(Yⁿ(s), r(s))‖²_H} ds
 ≤ E ∫₀ᵗ e^{θs} λ_{r(s)} Δ^{1/2}‖Yⁿ(s)‖²_H ds + E ∫₀ᵗ e^{θs} λ_{r(s)} Δ^{−1/2} ρ_n² Δ² L̄ (1 + ‖Yⁿ(s)‖²_H) ds
 ≤ q E ∫₀ᵗ e^{θs} {Δ^{1/2} (1 + ρ_n²L̄) ‖Yⁿ(s)‖²_H + Δ^{1/2} ρ_n²L̄} ds.  (35)

By (A2) and (8), we have

‖f(Yⁿ(⌊t⌋), r(⌊t⌋)) − f(Yⁿ(t), r(t))‖²_H ≤ 2‖f(Yⁿ(⌊t⌋), r(⌊t⌋)) − f(Yⁿ(⌊t⌋), r(t))‖²_H + 2‖f(Yⁿ(⌊t⌋), r(t)) − f(Yⁿ(t), r(t))‖²_H
 ≤ 8L̄ (1 + ‖Yⁿ(⌊t⌋)‖²_H) I_{{r(t)≠r(⌊t⌋)}} + 2L‖Yⁿ(t) − Yⁿ(⌊t⌋)‖²_H.  (36)

Similarly, we have

‖g(Yⁿ(t), r(t)) − g(Yⁿ(⌊t⌋), r(⌊t⌋))‖²_HS ≤ 8L̄ (1 + ‖Yⁿ(⌊t⌋)‖²_H) I_{{r(t)≠r(⌊t⌋)}} + 2L‖Yⁿ(t) − Yⁿ(⌊t⌋)‖²_H.  (37)

Thus, we obtain from (36) that

J₃(t) ≤ 2 E ∫₀ᵗ e^{θs} λ_{r(s)} ⟨Yⁿ(s), e^{(s−⌊s⌋)A} (f(Yⁿ(⌊s⌋), r(⌊s⌋)) − f(Yⁿ(s), r(s)))⟩_H ds
 ≤ E ∫₀ᵗ e^{θs} λ_{r(s)} {Δ^{1/2}‖Yⁿ(s)‖²_H + Δ^{−1/2} ‖e^{(s−⌊s⌋)A}‖² ‖f(Yⁿ(⌊s⌋), r(⌊s⌋)) − f(Yⁿ(s), r(s))‖²_H} ds
 ≤ q E ∫₀ᵗ e^{θs} {Δ^{1/2}‖Yⁿ(s)‖²_H + 8Δ^{−1/2} L̄ (1 + ‖Yⁿ(⌊s⌋)‖²_H) I_{{r(s)≠r(⌊s⌋)}} + 2Δ^{−1/2} L ‖Yⁿ(s) − Yⁿ(⌊s⌋)‖²_H} ds.  (38)

By the Markov property, we compute

E[(1 + ‖Yⁿ(⌊t⌋)‖²_H) I_{{r(t)≠r(⌊t⌋)}}]
 = E(E[(1 + ‖Yⁿ(⌊t⌋)‖²_H) I_{{r(t)≠r(⌊t⌋)}} | r(⌊t⌋)])
 = E((1 + ‖Yⁿ(⌊t⌋)‖²_H) Σ_{i∈S} I_{{r(⌊t⌋)=i}} P(r(t) ≠ i | r(⌊t⌋) = i))
 = E((1 + ‖Yⁿ(⌊t⌋)‖²_H) Σ_{i∈S} I_{{r(⌊t⌋)=i}} Σ_{j≠i} (γ_{ij}(t − ⌊t⌋) + o(t − ⌊t⌋)))
 ≤ E((1 + ‖Yⁿ(⌊t⌋)‖²_H) (max_{1≤i≤N}(−γ_{ii}) Δ + o(Δ)) Σ_{i∈S} I_{{r(⌊t⌋)=i}})
 ≤ γ̂ Δ E(1 + ‖Yⁿ(⌊t⌋)‖²_H),  (39)

where γ̂ = N[1 + max_{1≤i≤N}(−γ_{ii})]. Substituting (39) into (38) gives

J₃(t) ≤ q E ∫₀ᵗ e^{θs} {Δ^{1/2}‖Yⁿ(s)‖²_H + 2Δ^{−1/2} L ‖Yⁿ(s) − Yⁿ(⌊s⌋)‖²_H} ds + 8qΔ^{1/2} γ̂ L̄ ∫₀ᵗ e^{θs} E(1 + ‖Yⁿ(⌊s⌋)‖²_H) ds.  (40)

Furthermore, due to (37) and (39), we have

J₄(t) = E ∫₀ᵗ e^{θs} λ_{r(s)} {Δ^{1/2} ‖g(Yⁿ(s), r(s))‖²_HS + (1 + Δ^{−1/2}) ‖g(Yⁿ(s), r(s)) − g(Yⁿ(⌊s⌋), r(⌊s⌋))‖²_HS} ds
 ≤ q E ∫₀ᵗ e^{θs} Δ^{1/2} L̄ (1 + ‖Yⁿ(s)‖²_H) ds + q(1 + Δ^{−1/2}) ∫₀ᵗ e^{θs} 8L̄ E[(1 + ‖Yⁿ(⌊s⌋)‖²_H) I_{{r(s)≠r(⌊s⌋)}}] ds
  + 2Lq(1 + Δ^{−1/2}) E ∫₀ᵗ e^{θs} ‖Yⁿ(s) − Yⁿ(⌊s⌋)‖²_H ds
 ≤ qΔ^{1/2} L̄ E ∫₀ᵗ e^{θs} (1 + ‖Yⁿ(s)‖²_H) ds + 16qγ̂Δ^{1/2} L̄ ∫₀ᵗ e^{θs} E(1 + ‖Yⁿ(⌊s⌋)‖²_H) ds + 2Lq(1 + Δ^{−1/2}) E ∫₀ᵗ e^{θs} ‖Yⁿ(s) − Yⁿ(⌊s⌋)‖²_H ds.  (41)

On the other hand, by Lemma 10, when 3(ρ_n² + 2L̄)Δ ≤ 1, we have

E‖Yⁿ(t)‖²_H ≤ 2E‖Yⁿ(t) − Yⁿ(⌊t⌋)‖²_H + 2E‖Yⁿ(⌊t⌋)‖²_H ≤ 6(ρ_n² + 2L̄)Δ (1 + E‖Yⁿ(⌊t⌋)‖²_H) + 2E‖Yⁿ(⌊t⌋)‖²_H ≤ 4E‖Yⁿ(⌊t⌋)‖²_H + 2.  (42)

Putting (35), (40), and (41) into (34), we have

e^{θt} E(λ_{r(t)}‖Yⁿ(t)‖²_H) ≤ q‖x‖²_H + ∫₀ᵗ e^{θs} [α₁ + qρ_n²Δ^{1/2}L̄ + 24qΔ^{1/2}γ̂L̄ + qΔ^{1/2}L̄] ds
 + E ∫₀ᵗ e^{θs} [qθ − 2αp − μ/2 + qΔ^{1/2}(2 + ρ_n²L̄) + qΔ^{1/2}L̄] ‖Yⁿ(s)‖²_H ds
 + 24qΔ^{1/2}γ̂L̄ E ∫₀ᵗ e^{θs} ‖Yⁿ(⌊s⌋)‖²_H ds + (4qΔ^{−1/2}L + 2qL) E ∫₀ᵗ e^{θs} ‖Yⁿ(s) − Yⁿ(⌊s⌋)‖²_H ds.  (43)

By Lemma 10 and the inequality (42), we obtain that

e^{θt} E(λ_{r(t)}‖Yⁿ(t)‖²_H)
 ≤ q‖x‖²_H + ∫₀ᵗ e^{θs} [α₁ + 2qθ − 4αp − μ + 3qρ_n²Δ^{1/2}L̄ + 24qΔ^{1/2}γ̂L̄ + 3qΔ^{1/2}L̄ + 4qΔ^{1/2} + 6qLΔ^{1/2}(ρ_n² + 2L̄)] ds
 + ∫₀ᵗ e^{θs} [4qθ − 8αp − 2μ + 4qΔ^{1/2}(2 + ρ_n²L̄) + 4qΔ^{1/2}L̄ + 24qΔ^{1/2}γ̂L̄ + 6qLΔ^{1/2}(ρ_n² + 2L̄)] ‖Yⁿ(⌊s⌋)‖²_H ds.  (44)

Let θ = (4αp + μ)/4q. For Δ < ((4αp + μ)/(8q + 4qρ_n²L̄ + 4qL̄ + 24qγ̂L̄ + 6qL(ρ_n² + 2L̄)))², we then have

p e^{θt} E(‖Yⁿ(t)‖²_H) ≤ q‖x‖²_H + ∫₀ᵗ e^{θs} [α₁ + (4αp + μ)/2] ds.  (45)

That is,

sup_{t≥0} E(‖Yⁿ(t)‖²_H) ≤ C.  (46)

Lemma 12. Let (A1)–(A3) hold. If Δ < min{1, 1/(18(ρ_n² + 2L)), ((2αp + μ)/(4q + 2qL + 2qρ_n²L + 12qLγ̂))²}, then

lim_{t→∞} E‖Y^{n,x,i}(t) − Y^{n,y,i}(t)‖²_H = 0  uniformly for x, y ∈ U,  (47)

where U is a bounded subset of H_n.
Proof. Write Y^{n,x,i}(t) = Y^x(t), Y^{n,y,i}(t) = Y^y(t), and r^i(kΔ) = r(kΔ). From (20), it is easy to show that

(Y^x(t) − Y^y(t)) − (Y^x(⌊t⌋) − Y^y(⌊t⌋))
 = (e^{(t−⌊t⌋)A} − 1)(Y^x(⌊t⌋) − Y^y(⌊t⌋)) + ∫_{⌊t⌋}^{t} e^{(t−⌊s⌋)A} (f_n(Y^x(⌊s⌋), r(⌊s⌋)) − f_n(Y^y(⌊s⌋), r(⌊s⌋))) ds
  + ∫_{⌊t⌋}^{t} e^{(t−⌊s⌋)A} (g_n(Y^x(⌊s⌋), r(⌊s⌋)) − g_n(Y^y(⌊s⌋), r(⌊s⌋))) dW(s).  (48)

By using the argument of Lemma 10, we derive that, if Δ < 1,

E‖(Y^x(t) − Y^y(t)) − (Y^x(⌊t⌋) − Y^y(⌊t⌋))‖²_H ≤ 3(ρ_n² + 2L)Δ E‖Y^x(⌊t⌋) − Y^y(⌊t⌋)‖²_H.  (49)

Hence,

E‖Y^x(t) − Y^y(t)‖²_H = E‖(Y^x(t) − Y^y(t)) − (Y^x(⌊t⌋) − Y^y(⌊t⌋)) + (Y^x(⌊t⌋) − Y^y(⌊t⌋))‖²_H
 ≤ (1 + 2) E‖(Y^x(t) − Y^y(t)) − (Y^x(⌊t⌋) − Y^y(⌊t⌋))‖²_H + (1 + 1/2) E‖Y^x(⌊t⌋) − Y^y(⌊t⌋)‖²_H
 ≤ 9(ρ_n² + 2L)Δ E‖Y^x(⌊t⌋) − Y^y(⌊t⌋)‖²_H + 1.5 E‖Y^x(⌊t⌋) − Y^y(⌊t⌋)‖²_H.  (50)

If Δ < 1/(18(ρ_n² + 2L)), then

E‖Y^x(t) − Y^y(t)‖²_H ≤ 2E‖Y^x(⌊t⌋) − Y^y(⌊t⌋)‖²_H.  (51)

Using (30) and the generalised Itô formula, for any θ > 0, we have

e^{θt} E(λ_{r(t)}‖Y^x(t) − Y^y(t)‖²_H)
 ≤ λ_i‖x − y‖²_H + E ∫₀ᵗ e^{θs} λ_{r(s)} {θ‖Y^x(s) − Y^y(s)‖²_H + 2⟨Y^x(s) − Y^y(s), A(Y^x(s) − Y^y(s))⟩_H
  + 2⟨Y^x(s) − Y^y(s), e^{(s−⌊s⌋)A}(f_n(Y^x(⌊s⌋), r(⌊s⌋)) − f_n(Y^y(⌊s⌋), r(⌊s⌋)))⟩_H + ‖g_n(Y^x(⌊s⌋), r(⌊s⌋)) − g_n(Y^y(⌊s⌋), r(⌊s⌋))‖²_HS} ds
  + E ∫₀ᵗ e^{θs} Σ_{l=1}^{N} γ_{r(s)l} λ_l ‖Y^x(s) − Y^y(s)‖²_H ds
 ≤ q‖x − y‖²_H + qθ E ∫₀ᵗ e^{θs} ‖Y^x(s) − Y^y(s)‖²_H ds − 2αp E ∫₀ᵗ e^{θs} ‖Y^x(s) − Y^y(s)‖²_H ds
  + E ∫₀ᵗ e^{θs} λ_{r(s)} {2⟨Y^x(s) − Y^y(s), e^{(s−⌊s⌋)A}(f(Y^x(⌊s⌋), r(⌊s⌋)) − f(Y^y(⌊s⌋), r(⌊s⌋)))⟩_H + ‖g(Y^x(⌊s⌋), r(⌊s⌋)) − g(Y^y(⌊s⌋), r(⌊s⌋))‖²_HS} ds
  + E ∫₀ᵗ e^{θs} Σ_{l=1}^{N} γ_{r(s)l} λ_l ‖Y^x(s) − Y^y(s)‖²_H ds.  (52)

By the fundamental transformation, we obtain that

⟨Y^x(t) − Y^y(t), e^{(t−⌊t⌋)A}(f(Y^x(⌊t⌋), r(⌊t⌋)) − f(Y^y(⌊t⌋), r(⌊t⌋)))⟩_H
 = ⟨Y^x(t) − Y^y(t), f(Y^x(t), r(t)) − f(Y^y(t), r(t))⟩_H + ⟨Y^x(t) − Y^y(t), (e^{(t−⌊t⌋)A} − 1)(f(Y^x(t), r(t)) − f(Y^y(t), r(t)))⟩_H
  + ⟨Y^x(t) − Y^y(t), e^{(t−⌊t⌋)A}[(f(Y^x(⌊t⌋), r(⌊t⌋)) − f(Y^y(⌊t⌋), r(⌊t⌋))) − (f(Y^x(t), r(t)) − f(Y^y(t), r(t)))]⟩_H.  (53)

By the Hölder inequality, we have

‖g(Y^x(⌊t⌋), r(⌊t⌋)) − g(Y^y(⌊t⌋), r(⌊t⌋))‖²_HS ≤ (1 + Δ^{1/2}) ‖g(Y^x(t), r(t)) − g(Y^y(t), r(t))‖²_HS
 + (1 + Δ^{−1/2}) ‖(g(Y^x(t), r(t)) − g(Y^y(t), r(t))) − (g(Y^x(⌊t⌋), r(⌊t⌋)) − g(Y^y(⌊t⌋), r(⌊t⌋)))‖²_HS.  (54)

Then, from (52) and (A3), we have

e^{θt} E(λ_{r(t)}‖Y^x(t) − Y^y(t)‖²_H)
 ≤ q‖x − y‖²_H + (qθ − 2αp − μ) E ∫₀ᵗ e^{θs} ‖Y^x(s) − Y^y(s)‖²_H ds
 + 2E ∫₀ᵗ e^{θs} λ_{r(s)} ⟨Y^x(s) − Y^y(s), (e^{(s−⌊s⌋)A} − 1)(f(Y^x(s), r(s)) − f(Y^y(s), r(s)))⟩_H ds
 + 2E ∫₀ᵗ e^{θs} λ_{r(s)} ⟨Y^x(s) − Y^y(s), e^{(s−⌊s⌋)A}[(f(Y^x(⌊s⌋), r(⌊s⌋)) − f(Y^y(⌊s⌋), r(⌊s⌋))) − (f(Y^x(s), r(s)) − f(Y^y(s), r(s)))]⟩_H ds
 + E ∫₀ᵗ e^{θs} λ_{r(s)} {Δ^{1/2} ‖g(Y^x(s), r(s)) − g(Y^y(s), r(s))‖²_HS + (1 + Δ^{−1/2}) ‖(g(Y^x(s), r(s)) − g(Y^y(s), r(s))) − (g(Y^x(⌊s⌋), r(⌊s⌋)) − g(Y^y(⌊s⌋), r(⌊s⌋)))‖²_HS} ds
 := G₁(t) + G₂(t) + G₃(t) + G₄(t).  (55)

By (A2) and (27), we have, for Δ < 1,

G₂(t) ≤ E ∫₀ᵗ e^{θs} λ_{r(s)} {Δ^{1/2} ‖Y^x(s) − Y^y(s)‖²_H + Δ^{−1/2} ‖(e^{(s−⌊s⌋)A} − 1)(f(Y^x(s), r(s)) − f(Y^y(s), r(s)))‖²_H} ds
 ≤ E ∫₀ᵗ e^{θs} λ_{r(s)} (Δ^{1/2} + Δ^{3/2} ρ_n² L) ‖Y^x(s) − Y^y(s)‖²_H ds
 ≤ q E ∫₀ᵗ e^{θs} Δ^{1/2} (1 + ρ_n²L) ‖Y^x(s) − Y^y(s)‖²_H ds.  (56)

It is easy to show that

G₃(t) ≤ q E ∫₀ᵗ e^{θs} {Δ^{1/2} ‖Y^x(s) − Y^y(s)‖²_H + Δ^{−1/2} ‖e^{(s−⌊s⌋)A}‖² ‖(f(Y^x(⌊s⌋), r(⌊s⌋)) − f(Y^y(⌊s⌋), r(⌊s⌋))) − (f(Y^x(s), r(s)) − f(Y^y(s), r(s)))‖²_H} ds
 ≤ qΔ^{1/2} E ∫₀ᵗ e^{θs} ‖Y^x(s) − Y^y(s)‖²_H ds + G̃₃(t),  (57)

where G̃₃(t) denotes the second term on the right-hand side. By (39), we have

G̃₃(t) ≤ 2qΔ^{−1/2} E ∫₀ᵗ e^{θs} [‖f(Y^x(⌊s⌋), r(⌊s⌋)) − f(Y^y(⌊s⌋), r(⌊s⌋))‖²_H + ‖f(Y^x(s), r(s)) − f(Y^y(s), r(s))‖²_H] I_{{r(s)≠r(⌊s⌋)}} ds
 ≤ 4qΔ^{−1/2} L E ∫₀ᵗ e^{θs} ‖Y^x(⌊s⌋) − Y^y(⌊s⌋)‖²_H I_{{r(s)≠r(⌊s⌋)}} ds
 ≤ 4qLγ̂Δ^{1/2} E ∫₀ᵗ e^{θs} ‖Y^x(⌊s⌋) − Y^y(⌊s⌋)‖²_H ds.  (58)

Therefore, we obtain that

G₃(t) ≤ qΔ^{1/2} E ∫₀ᵗ e^{θs} ‖Y^x(s) − Y^y(s)‖²_H ds + 4qLγ̂Δ^{1/2} E ∫₀ᵗ e^{θs} ‖Y^x(⌊s⌋) − Y^y(⌊s⌋)‖²_H ds.  (59)
On the other hand, using a similar argument to (58), we have

G₄(t) ≤ q E ∫₀ᵗ e^{θs} {Δ^{1/2} L ‖Y^x(s) − Y^y(s)‖²_H + (1 + Δ^{−1/2}) 4Lγ̂Δ ‖Y^x(⌊s⌋) − Y^y(⌊s⌋)‖²_H} ds.  (60)

Hence, we have

p e^{θt} E(‖Y^x(t) − Y^y(t)‖²_H)
 ≤ q‖x − y‖²_H + E ∫₀ᵗ e^{θs} [qθ − 2αp − μ + q(1 + ρ_n²L)Δ^{1/2} + qΔ^{1/2} + qLΔ^{1/2}] ‖Y^x(s) − Y^y(s)‖²_H ds
 + E ∫₀ᵗ e^{θs} [4qLγ̂Δ^{1/2} + (Δ^{1/2} + Δ) 4qLγ̂] ‖Y^x(⌊s⌋) − Y^y(⌊s⌋)‖²_H ds.  (61)

By (51), we obtain that

p e^{θt} E(‖Y^x(t) − Y^y(t)‖²_H) ≤ q‖x − y‖²_H + E ∫₀ᵗ e^{θs} [2qθ − 4αp − 2μ + 4qΔ^{1/2} + 2qLΔ^{1/2} + 2qρ_n²LΔ^{1/2} + 12qLγ̂Δ^{1/2}] ‖Y^x(⌊s⌋) − Y^y(⌊s⌋)‖²_H ds.  (62)

Let θ = (2αp + μ)/2q. For Δ < ((2αp + μ)/(4q + 2qL + 2qρ_n²L + 12qLγ̂))², the desired assertion (47) follows.

We can now easily prove our main result.

Proof of Theorem 9. Since H_n is finite-dimensional, by Lemma 3.1 in [12], we have

lim_{k→∞} d_L(P_k^{n,Δ}((x,i), · × ·), P_k^{n,Δ}((y,i), · × ·)) = 0,  (63)

uniformly in x, y ∈ H_n, i ∈ S. By Lemma 7, there exists π^{n,Δ}(· × ·) ∈ P(H_n × S) such that

lim_{k→∞} d_L(P_k^{n,Δ}((0,1), · × ·), π^{n,Δ}(· × ·)) = 0.  (64)

By the triangle inequality, (63), and (64), we have

lim_{k→∞} d_L(P_k^{n,Δ}((x,i), · × ·), π^{n,Δ}(· × ·))
 ≤ lim_{k→∞} d_L(P_k^{n,Δ}((x,i), · × ·), P_k^{n,Δ}((0,1), · × ·)) + lim_{k→∞} d_L(P_k^{n,Δ}((0,1), · × ·), π^{n,Δ}(· × ·)) = 0.  (65)

4. Corollary and Example

In this section, we give a criterion based on M-matrices which can be verified easily in applications.

(A4) For each j ∈ S, there exists a pair of constants β_j and δ_j such that, for x, y ∈ H,

⟨x − y, f(x,j) − f(y,j)⟩_H ≤ β_j‖x − y‖²_H,
‖g(x,j) − g(y,j)‖²_HS ≤ δ_j‖x − y‖²_H.  (66)

Moreover, A := −diag(2β₁ + δ₁, ..., 2β_N + δ_N) − Γ is a nonsingular M-matrix [8].

Corollary 13. Under (A1), (A2), and (A4), for a given stepsize Δ > 0 and arbitrary x ∈ H_n, i ∈ S, {Z̄^{n,(x,i)}(kΔ)}_{k≥0} has a unique stationary distribution π^{n,Δ}(· × ·) ∈ P(H_n × S).

Proof. In fact, we only need to prove that (A3) holds. By (A4), there exists (λ₁, λ₂, ..., λ_N)ᵀ > 0 such that (q₁, q₂, ..., q_N)ᵀ = A(λ₁, λ₂, ..., λ_N)ᵀ > 0. Set μ = min_{1≤j≤N} q_j. By (66), we have

2λ_j⟨x − y, f(x,j) − f(y,j)⟩_H + λ_j‖g(x,j) − g(y,j)‖²_HS + Σ_{l=1}^{N} γ_jl λ_l ‖x − y‖²_H
 ≤ 2λ_jβ_j‖x − y‖²_H + δ_jλ_j‖x − y‖²_H + Σ_{l=1}^{N} γ_jl λ_l ‖x − y‖²_H
 = (2λ_jβ_j + δ_jλ_j + Σ_{l=1}^{N} γ_jl λ_l)‖x − y‖²_H = −q_j‖x − y‖²_H ≤ −μ‖x − y‖²_H.  (67)

In the following, we give an example to illustrate Corollary 13.

Example 14. Consider

dX(t,ξ) = [(∂²/∂ξ²)X(t,ξ) + B(r(t))X(t,ξ)] dt + g(X(t,ξ), r(t)) dW(t),  0 < ξ < π, t ≥ 0.  (68)
We take H = L²(0,π) and A = ∂²/∂ξ² with domain D(A) = H²(0,π) ∩ H₀¹(0,π); then A is a self-adjoint negative operator. For the eigenbasis e_k(ξ) = (2/π)^{1/2} sin(kξ), ξ ∈ [0,π], we have Ae_k = −k²e_k, k ∈ N. It is easy to show that

‖e^{tA}x‖²_H = Σ_{k=1}^{∞} e^{−2k²t} ⟨x, e_k⟩²_H ≤ e^{−2t} Σ_{k=1}^{∞} ⟨x, e_k⟩²_H.  (69)

This further gives

‖e^{tA}‖ ≤ e^{−t},  (70)

so (A1) holds with α = 1.
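The spectral bound above is easy to check numerically on a truncated eigenfunction expansion. The snippet below is our own illustration; the coefficients c_k standing in for ⟨x, e_k⟩ are arbitrary.

```python
import math

def semigroup_norm_sq(coeffs, t):
    """‖e^{tA}x‖² for x = Σ c_k e_k with A e_k = -k² e_k (truncated sum)."""
    return sum(math.exp(-2 * k * k * t) * c * c
               for k, c in enumerate(coeffs, start=1))

coeffs = [1.0, -0.7, 0.3, 0.05]      # arbitrary truncated coefficients ⟨x, e_k⟩
norm_x_sq = sum(c * c for c in coeffs)
for t in (0.1, 0.5, 2.0):
    # each mode decays at least as fast as e^{-2t}, since k >= 1
    assert semigroup_norm_sq(coeffs, t) <= math.exp(-2 * t) * norm_x_sq
```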
Let W(t) be a scalar Brownian motion and let r(t) be a continuous-time Markov chain taking values in S = {1, 2}, with generator

Γ = [ −2  2 ; 1  −1 ],  B(1) = B₁ = [ −0.3  −0.1 ; −0.2  −0.2 ],  B(2) = B₂ = [ −0.4  −0.2 ; −0.3  −0.2 ].  (71)

Then λ_max(B₁ᵀB₁) = 0.1706 and λ_max(B₂ᵀB₂) = 0.3286. Moreover, g satisfies

‖g(x,j) − g(y,j)‖²_HS ≤ δ_j‖x − y‖²_H,  (72)

where δ₁ = 0.1 and δ₂ = 0.06.

Defining f(x,j) = B(j)x, we then have

‖f(x,j) − f(y,j)‖²_H ∨ ‖g(x,j) − g(y,j)‖²_HS ≤ (λ_max(B_jᵀB_j) ∨ δ_j)‖x − y‖²_H < 0.33‖x − y‖²_H,
⟨x − y, f(x,j) − f(y,j)⟩_H = ⟨x − y, (1/2)(B_jᵀ + B_j)(x − y)⟩_H ≤ (1/2)λ_max(B_jᵀ + B_j)‖x − y‖²_H.  (73)

It is easy to compute

β₁ = (1/2)λ_max(B₁ᵀ + B₁) = −0.0919,  β₂ = (1/2)λ_max(B₂ᵀ + B₂) = −0.03075.  (74)

So the matrix A becomes

A = diag(0.0838, 0.0015) − Γ = [ 2.0838  −2 ; −1  1.0015 ].  (75)

It is easy to see that A is a nonsingular M-matrix. Thus, (A4) holds. By Corollary 13, we can conclude that (68) has a unique stationary distribution π^{n,Δ}(· × ·).
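The numerical values in this example can be reproduced in a few lines. The sketch below (ours, not from the paper) recomputes λ_max(B_jᵀB_j) and β_j = (1/2)λ_max(B_jᵀ + B_j), assembles the matrix A of (75), and checks the nonsingular M-matrix property for this 2×2 case via sign conditions and leading principal minors.

```python
import math

def lam_max_sym2(m):
    """Largest eigenvalue of a symmetric 2x2 matrix [[a, b], [b, d]]."""
    a, b, d = m[0][0], m[0][1], m[1][1]
    return (a + d) / 2 + math.sqrt(((a - d) / 2) ** 2 + b * b)

def bt_b(b):        # B^T B
    return [[sum(b[k][i] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def sym_part2(b):   # B^T + B
    return [[b[j][i] + b[i][j] for j in range(2)] for i in range(2)]

B1 = [[-0.3, -0.1], [-0.2, -0.2]]
B2 = [[-0.4, -0.2], [-0.3, -0.2]]
Gamma = [[-2.0, 2.0], [1.0, -1.0]]
delta = [0.1, 0.06]

beta = [lam_max_sym2(sym_part2(B)) / 2 for B in (B1, B2)]
diag = [-(2 * beta[j] + delta[j]) for j in range(2)]   # -(2*beta_j + delta_j)
A = [[diag[i] * (i == j) - Gamma[i][j] for j in range(2)] for i in range(2)]

# Nonsingular M-matrix check (2x2): off-diagonals <= 0, leading minors > 0.
is_m_matrix = (A[0][1] <= 0 and A[1][0] <= 0 and A[0][0] > 0
               and A[0][0] * A[1][1] - A[0][1] * A[1][0] > 0)
```

Running this reproduces β₁ ≈ −0.0919, β₂ ≈ −0.0307, and A ≈ [2.0838 −2; −1 1.0015], with the M-matrix check returning True.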
Acknowledgment
This work is partially supported by the National Natural
Science Foundation of China under Grants 61134012 and
11271146.
References
[1] G. Da Prato and J. Zabczyk, Stochastic Equations in Infinite Dimensions, Cambridge University Press, Cambridge, UK, 1992.
[2] A. Debussche, “Weak approximation of stochastic partial differential equations: the nonlinear case,” Mathematics of Computation, vol. 80, no. 273, pp. 89–117, 2011.
[3] I. Gyöngy and A. Millet, “Rate of convergence of space time
approximations for stochastic evolution equations,” Potential
Analysis, vol. 30, no. 1, pp. 29–64, 2009.
[4] A. Jentzen and P. E. Kloeden, Taylor Approximations for
Stochastic Partial Differential Equations, CBMS-NSF Regional
Conference Series in Applied Mathematics, 2011.
[5] T. Shardlow, “Numerical methods for stochastic parabolic
PDEs,” Numerical Functional Analysis and Optimization, vol. 20,
no. 1-2, pp. 121–145, 1999.
[6] M. Athans, “Command and control (C2) theory: a challenge to
control science,” IEEE Transactions on Automatic Control, vol.
32, no. 4, pp. 286–293, 1987.
[7] D. J. Higham, X. Mao, and C. Yuan, “Preserving exponential
mean-square stability in the simulation of hybrid stochastic
differential equations,” Numerische Mathematik, vol. 108, no. 2,
pp. 295–325, 2007.
[8] X. Mao and C. Yuan, Stochastic Differential Equations with
Markovian Switching, Imperial College, London, UK, 2006.
[9] M. Mariton, Jump Linear Systems in Automatic Control, Marcel
Dekker, 1990.
[10] J. Bao, Z. Hou, and C. Yuan, “Stability in distribution of mild
solutions to stochastic partial differential equations,” Proceedings of the American Mathematical Society, vol. 138, no. 6, pp.
2169–2180, 2010.
[11] J. Bao and C. Yuan, “Numerical approximation of stationary
distribution for SPDEs,” http://arxiv.org/abs/1303.1600.
[12] X. Mao, C. Yuan, and G. Yin, “Numerical method for stationary
distribution of stochastic differential equations with Markovian
switching,” Journal of Computational and Applied Mathematics,
vol. 174, no. 1, pp. 1–27, 2005.
[13] C. Yuan and X. Mao, “Stationary distributions of Euler-Maruyama-type stochastic difference equations with Markovian switching and their convergence,” Journal of Difference Equations and Applications, vol. 11, no. 1, pp. 29–48, 2005.
[14] C. Yuan and X. Mao, “Asymptotic stability in distribution of
stochastic differential equations with Markovian switching,”
Stochastic Processes and their Applications, vol. 103, no. 2, pp.
277–291, 2003.
[15] W. J. Anderson, Continuous-Time Markov Chains, Springer,
New York, NY, USA, 1991.