Hindawi Publishing Corporation
Abstract and Applied Analysis
Volume 2013, Article ID 169214, 12 pages
http://dx.doi.org/10.1155/2013/169214
Research Article
Convergence and Stability of the Split-Step θ-Milstein Method for Stochastic Delay Hopfield Neural Networks
Qian Guo,1,2 Wenwen Xie,1 and Taketomo Mitsui3
1 Department of Mathematics, Shanghai Normal University, Shanghai 200234, China
2 Division of Computational Science, E-Institute of Shanghai Universities, 100 Guilin Road, Shanghai 200234, China
3 Department of Mathematical Sciences, Faculty of Science and Engineering, Doshisha University, Kyoto 610-0394, Japan
Correspondence should be addressed to Qian Guo; qguo@shnu.edu.cn
Received 8 December 2012; Accepted 26 February 2013
Academic Editor: Chengming Huang
Copyright © 2013 Qian Guo et al. This is an open access article distributed under the Creative Commons Attribution License, which
permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
A new splitting method designed for the numerical solution of stochastic delay Hopfield neural networks is introduced and analysed. Under Lipschitz and linear growth conditions, this split-step θ-Milstein method is proved to have strong convergence of order 1 in the mean-square sense, which is higher than that of the existing split-step θ-method. Furthermore, the mean-square stability of the proposed method is investigated. Numerical experiments and comparisons with existing methods illustrate the computational efficiency of our method.
1. Introduction
Hopfield neural networks, which originated with Hopfield in
the 1980s [1], have been successfully applied in many areas
such as combinatorial optimization [2, 3], signal processing
[4], and pattern recognition [5, 6]. In the last decade, neural networks in the presence of signal transmission delay and stochastic perturbations, also known as stochastic delay Hopfield neural networks (SDHNNs), have gained considerable research interest (see, e.g., [7–9] and the references therein).
So far, most works on SDHNNs have focused on the stability analysis of the analytical solutions, including mean-square exponential stability [7], global asymptotic stability [9], and so forth. However, numerical treatment matters as well: not only is simulation an important tool for exploring the dynamics of various kinds of Hopfield neural networks (HNNs) (see, e.g., [10] and the references therein), but parameter estimation in dynamical systems based on HNNs (see, e.g., [11]) also requires solving HNNs numerically. Moreover, because most SDHNNs do not have explicit solutions, the numerical analysis of SDHNNs has recently begun to attract research attention. For example, Li et al. [12] investigated the exponential stability of the Euler method and the semi-implicit Euler method for SDHNNs. Rathinasamy [13] introduced a split-step θ-method (SST) for SDHNNs and analysed its mean-square stability, although the SST is given only for the commensurable delay case. To the best of our knowledge, these authors mainly discussed the stability of numerical solutions for stochastic Hopfield neural networks with discrete time delays but skipped the details of convergence analysis.
The split-step Euler method for stochastic differential equations (SDEs) was proposed by Higham et al. [14]; subsequently, splitting Euler-type algorithms have been derived for stochastic delay differential equations (SDDEs) [15, 16]. In this paper, we present a splitting method with higher-order convergence for SDHNNs. Specifically, we give a detailed convergence analysis and compare the stability of our method with that of the split-step θ-method given in [13].
The rest of this paper is organized as follows. In Section 2,
we recall the stochastic delay neural networks model and
present a split-step πœƒ-Milstein method. In Section 3, we derive
the convergence results of the split-step πœƒ-Milstein method
for the model. In Section 4, the numerical stability analysis is
performed. In Section 5, some numerical examples are given
to confirm the theory. In the last section, we draw some conclusions.
2. Model and the Split-Step θ-Milstein Method
2.1. Model. Consider the stochastic delay Hopfield neural
networks of the form
\[
\mathrm{d}\mathbf{x}(t) = \bigl[-C\mathbf{x}(t) + A\mathbf{f}(\mathbf{x}(t)) + B\mathbf{g}(\mathbf{z}(t))\bigr]\,\mathrm{d}t + \Phi(\mathbf{x}(t))\,\mathrm{d}W(t),
\tag{1}
\]
where x(t) = [x_1(t), …, x_n(t)]^T ∈ R^n is the state vector associated with the n neurons, z(t) = [x_1(t − τ_1), …, x_n(t − τ_n)]^T, the diagonal matrix C = diag(c_1, c_2, …, c_n) has positive entries, and c_i represents the rate at which the ith unit resets its potential to the resting state in isolation when disconnected from the network and the external stochastic perturbation. The matrices A = (a_ij)_{n×n} and B = (b_ij)_{n×n} are the connection weight matrix and the discretely delayed connection weight matrix, respectively. Furthermore, the vector functions f(x(t)) = [f_1(x_1(t)), …, f_n(x_n(t))]^T and g(z(t)) = [g_1(x_1(t − τ_1)), …, g_n(x_n(t − τ_n))]^T denote the neuron activation functions with the conditions f_i(0) = 0 and g_i(0) = 0 for all positive τ_j.
On the initial segment [−τ, 0] the state vector satisfies x(t) = ψ(t), where ψ(t) = [ψ_1(t), …, ψ_n(t)]^T is a given function in C([−τ, 0], R^n) and τ stands for max_{1≤i≤n} {τ_i}. Moreover, Φ(x(t)) = diag(φ_1(x_1(t)), …, φ_n(x_n(t))) is a diagonal matrix with φ_i(0) = 0, and W(t) = [W_1(t), …, W_n(t)]^T ∈ R^n is an n-dimensional Wiener process defined on the complete probability space (Ω, F, {F_t}_{t≥0}, P) with a filtration {F_t}_{t≥0} satisfying the usual conditions (i.e., it is increasing and right continuous while F_0 contains all P-null sets).
Let f_i and g_i be functions in C²(D; R) ∩ L²([0, T]; R), and let φ_i be in C¹(D; R) ∩ L²([0, T]; R). Here C^l(D; R) denotes the family of l-times continuously differentiable real-valued functions defined on D, while L^l([0, T]; R) denotes the family of all real-valued measurable {F_t}-adapted stochastic processes {f(t)}_{t∈[0,T]} such that ∫_0^T |f(t)|^l dt < +∞.
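To fix ideas, the drift and diffusion maps of (1) are easy to express componentwise. Below is a minimal Python sketch; the function names and the sample activations are our own illustrative choices, not taken from the paper:

```python
import numpy as np

def drift(x, z, Cd, A, B, f=np.sin, g=np.arctan):
    """Drift of (1): -C x + A f(x) + B g(z), with C = diag(Cd) and
    componentwise activations f, g satisfying f(0) = g(0) = 0."""
    return -Cd * x + A @ f(x) + B @ g(z)

def diffusion(x, phi=lambda x: x):
    """Diagonal diffusion Phi(x): component i multiplies dW_i by phi_i(x_i)."""
    return phi(x)
```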
2.2. Numerical Scheme. We define the mesh with a uniform step-size h (0 < h < 1) on the interval [0, T]; that is, t_k = k·h (k = 0, 1, …, K) and T = Kh. Let ΔW_i^k = W_i(t_{k+1}) − W_i(t_k) denote the increment of the Wiener process. The split-step θ-Milstein (SSTM) scheme for the solution of the SDE (1) is given by
π‘Œπ‘˜ = π‘‹π‘˜ + [− (1 − πœƒ) πΆπ‘‹π‘˜ − πœƒπΆπ‘Œπ‘˜ + 𝐴f (π‘‹π‘˜ ) + 𝐡g (π‘π‘˜ )] β„Ž,
1Μ‚ π‘˜
(π‘Œ ) [Δπ‘Šπ‘˜ ∘ Δπ‘Šπ‘˜ − 1β„Ž] ,
π‘‹π‘˜+1 = π‘Œπ‘˜ + Φ (π‘Œπ‘˜ ) Δπ‘Šπ‘˜ + Φ
2
(2)
π‘˜
where the merging parameter πœƒ satisfies 0 ≤ πœƒ ≤ 1, 𝑋 =
[𝑋1k , . . . , π‘‹π‘›π‘˜ ]𝑇 is an approximation to x(π‘‘π‘˜ ), and for 1 ≤ π‘žπ‘— ∈
Z+
πœ“π‘— (π‘‘π‘˜ − πœπ‘— ) , π‘‘π‘˜ − πœπ‘— ≤ 0,
π‘π‘—π‘˜ = { π‘˜−π‘žπ‘—
0 < π‘‘π‘˜ − πœπ‘— ∈ [π‘‘π‘˜−π‘žπ‘— , π‘‘π‘˜−π‘žπ‘— +1 ) .
𝑋𝑗 ,
(3)
Μ‚ π‘˜)
=
Moreover,
we
adopt
the
symbols
Φ(π‘Œ
σΈ€ 
π‘˜
π‘˜
σΈ€ 
π‘˜
π‘˜
π‘˜
diag(πœ™1 (π‘Œ1 )πœ™1 (π‘Œ1 ), . . . , πœ™π‘› (π‘Œπ‘› )πœ™π‘› (π‘Œπ‘› )) and Δπ‘Š = [Δπ‘Š1π‘˜ ,
. . . ,Δπ‘Šπ‘›π‘˜ ]𝑇 , the Hadamard product Δπ‘Šπ‘˜ ∘ Δπ‘Šπ‘˜ means
[(Δπ‘Š1π‘˜ )2 , . . . ,(Δπ‘Šπ‘›π‘˜ )2 ]𝑇 , and 1 = [1, . . . , 1]𝑇 ∈ R𝑛 . When
π‘‘π‘˜ ≤ 0, we define π‘‹π‘˜ = πœ“(π‘‘π‘˜ ).
Then scheme (2) can be written in the equivalent form
\[
Y^k = X^k - (I + \theta C h)^{-1} C h\, X^k + (I + \theta C h)^{-1} h\,\bigl[A\mathbf{f}(X^k) + B\mathbf{g}(Z^k)\bigr],
\tag{4a}
\]
\[
X^{k+1} = Y^k + \Phi(Y^k)\,\Delta W^k + \tfrac{1}{2}\widehat{\Phi}(Y^k)\bigl[\Delta W^k \circ \Delta W^k - \mathbf{1}h\bigr].
\tag{4b}
\]
Substituting (4a) into (4b), we have a stochastic explicit single-step method with an increment function Λ(ξ, η, h, ΔW^k) ∈ R^n; that is,
\[
X^{k+1} = X^k + \Lambda\bigl(X^k, Z^k, h, \Delta W^k\bigr).
\tag{5}
\]
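For concreteness, one SSTM step in the form (4a)-(4b) can be sketched as follows. This is a minimal Python sketch assuming componentwise activations and diagonal diffusion as in Section 2.1; sstm_step is our own name, not from the paper:

```python
import numpy as np

def sstm_step(X, Z, h, dW, theta, Cd, A, B, f, g, phi, dphi):
    """One step of the split-step theta-Milstein scheme (4a)-(4b).

    X, Z, dW : length-n arrays (current state, delayed state (3), Wiener increments)
    Cd       : length-n array with the positive diagonal entries c_i of C
    f, g     : vectorized activations with f(0) = g(0) = 0
    phi,dphi : vectorized diffusion phi_i and its derivative phi_i'
    """
    M = 1.0 / (1.0 + theta * Cd * h)        # (I + theta C h)^{-1}, diagonal
    # Deterministic split step (4a).
    Y = X - M * (Cd * h * X) + M * h * (A @ f(X) + B @ g(Z))
    # Milstein diffusion step (4b); dW o dW is the Hadamard square dW**2.
    return Y + phi(Y) * dW + 0.5 * dphi(Y) * phi(Y) * (dW**2 - h)
```

Because C is diagonal, the implicit drift half-step (4a) is solved in closed form; no nonlinear solve is needed for any θ.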
3. Order and Convergence Results for SSTM
In this section we consider the global error of the SSTM (2) as applied to the SDHNN (1) with its initial condition. In what follows, ‖·‖ denotes the Euclidean norm in R^n.
For the convergence analysis we make the following standard assumptions.
Assumption 1. Assume that f, g, Φ, and Φ̂ satisfy the Lipschitz condition
\[
|f_i(a) - f_i(b)| \le \alpha_i |a - b|, \qquad |g_i(a) - g_i(b)| \le \beta_i |a - b|,
\]
\[
|\phi_i(a) - \phi_i(b)| \le \gamma_i |a - b|, \qquad |\phi_i'\phi_i(a) - \phi_i'\phi_i(b)| \le \kappa_i |a - b|
\tag{6}
\]
for every i and all a, b ∈ R, t ∈ [0, T], and the linear growth condition
\[
\|\mathbf{f}(\mathbf{x})\|^2 \vee \|\mathbf{g}(\mathbf{x})\|^2 \vee \|\Phi(\mathbf{x})\cdot\mathbf{1}\|^2 \vee \|\widehat{\Phi}(\mathbf{x})\cdot\mathbf{1}\|^2 \le \tilde{L}\,\bigl(1 + \|\mathbf{x}\|^2\bigr) \quad \forall \mathbf{x} \in \mathbb{R}^n,
\tag{7}
\]
where L̃ is a positive constant and ∨ denotes the maximum. We also define L_i := max{α_i, β_i, γ_i, κ_i} (i = 1, 2, …, n).
We also need the following assumption on the initial
condition.
Assumption 2. Assume that the initial function ψ(t) is Lipschitz continuous from [−τ, 0] to R^n; that is, there is a positive constant L_ψ satisfying
\[
\|\psi(t) - \psi(s)\| \le L_\psi (t - s) \quad \text{if } -\tau \le s < t < 0.
\tag{8}
\]
Now we give the definition of local and global errors.

Definition 1. Let x(t) denote the exact solution of (1). The local approximate solution x̃(t_{k+1}) starting from x(t_k) by SSTM (2), given by
\[
\tilde{\mathbf{x}}(t_{k+1}) := \mathbf{x}(t_k) + \Lambda\bigl(\mathbf{x}(t_k), \tilde{Z}(t_k), h, \Delta W^k\bigr),
\tag{9}
\]
where Z̃(t_k) denotes the evaluation of (3) using the exact solution, yields the difference
\[
\delta^{k+1} := \mathbf{x}(t_{k+1}) - \tilde{\mathbf{x}}(t_{k+1}).
\tag{10}
\]
Then the local error of the SSTM is defined by ‖δ^{k+1}‖, whereas its global error means ‖ε_k‖, where ε_k := x(t_k) − X^k.

Definition 2. If the global error satisfies
\[
E\bigl(\|\epsilon_k\|^2\bigr) \le \Gamma h^{2p} \quad \forall h \in (0, h_0)
\tag{11}
\]
with positive constants h_0 and Γ and a finite p, then we say that the order of mean-square convergence accuracy of the method is p. Here E is the expectation with respect to P.

We then give the following lemmas, which are useful in deriving the convergence results.

Lemma 3 (see also [17]). Let the linear growth condition (7) hold, and let the initial function ψ(t) be F_0-measurable and right continuous, with E_ψ := E(sup_{−τ≤t≤0} ‖ψ(t)‖²) < ∞. For any given positive T, there exist positive numbers Γ_ψ and Γ_2 such that the solution of (1) satisfies
\[
E\Bigl(\sup_{-\tau\le s\le T} \|\mathbf{x}(s)\|^2\Bigr) \le \Gamma_\psi,
\tag{12}
\]
where the constant Γ_ψ is independent of the step-size h but dependent on T. Moreover, for any 0 ≤ s < t ≤ T with t − s < 1, the estimation
\[
E\bigl(\|\mathbf{x}(t) - \mathbf{x}(s)\|^2\bigr) \le \Gamma_2\,(t - s)
\tag{13}
\]
holds. The Jensen inequality derives
\[
E\Bigl(\sup_{-\tau\le s\le T} \|\mathbf{x}(s)\|\Bigr) \le \sqrt{\Gamma_\psi}
\tag{14}
\]
from (12).

Lemma 4. For s ∈ [t_k, t_k + h], one has
\[
E\bigl(\|\mathbf{z}(s) - \tilde{Z}(t_k)\|^2\bigr) \le \Gamma_\tau h.
\tag{15}
\]
Here the constant Γ_τ is independent of the step-size h.

Proof. If t_k − τ_j ≤ 0 and s − τ_j ≤ 0, under Assumption 2 we have
\[
E\bigl[x_j(s-\tau_j) - \tilde{Z}_j(t_k)\bigr]^2
= E\bigl[\psi_j(s-\tau_j) - \psi_j(t_k-\tau_j)\bigr]^2
\le L_\psi^2 (s - t_k)^2 \le L_\psi^2 h^2.
\tag{16}
\]
If t_k − τ_j ≤ 0 and s − τ_j > 0, with (13) we obtain
\[
E\bigl[x_j(s-\tau_j) - \tilde{Z}_j(t_k)\bigr]^2
= E\bigl[x_j(s-\tau_j) - \psi_j(t_k-\tau_j)\bigr]^2
\le 2E\bigl[x_j(s-\tau_j) - x_j(0)\bigr]^2 + 2E\bigl[\psi_j(0) - \psi_j(t_k-\tau_j)\bigr]^2
\le 2\bigl(\Gamma_2 + L_\psi^2 h\bigr) h.
\tag{17}
\]
If t_k − τ_j > 0 and s − τ_j > 0, we assume t_k − τ_j ∈ [t_{k−q_j}, t_{k−q_j+1}) without loss of generality. Hence,
\[
E\bigl[x_j(s-\tau_j) - \tilde{Z}_j(t_k)\bigr]^2
= E\bigl[x_j(s-\tau_j) - x_j(t_{k-q_j})\bigr]^2
\le 2E\bigl[x_j(s-\tau_j) - x_j(t_k-\tau_j)\bigr]^2 + 2E\bigl[x_j(t_k-\tau_j) - x_j(t_{k-q_j})\bigr]^2
\le 4\Gamma_2 h
\tag{18}
\]
by using inequality (13).

Lemma 5. Let x(t) denote the exact solution of (1). One assumes conditions (6) and (7). Then for the local intermediate value y(t_k) := x(t_k) − (I + θCh)^{−1} Ch x(t_k) + (I + θCh)^{−1} h[A f(x(t_k)) + B g(Z̃(t_k))], one has the estimation
\[
E\bigl(\|\mathbf{x}(t_k) - \mathbf{y}(t_k)\|^2\bigr) \le \Gamma_3 h^2.
\tag{19}
\]

Proof. The difference between the ith components of x(t_k) and y(t_k) leads to
\[
|x_i(t_k) - y_i(t_k)|^2
= \frac{h^2}{(1 + c_i\theta h)^2}\Bigl[c_i x_i(t_k) - \sum_{j=1}^{n} a_{ij} f_j(x_j(t_k)) - \sum_{j=1}^{n} b_{ij} g_j(\tilde{Z}_j(t_k))\Bigr]^2,
\tag{20}
\]
whose expectation, together with c_i > 0, f_i(0) = g_i(0) = 0, the Lipschitz condition (6), and the estimation (12), gives
\[
\begin{aligned}
E\bigl(\|\mathbf{x}(t_k) - \mathbf{y}(t_k)\|^2\bigr)
&\le \sum_{i=1}^{n} \frac{3h^2}{(1 + c_i\theta h)^2}
\Bigl[c_i^2\,E|x_i(t_k)|^2 + n\sum_{j=1}^{n} a_{ij}^2\alpha_j^2\,E|x_j(t_k)|^2 + n\sum_{j=1}^{n} b_{ij}^2\beta_j^2\,E|\tilde{Z}_j(t_k)|^2\Bigr] \\
&\le 3\Gamma_\psi \sum_{i=1}^{n}\Bigl(c_i^2 + n\sum_{j=1}^{n} a_{ij}^2\alpha_j^2 + n\sum_{j=1}^{n} b_{ij}^2\beta_j^2\Bigr) h^2.
\end{aligned}
\tag{21}
\]

Now we discuss local error estimates.

Theorem 6. When one assumes Assumptions 1 and 2 and the conditions of Lemma 3, there exist positive constants Γ_0 and Γ_1 such that
\[
\max_{0\le k\le K-1} \bigl\|E(\delta^{k+1})\bigr\| \le \Gamma_0 h^2,
\tag{22}
\]
\[
\max_{0\le k\le K-1} E\bigl(\|\delta^{k+1}\|^2\bigr) \le \Gamma_1 h^3
\tag{23}
\]
as h ↓ 0.

Proof. The Itô integral form of the ith component of (1) on [t_k, t] implies
\[
x_i(t) - x_i(t_k)
= \int_{t_k}^{t}\Bigl[-c_i x_i(s) + \sum_{j=1}^{n} a_{ij} f_j(x_j(s)) + \sum_{j=1}^{n} b_{ij} g_j(x_j(s-\tau_j))\Bigr]\mathrm{d}s
+ \int_{t_k}^{t} \phi_i(x_i(s))\,\mathrm{d}W_i(s).
\tag{24}
\]
By utilizing the previous identity, the ith component of the difference δ^{k+1} introduced in Definition 1 can be calculated as
\[
\begin{aligned}
\delta_i^{k+1}
&= x_i(t_{k+1}) - x_i(t_k) + \frac{c_i h}{1 + c_i\theta h}\,x_i(t_k)
 - \frac{h}{1 + c_i\theta h}\Bigl[\sum_{j=1}^{n} a_{ij} f_j(x_j(t_k)) + \sum_{j=1}^{n} b_{ij} g_j(\tilde{Z}_j(t_k))\Bigr] \\
&\quad - \phi_i(y_i(t_k))\,\Delta W_i^k - \phi_i'(y_i(t_k))\,\phi_i(y_i(t_k))\,\frac{(\Delta W_i^k)^2 - h}{2} \\
&= \int_{t_k}^{t_{k+1}} c_i\bigl[x_i(t_k) - x_i(s)\bigr]\mathrm{d}s - \frac{c_i^2 h^2 \theta}{1 + c_i\theta h}\,x_i(t_k) \\
&\quad + \int_{t_k}^{t_{k+1}} \sum_{j=1}^{n} a_{ij}\bigl[f_j(x_j(s)) - f_j(x_j(t_k))\bigr]
 + \sum_{j=1}^{n} b_{ij}\bigl[g_j(x_j(s-\tau_j)) - g_j(\tilde{Z}_j(t_k))\bigr]\mathrm{d}s \\
&\quad + \frac{c_i\theta h^2}{1 + c_i\theta h}\Bigl[\sum_{j=1}^{n} a_{ij} f_j(x_j(t_k)) + \sum_{j=1}^{n} b_{ij} g_j(\tilde{Z}_j(t_k))\Bigr] \\
&\quad + \int_{t_k}^{t_{k+1}} \phi_i(x_i(s))\,\mathrm{d}W_i(s) - \int_{t_k}^{t_{k+1}} \phi_i(y_i(t_k))\,\mathrm{d}W_i(s) \\
&\quad - \int_{t_k}^{t_{k+1}}\!\!\int_{t_k}^{s} \phi_i'(y_i(t_k))\,\phi_i(y_i(t_k))\,\mathrm{d}W_i(r)\,\mathrm{d}W_i(s),
\end{aligned}
\tag{25}
\]
where ΔW_i^k = ∫_{t_k}^{t_{k+1}} dW_i(s) and ((ΔW_i^k)² − h)/2 = ∫_{t_k}^{t_{k+1}} ∫_{t_k}^{s} dW_i(r) dW_i(s).

Taking expectations of both sides of (25),
\[
\begin{aligned}
E(\delta_i^{k+1})
&= E\Bigl(\int_{t_k}^{t_{k+1}} c_i \int_{t_k}^{s} -\mu_i(x_i(r))\,\mathrm{d}r\,\mathrm{d}s\Bigr)
 - \frac{c_i^2 h^2 \theta}{1 + c_i\theta h}\,E\bigl(x_i(t_k)\bigr) \\
&\quad + E\Bigl(\int_{t_k}^{t_{k+1}} \sum_{j=1}^{n} a_{ij} \int_{t_k}^{s} f_j'(x_j(r))\,\mu_j(x_j(r))
 + \frac12 f_j''(x_j(r))\,\phi_j^2(x_j(r))\,\mathrm{d}r\,\mathrm{d}s\Bigr) \\
&\quad + E\Bigl(\int_{t_k}^{t_{k+1}} \sum_{j=1}^{n} b_{ij} \int_{t_{k-q_j}}^{s-\tau_j} g_j'(x_j(r))\,\mu_j(x_j(r))
 + \frac12 g_j''(x_j(r))\,\phi_j^2(x_j(r))\,\mathrm{d}r\,\mathrm{d}s\Bigr) \\
&\quad + \frac{c_i\theta h^2}{1 + c_i\theta h}\Bigl[\sum_{j=1}^{n} a_{ij}\,E\bigl(f_j(x_j(t_k))\bigr) + \sum_{j=1}^{n} b_{ij}\,E\bigl(g_j(\tilde{Z}_j(t_k))\bigr)\Bigr]
\end{aligned}
\tag{26}
\]
by (24) and the Itô formula, where μ_i(x_i(r)) := −c_i x_i(r) + Σ_{j=1}^n a_ij f_j(x_j(r)) + Σ_{j=1}^n b_ij g_j(x_j(r − τ_j)). Under the conditions of this theorem, we have |E(δ_i^{k+1})| ≤ Γ̃_0 h² by (14), the Jensen inequality |E(X)| ≤ E(|X|), the triangle inequality, and properties of the definite integral. Then we have max_{0≤k≤K−1} ‖E(δ^{k+1})‖ ≤ Γ_0 h² from the relation between |E(δ_i^{k+1})| and ‖E(δ^{k+1})‖.
Now we prove (23). By the Itô formula,
\[
\begin{aligned}
&\int_{t_k}^{t_{k+1}} \phi_i(x_i(s))\,\mathrm{d}W_i(s)
 - \int_{t_k}^{t_{k+1}} \phi_i(y_i(t_k))\,\mathrm{d}W_i(s)
 - \int_{t_k}^{t_{k+1}}\!\!\int_{t_k}^{s} \phi_i'(y_i(t_k))\,\phi_i(y_i(t_k))\,\mathrm{d}W_i(r)\,\mathrm{d}W_i(s) \\
&\quad = \int_{t_k}^{t_{k+1}} \bigl[\phi_i(x_i(t_k)) - \phi_i(y_i(t_k))\bigr]\mathrm{d}W_i(s) \\
&\qquad + \int_{t_k}^{t_{k+1}}\!\!\int_{t_k}^{s} \phi_i'(x_i(r))\,\mu_i(x_i(r)) + \frac12 \phi_i''(x_i(r))\,\phi_i^2(x_i(r))\,\mathrm{d}r\,\mathrm{d}W_i(s) \\
&\qquad + \int_{t_k}^{t_{k+1}}\!\!\int_{t_k}^{s} \bigl[\phi_i'(x_i(r))\,\phi_i(x_i(r)) - \phi_i'(y_i(t_k))\,\phi_i(y_i(t_k))\bigr]\mathrm{d}W_i(r)\,\mathrm{d}W_i(s).
\end{aligned}
\tag{27}
\]
From (25) and (27), we have
\[
\begin{aligned}
E\bigl(|\delta_i^{k+1}|^2\bigr)
&\le 9\Bigl\{\, c_i^2 h \int_{t_k}^{t_{k+1}} E\bigl[x_i(t_k) - x_i(s)\bigr]^2\mathrm{d}s
 \quad\text{(Cauchy–Schwarz inequality)} \\
&\qquad + \frac{c_i^4 h^4 \theta^2}{(1 + c_i\theta h)^2}\,E|x_i(t_k)|^2 \\
&\qquad + n\sum_{j=1}^{n} a_{ij}^2\alpha_j^2\, h \int_{t_k}^{t_{k+1}} E\bigl[x_j(s) - x_j(t_k)\bigr]^2\mathrm{d}s
 \quad\text{(Cauchy–Schwarz and H\"older inequalities)} \\
&\qquad + n\sum_{j=1}^{n} b_{ij}^2\beta_j^2\, h \int_{t_k}^{t_{k+1}} E\bigl[x_j(s-\tau_j) - \tilde{Z}_j(t_k)\bigr]^2\mathrm{d}s
 \quad\text{(Cauchy–Schwarz and H\"older inequalities)} \\
&\qquad + \frac{n c_i^2 h^4 \theta^2}{(1 + c_i\theta h)^2}\Bigl[\sum_{j=1}^{n} a_{ij}^2\, E|f_j(x_j(t_k))|^2 + \sum_{j=1}^{n} b_{ij}^2\, E|g_j(\tilde{Z}_j(t_k))|^2\Bigr] \\
&\qquad + \int_{t_k}^{t_{k+1}} \gamma_i^2\, E\bigl[x_i(t_k) - y_i(t_k)\bigr]^2\mathrm{d}s
 \quad\text{(It\^o isometry)} \\
&\qquad + \int_{t_k}^{t_{k+1}} (s - t_k) \int_{t_k}^{s}
 E\bigl|\phi_i'(x_i(r))\,\mu_i(x_i(r)) + \tfrac12 \phi_i''(x_i(r))\,\phi_i^2(x_i(r))\bigr|^2\,\mathrm{d}r\,\mathrm{d}s
 \quad\text{(Cauchy–Schwarz and It\^o isometry)} \\
&\qquad + \int_{t_k}^{t_{k+1}}\!\!\int_{t_k}^{s}
 E\bigl[\phi_i'(x_i(r))\,\phi_i(x_i(r)) - \phi_i'(y_i(t_k))\,\phi_i(y_i(t_k))\bigr]^2\mathrm{d}r\,\mathrm{d}s
 \quad\text{(It\^o isometry)}\Bigr\} \\
&\le 9\Bigl\{\, h\Bigl(c_i^2 + n\sum_{j=1}^{n} a_{ij}^2\alpha_j^2\Bigr)\int_{t_k}^{t_{k+1}} \Gamma_2 h\,\mathrm{d}s
 \quad\text{(by (13))}
 \;+\; h\Bigl(n\sum_{j=1}^{n} b_{ij}^2\beta_j^2\Bigr)\int_{t_k}^{t_{k+1}} \Gamma_\tau h\,\mathrm{d}s
 \quad\text{(by (15))} \\
&\qquad + \frac{h^4 c_i^2 \theta^2}{(1 + c_i\theta h)^2}\,\Gamma_\psi\Bigl[c_i^2 + n\sum_{j=1}^{n} a_{ij}^2\alpha_j^2 + n\sum_{j=1}^{n} b_{ij}^2\beta_j^2\Bigr]
 \quad\text{(by (6) and (12))} \\
&\qquad + \int_{t_k}^{t_{k+1}} \gamma_i^2\,\Gamma_3 h^2\,\mathrm{d}s
 \quad\text{(by (19))}
 \;+\; \int_{t_k}^{t_{k+1}} \Gamma_4 (s - t_k)^2\,\mathrm{d}s
 \quad\text{(moments bounded)} \\
&\qquad + \int_{t_k}^{t_{k+1}}\!\!\int_{t_k}^{s} \Gamma_5 h\,\mathrm{d}r\,\mathrm{d}s
 \quad\text{(triangle inequality, (6), (13), (19))}\Bigr\} \\
&\le \tilde{\Gamma}_1 h^3.
\end{aligned}
\tag{28}
\]
Finally, it is easy to prove E(‖δ^{k+1}‖²) ≤ Γ_1 h³.

Thanks to Theorem 1 in [18], we can conclude that
\[
E\bigl(\|\epsilon_k\|^2\bigr) \le \Gamma h^2 \quad \forall h \in (0, 1);
\tag{29}
\]
that is, the mean-square order of the global error of the SSTM is 1.
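In numerical practice, the order p of Definition 2 is estimated from computed errors at a sequence of step sizes; a small Python helper along these lines (ours, not from the paper):

```python
import numpy as np

def empirical_order(hs, errs):
    """Least-squares slope of log(error) versus log(h); by Definition 2 this
    slope approximates the mean-square convergence order p."""
    return np.polyfit(np.log(np.asarray(hs)), np.log(np.asarray(errs)), 1)[0]

# For step-size halving, a local estimate is log2(err(2h) / err(h)).
```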
4. Stability of SSTM

We are concerned with the stability of the SSTM solution. Since (1) has an equilibrium solution x(t) ≡ 0, we will discuss whether the SSTM solution X^k with a positive step-size can attain a similar stability when k goes to infinity. First we give a sufficient condition for the exponential stability in the mean-square sense of the equilibrium solution. The references [13, 19] give the condition as
\[
-2c_i + \sum_{j=1}^{n} |a_{ij}|\,\alpha_j + \sum_{j=1}^{n} |b_{ij}|\,\beta_j
+ \alpha_i\sum_{j=1}^{n} |a_{ji}| + \beta_i\sum_{j=1}^{n} |b_{ji}| + \gamma_i^2 < 0
\tag{30}
\]
for every i (i = 1, …, n).

Definition 7. A numerical method is said to be mean-square stable (MS-stable) if there exists an h_0 > 0 such that any application of the method to problem (1) generates numerical approximations X_i^k which satisfy
\[
\lim_{k\to\infty} E\bigl(|X_i^k|^2\bigr) = 0, \quad i = 1, 2, \ldots, n,
\tag{31}
\]
for all h ∈ (0, h_0).

Theorem 8. Assume that (6), (30), and (1 − θ)c_i h ≤ 1 are satisfied; then the SSTM (2) is mean-square stable if h ∈ (0, h_0), where
\[
h_0 =
\begin{cases}
\displaystyle\min_{1\le i\le n}\{1,\, h_i\} & \text{for } \theta = 1,\\[6pt]
\displaystyle\min_{1\le i\le n}\Bigl\{1,\, \frac{1}{(1-\theta)c_i},\, h_i\Bigr\} & \text{for } \theta \in [0, 1).
\end{cases}
\tag{32}
\]
Here h_i is the smallest positive root of the cubic equation in z given by
\[
\mathsf{A}_i z^3 + \mathsf{B}_i z^2 + \mathsf{C}_i z + \mathsf{D}_i = 0,
\tag{33}
\]
where the coefficients mean
\[
\begin{aligned}
\mathsf{A}_i &= \frac{\kappa_i^2}{2}\Bigl(-(1-\theta)c_i + \sum_{j=1}^{n} |a_{ij}|\alpha_j + \sum_{j=1}^{n} |b_{ij}|\beta_j\Bigr)^2, \\
\mathsf{B}_i &= \gamma_i^2\Bigl(-(1-\theta)c_i + \sum_{j=1}^{n} |a_{ij}|\alpha_j + \sum_{j=1}^{n} |b_{ij}|\beta_j\Bigr)^2
 + \kappa_i^2\Bigl(\sum_{j=1}^{n} |a_{ij}|\alpha_j + \sum_{j=1}^{n} |b_{ij}|\beta_j\Bigr) - \kappa_i^2(1-\theta)c_i, \\
\mathsf{C}_i &= \Bigl(-(1-\theta)c_i + \sum_{j=1}^{n} |a_{ij}|\alpha_j + \sum_{j=1}^{n} |b_{ij}|\beta_j\Bigr)^2
 + 2\gamma_i^2\Bigl(-(1-\theta)c_i + \sum_{j=1}^{n} |a_{ij}|\alpha_j + \sum_{j=1}^{n} |b_{ij}|\beta_j\Bigr) + \frac{\kappa_i^2}{2}, \\
\mathsf{D}_i &= 2\Bigl(-c_i + \sum_{j=1}^{n} |a_{ij}|\alpha_j + \sum_{j=1}^{n} |b_{ij}|\beta_j\Bigr) + \gamma_i^2
\end{aligned}
\tag{34}
\]
for i = 1, 2, …, n.
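The bound h_0 of Theorem 8 is computable in practice: evaluate the coefficients (34) and take the smallest positive root of (33). Below is a Python sketch following the grouping of (34) as reconstructed above; the helper names are ours:

```python
import numpy as np

def cubic_coefficients(i, theta, c, A, B, alpha, beta, gamma, kappa):
    """Coefficients (34) of the cubic (33) for component i (0-based)."""
    Sa = sum(abs(A[i, j]) * alpha[j] for j in range(len(alpha)))
    Sb = sum(abs(B[i, j]) * beta[j] for j in range(len(beta)))
    m = -(1.0 - theta) * c[i] + Sa + Sb
    Ai = 0.5 * kappa[i] ** 2 * m ** 2
    Bi = (gamma[i] ** 2 * m ** 2 + kappa[i] ** 2 * (Sa + Sb)
          - kappa[i] ** 2 * (1.0 - theta) * c[i])
    Ci = m ** 2 + 2.0 * gamma[i] ** 2 * m + 0.5 * kappa[i] ** 2
    Di = 2.0 * (-c[i] + Sa + Sb) + gamma[i] ** 2
    return Ai, Bi, Ci, Di

def theorem8_h0(theta, c, A, B, alpha, beta, gamma, kappa):
    """h_0 of (32): combine the smallest positive roots h_i over all i."""
    h0 = np.inf
    for i in range(len(c)):
        roots = np.roots(cubic_coefficients(i, theta, c, A, B,
                                            alpha, beta, gamma, kappa))
        hi = min(r.real for r in roots if abs(r.imag) < 1e-12 and r.real > 0)
        cap = 1.0 if theta == 1.0 else min(1.0, 1.0 / ((1.0 - theta) * c[i]))
        h0 = min(h0, cap, hi)
    return h0
```

A positive root always exists here because A_i > 0 and D_i < 0 under (30), as the proof below shows.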
Proof. Squaring both sides of (4b) and (4a), we have
\[
\begin{aligned}
|X_i^{k+1}|^2 &= (Y_i^k)^2 + \bigl[\phi_i(Y_i^k)\,\Delta W_i^k\bigr]^2
 + \Bigl[\phi_i'(Y_i^k)\,\phi_i(Y_i^k)\,\frac{(\Delta W_i^k)^2 - h}{2}\Bigr]^2 \\
&\quad + 2 Y_i^k\,\phi_i(Y_i^k)\,\Delta W_i^k
 + \bigl[\phi_i(Y_i^k)\,\Delta W_i^k + Y_i^k\bigr]\,\phi_i'(Y_i^k)\,\phi_i(Y_i^k)\,\bigl[(\Delta W_i^k)^2 - h\bigr],
\end{aligned}
\tag{35}
\]
\[
\begin{aligned}
(1 + \theta c_i h)^2\,|Y_i^k|^2
&= \bigl[(1 - (1-\theta)c_i h)\,X_i^k\bigr]^2
 + h^2\Bigl[\sum_{j=1}^{n} a_{ij} f_j(X_j^k)\Bigr]^2
 + h^2\Bigl[\sum_{j=1}^{n} b_{ij} g_j(Z_j^k)\Bigr]^2 \\
&\quad + \sum_{j=1}^{n}\Bigl\{2h\,(1 - (1-\theta)c_i h)\,X_i^k\,\bigl[a_{ij} f_j(X_j^k) + b_{ij} g_j(Z_j^k)\bigr]\Bigr\} \\
&\quad + 2h^2\Bigl[\sum_{j=1}^{n} a_{ij} f_j(X_j^k)\Bigr]\Bigl[\sum_{j=1}^{n} b_{ij} g_j(Z_j^k)\Bigr].
\end{aligned}
\tag{36}
\]
Taking expectations of both sides of (35), we get
\[
E\bigl(|X_i^{k+1}|^2\bigr)
= E\bigl(|Y_i^k|^2\bigr) + h\,E\bigl(|\phi_i(Y_i^k)|^2\bigr)
+ \frac{h^2}{2}\,E\bigl(|\phi_i'(Y_i^k)\,\phi_i(Y_i^k)|^2\bigr)
\le \Bigl(1 + \gamma_i^2 h + \frac{h^2}{2}\kappa_i^2\Bigr)\,E\bigl(|Y_i^k|^2\bigr).
\tag{37}
\]
Together with (36), we have
\[
\begin{aligned}
E\bigl(|X_i^{k+1}|^2\bigr)
&\le \frac{1 + \gamma_i^2 h + (h^2/2)\kappa_i^2}{(1 + \theta c_i h)^2}
\Bigl\{\bigl[1 - (1-\theta)c_i h\bigr]^2\,E\bigl(|X_i^k|^2\bigr) \\
&\quad + h^2\Bigl(\sum_{r=1}^{n} |a_{ir}|\alpha_r\Bigr)\Bigl(\sum_{j=1}^{n} |a_{ij}|\alpha_j\,E\bigl(|X_j^k|^2\bigr)\Bigr)
 + h^2\Bigl(\sum_{r=1}^{n} |b_{ir}|\beta_r\Bigr)\Bigl(\sum_{j=1}^{n} |b_{ij}|\beta_j\,E\bigl(|Z_j^k|^2\bigr)\Bigr) \\
&\quad + \sum_{j=1}^{n} h\,|1 - (1-\theta)c_i h|\,
 \Bigl[|a_{ij}|\alpha_j\bigl[E(|X_i^k|^2) + E(|X_j^k|^2)\bigr]
 + |b_{ij}|\beta_j\bigl[E(|X_i^k|^2) + E(|Z_j^k|^2)\bigr]\Bigr] \\
&\quad + 2h^2\Bigl(\sum_{j=1}^{n} |a_{ij}|\alpha_j\Bigr)\Bigl(\sum_{j=1}^{n} |b_{ij}|\beta_j\Bigr)
 \bigl[E(|X_j^k|^2) + E(|Z_j^k|^2)\bigr]\Bigr\}.
\end{aligned}
\tag{38}
\]
Thus, we attain
\[
E\bigl(|X_i^{k+1}|^2\bigr)
\le \Bigl[P(h) + \sum_{j=1}^{n} Q_j(h) + \sum_{j=1}^{n} R_j(h)\Bigr]\,
\max_{1\le j\le n}\bigl\{E(|X_i^k|^2),\, E(|X_j^k|^2),\, E(|Z_j^k|^2)\bigr\},
\tag{39}
\]
where
\[
P(h) = \frac{1 + \gamma_i^2 h + (h^2\kappa_i^2/2)}{(1 + \theta c_i h)^2}
\Bigl\{\bigl[1 - (1-\theta)c_i h\bigr]^2
 + h\sum_{j=1}^{n} |1 - (1-\theta)c_i h|\,\bigl(|a_{ij}|\alpha_j + |b_{ij}|\beta_j\bigr)\Bigr\},
\tag{40}
\]
\[
Q_j(h) = \frac{1 + \gamma_i^2 h + (h^2\kappa_i^2/2)}{(1 + \theta c_i h)^2}
\Bigl\{h^2\,|a_{ij}|\alpha_j \sum_{r=1}^{n}\bigl(|a_{ir}|\alpha_r + |b_{ir}|\beta_r\bigr)
 + h\,|1 - (1-\theta)c_i h|\,|a_{ij}|\alpha_j\Bigr\},
\tag{41}
\]
\[
R_j(h) = \frac{1 + \gamma_i^2 h + (h^2\kappa_i^2/2)}{(1 + \theta c_i h)^2}
\Bigl\{h^2\,\bigl(|a_{ij}|\alpha_j + |b_{ij}|\beta_j\bigr)\Bigl(\sum_{r=1}^{n} |b_{ir}|\beta_r\Bigr)
 + h\,|1 - (1-\theta)c_i h|\,|b_{ij}|\beta_j\Bigr\}.
\tag{42}
\]
Note that the assumption of Theorem 8 implies the nonnegativity of 1 − (1 − θ)c_i h.

Obviously, when k → +∞, E(|X_i^k|²) → 0 if the inequality P(h) + Σ_{j=1}^n Q_j(h) + Σ_{j=1}^n R_j(h) < 1 holds, which is equivalent to the inequality A_i h³ + B_i h² + C_i h + D_i < 0. Furthermore, it is easy to prove A_i > 0 and D_i < 0 by virtue of (30). By Vieta's formulas, the product of the three roots of (33) satisfies z₁z₂z₃ = −(D_i/A_i) > 0. This means that (33) has at least one positive root. Therefore, let h_i denote the smallest positive root of the equation. Moreover, we note that at the origin the right-hand-side polynomial of (33) is negative. This completes the proof.
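Equivalently, mean-square stability for a given h can be checked directly from (39)-(42) by testing P(h) + Σ_j Q_j(h) + Σ_j R_j(h) < 1 for each i; a minimal sketch (our own helper, not from the paper):

```python
def ms_stable(h, i, theta, c, A, B, alpha, beta, gamma, kappa):
    """Direct check of P(h) + sum_j Q_j(h) + sum_j R_j(h) < 1 from (39)-(42)."""
    n = len(c)
    pref = (1 + gamma[i] ** 2 * h + h ** 2 * kappa[i] ** 2 / 2) \
           / (1 + theta * c[i] * h) ** 2
    r = abs(1 - (1 - theta) * c[i] * h)
    Sa = sum(abs(A[i, j]) * alpha[j] for j in range(n))
    Sb = sum(abs(B[i, j]) * beta[j] for j in range(n))
    P = pref * (r ** 2 + h * r * (Sa + Sb))
    Q = pref * sum(h ** 2 * abs(A[i, j]) * alpha[j] * (Sa + Sb)
                   + h * r * abs(A[i, j]) * alpha[j] for j in range(n))
    R = pref * sum(h ** 2 * (abs(A[i, j]) * alpha[j] + abs(B[i, j]) * beta[j]) * Sb
                   + h * r * abs(B[i, j]) * beta[j] for j in range(n))
    return P + Q + R < 1
```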
5. Numerical Results

Now we apply the introduced SSTM method to two test cases of SDHNNs in order to compare its performance with the split-step θ-method in [13], which has strong convergence order 0.5.

The mean-square error ε of the numerical approximations at time T versus the step-size is depicted in log-log diagrams, where ε := √((1/2000) Σ_{r=1}^{2000} ‖X_{ω_r}^K − x(T)‖²). Here x(T) stands for the value of the exact solution of (1) at time T = 3, and X_{ω_r}^K is its numerical approximation along the rth sample path {ω_r : r = 1, 2, …, 2000}. Since no closed-form solution is available, we compute a reference solution using the split-step θ-Milstein method (2) with step-size h = 2^{−12}, and we will call this the "exact solution."

Example 9. Consider the following two-dimensional stochastic delay Hopfield neural network of the form
\[
\mathrm{d}\begin{pmatrix} x_1(t) \\ x_2(t) \end{pmatrix}
= -C\begin{pmatrix} x_1(t) \\ x_2(t) \end{pmatrix}\mathrm{d}t
+ A\begin{pmatrix} f(x_1(t)) \\ f(x_2(t)) \end{pmatrix}\mathrm{d}t
+ B\begin{pmatrix} g(x_1(t-\tau_1)) \\ g(x_2(t-\tau_2)) \end{pmatrix}\mathrm{d}t
+ \Phi\,\mathrm{d}W(t)
\tag{43}
\]
on t ≥ 0 with the initial conditions x_1(t) = t + 1, t ∈ [−1, 0], and x_2(t) = t + 1, t ∈ [−2, 0].

Case 1. Let f(x) = sin(x), g(x) = arctan(x),
\[
C = \begin{pmatrix} 20 & 0 \\ 0 & 20 \end{pmatrix},\quad
A = \begin{pmatrix} 4 & -5 \\ 6 & 3 \end{pmatrix},\quad
B = \begin{pmatrix} -6 & 4 \\ 3 & 1 \end{pmatrix},\quad
\Phi = \begin{pmatrix} x_1(t) & 0 \\ 0 & -\sqrt{5}\,x_2(t) \end{pmatrix}.
\tag{44}
\]

Case 2. Let f(x) = x, g(x) = tanh(x),
\[
C = \begin{pmatrix} 10 & 0 \\ 0 & 10 \end{pmatrix},\quad
A = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix},\quad
B = \begin{pmatrix} 5 & -1 \\ 1 & 5 \end{pmatrix},\quad
\Phi = \begin{pmatrix} 1.5\,x_1(t) & 0 \\ 0 & -1.5\,x_2(t) \end{pmatrix}.
\tag{45}
\]
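As an illustration of how such an experiment can be set up, the following Python sketch integrates Case 1 of (43)-(44) by the SSTM. The driver is ours, not from the paper; an error estimate in the spirit of ε would additionally require reusing the same Brownian increments for the h = 2^{−12} reference path:

```python
import numpy as np

rng = np.random.default_rng(0)

# Case 1 data from (44); tau = (1, 2) and psi(t) = t + 1 on the delay interval.
n    = 2
Cd   = np.array([20.0, 20.0])                # diagonal entries of C
A    = np.array([[4.0, -5.0], [6.0, 3.0]])
B    = np.array([[-6.0, 4.0], [3.0, 1.0]])
tau  = np.array([1.0, 2.0])
f, g = np.sin, np.arctan
phi  = lambda x: np.array([x[0], -np.sqrt(5.0) * x[1]])
dphi = lambda x: np.array([1.0, -np.sqrt(5.0)])   # phi is linear in each component
psi  = lambda t: np.array([t + 1.0, t + 1.0])

def sstm_path(h, T, theta):
    """Integrate (43) on [0, T] by the SSTM and return the state at time T."""
    K = int(round(T / h))
    q = np.ceil(tau / h).astype(int)         # delay offsets as in (3)
    X = [psi(0.0)]
    for k in range(K):
        Z = np.array([psi(k * h - tau[j])[j] if k * h - tau[j] <= 0
                      else X[k - q[j]][j] for j in range(n)])
        dW = rng.normal(0.0, np.sqrt(h), n)
        M = 1.0 / (1.0 + theta * Cd * h)
        Y = X[k] - M * (Cd * h * X[k]) + M * h * (A @ f(X[k]) + B @ g(Z))
        X.append(Y + phi(Y) * dW + 0.5 * dphi(Y) * phi(Y) * (dW ** 2 - h))
    return X[-1]
```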
In Figure 1, the SSTM is applied with 7 different step-sizes: h_m = 2^{m−12} for m = 1, 2, …, 7. Two pairs of time delays (τ_1, τ_2) are set to (1, 2) and (1.13, 2.31). The first pair is commensurable with every h_m, whereas the second pair is incommensurable with h_m. The computation errors ε versus step-sizes h are plotted on a log-log scale, and reference lines of slope 1 are added. The figure illustrates that the SSTM raises the strong order of the split-step θ-method [13] to at least 1 for SDHNNs.

Next, Table 1 shows a comparison of stability intervals between the SST and the SSTM for (43). The two sets of intervals in the table are calculated through Theorem 8 in this paper and Theorem 5.1 in [13]. It is easy to see that the stability intervals of the two methods are similar.

We know that Theorem 5.1 in [13] and Theorem 8 in this paper give only sufficient conditions for mean-square stability. Therefore, the stability intervals given by these theorems are only subsets of the real ones. To confirm this, we calculated the sample moments of the approximate solution and plotted them against the time t. Here the sample moment η_i means (1/2000) Σ_{r=1}^{2000} ‖X_{ω_r}^i‖² for the numerical solution X_{ω_r}^i approximating x(t_i) along the rth sample path. Figures 2, 3, 4, 5, and 6 depict the results by the SST and the SSTM on a log-scaled vertical axis. All the figures give a rough estimate of the stability interval in each case.
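The sample moments η can be estimated by straightforward Monte-Carlo simulation; a sketch reusing sstm_path from the previous listing (the path count is reduced here only to keep the run cheap):

```python
import numpy as np

def sample_moment(h, t_end, theta, paths=2000):
    """Estimate eta at time t_end: the average of ||X||^2 over sample paths."""
    return np.mean([np.sum(sstm_path(h, t_end, theta) ** 2)
                    for _ in range(paths)])

# Rough stability scan in the spirit of Figure 2 (Case 1, theta = 0):
for h in (1 / 10, 1 / 11, 1 / 12):
    print(h, sample_moment(h, 40.0, 0.0, paths=200))
```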
π‘Ÿ=1 β€–π‘‹πœ”π‘Ÿ β€– for the numerical solution π‘‹πœ”π‘Ÿ
8
Abstract and Applied Analysis
10−1
10−1
10−2
Error πœ€
Error πœ€
10
−2
10−3
10−3
10−4
10−4
10−4
10−3
10−2
10−1
10−5
10−4
10−3
Step-size β„Ž
Step-size β„Ž
πœƒ = 0.0
πœƒ = 0.5
10−1
10−2
πœƒ = 0.0
πœƒ = 0.5
πœƒ = 1.0
Reference line
(a) Case 1 with 𝜏1 = 1, 𝜏2 = 2
πœƒ = 1.0
Reference line
(b) Case 2 with 𝜏1 = 1, 𝜏2 = 2
10−1
10−2
10−2
Error πœ€
Error πœ€
10−1
10−3
10−3
10−4
10−4
10−3
10−2
10−1
10−4
10−4
10−3
Step-size β„Ž
πœƒ = 0.0
πœƒ = 0.5
10−1
10−2
Step-size β„Ž
πœƒ = 1.0
Reference line
πœƒ = 0.0
πœƒ = 0.5
(c) Case 1 with 𝜏1 = 1.13, 𝜏2 = 2.31
πœƒ = 1.0
Reference line
(d) Case 2 with 𝜏1 = 1.13, 𝜏2 = 2.31
Figure 1: Errors versus step-sizes for the SDHNNs (43).
Table 1: Calculated intervals of step-size for mean-square stability of the numerical schemes.
Numerical scheme
Split-step πœƒ-Milstein method (πœƒ = 0.0)
Split-step πœƒ-Milstein method (πœƒ = 0.2)
Split-step πœƒ-Milstein method (πœƒ = 0.5)
Split-step πœƒ-Milstein method (πœƒ = 0.8)
Split-step πœƒ-Milstein method (πœƒ = 1.0)
Split-step πœƒ-method (πœƒ = 0.0)
Split-step πœƒ-method (πœƒ = 0.2)
Split-step πœƒ-method (πœƒ = 0.5)
Split-step πœƒ-method (πœƒ = 0.8)
Split-step πœƒ-method (πœƒ = 1.0)
Case 1
(0.0000, 0.0500]
(0.0000, 0.0625]
(0.0000, 0.1000]
(0.0000, 0.0650)
(0.0000, 0.0500)
(0.0000, 0.0500]
(0.0000, 0.0625]
(0.0000, 0.1000]
(0.0000, 0.0689)
(0.0000, 0.0540)
Case 2
(0.0000, 0.1000]
(0.0000, 0.1250]
(0.0000, 0.2000]
(0.0000, 0.5000]
(0.0000, 0.3534)
(0.0000, 0.1000]
(0.0000, 0.1250]
(0.0000, 0.2000]
(0.0000, 0.5000]
(0.0000, 0.5792)
[Figure 2: sample second moment η versus time t for step-sizes h = 1/10, 1/11, 1/12: (a) SST; (b) SSTM.]

Figure 2: Mean-square stability comparison of SST and SSTM for the SDHNNs (43), Case 1, θ = 0.

[Figure 3: sample second moment η versus time t for step-sizes h = 1/5, 1/6, 1/7, 1/8: (a) SST; (b) SSTM.]

Figure 3: Mean-square stability comparison of SST and SSTM for the SDHNNs (43), Case 2, θ = 0.
6. Concluding Remarks

We have introduced the split-step θ-Milstein method (SSTM), which exhibits a higher strong convergence rate than the split-step θ-method (SST; see [13]) for stochastic delay Hopfield neural networks, and the scheme proposed in this paper can deal with incommensurable time delays, which were not considered in [13]. We have given the proof of the convergence results, which has generally been omitted in previous works on the same subject. By comparing the stability intervals of the step size for the SST and the SSTM on a test example, we find that they exhibit similar mean-square stability.

In this paper, we have found a delay-independent sufficient condition for the mean-square stability of the split-step θ-Milstein method applied to nonlinear stochastic delay Hopfield neural networks.
[Figure 4: sample second moment η versus time t for step-sizes h = 1/6, 1/7, 1/8: (a) SST; (b) SSTM.]

Figure 4: Mean-square stability comparison of SST and SSTM for the SDHNNs (43), Case 1, θ = 0.2.

[Figure 5: sample second moment η versus time t for step-sizes h = 1/3, 1/4, 1/5, 1/6: (a) SST; (b) SSTM.]

Figure 5: Mean-square stability comparison of SST and SSTM for the SDHNNs (43), Case 2, θ = 0.2.
Further, Figure 6 suggests that the value of h_0, the right end-point of the stability interval, given by Theorem 5.1 in [13] and Theorem 8 in this paper is much smaller than the true value when θ is close to unity. In that case, we need other techniques for the stability analysis of this kind of stochastic delay differential system; to the best of our knowledge, the works in [20, 21] put forward good attempts. On the other hand, for stochastic delay differential equations, some other types of stability have been successfully discussed for Euler-type schemes, for example, mean-square exponential stability [12], delay-dependent stability [22], delay-dependent exponential stability [23], and almost sure exponential stability [24]. For Milstein-type schemes, in view of the more sophisticated derivations involved, these issues remain challenging topics for future research.
[Figure 6: sample second moment η versus time t comparing SSTM and SST with θ = 0.8 and θ = 1.0 at h = 1: (a) Case 1; (b) Case 2.]

Figure 6: Mean-square stability comparison of SST and SSTM for the SDHNNs (43).

Acknowledgment

The authors would like to thank Dr. E. Buckwar and Dr. A. Rathinasamy for their valuable suggestions. The authors also thank the editor and the anonymous referees, whose
careful reading and constructive feedback have improved this work. The first author is partially supported by E-Institutes of Shanghai Municipal Education Commission (no. E03004), the National Natural Science Foundation of China (no. 10901106), the Natural Science Foundation of Shanghai (no. 09ZR1423200), and the Innovation Program of Shanghai Municipal Education Commission (no. 09YZ150). The third author is partially supported by the Grants-in-Aid for Scientific Research (no. 24540154) supplied by the Japan Society for the Promotion of Science.
References
[1] J. J. Hopfield, “Neural networks and physical systems with
emergent collective computational abilities,” Proceedings of the
National Academy of Sciences of the United States of America,
vol. 79, no. 8, pp. 2554–2558, 1982.
[2] J. J. Hopfield and D. W. Tank, "Neural computation of decisions
in optimization problems,” Biological Cybernetics, vol. 52, no. 3,
pp. 141–152, 1985.
[3] G. Joya, M. A. Atencia, and F. Sandoval, “Hopfield neural
networks for optimization: study of the different dynamics,"
Neurocomputing, vol. 43, no. 1–4, pp. 219–237, 2002.
[4] Y. Sun, “Hopfield neural network based algorithms for image
restoration and reconstruction. I. Algorithms and simulations,”
IEEE Transactions on Signal Processing, vol. 48, no. 7, pp. 2105–
2118, 2000.
[5] S. Young, P. Scott, and N. Nasrabadi, “Object recognition using
multilayer Hopfield neural network,” IEEE Transactions on
Image Processing, vol. 6, no. 3, pp. 357–372, 1997.
[6] G. Pajares, “A Hopfield neural network for image change
detection,” IEEE Transactions on Neural Networks, vol. 17, no.
5, pp. 1250–1264, 2006.
[7] L. Wan and J. H. Sun, “Mean square exponential stability of
stochastic delayed Hopfield neural networks,” Physics Letters A,
vol. 343, no. 4, pp. 306–318, 2005.
[8] Z. Wang, H. Shu, J. Fang, and X. Liu, “Robust stability for
stochastic Hopfield neural networks with time delays,” Nonlinear Analysis, vol. 7, no. 5, pp. 1119–1128, 2006.
[9] Z. D. Wang, Y. R. Liu, K. Fraser, and X. H. Liu, “Stochastic
stability of uncertain Hopfield neural networks with discrete
and distributed delays,” Physics Letters A, vol. 354, no. 4, pp.
288–297, 2006.
[10] J. Šíma and P. Orponen, "General-purpose computation
with neural networks: a survey of complexity theoretic results,”
Neural Computation, vol. 15, no. 12, pp. 2727–2778, 2003.
[11] J. R. Raol and H. Madhuranath, “Neural network architectures
for parameter estimation of dynamical systems,” IEE Proceedings, vol. 143, no. 4, pp. 387–394, 1996.
[12] R. H. Li, W. Pang, and P. Leung, “Exponential stability of numerical solutions to stochastic delay Hopfield neural networks,”
Neurocomputing, vol. 73, no. 4–6, pp. 920–926, 2010.
[13] A. Rathinasamy, "The split-step θ-methods for stochastic delay
Hopfield neural networks,” Applied Mathematical Modelling,
vol. 36, no. 8, pp. 3477–3485, 2012.
[14] D. J. Higham, X. Mao, and A. M. Stuart, “Strong convergence
of Euler-type methods for nonlinear stochastic differential
equations,” SIAM Journal on Numerical Analysis, vol. 40, no. 3,
pp. 1041–1063, 2002.
[15] H. Zhang, S. Gan, and L. Hu, “The split-step backward Euler
method for linear stochastic delay differential equations,” Journal of Computational and Applied Mathematics, vol. 225, no. 2,
pp. 558–568, 2009.
[16] X. Wang and S. Gan, “The improved split-step backward Euler
method for stochastic differential delay equations,” International Journal of Computer Mathematics, vol. 88, no. 11, pp.
2359–2378, 2011.
[17] X. R. Mao, Stochastic Differential Equations and Applications,
Horwood, Chichester, UK, 2nd edition, 2007.
[18] E. Buckwar, “One-step approximations for stochastic functional
differential equations,” Applied Numerical Mathematics, vol. 56,
no. 5, pp. 667–681, 2006.
[19] Q. Zhou and L. Wan, “Exponential stability of stochastic
delayed Hopfield neural networks,” Applied Mathematics and
Computation, vol. 199, no. 1, pp. 84–89, 2008.
[20] Y. Saito and T. Mitsui, “Mean-square stability of numerical
schemes for stochastic differential systems,” Vietnam Journal of
Mathematics, vol. 30, pp. 551–560, 2002.
[21] E. Buckwar and C. Kelly, “Towards a systematic linear stability
analysis of numerical methods for systems of stochastic differential equations,” SIAM Journal on Numerical Analysis, vol. 48,
no. 1, pp. 298–321, 2010.
[22] C. Huang, S. Gan, and D. Wang, “Delay-dependent stability
analysis of numerical methods for stochastic delay differential
equations,” Journal of Computational and Applied Mathematics,
vol. 236, no. 14, pp. 3514–3527, 2012.
[23] X. Qu and C. Huang, “Delay-dependent exponential stability
of the backward Euler method for nonlinear stochastic delay
differential equations,” International Journal of Computer Mathematics, vol. 89, no. 8, pp. 1039–1050, 2012.
[24] F. Wu, X. Mao, and L. Szpruch, “Almost sure exponential
stability of numerical solutions for stochastic delay differential
equations,” Numerische Mathematik, vol. 115, no. 4, pp. 681–697,
2010.