
Hindawi Publishing Corporation
Journal of Applied Mathematics
Volume 2012, Article ID 693163, 12 pages
doi:10.1155/2012/693163
Research Article
Exponential Stability for a Class of Stochastic
Reaction-Diffusion Hopfield Neural Networks with
Delays
Xiao Liang1 and Linshan Wang2
1 College of Information Science and Engineering, Ocean University of China, Qingdao 266100, China
2 Department of Mathematics, Ocean University of China, Qingdao 266100, China
Correspondence should be addressed to Linshan Wang, wangls@ouc.edu.cn
Received 7 August 2011; Accepted 28 November 2011
Academic Editor: Jitao Sun
Copyright © 2012 X. Liang and L. Wang. This is an open access article distributed under the
Creative Commons Attribution License, which permits unrestricted use, distribution, and
reproduction in any medium, provided the original work is properly cited.
This paper studies the asymptotic behavior of a class of delayed reaction-diffusion Hopfield neural networks driven by finite-dimensional Wiener processes. New sufficient conditions are established to guarantee the mean square exponential stability of this system by using Poincaré's inequality and stochastic analysis techniques. The almost sure exponential stability of the system is proved by using the Burkholder-Davis-Gundy inequality, the Chebyshev inequality, and the Borel-Cantelli lemma. Finally, an example is given to illustrate the effectiveness of the proposed approach, and a simulation is carried out in Matlab.
1. Introduction
Recently, the dynamics of Hopfield neural networks with reaction-diffusion terms have been deeply investigated, because their various generalizations have been widely used in practical engineering problems such as pattern recognition, associative memory, and combinatorial optimization (see [1–3]). Under closer scrutiny, however, a more realistic model should include some of the past states of the system, and the theory of functional differential equations has been extensively developed [4, 5]; meanwhile, many authors have considered the asymptotic behavior of neural networks with delays [6–9]. In fact, random perturbation is unavoidable in any situation [3, 10]; if we include some environmental noise in these systems, we obtain a more realistic model of the situation [3, 11–16]. So, this paper is devoted to the exponential stability of the following delayed reaction-diffusion Hopfield neural networks driven by finite-dimensional Wiener processes:
\[
\begin{aligned}
du_i(t,x) &= \Biggl(\sum_{j=1}^{l}\frac{\partial}{\partial x_j}\Bigl(D_{ij}(x)\frac{\partial u_i}{\partial x_j}\Bigr) - a_i u_i + \sum_{j=1}^{n} c_{ij} f_j\bigl(u_j(t-r,x)\bigr)\Biggr)dt + \sum_{j=1}^{m} g_{ij}\bigl(u_i(t-r,x)\bigr)\,dW_j, \qquad t \ge 0, \\
\frac{\partial u_i}{\partial \nu}\bigg|_{\partial O} &= 0, \qquad t \ge 0, \\
u_i(\theta,x) &= \varphi_i(\theta,x), \qquad x \in O \subset \mathbb{R}^l,\ \theta \in [-r,0],\ i = 1,2,\ldots,n.
\end{aligned}
\tag{1.1}
\]
There are n neural network units in this system, and u_i(t, x) denotes the potential of cell i at time t and position x. The a_i are positive constants denoting the rate with which the ith unit resets its potential to the resting state in isolation when disconnected from the network and external inputs at time t, and c_ij are the output connection weights from the jth neuron to the ith neuron. The f_j are the activation functions of the neural network, and r is the time delay of a neuron. O denotes an open, bounded, and connected subset of R^l with sufficiently regular boundary ∂O, ν is the unit outward normal on ∂O, ∂u_i/∂ν = (∇u_i, ν)_{R^l}, and the g_ij are noise intensities. The initial data φ_i are F_0-measurable and bounded functions, almost surely.
We denote by (Ω, F, P) a complete probability space with a filtration {F_t}_{t≥0} satisfying the usual conditions (see [10]). W_i(t), i = 1, 2, . . . , m, are scalar standard Brownian motions defined on (Ω, F, P).
For convenience, we rewrite system (1.1) in the vector form
\[
\begin{aligned}
du &= \bigl(\nabla\cdot(D(x)\circ\nabla u) - Au + Cf(u(t-r))\bigr)dt + G(u(t-r))\,dW, \\
\frac{\partial u(t,x)}{\partial \nu}\bigg|_{\partial O} &= 0, \qquad t \ge 0, \\
u(0,x) &= \varphi(x),
\end{aligned}
\tag{1.2}
\]
where C = (c_ij)_{n×n}, u = (u_1, u_2, . . . , u_n)^T, ∇u = (∇u_1, . . . , ∇u_n)^T, W = (W_1, W_2, . . . , W_m)^T, f(u) = (f_1(u_1), f_2(u_2), . . . , f_n(u_n))^T, A = Diag(a_1, a_2, . . . , a_n), φ = (φ_1, φ_2, . . . , φ_n)^T, G(u) = (g_ij(u_i))_{n×m}, and D = (D_ij)_{n×l}; D ∘ ∇u = (D_ij ∂u_i/∂x_j)_{n×l} is the Hadamard product of the matrix D and ∇u. For the definition of the divergence operator ∇ · u, we refer to [2, 3].
2. Preliminaries and Notations
In this paper, we introduce the Hilbert spaces H = L²(O) and V = H¹(O); according to [17–19], V ⊂ H ≅ H′ ⊂ V′, where H′ and V′ denote the duals of H and V, respectively, the injections are continuous, and the embedding is compact. ‖ · ‖ and ||| · ||| represent the norms in H and V, respectively.
U = (L²(O))ⁿ is the space of vector-valued Lebesgue measurable functions on O, which is a Banach space under the norm ‖u‖_U = (Σ_{i=1}^n ‖u_i‖²)^{1/2}.
C = C([−r, 0], U) is the Banach space of all continuous functions from [−r, 0] to U, equipped with the sup-norm ‖φ‖_C = sup_{−r≤s≤0} ‖φ(s)‖_U.
With any continuous F_t-adapted U-valued stochastic process u(t): Ω → U, t ≥ −r, we associate a continuous F_t-adapted C-valued stochastic process u_t: Ω → C, t > 0, by setting u_t(s, x)(ω) = u(t + s, x)(ω), s ∈ [−r, 0], x ∈ O.
C_{F₀}^b denotes the space of all bounded continuous processes φ: [−r, 0] × Ω → U such that φ(θ, ·) is F₀-measurable for each θ ∈ [−r, 0] and E‖φ‖_C < ∞.
L(K) is the set of all linear bounded operators from K into K; equipped with the operator norm, it becomes a Banach space.
In this paper, we assume the following.
(H1) f_i and g_ij are Lipschitz continuous with positive Lipschitz constants k₁, k₂ such that |f_i(u) − f_i(v)| ≤ k₁|u − v| and |g_ij(u) − g_ij(v)| ≤ k₂|u − v| for all u, v ∈ R, and f_i(0) = 0, g_ij(0) = 0.
(H2) There exists α > 0 such that D_ij(x) ≥ α/l.
(H3) η = 2αβ² + 2k₃ − nk₁²σ²eʳ − mk₂²eʳ − 2 > 0, where k₃ = min{a_i} and σ = max{|c_ij|}.
Remark 2.1. We can infer from (H1) that system (1.1) has the equilibrium u(t, x, ω) ≡ 0.
Let us define the linear operator
\[
\mathcal{A} : \Pi(\mathcal{A}) \subset U \longrightarrow U, \qquad \mathcal{A}u = \nabla\cdot(D(x)\circ\nabla u),
\tag{2.1}
\]
where Π(𝒜) = {u ∈ (H²(O))ⁿ : ∂u/∂ν|_{∂O} = 0}.
Lemma 2.2 (Poincaré's inequality). Let O be a bounded domain in R^l and let φ belong to a collection of twice differentiable functions from O into R; then
\[
\|\varphi\| \le \beta^{-1}|||\varphi|||,
\tag{2.2}
\]
where the constant β depends on the size of O.
Lemma 2.3. Let us consider the equation
\[
\frac{du}{dt} = \mathcal{A}u, \quad t \ge 0, \qquad u(0) = \varphi.
\tag{2.3}
\]
For every φ ∈ U, let u(t) = S(t)φ denote the solution of (2.3); then S(t) is a contraction map on U.
Proof. We take the inner product of (2.3) with u(t) in U; by employing the Gaussian theorem and condition (H2), we get ⟨𝒜u, u⟩ ≤ −α|u|²_{(H¹(O))ⁿ}, where ⟨·, ·⟩ is the inner product in U and |u|_{(H¹(O))ⁿ} denotes the norm of (H¹(O))ⁿ (see [3]), which means
\[
\frac{1}{2}\frac{d}{dt}\|u(t)\|_U^2 + \alpha|u(t)|^2_{(H^1(O))^n} \le 0.
\tag{2.4}
\]
Thanks to the Poincaré inequality, one obtains
\[
\frac{d}{dt}\|u(t)\|_U^2 + 2\alpha\beta^2\|u(t)\|_U^2 \le 0.
\tag{2.5}
\]
Multiplying both sides of the inequality by e^{2αβ²t}, we have
\[
\frac{d}{dt}\Bigl(e^{2\alpha\beta^2 t}\|u(t)\|_U^2\Bigr) \le 0.
\tag{2.6}
\]
Integrating the above inequality from 0 to t, we obtain
\[
\|u(t)\|_U^2 \le e^{-2\alpha\beta^2 t}\|\varphi\|_U^2.
\tag{2.7}
\]
By the definition of ‖S(t)‖_{L(U)}, we have ‖S(t)‖_{L(U)} ≤ 1.
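The decay estimate (2.7), and hence the contraction property ‖S(t)‖_{L(U)} ≤ 1, can be checked numerically on a scalar one-dimensional instance of (2.3). The sketch below is illustrative only: the diffusion coefficient, grid, and step sizes are our own choices, not taken from the paper. It discretizes u_t = D u_xx with Neumann boundary conditions by explicit finite differences and verifies that the discrete L² norm of the solution never grows.

```python
import numpy as np

# Illustrative check of the contraction property of S(t) for a scalar
# heat equation u_t = D u_xx with Neumann (zero-flux) boundaries; the
# coefficient D, the grid, and the step sizes are assumptions for this demo.
D = 10.0
L, nx = 20.0, 21
dx = L / (nx - 1)
dt = 0.2 * dx**2 / D        # satisfies the explicit-scheme bound D*dt/dx^2 <= 1/2
nu = D * dt / dx**2

x = np.linspace(0.0, L, nx)
u = np.cos(0.2 * np.pi * x)  # initial datum phi

norms = [np.sqrt(np.sum(u**2) * dx)]
for _ in range(500):
    un = np.empty_like(u)
    un[1:-1] = u[1:-1] + nu * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    un[0] = u[0] + 2.0 * nu * (u[1] - u[0])      # ghost reflection at x = 0
    un[-1] = u[-1] + 2.0 * nu * (u[-2] - u[-1])  # ghost reflection at x = L
    u = un
    norms.append(np.sqrt(np.sum(u**2) * dx))

# the discrete L2 norm decays, consistent with the bound (2.7)
assert norms[-1] < norms[0]
print(norms[0], norms[-1])
```

The explicit scheme is the simplest choice here; any stable discretization of the Neumann problem would exhibit the same nonincreasing norm.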
Definition 2.4 (see [20–22]). A stochastic process u(t): [−r, ∞) × Ω → U is called a global mild solution of (1.1) if
(i) u(t) is adapted to F_t;
(ii) u(t) is measurable, with ∫₀^∞ ‖u(t)‖²_U dt < ∞ almost surely, φ ∈ C_{F₀}^b, and
\[
\begin{aligned}
u(t) &= S(t)\varphi(0) - \int_0^t S(t-s)Au(s)\,ds + \int_0^t S(t-s)Cf(u(s-r))\,ds + \int_0^t S(t-s)G(u(s-r))\,dW(s), \qquad t \ge 0, \\
u(t) &= \varphi(t), \qquad t \in [-r, 0],
\end{aligned}
\tag{2.8}
\]
for all t ∈ [−r, ∞) with probability one.
Definition 2.5. Equation (1.1) is said to be almost surely exponentially stable if, for any solution u(t, x, ω) with initial data φ ∈ C_{F₀}^b, there exists a positive constant λ such that
\[
\limsup_{t\to\infty}\frac{\ln\|u_t\|_C}{t} \le -\lambda, \qquad u_t \in C, \text{ almost surely.}
\tag{2.9}
\]
Definition 2.6. System (1.1) is said to be exponentially stable in the mean square sense if there exist positive constants κ and α such that, for any solution u(t, x, ω) with the initial condition φ ∈ C_{F₀}^b, one has
\[
E\|u_t\|_C^2 \le \kappa e^{-\alpha(t-t_0)}, \qquad t \ge t_0,\ u_t \in C.
\tag{2.10}
\]
3. Main Result
Theorem 3.1. Suppose that conditions (H1)–(H3) hold; then (1.1) is exponentially stable in the mean square sense.
Proof. Let u be the mild solution of (1.1); thanks to the Itô formula, we observe that
\[
d\bigl(e^{\lambda t}u_i^2\bigr) = \lambda e^{\lambda t}u_i^2\,dt + e^{\lambda t}\Biggl(2u_i\Biggl(\sum_{j=1}^{l}\frac{\partial}{\partial x_j}\Bigl(D_{ij}\frac{\partial u_i}{\partial x_j}\Bigr) - a_iu_i + \sum_{j=1}^{n}c_{ij}f_j(u_j(t-r))\Biggr)\Biggr)dt + e^{\lambda t}G_iG_i^T\,dt + 2e^{\lambda t}u_iG_i\,dW,
\tag{3.1}
\]
where G_i = (g_{i1}, g_{i2}, . . . , g_{im}),
and λ is a positive constant that will be specified below. Then, by integration between 0 and t, we find that
\[
\begin{aligned}
e^{\lambda t}u_i^2(t) &= \varphi_i(0)^2 + \lambda\int_0^t e^{\lambda s}u_i^2\,ds + 2\int_0^t e^{\lambda s}u_i\sum_{j=1}^{l}\frac{\partial}{\partial x_j}\Bigl(D_{ij}\frac{\partial u_i}{\partial x_j}\Bigr)ds - 2\int_0^t e^{\lambda s}a_iu_i^2\,ds \\
&\quad + 2\int_0^t e^{\lambda s}u_i\sum_{j=1}^{n}c_{ij}f_j(u_j(s-r))\,ds + \int_0^t e^{\lambda s}G_iG_i^T\,ds + 2\int_0^t e^{\lambda s}u_iG_i\,dW.
\end{aligned}
\tag{3.2}
\]
Integrating the above equation over O, by virtue of Fubini's theorem, we obtain
\[
\begin{aligned}
e^{\lambda t}\|u_i\|^2 &= \|\varphi_i(0)\|^2 + \lambda\int_0^t e^{\lambda s}\int_O u_i^2\,dx\,ds + 2\int_0^t e^{\lambda s}\int_O u_i\sum_{j=1}^{l}\frac{\partial}{\partial x_j}\Bigl(D_{ij}\frac{\partial u_i}{\partial x_j}\Bigr)dx\,ds \\
&\quad - 2\int_0^t e^{\lambda s}\int_O a_iu_i^2\,dx\,ds + 2\int_0^t e^{\lambda s}\int_O u_i\sum_{j=1}^{n}c_{ij}f_j(u_j(s-r))\,dx\,ds \\
&\quad + \int_0^t e^{\lambda s}\int_O G_iG_i^T\,dx\,ds + 2\int_0^t e^{\lambda s}\int_O u_iG_i\,dx\,dW.
\end{aligned}
\tag{3.3}
\]
Taking the expectation on both sides of the last equation, by means of [3, 10, 16],
\[
2E\int_0^t e^{\lambda s}\int_O u_iG_i\,dx\,dW = 0.
\tag{3.4}
\]
Then, by Fubini's theorem, we have
\[
\begin{aligned}
e^{\lambda t}E\|u_i\|^2 &= E\|\varphi_i(0)\|^2 + \lambda\int_0^t e^{\lambda s}\int_O Eu_i^2\,dx\,ds + 2E\int_0^t e^{\lambda s}\int_O u_i\sum_{j=1}^{l}\frac{\partial}{\partial x_j}\Bigl(D_{ij}\frac{\partial u_i}{\partial x_j}\Bigr)dx\,ds \\
&\quad - 2\int_0^t e^{\lambda s}\int_O a_iEu_i^2\,dx\,ds + 2E\int_0^t e^{\lambda s}\int_O u_i\sum_{j=1}^{n}c_{ij}f_j(u_j(s-r))\,dx\,ds + E\int_0^t e^{\lambda s}\int_O G_iG_i^T\,dx\,ds \\
&=: I_1 + I_2 + I_3 + I_4 + I_5 + I_6.
\end{aligned}
\tag{3.5}
\]
We observe that
\[
I_1 = E\|\varphi_i(0)\|^2 \le \sup_{\theta\in[-r,0]}E\|\varphi_i(\theta)\|^2,
\tag{3.6}
\]
\[
I_2 = \lambda\int_0^t\int_O e^{\lambda s}Eu_i^2\,dx\,ds = \lambda\int_0^t e^{\lambda s}E\|u_i\|^2\,ds.
\tag{3.7}
\]
From the Neumann boundary condition, by means of Green's formula and (H2) (see [3, 6, 7]), we know
\[
\begin{aligned}
I_3 &= 2E\int_0^t\int_O e^{\lambda s}u_i\sum_{j=1}^{l}\frac{\partial}{\partial x_j}\Bigl(D_{ij}\frac{\partial u_i}{\partial x_j}\Bigr)dx\,ds = -2E\int_0^t\int_O e^{\lambda s}\sum_{j=1}^{l}D_{ij}\Bigl(\frac{\partial u_i}{\partial x_j}\Bigr)^2 dx\,ds \\
&\le -2\alpha\int_0^t e^{\lambda s}E|u_i|^2\,ds \le -2\alpha\beta^2\int_0^t e^{\lambda s}E\|u_i\|^2\,ds.
\end{aligned}
\tag{3.8}
\]
Then, by using the positiveness of the a_i, one gets the relation
\[
I_4 = -2\int_0^t\int_O e^{\lambda s}a_iEu_i^2\,dx\,ds \le -2k_3\int_0^t e^{\lambda s}E\|u_i\|^2\,ds,
\tag{3.9}
\]
where k₃ = min{a₁, a₂, . . . , aₙ} > 0. By using the Young inequality as well as condition (H1), we have
\[
\begin{aligned}
I_5 &= 2E\int_0^t\int_O e^{\lambda s}u_i\sum_{j=1}^{n}c_{ij}f_j(u_j(s-r))\,dx\,ds \\
&\le \int_0^t\int_O e^{\lambda s}\Biggl(E|u_i|^2 + E\Bigl|\sum_{j=1}^{n}c_{ij}f_j\Bigr|^2\Biggr)dx\,ds \\
&\le \int_0^t\int_O e^{\lambda s}\Biggl(E|u_i|^2 + \sigma^2\sum_{j=1}^{n}Ef_j(u_j(s-r))^2\Biggr)dx\,ds \\
&\le \int_0^t\int_O e^{\lambda s}\Biggl(E|u_i|^2 + \sigma^2k_1^2\sum_{j=1}^{n}Eu_j(s-r)^2\Biggr)dx\,ds \\
&\le \int_0^t e^{\lambda s}\Bigl(E\|u_i\|^2 + \sigma^2k_1^2E\|u(s-r)\|_U^2\Bigr)ds,
\end{aligned}
\tag{3.10}
\]
where σ = max|c_ij|, and
\[
I_6 = \int_0^t\int_O e^{\lambda s}EG_iG_i^T\,dx\,ds \le mk_2^2\int_0^t e^{\lambda s}E\|u_i(s-r)\|^2\,ds.
\tag{3.11}
\]
We infer from (3.6)–(3.11) that
\[
e^{\lambda t}E\|u_i(t)\|^2 \le \sup_{\theta\in[-r,0]}E\|\varphi_i(\theta)\|^2 - \bigl(2\alpha\beta^2 + 2k_3 - 1 - \lambda\bigr)\int_0^t e^{\lambda s}E\|u_i\|^2\,ds + \sigma^2k_1^2\int_0^t e^{\lambda s}E\|u(s-r)\|_U^2\,ds + mk_2^2\int_0^t e^{\lambda s}E\|u_i(s-r)\|^2\,ds.
\tag{3.12}
\]
Adding (3.12) from i = 1 to i = n, we obtain
\[
e^{\lambda t}E\|u\|_U^2 \le E\|\varphi\|_C^2 - \bigl(2\alpha\beta^2 + 2k_3 - 1 - \lambda\bigr)\int_0^t e^{\lambda s}E\|u\|_U^2\,ds + \bigl(nk_1^2\sigma^2 + mk_2^2\bigr)\int_0^t e^{\lambda s}E\|u(s-r)\|_U^2\,ds,
\tag{3.13}
\]
due to
\[
\begin{aligned}
\int_0^t e^{\lambda s}E\|u(s-r)\|_U^2\,ds &\le e^{\lambda r}\int_{-r}^t e^{\lambda s}E\|u(s)\|_U^2\,ds \\
&\le e^{2\lambda r}\int_{-r}^0 E\|\varphi(s)\|_U^2\,ds + e^{\lambda r}\int_0^t e^{\lambda s}E\|u(s)\|_U^2\,ds \\
&\le re^{2\lambda r}E\|\varphi\|_C^2 + e^{\lambda r}\int_0^t e^{\lambda s}E\|u(s)\|_U^2\,ds;
\end{aligned}
\tag{3.14}
\]
we deduce from the previous equations that
\[
e^{\lambda t}E\|u\|_U^2 \le -c_1\int_0^t e^{\lambda s}E\|u\|_U^2\,ds + c_2,
\tag{3.15}
\]
where c₁ = 2αβ² + 2k₃ − 1 − nk₁²σ²e^{λr} − mk₂²e^{λr} − λ and c₂ = (1 + (mk₂² + nk₁²σ²)re^{2λr})E‖φ‖²_C; so we choose λ = 1, so that c₁ = η > 0. By using the classical Gronwall inequality, we see that
\[
e^{\lambda t}E\|u\|_U^2 \le c_2e^{-\eta t};
\tag{3.16}
\]
in other words, we get
\[
E\|u\|_U^2 \le c_2e^{-(\eta+1)t}.
\tag{3.17}
\]
So, for t + θ ≥ t/2 ≥ 0, we also have
\[
E\|u(t+\theta)\|_U^2 \le c_2e^{-(\eta+1)(t+\theta)} \le c_2e^{-\kappa t}, \qquad \theta\in[-r,0],\ \kappa = \frac{\eta+1}{2},
\tag{3.18}
\]
and we can conclude that
\[
E\|u_t\|_C^2 \le c_2e^{-\kappa t}.
\tag{3.19}
\]
Theorem 3.2. If system (1.1) satisfies hypotheses (H1)–(H3), then it is almost surely exponentially stable.
Proof. Let u(t) be the mild solution of (1.1). By Definition 2.4 as well as the inequality (Σ_{i=1}^n a_i)² ≤ nΣ_{i=1}^n a_i², a_i ∈ R, we have
\[
\begin{aligned}
E\sup_{N\le t\le N+1}\|u(t)\|_U^2 &\le 4E\sup_{N\le t\le N+1}\|S(t-N+1)u(N-1)\|_U^2 + 4E\sup_{N\le t\le N+1}\Bigl\|\int_{N-1}^t -AS(t-s)u\,ds\Bigr\|_U^2 \\
&\quad + 4E\sup_{N\le t\le N+1}\Bigl\|\int_{N-1}^t S(t-s)Cf(u(s-r))\,ds\Bigr\|_U^2 + 4E\sup_{N\le t\le N+1}\Bigl\|\int_{N-1}^t S(t-s)G(u(s-r))\,dW\Bigr\|_U^2 \\
&=: I_1 + I_2 + I_3 + I_4.
\end{aligned}
\tag{3.20}
\]
Using the contraction of the map S(t) and the result of Theorem 3.1, we find
\[
I_1 = 4E\sup_{N\le t\le N+1}\|S(t-N+1)u(N-1)\|_U^2 \le 4\sup_{N\le t\le N+1}E\|u_{N-1}\|_C^2 \le 4c_2e^{-\kappa(N-1)}.
\tag{3.21}
\]
By the Hölder inequality, we obtain
\[
\begin{aligned}
I_2 &= 4E\sup_{N\le t\le N+1}\Bigl\|\int_{N-1}^t -AS(t-s)u\,ds\Bigr\|_U^2 \le 4\sup_{N\le t\le N+1}(t-N+1)\int_{N-1}^t E\|AS(t-s)u\|_U^2\,ds \\
&\le 8\sup_{N\le t\le N+1}\int_{N-1}^t E\|Au\|_U^2\,ds \le 8k_4^2\int_{N-1}^{N+1}E\|u\|_U^2\,ds \le 8k_4^2\int_{N-1}^{N+1}E\|u_s\|_C^2\,ds \\
&\le 8k_4^2c_2\int_{N-1}^{N+1}e^{-\kappa s}\,ds \le 8k_4^2\rho_1e^{-\kappa(N-1)},
\end{aligned}
\tag{3.22}
\]
where ρ₁ = c₂/κ and k₄ = max{a₁, a₂, . . . , aₙ}.
By virtue of Theorem 3.1, the Hölder inequality, and (H1), we have
\[
\begin{aligned}
I_3 &= 4E\sup_{N\le t\le N+1}\Bigl\|\int_{N-1}^t S(t-s)Cf(u(s-r))\,ds\Bigr\|_U^2 \le 4\sup_{N\le t\le N+1}(t-N+1)E\int_{N-1}^t\|Cf(u(s-r))\|_U^2\,ds \\
&\le 8\sigma^2\sup_{N\le t\le N+1}E\int_{N-1}^t\|f(u(s-r))\|_U^2\,ds \le 8k_1^2\sigma^2\int_{N-1}^{N+1}E\|u(s-r)\|_U^2\,ds \\
&\le 8k_1^2\sigma^2\int_{N-1}^{N+1}E\|u_s\|_C^2\,ds \le 8k_1^2\sigma^2c_2\int_{N-1}^{N+1}e^{-\kappa s}\,ds \le 8k_1^2\sigma^2\rho_1e^{-\kappa(N-1)}.
\end{aligned}
\tag{3.23}
\]
Then, by the Burkholder-Davis-Gundy inequality (see [18, 22]), there exists c₃ such that
\[
\begin{aligned}
I_4 &= 4E\sup_{N\le t\le N+1}\Bigl\|\int_{N-1}^t S(t-s)G(u(s-r))\,dW\Bigr\|_U^2 \le 4c_3\sup_{N\le t\le N+1}E\int_{N-1}^t\|S(t-s)G(u(s-r))I\|_U^2\,ds \\
&\le 4c_3k_2^2\sup_{N\le t\le N+1}\int_{N-1}^t E\|u(s-r)\|_U^2\,ds \le 4c_3k_2^2\int_{N-1}^{N+1}E\|u_s\|_C^2\,ds \\
&\le 4c_3k_2^2c_2\int_{N-1}^{N+1}e^{-\kappa s}\,ds \le 4c_3k_2^2\rho_1e^{-\kappa(N-1)},
\end{aligned}
\tag{3.24}
\]
where I = (1, 1, . . . , 1)^T is an m-dimensional vector.
We can deduce from (3.21)–(3.24) that
\[
E\sup_{N\le t\le N+1}\|u(t)\|_U^2 \le \rho_2e^{-\kappa(N-1)},
\tag{3.25}
\]
where ρ₂ = 4c₂ + (8k₄² + 8k₁²σ² + 4c₃k₂²)ρ₁.
Thus, for any positive constants ε_N, thanks to the Chebyshev inequality, we have
\[
P\Bigl(\sup_{N\le t\le N+1}\|u(t)\|_U > \varepsilon_N\Bigr) \le \frac{1}{\varepsilon_N^2}E\sup_{N\le t\le N+1}\|u(t)\|_U^2 \le \frac{1}{\varepsilon_N^2}\rho_2e^{-\kappa(N-1)}.
\tag{3.26}
\]
Figure 1: frequence of u1 (a) and frequence of u2 (b), plotted against t.
Due to the Borel-Cantelli lemma, we see that
\[
\limsup_{t\to\infty}\frac{\ln\|u(t)\|_U^2}{t} \le -\kappa, \qquad \text{almost surely.}
\tag{3.27}
\]
This completes the proof of the theorem.
4. Simulation
Consider the following two-dimensional stochastic reaction-diffusion recurrent neural network with delay:
\[
\begin{aligned}
du_1(t,x) &= \bigl(10\Delta u_1 - 7u_1 + 1.3\tanh(u_1(t-1,x))\bigr)dt + u_1(t-1,x)\,dW, \\
du_2(t,x) &= \bigl(10\Delta u_2 - 7u_2 + \tanh(u_1(t-1,x)) - \tanh(u_2(t-1,x))\bigr)dt + u_2(t-1,x)\,dW, \\
\frac{\partial u_i(t,0)}{\partial x} &= \frac{\partial u_i(t,20)}{\partial x} = 0, \qquad t \ge 0, \\
u_1(\theta,x) &= \cos(0.2\pi x), \quad u_2(\theta,x) = \cos(0.1\pi x), \qquad x \in [0,20],\ \theta \in [-1,0].
\end{aligned}
\tag{4.1}
\]
Here Δ is the Laplace operator. We have β ≥ 1/20, α ≥ 10, k₁ = 1, k₂ = 1, k₃ = 7, σ = 1.3, n = 2, m = 1 (a single Brownian motion W drives the system), and η > 0; by Theorems 3.1 and 3.2, this system is exponentially stable in the mean square sense as well as almost surely exponentially stable. The results are shown in Figures 1, 2, and 3.
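Condition (H3) can be verified for (4.1) by direct arithmetic. The short check below is our own (not part of the paper); we read off m = 1 since a single Brownian motion W drives both equations, and use the stated bounds β = 1/20 and α = 10.

```python
import math

# Parameter values read off from example (4.1); m = 1 (one Brownian
# motion) is our reading of the system, and alpha, beta are the stated bounds.
alpha, beta = 10.0, 1.0 / 20.0
k1, k2, k3 = 1.0, 1.0, 7.0
sigma, n, m, r = 1.3, 2, 1, 1.0

# (H3): eta = 2*alpha*beta^2 + 2*k3 - n*k1^2*sigma^2*e^r - m*k2^2*e^r - 2 > 0
eta = (2.0 * alpha * beta**2 + 2.0 * k3
       - n * k1**2 * sigma**2 * math.exp(r)
       - m * k2**2 * math.exp(r) - 2.0)
assert eta > 0.0  # Theorems 3.1 and 3.2 therefore apply
print(eta)
```

The positive margin is small, which is why the relatively strong decay rate k₃ = 7 is needed against the delayed noise and coupling terms.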
Figure 2: simulation of u1 over x and t.
Figure 3: simulation of u2 over x and t.
We use the forward Euler method to simulate this example [23–25]. We choose the time step Δt = 0.01 and the space step Δx = 1, so that δ = Δt/(Δx)² = 0.01.
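A minimal version of this scheme can be sketched as follows. This is a reconstruction under stated assumptions, not the authors' Matlab code: Euler-Maruyama stepping in time, a central second difference for Δ with reflecting (Neumann) boundaries, the same Δt = 0.01 and Δx = 1, a fixed random seed, and the delayed states u_i(t − 1, x) read from a history buffer of the last r/Δt = 100 steps.

```python
import numpy as np

rng = np.random.default_rng(0)   # fixed seed for reproducibility (our choice)
dt, dx, r = 0.01, 1.0, 1.0       # time step, space step, and delay from the text
nx = 21                          # grid points on [0, 20] with dx = 1
lag = int(r / dt)                # steps per delay interval
x = np.arange(nx) * dx

def lap(u):
    """Discrete Laplacian with zero-flux (Neumann) boundaries."""
    out = np.empty_like(u)
    out[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    out[0] = 2.0 * (u[1] - u[0]) / dx**2
    out[-1] = 2.0 * (u[-2] - u[-1]) / dx**2
    return out

# history over theta in [-1, 0]: the initial data of (4.1), constant in theta
h1 = [np.cos(0.2 * np.pi * x)] * (lag + 1)
h2 = [np.cos(0.1 * np.pi * x)] * (lag + 1)

for _ in range(int(100 / dt)):              # integrate to t = 100
    u1, u2 = h1[-1], h2[-1]
    u1d, u2d = h1[0], h2[0]                 # delayed states u(t - 1, x)
    dW = rng.normal(0.0, np.sqrt(dt))       # one shared Brownian increment
    h1.append(u1 + (10 * lap(u1) - 7 * u1 + 1.3 * np.tanh(u1d)) * dt + u1d * dW)
    h2.append(u2 + (10 * lap(u2) - 7 * u2 + np.tanh(u1d) - np.tanh(u2d)) * dt + u2d * dW)
    h1.pop(0)
    h2.pop(0)

# mean-square exponential stability: the norm should be small by t = 100
final = float(np.sqrt(np.sum(h1[-1]**2) + np.sum(h2[-1]**2)))
print(final)
```

With δ = 0.01 well below the explicit-scheme stability bound, a sample path computed this way decays toward the zero equilibrium, mirroring the surfaces in Figures 2 and 3.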
Acknowledgment
The authors wish to thank the referees for their suggestions and comments. We are also indebted to the editors for their help. This work was supported by the National Natural Science Foundation of China (no. 11171374) and the Natural Science Foundation of Shandong Province (no. ZR2011AZ001).
References
[1] X. X. Liao and Y. L. Gao, “Stability of Hopfield neural networks with reaction-diffusion terms,” Acta Electronica Sinica, vol. 28, pp. 78–82, 2000.
[2] L. S. Wang and D. Xu, “Global exponential stability of Hopfield reaction-diffusion neural networks with time-varying delays,” Science in China F, vol. 46, no. 6, pp. 466–474, 2003.
[3] L. S. Wang and Y. F. Wang, “Stochastic exponential stability of the delayed reaction-diffusion interval neural networks with Markovian jumping parameters,” Physics Letters A, vol. 356, pp. 346–352, 2008.
[4] J. K. Hale and S. M. Verduyn Lunel, Introduction to Functional-Differential Equations, vol. 99 of Applied Mathematical Sciences, Springer, Berlin, Germany, 1993.
[5] S.-E. A. Mohammed, Stochastic Functional Differential Equations, vol. 99 of Research Notes in Mathematics, Pitman, London, UK, 1984.
[6] L. S. Wang and Y. Y. Gao, “Global exponential robust stability of reaction-diffusion interval neural networks with time-varying delays,” Physics Letters A, vol. 305, pp. 343–348, 2006.
[7] L. S. Wang, R. Zhang, and Y. Wang, “Global exponential stability of reaction-diffusion cellular neural networks with S-type distributed time delays,” Nonlinear Analysis: Real World Applications, vol. 10, no. 2, pp. 1101–1113, 2009.
[8] H. Y. Zhao and G. L. Wang, “Existence of periodic oscillatory solution of reaction-diffusion neural networks with delays,” Physics Letters A, vol. 343, pp. 372–382, 2005.
[9] J. G. Lu and L. J. Lu, “Global exponential stability and periodicity of reaction-diffusion recurrent neural networks with distributed delays and Dirichlet boundary conditions,” Chaos, Solitons and Fractals, vol. 39, no. 4, pp. 1538–1549, 2009.
[10] X. Mao, Stochastic Differential Equations and Applications, Horwood, 1997.
[11] J. Sun and L. Wan, “Convergence dynamics of stochastic reaction-diffusion recurrent neural networks with delays,” International Journal of Bifurcation and Chaos in Applied Sciences and Engineering, vol. 15, no. 7, pp. 2131–2144, 2005.
[12] M. Itoh and L. O. Chua, “Complexity of reaction-diffusion CNN,” International Journal of Bifurcation and Chaos in Applied Sciences and Engineering, vol. 16, no. 9, pp. 2499–2527, 2006.
[13] X. Lou and B. Cui, “New criteria on global exponential stability of BAM neural networks with distributed delays and reaction-diffusion terms,” International Journal of Neural Systems, vol. 17, pp. 43–52, 2007.
[14] Q. Song and Z. Wang, “Dynamical behaviors of fuzzy reaction-diffusion periodic cellular neural networks with variable coefficients and delays,” Applied Mathematical Modelling, vol. 33, no. 9, pp. 3533–3545, 2009.
[15] Q. Song, J. Cao, and Z. Zhao, “Periodic solutions and its exponential stability of reaction-diffusion recurrent neural networks with distributed time delays,” Nonlinear Analysis B, vol. 8, pp. 345–361, 2007.
[16] K. Liu, “Lyapunov functionals and asymptotic stability of stochastic delay evolution equations,” Stochastics and Stochastics Reports, vol. 63, no. 1-2, pp. 1–26, 1998.
[17] R. Temam, Infinite-Dimensional Dynamical Systems in Mechanics and Physics, vol. 68 of Applied Mathematical Sciences, Springer, New York, NY, USA, 1988.
[18] G. Da Prato and J. Zabczyk, Stochastic Equations in Infinite Dimensions, vol. 44 of Encyclopedia of Mathematics and its Applications, Cambridge University Press, Cambridge, UK, 1992.
[19] I. Chueshov, Introduction to the Theory of Infinite-Dimensional Dissipative Systems, Acta, Kharkov, 2002.
[20] R. Jahanipur, “Stochastic functional evolution equations with monotone nonlinearity: existence and stability of the mild solutions,” Journal of Differential Equations, vol. 248, no. 5, pp. 1230–1255, 2010.
[21] T. Taniguchi, “Almost sure exponential stability for stochastic partial functional-differential equations,” Stochastic Analysis and Applications, vol. 16, no. 5, pp. 965–975, 1998.
[22] T. Caraballo, K. Liu, and A. Truman, “Stochastic functional partial differential equations: existence, uniqueness and asymptotic decay property,” Proceedings of the Royal Society of London A, vol. 456, no. 1999, pp. 1775–1802, 2000.
[23] M. Kamrani and S. M. Hosseini, “The role of coefficients of a general SPDE on the stability and convergence of a finite difference method,” Journal of Computational and Applied Mathematics, vol. 234, no. 5, pp. 1426–1434, 2010.
[24] D. J. Higham, “Mean-square and asymptotic stability of the stochastic theta method,” SIAM Journal on Numerical Analysis, vol. 38, no. 3, pp. 753–769, 2000.
[25] P. E. Kloeden and E. Platen, Numerical Solution of Stochastic Differential Equations, vol. 23, Springer, Berlin, Germany, 3rd edition, 1999.