Hindawi Publishing Corporation
Abstract and Applied Analysis
Volume 2013, Article ID 169214, 12 pages
http://dx.doi.org/10.1155/2013/169214

Research Article
Convergence and Stability of the Split-Step θ-Milstein Method for Stochastic Delay Hopfield Neural Networks

Qian Guo,1,2 Wenwen Xie,1 and Taketomo Mitsui3

1 Department of Mathematics, Shanghai Normal University, Shanghai 200234, China
2 Division of Computational Science, E-Institute of Shanghai Universities, 100 Guilin Road, Shanghai 200234, China
3 Department of Mathematical Sciences, Faculty of Science and Engineering, Doshisha University, Kyoto 610-0394, Japan

Correspondence should be addressed to Qian Guo; qguo@shnu.edu.cn

Received 8 December 2012; Accepted 26 February 2013

Academic Editor: Chengming Huang

Copyright © 2013 Qian Guo et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

A new splitting method designed for the numerical solution of stochastic delay Hopfield neural networks is introduced and analysed. Under Lipschitz and linear growth conditions, this split-step θ-Milstein method is proved to converge strongly, in the mean-square sense, with order 1, which is higher than the order of the existing split-step θ-method. Furthermore, the mean-square stability of the proposed method is investigated. Numerical experiments and comparisons with existing methods illustrate the computational efficiency of our method.

1. Introduction

Hopfield neural networks, which originated with Hopfield in the 1980s [1], have been successfully applied in many areas such as combinatorial optimization [2, 3], signal processing [4], and pattern recognition [5, 6]. In the last decade, neural networks subject to signal transmission delay and stochastic perturbations, also known as stochastic delay Hopfield neural networks (SDHNNs), have gained considerable research interest (see, e.g., [7–9] and the references therein). So far, most works on SDHNNs focus mainly on the stability analysis of the analytical solutions, including mean-square exponential stability [7], global asymptotic stability [9], and so forth.

Not only is simulation an important tool for exploring the dynamics of various kinds of Hopfield neural networks (HNNs) (see, e.g., [10] and the references therein), but parameter estimation in dynamical systems based on HNNs (see, e.g., [11]) also requires solving HNNs numerically. Moreover, because most SDHNNs do not possess explicit solutions, the numerical analysis of SDHNNs has recently begun to attract research attention. For example, Li et al. [12] investigated the exponential stability of the Euler method and the semi-implicit Euler method for SDHNNs. Rathinasamy [13] introduced a split-step θ-method (SST) for SDHNNs and analysed the mean-square stability of this method; the SST, however, is given only for delays commensurable with the step-size. To the best of our current knowledge, these works mainly discussed the stability of numerical solutions for stochastic Hopfield neural networks with discrete time delays but skipped the details of convergence analysis. The split-step Euler method for stochastic differential equations (SDEs) was proposed by Higham et al. [14]; subsequently, splitting Euler-type algorithms have been derived for stochastic delay differential equations (SDDEs) [15, 16]. In this paper, we present a splitting method with higher order of convergence for SDHNNs.
Specifically, we give a detailed convergence analysis and compare the stability of the new method with that of the split-step θ-method of [13].

The rest of this paper is organized as follows. In Section 2, we recall the stochastic delay neural network model and present the split-step θ-Milstein method. In Section 3, we derive convergence results for the split-step θ-Milstein method applied to the model. In Section 4, the numerical stability analysis is performed. In Section 5, numerical examples are given to confirm the theory. In the last section, we draw some conclusions.

2. Model and the Split-Step θ-Milstein Method

2.1. Model. Consider the stochastic delay Hopfield neural network of the form

$$ \mathrm{d}\mathbf{x}(t) = \left[ -C\mathbf{x}(t) + A\mathbf{f}(\mathbf{x}(t)) + B\mathbf{g}(\mathbf{z}(t)) \right]\mathrm{d}t + \Phi(\mathbf{x}(t))\,\mathrm{d}\mathbf{w}(t), \quad (1) $$

where $\mathbf{x}(t) = [x_1(t), \ldots, x_n(t)]^T \in \mathbb{R}^n$ is the state vector associated with the $n$ neurons and $\mathbf{z}(t) = [x_1(t-\tau_1), \ldots, x_n(t-\tau_n)]^T$. The diagonal matrix $C = \operatorname{diag}(c_1, c_2, \ldots, c_n)$ has positive entries, $c_i$ representing the rate at which the $i$th unit resets its potential to the resting state in isolation when disconnected from the network and the external stochastic perturbation. The matrices $A = (a_{ij})_{n\times n}$ and $B = (b_{ij})_{n\times n}$ are the connection weight matrix and the discretely delayed connection weight matrix, respectively. Furthermore, the vector functions $\mathbf{f}(\mathbf{x}(t)) = [f_1(x_1(t)), \ldots, f_n(x_n(t))]^T$ and $\mathbf{g}(\mathbf{z}(t)) = [g_1(x_1(t-\tau_1)), \ldots, g_n(x_n(t-\tau_n))]^T$ denote the neuron activation functions, with $f_j(0) = 0$ and $g_j(0) = 0$ for all positive $\tau_j$. On the initial segment $[-\tau, 0]$ the state vector satisfies $\mathbf{x}(t) = \boldsymbol{\varphi}(t)$, where $\boldsymbol{\varphi}(t) = [\varphi_1(t), \ldots, \varphi_n(t)]^T$ is a given function in $C([-\tau, 0]; \mathbb{R}^n)$ and $\tau$ stands for $\max_{1\le i\le n}\{\tau_i\}$. Moreover, $\Phi(\mathbf{x}(t)) = \operatorname{diag}(\sigma_1(x_1(t)), \ldots, \sigma_n(x_n(t)))$ is a diagonal matrix with $\sigma_i(0) = 0$, and $\mathbf{w}(t) = [w_1(t), \ldots, w_n(t)]^T$ is an $n$-dimensional Wiener process defined on the complete probability space $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}_{t\ge 0}, \mathbb{P})$ with a filtration $\{\mathcal{F}_t\}_{t\ge 0}$ satisfying the usual conditions (i.e., it is increasing and right continuous, and $\mathcal{F}_0$ contains all $\mathbb{P}$-null sets).

Let $f_j$ and $g_j$ be functions in $C^2(D;\mathbb{R}) \cap \mathcal{L}^2([0,T];\mathbb{R})$ and $\sigma_j$ be in $C^1(D;\mathbb{R}) \cap \mathcal{L}^2([0,T];\mathbb{R})$. Here $C^k(D;\mathbb{R})$ denotes the family of $k$-times continuously differentiable real-valued functions defined on $D$, while $\mathcal{L}^p([0,T];\mathbb{R})$ denotes the family of all real-valued measurable $\{\mathcal{F}_t\}$-adapted stochastic processes $\{\psi(t)\}_{t\in[0,T]}$ such that $\int_0^T |\psi(t)|^p\,\mathrm{d}t < +\infty$.

2.2. Numerical Scheme. We define a mesh with uniform step-size $h$ ($0 < h < 1$) on the interval $[0,T]$; that is, $t_k = k\cdot h$ ($k = 0, 1, \ldots, K$) and $T = Kh$. Let $\Delta W_k^j = w_j(t_{k+1}) - w_j(t_k)$ denote the increments of the Wiener process. The split-step θ-Milstein (SSTM) scheme for the solution of (1) is given by

$$ Y_k = X_k + \left[ -(1-\theta) C X_k - \theta C Y_k + A\mathbf{f}(X_k) + B\mathbf{g}(\widetilde{X}_k) \right] h, $$
$$ X_{k+1} = Y_k + \Phi(Y_k)\,\Delta W_k + \tfrac{1}{2}\widetilde{\Phi}(Y_k)\left[ \Delta W_k \circ \Delta W_k - \mathbf{1}h \right], \quad (2) $$

where the merging parameter θ satisfies $0 \le \theta \le 1$, $X_k = [X_k^1, \ldots, X_k^n]^T$ is an approximation to $\mathbf{x}(t_k)$, and, for $1 \le m_i \in \mathbb{Z}^+$,

$$ \widetilde{X}_k^i = \begin{cases} \varphi_i(t_k - \tau_i), & t_k - \tau_i \le 0, \\ X_{k-m_i}^i, & 0 < t_k - \tau_i \in [t_{k-m_i}, t_{k-m_i+1}). \end{cases} \quad (3) $$

Moreover, we adopt the symbols $\widetilde{\Phi}(Y_k) = \operatorname{diag}(\sigma_1'(Y_k^1)\sigma_1(Y_k^1), \ldots, \sigma_n'(Y_k^n)\sigma_n(Y_k^n))$ and $\Delta W_k = [\Delta W_k^1, \ldots, \Delta W_k^n]^T$; the Hadamard product $\Delta W_k \circ \Delta W_k$ means $[(\Delta W_k^1)^2, \ldots, (\Delta W_k^n)^2]^T$, and $\mathbf{1} = [1, \ldots, 1]^T \in \mathbb{R}^n$. When $t_k \le 0$, we define $X_k = \boldsymbol{\varphi}(t_k)$.
Then scheme (2) can be written in the equivalent form

$$ Y_k = X_k - (I + \theta C h)^{-1} C h\, X_k + (I + \theta C h)^{-1} h \left[ A\mathbf{f}(X_k) + B\mathbf{g}(\widetilde{X}_k) \right], \quad \mathrm{(4a)} $$
$$ X_{k+1} = Y_k + \Phi(Y_k)\,\Delta W_k + \tfrac{1}{2}\widetilde{\Phi}(Y_k)\left[ \Delta W_k \circ \Delta W_k - \mathbf{1}h \right]. \quad \mathrm{(4b)} $$

Substituting (4a) into (4b), we obtain a stochastic explicit single-step method with an increment function $\Lambda(X_k, \widetilde{X}_k, h, \Delta W_k) \in \mathbb{R}^n$; that is,

$$ X_{k+1} = X_k + \Lambda(X_k, \widetilde{X}_k, h, \Delta W_k). \quad (5) $$

3. Order and Convergence Results for SSTM

In this section we consider the global error of SSTM (2) applied to the SDHNN (1) with initial condition. In what follows, $\|\cdot\|$ denotes the Euclidean norm in $\mathbb{R}^n$. For the convergence analysis we make the following standard assumptions.

Assumption 1. Assume that $\mathbf{f}$, $\mathbf{g}$, $\Phi$, and $\widetilde{\Phi}$ satisfy the Lipschitz condition

$$ |f_j(u) - f_j(v)| \le \alpha_j |u-v|, \qquad |g_j(u) - g_j(v)| \le \beta_j |u-v|, $$
$$ |\sigma_j(u) - \sigma_j(v)| \le \gamma_j |u-v|, \qquad |\sigma_j\sigma_j'(u) - \sigma_j\sigma_j'(v)| \le \rho_j |u-v| \quad (u, v \in \mathbb{R}) \quad (6) $$

for every $j$, and the linear growth condition

$$ \|\mathbf{f}(\mathbf{x})\|^2 \vee \|\mathbf{g}(\mathbf{x})\|^2 \vee \|\Phi(\mathbf{x})\cdot\mathbf{1}\|^2 \vee \|\widetilde{\Phi}(\mathbf{x})\cdot\mathbf{1}\|^2 \le \widetilde{L}\,(1 + \|\mathbf{x}\|^2) \quad \forall \mathbf{x} \in \mathbb{R}^n, \quad (7) $$

where $\widetilde{L}$ is a positive constant and $\vee$ is the maximum operator. We also define $L_j$ as $L_j = \max\{\alpha_j, \beta_j, \gamma_j, \rho_j\}$ ($j = 1, 2, \ldots, n$).

We also need the following assumption on the initial condition.

Assumption 2. Assume that the initial function $\boldsymbol{\varphi}(t)$ is Lipschitz continuous from $[-\tau, 0]$ to $\mathbb{R}^n$; that is, there is a positive constant $L_\varphi$ satisfying

$$ \|\boldsymbol{\varphi}(t) - \boldsymbol{\varphi}(s)\| \le L_\varphi (t-s) \quad \text{if } -\tau \le s < t \le 0. \quad (8) $$
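Before turning to the error analysis, the following minimal sketch indicates how scheme (2), in its explicit form (4a)–(4b), might be realized in code. It is our illustration under stated assumptions, not the authors' implementation: the function `sstm_path` and its arguments are hypothetical; `C` and `tau` are passed as vectors of the diagonal entries $c_i$ and delays $\tau_i$; `f`, `g`, `sig` act componentwise; and `dsig_sig` supplies the diagonal $\sigma_i'\sigma_i$ of $\widetilde{\Phi}$.

```python
import numpy as np

def sstm_path(C, A, B, f, g, sig, dsig_sig, phi, tau, theta, h, T, rng):
    """One path of SSTM (4a)-(4b) for (1); all model data are placeholders.

    C, tau: length-n arrays of the c_i and tau_i; f, g, sig act
    componentwise on length-n arrays; dsig_sig(u) returns the diagonal
    sigma_i'(u_i)*sigma_i(u_i) of Phi-tilde; phi(t) gives the initial
    segment for t <= 0.
    """
    n = len(C)
    K = int(round(T / h))
    X = np.zeros((K + 1, n))
    X[0] = phi(0.0)
    Minv = 1.0 / (1.0 + theta * C * h)      # (I + theta*C*h)^(-1), diagonal

    def x_tilde(k):
        # Delay evaluation (3): the initial function while t_k - tau_i <= 0,
        # otherwise the stored approximation at t_{k-m_i}, where
        # t_k - tau_i lies in [t_{k-m_i}, t_{k-m_i+1}).
        out = np.empty(n)
        for i in range(n):
            s = k * h - tau[i]
            out[i] = phi(s)[i] if s <= 0 else X[int(np.floor(s / h)), i]
        return out

    for k in range(K):
        Xd = x_tilde(k)
        # (4a): explicit intermediate value Y_k (C is diagonal)
        Y = X[k] - Minv * (C * h) * X[k] + Minv * h * (A @ f(X[k]) + B @ g(Xd))
        # (4b): Milstein update with the Hadamard-product correction term
        dW = rng.normal(0.0, np.sqrt(h), n)
        X[k + 1] = Y + sig(Y) * dW + 0.5 * dsig_sig(Y) * (dW * dW - h)
    return X
```

For instance, with data resembling Case 2 of Section 5 one could pass `C = np.array([10., 10.])`, `A = np.zeros((2, 2))`, `B = np.array([[5., -1.], [1., 5.]])`, `f = lambda u: u`, `g = np.tanh`, `sig = lambda u: 1.5 * u * np.array([1., -1.])`, `dsig_sig = lambda u: 2.25 * u`, `phi = lambda t: np.array([t + 1., t + 1.])`, and `tau = np.array([1., 2.])`.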
Now we give the definitions of local and global errors.

Definition 1. Let $\mathbf{x}(t)$ denote the exact solution of (1). The local approximate solution $\widehat{\mathbf{x}}(t_{k+1})$ starting from $\mathbf{x}(t_k)$ by SSTM (2), given by

$$ \widehat{\mathbf{x}}(t_{k+1}) := \mathbf{x}(t_k) + \Lambda(\mathbf{x}(t_k), \widetilde{\mathbf{x}}(t_k), h, \Delta W_k), \quad (9) $$

where $\widetilde{\mathbf{x}}(t_k)$ denotes the evaluation of (3) using the exact solution, yields the difference

$$ \delta^{k+1} := \mathbf{x}(t_{k+1}) - \widehat{\mathbf{x}}(t_{k+1}). \quad (10) $$

Then the local error of SSTM is defined by $\|\delta^{k+1}\|$, whereas its global error means $\|\varepsilon_k\|$, where $\varepsilon_k := \mathbf{x}(t_k) - X_k$.

Definition 2. If the global error satisfies

$$ E(\|\varepsilon_k\|^2) \le \Gamma h^{2p} \quad \forall h \in (0, h_0) \quad (11) $$

with positive constants $h_0$ and $\Gamma$ and a finite $p$, then we say that the order of mean-square convergence accuracy of the method is $p$. Here $E$ is the expectation with respect to $\mathbb{P}$.

We then give the following lemmas, which are useful in deriving the convergence results.

Lemma 3 (see also [17]). Let the linear growth condition (7) hold, let the initial function $\boldsymbol{\varphi}(t)$ be $\mathcal{F}_0$-measurable and right continuous, and put $E_\varphi := E(\sup_{-\tau\le t\le 0}\|\boldsymbol{\varphi}(t)\|^2) < \infty$. For any given positive $T$, there exist positive numbers $\Gamma_T$ and $\Gamma_2$ such that the solution of (1) satisfies

$$ E\Big( \sup_{-\tau\le s\le T}\|\mathbf{x}(s)\|^2 \Big) \le \Gamma_T, \quad (12) $$

where the constant $\Gamma_T$ is independent of the step-size $h$ but dependent on $T$. Moreover, for any $0 \le s < t \le T$ with $t - s < 1$, the estimation

$$ E(\|\mathbf{x}(t) - \mathbf{x}(s)\|^2) \le \Gamma_2 (t-s) \quad (13) $$

holds. The Jensen inequality then derives

$$ E\Big( \sup_{-\tau\le s\le T}\|\mathbf{x}(s)\| \Big) \le \sqrt{\Gamma_T}. \quad (14) $$

Lemma 4. For $s \in [t_k, t_k + h]$, one has

$$ E(\|\mathbf{z}(s) - \widetilde{\mathbf{x}}(t_k)\|^2) \le \Gamma_z h. \quad (15) $$

Here the constant $\Gamma_z$ is independent of the step-size $h$.

Proof. If $t_k - \tau_i \le 0$ and $s - \tau_i \le 0$, under Assumption 2 we have

$$ E[x_i(s-\tau_i) - \widetilde{x}_i(t_k)]^2 = E[\varphi_i(s-\tau_i) - \varphi_i(t_k-\tau_i)]^2 \le L_\varphi^2 (s-t_k)^2 \le L_\varphi^2 h^2. \quad (16) $$

If $t_k - \tau_i \le 0$ and $s - \tau_i > 0$, with (13) we obtain

$$ E[x_i(s-\tau_i) - \widetilde{x}_i(t_k)]^2 = E[x_i(s-\tau_i) - \varphi_i(t_k-\tau_i)]^2 \le 2E[x_i(s-\tau_i) - x_i(0)]^2 + 2E[\varphi_i(0) - \varphi_i(t_k-\tau_i)]^2 \le 2(\Gamma_2 + L_\varphi^2 h)h. \quad (17) $$

If $t_k - \tau_i > 0$ and $s - \tau_i > 0$, we assume $t_k - \tau_i \in [t_{k-m_i}, t_{k-m_i+1})$ without loss of generality. Hence,

$$ E[x_i(s-\tau_i) - \widetilde{x}_i(t_k)]^2 = E[x_i(s-\tau_i) - x_i(t_{k-m_i})]^2 \le 2E[x_i(s-\tau_i) - x_i(t_k-\tau_i)]^2 + 2E[x_i(t_k-\tau_i) - x_i(t_{k-m_i})]^2 \le 4\Gamma_2 h \quad (18) $$

by using inequality (13). Summing the three estimates over $i$ gives (15).

Lemma 5. Let $\mathbf{x}(t)$ denote the exact solution of (1), and assume conditions (6) and (7). Then for the local intermediate value $\mathbf{y}(t_k) := \mathbf{x}(t_k) - (I+\theta Ch)^{-1}Ch\,\mathbf{x}(t_k) + (I+\theta Ch)^{-1}h[A\mathbf{f}(\mathbf{x}(t_k)) + B\mathbf{g}(\widetilde{\mathbf{x}}(t_k))]$, one has the estimation

$$ E(\|\mathbf{x}(t_k) - \mathbf{y}(t_k)\|^2) \le \Gamma_3 h^2. \quad (19) $$

Proof. The difference between the $i$th components of $\mathbf{x}(t_k)$ and $\mathbf{y}(t_k)$ leads to

$$ |x_i(t_k) - y_i(t_k)|^2 = \frac{h^2}{(1+\theta c_i h)^2}\Big[ c_i x_i(t_k) - \sum_{j=1}^n a_{ij} f_j(x_j(t_k)) - \sum_{j=1}^n b_{ij} g_j(\widetilde{x}_j(t_k)) \Big]^2, \quad (20) $$

whose expectation, together with $c_i > 0$, $f_j(0) = g_j(0) = 0$, the Lipschitz condition (6), and the estimation (12), gives

$$ E(\|\mathbf{x}(t_k) - \mathbf{y}(t_k)\|^2) \le \sum_{i=1}^n \frac{3h^2}{(1+\theta c_i h)^2}\Big[ c_i^2 E|x_i(t_k)|^2 + n\sum_{j=1}^n a_{ij}^2 \alpha_j^2 E|x_j(t_k)|^2 + n\sum_{j=1}^n b_{ij}^2 \beta_j^2 E|\widetilde{x}_j(t_k)|^2 \Big] \le 3\Gamma_T \sum_{i=1}^n \Big( c_i^2 + n\sum_{j=1}^n a_{ij}^2\alpha_j^2 + n\sum_{j=1}^n b_{ij}^2\beta_j^2 \Big) h^2. \quad (21) $$

Now we discuss the local error estimates.

Theorem 6. Under Assumptions 1 and 2 and the conditions of Lemma 3, there exist positive constants $\Gamma_0$ and $\Gamma_1$ such that

$$ \max_{0\le k\le K-1}\|E(\delta^{k+1})\| \le \Gamma_0 h^2, \quad (22) $$
$$ \max_{0\le k\le K-1} E(\|\delta^{k+1}\|^2) \le \Gamma_1 h^3 \quad (23) $$

as $h \downarrow 0$.
Proof. The Itô integral form of the $i$th component of (1) on $[t_k, t]$ implies

$$ x_i(t) - x_i(t_k) = \int_{t_k}^{t}\Big[ -c_i x_i(s) + \sum_{j=1}^n a_{ij} f_j(x_j(s)) + \sum_{j=1}^n b_{ij} g_j(x_j(s-\tau_j)) \Big]\mathrm{d}s + \int_{t_k}^{t}\sigma_i(x_i(s))\,\mathrm{d}w_i(s). \quad (24) $$

By utilizing this identity, the $i$th component of the difference $\delta^{k+1}$ introduced in Definition 1 can be calculated as

$$ \delta_i^{k+1} = x_i(t_{k+1}) - x_i(t_k) + \frac{c_i h}{1+\theta c_i h}\,x_i(t_k) - \frac{h}{1+\theta c_i h}\Big[ \sum_{j=1}^n a_{ij} f_j(x_j(t_k)) + \sum_{j=1}^n b_{ij} g_j(\widetilde{x}_j(t_k)) \Big] - \sigma_i(y_i(t_k))\,\Delta W_k^i - \frac{\sigma_i'\sigma_i(y_i(t_k))}{2}\big[(\Delta W_k^i)^2 - h\big] $$
$$ = \int_{t_k}^{t_{k+1}} c_i\big[x_i(t_k) - x_i(s)\big]\mathrm{d}s + \int_{t_k}^{t_{k+1}}\sum_{j=1}^n a_{ij}\big[f_j(x_j(s)) - f_j(x_j(t_k))\big]\mathrm{d}s + \int_{t_k}^{t_{k+1}}\sum_{j=1}^n b_{ij}\big[g_j(x_j(s-\tau_j)) - g_j(\widetilde{x}_j(t_k))\big]\mathrm{d}s $$
$$ \quad - \frac{\theta c_i^2 h^2}{1+\theta c_i h}\,x_i(t_k) + \frac{\theta c_i h^2}{1+\theta c_i h}\Big[ \sum_{j=1}^n a_{ij} f_j(x_j(t_k)) + \sum_{j=1}^n b_{ij} g_j(\widetilde{x}_j(t_k)) \Big] $$
$$ \quad + \int_{t_k}^{t_{k+1}}\sigma_i(x_i(s))\,\mathrm{d}w_i(s) - \int_{t_k}^{t_{k+1}}\sigma_i(y_i(t_k))\,\mathrm{d}w_i(s) - \int_{t_k}^{t_{k+1}}\!\!\int_{t_k}^{s}\sigma_i'(y_i(t_k))\sigma_i(y_i(t_k))\,\mathrm{d}w_i(r)\,\mathrm{d}w_i(s), \quad (25) $$

where $\Delta W_k^i = \int_{t_k}^{t_{k+1}}\mathrm{d}w_i(s)$ and $\big((\Delta W_k^i)^2 - h\big)/2 = \int_{t_k}^{t_{k+1}}\int_{t_k}^{s}\mathrm{d}w_i(r)\,\mathrm{d}w_i(s)$.

Taking expectations on both sides of (25), the Itô integrals have zero mean, so by (24) and the Itô formula we obtain

$$ E(\delta_i^{k+1}) = E\Big( \int_{t_k}^{t_{k+1}}\!\!\int_{t_k}^{s} -c_i F_i(x_i(r))\,\mathrm{d}r\,\mathrm{d}s \Big) + E\Big( \int_{t_k}^{t_{k+1}}\sum_{j=1}^n a_{ij}\int_{t_k}^{s}\Big[ f_j'(x_j(r))F_j(x_j(r)) + \frac{1}{2}f_j''(x_j(r))\sigma_j^2(x_j(r)) \Big]\mathrm{d}r\,\mathrm{d}s \Big) $$
$$ \quad + E\Big( \int_{t_k}^{t_{k+1}}\sum_{j=1}^n b_{ij}\int_{t_{k-m_j}}^{s-\tau_j}\Big[ g_j'(x_j(r))F_j(x_j(r)) + \frac{1}{2}g_j''(x_j(r))\sigma_j^2(x_j(r)) \Big]\mathrm{d}r\,\mathrm{d}s \Big) $$
$$ \quad - \frac{\theta c_i^2 h^2}{1+\theta c_i h} E(x_i(t_k)) + \frac{\theta c_i h^2}{1+\theta c_i h}\Big[ \sum_{j=1}^n a_{ij} E(f_j(x_j(t_k))) + \sum_{j=1}^n b_{ij} E(g_j(\widetilde{x}_j(t_k))) \Big] \quad (26) $$

by (24) and the Itô formula, where $F_j(x_j(r)) := -c_j x_j(r) + \sum_{l=1}^n a_{jl} f_l(x_l(r)) + \sum_{l=1}^n b_{jl} g_l(x_l(r-\tau_l))$. Under the conditions of this theorem, each double integral in (26) is taken over a region of measure at most $2h^2$ and has an integrand with bounded first moment, so we have $|E(\delta_i^{k+1})| \le \widetilde{\Gamma}_0 h^2$ by (14), the Jensen
inequality $|E(\xi)| \le E(|\xi|)$, the triangle inequality, and properties of the definite integral. Then we have $\max_{0\le k\le K-1}\|E(\delta^{k+1})\| \le \Gamma_0 h^2$ from the relation between $|E(\delta_i^{k+1})|$ and $\|E(\delta^{k+1})\|$.

Now we prove (23). By the Itô formula,

$$ \int_{t_k}^{t_{k+1}}\sigma_i(x_i(s))\,\mathrm{d}w_i(s) - \int_{t_k}^{t_{k+1}}\sigma_i(y_i(t_k))\,\mathrm{d}w_i(s) - \int_{t_k}^{t_{k+1}}\!\!\int_{t_k}^{s}\sigma_i'(y_i(t_k))\sigma_i(y_i(t_k))\,\mathrm{d}w_i(r)\,\mathrm{d}w_i(s) $$
$$ = \int_{t_k}^{t_{k+1}}\big[\sigma_i(x_i(t_k)) - \sigma_i(y_i(t_k))\big]\mathrm{d}w_i(s) + \int_{t_k}^{t_{k+1}}\!\!\int_{t_k}^{s}\Big[ \sigma_i'(x_i(r))F_i(x_i(r)) + \frac{1}{2}\sigma_i''(x_i(r))\sigma_i^2(x_i(r)) \Big]\mathrm{d}r\,\mathrm{d}w_i(s) $$
$$ \quad + \int_{t_k}^{t_{k+1}}\!\!\int_{t_k}^{s}\big[ \sigma_i'\sigma_i(x_i(r)) - \sigma_i'\sigma_i(y_i(t_k)) \big]\mathrm{d}w_i(r)\,\mathrm{d}w_i(s). \quad (27) $$

From (25) and (27), the elementary inequality $|\sum_{l} u_l|^2 \le 9\sum_{l}|u_l|^2$ for nine summands, together with the Cauchy–Schwarz and Hölder inequalities and the Itô isometry, yields

$$ E(|\delta_i^{k+1}|^2) \le 9\Big\{ c_i^2 h\int_{t_k}^{t_{k+1}} E[x_i(t_k)-x_i(s)]^2\,\mathrm{d}s + n\sum_{j=1}^n a_{ij}^2\alpha_j^2\, h\int_{t_k}^{t_{k+1}} E[x_j(s)-x_j(t_k)]^2\,\mathrm{d}s + n\sum_{j=1}^n b_{ij}^2\beta_j^2\, h\int_{t_k}^{t_{k+1}} E[x_j(s-\tau_j)-\widetilde{x}_j(t_k)]^2\,\mathrm{d}s $$
$$ \quad + \frac{\theta^2 c_i^4 h^4}{(1+\theta c_i h)^2} E|x_i(t_k)|^2 + \frac{\theta^2 c_i^2 h^4 n}{(1+\theta c_i h)^2}\Big[ \sum_{j=1}^n a_{ij}^2 E|f_j(x_j(t_k))|^2 + \sum_{j=1}^n b_{ij}^2 E|g_j(\widetilde{x}_j(t_k))|^2 \Big] + \gamma_i^2 \int_{t_k}^{t_{k+1}} E[x_i(t_k)-y_i(t_k)]^2\,\mathrm{d}s $$
$$ \quad + \int_{t_k}^{t_{k+1}} (s-t_k)\int_{t_k}^{s} E\Big| \sigma_i'(x_i(r))F_i(x_i(r)) + \frac{1}{2}\sigma_i''(x_i(r))\sigma_i^2(x_i(r)) \Big|^2 \mathrm{d}r\,\mathrm{d}s + \int_{t_k}^{t_{k+1}}\!\!\int_{t_k}^{s} E\big[ \sigma_i'\sigma_i(x_i(r)) - \sigma_i'\sigma_i(y_i(t_k)) \big]^2 \mathrm{d}r\,\mathrm{d}s \Big\}. \quad (28) $$

The first two terms are bounded by multiples of $\Gamma_2 h^3$ via (13); the third through (15) by a multiple of $\Gamma_z h^3$; the fourth and fifth terms are of order $h^4$ by (6) and (12); the sixth is bounded by $\gamma_i^2\Gamma_3 h^3$ via (19); the seventh is bounded by $\Gamma_4 h^3$ since its integrand has bounded moments; and the last, by the Lipschitz property of $\sigma_i'\sigma_i$ in (6) together with (13) and (19), is bounded by $\Gamma_5 h^3$. Hence $E(|\delta_i^{k+1}|^2) \le \widetilde{\Gamma}_1 h^3$, and finally it is easy to prove $E(\|\delta^{k+1}\|^2) \le \Gamma_1 h^3$.

Thanks to Theorem 1 in [18], the local bias bound (22) of order $h^2$ and the local mean-square bound (23), of order $h^{3/2}$ in the root-mean-square sense, together imply

$$ E(\|\varepsilon_k\|^2) \le \Gamma h^2 \quad \forall h \in (0, 1); \quad (29) $$

that is, the mean-square order of the global error of the SSTM is 1.

4. Stability of SSTM

We are now concerned with the stability of the SSTM solution. Since (1) has the equilibrium solution $\mathbf{x}(t) \equiv 0$, we discuss whether the SSTM solution $X_k$ with a positive step-size attains a similar stability as $k$ goes to infinity. First we recall a sufficient condition for the exponential stability in the mean-square sense of the equilibrium solution. The references [13, 19] give the condition as

$$ -2c_i + \sum_{j=1}^n |a_{ij}|\alpha_j + \sum_{j=1}^n |b_{ij}|\beta_j + \alpha_i\sum_{j=1}^n |a_{ji}| + \beta_i\sum_{j=1}^n |b_{ji}| + \gamma_i^2 < 0, \quad \text{with } \sum_{j=1}^n |a_{ij}|\alpha_j + \sum_{j=1}^n |b_{ij}|\beta_j \le \alpha_i\sum_{j=1}^n |a_{ji}| + \beta_i\sum_{j=1}^n |b_{ji}|, \quad (30) $$

for every $i$ ($i = 1, \ldots, n$).

Definition 7. A numerical method is said to be mean-square stable (MS-stable) if there exists an $h_0 > 0$ such that any application of the method to problem (1) generates numerical approximations $X_k^i$ that satisfy

$$ \lim_{k\to\infty} E(|X_k^i|^2) = 0, \quad i = 1, 2, \ldots, n, \quad (31) $$

for all $h \in (0, h_0)$.

Theorem 8. Assume that (6), (30), and $(1-\theta)c_i h \le 1$ are satisfied; then the SSTM (2) is mean-square stable if $h \in (0, h_0)$, where

$$ h_0 = \begin{cases} \min_{1\le i\le n}\{1,\, h_i\} & \text{for } \theta = 1, \\[2pt] \min_{1\le i\le n}\Big\{1,\, \dfrac{1}{(1-\theta)c_i},\, h_i\Big\} & \text{for } \theta \in [0, 1). \end{cases} \quad (32) $$
Here $h_i$ is the smallest positive root of the cubic equation with respect to $z$ given by

$$ \mathrm{A}_i z^3 + \mathrm{B}_i z^2 + \mathrm{C}_i z + \mathrm{D}_i = 0, \quad (33) $$

where, writing $S_i := \sum_{j=1}^n |a_{ij}|\alpha_j + \sum_{j=1}^n |b_{ij}|\beta_j$, the coefficients mean

$$ \mathrm{A}_i = \frac{\rho_i^2}{2}\big( -(1-\theta)c_i + S_i \big)^2, \qquad \mathrm{B}_i = \gamma_i^2\big( -(1-\theta)c_i + S_i \big)^2 + \rho_i^2 S_i - \rho_i^2 (1-\theta)c_i, $$
$$ \mathrm{C}_i = \big( -(1-\theta)c_i + S_i \big)^2 + 2\gamma_i^2\big( -(1-\theta)c_i + S_i \big) + \frac{\rho_i^2}{2} - \theta^2 c_i^2, \qquad \mathrm{D}_i = 2(-c_i + S_i) + \gamma_i^2. \quad (34) $$

Proof. Squaring both sides of (4b) and (4a) componentwise, we have

$$ |X_i^{k+1}|^2 = (Y_k^i)^2 + \big[\sigma_i(Y_k^i)\Delta W_k^i\big]^2 + \frac{1}{4}\big[\sigma_i'\sigma_i(Y_k^i)\big]^2\big[(\Delta W_k^i)^2 - h\big]^2 + 2Y_k^i\sigma_i(Y_k^i)\Delta W_k^i + Y_k^i\,\sigma_i'\sigma_i(Y_k^i)\big[(\Delta W_k^i)^2 - h\big] + \sigma_i(Y_k^i)\,\sigma_i'\sigma_i(Y_k^i)\,\Delta W_k^i\big[(\Delta W_k^i)^2 - h\big], $$
$$ (1+\theta c_i h)^2 |Y_k^i|^2 = \big[(1-(1-\theta)c_i h)X_k^i\big]^2 + h^2\Big[\sum_{j=1}^n a_{ij} f_j(X_k^j)\Big]^2 + h^2\Big[\sum_{j=1}^n b_{ij} g_j(\widetilde{X}_k^j)\Big]^2 + \sum_{j=1}^n 2h\big(1-(1-\theta)c_i h\big)X_k^i\big[a_{ij} f_j(X_k^j) + b_{ij} g_j(\widetilde{X}_k^j)\big] + 2h^2\Big[\sum_{j=1}^n a_{ij} f_j(X_k^j)\Big]\Big[\sum_{j=1}^n b_{ij} g_j(\widetilde{X}_k^j)\Big]. \quad (35) $$

Taking expectations on both sides of the first equation of (35) and using $E(\Delta W_k^i) = E[(\Delta W_k^i)^3] = 0$, $E[(\Delta W_k^i)^2] = h$, and $E[((\Delta W_k^i)^2 - h)^2] = 2h^2$, we can get

$$ E(|X_i^{k+1}|^2) = E(|Y_k^i|^2) + hE(|\sigma_i(Y_k^i)|^2) + \frac{h^2}{2}E(|\sigma_i'\sigma_i(Y_k^i)|^2) \quad (36) $$
$$ \le \Big( 1 + \gamma_i^2 h + \frac{h^2}{2}\rho_i^2 \Big) E(|Y_k^i|^2). \quad (37) $$

Together with the expectation of the second equation of (35), bounded through $|f_j(u)| \le \alpha_j|u|$, $|g_j(u)| \le \beta_j|u|$, and $2E(|u||v|) \le E(|u|^2) + E(|v|^2)$, we have

$$ E(|X_i^{k+1}|^2) \le \frac{1+\gamma_i^2 h + (h^2/2)\rho_i^2}{(1+\theta c_i h)^2}\Big\{ \big[1-(1-\theta)c_i h\big]^2 E(|X_k^i|^2) + h^2\Big(\sum_{j}|a_{ij}|\alpha_j\Big)\Big(\sum_{j}|a_{ij}|\alpha_j E(|X_k^j|^2)\Big) + h^2\Big(\sum_{j}|b_{ij}|\beta_j\Big)\Big(\sum_{j}|b_{ij}|\beta_j E(|\widetilde{X}_k^j|^2)\Big) $$
$$ \quad + h\big|1-(1-\theta)c_i h\big|\sum_{j}\Big[ |a_{ij}|\alpha_j\big( E(|X_k^i|^2) + E(|X_k^j|^2) \big) + |b_{ij}|\beta_j\big( E(|X_k^i|^2) + E(|\widetilde{X}_k^j|^2) \big) \Big] + 2h^2\sum_{j,l}|a_{ij}|\alpha_j\,|b_{il}|\beta_l\, E\big(|X_k^j|\,|\widetilde{X}_k^l|\big) \Big\}. \quad (38) $$

Thus, collecting the terms on the right-hand side of (38), we attain

$$ E(|X_i^{k+1}|^2) \le \Big[ P(h) + \sum_{j=1}^n Q_j(h) + \sum_{j=1}^n R_j(h) \Big] \max_{1\le j\le n}\Big\{ E(|X_k^i|^2),\, E(|X_k^j|^2),\, E(|\widetilde{X}_k^j|^2) \Big\}, \quad (39) $$

where, for each fixed $i$ (we suppress the dependence on $i$),

$$ P(h) = \frac{1+\gamma_i^2 h + (h^2\rho_i^2/2)}{(1+\theta c_i h)^2}\Big\{ \big[1-(1-\theta)c_i h\big]^2 + h\big|1-(1-\theta)c_i h\big|\sum_{j=1}^n\big( |a_{ij}|\alpha_j + |b_{ij}|\beta_j \big) \Big\}, \quad (40) $$
$$ Q_j(h) = \frac{1+\gamma_i^2 h + (h^2\rho_i^2/2)}{(1+\theta c_i h)^2}\Big\{ h^2\,|a_{ij}|\alpha_j\sum_{l=1}^n\big( |a_{il}|\alpha_l + |b_{il}|\beta_l \big) + h\big|1-(1-\theta)c_i h\big|\,|a_{ij}|\alpha_j \Big\}, \quad (41) $$
$$ R_j(h) = \frac{1+\gamma_i^2 h + (h^2\rho_i^2/2)}{(1+\theta c_i h)^2}\Big\{ h^2\,|b_{ij}|\beta_j\sum_{l=1}^n\big( |a_{il}|\alpha_l + |b_{il}|\beta_l \big) + h\big|1-(1-\theta)c_i h\big|\,|b_{ij}|\beta_j \Big\}. \quad (42) $$

Note that the assumption of the theorem implies the nonnegativity of $1-(1-\theta)c_i h$, so the absolute values can be dropped. Obviously, when $k \to +\infty$, $E(|X_k^i|^2) \to 0$ if the inequality $P(h) + \sum_{j=1}^n Q_j(h) + \sum_{j=1}^n R_j(h) < 1$ holds. Since the left-hand side collapses to

$$ \Big( 1 + \gamma_i^2 h + \frac{h^2}{2}\rho_i^2 \Big)\frac{\big( 1-(1-\theta)c_i h + hS_i \big)^2}{(1+\theta c_i h)^2}, $$

expanding both sides and dividing by $h > 0$ shows that this inequality is equivalent to $\mathrm{A}_i h^3 + \mathrm{B}_i h^2 + \mathrm{C}_i h + \mathrm{D}_i < 0$. Furthermore, it is easy to prove $\mathrm{A}_i > 0$ and $\mathrm{D}_i < 0$ by virtue of (30). By Vieta's formulas, the product of the three roots of (33) satisfies $z_1 z_2 z_3 = -\mathrm{D}_i/\mathrm{A}_i > 0$. This means that (33) has at least one positive root; therefore, let $h_i$ denote the smallest positive root of the equation. Moreover, we note that at the origin the polynomial in (33) is negative, so the stability inequality holds on $(0, h_i)$. This completes the proof.

5. Numerical Results

Now we apply the introduced SSTM method to two test cases of SDHNNs in order to compare its performance with the split-step θ-method of [13], which has strong convergence order 0.5. The mean-square error $\epsilon$ of the numerical approximations at time $T$ versus the step-size is depicted in log–log diagrams, where

$$ \epsilon := \sqrt{ \frac{1}{2000}\sum_{r=1}^{2000} \big\| X_K^{(r)} - \mathbf{x}(T) \big\|^2 }. $$

Here $\mathbf{x}(T)$ stands for the value of the exact solution of (1) at time $T = 3$, and $X_K^{(r)}$ is its numerical approximation along the $r$th sample path $\{\omega_r : r = 1, 2, \ldots, 2000\}$. Since no explicit solution is available, we compute a reference solution using the split-step θ-Milstein method (2) with step-size $h = 2^{-12}$, and we will call this the "exact solution."
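The following sketch (ours, not the authors' code) shows one way to realize this error measure. It assumes a driver `run(h, dW)` — a variant of the SSTM sketch of Section 2 that consumes a precomputed array of Wiener increments — so that the coarse approximation and the $h = 2^{-12}$ reference are driven by the same Brownian paths.

```python
import numpy as np

def strong_error(run, h, T, n, n_paths=2000, h_ref=2.0 ** -12):
    """Monte Carlo estimate of the error measure epsilon at time T.

    run(h, dW) is an assumed SSTM driver taking a (steps, n) array of
    Wiener increments and returning the approximation at T; coarse
    increments are obtained by summing the fine ones, so both runs see
    the same Brownian paths.
    """
    ratio = int(round(h / h_ref))       # assumes h is a multiple of h_ref
    K_ref = int(round(T / h_ref))
    rng = np.random.default_rng(2013)
    sq_err = 0.0
    for _ in range(n_paths):
        dW_ref = rng.normal(0.0, np.sqrt(h_ref), size=(K_ref, n))
        dW = dW_ref.reshape(-1, ratio, n).sum(axis=1)   # coarse increments
        sq_err += np.sum((run(h, dW) - run(h_ref, dW_ref)) ** 2)
    return np.sqrt(sq_err / n_paths)
```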
Example 9. Consider the following two-dimensional stochastic delay Hopfield neural network of the form

$$ \mathrm{d}\begin{pmatrix} x_1(t) \\ x_2(t) \end{pmatrix} = -C\begin{pmatrix} x_1(t) \\ x_2(t) \end{pmatrix}\mathrm{d}t + A\begin{pmatrix} f(x_1(t)) \\ f(x_2(t)) \end{pmatrix}\mathrm{d}t + B\begin{pmatrix} g(x_1(t-\tau_1)) \\ g(x_2(t-\tau_2)) \end{pmatrix}\mathrm{d}t + \Phi\,\mathrm{d}\mathbf{w}(t) \quad (43) $$

on $t \ge 0$ with the initial condition $x_1(t) = t+1$, $t \in [-1, 0]$, and $x_2(t) = t+1$, $t \in [-2, 0]$.

Case 1. Let $f(x) = \sin(x)$, $g(x) = \arctan(x)$,

$$ C = \begin{pmatrix} 20 & 0 \\ 0 & 20 \end{pmatrix}, \quad A = \begin{pmatrix} 4 & -5 \\ 6 & 3 \end{pmatrix}, \quad B = \begin{pmatrix} -6 & 4 \\ 3 & 1 \end{pmatrix}, \quad \Phi = \begin{pmatrix} x_1(t) & 0 \\ 0 & -\sqrt{5}\,x_2(t) \end{pmatrix}. \quad (44) $$

Case 2. Let $f(x) = x$, $g(x) = \tanh(x)$,

$$ C = \begin{pmatrix} 10 & 0 \\ 0 & 10 \end{pmatrix}, \quad A = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}, \quad B = \begin{pmatrix} 5 & -1 \\ 1 & 5 \end{pmatrix}, \quad \Phi = \begin{pmatrix} 1.5\,x_1(t) & 0 \\ 0 & -1.5\,x_2(t) \end{pmatrix}. \quad (45) $$

In Figure 1, SSTM is applied with 7 different step-sizes $h_j = 2^{j-12}$ for $j = 1, 2, \ldots, 7$. Two pairs of time delays $(\tau_1, \tau_2)$ are set to $(1, 2)$ and $(1.13, 2.31)$. Every step-size $h_j$ divides the delays of the first pair, whereas the second pair is incommensurable with $h_j$. The computed errors $\epsilon$ versus the step-sizes $h$ are plotted on a log–log scale, and reference lines of slope 1 are added. The plots illustrate that the SSTM raises the strong order of the split-step θ-method for SDHNNs [13] at least to 1.

[Figure 1: Errors versus step-sizes for the SDHNNs (43). Four log–log panels of error $\epsilon$ against step-size $h$ for $\theta = 0.0, 0.5, 1.0$ with a reference line of slope 1: (a) Case 1 with $\tau_1 = 1$, $\tau_2 = 2$; (b) Case 2 with $\tau_1 = 1$, $\tau_2 = 2$; (c) Case 1 with $\tau_1 = 1.13$, $\tau_2 = 2.31$; (d) Case 2 with $\tau_1 = 1.13$, $\tau_2 = 2.31$.]

Next, Table 1 shows a comparison of stability intervals between the SST and the SSTM for (43). The two sets of intervals in the table are calculated through Theorem 8 in this paper and Theorem 5.1 in [13], respectively. It is easy to see that the stability intervals of the two methods are similar. We know that Theorem 5.1 in [13] and Theorem 8 in this paper give only sufficient conditions for mean-square stability; therefore the stability intervals given by these theorems are only subsets of the real ones. To confirm the situation, we calculated the sample moments of the approximate solution and plotted them along the time $t$. Here the sample moment $\mu_k$ means $(1/2000)\sum_{r=1}^{2000}\|X_k^{(r)}\|^2$ for the numerical solution $X_k^{(r)}$ approximating $\mathbf{x}(t_k)$ along the $r$th sample path.
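The SSTM rows of Table 1 below can, in principle, be reproduced from Theorem 8 by locating the smallest positive root of the cubic (33). The sketch below is our reading of (32)–(34), not the authors' code; the Lipschitz data `alpha`, `beta`, `gamma`, `rho` are the constants of Assumption 1 for the case at hand and must be supplied by the user (e.g., for Case 1, `alpha = beta = (1, 1)`, `gamma = (1, sqrt(5))`, and `rho = (1, 5)`).

```python
import numpy as np

def sstm_h0(C, A, B, alpha, beta, gamma, rho, theta):
    """Right end-point h_0 of the SSTM stability interval per Theorem 8."""
    h0 = 1.0
    for i in range(len(C)):
        # S_i = sum_j |a_ij| alpha_j + sum_j |b_ij| beta_j, as in (34)
        S = np.abs(A[i]) @ alpha + np.abs(B[i]) @ beta
        mu = -(1.0 - theta) * C[i] + S
        Ai = 0.5 * rho[i] ** 2 * mu ** 2
        Bi = gamma[i] ** 2 * mu ** 2 + rho[i] ** 2 * mu
        Ci = mu ** 2 + 2.0 * gamma[i] ** 2 * mu + 0.5 * rho[i] ** 2 \
            - theta ** 2 * C[i] ** 2
        Di = 2.0 * (-C[i] + S) + gamma[i] ** 2      # negative under (30)
        roots = np.roots([Ai, Bi, Ci, Di])
        h_i = min(r.real for r in roots
                  if abs(r.imag) < 1e-10 and r.real > 0)
        h0 = min(h0, h_i)
        if theta < 1.0:
            h0 = min(h0, 1.0 / ((1.0 - theta) * C[i]))
    return h0

# Case 1 with theta = 0.5:
# sstm_h0(np.array([20.0, 20.0]),
#         np.array([[4.0, -5.0], [6.0, 3.0]]),
#         np.array([[-6.0, 4.0], [3.0, 1.0]]),
#         np.ones(2), np.ones(2),
#         np.array([1.0, np.sqrt(5.0)]), np.array([1.0, 5.0]),
#         theta=0.5)
# returns about 0.1, matching the corresponding entry of Table 1.
```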
Figures 2, 3, 4, 5, and 6 depict the results by SST and SSTM with a log-scaled vertical axis. All the figures give a rough estimate of the stability interval in each case.

Table 1: Calculated intervals of step-size for mean-square stability of the numerical schemes.

Numerical scheme                         Case 1              Case 2
Split-step θ-Milstein method (θ = 0.0)   (0.0000, 0.0500]    (0.0000, 0.1000]
Split-step θ-Milstein method (θ = 0.2)   (0.0000, 0.0625]    (0.0000, 0.1250]
Split-step θ-Milstein method (θ = 0.5)   (0.0000, 0.1000]    (0.0000, 0.2000]
Split-step θ-Milstein method (θ = 0.8)   (0.0000, 0.0650)    (0.0000, 0.5000]
Split-step θ-Milstein method (θ = 1.0)   (0.0000, 0.0500)    (0.0000, 0.3534)
Split-step θ-method (θ = 0.0)            (0.0000, 0.0500]    (0.0000, 0.1000]
Split-step θ-method (θ = 0.2)            (0.0000, 0.0625]    (0.0000, 0.1250]
Split-step θ-method (θ = 0.5)            (0.0000, 0.1000]    (0.0000, 0.2000]
Split-step θ-method (θ = 0.8)            (0.0000, 0.0689)    (0.0000, 0.5000]
Split-step θ-method (θ = 1.0)            (0.0000, 0.0540)    (0.0000, 0.5792)

[Figure 2: Mean-square stability comparison of SST and SSTM for the SDHNNs (43), Case 1, θ = 0. Panels (a) SST and (b) SSTM: sample moment μ versus time t for h = 1/10, 1/11, 1/12.]

[Figure 3: Mean-square stability comparison of SST and SSTM for the SDHNNs (43), Case 2, θ = 0. Panels (a) SST and (b) SSTM: sample moment μ versus time t for h = 1/5, 1/6, 1/7, 1/8.]

[Figure 4: Mean-square stability comparison of SST and SSTM for the SDHNNs (43), Case 1, θ = 0.2. Panels (a) SST and (b) SSTM: sample moment μ versus time t for h = 1/6, 1/7, 1/8.]

[Figure 5: Mean-square stability comparison of SST and SSTM for the SDHNNs (43), Case 2, θ = 0.2. Panels (a) SST and (b) SSTM: sample moment μ versus time t for h = 1/3, 1/4, 1/5, 1/6.]

6. Concluding Remarks

We have introduced the split-step θ-Milstein method (SSTM), which exhibits a higher strong convergence rate than the split-step θ-method (SST, see [13]) for stochastic delay Hopfield neural networks, and the scheme proposed in this paper can deal with incommensurable time delays, which were not considered in [13]. We have given a proof of the convergence results, which has generally been omitted in previous works on the same subject. By comparing the stability intervals of the step-size for the SST and the SSTM on a test example, we find that they exhibit similar mean-square stability.

In this paper we have found a delay-independent sufficient condition for the mean-square stability of the split-step θ-Milstein method applied to nonlinear stochastic delay Hopfield neural networks. Further, Figure 6 suggests that the value of $h_0$, the right end-point of the stability interval given by Theorem 5.1 in [13] and Theorem 8 in this paper, is much smaller than the true value when θ is close to unity. In this case we need other techniques for the stability analysis of this kind of stochastic delay differential system; to the best of our knowledge, the works in [20, 21] put forward good attempts.
On the other hand, with respect to stochastic delay differential equations, some other types of stability have been successfully discussed for Euler-type schemes, for example, mean-square exponential stability [12], delay-dependent stability [22], delay-dependent exponential stability [23], and almost sure exponential stability [24]. For Milstein-type schemes, in view of the more sophisticated derivations required, these issues would be challenging topics for future research.

[Figure 6: Mean-square stability comparison of SST and SSTM for the SDHNNs (43) with h = 1: panels (a) Case 1 and (b) Case 2, each showing the sample moment μ versus time t for SSTM and SST with θ = 0.8 and θ = 1.0.]

Acknowledgment

The authors would like to thank Dr. E. Buckwar and Dr. A. Rathinasamy for their valuable suggestions. The authors also thank the editor and the anonymous referees, whose careful reading and constructive feedback have improved this work. The first author is partially supported by E-Institutes of Shanghai Municipal Education Commission (no. E03004), National Natural Science Foundation of China (no. 10901106), Natural Science Foundation of Shanghai (no. 09ZR1423200), and Innovation Program of Shanghai Municipal Education Commission (no. 09YZ150). The third author is partially supported by the Grants-in-Aid for Scientific Research (no. 24540154) supplied by the Japan Society for the Promotion of Science.

References

[1] J. J. Hopfield, "Neural networks and physical systems with emergent collective computational abilities," Proceedings of the National Academy of Sciences of the United States of America, vol. 79, no. 8, pp. 2554–2558, 1982.
[2] J. J. Hopfield and D. W. Tank, "Neural computation of decisions in optimization problems," Biological Cybernetics, vol. 52, no. 3, pp. 141–152, 1985.
[3] G. Joya, M. A. Atencia, and F. Sandoval, "Hopfield neural networks for optimization: study of the different dynamics," Neurocomputing, vol. 43, no. 1–4, pp. 219–237, 2002.
[4] Y. Sun, "Hopfield neural network based algorithms for image restoration and reconstruction. I. Algorithms and simulations," IEEE Transactions on Signal Processing, vol. 48, no. 7, pp. 2105–2118, 2000.
[5] S. Young, P. Scott, and N. Nasrabadi, "Object recognition using multilayer Hopfield neural network," IEEE Transactions on Image Processing, vol. 6, no. 3, pp. 357–372, 1997.
[6] G. Pajares, "A Hopfield neural network for image change detection," IEEE Transactions on Neural Networks, vol. 17, no. 5, pp. 1250–1264, 2006.
[7] L. Wan and J. H. Sun, "Mean square exponential stability of stochastic delayed Hopfield neural networks," Physics Letters A, vol. 343, no. 4, pp. 306–318, 2005.
[8] Z. Wang, H. Shu, J. Fang, and X. Liu, "Robust stability for stochastic Hopfield neural networks with time delays," Nonlinear Analysis, vol. 7, no. 5, pp. 1119–1128, 2006.
[9] Z. D. Wang, Y. R. Liu, K. Fraser, and X. H. Liu, "Stochastic stability of uncertain Hopfield neural networks with discrete and distributed delays," Physics Letters A, vol. 354, no. 4, pp. 288–297, 2006.
[10] J. Šíma and P. Orponen, "General-purpose computation with neural networks: a survey of complexity theoretic results," Neural Computation, vol. 15, no. 12, pp. 2727–2778, 2003.
[11] J. R. Raol and H.
Madhuranath, "Neural network architectures for parameter estimation of dynamical systems," IEE Proceedings, vol. 143, no. 4, pp. 387–394, 1996.
[12] R. H. Li, W. Pang, and P. Leung, "Exponential stability of numerical solutions to stochastic delay Hopfield neural networks," Neurocomputing, vol. 73, no. 4–6, pp. 920–926, 2010.
[13] A. Rathinasamy, "The split-step θ-methods for stochastic delay Hopfield neural networks," Applied Mathematical Modelling, vol. 36, no. 8, pp. 3477–3485, 2012.
[14] D. J. Higham, X. Mao, and A. M. Stuart, "Strong convergence of Euler-type methods for nonlinear stochastic differential equations," SIAM Journal on Numerical Analysis, vol. 40, no. 3, pp. 1041–1063, 2002.
[15] H. Zhang, S. Gan, and L. Hu, "The split-step backward Euler method for linear stochastic delay differential equations," Journal of Computational and Applied Mathematics, vol. 225, no. 2, pp. 558–568, 2009.
[16] X. Wang and S. Gan, "The improved split-step backward Euler method for stochastic differential delay equations," International Journal of Computer Mathematics, vol. 88, no. 11, pp. 2359–2378, 2011.
[17] X. R. Mao, Stochastic Differential Equations and Applications, Horwood, Chichester, UK, 2nd edition, 2007.
[18] E. Buckwar, "One-step approximations for stochastic functional differential equations," Applied Numerical Mathematics, vol. 56, no. 5, pp. 667–681, 2006.
[19] Q. Zhou and L. Wan, "Exponential stability of stochastic delayed Hopfield neural networks," Applied Mathematics and Computation, vol. 199, no. 1, pp. 84–89, 2008.
[20] Y. Saito and T. Mitsui, "Mean-square stability of numerical schemes for stochastic differential systems," Vietnam Journal of Mathematics, vol. 30, pp. 551–560, 2002.
[21] E. Buckwar and C. Kelly, "Towards a systematic linear stability analysis of numerical methods for systems of stochastic differential equations," SIAM Journal on Numerical Analysis, vol. 48, no. 1, pp. 298–321, 2010.
[22] C. Huang, S. Gan, and D. Wang, "Delay-dependent stability analysis of numerical methods for stochastic delay differential equations," Journal of Computational and Applied Mathematics, vol. 236, no. 14, pp. 3514–3527, 2012.
[23] X. Qu and C. Huang, "Delay-dependent exponential stability of the backward Euler method for nonlinear stochastic delay differential equations," International Journal of Computer Mathematics, vol. 89, no. 8, pp. 1039–1050, 2012.
[24] F. Wu, X. Mao, and L. Szpruch, "Almost sure exponential stability of numerical solutions for stochastic delay differential equations," Numerische Mathematik, vol. 115, no. 4, pp. 681–697, 2010.