Electronic Journal of Differential Equations, Vol. 2012 (2012), No. 130, pp. 1-10.
ISSN: 1072-6691. URL: http://ejde.math.txstate.edu or http://ejde.math.unt.edu
ftp ejde.math.txstate.edu

NEWTON'S METHOD FOR STOCHASTIC FUNCTIONAL DIFFERENTIAL EQUATIONS

MONIKA WRZOSEK

Abstract. In this article we apply Newton's method to stochastic functional differential equations. The first part concerns first-order convergence. We formulate a Gronwall-type inequality which plays an important role in the proof of the convergence theorem for the Newton method. In the second part a probabilistic second-order convergence is studied.

2000 Mathematics Subject Classification. 60H10, 60H35, 65C30.
Key words and phrases. Newton's method; stochastic functional differential equations.
© 2012 Texas State University - San Marcos.
Submitted May 31, 2012. Published August 15, 2012.
Supported by grant BW-UG 538-5100-0964-12 from the University of Gdańsk.

1. Introduction

Newton's method, also known as the tangent method, was established to solve nonlinear algebraic equations of the form $F(x) = 0$ by means of the recurrence formula
\[
x_{k+1} = x_k - \frac{F(x_k)}{F'(x_k)}.
\]
The convergence of the sequence to the exact solution, together with the uniqueness and local existence of the solution, is stated in the Kantorovich theorem on successive approximations [7]. In [5] Chaplygin solves ordinary differential equations
\[
x' = F(t,x), \quad x(t_0) = x_0 \tag{1.1}
\]
by constructing convergent sequences of successive approximations
\[
x_{k+1}' = F(t, x_k) + F_x(t, x_k)(x_{k+1} - x_k), \quad x_{k+1}(t_0) = x_0.
\]
Numerous generalizations of the method to the case of functional differential equations are also known in the literature, e.g. [9, 15]. In particular, ordinary differential equations are generalized to integro-differential, delayed and Volterra-type differential equations and to problems with the Hale operator. Following [4] we mention the model with an operator $T$ defined on a set of absolutely continuous functions:
\[
x' = Tx, \quad x(t_0) = x_0.
\]
The Newton scheme for such equations is of the form
\[
x_{k+1}' = Tx_k + (T_x x_k)(x_{k+1} - x_k), \quad x_{k+1}(t_0) = x_0, \tag{1.2}
\]
where $T_x$ denotes the Fréchet derivative of the operator $T$.

In [8] Kawabata and Yamada prove the convergence of Newton's method for stochastic differential equations. In [2] Amano formulates an equivalent problem and provides a direct way to estimate the approximation error. In [3] he proposes a probabilistic second-order error estimate, which had been an open problem since Kawabata and Yamada's results. The proof is based on solving the error stochastic differential equation and introducing a certain representation of the solution (see Lemma 2.5 in [3]) whose components are easy to estimate.

Our goal is to obtain corresponding results in the case of stochastic functional differential equations with Hale functionals. The existence and uniqueness of solutions to stochastic functional differential equations have been discussed in a large number of papers, see [10, 11, 13, 14]. The assumptions on the given functions imply the existence and uniqueness of solutions and the convergence of the Newton sequence.

The first part of this article concerns first-order convergence. We formulate a Gronwall-type inequality which is a generalization of Amano's [1]. It plays an important role in the proof of the convergence theorem for the Newton method. In the second part a probabilistic second-order convergence is studied. However, Amano's techniques [3] are not applicable to the functional case because it is not possible to construct an explicit solution of the linear stochastic functional differential equation using his methods. Therefore we introduce an entirely different approach and obtain a comprehensive second-order convergence theorem by simpler arguments. For this purpose we define certain sets and utilize the Gronwall-type lemma and the Chebyshev inequality.
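For orientation only, the scalar recurrence $x_{k+1} = x_k - F(x_k)/F'(x_k)$ recalled at the beginning of this section can be written out as a short routine. This illustrative sketch is not part of the paper, and the cube-root equation solved at the end is an arbitrary placeholder.

\begin{verbatim}
# Illustrative sketch of the classical Newton iteration
# x_{k+1} = x_k - F(x_k)/F'(x_k); F and dF below are placeholders.

def newton(F, dF, x0, tol=1e-12, max_iter=50):
    """Return an approximate root of F, starting from x0."""
    x = x0
    for _ in range(max_iter):
        step = F(x) / dF(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Example: approximate the cube root of 2 as the root of x^3 - 2 = 0.
print(newton(lambda x: x**3 - 2.0, lambda x: 3.0 * x**2, x0=1.0))
\end{verbatim}

Near a simple root the number of correct digits roughly doubles at each step; the probabilistic counterpart of this second-order behaviour for stochastic functional differential equations is studied in Section 4.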
2. Formulation of the problem

Let $(\Omega, \mathcal{F}, P)$ be a complete probability space, $(B_t)_{t\in[0,T]}$ the standard Brownian motion and $(\mathcal{F}_t)_{t\in[0,T]}$ its natural filtration. We extend this filtration as follows: $\mathcal{F}_t := \mathcal{F}_0$ for $t \in [-\tau, 0)$. By $L^2(\Omega)$ we denote the space of all random variables $Y: \Omega \to \mathbb{R}$ such that $\|Y\|^2 = E[Y^2] < \infty$. Let $\mathcal{X}_{[c,d]}$ be the space of all continuous and $\mathcal{F}_t$-adapted processes $X: [c,d] \to L^2(\Omega)$ with the norm
\[
\|X\|_{[c,d]}^2 = E\big[\sup_{c\le t\le d} |X_t|^2\big] < \infty.
\]
Fix $\tau \ge 0$ and $T > 0$. For a process $X \in \mathcal{X}_{[-\tau,T]}$ and any $t \in [0,T]$ we define the $L^2(\Omega)$-valued Hale-type operator $X_{t+\cdot}: [-\tau,0] \to L^2(\Omega)$ by $(X_{t+\cdot})_s = X_{t+s}$ for $s \in [-\tau,0]$.

We consider the initial value problem for the stochastic functional differential equation
\[
\begin{aligned}
dX_t &= b(t, X_{t+\cdot})\,dt + \sigma(t, X_{t+\cdot})\,dB_t \quad\text{for } t \in [0,T],\\
X_t &= \varphi_t \quad\text{for } t \in [-\tau,0],
\end{aligned}
\tag{2.1}
\]
where $b, \sigma: [0,T] \times C([-\tau,0],\mathbb{R}) \to \mathbb{R}$ are deterministic Fréchet differentiable functions and $\varphi \in \mathcal{X}_{[-\tau,0]}$. Since $\mathcal{F}_t := \mathcal{F}_0$ for $t \in [-\tau,0)$, the process $\varphi$ is deterministic, thus independent of the Brownian motion on $[0,T]$. Problem (2.1) is equivalent to the stochastic functional integral equation
\[
\begin{aligned}
X_t &= \varphi_0 + \int_0^t b(s, X_{s+\cdot})\,ds + \int_0^t \sigma(s, X_{s+\cdot})\,dB_s \quad\text{for } t \in [0,T],\\
X_t &= \varphi_t \quad\text{for } t \in [-\tau,0].
\end{aligned}
\tag{2.2}
\]

We formulate the Newton scheme for problem (2.1). Let $X^{(0)} \in \mathcal{X}_{[-\tau,T]}$ with $X^{(0)}_t = \varphi_t$ for $t \in [-\tau,0]$, and
\[
\begin{aligned}
dX^{(k+1)}_t &= \big[b(t, X^{(k)}_{t+\cdot}) + b_w(t, X^{(k)}_{t+\cdot})(X^{(k+1)}_{t+\cdot} - X^{(k)}_{t+\cdot})\big]\,dt\\
&\quad + \big[\sigma(t, X^{(k)}_{t+\cdot}) + \sigma_w(t, X^{(k)}_{t+\cdot})(X^{(k+1)}_{t+\cdot} - X^{(k)}_{t+\cdot})\big]\,dB_t \quad\text{for } t \in [0,T],\\
X^{(k+1)}_t &= \varphi_t \quad\text{for } t \in [-\tau,0],
\end{aligned}
\tag{2.3}
\]
where $b_w(t,w), \sigma_w(t,w): C([-\tau,0],\mathbb{R}) \to \mathbb{R}$ are the Fréchet derivatives of $b, \sigma$ with respect to the functional variable $w \in C([-\tau,0],\mathbb{R})$. We recall the functional norms of $b_w(t,w)$ and $\sigma_w(t,w)$:
\[
\|b_w(t,w)\|_F = \sup_{\sup_{s\in[-\tau,0]}|\bar w(s)|\le 1} |b_w(t,w)\bar w|, \qquad
\|\sigma_w(t,w)\|_F = \sup_{\sup_{s\in[-\tau,0]}|\bar w(s)|\le 1} |\sigma_w(t,w)\bar w|,
\]
where the supremum is taken over all $\bar w \in C([-\tau,0],\mathbb{R})$ whose uniform norms do not exceed 1. We assume that there exists a nonnegative constant $M$ such that
\[
\|b_w(t,w)\|_F \le M, \qquad \|\sigma_w(t,w)\|_F \le M, \tag{2.4}
\]
which implies the Lipschitz conditions for $b(t,w)$ and $\sigma(t,w)$:
\[
|b(t,w) - b(t,\bar w)| \le M \sup_{-\tau\le s\le 0} |w(s) - \bar w(s)|, \tag{2.5}
\]
\[
|\sigma(t,w) - \sigma(t,\bar w)| \le M \sup_{-\tau\le s\le 0} |w(s) - \bar w(s)| \tag{2.6}
\]
for $w, \bar w \in C([-\tau,0],\mathbb{R})$.
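The iterates (2.3) are in general not available in closed form, so in computations one would discretize them in time. The following is a minimal sketch, not part of the paper, of a single Newton step under an Euler-Maruyama discretization on a uniform grid covering $[-\tau, T]$. The callables b, sigma, b_w, sigma_w are hypothetical placeholders for the coefficients and their Fréchet derivatives acting on a discretized segment, with b_w(t, seg, dseg) assumed to return the derivative at seg applied to the direction dseg.

\begin{verbatim}
# Sketch of one Newton step (2.3) under an Euler-Maruyama discretization.
# Segments X_{t+.} are represented by slices of length m+1 of the grid values
# on [t - tau, t]; b, sigma, b_w, sigma_w are assumed user-supplied callables.

import numpy as np

def newton_step(b, b_w, sigma, sigma_w, x_prev, phi, h, dB):
    """One Newton iterate X^(k+1) on the grid, given X^(k) (= x_prev)."""
    m = len(phi) - 1              # grid points in [-tau, 0)
    n = len(dB)                   # time steps in [0, T]; x_prev has m+n+1 entries
    x_new = np.empty(m + n + 1)
    x_new[: m + 1] = phi          # X^(k+1) = phi on [-tau, 0]
    for i in range(n):            # explicit step from t_i = i*h to t_{i+1}
        j = m + i                 # grid index of t_i
        t = i * h
        seg_prev = x_prev[j - m : j + 1]         # discretized X^(k)_{t+.}
        dseg = x_new[j - m : j + 1] - seg_prev   # discretized (X^(k+1)-X^(k))_{t+.}
        drift = b(t, seg_prev) + b_w(t, seg_prev, dseg)
        diffusion = sigma(t, seg_prev) + sigma_w(t, seg_prev, dseg)
        x_new[j + 1] = x_new[j] + drift * h + diffusion * dB[i]
    return x_new
\end{verbatim}

Because the coefficients at time $t_i$ only involve segment values up to $t_i$, which are already computed, each step is explicit even though (2.3) is implicit in $X^{(k+1)}$; iterating newton_step along one fixed Brownian path produces a discretized analogue of the sequence $(X^{(k)})_{k\in\mathbb{N}}$.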
Before formulating the main result we introduce a Gronwall-type inequality which will be necessary in the proof of the convergence theorem. By $(C([-\tau,0],\mathbb{R}))^*$ we denote the space of all linear and bounded functionals on $C([-\tau,0],\mathbb{R})$.

Lemma 2.1. Suppose that $\alpha^{(1)}, \alpha^{(2)}: [0,T] \to \mathcal{X}_{[0,T]}$ are continuous, $A^{(1)}, A^{(2)}: [0,T] \to (C([-\tau,0],\mathbb{R}))^*$ and there exists a nonnegative constant $M$ such that $\|A^{(i)}_t\|_F \le M$, $i = 1,2$, for $t \in [0,T]$. Then for a process $X_t$ satisfying the linear stochastic functional differential equation
\[
\begin{aligned}
dX_t &= \big[\alpha^{(1)}_t + A^{(1)}_t X_{t+\cdot}\big]\,dt + \big[\alpha^{(2)}_t + A^{(2)}_t X_{t+\cdot}\big]\,dB_t \quad\text{for } t \in [0,T],\\
X_t &= 0 \quad\text{for } t \in [-\tau,0],
\end{aligned}
\tag{2.7}
\]
we have
\[
\|X\|^2_{[0,t]} \le 4 e^{4M^2(t+4)t} \int_0^t \big( t\|\alpha^{(1)}\|^2_{[0,s]} + 4\|\alpha^{(2)}\|^2_{[0,s]} \big)\,ds \quad\text{for } t \in [0,T].
\]

Proof. Using the Schwarz inequality, the Itô isometry, the Doob martingale inequality and the fact that $(x+y)^2 \le 2(x^2+y^2)$, we have
\[
\begin{aligned}
E\big[\sup_{0\le s\le t} X_s^2\big]
&= E\Big[\sup_{0\le s\le t}\Big(\int_0^s [\alpha^{(1)}_r + A^{(1)}_r X_{r+\cdot}]\,dr + \int_0^s [\alpha^{(2)}_r + A^{(2)}_r X_{r+\cdot}]\,dB_r\Big)^2\Big]\\
&\le 2E\Big[\sup_{0\le s\le t}\Big(\int_0^s [\alpha^{(1)}_r + A^{(1)}_r X_{r+\cdot}]\,dr\Big)^2\Big]
+ 2E\Big[\sup_{0\le s\le t}\Big(\int_0^s [\alpha^{(2)}_r + A^{(2)}_r X_{r+\cdot}]\,dB_r\Big)^2\Big]\\
&\le 2t\,E\int_0^t [\alpha^{(1)}_s + A^{(1)}_s X_{s+\cdot}]^2\,ds + 2\cdot 2^2\, E\int_0^t [\alpha^{(2)}_s + A^{(2)}_s X_{s+\cdot}]^2\,ds\\
&\le 4t\int_0^t \big(E[\alpha^{(1)}_s]^2 + E[A^{(1)}_s X_{s+\cdot}]^2\big)\,ds + 16\int_0^t \big(E[\alpha^{(2)}_s]^2 + E[A^{(2)}_s X_{s+\cdot}]^2\big)\,ds.
\end{aligned}
\]
Notice that
\[
E[\alpha^{(i)}_t]^2 \le E\big[\sup_{0\le s\le t} (\alpha^{(i)}_s)^2\big] = \|\alpha^{(i)}\|^2_{[0,t]}, \quad i = 1,2,
\]
\[
E[A^{(i)}_t X_{t+\cdot}]^2 \le E\big[\|A^{(i)}_t\|_F^2 \sup_{0\le s\le t} |X_s|^2\big] \le M^2 E\big[\sup_{0\le s\le t} |X_s|^2\big] \le M^2 \|X\|^2_{[0,t]}.
\]
Hence
\[
\begin{aligned}
E\big[\sup_{0\le s\le t} X_s^2\big]
&\le 4t\int_0^t \big[\|\alpha^{(1)}\|^2_{[0,s]} + M^2\|X\|^2_{[0,s]}\big]\,ds + 16\int_0^t \big[\|\alpha^{(2)}\|^2_{[0,s]} + M^2\|X\|^2_{[0,s]}\big]\,ds\\
&= 4\int_0^t \big[t\|\alpha^{(1)}\|^2_{[0,s]} + 4\|\alpha^{(2)}\|^2_{[0,s]}\big]\,ds + 4M^2(t+4)\int_0^t \|X\|^2_{[0,s]}\,ds.
\end{aligned}
\]
Thus
\[
\|X\|^2_{[0,t]} \le 4\int_0^t \big[t\|\alpha^{(1)}\|^2_{[0,s]} + 4\|\alpha^{(2)}\|^2_{[0,s]}\big]\,ds + 4M^2(t+4)\int_0^t \|X\|^2_{[0,s]}\,ds.
\]
For a fixed $t_0$ such that $0 \le t_0 \le T$ we have
\[
\|X\|^2_{[0,t]} \le 4\int_0^{t_0} \big[t_0\|\alpha^{(1)}\|^2_{[0,s]} + 4\|\alpha^{(2)}\|^2_{[0,s]}\big]\,ds + 4M^2(t_0+4)\int_0^t \|X\|^2_{[0,s]}\,ds
\]
for $0 \le t \le t_0$. We apply the Gronwall inequality and obtain
\[
\|X\|^2_{[0,t]} \le 4 e^{4M^2(t_0+4)t} \int_0^{t_0} \big[t_0\|\alpha^{(1)}\|^2_{[0,s]} + 4\|\alpha^{(2)}\|^2_{[0,s]}\big]\,ds, \quad 0 \le t \le t_0.
\]
Since $t_0$ is fixed arbitrarily, we obtain
\[
\|X\|^2_{[0,t]} \le 4 e^{4M^2(t+4)t} \int_0^t \big[t\|\alpha^{(1)}\|^2_{[0,s]} + 4\|\alpha^{(2)}\|^2_{[0,s]}\big]\,ds, \quad t \in [0,T].
\]

3. A Convergence Theorem

For the Newton sequence $(X^{(k)})_{k\in\mathbb{N}}$ introduced in (2.3) we denote
\[
\Delta X^{(k)}_t = X^{(k+1)}_t - X^{(k)}_t.
\]
Observe that $\Delta X^{(k+1)}_t$ satisfies the stochastic functional differential equation
\[
\begin{aligned}
d\big(\Delta X^{(k+1)}_t\big)
&= \big\{b(t, X^{(k+1)}_{t+\cdot}) - b(t, X^{(k)}_{t+\cdot}) - b_w(t, X^{(k)}_{t+\cdot})\Delta X^{(k)}_{t+\cdot} + b_w(t, X^{(k+1)}_{t+\cdot})\Delta X^{(k+1)}_{t+\cdot}\big\}\,dt\\
&\quad + \big\{\sigma(t, X^{(k+1)}_{t+\cdot}) - \sigma(t, X^{(k)}_{t+\cdot}) - \sigma_w(t, X^{(k)}_{t+\cdot})\Delta X^{(k)}_{t+\cdot} + \sigma_w(t, X^{(k+1)}_{t+\cdot})\Delta X^{(k+1)}_{t+\cdot}\big\}\,dB_t
\end{aligned}
\tag{3.1}
\]
for $t \in [0,T]$.

Theorem 3.1. Suppose (2.4) holds. Then the Newton sequence $(X^{(k)})_{k\in\mathbb{N}}$ defined by (2.3) converges to the unique solution $X$ of equation (2.1) in the following sense:
\[
\lim_{k\to\infty} \|X^{(k)} - X\|_{[-\tau,T]} = 0.
\]

Proof. We show that $(X^{(k)})_{k\in\mathbb{N}}$ satisfies the Cauchy condition with respect to the norm $\|\cdot\|_{[-\tau,T]}$. From Lemma 2.1 and the Doob inequality we have
\[
\begin{aligned}
\|\Delta X^{(k+1)}\|^2_{[0,t]}
&\le 4 e^{4M^2(t+4)t} \int_0^t t\,E\Big[\sup_{0\le r\le s} \big|b(r, X^{(k+1)}_{r+\cdot}) - b(r, X^{(k)}_{r+\cdot}) - b_w(r, X^{(k)}_{r+\cdot})\Delta X^{(k)}_{r+\cdot}\big|^2\Big]\,ds\\
&\quad + 4 e^{4M^2(t+4)t} \int_0^t 4\,E\Big[\sup_{0\le r\le s} \big|\sigma(r, X^{(k+1)}_{r+\cdot}) - \sigma(r, X^{(k)}_{r+\cdot}) - \sigma_w(r, X^{(k)}_{r+\cdot})\Delta X^{(k)}_{r+\cdot}\big|^2\Big]\,ds.
\end{aligned}
\]
By the Lipschitz conditions for $b(t,w)$ and $\sigma(t,w)$,
\[
|b(t, X^{(k+1)}_{t+\cdot}) - b(t, X^{(k)}_{t+\cdot})| \le M \sup_{0\le s\le t} |\Delta X^{(k)}_s|, \tag{3.2}
\]
\[
|\sigma(t, X^{(k+1)}_{t+\cdot}) - \sigma(t, X^{(k)}_{t+\cdot})| \le M \sup_{0\le s\le t} |\Delta X^{(k)}_s|. \tag{3.3}
\]
We obtain the estimate
\[
\|\Delta X^{(k+1)}\|^2_{[0,t]} \le 16M^2(t+4) e^{4M^2(t+4)t} \int_0^t \|\Delta X^{(k)}\|^2_{[0,s]}\,ds.
\]
Since $t \le T$, we have
\[
\|\Delta X^{(k+1)}\|^2_{[0,t]} \le C \int_0^t \|\Delta X^{(k)}\|^2_{[0,s]}\,ds, \tag{3.4}
\]
where $C = 16M^2(T+4) e^{4M^2(T+4)T}$. The recursive use of (3.4) leads to
\[
\|\Delta X^{(k+1)}\|^2_{[0,t]} \le \frac{C^{k+1}}{(k+1)!}\,\|\Delta X^{(0)}\|^2_{[0,t]}.
\]
Thus
\[
\|X^{(k+p)} - X^{(k)}\|^2_{[0,t]} \le \Big(\frac{C^{k+p-1}}{(k+p-1)!} + \dots + \frac{C^{k}}{k!}\Big)\,\|\Delta X^{(0)}\|^2_{[0,t]}.
\]
We conclude that $(X^{(k)})_{k\in\mathbb{N}}$ is a Cauchy sequence in the Banach space $\mathcal{X}_{[-\tau,T]}$. Therefore it is convergent to $(X_t)_{t\in[-\tau,T]}$, which is a solution to equation (2.1). This completes the proof.

4. Probabilistic second-order convergence

Before formulating the main result, we need the following lemma.

Lemma 4.1.
Let $(g_t)_{t\in[0,T]} \in \mathcal{X}_{[0,T]}$ and let $\{A_t\}_{t\in[0,T]}$ be a family of $\mathcal{F}$-measurable subsets of $\Omega$ satisfying
\[
A_t \in \mathcal{F}_t, \qquad A_t \subset A_s \quad\text{for } 0 \le s \le t \le T.
\]
Then we have
\[
E\Big[\mathbf{1}_{A_t}\Big(\int_0^t g_s\,dB_s\Big)^2\Big] \le \int_0^t E[\mathbf{1}_{A_s} g_s^2]\,ds \tag{4.1}
\]
for any $0 \le t \le T$.

Proof. Since for $s \le t$ we have the implication $A_t \subset A_s \Rightarrow \mathbf{1}_{A_t} = \mathbf{1}_{A_t}\mathbf{1}_{A_s}$, and $\mathbf{1}_{A_s} g_s$ is a stochastically integrable function with respect to $s$, it follows from the definition of the stochastic integral that
\[
\mathbf{1}_{A_t}\Big(\int_0^t g_s\,dB_s\Big)^2 = \mathbf{1}_{A_t}\Big(\int_0^t \mathbf{1}_{A_s} g_s\,dB_s\Big)^2 \le \Big(\int_0^t \mathbf{1}_{A_s} g_s\,dB_s\Big)^2
\]
almost surely in $\Omega$ for any $0 \le t \le T$. By the Itô isometry,
\[
E\Big[\mathbf{1}_{A_t}\Big(\int_0^t g_s\,dB_s\Big)^2\Big] \le E\Big[\Big(\int_0^t \mathbf{1}_{A_s} g_s\,dB_s\Big)^2\Big] = \int_0^t E[\mathbf{1}_{A_s} g_s^2]\,ds.
\]
This completes the proof.

Remark 4.2. The proof of Lemma 4.1 was proposed by the referee. Our former proof of this lemma was based on the duality property of the Malliavin calculus (see Definition 1.3.1(ii) in [12]), the product rule for the Malliavin derivative (see Theorem 3.4 in [6]), Proposition 1.2.6 in [12] and Proposition 1.3.8 in [12]. It also resulted in an interesting extension of the Itô isometry:
\[
E\Big[\mathbf{1}_{A_t}\Big(\int_0^t g_s\,dB_s\Big)^2\Big] = \int_0^t E[\mathbf{1}_{A_t} g_s^2]\,ds.
\]

As in the previous sections we assume that $(X^{(k)})_{k\in\mathbb{N}}$ is the Newton sequence, where $b, \sigma: [0,T] \times C([-\tau,0],\mathbb{R}) \to \mathbb{R}$ are deterministic Fréchet differentiable functions, the initial function $\varphi \in \mathcal{X}_{[-\tau,0]}$ is a deterministic process, the Fréchet derivatives $b_w(t,w), \sigma_w(t,w)$ belong to $(C([-\tau,0],\mathbb{R}))^*$ and condition (2.4) is satisfied.

Theorem 4.3. Suppose that the above assumptions are satisfied and there exists a nonnegative constant $L$ such that
\[
\|b_w(t,w) - b_w(t,\bar w)\|_F \le L \sup_{-\tau\le s\le 0} |w(s) - \bar w(s)|, \tag{4.2}
\]
\[
\|\sigma_w(t,w) - \sigma_w(t,\bar w)\|_F \le L \sup_{-\tau\le s\le 0} |w(s) - \bar w(s)| \tag{4.3}
\]
for all $w, \bar w \in C([-\tau,0],\mathbb{R})$. Then for any $T > 0$ there exists a nonnegative constant $C$ such that
\[
P\Big(\sup_{0\le t\le T} |\Delta X^{(k)}_t| \le \rho \;\Rightarrow\; \sup_{0\le t\le T} |\Delta X^{(k+1)}_t| \le R\rho^2\Big) \ge 1 - CTR^{-2}
\]
for all $R > 0$, $0 < \rho \le 1$, $k = 0, 1, 2, \dots$.

Proof. Define the sets
\[
A^{(k)}_{\rho,t} = \Big\{\omega : \sup_{0\le s\le t} |\Delta X^{(k)}_s| \le \rho\Big\} \quad\text{for } 0 < \rho \le 1,\ 0 \le t \le T,\ k = 0, 1, 2, \dots.
\]
We consider the sequence $(\Delta X^{(k)})_{k\in\mathbb{N}}$ restricted to the sets $A^{(k)}_{\rho,t}$. For this reason we multiply equation (3.1) by $\mathbf{1}_{A^{(k)}_{\rho,t}}$, the characteristic function of the set $A^{(k)}_{\rho,t}$, and obtain
\[
\begin{aligned}
\mathbf{1}_{A^{(k)}_{\rho,t}} \Delta X^{(k+1)}_t
&= \mathbf{1}_{A^{(k)}_{\rho,t}} \int_0^t \big[b(s, X^{(k+1)}_{s+\cdot}) - b(s, X^{(k)}_{s+\cdot}) - b_w(s, X^{(k)}_{s+\cdot})\Delta X^{(k)}_{s+\cdot}\big]\,ds\\
&\quad + \mathbf{1}_{A^{(k)}_{\rho,t}} \int_0^t b_w(s, X^{(k+1)}_{s+\cdot})\Delta X^{(k+1)}_{s+\cdot}\,ds\\
&\quad + \mathbf{1}_{A^{(k)}_{\rho,t}} \int_0^t \big[\sigma(s, X^{(k+1)}_{s+\cdot}) - \sigma(s, X^{(k)}_{s+\cdot}) - \sigma_w(s, X^{(k)}_{s+\cdot})\Delta X^{(k)}_{s+\cdot}\big]\,dB_s\\
&\quad + \mathbf{1}_{A^{(k)}_{\rho,t}} \int_0^t \sigma_w(s, X^{(k+1)}_{s+\cdot})\Delta X^{(k+1)}_{s+\cdot}\,dB_s.
\end{aligned}
\]
Applying the Doob inequality, Lemma 4.1 and the fact that $t \mapsto A^{(k)}_{\rho,t}$ is nonincreasing, we obtain
\[
\begin{aligned}
\|\mathbf{1}_{A^{(k)}_{\rho,t}} \Delta X^{(k+1)}\|^2_{[0,t]}
&\le 16t \int_0^t E\big[\mathbf{1}_{A^{(k)}_{\rho,t}} |b(s, X^{(k+1)}_{s+\cdot}) - b(s, X^{(k)}_{s+\cdot}) - b_w(s, X^{(k)}_{s+\cdot})\Delta X^{(k)}_{s+\cdot}|^2\big]\,ds\\
&\quad + 16t \int_0^t E\big[\mathbf{1}_{A^{(k)}_{\rho,t}} |b_w(s, X^{(k+1)}_{s+\cdot})\Delta X^{(k+1)}_{s+\cdot}|^2\big]\,ds\\
&\quad + 16\,E\Big[\mathbf{1}_{A^{(k)}_{\rho,t}} \Big|\int_0^t \big(\sigma(s, X^{(k+1)}_{s+\cdot}) - \sigma(s, X^{(k)}_{s+\cdot}) - \sigma_w(s, X^{(k)}_{s+\cdot})\Delta X^{(k)}_{s+\cdot}\big)\,dB_s\Big|^2\Big]\\
&\quad + 16\,E\Big[\mathbf{1}_{A^{(k)}_{\rho,t}} \Big|\int_0^t \sigma_w(s, X^{(k+1)}_{s+\cdot})\Delta X^{(k+1)}_{s+\cdot}\,dB_s\Big|^2\Big]\\
&\le 16t \int_0^t E\big[\mathbf{1}_{A^{(k)}_{\rho,s}} |b(s, X^{(k+1)}_{s+\cdot}) - b(s, X^{(k)}_{s+\cdot}) - b_w(s, X^{(k)}_{s+\cdot})\Delta X^{(k)}_{s+\cdot}|^2\big]\,ds\\
&\quad + 16t \int_0^t E\big[\mathbf{1}_{A^{(k)}_{\rho,s}} |b_w(s, X^{(k+1)}_{s+\cdot})\Delta X^{(k+1)}_{s+\cdot}|^2\big]\,ds\\
&\quad + 16 \int_0^t E\big[\mathbf{1}_{A^{(k)}_{\rho,s}} |\sigma(s, X^{(k+1)}_{s+\cdot}) - \sigma(s, X^{(k)}_{s+\cdot}) - \sigma_w(s, X^{(k)}_{s+\cdot})\Delta X^{(k)}_{s+\cdot}|^2\big]\,ds\\
&\quad + 16 \int_0^t E\big[\mathbf{1}_{A^{(k)}_{\rho,s}} |\sigma_w(s, X^{(k+1)}_{s+\cdot})\Delta X^{(k+1)}_{s+\cdot}|^2\big]\,ds.
\end{aligned}
\]
From (4.2) and (4.3) we have
\[
\|b_w(t, X^{(k+1)}_{t+\cdot}) - b_w(t, X^{(k)}_{t+\cdot})\|_F \le L \sup_{0\le s\le t} |\Delta X^{(k)}_s|, \tag{4.4}
\]
\[
\|\sigma_w(t, X^{(k+1)}_{t+\cdot}) - \sigma_w(t, X^{(k)}_{t+\cdot})\|_F \le L \sup_{0\le s\le t} |\Delta X^{(k)}_s|.
\]
From the fundamental theorem of calculus it follows that
\[
\begin{aligned}
\big|b(r, X^{(k+1)}_{r+\cdot}) - b(r, X^{(k)}_{r+\cdot}) - b_w(r, X^{(k)}_{r+\cdot})\Delta X^{(k)}_{r+\cdot}\big|
&\le \int_0^1 \|b_w(r, X^{(k)}_{r+\cdot} + \theta\Delta X^{(k)}_{r+\cdot}) - b_w(r, X^{(k)}_{r+\cdot})\|_F\,d\theta\,\sup_{0\le s\le r} |\Delta X^{(k)}_s|\\
&\le \frac{1}{2} L \sup_{0\le s\le r} |\Delta X^{(k)}_s|^2.
\end{aligned}
\]
Similarly, we have
\[
\big|\sigma(r, X^{(k+1)}_{r+\cdot}) - \sigma(r, X^{(k)}_{r+\cdot}) - \sigma_w(r, X^{(k)}_{r+\cdot})\Delta X^{(k)}_{r+\cdot}\big| \le \frac{1}{2} L \sup_{0\le s\le r} |\Delta X^{(k)}_s|^2.
\]
Consequently, we have the estimate
\[
\begin{aligned}
\|\mathbf{1}_{A^{(k)}_{\rho,t}} \Delta X^{(k+1)}\|^2_{[0,t]}
&\le 16t \int_0^t E\Big[\mathbf{1}_{A^{(k)}_{\rho,s}} \frac{1}{4}L^2 \sup_{0\le r\le s} |\Delta X^{(k)}_r|^4\Big]\,ds
+ 16t \int_0^t E\big[\mathbf{1}_{A^{(k)}_{\rho,s}} |b_w(s, X^{(k+1)}_{s+\cdot})\Delta X^{(k+1)}_{s+\cdot}|^2\big]\,ds\\
&\quad + 16 \int_0^t E\Big[\mathbf{1}_{A^{(k)}_{\rho,s}} \frac{1}{4}L^2 \sup_{0\le r\le s} |\Delta X^{(k)}_r|^4\Big]\,ds
+ 16 \int_0^t E\big[\mathbf{1}_{A^{(k)}_{\rho,s}} |\sigma_w(s, X^{(k+1)}_{s+\cdot})\Delta X^{(k+1)}_{s+\cdot}|^2\big]\,ds.
\end{aligned}
\]
Recall that $|\Delta X^{(k)}_r| \le \rho$ on $A^{(k)}_{\rho,s}$ for $0 \le r \le s$. From (2.4) we have
\[
\begin{aligned}
\|\mathbf{1}_{A^{(k)}_{\rho,t}} \Delta X^{(k+1)}\|^2_{[0,t]}
&\le 4(t+1)L^2 t\rho^4 + 16(t+1)M^2 \int_0^t E\Big[\mathbf{1}_{A^{(k)}_{\rho,s}} \sup_{0\le r\le s} |\Delta X^{(k+1)}_r|^2\Big]\,ds\\
&\le 4(T+1)tL^2\rho^4 + 16(T+1)M^2 \int_0^t \|\mathbf{1}_{A^{(k)}_{\rho,s}} \Delta X^{(k+1)}\|^2_{[0,s]}\,ds.
\end{aligned}
\]
We apply the Gronwall inequality and obtain
\[
\|\mathbf{1}_{A^{(k)}_{\rho,t}} \Delta X^{(k+1)}\|^2_{[0,t]} \le 4(T+1)tL^2\rho^4 e^{16T(T+1)M^2}.
\]
The Chebyshev inequality yields
\[
\begin{aligned}
P\Big(\sup_{0\le s\le t} |\Delta X^{(k)}_s| \le \rho \;\wedge\; \sup_{0\le s\le t} |\Delta X^{(k+1)}_s| > R\rho^2\Big)
&= P\Big(\mathbf{1}_{A^{(k)}_{\rho,t}} \sup_{0\le s\le t} |\Delta X^{(k+1)}_s| > R\rho^2\Big)\\
&\le \frac{1}{R^2\rho^4}\,\|\mathbf{1}_{A^{(k)}_{\rho,t}} \Delta X^{(k+1)}\|^2_{[0,t]}\\
&\le \frac{1}{R^2\rho^4}\,4(T+1)tL^2\rho^4 e^{16T(T+1)M^2} = CtR^{-2}.
\end{aligned}
\]
Hence for any $R > 0$ we have
\[
P\Big(\sup_{0\le s\le t} |\Delta X^{(k)}_s| \le \rho \;\Rightarrow\; \sup_{0\le s\le t} |\Delta X^{(k+1)}_s| \le R\rho^2\Big) \ge 1 - CtR^{-2}. \tag{4.5}
\]
Thus the assertion is true.

Remark 4.4. Our study is devoted to the case of deterministic functionals $b$ and $\sigma$. Our result can be extended to the more complicated case of $b$ and $\sigma$ depending on non-anticipative stochastic processes.

Remark 4.5. If $P\big(\sup_{0\le t\le T} |\Delta X^{(k)}_t| \le \rho\big) > 0$, then the probability of the implication
\[
\sup_{0\le t\le T} |\Delta X^{(k)}_t| \le \rho \;\Rightarrow\; \sup_{0\le t\le T} |\Delta X^{(k+1)}_t| \le R\rho^2
\]
can be expressed in terms of conditional probability:
\[
P\Big(\sup_{0\le t\le T} |\Delta X^{(k+1)}_t| \le R\rho^2 \;\Big|\; \sup_{0\le t\le T} |\Delta X^{(k)}_t| \le \rho\Big) \ge 1 - CTR^{-2},
\]
which is more intuitive.

An example. Let $b(t,w) = 0$ and $\sigma(t,w) = \arctan w(-\tfrac{t}{2})$. The stochastic functional differential equation is of the form
\[
\begin{aligned}
dX_t &= \arctan X_{t/2}\,dB_t \quad\text{for } t \in [0,1],\\
X_0 &= 1.
\end{aligned}
\tag{4.6}
\]
The corresponding Newton scheme is
\[
\begin{aligned}
dX^{(k+1)}_t &= \Big[\arctan X^{(k)}_{t/2} + \frac{1}{1 + \big(X^{(k)}_{t/2}\big)^2}\,\Delta X^{(k)}_{t/2}\Big]\,dB_t \quad\text{for } t \in [0,1],\\
X^{(k+1)}_0 &= 1.
\end{aligned}
\]
The Lipschitz condition (2.6) for $\sigma(t,w)$ is satisfied with $M = 1$; therefore, by Theorem 3.1, the Newton sequence converges to the solution of (4.6). We have the estimate
\[
\|\Delta X^{(k)}\|^2_{[0,t]} \le \frac{(64e^{20})^k}{k!}\,\|\Delta X^{(0)}\|^2_{[0,t]}
= \frac{(64e^{20})^k}{k!}\,\Big\|\frac{\pi}{4}B_\cdot - 1\Big\|^2_{[0,t]}
\le 8\,\frac{(64e^{20})^k}{k!}\,E\Big[\frac{\pi^2}{16}B_t^2 + 1\Big]
\le \frac{(64e^{20})^k(\pi^2 + 16)}{2\,k!}.
\]
Since $\sigma_w(t,w)$ satisfies the Lipschitz condition (4.3) with $L = 1$, by Theorem 4.3 the second-order convergence is obtained:
\[
P\Big(\sup_{0\le t\le 1} |\Delta X^{(k)}_t| \le \rho \;\Rightarrow\; \sup_{0\le t\le 1} |\Delta X^{(k+1)}_t| \le R\rho^2\Big) \ge 1 - 8e^{32}R^{-2}
\]
for all $R > 0$, $0 < \rho \le 1$, $k = 0, 1, 2, \dots$.
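The Newton iterates for this example are easy to simulate, since $\sigma_w(t,w)$ applied to a direction $\bar w$ is simply $\bar w(-t/2)/(1 + w(-t/2)^2)$. The sketch below is illustrative only and not part of the original analysis: it runs a few Newton iterates of (4.6) along one fixed Brownian path on a uniform Euler-Maruyama grid, starting from the constant guess $X^{(0)} \equiv 1$ (one admissible choice, not prescribed above), and prints $\sup_t |\Delta X^{(k)}_t|$, which should decay rapidly in line with the second-order behaviour of Theorem 4.3.

\begin{verbatim}
# Illustrative simulation of the Newton scheme for dX_t = arctan(X_{t/2}) dB_t,
# X_0 = 1, on [0, 1]; the value X_{t/2} is approximated by the grid point i // 2.

import numpy as np

def newton_iterates(n_steps=1000, n_newton=6, seed=0):
    rng = np.random.default_rng(seed)
    h = 1.0 / n_steps
    dB = rng.normal(0.0, np.sqrt(h), n_steps)   # one fixed Brownian path
    x_prev = np.ones(n_steps + 1)               # initial guess X^(0) = 1
    for k in range(n_newton):
        x_new = np.empty(n_steps + 1)
        x_new[0] = 1.0                          # initial condition
        for i in range(n_steps):
            j = i // 2                          # grid index approximating t_i / 2
            corr = (x_new[j] - x_prev[j]) / (1.0 + x_prev[j] ** 2)
            # x_new[j] with j <= i is already known, so the step is explicit
            x_new[i + 1] = x_new[i] + (np.arctan(x_prev[j]) + corr) * dB[i]
        print(f"k = {k}: sup_t |Delta X^({k})_t| = {np.abs(x_new - x_prev).max():.3e}")
        x_prev = x_new
    return x_prev

if __name__ == "__main__":
    newton_iterates()
\end{verbatim}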
Acknowledgements. The author is greatly indebted to the anonymous referee for the valuable comments and suggestions, which improved, in particular, the main results of Section 4. The elegant and short proof of Lemma 4.1 was also proposed by the referee; the original proof was one page long.

References

[1] Amano K.; A stochastic Gronwall inequality and its applications, JIPAM. J. Inequal. Pure Appl. Math. 6 (2005), no. 1, Article 17 (electronic).
[2] Amano K.; A note on Newton's method for stochastic differential equations and its error estimate, Proc. Japan Acad. 84 (2008), Ser. A, 1-3.
[3] Amano K.; Newton's method for stochastic differential equations and its probabilistic second-order error estimate, Electron. J. Differential Equations, Vol. 2012 (2012), No. 03, pp. 1-8.
[4] Azbelev N. V., Maksimov V. P., Rakhmatullina L. F.; Introduction to the Theory of Linear Functional Differential Equations, World Federation Publishers Company, Atlanta, 1996.
[5] Chaplygin S. A.; Collected Papers on Mechanics and Mathematics, Moscow, 1954.
[6] Di Nunno G., Øksendal B., Proske F.; Malliavin Calculus for Lévy Processes with Applications to Finance, Universitext, Springer, 2009.
[7] Kantorovich L. V., Akilov G. P.; Functional Analysis in Normed Spaces, Pergamon, Oxford, 1964.
[8] Kawabata S., Yamada T.; On Newton's method for stochastic differential equations, in Séminaire de Probabilités XXV, pp. 121-137, Lecture Notes in Math. 1485, Springer, Berlin, 1991.
[9] Lakshmikantham V., Vatsala A. S.; Generalized Quasilinearization for Nonlinear Problems, Kluwer Academic Publishers, 1998.
[10] Mao X.; Solutions of stochastic differential-functional equations via bounded stochastic integral contractors, J. Theoret. Probab. 5 (1992), no. 3, 487-502.
[11] Mohammed S. E. A.; Stochastic Functional Differential Equations, Pitman Publishing Program, Boston, 1984.
[12] Nualart D.; The Malliavin Calculus and Related Topics, Springer, 2006.
[13] Ponosov A. V.; A fixed-point method in the theory of stochastic differential equations, Soviet Math. Dokl. 37 (1988), no. 2, 426-429.
[14] Turo J.; Study to a class of stochastic functional-differential equations using integral contractors, Functional Differential Equations 6 (1999), no. 1-2, 177-189.
[15] Wang P., Wu Y., Wiwatanapaphee B.; An extension of method of quasilinearization for integro-differential equations, Int. J. Pure Appl. Math. 54 (2009), no. 1, 27-37.

Monika Wrzosek
Institute of Mathematics, University of Gdańsk, Wita Stwosza 57, 80-952 Gdańsk, Poland
E-mail address: Monika.Wrzosek@mat.ug.edu.pl