Hindawi Publishing Corporation
Journal of Applied Mathematics
Volume 2013, Article ID 634670, 9 pages
http://dx.doi.org/10.1155/2013/634670
Research Article
Convergence of Variational Iteration Method for
Second-Order Delay Differential Equations
Hongliang Liu, Aiguo Xiao, and Lihong Su
Hunan Key Laboratory for Computation and Simulation in Science and Engineering and Key Laboratory of Intelligent Computing and
Information Processing of Ministry of Education, Xiangtan University, Xiangtan, Hunan 411105, China
Correspondence should be addressed to Hongliang Liu; lhl@xtu.edu.cn
Received 24 October 2012; Revised 17 December 2012; Accepted 17 December 2012
Academic Editor: Changbum Chun
Copyright © 2013 Hongliang Liu et al. This is an open access article distributed under the Creative Commons Attribution License,
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
This paper employs the variational iteration method to obtain analytical solutions of second-order delay differential equations.
The corresponding convergence results are obtained, and an effective technique for choosing a reasonable Lagrange multiplier is
designed in the solving process. Moreover, some illustrative examples are given to show the efficiency of this method.
1. Introduction
Second-order delay differential equations often appear in dynamical systems, celestial mechanics, kinematics, and so forth. Some numerical methods for solving second-order delay differential equations have been discussed, including the θ-method [1], the trapezoidal method [2], and the Runge-Kutta-Nyström method [3]. The variational iteration method (VIM) was first proposed by He [4, 5] and has been extensively applied due to its flexibility, convenience, and efficiency. So far, the VIM has been applied to autonomous ordinary differential systems [6], pantograph equations [7], integral equations [8], delay differential equations [9], fractional differential equations [10], singular perturbation problems [11], and delay differential-algebraic equations [12]. Rafei et al. [13] and Marinca et al. [14] applied the VIM to oscillation problems. Tatari and Dehghan [15] considered the VIM for solving second-order initial value problems. For a more comprehensive survey of this method and its applications, the reader is referred to the review articles [16-19] and the references therein. However, the VIM for second-order delay differential equations has not yet been considered.

This article applies the VIM to second-order delay differential equations to obtain analytical or approximate analytical solutions. The corresponding convergence results are obtained. Some illustrative examples confirm the theoretical results.
2. Convergence

2.1. The First Kind of Second-Order Delay Differential Equations. Consider the initial value problems of second-order delay differential equations
$$ y''(t) = f(t, y(t), y(\alpha(t))), \quad t \in [0, T], $$
$$ y'(t) = \varphi'(t), \quad t \in [-\tau, 0], $$
$$ y(t) = \varphi(t), \quad t \in [-\tau, 0], \tag{1} $$
where $\varphi(t)$ is a differentiable function, $\alpha(t) \in C^1[0, T]$ is a strictly monotone increasing function and satisfies $-\tau \le \alpha(t) \le t$ and $\alpha(0) = -\tau$, there exists $t_1 \in [0, T]$ such that $\alpha(t_1) = 0$, and $f : D = [0, T] \times R \times R \to R$ is a given continuous mapping and satisfies the Lipschitz condition
$$ \|f(t, u_1, v) - f(t, u_2, v)\| \le \beta_0 \|u_1 - u_2\|, $$
$$ \|f(t, u, v_1) - f(t, u, v_2)\| \le \beta_1 \|v_1 - v_2\|, \tag{2} $$
where $\beta_0, \beta_1$ are Lipschitz constants and $\|\cdot\|$ denotes the standard Euclidean norm.
Now the VIM for (1) reads
$$ y_{m+1}(t) = y_m(t) + \int_0^t \lambda(t,\xi)\,[\,y_m''(\xi) - \tilde f(\xi, y_m(\xi), \varphi(\alpha(\xi)))\,]\,d\xi, \quad 0 < t < t_1; \tag{3} $$
$$ y_{m+1}(t) = y_m(t) + \int_0^{t_1} \lambda(t,\xi)\,[\,y_m''(\xi) - \tilde f(\xi, y_m(\xi), \varphi(\alpha(\xi)))\,]\,d\xi + \int_{t_1}^{t} \lambda(t,\xi)\,[\,y_m''(\xi) - \tilde f(\xi, y_m(\xi), y_m(\alpha(\xi)))\,]\,d\xi, \quad t > t_1, \tag{4} $$
where $y_m(t) = \varphi(t)$ for $t \in [-\tau, 0]$, and $\tilde f$ denotes the restricted variation, that is, $\delta \tilde f = 0$. Thus, we have
$$ \delta y_{m+1}(t) = \delta y_m(t) + \delta \int_0^t \lambda(t,\xi)\,[\,y_m''(\xi) - \tilde f(\xi, y_m(\xi), \varphi(\alpha(\xi)))\,]\,d\xi = \delta y_m(t) + \int_0^t \lambda(t,\xi)\,\delta y_m''(\xi)\,d\xi, \quad 0 < t < t_1. \tag{5} $$
Using integration by parts in (4), we have
$$ \delta y_{m+1}(t) = \delta y_m(t) + \delta \int_0^{t_1} \lambda(t,\xi)\,[\,y_m''(\xi) - \tilde f(\xi, y_m(\xi), \varphi(\alpha(\xi)))\,]\,d\xi + \delta \int_{t_1}^{t} \lambda(t,\xi)\,[\,y_m''(\xi) - \tilde f(\xi, y_m(\xi), y_m(\alpha(\xi)))\,]\,d\xi = \delta y_m(t) + \lambda(t,\xi)\,\delta y_m'(\xi)\big|_{\xi=t} - \frac{\partial \lambda(t,\xi)}{\partial \xi}\,\delta y_m(\xi)\Big|_{\xi=t} + \int_0^t \frac{\partial^2 \lambda(t,\xi)}{\partial \xi^2}\,\delta y_m(\xi)\,d\xi. \tag{6} $$
From the above formula, the stationary conditions are obtained as
$$ \frac{\partial^2 \lambda(t,\xi)}{\partial \xi^2} = 0, \qquad 1 - \frac{\partial \lambda(t,\xi)}{\partial \xi}\Big|_{\xi=t} = 0, \qquad \lambda(t,\xi)\big|_{\xi=t} = 0. \tag{7} $$
Moreover, the general Lagrange multiplier
$$ \lambda(t,\xi) = \xi - t \tag{8} $$
can be readily identified from (7). Thus, the variational iteration formula can be written as
$$ y_{m+1}(t) = y_m(t) + \int_0^t (\xi - t)\,[\,y_m''(\xi) - f(\xi, y_m(\xi), \varphi(\alpha(\xi)))\,]\,d\xi, \quad 0 < t < t_1; \tag{9} $$
$$ y_{m+1}(t) = y_m(t) + \int_0^{t_1} (\xi - t)\,[\,y_m''(\xi) - f(\xi, y_m(\xi), \varphi(\alpha(\xi)))\,]\,d\xi + \int_{t_1}^{t} (\xi - t)\,[\,y_m''(\xi) - f(\xi, y_m(\xi), y_m(\alpha(\xi)))\,]\,d\xi, \quad t > t_1. \tag{10} $$
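Formulas (9) and (10) are directly implementable in a computer algebra system. The following is a minimal SymPy sketch (an illustration under our own naming; the function name and the sample data are hypothetical and not from the paper) of one correction step of (9) on $0 < t < t_1$.

```python
import sympy as sp

t, xi = sp.symbols('t xi')

def vim_step_first_kind(y_m, f, phi, alpha):
    """One step of (9):
    y_{m+1}(t) = y_m(t) + int_0^t (xi - t) * [y_m''(xi) - f(xi, y_m(xi), phi(alpha(xi)))] dxi,
    valid for 0 < t < t_1, where the delayed argument still lies in the history interval."""
    bracket = sp.diff(y_m, t, 2).subs(t, xi) - f(xi, y_m.subs(t, xi), phi(alpha(xi)))
    return sp.simplify(y_m + sp.integrate((xi - t) * bracket, (xi, 0, t)))

# Sample data: f(t, u, v) = v, history phi(t) = t, delay alpha(t) = t - 1 (so t_1 = 1),
# starting from y_0(t) = t.  One step gives t - t**2/2 + t**3/6.
print(vim_step_first_kind(t, lambda s, u, v: v, lambda s: s, lambda s: s - 1))
```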
Theorem 1. Suppose that the initial value problem (1) satisfies the condition (2), and $y(t), y_i(t) \in C^2[0, T]$, $i = 1, 2, \ldots$. Then the sequence $\{y_m(t)\}_{m=1}^{\infty}$ defined by (9) and (10) with the initial approximation $y_0(t)$ converges to the solution of (1).

Proof. From (1), we have
$$ y(t) = y(t) + \int_0^t (\xi - t)\,[\,y''(\xi) - f(\xi, y(\xi), \varphi(\alpha(\xi)))\,]\,d\xi, \quad 0 < t < t_1; \tag{11} $$
$$ y(t) = y(t) + \int_0^{t_1} (\xi - t)\,[\,y''(\xi) - f(\xi, y(\xi), \varphi(\alpha(\xi)))\,]\,d\xi + \int_{t_1}^{t} (\xi - t)\,[\,y''(\xi) - f(\xi, y(\xi), y(\alpha(\xi)))\,]\,d\xi, \quad t > t_1. \tag{12} $$
Let $E_i(t) = y_i(t) - y(t)$, $i = 0, 1, \ldots$. If $t \le 0$, then $E_i(t) = 0$, $i = 0, 1, \ldots$. From (9) and (11), we have
$$ E_{m+1}(t) = E_m(t) + \int_0^t (\xi - t)\,[\,E_m''(\xi) - (f(\xi, y_m(\xi), \varphi(\alpha(\xi))) - f(\xi, y(\xi), \varphi(\alpha(\xi))))\,]\,d\xi, \quad 0 < t < t_1. \tag{13} $$
From (10) and (12), we have
$$ E_{m+1}(t) = E_m(t) + \int_0^{t_1} (\xi - t)\,[\,E_m''(\xi) - (f(\xi, y_m(\xi), \varphi(\alpha(\xi))) - f(\xi, y(\xi), \varphi(\alpha(\xi))))\,]\,d\xi + \int_{t_1}^{t} (\xi - t)\,[\,E_m''(\xi) - (f(\xi, y_m(\xi), y_m(\alpha(\xi))) - f(\xi, y(\xi), y(\alpha(\xi))))\,]\,d\xi, \quad t > t_1. \tag{14} $$
Using integration by parts and noting that $E_m(0) = E_m'(0) = 0$, we have
$$ E_{m+1}(t) = E_m(t) + \int_0^t (\xi - t)\,E_m''(\xi)\,d\xi - \int_0^t (\xi - t)\,[\,f(\xi, y_m(\xi), \varphi(\alpha(\xi))) - f(\xi, y(\xi), \varphi(\alpha(\xi)))\,]\,d\xi = -\int_0^t (\xi - t)\,[\,f(\xi, y_m(\xi), \varphi(\alpha(\xi))) - f(\xi, y(\xi), \varphi(\alpha(\xi)))\,]\,d\xi, \quad 0 < t < t_1; $$
$$ E_{m+1}(t) = E_m(t) + \int_0^t (\xi - t)\,E_m''(\xi)\,d\xi - \int_0^{t_1} (\xi - t)\,[\,f(\xi, y_m(\xi), \varphi(\alpha(\xi))) - f(\xi, y(\xi), \varphi(\alpha(\xi)))\,]\,d\xi - \int_{t_1}^{t} (\xi - t)\,[\,f(\xi, y_m(\xi), y_m(\alpha(\xi))) - f(\xi, y(\xi), y(\alpha(\xi)))\,]\,d\xi = -\int_0^{t_1} (\xi - t)\,[\,f(\xi, y_m(\xi), \varphi(\alpha(\xi))) - f(\xi, y(\xi), \varphi(\alpha(\xi)))\,]\,d\xi - \int_{t_1}^{t} (\xi - t)\,[\,f(\xi, y_m(\xi), y_m(\alpha(\xi))) - f(\xi, y(\xi), y(\alpha(\xi)))\,]\,d\xi, \quad t > t_1. \tag{15} $$
Since $[\alpha^{-1}(t)]'$ is bounded, $M = \max_{-\tau \le \xi \le \alpha(T)} (\alpha^{-1}(\xi))'$ is finite. Moreover, it follows from (2) and the inequality $|\xi - t| \le T$ that
$$ \|E_{m+1}(t)\| \le \int_0^t |t - \xi|\,\|f(\xi, y_m(\xi), \varphi(\alpha(\xi))) - f(\xi, y(\xi), \varphi(\alpha(\xi)))\|\,d\xi \le \int_0^t T \beta_0 \|y_m(\xi) - y(\xi)\|\,d\xi = \int_0^t T \beta_0 \|E_m(\xi)\|\,d\xi, \quad 0 < t < t_1; \tag{16} $$
$$ \|E_{m+1}(t)\| \le \int_0^{t_1} |t - \xi|\,\|f(\xi, y_m(\xi), \varphi(\alpha(\xi))) - f(\xi, y(\xi), \varphi(\alpha(\xi)))\|\,d\xi + \int_{t_1}^{t} |t - \xi|\,\|f(\xi, y_m(\xi), y_m(\alpha(\xi))) - f(\xi, y(\xi), y(\alpha(\xi)))\|\,d\xi \le \int_0^{t_1} T \beta_0 \|y_m(\xi) - y(\xi)\|\,d\xi + \int_{t_1}^{t} T\,(\beta_0 \|E_m(\xi)\| + \beta_1 \|E_m(\alpha(\xi))\|)\,d\xi = \int_0^t T \beta_0 \|E_m(\xi)\|\,d\xi + \int_{t_1}^{t} T \beta_1 \|E_m(\alpha(\xi))\|\,d\xi = \int_0^t T \beta_0 \|E_m(\xi)\|\,d\xi + \int_{\alpha(t_1)}^{\alpha(t)} T \beta_1 \|E_m(\xi)\|\,(\alpha^{-1}(\xi))'\,d\xi \le T M \beta \int_0^t \|E_m(\xi)\|\,d\xi, \quad t > t_1, \tag{17} $$
where $\beta = \max\{\beta_0, \beta_1\}$. Moreover,
$$ \|E_{m+1}(t)\| \le (T M \beta)^2 \int_0^t \int_0^{s_1} \|E_{m-1}(s_2)\|\,ds_2\,ds_1 \le (T M \beta)^3 \int_0^t \int_0^{s_1} \int_0^{s_2} \|E_{m-2}(s_3)\|\,ds_3\,ds_2\,ds_1 \le (T M \beta)^4 \int_0^t \int_0^{s_1} \int_0^{s_2} \int_0^{s_3} \|E_{m-3}(s_4)\|\,ds_4\,ds_3\,ds_2\,ds_1 \le \cdots \le (T M \beta)^{m+1} \int_0^t \int_0^{s_1} \cdots \int_0^{s_m} \|E_0(s_{m+1})\|\,ds_{m+1} \cdots ds_2\,ds_1, \tag{18} $$
where $\|E_0(t)\|$ is bounded. Therefore, we have
$$ \|E_{m+1}(t)\| \le \|E_0(t)\|\,\frac{(T M \beta)^{m+1}}{(m+1)!} \longrightarrow 0 \quad (m \longrightarrow \infty). \tag{19} $$
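The passage from (18) to (19) rests on a standard iterated-integral identity; a short verification, not spelled out in the text above, is
$$ \int_0^t \int_0^{s_1} \cdots \int_0^{s_m} ds_{m+1}\,\cdots\,ds_2\,ds_1 = \frac{t^{m+1}}{(m+1)!}, $$
which follows by induction on $m$: the inner $m$-fold integral equals $s_1^{m}/m!$, and $\int_0^t s_1^{m}/m!\,ds_1 = t^{m+1}/(m+1)!$. Since $\|E_0\|$ is bounded on $[0, T]$ and $t \le T$, the right-hand side of (18) is dominated by a constant multiple of $(T M \beta)^{m+1}/(m+1)!$ up to a factor depending only on $T$, and the factorial growth of the denominator drives the bound to zero as $m \to \infty$.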
2.2. The Second Kind of Second-Order Delay Differential Equations. Consider the initial value problems of second-order delay oscillation differential equations
$$ y''(t) = -\omega^2 y(t) - f(t, y(t), y(\alpha(t))), \quad t \in [0, T], $$
$$ y'(t) = \varphi'(t), \quad t \in [-\tau, 0], $$
$$ y(t) = \varphi(t), \quad t \in [-\tau, 0], \tag{20} $$
where $\varphi(t)$ is a differentiable function, $\alpha(t) \in C^1[0, T]$ is a strictly monotone increasing function and satisfies $-\tau \le \alpha(t) \le t$ and $\alpha(0) = -\tau$, there exists $t_1 \in [0, T]$ such that $\alpha(t_1) = 0$, $\omega$ is a constant, and $f : D = [0, T] \times R \times R \to R$ is a given continuous mapping and satisfies the Lipschitz condition
$$ \|f(t, u_1, v) - f(t, u_2, v)\| \le \kappa_0 \|u_1 - u_2\|, $$
$$ \|f(t, u, v_1) - f(t, u, v_2)\| \le \kappa_1 \|v_1 - v_2\|, \tag{21} $$
where $\kappa_0, \kappa_1$ are Lipschitz constants.

Now the VIM for (20) reads
$$ y_{m+1}(t) = y_m(t) + \int_0^t \lambda(t,\xi)\,[\,y_m''(\xi) + \omega^2 y_m(\xi) + \tilde f(\xi, y_m(\xi), \varphi(\alpha(\xi)))\,]\,d\xi, \quad 0 < t < t_1; \tag{22} $$
$$ y_{m+1}(t) = y_m(t) + \int_0^{t_1} \lambda(t,\xi)\,[\,y_m''(\xi) + \omega^2 y_m(\xi) + \tilde f(\xi, y_m(\xi), \varphi(\alpha(\xi)))\,]\,d\xi + \int_{t_1}^{t} \lambda(t,\xi)\,[\,y_m''(\xi) + \omega^2 y_m(\xi) + \tilde f(\xi, y_m(\xi), y_m(\alpha(\xi)))\,]\,d\xi, \quad t > t_1, \tag{23} $$
where $y_m(t) = \varphi(t)$ for $t \in [-\tau, 0]$, and $\tilde f$ denotes the restricted variation, that is, $\delta \tilde f = 0$. Thus, we have
$$ \delta y_{m+1}(t) = \delta y_m(t) + \delta \int_0^t \lambda(t,\xi)\,[\,y_m''(\xi) + \omega^2 y_m(\xi) + \tilde f(\xi, y_m(\xi), \varphi(\alpha(\xi)))\,]\,d\xi = \delta y_m(t) + \int_0^t \lambda(t,\xi)\,\delta\,[\,y_m''(\xi) + \omega^2 y_m(\xi)\,]\,d\xi, \quad 0 < t < t_1. \tag{24} $$
Using integration by parts in (23), we have
$$ \delta y_{m+1}(t) = \delta y_m(t) + \delta \int_0^{t_1} \lambda(t,\xi)\,[\,y_m''(\xi) + \omega^2 y_m(\xi) + \tilde f(\xi, y_m(\xi), \varphi(\alpha(\xi)))\,]\,d\xi + \delta \int_{t_1}^{t} \lambda(t,\xi)\,[\,y_m''(\xi) + \omega^2 y_m(\xi) + \tilde f(\xi, y_m(\xi), y_m(\alpha(\xi)))\,]\,d\xi = \delta y_m(t) + \int_0^t \lambda(t,\xi)\,\delta\,[\,y_m''(\xi) + \omega^2 y_m(\xi)\,]\,d\xi = \delta y_m(t) - \frac{\partial \lambda(t,\xi)}{\partial \xi}\,\delta y_m(\xi)\Big|_{\xi=t} + \lambda(t,\xi)\,\delta y_m'(\xi)\big|_{\xi=t} + \int_0^t \Big[\,\omega^2 \lambda(t,\xi) + \frac{\partial^2 \lambda(t,\xi)}{\partial \xi^2}\,\Big]\,\delta y_m(\xi)\,d\xi. \tag{25} $$
From the above formula, the stationary conditions are obtained as
$$ \omega^2 \lambda(t,\xi) + \frac{\partial^2 \lambda(t,\xi)}{\partial \xi^2} = 0, \qquad 1 - \frac{\partial \lambda(t,\xi)}{\partial \xi}\Big|_{\xi=t} = 0, \qquad \lambda(t,\xi)\big|_{\xi=t} = 0. \tag{26} $$
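The multiplier determined by (26) (stated in (27) below) can also be recovered symbolically. The following minimal SymPy sketch (our own illustration; the variable names are hypothetical) solves the stationary conditions (26) as a two-point problem in $\xi$ at the point $\xi = t$.

```python
import sympy as sp

xi, t, omega = sp.symbols('xi t omega', positive=True)
C1, C2 = sp.symbols('C1 C2')

# General solution of lambda_{xi xi} + omega**2 * lambda = 0.
lam = C1*sp.sin(omega*xi) + C2*sp.cos(omega*xi)

# Impose lambda|_{xi=t} = 0 and d(lambda)/d(xi)|_{xi=t} = 1 from (26).
cs = sp.solve([lam.subs(xi, t),
               sp.diff(lam, xi).subs(xi, t) - 1],
              [C1, C2])

# The result is equivalent to sin(omega*(xi - t))/omega, i.e., formula (27) below.
print(sp.simplify(lam.subs(cs)))
```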
Moreover, the general Lagrange multiplier
$$ \lambda(t,\xi) = \frac{1}{\omega}\,\sin \omega(\xi - t) \tag{27} $$
can be readily identified from (26). Thus, the variational iteration formula can be written as
$$ y_{m+1}(t) = y_m(t) + \int_0^t \frac{1}{\omega}\,\sin \omega(\xi - t)\,[\,y_m''(\xi) + \omega^2 y_m(\xi) + f(\xi, y_m(\xi), \varphi(\alpha(\xi)))\,]\,d\xi, \quad 0 < t < t_1; $$
$$ y_{m+1}(t) = y_m(t) + \int_0^{t_1} \frac{1}{\omega}\,\sin \omega(\xi - t)\,[\,y_m''(\xi) + \omega^2 y_m(\xi) + f(\xi, y_m(\xi), \varphi(\alpha(\xi)))\,]\,d\xi + \int_{t_1}^{t} \frac{1}{\omega}\,\sin \omega(\xi - t)\,[\,y_m''(\xi) + \omega^2 y_m(\xi) + f(\xi, y_m(\xi), y_m(\alpha(\xi)))\,]\,d\xi, \quad t > t_1. \tag{28} $$

Theorem 2. Suppose that the initial value problem (20) satisfies the condition (21), and $y(t), y_i(t) \in C^2[0, T]$, $i = 1, 2, \ldots$. Then the sequence $\{y_m(t)\}_{m=1}^{\infty}$ defined by (28) with the initial approximation $y_0(t)$ converges to the solution of (20).

Proof. The proof is similar to that of Theorem 1.
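As an illustration of how one pass of (28) is carried out in practice, the following minimal SymPy sketch (our own helper; it assumes that $f$, $\varphi$, and $\alpha$ are supplied by the caller as Python callables building SymPy expressions) implements the branch $0 < t < t_1$ of (28).

```python
import sympy as sp

t, xi = sp.symbols('t xi')

def vim_step_second_kind(y_m, omega, f, phi, alpha):
    """One pass of (28) on 0 < t < t_1:
    y_{m+1}(t) = y_m(t) + int_0^t sin(omega*(xi-t))/omega *
                 [y_m''(xi) + omega**2*y_m(xi) + f(xi, y_m(xi), phi(alpha(xi)))] dxi."""
    bracket = (sp.diff(y_m, t, 2).subs(t, xi) + omega**2 * y_m.subs(t, xi)
               + f(xi, y_m.subs(t, xi), phi(alpha(xi))))
    lam = sp.sin(omega * (xi - t)) / omega
    return sp.simplify(y_m + sp.integrate(lam * bracket, (xi, 0, t)))

# Sanity check with no delay term (f = 0, i.e., the plain oscillator y'' = -omega**2*y):
# starting from y_0(t) = c0 + c1*t, a single pass already returns the exact solution
# c0*cos(omega*t) + (c1/omega)*sin(omega*t).
c0, c1, omega = sp.symbols('c0 c1 omega', positive=True)
print(vim_step_second_kind(c0 + c1*t, omega, lambda s, u, v: 0, lambda s: 0, lambda s: s))
```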
2.3. The Third Kind of Second-Order Delay Differential Equations. In order to improve the iteration speed, we modify the above iterative formulas and reconstruct the Lagrange multiplier. Consider the initial value problems of second-order delay differential equations
$$ y''(t) + a(t)\,y'(t) + b(t)\,y(t) + N(t, y(t), y(\alpha(t))) = 0, \quad t \in [0, T], $$
$$ y'(t) = \varphi'(t), \quad t \in [-\tau, 0], $$
$$ y(t) = \varphi(t), \quad t \in [-\tau, 0], \tag{29} $$
where $\varphi(t)$ is a differentiable function, $\alpha(t) \in C^1[0, T]$ is a strictly monotone increasing function and satisfies $-\tau \le \alpha(t) \le t$ and $\alpha(0) = -\tau$, there exists $t_1 \in [0, T]$ such that $\alpha(t_1) = 0$, $a(t), b(t)$ are bounded functions, and $N : D = [0, T] \times R \times R \to R$ is a given continuous mapping and satisfies the Lipschitz condition
$$ \|N(t, u_1, v) - N(t, u_2, v)\| \le \gamma_0 \|u_1 - u_2\|, $$
$$ \|N(t, u, v_1) - N(t, u, v_2)\| \le \gamma_1 \|v_1 - v_2\|, \tag{30} $$
where $\gamma_0, \gamma_1$ are Lipschitz constants.

Now the VIM for (29) reads
$$ y_{m+1}(t) = y_m(t) + \int_0^t \lambda(t,\xi)\,[\,y_m''(\xi) + a(\xi)\,y_m'(\xi) + b(\xi)\,y_m(\xi) + \tilde N(\xi, y_m(\xi), \varphi(\alpha(\xi)))\,]\,d\xi, \quad 0 < t < t_1; \tag{31} $$
$$ y_{m+1}(t) = y_m(t) + \int_0^{t_1} \lambda(t,\xi)\,[\,y_m''(\xi) + a(\xi)\,y_m'(\xi) + b(\xi)\,y_m(\xi) + \tilde N(\xi, y_m(\xi), \varphi(\alpha(\xi)))\,]\,d\xi + \int_{t_1}^{t} \lambda(t,\xi)\,[\,y_m''(\xi) + a(\xi)\,y_m'(\xi) + b(\xi)\,y_m(\xi) + \tilde N(\xi, y_m(\xi), y_m(\alpha(\xi)))\,]\,d\xi, \quad t > t_1, \tag{32} $$
where $y_m(t) = \varphi(t)$ for $t \in [-\tau, 0]$, and $\tilde N$ denotes the restricted variation, that is, $\delta \tilde N = 0$. Thus, we have
$$ \delta y_{m+1}(t) = \delta y_m(t) + \delta \int_0^t \lambda(t,\xi)\,[\,y_m''(\xi) + a(\xi)\,y_m'(\xi) + b(\xi)\,y_m(\xi) + \tilde N(\xi, y_m(\xi), \varphi(\alpha(\xi)))\,]\,d\xi = \delta y_m(t) + \int_0^t \lambda(t,\xi)\,\delta\,[\,y_m''(\xi) + a(\xi)\,y_m'(\xi) + b(\xi)\,y_m(\xi)\,]\,d\xi, \quad 0 < t < t_1. \tag{33} $$
Using integration by parts in (32), we have
$$ \delta y_{m+1}(t) = \delta y_m(t) + \delta \int_0^{t_1} \lambda(t,\xi)\,[\,y_m''(\xi) + a(\xi)\,y_m'(\xi) + b(\xi)\,y_m(\xi) + \tilde N(\xi, y_m(\xi), \varphi(\alpha(\xi)))\,]\,d\xi + \delta \int_{t_1}^{t} \lambda(t,\xi)\,[\,y_m''(\xi) + a(\xi)\,y_m'(\xi) + b(\xi)\,y_m(\xi) + \tilde N(\xi, y_m(\xi), y_m(\alpha(\xi)))\,]\,d\xi = \delta y_m(t) + \int_0^t \lambda(t,\xi)\,d(\delta y_m'(\xi)) + \int_0^t \lambda(t,\xi)\,a(\xi)\,d(\delta y_m(\xi)) + \int_0^t \lambda(t,\xi)\,b(\xi)\,\delta y_m(\xi)\,d\xi = \Big(1 - \frac{\partial \lambda(t,\xi)}{\partial \xi} + a(\xi)\,\lambda(t,\xi)\Big)\,\delta y_m(\xi)\Big|_{\xi=t} + \lambda(t,\xi)\,\delta y_m'(\xi)\big|_{\xi=t} + \int_0^t \Big[\,\frac{\partial^2 \lambda(t,\xi)}{\partial \xi^2} - \frac{\partial (a(\xi)\,\lambda(t,\xi))}{\partial \xi} + b(\xi)\,\lambda(t,\xi)\,\Big]\,\delta y_m(\xi)\,d\xi. \tag{34} $$
From the above formula, the stationary conditions are obtained as
$$ \frac{\partial^2 \lambda(t,\xi)}{\partial \xi^2} - \frac{\partial (a(\xi)\,\lambda(t,\xi))}{\partial \xi} + b(\xi)\,\lambda(t,\xi) = 0, \tag{35} $$
$$ 1 - \frac{\partial \lambda(t,\xi)}{\partial \xi}\Big|_{\xi=t} + a(\xi)\,\lambda(t,\xi)\Big|_{\xi=t} = 0, \tag{36} $$
$$ \lambda(t,\xi)\big|_{\xi=t} = 0. \tag{37} $$
We suppose that $\lambda_1(t,\xi), \lambda_2(t,\xi)$ are fundamental solutions of (35); the corresponding general solution of (35) is
$$ \lambda(t,\xi) = c_1 \lambda_1(t,\xi) + c_2 \lambda_2(t,\xi). \tag{38} $$
Using the conditions (36) and (37), we have
$$ c_1 \lambda_1(t,t) + c_2 \lambda_2(t,t) = 0, \qquad c_1 \lambda_1'(t,t) + c_2 \lambda_2'(t,t) = 1. \tag{39} $$
Note that $W(t,t) = \begin{vmatrix} \lambda_1(t,t) & \lambda_2(t,t) \\ \lambda_1'(t,t) & \lambda_2'(t,t) \end{vmatrix}$ is the Wronskian of $\lambda_1(t,\xi), \lambda_2(t,\xi)$. We have
$$ \lambda(t,\xi) = \frac{-\lambda_2(t,t)\,\lambda_1(t,\xi) + \lambda_1(t,t)\,\lambda_2(t,\xi)}{W(t,t)}. $$
Using the Liouville formula, we have $W(t,t) = W(t,0)\,e^{\int_0^t a(\xi)\,d\xi}$. So $\lambda(t,\xi)$ can be expressed as
$$ \lambda(t,\xi) = \frac{-\lambda_2(t,t)\,\lambda_1(t,\xi) + \lambda_1(t,t)\,\lambda_2(t,\xi)}{\lambda_1(t,0)\,\lambda_2'(t,0) - \lambda_1'(t,0)\,\lambda_2(t,0)}\,e^{-\int_0^t a(\xi)\,d\xi}. \tag{40} $$
Note that
$$ u_1(\xi) = \frac{\lambda_1(t,\xi)}{\sqrt{W(t,0)}}, \qquad u_2(\xi) = \frac{\lambda_2(t,\xi)}{\sqrt{W(t,0)}}. \tag{41} $$
Equation (40) can then be expressed as
$$ \lambda(t,\xi) = -e^{-\int_0^t a(\xi)\,d\xi}\,[\,u_1(\xi)\,u_2(t) - u_2(\xi)\,u_1(t)\,]. \tag{42} $$
Substituting (42) into (31) and (32), we obtain
$$ y_{m+1}(t) = y_m(t) - \int_0^t e^{-\int_0^t a(s)\,ds}\,[\,u_1(\xi)\,u_2(t) - u_2(\xi)\,u_1(t)\,]\,[\,y_m''(\xi) + a(\xi)\,y_m'(\xi) + b(\xi)\,y_m(\xi) + N(\xi, y_m(\xi), \varphi(\alpha(\xi)))\,]\,d\xi, \quad 0 < t < t_1; \tag{43} $$
$$ y_{m+1}(t) = y_m(t) - \int_0^{t_1} e^{-\int_0^t a(s)\,ds}\,[\,u_1(\xi)\,u_2(t) - u_2(\xi)\,u_1(t)\,]\,[\,y_m''(\xi) + a(\xi)\,y_m'(\xi) + b(\xi)\,y_m(\xi) + N(\xi, y_m(\xi), \varphi(\alpha(\xi)))\,]\,d\xi - \int_{t_1}^{t} e^{-\int_0^t a(s)\,ds}\,[\,u_1(\xi)\,u_2(t) - u_2(\xi)\,u_1(t)\,]\,[\,y_m''(\xi) + a(\xi)\,y_m'(\xi) + b(\xi)\,y_m(\xi) + N(\xi, y_m(\xi), y_m(\alpha(\xi)))\,]\,d\xi, \quad t > t_1. \tag{44} $$

Theorem 3. Suppose that the initial value problem (29) satisfies the condition (30), and $y(t), y_i(t) \in C^2[0, T]$, $i = 1, 2, \ldots$. Then the sequence $\{y_m(t)\}_{m=1}^{\infty}$ defined by (43) and (44) with the initial approximation $y_0(t)$ converges to the solution of (29).

Proof. The proof is similar to that of Theorem 1.
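Before turning to the examples, the following minimal SymPy check (our own illustration, not code from the paper) verifies that, for the concrete coefficients $a(\xi) = 2/\xi$ and $b(\xi) = 0$ used in Example 6 below, the multiplier $\lambda(t,\xi) = -\xi + \xi^2/t$ quoted there satisfies the stationary conditions (35)-(37).

```python
import sympy as sp

xi, t = sp.symbols('xi t', positive=True)
a, b = 2/xi, 0                    # coefficients a(xi), b(xi) of Example 6 (illustration only)
lam = -xi + xi**2/t               # candidate Lagrange multiplier quoted in Example 6

# (35): lambda_{xi xi} - (a*lambda)_xi + b*lambda = 0
print(sp.simplify(sp.diff(lam, xi, 2) - sp.diff(a*lam, xi) + b*lam))      # 0
# (37): lambda(t, t) = 0; (36) with lambda(t, t) = 0 reduces to d(lambda)/d(xi)|_{xi=t} = 1
print(lam.subs(xi, t), sp.simplify(sp.diff(lam, xi).subs(xi, t)))         # 0 1
```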
3. Illustrative Examples

In this section, some illustrative examples are given to show the efficiency of the VIM for solving second-order delay differential equations.

Example 4. Consider the initial value problem of a second-order differential equation with pantograph delay
$$ y''(t) = -y\Big(\frac{t}{2}\Big) - y^2(t) + \sin^4(t) + \sin^2\Big(\frac{t}{2}\Big) + 8, \quad t > 0, $$
$$ \varphi'(0) = 0, \qquad \varphi(0) = 2, \tag{45} $$
with the exact solution $y(t) = (5 - \cos 2t)/2$. Using the VIM given in formulas (9) and (10), we construct the correction functional
$$ y_{m+1}(t) = y_m(t) + \int_0^t (\xi - t)\Big(y_m''(\xi) + y_m\Big(\frac{\xi}{2}\Big) + y_m^2(\xi) - \sin^4(\xi) - \sin^2\Big(\frac{\xi}{2}\Big) - 8\Big)\,d\xi, \quad m = 0, 1, 2, \ldots. \tag{46} $$
We take $y_0(t) = 2$ as the initial approximation and obtain
$$ y_1(t) = \frac{5}{4} + \frac{23}{16}\,t^2 + \frac{1}{2}\cos t + \frac{5}{16}\cos^2 t - \frac{1}{16}\cos^4 t, $$
$$ \vdots \tag{47} $$
the further iterates $y_2(t), y_3(t), \ldots$ being lengthy combinations of polynomial and trigonometric terms. The exact and approximate solutions are plotted in Figure 1, which shows that the method gives a very good approximation to the exact solution.

Figure 1: Results for Example 4.
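The first iterate above can be reproduced mechanically; the following SymPy sketch (our own illustration, assuming only the branch of (46) shown above is needed) carries out a single correction step from $y_0(t) = 2$ and spot-checks the result against the closed form of $y_1(t)$.

```python
import sympy as sp

t, xi = sp.symbols('t xi')

y0 = sp.Integer(2)                               # initial approximation of Example 4
bracket = (sp.diff(y0, t, 2).subs(t, xi) + y0.subs(t, xi/2) + y0.subs(t, xi)**2
           - sp.sin(xi)**4 - sp.sin(xi/2)**2 - 8)
y1 = sp.simplify(y0 + sp.integrate((xi - t) * bracket, (xi, 0, t)))

target = (sp.Rational(5, 4) + sp.Rational(23, 16)*t**2 + sp.cos(t)/2
          + sp.Rational(5, 16)*sp.cos(t)**2 - sp.cos(t)**4/16)
print(sp.N((y1 - target).subs(t, sp.Rational(3, 10))))   # ~0: agrees with y_1(t) above
```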
Example 5. Consider the second-order delay differential equation
$$ y''(t) = -16\,y(t) + y^2\Big(\frac{t}{4}\Big) - \sin^2 t, \quad t > 0, $$
$$ \varphi'(0) = 4, \qquad \varphi(0) = 0. \tag{48} $$
Using the VIM given in formula (28), we construct the correction functional
$$ y_{m+1}(t) = y_m(t) + \int_0^t \frac{1}{4}\sin(4\xi - 4t)\Big(y_m''(\xi) + 16\,y_m(\xi) - y_m^2\Big(\frac{\xi}{4}\Big) + \sin^2 \xi\Big)\,d\xi, \quad m = 0, 1, 2, \ldots. \tag{49} $$
We take $y_0(t) = 4t$ as the initial approximation and obtain
$$ y_1(t) = -0.0390625 + 0.0625\,t^2 + \sin(4t) + 0.04166666667\cos(2t) - 0.002604166667\cos(4t), $$
$$ y_2(t) = 0.00613912861 - 3.998216869\,t + 0.1805413564\,t^2 - 0.4340277778\,t^3 + o(t^4) + (0.006666666667 - 0.5208333333\,t + o(t^2))\sin t + (-0.1946373457 - 0.5555555556\,t + o(t^2))\cos t - 0.1085069444\sin(2t) - 2\cos(4t), $$
$$ \vdots \tag{50} $$
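Again, the first iterate can be checked symbolically; the sketch below (our own illustration) applies one step of (49) to $y_0(t) = 4t$ and compares the result with the decimal expression for $y_1(t)$ reported above (note that $-0.0390625 = -5/128$, $0.0625 = 1/16$, $0.04166666667 \approx 1/24$, and $0.002604166667 \approx 1/384$).

```python
import sympy as sp

t, xi = sp.symbols('t xi')

y0 = 4*t                                          # initial approximation of Example 5
bracket = (sp.diff(y0, t, 2).subs(t, xi) + 16*y0.subs(t, xi)
           - y0.subs(t, xi/4)**2 + sp.sin(xi)**2)
y1 = sp.expand(y0 + sp.integrate(sp.sin(4*xi - 4*t)/4 * bracket, (xi, 0, t)))

target = (-sp.Rational(5, 128) + t**2/16 + sp.sin(4*t)
          + sp.cos(2*t)/24 - sp.cos(4*t)/384)
print(sp.N((y1 - target).subs(t, sp.Rational(2, 5))))   # ~0: reproduces y_1(t) above
```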
Example 6. Consider the second-order delay differential equation
$$ y''(t) = -\frac{2}{t}\,y'(t) + 16\,y^2\Big(\frac{t}{2}\Big) + 6 - t^4, \quad t > 0, $$
$$ \varphi'(0) = 0, \qquad \varphi(0) = 0, \tag{51} $$
with the exact solution $y(t) = t^2$. From (35)-(37), we can solve that $\lambda(t,\xi) = -\xi + \xi^2/t$. Using the VIM given in formulas (43) and (44), we construct the correction functional
$$ y_{m+1}(t) = y_m(t) + \int_0^t \Big(-\xi + \frac{\xi^2}{t}\Big)\Big(y_m''(\xi) + \frac{2}{\xi}\,y_m'(\xi) - 16\,y_m^2\Big(\frac{\xi}{2}\Big) - 6 + \xi^4\Big)\,d\xi, \quad m = 0, 1, 2, \ldots. \tag{52} $$
We take $y_0(t) = 2t$ as the initial approximation and obtain
$$ y_1(t) = -\frac{1}{42}\,t^6 + t^2, $$
$$ y_2(t) = t^2 + \frac{4}{21}\,t^6 - \frac{1}{1760}\,t^{10} + \frac{1}{1935360}\,t^{14}, $$
$$ \vdots \tag{53} $$
We use the iterative formulas (9) and (43) for Example 6, respectively. When the iteration number is $n = 2$, the corresponding relative errors are shown in Table 1.

Table 1: The errors of the iteration solutions.

                            t = 0.01       t = 0.05      t = 0.1       t = 0.15
    Iterative formula (43)  1.9048E-09     1.1905E-06    1.9048E-05    9.6428E-05
    Iterative formula (9)   1.89278E-05    1.0972E-03    9.2763E-03    3.8062E-02

Table 1 shows that the iteration speed of the iterative formula (43) for Example 6 is much faster than that of the iterative formula (9). This demonstrates that it is important to choose a reasonable Lagrange multiplier.

4. Conclusion

In this paper, we apply the VIM to obtain the analytical or approximate analytical solutions of second-order delay differential equations. Some illustrative examples show that this method gives a very good approximation to the exact solution. The VIM is a promising method for second-order delay differential equations.

Acknowledgments

The authors would like to thank the projects from the NSF of China (11226322, 11271311), the Program for Changjiang Scholars and the Innovative Research Team in the University of China (IRT1179), the Aid Program for Science and Technology Innovative Research Team in Higher Educational Institutions of Hunan Province of China, and the Fund Project of Hunan Province Education Office (11C1120).

References
[1] Y. Xu, J. J. Zhao, and M. Z. Liu, “TH-stability of πœƒ-method
for a second-order delay differential equation,” Mathematica
Numerica Sinica, vol. 26, no. 2, pp. 189–192, 2004.
[2] C. M. Huang and W. H. Li, “Delay-dependent stability analysis
of the trapezium rule for a class of second-order delay differential equations,” Mathematica Numerica Sinica, vol. 29, no. 2, pp.
155–162, 2007.
[3] X. H. Ding, J. Y. Zou, and M. Z. Liu, “𝑃-stability of continuous
Runge-Kutta-Nyström method for a class of delayed second
order differential equations,” Numerical Mathematics, vol. 27,
no. 2, pp. 123–132, 2005.
[4] J.-H. He, “Variational iteration method: a kind of non-linear
analytical technique: some examples,” International Journal of
Non-Linear Mechanics, vol. 34, no. 4, pp. 699–708, 1999.
[5] J.-H. He, “Some asymptotic methods for strongly nonlinear
equations,” International Journal of Modern Physics B, vol. 20,
no. 10, pp. 1141–1199, 2006.
[6] J.-H. He, “Variational iteration method for autonomous ordinary differential systems,” Applied Mathematics and Computation, vol. 114, no. 2-3, pp. 115–123, 2000.
[7] Z.-H. Yu, “Variational iteration method for solving the multipantograph delay equation,” Physics Letters A, vol. 372, no. 43,
pp. 6475–6479, 2008.
[8] L. Xu, “Variational iteration method for solving integral equations,” Computers & Mathematics with Applications, vol. 54, no.
7-8, pp. 1071–1078, 2007.
[9] S.-P. Yang and A.-G. Xiao, “Convergence of the variational
iteration method for solving multi-delay differential equations,”
Computers & Mathematics with Applications, vol. 61, no. 8, pp.
2148–2151, 2011.
[10] S. Yang, A. Xiao, and H. Su, “Convergence of the variational
iteration method for solving multi-order fractional differential
equations,” Computers & Mathematics with Applications, vol. 60,
no. 10, pp. 2871–2879, 2010.
[11] Y. Zhao and A. Xiao, “Variational iteration method for singular
perturbation initial value problems,” Computer Physics Communications, vol. 181, no. 5, pp. 947–956, 2010.
[12] H. Liu, A. Xiao, and Y. Zhao, “Variational iteration method for
delay differential-algebraic equations,” Mathematical & Computational Applications, vol. 15, no. 5, pp. 834–839, 2010.
[13] M. Rafei, D. D. Ganji, H. Daniali, and H. Pashaei, “The variational iteration method for nonlinear oscillators with discontinuities,” Journal of Sound and Vibration, vol. 305, no. 4-5, pp.
614–620, 2007.
[14] V. Marinca, N. HerisΜ§anu, and C. Bota, “Application of the variational iteration method to some nonlinear one-dimensional
oscillations,” Meccanica, vol. 43, no. 1, pp. 75–79, 2008.
[15] M. Tatari and M. Dehghan, “On the convergence of He’s
variational iteration method,” Journal of Computational and
Applied Mathematics, vol. 207, no. 1, pp. 121–128, 2007.
[16] J.-H. He and X.-H. Wu, “Variational iteration method: new
development and applications,” Computers & Mathematics with
Applications, vol. 54, no. 7-8, pp. 881–894, 2007.
[17] J.-H. He, “Variational iteration method—some recent results
and new interpretations,” Journal of Computational and Applied
Mathematics, vol. 207, no. 1, pp. 3–17, 2007.
[18] J.-H. He, “A short remark on fractional variational iteration
method,” Physics Letters A, vol. 375, no. 38, pp. 3362–3364, 2011.
[19] J.-H. He, “Asymptotic methods for solitary solutions and compactons,” Abstract and Applied Analysis, vol. 2012, Article ID
916793, 130 pages, 2012.