Hindawi Publishing Corporation
International Journal of Stochastic Analysis
Volume 2013, Article ID 537023, 17 pages
http://dx.doi.org/10.1155/2013/537023
Research Article
Asymptotic Behavior of Densities for Stochastic Functional
Differential Equations
Akihiro Kitagawa¹ and Atsushi Takeuchi²
¹ Aikou Educational Institute, Kinuyama 5-1610-1, Matsuyama, Ehime 791-8501, Japan
² Department of Mathematics, Osaka City University, Sugimoto 3-3-138, Sumiyoshi-ku, Osaka 558-8585, Japan
Correspondence should be addressed to Atsushi Takeuchi; takeuchi@sci.osaka-cu.ac.jp
Received 30 September 2012; Accepted 10 December 2012
Academic Editor: S. Mohammed
Copyright © 2013 A. Kitagawa and A. Takeuchi. This is an open access article distributed under the Creative Commons
Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is
properly cited.
Consider stochastic functional differential equations depending on the whole past history over a finite time interval; such equations determine non-Markovian processes. Under a uniformly elliptic condition on the coefficients of the diffusion terms, the solution admits a smooth density with respect to the Lebesgue measure. In the present paper, we study the large deviation principle for the family of solution processes and the asymptotic behavior of the density. The Malliavin calculus plays a crucial role in our argument.
1. Introduction
Stochastic functional differential equations, or stochastic delay differential equations, determine non-Markovian processes, because the current state of the solution depends on the past history of the process. Such equations were initiated by Itô and Nisio [1] in their pioneering work about 50 years ago. As stated in [2], these equations are difficult to study, because the standard methods of analysis, partial differential equations, and potential theory cannot be applied directly. On the other hand, models governed by stochastic functional differential equations arise naturally in finance, physics, biology, and other fields, because processes that incorporate their past history can reflect real phenomena more accurately.
The Malliavin calculus is well known as a powerful tool for studying properties of density functions by a probabilistic approach. There are many works on densities of diffusion processes from this viewpoint (cf. [3]). The calculus is also applicable to solutions of stochastic functional differential equations, regarded as examples of Wiener functionals. Kusuoka and Stroock [4] applied the Malliavin calculus to such solutions and obtained the existence of a smooth density for the solution with respect to the Lebesgue measure. On the other hand, it is well known that the Malliavin calculus is fruitful for studying the asymptotic behavior of density functions in connection with large deviations theory (cf. Léandre [5–8] and Nualart [9]); in fact, the Varadhan-type estimate of the density for diffusion processes can be obtained from this viewpoint. Ferrante et al. [10] discussed such a problem for stochastic delay differential equations in which the drift term depends on the whole past history over a finite time interval, while the diffusion terms depend on the state only at the endpoints of that interval. Mohammed and Zhang [11] studied the large deviations for the solution process in a similar situation. However, the special form of the diffusion terms plays a crucial role throughout the arguments of [10, 11].
In the present paper, we study the large deviation principle for the solution processes of stochastic functional differential equations. Our equations are considerably more general than those of [10, 11]: they are time inhomogeneous, and not only the drift term but also the diffusion terms depend on the whole past history of the process over a finite interval. Furthermore, as a typical application of large deviation theory and the Malliavin calculus, we study the asymptotic behavior, the so-called Varadhan-type estimate, of the density function for the solution process, in close analogy with the case of diffusion processes. The effect of the time delay plays a crucial role in the behavior of the density, and the result obtained can be regarded as a natural extension of the estimate for diffusion processes; these are the main points of interest in the present paper.
The paper is organized as follows. In Section 2, we prepare notation and introduce our stochastic functional differential equations. Section 3 is devoted to a brief summary of the Malliavin calculus and its application to our equations. We establish estimates that guarantee the smoothness of the solution process and its nondegeneracy in the Malliavin sense; the existence of a smooth density is also discussed there, together with the negative-order moments of the Malliavin covariance matrix, which are important for the estimate of the density function. Sections 4 and 5 contain our main results. In Section 4, we focus on the large deviation principle for the solution processes. As an application of the result of Section 4, we then study the asymptotic behavior of the density for the solution process. Moreover, we derive the short-time asymptotics of the density function, which can be interpreted as a generalization of the Varadhan-type estimate for diffusion processes (cf. [5–9]).
2. Preliminaries

Let $r$ and $T$ be positive constants, and let $W = \{W(t) = (W^1(t),\dots,W^m(t));\ t\in[0,T]\}$ be an $m$-dimensional Brownian motion. Let $A_i$ ($i=0,1,\dots,m$) be $\mathbb{R}^d$-valued functions on $[0,T]\times C([-r,0];\mathbb{R}^d)$ such that, for each $t\in[0,T]$, the mapping $A_i(t,\cdot): C([-r,0];\mathbb{R}^d)\ni f\mapsto A_i(t,f)\in\mathbb{R}^d$ is smooth in the Fréchet sense and all Fréchet derivatives of order at least 1 are bounded. Under these conditions, the functions $A_i$ ($i=0,1,\dots,m$) satisfy the linear growth condition and the Lipschitz condition in the functional sense:

$\sup_{t\in[0,T]} \sum_{i=0}^m |A_i(t,f)| \le C_{1,T}(1+\|f\|_\infty)$,
$\sup_{t\in[0,T]} \sum_{i=0}^m |A_i(t,f)-A_i(t,g)| \le C_{2,T}\|f-g\|_\infty$,  (1)

for $f,g\in C([-r,0];\mathbb{R}^d)$, where $\|f\|_\infty = \sup_{t\in[-r,0]}|f(t)|$. Write $A = (A_1,\dots,A_m)$.

Let $0<\varepsilon\le 1$ be sufficiently small. For a deterministic path $\eta\in C([-r,0];\mathbb{R}^d)$, we consider the $\mathbb{R}^d$-valued process $X^\varepsilon = \{X^\varepsilon(t);\ t\in[-r,T]\}$ given by the stochastic functional differential equation

$X^\varepsilon(t) = \eta(t)$  $(t\in[-r,0])$,
$dX^\varepsilon(t) = A_0(t,X_t^\varepsilon)\,dt + \varepsilon A(t,X_t^\varepsilon)\,dW(t)$  $(t\in(0,T])$,  (2)

where $X_t^\varepsilon = \{X^\varepsilon(t+u);\ u\in[-r,0]\}$ is the segment. Since the current state of the solution depends on its past history, the process $X^\varepsilon$ is clearly non-Markovian. Since the coefficients of (2) satisfy the Lipschitz and linear growth conditions in the functional sense, there exists a unique solution to (2), obtained via the successive approximations $X^{\varepsilon,(n)} = \{X^{\varepsilon,(n)}(t);\ t\in[-r,T]\}$ $(n\in\mathbb{Z}_+)$:

$X^{\varepsilon,(0)}(t) = \eta(t)$  $(t\in[-r,0])$,  $X^{\varepsilon,(0)}(t) = \eta(0)$  $(t\in(0,T])$,
$X^{\varepsilon,(n)}(t) = \eta(t)$  $(t\in[-r,0])$,
$dX^{\varepsilon,(n)}(t) = A_0(t,X_t^{\varepsilon,(n-1)})\,dt + \varepsilon A(t,X_t^{\varepsilon,(n-1)})\,dW(t)$  $(t\in(0,T])$,  (3)

for $n\in\mathbb{N}$ (cf. Itô and Nisio [1], Mohammed [2, 12]).

Proposition 1. For any $p>1$, it holds that

$\sup_{0<\varepsilon\le 1} \mathbb{E}\big[\sup_{t\in[-r,T]} |X^\varepsilon(t)|^p\big] \le C_{3,p,T,\eta}$.  (4)

Proof. Let $p>2$ and $t\in[0,T]$. The Hölder inequality and the Burkholder inequality yield

$\mathbb{E}\big[\sup_{\tau\in[-r,t]}|X^\varepsilon(\tau)|^p\big]$
$\le C_{4,p}\|\eta\|_\infty^p + C_{5,p}\,\mathbb{E}\big[\sup_{\tau\in[0,t]}\big|\int_0^\tau A_0(s,X_s^\varepsilon)\,ds\big|^p\big] + C_{5,p}\varepsilon^p\,\mathbb{E}\big[\sup_{\tau\in[0,t]}\big|\int_0^\tau A(s,X_s^\varepsilon)\,dW(s)\big|^p\big]$
$\le C_{4,p}\|\eta\|_\infty^p + C_{5,p}T^{p-1}\int_0^t \mathbb{E}[|A_0(s,X_s^\varepsilon)|^p]\,ds + C_{6,p}\varepsilon^p T^{p/2-1}\int_0^t \sum_{i=1}^m \mathbb{E}[|A_i(s,X_s^\varepsilon)|^p]\,ds$  (5)
$\le C_{7,p,T,\eta} + C_{8,p,T}\int_0^t \mathbb{E}\big[\sup_{\tau\in[-r,s]}|X^\varepsilon(\tau)|^p\big]\,ds$,

from the linear growth condition on the coefficients $A_i$ ($i=0,1,\dots,m$). Hence the Gronwall inequality yields the assertion for $p>2$.

As for $1<p\le 2$, the Jensen inequality gives

$\sup_{0<\varepsilon\le 1}\mathbb{E}\big[\sup_{t\in[-r,T]}|X^\varepsilon(t)|^p\big] \le \Big(\sup_{0<\varepsilon\le 1}\mathbb{E}\big[\sup_{t\in[-r,T]}|X^\varepsilon(t)|^{2p}\big]\Big)^{1/2}$,  (6)

which implies the assertion by the result above. The proof is complete.
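The successive-approximation scheme (3) is constructive, and the equation itself is straightforward to discretize. The following sketch (added for illustration; it is not part of the original argument) simulates a scalar toy instance of (2) by the Euler-Maruyama method, with the segment entering only through a point delay $X(t-r)$ and with linear coefficients; the function name and parameters are ours, not the paper's.

```python
import math
import random

def euler_sdde(eta, a, b, eps, r, T, dt, rng):
    """Euler-Maruyama scheme for the scalar delay equation
        dX(t) = a * X(t - r) dt + eps * b dW(t),   X(t) = eta(t) on [-r, 0],
    a toy instance of equation (2) where the segment X_t enters only
    through the point delay X(t - r)."""
    n_lag = int(round(r / dt))   # grid steps covering one delay length
    n = int(round(T / dt))
    # path[j] holds X(-r + j*dt): history first, then the forward solution
    path = [eta(-r + k * dt) for k in range(n_lag + 1)]
    for k in range(n):
        x_cur = path[n_lag + k]  # X(t_k)
        x_lag = path[k]          # X(t_k - r), exactly n_lag steps behind
        dw = rng.gauss(0.0, math.sqrt(dt))
        path.append(x_cur + a * x_lag * dt + eps * b * dw)
    return path

rng = random.Random(0)
path = euler_sdde(lambda t: 1.0, a=1.0, b=1.0, eps=0.0,
                  r=1.0, T=1.0, dt=0.01, rng=rng)
print(path[-1])  # eps = 0 and X(t-r) = 1 on [0,1] give X(1) = 1 + T = 2.0
```

With `eps = 0` the scheme reduces to the deterministic skeleton, so the output can be checked against the exact solution; for `eps > 0` one obtains sample paths of the small-noise family studied below.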
3. Applications of the Malliavin Calculus

We begin with a brief outline of the Malliavin calculus on the Wiener space $C_0([0,T];\mathbb{R}^m)$, the set of $\mathbb{R}^m$-valued continuous functions on $[0,T]$ starting from the origin. See Di Nunno et al. [13] and Nualart [9, 14] for details. Let $H$ be the Cameron-Martin subspace of $C_0([0,T];\mathbb{R}^m)$ with the inner product

$\langle g,h\rangle_H = \int_0^T \dot g(t)\cdot\dot h(t)\,dt$  $(g,h\in H)$.  (7)

Denote by $\mathcal{S}$ the set of $\mathbb{R}$-valued random variables $F$ of the form

$F(W) = f(W[h_1],\dots,W[h_n])$  (8)

for $W\in C_0([0,T];\mathbb{R}^m)$, where $h_1,\dots,h_n\in H$, $W[h] = \int_0^T h(s)\cdot dW(s)$ for $h\in H$, and $f\in C_p^\infty(\mathbb{R}^n;\mathbb{R})$. Here $C_p^\infty(\mathbb{R}^n;\mathbb{R})$ denotes the set of smooth functions on $\mathbb{R}^n$ all of whose derivatives have polynomial growth. For $k\in\mathbb{N}$, the $k$-times Malliavin-Shigekawa derivative $D^kF = \{D^k_{u_1,\dots,u_k}F;\ u_1,\dots,u_k\in[0,T]\}$ of $F\in\mathcal{S}$ is defined by

$D^k_{u_1,\dots,u_k}F(W) = \sum_{j=1}^n \partial_j f(W[h_1],\dots,W[h_n]) \int_0^{u_1} h_j(s)\,ds$  $(k=1)$,
$D^k_{u_1,\dots,u_k}F(W) = D_{u_1}\cdots D_{u_k}F(W)$  $(k\ge 2)$.  (9)

We set $D^0F = F$, which defines the operator $D^k$ for $k\in\mathbb{Z}_+$. For $p>1$ and $k\in\mathbb{Z}_+$, let $\mathbb{D}^{k,p}$ be the completion of $\mathcal{S}$ with respect to the norm

$\|F\|_{k,p} = (\mathbb{E}[|F|^p])^{1/p}$  $(k=0)$,
$\|F\|_{k,p} = (\mathbb{E}[|F|^p])^{1/p} + \sum_{j=1}^k \mathbb{E}\big[\|D^jF\|^p_{H^{\otimes j}}\big]^{1/p}$  $(k\in\mathbb{N})$.  (10)

Let $\mathbb{D}^{k,p}(\mathbb{R}^d)$ be the set of $\mathbb{R}^d$-valued random variables whose components belong to $\mathbb{D}^{k,p}$, and set $\mathbb{D}^\infty(\mathbb{R}^d) = \bigcap_{p>1}\bigcap_{k\in\mathbb{Z}_+}\mathbb{D}^{k,p}(\mathbb{R}^d)$. For $F\in\mathbb{D}^{1,2}(\mathbb{R}^d)$, the $\mathbb{R}^d\otimes\mathbb{R}^d$-valued random variable

$V_F = \langle DF,DF\rangle_H = \int_0^T \frac{d}{du}D_uF\cdot\Big(\frac{d}{du}D_uF\Big)^{\!*}\,du$  (11)

is well defined, and is called the Malliavin covariance matrix of $F$.

Before studying the application of the Malliavin calculus to the solution process $X^\varepsilon$ of (2), we prepare two basic and well-known facts.

Lemma 2 (cf. Kusuoka and Stroock [4], Lemma 2.1). Let $\Gamma$ be a real separable Hilbert space, and let $\alpha:[0,T]\times\Omega\to\mathbb{R}^m\otimes\Gamma$ be a progressively measurable process such that

$\mathbb{E}\big[\int_0^T \|\alpha(s)\|^p_{\mathbb{R}^m\otimes\Gamma}\,ds\big] < +\infty$  (12)

for all $p>1$. Then, for any $p>2$ and $\tau\in[0,T]$, it holds that

$\mathbb{E}\big[\sup_{t\in[0,\tau]}\big\|\int_0^t \alpha(s)\,dW(s)\big\|^p_\Gamma\big] \le C_{9,p,\tau}\int_0^\tau \sum_{i=1}^m \mathbb{E}\big[\|\alpha_i(s)\|^p_\Gamma\big]\,ds$.  (13)

Lemma 3 (cf. Nualart [9], Proposition 1.3.8). Let $\{\beta(t);\ t\in[0,T]\}$ be an $(\mathcal{F}_t)$-adapted, $\mathbb{R}^m\otimes\mathbb{R}^d$-valued process such that $\beta(t)\in\mathbb{D}^{1,2}(\mathbb{R}^m\otimes\mathbb{R}^d)$ for almost all $t\in[0,T]$, and such that

$\mathbb{E}\big[\int_0^T\!\!\int_0^T |D_u\beta(t)|^2\,du\,dt\big] < +\infty$.  (14)

Then, for each $t\in[0,T]$, $\int_0^t \beta(s)\,dW(s)\in\mathbb{D}^{1,2}(\mathbb{R}^d)$, and

$D_u\Big(\int_0^t \beta(s)\,dW(s)\Big) = \int_0^{u\wedge t}\beta(s)\,ds + \int_0^t D_u\beta(s)\,dW(s)$.  (15)

Now we return to the application of the Malliavin calculus to the solution process $X^\varepsilon$ of (2).

Proposition 4. Let $n\in\mathbb{Z}_+$ and $0<\varepsilon\le 1$. Then, for each $t\in[-r,T]$, the $\mathbb{R}^d$-valued random variable $X^{\varepsilon,(n)}(t)$ is in $\mathbb{D}^\infty(\mathbb{R}^d)$. Moreover, for each $k\in\mathbb{Z}_+$ and $p>1$, it holds that

$\mathbb{E}\big[\sup_{t\in[-r,T]}\|D^kX^{\varepsilon,(n)}(t)\|^p_{H^{\otimes k}\otimes\mathbb{R}^d}\big] \le C_{10,k,p,T,\eta}$,
$\mathbb{E}\big[\sup_{t\in[-r,T]}\|D^kX^{\varepsilon,(n)}(t) - D^kX^{\varepsilon,(n-1)}(t)\|^p_{H^{\otimes k}\otimes\mathbb{R}^d}\big] \le \frac{C_{11,k,p,T,\eta}}{2^{n/2}}$.  (16)

Proof. We first consider the case $p>2$, by induction on $k\in\mathbb{Z}_+$. For $k=0$, it is routine to check the assertion via the Hölder inequality and the Burkholder inequality, using the Lipschitz and linear growth conditions on the coefficients $A_i$ ($i=0,1,\dots,m$), as in Proposition 1. Next, consider $k=1$, and let $n\in\mathbb{N}$ (the assertion for $n=0$ is trivial). Since $DX^{\varepsilon,(n)}(t)=0$ for $t\in[-r,0]$, it suffices to prove the assertion for $t\in(0,T]$. The chain rule for the operator $D$ and Lemma 3 give

$D_uX^{\varepsilon,(n)}(t) = \varepsilon\int_0^u A(s,X_s^{\varepsilon,(n-1)})\,\mathbb{I}_{(s\le t)}\,ds + \int_0^t \nabla A_0(s,X_s^{\varepsilon,(n-1)})\,D_uX_s^{\varepsilon,(n-1)}\,ds + \varepsilon\int_0^t \nabla A(s,X_s^{\varepsilon,(n-1)})\,D_uX_s^{\varepsilon,(n-1)}\,dW(s)$  (17)

for $u\in[0,T]$ (cf. Ferrante et al. [10], Lemma 6.1), where $\nabla$ denotes the Fréchet derivative on $C([-r,0];\mathbb{R}^d)$. Thus the Hölder inequality and Lemma 2 yield the assertions. Finally, we discuss the general case $k\in\mathbb{Z}_+$. Suppose that the assertions hold up to the case $k-1$. Remark that

$D^k_{u_1,\dots,u_k}\Big(\int_0^t A(s,X_s^{\varepsilon,(n-1)})\,dW(s)\Big)$
$= D^{k-1}_{u_1,\dots,u_{k-1}}\Big(\int_0^{u_k\wedge t} A(s,X_s^{\varepsilon,(n-1)})\,ds\Big) + D^{k-1}_{u_1,\dots,u_{k-1}}\Big(\int_0^t D_{u_k}\big(A(s,X_s^{\varepsilon,(n-1)})\big)\,dW(s)\Big)$
$= \cdots = \sum_{\sigma\in\mathfrak{S}_k}\int_0^{u_{\sigma(k)}\wedge t} D^{k-1}_{u_{\sigma(1)},\dots,u_{\sigma(k-1)}}\big(A(s,X_s^{\varepsilon,(n-1)})\big)\,ds + \int_0^t D^k_{u_1,\dots,u_k}\big(A(s,X_s^{\varepsilon,(n-1)})\big)\,dW(s)$,  (18)

from Lemma 3, where $\mathfrak{S}_k$ is the set of permutations of $\{1,\dots,k\}$. Since

$D^k_{u_1,\dots,u_k}\big(A_i(s,X_s^{\varepsilon,(n-1)})\big) = D^{k-1}_{u_1,\dots,u_{k-1}}\big(\nabla A_i(s,X_s^{\varepsilon,(n-1)})\,D_{u_k}X_s^{\varepsilon,(n-1)}\big)$
$= \sum_{\sigma\in\mathfrak{S}_k}\sum_{j=0}^{k-1}\binom{k-1}{j} D^{k-1-j}_{u_{\sigma(1)},\dots,u_{\sigma(k-1-j)}}\big(\nabla A_i(s,X_s^{\varepsilon,(n-1)})\big)\,D^{j+1}_{u_{\sigma(k-j)},\dots,u_{\sigma(k-1)},u_k}X_s^{\varepsilon,(n-1)}$  (19)

for $i=0,1,\dots,m$, and

$D^k_{u_1,\dots,u_k}X^{\varepsilon,(n)}(t) = \sum_{\sigma\in\mathfrak{S}_k}\varepsilon\int_0^{u_{\sigma(k)}\wedge t} D^{k-1}_{u_{\sigma(1)},\dots,u_{\sigma(k-1)}}\big(A(s,X_s^{\varepsilon,(n-1)})\big)\,ds + \int_0^t D^k_{u_1,\dots,u_k}\big(A_0(s,X_s^{\varepsilon,(n-1)})\big)\,ds + \varepsilon\int_0^t D^k_{u_1,\dots,u_k}\big(A(s,X_s^{\varepsilon,(n-1)})\big)\,dW(s)$,  (20)

we obtain the assertion by the Hölder inequality, Lemma 2, and the induction hypothesis up to the case $k-1$.

The case $1<p\le 2$ is a direct consequence of the Jensen inequality. The proof is complete.

Proposition 5. For $t\in[-r,T]$, the $\mathbb{R}^d$-valued random variable $X^\varepsilon(t)$ is in $\mathbb{D}^\infty(\mathbb{R}^d)$. Moreover, for each $u\in[0,T]$, the $\mathbb{R}^m\otimes\mathbb{R}^d$-valued process $\{D_uX^\varepsilon(t);\ t\in[-r,T]\}$ satisfies the equation

$D_uX^\varepsilon(t) = 0$  ($t\in[-r,0]$ or $t<u$),
$D_uX^\varepsilon(t) = \varepsilon\int_0^{u\wedge t} A(s,X_s^\varepsilon)\,ds + \int_0^t \nabla A_0(s,X_s^\varepsilon)\,D_uX_s^\varepsilon\,ds + \varepsilon\int_0^t \nabla A(s,X_s^\varepsilon)\,D_uX_s^\varepsilon\,dW(s)$  $(t\in[u,T])$.  (21)

Proof. Let $p>1$ and $k\in\mathbb{Z}_+$ be arbitrary. For each $t\in[-r,T]$, the sequence $\{X^{\varepsilon,(n)}(t);\ n\in\mathbb{N}\}$ is Cauchy in $\mathbb{D}^{k,p}(\mathbb{R}^d)$, by Proposition 4. Hence we can find the limit, denoted by $\tilde X^\varepsilon(t)$, in $\mathbb{D}^{k,p}(\mathbb{R}^d)$. It is then routine to see that the process $\{\tilde X^\varepsilon(t);\ t\in[-r,T]\}$ satisfies (2), via the Hölder and Burkholder inequalities and the conditions on the coefficients $A_i$ ($i=0,1,\dots,m$), which implies $\tilde X^\varepsilon(t) = X^\varepsilon(t)$ for $t\in[-r,T]$ by the uniqueness of the solutions. Thus $X^\varepsilon(t)\in\mathbb{D}^{k,p}(\mathbb{R}^d)$ for $t\in[-r,T]$. Similarly, $\{D_uX^\varepsilon(t);\ u\in[0,T]\}$ satisfies (21), by taking the limit in each term of (17) via the Hölder inequality and Lemma 2.

For $u\in[0,T]$, denote by $\{Z^\varepsilon(t,u);\ t\in[-r,T]\}$ the $\mathbb{R}^d\otimes\mathbb{R}^d$-valued process determined by the equation

$Z^\varepsilon(t,u) = 0$  ($t\in[-r,0]$ or $t<u$),  $Z^\varepsilon(u,u) = I_d$,
$dZ^\varepsilon(t,u) = \nabla A_0(t,X_t^\varepsilon)\,Z_t^\varepsilon(\cdot,u)\,dt + \varepsilon\nabla A(t,X_t^\varepsilon)\,Z_t^\varepsilon(\cdot,u)\,dW(t)$  $(t\in(u,T])$,  (22)

where $Z_t^\varepsilon(\cdot,u) = \{Z^\varepsilon(t+\tau,u);\ \tau\in[-r,0]\}$.

Corollary 6. It holds that

$D_uX^\varepsilon(t) = \varepsilon\int_0^{u\wedge t} Z^\varepsilon(t,s)\,A(s,X_s^\varepsilon)\,ds$.  (23)

Proof. Direct consequence of Proposition 5 and the uniqueness of the solution to (21).

Finally, we introduce the well-known criterion for the existence of a smooth density for the probability law of $X^\varepsilon(t)$ with respect to the Lebesgue measure on $\mathbb{R}^d$.

Lemma 7 (cf. Kusuoka and Stroock [4]). Suppose the uniformly elliptic condition on the coefficients $A_i$ ($i=1,\dots,m$) of (2):

$\inf_{\zeta\in\mathbb{S}^{d-1}}\ \inf_{t\in[0,T]}\ \inf_{f\in C([-r,0];\mathbb{R}^d)}\ \sum_{i=1}^m (\zeta\cdot A_i(t,f))^2 > 0$.  (24)

Then, for each $t\in(0,T]$ and $0<\varepsilon\le 1$, there exists a smooth density $p^\varepsilon(t,y)$ for the probability law of $X^\varepsilon(t)$ with respect to the Lebesgue measure on $\mathbb{R}^d$.
Proof. Since $X^\varepsilon(t)\in\mathbb{D}^\infty(\mathbb{R}^d)$ by Proposition 5, it suffices to show that $(\det V^\varepsilon(t))^{-1}\in\bigcap_{p>1}L^p(\Omega)$ under the uniformly elliptic condition (24), where $V^\varepsilon(t)$ is the Malliavin covariance matrix of $X^\varepsilon(t)$. Set

$\tilde V^\varepsilon(t) = \int_0^t \sum_{i=1}^m Z^\varepsilon(t,u)\,A_i(u,X_u^\varepsilon)\,A_i(u,X_u^\varepsilon)^*\,Z^\varepsilon(t,u)^*\,du$.  (25)

Then $V^\varepsilon(t) = \varepsilon^2\tilde V^\varepsilon(t)$, so we only have to obtain the moment estimate for $\tilde V^\varepsilon(t)$. As stated in Lemma 1 of Komatsu and Takeuchi [15], it suffices to establish the boundedness of

$\sup_{\zeta\in\mathbb{S}^{d-1}}\ \mathbb{E}\big[(\zeta\cdot\tilde V^\varepsilon(t)\zeta)^{-p}\big]$  (26)

for any $p>1$. Since

$\mathbb{E}\big[(\zeta\cdot\tilde V^\varepsilon(t)\zeta)^{-p}\big] = \frac{1}{\Gamma(p)}\int_0^{+\infty}\lambda^{p-1}\,\mathbb{E}\big[\exp(-\lambda\,\zeta\cdot\tilde V^\varepsilon(t)\zeta)\big]\,d\lambda$,  (27)

we have to study the decay order of $\sup_{\zeta\in\mathbb{S}^{d-1}}\mathbb{E}[\exp(-\lambda\,\zeta\cdot\tilde V^\varepsilon(t)\zeta)]$ as $\lambda\to+\infty$.

Let $\lambda>1$ be sufficiently large. Remark that

$\mathbb{E}\big[\|Z^\varepsilon(t,u)-I_d\|^p_{\mathbb{R}^d\otimes\mathbb{R}^d}\big] \le C_{12,p,T}(t-u)^{p/2}$  (28)

for any $p>1$, from the Burkholder and Hölder inequalities. Let $\xi>1/2$, $1<\gamma<2\xi$, and $0<\sigma<(\gamma-1)/2$. Write $t_\xi := t-\lambda^{-\xi}$, and let $\zeta\in\mathbb{S}^{d-1}$. Then

$\mathbb{E}\big[\exp(-\lambda\,\zeta\cdot\tilde V^\varepsilon(t)\zeta)\big]$
$\le \mathbb{E}_1\big[\exp(-\lambda\,\zeta\cdot\tilde V^\varepsilon(t)\zeta)\big] + \mathbb{P}\big[\int_{t_\xi}^t\|Z^\varepsilon(t,u)-I_d\|^2_{\mathbb{R}^d\otimes\mathbb{R}^d}\,du \ge \lambda^{-\gamma}\big] + \mathbb{P}\big[\sup_{s\in[-r,t]}|X^\varepsilon(s)| \ge \lambda^\sigma\big]$
$=: I_1 + I_2 + I_3$,  (29)

where

$\mathbb{E}_1[\,\cdot\,] := \mathbb{E}\big[\,\cdot\,;\ \int_{t_\xi}^t\|Z^\varepsilon(t,u)-I_d\|^2_{\mathbb{R}^d\otimes\mathbb{R}^d}\,du < \lambda^{-\gamma},\ \sup_{s\in[-r,t]}|X^\varepsilon(s)| < \lambda^\sigma\big]$.  (30)

The Chebyshev inequality yields

$I_2 \le \lambda^{\gamma p}\,\mathbb{E}\big[\big(\int_{t_\xi}^t\|Z^\varepsilon(t,u)-I_d\|^2_{\mathbb{R}^d\otimes\mathbb{R}^d}\,du\big)^p\big] \le C_{13,p,T}\,\lambda^{-(2\xi-\gamma)p}$.  (31)

Similarly, the Chebyshev inequality leads to

$I_3 \le \lambda^{-\sigma p}\,\mathbb{E}\big[\sup_{s\in[-r,t]}|X^\varepsilon(s)|^p\big] \le C_{14,p,T,\eta}\,\lambda^{-\sigma p}$,  (32)

from Proposition 1. On the other hand, as for $I_1$, we have

$I_1 \le \mathbb{E}_1\big[\exp\big(-\lambda\int_{t_\xi}^t\sum_{i=1}^m|\zeta\cdot Z^\varepsilon(t,u)A_i(u,X_u^\varepsilon)|^2\,du\big)\big]$
$\le \mathbb{E}_1\big[\exp\big(-\frac{\lambda}{2}\inf_{\zeta\in\mathbb{S}^{d-1}}\int_{t_\xi}^t\sum_{i=1}^m|\zeta\cdot A_i(u,X_u^\varepsilon)|^2\,du\big)\exp\big(\lambda\int_{t_\xi}^t\|Z^\varepsilon(t,u)-I_d\|^2_{\mathbb{R}^d\otimes\mathbb{R}^d}\sum_{i=1}^m|A_i(u,X_u^\varepsilon)|^2\,du\big)\big]$
$\le \exp(\lambda^{1-\gamma+2\sigma})\,\exp\big(-\frac{\lambda^{1-\xi}}{2}\inf_{\zeta\in\mathbb{S}^{d-1}}\inf_{u\in[0,T]}\inf_{f\in C([-r,0];\mathbb{R}^d)}\sum_{i=1}^m|\zeta\cdot A_i(u,f)|^2\big)$
$\le C_{15}\exp(-C_{16}\lambda^{1-\xi})$.  (33)

Therefore, combining (29), (31), (32), and (33), we can get

$\mathbb{E}\big[\exp(-\lambda\,\zeta\cdot\tilde V^\varepsilon(t)\zeta)\big] \le C_{17,p,T,\eta}\,\lambda^{-C_{18}p}$,  (34)

so we have

$\sup_{\zeta\in\mathbb{S}^{d-1}}\mathbb{E}\big[(\zeta\cdot V^\varepsilon(t)\zeta)^{-p}\big] = \varepsilon^{-2p}\sup_{\zeta\in\mathbb{S}^{d-1}}\mathbb{E}\big[(\zeta\cdot\tilde V^\varepsilon(t)\zeta)^{-p}\big] \le C_{19,p,T}\,\varepsilon^{-2p}$  (35)

for any $p>1$. The proof is complete.

Remark 8. Consider the case

$A_i(t,f) = \tilde A_i(t,f(0))$  $(i=1,\dots,m)$,  (36)

where $\tilde A_i:[0,T]\times\mathbb{R}^d\to\mathbb{R}^d$ satisfy good boundedness and regularity conditions. Then our stochastic functional differential equation reads

$X^\varepsilon(t) = \eta(t)$  $(t\in[-r,0])$,
$dX^\varepsilon(t) = A_0(t,X_t^\varepsilon)\,dt + \varepsilon\tilde A(t,X^\varepsilon(t))\,dW(t)$  $(t\in(0,T])$,  (37)

where $\tilde A = (\tilde A_1,\dots,\tilde A_m)$. Then we can obtain the same upper estimate on the inverse of the Malliavin covariance matrix $V^\varepsilon(t)$ of $X^\varepsilon(t)$ in the hypoelliptic situation, which means that the linear space generated by the vectors $\tilde A_i$ ($i=1,\dots,m$) and their Lie brackets spans the space $\mathbb{R}^d$ (cf. Takeuchi [16]).
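Before turning to the large deviations, it may help to check the scaling $V^\varepsilon(t) = \varepsilon^2\tilde V^\varepsilon(t)$ in the simplest possible situation. The following worked special case is an illustration added here (it is not part of the original argument): take $d = m = 1$, $A_0\equiv 0$, and $A\equiv 1$, so that the delay plays no role and all the objects above can be computed by hand.

```latex
% Trivial case d = m = 1, A_0 \equiv 0, A \equiv 1:
X^\varepsilon(t) = \eta(0) + \varepsilon W(t), \qquad
D_u X^\varepsilon(t) = \varepsilon\,(u \wedge t),
\\[4pt]
V^\varepsilon(t) = \int_0^T \Big|\tfrac{d}{du} D_u X^\varepsilon(t)\Big|^2 du
 = \int_0^t \varepsilon^2\, du = \varepsilon^2 t
 = \varepsilon^2 \widetilde{V}^\varepsilon(t),
 \qquad \widetilde{V}^\varepsilon(t) = t,
\\[4pt]
p^\varepsilon(t,y)
 = \frac{1}{\sqrt{2\pi\varepsilon^2 t}}
   \exp\!\Big(-\frac{(y-\eta(0))^2}{2\varepsilon^2 t}\Big), \qquad
\varepsilon^2 \ln p^\varepsilon(t,y)
 \;\longrightarrow\; -\frac{(y-\eta(0))^2}{2t} \quad (\varepsilon \searrow 0).
```

Here the Gaussian density is explicit, the scaling of $V^\varepsilon(t)$ matches (35), and the limit of $\varepsilon^2\ln p^\varepsilon(t,y)$ anticipates the Varadhan-type estimate of Section 5.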
4. Large Deviation Principles for $X^\varepsilon$

We first recall the well-known fact on the sample-path large deviations for Brownian motion; see also [8]. Recall that $H$ is the Cameron-Martin space of $C_0([0,T];\mathbb{R}^m)$.

Lemma 9 (cf. Dembo and Zeitouni [17], Theorem 5.2.3). The family $\{\mathbb{P}\circ(\varepsilon W)^{-1};\ 0<\varepsilon\le 1\}$ of the laws of $\varepsilon W$ on $C_0([0,T];\mathbb{R}^m)$ satisfies the large deviation principle with the good rate function

$I(f) = \|f\|^2_H/2$  $(f\in H)$,  $I(f) = +\infty$  $(f\notin H)$.  (38)

For $f\in H$, let $x^f = \{x^f(t);\ t\in[-r,T]\}$ be the solution to the functional differential equation

$x^f(t) = \eta(t)$  $(t\in[-r,0])$,
$dx^f(t) = A_0(t,x_t^f)\,dt + A(t,x_t^f)\,\dot f(t)\,dt$  $(t\in(0,T])$.  (39)

Denote by

$C_\eta([-r,T];\mathbb{R}^d) = \{w\in C([-r,T];\mathbb{R}^d);\ w(t)=\eta(t)\ (t\in[-r,0])\}$.  (40)

Theorem 10. The family $\{\mathbb{P}\circ(X^\varepsilon)^{-1};\ 0<\varepsilon\le 1\}$ of the laws of $X^\varepsilon$ on $C_\eta([-r,T];\mathbb{R}^d)$ satisfies the large deviation principle with the good rate function

$\tilde I(g) = \inf\{I(f);\ f\in H,\ g = x^f\}$,  (41)

where $I$ is the function given in Lemma 9.

Via the contraction principle (cf. Dembo and Zeitouni [17], Theorem 4.2.1), Theorem 10 yields the following.

Corollary 11. For each $t\in[0,T]$, the family $\{\mathbb{P}\circ(X^\varepsilon(t))^{-1};\ 0<\varepsilon\le 1\}$ of the laws of $X^\varepsilon(t)$ on $\mathbb{R}^d$ satisfies the large deviation principle with the good rate function

$I(y) = \inf\{\tilde I(g);\ g\in C_\eta([-r,T];\mathbb{R}^d),\ y = g(t)\}$,  (42)

where $\tilde I$ is the function given in Theorem 10.

Now we prove Theorem 10, following Azencott [18] and Léandre [5–8]. Our strategy is almost parallel to [10, 11].

Proposition 12. For any $a>0$, the mapping

$H_a := \{f\in H;\ \|f\|_H\le a\}\ni f\mapsto x^f\in C_\eta([-r,T];\mathbb{R}^d)$  (43)

is continuous.

Proof. Let $f,g\in H_a$. Since

$x^f(t) = \eta(0) + \int_0^t A_0(s,x_s^f)\,ds + \int_0^t A(s,x_s^f)\,\dot f(s)\,ds$  (44)

for $t\in(0,T]$, we see that

$\sup_{\tau\in[-r,t]}|x^f(\tau)| \le \|\eta\|_\infty + \sup_{\tau\in[0,t]}|x^f(\tau)|$
$\le 2\|\eta\|_\infty + \int_0^t |A_0(s,x_s^f)|\,ds + \int_0^t \|A(s,x_s^f)\|_{\mathbb{R}^m\otimes\mathbb{R}^d}\,|\dot f(s)|\,ds$
$\le C_{20,T,\eta} + C_{21,T}\int_0^t (1+|\dot f(s)|)\big(1+\sup_{\tau\in[-r,s]}|x^f(\tau)|\big)\,ds$,  (45)

from the linear growth condition on $A_i$ ($i=0,1,\dots,m$), which yields, via the Gronwall inequality,

$\sup_{\tau\in[-r,T]}|x^f(\tau)| \le C_{22,T,\eta,a}$.  (46)

On the other hand, since

$x^f(t) - x^g(t) = \int_0^t \{A_0(s,x_s^f) - A_0(s,x_s^g)\}\,ds + \int_0^t \{A(s,x_s^f)\dot f(s) - A(s,x_s^g)\dot g(s)\}\,ds$  (47)

for $t\in(0,T]$, and the $\mathbb{R}^d$-valued functions $A_i$ ($i=0,1,\dots,m$) satisfy the Lipschitz and linear growth conditions, we have

$\sup_{\tau\in[-r,t]}|x^f(\tau)-x^g(\tau)| = \sup_{\tau\in[0,t]}|x^f(\tau)-x^g(\tau)|$
$\le \int_0^t |A_0(s,x_s^f)-A_0(s,x_s^g)|\,ds + \int_0^t \|A(s,x_s^f)-A(s,x_s^g)\|_{\mathbb{R}^m\otimes\mathbb{R}^d}\,|\dot f(s)|\,ds + \int_0^t \|A(s,x_s^g)\|_{\mathbb{R}^m\otimes\mathbb{R}^d}\,|\dot f(s)-\dot g(s)|\,ds$
$\le C_{23,T}\int_0^t \sup_{\tau\in[-r,s]}|x^f(\tau)-x^g(\tau)|\,\big(1+\sum_{i=1}^m|\dot f^i(s)|\big)\,ds + C_{24,T,\eta,a}\,\|f-g\|_H$.  (48)

The Gronwall inequality then gives

$\sup_{\tau\in[-r,t]}|x^f(\tau)-x^g(\tau)| \le C_{24,T,\eta,a}\,\|f-g\|_H\,\exp\big[C_{23,T}\int_0^t\big(1+\sum_{i=1}^m|\dot f^i(s)|\big)\,ds\big] \le C_{25,T,\eta,a}\,\|f-g\|_H$,  (49)

which completes the proof.

Proposition 13. Suppose that the $\mathbb{R}^d$-valued functions $A_i$ ($i=1,\dots,m$) are bounded. Then, for any $f\in H$ and $\rho>0$, there exist $\alpha_\rho>0$ and $\varepsilon_\rho>0$ such that

$\mathbb{P}\big[\sup_{\tau\in[-r,T]}|X^\varepsilon(\tau)-x^f(\tau)|>\rho,\ \sup_{\tau\in[0,T]}|\varepsilon W(\tau)-f(\tau)|\le\alpha_\rho\big] \le C_{26,T,f,\rho}\exp\big[-C_{27,T,f}\,\frac{\rho^2}{\varepsilon^2}\big]$  (50)

for any $0<\varepsilon\le\varepsilon_\rho$.

Proof. Define a new probability measure $\tilde{\mathbb{P}}$ by

$\frac{d\tilde{\mathbb{P}}}{d\mathbb{P}}\Big|_{\mathcal{F}_T} = \exp\big[\int_0^T\sum_{i=1}^m \frac{\dot f^i(s)}{\varepsilon}\,dW^i(s) - \frac{\|f\|^2_H}{2\varepsilon^2}\big]$.  (51)

The Girsanov theorem tells us that the $\mathbb{R}^m$-valued process $\{\tilde W(t) := W(t)-f(t)/\varepsilon;\ t\in[0,T]\}$ is also an $m$-dimensional Brownian motion under the probability measure $\tilde{\mathbb{P}}$. Let $\{X^{\varepsilon,f}(t);\ t\in[-r,T]\}$ be the $\mathbb{R}^d$-valued process determined by the equation

$X^{\varepsilon,f}(t) = \eta(t)$  $(t\in[-r,0])$,
$dX^{\varepsilon,f}(t) = A_0(t,X_t^{\varepsilon,f})\,dt + A(t,X_t^{\varepsilon,f})\{\varepsilon\,d\tilde W(t)+\dot f(t)\,dt\}$  $(t\in(0,T])$.  (52)

Write $M(t) := \int_0^t A(s,X_s^{\varepsilon,f})\,d\tilde W(s)$. Remark that

$\sup_{\tau\in[-r,t]}|X^{\varepsilon,f}(\tau)-x^f(\tau)| = \sup_{\tau\in[0,t]}|X^{\varepsilon,f}(\tau)-x^f(\tau)|$
$\le \int_0^t |A_0(s,X_s^{\varepsilon,f})-A_0(s,x_s^f)|\,ds + \int_0^t \|A(s,X_s^{\varepsilon,f})-A(s,x_s^f)\|_{\mathbb{R}^m\otimes\mathbb{R}^d}\,|\dot f(s)|\,ds + \sup_{\tau\in[0,t]}|\varepsilon M(\tau)|$
$\le C_{28,T}\int_0^t \sup_{\tau\in[-r,s]}|X^{\varepsilon,f}(\tau)-x^f(\tau)|\,\big(1+\sum_{i=1}^m|\dot f^i(s)|\big)\,ds + \sup_{\tau\in[0,t]}|\varepsilon M(\tau)|$.  (53)

The Gronwall inequality gives

$\sup_{\tau\in[-r,t]}|X^{\varepsilon,f}(\tau)-x^f(\tau)| \le \big(\sup_{\tau\in[0,t]}|\varepsilon M(\tau)|\big)\exp\big[C_{28,T}\int_0^t\big(1+\sum_{i=1}^m|\dot f^i(s)|\big)\,ds\big] \le C_{29,T,f}\big(\sup_{\tau\in[0,t]}|\varepsilon M(\tau)|\big)$.  (54)

For each $k=1,\dots,d$, the martingale representation theorem yields a one-dimensional Brownian motion $\{B^k(t);\ t\in[0,T]\}$ starting at the origin with

$M^k(t) = B^k(\langle M^k\rangle(t))$,  (55)

where

$\langle M^k\rangle(t) = \int_0^t \sum_{i=1}^m |A_i^k(s,X_s^{\varepsilon,f})|^2\,ds$

for $k=1,\dots,d$. Remark that $\langle M^k\rangle(t)\le C_{30,T}$, because of the boundedness of the $\mathbb{R}^d$-valued functions $A_i$ ($i=1,\dots,m$). Since, for any constant $c>0$,

$\tilde{\mathbb{P}}\big[\sup_{\tau\in[0,C_{30,T}]}|B^k(\tau)| > \frac{\rho}{c\,\varepsilon}\big] \le \sqrt{2}\exp\big[-\frac{\rho^2}{4\,C_{30,T}\,c^2\,\varepsilon^2}\big]$  (56)

from the reflection principle for Brownian motion, we have

$\mathbb{P}\big[\sup_{\tau\in[-r,T]}|X^\varepsilon(\tau)-x^f(\tau)|>\rho,\ \sup_{\tau\in[0,T]}|\varepsilon W(\tau)-f(\tau)|\le\alpha_\rho\big]$
$= \tilde{\mathbb{P}}\big[\sup_{\tau\in[-r,T]}|X^{\varepsilon,f}(\tau)-x^f(\tau)|>\rho,\ \sup_{\tau\in[0,T]}|\varepsilon\tilde W(\tau)|\le\alpha_\rho\big]$
$\le \tilde{\mathbb{P}}\big[\sup_{\tau\in[0,T]}|M(\tau)| > \frac{\rho}{C_{29,T,f}\,\varepsilon}\big]$
$\le \tilde{\mathbb{P}}\big[\bigcup_{k=1}^d\big\{\sup_{\tau\in[0,C_{30,T}]}|B^k(\tau)| > \frac{\rho}{C_{29,T,f}\sqrt{d}\,\varepsilon}\big\}\big]$
$\le \sqrt{2}\,d\,\exp\big[-\frac{\rho^2}{4\,C_{30,T}\,C_{29,T,f}^2\,d\,\varepsilon^2}\big]$,  (57)

which completes the proof.

Proposition 14. It holds that

$\lim_{R\to+\infty}\limsup_{\varepsilon\searrow 0}\ \varepsilon^2\ln\mathbb{P}\big[\sup_{t\in[-r,T]}|X^\varepsilon(t)|>R\big] = -\infty$.  (58)

Proof. Let $N>2$ be sufficiently large. From the Itô formula, we see that

$(1+|X^\varepsilon(t)|^2)^N = (1+|\eta(0)|^2)^N + \int_0^t N(1+|X^\varepsilon(s)|^2)^{N-1}\,2\varepsilon X^\varepsilon(s)\cdot A(s,X_s^\varepsilon)\,dW(s)$
$+ \int_0^t \big\{N(1+|X^\varepsilon(s)|^2)^{N-1}\big(2X^\varepsilon(s)\cdot A_0(s,X_s^\varepsilon)+\varepsilon^2\sum_{i=1}^m|A_i(s,X_s^\varepsilon)|^2\big) + 2N(N-1)\varepsilon^2(1+|X^\varepsilon(s)|^2)^{N-2}\sum_{i=1}^m(X^\varepsilon(s)\cdot A_i(s,X_s^\varepsilon))^2\big\}\,ds$.  (59)

Define $\sigma_R = \inf\{t>0;\ |X^\varepsilon(t)|>R\}$. Then it holds that

$\mathbb{E}\big[(1+|X^\varepsilon(t\wedge\sigma_R)|^2)^N\big]$
$\le (1+\|\eta\|^2_\infty)^N + \mathbb{E}\big[\int_0^{t\wedge\sigma_R}\big\{N(1+|X^\varepsilon(s)|^2)^{N-1}\big(2X^\varepsilon(s)\cdot A_0(s,X_s^\varepsilon)+\varepsilon^2\sum_{i=1}^m|A_i(s,X_s^\varepsilon)|^2\big) + 2N(N-1)\varepsilon^2(1+|X^\varepsilon(s)|^2)^{N-2}\sum_{i=1}^m(X^\varepsilon(s)\cdot A_i(s,X_s^\varepsilon))^2\big\}\,ds\big]$
$\le (1+\|\eta\|^2_\infty)^N + C_{32,T}(N+\varepsilon^2N+\varepsilon^2N^2)\,\mathbb{E}\big[\int_0^t (1+|X^\varepsilon(s\wedge\sigma_R)|^2)^N\,ds\big]$,  (60)

from the linear growth condition on the coefficients $A_i$ ($i=0,1,\dots,m$) of (2). Hence the Gronwall inequality implies that

$\mathbb{E}\big[(1+|X^\varepsilon(t\wedge\sigma_R)|^2)^N\big] \le (1+\|\eta\|^2_\infty)^N\exp\big[C_{32,T}(N+\varepsilon^2N+\varepsilon^2N^2)\,t\big]$.  (61)

In particular, taking $N = 1/\varepsilon^2$ yields

$\mathbb{E}\big[(1+|X^\varepsilon(t\wedge\sigma_R)|^2)^{1/\varepsilon^2}\big] \le (1+\|\eta\|^2_\infty)^{1/\varepsilon^2}\exp\big[C_{33,T}\big(\tfrac{1}{\varepsilon^2}+1\big)\,t\big]$.  (62)

Therefore, the Chebyshev inequality leads us to see that

$\mathbb{P}\big[\sup_{t\in[-r,T]}|X^\varepsilon(t)|>R\big] = \mathbb{P}[\sigma_R\le T] \le \mathbb{P}\big[|X^\varepsilon(T\wedge\sigma_R)|\ge R\big]$
$\le (1+R^2)^{-1/\varepsilon^2}\,\mathbb{E}\big[(1+|X^\varepsilon(T\wedge\sigma_R)|^2)^{1/\varepsilon^2}\big]$
$\le \Big(\frac{1+\|\eta\|^2_\infty}{1+R^2}\Big)^{1/\varepsilon^2}\exp\big[C_{33,T}\big(\tfrac{1}{\varepsilon^2}+1\big)\,T\big]$,  (63)

so we have

$\limsup_{\varepsilon\searrow 0}\ \varepsilon^2\ln\mathbb{P}\big[\sup_{t\in[-r,T]}|X^\varepsilon(t)|>R\big] \le \ln\Big(\frac{1+\|\eta\|^2_\infty}{1+R^2}\Big) + C_{33,T}\,T$,  (64)

which completes the proof.

Let $R\ge 1$. Define $\sigma_R = \inf\{t>0;\ |X^\varepsilon(t)|>R\}$ and $X^{\varepsilon,R}(t) = X^\varepsilon(t\wedge\sigma_R)$.

Proposition 15. For any $\delta>0$, it holds that

$\lim_{R\to+\infty}\limsup_{\varepsilon\searrow 0}\ \varepsilon^2\ln\mathbb{P}\big[\sup_{t\in[-r,T]}|X^\varepsilon(t)-X^{\varepsilon,R}(t)|>\delta\big] = -\infty$.  (65)

Proof. Remark that

$\mathbb{P}\big[\sup_{t\in[-r,T]}|X^\varepsilon(t)-X^{\varepsilon,R}(t)|>\delta\big]$
$\le \mathbb{P}\big[\sup_{t\in[-r,T]}|X^\varepsilon(t)-X^{\varepsilon,R}(t)|>\delta,\ \sigma_R\ge T\big] + \mathbb{P}[\sigma_R\le T]$
$= \mathbb{P}[\sigma_R\le T] \le \mathbb{P}\big[|X^\varepsilon(T\wedge\sigma_R)|\ge R\big] \le \Big(\frac{1+\|\eta\|^2_\infty}{1+R^2}\Big)^{1/\varepsilon^2}\exp\big[C_{33,T}\big(\tfrac{1}{\varepsilon^2}+1\big)\,T\big]$,  (66)

as seen in the proof of Proposition 14, since $X^\varepsilon = X^{\varepsilon,R}$ on $\{\sigma_R\ge T\}$. So we can get

$\limsup_{\varepsilon\searrow 0}\ \varepsilon^2\ln\mathbb{P}\big[\sup_{t\in[-r,T]}|X^\varepsilon(t)-X^{\varepsilon,R}(t)|>\delta\big] \le \ln\Big(\frac{1+\|\eta\|^2_\infty}{1+R^2}\Big) + C_{33,T}\,T$,  (67)

which completes the proof.

Proof of Theorem 10. We prove the assertion in two steps: the case where $A_i$ ($i=1,\dots,m$) are bounded, and then the general case.

Step 1. Suppose that the coefficients $A_i$ ($i=1,\dots,m$) are bounded. Propositions 12 and 13 are sufficient for our goal (cf. [17, 18]); in fact, the large deviation principle for the family $\{\mathbb{P}\circ(X^\varepsilon)^{-1};\ 0<\varepsilon\le 1\}$ then follows from the one for $\{\mathbb{P}\circ(\varepsilon W)^{-1};\ 0<\varepsilon\le 1\}$ in Lemma 9.

Step 2. We discuss the general case on $A_i$ ($i=1,\dots,m$). Let $R\ge 1$, and let $F$ be a closed set in $C_\eta([-r,T];\mathbb{R}^d)$. Denote by $F_R = F\cap B(0;R)$ and by $F_R^\delta$ the closed $\delta$-neighborhood of $F_R$, where $B(0;R)$ is the open ball in $C_\eta([-r,T];\mathbb{R}^d)$ of radius $R$ centered at $0$. Then it holds that

$\mathbb{P}[X^\varepsilon\in F] \le \mathbb{P}\big[X^\varepsilon\in F,\ \sup_{t\in[-r,T]}|X^\varepsilon(t)|\le R\big] + \mathbb{P}\big[\sup_{t\in[-r,T]}|X^\varepsilon(t)|>R\big]$
$= \mathbb{P}[X^{\varepsilon,R}\in F_R] + \mathbb{P}\big[\sup_{t\in[-r,T]}|X^\varepsilon(t)|>R\big]$.  (68)

As seen in Step 1, we have already obtained the large deviation principle for $\{\mathbb{P}\circ(X^{\varepsilon,R})^{-1};\ 0<\varepsilon\le 1\}$ with the good rate function $\tilde I_R$, where $I(f)$ is given in Lemma 9 and

$\tilde I_R(g) = \inf\big\{I(f);\ f\in H,\ g = x^f,\ \sup_{t\in[-r,T]}|x^f(t)|\le R\big\}$.  (69)

So we have

$\limsup_{\varepsilon\searrow 0}\ \varepsilon^2\ln\mathbb{P}[X^{\varepsilon,R}\in F_R] \le -\inf_{g\in F_R}\tilde I_R(g)$.  (70)

Therefore, we can get

$\limsup_{\varepsilon\searrow 0}\ \varepsilon^2\ln\mathbb{P}[X^\varepsilon\in F]$
$\le \lim_{R\to+\infty}\Big\{\big(-\inf_{g\in F_R}\tilde I_R(g)\big)\vee\big(\limsup_{\varepsilon\searrow 0}\ \varepsilon^2\ln\mathbb{P}\big[\sup_{t\in[-r,T]}|X^\varepsilon(t)|>R\big]\big)\Big\}$
$= \lim_{R\to+\infty}\big(-\inf_{g\in F_R}\tilde I(g)\big) \le -\inf_{g\in F}\tilde I(g)$,  (71)

from Proposition 14, which completes the proof of the upper estimate of the large deviation principle.

Next, we pay attention to the lower estimate of the large deviation principle. Let $G$ be an open set in $C_\eta([-r,T];\mathbb{R}^d)$, and take $\tilde g\in G\cap B(0;R)$. Then we can find $\delta>0$ such that $B(\tilde g;\delta)\subset G$. Thus we have

$-\tilde I(\tilde g) = -\tilde I_R(\tilde g) \le -\inf_{g\in B(\tilde g;\delta/2)}\tilde I_R(g)$
$\le \liminf_{\varepsilon\searrow 0}\ \varepsilon^2\ln\mathbb{P}\big[X^{\varepsilon,R}\in B(\tilde g;\tfrac{\delta}{2})\big]$
$\le \liminf_{\varepsilon\searrow 0}\ \varepsilon^2\ln\Big\{\mathbb{P}[X^\varepsilon\in G] + \mathbb{P}\big[\sup_{t\in[-r,T]}|X^\varepsilon(t)-X^{\varepsilon,R}(t)|>\tfrac{\delta}{2}\big]\Big\}$
$\le \big(\liminf_{\varepsilon\searrow 0}\ \varepsilon^2\ln\mathbb{P}[X^\varepsilon\in G]\big)\vee\big(\limsup_{\varepsilon\searrow 0}\ \varepsilon^2\ln\mathbb{P}\big[\sup_{t\in[-r,T]}|X^\varepsilon(t)-X^{\varepsilon,R}(t)|>\tfrac{\delta}{2}\big]\big)$.  (72)

The first equality holds because $\tilde g\in B(0;R)$; the second inequality is a consequence of the large deviation principle for $X^{\varepsilon,R}$ obtained in Step 1; and the third inequality holds because $X^{\varepsilon,R}\in B(\tilde g;\delta/2)$ together with $\sup_{t\in[-r,T]}|X^\varepsilon(t)-X^{\varepsilon,R}(t)|\le\delta/2$ implies $X^\varepsilon\in B(\tilde g;\delta)\subset G$. Taking the limit as $R\to+\infty$ leads us to see that

$-\tilde I(\tilde g) \le \liminf_{\varepsilon\searrow 0}\ \varepsilon^2\ln\mathbb{P}[X^\varepsilon\in G]$,  (73)

from Proposition 15, which completes the proof of the lower estimate of the large deviation principle. The proof of Theorem 10 is complete.
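The rate function in Theorem 10 is governed by the skeleton equation (39) together with $I(f)=\|f\|_H^2/2$. The following numerical sketch (added for illustration, with an assumed linear point-delay coefficient rather than the paper's general functional coefficients; the function name and parameters are ours) solves $x^f$ by the Euler method for a given control $f$ and evaluates the energy of the control.

```python
def skeleton_and_rate(eta, a0, a, fdot, r, T, dt):
    """Euler scheme for a point-delay instance of the skeleton equation (39),
        dx(t) = a0 * x(t - r) dt + a * fdot(t) dt,   x = eta on [-r, 0],
    together with the rate I(f) = (1/2) * int_0^T |fdot(t)|^2 dt."""
    n_lag = int(round(r / dt))
    n = int(round(T / dt))
    # path[j] holds x(-r + j*dt): history first, then the forward solution
    path = [eta(-r + k * dt) for k in range(n_lag + 1)]
    rate = 0.0
    for k in range(n):
        t = k * dt
        path.append(path[n_lag + k] + (a0 * path[k] + a * fdot(t)) * dt)
        rate += 0.5 * fdot(t) ** 2 * dt
    return path, rate

# Control f(t) = c*t (so fdot = c), zero drift, unit diffusion coefficient:
# the skeleton is x(t) = eta(0) + c*t and I(f) = c^2 * T / 2.
c = 2.0
path, rate = skeleton_and_rate(lambda t: 0.0, a0=0.0, a=1.0,
                               fdot=lambda t: c, r=1.0, T=1.0, dt=0.001)
print(path[-1], rate)  # approximately 2.0 and 2.0
```

In this linear check the endpoint $x^f(T)$ and the rate $c^2T/2$ are both explicit, so the discretization can be verified directly; for nonconstant controls the same routine gives the cost appearing in the infimum (41).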
5. Density Estimates
In this section, we will consider the estimate of the density
𝑝𝜀 (𝑡, 𝑦) for the solution 𝑋𝜀 (𝑡), from the viewpoint of the
Malliavin calculus.
Theorem 16 (Upper estimate). Suppose that the R^d-valued functions A_i (i = 1,...,m) satisfy the uniformly elliptic condition (24). Then, it holds that

\[
\limsup_{\varepsilon\searrow 0}\varepsilon^{2}\ln p^{\varepsilon}(t,y) \le -I(y), \tag{74}
\]

where the function I is given in Corollary 11.

Proof. Let 0 < σ < 1 be sufficiently small, and let Λ_σ ∈ C_0^∞(R^d;[0,1]) be such that

\[
\Lambda_\sigma(z) = \begin{cases} 1 & (|z-y|\le\sigma), \\ 0 & (|z-y|>2\sigma). \end{cases} \tag{75}
\]

Then, the density p^ε(t,y) admits the representation

\[
p^{\varepsilon}(t,y) = \mathbb{E}\Bigl[\prod_{j=1}^d \mathbb{I}_{(y_j,+\infty)}\bigl(X^{\varepsilon,j}(t)\bigr)\,\Gamma_{(1,\ldots,d)}\bigl(X^{\varepsilon}(t),\Lambda_\sigma(X^{\varepsilon}(t))\bigr)\Bigr], \tag{76}
\]

where

\[
\Gamma_j\bigl(X^{\varepsilon}(t),\Lambda_\sigma(X^{\varepsilon}(t))\bigr) = \delta\Bigl(\Lambda_\sigma(X^{\varepsilon}(t))\sum_{k=1}^d \bigl[(V^{\varepsilon}(t))^{-1}\bigr]_{jk}\,DX^{\varepsilon,k}(t)\Bigr),
\qquad
\Gamma_{(1,\ldots,d)}\bigl(X^{\varepsilon}(t),\Lambda_\sigma(X^{\varepsilon}(t))\bigr) = \Gamma_d\Bigl(X^{\varepsilon}(t),\Gamma_{(1,\ldots,d-1)}\bigl(X^{\varepsilon}(t),\Lambda_\sigma(X^{\varepsilon}(t))\bigr)\Bigr), \tag{77}
\]

and δ is the Skorokhod integral operator. Remark that, under the uniformly elliptic condition (24) on the R^d-valued functions A_i (i = 1,...,m),

\[
\bigl\|\Gamma_{(1,\ldots,d)}\bigl(X^{\varepsilon}(t),\Lambda_\sigma(X^{\varepsilon}(t))\bigr)\bigr\|_{\mathbb{L}^p(\Omega)}
\le C_{34,p,T,\eta}\,\bigl\|(V^{\varepsilon}(t))^{-1}\bigr\|_{\mathbb{L}^\alpha(\Omega)}\,\bigl\|X^{\varepsilon}(t)\bigr\|_{\beta,\gamma}\,\bigl\|\Lambda_\sigma(X^{\varepsilon}(t))\bigr\|_{\kappa,\sigma}
\le C_{35,p,T,\eta}\,\varepsilon^{-2d}, \tag{78}
\]

where α, γ, σ > 1 and β, κ ∈ Z_+, by using Proposition 5 and the proof of Lemma 7. Hence, the density p^ε(t,y) can be estimated from above as follows:

\[
\begin{aligned}
p^{\varepsilon}(t,y)
&= \mathbb{E}\Bigl[\prod_{j=1}^d \mathbb{I}_{(y_j,+\infty)}\bigl(X^{\varepsilon,j}(t)\bigr)\,\Gamma_{(1,\ldots,d)}\bigl(X^{\varepsilon}(t),\Lambda_\sigma(X^{\varepsilon}(t))\bigr)\Bigr] \\
&\le \mathbb{E}\Bigl[\bigl|\Gamma_{(1,\ldots,d)}\bigl(X^{\varepsilon}(t),\Lambda_\sigma(X^{\varepsilon}(t))\bigr)\bigr|\,\mathbb{I}_{\mathrm{Supp}[\Lambda_\sigma]}\bigl(X^{\varepsilon}(t)\bigr)\Bigr] \\
&\le C_{35,p,T,\eta}\,\varepsilon^{-2d}\,\mathbb{P}\bigl[X^{\varepsilon}(t)\in\mathrm{Supp}[\Lambda_\sigma]\bigr]^{1/q},
\end{aligned} \tag{79}
\]
where q > 1 is such that 1/p + 1/q = 1. From Corollary 11, we have

\[
\limsup_{\varepsilon\searrow 0}\varepsilon^{2}\ln\mathbb{P}\bigl[X^{\varepsilon}(t)\in\mathrm{Supp}[\Lambda_\sigma]\bigr] \le -\inf_{z\in\mathrm{Supp}[\Lambda_\sigma]} I(z). \tag{80}
\]

Since the function I is lower semicontinuous, taking the limit as σ ↘ 0 and q ↘ 1 enables us to see that

\[
\limsup_{\varepsilon\searrow 0}\varepsilon^{2}\ln p^{\varepsilon}(t,y) \le -I(y), \tag{81}
\]

which is the conclusion of Theorem 16.

Remark 17. As stated in Remark 8, a similar problem can also be studied under the hypoelliptic condition, in the case

\[
A_i(t,f) = \tilde A_i(t, f(0)) \quad (i = 1,\ldots,m), \tag{82}
\]

where \tilde A_i : [0,T] \times \mathbb{R}^d \to \mathbb{R}^d satisfies suitable boundedness and regularity conditions (cf. [16]).

Now, we will study the lower estimate of the density p^ε(t,y) for the solution process X^ε to (2). Before doing so, we will prepare some auxiliary arguments.

Proposition 18. Let f ∈ H, and assume the uniformly elliptic condition (24) on the functions A_i (i = 1,...,m). Then, it holds that

\[
\det V^{f}(t) > 0, \tag{83}
\]

for each t ∈ (0,T], where V^f(t) is the Gram matrix for x^f(t).

Proof. Let u ∈ [0,T], and let {Z(t,u); t ∈ [−r,T]} be the R^d ⊗ R^d-valued mapping given by the following functional differential equation:

\[
Z(t,u) = 0 \quad (t\in[-r,0]\ \text{or}\ t\in(0,u)), \qquad Z(u,u) = I_d,
\]
\[
dZ(t,u) = \nabla A_0\bigl(t,x^f_t\bigr) Z_t(\cdot,u)\,dt + \nabla A\bigl(t,x^f_t\bigr) Z_t(\cdot,u)\,\dot f(t)\,dt \quad (t\in[u,T]), \tag{84}
\]

where Z_t(·,u) = {Z(t+τ,u); τ ∈ [−r,0]}. From the condition on the coefficients A_i (i = 0,1,...,m), we see that

\[
\begin{aligned}
\sup_{\tau\in[u,t]} \|Z(\tau,u)-I_d\|_{\mathbb{R}^d\otimes\mathbb{R}^d}
&\le \sup_{\tau\in[u,t]} \Bigl\| \int_u^{\tau} \nabla A_0\bigl(s,x^f_s\bigr) Z_s(\cdot,u)\,ds \Bigr\|_{\mathbb{R}^d\otimes\mathbb{R}^d}
 + \sup_{\tau\in[u,t]} \Bigl\| \int_u^{\tau} \nabla A\bigl(s,x^f_s\bigr) Z_s(\cdot,u)\,\dot f(s)\,ds \Bigr\|_{\mathbb{R}^d\otimes\mathbb{R}^d} \\
&\le \int_u^{t} \Bigl( \bigl\|\nabla A_0\bigl(s,x^f_s\bigr)\bigr\|_{C([-r,0];\mathbb{R}^d)\otimes\mathbb{R}^d} + \bigl\|\nabla A\bigl(s,x^f_s\bigr)\bigr\|_{C([-r,0];\mathbb{R}^d)\otimes\mathbb{R}^m\otimes\mathbb{R}^d}\,\bigl|\dot f(s)\bigr| \Bigr)\,ds \\
&\quad + \int_u^{t} \Bigl( \bigl\|\nabla A_0\bigl(s,x^f_s\bigr)\bigr\|_{C([-r,0];\mathbb{R}^d)\otimes\mathbb{R}^d} + \bigl\|\nabla A\bigl(s,x^f_s\bigr)\bigr\|_{C([-r,0];\mathbb{R}^d)\otimes\mathbb{R}^m\otimes\mathbb{R}^d}\,\bigl|\dot f(s)\bigr| \Bigr) \sup_{\tau\in[s-r,s]} \|Z(\tau,u)-I_d\|_{\mathbb{R}^d\otimes\mathbb{R}^d}\,ds \\
&\le C_{36,T}\int_u^t \Bigl(1+\sum_{i=1}^m \bigl|\dot f^i(s)\bigr|\Bigr)\,ds
 + C_{37,T}\int_u^t \Bigl(1+\sum_{i=1}^m \bigl|\dot f^i(s)\bigr|\Bigr)\sup_{\tau\in[u,s]} \|Z(\tau,u)-I_d\|_{\mathbb{R}^d\otimes\mathbb{R}^d}\,ds.
\end{aligned} \tag{85}
\]

Remark that

\[
\int_u^t \Bigl(1+\sum_{i=1}^m \bigl|\dot f^i(s)\bigr|\Bigr)\,ds \le C_{38,f,T}\,(t-u)^{1/2}. \tag{86}
\]

Hence, the Gronwall inequality tells us that

\[
\sup_{\tau\in[u,t]} \|Z(\tau,u)-I_d\|_{\mathbb{R}^d\otimes\mathbb{R}^d} \le C_{39,f,T}\,(t-u)^{1/2}. \tag{87}
\]

On the other hand, remark that we have already seen in the proof of Proposition 12 that

\[
\sup_{\tau\in[-r,t]} \bigl|x^f(\tau)\bigr| \le C_{22,T,\eta,f}. \tag{88}
\]

Now, we will pay attention to the lower estimate of det V^f(t). Since, for each u ∈ [0,T], {D_u x^f(t); t ∈ [−r,T]} satisfies the equation

\[
D_u x^f(t) = 0 \quad (t\in[-r,0]),
\]
\[
D_u x^f(t) = \int_0^{u} A\bigl(s,x^f_s\bigr)\,\mathbb{I}_{(s\le t)}\,ds + \int_0^t \nabla A_0\bigl(s,x^f_s\bigr)\,D_u x^f_s\,ds + \int_0^t \nabla A\bigl(s,x^f_s\bigr)\,D_u x^f_s\,\dot f(s)\,ds \quad (t\in(0,T]), \tag{89}
\]

we have

\[
D_u x^f(t) = \int_0^{u\wedge t} Z(t,s)\,A\bigl(s,x^f_s\bigr)\,ds, \tag{90}
\]

similarly to Corollary 6. Hence, the Gram matrix V^f(t) can be expressed as follows:

\[
V^f(t) = \int_0^t Z(t,u)\,A\bigl(u,x^f_u\bigr)\,A\bigl(u,x^f_u\bigr)^{*}\,Z(t,u)^{*}\,du. \tag{91}
\]
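The Gronwall step (85)–(87) above can be spelled out as follows; a sketch, abbreviating φ(t) = sup_{τ∈[u,t]} ‖Z(τ,u) − I_d‖_{R^d⊗R^d}. Combining (85) with (86) gives

```latex
\varphi(t) \le C_{36,T}\,C_{38,f,T}\,(t-u)^{1/2}
+ C_{37,T}\int_u^t \Bigl(1+\sum_{i=1}^m |\dot f^i(s)|\Bigr)\varphi(s)\,ds,
```

so the Gronwall inequality, together with (86) once more, yields

```latex
\varphi(t) \le C_{36,T}\,C_{38,f,T}\,(t-u)^{1/2}
\exp\Bigl( C_{37,T}\int_u^t \Bigl(1+\sum_{i=1}^m |\dot f^i(s)|\Bigr)ds \Bigr)
\le C_{39,f,T}\,(t-u)^{1/2},
```

which is (87).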
Let t_α ∈ [0,T] be sufficiently close to t. So, we see that

\[
\begin{aligned}
\det V^f(t)
&= \det\Bigl[\int_0^t Z(t,u)\,A\bigl(u,x^f_u\bigr)\,A\bigl(u,x^f_u\bigr)^{*}\,Z(t,u)^{*}\,du\Bigr] \\
&\ge \Bigl\{\inf_{\zeta\in\mathbb{S}^{d-1}}\int_0^t\sum_{i=1}^m \bigl(\zeta\cdot Z(t,u)A_i\bigl(u,x^f_u\bigr)\bigr)^2\,du\Bigr\}^{d} \\
&= \Bigl\{\inf_{\zeta\in\mathbb{S}^{d-1}}\int_0^t\sum_{i=1}^m \bigl(\zeta\cdot Z(t,u)A_i\bigl(u,x^f_u\bigr)\bigr)^2\,du\Bigr\}^{d}
\,\mathbb{I}\Bigl(\sup_{t\in[-r,T]}\bigl|x^f(t)\bigr|\le C_{22,T,\eta,f}\Bigr)
\,\mathbb{I}\Bigl(\sup_{u\in[t_\alpha,t]}\|Z(t,u)-I_d\|_{\mathbb{R}^d\otimes\mathbb{R}^d}\le C_{39,T,f}(t-t_\alpha)^{1/2}\Bigr) \\
&\ge \Bigl\{\frac{1}{2}\inf_{\zeta\in\mathbb{S}^{d-1}}\int_{t_\alpha}^t\sum_{i=1}^m \bigl(\zeta\cdot A_i\bigl(u,x^f_u\bigr)\bigr)^2\,du
-\int_{t_\alpha}^t\sum_{i=1}^m \|Z(t,u)-I_d\|^2_{\mathbb{R}^d\otimes\mathbb{R}^d}\,\bigl|A_i\bigl(u,x^f_u\bigr)\bigr|^2\,du\Bigr\}^{d}\,\mathbb{I}(\cdots)\,\mathbb{I}(\cdots) \\
&\ge \Bigl\{\frac{t-t_\alpha}{2}\inf_{\zeta,t,g}\sum_{i=1}^m \bigl(\zeta\cdot A_i(t,g)\bigr)^2 - C_{40,T,\eta,f}(t-t_\alpha)^2\Bigr\}^{d}\,\mathbb{I}(\cdots)\,\mathbb{I}(\cdots) \\
&\ge \bigl\{C_{41,T,\eta,f}\,(t-t_\alpha)\bigr\}^{d}\,\mathbb{I}(\cdots)\,\mathbb{I}(\cdots)
= \bigl\{C_{41,T,\eta,f}\,(t-t_\alpha)\bigr\}^{d},
\end{aligned} \tag{92}
\]

which is strictly positive. Here, we will remark that there exists a constant C_{41,T,η,f} > 0 with

\[
\frac{1}{2}\inf_{\zeta,t,g}\sum_{i=1}^m \bigl(\zeta\cdot A_i(t,g)\bigr)^2 - C_{40,T,\eta,f}\,(t-t_\alpha) \ge C_{41,T,\eta,f}, \tag{93}
\]

because the functions A_i (i = 1,...,m) satisfy the uniformly elliptic condition (24) and t_α is sufficiently close to t, which justifies the sixth inequality.

For f ∈ H, let {X^{ε,f}(t); t ∈ [−r,T]} be the R^d-valued process determined by the following equation:

\[
X^{\varepsilon,f}(t) = \eta(t) \quad (t\in[-r,0]),
\]
\[
dX^{\varepsilon,f}(t) = A_0\bigl(t,X^{\varepsilon,f}_t\bigr)\,dt + \varepsilon A\bigl(t,X^{\varepsilon,f}_t\bigr)\,dW(t) + A\bigl(t,X^{\varepsilon,f}_t\bigr)\,\dot f(t)\,dt \quad (t\in(0,T]). \tag{94}
\]

Let {Z̃^f(t); t ∈ [−r,T]} be the R^d-valued process determined by the following equation:

\[
\tilde Z^f(t) = 0 \quad (t\in[-r,0]),
\]
\[
d\tilde Z^f(t) = A\bigl(t,x^f_t\bigr)\,dW(t) + \nabla A_0\bigl(t,x^f_t\bigr)\,\tilde Z^f_t\,dt + \nabla A\bigl(t,x^f_t\bigr)\,\tilde Z^f_t\,\dot f(t)\,dt \quad (t\in(0,T]). \tag{95}
\]

Lemma 19. Let t ∈ (0,T]. It holds that

\[
\lim_{\varepsilon\searrow 0}\bigl\|Y^{\varepsilon,f}(t)-\tilde Z^f(t)\bigr\|_{k,p} = 0, \tag{96}
\]

for any p > 1 and k ∈ Z_+, where Y^{ε,f}(t) = (X^{ε,f}(t) − x^f(t))/ε.

Proof. We will prove the statement along the following procedure.

Step 1. For any p > 1,

\[
\lim_{\varepsilon\searrow 0}\mathbb{E}\Bigl[\sup_{t\in[-r,T]}\bigl|X^{\varepsilon,f}(t)-x^f(t)\bigr|^p\Bigr] = 0. \tag{97}
\]

In fact, since

\[
X^{\varepsilon,f}(t)-x^f(t)
= \int_0^t \bigl\{A_0\bigl(s,X^{\varepsilon,f}_s\bigr)-A_0\bigl(s,x^f_s\bigr)\bigr\}\,ds
+ \int_0^t \bigl\{A\bigl(s,X^{\varepsilon,f}_s\bigr)-A\bigl(s,x^f_s\bigr)\bigr\}\,\dot f(s)\,ds
+ \varepsilon\int_0^t A\bigl(s,X^{\varepsilon,f}_s\bigr)\,dW(s), \tag{98}
\]

for t ∈ [0,T], and the coefficients A_i (i = 0,1,...,m) satisfy the Lipschitz condition and the linear growth condition, we
can get the assertion of Step 1 by using the Hölder inequality, the Burkholder inequality, and the Gronwall inequality.

Step 2. For any p > 1,

\[
\lim_{\varepsilon\searrow 0}\mathbb{E}\Bigl[\sup_{t\in[-r,T]}\bigl|Y^{\varepsilon,f}(t)-\tilde Z^f(t)\bigr|^p\Bigr] = 0, \tag{99}
\]

which tells us that the assertion of Lemma 19 holds in the case of k = 0.

In fact, we will remark that

\[
\frac{A_i\bigl(s,X^{\varepsilon,f}_s\bigr)-A_i\bigl(s,x^f_s\bigr)}{\varepsilon} - \nabla A_i\bigl(s,x^f_s\bigr)\tilde Z^f_s
= \nabla A_i\bigl(s,x^f_s\bigr)\bigl(Y^{\varepsilon,f}_s-\tilde Z^f_s\bigr)
+ \frac{1}{2}\nabla^2 A_i\bigl(s,\sigma X^{\varepsilon,f}_s+(1-\sigma)x^f_s\bigr)\bigl[X^{\varepsilon,f}_s-x^f_s,\,Y^{\varepsilon,f}_s\bigr], \tag{100}
\]

from the Taylor theorem for i = 0,1,...,m, where 0 < σ < 1 is a constant. Here, for each t ∈ [0,T] and φ ∈ C([−r,0];R^d), ∇²A_i(t,φ)[·,·] is the bilinear mapping on C([−r,0];R^d) × C([−r,0];R^d). Since

\[
\begin{aligned}
Y^{\varepsilon,f}(t)-\tilde Z^f(t)
&= \int_0^t \Bigl[\frac{A_0\bigl(s,X^{\varepsilon,f}_s\bigr)-A_0\bigl(s,x^f_s\bigr)}{\varepsilon} - \nabla A_0\bigl(s,x^f_s\bigr)\tilde Z^f_s\Bigr]\,ds \\
&\quad + \int_0^t \Bigl[\frac{A\bigl(s,X^{\varepsilon,f}_s\bigr)-A\bigl(s,x^f_s\bigr)}{\varepsilon} - \nabla A\bigl(s,x^f_s\bigr)\tilde Z^f_s\Bigr]\,\dot f(s)\,ds \\
&\quad + \int_0^t \bigl\{A\bigl(s,X^{\varepsilon,f}_s\bigr)-A\bigl(s,x^f_s\bigr)\bigr\}\,dW(s),
\end{aligned} \tag{101}
\]

for t ∈ [0,T], and the coefficients A_i(t,·) (i = 0,1,...,m) are in C^∞_{1+,b}(C([−r,0];R^d);R^d) with respect to the second variable in C([−r,0];R^d) for each t ∈ [0,T], we can get the assertion in Step 2 via the Hölder inequality, the Burkholder inequality, and the Gronwall inequality.

Step 3. Let u ∈ [0,T]. Then, for any p > 1,

\[
\sup_{0<\varepsilon\le 1}\mathbb{E}\Bigl[\sup_{t\in[-r,T]}\bigl|D_u Y^{\varepsilon,f}(t)\bigr|^p\Bigr] < +\infty. \tag{102}
\]

Remark that

\[
\begin{aligned}
D_u Y^{\varepsilon,f}(t)
&= \int_0^{u\wedge t}\Bigl\{A\bigl(s,X^{\varepsilon,f}_s\bigr) + \frac{A\bigl(s,X^{\varepsilon,f}_s\bigr)-A\bigl(s,x^f_s\bigr)}{\varepsilon}\Bigr\}\,ds \\
&\quad + \int_0^t \frac{1}{\varepsilon}\bigl\{\nabla A_0\bigl(s,X^{\varepsilon,f}_s\bigr)D_u X^{\varepsilon,f}_s - \nabla A_0\bigl(s,x^f_s\bigr)D_u x^f_s\bigr\}\,ds \\
&\quad + \int_0^t \frac{1}{\varepsilon}\bigl\{\nabla A\bigl(s,X^{\varepsilon,f}_s\bigr)D_u X^{\varepsilon,f}_s - \nabla A\bigl(s,x^f_s\bigr)D_u x^f_s\bigr\}\,\dot f(s)\,ds \\
&\quad + \int_0^t \nabla A\bigl(s,X^{\varepsilon,f}_s\bigr)D_u X^{\varepsilon,f}_s\,dW(s),
\end{aligned} \tag{103}
\]

for t ∈ [u,T]. Since

\[
D_u x^f(t) = \int_0^{u\wedge t} A\bigl(s,x^f_s\bigr)\,ds + \int_0^t \nabla A_0\bigl(s,x^f_s\bigr)\,D_u x^f_s\,ds + \int_0^t \nabla A\bigl(s,x^f_s\bigr)\,D_u x^f_s\,\dot f(s)\,ds, \tag{104}
\]

as seen in Proposition 18, we have

\[
\sup_{t\in[-r,T]}\bigl|D_u x^f(t)\bigr| \le C_{42,T,\eta,f}. \tag{105}
\]

Moreover, similarly to Proposition 5, we have

\[
\mathbb{E}\Bigl[\sup_{t\in[-r,T]}\bigl|D_u X^{\varepsilon,f}(t)\bigr|^p\Bigr] \le C_{43,p,T,\eta,f}, \tag{106}
\]

for any p > 1. Then, the assertion in Step 3 can be justified by using the Hölder inequality, the Burkholder inequality, and the Gronwall inequality.

Step 4. Let u ∈ [0,T]. Then, for any p > 1,

\[
\lim_{\varepsilon\searrow 0}\mathbb{E}\Bigl[\sup_{t\in[-r,T]}\bigl|D_u Y^{\varepsilon,f}(t)-D_u\tilde Z^f(t)\bigr|^p\Bigr] = 0. \tag{107}
\]
In fact, since

\[
\begin{aligned}
D_u Y^{\varepsilon,f}(t)
&= \int_0^{u}\Bigl\{A\bigl(s,X^{\varepsilon,f}_s\bigr) + \frac{A\bigl(s,X^{\varepsilon,f}_s\bigr)-A\bigl(s,x^f_s\bigr)}{\varepsilon}\Bigr\}\,\mathbb{I}_{(s\le t)}\,ds \\
&\quad + \int_0^t \frac{1}{\varepsilon}\bigl\{\nabla A_0\bigl(s,X^{\varepsilon,f}_s\bigr)D_u X^{\varepsilon,f}_s - \nabla A_0\bigl(s,x^f_s\bigr)D_u x^f_s\bigr\}\,ds \\
&\quad + \int_0^t \frac{1}{\varepsilon}\bigl\{\nabla A\bigl(s,X^{\varepsilon,f}_s\bigr)D_u X^{\varepsilon,f}_s - \nabla A\bigl(s,x^f_s\bigr)D_u x^f_s\bigr\}\,\dot f(s)\,ds \\
&\quad + \int_0^t \nabla A\bigl(s,X^{\varepsilon,f}_s\bigr)D_u X^{\varepsilon,f}_s\,dW(s),
\end{aligned}
\]
\[
\begin{aligned}
D_u \tilde Z^f(t)
&= \int_0^{u}\bigl\{A\bigl(s,x^f_s\bigr) + \nabla A\bigl(s,x^f_s\bigr)\tilde Z^f_s\bigr\}\,\mathbb{I}_{(s\le t)}\,ds
+ \int_0^t \nabla A_0\bigl(s,x^f_s\bigr)\,D_u\tilde Z^f_s\,ds
+ \int_0^t \nabla^2 A_0\bigl(s,x^f_s\bigr)\bigl[D_u x^f_s,\tilde Z^f_s\bigr]\,ds \\
&\quad + \int_0^t \nabla A\bigl(s,x^f_s\bigr)\,D_u\tilde Z^f_s\,\dot f(s)\,ds
+ \int_0^t \nabla^2 A\bigl(s,x^f_s\bigr)\bigl[D_u x^f_s,\tilde Z^f_s\bigr]\,\dot f(s)\,ds
+ \int_0^t \nabla A\bigl(s,x^f_s\bigr)\,D_u x^f_s\,dW(s),
\end{aligned} \tag{108}
\]

for t ∈ [u,T], and the coefficients A_i(t,·) (i = 0,1,...,m) are in C^∞_{1+,b}(C([−r,0];R^d);R^d) with respect to the second variable in C([−r,0];R^d) for each t ∈ [0,T], the assertion can be obtained via the Hölder inequality, the Burkholder inequality, and the Gronwall inequality. Here ∇²A(s,x_s^f) = (∇²A_1(s,x_s^f),...,∇²A_m(s,x_s^f)).

Step 5. Let k ∈ N be arbitrary, and u_1,...,u_k ∈ [0,T]. Then, for any p > 1,

\[
\lim_{\varepsilon\searrow 0}\mathbb{E}\Bigl[\sup_{t\in[-r,T]}\bigl|D^{k}_{u_1,\ldots,u_k}Y^{\varepsilon,f}(t)-D^{k}_{u_1,\ldots,u_k}\tilde Z^f(t)\bigr|^p\Bigr] = 0. \tag{109}
\]

We have already proved the case of k = 1 in Step 4. Remark that

\[
D^{k}_{u_1,\ldots,u_k}\Bigl(\int_0^t \varphi(s)\,dW(s)\Bigr)
= \int_0^t D^{k}_{u_1,\ldots,u_k}\bigl(\varphi(s)\bigr)\,dW(s)
+ \sum_{j=1}^{k} D^{k-1}_{u_1,\ldots,u_{j-1},u_{j+1},\ldots,u_k}\Bigl(\int_0^{u_j\wedge t}\varphi(s)\,ds\Bigr),
\qquad
D^{k}_{u_1,\ldots,u_k}\Bigl(\int_0^t \psi(s)\,ds\Bigr)
= \int_0^t D^{k}_{u_1,\ldots,u_k}\bigl(\psi(s)\bigr)\,ds, \tag{110}
\]

for adapted processes φ and ψ with nice properties. Then, we can get the assertion by induction on k ∈ N.

The assertion of Lemma 19 is now a direct consequence of Step 2 and Step 5. The proof of Lemma 19 is complete.

Theorem 20 (Lower estimate). Suppose that the R^d-valued functions A_i (i = 1,...,m) satisfy the uniformly elliptic condition (24). Then, it holds that

\[
\liminf_{\varepsilon\searrow 0}\varepsilon^{2}\ln p^{\varepsilon}(t,y) \ge -I(y), \tag{111}
\]

where the function I is given in Corollary 11.

Proof. Since the assertion of Theorem 20 is trivial in the case of I(y) = +∞, we will suppose that I(y) < +∞. Let Φ ∈ C_0^∞(R^d;R) be nonnegative. For sufficiently small 0 < σ < 1, recall the function Λ_σ as introduced in the proof of Theorem 16: Λ_σ ∈ C_0^∞(R^d;[0,1]) such that

\[
\Lambda_\sigma(z) = \begin{cases} 1 & (|z-y|\le\sigma), \\ 0 & (|z-y|>2\sigma). \end{cases} \tag{112}
\]

Then, the Girsanov theorem tells us that

\[
\begin{aligned}
\mathbb{E}\bigl[\Phi\bigl(X^{\varepsilon}(t)\bigr)\bigr]
&= \mathbb{E}\Bigl[\Phi\bigl(X^{\varepsilon,f}(t)\bigr)\exp\Bigl(-\int_0^t\sum_{i=1}^m \frac{\dot f^i(s)}{\varepsilon}\,dW^i(s) - \frac{\|f\|^2_H}{2\varepsilon^2}\Bigr)\Bigr] \\
&= \exp\Bigl(-\frac{\|f\|^2_H+4\sigma}{2\varepsilon^2}\Bigr)\,
\mathbb{E}\Bigl[\Phi\bigl(X^{\varepsilon,f}(t)\bigr)\exp\Bigl(-\int_0^t\sum_{i=1}^m \frac{\dot f^i(s)}{\varepsilon}\,dW^i(s) + \frac{2\sigma}{\varepsilon^2}\Bigr)\Bigr] \\
&\ge \exp\Bigl(-\frac{\|f\|^2_H+4\sigma}{2\varepsilon^2}\Bigr)\,
\mathbb{E}\Bigl[\Phi\bigl(X^{\varepsilon,f}(t)\bigr)\,\mathbb{I}\Bigl(\varepsilon\int_0^t\sum_{i=1}^m \dot f^i(s)\,dW^i(s)\le 2\sigma\Bigr)\Bigr] \\
&\ge \exp\Bigl(-\frac{\|f\|^2_H+4\sigma}{2\varepsilon^2}\Bigr)\,
\mathbb{E}\Bigl[\Phi\bigl(X^{\varepsilon,f}(t)\bigr)\,\Lambda_\sigma\Bigl(\varepsilon\int_0^t\sum_{i=1}^m \dot f^i(s)\,dW^i(s)\Bigr)\Bigr].
\end{aligned} \tag{113}
\]
Here, the third inequality comes from the nonnegativity of the exponent

\[
-\frac{1}{\varepsilon}\int_0^t \sum_{i=1}^m \dot f^i(s)\,dW^i(s) + \frac{2\sigma}{\varepsilon^2} \ge 0, \tag{114}
\]

on the indicated event, while the fourth inequality holds because 0 ≤ Λ_σ ≤ 1 and Λ_σ = 0 on the complement of [−2σ, 2σ]. Thus, the limiting argument enables us to see that

\[
\begin{aligned}
p^{\varepsilon}(t,y)
&\ge \exp\Bigl(-\frac{\|f\|^2_H+4\sigma}{2\varepsilon^2}\Bigr)\,
\mathbb{E}\Bigl[\delta_y\bigl(X^{\varepsilon,f}(t)\bigr)\,\Lambda_\sigma\Bigl(\varepsilon\int_0^t\sum_{i=1}^m \dot f^i(s)\,dW^i(s)\Bigr)\Bigr] \\
&= \varepsilon^{-d}\exp\Bigl(-\frac{\|f\|^2_H+4\sigma}{2\varepsilon^2}\Bigr)\,
\mathbb{E}\Bigl[\delta_0\bigl(Y^{\varepsilon,f}(t)\bigr)\,\Lambda_\sigma\Bigl(\varepsilon\int_0^t\sum_{i=1}^m \dot f^i(s)\,dW^i(s)\Bigr)\Bigr],
\end{aligned} \tag{115}
\]

where δ_y is the Dirac delta function. Since

\[
\lim_{\varepsilon\searrow 0}\mathbb{E}\Bigl[\delta_0\bigl(Y^{\varepsilon,f}(t)\bigr)\,\Lambda_\sigma\Bigl(\varepsilon\int_0^t\sum_{i=1}^m \dot f^i(s)\,dW^i(s)\Bigr)\Bigr] \le 1, \tag{116}
\]

from Lemma 19, we have

\[
\lim_{\varepsilon\searrow 0}\varepsilon^{2}\ln\Bigl(\varepsilon^{-d}\,\mathbb{E}\Bigl[\delta_0\bigl(Y^{\varepsilon,f}(t)\bigr)\,\Lambda_\sigma\Bigl(\varepsilon\int_0^t\sum_{i=1}^m \dot f^i(s)\,dW^i(s)\Bigr)\Bigr]\Bigr) = 0. \tag{117}
\]

Moreover, from the definition of the function I(y), we can find f ∈ H with y = x^f(t) such that

\[
\frac{\|f\|^2_H}{2} \le I(y) + \sigma. \tag{118}
\]

Hence, it holds that

\[
\liminf_{\varepsilon\searrow 0}\varepsilon^{2}\ln p^{\varepsilon}(t,y) \ge -\Bigl(\frac{\|f\|^2_H}{2}+2\sigma\Bigr) \ge -I(y)-3\sigma. \tag{119}
\]

Taking the limit as σ ↘ 0 completes the proof.

Corollary 21. Suppose that the R^d-valued functions A_i (i = 1,...,m) satisfy the uniformly elliptic condition (24). Then, it holds that

\[
p^{\varepsilon}(t,y) \sim \exp\Bigl[-\frac{I(y)}{\varepsilon^2}\Bigr], \tag{120}
\]

as ε ↘ 0, where the function I is given in Corollary 11.

Proof. Direct consequence of Theorems 16 and 20.

Finally, we will study the asymptotic behavior of the density p(t,y) for X(t) in a short time. Let 0 < r_0 ≤ r be a constant, and x ∈ R^d. We will consider the case

\[
A_0(t,f) \equiv 0, \qquad A_i(t,f) = \tilde A_i\bigl(\tilde f, f(0)\bigr) \quad (i=1,\ldots,m), \qquad \eta(t) = x \quad (t\in[-r,0]), \tag{121}
\]

where, for f ∈ C([−r,0];R^d), f̃ ∈ C([−r,0];R^d) is such that f̃(t) = f(t) (t ∈ [−r,−r_0]) and f̃(t) = f(−r_0) (t ∈ [−r_0,0]). Suppose that the functions Ã_i (i = 1,...,m) satisfy the uniformly elliptic condition of the form

\[
\inf_{\zeta\in\mathbb{S}^{d-1}}\ \inf_{f\in C([-r,-r_0];\mathbb{R}^d)}\ \inf_{y\in\mathbb{R}^d}\ \sum_{i=1}^m \bigl(\zeta\cdot\tilde A_i(f,y)\bigr)^2 > 0. \tag{122}
\]

For 0 < ε ≤ 1, let X = {X(t); t ∈ [−r,T]} and X^ε = {X^ε(t); t ∈ [−r,T]} be the R^d-valued processes determined by the equations of the form

\[
X(t) = x \quad (t\in[-r,0]), \qquad dX(t) = \tilde A\bigl(\tilde X_t, X(t)\bigr)\,dW(t) \quad (t\in(0,T]),
\]
\[
X^{\varepsilon}(t) = x \quad (t\in[-r,0]), \qquad dX^{\varepsilon}(t) = \varepsilon\tilde A\bigl(\tilde X^{\varepsilon}_t, X^{\varepsilon}(t)\bigr)\,dW(t) \quad (t\in(0,T]), \tag{123}
\]

where Ã = (Ã_1,...,Ã_m). Remark that X = X^ε|_{ε=1}. Denote by p(t,y) (resp. p^ε(t,y)) the density for the probability law of X(t) (resp. X^ε(t)), whose existence can be justified under the uniformly elliptic condition (122) on the coefficients Ã_i (i = 1,...,m). Then, we have the following.

Corollary 22. Suppose that the functions Ã_i (i = 1,...,m) satisfy the uniformly elliptic condition (122). Then, it holds that

\[
p(t,y) \sim \exp\Bigl[-\frac{r_0\,I(y)}{t}\Bigr] \quad (t\searrow 0). \tag{124}
\]

Proof. Recall that

\[
X\bigl(\varepsilon^2 r_0\bigr) = x + \int_0^{\varepsilon^2 r_0}\tilde A\bigl(\tilde X_s, X(s)\bigr)\,dW(s)
= x + \varepsilon\int_0^{r_0}\tilde A\bigl(\tilde X_{\varepsilon^2 s}, X(\varepsilon^2 s)\bigr)\,dW(s)
= x + \varepsilon\int_0^{r_0}\tilde A\bigl(x\,id, X(\varepsilon^2 s)\bigr)\,dW(s), \tag{125}
\]

where id ∈ C([−r,−r_0];R^d) is such that id(t) = 1 (t ∈ [−r,−r_0]). Here, the second equality holds from the scaling property of the Brownian motion W, while the third equality follows from ε²s − r_0 ≤ 0. On the other hand, recall that

\[
X^{\varepsilon}(r_0) = x + \varepsilon\int_0^{r_0}\tilde A\bigl(\tilde X^{\varepsilon}_s, X^{\varepsilon}(s)\bigr)\,dW(s)
= x + \varepsilon\int_0^{r_0}\tilde A\bigl(x\,id, X^{\varepsilon}(s)\bigr)\,dW(s), \tag{126}
\]
because of s − r_0 ≤ 0. From the uniqueness of the solutions, we have X(ε²r_0) = X^ε(r_0) in the sense of the probability law. Hence, we can get

\[
p\bigl(\varepsilon^2 r_0, y\bigr) = p^{\varepsilon}(r_0, y). \tag{127}
\]

As for the density p^ε(r_0,y), we have already obtained the asymptotic behavior of the form

\[
p^{\varepsilon}(r_0,y) \sim \exp\Bigl[-\frac{I(y)}{\varepsilon^2}\Bigr], \tag{128}
\]

as ε ↘ 0, in Corollary 21. Taking t = ε²r_0 completes the proof.
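As a quick numerical illustration of the small-noise decay in Corollary 21, one can run a crude Monte Carlo experiment with an Euler–Maruyama scheme; the scalar equation, the coefficient sigma, and all numerical parameters below are illustrative assumptions for this sketch, not taken from the paper.

```python
import numpy as np

def simulate_sdde(eps, n_paths=20000, T=1.0, r=0.5, dt=0.01, x0=0.0, seed=0):
    """Euler-Maruyama scheme for the toy scalar delay equation
        dX(t) = eps * sigma(X(t - r)) dW(t),   X(t) = x0 for t in [-r, 0],
    with sigma(v) = 1 + 0.5*cos(v), which is uniformly elliptic (sigma >= 0.5).
    Returns the terminal values X(T) of n_paths independent paths."""
    rng = np.random.default_rng(seed)
    n_steps = int(round(T / dt))
    lag = int(round(r / dt))
    X = np.full((n_paths, n_steps + 1), x0, dtype=float)
    for k in range(n_steps):
        # delayed coordinate X(k*dt - r); the initial segment is constant x0
        delayed = X[:, k - lag] if k >= lag else np.full(n_paths, x0)
        sigma = 1.0 + 0.5 * np.cos(delayed)
        dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)
        X[:, k + 1] = X[:, k] + eps * sigma * dW
    return X[:, -1]

# The mass that X^eps(T) puts near a point y != x0 shrinks rapidly as eps
# decreases, in line with the exp(-I(y)/eps^2) asymptotics of Corollary 21.
y, delta = 1.0, 0.25
p_large = np.mean(np.abs(simulate_sdde(eps=1.0) - y) < delta)
p_small = np.mean(np.abs(simulate_sdde(eps=0.25) - y) < delta)
```

One observes p_small far below p_large; a log-plot of −ε² ln p against ε would flatten toward the rate I(y) as ε ↘ 0, although a kernel density estimate and many more paths would be needed for a quantitative check.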
Remark 23. In particular, consider the case of

\[
A_0(s,f) \equiv 0, \qquad A_i(s,f) = \mathcal{A}_i\bigl(f(0)\bigr) \quad (i=1,\ldots,m), \qquad \eta(t) = x \quad (t\in[-r,0]), \tag{129}
\]

where the functions \mathcal{A}_i ∈ C^∞_{1+,b}(R^d;R^d) (i = 1,...,m) satisfy the uniformly elliptic condition of the form

\[
\inf_{\zeta\in\mathbb{S}^{d-1}}\ \inf_{y\in\mathbb{R}^d}\ \sum_{i=1}^m \bigl(\zeta\cdot\mathcal{A}_i(y)\bigr)^2 > 0. \tag{130}
\]

Then, our equation can be written as follows:

\[
X(t) = x \quad (t\in[-r,0]), \qquad dX(t) = \mathcal{A}\bigl(X(t)\bigr)\,dW(t) \quad (t\in(0,T]), \tag{131}
\]

where \mathcal{A} = (\mathcal{A}_1,...,\mathcal{A}_m). Although our setting includes the effect of the time-delay parameter r, the effect of the parameter r in (131) can be ignored. Hence, the solution {X(t); t ∈ [−r,T]} is a diffusion process, so we have only to choose r = 1 as the starting point of our study. Moreover, the choice r = 1 tells us that Corollary 22 recovers the well-known fact, that is, the Varadhan-type estimate, on the asymptotic behavior of the density function for diffusion processes. Hence, Corollary 22 can also be regarded as a generalization of the short-time estimate of the density for diffusion processes.
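For comparison, the classical Varadhan estimate for the diffusion in (131) (a standard fact, stated here only in sketch form) reads

```latex
\lim_{t \searrow 0} t \,\ln p(t, x, y) = -\frac{d(x,y)^2}{2},
```

where d(x,y) is the control (Riemannian) distance from x to y induced by the diffusion coefficient, namely the infimum of ‖f‖_H over controls f steering the associated skeleton equation from x to y. With r_0 = r = 1 and the identification d(x,y)²/2 = I(y), this is exactly the exponent appearing in Corollary 22.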
Remark 24. Ferrante et al. [10] discussed the large deviation principle for the solution process X^ε and the asymptotic estimate of the density, in the case of

\[
A_i(s,f) = \tilde A_i\bigl(s, f(s-r)\bigr) \quad (i=1,\ldots,m), \tag{132}
\]

where \tilde A_i : [0,T] \times \mathbb{R}^d \to \mathbb{R}^d with \tilde A_i(t,\cdot) \in C_b^{\infty}(\mathbb{R}^d;\mathbb{R}^d) for each t ∈ [0,T]. Moreover, suppose that the functions Ã_i (i = 1,...,m) satisfy the uniformly elliptic condition of the form

\[
\inf_{\zeta\in\mathbb{S}^{d-1}}\ \inf_{t\in[0,T]}\ \inf_{y\in\mathbb{R}^d}\ \sum_{i=1}^m \bigl(\zeta\cdot\tilde A_i(t,y)\bigr)^2 > 0. \tag{133}
\]

On the other hand, Mohammed and Zhang [11] studied the large deviation principle for the solution process X^ε in the case of

\[
A_i(t,f) = \tilde A_i\bigl(t, f(t-r), f(t)\bigr) \quad (i=1,\ldots,m), \tag{134}
\]

where \tilde A_i : [0,T] \times \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}^d with \tilde A_i(t,\cdot,\cdot) \in C^{\infty}_{1+,b}(\mathbb{R}^d\times\mathbb{R}^d;\mathbb{R}^d).

Since the special forms of the coefficients of the diffusion terms are quite essential in their arguments [10, 11], our situation cannot be included in their frameworks at all.
Acknowledgments
The authors are grateful to an anonymous referee for valuable
comments and suggestions. This work is partially supported
by the Ministry of Education, Culture, Sports, Science and
Technology, Grant-in-Aid for Encouragement of Young
Scientists, 23740083. This work is also partially supported
by the Research Council of Norway, 219005/F11. This work
was largely carried out while the second author stayed at the
Center of Mathematics for Applications, University of Oslo,
from August to September, 2012, on his leave from Osaka City
University.
References
[1] K. Itô and M. Nisio, “On stationary solutions of a stochastic differential equation,” Journal of Mathematics of Kyoto University,
vol. 4, pp. 1–75, 1964.
[2] S. E. A. Mohammed, Stochastic Functional Differential Equations, Pitman, Boston, Mass, USA, 1984.
[3] J.-M. Bismut, Large Deviations and the Malliavin Calculus,
Birkhäuser, 1984.
[4] S. Kusuoka and D. Stroock, “Applications of the Malliavin
calculus. I,” in Stochastic Analysis (Katata/Kyoto, 1982), pp. 271–
306, 1984.
[5] R. Léandre, “Majoration en temps petit de la densité d’une diffusion dégénérée,” Probability Theory and Related Fields, vol. 74,
no. 2, pp. 289–294, 1987.
[6] R. Léandre, “Minoration en temps petit de la densité d’une
diffusion dégénérée,” Journal of Functional Analysis, vol. 74, no.
2, pp. 399–414, 1987.
[7] R. Léandre, “Intégration dans la fibre associée à une diffusion
dégénérée,” Probability Theory and Related Fields, vol. 76, no. 3,
pp. 341–358, 1987.
[8] R. Léandre, “A simple proof for a large deviation theorem,” in
Barcelona Seminar on Stochastic Analysis (St. Feliu de Guíxols,
1991), pp. 72–76, 1993.
[9] D. Nualart, “Analysis on Wiener space and anticipating stochastic calculus,” in École d’été de Probabilités de Saint-Flour 1995,
vol. 1690 of Lecture Notes in Mathematics, pp. 123–227, Springer,
Berlin, Germany, 1998.
[10] M. Ferrante, C. Rovira, and M. Sanz-Solé, “Stochastic delay
equations with hereditary drift: estimates of the density,” Journal
of Functional Analysis, vol. 177, no. 1, pp. 138–177, 2000.
[11] S. E. A. Mohammed and T. Zhang, “Large deviations for
stochastic systems with memory,” Discrete and Continuous
Dynamical Systems, vol. 6, no. 4, pp. 881–893, 2006.
[12] S. E. A. Mohammed, “Stochastic differential systems with memory: theory, examples and applications,” in Stochastic Analysis
and Related Topics VI (Geilo, 1996), vol. 42 of Progress in
Probability, pp. 1–77, Birkhäuser, Boston, Mass, USA, 1998.
[13] G. Di Nunno, B. Øksendal, and F. Proske, Malliavin Calculus for
Lévy Processes with Applications to Finance, Springer, 2009.
[14] D. Nualart, The Malliavin Calculus and Related Topics, Springer,
2nd edition, 2006.
[15] T. Komatsu and A. Takeuchi, “Simplified probabilistic approach
to the Hörmander theorem,” Osaka Journal of Mathematics, vol.
38, no. 3, pp. 681–691, 2001.
[16] A. Takeuchi, “Malliavin calculus for degenerate stochastic functional differential equations,” Acta Applicandae Mathematicae,
vol. 97, no. 1–3, pp. 281–295, 2007.
[17] A. Dembo and O. Zeitouni, Large Deviations Techniques and
Applications, Springer, 2nd edition, 1998.
[18] R. Azencott, “Grandes déviations et applications,” in École d’été
de Probabilités de Saint-Flour VIII (1978), vol. 774 of Lecture
Notes in Mathematics, pp. 1–176, Springer, Berlin, Germany,
1980.