Hindawi Publishing Corporation
International Journal of Stochastic Analysis
Volume 2014, Article ID 520136, 49 pages
http://dx.doi.org/10.1155/2014/520136
Research Article
From Pseudorandom Walk to Pseudo-Brownian Motion:
First Exit Time from a One-Sided or a Two-Sided Interval
Aimé Lachal¹,²
¹ Université de Lyon, Institut Camille Jordan, CNRS UMR5208, France
² Institut National des Sciences Appliquées de Lyon, Pôle de Mathématiques, Bâtiment Léonard de Vinci, 20 Avenue Albert Einstein, 69621 Villeurbanne Cedex, France
Correspondence should be addressed to Aimé Lachal; aime.lachal@insa-lyon.fr
Received 20 May 2013; Accepted 14 August 2013; Published 26 March 2014
Academic Editor: M. Lopez-Herrero
Copyright © 2014 Aimé Lachal. This is an open access article distributed under the Creative Commons Attribution License, which
permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Let N be a positive integer, c a positive constant, and (ξ_n)_{n≥1} be a sequence of independent identically distributed pseudorandom variables. We assume that the ξ_n's take their values in the discrete set {−N, −N+1, ..., N−1, N} and that their common pseudodistribution is characterized by the (positive or negative) real numbers $\mathbb{P}\{\xi_n = k\} = \delta_{k0} + (-1)^{k-1} c \binom{2N}{k+N}$ for any k ∈ {−N, −N+1, ..., N−1, N}. Let us finally introduce (S_n)_{n≥0}, the associated pseudorandom walk defined on Z by S_0 = 0 and $S_n = \sum_{j=1}^{n} \xi_j$ for n ≥ 1. In this paper, we exhibit some properties of (S_n)_{n≥0}. In particular, we explicitly determine the pseudodistribution of the first overshooting time of a given threshold for (S_n)_{n≥0} as well as that of the first exit time from a bounded interval. Next, with an appropriate normalization, we pass from the pseudorandom walk to the pseudo-Brownian motion driven by the high-order heat-type equation $\partial/\partial t = (-1)^{N-1} c\, \partial^{2N}/\partial x^{2N}$. We retrieve the corresponding pseudodistribution of the first overshooting time of a threshold for the pseudo-Brownian motion (Lachal, 2007). In the same way, we get the pseudodistribution of the first exit time from a bounded interval for the pseudo-Brownian motion, which is a new result for this pseudoprocess.
1. Introduction
Throughout the paper, we denote by Z the set of integers,
by N that of nonnegative integers, and by N∗ that of positive
integers: Z = {. . . , −1, 0, 1, . . .}, N = {0, 1, 2, . . .}, N∗ = {1,
2, . . .}. More generally, for any set of numbers 𝐸, we set 𝐸∗ =
𝐸 \ {0}.
Let 𝑁 be a positive integer, 𝑐 a positive constant, and set
𝜅𝑁 = (−1)𝑁−1 . Let (𝜉𝑛 )𝑛∈N∗ be a sequence of independent
identically distributed pseudorandom variables taking their
values in the set of integers {−𝑁, −𝑁 + 1, . . . , −1, 0, 1, . . . , 𝑁 −
1, 𝑁}. By pseudorandom variable, we mean a measurable
function defined on a space endowed with a signed measure
with a total mass equaling the unity. We assume that the
common pseudodistribution of the 𝜉𝑛 ’s is characterized by the
(positive or negative) real pseudo-probabilities 𝑝𝑘 = P{𝜉𝑛 = 𝑘}
for any k ∈ {−N, −N+1, ..., N−1, N}. The parameters p_k sum to unity: $\sum_{k=-N}^{N} p_k = 1$.
Now, let us introduce (S_n)_{n∈N}, the associated pseudorandom walk defined on Z by S_0 = 0 and $S_n = \sum_{j=1}^{n} \xi_j$ for n ∈ N*. The infinitesimal generator associated with (S_n)_{n∈N} is defined, for any function f defined on Z, as
$$\mathcal{G}_S f(j) = \mathbb{E}[f(\xi_1 + j)] - f(j) = \sum_{k=-N}^{N} p_k f(j+k) - f(j), \qquad j \in \mathbb{Z}. \tag{1}$$
Here we consider the pseudorandom walk which admits the discrete N-iterated Laplacian as infinitesimal generator. More precisely, by introducing the so-called discrete Laplacian Δ defined, for any function f defined on Z, by
$$\Delta f(j) = f(j+1) - 2f(j) + f(j-1), \qquad j \in \mathbb{Z}, \tag{2}$$
the discrete N-iterated Laplacian is the operator $\Delta^N = \underbrace{\Delta \circ \cdots \circ \Delta}_{N \text{ times}}$ given by
$$\Delta^N f(j) = \sum_{k=-N}^{N} (-1)^{k+N} \binom{2N}{k+N} f(j+k), \qquad j \in \mathbb{Z}. \tag{3}$$
We then choose the p_k's such that $\mathcal{G}_S = \kappa_N c\, \Delta^N$, which yields, by identification, for any k ∈ {−N, −N+1, ..., −1, 1, ..., N−1, N},
$$p_k = p_{-k} = (-1)^{k-1} c \binom{2N}{k+N}, \qquad p_0 = 1 - c \binom{2N}{N}. \tag{4}$$
When 𝑁 = 1, (𝑆𝑛 )𝑛∈N is the nearest neighbours pseudorandom walk with a possible stay at its current location; it
is characterized by the numbers 𝑝1 = 𝑝−1 = 𝑐 and 𝑝0 =
1 − 2𝑐. Moreover, if 0 < 𝑐 < 1/2, then 𝑝0 > 0; in this
case, we are dealing with an ordinary symmetric random walk
(with positive probabilities). If 𝑐 = 1/2, this is the classical
symmetric random walk: 𝑝1 = 𝑝−1 = 1/2 and 𝑝0 = 0.
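As a quick numerical illustration of (4) (not part of the original article), the following Python sketch computes the weights p_k for a given N and c, checks that they sum to one, and verifies that the resulting generator acts as κ_N c Δ^N on a test function; the helper names are ours.

```python
from math import comb

def weights(N, c):
    # p_k = delta_{k0} + (-1)**(k-1) * c * binom(2N, k+N), k in {-N, ..., N}, cf. (4)
    return {k: (1 if k == 0 else 0) + (-1) ** (k - 1) * c * comb(2 * N, k + N)
            for k in range(-N, N + 1)}

def generator(f, j, p):
    # G_S f(j) = sum_k p_k f(j+k) - f(j), cf. (1)
    return sum(pk * f(j + k) for k, pk in p.items()) - f(j)

def iterated_laplacian(f, j, N):
    # Delta^N f(j) = sum_k (-1)**(k+N) * binom(2N, k+N) * f(j+k), cf. (3)
    return sum((-1) ** (k + N) * comb(2 * N, k + N) * f(j + k)
               for k in range(-N, N + 1))

N, c = 3, 1 / 64
p = weights(N, c)
assert abs(sum(p.values()) - 1) < 1e-12
f = lambda j: j ** 5 - 2 * j ** 2 + 1          # arbitrary test function on Z
kappa = (-1) ** (N - 1)
for j in range(-3, 4):
    assert abs(generator(f, j, p) - kappa * c * iterated_laplacian(f, j, N)) < 1e-8
```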
Actually, with the additional assumption that p_{−k} = p_k for any k ∈ {−N, −N+1, ..., N−1, N} (i.e., the ξ_n's are symmetric, or the pseudorandom walk has no drift), the p_k's are the unique numbers such that
$$\mathcal{G}_S f(j) = \kappa_N c\, \tilde{f}^{(2N)}(j) + \text{terms with higher order derivatives}, \tag{5}$$
where f̃ is an analytical extension of f and f̃^{(2N)} stands for the (2N)th derivative of f̃: $\tilde{f}^{(2N)}(x) = \mathrm{d}^{2N}\tilde{f}/\mathrm{d}x^{2N}(x)$.
Our motivation for studying the pseudorandom walk
associated with the parameters defined by (4) is that it is
the discrete counterpart of the pseudo-Brownian motion as
the classical random walk is for Brownian motion. Let us
recall that pseudo-Brownian motion is the pseudo-Markov
process (𝑋𝑡 )𝑡≥0 with independent and stationary increments,
associated with the signed heat-type kernel 𝑝(𝑡; 𝑥) which is
the elementary solution of the high-order heat-type equation
𝜕/𝜕𝑡 = 𝜅𝑁𝑐𝜕2𝑁/𝜕𝑥2𝑁. The kernel 𝑝(𝑡; 𝑥) is characterized by
its Fourier transform:
$$\mathbb{E}(e^{iuX_t}) = \int_{-\infty}^{+\infty} e^{iux}\, p(t;x)\,\mathrm{d}x = e^{-ctu^{2N}}. \tag{6}$$
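For readers who want to see the signed kernel concretely, here is a small Python sketch (ours, not the paper's) that evaluates p(t; x) by numerically inverting the Fourier transform (6); for N ≥ 2 the printed values typically change sign away from the origin, which is precisely why (X_t)_{t≥0} is driven by a signed measure.

```python
from math import cos, exp, pi

def heat_kernel(t, x, N, c, U=8.0, steps=40000):
    # p(t; x) = (1/(2*pi)) * int cos(u*x) * exp(-c*t*u**(2N)) du, the inverse of (6),
    # approximated by a plain midpoint Riemann sum on [-U, U] (the integrand is
    # negligible beyond U for these parameters).
    du = 2 * U / steps
    return sum(cos((-U + (i + 0.5) * du) * x) * exp(-c * t * (-U + (i + 0.5) * du) ** (2 * N))
               for i in range(steps)) * du / (2 * pi)

# For N = 1 this is the Gaussian heat kernel; for N >= 2 the kernel oscillates.
print([round(heat_kernel(1.0, x, N=2, c=1.0), 4) for x in (0.0, 1.0, 2.0, 3.0)])
```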
The corresponding infinitesimal generator is given, for any C^{2N}-function f, by
$$\mathcal{G}_X f(x) = \lim_{h\to 0^+} \frac{1}{h}\bigl[\mathbb{E}[f(X_h + x)] - f(x)\bigr] = \kappa_N c\, f^{(2N)}(x). \tag{7}$$
The reader can find extensive literature on pseudo-Brownian motion. For instance, let us quote the works of Beghin et al. [1–20] and the references therein.
We observe that (5) and (7) are closely related to the continuous N-iterated Laplacian d^{2N}/dx^{2N}. For N = 2, the operator Δ² is the two-Laplacian related to the famous biharmonic functions: in the discrete case,
$$\Delta^2 f(j) = f(j+2) - 4f(j+1) + 6f(j) - 4f(j-1) + f(j-2), \qquad j \in \mathbb{Z}, \tag{8}$$
and in the continuous case,
$$\Delta^2 f(x) = \frac{\mathrm{d}^4 f}{\mathrm{d}x^4}(x), \qquad x \in \mathbb{R}. \tag{9}$$
In the discrete case, it has been considered by Sato [21] and Vanderbei [22].
The link between the pseudorandom walk and pseudo-Brownian motion is the following one: when normalizing the pseudorandom walk (S_n)_{n∈N} on a grid with small spatial step ε and temporal step ε^{2N} (i.e., we construct the pseudoprocess (εS_{⌊t/ε^{2N}⌋})_{t≥0}, where ⌊·⌋ denotes the usual floor function), the limiting pseudoprocess as ε → 0⁺ is exactly the pseudo-Brownian motion.
Now, we consider the first overshooting time of a fixed single threshold a < 0 or b > 0 (a, b being integers) for (S_n)_{n∈N}:
$$\sigma_a^- = \min\{n \in \mathbb{N}^* : S_n \le a\}, \qquad \sigma_b^+ = \min\{n \in \mathbb{N}^* : S_n \ge b\} \tag{10}$$
as well as the first exit time from a bounded interval (a, b):
$$\sigma_{ab} = \min\{n \in \mathbb{N}^* : S_n \le a \text{ or } S_n \ge b\} = \min\{n \in \mathbb{N}^* : S_n \notin (a, b)\} \tag{11}$$
with the usual convention that min ∅ = +∞. Hence, when σ_b^+ < +∞, S_{σ_b^+ − 1} ≤ b − 1 and S_{σ_b^+} ≥ b, so the overshoot at time σ_b^+, which is S_{σ_b^+} − b, can take the values 0, 1, 2, ..., N − 1; that is, S_{σ_b^+} ∈ {b, b+1, b+2, ..., b+N−1}. Similarly, when σ_a^- < +∞, S_{σ_a^-} ∈ {a−N+1, a−N+2, ..., a}, and when σ_{ab} < +∞, S_{σ_{ab}} ∈ {a, a−1, ..., a−N+1} ∪ {b, b+1, ..., b+N−1}. We put S_b^+ = S_{σ_b^+}, S_a^- = S_{σ_a^-}, and S_{ab} = S_{σ_{ab}}.
In the same way, we introduce the first overshooting times of the thresholds a < 0 and b > 0 (a, b being now real numbers) for (X_t)_{t≥0}:
$$\tau_a^- = \inf\{t \ge 0 : X_t \le a\}, \qquad \tau_b^+ = \inf\{t \ge 0 : X_t \ge b\} \tag{12}$$
as well as the first exit time from a bounded interval (a, b):
$$\tau_{ab} = \inf\{t \ge 0 : X_t \le a \text{ or } X_t \ge b\} = \inf\{t \ge 0 : X_t \notin (a, b)\} \tag{13}$$
with the similar convention that inf ∅ = +∞, and we set, when the corresponding time is finite,
$$X_a^- = X_{\tau_a^-}, \qquad X_b^+ = X_{\tau_b^+}, \qquad X_{ab} = X_{\tau_{ab}}. \tag{14}$$
In this paper we provide a representation for the generating function of the joint distributions of the couples (σ_a^-, S_a^-), (σ_b^+, S_b^+), and (σ_{ab}, S_{ab}). In particular, we derive simple expressions for the marginal distributions of S_a^-, S_b^+, and S_{ab}. We also obtain explicit expressions for the famous "ruin pseudoprobabilities" P{σ_a^- < σ_b^+} and P{σ_b^+ < σ_a^-}. The main tool employed in this paper is the use of generating functions.
Taking the limit as ε goes to zero, we retrieve the joint distributions of the couples (τ_a^-, X_a^-) and (τ_b^+, X_b^+) obtained in [10, 11]. Therein, we used Spitzer's identity for deriving these distributions. Moreover, we obtain the joint distribution of
the couple (𝜏𝑎𝑏 , 𝑋𝑎𝑏 ) which is a new and an important result
for the study of pseudo-Brownian motion. In particular, we
deduce the “ruin pseudo-probabilities” P{𝜏𝑎− < 𝜏𝑏+ } and
P{𝜏𝑏+ < 𝜏𝑎− }; the results have been announced without any
proof in a survey on pseudo-Brownian motion [13], after a
conference held in Madrid (IWAP 2010).
In [11, 17, 18], the authors observed a curious fact concerning the pseudodistributions of X_a^- and X_b^+: they are linear combinations of the Dirac distribution and its successive derivatives (in the sense of Schwartz distributions). For instance,
$$\frac{\mathbb{P}\{X_b^+ \in \mathrm{d}z\}}{\mathrm{d}z} = \sum_{j=0}^{N-1} \frac{b^j}{j!}\, \delta_b^{(j)}(z). \tag{15}$$
The quantity δ_b^{(j)} is to be understood as the functional acting on test functions φ according to ⟨δ_b^{(j)}, φ⟩ = (−1)^j φ^{(j)}(b). The appearance of the δ_b^{(j)}'s in (15), which is quite surprising for probabilists, can be better understood thanks to the discrete approach. Indeed, the δ_b^{(j)}'s come from the location at the overshooting time of b for the normalized pseudorandom walk: the location takes place in the "cluster" of points b, b + ε, b + 2ε, ..., b + (N − 1)ε.
In order to facilitate the reading of the paper, we have divided it into three parts:
Part I—some properties of the pseudorandom walk;
Part II—first overshooting time of a single threshold;
Part III—first exit time from a bounded interval.
The reader will find a list of notations in Table 2 which is postponed to the end of the paper.

2. Part I—Some Properties of the Pseudorandom Walk

2.1. Pseudodistribution of ξ_1 and S_n. We consider the pseudorandom walk (S_n)_{n∈N} related to a family of real parameters {p_k, k ∈ {−N, ..., N}} satisfying p_k = p_{−k} for any k ∈ {1, ..., N} and $\sum_{k=-N}^{N} p_k = 1$. Let us recall that the infinitesimal generator associated with (S_n)_{n∈N} is defined by
$$\mathcal{G}_S f(j) = \sum_{k=-N}^{N} p_k f(j+k) - f(j) = (p_0 - 1) f(j) + \sum_{k=1}^{N} p_k \bigl[f(j+k) + f(j-k)\bigr]. \tag{16}$$
In this section, we look for the values of p_k, k ∈ {−N, ..., N}, for which the infinitesimal generator $\mathcal{G}_S$ is of the form (5). Next, we provide several properties for the corresponding pseudorandom walk.
Suppose that f can be extended into an analytical function f̃. In this case, we can expand
$$f(j+k) + f(j-k) = 2 \sum_{\ell=0}^{\infty} \frac{k^{2\ell}}{(2\ell)!}\, \tilde{f}^{(2\ell)}(j). \tag{17}$$
Therefore,
$$\mathcal{G}_S f(j) = (p_0 - 1) f(j) + 2 \sum_{k=1}^{N} p_k \sum_{\ell=0}^{\infty} \frac{k^{2\ell}}{(2\ell)!}\, \tilde{f}^{(2\ell)}(j) = \Bigl(p_0 + 2\sum_{k=1}^{N} p_k - 1\Bigr) f(j) + 2 \sum_{\ell=1}^{\infty} \Bigl(\sum_{k=1}^{N} k^{2\ell} p_k\Bigr) \frac{\tilde{f}^{(2\ell)}(j)}{(2\ell)!}. \tag{18}$$
Since $p_0 + 2\sum_{k=1}^{N} p_k = 1$, we see that the expression (5) of $\mathcal{G}_S$ holds if and only if the p_k's satisfy the equations
$$\sum_{k=1}^{N} k^{2\ell} p_k = 0 \ \text{ for } 1 \le \ell \le N-1, \qquad \sum_{k=1}^{N} k^{2N} p_k = \frac{1}{2}\, \kappa_N c\, (2N)!. \tag{19}$$

Proposition 1. The numbers p_k, k ∈ {1, ..., N}, satisfying (19), are given by
$$p_k = (-1)^{k-1} c \binom{2N}{k+N}. \tag{20}$$
In particular, p_N = κ_N c.
Proof. First, we recall that the solution of a Vandermonde system of the form $\sum_{k=1}^{N} a_k^{\ell} x_k = \alpha_\ell$, 1 ≤ ℓ ≤ N, is given by
$$x_k = \frac{A_k\left(\begin{smallmatrix} a_1, \ldots, a_N \\ \alpha_1, \ldots, \alpha_N \end{smallmatrix}\right)}{A(a_1, \ldots, a_N)}, \qquad 1 \le k \le N, \tag{21}$$
with
󵄨󵄨 𝑎 ⋅ ⋅ ⋅ 𝑎 󵄨󵄨
󵄨󵄨 1
𝑁 󵄨󵄨
󵄨󵄨
󵄨
󵄨󵄨 𝑎2 ⋅ ⋅ ⋅ 𝑎2 󵄨󵄨󵄨
󵄨󵄨 1
𝑁 󵄨󵄨
󵄨
(22)
𝐴 (𝑎1 , . . . , 𝑎𝑁) = 󵄨󵄨󵄨
.. 󵄨󵄨󵄨󵄨
󵄨󵄨 ..
. 󵄨󵄨󵄨
󵄨󵄨󵄨 .
󵄨󵄨 𝑁
𝑁 󵄨󵄨
󵄨󵄨𝑎1 ⋅ ⋅ ⋅ 𝑎𝑁
󵄨󵄨
and, for any 𝑘 ∈ {1, . . . , 𝑁},
󵄨󵄨󵄨 𝑎1 ⋅ ⋅ ⋅ 𝑎𝑘−1 𝛼1 𝑎𝑘+1 ⋅ ⋅ ⋅ 𝑎𝑁 󵄨󵄨󵄨
󵄨
󵄨󵄨 2
2
2 󵄨󵄨
󵄨󵄨 𝑎 ⋅ ⋅ ⋅ 𝑎2
󵄨
󵄨󵄨 1
𝑎1 , . . . , 𝑎𝑁
𝑘−1 𝛼2 𝑎𝑘+1 ⋅ ⋅ ⋅ 𝑎𝑁 󵄨󵄨
󵄨 . (23)
𝐴𝑘 (
) = 󵄨󵄨󵄨 .
.
.
.
.
𝛼1 , . . . , 𝛼𝑁
..
..
..
.. 󵄨󵄨󵄨󵄨
󵄨󵄨 ..
󵄨
󵄨󵄨
𝑁
𝑁
𝑁 󵄨󵄨󵄨
󵄨󵄨󵄨𝑎1𝑁 ⋅ ⋅ ⋅ 𝑎𝑘−1
𝛼𝑁 𝑎𝑘+1
⋅ ⋅ ⋅ 𝑎𝑁
󵄨
In the notation of 𝐴 𝑘 and that of forthcoming determinants,
we adopt the convention that when the index of certain
entries in the determinant lies out of the range of 𝑘, the
corresponding column is discarded. That is, for 𝑘 = 1 and
𝑘 = 𝑁, the respective determinants write
󵄨󵄨󵄨 𝛼1 𝑎2 ⋅ ⋅ ⋅ 𝑎𝑁 󵄨󵄨󵄨
󵄨󵄨
󵄨
𝑎1 , . . . , 𝑎𝑁
..
.. 󵄨󵄨󵄨
󵄨 .
) = 󵄨󵄨󵄨 ..
𝐴1 (
󵄨󵄨 ,
.
.
𝛼1 , . . . , 𝛼𝑁
󵄨󵄨
󵄨
󵄨󵄨𝛼 𝑎𝑁 ⋅ ⋅ ⋅ 𝑎𝑁󵄨󵄨󵄨
󵄨 𝑁 2
𝑁󵄨
(24)
󵄨󵄨 𝑎 ⋅ ⋅ ⋅ 𝑎
󵄨
󵄨󵄨 1
𝑁−1 𝛼1 󵄨󵄨󵄨
󵄨󵄨 .
𝑎 , . . . , 𝑎𝑁
..
.. 󵄨󵄨󵄨
𝐴𝑁 ( 1
) = 󵄨󵄨󵄨 ..
.
. 󵄨󵄨󵄨 .
𝛼1 , . . . , 𝛼𝑁
󵄨󵄨 𝑁
󵄨󵄨𝑎 ⋅ ⋅ ⋅ 𝑎𝑁 𝛼 󵄨󵄨󵄨
󵄨 1
𝑁󵄨
𝑁−1
It is well-known that, for any 𝑘 ∈ {1, . . . , 𝑁},
𝐴 (𝑎1 , . . . , 𝑎𝑁) = ∏ 𝑎𝑗
1≤𝑗≤𝑁
= (−1)
∏
1≤ℓ<𝑚≤𝑁
𝑘+𝑁
×
(𝑎𝑚 − 𝑎ℓ )
∏ 𝑎𝑗 ∏ (𝑎𝑘 − 𝑎𝑗 )
1≤𝑗≤𝑁
We find it interesting to compute the cumulative sums of the
𝑝𝑗 ’s: for 𝑘 ∈ {−𝑁, . . . , 𝑁},
1≤𝑗≤𝑁
𝑗 ≠ 𝑘
(25)
𝑎 , . . . , 𝑎𝑁
𝐴𝑘 ( 1
) = (−1)𝑘+𝑁𝛼𝑁
𝛼1 , . . . , 𝛼𝑁
󵄨󵄨 𝑎 ⋅ ⋅ ⋅ 𝑎
󵄨
𝑘−1 𝑎𝑘+1 ⋅ ⋅ ⋅ 𝑎𝑁 󵄨󵄨󵄨
󵄨󵄨 1
󵄨󵄨 2
󵄨
2
2
2 󵄨󵄨
󵄨󵄨 𝑎
󵄨󵄨 1 ⋅ ⋅ ⋅ 𝑎𝑘−1 𝑎𝑘+1 ⋅ ⋅ ⋅ 𝑎𝑁 󵄨󵄨󵄨
× 󵄨󵄨 .
..
..
.. 󵄨󵄨󵄨 .
󵄨󵄨 .
.
.
. 󵄨󵄨󵄨
󵄨󵄨 .
󵄨󵄨 𝑁−1
𝑁−1
𝑁−1
𝑁−1 󵄨󵄨󵄨
󵄨󵄨𝑎1
⋅ ⋅ ⋅ 𝑎𝑘−1 𝑎𝑘+1 ⋅ ⋅ ⋅ 𝑎𝑁 󵄨
= (−1)𝑘+𝑁𝛼𝑁 ∏ 𝑎𝑗
1≤𝑗≤𝑁
𝑗 ≠ 𝑘
∏ (𝑎𝑚 − 𝑎ℓ ) .
1≤ℓ<𝑚≤𝑁
ℓ,𝑚 ≠ 𝑘
(26)
Therefore, the solution simply writes
𝛼𝑁
,
𝑥𝑘 =
𝑎𝑘 ∏ 1≤𝑗≤𝑁 (𝑎𝑘 − 𝑎𝑗 )
1 ≤ 𝑘 ≤ 𝑁.
(27)
Now, we see that system (19) is a Vandermonde system
with the choices 𝑎𝑘 = 𝑘2 , 𝑥𝑘 = 𝑝𝑘 , and 𝛼ℓ = 0 for 1 ≤ ℓ ≤
𝑁 − 1, 𝛼𝑁 = 𝜅𝑁𝑐(2𝑁)!/2. With these settings at hands, we
explicitly have
𝑎𝑘 ∏ (𝑎𝑘 − 𝑎𝑗 ) = 𝑘2 ∏ (𝑘2 − 𝑗2 )
1≤𝑗≤𝑁
𝑗 ≠ 𝑘
= 𝑘2 ∏ (𝑘 − 𝑗) ∏ (𝑘 + 𝑗)
1≤𝑗≤𝑁
𝑗 ≠ 𝑘
1≤𝑗≤𝑁
𝑗 ≠ 𝑘
(28)
Finally, the value of 𝑝0 is obtained as follows: by using the
𝑘 2𝑁
fact that ∑2𝑁
𝑘=0 (−1) ( 𝑘 ) = 0,
−𝑁≤𝑘≤𝑁
𝑘 ≠ 0
The last displayed sum is classical and easy to compute by
appealing to Pascal’s formula which leads to a telescopic sum:
𝑘+𝑁
2𝑁
∑ (−1)𝑗 ( )
𝑗
𝑗=0
𝑘+𝑁
2𝑁 − 1
2𝑁 − 1
= ∑ [(−1)𝑗 (
) − (−1)𝑗+1 (
)]
𝑗−1
𝑗
2𝑁
= 1 + 𝑐 ∑ (−1) (
)
𝑘+𝑁
𝑘
2𝑁 − 1
).
= (−1)𝑘+𝑁 (
𝑘+𝑁
Thus, for 𝑘 ∈ {−𝑁, . . . , 𝑁},
𝑘
2𝑁 − 1
).
𝑘+𝑁
∑ 𝑝𝑗 = 1{𝑘≥0} + (−1)𝑘−1 𝑐 (
(32)
Observe that this sum is nothing but P{𝜉1 ≤ 𝑘}. Next, we
compute the total sum of the |𝑝𝑗 |’s: by using the fact that
𝑁
2𝑁
∑2𝑁
𝑘=0 ( 𝑘 ) = 4 ,
𝑁
󵄨
2𝑁 󵄨󵄨󵄨
2𝑁
󵄨 󵄨 󵄨󵄨
)
∑ 󵄨󵄨󵄨𝑝𝑘 󵄨󵄨󵄨 = 󵄨󵄨󵄨1 − 𝑐 ( )󵄨󵄨󵄨 + ∑ 𝑐 (
𝑁
𝑘
+𝑁
󵄨
󵄨
󵄨 −𝑁≤𝑘≤𝑁
󵄨
𝑘=−𝑁
𝑘 ≠ 0
󵄨󵄨
2𝑁 󵄨󵄨󵄨
2𝑁
󵄨
= 󵄨󵄨󵄨1 − 𝑐 ( )󵄨󵄨󵄨 + 𝑐 [4𝑁 − ( )]
𝑁 󵄨󵄨
𝑁
󵄨󵄨
= 𝑐4𝑁 − 1 + 2[1 − 𝑐 (
(33)
+
2𝑁
)] .
𝑁
𝑁
E (𝜁𝜉1 ) = ∑ 𝑝𝑘 𝜁𝑘
𝑘=−𝑁
= 1 + (−1)𝑁−1
(29)
2𝑁
𝑘=0
(31)
𝑗=0
𝑘=−𝑁
2𝑁
2𝑁
) + (−1)𝑁𝑐 ∑ (−1)𝑘 ( )
𝑘
𝑁
𝑗
𝑗=0
= 1 + 𝑐 [ ∑ (−1)𝑘−1 (
−𝑁≤𝑘≤𝑁
𝑘 ≠ 0
(30)
𝑘+𝑁
2𝑁
𝑐 ∑ (−1) ( ) .
𝑗
𝑁
∑ 𝑝𝑘
2𝑁
).
𝑁
𝑁−1
As previously mentioned, there is an interpretation to this
sum: this is the total variation of the pseudodistribution of
𝜉1 . We can also explicitly determine the generating function
of 𝜉1 : for any 𝜁 ∈ C∗ ,
1
= (−1)𝑁−𝑘 (𝑁 + 𝑘)! (𝑁 − 𝑘)!
2
and the result of Proposition 1 ensues.
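A quick numerical cross-check of Proposition 1 (our illustration, not part of the paper): the weights (20) do satisfy the moment system (19). The helper name is ours.

```python
from math import comb, factorial

def p(N, c, k):
    # p_k from (20), k in {1, ..., N}
    return (-1) ** (k - 1) * c * comb(2 * N, k + N)

N, c = 4, 0.01
kappa = (-1) ** (N - 1)
for l in range(1, N):                              # sum_k k^(2l) p_k = 0 for 1 <= l <= N-1
    assert abs(sum(k ** (2 * l) * p(N, c, k) for k in range(1, N + 1))) < 1e-8
lhs = sum(k ** (2 * N) * p(N, c, k) for k in range(1, N + 1))
assert abs(lhs - kappa * c * factorial(2 * N) / 2) < 1e-8   # sum_k k^(2N) p_k = kappa_N c (2N)!/2
```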
= 1 − 𝑐(
𝑗=−𝑁
𝑗=−𝑁
𝑗 ≠ 𝑘
= 1 − 𝑐(
𝑗=−𝑁
2𝑁
)]
𝑗+𝑁
= 1{𝑘≥0} + (−1)
In the particular case where 𝛼ℓ = 0 for 1 ≤ ℓ ≤ 𝑁 − 1, we
have, for any 𝑘 ∈ {1, . . . , 𝑁}, that
𝑝0 = 1 −
𝑘
∏ (𝑎𝑚 − 𝑎ℓ ) .
1≤ℓ<𝑚≤𝑁
ℓ,𝑚 ≠ 𝑘
1≤𝑗≤𝑁
𝑗 ≠ 𝑘
𝑘
∑ 𝑝𝑗 = ∑ [𝛿𝑗0 + (−1)𝑗−1 𝑐 (
= 1 + 𝜅𝑁𝑐
2𝑁
) 𝜁𝑘 ]
𝑘+𝑁
(34)
2𝑁
𝑐
2𝑁
[ ∑ (−1)𝑘 ( ) 𝜁𝑘 ]
𝑘
𝜁𝑁 𝑘=0
(1 − 𝜁)
𝜁𝑁
2𝑁
.
We sum up below the results we have obtained concerning
the pseudodistribution of 𝜉1 .
Proposition 2. The pseudodistribution of ξ_1 is determined, for k ∈ {−N, ..., N}, by
$$\mathbb{P}\{\xi_1 = k\} = \delta_{k0} + (-1)^{k-1} c \binom{2N}{k+N}, \tag{35}$$
or, equivalently, by
$$\mathbb{P}\{\xi_1 \le k\} = \mathbf{1}_{\{k\ge 0\}} + (-1)^{k-1} c \binom{2N-1}{k+N}. \tag{36}$$
The total variation of the pseudodistribution of ξ_1 is given by
$$\|\mathbb{P}_{\xi_1}\| = \begin{cases} 1 + c\,\bigl[4^N - 2\binom{2N}{N}\bigr] & \text{if } 0 < c \le 1/\binom{2N}{N}, \\[4pt] c\,4^N - 1 & \text{if } c \ge 1/\binom{2N}{N}. \end{cases} \tag{37}$$
Table 1: Some numerical values in the cases N ∈ {3, 4}.

(a) N = 3

c    | 1   | 1/20   | 1/32   | 1/35   | 1/64
p_0  | −19 | 0      | 0.38   | 0.43   | 0.69
p_1  | 15  | 0.75   | 0.47   | 0.43   | 0.23
p_2  | −6  | −0.30  | −0.19  | −0.17  | −0.094
p_3  | 1   | 0.050  | 0.031  | 0.029  | 0.016

(b) N = 4

c    | 1   | 1/70    | 1/126   | 1/128   | 1/256
p_0  | −69 | 0       | 0.44    | 0.45    | 0.73
p_1  | 56  | 0.8     | 0.44    | 0.44    | 0.22
p_2  | −28 | −0.4    | −0.22   | −0.22   | −0.11
p_3  | 8   | 0.114   | 0.063   | 0.062   | 0.031
p_4  | −1  | −0.0143 | −0.0079 | −0.0078 | −0.0039
The generating function of ξ_1 is given, for any ζ ∈ C*, by
$$\mathbb{E}(\zeta^{\xi_1}) = 1 + \kappa_N c\, \frac{(1-\zeta)^{2N}}{\zeta^N}. \tag{38}$$
In particular, the Fourier transform of ξ_1 admits the following expression: for any θ ∈ [0, 2π],
$$\mathbb{E}(e^{i\theta\xi_1}) = 1 - c\, 4^N \sin^{2N}\Bigl(\frac{\theta}{2}\Bigr). \tag{39}$$
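The closed forms (38) and (39) are easy to check numerically. The following Python sketch (ours, not the paper's) compares them with the direct sums over the weights p_k from (35).

```python
import cmath
from math import comb, sin

def p(N, c, k):
    return (1 if k == 0 else 0) + (-1) ** (k - 1) * c * comb(2 * N, k + N)

N, c = 3, 1 / 64
kappa = (-1) ** (N - 1)
for zeta in (0.3, -1.7, 0.5 + 0.2j):               # arbitrary nonzero test points
    direct = sum(p(N, c, k) * zeta ** k for k in range(-N, N + 1))
    closed = 1 + kappa * c * (1 - zeta) ** (2 * N) / zeta ** N     # formula (38)
    assert abs(direct - closed) < 1e-9
for theta in (0.1, 1.0, 2.5):
    direct = sum(p(N, c, k) * cmath.exp(1j * theta * k) for k in range(-N, N + 1))
    closed = 1 - c * 4 ** N * sin(theta / 2) ** (2 * N)            # formula (39)
    assert abs(direct - closed) < 1e-9
```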
Remark 3. For $c = 1/\binom{2N}{N}$, we have P{ξ_1 = 0} = 0; that is, the pseudorandom walk does not stay at its current location. If $0 < c \le 1/\binom{2N+1}{N+1}$, it can be easily seen, by using the identity $\binom{2N}{N} + \binom{2N}{N+1} = \binom{2N+1}{N+1}$, that P{ξ_1 = 0} ≥ P{ξ_1 = 1} > 0. On the other hand, for any c > 0, it is clear that P{ξ_1 = 1} > |P{ξ_1 = 2}| > ⋯ > |P{ξ_1 = N}|. In Table 1 and Figures 1 and 2, we provide some numerical values and (rescaled) profiles of the pseudodistribution of ξ_1 for N = 3 and N = 4 and several values of c.
In the sequel, we will use the total variation of ξ_1 as an upper bound which we call M_1:
$$M_1 = \begin{cases} 1 + c\,\bigl[4^N - 2\binom{2N}{N}\bigr] & \text{if } 0 < c \le 1/\binom{2N}{N}, \\[4pt] c\,4^N - 1 & \text{if } c \ge 1/\binom{2N}{N}. \end{cases} \tag{40}$$
Set f(θ) = E(e^{iθξ_1}) for any θ ∈ [0, 2π]. We notice that f(θ) ∈ [1 − c4^N, 1] and, more precisely,
$$\|f\|_\infty = \sup_{\theta\in[0,2\pi]} |f(\theta)| = \max\bigl(|1 - c\,4^N|, 1\bigr). \tag{41}$$
Let us denote this bound by M_∞:
$$M_\infty = \begin{cases} 1 & \text{if } 0 < c \le 1/2^{2N-1}, \\[4pt] c\,4^N - 1 & \text{if } c \ge 1/2^{2N-1}. \end{cases} \tag{42}$$
In view of (40) and (42), since $\binom{2N}{N} \le 2^{2N-1}$, we see that M_1 ≥ M_∞ ≥ 1.

Proposition 4. The pseudodistribution of S_n is given, for any k ∈ {−Nn, ..., Nn}, by
$$\mathbb{P}\{S_n = k\} = (-1)^k \sum_{\ell=0}^{n} (-c)^\ell \binom{n}{\ell} \binom{2N\ell}{k+N\ell}. \tag{43}$$
Actually, the foregoing sum is taken over the ℓ such that ℓ ≥ |k|/N. We also have that
$$\mathbb{P}\{S_n \le k\} = \mathbf{1}_{\{k\ge 0\}} + (-1)^k \sum_{\ell=1}^{n} (-c)^\ell \binom{n}{\ell} \binom{2N\ell-1}{k+N\ell}. \tag{44}$$

Proof. By the independence of the ξ_j's which have the same pseudoprobability distribution, we plainly have that
$$\mathbb{E}(e^{i\theta S_n}) = f(\theta)^n = \Bigl[1 - c\,4^N \sin^{2N}\Bigl(\frac{\theta}{2}\Bigr)\Bigr]^n. \tag{45}$$
Hence, by inverse Fourier transform, we extract that
$$\mathbb{P}\{S_n = k\} = \frac{1}{2\pi}\int_0^{2\pi} f(\theta)^n e^{-ik\theta}\,\mathrm{d}\theta \tag{46}$$
$$= \sum_{\ell=0}^{n} (-4^N c)^\ell \binom{n}{\ell} \frac{1}{2\pi}\int_0^{2\pi} \sin^{2N\ell}\Bigl(\frac{\theta}{2}\Bigr) e^{-ik\theta}\,\mathrm{d}\theta. \tag{47}$$
Figure 1: 𝑁 = 3, 𝑐 ∈ {1, 1/32, 1/64}.
Figure 2: 𝑁 = 4, 𝑐 ∈ {1, 1/128, 1/256}.
By writing sin(θ/2) = (e^{iθ/2} − e^{−iθ/2})/(2i), we get for the integral lying in (47) that
$$\frac{1}{2\pi}\int_0^{2\pi} \sin^{2N\ell}\Bigl(\frac{\theta}{2}\Bigr) e^{-ik\theta}\,\mathrm{d}\theta = \frac{(-1)^{N\ell}}{4^{N\ell}} \sum_{m=0}^{2N\ell} (-1)^m \binom{2N\ell}{m} \frac{1}{2\pi}\int_0^{2\pi} e^{i(m-k-N\ell)\theta}\,\mathrm{d}\theta = \frac{(-1)^{k}}{4^{N\ell}} \binom{2N\ell}{k+N\ell}. \tag{48}$$
By plugging (48) into (47), we derive (43). Next, we write, for k ∈ {−Nn, ..., Nn}, that
$$\mathbb{P}\{S_n \le k\} = \sum_{j=-Nn}^{k} \mathbb{P}\{S_n = j\} = \sum_{\ell=0}^{n} (-c)^\ell \binom{n}{\ell} \sum_{j=(-Nn)\vee(-N\ell)}^{k\wedge(N\ell)} (-1)^j \binom{2N\ell}{j+N\ell} = \sum_{\ell=0}^{n} (-c)^\ell \binom{n}{\ell} \sum_{j=-N\ell}^{k\wedge(N\ell)} (-1)^j \binom{2N\ell}{j+N\ell}. \tag{49}$$
If k < 0, then the term in sum (49) corresponding to ℓ = 0 vanishes and
$$\mathbb{P}\{S_n \le k\} = \sum_{\ell=1}^{n} (-c)^\ell \binom{n}{\ell} \sum_{j=-N\ell}^{k} (-1)^j \binom{2N\ell}{j+N\ell}. \tag{50}$$
The second sum in the foregoing equality is easy to compute:
This proves (53). Next, by (46), since f(2𝜋−𝜃) = f(𝜃), we have,
for any 𝜀 ∈ (0, 𝜋), that
𝑘
2𝑁ℓ
)
∑ (−1)𝑗 (
𝑗 + 𝑁ℓ
P {𝑆𝑛 = 0} =
𝑗=−𝑁ℓ
𝑘
2𝑁ℓ − 1
2𝑁ℓ − 1
= ∑ [(−1)𝑗 (
) − (−1)𝑗+1 (
)]
𝑗 + 𝑁ℓ − 1
𝑗 + 𝑁ℓ
𝜀
𝜋
1
= (∫ f(𝜃)𝑛 d𝜃 + ∫ f(𝜃)𝑛 d𝜃) .
𝜋 0
𝜀
𝑗=−𝑁ℓ
2𝑁ℓ − 1
= (−1)𝑘 (
).
𝑘 + 𝑁ℓ
(51)
𝑘∧(𝑁ℓ)
𝑛
2𝑁ℓ
P {𝑆𝑛 ≤ 𝑘} = 1 + ∑ (−𝑐) ( ) ∑ (−1)𝑗 (
) . (52)
ℓ
𝑗 + 𝑁ℓ
ℓ
ℓ=1
𝑗=−𝑁ℓ
󵄨
󵄨󵄨
𝑛
󵄨󵄨P {𝑆𝑛 = 0}󵄨󵄨󵄨 ≤ 𝜀 + |f (𝜀)| .
Proposition 5. The upper bound below holds true: for any positive integer n and any integer k,
$$|\mathbb{P}\{S_n = k\}| \le \sqrt{\mathbb{P}\{S_{2n} = 0\}} \le M_\infty^{\,n}. \tag{53}$$
ln (|f (𝜀)|𝑛 ) = 𝑛 ln (1 − 𝑐4𝑁sin2𝑁 (
Assume that 0 < c ≤ 1/2^{2N−1}. The asymptotics below holds true: for any δ ∈ (0, 1/(2N)),
$$\mathbb{P}\{S_n = 0\} \underset{n\to+\infty}{=} O\Bigl(\frac{1}{n^{\delta}}\Bigr). \tag{54}$$
𝑛
𝜋−𝜀
∫0 + ∫𝜀
𝜋
+ ∫𝜋−𝜀 .
Remark 6. A better estimate for |P{𝑆𝑛 = 𝑘}| can be obtained
in the same way:
∀𝑘 ∈ Z,
(54)
1
)) ∼ −𝑐𝑛1−2𝑁𝛿 (59)
2𝑛𝛿
which clearly entails, for large enough 𝑛, that |f(𝜀)|𝑛 ≤
exp(−(𝑐/2)𝑛1−2𝑁𝛿 ). Thus, if 𝛿 ∈ (0, 1/(2𝑁)), |f(𝜀)|𝑛 = 𝑜(𝜀)
which proves (54).
If 𝑐 = 1/22𝑁−1 , f(𝜃) = 1 − 2sin2𝑁(𝜃/2). In this case,
𝜋
the same holds true upon splitting the integral ∫0 into
𝜀
(53)
(58)
Now, choose 𝜀 = 1/𝑛𝛿 for a positive 𝛿. We have that
𝛼
By using the convention that ( 𝛽 ) = 0 if 𝛽 > 𝛼, we see that
the second sum above also coincides with (51). Formula (44)
ensues in both cases.
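As an independent sanity check of Proposition 4 (our illustration, not part of the article), one can compare formula (43) with the n-fold convolution of the one-step pseudodistribution; the helper names are ours.

```python
from math import comb

def weights(N, c):
    return {k: (1 if k == 0 else 0) + (-1) ** (k - 1) * c * comb(2 * N, k + N)
            for k in range(-N, N + 1)}

def dist_Sn(N, c, n):
    # n-fold convolution of the one-step pseudodistribution
    dist, p = {0: 1.0}, weights(N, c)
    for _ in range(n):
        new = {}
        for s, w in dist.items():
            for k, pk in p.items():
                new[s + k] = new.get(s + k, 0.0) + w * pk
        dist = new
    return dist

def formula_43(N, c, n, k):
    total = 0.0
    for l in range(n + 1):
        if -N * l <= k <= N * l:                    # the binomial vanishes otherwise
            total += (-c) ** l * comb(n, l) * comb(2 * N * l, k + N * l)
    return (-1) ** k * total

N, c, n = 2, 1 / 10, 5
dist = dist_Sn(N, c, n)
for k in range(-N * n, N * n + 1):
    assert abs(dist.get(k, 0.0) - formula_43(N, c, n, k)) < 1e-9
```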
(57)
The assumption 0 < 𝑐 < 1/22𝑁−1 entails that |f(𝜃)| < 1 for any
𝜃 ∈ (0, 𝜋). We see that |f(𝜃)| ≤ 1 on [0, 𝜀], and |f(𝜃)| ≤ |f(𝜀)|
on [𝜀, 𝜋] for any 𝜀 ∈ (0, 𝜋). Hence,
If 𝑘 ≥ 0, then the term in sum (49) corresponding to ℓ = 0 is
1 and
𝑛
1 2𝜋
1 𝜋
∫ f(𝜃)𝑛 d𝜃 = ∫ f(𝜃)𝑛 d𝜃
2𝜋 0
𝜋 0
$$|\mathbb{P}\{S_n = k\}| \le \begin{cases} \mathbb{P}\{S_n = 0\} & \text{if } n \text{ is even}, \\[4pt] M_\infty\, \mathbb{P}\{S_{n-1} = 0\} & \text{if } n \text{ is odd}. \end{cases} \tag{60}$$
Proof. Let us introduce the usual norms of any suitable
function 𝜙:
Nevertheless, we will not use it. We also have the following
inequality for the total variation of 𝑆𝑛 :
1 2𝜋 󵄨󵄨
󵄨
󵄩󵄩 󵄩󵄩
∫ 󵄨𝜙 (𝜃)󵄨󵄨󵄨 d𝜃,
󵄩󵄩𝜙󵄩󵄩1 =
2𝜋 0 󵄨
𝑁𝑛
󵄩 󵄩𝑛
󵄩󵄩 󵄩󵄩
󵄩󵄩P𝑆𝑛 󵄩󵄩 = ∑ 󵄨󵄨󵄨󵄨P {𝑆𝑛 = 𝑘}󵄨󵄨󵄨󵄨 ≤ 󵄩󵄩󵄩P𝜉1 󵄩󵄩󵄩 = 𝑀1𝑛 .
󵄩 󵄩
󵄩 󵄩
1 2𝜋 󵄨󵄨
󵄨2
󵄩󵄩 󵄩󵄩
∫ 󵄨𝜙 (𝜃)󵄨󵄨󵄨 d𝜃,
󵄩󵄩𝜙󵄩󵄩2 = √
2𝜋 0 󵄨
(61)
𝑘=−𝑁𝑛
(55)
Proposition 7. For any bounded function F defined on Z^n,
$$\bigl|\mathbb{E}[F(S_1, \ldots, S_n)]\bigr| \le \|F\|_\infty\, M_1^{\,n}. \tag{62}$$
󵄩󵄩 󵄩󵄩
󵄩󵄩𝜙󵄩󵄩∞ = sup 󵄨󵄨󵄨𝜙 (𝜃)󵄨󵄨󵄨
𝜃∈[0,2𝜋]
and recall the elementary inequalities ‖𝜙‖1 ≤ ‖𝜙‖2 ≤ ‖𝜙‖∞ .
It is clear from (46) that, for any integer 𝑘,
1 2𝜋
󵄨
󵄩 󵄩
󵄨󵄨
∫ |f (𝜃)|𝑛 d𝜃 = 󵄩󵄩󵄩f𝑛 󵄩󵄩󵄩1
󵄨󵄨P {𝑆𝑛 = 𝑘}󵄨󵄨󵄨 ≤
2𝜋 0
1 2𝜋
󵄩 󵄩
≤ 󵄩󵄩󵄩f𝑛 󵄩󵄩󵄩2 = √
∫ f(𝜃)2𝑛 d𝜃 = √P {𝑆2𝑛 = 0}
2𝜋 0
𝑛
≤ ‖f‖𝑛∞ = 𝑀∞
.
(56)
(62)
Proof. Recall that we set 𝑝𝑘 = P{𝜉1 = 𝑘} for any 𝑘 ∈
{−𝑁, . . . , 𝑁}. We extend these settings by putting 𝑝𝑘 = 0 for
𝑘 ∈ Z \ {−𝑁, . . . , 𝑁}. We have that
󵄨󵄨
󵄨
󵄨󵄨E [𝐹 (𝑆1 , . . . , 𝑆𝑛 )]󵄨󵄨󵄨
󵄨󵄨
󵄨󵄨
󵄨󵄨
󵄨󵄨
󵄨󵄨
󵄨
= 󵄨󵄨
𝐹 (𝑘1 , . . . , 𝑘𝑛 ) P {𝑆1 = 𝑘1 , . . . , 𝑆𝑛 = 𝑘𝑛 }󵄨󵄨󵄨
∑
󵄨󵄨
󵄨󵄨
𝑛
󵄨󵄨(𝑘1 ,...,𝑘𝑛 )∈Z
󵄨󵄨
󵄨󵄨
󵄨
󵄨󵄨𝑝𝑘1 𝑝𝑘2 −𝑘1 ⋅ ⋅ ⋅ 𝑝𝑘𝑛 −𝑘𝑛−1 󵄨󵄨󵄨 .
≤ ‖𝐹‖∞
∑
󵄨
󵄨
(𝑘1 ,...,𝑘𝑛 )∈Z𝑛
(63)
If we choose 𝑧 such that |𝑀∞ 𝑧| < 1, then the same conclusion
holds.
Now, we have that
The foregoing sum can be easily evaluated as follows:
󵄨󵄨
󵄨
󵄨󵄨𝑝𝑘1 𝑝𝑘2 −𝑘1 ⋅ ⋅ ⋅ 𝑝𝑘𝑛 −𝑘𝑛−1 󵄨󵄨󵄨
∑
󵄨
󵄨
(𝑘1 ,...,𝑘𝑛 )∈Z𝑛
󵄨 󵄨
󵄨
󵄨
󵄨
󵄨
= ∑ 󵄨󵄨󵄨󵄨𝑝𝑘1 󵄨󵄨󵄨󵄨 ( ∑ 󵄨󵄨󵄨󵄨𝑝𝑘2 −𝑘1 󵄨󵄨󵄨󵄨 (⋅ ⋅ ⋅ ( ∑ 󵄨󵄨󵄨󵄨𝑝𝑘𝑛 −𝑘𝑛−1 󵄨󵄨󵄨󵄨) ⋅ ⋅ ⋅ ))
𝑘1 ∈Z
𝑘2 ∈Z
𝐺 (𝑧, 𝜁) = ∑ ( ∑ P {𝑆𝑛 = 𝑘} 𝜁𝑘 ) 𝑧𝑛
𝑛∈N
𝑘𝑛 ∈Z
𝑘∈Z
𝑛
= ∑ E (𝜁𝑆𝑛 ) 𝑧𝑛 = ∑ [𝑧E (𝜁𝜉1 )]
𝑛
󵄨 󵄨
= ( ∑ 󵄨󵄨󵄨𝑝𝑘 󵄨󵄨󵄨) = 𝑀1𝑛
𝑛∈N
𝑘∈Z
=
(64)
(69)
𝑛∈N
1
1 − 𝑧E (𝜁𝜉1 )
which proves (62).
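The bound (62) can be illustrated by brute force (our sketch, not the paper's): enumerating all increment sequences of length n shows that the signed path measure has total mass 1 and total variation M_1^n, exactly as in (64), from which |E[F(S_1, ..., S_n)]| ≤ ‖F‖_∞ M_1^n follows.

```python
from itertools import product
from math import comb

N, c, n = 2, 1 / 10, 4
p = {k: (1 if k == 0 else 0) + (-1) ** (k - 1) * c * comb(2 * N, k + N)
     for k in range(-N, N + 1)}
M1 = sum(abs(v) for v in p.values())

total_mass, total_variation = 0.0, 0.0
for steps in product(p, repeat=n):              # all (2N+1)^n increment sequences
    w = 1.0
    for k in steps:
        w *= p[k]
    total_mass += w
    total_variation += abs(w)

assert abs(total_mass - 1) < 1e-9               # the path measure has total mass 1
assert abs(total_variation - M1 ** n) < 1e-9    # and total variation M_1^n, cf. (64)
```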
and, thanks to (38), we can state the following result.
2.2. Generating Function of 𝑆𝑛 . Let us introduce the generating functions, defined for complex numbers 𝑧, 𝜁, by
Proposition 8. The double generating function of the P{𝑆𝑛 =
𝑘}, 𝑘 ∈ Z, 𝑛 ∈ N, is given, for any complex numbers 𝑧, 𝜁 such
𝑁
𝑁
that √|𝑀
∞ 𝑧| < |𝜁| < 1/ √|𝑀∞ 𝑧|, by
𝐺𝑘 (𝑧) = ∑ P {𝑆𝑛 = 𝑘} 𝑧𝑛 = ∑ P {𝑆𝑛 = 𝑘} 𝑧𝑛
𝑛∈N
∑ P {𝑆𝑛 = 𝑘} 𝜁𝑘 𝑧𝑛 =
𝐺 (𝑧, 𝜁) =
for 𝑘 ∈ Z,
𝑛∈N:
𝑛≥|𝑘|/𝑁
𝑘∈Z,𝑛∈N
𝐺 (𝑧, 𝜁) =
∑ P {𝑆𝑛 = 𝑘} 𝜁𝑘 𝑧𝑛 .
𝑘∈Z,𝑛∈N:
|𝑘|≤𝑁𝑛
(65)
We first study the problem of convergence of the foregoing
series. We start from
∑
𝑘∈Z,𝑛∈N
󵄨󵄨
󵄨
󵄨󵄨P {𝑆𝑛 = 𝑘} 𝜁𝑘 𝑧𝑛 󵄨󵄨󵄨 ≤
󵄨
󵄨
󵄨 󵄨𝑘 󵄨
󵄨𝑛
∑ 󵄨󵄨󵄨𝜁󵄨󵄨󵄨 󵄨󵄨󵄨𝑀∞ 𝑧󵄨󵄨󵄨 .
𝑘∈Z,𝑛∈N:
|𝑘|≤𝑁𝑛
(66)
.
(70)
$$G(z, e^{i\theta}) = \frac{1}{1 - z + c\, 4^N z \sin^{2N}(\theta/2)}. \tag{71}$$
𝐺 (𝑧, 𝜁) = ∑ ( ∑ P {𝑆𝑛 = 𝑘} 𝑧𝑛 ) 𝜁𝑘 = ∑ 𝐺𝑘 (𝑧) 𝜁𝑘 .
𝑛∈N
𝑘∈Z
(72)
By substituting 𝜁 = e 𝑖𝜃 in the foregoing equality, we get the
Fourier series of the function 𝜃 󳨃→ 𝐺(𝑧, e 𝑖𝜃 ):
𝑘∈Z,𝑛∈N:
|𝑘|≤𝑁𝑛
𝐺 (𝑧, e 𝑖𝜃 ) = ∑ 𝐺𝑘 (𝑧) e𝑖𝑘𝜃
󵄨 󵄨𝑁𝑛+1
󵄨 󵄨𝑁𝑛+1
1 − 1/󵄨󵄨󵄨𝜁󵄨󵄨󵄨
1 − 󵄨󵄨󵄨𝜁󵄨󵄨󵄨
󵄨
󵄨𝑛
= ∑(
󵄨󵄨 󵄨󵄨 +
󵄨󵄨 󵄨󵄨 − 1) 󵄨󵄨󵄨𝑀∞ 𝑧󵄨󵄨󵄨
𝜁
𝜁
1
−
1
−
1/
󵄨
󵄨
󵄨
󵄨
𝑛=0
󵄨 󵄨
󵄨 󵄨
∞
𝑘∈Z
∞
1
󵄨
󵄨𝑛
+
󵄨󵄨 󵄨󵄨 ( ∑ 󵄨󵄨󵄨𝑀∞ 𝑧󵄨󵄨󵄨 −
1 − 1/ 󵄨󵄨𝜁󵄨󵄨 𝑛=0
∞
1 ∞ 󵄨󵄨󵄨 𝑀∞ 𝑧 󵄨󵄨󵄨𝑛
󵄨
󵄨𝑛
󵄨󵄨 󵄨󵄨 ∑ 󵄨󵄨󵄨󵄨 𝜁𝑁 󵄨󵄨󵄨󵄨 ) − ∑ 󵄨󵄨󵄨𝑀∞ 𝑧󵄨󵄨󵄨 .
󵄨󵄨𝜁󵄨󵄨 𝑛=0󵄨
󵄨
𝑛=0
(67)
If we choose 𝑧, 𝜁 such that |𝑀∞ 𝑧| < 1, |𝑀∞ 𝑧𝜁𝑁| < 1
and |𝑀∞ 𝑧/𝜁𝑁| < 1 (which is equivalent to |𝑧| < 1/𝑀∞ ×
𝑁
𝑁
[min(|𝜁|, 1/|𝜁|)]𝑁, or √|𝑀
∞ 𝑧| < |𝜁| < 1/ √|𝑀∞ 𝑧|), then
the double sum defining the function 𝐺(𝑧, 𝜁) is absolutely
summable. If |𝜁| = 1, then
∞
󵄨
󵄨 󵄨𝑘 󵄨
󵄨𝑛
󵄨𝑛
∑ 󵄨󵄨󵄨𝜁󵄨󵄨󵄨 󵄨󵄨󵄨𝑀∞ 𝑧󵄨󵄨󵄨 = ∑ (2𝑁𝑛 + 1) 󵄨󵄨󵄨𝑀∞ 𝑧󵄨󵄨󵄨 .
𝑛=0
(73)
from which we extract the sequence of the coefficients
(𝐺𝑘 (𝑧))𝑘∈N . Indeed, since P{𝑆𝑛 = −𝑘} = P{𝑆𝑛 = 𝑘}, we have
that 𝐺𝑘 (𝑧) = 𝐺−𝑘 (𝑧) and
∞
1
󵄨
󵄨𝑛 󵄨 󵄨 󵄨󵄨
𝑁 󵄨󵄨𝑛
󵄨 󵄨 ( ∑ 󵄨󵄨𝑀 𝑧󵄨󵄨 − 󵄨󵄨𝜁󵄨󵄨 ∑ 󵄨󵄨𝑀 𝑧𝜁 󵄨󵄨󵄨 )
1 − 󵄨󵄨󵄨𝜁󵄨󵄨󵄨 𝑛=0󵄨 ∞ 󵄨 󵄨 󵄨 𝑛=0󵄨 ∞
𝑘∈Z,𝑛∈N:
|𝑘|≤𝑁𝑛
2𝑁
In particular, for any 𝜃 ∈ [0, 2𝜋] and 𝑧 ∈ C such that |𝑧| <
1/𝑀∞ ,
𝑘∈Z
󵄨 󵄨𝑘 󵄨
󵄨𝑛
∑ 󵄨󵄨󵄨𝜁󵄨󵄨󵄨 󵄨󵄨󵄨𝑀∞ 𝑧󵄨󵄨󵄨
=
(1 − 𝑧) 𝜁𝑁 − 𝜅𝑁𝑐𝑧(1 − 𝜁)
On the other hand,
If 𝜁 ≠ 0 and |𝜁| ≠ 1, then
∞
𝜁𝑁
(68)
𝐺𝑘 (𝑧) =
1 2𝜋
∫ 𝐺 (𝑧, e 𝑖𝜃 ) e−𝑖𝑘𝜃 d𝜃
2𝜋 0
=
1 2𝜋
∫ 𝐺 (𝑧, e 𝑖𝜃 ) e𝑖𝑘𝜃 d𝜃
2𝜋 0
=
1
∫ 𝐺 (𝑧, 𝜁) 𝜁𝑘−1 d𝜁,
2𝜋𝑖 C
(74)
where C is the circle of radius 1 centered at the origin and
counter clockwise orientated. Then, referring to (70), we
obtain, for any 𝑧 ∈ C satisfying |𝑧| < 1/𝑀∞ , that
𝐺𝑘 (𝑧) =
1
𝜁𝑘+𝑁−1
d 𝜁,
∫
2𝜋𝑖 C 𝑃 (𝑧, 𝜁)
(75)
where 𝑃(𝑧, 𝜁) is the polynomial given by
where
𝑃 (𝑧, 𝜁) = (1 − 𝑧) 𝜁𝑁 − 𝜅𝑁𝑐𝑧(𝜁 − 1)2𝑁.
(76)
We are looking for the roots of 𝜁 󳨃→ 𝑃(𝑧, 𝜁) which lie inside
the circle C. For this, we introduce the 𝑁th roots of 𝜅𝑁: 𝜃𝑗 =
− e 𝑖((2𝑗−1)/𝑁)𝜋 for 1 ≤ 𝑗 ≤ 𝑁; 𝜃𝑗𝑁 = 𝜅𝑁.
From now on, in order to simplify the expression of the
roots of 𝑃, we make the assumption that 𝑧 is a real number
lying in (0, 1/𝑀∞ ) (and then 𝑧 ∈ (0, 1)). The roots of 𝜁 󳨃→
𝑃(𝑧, 𝜁) are those of the equations 𝜁2 − 2[1 + 𝜃𝑗 𝑤(𝑧)]𝜁 + 1 = 0,
1 ≤ 𝑗 ≤ 𝑁, where
𝑤 (𝑧) =
𝑁
√
1−𝑧
.
𝑁
2 √𝑐𝑧
− 2 cos (
1 ≤ 𝑗 ≤ 𝑁,
(78)
V𝑗 (𝑧) = 1 + 𝜃𝑗 𝑤 (𝑧)
2𝑗 − 1
𝜋)) 𝑎𝑗 (𝑧)
𝑁
󵄨󵄨
2𝑗 − 1 󵄨󵄨󵄨󵄨
󵄨
+ 󵄨󵄨󵄨sin (
𝜋)󵄨󵄨 𝑏𝑗 (𝑧) ] .
󵄨󵄨
󵄨󵄨
𝑁
𝐵𝑗 (𝑧) = 2𝑎𝑗 (𝑧) √𝑤 (𝑧) [𝑤 (𝑧) − cos (
2𝑗 − 1
× [√ 𝑤(𝑧)2 − 4 cos (
𝜋) 𝑤 (𝑧) + 4 + 𝑤 (𝑧)]
𝑁
[
]
2𝑗 − 1
𝜋))
𝑁
> 0.
(82)
2𝑗 − 1
1 [√
𝑤(𝑧)2 − 4 cos (
𝜋) 𝑤 (𝑧) + 4
√2
𝑁
[
2𝑗 − 1 ]
𝜋)
+ 𝑤 (𝑧) − 2 cos (
𝑁
]
1/2
(79)
,
2𝑗 − 1 ]
𝜋)
𝑁
]
lim 𝑎𝑗 (𝑧) = √2 sin (
𝑧 → 1−
2𝑗 − 1
𝜋) ,
2𝑁
2𝑗 − 1
lim− 𝑏𝑗 (𝑧) = 𝜖𝑗 √2 cos (
𝜋)
𝑧→1
2𝑁
1/2
.
(83)
and then
We notice that 𝑎𝑗 (𝑧)𝑏𝑗 (𝑧) = | sin(((2𝑗 − 1)/𝑁)𝜋)|. Because of
the last coefficient 1 in the polynomial 𝜁2 − 2[1 + 𝜃𝑗 𝑤(𝑧)]𝜁 + 1,
it is clear that the roots 𝑢𝑗 (𝑧) and V𝑗 (𝑧) are inverse: V𝑗 (𝑧) =
1/𝑢𝑗 (𝑧).
Let us check that |𝑢𝑗 (𝑧)| < 1 < |V𝑗 (𝑧)| for any 𝑗 ∈
{1, . . . , 𝑁}. Straightforward computations yield that
󵄨2
󵄨󵄨
󵄨󵄨𝑢𝑗 (𝑧)󵄨󵄨󵄨 = 𝐴 𝑗 (𝑧) − 𝐵𝑗 (𝑧) ,
󵄨
󵄨
󵄨󵄨
󵄨󵄨2
󵄨󵄨V𝑗 (𝑧)󵄨󵄨 = 𝐴 𝑗 (𝑧) + 𝐵𝑗 (𝑧) ,
󵄨
󵄨
If sin(((2𝑗−1)/𝑁)𝜋) = 0 (which happens only when 𝑁 is odd
and 𝑗 = (𝑁 + 1)/2), 𝑎𝑗 (𝑧) = √𝑤(𝑧) + 2, 𝑏𝑗 (𝑧) = 0 and then
𝐵𝑗 = 2𝑎𝑗 (𝑧)√𝑤(𝑧)(𝑤(𝑧) + 1) > 0.
The above discussion ensures that the roots we are looking
for (i.e., those lying inside C) are 𝑢𝑗 (𝑧), 1 ≤ 𝑗 ≤ 𝑁; we discard
the V𝑗 (𝑧)’s.
Remark 9. We notice that
2𝑗 − 1
1 [√
𝑤(𝑧)2 − 4 cos (
𝑏𝑗 (𝑧) =
𝜋) 𝑤 (𝑧) + 4
√2
𝑁
[
− 𝑤 (𝑧) + 2 cos (
2𝑗 − 1
𝜋) + 𝑏𝑗 (𝑧)2 ]
𝑁
= 𝑎𝑗 (𝑧) √𝑤 (𝑧)
1 ≤ 𝑗 ≤ 𝑁,
(with the convention that sgn (0) = 0) ,
𝑎𝑗 (𝑧) =
(81)
𝐵𝑗 (𝑧) = 2√𝑤 (𝑧) [ (𝑤 (𝑧) − cos (
with
𝜖𝑗 = sgn (sin (
2𝑗 − 1 ]
𝜋) + 1,
𝑁
]
Since V𝑗 (𝑧) = 1/𝑢𝑗 (𝑧), checking that |𝑢𝑗 (𝑧)| < 1 < |V𝑗 (𝑧)|
is equivalent to checking that |𝑢𝑗 (𝑧)|2 < |V𝑗 (𝑧)|2 ; that is,
𝐵𝑗 (𝑧) > 0. If sin(((2𝑗 − 1)/𝑁)𝜋) ≠ 0, we have 𝑎𝑗 (𝑧)𝑏𝑗 (𝑧) ≠ 0,
𝑎𝑗 (𝑧) = | sin(((2𝑗 − 1)/𝑁)𝜋)|/𝑏𝑗 (𝑧) and then
𝑢𝑗 (𝑧) = 1 + 𝜃𝑗 𝑤 (𝑧)
+ 𝜃𝑗 √𝑤 (𝑧) [𝑎𝑗 (𝑧) + 𝑖𝜖𝑗 𝑏𝑗 (𝑧)] ,
2𝑗 − 1
× [√ 𝑤(𝑧)2 − 4 cos (
𝜋) 𝑤 (𝑧) + 4
𝑁
[
(77)
They can be written as
− 𝜃𝑗 √𝑤 (𝑧) [𝑎𝑗 (𝑧) + 𝑖𝜖𝑗 𝑏𝑗 (𝑧)] ,
𝐴 𝑗 (𝑧) = 𝑤(𝑧)2 + 𝑤 (𝑧)
lim 𝜃𝑗 [𝑎𝑗 (𝑧) + 𝑖𝜖𝑗 𝑏𝑗 (𝑧)] = √2𝜑𝑗 ,
𝑧 → 1−
(84)
where we set 𝜑𝑗 = −𝑖e 𝑖((2𝑗−1)/2𝑁)𝜋 . The 𝜑𝑗 , 1 ≤ 𝑗 ≤ 𝑁, are
the (2𝑁)th roots of 𝜅𝑁 with positive real part: 𝜑𝑗2𝑁 = 𝜅𝑁 and
R(𝜑𝑗 ) > 0. As a result, we derive the asymptotics, which will
be used further,
𝑢𝑗 (𝑧) = 1 + 𝜀𝑗 (𝑧)
(80)
with 𝜀𝑗 (𝑧) ∼ − −
𝑧→1
𝜑𝑗
√𝑐
2𝑁
√1 − 𝑧 = O ( 2𝑁
√1 − 𝑧) .
2𝑁
(85)
Example 10. For 𝑁 = 1, the roots explicitly write as
𝑢1 (𝑧) = 1 + 𝑤 (𝑧) − 𝑎1 (𝑧) √𝑤 (𝑧),
V1 (𝑧) =
1
𝑢1 (𝑧)
(86)
Now, 𝐺𝑘 (𝑧) can be evaluated by residues theorem. Suppose first that 𝑘 ≥ 0 (then 𝑘 + 𝑁 − 1 ≥ 0) so that 0 is not a
pole in the integral defining 𝐺𝑘 (𝑧):
𝑁
𝐺𝑘 (𝑧) = ∑Res (
𝑗=1
with
𝑎1 (𝑧) = √𝑤 (𝑧) + 2.
=∑
(87)
𝑗=1 (𝜕𝑃/𝜕𝜁) (𝑧, 𝑢𝑗
If 𝑐 = 1/2, it can be simplified into
𝑢1 (𝑧) =
=
1 − √1 − 𝑧2
.
𝑧
(88)
For 𝑁 = 2, the roots explicitly write as
𝑢1 (𝑧) = [1 − 𝑏1 (𝑧) √𝑤 (𝑧)] − 𝑖 [𝑤 (𝑧) − 𝑎1 (𝑧) √𝑤 (𝑧)] ,
𝑢2 (𝑧) = 𝑢1 (𝑧),
V1 (𝑧) =
1
,
𝑢1 (𝑧)
V2 (𝑧) = V1 (𝑧),
(89)
𝑏1 (𝑧) =
1 √√
𝑤(𝑧)2 + 4 − 𝑤 (𝑧).
√2
(90)
𝑗=1 𝜁
1
,
V1 (𝑧) =
𝑢1 (𝑧)
1
V2 (𝑧) =
,
𝑢2 (𝑧)
𝑢3 (𝑧) = 𝑢1 (𝑧),
V3 (𝑧) = V1 (𝑧),
(91)
with
𝑎1 (𝑧) =
1 √√
𝑤(𝑧)2 − 2𝑤 (𝑧) + 4 + 𝑤 (𝑧) − 1,
√2
𝑏1 (𝑧) =
1 √√
𝑤(𝑧)2 − 2𝑤 (𝑧) + 4 − 𝑤 (𝑧) + 1,
√2
𝑎2 (𝑧) = √𝑤 (𝑧) + 2,
𝑤 (𝑧) =
√3 1 − 𝑧
.
3
2√𝑐𝑧
(94)
− 𝑢𝑗 (𝑧)
𝑁
+∑
𝑗=1 𝜁
V𝑗 (𝑧)
(95)
− V𝑗 (𝑧)
with
𝑖
[𝑤 (𝑧) √3 − (𝑎1 (𝑧) √3 + 𝑏1 (𝑧)) √𝑤 (𝑧)] ,
2
𝑢2 (𝑧) = 1 − 𝑤 (𝑧) − 𝑎2 (𝑧) √𝑤 (𝑧),
𝑢𝑗 (𝑧)
𝑁
𝐺 (𝑧, 𝜁) = ∑
1
[2 − 𝑤 (𝑧) − (𝑎1 (𝑧) − 𝑏1 (𝑧) √3) √𝑤 (𝑧)]
2
−
𝑁 1 − 𝑢 (𝑧)
1
𝑗
𝑢 (𝑧)|𝑘| .
∑
𝑁 (1 − 𝑧) 𝑗=1 1 + 𝑢𝑗 (𝑧) 𝑗
Remark 12. Another proof of Theorem 11 consists in expanding the rational fraction 𝜁 󳨃→ 𝐺(𝑧, 𝜁) into partial fractions. We
find it interesting to outline the main steps of this method. We
can write that
For 𝑁 = 3, the roots explicitly write as
𝑢1 (𝑧) =
𝑁 1 − 𝑢 (𝑧)
1
𝑗
𝑢 (𝑧)𝑘 .
∑
𝑁 (1 − 𝑧) 𝑗=1 1 + 𝑢𝑗 (𝑧) 𝑗
Theorem 11. For any 𝑘 ∈ Z, the generating function of the
P{𝑆𝑛 = 𝑘}, 𝑛 ∈ N, is given, for any 𝑧 ∈ (0, 1), by
𝐺𝑘 (𝑧) =
1 √√
𝑤(𝑧)2 + 4 + 𝑤 (𝑧),
𝑎1 (𝑧) =
√2
(93)
(𝑧))
The foregoing representation of 𝐺𝑘 (𝑧) is valid a priori for
any 𝑧 ∈ (0, 1/𝑀∞ ). Actually, in view of the expressions of
𝑤(𝑧) and 𝑢𝑗 (𝑧), we can see that (93) defines an analytical
function in the interval (0, 1). Since 𝐺𝑘 (𝑧) is a power series, by
analytical continuation, equality (93) holds true for any 𝑧 ∈
(0, 1). Moreover, by symmetry, we have that 𝐺𝑘 (𝑧) = 𝐺−𝑘 (𝑧)
for 𝑘 ≤ 0. We display this result in the theorem below.
with
√1 − 𝑧
𝑤 (𝑧) =
,
2√𝑐𝑧
𝑢𝑗 (𝑧)𝑘+𝑁−1
𝑁
1−𝑧
𝑤 (𝑧) =
,
2𝑐𝑧
𝜁𝑘+𝑁−1
, 𝜁 = 𝑢𝑗 (𝑧))
𝑃 (𝑧, 𝜁)
𝑢𝑗 (𝑧) =
V𝑗 (𝑧) =
V𝑗 (𝑧)
𝑢𝑗 (𝑧) 1 − 𝑢𝑗 (𝑧)
𝑁 (1 − 𝑧) 1 + 𝑢𝑗 (𝑧)
1 − V𝑗 (𝑧)
𝑁 (1 − 𝑧) 1 + V𝑗 (𝑧)
=−
,
1/𝑢𝑗 (𝑧) 1 − 𝑢𝑗 (𝑧)
𝑁 (1 − 𝑧) 1 + 𝑢𝑗 (𝑧)
.
We next expand the partial fractions 1/(𝜁 − 𝑢𝑗 (𝑧)) and 1/(𝜁 −
V𝑗 (𝑧)) into power series as follows. We have checked that
|𝑢𝑗 (𝑧)| < 1 < |V𝑗 (𝑧)| for any 𝑗 ∈ {1, . . . , 𝑁}. Now, if |𝑢𝑗 (𝑧)| <
|𝜁| < |V𝑗 (𝑧)| for any 𝑗 ∈ {1, . . . , 𝑁},
∞ 𝑢 (𝑧)𝑘
−1
𝜁𝑘
1
𝑗
,
= ∑ 𝑘+1 = ∑
𝑘+1
𝜁 − 𝑢𝑗 (𝑧) 𝑘=0 𝜁
𝑘=−∞ 𝑢𝑗 (𝑧)
(92)
(96)
∞
1
𝜁𝑘
= −∑
𝑘+1
𝜁 − V𝑗 (𝑧)
𝑘=0 V𝑗 (𝑧)
from which (94) can be easily extracted.
(97)
2.3. Limiting Pseudoprocess. In this section, by pseudoprocess
it is meant a continuous-time process driven by a signed
measure. Actually, this object is not properly defined on all
continuous times but only on dyadic times 𝑘/2𝑗 , 𝑗, 𝑘 ∈ N.
A proper definition consists in seeing it as the limit of a step
process associated with the observations of the pseudoprocess on the dyadic times. We refer the reader to [10, 18] for
precise details which are cumbersome to reproduce here.
Below, we give an ad hoc definition for the convergence
of a family of pseudoprocesses ((𝑋𝑡𝜀 )𝑡≥0 )𝜀>0 towards a pseudoprocess (𝑋𝑡 )𝑡≥0 .
Proof. (i) We begin by computing the Laplace-Fourier trans𝜀
form of 𝑋𝑡𝜀 . By definition of 𝑋𝑡𝜀 , we have that E( e 𝑖𝜇𝑋𝑡 ) =
E( e 𝑖𝜇𝜀𝑆⌊𝑡/𝜀2𝑁 ⌋ ) and then
Definition 13. Let ((𝑋𝑡𝜀 )𝑡≥0 )𝜀>0 be a family of pseudoprocesses
and (𝑋𝑡 )𝑡≥0 a pseudoprocess. We say that
(103)
(𝑋𝑡𝜀 )𝑡≥0
∫
+∞
0
𝜀
e−𝜆𝑡 E (e𝑖𝜇𝑋𝑡 ) d𝑡
󳨀→ (𝑋𝑡 )𝑡≥0
(𝑛+1)𝜀2𝑁
𝑛=0
𝑛𝜀2𝑁
e−𝜆𝑡 d𝑡) E (e𝑖𝜇𝜀𝑆𝑛 )
2𝑁
1 − e−𝜆𝜀 ∞ −𝜆𝜀2𝑁 𝑛
=
) E (e𝑖𝜇𝜀𝑆𝑛 )
∑ (e
𝜆
𝑛=0
2𝑁
=
(98)
𝜀 → 0+
∞
= ∑ (∫
1 − e−𝜆𝜀
𝜆
𝑛
2𝑁
𝑘
∑ (e−𝜆𝜀 ) (e𝑖𝜇𝜀 ) P {𝑆𝑛 = 𝑘}
𝑛∈N,𝑘∈Z
2𝑁
=
if and only if
2𝑁
1 − e−𝜆𝜀
𝐺 (e−𝜆𝜀 , e𝑖𝜇𝜀 ) .
𝜆
By (71), we have that
∗
∀𝑛 ∈ N ,
E( e
∀𝑡1 , . . . , 𝑡𝑛 ≥ 0,
𝑖 ∑𝑛𝑘=1 𝜇𝑘 𝑋𝑡𝜀
𝑘
∀𝜇1 , . . . , 𝜇𝑛 ∈ R,
) 󳨀→+ E ( e
𝜀→0
𝑖 ∑𝑛𝑘=1 𝜇𝑘 𝑋𝑡𝑘
(99)
).
𝑡 ≥ 0,
(100)
where ⌊⋅⌋ stands for the usual floor function. The quantity 𝑋𝑡𝜀
takes its values on the discrete set 𝜀Z. Roughly speaking, we
normalize the pseudorandom walk on the time × space grid
𝜀2𝑁N × 𝜀Z. Let (𝑋𝑡 )𝑡≥0 be the pseudo-Brownian motion. It is
characterized by the following property: for any 𝑛 ∈ N∗ , any
𝑡1 , . . . , 𝑡𝑛 ≥ 0 such that 𝑡1 < ⋅ ⋅ ⋅ < 𝑡𝑛 and any 𝜇1 , . . . , 𝜇𝑛 ∈ R,
𝑛
𝑛
2𝑁
(𝑡𝑘 −𝑡𝑘−1 )
+
2𝑁
𝑐4𝑁e−𝜆𝜀 sin2𝑁 (𝜇𝜀/2)
.
2𝑁
Actually, equality (104) is valid for 𝜆 such that e−𝜆𝜀 < 1/𝑀∞ ;
that is, 𝜆 > (ln 𝑀∞ )/𝜀2𝑁. Since 𝑐 is assumed not to be greater
than 1/22𝑁−1 , by (42), we have that 𝑀∞ = 1 and (104) is valid
for any 𝜆 > 0.
Now, by using the elementary asymptotics sin(𝜇𝜀/2) = +
= + 1 − 𝜆𝜀2𝑁 + 𝑜(𝜀), we obtain that
𝜀→0
2𝑁
𝐺 (e−𝜆𝜀 , e𝑖𝜇𝜀 ) ∼
𝜀 → 0+
1
1
.
𝜆 + 𝑐𝜇2𝑁 𝜀2𝑁
(105)
As a result, for any 𝜆 > 0,
+∞
𝜀→0
.
𝜀→0
2𝑁
𝜇𝜀/2 + 𝑜(𝜀) and e−𝜆𝜀
lim+ ∫
E ( e 𝑖 ∑𝑘=1 𝜇𝑘 𝑋𝑡𝑘 ) = e −𝑐 ∑𝑘=1 (𝜇1 +⋅⋅⋅+𝜇𝑘 )
1−
2𝑁
e−𝜆𝜀
(104)
This is the weak convergence of the finite-dimensional
projections of the family of pseudoprocesses.
In this part, we choose for the family ((𝑋𝑡𝜀 )𝑡≥0 )𝜀>0 the
continuous-time pseudoprocesses defined, for any 𝜀 > 0, by
𝑋𝑡𝜀 = 𝜀𝑆⌊𝑡/𝜀2𝑁 ⌋ ,
1
2𝑁
𝐺 (e−𝜆𝜀 , e𝑖𝜇𝜀 ) =
0
𝜀
e−𝜆𝑡 E (e𝑖𝜇𝑋𝑡 )d𝑡 =
(101)
+∞
2𝑁
1
= ∫ e−(𝜆+𝑐𝜇 )𝑡 d𝑡
2𝑁
𝜆 + 𝑐𝜇
0
(106)
from which and (101) we deduce that
We refer to [10, 18] for a proper definition of pseudoBrownian motion, and to references therein for interesting
properties of this pseudoprocess.
Theorem 14. Suppose that c ≤ 1/2^{2N−1}. The following convergence holds:
$$(X_t^\varepsilon)_{t \ge 0} \xrightarrow[\varepsilon \to 0^+]{} (X_t)_{t \ge 0}. \tag{102}$$
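Theorem 14 can be illustrated numerically through the one-dimensional Fourier transforms (our sketch, using the normalization X_t^ε = εS_{⌊t/ε^{2N}⌋} of (100)): by (45), E(e^{iµX_t^ε}) = f(µε)^{⌊t/ε^{2N}⌋}, and this quantity approaches e^{−cµ^{2N}t} as ε → 0⁺.

```python
from math import sin, exp, floor

def f(theta, N, c):
    # one-step Fourier transform, formula (39)
    return 1 - c * 4 ** N * sin(theta / 2) ** (2 * N)

N = 2
c, mu, t = 1 / 2 ** (2 * N - 1), 1.3, 0.7
exact = exp(-c * mu ** (2 * N) * t)          # E(e^{i mu X_t}) for pseudo-Brownian motion
for eps in (0.1, 0.05, 0.02, 0.01):
    n = floor(t / eps ** (2 * N))
    approx = f(mu * eps, N, c) ** n          # E(e^{i mu X_t^eps}) = f(mu*eps)^n
    print(eps, abs(approx - exact))          # the gap shrinks as eps -> 0+
```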
𝜀
lim+ E (e𝑖𝜇𝑋𝑡 ) = e−𝑐𝜇
2𝑁
𝑡
𝜀→0
= E (e𝑖𝜇𝑋𝑡 ) .
(107)
Notice that the Laplace-Fourier of 𝑋𝑡 takes the simple form
∫
+∞
0
e−𝜆𝑡 E (e𝑖𝜇𝑋𝑡 )d𝑡 =
1
.
𝜆 + 𝑐𝜇2𝑁
(108)
(ii) In the same way, we compute the Laplace-Fourier
𝜀
transform of 𝑋𝑡+𝜀
2𝑛 which will be used further. We have
𝜀
E(e𝑖𝜇𝑋𝑡+𝜀2𝑛 ) = E(e𝑖𝜇𝜀𝑆⌊𝑡/𝜀2𝑁 ⌋+1 ). Then
𝜀
+∞
∫
−𝜆𝑡
e
0
𝜀
𝑖𝜇𝑋𝑡+𝜀
2𝑛
E (e
∞
(𝑛+1)𝜀2𝑁
𝑛=0
𝑛𝜀2𝑁
= ∑ (∫
) d𝑡
𝜀→0
1−e
𝜆
2𝑁
=
e𝜆𝜀
We find it interesting to compute in a similar way the 𝜆potential of the pseudoprocess (𝑋𝑡 )𝑡≥0 . By definition of 𝑋𝑡𝜀 ,
we have, for any 𝛼, 𝛽 ∈ R such that 𝛼 < 𝛽, P{𝑋𝑡𝜀 ∈ [𝛼, 𝛽)} =
P{𝑆⌊𝑡/𝜀2𝑁 ⌋ ∈ [𝛼/𝜀, 𝛽/𝜀)}. Thus,
2𝑁
e𝜆𝜀
2𝑁
+∞
𝑛
𝑘
(e−𝜆𝜀 ) (e𝑖𝜇𝜀 ) P {𝑆𝑛 = 𝑘}
∑
∫
0
e−𝜆𝑡 P {𝑋𝑡𝜀 ∈ [𝛼, 𝛽)}d𝑡
𝑛∈N∗ ,𝑘∈Z
−1
𝜆
∞
(𝑛+1)𝜀2𝑁
𝑛=0
𝑛𝜀2𝑁
= ∑ (∫
2𝑁
[𝐺 (e−𝜆𝜀 , e𝑖𝜇𝜀 ) − 1] .
(109)
As for (107), we immediately extract the following limit:
(110)
(iii) We now compute the joint Fourier transform of
(𝑋𝑡𝜀1 , 𝑋𝑡𝜀2 ) for two times 𝑡1 , 𝑡2 such that 𝑡1 < 𝑡2 . Using the
elementary fact that ⌊𝑥⌋ − ⌊𝑦⌋ ∈ {⌊𝑥 − 𝑦⌋, ⌊𝑥 − 𝑦⌋ + 1}, we
observe that
]
1 − e−𝜆𝜀 ∞ −𝜆𝜀2𝑁 𝑛 [
) [
∑ (e
∑ P {𝑆𝑛 = 𝑘}]
[
]
𝜆
𝑛=0
𝑘∈Z:
(116)
𝛼/𝜀≤𝑘<𝛽/𝜀
[
]
2𝑁
1 − e−𝜆𝜀
=
𝜆
𝑑
(111)
𝜀
𝜀
E (e𝑖(𝜇1 𝑋𝑡1 +𝜇2 𝑋𝑡2 ) ) ∈ {E (e𝑖(𝜇1 +𝜇2 )𝑋𝑡1 ) E (e𝑖𝜇2 𝑋𝑡2 −𝑡1 ) ,
𝑖(𝜇1 +𝜇2 )𝑋𝑡𝜀1
E (e
1 − e−𝜆𝜀
𝜆
𝑖𝜇2 𝑋𝑡𝜀 −𝑡 +𝜀2𝑁
2 1
) E (e
𝑖𝜇2 𝑋𝑡𝜀
𝜀→0
)} .
(112)
2𝑁
2 −𝑡1 +𝜀
) = E (e𝑖𝜇2 𝑋𝑡2 −𝑡1 )
(113)
∫
+∞
0
lim E (e
𝜀 → 0+
𝑘∈Z:
𝛼/𝜀≤𝑘<𝛽/𝜀
𝐺𝑘 (e−𝜆𝜀 ) .
Suppose, for example, that 0 ≤ 𝛼 < 𝛽. Then,
⌈𝛽/𝜀⌉−1
∑
𝑢𝑗 (𝜆, 𝜀)|𝑘| = ∑ 𝑢𝑗 (𝜆, 𝜀)𝑘
𝑘=⌈𝛼/𝜀⌉
=
𝜀
𝜀→0
= E (e𝑖(𝜇1 +𝜇2 )𝑋𝑡1 ) × E (e𝑖𝜇2 𝑋𝑡2 −𝑡1 )
= E (e𝑖(𝜇1 𝑋𝑡1 +𝜇2 𝑋𝑡2 ) ) .
𝑢𝑗 (𝜆, 𝜀)
(118)
⌈𝛽/𝜀⌉
− 𝑢𝑗 (𝜆, 𝜀)
1 − 𝑢𝑗 (𝜆, 𝜀)
,
𝜀
= lim+ E (𝑒𝑖(𝜇1 +𝜇2 )𝑋𝑡1 ) × lim+ E (e𝑖𝜇2 𝑋𝑡2 −𝑡1 )
𝜀→0
(117)
1 𝑁 1 − 𝑢𝑗 (𝜆, 𝜀)
=
∑ 𝑢 (𝜆, 𝜀)|𝑘| .
∑
𝑁𝜆 𝑗=1 1 + 𝑢𝑗 (𝜆, 𝜀) 𝛼/𝜀≤𝑘<𝛽/𝜀 𝑗
⌈𝛼/𝜀⌉
)
𝑛
e−𝜆𝑡 P {𝑋𝑡𝜀 ∈ [𝛼, 𝛽)} d𝑡
𝛼/𝜀≤𝑘<𝛽/𝜀
which yields that
𝑖(𝜇1 𝑋𝑡𝜀1 +𝜇2 𝑋𝑡𝜀2 )
2𝑁
∑
−𝜆𝜀
justified by the fact that the series ∑∞
) is
𝑛=0 P{𝑆𝑛 = 𝑘}(e
2𝑁−1
absolutely convergent because of the condition 𝑐 ≤ 1/2
.
𝑛
= 1.
Indeed, by (53), for any 𝜆 > 0, |P{𝑆𝑛 = 𝑘}| < 𝑀∞
2𝑁
Put 𝑢𝑗 (𝜆, 𝜀) = 𝑢𝑗 (e−𝜆𝜀 ). This yields that
By (107) and (110), we obtain the following limit:
𝜀→0
𝑛=0
2𝑁
Then, we get, for 𝜇1 , 𝜇2 ∈ R, that
𝜀
𝑘∈Z:
𝛼/𝜀≤𝑘<𝛽/𝜀
𝑛
2𝑁
[ ∑ P {𝑆𝑛 = 𝑘} (e−𝜆𝜀 ) ]
Interchanging the two sums in the above computations is
= {𝑋𝑡𝜀2 −𝑡1 , 𝑋𝑡𝜀2 −𝑡1 +𝜀2𝑁 } .
lim+ E (e𝑖𝜇2 𝑋𝑡2 −𝑡1 ) = lim+ E (e
∞
∑
2𝑁
=
𝑋𝑡𝜀2 − 𝑋𝑡𝜀1 = 𝜀𝑆⌊𝑡2 /𝜀2𝑁 ⌋−⌊𝑡1 /𝜀2𝑁 ⌋ ∈ {𝜀𝑆⌊(𝑡2 −𝑡1 )/𝜀2𝑁 ⌋ , 𝜀𝑆⌊(𝑡2 −𝑡1 )/𝜀2𝑁 ⌋+1 }
𝜀
𝛼 𝛽
e−𝜆𝑡 d𝑡) P {𝑆𝑛 ∈ [ , )}
𝜀 𝜀
2𝑁
=
𝜀
lim E (e𝑖𝜇𝑋𝑡+𝜀2𝑁 ) = E (e𝑖𝜇𝑋𝑡 ) .
𝜀 → 0+
𝜀
(115)
The proof of Theorem 14 is complete.
e−𝜆𝑡 d𝑡) E (ei𝜇𝜀Sn+1 )
2𝑁
−𝜆𝜀2𝑁
𝜀
lim+ E (e𝑖(𝜇1 𝑋𝑡1 +⋅⋅⋅+𝜇𝑛 𝑋𝑡𝑛 ) ) = E (e𝑖(𝜇1 𝑋𝑡1 +⋅⋅⋅+𝜇𝑛 𝑋𝑡𝑛 ) ) .
1 − e−𝜆𝜀 ∞ −𝜆𝜀2𝑁 𝑛
=
) E (e𝑖𝜇𝜀𝑆𝑛+1 )
∑ (e
𝜆
𝑛=0
=
(iv) Finally, we can easily extend the foregoing limiting
result by recurrence as follows: for 𝑛 ∈ N∗ , 𝜇1 , . . . , 𝜇𝑛 ∈ R
and for any times 𝑡1 , . . . , 𝑡𝑛 such that 𝑡1 < ⋅ ⋅ ⋅ < 𝑡𝑛 ,
(114)
where ⌈⋅⌉ stands for the usual ceiling function. By using (85),
we deduce that
2𝑁 𝜆
𝑢𝑗 (𝜆, 𝜀) = + 1 − 𝜑𝑗 √ 𝜀 + 𝑜 (𝜀)
𝜀→0
𝑐
(119)
+
(𝑧) = 1 and for 𝑘 ∈ {1, . . . , 𝑁}, ℓ ∈ {1, . . . , 𝑁 − 1},
where 𝑠𝑘,0
which implies that
lim 𝑢𝑗 (𝜆, 𝜀)⌈𝛼/𝜀⌉ = e−𝜑𝑗
2𝑁
√𝜆/𝑐𝛼
𝜀 → 0+
.
+
𝑠𝑘,ℓ
(𝑧) =
(120)
Therefore,
lim ∫
+∞
𝜀 → 0+ 0
𝑝𝑘+ (𝑧) = ∏ [𝑢𝑘 (𝑧) − 𝑢𝑗 (𝑧)] .
e−𝜆𝑡 P {𝑋𝑡𝜀 ∈ [𝛼, 𝛽)}d𝑡
=
2𝑁
1 𝑁 −𝜑𝑗 2𝑁√𝜆/𝑐𝛼
√𝜆/𝑐𝛽
− e−𝜑𝑗
)
∑ (e
2𝑁𝜆 𝑗=1
=
𝛽
2𝑁
1 𝑁
2𝑁 𝜆
√𝜆/𝑐𝑥
d𝑥) .
∑ (𝜑𝑗 √ ∫ e−𝜑𝑗
2𝑁𝜆 𝑗=1
𝑐 𝛼
(121)
Proof. Pick an integer 𝑘 ≥ 𝑏. If 𝑆𝑛 = 𝑘, then an overshoot of
the threshold 𝑏 occurs before time 𝑛: 𝜎𝑏+ ≤ 𝑛. This remark and
the independence of the increments of the pseudorandom
walk entail that
P {𝑆𝑛 = 𝑘} = P {𝑆𝑛 = 𝑘, 𝜎𝑏+ ≤ 𝑛}
𝑛 𝑏+𝑁−1
= ∑ ∑ P {𝑆𝑛 = 𝑘, 𝜎𝑏+ = 𝑗, 𝑆𝑏+ = ℓ}
𝑗=0 ℓ=𝑏
Proposition 15. The 𝜆-potential of the pseudoprocess (𝑋𝑡 )𝑡≥0
is given by
∫
0
(125)
1≤𝑗≤𝑁
𝑗 ≠ 𝑘
The case 𝛼 < 𝛽 ≤ 0 is similar to treat. We have obtained the
following result.
+∞
𝑢𝑖1 (𝑧) ⋅ ⋅ ⋅ 𝑢𝑖ℓ (𝑧) ,
∑
1≤𝑖1 <⋅⋅⋅<𝑖ℓ ≤𝑁
𝑖1 ,...,𝑖ℓ ≠ 𝑘
𝑛 𝑏+𝑁−1
= ∑ ∑ P {𝜎𝑏+ = 𝑗, 𝑆𝑏+ = ℓ} P {𝑆𝑛−𝑗 = 𝑘 − ℓ} .
𝑗=0 ℓ=𝑏
P {𝑋𝑡 ∈ d𝑥}
e−𝜆𝑡 (
)d𝑡
d𝑥
(126)
𝑁
2𝑁
1
√𝜆/𝑐𝑥
{
{
∑ 𝜑𝑗 e−𝜑𝑗
{
2𝑁
1−1/(2𝑁)
{
{ 2𝑁 √𝑐 𝜆
𝑗=1
={
𝑁
{
2𝑁
1
√𝜆/𝑐𝑥
{
{
𝜑𝑗 e𝜑𝑗
∑
{−
2𝑁
1−1/(2𝑁)
√𝑐
2𝑁
𝜆
𝑗=1
{
if 𝑥 ≥ 0,
(122)
+
(𝑧) absolutely conSince the series defining 𝐺𝑘 (𝑧) and 𝐻𝑏,ℓ
verge, respectively, for 𝑧 ∈ (0, 1) and |𝑧| < 1/𝑀1 , and
since 𝑀1 ≥ 1, we can apply the generating function to the
convolution equality (126). We get, for 𝑧 ∈ (0, 1/𝑀1 ), that
if 𝑥 ≤ 0.
𝑏+𝑁−1
+
𝐺𝑘 (𝑧) = ∑ 𝐺𝑘−ℓ (𝑧) 𝐻𝑏,ℓ
(𝑧) .
(127)
ℓ=𝑏
3. Part II—First Overshooting Time of
a Single Threshold
3.1. On the Pseudodistribution of (𝜎𝑏+ , 𝑆𝑏+ ). Let 𝑏 ∈ N∗ . In
this section, we explicitly compute the generating function of
(𝜎𝑏+ , 𝑆𝑏+ ). Set, for ℓ ∈ {𝑏, 𝑏 + 1, . . . , 𝑏 + 𝑁 − 1},
+
𝐻𝑏,ℓ
𝜎𝑏+
(𝑧) = E (𝑧 1{𝑆𝑏+ =ℓ,𝜎𝑏+ <+∞} ) =
∑ P {𝜎𝑏+
𝑘∈N
=
𝑘, 𝑆𝑏+
𝑘
= ℓ} 𝑧 .
(123)
+
We are able to provide an explicit expression of 𝐻𝑏,ℓ
(𝑧). Before
tackling this problem, we need an a priori estimate for P{𝜎𝑏+ =
𝑘, 𝑆𝑏+ = ℓ}. By (62), we immediately derive that |P{𝜎𝑏+ = 𝑘,
𝑆𝑏+ = ℓ}| = |P{𝑆1 < 𝑏, . . . , 𝑆𝑘−1 < 𝑏, 𝑆𝑘 = ℓ}| ≤ 𝑀1𝑘 . Hence, the
+
(𝑧) absolutely converges for |𝑧| <
power series defining 𝐻𝑏,ℓ
1/𝑀1 .
E (𝑧 1
) = (−1)
ℓ−𝑏
𝑁
∑
𝑘=1
+
𝑠𝑘,ℓ−𝑏
(𝑧)
𝑝𝑘+ (𝑧)
𝑗=1
ℓ=𝑏
(𝑧)
𝑢𝑗 (𝑧)ℓ
− 1) = 0,
𝑢𝑘 (𝑧)𝑏+𝑁−1 ,
(124)
𝑘 ≥ 𝑏 + 𝑁 − 1.
(128)
Recalling that V𝑗 (𝑧)
=
+
𝛼𝑗 (𝑧)(∑𝑏+𝑁−1
𝐻𝑏,ℓ
(𝑧)V𝑗 (𝑧)ℓ
ℓ=𝑏
̃𝑗 (𝑧)𝑢𝑗 (𝑧)𝑘 = 0, 𝑘
∑𝑁
𝑗=1 𝛼
1/𝑢𝑗 (𝑧) and setting 𝛼̃𝑗 (𝑧)
=
− 1), system (128) reads
≥ 𝑏 + 𝑁 − 1. When limiting
the range of 𝑘 to the set {𝑏 + 𝑁, 𝑏 + 𝑁 + 1, . . . , 𝑏 + 2𝑁 − 1},
this becomes a homogeneous Vandermonde system whose
solution is trivial: 𝛼̃𝑗 (𝑧) = 0, 1 ≤ 𝑗 ≤ 𝑁. Thus, we get the
following Vandermonde system:
+
∑ 𝐻𝑏,ℓ
(𝑧) V𝑗 (𝑧)ℓ = 1,
Theorem 16. The pseudodistribution of (𝜎𝑏+ , 𝑆𝑏+ ) is characterized by the identity, valid for any 𝑧 ∈ (0, 1) and any ℓ ∈
{𝑏, 𝑏 + 1, . . . , 𝑏 + 𝑁 − 1},
{𝑆𝑏+ =ℓ,𝜎𝑏+ <+∞}
𝑏+𝑁−1 𝐻+
𝑏,ℓ
𝑁
∑𝛼𝑗 (𝑧) 𝑢𝑗 (𝑧)𝑘 ( ∑
𝑏+𝑁−1
3.1.1. Joint Pseudodistribution of (𝜎𝑏+ , 𝑆𝑏+ )
𝜎𝑏+
Using expression (94) of 𝐺𝑘 , namely, 𝐺𝑘 (𝑧)
=
𝑁
𝑘
∑𝑗=1 𝛼𝑗 (𝑧)𝑢𝑗 (𝑧) for 𝑘 ≥ 0, where 𝛼𝑗 (𝑧) = 1/[𝑁(1 − 𝑧)] ×
[1 − 𝑢𝑗 (𝑧)]/[1 + 𝑢𝑗 (𝑧)], we obtain that
1 ≤ 𝑗 ≤ 𝑁.
(129)
ℓ=𝑏
System (129) can be explicitly solved. In order to simplify the
settings, we will omit the variable 𝑧 in the sequel of the proof.
It is convenient to rewrite (129) as
𝑏+𝑁−1
+ ℓ−𝑏
V𝑗 = 𝑢𝑗𝑏 ,
∑ 𝐻𝑏,ℓ
ℓ=𝑏
1 ≤ 𝑗 ≤ 𝑁.
(130)
which is nothing but 𝑉(V1 , . . . , V𝑘−1 , 𝑥, V𝑘+1 , . . . , V𝑁), the value
of which is
Cramer’s formulae yield
+
𝐻𝑏,ℓ
=
𝑉ℓ (V1 , . . . , V𝑁)
,
𝑉 (V1 , . . . , V𝑁)
𝑏 ≤ ℓ ≤ 𝑏 + 𝑁 − 1,
(131)
where
∏ (V𝑗 − V𝑖 ) ∏ (𝑥 − V𝑖 ) ∏ (V𝑖 − 𝑥)
1≤𝑖<𝑗≤𝑁
𝑖,𝑗 ≠ 𝑘
=
󵄨󵄨1 V ⋅ ⋅ ⋅ V𝑁−1 󵄨󵄨
󵄨󵄨
󵄨󵄨
1
1
󵄨󵄨
󵄨󵄨
󵄨󵄨
𝑁−1 󵄨󵄨
󵄨󵄨1 V2 ⋅ ⋅ ⋅ V2 󵄨󵄨
𝑉 (V1 , . . . , V𝑁) = 󵄨󵄨󵄨 . .
.. 󵄨󵄨󵄨󵄨 = ∏ (V𝑗 − V𝑖 ) (132)
󵄨󵄨 .. ..
. 󵄨󵄨󵄨 1≤𝑖<𝑗≤𝑁
󵄨󵄨
󵄨󵄨
𝑁−1 󵄨󵄨
󵄨󵄨1 V𝑁 ⋅ ⋅ ⋅ V𝑁 󵄨󵄨
1≤𝑖≤𝑘−1
𝑘+1≤𝑖≤𝑁
∏1≤𝑖<𝑗≤𝑁 (V𝑗 − V𝑖 )
∏ 1≤𝑖≤𝑁 (V𝑘 − V𝑖 )
𝑖 ≠ 𝑘
𝑥 − V𝑖
V
1≤𝑖≤𝑁 𝑘 − V𝑖
= 𝑉 (V1 , . . . , V𝑁) ∏
= (−1)𝑁−1 𝑉 (V1 , . . . , V𝑁) ∏ (𝑢𝑘
1≤𝑖≤𝑁
𝑖 ≠ 𝑘
󵄨󵄨1 V ⋅ ⋅ ⋅ Vℓ−𝑏−1 𝑢𝑏 Vℓ−𝑏+1 ⋅ ⋅ ⋅ V𝑁−1 󵄨󵄨
󵄨󵄨
󵄨󵄨
1
1
1
1
1
󵄨󵄨
󵄨󵄨
󵄨
󵄨󵄨
󵄨󵄨1 V ⋅ ⋅ ⋅ Vℓ−𝑏−1 𝑢𝑏 Vℓ−𝑏+1 ⋅ ⋅ ⋅ V𝑁−1 󵄨󵄨󵄨
2
󵄨󵄨 .
2
2
2
2
𝑉ℓ (V1 , . . . , V𝑁) = 󵄨󵄨󵄨󵄨
󵄨
..
..
..
.. 󵄨󵄨󵄨
󵄨󵄨 .. ..
󵄨󵄨
󵄨󵄨 . .
.
.
.
.
󵄨
󵄨󵄨
󵄨󵄨1 V ⋅ ⋅ ⋅ Vℓ−𝑏−1 𝑢𝑏 Vℓ−𝑏+1 ⋅ ⋅ ⋅ V𝑁−1 󵄨󵄨󵄨
󵄨
𝑁
𝑁
𝑁
𝑁
𝑁 󵄨
(133)
𝑏
This last determinant can be expanded as ∑𝑁
𝑘=1 𝑢𝑘 𝑉𝑘ℓ (V1 , . . . ,
V𝑘−1 , V𝑘+1 , . . . , V𝑁) with, for 𝑘 ∈ {1, . . . , 𝑁},
⋅ ⋅ ⋅ V1ℓ−𝑏−1
..
.
V𝑘
V𝑘ℓ−𝑏−1
ℓ−𝑏−1
V𝑘−1 ⋅ ⋅ ⋅ V𝑘−1
⋅⋅⋅
ℓ−𝑏−1
V𝑘+1 ⋅ ⋅ ⋅ V𝑘+1
..
..
.
.
ℓ−𝑏−1
V 𝑁 ⋅ ⋅ ⋅ V𝑁
0 V1ℓ−𝑏+1 ⋅ ⋅ ⋅ V1𝑁−1 󵄨󵄨󵄨󵄨
..
..
.. 󵄨󵄨󵄨󵄨
.
.
. 󵄨󵄨󵄨
ℓ−𝑏+1
𝑁−1 󵄨󵄨
0 V𝑘−1 ⋅ ⋅ ⋅ V𝑘−1 󵄨󵄨
󵄨󵄨
󵄨
ℓ−𝑏+1
𝑁−1 󵄨󵄨󵄨
1 V𝑘
⋅ ⋅ ⋅ V𝑘 󵄨󵄨 .
󵄨󵄨
󵄨
ℓ−𝑏+1
𝑁−1 󵄨󵄨
0 V𝑘+1 ⋅ ⋅ ⋅ V𝑘+1 󵄨󵄨
󵄨󵄨
..
..
.. 󵄨󵄨󵄨
.
.
. 󵄨󵄨󵄨
ℓ−𝑏+1
𝑁−1 󵄨󵄨󵄨
0 V𝑁
⋅ ⋅ ⋅ V𝑁
󵄨
(134)
V1
..
.
V𝑘−1
𝑥
V𝑘+1
..
.
V𝑁
⋅ ⋅ ⋅ V1𝑁−1 󵄨󵄨󵄨󵄨
.. 󵄨󵄨󵄨󵄨
. 󵄨󵄨
𝑁−1 󵄨󵄨󵄨
⋅ ⋅ ⋅ V𝑘−1
󵄨󵄨
󵄨󵄨
𝑁−1 󵄨󵄨
⋅ ⋅ ⋅ 𝑥 󵄨󵄨
󵄨󵄨
󵄨
𝑁−1 󵄨󵄨
󵄨󵄨
⋅ ⋅ ⋅ V𝑘+1
󵄨
.. 󵄨󵄨󵄨
. 󵄨󵄨󵄨
𝑁−1 󵄨󵄨
⋅ ⋅ ⋅ V𝑁 󵄨󵄨
𝑢𝑘𝑁−1
𝑉 (V1 , . . . , V𝑁) ∏ (𝑢𝑖 𝑥 − 1) .
𝑝𝑘+
1≤𝑖≤𝑁
𝑖 ≠ 𝑘
Using the elementary expansion ∏ 1≤𝑖≤𝑁 (𝑢𝑖 𝑥 − 1)
𝑖 ≠ 𝑘
=
𝑁−1−ℓ +
𝑠𝑘,ℓ 𝑥ℓ , we obtain by identification that
∑𝑁−1
ℓ=0 (−1)
𝑉𝑘ℓ (V1 , . . . , V𝑘−1 , V𝑘+1 , . . . , V𝑁)
= (−1)ℓ−𝑏
𝑢𝑘𝑁−1 +
𝑠
𝑉 (V1 , . . . , V𝑁) .
𝑝𝑘+ 𝑘,ℓ−𝑏
(137)
Example 17. For 𝑁 = 1, the settings of Theorem 16 write
+
(𝑧) = 𝑝1+ (𝑧) = 1. Then, formula (124) reads
𝑠1,0
+
E (𝑧𝜎𝑏 1{𝑆𝑏+ =𝑏,𝜎𝑏+ <+∞} ) = 𝑢1 (𝑧)𝑏 ,
In fact, the quantity 𝑉𝑘ℓ (V1 , . . . , V𝑘−1 , V𝑘+1 , . . . , V𝑁) is the coefficient of 𝑥ℓ−𝑏 in the polynomial
󵄨󵄨1
󵄨󵄨
󵄨󵄨 .
󵄨󵄨 .
󵄨󵄨 .
󵄨󵄨
󵄨󵄨1
󵄨󵄨
󵄨󵄨
𝑥 󳨃󳨀→ 󵄨󵄨󵄨󵄨1
󵄨󵄨
󵄨󵄨
󵄨󵄨1
󵄨󵄨
󵄨󵄨󵄨 ...
󵄨󵄨
󵄨󵄨
󵄨󵄨1
= (−1)𝑁−1
𝑢𝑖 𝑥 − 1
)
𝑢𝑘 − 𝑢𝑖
+
(𝑧)
Plugging this expression into (131), we then derive for 𝐻𝑏,ℓ
representation (124) which is valid at least for 𝑧 ∈ (0, 1/𝑀1 ).
Finally, we observe that (124) defines an analytical function
+
(𝑧) is a power series. Thus, by analytical
in (0, 1) and that 𝐻𝑏,ℓ
continuation, (124) holds true for any 𝑧 ∈ (0, 1).
𝑉𝑘ℓ (V1 , . . . , V𝑘−1 , V𝑘+1 , . . . , V𝑁)
V1
..
.
(136)
𝑖 ≠ 𝑘
and, for any ℓ ∈ {𝑏, . . . , 𝑏 + 𝑁 − 1},
󵄨󵄨1
󵄨󵄨
󵄨󵄨 .
󵄨󵄨 .
󵄨󵄨 .
󵄨󵄨
󵄨󵄨1
󵄨󵄨
󵄨󵄨
󵄨󵄨
= 󵄨󵄨󵄨1
󵄨󵄨
󵄨󵄨
󵄨󵄨1
󵄨󵄨
󵄨󵄨
󵄨󵄨 ..
󵄨󵄨 .
󵄨󵄨
󵄨󵄨1
󵄨
∏ (𝑥 − V𝑖 )
1≤𝑖≤𝑁
𝑖 ≠ 𝑘
(138)
where 𝑢1 (𝑧) is given in Example 10. Of course, in this case,
the condition 𝑆𝑏+ = 𝑏 is redundant since we are dealing with
an ordinary random walk with jumps of one unity at most.
When 𝑐 = 1/2, this is the classical symmetric random walk
and (124) recovers the most well-known formula in random
walk theory:
𝑏
1 − √1 − 𝑧2
).
E (𝑧 1{𝜎𝑏+ <+∞} ) = (
𝑧
𝜎𝑏+
(135)
(139)
For 𝑁 = 2, the settings of Theorem 16 write
+
+
𝑠1,0
(𝑧) = 𝑠2,0
(𝑧) = 1,
+
𝑠1,1
(𝑧) = 𝑢2 (𝑧) ,
𝑝1+ (𝑧) = 𝑢1 (𝑧) − 𝑢2 (𝑧) ,
+
𝑠2,1
(𝑧) = 𝑢1 (𝑧) ,
𝑝2+ (𝑧) = 𝑢2 (𝑧) − 𝑢1 (𝑧) ,
(140)
where 𝑢1 (𝑧) and 𝑢2 (𝑧) are given in Example 10 and (124) reads
𝑢1 (𝑧)𝑏+1 − 𝑢2 (𝑧)𝑏+1
,
𝑢1 (𝑧) − 𝑢2 (𝑧)
+
E (𝑧𝜎𝑏 1{𝑆𝑏+ =𝑏,𝜎𝑏+ <+∞} ) =
+
𝑢 (𝑧) 𝑢2 (𝑧)𝑏+1 − 𝑢2 (𝑧) 𝑢1 (𝑧)𝑏+1
E (𝑧 1{𝑆𝑏+ =𝑏+1,𝜎𝑏+ <+∞} ) = 1
.
𝑢1 (𝑧) − 𝑢2 (𝑧)
(141)
Remark 18. We have the similar expression related to 𝜎𝑎−
below. The analogous system to (129) writes as
𝑎
1 ≤ 𝑗 ≤ 𝑁,
𝑏+𝑁−1
ℓ=𝑏
E (𝑧 1{𝑆𝑎− =ℓ,𝜎𝑎− <+∞} ) = (−1)
∑
−
𝑠𝑘,ℓ−𝑎
(𝑧)
𝑝𝑘− (𝑧)
𝑘=1
𝑎+𝑁−1
𝑢𝑘 (𝑧)
,
−
where 𝑠𝑘,0
(𝑧) = 1 and, for 𝑘 ∈ {1, . . . , 𝑁}, ℓ ∈ {1, . . . , 𝑁 − 1},
V𝑖1 (𝑧) ⋅ ⋅ ⋅ V𝑖ℓ (𝑧) ,
∑
(144)
1≤𝑖1 <⋅⋅⋅<𝑖ℓ ≤𝑁
𝑖1 ,...,𝑖ℓ ≠ 𝑘
𝑝𝑘− (𝑧) = ∏ [V𝑘 (𝑧) − V𝑗 (𝑧)] .
(145)
1≤𝑗≤𝑁
𝑗 ≠ 𝑘
The double generating function of
+
+
𝑏+𝑁−1
(𝜎𝑏+ , 𝑆𝑏+ )
defined by
+
E (𝑧𝜎𝑏 𝜁𝑆𝑏 1{𝜎𝑏+ <+∞} ) = ∑ E (𝑧𝜎𝑏 1{𝑆𝑏+ =ℓ} ) 𝜁ℓ
𝑘=1
ℓ=𝑏
𝑉𝑘ℓ (V1 , . . . , V𝑘−1 , V𝑘+1 , . . . , V𝑁) ℓ−𝑏
𝑏
𝜁 ) (𝑢𝑘 𝜁)
𝑉 (V1 , . . . , V𝑁)
𝑁
𝑉 (V1 , . . . , V𝑘−1 , 𝜁, V𝑘+1 , . . . , V𝑁)
𝑏
(𝑢𝑘 𝜁) .
𝑉
(V
,
.
.
.
,
V
)
1
𝑁
𝑘=1
(149)
It is clear that the quantity 𝑉(V1 , . . . , V𝑘−1 , 𝜁, V𝑘+1 , . . . , V𝑁)/
𝑉(V1 , . . . , V𝑁), which explicitly writes as
󵄨󵄨 1 V1 ⋅⋅⋅ V1𝑁−1 󵄨󵄨
󵄨󵄨
󵄨
󵄨󵄨 .. ..
.. 󵄨󵄨󵄨󵄨
󵄨󵄨 . .
󵄨󵄨
.
󵄨󵄨
󵄨󵄨 1 V𝑘−1 ⋅⋅⋅ V𝑁−1 󵄨󵄨󵄨
𝑘−1 󵄨
󵄨󵄨
󵄨󵄨 1 𝜁 ⋅⋅⋅ 𝜁𝑁−1 󵄨󵄨󵄨
󵄨󵄨
󵄨
󵄨󵄨 1 V ⋅⋅⋅ V𝑁−1 󵄨󵄨󵄨
𝑘+1 󵄨
󵄨󵄨 𝑘+1
󵄨
(150)
󵄨󵄨 . .
.. 󵄨󵄨󵄨
󵄨󵄨 . .
󵄨󵄨 . .
.𝑁−1 󵄨󵄨󵄨
󵄨󵄨 1 V𝑁 ⋅⋅⋅ V𝑁 󵄨󵄨
󵄨󵄨 1 V1 ⋅⋅⋅ V1𝑁−1 󵄨󵄨 ,
󵄨󵄨
󵄨
󵄨󵄨 . .
.. 󵄨󵄨󵄨󵄨
󵄨󵄨󵄨 .. ..
. 󵄨󵄨󵄨
󵄨󵄨 1 V ⋅⋅⋅ V𝑁−1
󵄨󵄨
󵄨 𝑁
𝑁
defines a polynomial of the variable 𝜁 of degree 𝑁 − 1 which
vanishes at V1 , . . . , V𝑘−1 ,V𝑘+1 , . . . , V𝑁 and equals 1 at V𝑘 . Hence,
by putting back the variable 𝑧, it coincides with the Lagrange
polynomial 𝐿 𝑘 (𝑧, 𝜁) and formula (147) immediately ensues.
Example 20. For 𝑁 = 2, (147) reads
admits an interesting representation by means of Lagrange
interpolating polynomials that we display in the theorem
below.
Theorem 19. The double generating function of (𝜎𝑏+ , 𝑆𝑏+ ) is
given, for any 𝑧 ∈ (0, 1/𝑀1 ) and 𝜁 ∈ C, by
+
𝑘=1
𝑏+𝑁−1
𝑉𝑘ℓ (V1 , . . . , V𝑘−1 , V𝑘+1 , . . . , V𝑁) ℓ
)𝜁
𝑉 (V1 , . . . , V𝑁)
(146)
ℓ=𝑏
+
ℓ=𝑏
𝑁
𝑉ℓ (V1 , . . . , V𝑁) ℓ
𝜁
𝑉 (V1 , . . . , V𝑁)
=∑
(143)
−
𝑠𝑘,ℓ
(𝑧) =
𝑁
= ∑( ∑
−
𝑁
ℓ=𝑏
𝑏+𝑁−1
= ∑ ( ∑ 𝑢𝑘𝑏
−
where 𝐻𝑎,ℓ
(𝑧) = E(𝑧𝜎𝑎 1{𝑆𝑎− =ℓ,𝜎𝑎− <+∞} ). The solution is given by
ℓ−𝑎
𝑏+𝑁−1
+ ℓ
= ∑ 𝐻𝑏,ℓ
𝜁 = ∑
(142)
ℓ=𝑎−𝑁+1
𝜎𝑎−
+
E (𝑧𝜎𝑏 𝜁𝑆𝑏 1{𝜎𝑏+ <+∞} )
𝜎𝑏+
−
∑ 𝐻𝑎,ℓ
(𝑧) 𝑢𝑗 (𝑧)ℓ = 1,
Proof. By (131) and by omitting the variable 𝑧 as previously
mentioned, we have that
𝑁
𝑏
E (𝑧𝜎𝑏 𝜁𝑆𝑏 1{𝜎𝑏+ <+∞} ) = ∑ 𝐿 𝑘 (𝑧, 𝜁) (𝑢𝑘 (𝑧) 𝜁) ,
(147)
𝑘=1
+
+
E (𝑧𝜎𝑏 𝜁𝑆𝑏 1{𝜎𝑏+ <+∞} )
= 𝜁𝑏 (𝑢1 (𝑧)𝑏
=
𝜁 − V2 (𝑧)
𝜁 − V1 (𝑧)
+ 𝑢2 (𝑧)𝑏
)
V1 (𝑧) − V2 (𝑧)
V2 (𝑧) − V1 (𝑧)
𝜁𝑏
[(𝑢1 (𝑧)𝑏+1 − 𝑢2 (𝑧)𝑏+1 )
𝑢1 (𝑧) − 𝑢2 (𝑧)
+ (𝑢1 (𝑧) 𝑢2 (𝑧)𝑏+1 − 𝑢2 (𝑧) 𝑢1 (𝑧)𝑏+1 ) 𝜁] .
(151)
where
𝐿 𝑘 (𝑧, 𝜁) = ∏
1≤𝑗≤𝑁 V𝑘
𝑗 ≠ 𝑘
𝜁 − V𝑗 (𝑧)
(𝑧) − V𝑗 (𝑧)
,
𝑘 ∈ {1, . . . , 𝑁} ,
This is in good agreement with the formulae of Example 17.
We retrieve a result of [21].
(148)
are the Lagrange interpolating polynomials with respect to the
variable 𝜁 such that 𝐿 𝑘 (𝑧, V𝑗 (𝑧)) = 𝛿𝑗𝑘 .
3.1.2. Pseudodistribution of 𝑆𝑏+ . In order to derive the pseudodistribution of 𝑆𝑏+ which is characterized by the numbers
+
(1), ℓ ∈ {𝑏, 𝑏 + 1, . . . , 𝑏 + 𝑁 − 1}, we solve the system
𝐻𝑏,ℓ
obtained by taking the limit in (129) as 𝑧 → 1− .
Lemma 21. The following system holds:
and second,
𝑏+𝑁−1
ℓ−𝑏
+
) 𝐻𝑏,ℓ
∑ (
(1)
𝑘
ℓ=𝑘+𝑏
𝑏+𝑘−1
= (−1) (
),
𝑏−1
𝑘
(152)
𝑏+𝑁−1
) 𝜀𝑗 (𝑧)𝑁 ∼ constant × √1 − 𝑧
𝑅𝑗 (𝑧) ∼ − (
𝑏−1
𝑧→1
(159)
which implies, for 𝑘 ∈ {0, . . . , 𝑁 − 1}, that
0 ≤ 𝑘 ≤ 𝑁 − 1.
̃𝑘 (𝑧) = O [(1 − 𝑧)1/2+(∑1≤𝑚≤𝑁−1,𝑚 ≠ 𝑘 𝑚)/(2𝑁) ]
𝑉
−
Proof. By (85), we have the expansion V𝑗 (𝑧) = 1/𝑢𝑗 (𝑧) = 1 −
√1 − 𝑧 ) for any 𝑗 ∈ {1, . . . , 𝑁}.
𝜀𝑗 (𝑧), where 𝜀𝑗 (𝑧) = − O( 2𝑁
𝑧→1
= 𝑜 [(1 − 𝑧)(𝑁−1)/4 ] .
𝑧→1
Putting this into (129), we get that
𝑏+𝑁−1
ℓ−𝑏
∑ (1 − 𝜀𝑗 (𝑧))
ℓ=𝑏
+
𝐻𝑏,ℓ
−𝑏
(𝑧) = (1 − 𝜀𝑗 (𝑧)) ;
(153)
that is,
𝑁−1
𝑏+𝑁−1
ℓ−𝑏
+
) 𝐻𝑏,ℓ
∑ (−1)𝑘 ( ∑ (
(𝑧)) 𝜀𝑗 (𝑧)𝑘
𝑘
𝑘=0
ℓ=𝑏+𝑘
∞
−𝑏
= ∑ (−1)𝑘 ( ) 𝜀𝑗 (𝑧)𝑘
𝑘
(154)
𝑘=0
∞
Therefore, lim𝑧 → 1− 𝑀𝑘 (𝑧) = 0. On the other hand, for 𝑧 ∈
(0, 1), referring to the definition of 𝑀𝑘 (𝑧), we can see that the
+
(𝑧) can be expressed as a linear combination of
quantity 𝐻𝑏,ℓ
+
(𝑧)
the 𝑀𝑘 (𝑧)’s plus a constant. Hence, the limit lim𝑧 → 1− 𝐻𝑏,ℓ
exists and, by appealing to a Tauberian theorem, it coincides
+
(1). This finishes the proof of (152).
with 𝐻𝑏,ℓ
Theorem 22. The pseudodistribution of S_b^+ is characterized by the following pseudo-probabilities: for any ℓ ∈ {b, b+1, ..., b+N−1},
$$\mathbb{P}\{S_b^+ = \ell,\, \sigma_b^+ < +\infty\} = (-1)^{b+\ell}\, \frac{b}{\ell} \binom{N-1}{\ell-b} \binom{b+N-1}{b}. \tag{161}$$
𝑏+𝑘−1
= ∑(
) 𝜀𝑗 (𝑧)𝑘 .
𝑏−1
𝑘=0
Set
Moreover, P{𝜎𝑏+ < +∞} = 1.
𝑏+𝑁−1
ℓ−𝑏
𝑏+𝑘−1
+
𝑀𝑘 (𝑧) = (−1)𝑘 ∑ (
) 𝐻𝑏,ℓ
),
(𝑧) − (
𝑘
𝑏−1
ℓ=𝑏+𝑘
∞
𝑏+ℓ−1
) 𝜀𝑗 (𝑧)ℓ .
𝑅𝑗 (𝑧) = ∑ (
𝑏−1
Proof. We explicitly solve system (152) rewritten as
(155)
𝑁−1
ℓ
𝑏+𝑘−1
+
),
∑ ( ) 𝐻𝑏,ℓ+𝑏
(1) = (−1)𝑘 (
𝑘
𝑏−1
0 ≤ 𝑘 ≤ 𝑁 − 1.
ℓ=𝑘
ℓ=𝑁
(162)
Then, equality (154) reads
𝑁−1
𝑘
∑ 𝑀𝑘 (𝑧) 𝜀𝑗 (𝑧) = 𝑅𝑗 (𝑧) ,
1 ≤ 𝑗 ≤ 𝑁.
(156)
𝑘=0
This is a Vandermonde system, the solution of which is given
̃𝑘 (𝑧)/𝑉(𝑧),
̃
by 𝑀𝑘 (𝑧) = 𝑉
0 ≤ 𝑘 ≤ 𝑁 − 1, where
󵄨󵄨󵄨1 𝜀1 (𝑧) ⋅ ⋅ ⋅ 𝜀1 (𝑧)𝑁−1 󵄨󵄨󵄨
󵄨󵄨
󵄨󵄨
󵄨󵄨
󵄨
󵄨󵄨1 𝜀 (𝑧) ⋅ ⋅ ⋅ 𝜀 (𝑧)𝑁−1 󵄨󵄨󵄨
2
2
󵄨
󵄨󵄨 ,
̃
𝑉 (𝑧) = 󵄨󵄨󵄨 .
󵄨󵄨
..
..
󵄨󵄨 ..
󵄨󵄨
.
.
󵄨󵄨
󵄨󵄨
󵄨󵄨
𝑁−1 󵄨󵄨
󵄨󵄨1 𝜀𝑁 (𝑧) ⋅ ⋅ ⋅ 𝜀𝑁(𝑧) 󵄨󵄨
̃𝑘 (𝑧)
𝑉
󵄨󵄨1 𝜀 (𝑧) ⋅ ⋅ ⋅ 𝜀 (𝑧)𝑘−1 𝑅 (𝑧) 𝜀 (𝑧)𝑘+1 ⋅ ⋅ ⋅ 𝜀 (𝑧)𝑁−1 󵄨󵄨
󵄨󵄨
󵄨󵄨 1
1
1
1
1
󵄨
󵄨󵄨
󵄨󵄨1 𝜀 (𝑧) ⋅ ⋅ ⋅ 𝜀 (𝑧)𝑘−1 𝑅 (𝑧) 𝜀 (𝑧)𝑘+1 ⋅ ⋅ ⋅ 𝜀 (𝑧)𝑁−1 󵄨󵄨󵄨
2
2
2
2
2
󵄨󵄨 .
󵄨
= 󵄨󵄨󵄨 .
󵄨󵄨
..
..
..
..
..
󵄨󵄨
󵄨󵄨 ..
.
.
.
.
.
󵄨
󵄨󵄨
󵄨󵄨1 𝜀 (𝑧) ⋅ ⋅ ⋅ 𝜀 (𝑧)𝑘−1 𝑅 (𝑧) 𝜀 (𝑧)𝑘+1 ⋅ ⋅ ⋅ 𝜀 (𝑧)𝑁−1 󵄨󵄨󵄨
󵄨
󵄨 𝑁
𝑁
𝑁
𝑁
𝑁
(157)
Since, by (85), 𝜀𝑗 (𝑧)
∼
𝑧 → 1−
{1, . . . , 𝑁}, we have that
̃ (𝑧) = ∏
𝑉
1≤ℓ<𝑚≤𝑁
√1 − 𝑧 for any 𝑗 ∈
constant × 2𝑁
[𝜀𝑚 (𝑧) − 𝜀ℓ (𝑧)]
∼ − constant × (1 − 𝑧)(𝑁−1)/4
𝑧→1
(160)
(158)
The matrix of the system is (( 𝑘ℓ ))0≤𝑘,ℓ≤𝑁−1 which admits
((−1)𝑘+ℓ ( 𝑘ℓ ))0≤𝑘,ℓ≤𝑁−1 as an inverse with the convention of
settings ( 𝑘ℓ ) = 0 if 𝑘 > ℓ. The solution of the system is given,
for ℓ ∈ {0, 1, . . . , 𝑁 − 1}, by
𝑁−1
𝑘
𝑏+𝑘−1
+
𝐻𝑏,ℓ+𝑏
)
(1) = ∑ (−1)𝑘+ℓ ( ) × (−1)𝑘 (
ℓ
𝑏−1
𝑘=ℓ
𝑁−1
𝑏+ℓ−1
𝑏+𝑘−1
= (−1)ℓ (
)∑(
)
𝑏−1
𝑏+ℓ−1
𝑘=ℓ
(163)
𝑏+ℓ−1 𝑏+𝑁−1
)(
= (−1)ℓ (
)
𝑏−1
𝑏+ℓ
= (−1)ℓ
𝑏
𝑁−1 𝑏+𝑁−1
(
)(
).
ℓ
𝑏
ℓ+𝑏
This proves (161). Now, by summing the P{𝑆𝑏+ = ℓ, 𝜎𝑏+ < +∞},
𝑏 ≤ ℓ ≤ 𝑏 + 𝑁 − 1, given by (161), we obtain that
𝑏+𝑁−1
(−1)ℓ 𝑁 − 1
𝑏+𝑁−1
P {𝜎𝑏+ < +∞} = (−1)𝑏 𝑏 (
(
) ∑
)
𝑏
ℓ−𝑏
ℓ
ℓ=𝑏
𝑁−1
(−1)ℓ 𝑁 − 1
𝑏+𝑁−1
(
= 𝑏(
).
)∑
ℓ
𝑏
ℓ+𝑏
ℓ=0
(164)
1
Writing 1/(ℓ + 𝑏) = ∫0 𝑥ℓ+𝑏−1 d𝑥, we see that
𝑁−1
(−1)ℓ 𝑁 − 1
(
)
ℓ
ℓ=0 ℓ + 𝑏
∑
𝑁−1
1
𝑁−1 ℓ
= ∫ ( ∑ (−1)ℓ (
) 𝑥 ) 𝑥𝑏−1 d𝑥
ℓ
0
(165)
ℓ=0
1
= ∫ (1 − 𝑥)𝑁−1 𝑥𝑏−1 d𝑥 =
0
(𝑁 − 1)! (𝑏 − 1)!
.
(𝑏 + 𝑁 − 1)!
3.1.3. Pseudomoments of 𝑆𝑏+ . In the sequel, we use the notation
(𝑖)𝑛 = 𝑖(𝑖 − 1)(𝑖 − 2) ⋅ ⋅ ⋅ (𝑖 − 𝑛 + 1) for any 𝑖 ∈ Z and any 𝑛 ∈ N∗
and (𝑖)0 = 1. Of course, (𝑖)𝑛 = 𝑖!/(𝑖 − 𝑛)! and (𝑖)𝑛 /𝑛! = ( 𝑛𝑖 ) if
𝑖 ≥ 𝑛. We also use the conventions 1/𝑖! = 0 for any negative
𝑗
integer 𝑖 and ∑𝑘=𝑖 = 0 if 𝑖 > 𝑗.
In this section, we compute several functionals related
to the pseudomoments of 𝑆𝑏+ . More precisely, we provide formulae for E[(𝑆𝑏+ − 𝛽)𝑛 ] (Theorem 25), E[(𝑆𝑏+ − 𝑏)𝑛 ]
(Corollary 26), E[(𝑆𝑏+ )𝑛 ], and E[(𝑆𝑏+ )𝑛 ] (Theorem 27).
1
Hence P{𝜎𝑏+ < +∞} = 1. The proof of Theorem 22 is finished.
In the sequel, when considering 𝑆𝑏+ , we will omit the
condition 𝜎𝑏+ < +∞.
Example 23. Let us have a look on the particular values
1, 2, 3, 4 of 𝑁.
Putting the elementary identity 1/(ℓ + 𝑏) = ∫0 𝑥ℓ+𝑏−1 d𝑥
into the equality
𝑏+𝑁−1
𝑏 𝑁−1
𝑏+𝑁−1
) ∑ (−1)𝑏+ℓ (
) 𝑓 (ℓ)
𝑏
ℓ ℓ−𝑏
E [𝑓 (𝑆𝑏+ )] = (
ℓ=𝑏
𝑁−1
𝑁 − 1 𝑓 (ℓ + 𝑏)
𝑏+𝑁−1
,
)
= 𝑏(
) ∑ (−1)ℓ (
ℓ
𝑏
ℓ+𝑏
ℓ=0
(i) Case 𝑁 = 1. Evidently, in this case 𝑆𝑏+ = 𝑏 and then
P {𝑆𝑏+ = 𝑏} = 1.
(170)
(166)
This is the case of the ordinary random walk!
we get the following integral representation of E[𝑓(𝑆𝑏+ )].
(ii) Case 𝑁 = 2. In this case the pseudorandom variables
𝜉𝑛 , 𝑛 ∈ N∗ , have two-valued upward jumps. Then the
overshooting place must be either 𝑏 or 𝑏 + 1: 𝑆𝑏+ ∈ {𝑏, 𝑏 + 1}.
We have that
Theorem 24. For any function 𝑓 defined on {𝑏, . . . , 𝑏 + 𝑁 − 1},
P {𝑆𝑏+ = 𝑏} = 𝑏 + 1,
P {𝑆𝑏+ = 𝑏 + 1} = −𝑏.
𝑏+𝑁−1
E [𝑓 (𝑆𝑏+ )]= 𝑏 (
)
𝑏
(167)
Of course, we immediately see that P{𝑆𝑏+ = 𝑏} + P{𝑆𝑏+ =
𝑏 + 1} = 1.
𝑁−1
1
𝑁−1
× ∫ ( ∑(−1)ℓ (
)𝑓 (ℓ + 𝑏) 𝑥ℓ) 𝑥𝑏−1 d𝑥.
ℓ
0
ℓ=0
(171)
(iii) Case 𝑁 = 3. In this case 𝑆𝑏+ ∈ {𝑏, 𝑏 + 1, 𝑏 + 2} and
P {𝑆𝑏+ = 𝑏} =
1
(𝑏 + 1) (𝑏 + 2) ,
2
P {𝑆𝑏+ = 𝑏 + 1} = −𝑏 (𝑏 + 2) ,
P {𝑆𝑏+
Theorem 25. For any integers 𝑛 ≥ 0 and 𝛽, the factorial
pseudo-moment of (𝑆𝑏+ − 𝛽) of order 𝑛 is given by
(168)
1
= 𝑏 + 2} = 𝑏 (𝑏 + 1) .
2
We can easily check that
𝑏 + 2} = 1.
P{𝑆𝑏+
= 𝑏} +
P{𝑆𝑏+
= 𝑏 + 1} +
P{𝑆𝑏+
=
(iv) Case N = 4. In this case S_b^+ ∈ {b, b+1, b+2, b+3} and
$$\mathbb{P}\{S_b^+ = b\} = \tfrac{1}{6}(b+1)(b+2)(b+3), \qquad \mathbb{P}\{S_b^+ = b+1\} = -\tfrac{1}{2}\, b(b+2)(b+3),$$
$$\mathbb{P}\{S_b^+ = b+2\} = \tfrac{1}{2}\, b(b+1)(b+3), \qquad \mathbb{P}\{S_b^+ = b+3\} = -\tfrac{1}{6}\, b(b+1)(b+2). \tag{169}$$
We can easily check that P{S_b^+ = b} + P{S_b^+ = b+1} + P{S_b^+ = b+2} + P{S_b^+ = b+3} = 1.
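The four cases of Example 23 all follow from formula (161). Here is a short Python check (ours, not the paper's) recovering them and the identity P{σ_b^+ < +∞} = 1; the helper name is ours.

```python
from math import comb

def overshoot_law(N, b):
    # P{S_b^+ = l, sigma_b^+ < +infinity}, l = b, ..., b+N-1, from (161)
    return {l: (-1) ** (b + l) * (b / l) * comb(N - 1, l - b) * comb(b + N - 1, b)
            for l in range(b, b + N)}

for N in (1, 2, 3, 4):
    for b in (1, 2, 5):
        assert abs(sum(overshoot_law(N, b).values()) - 1) < 1e-9   # P{sigma_b^+ < +infinity} = 1

# case (iv) with N = 4, b = 1: values 4, -6, 4, -1 as predicted by the displayed formulas
law = overshoot_law(4, 1)
assert all(abs(law[l] - v) < 1e-9 for l, v in {1: 4, 2: -6, 3: 4, 4: -1}.items())
```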
E [(𝑆𝑏+ − 𝛽)𝑛 ]
(𝑏 − 𝛽)! 𝑛∧(𝑁−1)
(𝑘 + 𝑏 − 1)!
𝑛
{
{
( )
∑
{
(−1)𝑘
{
𝑘
{
−
1)!
(𝑏
(𝑘
+
𝑏
−
𝛽
−
𝑛)!
{
{
𝑘=0∨(𝑛+𝛽−𝑏)
{
{
{
{
if 𝛽 ≤ 𝑏,
{
{
{
(−1)𝑛
={
{
(𝑏 − 1)! (𝛽 − 𝑏 − 1)!
{
{
{
𝑛∧(𝑁−1)
{
{
𝑛
{
{
×
∑ (𝑘 + 𝑏 − 1)! (𝛽 − 𝑏 + 𝑛 − 𝑘 − 1)! ( )
{
{
𝑘
{
{
𝑘=0
{
if 𝛽 ≥ 𝑏 + 1.
{
(172)
If 𝑛 ≤ 𝑁 − 1, we simply have that
E [(𝑆𝑏+ − 𝛽)𝑛 ] = (−𝛽)𝑛 = (−1)𝑛 𝛽 (𝛽 + 1) ⋅ ⋅ ⋅ (𝛽 + 𝑛 − 1) .
(173)
Proof. By (171), we have that
E [(𝑆𝑏+ − 𝛽)𝑛 ]
𝑏+𝑁−1
= 𝑏(
)
𝑏
𝑁−1
1
𝑁−1
× ∫ ( ∑ (−1)ℓ (
) (ℓ + 𝑏 − 𝛽)𝑛 𝑥ℓ+𝑏−1 ) d𝑥.
ℓ
0
ℓ=0
(174)
Next, by observing that (ℓ + 𝑏 − 𝛽)𝑛 𝑥ℓ+𝑏−1 = 𝑥𝑛+𝛽−1 ×
(d 𝑛 / d𝑥𝑛 )(𝑥ℓ+𝑏−𝛽 ), we obtain that
(𝑏 − 𝛽)! (𝑁 − 1)!
{
{
{
{
(𝑏 + 𝑁 − 1)!
{
{
{
𝑛∧(𝑁−1)
{
(𝑘 + 𝑏 − 1)!
{
𝑛
{
{
( )
×
∑
(−1)𝑘
{
{
(𝑘 + 𝑏 − 𝛽 − 𝑛)! 𝑘
{
{
𝑘=0∨(𝑛+𝛽−𝑏)
{
{
{
if 𝛽 ≤ 𝑏,
={
{
(−1)𝑛 (𝑁 − 1)!
{
{
{
{
(𝛽 − 𝑏 − 1)! (𝑏 + 𝑁 − 1)!
{
{
{
{
𝑛∧(𝑁−1)
{
𝑛
{
{
{
×
∑ (𝑘 + 𝑏 − 1)! (𝛽 − 𝑏 + 𝑛 − 𝑘 − 1)! ( )
{
𝑘
{
{
𝑘=0
{
if 𝛽 ≥ 𝑏 + 1.
{
(177)
Finally, plugging (177) into (175) and (174) yields (172).
Assume now that 𝑛 ≤ 𝑁 − 1 and 𝛽 ≤ 𝑏. If 𝑛 ≥ 1 − 𝛽, we
can write in (172) that
𝑁−1
𝑁−1
) (ℓ + 𝑏 − 𝛽)𝑛 𝑥ℓ+𝑏−1
∑ (−1)ℓ (
ℓ
ℓ=0
𝑁−1
𝑛
𝑁−1 d
= ∑ ((−1) (
(𝑥ℓ+𝑏−𝛽 )) 𝑥𝑛+𝛽−1
)
𝑛
ℓ
d𝑥
ℓ=0
󵄨󵄨
d𝛽+𝑛−1
(𝑘 + 𝑏 − 1)!
𝑘+𝑏−1 󵄨󵄨
󵄨󵄨 .
(𝑥
)
=
󵄨󵄨
(𝑘 + 𝑏 − 𝛽 − 𝑛)!
d𝑥𝛽+𝑛−1
󵄨𝑥=1
ℓ
d 𝑛 𝑁−1
𝑁 − 1 ℓ+𝑏−𝛽
=
( ∑ (−1)ℓ (
) 𝑥𝑛+𝛽−1
)𝑥
𝑛
ℓ
d𝑥
ℓ=0
(175)
Then,
𝑛∧(𝑁−1)
(𝑘 + 𝑏 − 1)!
𝑛
( )
∑
(−1)𝑘
𝑘
(𝑘
+
𝑏
−
𝛽
−
𝑛)!
𝑘=0∨(𝑛+𝛽−𝑏)
d𝑛
=
((1 − 𝑥)𝑁−1 𝑥𝑏−𝛽 ) 𝑥𝑛+𝛽−1 .
d𝑥𝑛
𝑛
= ∑ (−1)𝑘
Applying Leibniz rule to (175), we see that
𝑘=0
𝑁−1
𝑁−1
) (ℓ + 𝑏 − 𝛽)𝑛 𝑥ℓ+𝑏−1
∑ (−1)ℓ (
ℓ
=
ℓ=0
𝑘
d 𝑛−𝑘
𝑛 d
𝑁−1
= (∑ ( )
((1
−
𝑥)
)
(𝑥𝑏−𝛽 )) 𝑥𝑛+𝛽−1
𝑘 d𝑥𝑘
d𝑥𝑛−𝑘
𝑘=0
𝑛∧(𝑁−1)
𝑛
∑ (−1)𝑘 ( ) (𝑁 − 1)𝑘 (𝑏 − 𝛽)𝑛−𝑘
𝑘
0
ℓ=0
𝑁−1
) (ℓ + 𝑏 − 𝛽)𝑛 𝑥ℓ+𝑏−1 ) d𝑥
ℓ
󵄨󵄨
󵄨󵄨
× ((1 − 𝑥)𝑛−𝑘 𝑥𝑘+𝑏−𝛽−𝑛 ) 󵄨󵄨󵄨
󵄨󵄨
󵄨𝑥=1
= (−1)𝑛
(𝑏 − 1)! (𝛽 + 𝑛 − 1)!
(𝑏 − 𝛽)! (𝛽 − 1)!
= (−1)𝑛
(𝑏 − 1)!
𝛽 (𝛽 + 1) ⋅ ⋅ ⋅ (𝛽 + 𝑛 − 1) .
(𝑏 − 𝛽)!
𝑛∧(𝑁−1)
𝑛
∑ (−1)𝑘 (𝑁 − 1)𝑘 (𝑏 − 𝛽)𝑛−𝑘 ( )
𝑘
𝑘=0
1
× ∫ (1 − 𝑥)𝑁−1−𝑘 𝑥𝑘+𝑏−1 d𝑥
0
=
(179)
𝛽+𝑛−1
= (−1)𝑛 (𝑛)𝑛 (𝑏 − 1)𝛽−1 (
)
𝑛
∫ ( ∑ (−1)ℓ (
=
d
d𝑥𝛽+𝑛−1
󵄨󵄨
󵄨
((1 − 𝑥)𝑛 𝑥𝑏−1 )󵄨󵄨󵄨󵄨
󵄨󵄨𝑥=1
𝑘=0
(176)
𝑁−1
=
𝛽+𝑛−1
𝑛
× (1 − 𝑥)𝑁−1−𝑘 𝑥𝑘+𝑏−1 .
1
󵄨󵄨
𝑛
d𝛽+𝑛−1
𝑘 𝑛
𝑘
𝑏−1 󵄨󵄨󵄨
[(
(
)
𝑥
]
)
𝑥
∑
(−1)
󵄨󵄨
𝑘
󵄨󵄨
d𝑥𝛽+𝑛−1
𝑘=0
󵄨
𝛽+𝑛−1
= ∑ (−1)𝑘 (𝑛)𝑘 (𝑏 − 1)𝛽+𝑛−𝑘−1 (
)
𝑘
𝑘=0
Therefore,
(𝑘 + 𝑏 − 1)!
𝑛
( )
(𝑘 + 𝑏 − 𝛽 − 𝑛)! 𝑘
𝑥=1
𝑛
=
(178)
𝑛∧(𝑁−1)
𝑛
∑ (−1)𝑘 (𝑁 − 1)𝑘 (𝑏 − 𝛽)𝑛−𝑘 ( )
𝑘
𝑘=0
×
(𝑁 − 1 − 𝑘)! (𝑘 + 𝑏 − 1)!
(𝑏 + 𝑁 − 1)!
Putting (179) into (172) yields (173). If 𝑛 ≤ −𝛽 (which requires
that 𝛽 ≤ 0), in (172), we write instead that
1
1
(𝑘 + 𝑏 − 1)!
=
∫ 𝑥𝑘+𝑏−1 (1 − 𝑥)−𝑛−𝛽 d𝑥.
(𝑘 + 𝑏 − 𝛽 − 𝑛)! (−𝑛 − 𝛽)! 0
(180)
Then,
We immediately obtain the following particular result which
will be used in Theorem 28.
𝑛∧(𝑁−1)
(𝑘 + 𝑏 − 1)!
𝑛
( )
∑
(−1)𝑘
𝑘
(𝑘
+
𝑏
−
𝛽
−
𝑛)!
𝑘=0∨(𝑛+𝛽−𝑏)
𝑛
= ∑ (−1)𝑘
𝑘=0
Corollary 26. The factorial pseudomoments of (𝑆𝑏+ − 𝑏) are
given by
(𝑘 + 𝑏 − 1)!
𝑛
( )
(𝑘 + 𝑏 − 𝛽 − 𝑛)! 𝑘
E [(𝑆𝑏+ − 𝑏)𝑛 ] = {
1
=
(−𝑛 − 𝛽)!
𝑛
1
if 0 ≤ 𝑛 ≤ 𝑁 − 1,
if 𝑛 ≥ 𝑁.
(185)
The above identity can be rewritten, if 0 ≤ 𝑛 ≤ 𝑁 − 1, as
𝑛
× ∫ ( ∑ (−1) ( ) 𝑥𝑘 ) 𝑥𝑏−1 (1 − 𝑥)−𝑛−𝛽 d𝑥
𝑘
0
𝑘
𝑘=0
𝑛+𝑏−1
𝑆+ − 𝑏
)] = (−1)𝑛 (
E [( 𝑏
).
𝑏−1
𝑛
1
=
1
∫ 𝑥𝑏−1 (1 − 𝑥)−𝛽 d𝑥
(−𝑛 − 𝛽)! 0
=
(𝑏 − 1)! (−𝛽)!
(𝑏 − 𝛽)! (−𝑛 − 𝛽)!
= (−1)𝑛
(−𝑏)𝑛
0
(𝑏 − 1)!
𝛽 (𝛽 + 1) ⋅ ⋅ ⋅ (𝛽 + 𝑛 − 1) .
(𝑏 − 𝛽)!
(181)
Putting (181) into (172) yields (173) in this case too.
Assume finally that 𝑛 ≤ 𝑁 − 1 and 𝛽 ≥ 𝑏 + 1. We write in
(172) that
(𝑘 + 𝑏 − 1)! (𝛽 − 𝑏 + 𝑛 − 𝑘 − 1)!
1
= (𝛽 + 𝑛 − 1)! ∫ 𝑥𝑘+𝑏−1 (1 − 𝑥)𝛽−𝑏+𝑛−𝑘−1 d𝑥.
(182)
Moreover, since 𝑆𝑏+ ∈ {𝑏, 𝑏 + 1, . . . , 𝑏 + 𝑁 − 1}, it is clear that
(𝑆𝑏+ − 𝑏)(𝑆𝑏+ − 𝑏 − 1) ⋅ ⋅ ⋅ (𝑆𝑏+ − 𝑏 − 𝑁 + 1) = 0 which immediately
entails that (𝑆𝑏+ − 𝑏)𝑛 = 0 for any 𝑛 ≥ 𝑁; then E[(𝑆𝑏+ − 𝑏)𝑛 ] = 0
for 𝑛 ≥ 𝑁 as stated in Corollary 26.
By choosing 𝛽 = 0 in Theorem 25, we plainly extract that
E[(𝑆𝑏+ )𝑛 ] = 0 for 𝑛 ∈ {1, . . . , 𝑁 − 1}. Moreover, as previously
mentioned, (𝑆𝑏+ )𝑛 = 0 for any 𝑛 ≥ 𝑏 + 𝑁; then E[(𝑆𝑏+ )𝑛 ] =
0 for 𝑛 ≥ 𝑏 + 𝑁. Actually, we can compute the factorial
pseudomoments of 𝑆𝑏+ , E[(𝑆𝑏+ )𝑛 ], for 𝑛 ∈ {𝑁, 𝑁+1, . . . , 𝑏+𝑁}.
The formula of Theorem 25 seems to be untractable, so we
provide another way for evaluating them.
Theorem 27. The factorial pseudomoments of 𝑆𝑏+ are given by
0
Then
E [(𝑆𝑏+ )𝑛 ]
𝑛∧(𝑁−1)
𝑛
∑ (𝑘 + 𝑏 − 1)! (𝛽 − 𝑏 + 𝑛 − 𝑘 − 1)! ( )
𝑘
(186)
𝑁−1 𝑛 − 1
{
(
) (𝑏 + 𝑁 − 1)𝑛
{
{ (−1)
𝑁−1
={
𝑖𝑓 𝑁 ≤ 𝑛 ≤ 𝑏 + 𝑁 − 1,
{
{
0
if
1
≤ 𝑛 ≤ 𝑁 − 1 or 𝑛 ≥ 𝑏 + 𝑁.
{
(187)
𝑘=0
Moreover, for 𝑛 ∈ {1, . . . , 𝑁 − 1}, the pseudo-moment of 𝑆𝑏+ of
order 𝑛 vanishes:
= (𝛽 + 𝑛 − 1)!
𝑛
1
𝑥 𝑘 𝑏−1
𝑛
) ] 𝑥 (1 − 𝑥)𝛽−𝑏+𝑛−1 d𝑥
× ∫ [ ∑ ( )(
𝑘 1−𝑥
0
𝑛
E [(𝑆𝑏+ ) ] = 0,
𝑘=0
1
= (𝛽 + 𝑛 − 1)! ∫ 𝑥𝑏−1 (1 − 𝑥)𝛽−𝑏−1 d𝑥
𝑁
E [(𝑆𝑏+ ) ] = (−1)𝑁−1
0
= (𝑏 − 1)! (𝛽 − 𝑏 − 1)!
(𝛽 + 𝑛 − 1)!
.
(𝛽 − 1)!
(183)
Putting (183) into (172) yields (173).
By choosing 𝛽 = 𝑏 in Theorem 25, we derive that
E [(𝑆𝑏+ − 𝑏)𝑛 ] =
𝑛∧(𝑁−1)
1
(𝑘 + 𝑏 − 1)! 𝑛
( ).
∑ (−1)𝑘
𝑘
(𝑏 − 1)! 𝑘=𝑛
(𝑘 − 𝑛)!
(184)
(𝑏 + 𝑁 − 1)!
= −(−𝑏)𝑁.
(𝑏 − 1)!
(188)
Proof. We focus on the case where 𝑁 ≤ 𝑛 ≤ 𝑏 + 𝑁 − 1. We
have that
𝑁−1
E [(𝑆𝑏+ )𝑛 ] = ∑ P {𝑆𝑏+ = ℓ + 𝑏} (ℓ + 𝑏)𝑛
ℓ=0
𝑁−1
𝑏+𝑁−1
𝑁−1
= 𝑏(
) ∑ (−1)ℓ (ℓ + 𝑏 − 1)𝑛−1 (
).
𝑏
ℓ
ℓ=0
(189)
The intermediate sum lying in the last displayed equality can
be evaluated as follows: by observing that (ℓ + 𝑏 − 1)𝑛−1 =
(d𝑛−1 /d𝑥𝑛−1 )(𝑥ℓ+𝑏−1 )|𝑥=1 and appealing to Leibniz rule, we
obtain that
𝑁−1
ℓ=0
󵄨󵄨
d𝑛−1 𝑁−1
ℓ 𝑁−1
ℓ+𝑏−1 󵄨󵄨󵄨
(
(
)
=
)
𝑥
∑
(−1)
󵄨󵄨󵄨
ℓ
d𝑥𝑛−1 ℓ=0
󵄨󵄨𝑥=1
󵄨󵄨
d𝑛−1
𝑁−1 𝑏−1 󵄨󵄨
󵄨󵄨
=
((1
−
𝑥)
𝑥
)
󵄨󵄨
d𝑥𝑛−1
󵄨𝑥=1
󵄨󵄨
𝑛−1
𝑗
󵄨󵄨󵄨
d𝑛−1−𝑗
𝑛−1 d
𝑏−1 󵄨󵄨
󵄨󵄨
= ∑ (
(𝑥
)
) 𝑗 ((1 − 𝑥)𝑁−1 )󵄨󵄨󵄨󵄨
󵄨󵄨
𝑗
d𝑥
󵄨󵄨𝑥=1 d𝑥𝑛−1−𝑗
󵄨𝑥=1
𝑗=𝑁−1
= (−1)
= (−1)
𝑁−1
𝑘
𝑗
𝑘
𝑓 (𝑖 + 𝑘) = ∑ ( ) (Δ+ ) 𝑓 (𝑖) .
𝑗
We have the following expression for any functional of the
pseudorandom variable 𝑆𝑏+ .
Theorem 28. One has, for any function 𝑓 defined on {𝑏, 𝑏 +
1, . . . , 𝑏 + 𝑁 − 1}, that
𝑁−1
𝑗
𝑗+𝑏−1
E [𝑓 (𝑆𝑏+ )] = ∑ (−1)𝑗 (
) (Δ+ ) 𝑓 (𝑏) .
𝑏−1
Proof. By (193), we see that
𝑆+ −𝑏
(𝑏 − 1)! (𝑛 − 1)!
.
(𝑛 − 𝑁)! (𝑏 + 𝑁 − 𝑛 − 1)!
=
𝑁−1
(190)
𝑆+
∑ E [( 𝑏
𝑗=0
which immediately yields (194) thanks to (186).
(𝑏 − 1)! (𝑛 − 1)!
𝑏+𝑁−1
E [(𝑆𝑏+ )𝑛 ] = (−1)𝑁−1 𝑏 (
)
𝑏
(𝑛 − 𝑁)! (𝑏 + 𝑁 − 𝑛 − 1)!
Corollary 29. The generating function of 𝑆𝑏+ is given by
(𝑛 − 1)! (𝑏 + 𝑁 − 1)!
.
(𝑛 − 𝑁)! (𝑁 − 1)! (𝑏 + 𝑁 − 𝑛 − 1)!
(195)
𝑗
−𝑏
)] (Δ+ ) 𝑓 (𝑏)
𝑗
Consequently,
= (−1)
(194)
𝑗=0
𝑏
𝑗
𝑆+ − 𝑏
E [𝑓 (𝑆𝑏+ )] = E ( ∑ ( 𝑏
) (Δ+ ) 𝑓 (𝑏))
𝑗
𝑗=0
𝑛−1
) (𝑏 − 1)𝑛−𝑁
(𝑁 − 1)! (
𝑁−1
𝑁−1
(193)
𝑗=0
𝑁−1
)
∑ (−1)ℓ (ℓ + 𝑏 − 1)𝑛−1 (
ℓ
𝑁−1
Conversely, 𝑓(𝑖 + 𝑘) can be expressed by means of (Δ+ )𝑗 𝑓(𝑖),
0 ≤ 𝑗 ≤ 𝑘, according to
𝑁−1
+
𝑗+𝑏−1
E (𝜁𝑆𝑏 ) = 𝜁𝑏 ∑ (
) (1 − 𝜁)𝑗 .
𝑏−1
(196)
𝑗=0
(191)
This is the result announced in Theorem 27 when 𝑁 ≤ 𝑛 ≤
𝑏 + 𝑁 − 1.
Next, concerning the pseudomoments of 𝑆𝑏+ , we appeal
to an elementary argument of linear algebra: the family
(1, 𝑋1 , 𝑋2 , . . . , 𝑋𝑛 ) (recall that 𝑋𝑘 = 𝑋(𝑋 − 1) ⋅ ⋅ ⋅ (𝑋 −
𝑘 + 1)) is a basis of the space of polynomials of degree not
greater than 𝑛. So, 𝑋𝑛 can be written as a linear combination
𝑛
of 𝑋1 , 𝑋2 , . . . , 𝑋𝑛 . Then E[(𝑆𝑏+ ) ] can be written as a linear
combination of the factorial pseudomoments of 𝑆𝑏+ of order
between 1 and 𝑛. The latters cancel for 𝑛 ∈ {1, . . . , 𝑁 − 1}. As
𝑛
a result, E[(𝑆𝑏+ ) ] = 0.
The same argument ensures the equalities E[(𝑆𝑏+ )𝑁] =
𝑁
E[(𝑆𝑏+ ) ], which is equal to (−1)𝑁−1 (𝑏 + 𝑁 − 1)𝑁, and
𝑁
E[(𝑆𝑏+ − 𝑏)𝑁] = E[(𝑆𝑏+ ) ] + (−𝑏)𝑁 which vanishes. Each of
𝑁
them yields the value of E[(𝑆𝑏+ ) ].
The proof of Theorem 27 is completed.
Proof. Let us apply Theorem 28 to the function 𝑓(𝑖) = 𝜁𝑖
for which we plainly have (Δ+ )𝑗 𝑓(𝑖) = (−1)𝑗 𝜁𝑖 (1 − 𝜁)𝑗 . This
immediately yields (196).
Remark 30. A direct computation with (161) yields the alternative representation:
+
𝑏 + 𝑁 − 1 𝑏 1 𝑏−1
) 𝜁 ∫ 𝑥 (1 − 𝜁𝑥)𝑁−1 d𝑥. (197)
E (𝜁𝑆𝑏 ) = 𝑏 (
𝑏
0
Of special interest is the case when the starting point of
the pseudorandom walk is any point 𝑥 ∈ Z. By translating 𝑏
into 𝑏 − 𝑥 and the function 𝑓 into the shifted function 𝑓(⋅ + 𝑥)
in formula (194), we get that
+
)]
E𝑥 [𝑓 (𝑆𝑏+ )] = E [𝑓 (𝑥 + 𝑆𝑏−𝑥
def
𝑁−1
𝑗
𝑗+𝑏−𝑥−1
= ∑ (−1)𝑗 (
) (Δ+ ) 𝑓 (𝑏) .
𝑗
(198)
𝑗=0
3.2. Link with the High-Order Finite-Difference Operator. Set
+
⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟
∘ ⋅ ⋅ ⋅ ∘ Δ+
Δ+ 𝑓(𝑖) = 𝑓(𝑖+1)−𝑓(𝑖) for any 𝑖 ∈ Z and (Δ+ )𝑗 = Δ
Thus, we obtain the following result.
for any 𝑗 ∈ N∗ . Set also (Δ+ )0 𝑓 = 𝑓. The quantity (Δ ) is the
iterated forward finite-difference operator given by
Theorem 31. One has, for any function 𝑓 defined on {𝑏, 𝑏 +
1, . . . , 𝑏 + 𝑁 − 1}, that
𝑗 times
+ 𝑗
+ 𝑗
𝑗
𝑗+𝑘
(Δ ) 𝑓 (𝑖) = ∑ (−1)
𝑘=0
𝑗
( ) 𝑓 (𝑖 + 𝑘) .
𝑘
𝑁−1
(192)
𝑗
+
E𝑥 [𝑓 (𝑆𝑏+ )] = ∑ 𝑃𝑏,𝑗
(𝑥) (Δ+ ) 𝑓 (𝑏)
𝑗=0
(199)
International Journal of Stochastic Analysis
21
+
with 𝑃𝑏,0
(𝑥) = 1 and, for 𝑗 ∈ {1, . . . , 𝑁 − 1},
+
𝑃𝑏,𝑗
(𝑥) =
Theorem 32. One has, for any function 𝑓 defined on Z and
any 𝑛 ∈ N, that
𝑗−1
1
∏ (𝑥 − 𝑏 − 𝑘) .
𝑗! 𝑘=0
(200)
+
,0
The 𝑃𝑏,𝑗
≤ 𝑗 ≤ 𝑁 − 1, are Newton interpolating polynomials.
They are of degree not greater than (𝑁 − 1) and characterized,
for any 𝑘 ∈ {0, . . . , 𝑁 − 1}, by
𝑘
+
(Δ+ ) 𝑃𝑏,𝑗
(𝑏) = 𝛿𝑗𝑘 .
(201)
Proof. Coming back to the proof of Theorem 28 and appealing to Theorem 22, we write that
𝑆+ − 𝑏
+
)]
𝑃𝑏,𝑗
(𝑥) = E𝑥 [( 𝑏
𝑗
In (206), the operator (Δ+ )𝑗 acts on the variable 𝑏.
Proof. We denote by P𝑥 the pseudoprobability associated
with the pseudoexpectation E𝑥 . Actually, it represents the
pseudoprobability related to the pseudorandom walk started
at point 𝑥 at time 0. We have, by independence of the 𝜉𝑗 ’s, that
(202)
=
∑
𝑘,ℓ∈N:
𝑏≤ℓ≤𝑏+𝑁−1
𝑚=𝑗
𝑁−1
1
𝑚 𝑁−1
) 𝐾𝑚 (𝑥) ,
∑ (−1)𝑚 ( ) (
𝑗
𝑚
(𝑁 − 1)! 𝑚=𝑗
E𝑥 [1{𝜎𝑏+ =𝑘,𝑆𝑏+ =ℓ} 𝑓 (𝑆𝑏+ + 𝜉𝑘+1 + ⋅ ⋅ ⋅ + 𝜉𝑘+𝑛 )]
P𝑥 {𝜎𝑏+ = 𝑘, 𝑆𝑏+ = ℓ} E [𝑓 (ℓ + 𝜉1 + ⋅ ⋅ ⋅ + 𝜉𝑛 )]
𝑏+𝑁−1
= ∑ P𝑥 {𝑆𝑏+ = ℓ} Eℓ [𝑓 (𝑆𝑛 )]
ℓ=𝑏
where, for any 𝑚 ∈ {0, . . . , 𝑁 − 1},
(∏𝑁−1
𝑘=0 (𝑏 − 𝑥 + 𝑘))
∑
𝑘,ℓ∈N:
𝑏≤ℓ≤𝑏+𝑁−1
𝑁−1
(𝑏 − 𝑥 + 𝑚)
(206)
𝑗=0
=
𝑚
+
= 𝑏 − 𝑥 + 𝑚}
= ∑ ( ) P {𝑆𝑏−𝑥
𝑗
𝐾𝑚 (𝑥) =
𝑗
E𝑥 [𝑓 (𝑆𝜎𝑏+ +𝑛 )]
𝑆+ + 𝑥 − 𝑏
= E [( 𝑏−𝑥
)]
𝑗
=
𝑁−1
+
E𝑥 [𝑓 (𝑆𝜎𝑏+ +𝑛 )] = ∑ 𝑃𝑏,𝑗
(𝑥) (Δ+ ) E𝑏 [𝑓 (𝑆𝑛 )] .
= E𝑥 [E𝑆𝑏+ [𝑓 (𝑆𝑛 )]] .
(207)
=
∏
0≤𝑘≤𝑁−1
𝑘 ≠ 𝑚
(𝑏 − 𝑥 + 𝑘) . (203)
The expression 𝐾𝑚 (𝑥) defines a polynomial of the variable 𝑥
+
is a polynomial of degree not greater
of degree (𝑁 − 1), so 𝑃𝑏,𝑗
than (𝑁−1). It is obvious that 𝐾𝑚 (𝑏+ℓ) = 0 for ℓ ∈ {0, . . . , 𝑁−
1}\{𝑚}. On the other hand, 𝐾ℓ (𝑏+ℓ) = (−1)ℓ (𝑁−1)!/ ( 𝑁−1
ℓ ).
By putting this into (202), we get that
ℓ
+
𝑃𝑏,𝑗
(𝑏 + ℓ) = ( ) 1{𝑗≤ℓ} .
𝑗
(204)
Hence, by setting 𝑔(𝑥) = E𝑥 [𝑓(𝑆𝑛 )], we have obtained that
E𝑥 [𝑓 (𝑆𝜎𝑏+ +𝑛 )] = E𝑥 [𝑔 (𝑆𝑏+ )]
(208)
which proves (206) thanks to (199).
Example 33. Below, we display the form of (206) for the
particular values 1, 2, 3 of 𝑁.
(i) For 𝑁 = 1, (206) reads
Next, we obtain, for any 𝑘 ∈ {0, . . . , 𝑁 − 1}, that
𝑘
E𝑥 [𝑓 (𝑆𝜎𝑏+ +𝑛 )] = E𝑏 [𝑓 (𝑆𝑛 )]
𝑘 +
𝑘 +
(Δ+ ) 𝑃𝑏,𝑗
(𝑏) = ∑ (−1)𝑘+ℓ ( ) 𝑃𝑏,𝑗
(𝑏 + ℓ)
ℓ
(209)
ℓ=0
𝑘
𝑘 ℓ
= ∑(−1)𝑘+ℓ ( ) ( )
ℓ 𝑗
(205)
ℓ=𝑗
𝑘−𝑗
𝑘
𝑘−𝑗
) = 𝛿𝑗𝑘 .
= (−1)𝑗+𝑘 ( ) ∑ (−1)ℓ (
𝑗
ℓ
ℓ=0
The proof of Theorem 31 is finished.
We complete this paragraph by stating a strong pseudoMarkov property related to time 𝜎𝑏+ .
which is of course trivial! This is the strong Markov
property for the ordinary random walk.
(ii) For 𝑁 = 2, (206) reads
E𝑥 [𝑓 (𝑆𝜎𝑏+ +𝑛 )] = E𝑏 [𝑓 (𝑆𝑛 )] + (𝑥 − 𝑏) Δ+ E𝑏 [𝑓 (𝑆𝑛 )]
= (𝑏 − 𝑥 + 1) E𝑏 [𝑓 (𝑆𝑛 )]
+ (𝑥 − 𝑏) E𝑏+1 [𝑓 (𝑆𝑛 )] .
(210)
22
International Journal of Stochastic Analysis
(iii) For 𝑁 = 3, (206) reads
where, for any 𝜆 > 0 and any 𝜇 ∈ R,
E𝑥 [𝑓 (𝑆𝜎𝑏+ +𝑛 )] = E𝑏 [𝑓 (𝑆𝑛 )] + (𝑥 − 𝑏) Δ+ E𝑏 [𝑓 (𝑆𝑛 )]
+
+
= e𝑖𝜇𝑏 ∑ ∏
𝜑 − 𝜑𝑘 1≤𝑗≤𝑁
𝑘=1 1≤𝑗≤𝑁 𝑗
(211)
𝜀+
E (e−𝜆𝜏𝑏
Below, we give an ad
hoc definition for the convergence of a family of exit times.
Definition 34. Let ((𝑋𝑡𝜀 )𝑡≥0 )𝜀>0 be a family of pseudoprocesses
which converges towards a pseudoprocess (𝑋𝑡 )𝑡≥0 when 𝜀 →
0+ in the sense of Definition 13. Let 𝐼 be a subset of R and set
𝜏𝐼𝜀 = inf{𝑡 ≥ 0 : 𝑋𝑡𝜀 ∉ 𝐼}, 𝑋𝐼𝜀 = 𝑋𝜏𝜀 𝜀 and 𝜏𝐼 = inf{𝑡 ≥ 0 : 𝑋𝑡 ∉
𝐼
𝐼}, 𝑋𝐼 = 𝑋𝜏𝐼 .
We say that
+𝑖𝜇𝑋𝑏𝜀+
2𝑁 +
𝜎𝑏𝜀 +𝑖𝜇𝜀𝑆𝑏+𝜀
√𝜆/𝑐𝑏
.
1{𝜎𝑏+ <+∞} )
(219)
𝜀
𝑁
2𝑁
𝑏𝜀
2𝑁
= e𝑖𝜇𝜀𝑏𝜀 ∑ 𝐿 𝑘 (e−𝜆𝜀 , e𝑖𝜇𝜀 ) 𝑢𝑘 (e−𝜆𝜀 ) .
𝑘=1
2𝑁
Recall that we previously set 𝑢𝑗 (𝜆, 𝜀) = 𝑢𝑗 (e−𝜆𝜀 ). Thanks to
asymptotics (119), we get that
(212)
2𝑁 𝜆
𝑢𝑘 (𝜆, 𝜀) − 𝑢𝑗 (𝜆, 𝜀) ∼ + (𝜑𝑗 − 𝜑𝑘 ) √ 𝜀,
𝜀→0
𝑐
(220)
∀𝜆 > 0,
−𝜆𝜏𝐼𝜀 +𝑖𝜇𝑋𝐼𝜀
−𝜆𝜏𝐼 +𝑖𝜇𝑋𝐼
1{𝜏𝐼𝜀 <+∞} ) 󳨀→+ E (e
𝜀→0
∀𝜇 ∈ R,
1 − 𝑢𝑗 (𝜆, 𝜀) e
1{𝜏𝐼 <+∞} ) .
(213)
We say that
𝑖𝜇𝜀
𝜆
∼ (𝜑𝑗 √ − 𝑖𝜇) 𝜀.
𝜀 → 0+
𝑐
2𝑁
Thus,
2𝑁
lim+ 𝐿 𝑘 (e−𝜆𝜀 , e𝑖𝜇𝜀 ) = ∏
𝑋𝐼𝜀 󳨀→+ 𝑋𝐼
(214)
𝜀→0
= ∏
𝑖𝜇𝑋𝐼𝜀
E (e
𝑖𝜇𝑋𝐼
1{𝜏𝐼𝜀 <+∞} ) 󳨀→+ E (e
𝜀→0
√𝜆/𝑐 − 𝑖𝜇
𝜑𝑗 2𝑁
1≤𝑗≤𝑁 (𝜑𝑗
𝑗 ≠ 𝑘
𝜀→0
if and only if
∀𝜇 ∈ R,
2𝑁
1{𝜏𝑏𝜀+ <+∞} )
= E (e−𝜆𝜀
if and only if
E (e
√𝜆/𝑐
) e−𝜑𝑘
Proof. We already pointed out that the assumption 𝑐 ≤
1/22𝑁−1 entails that 𝑀∞ = 1. Therefore, (147) holds for
2𝑁
𝑧 = e−𝜆𝜀 < 1/𝑀∞ = 1, that is, for 𝜆 > 0. So, by (147),
we have, for 𝜆 > 0, that
3.3. Joint Pseudodistribution of (𝜏𝑏+ , 𝑋𝑏+ ).
𝜀→0
𝑗 ≠ 𝑘
𝑖 𝜑𝑗 𝜇
2𝑁
(218)
1
(𝑥 − 𝑏) (𝑥 − 𝑏 − 1) E𝑏+2 [𝑓 (𝑆𝑛 )] .
2
(𝜏𝐼𝜀 , 𝑋𝐼𝜀 ) 󳨀→+ (𝜏𝐼 , 𝑋𝐼 )
∏ (1 −
𝑗 ≠ 𝑘
− (𝑥 − 𝑏) (𝑥 − 𝑏 − 2) E𝑏+1 [𝑓 (𝑆𝑛 )]
+
𝜑𝑗
𝑁
1
2
(𝑥 − 𝑏) (𝑥 − 𝑏 − 1) (Δ+ ) E𝑏 [𝑓 (𝑆𝑛 )]
2
1
(𝑥 − 𝑏 − 1) (𝑥 − 𝑏 − 2) E𝑏 [𝑓 (𝑆𝑛 )]
2
=
+
E (e−𝜆𝜏𝑏 +𝑖𝜇𝑋𝑏 1{𝜏𝑏 <+∞} )
1≤𝑗≤𝑁 𝜑𝑗
𝑗 ≠ 𝑘
1{𝜏𝐼 <+∞} ) .
√𝜆/𝑐
− 𝜑𝑘 ) 2𝑁
𝜑𝑗
∏ (1 −
− 𝜑𝑘 1≤𝑗≤𝑁
𝑗 ≠ 𝑘
𝑖𝜑𝑗 𝜇
√𝜆/𝑐
2𝑁
).
(215)
(221)
As in Section 2.3, we choose for the family ((𝑋𝑡𝜀 )𝑡≥0 )𝜀>0 the
pseudoprocesses defined, for any 𝜀 > 0, by
Finally, we can easily conclude with the help of the elementary
limits that
𝑋𝑡𝜀 = 𝜀𝑆⌊𝑡/𝜀2𝑁 ⌋ ,
𝑡 ≥ 0,
(216)
and for the pseudoprocess (𝑋𝑡 )𝑡≥0 the pseudo-Brownian
motion. For 𝐼, we choose the interval (−∞, 𝑏) so that 𝜏𝐼𝜀 = 𝜏𝑏𝜀+ ,
𝑋𝐼𝜀 = 𝑋𝑏𝜀+ and 𝜏𝐼 = 𝜏𝑏+ , 𝑋𝐼 = 𝑋𝑏+ . Set 𝑏𝜀 = ⌈𝑏/𝜀⌉ where ⌈⋅⌉ is the
usual ceiling function. We have 𝜏𝑏𝜀+ = 𝜀2𝑁𝜎𝑏+𝜀 and 𝑋𝑏𝜀+ = 𝜀𝑆𝑏+𝜀 .
Recall the setting 𝜑𝑗 = −𝑖e𝑖((2𝑗−1)/2𝑁)𝜋 , 1 ≤ 𝑗 ≤ 𝑁.
Theorem 35. Assume that 𝑐 ≤ 1/22𝑁−1 . The following convergence holds:
lim+ e𝑖𝜇𝜀𝑏𝜀 = e𝑖𝜇𝑏 ,
𝜀→0
󳨀→
𝜀 → 0+
(𝜏𝑏+ , 𝑋𝑏+ ) ,
(217)
2𝑁
√𝜆/𝑐𝑏
𝜀→0
.
(222)
Theorem 36. The following convergence holds:
𝑋𝑏𝜀+ 󳨀→+ 𝑋𝑏+ ,
(223)
𝜀→0
where, for any 𝜇 ∈ R,
+
(𝜏𝑏𝜀+ , 𝑋𝑏𝜀+ )
lim+ 𝑢𝑘 (𝜆, 𝜀)𝑏𝜀 = e−𝜑𝑘
𝑁−1
𝑗
(−𝑖𝜇𝑏)
.
𝑗!
𝑗=0
E (e𝑖𝜇𝑋𝑏 1{𝜏𝑏+ <+∞} ) = e𝑖𝜇𝑏 ∑
(224)
International Journal of Stochastic Analysis
23
This is the Fourier transform of the pseudorandom variable 𝑋𝑏+ .
Moreover,
P {𝜏𝑏+ < +∞} = 1.
(225)
Proof. By (194), we have that
𝜀+
+
E (e𝑖𝜇𝑋𝑏 1{𝜏𝑏𝜀+ <+∞} ) = E (e𝑖𝜇𝜀𝑆𝑏𝜀 1{𝜎𝑏+ <+∞} )
4. Part III—First Exit Time from
a Bounded Interval
4.1. On the Pseudodistribution of (𝜎𝑎𝑏 , 𝑆𝑎𝑏 ). Let 𝑎, 𝑏 be two
integers such that 𝑎 < 0 < 𝑏 and let E = {𝑎, 𝑎 − 1, . . . , 𝑎 −
𝑁 + 1} ∪ {𝑏, 𝑏 + 1, . . . , 𝑏 + 𝑁 − 1}. In this section, we explicitly
compute the generating function of (𝜎𝑎𝑏 , 𝑆𝑎𝑏 ). Set, for ℓ ∈ E,
𝜀
𝐻𝑎𝑏,ℓ (𝑧) = E (𝑧𝜎𝑎𝑏 1{𝑆𝑎𝑏 =ℓ,𝜎𝑎𝑏 <+∞} )
𝑁−1
𝑗
𝑗 + 𝑏𝜀 − 1
= e𝑖𝜇𝜀𝑏𝜀 ∑ (
) (1 − e𝑖𝜇𝜀 ) .
𝑏𝜀 − 1
= ∑ P {𝜎𝑎𝑏 = 𝑘, 𝑆𝑎𝑏 = ℓ} 𝑧𝑘 .
𝑘=0
𝑘∈N
(226)
We can easily conclude by using the elementary asymptotics
that
𝑏𝑗
𝑗 + 𝑏𝜀 − 1
(
,
) ∼+
𝑏𝜀 − 1
𝜀 → 0 𝑗!𝜀𝑗
lim+ e𝑖𝜇𝜀𝑏𝜀 = e𝑖𝜇𝑏 ,
𝜀→0
𝑖𝜇𝜀 𝑗
(1 − e )
(227)
We are able to provide an explicit expression of 𝐻𝑎𝑏,ℓ (𝑧). As
in Section 3.1, due to (62), we have the following a priori
estimate: |P{𝜎𝑎𝑏 = 𝑘, 𝑆𝑎𝑏 = ℓ}| ≤ |P{𝑆1 ∈ (𝑎, 𝑏), . . . , 𝑆𝑘−1 ∈
(𝑎, 𝑏), 𝑆𝑘 = ℓ}| ≤ 𝑀1𝑘 . As a byproduct, the power series
defining 𝐻𝑎𝑏,ℓ (𝑧) absolutely converges for |𝑧| < 1/𝑀1 .
𝑗
∼ (−𝑖𝜇𝜀) .
4.1.1. Joint Pseudodistribution of (𝜎𝑎𝑏 , 𝑆𝑎𝑏 )
𝜀 → 0+
Theorem 38. The pseudodistribution of (𝜎𝑎𝑏 , 𝑆𝑎𝑏 ) is characterized by the identity, valid for any 𝑧 ∈ (0, 1),
Corollary 37. The pseudodistribution of 𝑋𝑏+ is given by
P {𝑋𝑏+ ∈ d𝑧} 𝑁−1 𝑏 𝑗 (𝑗)
= ∑ 𝛿𝑏 (𝑧) .
d𝑧
𝑗=0 𝑗!
(228)
This formula should be understood as follows: for any
(𝑁 − 1)-times differentiable function 𝑓, by omitting the
condition 𝜏𝑏+ < +∞,
E [𝑓 (𝑋𝑏+ )]
(230)
𝑁−1
𝑗𝑏
= ∑ (−1)
𝑗=0
𝑗
𝑗!
𝑓
(𝑗)
(𝑏) .
(229)
We retrieve a result of [11] and, in the case 𝑁 = 2, a pioneering
result of [18].
E (𝑧𝜎𝑎𝑏 1{𝑆𝑎𝑏 =ℓ,𝜎𝑎𝑏 <+∞} ) =
𝑈ℓ (𝑢1 (𝑧) , . . . , 𝑢2𝑁 (𝑧))
,
𝑈 (𝑢1 (𝑧) , . . . , 𝑢2𝑁 (𝑧))
(231)
where
𝑈 (𝑢1 , . . . , 𝑢2𝑁)
󵄨󵄨
󵄨
󵄨󵄨1 𝑢1 ⋅ ⋅ ⋅ 𝑢1𝑁−1 𝑢1𝑏−𝑎+𝑁−1 ⋅ ⋅ ⋅ 𝑢1𝑏−𝑎+2𝑁−2 󵄨󵄨󵄨 (232)
󵄨󵄨 . .
󵄨󵄨
..
..
..
󵄨󵄨
= 󵄨󵄨󵄨󵄨 .. ..
󵄨󵄨
.
.
.
󵄨󵄨
󵄨
𝑁−1
𝑏−𝑎+𝑁−1
𝑏−𝑎+2𝑁−2 󵄨󵄨
󵄨󵄨1 𝑢
󵄨󵄨
⋅
⋅
⋅
𝑢
𝑢
⋅
⋅
⋅
𝑢
󵄨
2𝑁
2𝑁
2𝑁
2𝑁
and, if 𝑎 − 𝑁 + 1 ≤ ℓ ≤ 𝑎, 𝑈ℓ (𝑢1 , . . . , 𝑢2𝑁) is the determinant
󵄨󵄨
󵄨
󵄨󵄨1 𝑢1 ⋅ ⋅ ⋅ 𝑢1ℓ+𝑁−𝑎−2 𝑢1𝑁−𝑎−1 𝑢1ℓ+𝑁−𝑎 ⋅ ⋅ ⋅ 𝑢1𝑁−1 𝑢1𝑏−𝑎+𝑁−1 ⋅ ⋅ ⋅ 𝑢1𝑏−𝑎+2𝑁−2 󵄨󵄨󵄨
󵄨󵄨 . .
󵄨󵄨
..
..
..
..
..
..
󵄨󵄨 . .
󵄨󵄨 ,
󵄨󵄨 . .
󵄨󵄨
.
.
.
.
.
.
󵄨󵄨
󵄨
ℓ+𝑁−𝑎−2
𝑁−𝑎−1
ℓ+𝑁−𝑎
𝑁−1
𝑏−𝑎+𝑁−1
𝑏−𝑎+2𝑁−2 󵄨󵄨
󵄨󵄨1 𝑢
󵄨󵄨
𝑢2𝑁
𝑢2𝑁
⋅ ⋅ ⋅ 𝑢2𝑁 𝑢2𝑁
⋅ ⋅ ⋅ 𝑢2𝑁
󵄨
2𝑁 ⋅ ⋅ ⋅ 𝑢2𝑁
(233)
and if 𝑏 ≤ ℓ ≤ 𝑏 + 𝑁 − 1, 𝑈ℓ (𝑢1 , . . . , 𝑢2𝑁) is the determinant
󵄨󵄨
󵄨
󵄨󵄨1 𝑢1 ⋅ ⋅ ⋅ 𝑢1𝑁−1 𝑢1𝑏−𝑎+𝑁−1 ⋅ ⋅ ⋅ 𝑢1ℓ+𝑁−𝑎−2 𝑢1𝑁−𝑎−1 𝑢1ℓ+𝑁−𝑎 ⋅ ⋅ ⋅ 𝑢1𝑏−𝑎+2𝑁−2 󵄨󵄨󵄨
󵄨󵄨 . .
󵄨󵄨
..
..
..
..
..
..
󵄨󵄨 . .
󵄨󵄨 .
󵄨󵄨 . .
󵄨󵄨
.
.
.
.
.
.
󵄨󵄨
󵄨
𝑁−1
𝑏−𝑎+𝑁−1
ℓ+𝑁−𝑎−2
𝑁−𝑎−1
ℓ+𝑁−𝑎
𝑏−𝑎+2𝑁−2 󵄨󵄨
󵄨󵄨1 𝑢
󵄨󵄨
𝑢2𝑁
⋅ ⋅ ⋅ 𝑢2𝑁
𝑢2𝑁
𝑢2𝑁
⋅ ⋅ ⋅ 𝑢2𝑁
󵄨
2𝑁 ⋅ ⋅ ⋅ 𝑢2𝑁
(234)
24
International Journal of Stochastic Analysis
Proof. Pick an integer 𝑘 such that 𝑘 ≤ 𝑎 or 𝑘 ≥ 𝑏. If 𝑆𝑛 = 𝑘,
then an exit of the interval (𝑎, 𝑏) occurs before time 𝑛: 𝜎𝑎𝑏 ≤ 𝑛.
This remark and the independence of the increments of the
pseudorandom walk entail that
P {𝑆𝑛 = 𝑘} = P {𝑆𝑛 = 𝑘, 𝜎𝑎𝑏 ≤ 𝑛}
Systems (240) and (241) are “lacunary” Vandermonde systems (some powers of 𝑢𝑗 (𝑧) are missing). For instance, let us
rewrite system (240) as
∑ 𝐻𝑎𝑏,ℓ (𝑧) 𝑢𝑗 (𝑧)ℓ+𝑁−𝑎−1 = 𝑢𝑗 (𝑧)𝑁−𝑎−1 ,
1 ≤ 𝑗 ≤ 2𝑁.
ℓ∈E
(242)
𝑛
= ∑ ∑ P {𝑆𝑛 = 𝑘, 𝜎𝑎𝑏 = 𝑗, 𝑆𝑎𝑏 = ℓ}
𝑗=0 ℓ∈E
𝑛
= ∑ ∑ P {𝜎𝑎𝑏 = 𝑗, 𝑆𝑎𝑏 = ℓ} P {𝑆𝑛−𝑗 = 𝑘 − ℓ} .
𝑗=0 ℓ∈E
(235)
Thanks to the absolute convergence of the series defining
𝐺𝑘 (𝑧) and 𝐻𝑎𝑏,ℓ (𝑧) for 𝑧 ∈ (0, 1) and |𝑧| < 1/𝑀1 , respectively,
we can apply the generating function to equality (235). We
get, for 𝑧 ∈ (0, 1/𝑀1 ), that
𝐺𝑘 (𝑧) = ∑ 𝐺𝑘−ℓ (𝑧) 𝐻𝑎𝑏,ℓ (𝑧) .
ℓ∈E
𝑁
𝑗=1
A method for computing the determinants exhibited
in Theorem 38 and solving system (242) is proposed in
Appendix A.1. In particular, we can deduce from Proposition
A.3 an alternative representation of E(𝑧𝜎𝑎𝑏 1{𝑆𝑎𝑏 =ℓ,𝜎𝑎𝑏 <+∞} )
which can be seen as the analogous of (124). Set 𝑠0 (𝑧) = 1
and, for 𝑘, ℓ ∈ {1, . . . , 2𝑁},
𝑠ℓ (𝑧) =
(236)
Using expression (94) of 𝐺𝑘 , namely, 𝐺𝑘 (𝑧)
=
|𝑘|
∑𝑁
𝛼
(𝑧)
𝑢
(𝑧)
,
we
get,
for
𝑘
≥
𝑏
+
𝑁
−
1
(recall
𝑗
𝑗=1 𝑗
that V𝑗 (𝑧) = 1/𝑢𝑗 (𝑧)), that
∑ 𝛼𝑗 (𝑧) 𝑢𝑗 (𝑧)𝑘 ( ∑ 𝐻𝑎𝑏,ℓ (𝑧) V𝑗 (𝑧)ℓ − 1) = 0,
Cramer’s formulae immediately yield (231) at least for 𝑧 ∈
(0, 1/𝑀1 ). By analyticity of the 𝑢𝑗 ’s on (0, 1), it is easily seen
that (231) holds true for 𝑧 ∈ (0, 1). Systems (240) and (241)
will be used in Lemma 42.
𝑝𝑘 (𝑧) = ∏ [𝑢𝑘 (𝑧) − 𝑢𝑖 (𝑧)] ,
𝑠𝑘,0 (𝑧) = 1, 𝑠𝑘,𝑚 (𝑧) = 0 for any integer 𝑚 such that 𝑚 ≤ −1 or
𝑚 ≥ 2𝑁 and, for 𝑚 ∈ {1, . . . , 2𝑁 − 1},
(237)
𝑠𝑘,𝑚 (𝑧) =
and, for 𝑘 ≤ 𝑎 − 𝑁 + 1, that
𝑁
𝑗=1
When limiting the range of 𝑘 to the set {𝑏+𝑁, 𝑏+𝑁+1, . . . , 𝑏+
2𝑁 − 1} in (237) and to the set {𝑎 − 2𝑁 + 1, 𝑎 − 2𝑁 + 2, . . . , 𝑎 −
𝑁} in (238), we see that (237) and (238) are homogeneous
Vandermonde systems whose solutions are trivial; that is, the
terms within parentheses in (237) and (238) vanish. Thus, we
get the two systems below:
1 ≤ 𝑗 ≤ 𝑁,
∑ 𝐻𝑎𝑏,ℓ (𝑧) V𝑗 (𝑧)ℓ = 1,
1 ≤ 𝑗 ≤ 𝑁.
ℓ∈E
𝑢𝑖1 (𝑧) ⋅ ⋅ ⋅ 𝑢𝑖𝑚 (𝑧) .
∑
1≤𝑖1 <⋅⋅⋅<𝑖𝑚 ≤2𝑁
𝑖1 ,...,𝑖𝑚 ≠ 𝑘
(244)
Set also
(238)
ℓ∈E
∑ 𝐻𝑎𝑏,ℓ (𝑧) 𝑢𝑗 (𝑧)ℓ = 1,
(243)
1≤𝑖≤2𝑁
𝑖 ≠ 𝑘
ℓ∈E
∑ 𝛼𝑗 (𝑧) V𝑗 (𝑧)𝑘 ( ∑ 𝐻𝑎𝑏,ℓ (𝑧) 𝑢𝑗 (𝑧)ℓ − 1) = 0.
𝑢𝑖1 (𝑧) ⋅ ⋅ ⋅ 𝑢𝑖ℓ (𝑧) ,
∑
1≤𝑖1 <⋅⋅⋅<𝑖ℓ ≤2𝑁
(239)
ℓ∈E
It will be convenient to relabel the 𝑢𝑗 (𝑧)’s and V𝑗 (𝑧)’s, 1 ≤ 𝑗 ≤
𝑁, as 𝑢𝑗 (𝑧) = V𝑗+𝑁(𝑧) and V𝑗 (𝑧) = 𝑢𝑗+𝑁(𝑧); note that V𝑗 (𝑧) =
1/𝑢𝑗 (𝑧) for any 𝑗 ∈ {1, . . . , 2𝑁} and {𝑢1 (𝑧), . . . , 𝑢2𝑁(𝑧)} =
{V1 (𝑧), . . . , V2𝑁(𝑧)}. By using the relabeling 𝑢𝑗 (𝑧), V𝑗 (𝑧), 1 ≤
𝑗 ≤ 2𝑁, we obtain the two equivalent following systems of 2𝑁
equations and 2𝑁 unknowns, 𝑢1 (𝑧), . . . , 𝑢2𝑁(𝑧) for the first
one, V1 (𝑧), . . . , V2𝑁(𝑧) for the second one:
󵄨󵄨 𝑠 (𝑧)
𝑠𝑁−1 (𝑧) ⋅ ⋅ ⋅ 𝑠𝑁−𝑏+𝑎+2 (𝑧)󵄨󵄨󵄨󵄨
󵄨󵄨
𝑁
󵄨󵄨 𝑠
𝑠𝑁 (𝑧)
⋅ ⋅ ⋅ 𝑠𝑁−𝑏+𝑎+1 (𝑧)󵄨󵄨󵄨󵄨
(𝑧)
󵄨󵄨 𝑁+1
󵄨
󵄨󵄨 ,
̃
𝑈 (𝑧) = 󵄨󵄨
..
..
..
󵄨󵄨
󵄨󵄨
󵄨󵄨
.
.
.
󵄨󵄨
󵄨
󵄨󵄨𝑠
𝑠𝑁 (𝑧) 󵄨󵄨󵄨
󵄨 𝑁+𝑏−𝑎−2 (𝑧) 𝑠𝑁+𝑏−𝑎−3 (𝑧) ⋅ ⋅ ⋅
󵄨󵄨 𝑠 (𝑧)
𝑠𝑘,𝑁−1 (𝑧) ⋅ ⋅ ⋅ 𝑠𝑘,𝑁−𝑏+𝑎+1 (𝑧)󵄨󵄨󵄨󵄨
󵄨󵄨
𝑘,𝑁
󵄨󵄨 𝑠
𝑠𝑘,𝑁 (𝑧)
⋅ ⋅ ⋅ 𝑠𝑘,𝑁−𝑏+𝑎 (𝑧) 󵄨󵄨󵄨󵄨
󵄨󵄨 𝑘,𝑁+1 (𝑧)
󵄨󵄨
󵄨󵄨
.
.
..
󵄨󵄨 .
̃𝑘ℓ (𝑧) = 󵄨󵄨
..
..
𝑈
󵄨󵄨
󵄨󵄨
.
󵄨󵄨
󵄨
󵄨󵄨𝑠𝑘,𝑁+𝑏−𝑎−2 (𝑧) 𝑠𝑘,𝑁+𝑏−𝑎−3 (𝑧) ⋅ ⋅ ⋅ 𝑠𝑘,𝑁−1 (𝑧) 󵄨󵄨󵄨
󵄨󵄨
󵄨
󵄨󵄨𝑠𝑘,𝑁+𝑏−ℓ−1 (𝑧) 𝑠𝑘,𝑁+𝑏−ℓ−2 (𝑧) ⋅ ⋅ ⋅ 𝑠𝑘,𝑁+𝑎−ℓ (𝑧) 󵄨󵄨󵄨
(245)
Then, applying Proposition A.3 with the choices 𝑝 = 𝑟 = 𝑁
and 𝑞 = 𝑏 − 𝑎 − 1 leads, for any ℓ ∈ E, to
E (𝑧𝜎𝑎𝑏 1{𝑆𝑎𝑏 =ℓ,𝜎𝑎𝑏 <+∞} ) =
̃ (𝑧)
(−1)ℓ+𝑁−𝑎−1 2𝑁 𝑈
𝑢 (𝑧)𝑁−𝑎−1 .
∑ 𝑘ℓ
̃ (𝑧) 𝑘=1 𝑝𝑘 (𝑧) 𝑘
𝑈
(246)
The double generating function defined by
∑ 𝐻𝑎𝑏,ℓ (𝑧) 𝑢𝑗 (𝑧)ℓ = 1,
1 ≤ 𝑗 ≤ 2𝑁,
(240)
E (𝑧𝜎𝑎𝑏 𝜁𝑆𝑎𝑏 1{𝜎𝑎𝑏 <+∞} ) = ∑ E (𝑧𝜎𝑎𝑏 1{𝑆𝑎𝑏 =ℓ,𝜎𝑎𝑏 <+∞} ) 𝜁ℓ (247)
∑ 𝐻𝑎𝑏,ℓ (𝑧) V𝑗 (𝑧)ℓ = 1,
1 ≤ 𝑗 ≤ 2𝑁.
(241)
admits an interesting representation by means of interpolating polynomials that we display in the following theorem.
ℓ∈E
ℓ∈E
ℓ∈E
International Journal of Stochastic Analysis
25
Theorem 39. The double generating function of (𝜎𝑎𝑏 , 𝑆𝑎𝑏 ) is
given, for 𝑧 ∈ (0, 1/𝑀1 ) and 𝜁 ∈ C, by
2𝑁
̃ 𝑘 (𝑧, 𝜁) (V𝑘 (𝑧) 𝜁)
E (𝑧𝜎𝑎𝑏 𝜁𝑆𝑎𝑏 1{𝜎𝑎𝑏 <+∞} ) = ∑ 𝐿
𝑎−𝑁+1
, (248)
𝑘=1
E (𝑧𝜎𝑎𝑏 𝜁𝑆𝑎𝑏 1{𝑆𝑎𝑏 ≤𝑎,𝜎𝑎𝑏 <+∞} )
=
where, for 𝑘 ∈ {1, . . . , 2𝑁}
̃ 𝑘 (𝑧, 𝜁) =
𝐿
Proof. By (231), we have that
𝑎
∑ 𝐻𝑎𝑏,ℓ (𝑧) 𝜁ℓ =
ℓ=𝑎−𝑁+1
𝑈 (𝑢1 (𝑧) , . . . , 𝑢𝑘−1 (𝑧) , 𝜁, 𝑢𝑘+1 (𝑧) , . . . , 𝑢2𝑁 (𝑧))
.
𝑈 (𝑢1 (𝑧) , . . . , 𝑢2𝑁 (𝑧))
(249)
̃ 𝑘 (𝑧, 𝜁) are interpolating polynomials
The functions 𝜁 󳨃→ 𝐿
̃
satisfying 𝐿 𝑘 (𝑧, 𝑢𝑗 (𝑧)) = 𝛿𝑗𝑘 and can be expressed as
𝜁 − 𝑢𝑗 (𝑧)
̃ 𝑘 (𝑧, 𝜁) = 𝑃𝑘 (𝑧, 𝜁) ∏
𝐿
1≤𝑗≤2𝑁 𝑢𝑘
𝑗 ≠ 𝑘
(𝑧) − 𝑢𝑗 (𝑧)
𝑢1
..
.
,
⋅ ⋅ ⋅ 𝑢1ℓ+𝑁−𝑎−2
..
.
ℓ+𝑁−𝑎−2
𝑢𝑘−1 ⋅ ⋅ ⋅ 𝑢𝑘−1
𝑢𝑘
𝑈ℓ (𝑢1 (𝑧) , . . . , 𝑢2𝑁 (𝑧)) ℓ
𝜁.
ℓ=𝑎−𝑁+1 𝑈 (𝑢1 (𝑧) , . . . , 𝑢2𝑁 (𝑧))
(251)
In order to simplify the text, we omit the variable 𝑧. We
expand the determinant 𝑈ℓ (𝑢1 , . . . , 𝑢2𝑁), 𝑎 − 𝑁 + 1 ≤ ℓ ≤ 𝑎
with respect to its (ℓ + 𝑁 − 𝑎)th column:
𝑈ℓ (𝑢1 , . . . , 𝑢2𝑁)
2𝑁
⋅ ⋅ ⋅ 𝑢𝑘ℓ+𝑁−𝑎−2
ℓ+𝑁−𝑎−2
𝑢𝑘+1 ⋅ ⋅ ⋅ 𝑢𝑘+1
..
..
.
.
ℓ+𝑁−𝑎−2
𝑢2𝑁 ⋅ ⋅ ⋅ 𝑢2𝑁
= ∑ 𝑢𝑘𝑁−𝑎−1 𝑈𝑘ℓ (𝑢1 , . . . , 𝑢𝑘−1 , 𝑢𝑘+1 , . . . , 𝑢2𝑁) ,
(252)
𝑘=1
(250)
where 𝜁 󳨃→ 𝑃𝑘 (𝑧, 𝜁) are some polynomials of degree (𝑏 − 𝑎 − 1).
󵄨󵄨1
󵄨󵄨
󵄨󵄨 .
󵄨󵄨 .
󵄨󵄨 .
󵄨󵄨
󵄨󵄨
󵄨󵄨1
󵄨󵄨
󵄨󵄨
󵄨󵄨
󵄨󵄨1
󵄨󵄨
󵄨󵄨
󵄨󵄨
󵄨󵄨
󵄨󵄨1
󵄨󵄨
󵄨󵄨
󵄨󵄨 ..
󵄨󵄨 .
󵄨󵄨
󵄨󵄨
󵄨󵄨1
󵄨
𝑎
∑
where 𝑈𝑘ℓ (𝑢1 , . . . , 𝑢𝑘−1 , 𝑢𝑘+1 , . . . , 𝑢2𝑁), 1 ≤ 𝑘 ≤ 2𝑁, is the
determinant
0 𝑢1ℓ+𝑁−𝑎 ⋅ ⋅ ⋅ 𝑢1𝑁−1 𝑢1𝑏−𝑎+𝑁−1 ⋅ ⋅ ⋅ 𝑢1𝑏−𝑎+2𝑁−2 󵄨󵄨󵄨󵄨
󵄨󵄨
..
..
..
..
..
󵄨󵄨
󵄨󵄨
.
.
.
.
.
󵄨󵄨
ℓ+𝑁−𝑎
𝑁−1
𝑏−𝑎+𝑁−1
𝑏−𝑎+2𝑁−2 󵄨󵄨󵄨
0 𝑢𝑘−1
⋅ ⋅ ⋅ 𝑢𝑘−1 𝑢𝑘−1
⋅ ⋅ ⋅ 𝑢𝑘−1
󵄨󵄨
󵄨󵄨
󵄨󵄨
ℓ+𝑁−𝑎
𝑁−1
𝑏−𝑎+𝑁−1
𝑏−𝑎+2𝑁−2 󵄨󵄨
󵄨󵄨
1 𝑢𝑘
⋅ ⋅ ⋅ 𝑢𝑘
𝑢𝑘
⋅ ⋅ ⋅ 𝑢𝑘
󵄨󵄨
󵄨󵄨
󵄨
ℓ+𝑁−𝑎
𝑁−1
𝑏−𝑎+𝑁−1
𝑏−𝑎+2𝑁−2 󵄨󵄨󵄨
0 𝑢𝑘+1
⋅ ⋅ ⋅ 𝑢𝑘+1 𝑢𝑘+1
⋅ ⋅ ⋅ 𝑢𝑘+1
󵄨󵄨
󵄨󵄨
󵄨󵄨
..
..
..
..
..
󵄨󵄨
.
.
.
.
.
󵄨󵄨
󵄨󵄨
ℓ+𝑁−𝑎
𝑁−1
𝑏−𝑎+𝑁−1
𝑏−𝑎+2𝑁−2 󵄨󵄨󵄨
0 𝑢2𝑁
⋅ ⋅ ⋅ 𝑢2𝑁 𝑢2𝑁
⋅ ⋅ ⋅ 𝑢2𝑁
󵄨
(253)
which plainly coincides with
󵄨󵄨1
󵄨󵄨
󵄨󵄨 .
󵄨󵄨 .
󵄨󵄨 .
󵄨󵄨
󵄨󵄨
󵄨󵄨1
󵄨󵄨
󵄨󵄨
󵄨󵄨0
󵄨󵄨
󵄨󵄨1
󵄨󵄨
󵄨󵄨 .
󵄨󵄨 .
󵄨󵄨󵄨 .
󵄨󵄨
󵄨󵄨1
󵄨
𝑢1
..
.
𝑢𝑘−1
0
𝑢𝑘+1
..
.
𝑢2𝑁
⋅ ⋅ ⋅ 𝑢1ℓ+𝑁−𝑎−2 𝑢1ℓ+𝑁−𝑎−1 𝑢1ℓ+𝑁−𝑎 ⋅ ⋅ ⋅ 𝑢1𝑁−1 𝑢1𝑏−𝑎+𝑁−1 ⋅ ⋅ ⋅ 𝑢1𝑏−𝑎+2𝑁−2 󵄨󵄨󵄨󵄨
󵄨󵄨
..
..
..
..
..
..
󵄨󵄨
󵄨󵄨
.
.
.
.
.
.
󵄨󵄨
ℓ+𝑁−𝑎−2
ℓ+𝑁−𝑎−1
ℓ+𝑁−𝑎
𝑁−1
𝑏−𝑎+𝑁−1
𝑏−𝑎+2𝑁−2 󵄨󵄨󵄨
⋅ ⋅ ⋅ 𝑢𝑘−1
𝑢𝑘−1
𝑢𝑘−1
⋅ ⋅ ⋅ 𝑢𝑘−1
𝑢𝑘−1
⋅ ⋅ ⋅ 𝑢𝑘−1
󵄨󵄨
󵄨󵄨
󵄨󵄨 .
⋅⋅⋅
0
1
0
⋅⋅⋅ 0
0
⋅⋅⋅
0
󵄨󵄨
ℓ+𝑁−𝑎−2
ℓ+𝑁−𝑎−1
ℓ+𝑁−𝑎
𝑁−1
𝑏−𝑎+𝑁−1
𝑏−𝑎+2𝑁−2 󵄨󵄨
󵄨󵄨
⋅ ⋅ ⋅ 𝑢𝑘+1
𝑢𝑘+1
𝑢𝑘+1
⋅ ⋅ ⋅ 𝑢𝑘+1 𝑢𝑘+1
⋅ ⋅ ⋅ 𝑢𝑘+1
󵄨󵄨
󵄨󵄨
..
..
..
..
..
..
󵄨󵄨
.
.
.
.
.
.
󵄨󵄨
󵄨
ℓ+𝑁−𝑎−2
ℓ+𝑁−𝑎−1
ℓ+𝑁−𝑎
𝑁−1
𝑏−𝑎+𝑁−1
𝑏−𝑎+2𝑁−2 󵄨󵄨󵄨
⋅ ⋅ ⋅ 𝑢2𝑁
𝑢2𝑁
𝑢2𝑁
⋅ ⋅ ⋅ 𝑢2𝑁 𝑢2𝑁
⋅ ⋅ ⋅ 𝑢2𝑁
󵄨
Therefore, we obtain that
𝑎
∑ 𝑈ℓ (𝑢1 , . . . , 𝑢2𝑁) 𝜁ℓ
ℓ=𝑎−𝑁+1
=
𝑎
∑
ℓ=𝑎−𝑁+1
2𝑁
( ∑ 𝑢𝑘𝑁−𝑎−1 𝑈𝑘ℓ (𝑢1 , . . . , 𝑢𝑘−1 , 𝑢𝑘+1 , . . . , 𝑢2𝑁)) 𝜁ℓ
𝑘=1
(254)
= 𝜁𝑎−𝑁+1
2𝑁
𝑎
𝑘=1
ℓ=𝑎−𝑁+1
× ∑ ( ∑ 𝑈𝑘ℓ (𝑢1 , . . . , 𝑢𝑘−1 , 𝑢𝑘+1 , . . . , 𝑢2𝑁) 𝜁ℓ+𝑁−𝑎−1 )
× V𝑘𝑎−𝑁+1 .
(255)
26
International Journal of Stochastic Analysis
Next, we can see that the foregoing sum within parentheses is
the expansion of the determinant 𝐷𝑘− (𝑧, 𝜁) below with respect
to its 𝑘th row (by putting back the variable 𝑧):
󵄨󵄨󵄨1 𝑢1 (𝑧) ⋅ ⋅ ⋅ 𝑢𝑁−1 (𝑧) 𝑢𝑏−𝑎+𝑁−1 (𝑧) ⋅ ⋅ ⋅ 𝑢𝑏−𝑎+2𝑁−2 (𝑧)󵄨󵄨󵄨
1
1
1
󵄨󵄨
󵄨󵄨
󵄨󵄨
󵄨󵄨 .
..
..
..
..
󵄨󵄨
󵄨󵄨 ..
.
.
.
.
󵄨󵄨
󵄨󵄨
󵄨󵄨
󵄨󵄨
𝑁−1
𝑏−𝑎+𝑁−1
𝑏−𝑎+2𝑁−2
󵄨󵄨
󵄨󵄨1 𝑢𝑘−1 (𝑧) ⋅ ⋅ ⋅ 𝑢
𝑢
⋅
⋅
⋅
𝑢
(𝑧)
(𝑧)
(𝑧)
𝑘−1
𝑘−1
𝑘−1
󵄨󵄨
󵄨󵄨
󵄨󵄨
󵄨󵄨
𝑁−1
󵄨󵄨 .
󵄨󵄨1
𝜁
⋅
⋅
⋅
𝜁
0
⋅
⋅
⋅
0
󵄨󵄨
󵄨󵄨
󵄨󵄨
󵄨󵄨
𝑁−1
𝑏−𝑎+𝑁−1
𝑏−𝑎+2𝑁−2
󵄨󵄨
󵄨󵄨1 𝑢𝑘+1 (𝑧) ⋅ ⋅ ⋅ 𝑢𝑘+1 (𝑧) 𝑢𝑘+1
⋅ ⋅ ⋅ 𝑢𝑘+1
(𝑧)
(𝑧)
󵄨󵄨
󵄨󵄨
..
..
..
..
󵄨󵄨󵄨
󵄨󵄨󵄨 ..
.
.
.
.
󵄨󵄨
󵄨󵄨 .
󵄨
󵄨󵄨
󵄨󵄨1 𝑢 (𝑧) ⋅ ⋅ ⋅ 𝑢𝑁−1 (𝑧) 𝑢𝑏−𝑎+𝑁−1 (𝑧) ⋅ ⋅ ⋅ 𝑢𝑏−𝑎+2𝑁−2 (𝑧)󵄨󵄨󵄨
󵄨
󵄨
2𝑁
2𝑁
2𝑁
2𝑁
(256)
As a result, by setting
we obtain that
E (𝑧𝜎𝑎𝑏 𝜁𝑆𝑎𝑏 1{𝜎𝑎𝑏 <+∞} ) =
1 2𝑁
𝑎−𝑁+1
.
∑ 𝐷 (𝑧, 𝜁) (V𝑘 (𝑧) 𝜁)
𝐷 (𝑧) 𝑘=1 𝑘
(262)
̃ 𝑘 (𝑧, 𝜁) = 𝐷𝑘 (𝑧, 𝜁)/𝐷(𝑧)
We observe that the polynomials 𝐿
with respect to the variable 𝜁 are of degree 𝑏 − 𝑎 + 2𝑁 − 2
̃ 𝑘 (𝑧, 𝑢𝑗 (𝑧)) = 𝛿𝑗𝑘 for all 𝑗 ∈ E
and satisfy the equalities 𝐿
and 𝑘 ∈ {1, . . . , 2𝑁}. Hence they can be expressed by means
of the Lagrange fundamental polynomials as displayed in
Theorem 39.
Example 40. For 𝑁 = 1, (248) reads
𝑎
̃ 1 (𝑧, 𝜁) (V1 (𝑧) 𝜁)
E (𝑧𝜎𝑎𝑏 𝜁𝑆𝑎𝑏 1{𝜎𝑎𝑏 <+∞} ) = 𝐿
𝐷 (𝑧)
= 𝑈 (𝑢1 (𝑧) , . . . , 𝑢2𝑁 (𝑧))
󵄨󵄨
󵄨
󵄨󵄨1 𝑢1 (𝑧) ⋅ ⋅ ⋅ 𝑢1𝑁−1 (𝑧) 𝑢1𝑏−𝑎+𝑁−1 (𝑧) ⋅ ⋅ ⋅ 𝑢1𝑏−𝑎+2𝑁−2 (𝑧)󵄨󵄨󵄨
󵄨󵄨 .
󵄨󵄨
..
..
..
..
󵄨󵄨,
= 󵄨󵄨󵄨󵄨 ..
󵄨󵄨
.
.
.
.
󵄨󵄨
󵄨
󵄨󵄨1 𝑢 (𝑧) ⋅ ⋅ ⋅ 𝑢𝑁−1 (𝑧) 𝑢𝑏−𝑎+𝑁−1 (𝑧) ⋅ ⋅ ⋅ 𝑢𝑏−𝑎+2𝑁−2 (𝑧)󵄨󵄨󵄨
󵄨
󵄨
2𝑁
2𝑁
2𝑁
2𝑁
(257)
we obtain that
E (𝑧𝜎𝑎𝑏 𝜁𝑆𝑎𝑏 1{𝑆𝑎𝑏 ≤𝑎,𝜎𝑎𝑏 <+∞} ) =
1 𝑁 −
𝑎−𝑁+1
∑ 𝐷 (𝑧, 𝜁)(V𝑘 (𝑧) 𝜁)
.
𝐷 (𝑧) 𝑘=1 𝑘
(258)
where
̃ 1 (𝑧, 𝜁) =
𝐿
𝑈 (𝜁, 𝑢2 (𝑧))
𝑢2 (𝑧)𝑏−𝑎 − 𝜁𝑏−𝑎
,
=
𝑈 (𝑢1 (𝑧) , 𝑢2 (𝑧)) 𝑢2 (𝑧)𝑏−𝑎 − 𝑢1 (𝑧)𝑏−𝑎
Since V1 (𝑧) = 1/𝑢1 (𝑧) and V2 (𝑧) = 1/𝑢2 (𝑧), and also 𝑢2 (𝑧) =
1/𝑢1 (𝑧),
1 𝑁 +
𝑎−𝑁+1
,
∑ 𝐷𝑘 (𝑧, 𝜁)(V𝑘 (𝑧) 𝜁)
𝐷 (𝑧) 𝑘=1
(259)
where 𝐷𝑘+ (𝑧, 𝜁) is the determinant
󵄨󵄨1 𝑢 (𝑧) ⋅ ⋅ ⋅ 𝑢𝑁−1 (𝑧) 𝑢𝑏−𝑎+𝑁−1 (𝑧)
󵄨󵄨
1
1
1
󵄨󵄨 .
..
..
..
󵄨󵄨 .
󵄨󵄨 .
.
.
.
󵄨󵄨
󵄨󵄨
𝑁−1
𝑏−𝑎+𝑁−1
󵄨󵄨1 𝑢𝑘−1 (𝑧) ⋅ ⋅ ⋅ 𝑢𝑘−1 (𝑧) 𝑢𝑘−1
(𝑧)
󵄨󵄨
󵄨󵄨
𝑏−𝑎+𝑁−1
󵄨󵄨0
0
⋅⋅⋅
0
𝜁
󵄨󵄨
󵄨󵄨
󵄨󵄨
𝑁−1
𝑏−𝑎+𝑁−1
󵄨󵄨1 𝑢𝑘+1 (𝑧) ⋅ ⋅ ⋅ 𝑢𝑘+1
(𝑧) 𝑢𝑘+1
(𝑧)
󵄨󵄨
󵄨󵄨 .
..
..
..
󵄨󵄨 .
󵄨󵄨 .
.
.
.
󵄨󵄨
󵄨󵄨󵄨1 𝑢 (𝑧) ⋅ ⋅ ⋅ 𝑢𝑁−1 (𝑧) 𝑢𝑏−𝑎+𝑁−1 (𝑧)
󵄨
2𝑁
2𝑁
2𝑁
⋅ ⋅ ⋅ 𝑢1𝑏−𝑎+2𝑁−2 (𝑧)󵄨󵄨󵄨󵄨
󵄨󵄨
..
󵄨󵄨
󵄨󵄨
.
󵄨󵄨
󵄨
𝑏−𝑎+2𝑁−2
⋅ ⋅ ⋅ 𝑢𝑘−1
(𝑧)󵄨󵄨󵄨
󵄨󵄨
󵄨
⋅ ⋅ ⋅ 𝜁𝑏−𝑎+2𝑁−2 󵄨󵄨󵄨󵄨 .
󵄨󵄨
󵄨󵄨
𝑏−𝑎+2𝑁−2
⋅ ⋅ ⋅ 𝑢𝑘+1
(𝑧)󵄨󵄨󵄨󵄨
󵄨󵄨
󵄨󵄨
..
󵄨󵄨
.
󵄨󵄨
󵄨󵄨
𝑏−𝑎+2𝑁−2
⋅ ⋅ ⋅ 𝑢2𝑁
(𝑧)󵄨󵄨󵄨
(260)
By adding (258) and (259) and setting
(264)
𝑈 (𝑢1 (𝑧) , 𝜁)
𝜁𝑏−𝑎 − 𝑢1 (𝑧)𝑏−𝑎
̃ 2 (𝑧, 𝜁) =
𝐿
.
=
𝑈 (𝑢1 (𝑧) , 𝑢2 (𝑧)) 𝑢2 (𝑧)𝑏−𝑎 − 𝑢1 (𝑧)𝑏−𝑎
E (𝑧𝜎𝑎𝑏 𝜁𝑆𝑎𝑏 1{𝜎𝑎𝑏 <+∞} ) =
Similarly, we could check that
E (𝑧𝜎𝑎𝑏 𝜁𝑆𝑎𝑏 1{𝑆𝑎𝑏 ≥𝑏,𝜎𝑎𝑏 <+∞} ) =
(263)
̃ 2 (𝑧, 𝜁) (V2 (𝑧) 𝜁)𝑎 ,
+𝐿
𝑢2 (𝑧)𝑏 − 𝑢1 (𝑧)𝑏
𝑢2 (𝑧)𝑏−𝑎 − 𝑢1 (𝑧)𝑏−𝑎
+
𝜁𝑎
𝑢2 (𝑧)𝑎 − 𝑢1 (𝑧)𝑎
𝑢2 (𝑧)𝑎−𝑏 − 𝑢1 (𝑧)𝑎−𝑏
(265)
𝑏
𝜁
from which we extract the well-known formulae related to the
case of an ordinary random walk:
E (𝑧𝜎𝑎𝑏 1{𝑆𝑎𝑏 =𝑎,𝜎𝑎𝑏 <+∞} ) =
𝜎𝑎𝑏
E (𝑧 1{𝑆𝑎𝑏 =𝑏,𝜎𝑎𝑏 <+∞} ) =
𝑢2 (𝑧)𝑏 − 𝑢1 (𝑧)𝑏
𝑢2 (𝑧)𝑏−𝑎 − 𝑢1 (𝑧)𝑏−𝑎
𝑢2 (𝑧)𝑎 − 𝑢1 (𝑧)𝑎
𝑢2 (𝑧)𝑎−𝑏 − 𝑢1 (𝑧)𝑎−𝑏
,
(266)
.
In particular, if 𝑐 = 1/2 (case of the classical random walk),
by Example 10, we have in the above formulae that
𝑢1 (𝑧) =
1 − √1 − 𝑧2
,
𝑧
𝑢2 (𝑧) =
1 + √1 − 𝑧2
.
𝑧
(267)
For 𝑁 = 2, (248) reads
E (𝑧𝜎𝑎𝑏 𝜁𝑆𝑎𝑏 1{𝜎𝑎𝑏 <+∞} )
𝐷𝑘 (𝑧, 𝜁) = 𝐷𝑘+ (𝑧, 𝜁) + 𝐷𝑘− (𝑧, 𝜁)
𝑎−1
= 𝑈 (𝑢1 (𝑧) , . . . , 𝑢𝑘−1 (𝑧) , 𝜁, 𝑢𝑘+1 (𝑧) , . . . , 𝑢2𝑁 (𝑧)) ,
(261)
̃ 1 (𝑧, 𝜁) (V1 (𝑧) 𝜁)
=𝐿
̃ 3 (𝑧, 𝜁) (V3 (𝑧) 𝜁)
+𝐿
𝑎−1
̃ 2 (𝑧, 𝜁) (V2 (𝑧) 𝜁)
+𝐿
𝑎−1
̃ 4 (𝑧, 𝜁) (V4 (𝑧) 𝜁)
+𝐿
𝑎−1
,
(268)
International Journal of Stochastic Analysis
27
Lemma 42. The following identities hold: for 𝑁 ≤ 𝑘 ≤ 2𝑁 − 1,
where
̃ 1 (𝑧, 𝜁) =
𝐿
𝑈 (𝜁, 𝑢2 (𝑧) , 𝑢3 (𝑧) , 𝑢4 (𝑧))
,
𝑈 (𝑢1 (𝑧) , 𝑢2 (𝑧) , 𝑢3 (𝑧) , 𝑢4 (𝑧))
̃ 2 (𝑧, 𝜁) =
𝐿
𝑈 (𝑢1 (𝑧) , 𝜁, 𝑢3 (𝑧) , 𝑢4 (𝑧))
,
𝑈 (𝑢1 (𝑧) , 𝑢2 (𝑧) , 𝑢3 (𝑧) , 𝑢4 (𝑧))
̃ 3 (𝑧, 𝜁) =
𝐿
𝑈 (𝑢1 (𝑧) , 𝑢2 (𝑧) , 𝜁, 𝑢4 (𝑧))
,
𝑈 (𝑢1 (𝑧) , 𝑢2 (𝑧) , 𝑢3 (𝑧) , 𝑢4 (𝑧))
̃ 4 (𝑧, 𝜁) =
𝐿
𝑈 (𝑢1 (𝑧) , 𝑢2 (𝑧) , 𝑢3 (𝑧) , 𝜁)
.
𝑈 (𝑢1 (𝑧) , 𝑢2 (𝑧) , 𝑢3 (𝑧) , 𝑢4 (𝑧))
𝑏+𝑁−1
ℓ+𝑁−𝑎−1
𝑁−𝑎−1
) 𝐻𝑎𝑏,ℓ (1) = (
),
∑ (
𝑘
𝑘
𝑎
∑
(269)
ℓ=0
𝑏−𝑎+2𝑁−2
∑
ℓ=𝑏−𝑎+𝑁−1
(274)
𝑧→1
Remark 41. By expanding the determinant 𝐷𝑘 (𝑧, 𝜁) with
respect to its 𝑘th raw, we obtain an expansion for the
̃ 𝜁) as a linear combination of 1, 𝜁, . . . , 𝜁𝑁−1 ,
polynomial 𝐿(𝑧,
𝑏−𝑎+𝑁−1 𝑏−𝑎+𝑁
𝜁
,𝜁
, . . . , 𝜁𝑏−𝑎+2𝑁−2 , that is, an expansion of the
form
𝑁−1
ℓ=𝑎−𝑁+1
𝑏+𝑁−1
𝑏+𝑁−1−ℓ
(
).
) 𝐻𝑎𝑏,ℓ (1) = (
𝑘
𝑘
Proof. By (85), we have the expansion 𝑢𝑗 (𝑧) = 1+𝜀𝑗 (𝑧), where
√1 − 𝑧) for any 𝑗 ∈ {1, . . . , 𝑁}. Actually such
𝜀𝑗 (𝑧) = − O( 2𝑁
̃ 𝑘 (𝑧, 𝜁), 1 ≤ 𝑘 ≤ 4, have the form
All the polynomials 𝐿
𝐴 𝑘,𝑎−1 (𝑧) + 𝐴 𝑘,𝑎 (𝑧)𝜁 + 𝐴 𝑘,𝑏 (𝑧)𝜁𝑏−𝑎+1 + 𝐴 𝑘,𝑏+1 (𝑧)𝜁𝑏−𝑎+2 .
̃ 𝑘 (𝑧, 𝜁) = ∑ 𝐴 𝑘,ℓ+𝑎−𝑁+1 (𝑧) 𝜁ℓ +
𝐿
(273)
ℓ=𝑏
asymptotics holds true for any 𝑗 ∈ {1, . . . , 2𝑁} because of the
equality 𝑢𝑗 = 1/𝑢𝑗−𝑁 for 𝑗 ∈ {𝑁 + 1, . . . , 2𝑁}. We put this
into systems (240) and (241). For doing this, it is convenient
to rewrite the latter as
∑ 𝐻𝑎𝑏,ℓ (𝑧) 𝑢𝑗 (𝑧)ℓ+𝑁−𝑎−1 = 𝑢𝑗 (𝑧)𝑁−𝑎−1 ,
1 ≤ 𝑗 ≤ 2𝑁,
∑ 𝐻𝑎𝑏,ℓ (𝑧) 𝑢𝑗 (𝑧)𝑏+𝑁−1−ℓ = 𝑢𝑗 (𝑧)𝑏+𝑁−1 ,
1 ≤ 𝑗 ≤ 2𝑁.
ℓ∈E
ℓ∈E
(275)
We obtain that
ℓ+𝑁−𝑎−1
∑ (1 + 𝜀𝑗 (𝑧))
𝐴 𝑘,ℓ+𝑎−𝑁+1 (𝑧) 𝜁ℓ
ℓ∈E
(276)
= (1 + 𝜀𝑗 (𝑧))
= 𝜁𝑁−𝑎−1 ∑ 𝐴 𝑘ℓ (𝑧) 𝜁ℓ .
ℓ∈E
ℓ∈E
Hence,
,
1 ≤ 𝑗 ≤ 2𝑁,
𝐻𝑎𝑏,ℓ (𝑧)
(277)
= (1 + 𝜀𝑗 (𝑧))
2𝑁
E (𝑧𝜎𝑎𝑏 𝜁𝑆𝑎𝑏 1{𝜎𝑎𝑏 <+∞} ) = ∑ ( ∑ 𝐴 𝑘ℓ (𝑧) 𝜁ℓ ) V𝑘 (𝑧)𝑎−𝑁+1
𝑁−𝑎−1
𝑏+𝑁−1−ℓ
∑ (1 + 𝜀𝑗 (𝑧))
(270)
𝐻𝑎𝑏,ℓ (𝑧)
𝑏+𝑁−1
,
1 ≤ 𝑗 ≤ 2𝑁.
System (276) writes
ℓ∈E
𝑘=1
2𝑁
𝑏−𝑎+2𝑁−2
𝑎−𝑁+1
= ∑ ( ∑ 𝐴 𝑘ℓ (𝑧) V𝑘 (𝑧)
ℓ∈E
∑
ℓ
)𝜁
∑
ℓ∈E:
ℓ≥𝑘+𝑎−𝑁+1
𝑘=0
𝑘=1
(
(271)
ℓ+𝑁−𝑎−1
(
) 𝐻𝑎𝑏,ℓ (𝑧)) 𝜀𝑗 (𝑧)𝑘
𝑘
𝑁−𝑎−1
𝑁−𝑎−1
= ∑ (
) 𝜀𝑗 (𝑧)𝑘 .
𝑘
from which we extract, for any ℓ ∈ E, that
𝜎𝑎𝑏
2𝑁
E (𝑧 1{𝑆𝑎𝑏 =ℓ,𝜎𝑎𝑏 <+∞} ) = ∑ 𝐴 𝑘ℓ (𝑧) 𝑢𝑘 (𝑧)
𝑘=0
(278)
𝑁−𝑎−1
.
(272)
𝑘=1
Actually, the foregoing sum comes from the quotient
𝑈ℓ (𝑢1 (𝑧), . . . , 𝑢2𝑁(𝑧))/𝑈(𝑢1 (𝑧), . . . , 𝑢2𝑁(𝑧)) given by (231)
by expanding the determinant 𝑈ℓ (𝑢1 (𝑧), . . . , 𝑢2𝑁(𝑧)) with
respect to the (ℓ + 𝑁 − 𝑎)th column or (ℓ + 𝑁 − 𝑏 + 1)th
column according as the number ℓ satisfies 𝑎 − 𝑁 + 1 ≤ ℓ ≤ 𝑎
or 𝑏 ≤ ℓ ≤ 𝑏 + 𝑁 − 1.
4.1.2. Pseudodistribution of 𝑆𝑎𝑏 . In order to derive the pseudodistribution of 𝑆𝑎𝑏 which is characterized by the numbers
𝐻𝑎𝑏,ℓ (1), ℓ ∈ E, we solve the systems obtained by taking the
limit in (262) as 𝑧 → 1− .
Set
𝑀𝑘 (𝑧) =
∑
ℓ∈E:
ℓ≥𝑘+𝑎−𝑁+1
ℓ+𝑁−𝑎−1
𝑁−𝑎−1
(
) 𝐻𝑎𝑏,ℓ (𝑧) − (
),
𝑘
𝑘
𝑅𝑗 (𝑧) = −
𝑏−𝑎+2𝑁−2
∑
𝑘=2𝑁
𝑀𝑘 (𝑧) 𝜀𝑗 (𝑧)𝑘 .
(279)
Then, equality (278) reads
2𝑁−1
∑ 𝑀𝑘 (𝑧) 𝜀𝑗 (𝑧)𝑘 = 𝑅𝑗 (𝑧) ,
𝑘=0
1 ≤ 𝑗 ≤ 2𝑁.
(280)
28
International Journal of Stochastic Analysis
This is a Vandermonde system which can be solved as in the
proof of Lemma 21 upon changing 𝑁 into 2𝑁. We can check
that lim𝑧 → 1− 𝑀𝑘 (𝑧) = 0, which entails that
∑
ℓ∈E:
ℓ≥𝑘+𝑎−𝑁+1
D+ = {(𝑢, V) ∈ R2 : 0 ≤ V ≤ 𝑢 ≤ 1} ,
Proof. We have to solve system (273) and (274). For (273), for
instance, the principal matrix and the right-hand side matrix
are
0 ≤ 𝑘 ≤ 2𝑁 − 1.
(281)
ℓ+𝑏−𝑎+𝑁−1
[(
,
)]
𝑘+𝑁
0≤𝑘,ℓ≤𝑁−1
Similarly, using (277), we can prove that
∑
ℓ∈E:
ℓ≤𝑏+𝑁−1−𝑘
−1
𝑁−𝑎−1
ℓ+𝑏−𝑎+𝑁−1
.
[(
[(
)]
)]
𝑘+𝑁
𝑘+𝑁
0≤𝑘,ℓ≤𝑁−1
0≤𝑘≤𝑁−1
(288)
0 ≤ 𝑗 ≤ 2𝑁 − 1.
(282)
Actually, we will only use (281) and (282) restricted to 𝑗 ∈
{𝑁, . . . , 2𝑁 − 1} which immediately yields system (273) and
(274).
Now, we state one of the most important result of this
work. We solve the famous problem of the “gambler’s ruin”
in the context of the pseudorandom walk.
Theorem 43. The pseudodistribution of 𝑆𝑎𝑏 is given, for ℓ ∈
{0, 1, . . . , 𝑁 − 1}, by
P {𝑆𝑎𝑏
(−1)ℓ (𝐾𝑁/ (ℓ − 𝑎)) ( 𝑁−1
ℓ )
,
= 𝑎 − ℓ, 𝜎𝑎𝑏 < +∞} =
ℓ+𝑏−𝑎+𝑁−1
(
)
𝑁
P {𝑆𝑎𝑏 = 𝑏 + ℓ, 𝜎𝑎𝑏 < +∞} =
The computation of this product being quite fastidious,
we postponed it to Appendix A.3. The result is given by
Theorem A.8:
[
(−1)ℓ (𝐾𝑁/ (ℓ + 𝑏)) ( 𝑁−1
ℓ )
.
]
ℓ+𝑏−𝑎+𝑁−1
(
)
𝑁
0≤ℓ≤𝑁−1
P {𝜎𝑏+ < 𝜎𝑎− }
= P {𝜎𝑏+ < 𝜎𝑎− , 𝜎𝑎𝑏 < +∞} = P {𝑆𝑎𝑏 ≥ 𝑏, 𝜎𝑎𝑏 < +∞}
(−1)ℓ (𝐾𝑁/ (ℓ + 𝑏)) ( 𝑁−1
ℓ )
,
( ℓ+𝑏−𝑎+𝑁−1
)
𝑁
(283)
(290)
𝑁−1
= ∑ P {𝑆𝑎𝑏 = 𝑏 + ℓ, 𝜎𝑎𝑏 < +∞}
ℓ=0
1
Noticing that 1/(ℓ + 𝑏) = ∫0 𝑦ℓ+𝑏−1 d𝑦 and 1/ ( ℓ+𝑏−𝑎+𝑁−1
)=
𝑁
1
𝑁 ∫0 𝑥ℓ+𝑏−𝑎−1 (1 − 𝑥)𝑁−1 d𝑥, we get that
(−1)ℓ (𝑁/ (ℓ + 𝑏)) ( 𝑁−1
ℓ )
ℓ+𝑏−𝑎+𝑁−1
(
)
𝑁
ℓ=0
𝑁−1
Moreover, P{𝜎𝑎𝑏 < +∞} = 1 and
∑
P {𝜎𝑏+ < 𝜎𝑎− }
= 𝐾𝑁2 ∬
𝑁−1
D+
𝑢−𝑎−1 (1 − 𝑢)𝑁−1 V𝑏−1 (1 − V)𝑁−1 d𝑢 dV,
1 1
𝑁−1
ℓ
= 𝑁2 ∫ ∫ ( ∑ (−1)ℓ (
) (𝑥𝑦) )
ℓ
0 0
ℓ=0
× 𝑥𝑏−𝑎−1 (1 − 𝑥)𝑁−1 𝑦𝑏−1 d𝑥 d𝑦
P {𝜎𝑎− < 𝜎𝑏+ }
= 𝐾𝑁2 ∬
(291)
(−1)ℓ (𝑁/ (ℓ + 𝑏)) ( 𝑁−1
ℓ )
.
= 𝐾∑
ℓ+𝑏−𝑎+𝑁−1
(
)
𝑁
ℓ=0
𝑁−1
𝑁−𝑎−1 𝑁+𝑏−1
𝐾=(
)(
)
𝑁
𝑁
(−1)𝑁
𝑎 (𝑎 − 1) ⋅ ⋅ ⋅ (𝑎 − 𝑁 + 1) 𝑏 (𝑏 + 1) ⋅ ⋅ ⋅ (𝑏 + 𝑁 − 1) .
𝑁!
(284)
(289)
The entries of this matrix provide the pseudo-probabilities
P{𝑆𝑎𝑏 = 𝑏 + ℓ, 𝜎𝑎𝑏 < +∞}, 0 ≤ ℓ ≤ 𝑁 − 1, which are
exhibited in Theorem 43. The analogous formula for P{𝑆𝑎𝑏 =
𝑎 − ℓ, 𝜎𝑎𝑏 < +∞} holds true in the same way.
Next, by observing that 𝜎𝑎𝑏 = min(𝜎𝑎− , 𝜎𝑏+ ) and that {𝜎𝑎𝑏 <
+∞} = {𝜎𝑎− < +∞} ∪ {𝜎𝑏+ < +∞}, we have that
where
=
𝑁−𝑎−1
[(
)]
𝑘+𝑁
0≤𝑘≤𝑁−1
(287)
and the matrix form of the solution is given by
𝑏+𝑁−ℓ−1
(
) 𝐻𝑎𝑏,ℓ (1)
𝑘
𝑁+𝑏−1
=(
),
𝑘
(286)
D− = {(𝑢, V) ∈ R2 : 0 ≤ 𝑢 ≤ V ≤ 1} .
ℓ+𝑁−𝑎−1
(
) 𝐻𝑎𝑏,ℓ (1)
𝑘
𝑁−𝑎−1
=(
),
𝑘
where
D−
𝑢−𝑎−1 (1 − 𝑢)𝑁−1 V𝑏−1 (1 − V)𝑁−1 d𝑢 dV,
(285)
1
1
0
0
𝑁−1
= 𝑁2 ∫ ∫ 𝑥𝑏−𝑎−1 (1 − 𝑥)𝑁−1 𝑦𝑏−1 (1 − 𝑥𝑦)
d𝑥 d𝑦.
(292)
International Journal of Stochastic Analysis
29
The computations can be pursued by performing the change
of variables (𝑢, V) = (𝑥, 𝑥𝑦) in the above integral:
downward jumps. Hence, the exit place must be either 𝑎 − 1,
𝑎, 𝑏 or 𝑏 + 1: 𝑆𝑎𝑏 ∈ {𝑎 − 1, 𝑎, 𝑏, 𝑏 + 1}. We have that
(−1)ℓ (𝑁/ (ℓ + 𝑏)) ( 𝑁−1
ℓ )
ℓ+𝑏−𝑎+𝑁−1
(
)
𝑁
ℓ=0
𝑁−1
P {𝑆𝑎𝑏 = 𝑎 − 1} =
∑
= 𝑁2 ∬
D+
P {𝑆𝑎𝑏 = 𝑎} = −
𝑢−𝑎−1 (1 − 𝑢)𝑁−1 V𝑏−1 (1 − V)𝑁−1 d𝑢 dV.
(293)
P {𝑆𝑎𝑏 = 𝑏} =
Putting (293) into (291) yields the expression of P{𝜎𝑏+ < 𝜎𝑎− }
displayed in Theorem 43. The similar expression for P{𝜎𝑎− <
𝜎𝑏+ } holds true. Finally,
P {𝜎𝑎𝑏 < +∞}
= P {𝜎𝑎− < 𝜎𝑏+ , 𝜎𝑎𝑏 < +∞} + P {𝜎𝑏+ < 𝜎𝑎− , 𝜎𝑎𝑏 < +∞}
= P {𝜎𝑎− < 𝜎𝑏+ } + P {𝜎𝑏+ < 𝜎𝑎− }
1
1
0
0
= 𝐾𝑁2 ∫ ∫ 𝑢−𝑎−1 (1 − 𝑢)𝑁−1 V𝑏−1 (1 − V)𝑁−1 d𝑢 dV.
(294)
The foregoing integral is quite elementary:
1
1
0
0
1
1
0
0
= ∫ 𝑢−𝑎−1 (1 − 𝑢)𝑁−1 d𝑢 ∫ V𝑏−1 (1 − V)𝑁−1 dV
P {𝑆𝑎𝑏
1
𝑁2
𝑏+𝑁−1
( 𝑁−𝑎−1
𝑁 )( 𝑁 )
=
𝑏 (𝑏 + 1) (𝑏 − 3𝑎 + 2)
,
(𝑏 − 𝑎) (𝑏 − 𝑎 + 1) (𝑏 − 𝑎 + 2)
P {𝜎𝑏+ < 𝜎𝑎− } =
𝑎 (𝑎 − 1) (3𝑏 − 𝑎 + 2)
.
(𝑏 − 𝑎) (𝑏 − 𝑎 + 1) (𝑏 − 𝑎 + 2)
(iii) Case 𝑁 = 3. In this case 𝑆𝑎𝑏 ∈ {𝑎−2, 𝑎−1, 𝑎, 𝑏, 𝑏+1, 𝑏+2}
and
(295)
P {𝑆𝑎𝑏 = 𝑎} =
1
,
𝐾𝑁2
𝑎 (𝑎 − 1) 𝑏 (𝑏 + 1) (𝑏 + 2)
,
(𝑏 − 𝑎 + 2) (𝑏 − 𝑎 + 3) (𝑏 − 𝑎 + 4)
𝑎 (𝑎 − 1) (𝑎 − 2) (𝑏 + 1) (𝑏 + 2)
,
(𝑏 − 𝑎) (𝑏 − 𝑎 + 1) (𝑏 − 𝑎 + 2)
𝑎 (𝑎 − 1) (𝑎 − 2) 𝑏 (𝑏 + 2)
,
(𝑏 − 𝑎 + 1) (𝑏 − 𝑎 + 2) (𝑏 − 𝑎 + 3)
which entails that P{𝜎𝑎𝑏 < +∞} = 1.
P {𝑆𝑎𝑏 = 𝑏 + 1} =
In the sequel, when considering 𝑆𝑎𝑏 , we will omit the
condition 𝜎𝑎𝑏 < +∞.
P {𝑆𝑎𝑏 = 𝑏 + 2} = −
𝑏
P {𝑆𝑎𝑏 = 𝑎} =
<
=
,
𝑏−𝑎
𝑎
P {𝑆𝑎𝑏 = 𝑏} = P {𝜎𝑏+ < 𝜎𝑎− } = −
.
𝑏−𝑎
P {𝜎𝑎−
𝑎 (𝑎 − 2) 𝑏 (𝑏 + 1) (𝑏 + 2)
,
−
(𝑏 𝑎 + 1) (𝑏 − 𝑎 + 2) (𝑏 − 𝑎 + 3)
(𝑎 − 1) (𝑎 − 2) 𝑏 (𝑏 + 1) (𝑏 + 2)
,
(𝑏 − 𝑎) (𝑏 − 𝑎 + 1) (𝑏 − 𝑎 + 2)
P {𝑆𝑎𝑏 = 𝑏} = −
𝑎 (𝑎 − 1) (𝑎 − 2) 𝑏 (𝑏 + 1)
,
(𝑏 − 𝑎 + 2) (𝑏 − 𝑎 + 3) (𝑏 − 𝑎 + 4)
P {𝜎𝑏− < 𝜎𝑎+ }
=
(i) Case 𝑁 = 1. In this case 𝑆𝑎𝑏 ∈ {𝑎, 𝑏} and
(297)
We can easily check that P{𝑆𝑎𝑏 = 𝑎}+P{𝑆𝑎𝑏 = 𝑎−1}+P{𝑆𝑎𝑏 =
𝑏}+P{𝑆𝑎𝑏 = 𝑏+1} = 1 as well as P{𝜎𝑎− < 𝜎𝑏+ }+P{𝜎𝑏+ < 𝜎𝑎− } = 1.
P {𝑆𝑎𝑏 = 𝑎 − 1} = −
Example 44. Let us have a look on the particular values 1, 2, 3
of 𝑁.
𝑎 (𝑎 − 1) (𝑏 + 1)
,
(𝑏 − 𝑎) (𝑏 − 𝑎 + 1)
P {𝜎𝑎− < 𝜎𝑏+ } =
= 𝐵 (𝑁, −𝑎) 𝐵 (𝑁, 𝑏)
=
(𝑎 − 1) 𝑏 (𝑏 + 1)
,
(𝑏 − 𝑎) (𝑏 − 𝑎 + 1)
𝑎 (𝑎 − 1) 𝑏
,
= 𝑏 + 1} = −
(𝑏 − 𝑎 + 1) (𝑏 − 𝑎 + 2)
P {𝑆𝑎𝑏 = 𝑎 − 2} =
∫ ∫ 𝑢−𝑎−1 (1 − 𝑢)𝑁−1 V𝑏−1 (1 − V)𝑁−1 d𝑢 dV
𝑎𝑏 (𝑏 + 1)
,
(𝑏 − 𝑎 + 1) (𝑏 − 𝑎 + 2)
𝑏 (𝑏 + 1) (𝑏 + 2) (10𝑎2 − 5𝑎𝑏 + 𝑏2 − 25𝑎 + 7𝑏 + 12)
(𝑏 − 𝑎) (𝑏 − 𝑎 + 1) (𝑏 − 𝑎 + 2) (𝑏 − 𝑎 + 3) (𝑏 − 𝑎 + 4)
,
P {𝜎𝑏+ < 𝜎𝑎− }
𝜎𝑏+ }
(296)
We retrieve one of the most well-known and important
results for the ordinary random walk: this is the famous
problem of the gambler’s ruin!
(ii) Case 𝑁 = 2. In this case, the pseudorandom variables
𝜉𝑛 , 𝑛 ∈ N∗ , have two-valued upward jumps and two-valued
=−
𝑎 (𝑎 − 1) (𝑎 − 2) (𝑎2 − 5𝑎𝑏 + 10𝑏2 − 7𝑎 + 25𝑏 + 12)
.
(𝑏 − 𝑎) (𝑏 − 𝑎 + 1) (𝑏 − 𝑎 + 2) (𝑏 − 𝑎 + 3) (𝑏 − 𝑎 + 4)
(298)
4.1.3. Pseudomoments of 𝑆𝑎𝑏 . Let us recall the notation we
previously introduced in Section 3.1.3: (𝑖)𝑛 = 𝑖(𝑖 − 1)(𝑖 −
2) ⋅ ⋅ ⋅ (𝑖 − 𝑛 + 1) for any 𝑖 ∈ Z and any 𝑛 ∈ N∗ and (𝑖)0 = 1, as
well as the conventions 1/𝑖! = 0 for any negative integer 𝑖 and
𝑗
∑𝑘=𝑖 = 0 if 𝑖 > 𝑗.
30
International Journal of Stochastic Analysis
In this section, we compute several functionals related
to the pseudomoments of 𝑆𝑎𝑏 . Namely, we provide formulae
for E[𝑆𝑎𝑏 (𝑆𝑎𝑏 − 𝑏)𝑛−1 ] (Theorem 46), E[(𝑆𝑎𝑏 )𝑛 ] (Corollary 47),
and E[(𝑆𝑎𝑏 − 𝑏)𝑛 ] (Theorem 48). This schedule may seem
surprising; actually, we have been able to carry out the
calculations by following this chronology.
1
) = 𝑁 ∫0 𝑥ℓ+𝑏−𝑎−1 (1 −
Putting the identities 1/ ( ℓ+𝑏−𝑎+𝑁−1
𝑁
1
𝑥)𝑁−1 d𝑥 and 1/(ℓ + 𝑏) = ∫0 𝑦ℓ+𝑏−1 d𝑦 into the equality
E [𝑓 (𝑆𝑎𝑏 ) 1{𝑆𝑎𝑏 ≥𝑏} ]
E [𝑆𝑎𝑏 (𝑆𝑎𝑏 − 𝑏)𝑛−1 1{𝑆𝑎𝑏 ≥𝑏} ]
(−1)𝑛−1 (𝐾𝑁/ (2𝑁 − 𝑛)) (𝑁!/ (𝑁 − 𝑛)!)
{
{
{
( 2𝑁+𝑏−𝑎−2
2𝑁−𝑛 )
={
{
if 1 ≤ 𝑛 ≤ 𝑁,
{
if 𝑛 ≥ 𝑁 + 1,
{0
(304)
E [𝑆𝑎𝑏 (𝑆𝑎𝑏 − 𝑏)𝑛−1 1{𝑆𝑎𝑏 ≤𝑎} ]
(−1)ℓ (𝑓 (ℓ + 𝑏) / (ℓ + 𝑏)) ( 𝑁−1
ℓ )
,
ℓ+𝑏−𝑎+𝑁−1 )
(
𝑁
ℓ=0
𝑁−1
= 𝐾𝑁 ∑
(299)
we immediately get the following integral representations
for E[𝑓(𝑆𝑎𝑏 )1{𝑆𝑎𝑏 ≥𝑏} ], and the analogous ones hold true for
E[𝑓(𝑆𝑎𝑏 )1{𝑆𝑎𝑏 ≤𝑎} ].
Theorem 45. For any function 𝑓 defined on E,
E [𝑓 (𝑆𝑎𝑏 ) 1{𝑆𝑎𝑏 ≥𝑏} ]
1
Theorem 46. For any positive integer 𝑛,
(−1)𝑛 (𝐾𝑁/ (2𝑁 − 𝑛)) (𝑁!/ (𝑁 − 𝑛)!)
{
{
{
{
( 2𝑁+𝑏−𝑎−2
{
2𝑁−𝑛 )
{
{
{
if 1 ≤ 𝑛 ≤ 𝑁,
{
={0
if 𝑁 + 1 ≤ 𝑛 ≤ 2𝑁 − 1,
{
{
𝑛+𝑏−𝑎−2
{
{
{
)
(−1)𝑁+𝑛−1 𝐾𝑁𝑁! (𝑛 − 𝑁 − 1)! (
{
𝑛 − 2𝑁
{
{
if 𝑛 ≥ 2𝑁.
{
(305)
In particular, for any 𝑛 ∈ {1, . . . , 2𝑁 − 1},
𝑁−1
𝑁 − 1 𝑓 (ℓ + 𝑏) ℓ
= 𝐾𝑁2 ∫ [ ∑ (−1)ℓ (
𝑥]
)
ℓ
ℓ+𝑏
0
E [𝑆𝑎𝑏 (𝑆𝑎𝑏 − 𝑏)𝑛−1 ] = 0.
ℓ=0
× 𝑥𝑏−𝑎−1 (1 − 𝑥)𝑁−1 d𝑥
(300)
𝑁−1
1 1
𝑁−1
ℓ
= 𝐾𝑁2 ∫ ∫ [ ∑ (−1)ℓ (
) 𝑓 (ℓ + 𝑏) (𝑥𝑦) ]
ℓ
0 0
ℓ=0
(306)
Proof. By (300), we get that
E [𝑓𝑛 (𝑆𝑎𝑏 ) 1{𝑆𝑎𝑏 ≥𝑏} ]
𝑁−1
1
𝑁−1
= 𝐾𝑁2 ∫ [ ∑ (−1)ℓ (
) (ℓ)𝑛−1 𝑥ℓ+𝑏−𝑎−1 ] (307)
ℓ
0
× 𝑥𝑏−𝑎−1 (1 − 𝑥)𝑁−1 𝑦𝑏−1 d𝑥 d𝑦,
(301)
ℓ=0
E [𝑓 (𝑆𝑎𝑏 ) 1{𝑆𝑎𝑏 ≥𝑎} ]
× (1 − 𝑥)𝑁−1 d𝑥.
1 𝑁−1
𝑁 − 1 𝑓 (𝑎 − ℓ) ℓ
= 𝐾𝑁2 ∫ [ ∑ (−1)ℓ (
𝑥]
)
ℓ
ℓ−𝑎
0
ℓ=0
By noticing that (ℓ)𝑛−1 = 0 for ℓ ∈ {0, . . . , 𝑛 − 2} and
𝑁−𝑛
( 𝑁−1
ℓ ) (ℓ)𝑛−1 = ( ℓ−𝑛+1 ) (𝑁 − 1)𝑛−1 for ℓ ≥ 𝑛 − 1, we obtain
that
× 𝑥𝑏−𝑎−1 (1 − 𝑥)𝑁−1 d𝑥
(302)
1
1
𝑁−1
𝑁−1
𝑁−1
ℓ
= 𝐾𝑁 ∫ ∫ [ ∑ (−1) (
) 𝑓 (𝑎 − ℓ) (𝑥𝑦) ]
ℓ
0 0
2
ℓ
ℓ=0
𝑁−1
) (ℓ)𝑛−1 𝑥ℓ
∑ (−1)ℓ (
ℓ
ℓ=0
𝑁−1
× 𝑥𝑏−𝑎−1 (1 − 𝑥)𝑁−1 𝑦−𝑎−1 d𝑥 d𝑦.
(303)
In view of (300) and (302) and in order to compute
the pseudomoments of 𝑆𝑎𝑏 , it is convenient to introduce the
function 𝑓𝑛 defined by 𝑓𝑛 (𝑖) = 𝑖(𝑖−𝑏)𝑛−1 for any integers 𝑖 and
𝑛 such that 𝑛 ≥ 1. In particular, 𝑓1 (𝑖) = 𝑖. We immediately
see that, by choosing 𝑓 = 𝑓1 in Theorem 45, quantities (300)
and (302) are opposite. As a byproduct, E[𝑆𝑎𝑏 ] = 0. More
generally, we have the results below.
𝑁−𝑛
= (𝑁 − 1)𝑛−1 1{1≤𝑛≤𝑁} ∑ (−1)ℓ (
) 𝑥ℓ
ℓ−𝑛+1
ℓ=𝑛−1
= (𝑁 − 1)𝑛−1 1{1≤𝑛≤𝑁} 𝑥
𝑛−1
𝑁−𝑛
ℓ+𝑛−1
∑ (−1)
ℓ=0
(𝑁 − 1)! 𝑛−1
{(−1)𝑛−1
𝑥 (1 − 𝑥)𝑁−𝑛
={
(𝑁 − 𝑛)!
{0
𝑁−𝑛 ℓ
(
)𝑥
ℓ
if 1 ≤ 𝑛 ≤ 𝑁,
if 𝑛 ≥ 𝑁 + 1.
(308)
International Journal of Stochastic Analysis
31
Hence, if 1 ≤ 𝑛 ≤ 𝑁,
Therefore,
E [𝑓𝑛 (𝑆𝑎𝑏 ) 1{𝑆𝑎𝑏 ≥𝑏} ]
= (−1)𝑛−1 𝐾𝑁2
E [𝑓𝑛 (𝑆𝑎𝑏 ) 1{𝑆𝑎𝑏 ≤𝑎} ]
(𝑁 − 1)! 1 𝑛+𝑏−𝑎−2
∫ 𝑥
(1 − 𝑥)2𝑁−𝑛−1 d𝑥
(𝑁 − 𝑛)! 0
𝐾𝑁𝑁! (𝑛 + 𝑏 − 𝑎 − 2)! (2𝑁 − 𝑛 − 1)!
= (−1)𝑛−1
(𝑁 − 𝑛)!
(2𝑁 + 𝑏 − 𝑎 − 2)!
×
E [𝑓𝑛 (𝑆𝑎𝑏 ) 1{𝑆𝑎𝑏 ≤𝑎} ]
𝑁−1
= ∑ P {𝑆𝑎𝑏 = 𝑎 − ℓ} 𝑓𝑛 (𝑎 − ℓ)
= (−1)𝑛 𝐾𝑁𝑁!
(−1)ℓ−1 (𝑎 − 𝑏 − ℓ)𝑛−1 ( 𝑁−1
ℓ )
ℓ+𝑏−𝑎+𝑁−1 )
(
𝑁
ℓ=0
𝑁−1
= 𝐾𝑁 ∑
d
d𝑥𝑛−𝑁−1
󵄨󵄨
󵄨
(𝑥𝑛+𝑏−𝑎−2 (1 − 𝑥)𝑁−1 )󵄨󵄨󵄨󵄨 .
󵄨󵄨𝑥=1
(314)
If 𝑁 + 1 ≤ 𝑛 ≤ 2𝑁 − 1, the above derivative vanishes
since 1 is a root of multiplicity 𝑁 − 1 of the polynomial
𝑥ℓ+𝑛+𝑏−𝑎−2 (1−𝑥)𝑁−1 . Finally, for 𝑛 ≥ 2𝑁, we appeal to Leibniz
rule for evaluating the derivative of interest:
𝑛−𝑁−1
𝑛−𝑁−1
= ∑ (−1)𝑘 (
)
𝑘
𝑁 − 1 (ℓ + 𝑏 − 𝑎 + 𝑛 − 2)!
= (−1) 𝐾𝑁𝑁! ∑ (−1) (
.
)
ℓ
(ℓ + 𝑏 − 𝑎 + 𝑁 − 1)!
𝑘=0
ℓ
×
ℓ=0
(310)
(𝑛 + 𝑏 − 𝑎 − 2)!
(𝑁 − 1)𝑘 𝛿𝑘,𝑁−1
(𝑘 + 𝑁 + 𝑏 − 𝑎 − 1)!
= (−1)𝑁−1 (𝑁 − 1)!
For 𝑛 ≤ 𝑁, we can write that
(ℓ + 𝑏 − 𝑎 + 𝑛 − 2)!
(ℓ + 𝑏 − 𝑎 + 𝑁 − 1)!
=
𝑛−𝑁−1
󵄨󵄨
d𝑛−𝑁−1
𝑛+𝑏−𝑎−2
𝑁−1 󵄨󵄨
󵄨󵄨
−
𝑥)
(𝑥
)
(1
󵄨󵄨
d𝑥𝑛−𝑁−1
󵄨𝑥=1
ℓ=0
𝑁−1
󵄨󵄨
d𝑛−𝑁−1 𝑁−1
ℓ 𝑁−1
ℓ+𝑛+𝑏−𝑎−2 󵄨󵄨󵄨
(
(
)
)
𝑥
∑
(−1)
󵄨󵄨
ℓ
󵄨󵄨
d𝑥𝑛−𝑁−1 ℓ=0
󵄨
𝑥=1
(309)
and we arrive at (304). Moreover, if 𝑛 ≥ 𝑁 + 1 and 𝑆𝑎𝑏 ≥ 𝑏,
we have 𝑆𝑎𝑏 ∈ {𝑏, 𝑏 + 1,. . . , 𝑏 + 𝑁 − 1}. Then, it is clear that
𝑓𝑛 (𝑆𝑎𝑏 ) = 0 and (304) still holds in this case.
On the other hand, by Theorem 43, we get that
𝑛
= (−1)𝑛 𝐾𝑁𝑁!
(𝑛 + 𝑏 − 𝑎 − 2)! 𝑛 − 𝑁 − 1
(
)
𝑁−1
(2𝑁 + 𝑏 − 𝑎 − 2)!
𝑛+𝑏−𝑎−2
).
= (−1)𝑁−1 (𝑛 − 𝑁 − 1)! (
𝑛 − 2𝑁
1
1
(𝑁 − 𝑛)! (ℓ + 𝑏 − 𝑎 + 𝑁 − 1) ( ℓ+𝑏−𝑎+𝑁−2
)
𝑁−𝑛
(315)
(311)
This proves (305) in this case.
1
1
=
∫ 𝑥ℓ+𝑛+𝑏−𝑎−2 (1 − 𝑥)𝑁−𝑛 d𝑥.
(𝑁 − 𝑛)! 0
Corollary 47. For 𝑛 ∈ {1, . . . , 2𝑁 − 1}, the pseudo-moment of
𝑆𝑎𝑏 of order 𝑛 vanishes:
Then,
𝑛
E [(𝑆𝑎𝑏 ) ] = 0.
E [𝑓𝑛 (𝑆𝑎𝑏 ) 1{𝑆𝑎𝑏 ≤𝑎} ]
= (−1)𝑛
Moreover,
𝐾𝑁𝑁!
(𝑁 − 𝑛)!
2𝑁
E [(𝑆𝑎𝑏 ) ]
𝑁−1
1
𝑁 − 1 ℓ+𝑛+𝑏−𝑎−2
× ∫ ∑ (−1)ℓ (
)𝑥
(1 − 𝑥)𝑁−𝑛 d𝑥
ℓ
0
ℓ=0
= (−1)𝑛
= (−1)𝑛
(316)
𝐾𝑁𝑁! 1 𝑛+𝑏−𝑎−2
∫ 𝑥
(1 − 𝑥)2𝑁−𝑛−1 d𝑥
(𝑁 − 𝑛)! 0
𝐾𝑁𝑁!
(𝑛 + 𝑏 − 𝑎 − 2)! (2𝑁 − 𝑛 − 1)!
×
(𝑁 − 𝑛)!
(2𝑁 + 𝑏 − 𝑎 − 2)!
(312)
which proves (305). For 𝑛 ≥ 𝑁 + 1, we write instead that
𝑛−𝑁−1
d
(ℓ + 𝑏 − 𝑎 + 𝑛 − 2)!
=
d𝑥𝑛−𝑁−1
(ℓ + 𝑏 − 𝑎 + 𝑁 − 1)!
󵄨󵄨
󵄨
(𝑥ℓ+𝑛+𝑏−𝑎−2 )󵄨󵄨󵄨󵄨 .
󵄨󵄨𝑥=1
(313)
= −𝑎 (𝑎 − 1) ⋅ ⋅ ⋅ (𝑎 − 𝑁 + 1) 𝑏 (𝑏 + 1) ⋅ ⋅ ⋅ (𝑏 + 𝑁 − 1) .
(317)
Proof. As in the proof of Theorem 27, we appeal to the
following argument: the polynomial 𝑋𝑛 is a linear combination of 𝑓1 (𝑋), . . . , 𝑓𝑛 (𝑋). Then E[(𝑆𝑎𝑏 )𝑛 ] can be written
as a linear combination of E[𝑓1 (𝑆𝑎𝑏 )], . . . , E[𝑓𝑛 (𝑆𝑎𝑏 )] which
vanish when 1 ≤ 𝑛 ≤ 2𝑁 − 1. Thus, E[(𝑆𝑎𝑏 )𝑛 ] = 0. The same
argument entails that
2𝑁
E [(𝑆𝑎𝑏 ) ] = E [𝑓2𝑁 (𝑆𝑎𝑏 )] = (−1)𝑁−1 𝐾𝑁!2
which proves (317).
(318)
32
International Journal of Stochastic Analysis
In the following theorem, we provide an integral representation for certain factorial pseudomoments of (𝑆𝑎𝑏 −𝑎) and
(𝑆𝑎𝑏 − 𝑏) which will be used in the next section.
The sum lying in the above integral can be easily calculated:
𝑁−1
𝑁−1
ℓ
) (ℓ)𝑛 (𝑥𝑦)
∑ (−1)ℓ (
ℓ
ℓ=0
Theorem 48. For any integer 𝑛 ∈ {0, . . . , 𝑁 − 1},
𝑁−1
𝑁−𝑛−1
ℓ
= (𝑁 − 1)𝑛 ∑ (−1)ℓ (
) (𝑥𝑦)
ℓ−𝑛
E [(𝑆𝑎𝑏 − 𝑏)𝑛 1{𝑆𝑎𝑏 ≥𝑏} ]
(324)
ℓ=𝑛
𝑛
𝑁−𝑛−1
= (−1)𝑛 (𝑁 − 1)𝑛 (𝑥𝑦) (1 − 𝑥𝑦)
= (−1)𝑛 𝐾𝑁2 (𝑁 − 1)𝑛
.
Hence,
×∬
D+
𝑢
−𝑎−1
𝑁−1 𝑛+𝑏−1
(1 − 𝑢)
V
𝑁−𝑛−1
(1 − V)
d𝑢 dV,
(319)
= (−1)𝑛 (𝑁 − 1)𝑛 𝐾𝑁2
E [(𝑎 − 𝑆𝑎𝑏 )𝑛 1{𝑆𝑎𝑏 ≤𝑎} ]
𝑛
D−
1
1
0
0
𝑁−𝑛−1
× ∫ ∫ 𝑥𝑛+𝑏−𝑎−1 (1 − 𝑥)𝑁−1 𝑦𝑛+𝑏−1 (1 − 𝑥𝑦)
2
= (−1) 𝐾𝑁 (𝑁 − 1)𝑛
×∬
E [(𝑆𝑎𝑏 − 𝑏)𝑛 1{𝑆𝑎𝑏 ≥𝑏} ]
𝑢𝑛−𝑎−1 (1 − 𝑢)𝑁−𝑛−1 V𝑏−1 (1 − V)𝑁−1 d𝑢 dV.
(320)
The above identities can be rewritten as
d𝑥 d𝑦.
(325)
Performing the change of variables (𝑢, V) = (𝑥, 𝑥𝑦) in the
foregoing integral immediately yields (319). Formula (320)
can be deduced from (303) exactly in the same way.
4.2. Link with High-Order Finite-Difference Equations. Set
Δ+ 𝑓(𝑖) = 𝑓(𝑖 + 1) − 𝑓(𝑖) and Δ− 𝑓(𝑖) = 𝑓(𝑖) − 𝑓(𝑖 − 1) for
Δ+ ∘ ⋅ ⋅ ⋅ ∘ Δ+ and (Δ− )𝑗 = ⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟
Δ− ∘ ⋅ ⋅ ⋅ ∘ Δ−
any 𝑖 ∈ Z and (Δ+ )𝑗 = ⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟
𝑆 −𝑏
E [( 𝑎𝑏
) 1{𝑆𝑎𝑏 ≥𝑏} ]
𝑛
𝑗 times
𝑗 times
for any 𝑗 ∈ N∗ . Set also (Δ+ )0 𝑓 = (Δ− )0 𝑓 = 𝑓. The quantities
(Δ+ )𝑗 and (Δ− )𝑗 are the iterated forward and backward finitedifference operators given by
𝑁−1
= (−1)𝑛 𝐾𝑁2 (
)
𝑛
𝑗
×∬
D+
𝑢
−𝑎−1
𝑁−1 𝑛+𝑏−1
(1 − 𝑢)
V
𝑁−𝑛−1
(1 − V)
d𝑢 dV,
𝑗
𝑗
(Δ+ ) 𝑓 (𝑖) = ∑ (−1)𝑗+𝑘 ( ) 𝑓 (𝑖 + 𝑘) ,
𝑘
𝑘=0
(326)
(321)
𝑗
𝑗
𝑗
(Δ− ) 𝑓 (𝑖) = ∑ (−1)𝑘 ( ) 𝑓 (𝑖 − 𝑘) .
𝑘
𝑎 − 𝑆𝑎𝑏
E [(
) 1{𝑆𝑎𝑏 ≤𝑎} ]
𝑛
𝑘=0
Conversely, 𝑓(𝑖 + 𝑘) and 𝑓(𝑖 − 𝑘) can be expressed by means
of (Δ+ )𝑗 𝑓(𝑖), (Δ− )𝑗 𝑓(𝑖), 0 ≤ 𝑗 ≤ 𝑘, according to
𝑁−1
= (−1) 𝐾𝑁 (
)
𝑛
𝑛
2
𝑘
×∬
D−
𝑢
𝑛−𝑎−1
𝑁−𝑛−1 𝑏−1
(1 − 𝑢)
V
(1 − V)
𝑁−1
d𝑢 dV.
𝑗=0
(327)
(322)
𝑗=0
We have the following expression for any functional of the
pseudorandom variable 𝑆𝑎𝑏 .
E [(𝑆𝑎𝑏 − 𝑏)𝑛 1{𝑆𝑎𝑏 ≥𝑏} ]
1
1
𝑘
𝑗
𝑘
𝑓 (𝑖 − 𝑘) = ∑ (−1)𝑗 ( ) (Δ− ) 𝑓 (𝑖) .
𝑗
Proof. By (301), we have that
𝑁−1
𝑁−1
ℓ
= 𝐾𝑁 ∫ ∫ [ ∑ (−1) (
) (ℓ)𝑛 (𝑥𝑦) ] (323)
ℓ
0 0
2
𝑗
𝑘
𝑓 (𝑖 + 𝑘) = ∑ ( ) (Δ+ ) 𝑓 (𝑖) ,
𝑗
ℓ
ℓ=0
× 𝑥𝑏−𝑎−1 (1 − 𝑥)𝑁−1 𝑦𝑏−1 d𝑥 d𝑦.
Theorem 49. One has, for any function 𝑓 defined on E, that
𝑁−1
𝑗
𝑁−1
𝑗
−
+
(Δ− ) 𝑓 (𝑎) + ∑ 𝐼𝑎𝑏,𝑗
(Δ+ ) 𝑓 (𝑏)
E [𝑓 (𝑆𝑎𝑏 )] = ∑ 𝐼𝑎𝑏,𝑗
𝑗=0
𝑗=0
(328)
International Journal of Stochastic Analysis
33
Theorem 51. One has, for any function 𝑓 defined on E, that
with
𝑁−1
−
= 𝐾𝑁2 (
𝐼𝑎𝑏,𝑗
)
𝑗
×∬
+
𝐼𝑎𝑏,𝑗
D−
𝑢
𝑗−𝑎−1
𝑁−1
𝑗=0
𝑁−𝑗−1 𝑏−1
(1 − 𝑢)
V
𝑁−1
(1 − V)
d𝑢 dV,
+
𝑁−1
= (−1) 𝐾𝑁 (
)
𝑗
𝑗
×∬
2
D+
𝑢
−𝑎−1
𝑗
−
E𝑥 [𝑓 (𝑆𝑎𝑏 )] = ∑ 𝑃𝑎𝑏,𝑗
(𝑥) (Δ− ) 𝑓 (𝑎)
𝑁−1 𝑗+𝑏−1
(1 − 𝑢)
V
𝑁−𝑗−1
(1 − V)
d𝑢 dV.
(329)
(333)
𝑁−1
+
∑ 𝑃𝑎𝑏,𝑗
𝑗=0
−
+
where 𝑃𝑎𝑏,𝑗
and 𝑃𝑎𝑏,𝑗
, 0 ≤ 𝑗 ≤ 𝑁 − 1, are polynomials of degree
not greater than 2𝑁 − 1 characterized, for any 𝑘 ∈ {0, . . . ,
𝑁 − 1}, by
𝑘
𝑘
𝑘
𝑎−𝑆𝑎𝑏
𝑗
𝑎 − 𝑆𝑎𝑏
E [𝑓 (𝑆𝑎𝑏 )] = E (1{𝑆𝑎𝑏 ≤𝑎} ∑ (−1)𝑗 (
) (Δ− ) 𝑓 (𝑎))
𝑗
𝑗=0
𝑆 −𝑏
𝑎𝑏
𝑗
𝑆 −𝑏
+ E (1{𝑆𝑎𝑏 ≥𝑏} ∑ ( 𝑎𝑏
) (Δ+ ) 𝑓 (𝑏))
𝑗
𝑗=0
𝑁−1
𝑗
𝑎 − 𝑆𝑎𝑏
= ∑ (−1)𝑗 E [1{𝑆𝑎𝑏 ≤𝑎} (
)] (Δ− ) 𝑓 (𝑎)
𝑗
𝑗=0
𝑁−1
𝑗
𝑆 −𝑏
+ ∑ E [1{𝑆𝑎𝑏 ≥𝑏} ( 𝑎𝑏
)] (Δ+ ) 𝑓 (𝑏)
𝑗
(330)
which immediately yields (328) thanks to (321) and (322).
𝑆 −𝑏
+
𝑃𝑎𝑏,𝑗
)]
(𝑥) = E𝑥 [1{𝑆𝑎𝑏 ≥𝑏} ( 𝑎𝑏
𝑗
𝑚
= ∑ ( ) P𝑥 {𝑆𝑎𝑏 = 𝑏 + 𝑚}
𝑗
𝑚=𝑗
=
𝑁−1
E𝑥 [𝑓 (𝑆𝑎𝑏 )] = E [𝑓 (𝑥 + 𝑆𝑎−𝑥,𝑏−𝑥 )]
𝑗
𝑗
+
(Δ+ ) 𝑓 (𝑏) .
+ ∑ 𝐼𝑎−𝑥,𝑏−𝑥,𝑗
(332)
̃𝑚 (𝑥) =
𝐾
(∏𝑁−1
𝑘=0 [(𝑥 − 𝑎 + 𝑘) (𝑏 − 𝑥 + 𝑘)])
(𝑏 − 𝑥 + 𝑚)
𝑁−1
(336)
= ∏ (𝑥 − 𝑎 + 𝑘)
𝑘=0
∏
(𝑏 − 𝑥 + 𝑘) .
0≤𝑘≤𝑁−1
𝑘 ≠ 𝑚
̃𝑚 (𝑥) defines a polynomial of the variable 𝑥
The expression 𝐾
+
is a polynomial of degree not greater
of degree 2𝑁 − 1, so 𝑃𝑎𝑏,𝑗
than 2𝑁 − 1.
̃𝑚 (𝑎 − ℓ) = 0 for ℓ, 𝑚 ∈ {0, . . . , 𝑁 − 1}.
It is obvious that 𝐾
+
+
(𝑎) = 0
Then, 𝑃𝑎𝑏,𝑗 (𝑎 − ℓ) = 0 which implies that (Δ− )𝑘 𝑃𝑎𝑏,𝑗
+
for any 𝑘 ∈ {0, . . . , 𝑁 − 1}. Now, let us evaluate 𝑃𝑎𝑏,𝑗
(𝑏 + ℓ) for
̃𝑚 (𝑏 + ℓ) = 0 for
ℓ ∈ {0, . . . , 𝑁 − 1}. We plainly have that 𝐾
ℓ ≠ 𝑚 and that
𝑗=0
More precisely, we have the following result.
( 𝑚𝑗 ) ( 𝑁−1
𝑚 )
̃𝑚 (𝑥) ,
]𝐾
𝑚+𝑏−𝑎+𝑁−1
(
)
𝑁
where, for any 𝑚 ∈ {0, . . . , 𝑁 − 1},
Of special interest is the case when the starting point of
the pseudorandom walk is some point 𝑥 ∈ Z. By translating
𝑎, 𝑏 into 𝑎 − 𝑥, 𝑏 − 𝑥 and the function 𝑓 into the shifted
function 𝑓(⋅ + 𝑥), we have that
𝑗=0
𝑚=𝑗
(331)
(335)
1
(𝑁 − 1)!𝑁!
× ∑ (−1)𝑚 [
Proof. Let us apply Theorem 49 to the function 𝑓(𝑖) = 𝜁𝑖 for
which we plainly have (Δ+ )𝑗 𝑓(𝑖) = 𝜁𝑖 (𝜁 − 1)𝑗 and (Δ− )𝑗 𝑓(𝑖) =
𝜁𝑖 (1 − 1/𝜁)𝑗 . This immediately yields (331).
−
= ∑ 𝐼𝑎−𝑥,𝑏−𝑥,𝑗
(Δ− ) 𝑓 (𝑎)
(334)
−
−
+
+
Proof. By setting 𝑃𝑎𝑏,𝑗
(𝑥) = 𝐼𝑎−𝑥,𝑏−𝑥,𝑗
and 𝑃𝑎𝑏,𝑗
(𝑥) = 𝐼𝑎−𝑥,𝑏−𝑥,𝑗
,
−
(330) immediately yields (333). By observing that 𝑃𝑎𝑏,𝑗 (𝑥) =
+
(𝑎 + 𝑏 − 𝑥), it is enough to work with, for example,
(−1)𝑗 𝑃𝑎𝑏,𝑗
+
𝑃𝑎𝑏,𝑗 . Coming back to the proof of Theorem 49 and appealing
to Theorem 43, we write that
Corollary 50. The generating function of 𝑆𝑎𝑏 is given by
𝑁−1
𝑁−1
1 𝑗
𝑗
−
+
E (𝜁𝑆𝑎𝑏 ) = 𝜁𝑎 ∑ 𝐼𝑎𝑏,𝑗
(1 − ) + 𝜁𝑏 ∑ 𝐼𝑎𝑏,𝑗
(𝜁 − 1) .
𝜁
𝑗=0
𝑗=0
𝑘
+
−
(Δ− ) 𝑃𝑎𝑏,𝑗
(𝑎) = (Δ+ ) 𝑃𝑎𝑏,𝑗
(𝑏) = 0.
𝑁−1
𝑗=0
𝑁−1
(𝑥) (Δ ) 𝑓 (𝑏) ,
−
+
(Δ− ) 𝑃𝑎𝑏,𝑗
(𝑎) = (Δ+ ) 𝑃𝑎𝑏,𝑗
(𝑏) = 𝛿𝑗𝑘 ,
Proof. By (327), we see that
𝑁−1
+ 𝑗
̃ℓ (𝑏 + ℓ) =
𝐾
)
(−1)ℓ (𝑁 − 1)!𝑁! ( ℓ+𝑏−𝑎+𝑁−1
𝑁
.
𝑁−1
( ℓ )
(337)
34
International Journal of Stochastic Analysis
(i) For 𝑁 = 1, (206) reads as
By putting this into (335), we get that
ℓ
+
𝑃𝑎𝑏,𝑗
(𝑏 + ℓ) = ( ) 1{𝑗≤ℓ} .
𝑗
(338)
E𝑥 [𝑓 (𝑆𝜎𝑎𝑏 +𝑛 )] =
Next, we obtain, for any 𝑘 ∈ {0, . . . , 𝑁 − 1}, that
𝑘
which is of course well known! This is the strong
Markov property for the ordinary random walk.
𝑘 +
𝑘 +
(Δ+ ) 𝑃𝑎𝑏,𝑗
(𝑏) = ∑ (−1)𝑘+ℓ ( ) 𝑃𝑎𝑏,𝑗
(𝑏 + ℓ)
ℓ
ℓ=0
𝑘
𝑘+ℓ
= ∑(−1)
ℓ=𝑗
𝑗+𝑘
= (−1)
𝑘 ℓ
( )( )
ℓ 𝑗
𝑏−𝑥
𝑥−𝑎
E [𝑓 (𝑆𝑛 )] +
E [𝑓 (𝑆𝑛 )]
𝑏−𝑎 𝑎
𝑏−𝑎 𝑏
(343)
(ii) For 𝑁 = 2, (206) reads as
(339)
E𝑥 [𝑓 (𝑆𝜎𝑎𝑏 +𝑛 )]
𝑘−𝑗
(𝑥 − 𝑏) (𝑥 − 𝑏 − 1) (2𝑥 − 3𝑎 + 𝑏 + 2)
E𝑎 [𝑓 (𝑆𝑛 )]
(𝑏 − 𝑎) (𝑏 − 𝑎 + 1) (𝑏 − 𝑎 + 2)
=
𝑘
𝑘−𝑗
( ) ∑ (−1)ℓ (
) = 𝛿𝑗𝑘 .
𝑗
ℓ
ℓ=0
The proof of Theorem 51 is finished.
+
(𝑥 − 𝑎) (𝑥 − 𝑏) (𝑥 − 𝑏 − 1) −
Δ E𝑎 [𝑓 (𝑆𝑛 )]
(𝑏 − 𝑎 + 1) (𝑏 − 𝑎 + 2)
Example 52. In the case where 𝑁 = 2, (333) writes as
−
(𝑥 − 𝑎) (𝑥 − 𝑎 + 1) (2𝑥 + 𝑎 − 3𝑏 − 2)
E𝑏 [𝑓 (𝑆𝑛 )]
(𝑏 − 𝑎) (𝑏 − 𝑎 + 1) (𝑏 − 𝑎 + 2)
+
(𝑥 − 𝑎) (𝑥 − 𝑎 + 1) (𝑥 − 𝑏) +
Δ E𝑏 [𝑓 (𝑆𝑛 )]
(𝑏 − 𝑎 + 1) (𝑏 − 𝑎 + 2)
−
−
E𝑥 [𝑓 (𝑆𝑎𝑏 )] = 𝑃𝑎𝑏,0
(𝑥) 𝑓 (𝑎) + 𝑃𝑎𝑏,1
(𝑥) Δ− 𝑓 (𝑎)
+
+
+ 𝑃𝑎𝑏,0
(𝑥) 𝑓 (𝑏) + 𝑃𝑎𝑏,1
(𝑥) Δ+ 𝑓 (𝑏)
(340)
with
−
𝑃𝑎𝑏,0
(𝑥) =
(𝑥 − 𝑏) (𝑥 − 𝑏 − 1) (2𝑥 − 3𝑎 + 𝑏 + 2)
,
(𝑏 − 𝑎) (𝑏 − 𝑎 + 1) (𝑏 − 𝑎 + 2)
−
𝑃𝑎𝑏,1
(𝑥) =
(𝑥 − 𝑎) (𝑥 − 𝑏) (𝑥 − 𝑏 − 1)
,
(𝑏 − 𝑎 + 1) (𝑏 − 𝑎 + 2)
+
𝑃𝑎𝑏,0
(𝑥) = −
+
𝑃𝑎𝑏,1
(𝑥) =
(𝑥 − 𝑎) (𝑥 − 𝑎 + 1) (2𝑥 + 𝑎 − 3𝑏 − 2)
,
(𝑏 − 𝑎) (𝑏 − 𝑎 + 1) (𝑏 − 𝑎 + 2)
(341)
(𝑥 − 𝑎) (𝑥 − 𝑎 + 1) (𝑥 − 𝑏)
.
(𝑏 − 𝑎 + 1) (𝑏 − 𝑎 + 2)
Below, we state a strong pseudo-Markov property related
to time 𝜎𝑎𝑏 .
Theorem 53. One has, for any function 𝑓 defined on Z and
any 𝑛 ∈ N, that
𝑁−1
𝑗
−
E𝑥 [𝑓 (𝑆𝜎𝑎𝑏 +𝑛 )] = ∑ 𝑃𝑎𝑏,𝑗
(𝑥) (Δ− ) E𝑎 [𝑓 (𝑆𝑛 )]
+
+
∑ 𝑃𝑎𝑏,𝑗
𝑗=0
−
(𝑥 − 𝑎) (𝑥 − 𝑏) (𝑥 − 𝑏 − 1)
E𝑎−1 [𝑓 (𝑆𝑛 )]
(𝑏 − 𝑎 + 1) (𝑏 − 𝑎 + 2)
−
(𝑥 − 𝑎) (𝑥 − 𝑎 + 1) (𝑥 − 𝑏 − 1)
E𝑏 [𝑓 (𝑆𝑛 )]
(𝑏 − 𝑎) (𝑏 − 𝑎 + 1)
+
(𝑥 − 𝑎) (𝑥 − 𝑎 + 1) (𝑥 − 𝑏)
E𝑏+1 [𝑓 (𝑆𝑛 )] .
(𝑏 − 𝑎 + 1) (𝑏 − 𝑎 + 2)
(344)
Now, we consider the discrete Laplacian Δ = Δ+ ∘ Δ− =
Δ ∘ Δ+ . It is explicitly defined by Δ𝑓(𝑖) = 𝑓(𝑖 + 1) − 2𝑓(𝑖) +
𝑓(𝑖 − 1). Let us introduce the iterated Laplacian Δ𝑁 = (Δ+ )𝑁 ∘
(Δ− )𝑁 = (Δ− )𝑁 ∘ (Δ+ )𝑁. We compute Δ𝑁𝑓(𝑖) for any function
𝑓 and any 𝑖 ∈ Z:
−
Δ𝑁𝑓 (𝑖)
𝑗=0
𝑁−1
(𝑥 − 𝑎 + 1) (𝑥 − 𝑏) (𝑥 − 𝑏 − 1)
E𝑎 [𝑓 (𝑆𝑛 )]
(𝑏 − 𝑎) (𝑏 − 𝑎 + 1)
=
(342)
+ 𝑗
(𝑥) (Δ ) E𝑏 [𝑓 (𝑆𝑛 )] .
𝑁
𝑁
𝑁
= (Δ− ) ( ∑ (−1)𝑗+𝑁 ( ) 𝑓 (⋅ + 𝑗)) (𝑖)
𝑗
𝑗=0
𝑁
𝑁
𝑗=0
𝑘=0
In (342), the operators (Δ− )𝑗 and (Δ+ )𝑗 act on the variables 𝑎
and 𝑏.
𝑁
𝑁
= ∑(−1)𝑗+𝑁 ( ) ( ∑ (−1)𝑘 ( ) 𝑓 (𝑖 + 𝑗 − 𝑘))
𝑗
𝑘
Proof. Formula (342) can be proved exactly in the same
way as (206): by setting 𝑔(𝑥) = E𝑥 [𝑓(𝑆𝑛 )], we have that
E𝑥 [𝑓(𝑆𝜎𝑎𝑏 +𝑛 )] = E𝑥 [𝑔(𝑆𝑎𝑏 )]. This proves (342) thanks to
(333).
𝑁 𝑁
= (−1)𝑁 ∑ (−1)𝑗−𝑘 ( ) ( ) 𝑓 (𝑖 + 𝑗 − 𝑘)
𝑗
𝑘
Example 54. Below, we display the form of (342) for the
particular values 1, 2 of 𝑁.
0≤𝑗,𝑘≤𝑁
𝑁
(𝑁−ℓ)∧𝑁
ℓ=−𝑁
𝑘=(−ℓ)∨0
= (−1)𝑁 ∑ (−1)ℓ ( ∑
𝑁
𝑁
( )(
)) 𝑓 (𝑖 + ℓ) .
𝑘
𝑘+ℓ
(345)
International Journal of Stochastic Analysis
35
𝑝∧ℓ
𝑞
By using the elementary identity ∑𝑘=(ℓ−𝑞)∨0 ( 𝑝𝑘 ) ( ℓ−𝑘 ) =
( 𝑝+𝑞
ℓ ), we get that
(𝑁−ℓ)∧𝑁
∑
𝑘=(−ℓ)∨0
𝑁
𝑁
( )(
)=
𝑘
𝑘+ℓ
(𝑁−ℓ)∧𝑁
∑
𝑘=(−ℓ)∨0
Proof. By (333) we write that
𝑁−1
𝑁
𝑁
( )(
)
𝑁−𝑘−ℓ
𝑘
(350)
With this representation at hand, identities (334) and (348)
immediately yield equations (349).
𝑁
2𝑁
Δ𝑁𝑓 (𝑖) = ∑ (−1)ℓ+𝑁 (
) 𝑓 (𝑖 + ℓ) .
ℓ+𝑁
(347)
4.3. Joint Pseudodistribution of (𝜏𝑎𝑏 , 𝑋𝑎𝑏 ). As in Section 3.3,
we choose for the family ((𝑋𝑡𝜀 )𝑡≥0 )𝜀>0 the pseudoprocesses
defined, for any 𝜀 > 0, by
ℓ=−𝑁
Example 55. Fix a nonnegative integer 𝑗 and put 𝑓𝑗 (𝑖) = (𝑖)𝑗
for any 𝑖 ∈ Z. It is plain that, if 𝑘 ≤ 𝑗, (Δ+ )𝑘 𝑓𝑗 (𝑖) =
(𝑗)𝑘 𝑓𝑗−𝑘 (𝑖), (Δ− )𝑘 𝑓𝑗 (𝑖) = (𝑗)𝑘 𝑓𝑗−𝑘 (𝑖 − 1) and if 𝑘 > 𝑗,
(Δ+ )𝑘 𝑓𝑗 (𝑖) = (Δ− )𝑘 𝑓𝑗 (𝑖) = 0. Therefore, if 2𝑘 ≤ 𝑗, Δ𝑘 𝑓𝑗 (𝑖) =
(𝑗)𝑘 (𝑗 − 𝑘)𝑘 𝑓𝑗−2𝑘 (𝑖 − 1) and if 2𝑘 > 𝑗, Δ𝑘 𝑓𝑗 (𝑖) = 0. By
using a linear algebra argument, we deduce that Δ𝑁𝑃 = 0
for any polynomial 𝑃 of degree not greater than 2𝑁 − 1. As a
byproduct,
−
Δ𝑁𝑃𝑎𝑏,𝑗
= 0.
Theorem 56. Let 𝜑 be a function defined on E. The function
Φ defined on Z by Φ(𝑥) = E𝑥 [𝜑(𝑆𝑎𝑏 )] is the solution to the
discrete Lauricella’s problem
𝜀
𝜀
, 𝑋𝑎𝑏
) 󳨀→+ (𝜏𝑎𝑏 , 𝑋𝑎𝑏 ) ,
(𝜏𝑎𝑏
𝜀→0
𝑘
𝑘
for 𝑗 ∈ {0, . . . , 𝑁 − 1} .
E ( e −𝜆𝜏𝑎𝑏 +𝑖𝜇𝑋𝑎𝑏 1{𝜏𝑎𝑏 <+∞} )
D−𝑘 (𝜆, 𝜇) 𝑖𝜇𝑏 𝑁 D+𝑘 (𝜆, 𝜇)
+e ∑
.
D (𝜆)
D (𝜆)
𝑘=1
𝑘=1
𝜑1
..
.
⋅ ⋅ ⋅ 𝜑1𝑁−1 e−𝜑1
..
.
2𝑁
√𝜆/𝑐(𝑏−𝑎)
e−𝜑1
√𝜆/𝑐(𝑏−𝑎)
−𝜑𝑘+1 2𝑁
e
..
.
𝑁−1
e−𝜑2𝑁
𝜑2𝑁 ⋅ ⋅ ⋅ 𝜑2𝑁
√𝜆/𝑐(𝑏−𝑎)
..
.
2𝑁
𝜑𝑘+1 ⋅ ⋅ ⋅
..
.
2𝑁
..
.
𝑁−1
e−𝜑𝑘−1 √𝜆/𝑐(𝑏−𝑎) e−𝜑𝑘−1
𝜑𝑘−1 ⋅ ⋅ ⋅ 𝜑𝑘−1
𝑁−1
𝛿 ⋅⋅⋅ 𝛿
0
𝑁−1
𝜑𝑘+1
2𝑁
√𝜆/𝑐(𝑏−𝑎)
0
√𝜆/𝑐(𝑏−𝑎)
−𝜑𝑘+1 2𝑁
e
..
.
2𝑁
√𝜆/𝑐(𝑏−𝑎)
(353)
In the foregoing formula, D(𝜆), D−𝑘 (𝜆, 𝜇), and D+𝑘 (𝜆, 𝜇) are the
respective determinants
2𝑁
2𝑁
2𝑁
󵄨󵄨
󵄨
󵄨󵄨1 𝜑1 ⋅ ⋅ ⋅ 𝜑1𝑁−1 e−𝜑1 √𝜆/𝑐(𝑏−𝑎) e−𝜑1 √𝜆/𝑐(𝑏−𝑎) 𝜑1 ⋅ ⋅ ⋅ e−𝜑1 √𝜆/𝑐(𝑏−𝑎) 𝜑1𝑁−1 󵄨󵄨󵄨
󵄨󵄨
󵄨󵄨
󵄨󵄨 .. ..
󵄨󵄨
..
..
..
..
󵄨󵄨 . .
󵄨󵄨 ,
.
.
.
.
󵄨󵄨
󵄨󵄨
󵄨󵄨
󵄨
2𝑁
2𝑁
2𝑁
𝑁−1
−𝜑2𝑁 √𝜆/𝑐(𝑏−𝑎)
−𝜑2𝑁 √𝜆/𝑐(𝑏−𝑎)
−𝜑2𝑁 √𝜆/𝑐(𝑏−𝑎) 𝑁−1 󵄨󵄨
󵄨󵄨1 𝜑
e
e
𝜑2𝑁 ⋅ ⋅ ⋅ e
𝜑2𝑁 󵄨󵄨
󵄨
2𝑁 ⋅ ⋅ ⋅ 𝜑2𝑁
󵄨󵄨
󵄨󵄨1
󵄨󵄨
󵄨󵄨 ..
󵄨󵄨 .
󵄨󵄨
󵄨󵄨
󵄨󵄨1
󵄨󵄨
󵄨󵄨
󵄨󵄨󵄨1
󵄨󵄨
󵄨󵄨1
󵄨󵄨
󵄨󵄨 .
󵄨󵄨 .
󵄨󵄨 .
󵄨󵄨
󵄨󵄨
󵄨󵄨1
(352)
where, for any 𝜆 > 0 and any 𝜇 ∈ R,
𝑁
for 𝑗 ∈ {0, . . . , 𝑁 − 1} , (349)
(351)
Theorem 57. The following convergence holds:
= e𝑖𝜇𝑎 ∑
𝑘
(Δ+ ) Φ (𝑏) = (Δ+ ) 𝜑 (𝑏)
𝑡 ≥ 0,
and for the pseudoprocess (𝑋𝑡 )𝑡≥0 the pseudo-Brownian
motion. In Definition 34, we choose for 𝐼 the interval (𝑎, 𝑏);
𝜀
𝜀
, 𝑋𝐼𝜀 = 𝑋𝑎𝑏
and 𝜏𝐼 = 𝜏𝑎𝑏 , 𝑋𝐼 = 𝑋𝑎𝑏 . Set 𝑎𝜀 = ⌊𝑎/𝜀⌋
then 𝜏𝐼𝜀 = 𝜏𝑎𝑏
and 𝑏𝜀 = ⌊𝑏/𝜀⌋, where ⌊⋅⌋ and ⌈⋅⌉, respectively stand for the
𝜀
= 𝜀2𝑁𝜎𝑎𝜀 ,𝑏𝜀 and
usual floor and ceiling functions. We have 𝜏𝑎𝑏
𝜀
𝑋𝑎𝑏 = 𝜀𝑆𝑎𝜀 ,𝑏𝜀 .
for 𝑥 ∈ Z,
𝑘
(Δ− ) Φ (𝑎) = (Δ− ) 𝜑 (𝑎)
𝑋𝑡𝜀 = 𝜀𝑆⌊𝑡/𝜀2𝑁 ⌋ ,
(348)
Now, the main link between time 𝜎𝑎𝑏 and finite-difference
equations is the following one.
Δ𝑁Φ (𝑥) = 0
𝑗
𝑗=0
(346)
As a result, we obtain the expression of Δ𝑁𝑓(𝑖) announced in
the introduction, namely,
=
𝑁−1
𝑗=0
2𝑁
=(
).
ℓ+𝑁
+
Δ𝑁𝑃𝑎𝑏,𝑗
𝑗
−
+
Φ (𝑥) = ∑ 𝑃𝑎𝑏,𝑗
(𝑥) (Δ− ) 𝜑 (𝑎) + ∑ 𝑃𝑎𝑏,𝑗
(𝑥) (Δ+ ) 𝜑 (𝑏) .
..
.
e−𝜑2𝑁
2𝑁
√𝜆/𝑐(𝑏−𝑎)
𝜑1
󵄨󵄨
󵄨󵄨
󵄨󵄨
󵄨󵄨
..
󵄨󵄨
.
󵄨󵄨
󵄨
2𝑁
𝑁−1 󵄨󵄨󵄨
e−𝜑𝑘−1 √𝜆/𝑐(𝑏−𝑎) 𝜑𝑘−1
󵄨󵄨
󵄨󵄨
0
󵄨󵄨󵄨 ,
󵄨
√𝜆/𝑐(𝑏−𝑎) 𝑁−1 󵄨󵄨
−𝜑𝑘+1 2𝑁
e
𝜑𝑘+1 󵄨󵄨
󵄨󵄨
..
󵄨󵄨
󵄨󵄨
.
󵄨󵄨
󵄨
2𝑁
−𝜑2𝑁 √𝜆/𝑐(𝑏−𝑎) 𝑁−1 󵄨󵄨
e
𝜑2𝑁 󵄨󵄨
⋅ ⋅ ⋅ e−𝜑1
𝜑𝑘−1 ⋅ ⋅ ⋅
⋅⋅⋅
𝜑𝑘+1 ⋅ ⋅ ⋅
𝜑2𝑁 ⋅ ⋅ ⋅
2𝑁
√𝜆/𝑐(𝑏−𝑎) 𝑁−1
𝜑1
36
International Journal of Stochastic Analysis
󵄨󵄨
󵄨󵄨1
󵄨󵄨
󵄨󵄨 ..
󵄨󵄨 .
󵄨󵄨
󵄨󵄨
󵄨󵄨1
󵄨󵄨
󵄨󵄨
󵄨󵄨0
󵄨󵄨
󵄨󵄨
󵄨󵄨1
󵄨󵄨
󵄨󵄨 ..
󵄨󵄨 .
󵄨󵄨
󵄨󵄨
󵄨󵄨1
󵄨
𝜑1
..
.
2𝑁
√𝜆/𝑐(𝑏−𝑎)
⋅ ⋅ ⋅ 𝜑1𝑁−1 e−𝜑1
..
.
e−𝜑1
..
.
2𝑁
𝜑𝑘+1 ⋅ ⋅ ⋅
..
.
√𝜆/𝑐(𝑏−𝑎)
..
.
𝑁−1
e−𝜑𝑘−1 √𝜆/𝑐(𝑏−𝑎) e−𝜑𝑘−1
𝜑𝑘−1 ⋅ ⋅ ⋅ 𝜑𝑘−1
0 ⋅⋅⋅ 0
1
𝑁−1
𝜑𝑘+1
2𝑁
√𝜆/𝑐(𝑏−𝑎)
−𝜑𝑘+1 2𝑁
e
..
.
2𝑁
√𝜆/𝑐(𝑏−𝑎)
𝛿
√𝜆/𝑐(𝑏−𝑎)
−𝜑𝑘+1 2𝑁
e
..
.
𝑁−1
e−𝜑2𝑁
𝜑2𝑁 ⋅ ⋅ ⋅ 𝜑2𝑁
..
.
2𝑁
√𝜆/𝑐(𝑏−𝑎)
e−𝜑2𝑁
2𝑁
√𝜆/𝑐(𝑏−𝑎)
𝜑1
󵄨󵄨
󵄨󵄨
󵄨󵄨
󵄨󵄨
..
󵄨󵄨
.
󵄨󵄨
󵄨
2𝑁
−𝜑𝑘−1 √𝜆/𝑐(𝑏−𝑎) 𝑁−1 󵄨󵄨󵄨
e
𝜑𝑘−1 󵄨󵄨
󵄨󵄨
𝛿𝑁−1
󵄨󵄨 .
󵄨󵄨
2𝑁
−𝜑𝑘+1 √𝜆/𝑐(𝑏−𝑎) 𝑁−1 󵄨󵄨
e
𝜑𝑘+1 󵄨󵄨
󵄨󵄨
..
󵄨󵄨
󵄨󵄨
.
󵄨󵄨
󵄨
2𝑁
𝑁−1 󵄨󵄨
󵄨󵄨
e−𝜑2𝑁 √𝜆/𝑐(𝑏−𝑎) 𝜑2𝑁
⋅ ⋅ ⋅ e−𝜑1
𝜑𝑘−1 ⋅ ⋅ ⋅
⋅⋅⋅
𝜑𝑘+1 ⋅ ⋅ ⋅
𝜑2𝑁 ⋅ ⋅ ⋅
2𝑁
√𝜆/𝑐(𝑏−𝑎) 𝑁−1
𝜑1
2𝑁
(354)
√𝜆/𝑐.
In the two last determinants, we have put 𝛿 = −𝑖𝜇/ 2𝑁
= lim+ E (e−𝜆𝜀
In Theorem 57, we obtain the joint pseudodistribution
of (𝜏𝑎𝑏 , 𝑋𝑎𝑏 ) characterized by its Laplace-Fourier transform.
This is a new result for pseudo-Brownian motion that we will
develop in a forthcoming paper [14].
̃ 𝑘 (e−𝜆𝜀 , e𝑖𝜇𝜀 ) (e𝑖𝜇𝜀 V𝑘 (𝜆, 𝜀))
= lim+ ∑ 𝐿
Proof. By Definition 34 and Theorem 39, we have that
̃ 𝑘 (e−𝜆𝜀 , e𝑖𝜇𝜀 ) V𝑘 (𝜆, 𝜀)𝑎𝜀 .
= e𝑖𝜇𝑎 ∑ lim+ 𝐿
−𝜆𝜏𝑎𝑏 +𝑖𝜇𝑋𝑎𝑏
E (e
𝜎𝑎𝜀 ,𝑏𝜀 +𝑖𝜇𝜀𝑆𝑎𝜀 ,𝑏𝜀
𝜀→0
2𝑁
𝜀→0
1{𝜎𝑎 ,𝑏 <+∞} )
𝜀 𝜀
𝑎𝜀 −𝑁
2𝑁
𝑘=1
2𝑁
𝑘=1
(355)
2𝑁
𝜀→0
1{𝜏𝑎𝑏 <+∞} )
𝜀
̃ 𝑘 (𝑧, 𝜁) = 𝐷𝑘 (𝑧, 𝜁)/𝐷(𝑧) and that the quantities
Recall that 𝐿
𝐷 and 𝐷𝑘 are expressed by means of the determinant
𝜀
= lim+ E (e−𝜆𝜏𝑎𝑏 +𝑖𝜇𝑋𝑎𝑏 1{𝜏𝑎𝑏𝜀 <+∞} )
𝜀→0
󵄨󵄨
󵄨
󵄨󵄨1 𝑢1 𝑢12 ⋅ ⋅ ⋅ 𝑢1𝑁−1 𝑢1𝑏−𝑎+𝑁−1 𝑢1𝑏−𝑎+𝑁 𝑢1𝑏−𝑎+𝑁+1 ⋅ ⋅ ⋅ 𝑢1𝑏−𝑎+2𝑁−2 󵄨󵄨󵄨
󵄨󵄨 . .
󵄨󵄨
..
..
..
..
..
..
󵄨󵄨 .
𝑈 (𝑢1 , . . . , 𝑢2𝑁) = 󵄨󵄨󵄨󵄨 .. ..
󵄨󵄨
.
.
.
.
.
.
󵄨󵄨
󵄨
2
𝑁−1
𝑏−𝑎+𝑁−1
𝑏−𝑎+𝑁
𝑏−𝑎+𝑁+1
𝑏−𝑎+2𝑁−2 󵄨󵄨
󵄨󵄨1 𝑢
󵄨󵄨
𝑢
⋅
⋅
⋅
𝑢
𝑢
𝑢
𝑢
⋅
⋅
⋅
𝑢
󵄨
2𝑁
2𝑁
2𝑁
2𝑁
2𝑁
2𝑁
2𝑁
By replacing the columns labeled as 𝐶𝑗 , 1 ≤ 𝑗 ≤ 2𝑁, by the
𝑗
linear combinations 𝐶𝑗󸀠 = ∑𝑘=0 (−1)𝑗+𝑘 ( 𝑘𝑗 ) 𝐶𝑘
𝑗
and 𝐶𝑗󸀠 = ∑𝑘=𝑁(−1)𝑗+𝑘 ( 𝑘𝑗 ) 𝐶𝑘 if 𝑁 + 1 ≤
if 1 ≤ 𝑗 ≤ 𝑁,
foregoing determinant remains invariant and can be rewritten as
𝑗 ≤ 2𝑁, the
2
𝑁−1
𝑁−1 󵄨
󵄨󵄨
𝑢1𝑏−𝑎+𝑁−1 𝑢1𝑏−𝑎+𝑁−1 (𝑢 − 1) 𝑢1𝑏−𝑎+𝑁−1 (𝑢 − 1)2 ⋅ ⋅ ⋅ 𝑢1𝑏−𝑎+2𝑁−2 (𝑢1 − 1) 󵄨󵄨󵄨
󵄨󵄨1 𝑢1 − 1 (𝑢1 − 1) ⋅ ⋅ ⋅ (𝑢1 − 1)
󵄨󵄨 .
󵄨󵄨
..
..
..
..
..
..
..
󵄨󵄨 .
󵄨󵄨 .
󵄨󵄨 .
󵄨󵄨
.
.
.
.
.
.
.
󵄨󵄨
󵄨
󵄨󵄨1 𝑢 − 1 (𝑢 − 1)2 ⋅ ⋅ ⋅ (𝑢 − 1)𝑁−1 𝑢𝑏−𝑎+𝑁−1 𝑢𝑏−𝑎+𝑁−1 (𝑢 − 1) 𝑢𝑏−𝑎+𝑁−1 (𝑢 − 1)2 ⋅ ⋅ ⋅ 𝑢𝑏−𝑎+2𝑁−2 (𝑢 − 1)𝑁−1 󵄨󵄨󵄨
󵄨
󵄨
2𝑁
2𝑁
2𝑁
1
2𝑁
2𝑁
2𝑁
2𝑁
Then, by replacing the 𝑢𝑗 ’s by 𝑢𝑗 (𝜆, 𝜀), 𝑏 − 𝑎 by 𝑏𝜀 − 𝑎𝜀 and by
using the asymptotics (𝑢𝑗 (𝜆, 𝜀) − 1)
𝑘
𝑘
∼ + (−𝜑𝑗 √𝜆/𝑐) 𝜀𝑘 and
2𝑁
𝜀→0
(356)
𝑢𝑗 (𝜆, 𝜀)𝑏𝜀 −𝑎𝜀 +𝑁−1 → + e −𝜑𝑗
get that
𝜀→0
2𝑁
√𝜆/𝑐(𝑏−𝑎)
(357)
coming from (119), we
International Journal of Stochastic Analysis
37
2𝑁
𝐷 (e−𝜆𝜀 ) = 𝑈 (𝑢1 (𝜆, 𝜀) , . . . , 𝑢2𝑁 (𝜆, 𝜀))
𝑁−1
󵄨󵄨󵄨
2𝑁
󵄨󵄨
√𝜆/𝑐(𝑏−𝑎)
2𝑁 𝜆
2𝑁 𝜆
󵄨󵄨 1 (−𝜑1 √ ) 𝜀 ⋅ ⋅ ⋅ (−𝜑1 √ ) 𝜀𝑁−1 e−𝜑1
󵄨󵄨
𝑐
𝑐
󵄨󵄨
󵄨
..
..
..
∼ + 󵄨󵄨󵄨 ...
.
.
.
𝜀 → 0 󵄨󵄨
𝑁−1
󵄨󵄨
2𝑁
󵄨󵄨
𝜆
𝜆
󵄨󵄨 1 (−𝜑2𝑁 2𝑁
√ ) 𝜀 ⋅ ⋅ ⋅ (−𝜑2𝑁 2𝑁
√ ) 𝜀𝑁−1 e−𝜑2𝑁 √𝜆/𝑐(𝑏−𝑎)
󵄨󵄨
𝑐
𝑐
󵄨
𝑁−1
2𝑁
𝜆
(−𝜑1 √ ) e−𝜑1 √𝜆/𝑐(𝑏−𝑎) 𝜀
𝑐
..
.
2𝑁
𝜆
(−𝜑1 √ )
𝑐
2𝑁
⋅⋅⋅
√𝜆/𝑐(𝑏−𝑎) 𝑁−1
−𝜑1 2𝑁
𝜀
e
𝑁−1
2𝑁
𝜆
𝜆
√ ) e−𝜑2𝑁 √𝜆/𝑐(𝑏−𝑎) 𝜀 ⋅ ⋅ ⋅ (−𝜑2𝑁 2𝑁
√ )
(−𝜑2𝑁 2𝑁
𝑐
𝑐
..
.
2𝑁
√𝜆/𝑐(𝑏−𝑎) 𝑁−1
e−𝜑2𝑁
𝜀
𝜆 (𝑁−1)/2 𝑁(𝑁−1)
=( )
𝜀
D (𝜆) .
𝑐
Similarly, by using the elementary asymptotics e𝑖𝜇𝜀 − 1
𝑖𝜇𝜀 and e𝑖𝜇𝜀(𝑏𝜀 −𝑎𝜀 +𝑁−1) → + e𝑖𝜇(𝑏−𝑎) , we obtain that
󵄨󵄨
󵄨󵄨
󵄨󵄨
󵄨󵄨
󵄨󵄨
󵄨󵄨
󵄨󵄨
󵄨󵄨
󵄨󵄨
󵄨󵄨
󵄨󵄨
󵄨󵄨
󵄨󵄨
(358)
e𝑖𝜇𝜀 , 𝑢𝑘+1 (𝜆, 𝜀) , . . . , 𝑢2𝑁 (𝜆, 𝜀))
∼
𝜀 → 0+
𝜆 (𝑁−1)/2 𝑁(𝑁−1)
𝜀
D𝑘 (𝜆, 𝜇) ,
∼+( )
𝜀→0
𝑐
𝜀→0
2𝑁
e𝑖𝜇𝑎 𝐷𝑘 (e−𝜆𝜀 , e𝑖𝜇𝜀 )
(359)
= e𝑖𝜇𝑎 𝑈 (𝑢1 (𝜆, 𝜀) , . . . , 𝑢𝑘−1 (𝜆, 𝜀) ,
󵄨󵄨
󵄨󵄨 1
󵄨󵄨
󵄨󵄨 ..
󵄨󵄨 .
󵄨󵄨
󵄨󵄨
󵄨󵄨 1
󵄨󵄨 𝑖𝜇𝑎
󵄨󵄨e
󵄨󵄨
󵄨󵄨
󵄨󵄨
󵄨󵄨 1
󵄨󵄨
󵄨󵄨 ..
󵄨󵄨 .
󵄨󵄨
󵄨󵄨
󵄨󵄨 1
𝜑1
..
.
⋅⋅⋅
𝜑1𝑁−1
..
.
e−𝜑1
where D𝑘 (𝜆, 𝜇) denotes the determinant
2𝑁
√𝜆/𝑐(𝑏−𝑎)
e−𝜑1
..
.
2𝑁
2𝑁
√𝜆/𝑐(𝑏−𝑎)
..
.
2𝑁
𝜑1
𝑁−1
e−𝜑𝑘−1 √𝜆/𝑐(𝑏−𝑎) e−𝜑𝑘−1 √𝜆/𝑐(𝑏−𝑎) 𝜑𝑘−1 ⋅ ⋅ ⋅
𝜑𝑘−1 ⋅ ⋅ ⋅ 𝜑𝑘−1
𝑖𝜇𝑎
𝑖𝜇𝑎 𝑁−1
e 𝛿 ⋅⋅⋅ e 𝛿
e𝑖𝜇𝑏
e𝑖𝜇𝑏 𝛿
⋅⋅⋅
√𝜆/𝑐(𝑏−𝑎)
−𝜑𝑘+1 2𝑁
𝜑𝑘+1 ⋅ ⋅ ⋅
..
.
𝑁−1
𝜑𝑘+1
e
𝜑2𝑁 ⋅ ⋅ ⋅
𝑁−1
𝜑2𝑁
e−𝜑2𝑁
..
.
√𝜆/𝑐(𝑏−𝑎)
−𝜑𝑘+1 2𝑁
e
..
.
2𝑁
√𝜆/𝑐(𝑏−𝑎)
e−𝜑2𝑁
..
.
2𝑁
√𝜆/𝑐(𝑏−𝑎)
󵄨󵄨
󵄨󵄨
󵄨󵄨
󵄨󵄨
..
󵄨󵄨
.
󵄨󵄨
√𝜆/𝑐(𝑏−𝑎) 𝑁−1 󵄨󵄨
−𝜑𝑘−1 2𝑁
e
𝜑𝑘−1 󵄨󵄨
󵄨󵄨
󵄨󵄨
e𝑖𝜇𝑏 𝛿𝑁−1
󵄨󵄨 .
󵄨󵄨
√𝜆/𝑐(𝑏−𝑎) 𝑁−1 󵄨󵄨
−𝜑𝑘+1 2𝑁
e
𝜑𝑘+1 󵄨󵄨󵄨
󵄨󵄨
..
󵄨󵄨
󵄨󵄨
.
󵄨
2𝑁
√𝜆/𝑐(𝑏−𝑎)
−𝜑2𝑁
𝑁−1 󵄨󵄨󵄨
e
𝜑2𝑁 󵄨
⋅ ⋅ ⋅ e−𝜑1
𝜑𝑘+1 ⋅ ⋅ ⋅
𝜑2𝑁 ⋅ ⋅ ⋅
2𝑁
√𝜆/𝑐(𝑏−𝑎) 𝑁−1
𝜑1
where, for any 𝜇 ∈ R,
By putting (358) and (359) into (355), we derive that
𝑁−1
−𝜆𝜏𝑎𝑏 +𝑖𝜇𝑋𝑎𝑏
E (e
2𝑁
D (𝜆, 𝜇)
1{𝜏𝑎𝑏 <+∞} ) = ∑ 𝑘
.
D (𝜆)
𝑘=1
𝑖𝜇𝑎
D−𝑘 (𝜆, 𝜇)
It is plain that D𝑘 (𝜆, 𝜇) = e
finishes the proof of Theorem 57.
𝑖𝜇𝑏
+e
D−𝑘 (𝜆, 𝜇)
(361)
𝜀→0
𝑗
𝑁−1
E (e𝑖𝜇𝑋𝑎𝑏 1{𝜏𝑎𝑏 <+∞} ) = e𝑖𝜇𝑎 ∑ I−𝑎𝑏,𝑗 (𝑖𝜇) + e𝑖𝜇𝑏 ∑ I+𝑎𝑏,𝑗 (𝑖𝜇)
𝑗=0
𝑗
𝑗=0
(363)
with
which
I−𝑎𝑏,𝑗 = (
Theorem 58. The following convergence holds:
𝜀
󳨀→+ 𝑋𝑎𝑏 ,
𝑋𝑎𝑏
(360)
I+𝑎𝑏,𝑗
(362)
𝑁−𝑗−1
𝑁
𝑏
−𝑎 𝑘
(−𝑎)𝑗
𝑘+𝑁−1
)
) ,
)(
∑ (
𝑘
𝑏−𝑎
𝑗! 𝑘=0
𝑏−𝑎
𝑁−𝑗−1
𝑘
−𝑎 𝑁 (−𝑏)𝑗
𝑏
𝑘+𝑁−1
=(
)
) .
)(
∑ (
𝑘
𝑏−𝑎
𝑗! 𝑘=0
𝑏−𝑎
(364)
38
International Journal of Stochastic Analysis
𝑁−𝑗−1
𝑁−𝑗−1
)
= ∑ (
𝑘
Moreover,
P {𝜏𝑎𝑏 < +∞} = 1.
(365)
𝑘=0
1
× ∫ 𝑥𝑏𝜀 −𝑎𝜀 +𝑁−𝑘−2 (1 − 𝑥)𝑘+𝑁−1 d𝑥
Proof. By Definition 34 and by (331),
0
1
𝜀
E (e𝑖𝜇𝑋𝑎𝑏 1{𝜏𝑎𝑏 <+∞} ) = lim+ E (e𝑖𝜇𝑋𝑎𝑏 1{𝜏𝑎𝑏𝜀 <+∞} )
𝑁−𝑗−𝑘−1
× ∫ 𝑦𝑗+𝑏𝜀 −1 (1 − 𝑦)
𝜀→0
0
𝑁−𝑗−1
= lim+ E (e𝑖𝜇𝜀𝑆𝑎𝜀 ,𝑏𝜀 1{𝜎𝑎 ,𝑏 <+∞} )
𝜀→0
d𝑦
𝑁−𝑗−1
= ∑ (
)
𝑘
𝜀 𝜀
𝑘=0
𝑁−1
−𝑖𝜇𝜀 𝑗
= lim+ (e𝑖𝜇𝜀𝑎𝜀 ∑ 𝐼𝑎−𝜀 ,𝑏𝜀 ,𝑗 (1 − e
𝜀→0
)
𝑗=0
𝑁−1
𝑗
+ e𝑖𝜇𝜀𝑏𝜀 ∑ 𝐼𝑎+𝜀 ,𝑏𝜀 ,𝑗 (e𝑖𝜇𝜀 − 1) )
𝑗=0
𝑁−1
𝑗
=
= e𝑖𝜇𝑎 lim+ ∑ 𝐼𝑎−𝜀 ,𝑏𝜀 ,𝑗 (1 − e−𝑖𝜇𝜀 )
𝜀→0
𝑖𝜇𝑏
+e
𝑗=0
𝑁−1
lim ∑ 𝐼𝑎+𝜀 ,𝑏𝜀 ,𝑗 (e𝑖𝜇𝜀
𝜀 → 0+
𝑗=0
×
(𝑏𝜀 − 𝑎𝜀 + 𝑁 − 𝑘 − 2)! (𝑘 + 𝑁 − 1)!
(𝑏𝜀 − 𝑎𝜀 + 2𝑁 − 2)!
×
(𝑗 + 𝑏𝜀 − 1)! (𝑁 − 𝑗 − 𝑘 − 1)!
(𝑏𝜀 + 𝑁 − 𝑘 − 1)!
(𝑁 − 1)! (𝑁 − 𝑗 − 1)! (𝑗 + 𝑏𝜀 − 1)!
(𝑏𝜀 − 𝑎𝜀 + 2𝑁 − 2)!
𝑁−𝑗−1
𝑘 + 𝑁 − 1 (𝑏𝜀 − 𝑎𝜀 + 𝑁 − 𝑘 − 2)!
.
× ∑ (
)
𝑘
(𝑏𝜀 + 𝑁 − 𝑘 − 1)!
𝑗
− 1) .
𝑘=0
(369)
(366)
Concerning, for example, the quantity 𝐼𝑎+𝜀 ,𝑏𝜀 ,𝑗 , we have that
𝐼𝑎+𝜀 ,𝑏𝜀 ,𝑗
(𝑗 + 𝑏𝜀 − 1)!
(𝑏𝜀 + 𝑁 − 𝑘 − 1)!
𝑁−1
(−1)𝑗
𝑁−1
(
)
∏ [(𝑘 − 𝑎𝜀 ) (𝑘 + 𝑏𝜀 )]
𝑗
(𝑁 − 1) !2
𝑘=0
=
×∬
D+
By putting the asymptotics
=
(367)
𝑁−𝑗−1
(1 − 𝑥𝑦)
(𝑏𝜀 − 𝑎𝜀 + 𝑁 − 𝑘 − 2)!
(𝑏𝜀 − 𝑎𝜀 + 2𝑁 − 2)!
1
=
(𝑏𝜀 − 𝑎𝜀 + 2𝑁 − 2)𝑁+𝑘
∼+(
𝜀→0
𝜀 𝑁+𝑘
)
𝑏−𝑎
into (369), we obtain that
𝑁−𝑗−1
= [(1 − 𝑥) + 𝑥 (1 − 𝑦)]
∬
D+
𝑁−𝑗−1
𝑁−𝑗−1
𝑁−𝑗−𝑘−1
= ∑ (
,
) (1 − 𝑥)𝑘 𝑥𝑁−𝑗−𝑘−1 (1 − 𝑦)
𝑘
𝑘=0
(368)
𝑢−𝑎𝜀 −1 (1 − 𝑢)𝑁−1 V𝑗+𝑏𝜀 −1 (1 − V)𝑁−𝑗−1 d𝑢 dV
∼
𝜀 → 0+
(𝑁 − 1)! (𝑁 − 𝑗 − 1)!
𝑏 𝑁−𝑗 (𝑏 − 𝑎)𝑁
(371)
𝑁−𝑗−1
𝑘
𝑏
𝑘+𝑁−1
× 𝜀2𝑁−𝑗 ∑ (
) .
)(
𝑘
𝑏−𝑎
we get that
D+
𝜀 𝑁−𝑗−𝑘
∼+( )
,
𝜀→0
𝑏
(370)
𝑢−𝑎𝜀 −1 (1 − 𝑢)𝑁−1 V𝑗+𝑏𝜀 −1 (1 − V)𝑁−𝑗−1 d𝑢 dV.
By performing the change of variables (𝑢, V) = (𝑥, 𝑥𝑦) in the
above integral and by expanding (1 − 𝑥𝑦)𝑁−𝑗−1 as
∬
1
(𝑏𝜀 + 𝑁 − 𝑘 − 1)𝑁−𝑗−𝑘
𝑢−𝑎𝜀 −1 (1 − 𝑢)𝑁−1 V𝑗+𝑏𝜀 −1 (1 − V)𝑁−𝑗−1 d𝑢 dV
1
1
0
0
=∫ ∫ 𝑥
𝑗+𝑏𝜀 −𝑎𝜀 −1
𝑁−1 𝑗+𝑏𝜀 −1
(1 − 𝑥)
𝑘=0
Next, using the asymptotics
𝑦
𝑁−1
× (1 − 𝑥𝑦)
𝑁−𝑗−1
d𝑥 d𝑦
∏ [(𝑘 − 𝑎𝜀 ) (𝑘 + 𝑏𝜀 )] ∼ + (−1)𝑁
𝑘=0
𝜀→0
(𝑎𝑏)𝑁
,
𝜀2𝑁
(372)
International Journal of Stochastic Analysis
39
expression (367) admits the following asymptotics:
𝐼𝑎+𝜀 ,𝑏𝜀 ,𝑗
∼
𝜀 → 0+
As a byproduct,
𝑎 𝑁
(−1)𝑗+𝑁𝑏𝑗
)
(
𝑗!𝜀𝑗
𝑏−𝑎
𝑁−𝑗−1
𝑘
𝑏
𝑘+𝑁−1
× ∑ (
) .
)(
𝑘
𝑏−𝑎
I−𝑎𝑏,0 =
(373)
𝑗
(378)
2𝑁−1
1
2𝑁 − 1
) (−𝑎)𝑚 𝑏2𝑁−1−𝑚
∑ (
𝑚
(𝑏 − 𝑎)2𝑁−1 𝑚=𝑁
(379)
Similarly,
I+𝑎𝑏,0 =
𝑘=0
Then, we see that the second limit lying in (366) equals
𝑁−1
1
2𝑁 − 1
) (−𝑎)𝑚 𝑏2𝑁−1−𝑚 .
∑(
2𝑁−1
𝑚
(𝑏 − 𝑎)
𝑚=0
and we deduce that
𝑁−𝑗−1
𝑘
−𝑎 𝑁 𝑁−1 (−𝑖𝜇𝑏)
𝑏
𝑘+𝑁−1
) ∑
[ ∑ (
) ].
(
)(
𝑘
𝑏−𝑎
𝑗!
𝑏−𝑎
𝑗=0
𝑘=0
P {𝜏𝑎𝑏 < +∞} = I−𝑎𝑏,0 + I+𝑎𝑏,0
=
(374)
In the same way, it may be seen that the first term of the sum
lying in (366) tends to
𝑗
(
𝑁−𝑗−1
𝑁 𝑁−1
(−𝑖𝜇𝑎)
𝑏
−𝑎 𝑘
𝑘+𝑁−1
) ∑
[ ∑ (
) ].
)(
𝑘
𝑏−𝑎
𝑗!
𝑏−𝑎
𝑗=0
𝑘=0
(375)
As a result, we derive (363).
Finally, let us have a look on the pseudoprobability
P{𝜏𝑎𝑏 < +∞}. We have that
I−𝑎𝑏,0 = (
2𝑁−1
1
2𝑁 − 1
) (−𝑎)𝑚 𝑏2𝑁−1−𝑚
∑ (
2𝑁−1
𝑚
(𝑏 − 𝑎)
𝑚=0
= 1.
(380)
The proof of Theorem 58 is finished.
Corollary 59. The pseudodensity of 𝑋𝑎𝑏 is given by
𝑁−1
P {𝑋𝑎𝑏 ∈ d𝑧} 𝑁−1
(𝑗)
= ∑ (−1)𝑗 I−𝑎𝑏,𝑗 𝛿𝑎(𝑗) (𝑧) + ∑ (−1)𝑗 I+𝑎𝑏,𝑗 𝛿𝑏 (𝑧) .
d𝑧
𝑗=0
𝑗=0
(381)
In particular,
𝑁 𝑁−1
𝑏
−𝑎 𝑘
𝑘+𝑁−1
) ∑(
)
)(
𝑘
𝑏−𝑎
𝑏−𝑎
𝑘=0
P {𝜏𝑎− < 𝜏𝑏+ } = I−𝑎𝑏,0 ,
P {𝜏𝑏+ < 𝜏𝑎− } = I+𝑎𝑏,0 .
(382)
=
𝑁−1
𝑏𝑁
𝑘+𝑁−1
) (−𝑎)𝑘 (𝑏 − 𝑎)𝑁−1−𝑘
∑(
2𝑁−1
𝑘
(𝑏 − 𝑎)
𝑘=0
This result has been announced in [13] without any proof.
We will develop it in a forthcoming paper.
=
𝑏𝑁
(𝑏 − 𝑎)2𝑁−1
Appendix
∑
0≤𝑘≤𝑁−1
0≤ℓ≤𝑁−1−𝑘
𝑘+𝑁−1 𝑁−1−𝑘
(
)(
)
𝑘
ℓ
× (−𝑎)𝑘+ℓ 𝑏𝑁−1−𝑘−ℓ
=
𝑁−1
𝑚
1
𝑘+𝑁−1 𝑁−1−𝑘
)(
)]
∑ [ ∑(
2𝑁−1
𝑘
𝑚−𝑘
(𝑏 − 𝑎)
𝑚=0
𝑘=0
× (−𝑎)𝑚 𝑏2𝑁−1−𝑚 .
(376)
By using the elementary identity ∑𝑛𝑘=0 ( 𝑘+𝑝
) ( 𝑛+𝑞−𝑘
) =
𝑛−𝑘
𝑘
−𝑝
−𝑞
)
which
comes
from
the
equality
(1
+
𝑥)
(1
+
𝑥)
=
( 𝑛+𝑝+𝑞+1
𝑛
−𝑝−𝑞
(1 + 𝑥)
together with the expansion, for example, for 𝑝,
𝑘 𝑘+𝑝−1
𝑘
(1 + 𝑥)−𝑝 = ∑∞
𝑘=0 (−1) ( 𝑘 ) 𝑥 , we get that
𝑚
𝑘+𝑁−1 𝑁−1−𝑘
2𝑁 − 1
)(
)=(
).
∑(
𝑘
𝑚−𝑘
𝑚
𝑘=0
(377)
A.
A.1. Lacunary Vandermonde Systems. Let us introduce the
“lacunary” Vandermonde determinant (of type (𝑝 + 𝑟) × (𝑝 +
𝑟)):
𝑈𝑝𝑞𝑟 (𝑢1 , . . . , 𝑢𝑝+𝑟 )
󵄨󵄨1 𝑢 ⋅ ⋅ ⋅ 𝑢𝑝−1 𝑢𝑝+𝑞 ⋅ ⋅ ⋅ 𝑢𝑝+𝑞+𝑟−1 󵄨󵄨
󵄨󵄨
󵄨󵄨
1
1
1
󵄨󵄨
󵄨󵄨 . .1
..
..
..
󵄨󵄨 .
= 󵄨󵄨󵄨󵄨 .. ..
󵄨󵄨
.
.
.
󵄨󵄨
𝑝−1
𝑝+𝑞
𝑝+𝑞+𝑟−1 󵄨󵄨󵄨
󵄨󵄨1 𝑢
󵄨󵄨
⋅
⋅
⋅
𝑢
𝑢
⋅
⋅
⋅
𝑢
󵄨
𝑝+𝑟
𝑝+𝑟
𝑝+𝑟
𝑝+𝑟
(A.1)
We put 𝑠0 := 𝑠0 (𝑢1 , . . . , 𝑢𝑝+𝑟 ) = 1 and, for ℓ ∈ {1, . . . , 𝑝 + 𝑟},
𝑠ℓ := 𝑠ℓ (𝑢1 , . . . , 𝑢𝑝+𝑟 ) =
∑
1≤𝑖1 <⋅⋅⋅<𝑖ℓ ≤𝑝+𝑟
𝑢𝑖1 ⋅ ⋅ ⋅ 𝑢𝑖ℓ .
(A.2)
We say “lacunary” because it comes from a genuine Vandermonde determinant where the powers from 𝑝 to (𝑝+𝑞−1) are
40
International Journal of Stochastic Analysis
missing. More precisely, the determinant 𝑈𝑝𝑞𝑟 (𝑢1 , . . .,𝑢𝑝+𝑟 )
is extracted from the classical Vandermonde determinant
𝑉𝑝+𝑞+𝑟 (𝑢1 , . . . , 𝑢𝑝+𝑞+𝑟 ) of type (𝑝 + 𝑞 + 𝑟) × (𝑝 + 𝑞 + 𝑟) by
󵄨󵄨 1 𝑢
󵄨󵄨
1
󵄨󵄨 .
..
󵄨󵄨 .
󵄨󵄨 .
.
󵄨󵄨
󵄨󵄨 1 𝑢
𝑝+𝑟
󵄨󵄨
󵄨󵄨
󵄨󵄨 1 𝑢𝑝+𝑟+1
󵄨󵄨
󵄨󵄨 .
..
󵄨󵄨 ..
.
󵄨󵄨
󵄨󵄨
󵄨󵄨 1 𝑢𝑝+𝑞+𝑟
𝑝−1
𝑝
⋅ ⋅ ⋅ 𝑢1
..
.
𝑝+𝑞−1
𝑢1
..
.
𝑝−1
removing the 𝑞 last rows and the (𝑝 + 1)st, (𝑝 + 2)nd, . . .,(𝑝 +
𝑞)th columns. We decompose 𝑉𝑝+𝑞+𝑟 (𝑢1 , . . . , 𝑢𝑝+𝑞+𝑟 ) into
blocks as follows:
⋅ ⋅ ⋅ 𝑢1
𝑝
𝑝+𝑞
𝑝+𝑞+𝑟−1
𝑢1
..
.
..
.
𝑝+𝑞−1
⋅ ⋅ ⋅ 𝑢1
𝑝+𝑞
..
.
𝑝+𝑞+𝑟−1
⋅ ⋅ ⋅ 𝑢𝑝+𝑟 𝑢𝑝+𝑟 ⋅ ⋅ ⋅ 𝑢𝑝+𝑟
𝑢𝑝+𝑟 ⋅ ⋅ ⋅ 𝑢𝑝+𝑟
𝑝−1
𝑝
𝑝+𝑞−1
𝑝+𝑞
𝑝+𝑞+𝑟−1
⋅ ⋅ ⋅ 𝑢𝑝+𝑟+1 𝑢𝑝+𝑟+1 ⋅ ⋅ ⋅ 𝑢𝑝+𝑟+1 𝑢𝑝+𝑟+1 ⋅ ⋅ ⋅ 𝑢𝑝+𝑟+1
..
..
..
..
..
.
.
.
.
.
𝑝−1
𝑝
𝑝+𝑞−1
𝑝+𝑞
𝑝+𝑞+𝑟−1
⋅ ⋅ ⋅ 𝑢𝑝+𝑞+𝑟 𝑢𝑝+𝑞+𝑟 ⋅ ⋅ ⋅ 𝑢𝑝+𝑞+𝑟 𝑢𝑝+𝑞+𝑟 ⋅ ⋅ ⋅ 𝑢𝑝+𝑞+𝑟
󵄨󵄨
󵄨󵄨
󵄨󵄨
󵄨󵄨
󵄨󵄨
󵄨󵄨
󵄨󵄨
󵄨󵄨
󵄨󵄨 .
󵄨󵄨
󵄨󵄨
󵄨󵄨
󵄨󵄨
󵄨󵄨
󵄨󵄨
󵄨󵄨
(A.3)
By moving the 𝑟 last columns before the 𝑞 previous ones, this
determinant can be rewritten as
󵄨󵄨 1 𝑢
󵄨󵄨
1
󵄨󵄨 .
..
󵄨󵄨 .
󵄨󵄨 .
.
󵄨󵄨
󵄨󵄨 1 𝑢
𝑝+𝑟
󵄨
(−1)𝑞𝑟 󵄨󵄨󵄨
󵄨󵄨 1 𝑢𝑝+𝑟+1
󵄨󵄨
󵄨󵄨 .
..
󵄨󵄨 ..
.
󵄨󵄨
󵄨󵄨
󵄨󵄨 1 𝑢𝑝+𝑞+𝑟
𝑝−1
𝑝+𝑞
⋅ ⋅ ⋅ 𝑢1
..
.
𝑢1
..
.
𝑝−1
𝑝+𝑞
∏
1≤𝑖<𝑗≤𝑝+𝑞+𝑟
..
.
𝑝+𝑞+𝑟−1
𝑝
𝑢1
..
.
𝑝+𝑞−1
⋅ ⋅ ⋅ 𝑢1
𝑝
..
.
𝑝+𝑞−1
⋅ ⋅ ⋅ 𝑢𝑝+𝑟 𝑢𝑝+𝑟 ⋅ ⋅ ⋅ 𝑢𝑝+𝑟
𝑢𝑝+𝑟 ⋅ ⋅ ⋅ 𝑢𝑝+𝑟
𝑝−1
𝑝+𝑞
𝑝+𝑞+𝑟−1
𝑝
𝑝+𝑞−1
⋅ ⋅ ⋅ 𝑢𝑝+𝑟+1 𝑢𝑝+𝑟+1 ⋅ ⋅ ⋅ 𝑢𝑝+𝑟+1 𝑢𝑝+𝑟+1 ⋅ ⋅ ⋅ 𝑢𝑝+𝑟+1
..
..
..
..
..
.
.
.
.
.
𝑝−1
𝑝+𝑞
𝑝+𝑞+𝑟−1
𝑝
𝑝+𝑞−1
⋅ ⋅ ⋅ 𝑢𝑝+𝑞+𝑟 𝑢𝑝+𝑞+𝑟 ⋅ ⋅ ⋅ 𝑢𝑝+𝑞+𝑟 𝑢𝑝+𝑞+𝑟 ⋅ ⋅ ⋅ 𝑢𝑝+𝑞+𝑟
By appealing to an expansion by blocks of a determinant, it may be seen that 𝑈𝑝𝑞𝑟 (𝑢1 ,. . . , 𝑢𝑝+𝑟 ) is the
cofactor of the “south-east” block of the above determinant. Since the product of, for example, the diago𝑝
𝑝+1
𝑝+𝑞−1
nal terms of this last block is 𝑢𝑝+𝑟+1 𝑢𝑝+𝑟+2 ⋅ ⋅ ⋅ 𝑢𝑝+𝑞+𝑟 , the
determinant 𝑈𝑝𝑞𝑟 (𝑢1 , . . . , 𝑢𝑝+𝑟 ) is also the coefficient of
𝑝
𝑝+1
𝑝+𝑞−1
𝑢𝑝+𝑟+1 𝑢𝑝+𝑟+2 ⋅ ⋅ ⋅ 𝑢𝑝+𝑞+𝑟 in 𝑉𝑝+𝑞+𝑟 (𝑢1 , . . . , 𝑢𝑝+𝑞+𝑟 ). Now, let us
expand 𝑉𝑝+𝑞+𝑟 (𝑢1 , . . . , 𝑢𝑝+𝑞+𝑟 ):
𝑉𝑝+𝑞+𝑟 (𝑢1 , . . . , 𝑢𝑝+𝑞+𝑟 ) =
𝑝+𝑞+𝑟−1
⋅ ⋅ ⋅ 𝑢1
(𝑢𝑗 − 𝑢𝑖 ) = ∏∏∏
1
2
3
(A.5)
=
󵄨󵄨
󵄨󵄨
󵄨󵄨
󵄨󵄨
󵄨󵄨
󵄨󵄨
󵄨󵄨
󵄨󵄨
󵄨󵄨󵄨 .
󵄨󵄨
󵄨󵄨
󵄨󵄨
󵄨󵄨
󵄨󵄨
󵄨󵄨
󵄨
(A.4)
with
$$
\prod\nolimits_{1} = \prod_{1\le i<j\le p+r}(u_j-u_i),\qquad
\prod\nolimits_{2} = \prod_{\substack{1\le i\le p+r\\ p+r+1\le j\le p+q+r}}(u_j-u_i),\qquad
\prod\nolimits_{3} = \prod_{p+r+1\le i<j\le p+q+r}(u_j-u_i).
$$
We have that
$$
\prod\nolimits_{2} = \prod_{j=p+r+1}^{p+q+r}\big[(u_j-u_1)\cdots(u_j-u_{p+r})\big]
= \prod_{j=p+r+1}^{p+q+r}\sum_{k=0}^{p+r}(-1)^{p+r-k}s_{p+r-k}\,u_j^{k}
= \sum_{0\le k_1,\dots,k_q\le p+r}(-1)^{q(p+r)-k_1-\cdots-k_q}\, s_{p+r-k_1}\cdots s_{p+r-k_q}\, u_{p+r+1}^{k_1}\cdots u_{p+q+r}^{k_q}, \tag{A.6}
$$
$$
\prod\nolimits_{3} = V_q(u_{p+r+1},\dots,u_{p+q+r})
= \sum_{\varsigma\in\mathfrak{S}_q}\epsilon(\varsigma)\, u_{p+r+1}^{\varsigma(1)-1}\cdots u_{p+q+r}^{\varsigma(q)-1}
= \sum_{\substack{0\le \ell_1,\dots,\ell_q\le q-1\\ \ell_1,\dots,\ell_q\ \text{all different}}}\epsilon(\ell_1,\dots,\ell_q)\, u_{p+r+1}^{\ell_1}\cdots u_{p+q+r}^{\ell_q}. \tag{A.7}
$$
The symbol $\mathfrak{S}_q$ in the above sum denotes the set of the permutations of the numbers $1,2,\dots,q$, $\epsilon(\varsigma)$ is the signature of the permutation $\varsigma$, and $\epsilon(\ell_1,\dots,\ell_q)\in\{-1,+1\}$ is the signature of the permutation mapping $1,\dots,q$ into $\ell_1,\dots,\ell_q$. The product $\prod_2\prod_3$ is given by
$$
(-1)^{q(p+1)}\!\!\sum_{\substack{0\le k_1,\dots,k_q\le p+r\\ 0\le \ell_1,\dots,\ell_q\le q-1\\ \ell_1,\dots,\ell_q\ \text{all different}}}\!\!(-1)^{k_1+\cdots+k_q}\,\epsilon(\ell_1,\dots,\ell_q)\, s_{p+r-k_1}\cdots s_{p+r-k_q}\, u_{p+r+1}^{k_1+\ell_1}\cdots u_{p+q+r}^{k_q+\ell_q}. \tag{A.8}
$$
For obtaining the coefficient of $u_{p+r+1}^{p}u_{p+r+2}^{p+1}\cdots u_{p+q+r}^{p+q-1}$, we only keep in the foregoing sum the indices $k_1,\dots,k_q$, $\ell_1,\dots,\ell_q$ such that $k_1+\ell_1=p$, $k_2+\ell_2=p+1$, $\dots$, $k_q+\ell_q=p+q-1$ and $0\le k_1,\dots,k_q\le p+r$, $0\le \ell_1,\dots,\ell_q\le q-1$, the indices $\ell_1,\dots,\ell_q$ being all distinct. This gives that
$$
\prod\nolimits_{2}\prod\nolimits_{3} = \sum_{\substack{\text{for } i\in\{1,\dots,q\}:\ (i-r-1)\vee 0\,\le\,\ell_i\,\le\,(p+i-1)\wedge(q-1)\\ \ell_1,\dots,\ell_q\ \text{all different}}}\epsilon(\ell_1,\dots,\ell_q)\, s_{\ell_1+r}\, s_{\ell_2+r-1}\cdots s_{\ell_q+r-q+1}. \tag{A.9}
$$
Finally, we can observe that the foregoing sum is nothing but the expansion of the determinant
$$
\mathcal{S}_{pqr} =
\begin{vmatrix}
s_r & s_{r-1} & \cdots & s_{r-q+1}\\
s_{r+1} & s_r & \cdots & s_{r-q+2}\\
\vdots & \vdots & & \vdots\\
s_{r+q-1} & s_{r+q-2} & \cdots & s_r
\end{vmatrix}. \tag{A.10}
$$
As a result, we obtain the result below.
Proposition A.1. The determinant $U_{pqr}(u_1,\dots,u_{p+r})$ admits the following expression:
$$
U_{pqr}(u_1,\dots,u_{p+r}) = \prod_{1\le i<j\le p+r}(u_j-u_i)\times
\begin{vmatrix}
s_r(u_1,\dots,u_{p+r}) & s_{r-1}(u_1,\dots,u_{p+r}) & \cdots & s_{r-q+1}(u_1,\dots,u_{p+r})\\
s_{r+1}(u_1,\dots,u_{p+r}) & s_r(u_1,\dots,u_{p+r}) & \cdots & s_{r-q+2}(u_1,\dots,u_{p+r})\\
\vdots & \vdots & & \vdots\\
s_{r+q-1}(u_1,\dots,u_{p+r}) & s_{r+q-2}(u_1,\dots,u_{p+r}) & \cdots & s_r(u_1,\dots,u_{p+r})
\end{vmatrix}. \tag{A.11}
$$
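Proposition A.1 lends itself to a quick numerical sanity check. The sketch below is not part of the original paper; the sizes $p,q,r$ and the nodes are arbitrary illustrative choices, and NumPy is used to compare the lacunary determinant with its factorised form.

```python
import itertools
import numpy as np

def elem_sym(u, m):
    """Elementary symmetric polynomial s_m(u); 1 for m = 0, 0 outside 0..len(u)."""
    if m < 0 or m > len(u):
        return 0.0
    return sum(np.prod(c) for c in itertools.combinations(u, m)) if m > 0 else 1.0

p, q, r = 3, 2, 2                          # illustrative sizes (assumption, not from the paper)
rng = np.random.default_rng(0)
u = rng.uniform(0.5, 1.5, p + r)           # arbitrary distinct nodes

# Left-hand side: lacunary Vandermonde determinant with exponents {0..p-1} U {p+q..p+q+r-1}.
exponents = list(range(p)) + list(range(p + q, p + q + r))
U_pqr = np.linalg.det(np.array([[x ** l for l in exponents] for x in u]))

# Right-hand side of Proposition A.1: full Vandermonde product times the q x q
# determinant built from the elementary symmetric polynomials s_{r+i-j}.
vdm = np.prod([u[j] - u[i] for i in range(p + r) for j in range(i + 1, p + r)])
S = np.array([[elem_sym(u, r + i - j) for j in range(q)] for i in range(q)])

print(U_pqr, vdm * np.linalg.det(S))       # the two printed values coincide
```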
Let now $U_{pqr,\ell}\big(\begin{smallmatrix}u_1,\dots,u_{p+r}\\ \alpha_1,\dots,\alpha_{p+r}\end{smallmatrix}\big)$ be the determinant deduced from $U_{pqr}(u_1,\dots,u_{p+r})$ by replacing one column by a general column $\alpha_1,\dots,\alpha_{p+r}$, that is, the determinant given, if $0\le\ell\le p-1$, by
$$
\begin{vmatrix}
1 & u_1 & \cdots & u_1^{\ell-1} & \alpha_1 & u_1^{\ell+1} & \cdots & u_1^{p-1} & u_1^{p+q} & \cdots & u_1^{p+q+r-1}\\
\vdots & \vdots & & \vdots & \vdots & \vdots & & \vdots & \vdots & & \vdots\\
1 & u_{p+r} & \cdots & u_{p+r}^{\ell-1} & \alpha_{p+r} & u_{p+r}^{\ell+1} & \cdots & u_{p+r}^{p-1} & u_{p+r}^{p+q} & \cdots & u_{p+r}^{p+q+r-1}
\end{vmatrix}, \tag{A.12}
$$
and, if $p+q\le\ell\le p+q+r-1$, by
$$
\begin{vmatrix}
1 & u_1 & \cdots & u_1^{p-1} & u_1^{p+q} & \cdots & u_1^{\ell-1} & \alpha_1 & u_1^{\ell+1} & \cdots & u_1^{p+q+r-1}\\
\vdots & \vdots & & \vdots & \vdots & & \vdots & \vdots & \vdots & & \vdots\\
1 & u_{p+r} & \cdots & u_{p+r}^{p-1} & u_{p+r}^{p+q} & \cdots & u_{p+r}^{\ell-1} & \alpha_{p+r} & u_{p+r}^{\ell+1} & \cdots & u_{p+r}^{p+q+r-1}
\end{vmatrix}. \tag{A.13}
$$
We have that
$$
U_{pqr,\ell}\big(\begin{smallmatrix}u_1,\dots,u_{p+r}\\ \alpha_1,\dots,\alpha_{p+r}\end{smallmatrix}\big) = \sum_{k=1}^{p+r}\alpha_k\,U_{pqr,k\ell}(u_1,\dots,u_{k-1},u_{k+1},\dots,u_{p+r}), \tag{A.14}
$$
where $U_{pqr,k\ell}(u_1,\dots,u_{k-1},u_{k+1},\dots,u_{p+r})$ is the determinant given, if $0\le\ell\le p-1$, by
$$
\begin{vmatrix}
1 & u_1 & \cdots & u_1^{\ell-1} & 0 & u_1^{\ell+1} & \cdots & u_1^{p-1} & u_1^{p+q} & \cdots & u_1^{p+q+r-1}\\
\vdots & \vdots & & \vdots & \vdots & \vdots & & \vdots & \vdots & & \vdots\\
1 & u_{k-1} & \cdots & u_{k-1}^{\ell-1} & 0 & u_{k-1}^{\ell+1} & \cdots & u_{k-1}^{p-1} & u_{k-1}^{p+q} & \cdots & u_{k-1}^{p+q+r-1}\\
1 & u_k & \cdots & u_k^{\ell-1} & 1 & u_k^{\ell+1} & \cdots & u_k^{p-1} & u_k^{p+q} & \cdots & u_k^{p+q+r-1}\\
1 & u_{k+1} & \cdots & u_{k+1}^{\ell-1} & 0 & u_{k+1}^{\ell+1} & \cdots & u_{k+1}^{p-1} & u_{k+1}^{p+q} & \cdots & u_{k+1}^{p+q+r-1}\\
\vdots & \vdots & & \vdots & \vdots & \vdots & & \vdots & \vdots & & \vdots\\
1 & u_{p+r} & \cdots & u_{p+r}^{\ell-1} & 0 & u_{p+r}^{\ell+1} & \cdots & u_{p+r}^{p-1} & u_{p+r}^{p+q} & \cdots & u_{p+r}^{p+q+r-1}
\end{vmatrix}, \tag{A.15}
$$
and, if $p+q\le\ell\le p+q+r-1$, by
$$
\begin{vmatrix}
1 & u_1 & \cdots & u_1^{p-1} & u_1^{p+q} & \cdots & u_1^{\ell-1} & 0 & u_1^{\ell+1} & \cdots & u_1^{p+q+r-1}\\
\vdots & \vdots & & \vdots & \vdots & & \vdots & \vdots & \vdots & & \vdots\\
1 & u_{k-1} & \cdots & u_{k-1}^{p-1} & u_{k-1}^{p+q} & \cdots & u_{k-1}^{\ell-1} & 0 & u_{k-1}^{\ell+1} & \cdots & u_{k-1}^{p+q+r-1}\\
1 & u_k & \cdots & u_k^{p-1} & u_k^{p+q} & \cdots & u_k^{\ell-1} & 1 & u_k^{\ell+1} & \cdots & u_k^{p+q+r-1}\\
1 & u_{k+1} & \cdots & u_{k+1}^{p-1} & u_{k+1}^{p+q} & \cdots & u_{k+1}^{\ell-1} & 0 & u_{k+1}^{\ell+1} & \cdots & u_{k+1}^{p+q+r-1}\\
\vdots & \vdots & & \vdots & \vdots & & \vdots & \vdots & \vdots & & \vdots\\
1 & u_{p+r} & \cdots & u_{p+r}^{p-1} & u_{p+r}^{p+q} & \cdots & u_{p+r}^{\ell-1} & 0 & u_{p+r}^{\ell+1} & \cdots & u_{p+r}^{p+q+r-1}
\end{vmatrix}. \tag{A.16}
$$
In fact, $U_{pqr,k\ell}(u_1,\dots,u_{k-1},u_{k+1},\dots,u_{p+r})$ is the coefficient of $u_k^{\ell}$ in $U_{pqr}(u_1,\dots,u_{p+r})$. Let us introduce $s_{k,0}:=s_{k,0}(u_1,\dots,u_{k-1},u_{k+1},\dots,u_{p+r})=1$, $s_{k,m}:=s_{k,m}(u_1,\dots,u_{k-1},u_{k+1},\dots,u_{p+r})=0$ for any integer $m$ such that $m\le-1$ or $m\ge p+r$ and, for $m\in\{1,\dots,p+r-1\}$,
$$
s_{k,m}:=s_{k,m}(u_1,\dots,u_{k-1},u_{k+1},\dots,u_{p+r})=\sum_{\substack{1\le i_1<\cdots<i_m\le p+r\\ i_1,\dots,i_m\neq k}}u_{i_1}\cdots u_{i_m}. \tag{A.17}
$$
We need to isolate $u_k$ in $\prod_{1\le i<j\le p+r}(u_j-u_i)$ and $\mathcal{S}_{pqr}$. First, we write that
$$
\prod_{1\le i<j\le p+r}(u_j-u_i)
= (-1)^{p+r-k}\prod_{\substack{1\le i<j\le p+r\\ i,j\neq k}}(u_j-u_i)\times\prod_{\substack{1\le i\le p+r\\ i\neq k}}(u_k-u_i)
= (-1)^{k-1}\prod_{\substack{1\le i<j\le p+r\\ i,j\neq k}}(u_j-u_i)\sum_{m=0}^{p+r-1}(-1)^{m}s_{k,p+r-m-1}\,u_k^{m}. \tag{A.18}
$$
Second, by isolating $u_k$ in $s_m$ according to $s_m = s_{k,m}+u_k s_{k,m-1}$, we get that the determinant $\mathcal{S}_{pqr}$ can be rewritten as
$$
\mathcal{S}_{pqr} =
\begin{vmatrix}
s_{k,r}+u_k s_{k,r-1} & s_{k,r-1}+u_k s_{k,r-2} & \cdots & s_{k,r-q+1}+u_k s_{k,r-q}\\
s_{k,r+1}+u_k s_{k,r} & s_{k,r}+u_k s_{k,r-1} & \cdots & s_{k,r-q+2}+u_k s_{k,r-q+1}\\
\vdots & \vdots & & \vdots\\
s_{k,r+q-1}+u_k s_{k,r+q-2} & s_{k,r+q-2}+u_k s_{k,r+q-3} & \cdots & s_{k,r}+u_k s_{k,r-1}
\end{vmatrix}. \tag{A.19}
$$
By introducing vectors $\vec{V}_m$ with coordinates $(s_{k,m},s_{k,m+1},\dots,s_{k,m+q-1})$ (written as a column), this determinant can be rewritten as
$$
\mathcal{S}_{pqr} = \det\big(\vec{V}_r+u_k\vec{V}_{r-1},\,\vec{V}_{r-1}+u_k\vec{V}_{r-2},\dots,\vec{V}_{r-q+1}+u_k\vec{V}_{r-q}\big). \tag{A.20}
$$
By appealing to multilinearity, it is easy to see that
$$
\mathcal{S}_{pqr} = \sum_{n=0}^{q}\det\big(\vec{V}_r,\vec{V}_{r-1},\dots,\vec{V}_{r-q+n+1},\vec{V}_{r-q+n-1},\dots,\vec{V}_{r-q}\big)\,u_k^{n}. \tag{A.21}
$$
Now, let us multiply the sum lying in (A.18) by (A.21):
$$
\left(\sum_{m=0}^{p+r-1}(-1)^{m}s_{k,p+r-m-1}\,u_k^{m}\right)\times\left(\sum_{n=0}^{q}\det\big(\vec{V}_r,\vec{V}_{r-1},\dots,\vec{V}_{r-q+n+1},\vec{V}_{r-q+n-1},\dots,\vec{V}_{r-q}\big)\,u_k^{n}\right)
= \sum_{\ell=0}^{p+q+r-1}\left(\sum_{m=0\vee(\ell-q)}^{(p+r-1)\wedge\ell}(-1)^{m}s_{k,p+r-m-1}\det\big(\vec{V}_r,\vec{V}_{r-1},\dots,\vec{V}_{\ell+r-q-m+1},\vec{V}_{\ell+r-q-m-1},\dots,\vec{V}_{r-q}\big)\right)u_k^{\ell}. \tag{A.22}
$$
Recalling the convention that $s_{k,m}=0$ if $m\le-1$ or $m\ge p+r$, the coefficient of $u_k^{\ell}$ in (A.22) can be written as
$$
\sum_{m=\ell-q}^{\ell}(-1)^{m}s_{k,p+r-m-1}\det\big(\vec{V}_r,\vec{V}_{r-1},\dots,\vec{V}_{\ell+r-q-m+1},\vec{V}_{\ell+r-q-m-1},\dots,\vec{V}_{r-q}\big)
= (-1)^{\ell}\sum_{m=0}^{q}(-1)^{m+q}s_{k,p+q+r-\ell-m-1}\det\big(\vec{V}_r,\vec{V}_{r-1},\dots,\vec{V}_{r-m+1},\vec{V}_{r-m-1},\dots,\vec{V}_{r-q}\big). \tag{A.23}
$$
In this form, we see that the coefficient of $u_k^{\ell}$ in (A.22) is nothing but the product of $(-1)^{\ell}$ by the expansion of the determinant
$$
\mathcal{S}_{pqr,k\ell} =
\begin{vmatrix}
s_{k,r} & s_{k,r-1} & \cdots & s_{k,r-q}\\
s_{k,r+1} & s_{k,r} & \cdots & s_{k,r-q+1}\\
\vdots & \vdots & & \vdots\\
s_{k,r+q-1} & s_{k,r+q-2} & \cdots & s_{k,r-1}\\
s_{k,p+q+r-\ell-1} & s_{k,p+q+r-\ell-2} & \cdots & s_{k,p+r-\ell-1}
\end{vmatrix}. \tag{A.24}
$$

Proposition A.2. The determinant $U_{pqr,k\ell}(u_1,\dots,u_{k-1},u_{k+1},\dots,u_{p+r})$ admits the following expression:
$$
U_{pqr,k\ell}(u_1,\dots,u_{k-1},u_{k+1},\dots,u_{p+r}) = (-1)^{k+\ell-1}\prod_{\substack{1\le i<j\le p+r\\ i,j\neq k}}(u_j-u_i)\times\mathcal{S}_{pqr,k\ell}(u_1,\dots,u_{k-1},u_{k+1},\dots,u_{p+r}), \tag{A.25}
$$
where $\mathcal{S}_{pqr,k\ell}(u_1,\dots,u_{k-1},u_{k+1},\dots,u_{p+r})$ is the determinant displayed in (A.24), all the $s_{k,m}$'s being evaluated at $(u_1,\dots,u_{k-1},u_{k+1},\dots,u_{p+r})$.
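As with Proposition A.1, Proposition A.2 is easy to test numerically. The following sketch is not part of the original paper; the sizes $p,q,r$, the index pair $(k,\ell)$ and the nodes are arbitrary illustrative choices.

```python
import itertools
import numpy as np

def elem_sym(u, m):
    """Elementary symmetric polynomial s_m(u); 1 for m = 0, 0 outside 0..len(u)."""
    if m < 0 or m > len(u):
        return 0.0
    return sum(np.prod(c) for c in itertools.combinations(u, m)) if m > 0 else 1.0

p, q, r = 2, 2, 2                          # illustrative sizes (assumption)
rng = np.random.default_rng(1)
u = rng.uniform(0.5, 1.5, p + r)
exponents = list(range(p)) + list(range(p + q, p + q + r))
k, ell = 2, p + q                          # k is 1-based as in the paper; ell is an exponent in I

# Left-hand side: determinant (A.16), i.e. the lacunary Vandermonde matrix with the
# column of exponent ell replaced by the k-th standard basis vector.
M = np.array([[x ** l for l in exponents] for x in u])
M[:, exponents.index(ell)] = np.eye(p + r)[k - 1]
lhs = np.linalg.det(M)

# Right-hand side of Proposition A.2, built on u with u_k removed.
v = np.delete(u, k - 1)
vdm = np.prod([v[j] - v[i] for i in range(len(v)) for j in range(i + 1, len(v))])
S = np.zeros((q + 1, q + 1))
for j in range(q + 1):
    for i in range(q):
        S[i, j] = elem_sym(v, r + i - j)
    S[q, j] = elem_sym(v, p + q + r - ell - 1 - j)
rhs = (-1) ** (k + ell - 1) * vdm * np.linalg.det(S)

print(lhs, rhs)                            # the two printed values coincide
```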
As a consequence of Propositions A.1 and A.2, we get the result below. Set
$$
p_k := p_k(u_1,\dots,u_{p+r}) = \prod_{\substack{1\le i\le p+r\\ i\neq k}}(u_k-u_i). \tag{A.26}
$$

Proposition A.3. Let $p,q,r$ be positive integers, let $u_1,\dots,u_{p+r}$ be distinct complex numbers, and let $\alpha_1,\dots,\alpha_{p+r}$ be complex numbers. Set $I=\{0,\dots,p-1\}\cup\{p+q,\dots,p+q+r-1\}$. The solution of the system $\sum_{\ell\in I}u_k^{\ell}x_\ell=\alpha_k$, $1\le k\le p+r$, or, more explicitly,
$$
\left\{
\begin{aligned}
&x_0+u_1x_1+\cdots+u_1^{p-1}x_{p-1}+u_1^{p+q}x_{p+q}+\cdots+u_1^{p+q+r-1}x_{p+q+r-1} = \alpha_1\\
&\qquad\vdots\\
&x_0+u_{p+r}x_1+\cdots+u_{p+r}^{p-1}x_{p-1}+u_{p+r}^{p+q}x_{p+q}+\cdots+u_{p+r}^{p+q+r-1}x_{p+q+r-1} = \alpha_{p+r}
\end{aligned}
\right. \tag{A.27}
$$
is given by
$$
x_\ell = \frac{(-1)^{\ell+p+r-1}}{\mathcal{S}_{pqr}(u_1,\dots,u_{p+r})}\sum_{k=1}^{p+r}\alpha_k\,\frac{\mathcal{S}_{pqr,k\ell}(u_1,\dots,u_{k-1},u_{k+1},\dots,u_{p+r})}{p_k(u_1,\dots,u_{p+r})},\qquad \ell\in I. \tag{A.28}
$$

Proof. Cramer's formulae yield that
$$
x_\ell = \frac{U_{pqr,\ell}\big(\begin{smallmatrix}u_1,\dots,u_{p+r}\\ \alpha_1,\dots,\alpha_{p+r}\end{smallmatrix}\big)}{U_{pqr}(u_1,\dots,u_{p+r})}
= \sum_{k=1}^{p+r}\alpha_k\,\frac{U_{pqr,k\ell}(u_1,\dots,u_{k-1},u_{k+1},\dots,u_{p+r})}{U_{pqr}(u_1,\dots,u_{p+r})}. \tag{A.29}
$$
By using the factorisations provided by Propositions A.1 and A.2, namely,
$$
\begin{aligned}
U_{pqr}(u_1,\dots,u_{p+r}) &= V_{p+r}(u_1,\dots,u_{p+r})\,\mathcal{S}_{pqr}(u_1,\dots,u_{p+r})\\
&= (-1)^{p+r-k}p_k(u_1,\dots,u_{p+r})\,V_{p+r-1}(u_1,\dots,u_{k-1},u_{k+1},\dots,u_{p+r})\,\mathcal{S}_{pqr}(u_1,\dots,u_{p+r}),\\
U_{pqr,k\ell}(u_1,\dots,u_{k-1},u_{k+1},\dots,u_{p+r}) &= (-1)^{k+\ell-1}V_{p+r-1}(u_1,\dots,u_{k-1},u_{k+1},\dots,u_{p+r})\,\mathcal{S}_{pqr,k\ell}(u_1,\dots,u_{k-1},u_{k+1},\dots,u_{p+r}),
\end{aligned} \tag{A.30}
$$
we immediately get (A.28).
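Proposition A.3 can also be checked against a direct linear solve. The sketch below is not part of the original paper; the sizes $p,q,r$, the nodes $u_k$ and the right-hand sides $\alpha_k$ are arbitrary illustrative choices, and NumPy's generic solver is used as the reference.

```python
import itertools
import numpy as np

def elem_sym(u, m):
    """Elementary symmetric polynomial s_m(u); 1 for m = 0, 0 outside 0..len(u)."""
    if m < 0 or m > len(u):
        return 0.0
    return sum(np.prod(c) for c in itertools.combinations(u, m)) if m > 0 else 1.0

def S_pqr(u, q, r):
    """q x q determinant of Proposition A.1 with entries s_{r+i-j}(u)."""
    return np.linalg.det(np.array([[elem_sym(u, r + i - j) for j in range(q)]
                                   for i in range(q)]))

def S_pqr_kl(v, p, q, r, ell):
    """(q+1) x (q+1) determinant (A.24) built on u with u_k removed (argument v)."""
    S = np.zeros((q + 1, q + 1))
    for j in range(q + 1):
        for i in range(q):
            S[i, j] = elem_sym(v, r + i - j)
        S[q, j] = elem_sym(v, p + q + r - ell - 1 - j)
    return np.linalg.det(S)

p, q, r = 2, 2, 3                                   # illustrative sizes (assumption)
rng = np.random.default_rng(2)
u = rng.uniform(0.5, 1.5, p + r)                    # arbitrary distinct nodes
alpha = rng.uniform(-1.0, 1.0, p + r)
I = list(range(p)) + list(range(p + q, p + q + r))

# Reference: direct solution of the lacunary Vandermonde system (A.27).
x_direct = np.linalg.solve(np.array([[uk ** l for l in I] for uk in u]), alpha)

# Closed form (A.28).
x_formula = []
for ell in I:
    total = 0.0
    for k in range(1, p + r + 1):
        v = np.delete(u, k - 1)
        total += alpha[k - 1] * S_pqr_kl(v, p, q, r, ell) / np.prod(u[k - 1] - v)
    x_formula.append((-1) ** (ell + p + r - 1) * total / S_pqr(u, q, r))

print(np.allclose(x_direct, x_formula))             # expected: True
```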
A.2. A Combinatoric Identity

Lemma A.4. The following identity holds for any positive integers $\alpha$, $\beta$, $n$:
$$
\sum_{k=0}^{n}(-1)^{k}\binom{n}{k}\frac{(k+\alpha)!}{(k+\beta)!} = \frac{\alpha!}{(\beta+n)!}\,(\beta-\alpha+n-1)_{n}. \tag{A.31}
$$
It can be rewritten as
$$
\sum_{k=0}^{n}(-1)^{k}\binom{n}{k}\binom{k+\alpha}{k+\beta} = (-1)^{n}\binom{\alpha}{\beta+n}\quad\text{if }\alpha\ge\beta, \tag{A.32}
$$
$$
\sum_{k=0}^{n}\frac{(-1)^{k}\binom{n}{k}}{\binom{k+\beta}{k+\alpha}} = \frac{\beta-\alpha}{(\beta-\alpha+n)\binom{\beta+n}{\alpha}}\quad\text{if }\alpha<\beta. \tag{A.33}
$$

Proof. Suppose first that $\alpha\ge\beta$. Noticing that
$$
\frac{\mathrm{d}^{\alpha-\beta}}{\mathrm{d}x^{\alpha-\beta}}\big(x^{k+\alpha}\big)\bigg|_{x=1} = \frac{(k+\alpha)!}{(k+\beta)!},
$$
we immediately get that
$$
\sum_{k=0}^{n}(-1)^{k}\binom{n}{k}\frac{(k+\alpha)!}{(k+\beta)!}
= \frac{\mathrm{d}^{\alpha-\beta}}{\mathrm{d}x^{\alpha-\beta}}\left(\sum_{k=0}^{n}(-1)^{k}\binom{n}{k}x^{k+\alpha}\right)\bigg|_{x=1}
= \frac{\mathrm{d}^{\alpha-\beta}}{\mathrm{d}x^{\alpha-\beta}}\big(x^{\alpha}(1-x)^{n}\big)\bigg|_{x=1}. \tag{A.34}
$$
We expand the last displayed derivative by using the Leibniz rule:
$$
\frac{\mathrm{d}^{\alpha-\beta}}{\mathrm{d}x^{\alpha-\beta}}\big(x^{\alpha}(1-x)^{n}\big)\bigg|_{x=1}
= \sum_{k=0}^{\alpha-\beta}\binom{\alpha-\beta}{k}\frac{\mathrm{d}^{k}}{\mathrm{d}x^{k}}\big((1-x)^{n}\big)\bigg|_{x=1}\,\frac{\mathrm{d}^{\alpha-\beta-k}}{\mathrm{d}x^{\alpha-\beta-k}}\big(x^{\alpha}\big)\bigg|_{x=1}. \tag{A.35}
$$
Since $(\mathrm{d}^{k}/\mathrm{d}x^{k})((1-x)^{n})|_{x=1}=(-1)^{n}\delta_{nk}\,n!$, we have that $(\mathrm{d}^{\alpha-\beta}/\mathrm{d}x^{\alpha-\beta})(x^{\alpha}(1-x)^{n})|_{x=1}=0$ if $\alpha-\beta<n$, and, if $\alpha-\beta\ge n$,
$$
\frac{\mathrm{d}^{\alpha-\beta}}{\mathrm{d}x^{\alpha-\beta}}\big(x^{\alpha}(1-x)^{n}\big)\bigg|_{x=1}
= (-1)^{n}n!\binom{\alpha-\beta}{n}\frac{\mathrm{d}^{\alpha-\beta-n}}{\mathrm{d}x^{\alpha-\beta-n}}\big(x^{\alpha}\big)\bigg|_{x=1}
= (-1)^{n}\frac{\alpha!\,(\alpha-\beta)!}{(\alpha-\beta-n)!\,(\beta+n)!}, \tag{A.36}
$$
which coincides with the announced result. Second, suppose that $\alpha<\beta$. Noticing that
$$
\frac{(k+\alpha)!}{(k+\beta)!} = \frac{1}{(\beta-\alpha-1)!}\int_{0}^{1}x^{k+\alpha}(1-x)^{\beta-\alpha-1}\,\mathrm{d}x, \tag{A.37}
$$
we get that
$$
\sum_{k=0}^{n}(-1)^{k}\binom{n}{k}\frac{(k+\alpha)!}{(k+\beta)!}
= \frac{1}{(\beta-\alpha-1)!}\int_{0}^{1}\left(\sum_{k=0}^{n}(-1)^{k}\binom{n}{k}x^{k}\right)x^{\alpha}(1-x)^{\beta-\alpha-1}\,\mathrm{d}x
= \frac{1}{(\beta-\alpha-1)!}\int_{0}^{1}x^{\alpha}(1-x)^{\beta-\alpha+n-1}\,\mathrm{d}x
= \frac{\alpha!\,(\beta-\alpha+n-1)!}{(\beta-\alpha-1)!\,(\beta+n)!}, \tag{A.38}
$$
which coincides with the announced result.
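Lemma A.4 can be spot-checked exactly with rational arithmetic. The following script is not part of the original paper; the parameter ranges are arbitrary illustrative choices, and $(x)_n$ denotes the falling factorial, as in (A.31).

```python
from fractions import Fraction
from math import comb, factorial

def falling(x, n):
    """Falling factorial x(x-1)...(x-n+1); empty product for n = 0."""
    out = 1
    for i in range(n):
        out *= x - i
    return out

def lhs(alpha, beta, n):
    return sum(Fraction((-1) ** k * comb(n, k) * factorial(k + alpha), factorial(k + beta))
               for k in range(n + 1))

def rhs(alpha, beta, n):
    return Fraction(factorial(alpha), factorial(beta + n)) * falling(beta - alpha + n - 1, n)

# Spot-check of (A.31) for a range of small parameters (illustrative values only).
for alpha in range(1, 7):
    for beta in range(1, 7):
        for n in range(1, 6):
            assert lhs(alpha, beta, n) == rhs(alpha, beta, n), (alpha, beta, n)
print("Lemma A.4 verified for 1 <= alpha, beta <= 6 and 1 <= n <= 5")
```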
A.3. Some Matrices. Let $\alpha,\beta\in\mathbb{N}$ such that $\alpha>\beta\ge N$ and set
$$
\mathbf{A} = \left[\binom{j+\alpha}{i+N}\right]_{0\le i,j\le N-1},\qquad
\mathbf{B} = \left[\binom{\beta}{i+N}\right]_{0\le i\le N-1}, \tag{A.39}
$$
with the convention of setting $\binom{j}{i}=0$ if $i>j$. These matrices have been used for solving systems (273) and (274) with the choices $\alpha=b-a+N-1$ and $\beta=N-a-1$. The aim of this section is to compute the product of the inverse of $\mathbf{A}$ by $\mathbf{B}$, namely, $\mathbf{A}^{-1}\mathbf{B}$. For this, we use Gauss elimination. The result is displayed in Theorem A.8. The calculations are quite technical, so we perform them progressively, the intermediate steps being stated in several lemmas (Lemmas A.4, A.5, A.6, and A.7).

Lemma A.5. The matrix $\mathbf{A}$ can be decomposed into $\mathbf{A}=\mathbf{L}\mathbf{U}^{-1}$, where the matrices $\mathbf{U}$ and $\mathbf{L}$ are given by
$$
\mathbf{U} = \left[(-1)^{i+j}\,\mathbb{1}_{\{i\le j\}}\binom{j}{i}\frac{(j+\alpha)_N}{(i+\alpha)_N}\right]_{0\le i,j\le N-1},\qquad
\mathbf{L} = \left[\mathbb{1}_{\{i\ge j\}}\binom{j+\alpha}{i+N}\frac{(i)_j}{(j+\alpha-N)_j}\right]_{0\le i,j\le N-1}. \tag{A.40}
$$
The regular matrices $\mathbf{U}$ and $\mathbf{L}$ are, respectively, upper and lower triangular.

Proof. We begin by detailing the algorithm providing the matrix $\mathbf{U}$. Call $\mathrm{C}^{(0)}_0,\mathrm{C}^{(0)}_1,\dots,\mathrm{C}^{(0)}_{N-1}$ the columns of $\mathbf{A}$, that is, for $j\in\{0,\dots,N-1\}$,
$$
\mathrm{C}^{(0)}_j = \left[\binom{j+\alpha}{i+N}\right]_{0\le i\le N-1}. \tag{A.41}
$$
Apply to them, except for $\mathrm{C}^{(0)}_0$, the transformation defined, for $j\in\{1,\dots,N-1\}$, by
$$
\mathrm{C}^{(1)}_j = \mathrm{C}^{(0)}_j - \frac{j+\alpha}{j+\alpha-N}\,\mathrm{C}^{(0)}_{j-1}. \tag{A.42}
$$
The $\mathrm{C}^{(0)}_0,\mathrm{C}^{(1)}_1,\dots,\mathrm{C}^{(1)}_{N-1}$ are the columns of a new matrix $\mathbf{L}_1$. Actually, this transformation corresponds to a matrix multiplication acting on $\mathbf{A}$: $\mathbf{L}_1=\mathbf{A}\mathbf{U}_1$ with
$$
\mathbf{U}_1 = \begin{bmatrix}
1 & -\dfrac{\alpha+1}{\alpha+1-N} & 0 & \cdots & 0\\
0 & 1 & -\dfrac{\alpha+2}{\alpha+2-N} & \ddots & \vdots\\
\vdots & & \ddots & \ddots & 0\\
 & & & 1 & -\dfrac{\alpha+N-1}{\alpha-1}\\
0 & \cdots & & 0 & 1
\end{bmatrix}. \tag{A.43}
$$
Simple computations show that
$$
\mathbf{L}_1 = \left[\left(\delta_{j0}+\mathbb{1}_{\{j\ge 1\}}\frac{i}{j+\alpha-N}\right)\binom{j+\alpha}{i+N}\right]_{0\le i,j\le N-1}
= \left[\binom{j+\alpha}{i+N}\frac{(i)_{j\wedge 1}}{(j+\alpha-N)_{j\wedge 1}}\right]_{0\le i,j\le N-1}. \tag{A.44}
$$
Next, apply the second transformation to the columns of $\mathbf{L}_1$ except for $\mathrm{C}^{(0)}_0$ and $\mathrm{C}^{(1)}_1$, defined, for $j\in\{2,\dots,N-1\}$, by
$$
\mathrm{C}^{(2)}_j = \mathrm{C}^{(1)}_j - \frac{j+\alpha}{j+\alpha-N}\,\mathrm{C}^{(1)}_{j-1}
= \mathrm{C}^{(0)}_j - 2\,\frac{j+\alpha}{j+\alpha-N}\,\mathrm{C}^{(0)}_{j-1} + \frac{(j+\alpha)(j+\alpha-1)}{(j+\alpha-N)(j+\alpha-N-1)}\,\mathrm{C}^{(0)}_{j-2}. \tag{A.45}
$$
The $\mathrm{C}^{(0)}_0,\mathrm{C}^{(1)}_1,\mathrm{C}^{(2)}_2,\dots,\mathrm{C}^{(2)}_{N-1}$ are the columns of a new matrix $\mathbf{L}_2=\mathbf{A}\mathbf{U}_2$, where
$$
\mathbf{U}_2 = \begin{bmatrix}
1 & -\dfrac{\alpha+1}{\alpha+1-N} & \dfrac{(\alpha+2)(\alpha+1)}{(\alpha+2-N)(\alpha+1-N)} & & 0\\
0 & 1 & -2\,\dfrac{\alpha+2}{\alpha+2-N} & \ddots & \\
\vdots & & \ddots & \ddots & \dfrac{(\alpha+N-1)(\alpha+N-2)}{(\alpha-1)(\alpha-2)}\\
 & & & 1 & -2\,\dfrac{\alpha+N-1}{\alpha-1}\\
0 & & \cdots & 0 & 1
\end{bmatrix}. \tag{A.46}
$$
Straightforward algebra yields that
$$
\mathbf{L}_2 = \left[\left(\delta_{j0}+\frac{i}{j+\alpha-N}\,\delta_{j1}+\mathbb{1}_{\{j\ge 2\}}\frac{i(i-1)}{(j+\alpha-N)(j+\alpha-N-1)}\right)\binom{j+\alpha}{i+N}\right]_{0\le i,j\le N-1}
= \left[\binom{j+\alpha}{i+N}\frac{(i)_{j\wedge 2}}{(j+\alpha-N)_{j\wedge 2}}\right]_{0\le i,j\le N-1}. \tag{A.47}
$$
This method can be recursively extended: apply the $k$th transformation ($1\le k\le N-1$) defined, for $j\in\{k,\dots,N-1\}$, by
$$
\begin{aligned}
\mathrm{C}^{(k)}_j &= \mathrm{C}^{(k-1)}_j - \frac{j+\alpha}{j+\alpha-N}\,\mathrm{C}^{(k-1)}_{j-1}
= \mathrm{C}^{(k-2)}_j - 2\,\frac{j+\alpha}{j+\alpha-N}\,\mathrm{C}^{(k-2)}_{j-1} + \frac{(j+\alpha)(j+\alpha-1)}{(j+\alpha-N)(j+\alpha-N-1)}\,\mathrm{C}^{(k-2)}_{j-2}\\
&\ \,\vdots\\
&= \mathrm{C}^{(0)}_j - \binom{k}{1}\frac{j+\alpha}{j+\alpha-N}\,\mathrm{C}^{(0)}_{j-1} + \binom{k}{2}\frac{(j+\alpha)(j+\alpha-1)}{(j+\alpha-N)(j+\alpha-N-1)}\,\mathrm{C}^{(0)}_{j-2}
+ \cdots + (-1)^{k}\binom{k}{k}\frac{(j+\alpha)(j+\alpha-1)\cdots(j+\alpha-k+1)}{(j+\alpha-N)(j+\alpha-N-1)\cdots(j+\alpha-k-N+1)}\,\mathrm{C}^{(0)}_{j-k}\\
&= \sum_{\ell=0}^{k}(-1)^{\ell}\binom{k}{\ell}\frac{(j+\alpha)_{\ell}}{(j+\alpha-N)_{\ell}}\,\mathrm{C}^{(0)}_{j-\ell}
= \sum_{\ell=j-k}^{j}(-1)^{j-\ell}\binom{k}{j-\ell}\frac{(j+\alpha)_{j-\ell}}{(j+\alpha-N)_{j-\ell}}\,\mathrm{C}^{(0)}_{\ell}
= \sum_{\ell=j-k}^{j}(-1)^{j-\ell}\binom{k}{j-\ell}\frac{(j+\alpha)_{N}}{(\ell+\alpha)_{N}}\,\mathrm{C}^{(0)}_{\ell}.
\end{aligned} \tag{A.48}
$$
In particular,
$$
\mathrm{C}^{(k)}_k = \sum_{\ell=0}^{k}(-1)^{k-\ell}\binom{k}{\ell}\frac{(k+\alpha)_{N}}{(\ell+\alpha)_{N}}\,\mathrm{C}^{(0)}_{\ell}. \tag{A.49}
$$
The $\mathrm{C}^{(0)}_0,\mathrm{C}^{(1)}_1,\mathrm{C}^{(2)}_2,\dots,\mathrm{C}^{(k)}_k,\mathrm{C}^{(k)}_{k+1},\dots,\mathrm{C}^{(k)}_{N-1}$ are the columns of the $k$th matrix $\mathbf{L}_k=\mathbf{A}\mathbf{U}_k$ with $\mathbf{U}_k=[\mathbf{U}'_k\ \vdots\ \mathbf{U}''_k]$, where $\mathbf{U}'_k$ is the matrix
$$
\begin{bmatrix}
1 & -\dfrac{\alpha+1}{\alpha+1-N} & \dfrac{(\alpha+2)(\alpha+1)}{(\alpha+2-N)(\alpha+1-N)} & \cdots & (-1)^{k-1}\dbinom{k-1}{k-1}\dfrac{(\alpha+k-1)\cdots(\alpha+1)}{(\alpha+k-N-1)\cdots(\alpha+1-N)}\\
0 & 1 & -2\,\dfrac{\alpha+2}{\alpha+2-N} & \cdots & (-1)^{k-2}\dbinom{k-1}{k-2}\dfrac{(\alpha+k-1)\cdots(\alpha+2)}{(\alpha+k-N-1)\cdots(\alpha+2-N)}\\
0 & 0 & 1 & \cdots & (-1)^{k-3}\dbinom{k-1}{k-3}\dfrac{(\alpha+k-1)\cdots(\alpha+3)}{(\alpha+k-N-1)\cdots(\alpha+3-N)}\\
\vdots & & & \ddots & \vdots\\
 & & & & \dbinom{k-1}{2}\dfrac{(\alpha+k-1)(\alpha+k-2)}{(\alpha+k-N-1)(\alpha+k-N-2)}\\
 & & & & -\dbinom{k-1}{1}\dfrac{\alpha+k-1}{\alpha+k-N-1}\\
 & & & & \dbinom{k-1}{0}\\
0 & 0 & \cdots & \cdots & 0\\
\vdots & & & & \vdots\\
0 & 0 & \cdots & \cdots & 0
\end{bmatrix} \tag{A.50}
$$
and $\mathbf{U}''_k$ is the matrix
$$
\begin{bmatrix}
(-1)^{k}\dbinom{k}{k}\dfrac{(\alpha+k)\cdots(\alpha+1)}{(\alpha+k-N)\cdots(\alpha+1-N)} & 0 & \cdots & 0\\
\vdots & (-1)^{k}\dbinom{k}{k}\dfrac{(\alpha+k+1)\cdots(\alpha+2)}{(\alpha+k-N+1)\cdots(\alpha+2-N)} & & \vdots\\
\dbinom{k}{2}\dfrac{(\alpha+k)(\alpha+k-1)}{(\alpha+k-N)(\alpha+k-N-1)} & \vdots & \ddots & 0\\
-\dbinom{k}{1}\dfrac{\alpha+k}{\alpha+k-N} & \dbinom{k}{2}\dfrac{(\alpha+k+1)(\alpha+k)}{(\alpha+k-N+1)(\alpha+k-N)} & & (-1)^{k}\dbinom{k}{k}\dfrac{(\alpha+N-1)\cdots(\alpha+N-k)}{(\alpha-1)\cdots(\alpha-k)}\\
\dbinom{k}{0} & -\dbinom{k}{1}\dfrac{\alpha+k+1}{\alpha+k-N+1} & & \vdots\\
0 & \dbinom{k}{0} & & \dbinom{k}{2}\dfrac{(\alpha+N-1)(\alpha+N-2)}{(\alpha-1)(\alpha-2)}\\
\vdots & & \ddots & -\dbinom{k}{1}\dfrac{\alpha+N-1}{\alpha-1}\\
0 & 0 & \cdots & \dbinom{k}{0}
\end{bmatrix}. \tag{A.51}
$$
The matrices $\mathbf{U}'_k$ and $\mathbf{U}''_k$ can be simply written as
$$
\mathbf{U}'_k = \left[(-1)^{j-i}\,\mathbb{1}_{\{i\le j\}}\binom{j}{i}\frac{(j+\alpha)_N}{(i+\alpha)_N}\right]_{\substack{0\le i\le N-1\\ 0\le j\le k-1}},\qquad
\mathbf{U}''_k = \left[(-1)^{j-i}\,\mathbb{1}_{\{i\vee k\le j\le i+k\}}\binom{k}{j-i}\frac{(j+\alpha)_N}{(i+\alpha)_N}\right]_{\substack{0\le i\le N-1\\ k\le j\le N-1}}. \tag{A.52}
$$
Clever algebra yields that
$$
\mathbf{L}_k = \left[\binom{j+\alpha}{i+N}\frac{(i)_{j\wedge k}}{(j+\alpha-N)_{j\wedge k}}\right]_{0\le i,j\le N-1}. \tag{A.53}
$$
We will not prove (A.53); we will only check it below in the case $k=N-1$.

We progressively arrive at the last transformation which corresponds to $k=N-1$:
$$
\begin{aligned}
\mathrm{C}^{(N-1)}_{N-1} &= \mathrm{C}^{(0)}_{N-1} - \binom{N-1}{1}\frac{N+\alpha-1}{\alpha-1}\,\mathrm{C}^{(0)}_{N-2} + \binom{N-1}{2}\frac{(N+\alpha-1)(N+\alpha-2)}{(\alpha-1)(\alpha-2)}\,\mathrm{C}^{(0)}_{N-3}
+ \cdots + (-1)^{N-1}\binom{N-1}{N-1}\frac{(N+\alpha-1)(N+\alpha-2)\cdots(\alpha+1)}{(\alpha-1)(\alpha-2)\cdots(\alpha-N+1)}\,\mathrm{C}^{(0)}_{0}\\
&= \sum_{\ell=0}^{N-1}(-1)^{\ell}\binom{N-1}{\ell}\frac{(N+\alpha-1)_{\ell}}{(\alpha-1)_{\ell}}\,\mathrm{C}^{(0)}_{N-1-\ell}
= \sum_{\ell=0}^{N-1}(-1)^{N-1-\ell}\binom{N-1}{\ell}\frac{(N+\alpha-1)_{N-\ell-1}}{(\alpha-1)_{N-\ell-1}}\,\mathrm{C}^{(0)}_{\ell}.
\end{aligned} \tag{A.54}
$$
The $\mathrm{C}^{(0)}_0,\mathrm{C}^{(1)}_1,\mathrm{C}^{(2)}_2,\dots,\mathrm{C}^{(N-1)}_{N-1}$ are the columns of the last matrix given by $\mathbf{L}_{N-1}=\mathbf{A}\mathbf{U}_{N-1}$, where
$$
\mathbf{U}_{N-1} = \left[(-1)^{j-i}\binom{j}{i}\frac{(j+\alpha)_N}{(i+\alpha)_N}\right]_{0\le i,j\le N-1}. \tag{A.55}
$$
Formula (A.53) gives the following expression for $\mathbf{L}_{N-1}$ that will be checked below:
$$
\mathbf{L}_{N-1} = \left[\binom{j+\alpha}{i+N}\frac{(i)_{j}}{(j+\alpha-N)_{j}}\right]_{0\le i,j\le N-1}. \tag{A.56}
$$
Hence, by putting $\mathbf{L}=\mathbf{L}_{N-1}$ and $\mathbf{U}=\mathbf{U}_{N-1}$, we see that $\mathbf{L}$ is a lower triangular matrix and $\mathbf{U}$ is an upper triangular matrix, and we have obtained that $\mathbf{L}=\mathbf{A}\mathbf{U}$.

Finally, we directly check the decomposition $\mathbf{L}=\mathbf{A}\mathbf{U}$. The generic term of $\mathbf{A}\mathbf{U}$ is
$$
\sum_{k=0}^{j}(-1)^{j+k}\binom{k+\alpha}{i+N}\binom{j}{k}\frac{(j+\alpha)_N}{(k+\alpha)_N}. \tag{A.57}
$$
Observing that
$$
\binom{k+\alpha}{i+N}\frac{(j+\alpha)_N}{(k+\alpha)_N} = \frac{\binom{j+\alpha}{N}\binom{k+\alpha-N}{i}}{\binom{i+N}{N}}, \tag{A.58}
$$
this term can be rewritten as
$$
\frac{(-1)^{j}\binom{j+\alpha}{N}}{\binom{i+N}{N}}\sum_{k=0}^{j}(-1)^{k}\binom{j}{k}\binom{k+\alpha-N}{i}. \tag{A.59}
$$
The sum $\sum_{k=0}^{j}(-1)^{k}\binom{j}{k}\binom{k+\alpha-N}{i}$ can be explicitly evaluated thanks to Lemma A.4. Its value is $(-1)^{j}\binom{\alpha-N}{i-j}$. Therefore, we can easily get the generic term of $\mathbf{L}$, and the proof of Lemma A.5 is finished.
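Lemma A.5, that is, $\mathbf{L}=\mathbf{A}\mathbf{U}$, can be verified numerically from the closed forms (A.40). The sketch below is not part of the original paper; $N$ and $\alpha$ are arbitrary illustrative values, and $(x)_n$ denotes the falling factorial.

```python
import numpy as np
from math import comb

def falling(x, n):
    """Falling factorial x(x-1)...(x-n+1); empty product for n = 0."""
    out = 1.0
    for i in range(n):
        out *= x - i
    return out

N, alpha = 4, 9        # illustrative values only (alpha large enough that alpha - N >= 1)

A = np.array([[comb(j + alpha, i + N) for j in range(N)] for i in range(N)], dtype=float)
U = np.array([[(-1) ** (i + j) * comb(j, i) * falling(j + alpha, N) / falling(i + alpha, N)
               if i <= j else 0.0
               for j in range(N)] for i in range(N)])
L = np.array([[comb(j + alpha, i + N) * falling(i, j) / falling(j + alpha - N, j)
               for j in range(N)] for i in range(N)])

# Lemma A.5 states A = L U^{-1}, equivalently L = A U.
print(np.allclose(A @ U, L))           # expected: True
```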
Lemma A.6. The inverse of the matrix $\mathbf{L}$ is given by
$$
\mathbf{L}^{-1} = \left[(-1)^{i+j}\,\mathbb{1}_{\{i\ge j\}}\frac{(j+N)!\,(i+\alpha-N)_{i+1}}{j!\,(i-j)!\,(i+\alpha)_{j+N+1}}\right]_{0\le i,j\le N-1}. \tag{A.60}
$$

Proof. We simplify the entries of the product
$$
\left[\mathbb{1}_{\{i\ge j\}}\binom{j+\alpha}{i+N}\frac{(i)_j}{(j+\alpha-N)_j}\right]_{0\le i,j\le N-1}
\times\left[(-1)^{i+j}\,\mathbb{1}_{\{i\ge j\}}\frac{(j+N)!\,(i+\alpha-N)_{i+1}}{j!\,(i-j)!\,(i+\alpha)_{j+N+1}}\right]_{0\le i,j\le N-1}. \tag{A.61}
$$
The generic term of this matrix is
$$
\sum_{k=0}^{N-1}\mathbb{1}_{\{i\ge k\}}\binom{k+\alpha}{i+N}\frac{(i)_k}{(k+\alpha-N)_k}\,(-1)^{j+k}\,\mathbb{1}_{\{k\ge j\}}\frac{(j+N)!\,(k+\alpha-N)_{k+1}}{j!\,(k-j)!\,(k+\alpha)_{j+N+1}}
= \mathbb{1}_{\{i\ge j\}}\frac{i!\,(j+N)!\,(\alpha-N)!}{j!\,(i+N)!\,(\alpha-N-1)!}\sum_{k=j}^{i}(-1)^{j+k}\frac{(k+\alpha-j-N-1)!}{(i-k)!\,(k-j)!\,(k+\alpha-i-N)!}. \tag{A.62}
$$
The last sum can be computed as follows: clearly, it vanishes when $i<j$ and it equals $1$ when $i=j$. If $i>j$, by using Lemma A.4,
$$
\sum_{k=j}^{i}(-1)^{j+k}\frac{(k+\alpha-j-N-1)!}{(i-k)!\,(k-j)!\,(k+\alpha-i-N)!}
= \frac{1}{(i-j)!}\sum_{k=0}^{i-j}(-1)^{k}\binom{i-j}{k}(k+\alpha-N-1)_{i-j-1} = 0. \tag{A.63}
$$
As a consequence, the entries of the product (A.61) are $\delta_{ij}$, which proves that the second factor of (A.61) coincides with $\mathbf{L}^{-1}$.

Lemma A.7. The matrix $\mathbf{L}^{-1}\mathbf{B}$ is given by
$$
\mathbf{L}^{-1}\mathbf{B} = \left[\frac{(-1)^{i}\binom{\beta}{N}\binom{i+\alpha-\beta-1}{\alpha-\beta-1}}{\binom{i+\alpha}{N}}\right]_{0\le i\le N-1}. \tag{A.64}
$$

Proof. The generic term of $\mathbf{L}^{-1}\mathbf{B}$ is
$$
\sum_{j=0}^{N-1}(-1)^{i+j}\,\mathbb{1}_{\{i\ge j\}}\frac{(j+N)!\,(i+\alpha-N)_{i+1}}{j!\,(i-j)!\,(i+\alpha)_{j+N+1}}\binom{\beta}{j+N}
= \frac{\beta!\,(i+\alpha-N)!}{(i+\alpha)!\,(\alpha-N-1)!}\sum_{j=0}^{i}(-1)^{i+j}\frac{(i-j+\alpha-N-1)!}{j!\,(i-j)!\,(\beta-j-N)!}
= \frac{\beta!\,(i+\alpha-N)!\,(i+\alpha-\beta-1)!}{i!\,(i+\alpha)!\,(\alpha-N-1)!}\sum_{j=0}^{i}(-1)^{i+j}\binom{i}{j}\binom{i-j+\alpha-N-1}{i+\alpha-\beta-1}. \tag{A.65}
$$
By performing the change of index $j\mapsto i-j$ and by using Lemma A.4, the sum in (A.65) is equal to
$$
\sum_{j=0}^{i}(-1)^{j}\binom{i}{j}\binom{j+\alpha-N-1}{i+\alpha-\beta-1} = (-1)^{i}\binom{\alpha-N-1}{\beta-N} = (-1)^{i}\frac{(\alpha-N-1)!}{(\alpha-\beta-1)!\,(\beta-N)!}. \tag{A.66}
$$
By putting this into (A.65), we see that the generic term of $\mathbf{L}^{-1}\mathbf{B}$ writes
$$
(-1)^{i}\frac{\beta!\,(i+\alpha-N)!\,(i+\alpha-\beta-1)!}{i!\,(i+\alpha)!\,(\alpha-\beta-1)!\,(\beta-N)!} = \frac{(-1)^{i}\binom{\beta}{N}\binom{i+\alpha-\beta-1}{\alpha-\beta-1}}{\binom{i+\alpha}{N}}, \tag{A.67}
$$
which ends up the proof of Lemma A.7.

Theorem A.8. The matrix $\mathbf{A}^{-1}\mathbf{B}$ is given by
$$
\mathbf{A}^{-1}\mathbf{B} = \left[(-1)^{i}\,\frac{N}{i+\alpha-\beta}\,\frac{\binom{N-1}{i}\binom{\beta}{N}\binom{\alpha-\beta+N-1}{N}}{\binom{i+\alpha}{N}}\right]_{0\le i\le N-1}. \tag{A.68}
$$

Proof. Referring to Lemmas A.5 and A.7, we have that
$$
\mathbf{A}^{-1}\mathbf{B} = \mathbf{U}\left(\mathbf{L}^{-1}\mathbf{B}\right)
= \left[(-1)^{i+j}\,\mathbb{1}_{\{i\le j\}}\binom{j}{i}\frac{\binom{j+\alpha}{N}}{\binom{i+\alpha}{N}}\right]_{0\le i,j\le N-1}
\times\left[\frac{(-1)^{i}\binom{\beta}{N}\binom{i+\alpha-\beta-1}{\alpha-\beta-1}}{\binom{i+\alpha}{N}}\right]_{0\le i\le N-1}. \tag{A.69}
$$
The generic term of $\mathbf{A}^{-1}\mathbf{B}$ is
$$
\frac{(-1)^{i}\binom{\beta}{N}}{\binom{i+\alpha}{N}}\sum_{j=i}^{N-1}\binom{j}{i}\binom{j+\alpha-\beta-1}{\alpha-\beta-1}. \tag{A.70}
$$
The foregoing sum can be easily evaluated as follows:
$$
\begin{aligned}
\sum_{j=i}^{N-1}\binom{j}{i}\binom{j+\alpha-\beta-1}{\alpha-\beta-1}
&= \binom{i+\alpha-\beta-1}{\alpha-\beta-1}\sum_{j=i}^{N-1}\binom{j+\alpha-\beta-1}{i+\alpha-\beta-1}
= \binom{i+\alpha-\beta-1}{\alpha-\beta-1}\sum_{j=i}^{N-1}\left[\binom{j+\alpha-\beta}{i+\alpha-\beta}-\binom{j+\alpha-\beta-1}{i+\alpha-\beta}\right]\\
&= \binom{i+\alpha-\beta-1}{\alpha-\beta-1}\binom{\alpha-\beta+N-1}{i+\alpha-\beta}
= \frac{N}{i+\alpha-\beta}\binom{\alpha-\beta+N-1}{N}\binom{N-1}{i}.
\end{aligned} \tag{A.71}
$$
Putting this into (A.70) yields the matrix $\mathbf{A}^{-1}\mathbf{B}$ displayed in Theorem A.8.
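Theorem A.8 can be confronted with a direct numerical solution of $\mathbf{A}x=\mathbf{B}$. The sketch below is not part of the original paper; $N$, $\alpha$, $\beta$ are arbitrary illustrative values satisfying the standing assumption $\alpha>\beta\ge N$.

```python
import numpy as np
from math import comb

N, alpha, beta = 4, 11, 6           # illustrative values with alpha > beta >= N (assumption)

A = np.array([[comb(j + alpha, i + N) for j in range(N)] for i in range(N)], dtype=float)
B = np.array([comb(beta, i + N) for i in range(N)], dtype=float)

# Closed form (A.68) for the column vector A^{-1} B.
closed_form = np.array([
    (-1) ** i * N / (i + alpha - beta)
    * comb(N - 1, i) * comb(beta, N) * comb(alpha - beta + N - 1, N) / comb(i + alpha, N)
    for i in range(N)
])

print(np.allclose(np.linalg.solve(A, B), closed_form))   # expected: True
```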
Table 2: List of notations.

- $a, b, c, K, N, \kappa_N, p_k$: given parameters
- $\theta_j, \varphi_j$: roots of $\pm 1$
- $\xi_k, S_n, X_t, X_t^{\varepsilon}$: pseudorandom variables, pseudoprocesses
- $\mathcal{G}_S, \mathcal{G}_X$: infinitesimal generators
- $\sigma_a^-, \sigma_b^+, \sigma_{ab}, S_a^-, S_b^+, S_{ab}$: overshooting times and locations related to the pseudorandom walk
- $\tau_a^-, \tau_b^+, \tau_{ab}, X_a^-, X_b^+, X_{ab}$ and $\tau_a^{\varepsilon-}, \tau_b^{\varepsilon+}, \tau_{ab}^{\varepsilon}, X_a^{\varepsilon-}, X_b^{\varepsilon+}, X_{ab}^{\varepsilon}$: overshooting times and locations related to the pseudo-Brownian motion
- $\Delta, \Delta^+, \Delta^-$: discrete Laplacian, finite difference operators
- $A(a_1,\dots,a_N)$, $A_k\big(\begin{smallmatrix}a_1,\dots,a_N\\ \alpha_1,\dots,\alpha_N\end{smallmatrix}\big)$, $U(u_1,\dots,u_{2N})$, $U_{\ell}(u_1,\dots,u_{2N})$, $U_{k\ell}(u_1,\dots,u_{k-1},u_{k+1},\dots,u_{2N})$, $U_{pqr}(u_1,\dots,u_{p+r})$, $U_{pqr,\ell}\big(\begin{smallmatrix}u_1,\dots,u_{p+r}\\ \alpha_1,\dots,\alpha_{p+r}\end{smallmatrix}\big)$, $U_{pqr,k\ell}(u_1,\dots,u_{k-1},u_{k+1},\dots,u_{p+r})$, $V(v_1,\dots,v_N)$, $V_{\ell}(v_1,\dots,v_N)$, $V_{k\ell}(v_1,\dots,v_{k-1},v_{k+1},\dots,v_N)$: Vandermonde-like determinants
- $\widetilde{U}(z), \widetilde{U}_{k\ell}(z), \widetilde{V}(z), \widetilde{V}_k(z), D(z), D_k(z,\zeta)$, $\mathbf{D}(\lambda), \mathbf{D}_k^+(\lambda,\mu), \mathbf{D}_k^-(\lambda,\mu)$, $\mathcal{S}_{pqr}, \mathcal{S}_{pqr,k\ell}$: other determinants
- $G_k(z), G(z,\zeta), H_{a,\ell}^-(z), H_{b,\ell}^+(z), H_{ab,\ell}(z)$, $L_k(z,\zeta), \widetilde{L}_k(z,\zeta), P(z,\zeta), P_k(z,\zeta)$: generating functions
- $P_{b,j}^+(x), P_{ab,j}(x), K_m(x), \widetilde{K}_m(x)$: polynomials
- $I_{ab,j}^+, I_{ab,j}^-, \mathcal{I}_{ab,j}^+, \mathcal{I}_{ab,j}^-$: integrals and sums
- $\mathcal{D}^+, \mathcal{D}^-, \mathcal{E}$: sets
- $M_1, M_\infty$: bounds
- $a_j(z), b_j(z), u_j(z), v_j(z), w(z)$, $\alpha_j(z), \widetilde{\alpha}_j(z), \epsilon_j, \varepsilon_j(z), \mathbf{u}_j(z), \mathbf{v}_j(z)$, $A_j(z), B_j(z), M_k(z), R_j(z), a_\varepsilon, b_\varepsilon$: miscellaneous
- $s_\ell(z), p_k(z), s_{k,\ell}^+(z), p_k^+(z), s_{k,\ell}^-(z), p_k^-(z)$, $s_{k,m}(u_1,\dots,u_{k-1},u_{k+1},\dots,u_{p+r})$: sums and products
- $\mathbf{A}, \mathbf{B}, \mathrm{C}_i^{(k)}, \mathbf{L}, \mathbf{U}, \mathbf{U}', \mathbf{U}''$: matrices
- $\vec{V}_m$: vectors

Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.
References
[1] L. Beghin, K. J. Hochberg, and E. Orsingher, “Conditional
maximal distributions of processes related to higher-order heat-type equations,” Stochastic Processes and their Applications, vol.
85, no. 2, pp. 209–223, 2000.
[2] L. Beghin and E. Orsingher, “The distribution of the local
time for “pseudoprocesses” and its connection with fractional
diffusion equations,” Stochastic Processes and their Applications,
vol. 115, no. 6, pp. 1017–1040, 2005.
International Journal of Stochastic Analysis
[3] L. Beghin, E. Orsingher, and T. Ragozina, “Joint distributions
of the maximum and the process for higher-order diffusions,”
Stochastic Processes and their Applications, vol. 94, no. 1, pp. 71–
93, 2001.
[4] V. Cammarota and A. Lachal, “Joint distribution of the process
and its sojourn time on the positive half-line for pseudoprocesses governed by high-order heat equation,” Electronic
Journal of Probability, vol. 15, pp. 895–931, 2010.
[5] V. Cammarota and A. Lachal, “Joint distribution of the process
and its sojourn time in a half-line [a, + ∞) for pseudo-processes
driven by a high-order heat-type equation,” Stochastic Processes
and their Applications, vol. 122, no. 1, pp. 217–249, 2011.
[6] K. J. Hochberg, “A signed measure on path space related to
Wiener measure,” Annals of Probability, vol. 6, no. 3, pp. 433–
458, 1978.
[7] V. Yu. Krylov, “Some properties of the distribution corresponding to the equation 𝜕𝑢/𝜕𝑡 = (−1)𝑞+1 (𝜕2𝑞 𝑢/𝜕2𝑞 𝑥),” Soviet
Mathematics, pp. 760–763, 1960.
[8] K. J. Hochberg and E. Orsingher, “The arc-sine law and
its analogs for processes governed by signed and complex
measures,” Stochastic Processes and their Applications, vol. 52,
no. 2, pp. 273–292, 1994.
[9] A. Lachal, “Distributions of Sojourn time, maximum and
minimum for pseudo-processes governed by higher-order heat-type equations,” Electronic Journal of Probability, vol. 8, pp. 1–53,
2003.
[10] A. Lachal, “First hitting time and place, monopoles and multipoles for pseudo-processes driven by the equation 𝜕/𝜕𝑡 =
±𝜕𝑁 /𝜕𝑥𝑁 ,” Electronic Journal of Probability, vol. 12, pp. 300–353,
2007.
[11] A. Lachal, “Joint law of the process and its maximum, first
hitting time and place of a half-line for the pseudo-process
driven by the equation 𝜕/𝜕𝑡 = ±𝜕𝑁 /𝜕𝑥𝑁 ,” Comptes Rendus
Mathematique, vol. 343, no. 8, pp. 525–530, 2006.
[12] A. Lachal, “First hitting time and place for pseudo-processes
driven by the equation 𝜕/𝜕𝑡 = ±𝜕𝑛 /𝜕𝑥𝑛 subject to a linear drift,”
Stochastic Processes and their Applications, vol. 118, no. 1, pp. 1–
27, 2008.
[13] A. Lachal, “A survey on the pseudo-process driven by the
high-order heat-type equation 𝜕/𝜕𝑡 = ±𝜕𝑁 /𝜕𝑥𝑁 concerning
the hitting and sojourn times,” Methodology and Computing in
Applied Probability, pp. 1–18, 2011.
[14] A. Lachal, “First exit time from a bounded interval for pseudo-processes driven by the equation 𝜕/𝜕𝑡 = (−1)𝑁−1 𝜕2𝑁 /𝜕𝑥2𝑁 ,”
Stochastic Processes and their Applications, vol. 124, no. 2, pp.
1084–1111, 2014.
[15] T. Nakajima and S. Sato, “On the joint distribution of the first
hitting time and the first hitting place to the space-time wedge
domain of a biharmonic pseudo process,” Tokyo Journal of
Mathematics, vol. 22, no. 2, pp. 399–413, 1999.
[16] Y. Nikitin and E. Orsingher, “On sojourn distributions of
processes related to some higher-order heat-type equations,”
Journal of Theoretical Probability, vol. 13, no. 4, pp. 997–1012,
2000.
[17] K. Nishioka, “Monopoles and dipoles in biharmonic pseudo
process,” Proceedings of the Japan Academy Series A, vol. 72, no.
3, pp. 47–50, 1996.
[18] K. Nishioka, “The first hitting time and place of a half-line by a
biharmonic pseudo process,” Japanese Journal of Mathematics,
vol. 23, pp. 235–280, 1997.
49
[19] K. Nishioka, “Boundary conditions for one-dimensional biharmonic pseudo process,” Electronic Journal of Probability, vol. 6,
pp. 1–27, 2001.
[20] E. Orsingher, “Processes governed by signed measures connected with third-order “heat-type” equations,” Lithuanian
Mathematical Journal, vol. 31, no. 2, pp. 220–231, 1991.
[21] S. Sato, “An approach to the biharmonic pseudo process by a
random walk,” Journal of Mathematics of Kyoto University, vol.
42, no. 3, pp. 403–422, 2002.
[22] R. J. Vanderbei, “Probabilistic solution of the Dirichlet problem
for biharmonic functions in discrete space,” Annals of Probability, vol. 12, no. 2, pp. 311–324, 1984.