Electronic Journal of Probability
Vol. 15 (2010), Paper no. 28, pages 895–931.
Journal URL
http://www.math.washington.edu/~ejpecp/
Joint distribution of the process and its sojourn time on
the positive half-line for pseudo-processes governed by
high-order heat equation
Valentina CAMMAROTA∗ and Aimé LACHAL†
Abstract
Consider the high-order heat-type equation ∂u/∂t = ±∂^N u/∂x^N for an integer N > 2 and introduce the related Markov pseudo-process (X(t))_{t≥0}. In this paper, we study the sojourn time T(t) in the interval [0, +∞) up to a fixed time t for this pseudo-process. We provide explicit expressions for the joint distribution of the couple (T(t), X(t)).
Key words: pseudo-process, joint distribution of the process and its sojourn time, Spitzer’s
identity.
AMS 2000 Subject Classification: Primary 60G20; Secondary: 60J25, 60K35, 60J05.
Submitted to EJP on October 7, 2009, final version accepted March 25, 2010.
∗ Dipartimento di Statistica, Probabilità e Statistiche Applicate, UNIVERSITY OF ROME ‘LA SAPIENZA’, P.le A. Moro 5, 00185 Rome, ITALY. E-mail address: valentina.cammarota@uniroma1.it
† Pôle de Mathématiques/Institut Camille Jordan/CNRS UMR5208, Bât. L. de Vinci, INSTITUT NATIONAL DES SCIENCES APPLIQUÉES DE LYON, 20 av. A. Einstein, 69621 Villeurbanne Cedex, FRANCE. E-mail address: aime.lachal@insa-lyon.fr, Web page: http://maths.insa-lyon.fr/~lachal
1 Introduction
Let N be an integer greater than or equal to 2, and set κ_N = (−1)^{1+N/2} if N is even and κ_N = ±1 if N is odd.
Consider the heat-type equation of order N:
$$\frac{\partial u}{\partial t} = \kappa_N\,\frac{\partial^N u}{\partial x^N}. \tag{1.1}$$
For N = 2, this equation is the classical normalized heat equation, and its relationship with linear Brownian motion is one of the most well-known facts in probability theory. For N > 2, it is known that no ordinary stochastic process can be associated with this equation. Nevertheless, a Markov “pseudo-process” can be constructed by imitating the case N = 2. This pseudo-process, X = (X(t))_{t≥0} say, is driven by a signed
measure as follows. Let p(t; x) denote the elementary solution of Eq. (1.1), that is, p solves (1.1)
with the initial condition p(0; x) = δ(x). This solution is characterized by its Fourier transform (see,
e.g., [13])
$$\int_{-\infty}^{+\infty} e^{i\mu x}\,p(t;x)\,dx = e^{\kappa_N t(-i\mu)^N}.$$
The function p is real, not always positive and its total mass is equal to one:
$$\int_{-\infty}^{+\infty} p(t;x)\,dx = 1.$$
Moreover, its total absolute value mass ρ exceeds one:
$$\rho = \int_{-\infty}^{+\infty} |p(t;x)|\,dx > 1.$$
In fact, if N is even, p is symmetric and ρ < +∞, and if N is odd, ρ = +∞. The signed function
p is interpreted as the pseudo-probability for X to lie at a certain location at a certain time. More
precisely, for any time t > 0 and any locations x, y ∈ R, one defines
P{X (t) ∈ d y|X (0) = x}/d y = p(t; x − y).
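For even N the kernel p can also be evaluated numerically by inverting the Fourier transform above. The following minimal Python sketch is our own illustration (not part of the paper): it treats the biharmonic case N = 4, κ_4 = −1, and checks that the total mass of p(t; ·) equals 1 while its total absolute mass ρ exceeds 1; the grid parameters are arbitrary.

```python
# Minimal numerical sketch (our illustration, assuming N = 4 and kappa_4 = -1):
# p(t; x) = (1/2pi) * int exp(-i mu x) exp(kappa_N t (-i mu)^N) d mu; for N = 4 the
# symbol reduces to exp(-t mu^4), so the real integrand cos(mu x) exp(-t mu^4) suffices.
import numpy as np

def p_kernel(t, x, mu_max=12.0, n_mu=8001):
    mu = np.linspace(-mu_max, mu_max, n_mu)
    y = np.cos(mu * x) * np.exp(-t * mu**4)
    # trapezoid rule, then divide by 2*pi for the Fourier inversion
    return np.sum((y[1:] + y[:-1]) * np.diff(mu)) / (4.0 * np.pi)

t = 1.0
xs = np.linspace(-8.0, 8.0, 801)
vals = np.array([p_kernel(t, x) for x in xs])
print("total mass          :", np.sum((vals[1:] + vals[:-1]) * np.diff(xs)) / 2.0)   # ~ 1
print("total absolute mass :",
      np.sum((np.abs(vals)[1:] + np.abs(vals)[:-1]) * np.diff(xs)) / 2.0)            # rho > 1
```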
Roughly speaking, the distribution of the pseudo-process X is defined through its finite-dimensional
distributions according to the Markov rule: for any n > 1, any times t 1 , . . . , t n such that 0 < t 1 <
· · · < t n and any locations x, y1 , . . . , yn ∈ R,
$$\mathbb{P}\{X(t_1)\in dy_1,\dots,X(t_n)\in dy_n \mid X(0)=x\}/dy_1\cdots dy_n = \prod_{i=1}^{n} p(t_i-t_{i-1};\,y_{i-1}-y_i)$$
where t_0 = 0 and y_0 = x.
This pseudo-process has been studied by several authors: see the references [2] to [4] and the
references [8] to [20].
Now, we consider the sojourn time of X in the interval [0, +∞) up to a fixed time t:
$$T(t) = \int_0^t \mathbb{1}_{[0,+\infty)}(X(s))\,ds.$$
The computation of the pseudo-distribution of T (t) has been done by Beghin, Hochberg, Nikitin,
Orsingher and Ragozina in some particular cases (see [2; 4; 9; 16; 20]), and by Krylov and the
second author in more general cases (see [10; 11]).
The method adopted therein relies on the Feynman–Kac functional, which leads to certain differential equations. We point out that the pseudo-distribution of T(t) is actually a genuine probability distribution and that, in the case where N is even, T(t) obeys Paul Lévy's famous arcsine law, that is,
$$\mathbb{P}\{T(t)\in ds\}/ds = \frac{\mathbb{1}_{(0,t)}(s)}{\pi\sqrt{s(t-s)}}.$$
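As an elementary check (ours, not the paper's), this arcsine density integrates to 1 over (0, t); it is the Beta(1/2, 1/2) density rescaled to (0, t). A minimal sketch:

```python
# Minimal sketch (our illustration, t = 1): the arcsine density 1/(pi*sqrt(s(t-s)))
# integrates to 1 over (0, t); the endpoints are integrable singularities.
import numpy as np

t = 1.0
s = np.linspace(1e-8, t - 1e-8, 1_000_001)
density = 1.0 / (np.pi * np.sqrt(s * (t - s)))
print(np.sum((density[1:] + density[:-1]) * np.diff(s)) / 2.0)  # close to 1
```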
We also mention that the sojourn time of X in a small interval (−ε, ε) is used in [3] to define a local
time for X at 0. The evaluation of the pseudo-distribution of the sojourn time T (t) together with the
up-to-date value of the pseudo-process, X (t), has been tackled only in the particular cases N = 3
and N = 4 by Beghin, Hochberg, Orsingher and Ragozina (see [2; 4]). Their results have been
obtained by solving certain differential equations leading to some linear systems. In [2; 4; 11], the
Laplace transform of the sojourn time serves as an intermediate tool for computing the distribution
of the up-to-date maximum of X .
In this paper, our aim is to derive the joint pseudo-distribution of the couple (T (t), X (t)) for any
integer N . Since the Feynman-Kac approach used in [2; 4] leads to very cumbersome calculations,
we employ an alternative method based on Spitzer’s identity. The idea of using this identity for
studying the pseudo-process X appeared already in [8] and [18]. Since the pseudo-process X is
properly defined only in the case where N is an even integer, the results we obtain are valid in this
case. Throughout the paper, we shall then assume that N is even. Nevertheless, we formally perform
all computations also in the case where N is odd, even if they are not justified.
The paper is organized as follows.
• In Section 2, we write down the settings that will be used. Actually, the pseudo-process X is
not well defined on the whole half-line [0, +∞). It is properly defined on dyadic times k/2n ,
k, n ∈ N. So, we introduce ad-hoc definitions for X (t) and T (t) as well as for some related
pseudo-expectations. For instance, we shall give a meaning to the quantity
$$E(\lambda,\mu,\nu) = \mathbb{E}\!\left[\int_0^{\infty} e^{-\lambda t+i\mu X(t)-\nu T(t)}\,dt\right]$$
which is interpreted as the 3-parameter Laplace–Fourier transform of (T(t), X(t)). We also
recall in this part some algebraic known results.
• In Section 3, we explicitly compute E(λ, µ, ν) with the help of Spitzer’s identity. This is Theorem 3.1.
• Sections 4, 5 and 6 are devoted to successively inverting the Laplace-Fourier transform with
respect to µ, ν and λ respectively. More precisely, in Section 4, we perform the inversion
with respect to µ; this yields Theorem 4.1. Next, we perform the inversion with respect to
ν which gives Theorems 5.1 and 5.2. Finally, we carry out the inversion with respect to λ
and the main results of this paper are Theorems 6.2 and 6.3. In each section, we examine
the particular cases N = 3 (case of an asymmetric pseudo-process) and N = 4 (case of the
biharmonic pseudo-process). For N = 2 (case of rescaled Brownian motion), one can retrieve
several classical formulas and we refer the reader to the first draft of this paper [6]. Moreover,
our results recover several known formulas concerning the marginal distribution of T (t), see
also [6].
• The final appendix (Section 7) contains a discussion on Spitzer’s identity as well as some
technical computations.
2 Settings

2.1 A first list of settings
In this part, we introduce for each integer n a step-process X n coinciding with the pseudo-process
X on the times k/2^n, k ∈ ℕ. Fix n ∈ ℕ. Set, for any k ∈ ℕ, X_{k,n} = X(k/2^n) and, for any t ∈ [k/2^n, (k+1)/2^n), X_n(t) = X_{k,n}. We can write globally
$$X_n(t) = \sum_{k=0}^{\infty} X_{k,n}\,\mathbb{1}_{[k/2^n,(k+1)/2^n)}(t).$$
Now, we recall from [13] the definitions of tame functions, functions of discrete observations, and
admissible functions associated with the pseudo-process X . They were introduced by Nishioka [18]
in the case N = 4.
Definition 2.1. Fix n ∈ N. A tame function for X is a function of a finite number k of observations of the pseudo-process X at times j/2n , 1 ≤ j ≤ k, that is a quantity of the form
Fk,n = F (X (1/2n ), . . . , X (k/2n )) for a certain k and a certain bounded Borel function F : Rk −→ C.
The “expectation” of Fk,n is defined as
$$\mathbb{E}(F_{k,n}) = \int\!\!\cdots\!\!\int_{\mathbb{R}^k} F(x_1,\dots,x_k)\,p(1/2^n;x-x_1)\cdots p(1/2^n;x_{k-1}-x_k)\,dx_1\cdots dx_k.$$
Definition 2.2. Fix n ∈ ℕ. A function of the discrete observations of X at times k/2^n, k ≥ 1, is a convergent series of tame functions: $F_{X_n}=\sum_{k=1}^{\infty}F_{k,n}$ where $F_{k,n}$ is a tame function for all k ≥ 1. Assuming the series $\sum_{k=1}^{\infty}|\mathbb{E}(F_{k,n})|$ convergent, the “expectation” of $F_{X_n}$ is defined as
$$\mathbb{E}(F_{X_n}) = \sum_{k=1}^{\infty}\mathbb{E}(F_{k,n}).$$
Definition 2.3. An admissible function is a functional FX of the pseudo-process X which is the limit of
a sequence (FX n )n∈N of functions of discrete observations of X : FX = limn→∞ FX n , such that the sequence
$(\mathbb{E}(F_{X_n}))_{n\in\mathbb{N}}$ is convergent. The “expectation” of $F_X$ is defined as
$$\mathbb{E}(F_X) = \lim_{n\to\infty}\mathbb{E}(F_{X_n}).$$
In this paper, we are concerned with the sojourn time of X in [0, +∞):
$$T(t) = \int_0^t \mathbb{1}_{[0,+\infty)}(X(s))\,ds.$$
In order to give a proper meaning to this quantity, we introduce the similar object related to X n :
$$T_n(t) = \int_0^t \mathbb{1}_{[0,+\infty)}(X_n(s))\,ds.$$
For determining the distribution of T_n(t), we compute its 3-parameter Laplace–Fourier transform:
$$E_n(\lambda,\mu,\nu) = \mathbb{E}\!\left[\int_0^{\infty} e^{-\lambda t+i\mu X_n(t)-\nu T_n(t)}\,dt\right].$$
In Section 3, we prove that the sequence $(E_n(\lambda,\mu,\nu))_{n\in\mathbb{N}}$ is convergent and we compute its limit:
$$\lim_{n\to\infty}E_n(\lambda,\mu,\nu) = E(\lambda,\mu,\nu).$$
Formally, E(λ, µ, ν) is interpreted as
$$E(\lambda,\mu,\nu) = \mathbb{E}\!\left[\int_0^{\infty} e^{-\lambda t+i\mu X(t)-\nu T(t)}\,dt\right]$$
where the quantity $\int_0^{\infty} e^{-\lambda t+i\mu X(t)-\nu T(t)}\,dt$ is an admissible function of X. This computation is performed with the aid of Spitzer's identity. The latter concerns the classical random walk; nevertheless, since it hinges on combinatorial arguments, it can be applied to the context of pseudo-processes.
We clarify this point in Section 3.
2.2 A second list of settings
We introduce some algebraic settings. Let θi , 1 ≤ i ≤ N , be the N th roots of κN and
J = {i ∈ {1, . . . , N } : ℜθi > 0},
K = {i ∈ {1, . . . , N } : ℜθi < 0}.
Of course, the cardinalities of J and K sum to N : #J + #K = N . We state several results related to
the θi ’s which are proved in [11; 13]. We have the elementary equalities
$$\sum_{j\in J}\theta_j + \sum_{k\in K}\theta_k = \sum_{i=1}^{N}\theta_i = 0, \qquad \Big(\prod_{j\in J}\theta_j\Big)\Big(\prod_{k\in K}\theta_k\Big) = \prod_{i=1}^{N}\theta_i = (-1)^{N-1}\kappa_N, \tag{2.1}$$
and
$$\prod_{i=1}^{N}(x-\theta_i) = \prod_{i=1}^{N}(x-\bar\theta_i) = x^N-\kappa_N. \tag{2.2}$$
Moreover, from formula (5.10) in [13],
$$\prod_{k\in K}(x-\theta_k) = \sum_{\ell=0}^{\#K}(-1)^{\ell}\sigma_{\ell}\,x^{\#K-\ell}, \qquad\text{where } \sigma_{\ell}=\sum_{\substack{k_1<\cdots<k_{\ell}\\ k_1,\dots,k_{\ell}\in K}}\theta_{k_1}\cdots\theta_{k_{\ell}}. \tag{2.3}$$
We have by Lemma 11 in [11]
$$\sum_{j\in J}\theta_j = -\sum_{k\in K}\theta_k = \begin{cases} \dfrac{1}{\sin\frac{\pi}{N}} & \text{if } N \text{ is even},\\[2mm] \dfrac{1}{2\sin\frac{\pi}{2N}} = \dfrac{\cos\frac{\pi}{2N}}{\sin\frac{\pi}{N}} & \text{if } N \text{ is odd}. \end{cases} \tag{2.4}$$
Set $A_j = \prod_{i\in J\setminus\{j\}}\frac{\theta_i}{\theta_i-\theta_j}$ for $j\in J$, and $B_k = \prod_{i\in K\setminus\{k\}}\frac{\theta_i}{\theta_i-\theta_k}$ for $k\in K$. The $A_j$'s and $B_k$'s solve a Vandermonde system: we have
$$\sum_{j\in J}A_j = \sum_{k\in K}B_k = 1, \tag{2.5}$$
$$\sum_{j\in J}A_j\theta_j^{m} = 0 \;\text{ for } 1\le m\le \#J-1, \qquad \sum_{k\in K}B_k\theta_k^{m} = 0 \;\text{ for } 1\le m\le \#K-1.$$
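These algebraic objects are easy to tabulate numerically. The short Python sketch below is our own illustration (not from the paper), written for N = 4 as in Example 2.2; it computes the θ_i, the sets J and K, the coefficients A_j and B_k, and checks the Vandermonde relations (2.5).

```python
# Minimal sketch (our illustration, assuming N = 4, kappa_4 = -1): roots theta_i of
# x^N = kappa_N, index sets J and K, coefficients A_j, B_k, and a check of (2.5).
import numpy as np

N = 4
kappa = (-1.0) ** (1 + N // 2)                       # kappa_N = (-1)^(1+N/2), N even
theta = np.roots([1.0] + [0.0] * (N - 1) + [-kappa])
J = [th for th in theta if th.real > 0]
K = [th for th in theta if th.real < 0]

def coeffs(S):
    # A_j = prod over i != j of theta_i / (theta_i - theta_j), within the set S
    return [np.prod([ti / (ti - tj) for i, ti in enumerate(S) if i != j])
            for j, tj in enumerate(S)]

A, B = coeffs(J), coeffs(K)
print(sum(A), sum(B))                                # both ~ 1              (2.5)
print(sum(a * th for a, th in zip(A, J)))            # ~ 0 (power sum, m = 1)
print(sum(b * th for b, th in zip(B, K)))            # ~ 0 (power sum, m = 1)
```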
Observing that 1/θ j = θ̄ j for j ∈ J, that {θ j , j ∈ J} = {θ̄ j , j ∈ J} and similarly for the θk ’s, k ∈ K,
formula (2.11) in [13] gives
$$\sum_{j\in J}\frac{A_j\theta_j}{\theta_j-x} = \sum_{j\in J}\frac{A_j}{1-\bar\theta_j x} = \frac{1}{\prod_{j\in J}(1-\theta_j x)}, \qquad \sum_{k\in K}\frac{B_k\theta_k}{\theta_k-x} = \sum_{k\in K}\frac{B_k}{1-\bar\theta_k x} = \frac{1}{\prod_{k\in K}(1-\theta_k x)}. \tag{2.6}$$
In particular,
$$\sum_{j\in J}\frac{A_j\theta_j}{\theta_j-\theta_k} = \frac{1}{N B_k}, \qquad \sum_{k\in K}\frac{B_k\theta_k}{\theta_k-\theta_j} = \frac{1}{N A_j}. \tag{2.7}$$
Set, for any $m\in\mathbb{Z}$, $\alpha_m = \sum_{j\in J}A_j\theta_j^{m}$ and $\beta_m = \sum_{k\in K}B_k\theta_k^{m}$. We have, by formula (2.11) of [13], $\beta_{\#K} = (-1)^{\#K-1}\prod_{k\in K}\theta_k$. Moreover, $\beta_{\#K+1} = (-1)^{\#K-1}\big(\prod_{k\in K}\theta_k\big)\big(\sum_{k\in K}\theta_k\big)$. The proof of this claim is postponed to Lemma 7.2 in the appendix. We sum up this information and (2.5) into
$$\beta_m = \begin{cases} 1 & \text{if } m=0,\\ 0 & \text{if } 1\le m\le \#K-1,\\ (-1)^{\#K-1}\prod_{k\in K}\theta_k & \text{if } m=\#K,\\ (-1)^{\#K-1}\big(\prod_{k\in K}\theta_k\big)\big(\sum_{k\in K}\theta_k\big) & \text{if } m=\#K+1,\\ \kappa_N & \text{if } m=N. \end{cases} \tag{2.8}$$
We also have
$$\alpha_{-m} = \sum_{j\in J}\frac{A_j}{\theta_j^{m}} = \kappa_N\sum_{j\in J}A_j\theta_j^{N-m} = \kappa_N\,\alpha_{N-m}$$
and then
$$\alpha_{-m} = \begin{cases} 1 & \text{if } m=0,\\ \kappa_N(-1)^{\#J-1}\big(\prod_{j\in J}\theta_j\big)\big(\sum_{j\in J}\theta_j\big) & \text{if } m=\#K-1,\\ \kappa_N(-1)^{\#J-1}\prod_{j\in J}\theta_j & \text{if } m=\#K,\\ 0 & \text{if } \#K+1\le m\le N-1,\\ \kappa_N & \text{if } m=N. \end{cases} \tag{2.9}$$
In particular, by (2.1),
$$\alpha_0\beta_0 = \alpha_{-N}\beta_N = 1, \quad \alpha_{-\#K}\beta_{\#K} = -1, \quad \alpha_{-\#K}\beta_{\#K+1} = \sum_{j\in J}\theta_j, \quad \alpha_{1-\#K}\beta_{\#K} = \sum_{k\in K}\theta_k. \tag{2.10}$$
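Continuing in the same spirit (our own illustration, again with N = 4), the α_m and β_m can be tabulated and the identities (2.10) verified numerically; the sketch below is self-contained.

```python
# Minimal sketch (our illustration, assuming N = 4): tabulate alpha_m, beta_m and
# check the four identities of (2.10).
import numpy as np

N = 4
kappa = (-1.0) ** (1 + N // 2)
theta = np.roots([1.0] + [0.0] * (N - 1) + [-kappa])
J = [th for th in theta if th.real > 0]
K = [th for th in theta if th.real < 0]

def coeffs(S):
    return [np.prod([ti / (ti - tj) for i, ti in enumerate(S) if i != j])
            for j, tj in enumerate(S)]

A, B = coeffs(J), coeffs(K)

def alpha(m):
    return sum(a * th**m for a, th in zip(A, J))

def beta(m):
    return sum(b * th**m for b, th in zip(B, K))

nK = len(K)
print(alpha(0) * beta(0), alpha(-N) * beta(N))       # both ~ 1
print(alpha(-nK) * beta(nK))                         # ~ -1
print(alpha(-nK) * beta(nK + 1), sum(J))             # both ~ sum of theta_j, j in J
print(alpha(1 - nK) * beta(nK), sum(K))              # both ~ sum of theta_k, k in K
```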
Concerning the kernel p, we have from Proposition 1 in [11]
$$p(t;0) = \begin{cases} \dfrac{\Gamma\!\big(\frac{1}{N}\big)}{N\pi t^{1/N}} & \text{if } N \text{ is even},\\[2mm] \dfrac{\Gamma\!\big(\frac{1}{N}\big)\cos\!\big(\frac{\pi}{2N}\big)}{N\pi t^{1/N}} & \text{if } N \text{ is odd}. \end{cases} \tag{2.11}$$
Proposition 3 in [11] states
$$\mathbb{P}\{X(t)\ge 0\} = \int_0^{\infty}p(t;-\xi)\,d\xi = \frac{\#J}{N}, \qquad \mathbb{P}\{X(t)\le 0\} = \int_{-\infty}^{0}p(t;-\xi)\,d\xi = \frac{\#K}{N}, \tag{2.12}$$
and formulas (4.7) and (4.8) in [13] yield, for λ > 0 and µ ∈ ℝ,
$$\int_0^{\infty}\frac{e^{-\lambda t}}{t}\,dt\int_0^{\infty}\big(e^{i\mu\xi}-1\big)p(t;-\xi)\,d\xi = \log\prod_{j\in J}\frac{\sqrt[N]{\lambda}}{\sqrt[N]{\lambda}-i\mu\theta_j},$$
$$\int_0^{\infty}\frac{e^{-\lambda t}}{t}\,dt\int_{-\infty}^{0}\big(e^{i\mu\xi}-1\big)p(t;-\xi)\,d\xi = \log\prod_{k\in K}\frac{\sqrt[N]{\lambda}}{\sqrt[N]{\lambda}-i\mu\theta_k}. \tag{2.13}$$
Let us introduce, for $j\in J$, $m\le N-1$ and $x\ge 0$,
$$I_{j,m}(\tau;x) = \frac{Ni}{2\pi}\left(e^{-i\frac{m}{N}\pi}\int_0^{\infty}\xi^{N-m-1}e^{-\tau\xi^N-\theta_j e^{i\frac{\pi}{N}}x\xi}\,d\xi - e^{i\frac{m}{N}\pi}\int_0^{\infty}\xi^{N-m-1}e^{-\tau\xi^N-\theta_j e^{-i\frac{\pi}{N}}x\xi}\,d\xi\right). \tag{2.14}$$
Formula (5.13) in [13] gives, for $0\le m\le N-1$ and $x\ge 0$,
$$\int_0^{\infty}e^{-\lambda\tau}I_{j,m}(\tau;x)\,d\tau = \lambda^{-\frac{m}{N}}e^{-\theta_j\sqrt[N]{\lambda}\,x}. \tag{2.15}$$
We introduce in a very similar manner the functions I k,m (τ; x) for k ∈ K and x ≤ 0.
Example 2.1. Case N = 3.
• For κ_3 = +1, the third roots of κ_3 are θ_1 = 1, θ_2 = e^{2i\pi/3}, θ_3 = e^{-2i\pi/3}, and the settings read J = {1}, K = {2, 3}, $A_1 = 1$, $B_2 = \frac{e^{-i\pi/6}}{\sqrt{3}}$, $B_3 = \frac{e^{i\pi/6}}{\sqrt{3}}$, $\alpha_0=\alpha_{-1}=\alpha_{-2}=1$, $\beta_0=1$, $\beta_{-1}=-1$. Moreover,
$$I_{1,0}(\tau;x) = \frac{3i}{2\pi}\left(\int_0^{\infty}\xi^2 e^{-\tau\xi^3-e^{i\pi/3}x\xi}\,d\xi - \int_0^{\infty}\xi^2 e^{-\tau\xi^3-e^{-i\pi/3}x\xi}\,d\xi\right).$$
• For κ_3 = −1, the third roots of κ_3 are θ_1 = e^{i\pi/3}, θ_2 = e^{-i\pi/3}, θ_3 = −1. The settings read J = {1, 2}, K = {3}, $A_1 = \frac{e^{i\pi/6}}{\sqrt{3}}$, $A_2 = \frac{e^{-i\pi/6}}{\sqrt{3}}$, $B_3 = 1$, $\alpha_0=\alpha_{-1}=1$, $\beta_0=\beta_{-2}=1$, $\beta_{-1}=-1$. Moreover,
$$I_{1,1}(\tau;x) = \frac{3i}{2\pi}\left(e^{-i\pi/3}\int_0^{\infty}\xi\,e^{-\tau\xi^3-e^{2i\pi/3}x\xi}\,d\xi - e^{i\pi/3}\int_0^{\infty}\xi\,e^{-\tau\xi^3-x\xi}\,d\xi\right),$$
$$I_{2,1}(\tau;x) = \frac{3i}{2\pi}\left(e^{-i\pi/3}\int_0^{\infty}\xi\,e^{-\tau\xi^3-x\xi}\,d\xi - e^{i\pi/3}\int_0^{\infty}\xi\,e^{-\tau\xi^3-e^{-2i\pi/3}x\xi}\,d\xi\right).$$
Actually, the three functions I_{1,0}, I_{1,1} and I_{2,1} can be expressed by means of the Airy function Hi defined as $\mathrm{Hi}(z) = \frac{1}{\pi}\int_0^{\infty}e^{-\frac{\xi^3}{3}+z\xi}\,d\xi$ (see, e.g., [1, Chap. 10.4]). Indeed, we easily have, by a change of variables, differentiation and integration by parts, for τ > 0 and z ∈ ℂ,
$$\int_0^{\infty}e^{-\tau\xi^3+z\xi}\,d\xi = \frac{\pi}{(3\tau)^{1/3}}\,\mathrm{Hi}\!\left(\frac{z}{\sqrt[3]{3\tau}}\right),$$
$$\int_0^{\infty}\xi\,e^{-\tau\xi^3+z\xi}\,d\xi = \frac{\pi}{(3\tau)^{2/3}}\,\mathrm{Hi}'\!\left(\frac{z}{\sqrt[3]{3\tau}}\right),$$
$$\int_0^{\infty}\xi^2\,e^{-\tau\xi^3+z\xi}\,d\xi = \frac{\pi z}{(3\tau)^{4/3}}\,\mathrm{Hi}\!\left(\frac{z}{\sqrt[3]{3\tau}}\right) + \frac{1}{3\tau}.$$
Therefore,
$$I_{1,0}(\tau;x) = \frac{x}{2\sqrt[3]{3}\,\tau^{4/3}}\left[e^{-i\pi/6}\,\mathrm{Hi}\!\left(-\frac{e^{i\pi/3}x}{\sqrt[3]{3\tau}}\right) + e^{i\pi/6}\,\mathrm{Hi}\!\left(-\frac{e^{-i\pi/3}x}{\sqrt[3]{3\tau}}\right)\right], \tag{2.16}$$
$$I_{1,1}(\tau;x) = \frac{\sqrt[3]{3}}{2\tau^{2/3}}\left[e^{i\pi/6}\,\mathrm{Hi}'\!\left(-\frac{e^{2i\pi/3}x}{\sqrt[3]{3\tau}}\right) + e^{-i\pi/6}\,\mathrm{Hi}'\!\left(-\frac{x}{\sqrt[3]{3\tau}}\right)\right], \tag{2.17}$$
$$I_{2,1}(\tau;x) = \frac{\sqrt[3]{3}}{2\tau^{2/3}}\left[e^{i\pi/6}\,\mathrm{Hi}'\!\left(-\frac{x}{\sqrt[3]{3\tau}}\right) + e^{-i\pi/6}\,\mathrm{Hi}'\!\left(-\frac{e^{-2i\pi/3}x}{\sqrt[3]{3\tau}}\right)\right]. \tag{2.18}$$
Example 2.2. Case N = 4: we have κ_4 = −1. This is the case of the biharmonic pseudo-process. The fourth roots of κ_4 are θ_1 = e^{-i\pi/4}, θ_2 = e^{i\pi/4}, θ_3 = e^{3i\pi/4}, θ_4 = e^{-3i\pi/4}, and the notations read in this case J = {1, 2}, K = {3, 4}, $A_1 = B_3 = \frac{e^{-i\pi/4}}{\sqrt{2}}$, $A_2 = B_4 = \frac{e^{i\pi/4}}{\sqrt{2}}$, $\alpha_0=\alpha_{-2}=1$, $\alpha_{-1}=\sqrt{2}$, $\beta_0=\beta_{-2}=1$, $\beta_{-1}=-\sqrt{2}$. Moreover,
$$I_{1,1}(\tau;x) = \frac{2}{\pi}\left(e^{i\pi/4}\int_0^{\infty}\xi^2 e^{-\tau\xi^4-x\xi}\,d\xi + e^{-i\pi/4}\int_0^{\infty}\xi^2 e^{-\tau\xi^4+ix\xi}\,d\xi\right),$$
$$I_{2,1}(\tau;x) = \frac{2}{\pi}\left(e^{i\pi/4}\int_0^{\infty}\xi^2 e^{-\tau\xi^4-ix\xi}\,d\xi + e^{-i\pi/4}\int_0^{\infty}\xi^2 e^{-\tau\xi^4-x\xi}\,d\xi\right). \tag{2.19}$$

3 Evaluation of E(λ, µ, ν)
The goal of this section is to evaluate the limit E(λ, µ, ν) = lim_{n→∞} E_n(λ, µ, ν). We have E_n(λ, µ, ν) = 𝔼[F_n(λ, µ, ν)] with
$$F_n(\lambda,\mu,\nu) = \int_0^{\infty}e^{-\lambda t+i\mu X_n(t)-\nu T_n(t)}\,dt.$$
Let us rewrite the sojourn time T_n(t) as follows:
$$T_n(t) = \sum_{j=0}^{[2^n t]}\int_{j/2^n}^{(j+1)/2^n}\mathbb{1}_{[0,+\infty)}(X_n(s))\,ds - \int_{t}^{([2^n t]+1)/2^n}\mathbb{1}_{[0,+\infty)}(X_n(s))\,ds.$$
We write
$$T_n(t) = \sum_{j=0}^{[2^n t]}\int_{j/2^n}^{(j+1)/2^n}\mathbb{1}_{[0,+\infty)}(X_{j,n})\,ds - \int_{t}^{([2^n t]+1)/2^n}\mathbb{1}_{[0,+\infty)}(X_{[2^n t],n})\,ds = \frac{1}{2^n}\sum_{j=0}^{[2^n t]}\mathbb{1}_{[0,+\infty)}(X_{j,n}) + \Big(t-\frac{[2^n t]+1}{2^n}\Big)\mathbb{1}_{[0,+\infty)}(X_{[2^n t],n}).$$
Set T_{0,n} = 0 and, for k ≥ 1,
$$T_{k,n} = \frac{1}{2^n}\sum_{j=1}^{k}\mathbb{1}_{[0,+\infty)}(X_{j,n}).$$
For k ≥ 0 and t ∈ [k/2^n, (k+1)/2^n), we see that
$$T_n(t) = T_{k,n} + \Big(t-\frac{k+1}{2^n}\Big)\mathbb{1}_{[0,+\infty)}(X_{k,n}) + \frac{1}{2^n}.$$
With this decomposition at hand, we can begin to compute F_n(λ, µ, ν):
$$F_n(\lambda,\mu,\nu) = \int_0^{\infty}e^{-\lambda t+i\mu X_n(t)-\nu T_n(t)}\,dt = \sum_{k=0}^{\infty}\int_{k/2^n}^{(k+1)/2^n}e^{-\lambda t+i\mu X_{k,n}-\nu T_{k,n}-\frac{\nu}{2^n}+\nu\left(\frac{k+1}{2^n}-t\right)\mathbb{1}_{[0,+\infty)}(X_{k,n})}\,dt$$
$$= e^{-\nu/2^n}\sum_{k=0}^{\infty}\left(\int_{k/2^n}^{(k+1)/2^n}e^{-\lambda t+\nu\left(\frac{k+1}{2^n}-t\right)\mathbb{1}_{[0,+\infty)}(X_{k,n})}\,dt\right)e^{i\mu X_{k,n}-\nu T_{k,n}}.$$
The value of the above integral is
$$\int_{k/2^n}^{(k+1)/2^n}e^{-\lambda t+\nu\left(\frac{k+1}{2^n}-t\right)\mathbb{1}_{[0,+\infty)}(X_{k,n})}\,dt = e^{-\lambda(k+1)/2^n}\,\frac{e^{[\lambda+\nu\mathbb{1}_{[0,+\infty)}(X_{k,n})]/2^n}-1}{\lambda+\nu\mathbb{1}_{[0,+\infty)}(X_{k,n})}.$$
Therefore,
$$F_n(\lambda,\mu,\nu) = \frac{1-e^{-(\lambda+\nu)/2^n}}{\lambda+\nu}\sum_{k=0}^{\infty}e^{-\lambda k/2^n+i\mu X_{k,n}-\nu T_{k,n}}\,\mathbb{1}_{[0,+\infty)}(X_{k,n}) + e^{-\nu/2^n}\,\frac{1-e^{-\lambda/2^n}}{\lambda}\sum_{k=0}^{\infty}e^{-\lambda k/2^n+i\mu X_{k,n}-\nu T_{k,n}}\,\mathbb{1}_{(-\infty,0)}(X_{k,n}).$$
Before applying the expectation to this last expression, we have to check that it defines a function of
discrete observations of the pseudo-process X which satisfies the conditions of Definition 2.2. This
fact is stated in the proposition below.
Proposition 3.1. Suppose N even and fix an integer n. For any complex λ such that ℜ(λ) > 0 and any ν > 0, the series $\sum_{k=0}^{\infty}e^{-\lambda k/2^n}\,\mathbb{E}\big[e^{i\mu X_{k,n}-\nu T_{k,n}}\mathbb{1}_{[0,+\infty)}(X_{k,n})\big]$ and $\sum_{k=0}^{\infty}e^{-\lambda k/2^n}\,\mathbb{E}\big[e^{i\mu X_{k,n}-\nu T_{k,n}}\mathbb{1}_{(-\infty,0)}(X_{k,n})\big]$ are absolutely convergent and their sums are given by
$$\sum_{k=0}^{\infty}e^{-\lambda k/2^n}\,\mathbb{E}\big[e^{i\mu X_{k,n}-\nu T_{k,n}}\mathbb{1}_{[0,+\infty)}(X_{k,n})\big] = \frac{e^{\nu/2^n}-S_n^+(\lambda,\mu,\nu)}{e^{\nu/2^n}-1},$$
$$\sum_{k=0}^{\infty}e^{-\lambda k/2^n}\,\mathbb{E}\big[e^{i\mu X_{k,n}-\nu T_{k,n}}\mathbb{1}_{(-\infty,0)}(X_{k,n})\big] = \frac{e^{\nu/2^n}\big[S_n^-(\lambda,\mu,\nu)-1\big]}{e^{\nu/2^n}-1},$$
where
$$S_n^+(\lambda,\mu,\nu) = \exp\!\left(-\sum_{k=1}^{\infty}\big(1-e^{-\nu k/2^n}\big)\frac{e^{-\lambda k/2^n}}{k}\,\mathbb{E}\big[e^{i\mu X_{k,n}}\mathbb{1}_{[0,+\infty)}(X_{k,n})\big]\right),$$
$$S_n^-(\lambda,\mu,\nu) = \exp\!\left(\sum_{k=1}^{\infty}\big(1-e^{-\nu k/2^n}\big)\frac{e^{-\lambda k/2^n}}{k}\,\mathbb{E}\big[e^{i\mu X_{k,n}}\mathbb{1}_{(-\infty,0)}(X_{k,n})\big]\right).$$
PROOF
• Step 1. First, notice that for any k ≥ 1, we have
$$\Big|\mathbb{E}\big[e^{i\mu X_{k,n}-\nu T_{k,n}}\mathbb{1}_{[0,+\infty)}(X_{k,n})\big]\Big| = \left|\int\!\!\cdots\!\!\int_{\mathbb{R}^{k-1}\times[0,+\infty)}e^{i\mu x_k-\frac{\nu}{2^n}\sum_{j=1}^{k}\mathbb{1}_{[0,+\infty)}(x_j)}\,\mathbb{P}\{X_{1,n}\in dx_1,\dots,X_{k,n}\in dx_k\}\right|$$
$$= \left|\int\!\!\cdots\!\!\int_{\mathbb{R}^{k-1}\times[0,+\infty)}e^{i\mu x_k-\frac{\nu}{2^n}\sum_{j=1}^{k}\mathbb{1}_{[0,+\infty)}(x_j)}\,p\Big(\frac{1}{2^n};x_1\Big)\prod_{j=1}^{k-1}p\Big(\frac{1}{2^n};x_j-x_{j+1}\Big)\,dx_1\cdots dx_k\right|$$
$$\le \int\!\!\cdots\!\!\int_{\mathbb{R}^k}\Big|p\Big(\frac{1}{2^n};x_1\Big)\Big|\prod_{j=1}^{k-1}\Big|p\Big(\frac{1}{2^n};x_j-x_{j+1}\Big)\Big|\,dx_1\cdots dx_k = \prod_{j=1}^{k}\int_{-\infty}^{+\infty}\Big|p\Big(\frac{1}{2^n};y_j\Big)\Big|\,dy_j = \rho^k.$$
Hence, we derive the following inequality:
$$\sum_{k=1}^{\infty}\Big|e^{-\lambda k/2^n}\,\mathbb{E}\big[e^{i\mu X_{k,n}-\nu T_{k,n}}\mathbb{1}_{[0,+\infty)}(X_{k,n})\big]\Big| \le \sum_{k=1}^{\infty}\rho^k e^{-\Re(\lambda)k/2^n} \le \frac{1}{1-\rho e^{-\Re(\lambda)/2^n}}.$$
We can easily see that this bound holds true also when the factor $\mathbb{1}_{[0,+\infty)}(X_{k,n})$ is replaced by $\mathbb{1}_{(-\infty,0)}(X_{k,n})$. This shows that the two series of Proposition 3.1 are finite for λ ∈ ℂ such that $\rho e^{-\Re(\lambda)/2^n}<1$, that is ℜ(λ) > 2^n log ρ.
• Step 2. For λ ∈ ℂ such that ℜ(λ) > 2^n log ρ, Spitzer's identity (7.2) (see Lemma 7.1 in the appendix) gives, for the first series of Proposition 3.1,
$$\sum_{k=0}^{\infty}e^{-\lambda k/2^n}\,\mathbb{E}\big[e^{i\mu X_{k,n}-\nu T_{k,n}}\mathbb{1}_{[0,+\infty)}(X_{k,n})\big] = \frac{1}{e^{\nu/2^n}-1}\left[e^{\nu/2^n}-\exp\!\left(-\sum_{k=1}^{\infty}\big(1-e^{-\nu k/2^n}\big)\frac{e^{-\lambda k/2^n}}{k}\,\mathbb{E}\big[e^{i\mu X_{k,n}}\mathbb{1}_{[0,+\infty)}(X_{k,n})\big]\right)\right]. \tag{3.1}$$
The right-hand side of (3.1) is an analytic continuation of the Dirichlet series lying on the left-hand side of (3.1), and it is defined on the half-plane {λ ∈ ℂ : ℜ(λ) > 0}. Moreover, for any ε > 0, this continuation is bounded over the half-plane {λ ∈ ℂ : ℜ(λ) ≥ ε}. Indeed, we have
$$\Big|\mathbb{E}\big[e^{i\mu X_{k,n}}\mathbb{1}_{[0,+\infty)}(X_{k,n})\big]\Big| = \left|\int_0^{+\infty}e^{i\mu\xi}\,p\Big(\frac{k}{2^n};-\xi\Big)\,d\xi\right| \le \int_0^{+\infty}\Big|p\Big(\frac{k}{2^n};-\xi\Big)\Big|\,d\xi < \rho$$
and then
$$\left|\exp\!\left(-\sum_{k=1}^{\infty}\big(1-e^{-\nu k/2^n}\big)\frac{e^{-\lambda k/2^n}}{k}\,\mathbb{E}\big[e^{i\mu X_{k,n}}\mathbb{1}_{[0,+\infty)}(X_{k,n})\big]\right)\right| \le \exp\!\left(\rho\sum_{k=1}^{\infty}\frac{e^{-\Re(\lambda)k/2^n}}{k}\right) = \exp\!\big(-\rho\log(1-e^{-\Re(\lambda)/2^n})\big) = \frac{1}{(1-e^{-\Re(\lambda)/2^n})^{\rho}}.$$
Therefore, if ℜ(λ) ≥ ε,
$$\left|\exp\!\left(-\sum_{k=1}^{\infty}\big(1-e^{-\nu k/2^n}\big)\frac{e^{-\lambda k/2^n}}{k}\,\mathbb{E}\big[e^{i\mu X_{k,n}}\mathbb{1}_{[0,+\infty)}(X_{k,n})\big]\right)\right| \le \frac{1}{(1-e^{-\varepsilon/2^n})^{\rho}}.$$
This proves that the left-hand side of this last inequality is bounded for ℜ(λ) ≥ ε. By a lemma of Bohr ([5]), we deduce that the abscissas of convergence, absolute convergence and boundedness of the Dirichlet series $\sum_{k=0}^{\infty}e^{-\lambda k/2^n}\,\mathbb{E}\big[e^{i\mu X_{k,n}-\nu T_{k,n}}\mathbb{1}_{[0,+\infty)}(X_{k,n})\big]$ are identical. So, this series converges absolutely on the half-plane {λ ∈ ℂ : ℜ(λ) > 0} and (3.1) holds on this half-plane. A similar conclusion holds for the second series of Proposition 3.1. The proof is finished. □
Thanks to Proposition 3.1, we see that the functional Fn (λ, µ, ν) is a function of the discrete observations of X and, by Definition 2.2, its expectation can be computed as follows:
$$E_n(\lambda,\mu,\nu) = \frac{1-e^{-(\lambda+\nu)/2^n}}{\lambda+\nu}\cdot\frac{e^{\nu/2^n}-S_n^+(\lambda,\mu,\nu)}{e^{\nu/2^n}-1} + e^{-\nu/2^n}\,\frac{1-e^{-\lambda/2^n}}{\lambda}\cdot\frac{e^{\nu/2^n}\big[S_n^-(\lambda,\mu,\nu)-1\big]}{e^{\nu/2^n}-1}$$
$$= \left(\frac{e^{\nu/2^n}\big(1-e^{-(\lambda+\nu)/2^n}\big)}{(\lambda+\nu)(e^{\nu/2^n}-1)} - \frac{1-e^{-\lambda/2^n}}{\lambda(e^{\nu/2^n}-1)}\right) + \frac{1-e^{-\lambda/2^n}}{\lambda(e^{\nu/2^n}-1)}\,S_n^-(\lambda,\mu,\nu) - \frac{1-e^{-(\lambda+\nu)/2^n}}{(\lambda+\nu)(e^{\nu/2^n}-1)}\,S_n^+(\lambda,\mu,\nu). \tag{3.2}$$
Now, we have to evaluate the limit E(λ, µ, ν) of En (λ, µ, ν) as n goes toward infinity. It is easy
to see that this limit exists; see the proof of Theorem 3.1 below. Formally, we write E(λ, µ, ν) =
E[F (λ, µ, ν)] with
$$F(\lambda,\mu,\nu) = \int_0^{\infty}e^{-\lambda t+i\mu X(t)-\nu T(t)}\,dt.$$
Then, we can say that the functional F(λ, µ, ν) is an admissible function of X in the sense of Definition 2.3. The value of its expectation E(λ, µ, ν) is given in the following theorem.
Theorem 3.1. The 3-parameter Laplace–Fourier transform of the couple (T(t), X(t)) is given by
$$E(\lambda,\mu,\nu) = \frac{1}{\prod_{j\in J}\big(\sqrt[N]{\lambda+\nu}-i\mu\theta_j\big)\prod_{k\in K}\big(\sqrt[N]{\lambda}-i\mu\theta_k\big)}. \tag{3.3}$$
PROOF
It is plain that the term lying within the biggest parentheses in the last equality of (3.2) tends to zero as n goes towards infinity and that the coefficients lying before S_n^+(λ, µ, ν) and S_n^-(λ, µ, ν) tend to 1/ν. As a byproduct, we derive in the limit as n → ∞
$$E(\lambda,\mu,\nu) = \frac{1}{\nu}\big[S^-(\lambda,\mu,\nu)-S^+(\lambda,\mu,\nu)\big] \tag{3.4}$$
where we set
$$S^+(\lambda,\mu,\nu) = \lim_{n\to\infty}S_n^+(\lambda,\mu,\nu) = \exp\!\left(-\int_0^{\infty}\mathbb{E}\big[e^{i\mu X(t)}\mathbb{1}_{[0,+\infty)}(X(t))\big]\big(1-e^{-\nu t}\big)\frac{e^{-\lambda t}}{t}\,dt\right),$$
$$S^-(\lambda,\mu,\nu) = \lim_{n\to\infty}S_n^-(\lambda,\mu,\nu) = \exp\!\left(\int_0^{\infty}\mathbb{E}\big[e^{i\mu X(t)}\mathbb{1}_{(-\infty,0)}(X(t))\big]\big(1-e^{-\nu t}\big)\frac{e^{-\lambda t}}{t}\,dt\right).$$
We have
$$\int_0^{\infty}\mathbb{E}\big[e^{i\mu X(t)}\mathbb{1}_{[0,+\infty)}(X(t))\big]\big(1-e^{-\nu t}\big)\frac{e^{-\lambda t}}{t}\,dt$$
$$= \int_0^{\infty}\mathbb{E}\big[\big(e^{i\mu X(t)}-1\big)\mathbb{1}_{[0,+\infty)}(X(t))\big]\frac{e^{-\lambda t}}{t}\,dt - \int_0^{\infty}\mathbb{E}\big[\big(e^{i\mu X(t)}-1\big)\mathbb{1}_{[0,+\infty)}(X(t))\big]\frac{e^{-(\lambda+\nu)t}}{t}\,dt + \int_0^{\infty}\mathbb{P}\{X(t)\ge 0\}\,\frac{e^{-\lambda t}-e^{-(\lambda+\nu)t}}{t}\,dt$$
$$= \int_0^{\infty}\frac{e^{-\lambda t}}{t}\,dt\int_0^{\infty}\big(e^{i\mu\xi}-1\big)p(t;-\xi)\,d\xi - \int_0^{\infty}\frac{e^{-(\lambda+\nu)t}}{t}\,dt\int_0^{\infty}\big(e^{i\mu\xi}-1\big)p(t;-\xi)\,d\xi + \mathbb{P}\{X(1)\ge 0\}\int_0^{\infty}\frac{e^{-\lambda t}-e^{-(\lambda+\nu)t}}{t}\,dt.$$
In view of (2.12) and (2.13), and using the elementary equality $\int_0^{\infty}\frac{e^{-\lambda t}-e^{-(\lambda+\nu)t}}{t}\,dt = \log\big(\frac{\lambda+\nu}{\lambda}\big)$, we have
$$\int_0^{\infty}\mathbb{E}\big[e^{i\mu X(t)}\mathbb{1}_{[0,+\infty)}(X(t))\big]\big(1-e^{-\nu t}\big)\frac{e^{-\lambda t}}{t}\,dt = \log\prod_{j\in J}\frac{\sqrt[N]{\lambda}}{\sqrt[N]{\lambda}-i\mu\theta_j} - \log\prod_{j\in J}\frac{\sqrt[N]{\lambda+\nu}}{\sqrt[N]{\lambda+\nu}-i\mu\theta_j} + \frac{\#J}{N}\log\frac{\lambda+\nu}{\lambda} = \log\prod_{j\in J}\frac{\sqrt[N]{\lambda+\nu}-i\mu\theta_j}{\sqrt[N]{\lambda}-i\mu\theta_j}.$$
We then deduce the value of S^+(λ, µ, ν). By (2.2),
$$S^+(\lambda,\mu,\nu) = \prod_{j\in J}\frac{\sqrt[N]{\lambda}-i\mu\theta_j}{\sqrt[N]{\lambda+\nu}-i\mu\theta_j} = \frac{\prod_{\ell=1}^{N}\big(\sqrt[N]{\lambda}-i\mu\theta_{\ell}\big)}{\prod_{j\in J}\big(\sqrt[N]{\lambda+\nu}-i\mu\theta_j\big)\prod_{k\in K}\big(\sqrt[N]{\lambda}-i\mu\theta_k\big)} = \frac{\lambda-\kappa_N(i\mu)^N}{\prod_{j\in J}\big(\sqrt[N]{\lambda+\nu}-i\mu\theta_j\big)\prod_{k\in K}\big(\sqrt[N]{\lambda}-i\mu\theta_k\big)}. \tag{3.5}$$
Similarly, the value of S^-(λ, µ, ν) is given by
$$S^-(\lambda,\mu,\nu) = \prod_{k\in K}\frac{\sqrt[N]{\lambda+\nu}-i\mu\theta_k}{\sqrt[N]{\lambda}-i\mu\theta_k} = \frac{\lambda+\nu-\kappa_N(i\mu)^N}{\prod_{j\in J}\big(\sqrt[N]{\lambda+\nu}-i\mu\theta_j\big)\prod_{k\in K}\big(\sqrt[N]{\lambda}-i\mu\theta_k\big)}. \tag{3.6}$$
Finally, putting (3.5) and (3.6) into (3.4) immediately leads to (3.3). □
Remark 3.1. We can rewrite (3.3) as
$$E(\lambda,\mu,\nu) = \frac{1}{\lambda^{\frac{\#K}{N}}(\lambda+\nu)^{\frac{\#J}{N}}}\prod_{j\in J}\frac{\sqrt[N]{\lambda+\nu}}{\sqrt[N]{\lambda+\nu}-i\mu\theta_j}\prod_{k\in K}\frac{\sqrt[N]{\lambda}}{\sqrt[N]{\lambda}-i\mu\theta_k}. \tag{3.7}$$
Actually, this form is more suitable for the inversion of the Laplace–Fourier transform.
In the three next sections, we progressively invert the 3-parameter Laplace–Fourier transform E(λ, µ, ν).
4 Inverting with respect to µ
In this part, we invert E(λ, µ, ν) given by (3.7) with respect to µ.
Theorem 4.1. We have, for λ, ν > 0,
$$\int_0^{\infty}e^{-\lambda t}\,\mathbb{E}\big[e^{-\nu T(t)},\,X(t)\in dx\big]/dx\;dt = \begin{cases} \dfrac{1}{\lambda^{\frac{\#K-1}{N}}(\lambda+\nu)^{\frac{\#J-1}{N}}}\displaystyle\sum_{j\in J}A_j\theta_j\left(\sum_{k\in K}\frac{B_k\theta_k}{\theta_k\sqrt[N]{\lambda}-\theta_j\sqrt[N]{\lambda+\nu}}\right)e^{-\theta_j\sqrt[N]{\lambda+\nu}\,x} & \text{if } x\ge 0,\\[5mm] \dfrac{1}{\lambda^{\frac{\#K-1}{N}}(\lambda+\nu)^{\frac{\#J-1}{N}}}\displaystyle\sum_{k\in K}B_k\theta_k\left(\sum_{j\in J}\frac{A_j\theta_j}{\theta_k\sqrt[N]{\lambda}-\theta_j\sqrt[N]{\lambda+\nu}}\right)e^{-\theta_k\sqrt[N]{\lambda}\,x} & \text{if } x\le 0. \end{cases} \tag{4.1}$$
PROOF
By (2.6) applied to $x = i\mu/\sqrt[N]{\lambda+\nu}$ and $x = i\mu/\sqrt[N]{\lambda}$, we have
$$\prod_{j\in J}\frac{\sqrt[N]{\lambda+\nu}}{\sqrt[N]{\lambda+\nu}-i\mu\theta_j}\prod_{k\in K}\frac{\sqrt[N]{\lambda}}{\sqrt[N]{\lambda}-i\mu\theta_k} = \prod_{j\in J}\frac{1}{1-\frac{i\mu}{\sqrt[N]{\lambda+\nu}}\theta_j}\prod_{k\in K}\frac{1}{1-\frac{i\mu}{\sqrt[N]{\lambda}}\theta_k} = \sum_{j\in J}\frac{A_j\theta_j}{\theta_j-\frac{i\mu}{\sqrt[N]{\lambda+\nu}}}\sum_{k\in K}\frac{B_k\theta_k}{\theta_k-\frac{i\mu}{\sqrt[N]{\lambda}}}$$
$$= \sqrt[N]{\lambda(\lambda+\nu)}\,\sum_{j\in J}\sum_{k\in K}\frac{A_jB_k\theta_j\theta_k}{\big(\theta_j\sqrt[N]{\lambda+\nu}-i\mu\big)\big(\theta_k\sqrt[N]{\lambda}-i\mu\big)}.$$
Let us write that
$$\frac{1}{\big(\theta_j\sqrt[N]{\lambda+\nu}-i\mu\big)\big(\theta_k\sqrt[N]{\lambda}-i\mu\big)} = \frac{1}{\theta_k\sqrt[N]{\lambda}-\theta_j\sqrt[N]{\lambda+\nu}}\left(\frac{1}{\theta_j\sqrt[N]{\lambda+\nu}-i\mu}-\frac{1}{\theta_k\sqrt[N]{\lambda}-i\mu}\right) = \frac{1}{\theta_k\sqrt[N]{\lambda}-\theta_j\sqrt[N]{\lambda+\nu}}\left(\int_0^{\infty}e^{(i\mu-\theta_j\sqrt[N]{\lambda+\nu})x}\,dx + \int_{-\infty}^{0}e^{(i\mu-\theta_k\sqrt[N]{\lambda})x}\,dx\right).$$
Therefore, we can rewrite E(λ, µ, ν) as
$$E(\lambda,\mu,\nu) = \frac{1}{\lambda^{\frac{\#K-1}{N}}(\lambda+\nu)^{\frac{\#J-1}{N}}}\sum_{j\in J}\sum_{k\in K}\frac{A_jB_k\theta_j\theta_k}{\theta_k\sqrt[N]{\lambda}-\theta_j\sqrt[N]{\lambda+\nu}}\int_{-\infty}^{\infty}e^{i\mu x}\Big(e^{-\theta_j\sqrt[N]{\lambda+\nu}\,x}\,\mathbb{1}_{[0,\infty)}(x)+e^{-\theta_k\sqrt[N]{\lambda}\,x}\,\mathbb{1}_{(-\infty,0]}(x)\Big)\,dx$$
which is nothing but the Fourier transform with respect to µ of the right-hand side of (4.1). □
Remark 4.1. One can observe that formula (24) in [11] involves the density of (T (t), X (t)), this
latter being evaluated at the extremity X (t) = 0 when the starting point is x. By invoking the
duality, we could derive an alternative representation for (4.1). Nevertheless, this representation is
not tractable for performing the inversion with respect to ν.
Example 4.1. For N = 3, we have two cases to consider. Although this situation is not correctly
defined, (4.1) writes formally, with the numerical values of Example 2.1, in the case κ3 = 1,
Z
∞
e−λt E e−ν T (t) , X (t) ∈ dx /dx dt
0
p
3

e− λ+ν x


p


 λ2/3 + 3 λ(λ + ν) + (λ + ν)2/3
=
3p
Š
Š
€ p3 p
€ p3 p
3
3
p
p
p p
λ

3
3
3
λ
λ
x

x
−
(2
x
3
λ
cos
λ
+
ν
+
λ
)
sin
2

e

2
2
p p
p
3
3
2/3
2/3
3 λ
λ + λ(λ + ν) + (λ + ν)
if x ≥ 0,
if x ≤ 0,
and in the case κ3 = −1,
Z
∞
e−λt E e−ν T (t) , X (t) ∈ dx /dx dt
0

=
−
3p
λ+ν
2
x
e


p
p

 33 λ+ν




€p p
Š
€p p
Š
p
3
3
p
p p
3
3
3
3 λ + ν cos 3 2λ+ν x + ( λ + ν + 2 λ ) sin 3 2λ+ν x
p
λ2/3 + 3 λ(λ + ν) + (λ + ν)2/3
e
λ2/3 +
p
3
if x ≥ 0,
p
3
λx
if x ≤ 0.
λ(λ + ν) + (λ + ν)2/3
908
Example 4.2. For N = 4, formula (4.1) supplies, with the numerical values of Example 2.2,
Z∞
e−λt E e−ν T (t) , X (t) ∈ dx /dx dt
0
–
‚p
Œ
‚p
Ϊ
p − 4ppλ+ν x
4
4
p
p
2
2e
λ+ν
λ+ν

4
4

λ + ν cos
x + λ sin
x

p
p
p
p
p
p
p

 4 λ + ν ( λ + λ + ν)( 4 λ + 4 λ + ν)
2
2

=
p

–
‚p
Œ
‚p
Ϊ
p p4 λ x

4
4

p
p
2

2
e
λ
λ
4
4
p p
λ cos p x − λ + ν sin p x
p
p
p
4
4
4
2
2
λ ( λ + λ + ν)( λ + λ + ν)
5
if x ≥ 0,
if x ≤ 0.
5 Inverting with respect to ν
In this section, we carry out the inversion with respect to the parameter ν. The cases x ≤ 0 and
x ≥ 0 lead to results which are not quite analogous. This is due to the asymmetry of our problem.
So, we split our analysis into two subsections related to the cases x ≤ 0 and x ≥ 0.
5.1 The case x ≤ 0
Theorem 5.1. The Laplace transform with respect to t of the density of the couple (T (t), X (t)) is given,
when x ≤ 0, by
$$\int_0^{\infty}e^{-\lambda t}\,\big[\mathbb{P}\{T(t)\in ds,\,X(t)\in dx\}/(ds\,dx)\big]\,dt = -\frac{e^{-\lambda s}}{\lambda^{\frac{\#K-1}{N}}s^{\frac{\#K}{N}}}\sum_{m=0}^{\#K}\alpha_{-m}(\lambda s)^{\frac{m}{N}}E_{1,\frac{m+\#J}{N}}(\lambda s)\sum_{k\in K}B_k\theta_k^{m+1}e^{-\theta_k\sqrt[N]{\lambda}\,x}. \tag{5.1}$$
PROOF
Recall (4.1) in the case x ≤ 0:
Z∞
e−λt E e−ν T (t) , X (t) ∈ dx /dx dt
0
=
1
λ
#K−1
N
(λ + ν)
X
#J−1
N
X
Bk θk
j∈J
k∈K
Ajθj
p
p
N
N
θk λ − θ j λ + ν
!
e−θk
p
N
λx
.
We have to invert with respect to ν the quantity
1
(λ + ν)
X
#J−1
N
j∈J
X
Ajθj
Aj
=
−
p
p
#J−1 p
N
N
N
θk λ − θ j λ + ν
j∈J (λ + ν) N ( λ + ν −
θk
θj
.
p
N
λ)
By using the following elementary equality, which is valid for α > 0,
‚ α−1 −λs Œ
Z∞
Z∞
1
1
s
e
−(λ+ν)s α−1
−νs
=
e
s
ds =
e
ds,
α
(λ + ν)
Γ(α) 0
Γ(α)
0
909
p
N
λ + ν,
we obtain, for |β| <
1
1
= p
p
N
N
λ+ν −β
λ+ν 1−
Z∞
=
β
p
N
λ+ν
∞
X
∞
X
βr
βr
€ r+1 Š
=
r=0 Γ N
r+1
N
Z
(λ + ν)
!
p
∞
r
N
X
(β
s
)
1
€ r+1 Š ds.
s N −1 e−λs
r=0 Γ N
e−νs
=
1
0
r=0
∞
e−(λ+ν)s s
r+1
−1
N
ds
0
The sum lying in the last displayed equality can be expressed by means of the Mittag-Leffler function
P∞
ξr
(see [7, Chap. XVIII]): Ea,b (ξ) = r=0 Γ(ar+b) . Then,
1
=
p
N
λ+ν −β
∞
Z
1
p e−νs s N −1 e−λs E 1 , 1 (β N s ) ds.
(5.2)
N N
0
Next, we write
X
j∈J
Aj
p
N
λ+ν −
θk
θj
=
p
N
λ
Z
∞
–
e
−νs
s
1
N
−1 −λs
e
X
0
j∈J
‚
Aj E 1 , 1
θk p
N
θj
N N
Ϊ
λs
ds,
(5.3)
where
X
‚
Aj E 1 , 1
N N
j∈J
θk p
N
θj
Œ
λs
=
X
∞
X
Aj
‚
Œr
θj
r=0
j∈J
θk
r
∞
X
(λs) N
€ r+1 Š =
Γ N
r=0
θkr
X Aj
j∈J
θ jr
!
r
(λs) N
€
Š.
Γ r+1
N
When performing the euclidian division of r by N , we can write r as r = `N + m with ` ≥ 0 and
0 ≤ m ≤ N − 1. With this, we have θ j−r = (θ jN )−` θ j−m = κ` θ j−m and θkr = κ` θkm . Then,
N
θkr
X Aj
θr
j∈J j
= θkm
X Aj
j∈J
θ jm
N
= θkm α−m .
Hence, since by (2.9) the α−m , #K + 1 ≤ m ≤ N , vanish,
X
‚
Aj E 1 , 1
N N
j∈J
θk p
N
θj
Œ
λs
=
∞ X
#K
X
m
α−m θkm
`=0 m=0
#K
X
(λs)`+ N
m
m
N E m+1 (λs)
€
Š
=
α
θ
(λs)
−m
1, N
k
Γ ` + m+1
m=0
N
and (5.3) becomes
X
j∈J
Aj
p
N
λ+ν −
θk
θj
=
p
N
λ
Z
∞
e−νs
s
1
N
−1 −λs
e
0
#K
X
α−m θkm (λs) E1, m+1 (λs)
m=0
As a result, by introducing a convolution product, we obtain
Z
∞
e−λt E e−ν T (t) , X (t) ∈ dx /dx dt
0
=−
1
λ
#K−1
N
X
Bk θk e−θk
p
N
λx
k∈K
910
!
m
N
N
ds.
Z
∞
Z
s
e−νs 
×
0
σ
0
#J−1
−1
N

#K
X
e−λσ
m+1
m
€ #J−1 Š × e−λ(s−σ)
α−m θkm λ N (s − σ) N −1 E1, m+1 (λ(s − σ)) dσ ds.
N
Γ N
m=0
By removing the Laplace transforms with respect to the parameter ν of each member of the foregoing
equality, we extract
Z
∞
e−λt [P{T (t) ∈ ds, X (t) ∈ dx}/(ds dx)] dt
0
=−
#K
e−λs X
λ
#K−1
N
α−m λ
m
N
m=0
X
Bk θkm+1 e−θk
p
N
λx
!Z
s
0
k∈K
#J−1
σ N −1
m+1
€ #J−1 Š (s − σ) N −1 E1, m+1 (λ(s − σ)) dσ.
N
Γ N
The integral lying on the right-hand side of the previous equality can be evaluated as follows:
Z
s
0
#J−1
#J−1
∞
X
λ` (s − σ)`
σ N −1
m+1
−1
N
€ #J−1 Š (s − σ)
€
Š dσ
m+1
0 Γ
`=0 Γ ` + N
N
Z s #J−1 −1
m+1
∞
X
σ N
(s − σ)`+ N −1
`
€ #J−1 Š
€
Š dσ
=
λ
m+1
Γ
Γ
`
+
0
`=0
N
N
σ N −1
m+1
€ #J−1 Š (s − σ) N −1 E1, m+1 (λ(s − σ)) dσ =
N
Γ N
Z
s
∞
X
λ` s`+
€
=
`=0 Γ ` +
m+#J
N
−1
m+#J
N
Š =s
m+#J
N
−1
E1, m+#J (λs)
N
from which we deduce (5.1). „
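The Mittag-Leffler function $E_{a,b}(\xi)=\sum_{r\ge 0}\xi^r/\Gamma(ar+b)$ used throughout this section can be evaluated numerically by truncating its (entire) series. The following minimal Python sketch is our own illustration, adequate for moderate |ξ|.

```python
# Minimal sketch (our illustration): truncated-series evaluation of the
# Mittag-Leffler function E_{a,b}(xi) = sum_{r>=0} xi^r / Gamma(a*r + b).
from math import gamma, exp

def mittag_leffler(xi, a, b, n_terms=100):
    return sum(xi**r / gamma(a * r + b) for r in range(n_terms))

# Sanity checks: E_{1,1}(xi) = exp(xi) and E_{1,2}(xi) = (exp(xi) - 1)/xi.
print(mittag_leffler(1.5, 1.0, 1.0), exp(1.5))
print(mittag_leffler(1.5, 1.0, 2.0), (exp(1.5) - 1.0) / 1.5)
```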
Remark 5.1. An alternative expression for formula (5.1) is for x ≤ 0
Z
∞
e−λt [P{T (t) ∈ ds, X (t) ∈ dx}/(ds dx)] dt
0
=−
e−λs
λ
#K−1
N
s
X
#K
N
‚
A j Bk θk E 1 , #J
j∈J
k∈K
N
N
θk p
N
θj
Œ
e−θk
λs
p
N
λx
.
In effect, by (5.1),
Z
∞
e−λt [P{T (t) ∈ ds, X (t) ∈ dx}/(ds dx)] dt
0
=−
=−
λ
#K−1
N
s
#K
N
e−λs
λ
#K−1
N
s
m
p
N
(λs)`+ N
m+1
−θk λ x
€
Š
α−m Bk θk
e
Γ ` + m+#J
`=0 m=0 k∈K
N
‚ Œm
m
∞ N
−1 X
X
X
p
N
θk
(λs)`+ N
−θk λ x
€
Š
A j Bk θk
e
.
θj
Γ ` + m+#J
`=0 m=0 j∈J
N
∞ X
#K X
X
e−λs
#K
N
k∈K
911
(5.4)
In the last displayed equality, we have extended the sum with respect to m to the range 0 ≤ m ≤ N −1
because,
 by
‹ (2.9),
 the
‹ α−m , #K + 1 ≤ m ≤ N − 1, vanish. Let us introduce the index r = `N + m.
Since
Z
θk
θj
m
=
θk
θj
r
, we have
‹r
p
N
λs
p
N
€ r+#J Š e−θk λ x
Γ N

∞
e−λt [P{T (t) ∈ ds, X (t) ∈ dx}/(ds dx)] dt = −
0
e−λs
λ
#K−1
N
s
X
#K
N
j∈J
k∈K
A j Bk θk
∞
X
r=0
θk
θj
which coincide with (5.4).
Example 5.1. Case N = 3. We have formally for x ≤ 0, when κ3 = −1:
‚ −λs
Œ
Z∞
p
p
3
e
3
e−λt [P{T (t) ∈ ds, X (t) ∈ dx}/(ds dx)] dt = e λ x
E 2 (λs) − λ
p
3
s 1, 3
0
and when κ3 = 1:
Z
∞
e−λt [P{T (t) ∈ ds, X (t) ∈ dx}/(ds dx)] dt
0
3p
λ
3
 p3 p
‹
λ x ‹ p
e−λs+ 2 x •p
3
3 cos
λs E1, 2 (λs) − (λs)2/3 eλs
=p p
3
3
2
3 λs2
p
p
‹˜
 3 3 λ x ‹ p
3
2/3 λs
λs E1, 2 (λs) + (λs) e − 2E1, 1 (λs) .
+ sin
3
3
2
Example 5.2. Case N = 4. We have, for x ≥ 0,
Z
∞
e−λt [P{T (t) ∈ ds, X (t) ∈ dx}/(ds dx)] dt
0
p
p −λs+ p4 λ x •  p
4
4
‹
p
‹˜
p
2
2e
λ x ‹ p
λ x ‹ p
4
4
λs
3
3
1
=
λs
E
(λs)
−
λs
e
+
sin
λs
E
(λs)
−
E
(λs)
.
cos
p
p
p
p
1, 4
1, 4
1, 2
4
2
2
λ s
5.2 The case x ≥ 0
Theorem 5.2. The Laplace transform with respect to t of the density of the couple (T (t), X (t)) is given,
when x ≥ 0, by
$$\int_0^{\infty}e^{-\lambda t}\,\big[\mathbb{P}\{T(t)\in ds,\,X(t)\in dx\}/(ds\,dx)\big]\,dt = -\frac{e^{-\lambda s}}{\lambda^{\frac{\#K-1}{N}}}\sum_{\substack{j\in J\\ k\in K}}A_jB_k\theta_k\int_0^{s}\sigma^{\frac{1}{N}-1}E_{\frac{1}{N},\frac{1}{N}}\!\left(\frac{\theta_k}{\theta_j}\sqrt[N]{\lambda\sigma}\right)I_{j,\#J-1}(s-\sigma;x)\,d\sigma \tag{5.5}$$
where the function I_{j,#J−1} is defined by (2.14).
(5.5)
PROOF
Recall (4.1) in the case x ≥ 0:
Z∞
e−λt E e−ν T (t) , X (t) ∈ dx /dx dt
0
=
1
λ
#K−1
N
X
(λ + ν)
#J−1
N
(λ+ν)
p
N
λ+ν x
(λ + ν)
#J−1
N
k∈K
−θ j
=
Bk θk
p
p
N
N
θk λ − θ j λ + ν
Np
λ+ν x
p
#J−1
N
1
=
p
N
λ+ν −β
e−θ j
Ajθj
j∈J
e
We have to invert the quantity
X
p
θ
(N λ+ν− θkj N λ )
Z∞
!
e−θ j
p
N
λ+ν x
.
with respect to ν. Recalling (5.2) and (2.15),
€ p Š
1
e−νs s N −1 e−λs E 1 , 1 β N s
ds,
N N
0
∞
Z
€
Š
e−νs e−λs I j,#J−1 (s; x) ds,
0
we get by convolution
p
N
e−θ j λ+ν x
Š
#J−1 € p
N
θ p
N
(λ + ν) N
λ + ν − θk λ
j
‚Z s
Z∞
=
e
−νs
σ
0
=
Z
1
N
−1 −λσ
e
‚
e
−νs
e
−λs
E1,1
Z
s
1
N
σ
−1
‚
E1,1
N N
0
0
θk p
N
θj
N N
0
∞
‚
θk p
N
θj
Œ
Œ
λσ
×e
−λ(s−σ)
I j,#J−1 (s − σ; x) dσ ds
Œ
λσ
Œ
I j,#J−1 (s − σ; x) dσ ds.
This immediately yields (5.5). „
Remark 5.2. Noticing that
‚
Œ
r
m
∞
∞ N
−1 m
N
−1 m
X
X
X
X
θk (λσ)`+ N
θk
θkr (λσ) N
θk p
m
N
N
€
Š
€
Š
λσ =
=
=
(λσ)
E1,1
m
m (λσ) E1, m+1
r
r+1
m+1
N N
N
θj
θ
θ
θ
Γ
Γ
`
+
r=0 j
m=0 j
`=0 m=0 j
N
N
and reminding that, from (2.8), the βm , 1 ≤ m ≤ #K − 1, vanish, we can rewrite (5.5) in the
following form. For x ≥ 0,
Z∞
e−λt [P{T (t) ∈ ds, X (t) ∈ dx}/(ds dx)] dt
0
= −e
= −e
−λs
−λs
!
N
−1
X
X
m=#K−1
k∈K
N
X
βm λ
Bk θkm+1
m−#K
N
with Φm (τ; x) =
P
Aj
j∈J θ m−1
j
Z
s
σ
m+1
−1
N
0
Z
‚
E1, m+1 (λσ)
N
X Aj
j∈J
θ jm
Œ
I j,#J−1 (s − σ; x)
dσ
s
0
m=#K
λ
m−#K+1
N
m
σ N −1 E1, m (λσ) Φm (s − σ; x) dσ
N
I j,#J−1 (τ; x).
913
(5.6)
Remark 5.3. For x = 0, using formula (5.1) which is valid for x ≤ 0, we get, by (2.8), (2.9)
and (2.10),
Z
∞
0
e−λt P{T (t) ∈ ds, X (t) ∈ dx}/(ds dx)
=−
=−
=
=
λ
‚
#K
X
e−λs
λ
#K−1
N
s
m=0
#K
#K−1
#K
α1−#K β#K (λs) N E1,1− 1 (λs) + α−#K β#K+1 (λs) N E1,1 (λs)
λ N s N‚
X
e−λs
#K−1
N
X
j∈J
s
#K
N
Œ
θj
m
α−m βm+1 (λs) N E1, m+#J (λs)
#K
N
e−λs
#K−1
dt
x=0
N
N
θ j (λs)
#K−1
N
E1,1− 1 (λs) +
N
j∈J
−λs
e
p
N
s

E1,1− 1 (λs) −
X
θk (λs)
#K
N
e
λs
Œ
k∈K
p
N
N
‹
λs eλs .
(5.7)
On the other hand, with formula (5.6) which is valid for x ≥ 0,
Z
∞
0
e−λt P{T (t) ∈ ds, X (t) ∈ dx}/(ds dx)
= −e
−λs
N
X
βm λ
dt
x=0
m−#K
N
s
Z
m
σ N −1 E1, m (λσ) Φm (s − σ; 0) dσ
N
0
m=#K
with
Φm (τ; 0) =
=
Ni
‚
Œ
X Aj
m−1
j∈J θ j
e
π
−i #J−1
N
2π
Š € #J−1 Š
€
sin N π
Γ #K+1
N
πτ
#K+1
N
−e
i #J−1
π
N
Z
∞
N
ξ#K e−τξ dξ
0
α1−m
α1−m = €
Š #K+1 .
Γ #J−1
τ N
N
In view of (2.8), (2.9) and (2.10), we have
Z
∞
e
0
−λt
P{T (t) ∈ ds, X (t) ∈ dx}/(ds dx)
x=0
dt

‚
ŒZ s
#K
e−λs  X
σ N −1
= € #J−1 Š 
θj
E1, #K (λσ) dσ
#K+1
N
Γ N
0 (s − σ) N
j∈J

!
−1
p Z s σ #K+1
X
N
N
+
θk
λ
E1, #K+1 (λσ) dσ
#K+1
N
0 (s − σ) N
k∈K
€
Š
‚
Œ
∞ B ` + #K , 1 − #K+1
X
X
e−λs
1
€ #J−1 Š e−λs
€N #K Š N
=
θj
λ` s`− N
Γ N
Γ `+ N
j∈J
`=0
914
(5.8)
€
Š
!
∞ B ` + #K+1 , 1 − #K+1
p X
N
N
N
`
€
Š
− λ
(λs)
#K+1
Γ
`
+
`=0
N
Œ −λs 
‚
‹
p
X
e
N
λs
1
.
(λs)
−
λs
e
θj
E
=
p
1,1− N
N
s
j∈J
Thus, we have checked that the two different formulas (5.7) and (5.8) lead to the same result.
Example 5.3. Case N = 3. For x ≥ 0, (5.5) supplies formally with the numerical values of Example 2.1, when κ3 = −1,
Z
∞
e−λt [P{T (t) ∈ ds, X (t) ∈ dx}/(ds dx)] dt
0
e−λs
= p
3
+e
‚
e
iπ
6
Z
s
s
Z
p
€
Š
π 3
σ−2/3 E 1 , 1 − e−i 3 λσ I1,1 (s − σ; x) dσ
3 3
0
− iπ
6
€
σ−2/3 E 1 , 1 − e
i π3 3
p
3 3
0
Œ
Š
λσ I2,1 (s − σ; x) dσ
and when κ3 = 1,
Z
∞
e−λt [P{T (t) ∈ ds, X (t) ∈ dx}/(ds dx)] dt
0
i e−λs
=p p
3
3 λ
Zs
σ
−
s
‚Z
€ 2π p
Š
3
σ−2/3 E 1 , 1 e−i 3 λσ I1,0 (s − σ; x) dσ
3 3
0
−2/3
Œ
Š
€ i 2π p
3
E 1 , 1 e 3 λσ I1,0 (s − σ; x) dσ .
3 3
0
The functions I1,0 , I1,1 and I2,1 above are respectively given by (2.16), (2.17) and (2.18).
Example 5.4. Case N = 4. For x ≥ 0, (5.5) supplies, with the numerical values of Example 2.2,
Z
∞
e−λt [P{T (t) ∈ ds, X (t) ∈ dx}/(ds dx)] dt
0
e−λs
=− p
4
2 λ
+ e−i
+ ei
3π
4
3π
4
‚
e
i π4
Z
s
+e
4 4
0
s
Z
€ p
Š
4
σ−3/4 E 1 , 1 − i λσ I1,1 (s − σ; x) dσ
4 4
0
s
Z
€ p
Š
4
σ−3/4 E 1 , 1 i λσ I2,1 (s − σ; x) dσ
4 4
0
−i π4
€ p
Š
4
σ−3/4 E 1 , 1 − λσ I1,1 (s − σ; x) dσ
Z
s
0
σ
−3/4
Œ
€ p
Š
4
E 1 , 1 − λσ I2,1 (s − σ; x) dσ .
4 4
The functions I1,1 and I2,1 above are given by (2.19).
6 Inverting with respect to λ
In this section, we perform the last inversion in F (λ, µ, ν) in order to derive the distribution of the
couple (T (t), X (t)). As in the previous section, we treat separately the two cases x ≤ 0 and x ≥ 0.
6.1 The case x ≤ 0
Theorem 6.1. The distribution of the couple (T(t), X(t)) is given, for x ≤ 0, by
$$\mathbb{P}\{T(t)\in ds,\,X(t)\in dx\}/(ds\,dx) = -\frac{Ni}{2\pi}\sum_{m=0}^{\#K}\alpha_{-m}\,s^{\frac{m-\#K}{N}}\int_0^{\infty}\xi^{m+\#J}e^{-(t-s)\xi^N}K_m(x\xi)\,E_{1,\frac{m+\#J}{N}}\big(-s\xi^N\big)\,d\xi \tag{6.1}$$
where
$$K_m(z) = e^{-i\frac{\#K-m-1}{N}\pi}\sum_{k\in K}B_k\theta_k^{m+1}e^{-\theta_k e^{i\frac{\pi}{N}}z} - e^{i\frac{\#K-m-1}{N}\pi}\sum_{k\in K}B_k\theta_k^{m+1}e^{-\theta_k e^{-i\frac{\pi}{N}}z}.$$
PROOF
Assume x ≤ 0. Recalling (5.1), we have
Z
∞
e−λt [P{T (t) ∈ ds, X (t) ∈ dx}/(ds dx)]dt
0
=−
=−
#K
X
e−λs
#K−1
N
λ
p
N
s
#K
N
λ e−λs
m
α−m (λs) N E1, m+#J (λs)
N
m=0
#K
X
α−m
m=0
=−
(λs)
€
Γ
`+
`=0
`+ m−#K
N
∞ X
#K
X
s
α−m €
Γ `+
`=0 m=0
X
Š
m+#J
N
X
Š
m+#J
Bk θkm+1 e−θk
p
N
λx
k∈K
`+ m−#K
N
∞
X
X
Bk θkm+1 e−θk
p
N
λx
k∈K
Bk θkm+1 λ`+
m−#K+1
N
e−λs−θk
p
N
λx
.
(6.2)
k∈K
N
p
N
m−#K+1
We need to invert the quantity λ`+ N e−λs−θk λ x for ` ≥ 0 and 0 ≤ m ≤ #K with respect to λ.
We intend to use (2.15) which is valid for 0 ≤ m ≤ N − 1. Actually (2.15) holds true also for m ≤ 0;
the proof of this claim is postponed to Lemma 7.3 in the appendix. As a byproduct, for any ` ≥ 0
and 0 ≤ m ≤ #K,
Z∞
λ`+
m−#K+1
N
e−λs−θk
p
N
λx
= e−λs
e−λu I k,#K−`N −m−1 (u; x) du
0
=
Z
∞
e−λt I k,#K−`N −m−1 (t − s; x) dt.
(6.3)
s
Then, by putting (6.3) into (6.2) and next by eliminating the Laplace transform with respect to λ,
we extract
P{T (t) ∈ ds, X (t) ∈ dx}/(ds dx)
916
=−
s`+
∞ X
#K
X
α−m €
Γ `+
m=0
`=0
m−#K
N
X
Š
m+#J
Bk θkm+1 I k,#K−`N −m−1 (t − s; x)
k∈K
N
`+ m−#K
N
∞ X
#K
Ni X
s
Š
α−m €
2π `=0 m=0
Γ ` + m+#J
N
‚
Z∞
X
π
#K−`N −m−1
iN
N
π
−i
m+1
N
ξN −#K+`N +m e−(t−s)ξ −θk e xξ dξ
×
Bk θk
e
=−
0
k∈K
−e
∞
Z
i #K−`NN−m−1 π
ξ
Œ
π
N −#K+`N +m −(t−s)ξN −θk e−i N xξ
dξ
e
0
=−
#K
Ni X
2π m=0

α−m s
m−#K
N
X
Bk θkm+1
k∈K
€
Š 
N `
∞
X
−sξ

 m+#J −(t−s)ξN −θk e i Nπ xξ

Š
€
× e
ξ
e
dξ

m+#J 
Γ
`
+
0
`=0
N



€
Š
Z∞ ∞
N `
X
π
−sξ
#K−m−1
−i
N



Š ξm+#J e−(t−s)ξ −θk e N xξ dξ
€
− ei N π

m+#J 
0
`=0 Γ ` + N
−i #K−m−1
π
N
=−
#K
Ni X
2π m=0
‚
×
e−i
α−m s

∞
Z
m−#K
N
X
Bk θkm+1
k∈K
#K−m−1
π
N
Z
∞
ξm+#J e−(t−s) ξ
N
π
−θk e i N xξ
€
Š
E1, m+#J −sξN dξ
N
0
−e
i #K−m−1
π
N
Z
∞
ξ
m+#J
π
e
−(t−s) ξN −θk e−i N xξ
Œ
€
Š
N
E1, m+#J −sξ dξ .
N
0
The proof of (6.1) is established. „
Remark 6.1. Let us integrate (6.1) with respect to x on (−∞, 0]. We first compute, by using (2.8),
$$\int_{-\infty}^{0}K_m(x\xi)\,dx = -\frac{1}{\xi}\sum_{k\in K}B_k\theta_k^{m}\Big(e^{-i\frac{\#K-m}{N}\pi}-e^{i\frac{\#K-m}{N}\pi}\Big) = \frac{2i}{\xi}\sin\!\Big(\frac{\#K-m}{N}\pi\Big)\beta_m = \begin{cases} 0 & \text{if } 1\le m\le \#K,\\[1mm] \dfrac{2i}{\xi}\sin\!\Big(\dfrac{\#K}{N}\pi\Big) & \text{if } m=0.\end{cases}$$
We then obtain
$$\mathbb{P}\{T(t)\in ds,\,X(t)\le 0\}/ds = \frac{N\sin\!\big(\frac{\#K}{N}\pi\big)}{\pi s^{\#K/N}}\int_0^{\infty}\xi^{\#J-1}e^{-(t-s)\xi^N}E_{1,\frac{\#J}{N}}\big(-s\xi^N\big)\,d\xi = \frac{N\sin\!\big(\frac{\#K}{N}\pi\big)}{\pi s^{\#K/N}}\sum_{\ell=0}^{\infty}\frac{(-s)^{\ell}}{\Gamma\big(\ell+\frac{\#J}{N}\big)}\int_0^{\infty}\xi^{\ell N+\#J-1}e^{-(t-s)\xi^N}\,d\xi = \frac{\sin\!\big(\frac{\#K}{N}\pi\big)}{\pi s^{\#K/N}(t-s)^{\#J/N}}\sum_{\ell=0}^{\infty}\Big(-\frac{s}{t-s}\Big)^{\ell}.$$
In the foregoing equality we must assume 0 < s < t/2 in order to make the series convergent. From this, we extract
$$\mathbb{P}\{T(t)\in ds,\,X(t)\le 0\}/ds = \frac{\sin\!\big(\frac{\#K}{N}\pi\big)}{\pi t}\Big(\frac{t-s}{s}\Big)^{\frac{\#K}{N}}. \tag{6.4}$$
We retrieve Theorem 14 of [11].
Remark 6.2. Let us evaluate P{T(t) ∈ ds, X(t) ∈ dx}/(ds dx) at x = 0. For 0 ≤ m ≤ #K,
$$K_m(0) = e^{-i\frac{\#K-m-1}{N}\pi}\sum_{k\in K}B_k\theta_k^{m+1} - e^{i\frac{\#K-m-1}{N}\pi}\sum_{k\in K}B_k\theta_k^{m+1} = -2i\sin\!\Big(\frac{\#K-m-1}{N}\pi\Big)\beta_{m+1}.$$
Observing that $\sin\big(\frac{\#K-m-1}{N}\pi\big)=0$ if m = #K − 1, in view of (2.8), (2.9) and (2.10), we get
$$\mathbb{P}\{T(t)\in ds,\,X(t)\in dx\}/ds\Big|_{x=0} = \frac{N}{\pi}\sin\!\Big(\frac{\pi}{N}\Big)\alpha_{-\#K}\beta_{\#K+1}\int_0^{\infty}\xi^N e^{-(t-s)\xi^N}E_{1,1}\big(-s\xi^N\big)\,d\xi = \frac{N}{\pi}\sin\!\Big(\frac{\pi}{N}\Big)\Big(\sum_{j\in J}\theta_j\Big)\int_0^{\infty}\xi^N e^{-t\xi^N}\,d\xi = \frac{\sin\!\big(\frac{\pi}{N}\big)\Gamma\!\big(\frac{1}{N}\big)}{N\pi t^{1+\frac{1}{N}}}\sum_{j\in J}\theta_j.$$
Thanks to (2.4) and (2.11), we see that
$$\mathbb{P}\{T(t)\in ds,\,X(t)\in dx\}/ds\Big|_{x=0} = \frac{p(t;0)}{t}$$
and we deduce
$$\mathbb{P}\{T(t)\in ds\,|\,X(t)=0\}/ds = \frac{\mathbb{1}_{(0,t)}(s)}{t},$$
that is, (T(t)|X(t) = 0) has the uniform law on (0, t). This is Theorem 13 of [11].
6.2 The case x ≥ 0
The case x ≥ 0 can be related to the case x ≤ 0 by using duality. Let us introduce the dual process $(X_t^*)_{t\ge 0}$ of $(X_t)_{t\ge 0}$ defined as $X_t^* = -X_t$ for any t ≥ 0. It is known that (see [11]):
• If N is even, the processes X and X^* are identical in distribution (because of the symmetry of the heat kernel p): $X^*\stackrel{d}{=}X$;
• If N is odd, we have the equalities in distribution $(X^+)^*\stackrel{d}{=}X^-$ and $(X^-)^*\stackrel{d}{=}X^+$, where X^+ is the pseudo-process associated with κ_N = +1 and X^- the one associated with κ_N = −1.
When N is even, we have {−θ_j, j ∈ J} = {θ_k, k ∈ K}. In this case, for any j ∈ J, there exists a unique k ∈ K such that θ_j = −θ_k, and then
$$A_j = \prod_{i\in J\setminus\{j\}}\frac{\theta_i}{\theta_i-\theta_j} = \prod_{i\in K\setminus\{k\}}\frac{-\theta_i}{-\theta_i+\theta_k} = \prod_{i\in K\setminus\{k\}}\frac{\theta_i}{\theta_i-\theta_k} = B_k$$
and
$$\alpha_m = \sum_{j\in J}A_j\theta_j^{m} = \sum_{k\in K}B_k(-\theta_k)^{m} = (-1)^m\beta_m.$$
When N is odd, we distinguish the roots of κ_N in the cases κ_N = +1 and κ_N = −1:
• For κ_N = +1, let θ_i^+, 1 ≤ i ≤ N, denote the roots of 1 and set J^+ = {i ∈ {1, . . . , N} : ℜ(θ_i^+) > 0} and K^+ = {i ∈ {1, . . . , N} : ℜ(θ_i^+) < 0};
• For κ_N = −1, let θ_i^-, 1 ≤ i ≤ N, denote the roots of −1 and set J^- = {i ∈ {1, . . . , N} : ℜ(θ_i^-) > 0} and K^- = {i ∈ {1, . . . , N} : ℜ(θ_i^-) < 0}.
We have {θ_j^-, j ∈ J^-} = {−θ_k^+, k ∈ K^+} and {θ_k^-, k ∈ K^-} = {−θ_j^+, j ∈ J^+}. In this case, for any j ∈ J^-, there exists a unique k ∈ K^+ such that θ_j^- = −θ_k^+, and then
$$A_j^- = \prod_{i\in J^-\setminus\{j\}}\frac{\theta_i^-}{\theta_i^--\theta_j^-} = \prod_{i\in K^+\setminus\{k\}}\frac{-\theta_i^+}{-\theta_i^++\theta_k^+} = \prod_{i\in K^+\setminus\{k\}}\frac{\theta_i^+}{\theta_i^+-\theta_k^+} = B_k^+$$
and similarly $A_j^+ = B_k^-$. Moreover, we have
$$\alpha_m^- = \sum_{j\in J^-}A_j^-(\theta_j^-)^{m} = \sum_{k\in K^+}B_k^+(-\theta_k^+)^{m} = (-1)^m\sum_{k\in K^+}B_k^+(\theta_k^+)^{m} = (-1)^m\beta_m^+$$
and similarly $\alpha_m^+ = (-1)^m\beta_m^-$.
Now, concerning the connection between sojourn time and duality, we have the following fact. Set
$$\widetilde{T}(t) = \int_0^t\mathbb{1}_{(0,+\infty)}(X(u))\,du \quad\text{and}\quad T^*(t) = \int_0^t\mathbb{1}_{[0,+\infty)}(X^*(u))\,du.$$
Since Spitzer's identity holds true when interchanging the closed interval [0, +∞) and the open interval (0, +∞), it is easy to see that T(t) and T̃(t) have the same distribution. On the other hand, we have
$$\widetilde{T}(t) = \int_0^t\mathbb{1}_{(0,+\infty)}(X(u))\,du = \int_0^t\mathbb{1}_{(-\infty,0)}(X^*(u))\,du = \int_0^t\big[1-\mathbb{1}_{[0,+\infty)}(X^*(u))\big]\,du = t-T^*(t).$$
We then deduce that T(t) and t − T^*(t) have the same distribution. Consequently, we can state the lemma below.
Lemma 6.1. The following identity holds:
$$\mathbb{P}\{T(t)\in ds,\,X(t)\in dx\}/(ds\,dx) = \mathbb{P}\{T^*(t)\in d(t-s),\,X^*(t)\in d(-x)\}/(ds\,dx).$$
As a result, the following theorem ensues.
Theorem 6.2. Assume N is even. The distribution of (T(t), X(t)) is given, for x ≥ 0, by
$$\mathbb{P}\{T(t)\in ds,\,X(t)\in dx\}/(ds\,dx) = \frac{Ni}{2\pi}\sum_{m=0}^{\#J}\beta_{-m}\,(t-s)^{\frac{m-\#J}{N}}\int_0^{\infty}\xi^{m+\#K}e^{-s\xi^N}J_m(x\xi)\,E_{1,\frac{m+\#K}{N}}\big(-(t-s)\xi^N\big)\,d\xi \tag{6.5}$$
where
$$J_m(z) = e^{-i\frac{\#J-m-1}{N}\pi}\sum_{j\in J}A_j\theta_j^{m+1}e^{-\theta_j e^{i\frac{\pi}{N}}z} - e^{i\frac{\#J-m-1}{N}\pi}\sum_{j\in J}A_j\theta_j^{m+1}e^{-\theta_j e^{-i\frac{\pi}{N}}z}.$$
PROOF
When N is even, we know that X^* is identical in distribution to X, and (T^*(t), X^*(t)) is then distributed like (T(t), X(t)). Thus, by (6.1) and Lemma 6.1, for x ≥ 0,
$$\mathbb{P}\{T(t)\in ds,\,X(t)\in dx\}/(ds\,dx) = \mathbb{P}\{T(t)\in d(t-s),\,X(t)\in d(-x)\}/(ds\,dx) = -\frac{Ni}{2\pi}\sum_{m=0}^{\#K}\alpha_{-m}\,(t-s)^{\frac{m-\#K}{N}}\int_0^{\infty}\xi^{m+\#J}e^{-s\xi^N}K_m(-x\xi)\,E_{1,\frac{m+\#J}{N}}\big(-(t-s)\xi^N\big)\,d\xi.$$
The discussion preceding Lemma 6.1 shows that
$$K_m(z) = e^{-i\frac{\#J-m-1}{N}\pi}\sum_{j\in J}A_j(-\theta_j)^{m+1}e^{\theta_j e^{i\frac{\pi}{N}}z} - e^{i\frac{\#J-m-1}{N}\pi}\sum_{j\in J}A_j(-\theta_j)^{m+1}e^{\theta_j e^{-i\frac{\pi}{N}}z}.$$
We see that $K_m(z) = (-1)^{m+1}J_m(-z)$ where the function J_m is written in Theorem 6.2. Finally, by replacing α_{-m} by (−1)^mβ_{-m} and #J, #K by #K, #J respectively (which actually coincide since N is even), (6.5) ensues. □
If N is odd, although the results are not justified, similar formulas can be stated. We find it interesting to produce them here. We set $T^{\pm}(t) = \int_0^t\mathbb{1}_{[0,+\infty)}(X^{\pm}(u))\,du$.
Theorem 6.3. Suppose that N is odd. The distribution of (T^+(t), X^+(t)) is given, for x ≥ 0, by
$$\mathbb{P}\{T^+(t)\in ds,\,X^+(t)\in dx\}/(ds\,dx) = \frac{Ni}{2\pi}\sum_{m=0}^{\#J^+}\beta^+_{-m}\,(t-s)^{\frac{m-\#J^+}{N}}\int_0^{\infty}\xi^{m+\#K^+}e^{-s\xi^N}J_m^+(x\xi)\,E_{1,\frac{m+\#K^+}{N}}\big(-(t-s)\xi^N\big)\,d\xi \tag{6.6}$$
where
$$J_m^+(z) = e^{-i\frac{\#J^+-m-1}{N}\pi}\sum_{j\in J^+}A_j^+(\theta_j^+)^{m+1}e^{-\theta_j^+e^{i\frac{\pi}{N}}z} - e^{i\frac{\#J^+-m-1}{N}\pi}\sum_{j\in J^+}A_j^+(\theta_j^+)^{m+1}e^{-\theta_j^+e^{-i\frac{\pi}{N}}z}.$$
PROOF
When N is odd, we know that $(X^+)^*\stackrel{d}{=}X^-$ and then $((T^+)^*(t),(X^+)^*(t))\stackrel{d}{=}(T^-(t),X^-(t))$. Thus, by (6.1) and Lemma 6.1, for x ≥ 0,
$$\mathbb{P}\{T^+(t)\in ds,\,X^+(t)\in dx\}/(ds\,dx) = \mathbb{P}\{T^-(t)\in d(t-s),\,X^-(t)\in d(-x)\}/(ds\,dx) = -\frac{Ni}{2\pi}\sum_{m=0}^{\#K^-}\alpha^-_{-m}\,(t-s)^{\frac{m-\#K^-}{N}}\int_0^{\infty}\xi^{m+\#J^-}e^{-s\xi^N}K_m^-(-x\xi)\,E_{1,\frac{m+\#J^-}{N}}\big(-(t-s)\xi^N\big)\,d\xi$$
where
$$K_m^-(z) = e^{-i\frac{\#K^--m-1}{N}\pi}\sum_{k\in K^-}B_k^-(\theta_k^-)^{m+1}e^{-\theta_k^-e^{i\frac{\pi}{N}}z} - e^{i\frac{\#K^--m-1}{N}\pi}\sum_{k\in K^-}B_k^-(\theta_k^-)^{m+1}e^{-\theta_k^-e^{-i\frac{\pi}{N}}z}.$$
As in the proof of Theorem 6.2, we can write $K_m^-(z) = (-1)^{m+1}J_m^+(-z)$ where the function J_m^+ is defined in Theorem 6.3. Finally, by replacing α^-_{-m} by (−1)^mβ^+_{-m} and #J^-, #K^- by #K^+, #J^+ respectively, (6.6) ensues. □
Formula (6.6) involves only quantities with associated ‘+’ signs. We have a similar formula for X −
by changing all ‘+’ into ‘−’. So, we can remove these signs in order to get a unified formula (this
is (6.5)) which is valid for even N and, at least formally, for odd N without sign.
Remark 6.3. Let us integrate (6.5) with respect to x on [0, ∞). We first calculate, recalling that $J_m(z)=(-1)^{m+1}K_m(-z)$ and referring to Remark 6.1,
$$\int_0^{\infty}J_m(x\xi)\,dx = (-1)^{m+1}\int_{-\infty}^{0}K_m(x\xi)\,dx = \begin{cases} 0 & \text{if } 1\le m\le \#J,\\[1mm] -\dfrac{2i}{\xi}\sin\!\Big(\dfrac{\#J}{N}\pi\Big) & \text{if } m=0.\end{cases}$$
Then,
$$\mathbb{P}\{T(t)\in ds,\,X(t)\ge 0\}/ds = \frac{N\sin\!\big(\frac{\#J}{N}\pi\big)}{\pi(t-s)^{\#J/N}}\int_0^{\infty}\xi^{\#K-1}e^{-s\xi^N}E_{1,\frac{\#K}{N}}\big(-(t-s)\xi^N\big)\,d\xi = \frac{N\sin\!\big(\frac{\#J}{N}\pi\big)}{\pi(t-s)^{\#J/N}}\sum_{\ell=0}^{\infty}\frac{\big(-(t-s)\big)^{\ell}}{\Gamma\big(\ell+\frac{\#K}{N}\big)}\int_0^{\infty}\xi^{N\ell+\#K-1}e^{-s\xi^N}\,d\xi = \frac{\sin\!\big(\frac{\#J}{N}\pi\big)}{\pi s^{\#K/N}(t-s)^{\#J/N}}\sum_{\ell=0}^{\infty}\Big(-\frac{t-s}{s}\Big)^{\ell}.$$
In the foregoing equality we must assume t/2 < s < t in order to make the series convergent. From this, we extract
$$\mathbb{P}\{T(t)\in ds,\,X(t)\ge 0\}/ds = \frac{\sin\!\big(\frac{\#J}{N}\pi\big)}{\pi t}\Big(\frac{s}{t-s}\Big)^{\frac{\#J}{N}} \tag{6.7}$$
and we retrieve Theorem 14 of [11]. By adding (6.4) and (6.7), we obtain the counterpart to Paul Lévy's famous arc-sine law stated in [11] (Corollary 9):
$$\mathbb{P}\{T(t)\in ds\}/ds = \frac{\sin\!\big(\frac{\#J}{N}\pi\big)}{\pi\,s^{\frac{\#K}{N}}(t-s)^{\frac{\#J}{N}}}\,\mathbb{1}_{(0,t)}(s).$$
6.3 Examples
In this part, we write out the distribution of the couple (T (t), X (t)) in the cases N = 3 and N = 4.
Example 6.1. Case N = 3. Let us recall that this case is not fully justified. Nevertheless, we find it
interesting to produce the formal corresponding results.
€
Š
3
• Suppose κ3 = 1. Using E1,1 −sξ3 = e−sξ and the values of Example 2.1, (6.1) writes, for x ≤ 0,
P{T (t) ∈ ds, X (t) ∈ dx}/(ds dx)
p ‚
Z∞
Š
€
3 −2/3
3
=
s
ξ e−(t−s)ξ K˜0 (xξ) E1, 1 −sξ3 dξ
3
2π
0
Z∞
Z
€
Š
−1/3
2 −(t−s)ξ3 ˜
3
+s
ξ e
K1 (xξ) E 2 −sξ dξ +
0
1, 3
∞
3 −tξ3
ξ e
Œ
K˜2 (xξ) dξ
0
where
‚
p
p Œ
p
3z p
3z
z
−z/2
cos
,
K˜0 (z) = −i 3 K0 (z) = e − e
+ 3 sin
2
2
‚
p
p Œ
p
3z p
3z
z
−z/2
cos
,
K˜1 (z) = −i 3 K1 (z) = −e + e
− 3 sin
2
2
p
p
3z
z
−z/2
K˜2 (z) = −i 3 K2 (z) = e + 2e
cos
.
2
For x ≥ 0, (6.6) gives
P{T (t) ∈ ds, X (t) ∈ dx}/(ds dx)
‚
Œ
Z∞
Z∞
€
Š
3
1
3
3
=
ξ2 e−sξ J˜0 (xξ) E1, 2 −(t − s)ξ3 dξ +
ξ3 e−tξ J˜1 (xξ) dξ
p
3
2π 3 t − s 0
0
where
p
3z
−z/2
˜
J0 (z) = i J0 (z) = 2 e
sin
,
‚ 2 p
p Œ
p
3z
3z
−z/2
J˜1 (z) = −i J1 (z) = e
3 cos
− sin
.
2
2
• Suppose κ3 = −1. Likewise, for x ≤ 0,
P{T (t) ∈ ds, X (t) ∈ dx}/(ds dx)
‚
Œ
Z∞
Z∞
€
Š
3
1
2 −(t−s)ξ3 ˜
3
3 −tξ3 ˜
=
ξ e
K0 (xξ) E1, 2 −sξ dξ +
ξ e
K1 (xξ) dξ
p
3
2π 3 s 0
0
where
K˜0 (z) = −i K0 (z) = −2 e
z/2
922
sin
p
3z
2
,
‚
p Œ
p
p
3
z
3z
+ sin
.
3 cos
K˜1 (z) = −i K1 (z) = ez/2
2
2
For x ≥ 0,
P{T (t) ∈ ds, X (t) ∈ dx}/(ds dx)
p ‚
Z∞
€
Š
3
3
−2/3
=
ξ e−sξ J˜0 (xξ) E1, 1 −(t − s)ξ3 dξ
(t − s)
3
2π
0
Z
Z∞
€
Š
2 −sξ3 ˜
3
−1/3
ξ e
J1 (xξ) E 2 −(t − s)ξ dξ +
+ (t − s)
0
1, 3
∞
3
Œ
ξ3 e−tξ J˜2 (xξ) dξ
0
where
‚
p
p Œ
p
3z p
3z
−z
z/2
J˜0 (z) = i 3 J0 (z) = e − e
cos
,
− 3 sin
2
2
‚
p
p Œ
p
3z p
3z
−z
z/2
J˜1 (z) = −i 3 J1 (z) = −e + e
cos
,
+ 3 sin
2
2
p
p
3z
−z
z/2
J˜2 (z) = i 3 J2 (z) = e + 2 e cos
.
2
Example 6.2. Case N = 4. Referring to Example 2.2, formula (6.1) writes, for x ≤ 0,
P{T (t) ∈ ds, X (t) ∈ dx}/(ds dx)
‚
Z∞
€
Š
2
1
4
=
ξ2 e−(t−s)ξ K˜0 (xξ) E1, 1 −sξ4 dξ
p
2
π
s 0
Œ
p Z∞
Z∞
€
Š
2
3 −(t−s)ξ4 ˜
4
4 −tξ4 ˜
ξ e
K1 (xξ) E1, 3 −sξ dξ +
+p
ξ e
K2 (xξ) dξ
4
4
s 0
0
where
K˜0 (z) = −i K0 (z) = ez − cos z − sin z,
K˜1 (z) = −i K1 (z) = −ez + cos z − sin z,
K˜2 (z) = −i K2 (z) = ez + cos z + sin z.
For x ≥ 0, (6.5) reads
P{T (t) ∈ ds, X (t) ∈ dx}/(ds dx)
‚
Z∞
€
Š
1
2
4
ξ2 e−sξ J˜0 (xξ) E1, 1 −(t − s)ξ4 dξ
=
p
2
π
t −s 0
Œ
p
Z∞
Z∞
€
Š
2
4
4
ξ3 e−sξ J˜1 (xξ) E1, 3 −(t − s)ξ4 dξ +
ξ4 e−tξ J˜2 (xξ) dξ
+p
4
4
t −s 0
0
where
J˜0 (z) = i J0 (z) = e−z − cos z + sin z,
J˜1 (z) = −i J1 (z) = −e−z + cos z + sin z,
J˜2 (z) = i J2 (z) = e−z + cos z − sin z.
7 Appendix
Lemma 7.1 (Spitzer). Let (ξ_k)_{k≥1} be a sequence of independent identically distributed random variables and set X_0 = 0 and T_0 = 0 and, for any k ≥ 1,
$$X_k = \xi_1+\cdots+\xi_k, \qquad T_k = \sum_{j=1}^{k}\mathbb{1}_{[0,+\infty)}(X_j).$$
Then, for µ ∈ ℝ, ν > 0 and |z| < 1,
$$\sum_{k=0}^{\infty}\mathbb{E}\big[e^{i\mu X_k-\nu T_k}\big]z^k = \exp\!\left(\sum_{k=1}^{\infty}\mathbb{E}\big[e^{i\mu X_k-\nu k\mathbb{1}_{[0,+\infty)}(X_k)}\big]\frac{z^k}{k}\right), \tag{7.1}$$
$$\sum_{k=0}^{\infty}\mathbb{E}\big[e^{i\mu X_k-\nu T_k}\mathbb{1}_{[0,+\infty)}(X_k)\big]z^k = \frac{1}{e^{\nu}-1}\left[e^{\nu}-\exp\!\left(-\sum_{k=1}^{\infty}\big(1-e^{-\nu k}\big)\mathbb{E}\big[e^{i\mu X_k}\mathbb{1}_{[0,+\infty)}(X_k)\big]\frac{z^k}{k}\right)\right], \tag{7.2}$$
$$\sum_{k=0}^{\infty}\mathbb{E}\big[e^{i\mu X_k-\nu T_k}\mathbb{1}_{(-\infty,0)}(X_k)\big]z^k = \frac{e^{\nu}}{e^{\nu}-1}\left[\exp\!\left(\sum_{k=1}^{\infty}\big(1-e^{-\nu k}\big)\mathbb{E}\big[e^{i\mu X_k}\mathbb{1}_{(-\infty,0)}(X_k)\big]\frac{z^k}{k}\right)-1\right]. \tag{7.3}$$
PROOF
Formula (7.1) is stated in [21] without proof. So, we produce a proof below which is rather similar
to one lying in [21] related to the maximum functional of the X k ’s.
• Step 1. Set, for any (x 1 , . . . , x n ) ∈ Rn and σ ∈ Sn (Sn being the set of the permutations of
1, 2, . . . , n),
‚ k
Œ
n
X
X
U(x 1 , . . . , x n ) =
1l[0,∞)
xj
k=1
and
V (σ; x 1 , . . . , x n ) =
nσ
X
j=1
‚
#ck (σ)1l[0,∞)
X
Œ
xj .
j∈ck (σ)
k=1
In the definition of V above, the permutation σ is decomposed into nσ cycles:
(c1 (σ))(c2 (σ)) . . . (cnσ (σ)).
In view of Theorem 2.3 in [21], we have the equality between the two following sets:
{U(σ(x 1 ), . . . , σ(x n )), σ ∈ Sn } = {V (σ; x 1 , . . . , x n ), σ ∈ Sn }.
We then deduce, for any bounded Borel functions φ and F ,
– ‚ n
Œ
™
X
1 X
E φ(X n )F (U(ξ1 , . . . , ξn )) =
E φ
ξσ( j) F (V (σ; ξ1 , . . . , ξn )) .
n! σ∈S
j=1
n
924
σ =
In particular, for φ(x) = e iµx and F (x) = e−ν x (where µ ∈ R and ν > 0 are fixed),
ŒŒ™
‚
–
‚
nσ X
nσ
•
˜
Pn
Pk
X
X
X
1 X
iµX n −ν k=1 1l[0,+∞) ( j=1 ξ j )
ξj
E exp iµ
ξj − ν
#ck (σ)1l[0,∞)
E e
=
n! σ∈S
k=1
k=1
j∈c
(σ)
j∈c
(σ)
n
k
k
ŒŒ™
–
‚
‚
nσ
X
X
Y
X
1
=
ξj
E exp iµ
ξ j − ν (#ck (σ))1l[0,∞)
n! σ∈S k=1
nσ
1 X Y
=
n! σ∈S
n
j∈ck (σ)
j∈ck (σ)
n
–
‚
E exp iµ
#c
k (σ)
X
ξ j − ν (#ck (σ))1l[0,∞)
‚ #ck (σ)
X
ξj
.
j=1
j=1
k=1
ŒŒ™
Denote by r` (σ) the number of cycles of length ` in σ for any ` ∈ {1, . . . , n}. We have r1 (σ) +
2r2 (σ) + · · · + nrn (σ) = n. Then,
n € ”
—
”
—Šr` (σ)
1 X Y
E e iµX n −ν Tn =
E e iµX l −ν` 1l[0,∞) (X ` )
n! σ∈S `=1
n
=
1
X
n!
Nk1 ,...,kn
n € ”
Y
—Šk`
E e iµX ` −ν` 1l[0,∞) (X ` )
`=1
k1 ,...,kn ≥0:
k1 +2k2 +···+nkn =n
where Nk1 ,...,kn is the number of the permutations σ of n objects satisfying r1 (σ) = k1 , . . . , rn (σ) =
kn ; this number is equal to
Nk1 ,...,kn =
n!
(k1 !1k1 )(k2 !2k2 ) . . . (kn !nkn )
.
Then,
n
Y
X
—
”
E e iµX n −ν Tn =
k1 ,...,kn ≥0:
`=1
k1 +2k2 +···+nkn =n
1
k` !`k`
€ ”
—Šk`
E e iµX ` −ν` 1l[0,∞) (X ` )
.
• Step 2. Therefore, the identity between the generating functions follows: for |z| < 1,
∞
X
”
—
E e iµX n −ν Tn z n =
n
Y
1
X
n≥0,k1 ,...,kn ≥0: `=1
k1 +2k2 +···+nkn =n
n=0
X
∞
Y
1
k` !
‚
‚
”
E e
iµX ` −ν` 1l[0,∞) (X ` )
— z`
Œk`
E e iµX ` −ν` 1l[0,∞) (X ` )
k
!
`
`
k1 ,k2 ,···≥0 `=1
–
‚
Œ
™
∞
∞
Y
X
”
— z` k
1
=
E e iµX ` −ν` 1l[0,∞) (X ` )
k!
`
`=1 k=1
‚
Œ
∞
Y
”
— z`
=
exp E e iµX ` −ν` 1l[0,∞) (X ` )
`
`=1
=
925
”
— z`
`
Œk`
∞
X
— zn
”
= exp
E e iµX n −ν n1l[0,+∞) (X n )
n
n=1
!
.
The proof of (7.1) is finished.
• Step 3.
Using the elementary identity e a1lA(x) − 1 = (e a − 1)1lA(x) and noticing that Tk = Tk−1 + 1l[0,+∞) (X k ),
we get for any k ≥ 1,
–
™
ν1l[0,+∞) (X k )
”
—
€
Š—
e
−
1
1 ” € iµX −ν T Š
k
k−1 − E e iµX k −ν Tk
E e iµX k −ν Tk 1l[0,+∞) (X k ) = E e iµX k −ν Tk
E
e
.
=
eν − 1
eν − 1
Now, since X k = X k−1 + ξk where X k−1 and ξk are independent and ξk have the same distribution
as ξ1 , we have, for k ≥ 1,
€
Š
€
Š €
Š
E e iµX k −ν Tk−1 = E e iµξ1 E e iµX k−1 −ν Tk−1 .
Therefore,
∞
X
—
”
E e iµX k −ν Tk 1l[0,+∞) (X k ) z k =
k=1
=
=
1
∞ € ”
X
—
”
—Š
E e iµX k −ν Tk−1 − E e iµX k −ν Tk z k
eν − 1 k=1
1
eν − 1
1
eν
−1
€
E e
iµξ1
∞
∞
X
ŠX
”
—
”
—
E e iµX k−1 −ν Tk−1 z k −
E e iµX k −ν Tk z k
k=1
!
k=1
!
∞
Š
ŠX
—
€ €
”
z E e iµξ1 − 1
E e iµX k −ν Tk z k + 1 .
(7.4)
k=0
By putting (7.1) into (7.4), we extract
∞
X
—
”
E e iµX k −ν Tk 1l[0,+∞) (X k ) z k =
k=0
1
eν
−1
”
€
€
ŠŠ
—
eν − 1 − z E e iµξ1 S(µ, ν, z)
where we set
(7.5)
!
∞
X
”
— zk
S(µ, ν, z) = exp
E e iµX k −ν k1l[0,+∞) (X k )
.
k
k=1
” P∞
—
Next, using the elementary identity 1 − ζ = exp[log(1 − ζ)] = exp − k=1 ζk /k valid for |ζ| < 1,
€
1 − z E e iµξ1
Š
∞ ” €
X
Š—k z k
= exp −
E e iµξ1
k
k=1
!
∞
X
€
Š zk
= exp −
E e iµX k
k
k=1
and then
€
€
ŠŠ
1 − z E e iµξ1 S(µ, ν, z) = exp
∞
X
”
— zk
E e iµX k −ν k1l[0,+∞) (X k ) − e iµX k
k
k=1
926
!
!
∞ €
X
Š ”
— zk
= exp −
1 − e−ν k E e iµX k 1l[0,+∞) (X k )
k
k=1
!
.
(7.6)
Hence, by putting (7.6) into (7.5), formula (7.2) entails.
By subtracting (7.5) from (7.1), we obtain the intermediate representation
∞
X
”
—
E e iµX k −ν Tk 1l(−∞,0) (X k ) z k =
k=0
1
eν
−1
”€
€
ŠŠ
—
eν − z E e iµξ1 S(µ, ν, z) − eν .
By writing, as previously,
ν
€
e −zE e
iµξ1
Š
∞
X
€
Š e−ν k z k
iµX k
= e exp −
E e
k
k=1
!
ν
,
we find
€
ν
€
e −zE e
iµξ1
ŠŠ
!
S(µ, ν, z) = e exp
∞
X
”
— zk
E e iµX k −ν k1l[0,+∞) (X k ) − e iµX k −ν k
k
k=1
!
= eν exp
∞ €
X
— zk
Š ”
1 − e−ν k E e iµX k 1l(−∞,0) (X k )
k
k=1
ν
.
Finally, (7.3) ensues. „
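Identity (7.1) can also be checked by simulation for an ordinary random walk. The Monte Carlo sketch below is our own illustration (standard Gaussian steps; the parameters µ, ν, z and the truncation level are arbitrary); both sides should agree up to Monte Carlo and truncation error.

```python
# Minimal Monte Carlo sketch (our illustration, Gaussian steps): numerical check of
# Spitzer's identity (7.1) for an ordinary random walk.
import numpy as np

rng = np.random.default_rng(0)
mu, nu, z = 0.7, 0.5, 0.6
K, n_paths = 30, 200_000

X = np.cumsum(rng.standard_normal((n_paths, K)), axis=1)   # X_1, ..., X_K per path
T = np.cumsum(X >= 0, axis=1)                              # T_k = #{j <= k : X_j >= 0}

# Left-hand side: sum_{k>=0} E[exp(i mu X_k - nu T_k)] z^k (the k = 0 term equals 1).
lhs = 1.0 + sum(z**k * np.mean(np.exp(1j * mu * X[:, k - 1] - nu * T[:, k - 1]))
                for k in range(1, K + 1))

# Right-hand side: exp( sum_{k>=1} E[exp(i mu X_k - nu k 1_{X_k >= 0})] z^k / k ).
rhs = np.exp(sum(z**k / k *
                 np.mean(np.exp(1j * mu * X[:, k - 1] - nu * k * (X[:, k - 1] >= 0)))
                 for k in range(1, K + 1)))

print(lhs)
print(rhs)
```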
Lemma 7.2. The following identities hold:
$$\beta_{\#K} = (-1)^{\#K-1}\prod_{k\in K}\theta_k, \qquad \beta_{\#K+1} = (-1)^{\#K-1}\Big(\prod_{k\in K}\theta_k\Big)\Big(\sum_{k\in K}\theta_k\Big).$$
PROOF
We label the set K as {1, 2, 3, . . . , #K}. By (2.5), we know that the Bk ’s solve a Vandermonde system.
Then, by Cramer’s formulas, we can write them as fractions of some determinants: Bk = Vk /V where
1
1
...
1
1
1
...
1 ...
1 θ1
θ1
. . . θk−1 0 θk+1 . . .
θ#K ...
θ#K 2
2
2
2
2
θ2
. . . θk−1 0 θk+1 . . .
θ#K
...
θ#K and V = θ1
V = 1
.
k
.
.
.
.
.
.. ..
.
.
.
.
.
.
.
.
.
.
.
.
#K−1
#K−1
#K−1
#K−1
θ #K−1 . . . θ #K−1 θ
... θ
0 θ
... θ
1
1
#K
k−1
k+1
#K
By expanding the determinant Vk with respect to its kth column and next factorizing it suitably, we
easily see that
θ1
. . . θk−1
θk+1 . . .
θ#K 2
2
2
θ2
. . . θk−1
θk+1
...
θ#K
1
k+1 ..
..
.. Vk = (−1)
..
.
.
.
. θ #K−1 . . . θ #K−1 θ #K−1 . . . θ #K−1 1
k−1
927
k+1
#K
1
θ1
Q
i∈K θi θ 2
k+1
= (−1)
1
..
θk
.
θ #K−2
1
...
...
...
...
1
1
θk−1
2
θk−1
..
.
θk+1
2
θk+1
..
.
...
...
...
#K−2
#K−2
θk−1
θk+1
...
θ#K 2
θ#K
.
.. . θ #K−2 1
#K
With this at hands, we have
β#K =
X
Bk θk#K
k∈K
1
θ1
Q
X
2
θ
k
(−1)k+1 θk#K−1 θ1
= k∈K
..
V
k∈K
.
θ #K−2
1
...
...
...
...
1
1
θk−1
2
θk−1
..
.
θk+1
2
θk+1
..
.
...
...
...
#K−2
#K−2
θk−1
θk+1
...
θ#K 2
θ#K
.
.. . #K−2 θ
1
#K
We can observe that the sum lying on the above right-hand side is nothing but the expansion of the
determinant V with respect
to its last row multiplied by the sign (−1)#K−1 . This immediately ensues
Q
that β#K = (−1)#K−1 k∈K θk . Similarly,
1
θ1
Q
X
X
2
θ
k
β#K+1 =
Bk θk#K+1 = k∈K
(−1)k+1 θk#K θ1
.
.
V
k∈K
k∈K
.
θ #K−2
1
...
...
...
...
1
1
θk−1
2
θk−1
..
.
θk+1
2
θk+1
..
.
...
...
...
#K−2
#K−2
θk−1
θk+1
...
θ#K 2
θ#K
.
.. . θ #K−2 1
#K
The above sum is the expansion with respect to its last row, multiplied by the sign (−1)#K−1 , of the
determinant V 0 defined as
1
...
1 θ1
...
θ#K 2
θ2
...
θ#K
1
0
.. .
V = ..
.
. θ #K−2 . . . θ #K−2 1
#K
θ #K
...
θ #K 1
#K
0
Let R0 , R1 , R2 , . . . , R#K−2 , R#K−1 denote the rows of V . We perform the substitution R#K−1 ←
P#K
R#K−1 + `=2 (−1)` σ` R#K−` where the σ` ’s are defined by (2.3). This substitution does not affect the value of V 0 and it transforms, e.g., the first term of the last row into
θ1#K +
#K
X
(−1)` σ` θ1#K−` .
`=2
Recall that σ` =
into
θ1
P
1≤k1 <···<k` ≤#K
X
2≤k2 <···<k` ≤#K
θk1 . . . θk` . We decompose σ` , by isolating the terms involving θ1 ,
θk2 . . . θk` +
X
2≤k1 <k2 <···<k` ≤#K
928
0
θk1 θk2 . . . θk` = θ1 σ`−1
+ σ`0
0
where we set σ#K
= 0 and σ`0 =
θ1#K
P
2≤k1 <k2 <···<k` ≤#K
θk1 θk2 . . . θk` . Therefore, we have
#K
#K
#K
X
X
X
`
#K−`
#K
` 0
#K−`+1
+
(−1) σ` θ1
= θ1 +
(−1) σ`−1 θ1
+
(−1)` σ`0 θ1#K−`
`=2
`=2
`=2
!
X
= θ1#K + σ10 θ1#K−1 = θ1#K−1 (θ1 + σ10 ) = θ1#K−1
θk .
k∈K
0
The foregoing
manipulation
works similarly for each
Š of V . So, we deduce that
€P
Š
€Qterm ofŠ€the
P last row
0
#K−1
V =
k∈K θk . „
k∈K θk
k∈K θk V and finally β#K+1 = (−1)
Lemma 7.3. For any integer m ≤ N − 1 and any x ≥ 0,
$$\int_0^{\infty}e^{-\lambda u}I_{j,m}(u;x)\,du = \lambda^{-\frac{m}{N}}e^{-\theta_j\sqrt[N]{\lambda}\,x}. \tag{2.15}$$
PROOF
This formula is proved in [13] for 0 ≤ m ≤ N − 1. To prove that it holds true also for negative m,
we directly compute the Laplace transform of I j,m (u; x). We have
‚
Œ
Z∞
Z ∞ N −m−1
Z ∞ N −m−1
π
π
Ni
ξ
ξ
m
iN
−i N
π
π
−i m
i
−λu
−θ
e
xξ
−θ
e
xξ
j
j
e
I j,m (u; x) du =
e N
e
dξ − e N
e
dξ .
2π
ξN + λ
ξN + λ
0
0
0
Let us integrate the function H : z →
the contour ΓR = {ρe
iϕ
z M −1 −az
e
z N +λ
for fixed a and M such that ℜ(a) > 0 and M > 0 on
∈ C : ϕ = 0, ρ ∈ [0, R]} ∪ {ρe iϕ ∈ C : ϕ ∈ (0, − 2π
), ρ = R} ∪ {ρe iϕ ∈ C :
N
ϕ = − 2π
, ρ ∈ (0, R]}. We get, by residues theorem,
N
Z ∞ M −1
Z ∞ M −1
 p
‹
2π
z
z
N
π
−2i M
−i Nπ
−az
−ae−i N z
N
dz
=
2iπ
Residue
H,
−
e
dz
+
e
e
λ
e
N
N
0 z +λ
0 z +λ
p −i π
ŠM −N −i M −N π
N
2iπ € p
N
N
=
λ
e N e−a λ e
N
2π M −1 −i M π −a Npλ e−i Nπ
=
λN e N e
.
Ni
π
For M = N − m and a = θ j e i N x, this yields
Z∞
m
m
m
e−λu I j;m (u; x) du = −e−i N π λ− N × (−e i N π )e−θ j
p
N
λx
m
= λ− N e−θ j
p
N
λx
.
0
Hence, (2.15) is valid for m ≤ N − 1. „
References
[1] Abramowitz, M. and Stegun, I.-A.: Handbook of mathematical functions with formulas, graphs,
and mathematical tables. Dover Publications, 1992. MR1225604
[2] Beghin, L., Hochberg, K.-J. and Orsingher, E.: Conditional maximal distributions of processes
related to higher-order heat-type equations. Stochastic Process. Appl. 85 (2000), no. 2, 209–
223. MR1731022
[3] Beghin, L. and Orsingher, E.: The distribution of the local time for “pseudoprocesses” and its
connection with fractional diffusion equations. Stochastic Process. Appl. 115 (2005), 1017–
1040. MR2138812
[4] Beghin, L., Orsingher, O. and Ragozina, T.: Joint distributions of the maximum and the process
for higher-order diffusions. Stochastic Process. Appl. 94 (2001), no. 1, 71–93. MR1835846
[5] Bohr, H.: Über die gleichmässige Konvergenz Dirichletscher Reihen. J. für Math. 143 (1913),
203–211.
[6] Cammarota, V. and Lachal, A.: arXiv:1001.4201 [math.PR].
[7] Erdélyi, A., Ed.: Higher transcendental functions, vol. III, Robert Krieger Publishing Company,
Malabar, Florida, 1981. MR0698781
[8] Hochberg, K.-J.: A signed measure on path space related to Wiener measure. Ann. Probab. 6
(1978), no. 3, 433–458. MR0490812
[9] Hochberg, K.-J. and Orsingher, E.: The arc-sine law and its analogs for processes governed
by signed and complex measures. Stochastic Process. Appl. 52 (1994), no. 2, 273–292.
MR1290699
[10] Krylov, V. Yu.: Some properties of the distribution corresponding to the equation ∂u/∂t = (−1)^{q+1} ∂^{2q}u/∂x^{2q}. Soviet Math. Dokl. 1 (1960), 760–763. MR0118953
[11] Lachal, A.: Distribution of sojourn time, maximum and minimum for pseudo-processes governed by higher-order heat-type equations. Electron. J. Probab. 8 (2003), paper no. 20, 1–53. MR2041821
[12] Lachal, A.: Joint law of the process and its maximum, first hitting time and place of a half-line for the pseudo-process driven by the equation ∂/∂t = ±∂^N/∂x^N. C. R. Acad. Sci. Paris, Sér. I 343 (2006), no. 8, 525–530. MR2267588
[13] Lachal, A.: First hitting time and place, monopoles and multipoles for pseudo-processes driven by the equation ∂/∂t = ±∂^N/∂x^N. Electron. J. Probab. 12 (2007), 300–353. MR2299920
[14] Lachal, A.: First hitting time and place for the pseudo-process driven by the equation ∂/∂t = ±∂^n/∂x^n subject to a linear drift. Stoch. Proc. Appl. 118 (2008), 1–27. MR2376250
[15] Nakajima, T. and Sato, S.: On the joint distribution of the first hitting time and the first hitting
place to the space-time wedge domain of a biharmonic pseudo process. Tokyo J. Math. 22
(1999), no. 2, 399–413. MR1727883
[16] Nikitin, Ya. Yu. and Orsingher, E.: On sojourn distributions of processes related to some higher-order heat-type equations. J. Theoret. Probab. 13 (2000), no. 4, 997–1012. MR1820499
[17] Nishioka, K.: Monopole and dipole of a biharmonic pseudo process. Proc. Japan Acad. Ser. A
72 (1996), 47–50. MR1391894
[18] Nishioka, K.: The first hitting time and place of a half-line by a biharmonic pseudo process.
Japan J. Math. 23 (1997), 235–280. MR1486514
[19] Nishioka, K.: Boundary conditions for one-dimensional biharmonic pseudo process. Electronic
Journal of Probability 6 (2001), paper no. 13, 1–27. MR1844510
[20] Orsingher, E.: Processes governed by signed measures connected with third-order “heat-type”
equations. Lithuanian Math. J. 31 (1991), no. 2, 220–231. MR1161372
[21] Spitzer, F.: A combinatorial lemma and its application to probability theory. Trans. Amer. Math.
Soc. 82 (1956), 323–339. MR0079851