Parametric estimation problem for a time-periodic signal with additive periodic noise

Parametric estimation problem for a time-periodic signal with additive periodic noise
Khalil EL WALED
University of Rennes 2
Dynstoch 2014, 10-12 September, Coventry
Outline
1. Introduction
2. Maximum likelihood estimation
   - Likelihood and convergence in probability
   - Convergence in the case f(t, θ) = θf(t)
3. Maximum contrast estimation
   - Definition of the contrast
   - Study of the case f(t, θ) = θf(t), σ(t) = 1
4. Simulation
Introduction
Model of a periodic signal disturbed by noise whose variance is periodic:

dζ_t = f(t, θ) dt + σ(t) dW_t,  t ≥ 0,   (1)

where
1. f(·, ·) : R × R → R is continuous, periodic in the first variable with period P;
2. σ(·) : R → R is continuous and periodic with the same period P;
3. W = {W_t, t ≥ 0} is a standard Brownian motion;
4. θ ∈ Θ is an unknown parameter, Θ a compact subset of R.
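As an illustration, a path of model (1) can be simulated with the Euler-Maruyama scheme. The sketch below is not part of the talk; the choices f(t, θ) = θ cos(2πt) (period P = 1) and σ(t) = 1 are the ones used later in the simulation section.

```python
import math
import random

def simulate_path(theta, T=100.0, dt=1e-2, seed=1,
                  f=lambda t: math.cos(2 * math.pi * t),  # period P = 1
                  sigma=lambda t: 1.0):
    """Euler-Maruyama scheme for d zeta_t = theta*f(t) dt + sigma(t) dW_t."""
    random.seed(seed)
    n = round(T / dt)
    zeta = [0.0]
    for i in range(n):
        t = i * dt
        dW = random.gauss(0.0, math.sqrt(dt))  # Brownian increment over [t, t+dt]
        zeta.append(zeta[-1] + theta * f(t) * dt + sigma(t) * dW)
    return zeta
```

For θ = 0 the simulated path is simply a standard Brownian motion sampled on the grid t_i = iΔ.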
Our target is
1. estimation of the unknown parameter θ from a continuous observation on the interval [0, T];
2. estimation of the unknown parameter θ from a discrete observation on the interval [0, T].
We use the maximum likelihood method for the first estimation and the maximum contrast method for the second. Then we show the consistency of these estimators.
Application: consider ξ_t such that dζ_t = dξ_t / ξ_t; note however that ζ_t − ζ_0 ≠ ln(ξ_t) − ln(ξ_0).
So ξ_t is the solution of the geometric linear SDE

dξ_t = f(t, θ) ξ_t dt + σ(t) ξ_t dW_t,   (2)

and the estimation of the drift component in model (1) is identical to this estimation in model (2).
Equations of this type appear in several areas:
1. finance (Karatzas and Shreve, 1991; Klebaner, 2006), e.g. the Black-Scholes-Merton model;
2. mechanics (Has'minskii, 1980; Jankunas and Khas'minskii, 1997).
In the continuous case, several works on parametric estimation are available (see for instance Ibragimov and Has'minskii, 1981; Kutoyants, 1984). However, it is difficult to obtain a complete observation of the sample path, so problems of discretization are considered.
On the drift estimation for a diffusion process, Le Breton (1976) has shown that maximum likelihood estimators based on discrete schemes have asymptotically the same behaviour as the maximum likelihood estimator based on the continuous observation.
Kasonga (1988) used the least squares method to show the consistency of estimators based on discrete schemes. The case of ergodic diffusion models is studied in Dacunha-Castelle and Florens-Zmirou (1986) and Florens-Zmirou (1989).
Using maximum contrast estimation for small-variance diffusion models, Genon-Catalot (1990) obtained asymptotic results under some classical assumptions. Harison (1996) used this method to estimate the drift parameter of one-dimensional nonstationary Gaussian diffusion models.
In these works σ(t) is assumed to be positive.
Maximum likelihood estimation
Likelihood and convergence in probability
Recall that ζ_t is given by

dζ_t = f(t, θ) dt + σ(t) dW_t,  t ≥ 0.

To define the likelihood function we can use Theorem 7.18 of Liptser and Shiryaev (2001). We apply this theorem to the two processes

dζ_t^θ = f(t, θ) dt + σ(t) dW_t,   (3)
dη_t = σ(t) dW_t.   (4)
Under the condition

sup_{t∈[0,P]} |f(t, θ)/σ(t)| 1_{σ(t)≠0} < ∞,

these two processes fulfil the conditions of Theorem 7.18, so μ_θ^T ~ ν^T, where

μ_θ^T := L(ζ_t, 0 ≤ t ≤ T),  ν^T := L(η_t, 0 ≤ t ≤ T).

In addition, the conditions of the corollary which follows Theorem 7.18 are satisfied and we have, P-a.s.,

dμ_θ/dν (ζ^θ) = exp( ∫_0^T [f(s, θ)/σ²(s)] 1_{σ(s)≠0} dζ_s^θ − (1/2) ∫_0^T ρ²(s, θ) ds ),

where ρ(s, θ) := [f(s, θ)/σ(s)] 1_{σ(s)≠0}.
Denote the likelihood function

L_T(θ) := exp( ∫_0^T [f(s, θ)/σ²(s)] 1_{σ(s)≠0} dζ_s^θ − (1/2) ∫_0^T ρ²(s, θ) ds ).   (5)

Using the assumptions on f(·, ·) and σ(·), there exists θ̂_T such that

θ̂_T = arg sup_{θ∈Θ} L_T(θ).

To show the convergence in P_θ of this estimator: first we check that the log-likelihood is a contrast in the sense of Dacunha-Castelle and Duflo, 1983 (Definition 3.2.7), and then we apply a version of Theorem 3.2.4 of Dacunha-Castelle and Duflo, 1983 (see also Theorem 5.7 of van der Vaart, 2005).
For α ∈ Θ let Λ_T(α) := (1/T) log(L_T(α)). Then

Λ_T(α) = (1/T) ∫_0^T [f(s, α)/σ²(s)] 1_{σ(s)≠0} dζ_s^θ − (1/2T) ∫_0^T ρ²(s, α) ds
       = (1/T) ∫_0^T [ρ(s, α)ρ(s, θ) − (1/2)ρ²(s, α)] ds + (1/T) ∫_0^T ρ(s, α) dW_s.

Take T = nP; then Λ_n(α) converges P_θ-a.s. to the contrast function

K(θ, α) := (1/P) ∫_0^P [ρ(s, α)ρ(s, θ) − (1/2)ρ²(s, α)] ds
         = −(1/2P) ∫_0^P (ρ(s, α) − ρ(s, θ))² ds + (1/2P) ∫_0^P ρ²(s, θ) ds.
Recall now Theorem 3.2.4 of Dacunha-Castelle and Duflo.

Theorem 1
Let (Ω, F, (F_t)_{t>0}, (P_θ)_{θ∈Θ}) be a filtered probability space and assume that the following two conditions are fulfilled:
1. Θ is a compact subset of R, and the functions α ↦ Λ_n(α), α ↦ K(θ, α) are continuous;
2. for all ε > 0 there exists η > 0 such that

lim_{n→∞} P_θ( sup_{|α−α′|<η} |Λ_n(α) − Λ_n(α′)| > ε ) = 0.

Then the maximum contrast estimator θ̂_n is consistent: θ̂_n → θ in P_θ.
Λ_n(α) − Λ_n(α′) = (1/2T) ∫_0^T (ρ(s, α) − ρ(s, α′))(2ρ(s, θ) − ρ(s, α) − ρ(s, α′)) ds
                 + (1/T) ∫_0^T (ρ(s, α) − ρ(s, α′)) dW_s.

The absolute value of the first term of this equality is bounded by a multiple of η when |α − α′| ≤ η. We show that the second term converges in mean to 0 as n → ∞. So θ̂_n → θ in P_θ.
Convergence in the case f(t, θ) = θf(t)
Now consider the particular case f(t, θ) = θf(t), so ζ_t is given by the equation

dζ_t = θf(t) dt + σ(t) dW_t,  t ∈ [0, T].

For the function f(·), nonparametric estimators are provided in Ibragimov and Has'minskii (1981) and Dehay and El Waled (2013). For the parameter θ we give the expression of its estimator and establish its convergence: almost sure convergence, mean square convergence, asymptotic normality and asymptotic efficiency as T → ∞.
Thanks to (5) the likelihood function in this case is

L_T(θ) := dμ_θ/dν (ζ^θ) = exp( θ ∫_0^T [f(s)/σ²(s)] 1_{σ(s)≠0} dζ_s^θ − (θ²/2) ∫_0^T ρ²(s) ds ).

So the MLE is

θ̂_T := [ ∫_0^T [f(s)/σ²(s)] 1_{σ(s)≠0} dζ_s ] / [ ∫_0^T ρ²(s) ds ].

Remark
When we observe a continuous trajectory of ξ_t defined in (2) on [0, T],

dξ_t = θf(t)ξ_t dt + σ(t)ξ_t dW_t,

then the conditions of Theorem 7.18 and the corollary which follows it are satisfied, and we deduce that the MLE θ̂_T is defined as

θ̂_T := [ ∫_0^T [f(s)/(σ²(s)ξ_s)] 1_{σ(s)≠0} dξ_s ] / [ ∫_0^T ρ²(s) ds ].
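Given observations on a fine grid, the continuous-observation MLE above can be approximated by replacing the Itô integral with a sum of increments ζ_{t_{i+1}} − ζ_{t_i}. A minimal sketch, assuming known f and σ (the helper name mle_theta and the grid handling are illustrative, not from the talk):

```python
import math

def mle_theta(zeta, dt, f, sigma):
    """Approximate theta_hat_T = (int f/sigma^2 1_{sigma!=0} d zeta) / (int rho^2 ds),
    with rho = (f/sigma) 1_{sigma!=0}, using increments of zeta on the grid t_i = i*dt."""
    num = 0.0
    den = 0.0
    for i in range(len(zeta) - 1):
        t = i * dt
        s = sigma(t)
        if s != 0.0:  # indicator 1_{sigma(t) != 0}
            num += f(t) / s**2 * (zeta[i + 1] - zeta[i])
            den += (f(t) / s) ** 2 * dt
    return num / den
```

The denominator is a Riemann sum for J_T = ∫_0^T ρ²(s) ds, so on a fine grid the ratio mimics the continuous-time estimator.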
When ζ_s = ζ_s^θ,

dζ_t^θ = θf(t) dt + σ(t) dW_t,

hence we can write θ̂_T as

θ̂_T = θ + [ ∫_0^T ρ(s) dW_s ] / [ ∫_0^T ρ²(s) ds ] = θ + V_T / J_T.
Here we show that θ̂_T is unbiased; moreover we obtain almost sure convergence, mean square convergence, asymptotic normality and asymptotic efficiency.

Almost sure convergence

Theorem 2
θ̂_T converges almost surely to θ.
Proof.

J_T = ∫_0^{nP} ρ²(s) ds = n ∫_0^P ρ²(s) ds  ⇒  lim_{T→∞} J_T/T = (1/P) ∫_0^P ρ²(s) ds.

V_T = V_{nP} = Σ_{k=0}^{n−1} ∫_{kP}^{(k+1)P} ρ(s) dW_s = Σ_{k=0}^{n−1} ∫_0^P ρ(s) dW_s^{(kP)},

where W_u^{(kP)} := W_{kP+u} − W_{kP}. As

lim_{n→∞} (1/n) Σ_{k=0}^{n−1} ∫_0^P ρ(s) dW_s^{(kP)} = E[ ∫_0^P ρ(s) dW_s^{(kP)} ] = 0,  P_θ-a.s.,

we deduce the convergence.
As

L( ∫_0^T ρ(s) dW_s ) = N( 0, ∫_0^T ρ²(s) ds )

and θ̂_T − θ = [ ∫_0^T ρ(s) dW_s ] / [ ∫_0^T ρ²(s) ds ], we deduce that

L( θ̂_T − θ ) = N( 0, 1 / ∫_0^T ρ²(s) ds ).

So we get the mean square convergence as well as the asymptotic normality.

Mean square convergence, asymptotic normality

Theorem 3
θ̂_T converges in mean square to θ, and θ̄_T = √T (θ̂_T − θ) is asymptotically normal.
Asymptotic efficiency of θ̂_T
To justify the relevance of this estimator we check whether it is asymptotically efficient. In order to show the asymptotic efficiency we use the Hájek-Le Cam inequality (see Kutoyants 1984, van der Vaart 1998 for further details).
We show first that the family P_θ^{(T)} is locally asymptotically normal (see Definition 1.2.1 in Kutoyants 1984).

Proposition 1
P_θ^{(T)} is locally asymptotically normal.
Proof.
After computation we get

dP_{θ+Φ_T u}^{(T)} / dP_θ^{(T)} (ζ^T) = exp( u Δ_T(ζ^T) − u²/2 ),

where

Φ_T := ( ∫_0^T ρ²(s) 1_{σ(s)≠0} ds )^{−1/2},
Δ_T(ζ^T) := ( ∫_0^T ρ²(s) ds )^{−1/2} ∫_0^T ρ(s) dW_s.

Theorem 4
The estimator θ̂_T is asymptotically efficient for the square error (see Definition 1.2.2 in Kutoyants 1984).
Maximum contrast estimation
Definition of the contrast
dζ_t = f(t, θ) dt + σ(t) dW_t.

First, we discretize the interval [0, T] in the following way. Let t_i := iΔ_n, i ∈ {0, …, n−1}, where Δ_n = T/n.
Following Genon-Catalot (1990) we can approximate the likelihood of this process by the function

L_n(θ, ζ) := L_n(θ) = Σ_{i=0}^{n−1} f(t_i, θ)(ζ_{t_{i+1}} − ζ_{t_i}) − (1/2) Σ_{i=0}^{n−1} f²(t_i, θ) Δ_n.   (6)

Assume that T = nΔ_n = N_n P, P = p_n Δ_n fixed, p_n ∈ N, and T = nΔ_n → ∞, Δ_n → 0 as n → ∞.
To show the consistency of the maximum contrast estimator we first show that U_n(α) := L_n(α)/(nΔ_n) is a contrast, where α ∈ Θ. That is, we show that U_n(α) converges in P_θ to some real contrast function K(θ, α), where

K(θ, α) := −(1/2P) ∫_0^P (f(s, θ) − f(s, α))² ds + (1/2P) ∫_0^P f²(s, θ) ds.
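A one-line algebraic check (added here, not on the slides) confirms that this K(θ, α) is exactly the P_θ-limit suggested by (6), whose drift part averages to (1/P)∫ f(s, α)f(s, θ) ds − (1/2P)∫ f²(s, α) ds over one period:

```latex
\frac{1}{P}\int_0^P f(s,\alpha)f(s,\theta)\,ds
  -\frac{1}{2P}\int_0^P f^2(s,\alpha)\,ds
= -\frac{1}{2P}\int_0^P \bigl(f(s,\theta)-f(s,\alpha)\bigr)^2\,ds
  +\frac{1}{2P}\int_0^P f^2(s,\theta)\,ds
= K(\theta,\alpha).
```

In particular K(θ, ·) is maximal precisely when f(·, α) = f(·, θ) almost everywhere on [0, P].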
To prove this convergence we use the following two results.

Lemma 1
For a continuous periodic function f(·, ·) defined on [0, T] × Θ, where Θ is a compact subset of R, we have

lim_{n→∞} (1/(nΔ_n)) Σ_{i=0}^{n−1} f²(t_i, θ) Δ_n = (1/P) ∫_0^P f²(t, θ) dt,   (7)

lim_{n→∞} (1/(nΔ_n)) Σ_{i=0}^{n−1} f(t_i, α) ∫_{t_i}^{t_{i+1}} f(t, θ) dt = (1/P) ∫_0^P f(t, α) f(t, θ) dt.   (8)
Theorem 5
Under the above conditions, if σ(s) ≠ 0 and there exists an s such that f(s, θ) ≠ f(s, α), then U_n(α) is a contrast.

To prove that U_n(α) converges in P_θ to K(θ, α), we prove the convergence in mean square.
Proof.
K(θ, α) is a contrast function which has a strict maximum at α = θ:

K(θ, θ) = (1/2P) ∫_0^P f²(s, θ) ds.

E_θ[ |U_n(α) − K(θ, α)|² ] = | E_θ[U_n(α)] − K(θ, α) |² + var_θ(U_n(α)).

Using (7) and (8) one can show that

lim_{n→∞} E_θ[U_n(α)] = K(θ, α),  lim_{n→∞} var_θ(U_n(α)) = 0.
Now we apply again Theorem 3.2.4 of Dacunha-Castelle and Duflo.

Corollary 1
In our case the two conditions of this theorem are fulfilled.

Proof.
1. The functions U_n(α), K(θ, α) are continuous.
2. For all ε > 0 there exists η > 0 such that

lim_{n→∞} P_θ( sup_{|α−α′|<η} |L_n(α) − L_n(α′)| / (nΔ_n) > ε ) = 0.
Study of the case f(t, θ) = θf(t), σ(t) = 1
Consider f(t, θ) = θf(t), σ(t) = 1, so we have the model

dζ_t = θf(t) dt + dW_t.   (9)

In the discrete case, we again discretize the interval [0, T]: {ζ_{t_i}}, i = 0, …, n−1, where t_i = iΔ_n. Then, by (6), we have the contrast

L_n(θ) = Σ_{i=0}^{n−1} θf(t_i)(ζ_{t_{i+1}} − ζ_{t_i}) − (1/2) Σ_{i=0}^{n−1} θ² f²(t_i) Δ_n.
The estimator of θ can be written explicitly as

θ̂_n = [ Σ_{i=0}^{n−1} f(t_i)(ζ_{t_{i+1}} − ζ_{t_i}) ] / [ Σ_{i=0}^{n−1} f²(t_i) Δ_n ].

Therefore

θ̂_n = θ + θR_n + [ Σ_{i=0}^{n−1} f(t_i)(W_{t_{i+1}} − W_{t_i}) ] / [ Δ_n Σ_{i=0}^{n−1} f²(t_i) ],   (10)

where

R_n := [ Σ_{i=0}^{n−1} f(t_i) ∫_{t_i}^{t_{i+1}} (f(t) − f(t_i)) dt ] / [ Σ_{i=0}^{n−1} f²(t_i) Δ_n ].

Proposition 2
The estimator θ̂_n is asymptotically unbiased.
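In code, the explicit estimator θ̂_n above is just a ratio of two sums over the sampled path. A sketch for the case σ(t) = 1 with a caller-supplied f (the helper name contrast_theta is illustrative):

```python
def contrast_theta(zeta, dt, f):
    """Maximum contrast estimator for d zeta_t = theta*f(t) dt + dW_t:
    theta_hat_n = sum_i f(t_i)(zeta_{t_{i+1}} - zeta_{t_i}) / sum_i f(t_i)^2 * dt,
    where t_i = i*dt."""
    num = sum(f(i * dt) * (zeta[i + 1] - zeta[i]) for i in range(len(zeta) - 1))
    den = sum(f(i * dt) ** 2 * dt for i in range(len(zeta) - 1))
    return num / den
```

The denominator is a Riemann sum for ∫_0^T f²(t) dt, whose reciprocal governs the mean square error in Theorem 6.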
Mean square convergence

Theorem 6
Assume that nΔ_n → ∞ as n → ∞. Then the estimator θ̂_n converges in mean square to θ. Moreover, if f(·) is continuously differentiable, then

lim_{n→∞} nΔ_n E[ |θ̂_n − θ|² ] = ( (1/P) ∫_0^P f²(t) dt )^{−1}.
Proof.

E[ |θ̂_n − θ|² ] = ( E[θ̂_n − θ] )² + var(θ̂_n) = θ² R_n² + 1 / [ Σ_{i=0}^{n−1} f²(t_i) Δ_n ].

To finish the proof we use the following lemma.

Lemma 2
Under the above conditions on f(·) and T we have

lim_{nΔ_n→∞} (1/(nΔ_n)) Σ_{i=0}^{n−1} f²(t_i) Δ_n = (1/P) ∫_0^P f²(t) dt.
Asymptotic normality

Theorem 7
Assume that f(·) is continuously differentiable and that nΔ_n³ → 0 as n → ∞. Then √(nΔ_n)(θ̂_n − θ) converges in law to N(0, σ²), where

σ² = ( (1/P) ∫_0^P f²(t) dt )^{−1}.

Therefore

lim_{n→∞} L( (√(nΔ_n)/σ)(θ̂_n − θ) ) = N(0, 1).
Simulation
T = nP = 1000 sample size, P = 1, f(t) = cos(2πt), σ(t) = 1, δ = 10⁻² discretization step, θ = 0.

Figure: Boxplot of the values of the estimator θ̂_n from 100 repetitions
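The experiment behind this boxplot can be sketched as follows (repetitions reduced from 100 to 20 so the sketch runs quickly; the other parameters match those stated above):

```python
import math
import random

def one_estimate(theta=0.0, T=1000.0, dt=1e-2, P=1.0):
    """One repetition: simulate d zeta_t = theta*cos(2*pi*t/P) dt + dW_t on the
    grid t_i = i*dt and return the contrast estimator theta_hat_n."""
    n = round(T / dt)
    num = den = 0.0
    for i in range(n):
        t = i * dt
        fi = math.cos(2 * math.pi * t / P)
        dz = theta * fi * dt + random.gauss(0.0, math.sqrt(dt))  # increment of zeta
        num += fi * dz
        den += fi * fi * dt
    return num / den

random.seed(5)
estimates = [one_estimate() for _ in range(20)]
```

The asymptotic standard deviation is ( T · (1/P)∫_0^P cos²(2πt) dt )^{−1/2} = (1000/2)^{−1/2} ≈ 0.045, consistent with the spread seen in the boxplot.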
Figure: Boxplot of the values of the estimator θ̂_n from 1000 repetitions
For θ = 1, δ = 10⁻³:

Figure: Boxplot of the values of the estimator θ̂_n from 1000 repetitions
θ̄_n := √(nΔ_n)(θ̂_n − θ),  lim_{n→∞} √(nΔ_n)(θ̂_n − θ) ∼ N( 0, P / ∫_0^P ρ²(s) ds )  in law.

Figure: Histogram of θ̄_n, θ = 0, from 1000 repetitions
Figure: Histogram of θ̄_n, θ = 1, from 1000 repetitions
Dacunha-Castelle, D., Florens-Zmirou, D., 1986. Estimation of the coefficient of a diffusion from discrete observations. Stochastics 19, 263–284.
Dacunha-Castelle, D., Duflo, M., 1983. Probabilité et Statistique 2, Problèmes à Temps Continu. Masson, Paris.
Dehay, D., El Waled, K., 2013. Nonparametric estimation problem for a time-periodic signal in a periodic noise. Statistics and Probability Letters 83, 608–615.
Genon-Catalot, V., 1990. Maximum contrast estimation for diffusion processes from discrete observations. Statistics 21, 99–116.
Harison, V., 1996. Drift estimation of a certain class of diffusion processes from discrete observations. Computers Math. Applic. 31, No. 6, 121–133.
Ibragimov, I.A., Has'minskii, R.Z., 1981. Statistical Estimation: Asymptotic Theory. Springer-Verlag, New York.
Kasonga, R.D., 1988. The consistency of a non-linear least squares estimator from diffusion processes. Stoch. Processes and their Appl. 30, 263–275.
Kutoyants, Yu., 1984. Parameter Estimation for Stochastic Processes. Research and Exposition in Math. 6, Heldermann Verlag, Berlin.
Le Breton, A., 1976. On continuous and discrete sampling for parameter estimation in diffusion type processes. Mathematical Programming Studies 5, 124–144.
van der Vaart, A.W., 2005. Asymptotic Statistics. Cambridge University Press, Cambridge.