RECURSIVE KERNEL ESTIMATION OF THE INTENSITY FUNCTION OF AN INHOMOGENEOUS POISSON PROCESS

Anna Kitayeva 1 and Mihail Kolupaev 2

1 Tomsk Polytechnic University, Tomsk, Russia, kit1157@yandex.ru
2 Tomsk State University, Tomsk, Russia, al13n@sibmail.com

In this paper we extend the results of [1] to recursive algorithms. Recursive estimation is particularly useful for large sample sizes, since the estimate can be updated cheaply with each additional observation. The structure of the estimate is similar to the recursive kernel estimate of a density function introduced in [2, 3] and thoroughly examined in [4]. The estimate is constructed from a single realization of a Poisson process on a fixed interval $[0,T]$. The specific feature of the statistic (1) below is its random sample size.

Let $t_i$, $i = 1,\dots,N$, $0 \le t_i \le T$, be a realization of a Poisson point process with unknown intensity function $\lambda(\cdot)$ on $[0,T]$, where $N$ is the number of points falling into $[0,T]$; denote $\Lambda(a,b) = \int_a^b \lambda(t)\,dt$. Consider the following expression as an estimate of the function $\lambda(\cdot)/\Lambda(0,T)$ at a point $t$:

$$S_N = \frac{1}{N}\sum_{i=1}^{N}\frac{1}{h_i}\,K\!\left(\frac{t-t_i}{h_i}\right) = \frac{N-1}{N}\,S_{N-1} + \frac{1}{N h_N}\,K\!\left(\frac{t-t_N}{h_N}\right), \qquad (1)$$

where $h_n > 0$ is a sequence of real numbers, $h_n \to 0$, $n h_n \to \infty$, and $K(\cdot)$ is a kernel function; let $S_N = 0$ if $N = 0$.

We study the asymptotic behavior of the statistic (1) under the following scheme of series: a series of observations is made on $[0,T]$, the intensity of the process in the $n$-th trial being $\lambda_n(\cdot) = n\,\lambda(\cdot)$. Denote the value of the statistic (1) in the $n$-th trial by

$$S_n = \frac{1}{N_n}\sum_{i=1}^{N_n}\frac{1}{h_{in}}\,K\!\left(\frac{t-t_{in}}{h_{in}}\right), \qquad (2)$$

where $N_n$ and $(t_{in})$ are, respectively, the number of observations and the realization of the process in the $n$-th trial.
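Before turning to the asymptotic results, the construction can be made concrete with a small numerical sketch (illustrative, not part of the paper): an inhomogeneous Poisson process on $[0,T]$ is simulated by Lewis–Shedler thinning, and the statistic (1) is evaluated through its recursive form. The intensity $\lambda(s) = 50(1+s)$, the Epanechnikov kernel, and the bandwidths $h_i = i^{-1/5}$ are assumptions made for this example only; they satisfy the stated requirements ($h_i \to 0$, $i h_i \to \infty$, compactly supported bounded kernel).

```python
import random

def simulate_poisson(lam, lam_max, T, rng):
    """Lewis-Shedler thinning: one realization of an inhomogeneous
    Poisson process with intensity lam(.) on [0, T]."""
    t, pts = 0.0, []
    while True:
        t += rng.expovariate(lam_max)        # candidate from a rate-lam_max process
        if t > T:
            return pts
        if rng.random() < lam(t) / lam_max:  # accept with probability lam(t)/lam_max
            pts.append(t)

def K(u):
    """Epanechnikov kernel: bounded, compactly supported on [-1, 1]."""
    return 0.75 * (1.0 - u * u) if abs(u) < 1.0 else 0.0

def recursive_estimate(points, t, h):
    """S_N via the recursion S_N = (N-1)/N * S_{N-1} + K((t-t_N)/h_N)/(N h_N);
    S_0 = 0, points taken in increasing order."""
    S = 0.0
    for i, ti in enumerate(points, start=1):
        hi = h(i)
        S = (i - 1) / i * S + K((t - ti) / hi) / (i * hi)
    return S

rng = random.Random(1)
T = 1.0
lam = lambda s: 50.0 * (1.0 + s)        # example intensity on [0, T]
Lam_0T = 50.0 * (T + T * T / 2.0)       # Lambda(0, T) = 75
pts = simulate_poisson(lam, 100.0, T, rng)
t0 = 0.5
S = recursive_estimate(pts, t0, h=lambda i: i ** -0.2)
print(len(pts), S, lam(t0) / Lam_0T)    # estimate vs. the target value 1.0
```

The practical appeal of the recursive form is visible in `recursive_estimate`: each new point updates $S_N$ in constant time, so the estimate need not be recomputed from scratch as observations accumulate.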
Theorem 1 (asymptotic unbiasedness). Let the kernel $K(\cdot)$ be a compactly supported, bounded, real-valued Borel function on $[-T,T]$ such that $\left|\int_{-T}^{T}K(u)\,u^{m}\,du\right| \le M$, $m = 1,2,\dots$; let the intensity function $\lambda(\cdot)$ be continuous at the point $t \in (0,T)$, with $\Lambda(t,T) > 0$; let $(h_{in})$ be a sequence nonincreasing in $i$, $h_{in} \to 0$ as $i \to \infty$, and
$$\sum_{i=1}^{N_n} h_{in}^{m} \le C\,N_n\,h_{N_n n}^{m}, \qquad m = 1,2,\dots,$$
with some constant $C$. Then the estimate (1) is asymptotically unbiased:
$$\lim_{n\to\infty}\bigl[E(S_n) - \lambda(t)/\Lambda(0,T)\bigr] = 0.$$

Proof. Let $p_i(x \mid n)$ be the conditional density of $t_i$ given $N = n$, and let $p_n = P\{N = n\} = \Lambda(0,T)^{n}e^{-\Lambda(0,T)}/n!$, $n \ge 0$. Then the expected value is
$$E(S_N) = \sum_{n=1}^{\infty} p_n\,\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{T}\frac{1}{h_i}K\!\left(\frac{t-x}{h_i}\right)p_i(x \mid n)\,dx = e^{-\Lambda(0,T)}\sum_{n=1}^{\infty}\frac{1}{n}\sum_{i=1}^{n}\int_{0}^{T}\frac{1}{h_i}K\!\left(\frac{t-x}{h_i}\right)\frac{\Lambda(0,x)^{i-1}}{(i-1)!}\,\frac{\Lambda(x,T)^{n-i}}{(n-i)!}\,\lambda(x)\,dx. \qquad (3)$$
Taking into account the identities
$$\frac{1}{n}\sum_{i=1}^{n}\frac{\Lambda(0,x)^{i-1}}{(i-1)!}\,\frac{\Lambda(x,T)^{n-i}}{(n-i)!} = \frac{\Lambda(0,T)^{n-1}}{n!}, \qquad \sum_{n=1}^{\infty}e^{-\Lambda(0,T)}\,\frac{\Lambda(0,T)^{n-1}}{n!} = \frac{1-e^{-\Lambda(0,T)}}{\Lambda(0,T)},$$
and the boundedness of the kernel, we conclude that the series (3) converges absolutely.

Let $h_i \le \min\{t/T,\ 1-t/T\}$ for all $i$; since $K(\cdot)$ is supported on $[-T,T]$, the substitution $x \mapsto t - h_i x$ gives
$$\frac{1}{h_i}\int_{0}^{T}K\!\left(\frac{t-x}{h_i}\right)\frac{\Lambda(0,x)^{i-1}\,\Lambda(x,T)^{n-i}}{(i-1)!\,(n-i)!}\,\lambda(x)\,dx = \int_{-T}^{T}K(x)\,\frac{\Lambda(0,t-h_i x)^{i-1}\,\Lambda(t-h_i x,T)^{n-i}}{(i-1)!\,(n-i)!}\,\lambda(t-h_i x)\,dx.$$
Note that $|\lambda(t-h_{in}x) - \lambda(t)| \le \tilde{C}h_{in}|x|$ for sufficiently small $h_{in}$. In the scheme of series relation (3) holds with $\lambda$ replaced by $n\lambda$ (and $\Lambda$ by $n\Lambda$), so that
$$E(S_n) = e^{-n\Lambda(0,T)}\sum_{k=1}^{\infty}\frac{n^{k}}{k}\sum_{i=1}^{k}\int_{-T}^{T}K(x)\,\frac{\Lambda(0,t-h_{in}x)^{i-1}\,\Lambda(t-h_{in}x,T)^{k-i}}{(i-1)!\,(k-i)!}\,\lambda(t-h_{in}x)\,dx = \lambda(t)\,L_n + L_n^{o},$$
where $L_n$ denotes the same double sum with $\lambda(t-h_{in}x)$ replaced by $1$, and $|L_n^{o}|$ is bounded by the analogous sum carrying the extra factor $\tilde{C}h_{in}|x|$, so that $L_n^{o} \to 0$. For the proof of the theorem it remains to establish the convergence $\lim_{n\to\infty}L_n = \Lambda(0,T)^{-1}$.

Using the decompositions $\Lambda(0,t-h_{in}x) = \Lambda(0,t) - \Lambda(t-h_{in}x,t)$ and $\Lambda(t-h_{in}x,T) = \Lambda(t,T) + \Lambda(t-h_{in}x,t)$, the binomial expansion, and the linearization
$$\Lambda(t-h_{in}x,\,t) = \int_{t-h_{in}x}^{t}\lambda(s)\,ds, \qquad |\Lambda(t-h_{in}x,\,t)| \le r\,h_{in}|x|,$$
where $r > 0$ is some constant, we obtain
$$L_n = e^{-n\Lambda(0,T)}\left(n\sum_{k=0}^{\infty}\frac{1}{k+1}\sum_{l=0}^{k}\frac{(n\Lambda(0,t))^{l}}{l!}\,\frac{(n\Lambda(t,T))^{k-l}}{(k-l)!} + \varepsilon_n\right), \qquad (4)$$
where the remainder $\varepsilon_n$ collects all terms of the expansion that contain at least one factor $r h_{in}x$.

In the next step we prove the convergence $\lim_{n\to\infty}e^{-n\Lambda(0,T)}\varepsilon_n = 0$. Since the bandwidths are nonincreasing in $i$, the binomial remainders are bounded by $\Lambda(t,T)^{k-l}\bigl[\bigl(1 + r h_{l+1,n}/\Lambda(t,T)\bigr)^{k-l} - 1\bigr]$, and together with the moment bounds on the kernel and the bandwidth condition of the theorem this gives
$$\lim_{n\to\infty}e^{-n\Lambda(0,T)}|\varepsilon_n| \le MC\lim_{n\to\infty}e^{-n\Lambda(0,T)}\,n\sum_{k=1}^{\infty}\frac{1}{k+1}\sum_{l=0}^{k}\frac{(n\Lambda(0,t))^{l}}{l!}\,\frac{(n\Lambda(t,T))^{k-l}}{(k-l)!}\left[\left(1+\frac{r h_{l+1,n}}{\Lambda(t,T)}\right)^{k-l}-1\right] = 0. \qquad (5)$$
The main term in (4) is computed directly: by the binomial formula
$$\sum_{l=0}^{k}\frac{(n\Lambda(0,t))^{l}}{l!}\,\frac{(n\Lambda(t,T))^{k-l}}{(k-l)!} = \frac{(n\Lambda(0,T))^{k}}{k!},$$
whence
$$\lim_{n\to\infty}e^{-n\Lambda(0,T)}\,n\sum_{k=0}^{\infty}\frac{(n\Lambda(0,T))^{k}}{(k+1)!} = \lim_{n\to\infty}e^{-n\Lambda(0,T)}\,\frac{e^{n\Lambda(0,T)}-1}{\Lambda(0,T)} = \frac{1}{\Lambda(0,T)}. \qquad (6)$$
Combining (4)–(6) yields $\lim_{n\to\infty}L_n = \Lambda(0,T)^{-1}$. The theorem is proved.
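The conclusion of Theorem 1 can be checked numerically by a small Monte Carlo sketch (an illustration, not part of the paper). A constant intensity $\lambda \equiv 10$ on $[0,1]$ is assumed, for which the target value is $\lambda(t)/\Lambda(0,T) = 1/T = 1$ and a realization of the $n$-th trial is easy to sample directly: $N_n \sim \mathrm{Poisson}(n\lambda T)$ points, uniform on $[0,T]$ and sorted. The Epanechnikov kernel and the bandwidths $h_i = i^{-1/5}$ are illustrative choices; the averaged estimate should drift toward $1$ as $n$ grows.

```python
import math
import random

def poisson(mean, rng):
    """Knuth's product method; adequate for the moderate means used here."""
    L, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def S_stat(points, t, h, K):
    """Statistic (2) in its recursive form; S = 0 when there are no points."""
    S = 0.0
    for i, ti in enumerate(points, start=1):
        S = (i - 1) / i * S + K((t - ti) / h(i)) / (i * h(i))
    return S

K = lambda u: 0.75 * (1.0 - u * u) if abs(u) < 1.0 else 0.0  # Epanechnikov kernel
h = lambda i: i ** -0.2
rng = random.Random(7)
T, lam, t0 = 1.0, 10.0, 0.5     # constant intensity: target lam(t)/Lambda(0,T) = 1/T = 1
means = []
for n in (1, 4, 16):            # n-th trial: intensity n*lam, so N_n ~ Poisson(n*lam*T)
    reps = 300
    vals = [
        S_stat(sorted(rng.uniform(0.0, T) for _ in range(poisson(n * lam * T, rng))),
               t0, h, K)
        for _ in range(reps)
    ]
    means.append(sum(vals) / reps)
print([round(m, 3) for m in means])   # should drift toward 1.0 as n grows
```

The residual bias for small $n$ comes from the early terms of the recursion, whose bandwidths are still so large that part of the kernel mass falls outside $[0,T]$; as $n$ (and hence $N_n$) grows, these terms are averaged out, in line with the theorem.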
Theorem 2 (variance convergence). Let the assumptions of Theorem 1 hold, let $\left|\int_{-T}^{T}K^{2}(x)\,x^{m}\,dx\right| \le \tilde{M}$, $m = 1,2,\dots$, and let $i\,h_{in} \to \infty$ as $i \to \infty$. Then $\lim_{n\to\infty}\mathrm{Var}(S_n) = 0$.

Proof. Since $P\{N_n \ge 2\} \to 1$ as $n \to \infty$, the variance
$$\mathrm{Var}(S_n) = E\left[\frac{1}{N_n^{2}}\sum_{i=1}^{N_n}\frac{1}{h_{in}^{2}}K^{2}\!\left(\frac{t-t_{in}}{h_{in}}\right)\right] + E\left[\frac{1}{N_n^{2}}\sum_{i\ne j}\frac{1}{h_{in}h_{jn}}K\!\left(\frac{t-t_{in}}{h_{in}}\right)K\!\left(\frac{t-t_{jn}}{h_{jn}}\right)\right] - \bigl(E(S_n)\bigr)^{2}$$
can be written as
$$\mathrm{Var}(S_n) = C_1(n)\,e^{-n\Lambda(0,T)}\sum_{k=1}^{\infty}\frac{n^{k}}{k^{2}}\sum_{i=1}^{k}\frac{1}{h_{in}}\int_{-T}^{T}K^{2}(x)\,\lambda(t-h_{in}x)\,\frac{\Lambda(0,t-h_{in}x)^{i-1}\,\Lambda(t-h_{in}x,T)^{k-i}}{(i-1)!\,(k-i)!}\,dx$$
$$+\ 2C_2(n)\,e^{-n\Lambda(0,T)}\sum_{k=2}^{\infty}\frac{n^{k}}{k^{2}}\sum_{i=1}^{k-1}\sum_{j=i+1}^{k}\frac{1}{h_{in}h_{jn}}\int_{0}^{T}\!\int_{0}^{y}K\!\left(\frac{t-x}{h_{in}}\right)K\!\left(\frac{t-y}{h_{jn}}\right)\frac{\Lambda(0,x)^{i-1}\,\Lambda(x,y)^{j-i-1}\,\Lambda(y,T)^{k-j}}{(i-1)!\,(j-i-1)!\,(k-j)!}\,\lambda(x)\,\lambda(y)\,dx\,dy$$
$$-\ \left(C_3(n)\,e^{-n\Lambda(0,T)}\sum_{k=1}^{\infty}\frac{n^{k}}{k}\sum_{i=1}^{k}\frac{1}{h_{in}}\int_{0}^{T}K\!\left(\frac{t-x}{h_{in}}\right)\frac{\Lambda(0,x)^{i-1}\,\Lambda(x,T)^{k-i}}{(i-1)!\,(k-i)!}\,\lambda(x)\,dx\right)^{2}, \qquad (7)$$
where $C_i(n) \to 1$ as $n \to \infty$, $i = 1,2,3$. The second term in (7) converges to $\bigl(\lambda(t)/\Lambda(0,T)\bigr)^{2}$: by the multinomial formula
$$\sum_{i=1}^{k-1}\sum_{j=i+1}^{k}\frac{\Lambda(0,x)^{i-1}\,\Lambda(x,y)^{j-i-1}\,\Lambda(y,T)^{k-j}}{(i-1)!\,(j-i-1)!\,(k-j)!} = \frac{\Lambda(0,T)^{k-2}}{(k-2)!},$$
and the main part is found similarly to (4). Since the third term converges to the same limit, to establish the statement of the theorem we must show that $Q_n = o\bigl(e^{n\Lambda(0,T)}\bigr)$ as $n \to \infty$, where
$$Q_n = \sum_{k=1}^{\infty}\frac{n^{k}}{k^{2}}\sum_{i=1}^{k}\frac{1}{h_{in}}\int_{-T}^{T}K^{2}(x)\,\lambda(t-h_{in}x)\,\frac{\Lambda(0,t-h_{in}x)^{i-1}\,\Lambda(t-h_{in}x,T)^{k-i}}{(i-1)!\,(k-i)!}\,dx.$$
Expanding as in (4), write $Q_n \le Q_n^{(1)} + Q_n^{(2)}$, where $Q_n^{(1)}$ is the main term. Since $i\,h_{in} \to \infty$, for every $\varepsilon > 0$ we have $1/h_{in} \le \varepsilon\,i$ for all sufficiently large $i$; together with the identity
$$\sum_{k=1}^{\infty}\frac{\bigl(n\Lambda(0,T)\bigr)^{k}}{k\,k!} = \mathrm{Ei}\bigl(n\Lambda(0,T)\bigr) - \ln\bigl(n\Lambda(0,T)\bigr) - \gamma,$$
where $\mathrm{Ei}(\cdot)$ is the exponential integral, $\gamma$ is the Euler constant, and $\lim_{n\to\infty}\mathrm{Ei}(n)\,e^{-n} = 0$ [5], this yields $\lim_{n\to\infty}Q_n^{(1)}e^{-n\Lambda(0,T)} = 0$. Similarly to (5),
$$Q_n^{(2)} \le \tilde{M}C\,n\sum_{k=1}^{\infty}\frac{1}{k+1}\sum_{l=0}^{k}\frac{(n\Lambda(0,t))^{l}}{l!}\,\frac{(n\Lambda(t,T))^{k-l}}{(k-l)!}\,\frac{1}{(l+1)\,h_{l+1,n}}\left[\left(1+\frac{r\,h_{l+1,n}}{\Lambda(t,T)}\right)^{k-l}-1\right]\bigl(1+o(1)\bigr),$$
and, taking (6) into account, $\lim_{n\to\infty}Q_n^{(2)}e^{-n\Lambda(0,T)} = 0$. The theorem is proved.

References
1. A. Kitayeva. Mean-square convergence of a kernel type estimate of the intensity function of an inhomogeneous Poisson process. The Second International Conference "Problems of Cybernetics and Informatics (PCI'2008)", Proceedings, v. III (2008), p. 149–152.
2. C.T. Wolverton, T.J. Wagner. Recursive estimates of probability densities. IEEE Trans. Syst. Sci. and Cybernet., v. 5 (3) (1969), p. 246–247.
3. H. Yamato. Sequential estimation of a continuous probability density function and mode. Bulletin of Mathematical Statistics, 14 (1971), p. 1–12.
4. M. Abramowitz, I.A. Stegun. Handbook of Mathematical Functions. Dover Publications, New York (1964), 1046 p.

This work was supported by the Federal Program "Scientific and scientific-pedagogical cadres of innovative Russia", project no. 402.740.11.5190.