Probability Theory for Continued Fractions
Lisa Lorentzen, Norwegian University of Science and Technology

Continued fraction:
K(a_n/b_n) = a_1/(b_1 + a_2/(b_2 + a_3/(b_3 + ···))), with approximants f_n = S_n(0), where S_n := s_1 ∘ s_2 ∘ ··· ∘ s_n and s_k(w) = a_k/(b_k + w).

Convergence (classical): {f_n} converges in the Riemann sphere.

Möbius transformations:
M := { t(w) = (αw + β)/(γw + δ) : Δ(t) = αδ − βγ ≠ 0 }.

Convergence:
- S_n → S ∈ M is not possible, since S_n(∞) = S_{n−1}(0): the limit would have to satisfy S(∞) = S(0), which no nonsingular Möbius transformation does.
- Hence, if {S_n} converges, it converges to a constant function.

Catch:
S_n(w) = (α_n w + β_n)/(γ_n w + δ_n) → p(w − w*)/(w − w*) =: ℓ[p, w*](w)   if p ≠ ∞;
that is, S_n may converge to a singular transformation ℓ[p, w*], which takes the value p at every w ≠ w*.

General convergence (L, 1986): {S_n}, and the continued fraction, converges generally to p if S_n → ℓ[p, w*] in this sense.

Restrained sequences (L & Thron, 1987).

Random continued fractions. Questions:
1. What is the probability of convergence of K(a_n/b_n)?
2. What is the probability distribution of the values of K(a_n/b_n)?

Theorem (Furstenberg 1963); connection to the Central Limit Theorem.
The transformations correspond to 2×2 matrices:
   t_n(w) = (α_n w + β_n)/(γ_n w + δ_n)  ↔  (α_n  β_n ; γ_n  δ_n),
   s_n(w) = a_n/(b_n + w)  ↔  (0  a_n ; 1  b_n).

Condition C:
1. The support of μ contains at least two transformations.
2. …
3. If supp μ = {s_1, s_2} with s_1(w) = α/w, then s_2(w) ≠ −α/(β + w).

Theorem (L, 2013). …

Condition C*:
1. The support of μ contains at least two transformations.
2. …
3. If supp μ = {s_1, s_2} with s_1(w) = α/w, then s_2(w) ≠ −α/(β + w).

Theorem (L, 2013). …

Idea of proof:
1. Write a_n = −c_n², so the continued fraction is K(−c_n²/b_n) with
      s_n(w) = −c_n²/(b_n + w),   S_n(w) = (A_{n−1}w + A_n)/(B_{n−1}w + B_n).
2. With C_n := ∏_{k=1}^n c_k,
      d_n := S_n(w_n) − S_{n−1}(w_{n−1})
           = C_{n−1}C_n · [ (w_n + b_n)w_{n−1}/c_n + c_n ] / [ (B_{n−1}w_n + B_n)(B_{n−2}w_{n−1} + B_{n−1}) ]
           = [ (w_n + b_n)w_{n−1}/c_n + c_n ] / [ (h_{21}^{(n)} w_n + h_{11}^{(n)})(h_{21}^{(n−1)} w_{n−1} + h_{11}^{(n−1)}) ],
   where h_{11}^{(n)} := B_n/C_n and h_{21}^{(n)} := B_{n−1}/C_n are the normalized denominators.
3. Choose w_n := m · exp(i·arg h_{11}^{(n)} − i·arg h_{21}^{(n)}) with m ∈ {1, 3/4, 1/2}. Then
      |h_{21}^{(n)} w_n + h_{11}^{(n)}| = m|h_{21}^{(n)}| + |h_{11}^{(n)}| ≥ (1/2)(|h_{21}^{(n)}| + |h_{11}^{(n)}|),
   so the denominator of d_n satisfies
      |denominator| ≥ (1/4) √(|h_{11}^{(n)}|² + |h_{21}^{(n)}|²) · √(|h_{11}^{(n−1)}|² + |h_{21}^{(n−1)}|²) = (1/4) ‖h^{(n)}(1,0)ᵀ‖ · ‖h^{(n−1)}(1,0)ᵀ‖,
   and hence
      liminf_{n→∞} (ln |denominator|)/(2n) ≥ liminf_{n→∞} [ ln(1/4) + ln‖h^{(n)}(1,0)ᵀ‖ + ln‖h^{(n−1)}(1,0)ᵀ‖ ]/(2n) = 2α,
   where 2α is the almost sure exponential growth rate of the normalized denominators.

The numerator of d_n = S_n(w_n) − S_{n−1}(w_{n−1}) is bounded by
   |(w_n + b_n)w_{n−1}/c_n + c_n| ≤ ((|b_n| + 1)/|c_n|)·1 + |c_n| ≤ κ·r_n,   r_n := √|a_n| + (1 + |b_n|)/√|a_n|,
since |w_n|, |w_{n−1}| ≤ 1 and |c_n| = √|a_n|.

The Borel-Cantelli Lemma: Let {E_n} be a sequence of events in a probability space, and let p_n be the probability that E_n occurs. If Σ p_n < ∞, then the probability that infinitely many of the E_n occur is zero.

Applied to events of the form E_n = {r_n ≥ e^{εn}}, this gives, almost surely,
   limsup_{n→∞} (ln |numerator|)/(2n) ≤ limsup_{n→∞} (ln κ + ln r_n)/(2n) ≤ 0,
   liminf_{n→∞} (ln |denominator|)/(2n) ≥ 2α.
Hence
   limsup_{n→∞} (1/(2n)) ln |numerator/denominator| ≤ limsup_{n→∞} (ln |numerator|)/(2n) − liminf_{n→∞} (ln |denominator|)/(2n) ≤ −2α,
so that
   limsup_{n→∞} |S_n(w_n) − S_{n−1}(w_{n−1})|^{1/(2n)} ≤ e^{−2α};  i.e.,  S_n(w_n) − S_{n−1}(w_{n−1}) = O(e^{−4αn}) as n → ∞.
In particular Σ_n |S_n(w_n) − S_{n−1}(w_{n−1})| < ∞, so {S_n(w_n)} converges.

Example 1: Let supp μ = {s_1, s_2}, where s_k(w) = a_k/(1 + w), a_k ≠ 0, k = 1, 2, and
   μ({a_1}) = 0.999999,   μ({a_2}) = 0.000001.
We look at the μ-random continued fraction K(a_n/1).

Example 2: The uniform parabola theorem (Scott & Wall 1940; Thron 1958). K(a_n/1) converges uniformly if all a_n lie in a bounded subset of a parabolic region of this kind.

Example 3: The convergence is locally uniform for |arg z| < π whenever {a_n} is bounded. Note: the convergence is not locally uniform in any open set containing parts of the negative real axis.

To find the distribution of the values of the continued fraction, we look for distributions of (a_1, b_1) such that X and a_1/(b_1 + X) have the same distribution (a stationary measure).

Theorem (Furstenberg 1963), translated to continued fractions: If a μ-random continued fraction converges with probability 1, then there exists a unique stationary measure for its values.
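To make the two questions above concrete, here is a small Python sketch (not from the talk) that simulates a μ-random continued fraction of the type in Example 1: it computes the classical approximants of one long realization, and the empirical distribution of the value over many independent realizations, which approximates the stationary measure that Furstenberg's theorem refers to. The element values a_1 = 2.0, a_2 = -0.1 and all run lengths are arbitrary choices for illustration.

```python
import random

def approximants(a, b):
    """Classical approximants f_n = A_n / B_n of K(a_n / b_n), from the recurrences
    A_n = b_n A_{n-1} + a_n A_{n-2} and B_n = b_n B_{n-1} + a_n B_{n-2},
    with A_{-1} = 1, A_0 = 0, B_{-1} = 0, B_0 = 1.  Rescaled each step to avoid overflow."""
    A_prev, A, B_prev, B = 1.0, 0.0, 0.0, 1.0
    values = []
    for a_n, b_n in zip(a, b):
        A_prev, A = A, b_n * A + a_n * A_prev
        B_prev, B = B, b_n * B + a_n * B_prev
        s = max(abs(A), abs(B), 1.0)      # common rescaling leaves the ratio A/B unchanged
        A_prev, A, B_prev, B = A_prev / s, A / s, B_prev / s, B / s
        values.append(A / B)
    return values

def example1_elements(n, rng):
    """Example 1: a_n = a1 with probability 0.999999 and a_n = a2 otherwise.
    The numerical values of a1 and a2 are hypothetical choices for this sketch."""
    a1, a2 = 2.0, -0.1
    return [a1 if rng.random() < 0.999999 else a2 for _ in range(n)]

rng = random.Random(0)

# Question 1: does a single realization of K(a_n / 1) appear to converge?
a = example1_elements(20_000, rng)
f = approximants(a, [1.0] * len(a))
print("last approximant:", f[-1], "  last increment:", abs(f[-1] - f[-2]))

# Question 2: empirical distribution of the value over many independent realizations
# (an approximation of the stationary measure).
values = sorted(approximants(example1_elements(2_000, rng), [1.0] * 2_000)[-1]
                for _ in range(1_000))
print("empirical quartiles of the value:", values[250], values[500], values[750])
```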
Case 1: K(1/b_n)

Search the literature for distributions b ~ μ and X ~ ν such that 1/(b + X) ~ ν.

1. Gamma-GIG (Letac and Seshadri, 1983): the Gamma distribution paired with the Generalized Inverse Gaussian (GIG) distribution.
   (Plotted examples: the Gamma density g_{2,2}, i.e. λ = 2, χ = 1, and a GIG density with χ = 1, δ = 1, λ = −3.)

   Properties (here f_{λ,χ,δ} denotes the GIG law and g_{λ,θ} the Gamma law):
      L(X) = f_{−λ,χ,δ} and L(b_1) = g_{λ, 2/χ}
      ⇓
      L(b_1 + X) = f_{λ,χ,δ} and L(1/(b_1 + X)) = f_{−λ,δ,χ};
      L(X) = f_{−λ,δ,χ} and L(b_2) = g_{λ, 2/δ}
      ⇓
      L(b_2 + X) = f_{λ,δ,χ} and L(1/(b_2 + X)) = f_{−λ,χ,δ}.
   (A numerical sketch of this stationarity mechanism is given at the end of these notes.)

Case 2:

Unfortunately, it is not proved that the conditions imply convergence with probability 1.

2. The Cauchy distribution, with probability density
      f(x) = (1/π) · q/((x − p)² + q²),   x ∈ ℝ,   for Cauchy(p, q).

   Properties:
   1. X_1 ~ Cauchy(p, q) and X_2 ~ Cauchy(u, v) independent ⇒ X_1 + X_2 ~ Cauchy(p + u, q + v).
   2. X ~ Cauchy(p, q) ⇒ 1/X ~ Cauchy(p/(p² + q²), q/(p² + q²)).

   (Orsingher & Polito, 2012.) Since F_n/F_{n+1} → φ = (√5 − 1)/2 for the Fibonacci numbers F_n, the distribution of the continued fraction limit points is Cauchy(0, φ).

   Trick: write z = p + iq, z_1 = z, z_k = p_k + i q_k = r_k e^{iθ_k}, and
      z_{k+1} = z + z_k/|z_k|².
   Then z_k = r_k e^{iθ} for all k, where r_1 = r and r_{k+1} = r + 1/r_k. That is,
      r_k → (r + √(r² + 4))/2 =: φ_r,   p_k → φ_r cos θ   and   q_k → φ_r sin θ.

   Theorem: Let {b_n} be i.i.d. ~ Cauchy(p, q). Then the limit points of the random continued fraction K(1/b_n) have the distribution
      Cauchy( p·φ_r/(p² + q²), q·φ_r/(p² + q²) ),   where r = √(p² + q²).

Case 3: Continued fractions K(a_n/1)

1. Beta-2 - Beta-hypergeometric.
   Let a_{2n−1} ~ β^{(2)}_{α,β} and a_{2n} ~ β^{(2)}_{β,α} be independent, where β^{(2)}_{α,β} is the beta distribution of the second kind, with density
      f(x) = x^{α−1}(1 + x)^{−α−β}/B(α, β),   x > 0.
   (Plotted for α = 4, β = 2.) The distribution satisfies our conditions.

   Asci, Letac & Piccioni (2008) introduced the beta-hypergeometric distribution:
      a ~ β^{(2)}_{β,γ} and X ~ βh_{α,γ,β} independent ⇒ 1/(1 + aX) ~ βh_{γ,α,β}.

   Theorem (Asci, Letac & Piccioni, 2008). Let {a_n} be independent random variables with a_{2n−1} ~ β^{(2)}_{β,α} and a_{2n} ~ β^{(2)}_{β,γ}. Then K(a_n/1) ~ βh_{α,γ,β}.

   Beta-2 - Beta-hypergeometric (Letac & Piccioni, 2012).
   The beta-hypergeometric distribution BH(v) has density function …; the beta distribution of the second kind has the density given above.
      X_1 = 1/(1 + W X) ~ BH(Mv),
      X_2 = 1/(1 + W_1 X_1) = 1/(1 + W_1/(1 + W X)) ~ BH(M²v),
      X_3 = 1/(1 + W_2 X_2) = 1/(1 + W_2/(1 + W_1/(1 + W X))),
      and so on.
   Theorem (Letac & Piccioni, 2012). …

   Stieltjes fractions: Theorem (Marklof, Tourigny & Wolowski, 2006). …

Applications

1. Minimal solutions of three-term recurrence relations.
   Theorem (Pincherle, 1894). K(a_n/b_n) converges in the classical sense if and only if the recurrence relation
      X_n = b_n X_{n−1} + a_n X_{n−2},   n = 1, 2, 3, …,
   has a minimal solution {X_n}; i.e., lim_{n→∞} X_n/Y_n = 0 for every linearly independent solution {Y_n}.
   (A numerical illustration is given at the end of these notes.)

2. Orthogonal polynomials: three connections to the continued fraction K(−λ_n/(z − α_n)).

3. Random walks on the real line and on trees.
   (Figures: a walk on the nonnegative integers that steps from i to i+1 with probability p_i and from i to i−1 with probability 1 − p_i, where p_0 = 1 and 0 < p_i < 1 for i > 0; and a rooted tree with level degrees d_0 = 1, d_1 = 4, d_2 = 2, d_3 = 3, and so on; other choices are possible.)
   What is the probability of returning to Level 0?

4. Faster convergence of continued fractions.
   It is therefore a good idea to choose approximants S_n(w_n) where w_n ∈ V. But which w_n should one choose?
   Example (L, Thron and Waadeland, 1989). (A numerical sketch of modified approximants of this kind is given at the end of these notes.)

Open questions:
1. Are the conditions for convergence with probability 1 still too restrictive?
2. The theorem works for continued fractions with i.i.d. elements. Can this be generalized to continued fractions where (a_n, b_n) depends on b_{n−1}?
3. Is it possible to say anything about a random continued fraction where the elements are independent, but not identically distributed?
4. Is it at all possible to say anything about the probability of convergence when the continued fraction is derived from a power series with random coefficients?
5. What can be said if the elements of the continued fraction are affected by white noise; i.e., the elements are independent and normally distributed, with varying but given expectations?
6. Find pairs of distributions for continued fractions with complex elements.
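Numerical sketch for Case 1 (Gamma-GIG). The following Python sketch, not taken from the talk, checks the stationarity mechanism empirically: it iterates X ← 1/(b + X) with i.i.d. Gamma-distributed b and verifies that the empirical distribution of X stops changing once the chain is long. For simplicity a single Gamma law is used for all elements, which corresponds to the symmetric case χ = δ of the scheme above, and the Gamma parameters are illustrative only, not the (λ, χ, δ) values from the slides; the sketch therefore demonstrates the existence of a stationary law rather than its exact GIG form.

```python
import random
import statistics

def stationary_sample(n_chains, n_steps, shape, scale, rng):
    """Run n_chains independent iterations of X <- 1 / (b + X) with b ~ Gamma(shape, scale)
    and return the final X values.  If a stationary measure exists, this sample should look
    the same (up to sampling noise) for every sufficiently large n_steps."""
    xs = []
    for _ in range(n_chains):
        x = 1.0                                 # arbitrary starting point
        for _ in range(n_steps):
            b = rng.gammavariate(shape, scale)  # one new Gamma-distributed element per step
            x = 1.0 / (b + x)
        xs.append(x)
    return xs

rng = random.Random(1)
shape, scale = 2.0, 1.0                         # illustrative Gamma parameters only

for n_steps in (30, 60):
    sample = stationary_sample(2_000, n_steps, shape, scale, rng)
    print("%d steps:  mean %.4f  median %.4f"
          % (n_steps, statistics.mean(sample), statistics.median(sample)))
```

If the summary statistics agree for 30 and 60 steps up to Monte Carlo error, the chain has reached its stationary distribution; with the Letac-Seshadri parameter choices of Case 1, that stationary law is the corresponding GIG distribution.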
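Numerical sketch for Application 1 (minimal solutions). A standard illustration, not taken from the talk, uses the recurrence I_{v+1}(x) = I_{v-1}(x) − (2v/x) I_v(x) satisfied by the modified Bessel functions: {I_v(x)} is a minimal solution, so forward recursion in v is numerically unstable, while the ratio I_1(x)/I_0(x) is obtained stably from the corresponding continued fraction 1/(2/x + 1/(4/x + 1/(6/x + ···))).

```python
from scipy.special import iv   # reference values of the modified Bessel function I_v(x)

def bessel_ratio_cf(x, terms=40):
    """Evaluate I_1(x)/I_0(x) as the continued fraction 1/(2/x + 1/(4/x + 1/(6/x + ...)))
    by backward recurrence, starting from a zero tail."""
    t = 0.0
    for k in range(terms, 0, -1):
        t = 1.0 / (2.0 * k / x + t)
    return t

def bessel_forward(x, n_max):
    """Forward recursion I_{v+1} = I_{v-1} - (2 v / x) I_v starting from I_0 and I_1.
    Since {I_v} is the minimal solution, rounding errors excite the dominant solution
    and the computed values quickly lose all accuracy."""
    values = [iv(0, x), iv(1, x)]
    for v in range(1, n_max):
        values.append(values[-2] - (2.0 * v / x) * values[-1])
    return values

x = 1.0
print("continued fraction for I_1/I_0:", bessel_ratio_cf(x))
print("reference iv(1,x)/iv(0,x):     ", iv(1, x) / iv(0, x))

forward = bessel_forward(x, 20)
print("forward recursion I_15(1):", forward[15], "  reference:", iv(15, x))
```

The continued fraction reproduces the reference ratio essentially to machine precision, while the forward recursion for I_15(1) is off by many orders of magnitude; this instability of the minimal solution is exactly the phenomenon behind Pincherle's theorem.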
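Numerical sketch for Application 4 (faster convergence). This is not the example from Lorentzen, Thron and Waadeland (1989); it only shows the basic idea behind modified approximants: for K(a_n/1) with a_n → 2, evaluating S_n at the attracting fixed point w = 1 of the limiting transformation w ↦ 2/(1 + w), instead of at w = 0, gives noticeably smaller truncation errors.

```python
def modified_approximant(a, w):
    """S_n(w) = (A_{n-1} w + A_n) / (B_{n-1} w + B_n) for K(a_n/1); w = 0 gives the
    classical approximant, other values of w give modified approximants."""
    A_prev, A, B_prev, B = 1.0, 0.0, 0.0, 1.0
    for a_n in a:
        A_prev, A = A, A + a_n * A_prev          # b_n = 1 throughout
        B_prev, B = B, B + a_n * B_prev
        s = max(abs(A), abs(B), 1.0)             # rescale to avoid overflow
        A_prev, A, B_prev, B = A_prev / s, A / s, B_prev / s, B / s
    return (A_prev * w + A) / (B_prev * w + B)

# a_n = 2 + 1/n tends to 2; the attracting fixed point of w -> 2/(1+w) is w = 1.
elements = [2.0 + 1.0 / n for n in range(1, 2001)]
reference = modified_approximant(elements, 1.0)   # proxy for the value of the continued fraction

for n in (10, 20, 40):
    head = elements[:n]
    classical = abs(modified_approximant(head, 0.0) - reference)
    modified = abs(modified_approximant(head, 1.0) - reference)
    print("n = %2d   classical error %.2e   modified error %.2e" % (n, classical, modified))
```

Choosing the w_n even closer to the actual tail values of the continued fraction improves the approximants further; which w_n to choose is exactly the question raised above.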