Statistics & Probability Letters 71 (2005) 337–346
www.elsevier.com/locate/stapro
The law of the iterated logarithm for character ratios
Zhonggen Su
Department of Mathematics, Zhejiang University, Hangzhou 310027, PR China
Received 13 May 2004; received in revised form 16 November 2004; accepted 28 November 2004
Available online 22 January 2005
Abstract
Recently, Fulman developed some general connections between martingales and character ratios of a random representation of the symmetric group on transpositions, and obtained a convergence rate in a central limit theorem. In this work we aim to establish the law of the iterated logarithm for character ratios. The technique is a well-known Skorokhod embedding theorem for martingales together with a strong approximation argument. Also, bounded martingale difference methods are used to obtain a large deviation bound for character ratios.
© 2005 Elsevier B.V. All rights reserved.
MSC: primary 05E10; secondary 60C05
Keywords: Character ratio; Plancherel measure; Skorokhod’s embedding scheme; The law of the iterated logarithm
1. Introduction and main results
Let $S_\infty$ be the group of finite permutations of the set of natural numbers. This group is the union of the increasing sequence of finite symmetric subgroups
$$S_0 \subset S_1 \subset S_2 \subset \cdots \subset S_n \subset \cdots,$$
where $S_n$ acts on the set $\{1, 2, \ldots, n\}$. The irreducible representations of the symmetric group $S_n$ are parameterized by the set of partitions, i.e., by representations of $n$ as a sum of positive
This work was partially supported by NSFC 10371109.
E-mail address: zgsu2001@yahoo.com (Z. Su).
doi:10.1016/j.spl.2004.11.016
integers: $n = m_1 + m_2 + \cdots$. It is convenient to visualize partitions by means of Young diagrams. A Young diagram is a set of squares, where the first row contains $m_1$ squares, the second row contains $m_2$ squares, etc.
Denote by $Y_n$ the set of Young diagrams with $n$ boxes. The dimension of the irreducible representation corresponding to the diagram $\lambda \in Y_n$ will be denoted by $\dim \lambda$. The set of all Young diagrams $Y = \bigcup_{n=0}^{\infty} Y_n$ is partially ordered by inclusion; it is called the Young lattice. A Young diagram $\Lambda$ immediately follows the diagram $\lambda$, written $\lambda \nearrow \Lambda$, if $\lambda \subset \Lambda$ and $\Lambda$ differs from $\lambda$ in exactly one box. Recall that a standard Young tableau of shape $\lambda$ is a one-to-one assignment of the numbers $1, 2, \ldots, n = |\lambda|$ to the boxes in such a way that the numbers increase along the rows and down the columns. According to the Robinson–Schensted algorithm, a standard Young tableau of shape $\lambda$ can be viewed as an oriented path
$$t = (\emptyset \nearrow \lambda^1 \nearrow \lambda^2 \nearrow \cdots \nearrow \lambda^n = \lambda)$$
in the Young lattice connecting the diagram $\emptyset$ to $\lambda$. In other words, a Young tableau is a growth history of a Young diagram. In particular, the number of Young tableaux of shape $\lambda$ equals the dimension $\dim \lambda$, which is given by the hook formula.
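As a concrete illustration of the hook formula, the following sketch (Python, not part of the paper; the helper names are my own) computes $\dim \lambda$ and checks the Plancherel identity $\sum_{\lambda \in Y_n} \dim^2 \lambda = n!$:

```python
from math import factorial

def dim_lambda(shape):
    """Dimension of the irreducible S_n-representation for a partition
    `shape` (weakly decreasing row lengths), via the hook length formula:
    dim(lambda) = n! / product of hook lengths."""
    n = sum(shape)
    cols = [sum(1 for r in shape if r > j) for j in range(shape[0])] if shape else []
    hooks = 1
    for i, row in enumerate(shape):
        for j in range(row):
            # hook = arm (boxes to the right) + leg (boxes below) + 1
            hooks *= (row - j - 1) + (cols[j] - i - 1) + 1
    return factorial(n) // hooks

def partitions(n, maxpart=None):
    """All partitions of n as weakly decreasing tuples."""
    maxpart = n if maxpart is None else maxpart
    if n == 0:
        yield ()
        return
    for k in range(min(n, maxpart), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

# dim counts standard Young tableaux; the Plancherel identity
# sum over lambda of dim^2 = n! is a quick sanity check.
print(dim_lambda((2, 1)))                              # 2
print(sum(dim_lambda(p) ** 2 for p in partitions(5)))  # 120
```

The second print verifies $\sum_{\lambda \in Y_5} \dim^2 \lambda = 5! = 120$.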
Let $T$ be the space of infinite paths $t = (\emptyset, \lambda^1, \ldots, \lambda^n, \ldots)$ starting at the initial vertex $\emptyset$. In the natural topology, $T$ is compact and totally disconnected. A Borel measure $P$ on the path space $T$ is determined by its values $P(T_n)$ on the cylindrical sets
$$T_n = \{t \in T : t = (\lambda^1, \lambda^2, \ldots, \lambda^n, \ldots)\},$$
where $(\lambda^1, \lambda^2, \ldots, \lambda^n)$ is a finite path. The Plancherel measure $P$ of the infinite symmetric group $S_\infty$ is the Markov probability measure on $T$ with transition probabilities
$$p_{\lambda\Lambda} = \frac{\dim \Lambda}{(n+1)\dim \lambda}, \qquad \lambda \in Y_n,\ \lambda \nearrow \Lambda.$$
Note that for each finite path $u = (\lambda^1, \lambda^2, \ldots, \lambda^n)$, $P(T_n)$ depends only on the last vertex $\lambda^n$, i.e.,
$$P(T_n) = \frac{\dim \lambda^n}{n!}.$$
The point is that the distribution of $P$ on the $n$th level $Y_n$ of the Young lattice coincides with the ordinary Plancherel measure of the group $S_n$:
$$P(t \in T : \lambda^n = \lambda) = \frac{\dim^2 \lambda}{n!}.$$
Define $F_n$ to be the $\sigma$-field generated by all the cylindrical sets $T_n$, and let $F = \sigma(\bigcup_{n \ge 1} F_n)$. Thus, we get a fundamental probability space $(T, F, P)$ and an increasing filtration $F_n \subset F_{n+1} \subset \cdots \subset F$.
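The Markov transition rule above is easy to simulate. A minimal sketch (Python, not from the paper; `dim_lambda` and `covers` are my own helper names, with $\dim \lambda$ computed by the hook formula) grows a Plancherel-random diagram box by box and checks that the transition probabilities out of one $\lambda$ sum to 1:

```python
import random
from math import factorial

def dim_lambda(shape):
    """Dimension of the irreducible S_n-representation for partition `shape`,
    computed by the hook length formula."""
    n = sum(shape)
    cols = [sum(1 for r in shape if r > j) for j in range(shape[0])] if shape else []
    hooks = 1
    for i, row in enumerate(shape):
        for j in range(row):
            hooks *= (row - j - 1) + (cols[j] - i - 1) + 1
    return factorial(n) // hooks

def covers(shape):
    """All diagrams Lambda with shape -> Lambda in the Young lattice."""
    out = []
    for i in range(len(shape) + 1):
        row = shape[i] if i < len(shape) else 0
        prev = shape[i - 1] if i > 0 else float("inf")
        if row < prev:  # adding a box in row i keeps rows weakly decreasing
            new = list(shape) + [0] * (i + 1 - len(shape))
            new[i] += 1
            out.append(tuple(r for r in new if r))
    return out

def plancherel_growth(n, rng=random.Random(0)):
    """One path (emptyset, lambda^1, ..., lambda^n): from a diagram with k
    boxes, move to Lambda with probability dim(Lambda) / ((k+1) dim(lambda))."""
    shape = ()
    for k in range(n):
        nxt = covers(shape)
        shape = rng.choices(nxt, weights=[dim_lambda(L) for L in nxt])[0]
    return shape

# The transition probabilities out of lambda = (3,1) (4 boxes, so n + 1 = 5)
# must sum to 1.
lam = (3, 1)
probs = [dim_lambda(L) / (5 * dim_lambda(lam)) for L in covers(lam)]
print(round(sum(probs), 12))       # 1.0
print(sum(plancherel_growth(6)))   # 6
```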
Next, let us turn to the character ratios. Let $\lambda$ be a partition of $n$ chosen from the Plancherel measure of the symmetric group $S_n$, and let $\chi^\lambda(12)$ be the irreducible character parameterized by $\lambda$, evaluated on the transposition $(12)$. The quantity $\chi^\lambda(12)/\dim \lambda$ is called a character ratio and is crucial for analyzing the convergence rate of the random walk on the symmetric group generated by transpositions in Diaconis and Shahshahani (1981). In fact, they proved that the eigenvalues for
this random walk are the character ratios $\chi^\lambda(12)/\dim \lambda$, each occurring with multiplicity $\dim^2 \lambda$. Character ratios on transpositions also play an essential role in work on the moduli space of curves (see Eskin and Okounkov, 2001; Okounkov and Pandharipande, 2003). Moreover, the central limit theorem for $\chi^\lambda(12)/\dim \lambda$ is one major ingredient in the study of central limit theorems for random Young diagrams due to Kerov (1993). Recently, several different proofs of the central limit theorem for $\chi^\lambda(12)/\dim \lambda$ have been found. Ivanov and Olshanski (2002) gave a wonderful development of Kerov's work. Another proof, due to Hora (1998), exploited the fact that the $k$th moment of a Plancherel-distributed character ratio is the chance that the random walk generated by random transpositions is at the identity after $k$ steps. Both of these proofs were essentially combinatorial in nature and used the method of moments.
A more probabilistic approach to the central limit theorem for $\chi^\lambda(12)/\dim \lambda$ appeared in Fulman (2005a), using a technique known as Stein's method, which is fundamentally different from the method of moments. Just recently, Fulman (2005b) developed some general connections between martingales and character ratios of finite groups. As a consequence, he sharpened the convergence rate in a central limit theorem for the character ratio.
For $t = (\lambda^1, \lambda^2, \ldots, \lambda^n, \ldots) \in T$, define $X_n(t) = \binom{n}{2}\,\chi^{\lambda^n}(12)/\dim \lambda^n$. This is a well-defined random variable on the probability space $(T, F, P)$. An elegant discovery, due to Fulman (2005b), is the following:
Theorem 0. Let $(X_n, F_n, n \ge 1)$ be as above. Then $(X_n, F_n, n \ge 1)$ is a martingale, i.e.,
$$E(X_n \mid F_{n-1}) = X_{n-1} \quad a.e.\ P.$$
Moreover, one can compute certain conditional probabilities related to the martingale sequence $X_n$, which is not possible for general martingales. Frobenius (1900) found the explicit formula for $X_n$:
$$X_n = \sum_i \left[\binom{m_i}{2} - \binom{m_i'}{2}\right],$$
where $m_i$ is the length of row $i$ of $\lambda^n$ and $m_i'$ is the length of column $i$ of $\lambda^n$. From Frobenius' formula it follows that if we write $d_n = X_n - X_{n-1}$ for the martingale difference, then
$$d_n = c(x), \qquad (1)$$
where $x$ is the box added to $\lambda^{n-1}$ to obtain $\lambda^n$, and the content $c(x)$ of a box is defined as the column number of the box minus its row number. A nice property of $d_n$, due to Kerov (1996), is
$$E(d_n^2 \mid F_{n-1}) = n - 1 \quad a.e.\ P \text{ for } n \ge 2, \qquad (2)$$
from which we immediately have $E d_n^2 = n - 1$.
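Identities (1) and (2) can be verified by brute force on small diagrams. The sketch below (Python, not from the paper; helper names are mine) computes $X_n$ from Frobenius' formula and checks $d_n = c(x)$ and $E(d_n^2 \mid F_{n-1}) = n - 1$ for $\lambda^{n-1} = (2,1)$, $n = 4$:

```python
from math import comb, factorial

def dim_lambda(shape):
    """dim of the irreducible S_n-representation via the hook length formula."""
    n = sum(shape)
    cols = [sum(1 for r in shape if r > j) for j in range(shape[0])] if shape else []
    hooks = 1
    for i, row in enumerate(shape):
        for j in range(row):
            hooks *= (row - j - 1) + (cols[j] - i - 1) + 1
    return factorial(n) // hooks

def covers(shape):
    """All diagrams obtained from `shape` by adding one box."""
    out = []
    for i in range(len(shape) + 1):
        row = shape[i] if i < len(shape) else 0
        prev = shape[i - 1] if i > 0 else float("inf")
        if row < prev:
            new = list(shape) + [0] * (i + 1 - len(shape))
            new[i] += 1
            out.append(tuple(r for r in new if r))
    return out

def frobenius_X(shape):
    """Frobenius' formula: X_n = sum_i [C(m_i, 2) - C(m_i', 2)]."""
    cols = [sum(1 for r in shape if r > j) for j in range(shape[0])] if shape else []
    return sum(comb(r, 2) for r in shape) - sum(comb(c, 2) for c in cols)

def added_box_content(small, big):
    """Content c(x) = (column) - (row), 1-based, of the single box big \\ small."""
    for i in range(len(big)):
        old = small[i] if i < len(small) else 0
        if big[i] != old:
            return big[i] - (i + 1)

lam, n = (2, 1), 4          # lambda^{n-1} has n - 1 = 3 boxes
for L in covers(lam):
    # (1): the martingale increment is the content of the added box
    assert frobenius_X(L) - frobenius_X(lam) == added_box_content(lam, L)

# (2): E(d_n^2 | lambda^{n-1}) = n - 1, averaging over transition probabilities
e = sum(dim_lambda(L) / (n * dim_lambda(lam)) * added_box_content(lam, L) ** 2
        for L in covers(lam))
print(e)  # 3.0
```

Here the three addable boxes of $(2,1)$ have contents $2, 0, -2$, and the weighted average of their squares is exactly $n - 1 = 3$.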
There is a large literature on limit theorems for martingales; see Hall and Heyde (1980) and Stout (1974). It is on the basis of these connections between martingales and character ratios that Fulman (2005b) obtains a sharp convergence rate in the central limit theorem for $X_n$. In this paper, we are mainly devoted to the study of the almost sure convergence of $X_n$. One of our main results is as follows:
Theorem 1. Let $(X_n, F_n, n \ge 1)$ be as above. Then
$$\limsup_{n\to\infty} \frac{X_n}{\sqrt{2\binom{n}{2}\log\log n}} = 1 \quad a.e.\ P.$$
The proof will be given in the next section. One key technique is to approximate the martingale in an appropriate manner by Brownian motion. This is accomplished by means of a martingale version of the Skorokhod representation theorem. In this way we can actually obtain a stronger approximation result.
Theorem 2. Let $(X_n, F_n, n \ge 1)$ be as above. Then we can redefine $(X_n, F_n, n \ge 1)$, without changing its distribution, on a richer probability space on which there exists a standard Wiener process $\{W(t), t \ge 0\}$ such that
$$X_n - W\!\left(\frac{n(n-1)}{2}\right) = o(n^{3/4}\log^{3/4+\varepsilon} n) \quad a.e.\ P,$$
where $\varepsilon > 0$ is a constant.
From this strong approximation result one can readily derive various further limit theorems. Throughout the paper there appear many strictly positive, finite constants whose specific values are not of interest to us; we denote them by $K_i$ and $K_\nu$.
2. Proofs
We begin with a martingale version of the Skorokhod embedding theorem; see Breiman (1968) for an elegant proof and Philipp and Stout (1975) for applications.
Let $(X_n, F_n, n \ge 1)$ be a martingale. Then there exists a probability space with a Brownian motion $\{W(t), t \ge 0\}$ defined on it and a sequence of non-negative random variables $\tau_i$ such that
$$\{X_n,\ n \ge 1\} \quad \text{and} \quad \left\{W\!\left(\sum_{i=1}^n \tau_i\right),\ n \ge 1\right\}$$
have the same distribution. Hence, without loss of generality, we can redefine
$$X_n = W\!\left(\sum_{i=1}^n \tau_i\right)$$
and keep the same notation. Thus $F_n$ becomes the $\sigma$-field generated by $\{W(\sum_{i=1}^k \tau_i),\ k \le n\}$. Let $A_n$ be the $\sigma$-field generated by $W(t)$, $0 \le t \le \sum_{i=1}^n \tau_i$. Then $F_n \subset A_n$ for $n \ge 1$, and each $\tau_i$ is $A_i$-measurable. Write $d_i = X_i - X_{i-1}$ for the martingale difference. Then for $i \ge 1$,
$$E\tau_i = E d_i^2, \qquad E(\tau_i \mid A_{i-1}) = E(d_i^2 \mid A_{i-1}) = E(d_i^2 \mid F_{i-1}) \quad a.e. \qquad (3)$$
Moreover, for each $\nu > 1$,
$$E\tau_i^\nu \le K_\nu\, E d_i^{2\nu}, \qquad (4)$$
where the constant $K_\nu$ depends only on $\nu$.
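For intuition about the embedding, consider the simplest case of a martingale with Rademacher ($\pm1$) increments: $\tau_i$ is then the first exit time of the Brownian increment from $(-1, 1)$, with $E\tau_i = E d_i^2 = 1$. The following rough sketch (Python, a crude Euler discretization for illustration only, not the construction used in the proof) simulates this:

```python
import random

def skorokhod_tau(rng, dt=1e-3):
    """First exit time of a Brownian path from (-1, 1), simulated by a
    Gaussian walk with step variance dt.  The exit side is the embedded
    Rademacher variable d, and E tau = E d^2 = 1 for true Brownian motion."""
    t, w = 0.0, 0.0
    while abs(w) < 1.0:
        w += rng.gauss(0.0, dt ** 0.5)
        t += dt
    return t, (1.0 if w > 0 else -1.0)

rng = random.Random(1)
samples = [skorokhod_tau(rng) for _ in range(400)]
mean_tau = sum(t for t, _ in samples) / len(samples)
mean_d = sum(d for _, d in samples) / len(samples)
# mean_tau should sit near E d^2 = 1 and mean_d near 0, up to Monte Carlo
# and discretization error
print(abs(mean_tau - 1.0) < 0.3, abs(mean_d) < 0.3)
```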
Applying the above scheme to the character ratios, we get the following:
Lemma 1. Let $(T, F, P)$ and $X_n$ be defined as in Section 1. Then
$$\frac{1}{n^2}\sum_{i=1}^n (\tau_i - E\tau_i) \to 0 \quad a.e.\ P.$$
In view of (3) and (2), we have
$$E(\tau_i \mid A_{i-1}) = E(d_i^2 \mid F_{i-1}) = E\tau_i \quad a.e.\ P.$$
So it is sufficient to prove
$$\frac{1}{n^2}\sum_{i=1}^n (\tau_i - E(\tau_i \mid A_{i-1})) \to 0 \quad a.e.\ P. \qquad (5)$$
First, let us prove
$$\frac{1}{n^2}\sum_{i=1}^n (\tau_i - E(\tau_i \mid A_{i-1})) \to 0 \quad \text{in probability}. \qquad (6)$$
To this end, we need the following lemma due to Kerov (1993).
Lemma 2. Let $\lambda^{n-1}$ be chosen from the Plancherel measure on partitions of size $n - 1$, and let $d_n = X_n - X_{n-1}$ be the martingale difference associated with $\lambda^{n-1}$. Then for any positive real $\nu$,
$$\lim_{n\to\infty} \frac{E d_n^{2\nu}}{n^\nu} = \frac{1}{\nu+1}\binom{2\nu}{\nu}. \qquad (7)$$
Now let $D_i = \tau_i - E(\tau_i \mid A_{i-1})$. The $D_i$'s are obviously a martingale difference sequence, so by (4),
$$E\left(\sum_{i=1}^n D_i\right)^2 = \sum_{i=1}^n E D_i^2 \le \sum_{i=1}^n E\tau_i^2 \le K_2 \sum_{i=1}^n E d_i^4.$$
Thus, by Markov's inequality and Lemma 2, we have for any $\varepsilon > 0$,
$$P\left(\left|\sum_{i=1}^n D_i\right| > \varepsilon n^2\right) \le \frac{K_2 \sum_{i=1}^n E d_i^4}{\varepsilon^2 n^4} \to 0 \quad \text{as } n\to\infty.$$
This proves (6). But to prove the almost sure convergence in (5), we need the following large deviation inequality for a martingale difference sequence:
Lemma 3. Let $(d_i, F_i, i \ge 1)$ be a martingale difference sequence. Then for each $x > 0$, all sequences of positive numbers $(c_i)$, and any $0 < \alpha < 1$, we have
$$P\left(\left|\sum_{i=1}^n d_i\right| > x\right) \le 2\exp\left(-\frac{\alpha^2 x^2}{8\sum_{i=1}^n c_i^2}\right) + \sum_{i=1}^n P(|d_i| > c_i) + \frac{1}{x(1-\alpha)}\sum_{i=1}^n (E d_i^2)^{1/2}\,(P(|d_i| \ge c_i))^{1/2}. \qquad (8)$$
In particular, when the martingale differences are conditionally symmetric, i.e., $d_i$ and $-d_i$ have the same conditional distribution given $F_{i-1}$, we have
$$P\left(\left|\sum_{i=1}^n d_i\right| > x\right) \le 2\exp\left(-\frac{x^2}{2\sum_{i=1}^n c_i^2}\right) + \sum_{i=1}^n P(|d_i| > c_i). \qquad (9)$$
This is essentially a generalization, due to Godbole and Hitczenko (1998), of the classic Azuma inequality; its main appeal is the simplicity of its statement and proof and the ease of its applications. For the sake of convenience, we give the proof below.
Proof of Lemma 3. For any $x$ and $(c_i)$ as above we have
$$P\left(\left|\sum_{i=1}^n d_i\right| > x\right) \le P\left(\bigcup_{i=1}^n \{|d_i| > c_i\}\right) + P\left(\left|\sum_{i=1}^n d_i 1_{(|d_i| \le c_i)}\right| > x\right). \qquad (10)$$
By adding and subtracting the conditional expectation $E(d_i 1_{(|d_i| \le c_i)} \mid F_{i-1})$ inside the second probability, we see that the second term on the right-hand side above is no larger than
$$P\left(\left|\sum_{i=1}^n \left[d_i 1_{(|d_i| \le c_i)} - E(d_i 1_{(|d_i| \le c_i)} \mid F_{i-1})\right]\right| \ge \alpha x\right) + P\left(\left|\sum_{i=1}^n E(d_i 1_{(|d_i| \le c_i)} \mid F_{i-1})\right| \ge (1-\alpha)x\right).$$
Note that a summand appearing in the first probability is a martingale difference bounded by $2c_i$. Therefore, applying Azuma's inequality to the first term and Chebyshev's inequality to the second, we get that the above is in turn bounded by
$$2\exp\left(-\frac{\alpha^2 x^2}{8\sum_{i=1}^n c_i^2}\right) + \frac{1}{x(1-\alpha)}\sum_{i=1}^n E\left|E(d_i 1_{(|d_i| \le c_i)} \mid F_{i-1})\right|. \qquad (11)$$
The crucial observation is the following obvious fact: since the $d_i$'s are martingale differences, $E(d_i 1_{(|d_i| \le c_i)} \mid F_{i-1}) = -E(d_i 1_{(|d_i| > c_i)} \mid F_{i-1})$, and therefore
$$E\left|E(d_i 1_{(|d_i| \le c_i)} \mid F_{i-1})\right| = E\left|E(d_i 1_{(|d_i| > c_i)} \mid F_{i-1})\right| \le E|d_i| 1_{(|d_i| > c_i)} \le (E d_i^2)^{1/2}(P(|d_i| > c_i))^{1/2}. \qquad (12)$$
Combining (10)–(12), we have (8). As for (9), one can notice that if the $d_i$ are conditionally symmetric, then the $d_i 1_{(|d_i| \le c_i)}$ are themselves martingale differences, so centering is not needed. $\square$
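To see inequality (9) at work in the simplest setting: for Rademacher differences with $c_i = 1$ the tail-probability sum vanishes and (9) reduces to the classical Azuma bound $2\exp(-x^2/2n)$. A quick empirical comparison (Python sketch, not from the paper):

```python
import math
import random

def azuma_bound(x, c):
    """Right-hand side of (9) for conditionally symmetric differences
    with |d_i| <= c_i surely (so the tail-probability sum vanishes)."""
    return 2.0 * math.exp(-x * x / (2.0 * sum(ci * ci for ci in c)))

rng = random.Random(7)
n, x, trials = 100, 25.0, 2000
c = [1.0] * n
hits = sum(abs(sum(rng.choice((-1, 1)) for _ in range(n))) > x
           for _ in range(trials))
emp = hits / trials
# the empirical tail probability should respect the bound
print(emp <= azuma_bound(x, c))
```

Here the bound is $2e^{-625/200} \approx 0.088$, while the true tail probability of a simple random walk at $2.5$ standard deviations is roughly an order of magnitude smaller.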
We next apply Lemma 3 to the $D_i$ defined above and to the $d_i$ defined in Section 1.
Proof of Lemma 1. Since $E|D_i|^\nu \le K_\nu E d_i^{2\nu}$, we have
$$P(|D_i| > c_i) \le \frac{E|D_i|^\nu}{c_i^\nu} \le \frac{K_\nu E d_i^{2\nu}}{c_i^\nu}.$$
Fix $\varepsilon > 0$. Taking $x = \varepsilon n^2$, $\alpha = \frac{1}{2}$, $0 < \delta < \frac{1}{3}$, and $c_i = n^{1+\delta}$, $1 \le i \le n$, we have for $\nu > 0$ with $\delta\nu > 2$,
$$P\left(\left|\sum_{i=1}^n D_i\right| > \varepsilon n^2\right) \le 2\exp\left(-\frac{\varepsilon^2 n^4}{32 n^{3+3\delta}}\right) + \sum_{i=1}^n \frac{K_\nu E d_i^{2\nu}}{n^{(1+\delta)\nu}} + \frac{2}{\varepsilon n^2}\sum_{i=1}^n (E d_i^2)^{1/2}\left(\frac{K_\nu E d_i^{2\nu}}{n^{(1+\delta)\nu}}\right)^{1/2}. \qquad (13)$$
From (7) we know that there is a positive constant $K_\nu$ such that
$$E d_n^{2\nu} \le K_\nu n^\nu, \qquad n \ge 1.$$
Thus we see from (13) that
$$\sum_{n=1}^\infty P\left(\left|\sum_{i=1}^n D_i\right| > \varepsilon n^2\right) < \infty.$$
Lemma 1 now follows immediately from the Borel–Cantelli lemma. $\square$
When we apply Lemma 3 to the $d_i$, we get a large deviation bound for the character ratios $X_n$. Since $d_i$ is conditionally symmetric under the Plancherel measure, we have by (9)
$$P\left(|X_n| > t\sqrt{\binom{n}{2}}\right) \le 2\exp\left(-\frac{t^2\binom{n}{2}}{2\sum_{i=1}^n c_i^2}\right) + \sum_{i=1}^n P(|d_i| > c_i).$$
Now let $c_i = 2\sqrt{n} + n^{1/6}t$, $1 \le i \le n$. Note that, by (1), $|d_i| \le \max\{m_1, m_1'\}$, where $m_1$ is the length of the first row of $\lambda^i$ and $m_1'$ the length of the first column of $\lambda^i$. We will use the following very delicate tail probability estimate, due to Ledoux (2004), for the length of longest increasing subsequences.
Lemma 4. Let $L_n(\sigma)$ be the length of the longest increasing subsequence in a permutation $\sigma$ of size $n$ taken uniformly at random from the symmetric group $S_n$. For any $n \ge 1$ and $0 < \varepsilon \le 1$,
$$P(L_n \ge 2\sqrt{n}(1+\varepsilon)) \le K_1 \exp(-K_2 \sqrt{n}\,\varepsilon^{3/2}),$$
where $K_1, K_2 > 0$ are numerical constants.
Note that, according to the Robinson–Schensted algorithm, when $\lambda$ is chosen from the Plancherel measure of the symmetric group $S_n$, both $m_1$ and $m_1'$ have the same distribution as the length of the longest increasing subsequence in a random permutation of size $|\lambda|$. Applying Lemma 4,
we have
$$P\left(|X_n| > t\sqrt{\binom{n}{2}}\right) \le 2\exp\left(-\frac{t^2}{128(1 + t^2/2n^{2/3})}\right) + 2nK_1\exp(-K_2 t^{3/2}).$$
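The link between $m_1$ and longest increasing subsequences can be seen computationally: patience sorting computes $L_n(\sigma)$ in $O(n \log n)$, and for a uniform permutation the value concentrates near $2\sqrt{n}$, as Lemma 4 quantifies. A sketch (Python, not from the paper; helper names are mine):

```python
import bisect
import random

def lis_length(seq):
    """Length of the longest strictly increasing subsequence, by patience
    sorting: tails[k] is the smallest possible tail of an increasing
    subsequence of length k + 1."""
    tails = []
    for v in seq:
        k = bisect.bisect_left(tails, v)
        if k == len(tails):
            tails.append(v)
        else:
            tails[k] = v
    return len(tails)

print(lis_length([3, 1, 4, 1, 5, 9, 2, 6]))  # 4

rng = random.Random(3)
n = 400
perm = list(range(n))
rng.shuffle(perm)
# For a Plancherel-random diagram of size n, m_1 has this distribution;
# it sits near 2 * sqrt(n) = 40 for n = 400.
print(lis_length(perm))
```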
Lemma 5.
$$\frac{W\!\left(\sum_{i=1}^n \tau_i\right) - W\!\left(\sum_{i=1}^n E\tau_i\right)}{\sqrt{2\binom{n}{2}\log\log n}} \to 0 \quad a.e.\ P \text{ as } n\to\infty.$$
Proof. It follows from the standard argument in strong approximation theory. By Lemma 1, we can write
$$\sum_{i=1}^n \tau_i = \sum_{i=1}^n E\tau_i + \delta_n = \binom{n}{2} + \delta_n,$$
where $\delta_n = o\!\left(\binom{n}{2}\right)$. Thus, for any $\varepsilon > 0$, there is a subset $\Omega_\varepsilon$ with $P(\Omega_\varepsilon) = 1$, where
$$\Omega_\varepsilon = \bigcup_{m=1}^\infty \bigcap_{n=m}^\infty \left\{|\delta_n| \le \varepsilon\binom{n}{2}\right\}.$$
Let
$$T = \binom{n}{2}, \qquad a_T = \varepsilon T, \qquad b_T = \left(2a_T\left[\log\frac{T}{a_T} + \log\log T\right]\right)^{1/2}.$$
Then on the event $\Omega_\varepsilon$,
$$\frac{\left|W\!\left(\binom{n}{2} + \delta_n\right) - W\!\left(\binom{n}{2}\right)\right|}{\sqrt{2\binom{n}{2}\log\log n}} \le \sup_{0 \le s \le a_T}\frac{|W(T+s) - W(T)|}{b_T}\cdot\frac{b_T}{\sqrt{2\binom{n}{2}\log\log n}}.$$
In view of Theorem 1.2.1 of Csörgő and Révész (1981), we have
$$\limsup_{T\to\infty}\ \sup_{0 \le s \le a_T}\frac{|W(T+s) - W(T)|}{b_T} = 1 \quad a.e. \qquad (14)$$
Thus, it is easy to see from (14) that
$$\limsup_{n\to\infty}\frac{\left|W\!\left(\binom{n}{2} + \delta_n\right) - W\!\left(\binom{n}{2}\right)\right|}{\sqrt{2\binom{n}{2}\log\log n}} \le (2\varepsilon)^{1/2}.$$
Since $\varepsilon$ is arbitrary, we conclude the proof of Lemma 5. $\square$
Lemma 5 is now good enough to yield the law of the iterated logarithm for character ratios. The following law of the iterated logarithm for Brownian motion is well known; we refer the reader to Csörgő and Révész (1981) for details.
Lemma 6.
$$\limsup_{n\to\infty}\frac{W(n)}{\sqrt{2n\log\log n}} = 1 \quad a.e.$$
We are now in a position to complete the proof of Theorem 1.
Proof of Theorem 1. Combining Lemmas 5 and 6, we get
$$\limsup_{n\to\infty}\frac{W\!\left(\sum_{i=1}^n \tau_i\right)}{\sqrt{2\binom{n}{2}\log\log n}} = 1 \quad a.e.$$
If we can redefine $(X_n, n \ge 1)$, without changing their distributions, on a new and richer probability space, then the above law of the iterated logarithm also holds for the $X_n$'s. This completes the proof of Theorem 1. $\square$
We remark that one can make use of Theorem 2.1 of Shao (1993) to refine the rate of convergence to 0 in Lemma 5 above, and so establish Theorem 2.
To conclude, we remark that Fulman's martingale argument for the central limit theorem of character ratios also yields convergence of moments of any order to the corresponding moments of the normal distribution. To illustrate, we use Burkholder's martingale-theoretic generalization of Rosenthal's inequality:
$$E\left|\sum_{1\le i\le n} d_i\right|^p \le K_p\left(E\left(\sum_{1\le i\le n} E(d_i^2 \mid F_{i-1})\right)^{p/2} + \sum_{1\le i\le n} E|d_i|^p\right).$$
Since $E(d_i^2 \mid F_{i-1}) = i - 1$ (independent of $\lambda^{i-1}$), it follows that
$$E\left|\sum_{1\le i\le n} d_i\right|^p \le K_p n^p.$$
Thus $X_n/n$ is uniformly integrable of order $p$ for any positive real $p$. Consequently, taking into account the central limit theorem for the $X_n$'s, we have
$$\lim_{n\to\infty}\frac{E X_n^p}{\binom{n}{2}^{p/2}} = \begin{cases} (2k-1)!! & p = 2k, \\ 0 & p = 2k-1. \end{cases}$$
Thus, we recover the result due to Kerov (1993), Ivanov and Olshanski (2002), and Hora (1998).
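For $p = 2$ the limit is in fact an identity at every $n$: $E X_n^2 = \sum_{i=2}^n (i-1) = \binom{n}{2}$. The sketch below (Python, not from the paper; helper names are mine) checks this exactly for $n = 4$ by summing over $Y_4$ with Plancherel weights $\dim^2\lambda/n!$:

```python
from math import comb, factorial

def dim_lambda(shape):
    """dim of the irreducible representation, by the hook length formula."""
    n = sum(shape)
    cols = [sum(1 for r in shape if r > j) for j in range(shape[0])] if shape else []
    hooks = 1
    for i, row in enumerate(shape):
        for j in range(row):
            hooks *= (row - j - 1) + (cols[j] - i - 1) + 1
    return factorial(n) // hooks

def partitions(n, maxpart=None):
    """All partitions of n as weakly decreasing tuples."""
    maxpart = n if maxpart is None else maxpart
    if n == 0:
        yield ()
        return
    for k in range(min(n, maxpart), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def frobenius_X(shape):
    """Frobenius' formula: X_n = sum_i [C(m_i, 2) - C(m_i', 2)]."""
    cols = [sum(1 for r in shape if r > j) for j in range(shape[0])] if shape else []
    return sum(comb(r, 2) for r in shape) - sum(comb(c, 2) for c in cols)

# E X_n^2 under the Plancherel measure dim^2(lambda)/n! should equal C(n, 2).
n = 4
second_moment = sum(dim_lambda(p) ** 2 / factorial(n) * frobenius_X(p) ** 2
                    for p in partitions(n))
print(second_moment, comb(n, 2))  # 6.0 6
```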
Acknowledgements
The author would like to express his gratitude to J. Fulman and Q.M. Shao for insightful comments.
References
Breiman, L., 1968. Probability. Addison-Wesley, Reading, MA.
Csörgő, M., Révész, P., 1981. Strong Approximations in Probability and Statistics. Academic Press, New York.
Diaconis, P., Shahshahani, M., 1981. Generating a random permutation with random transpositions. Z. Wahr. Verw.
Gebiete 57, 159–179.
Eskin, A., Okounkov, A., 2001. Asymptotics of branched coverings of a torus and volumes of moduli spaces of
holomorphic differentials. Invent. Math. 145, 59–103.
Frobenius, F., 1900. Über die Charaktere der symmetrischen Gruppe. Sitz. König. Preuss. Akad. Wissen., 516–534. Reprinted in: Frobenius, F., 1968. Gesammelte Abhandlungen III. Springer, Heidelberg, pp. 148–166.
Fulman, J., 2005a. Stein’s method and Plancherel measure of the symmetric group. Trans. Amer. Math. Soc. 357,
555–570.
Fulman, J., 2005b. Martingales and character ratios. Trans. Amer. Math. Soc., to appear (http://arXiv.org/abs/math/0402409).
Godbole, A., Hitczenko, P., 1998. Beyond the method of bounded differences. DIMACS Ser. Discrete Math. Theoret. Comput. Sci. 41, 43–57.
Hall, P., Heyde, C.C., 1980. Martingale Limit Theory and its Application. Academic Press, New York.
Hora, A., 1998. Central limit theorem for the adjacency operators on the infinite symmetric group. Comm. Math. Phys.
195, 405–416.
Ivanov, V., Olshanski, G., 2002. Kerov’s central limit theorem for the Plancherel measure on Young diagrams. In:
Symmetric Functions 2001: Surveys of Developments and Perspectives. Kluwer Academic Publishers, Dordrecht.
Kerov, S.V., 1993. Gaussian limit for the Plancherel measure of the symmetric group. C. R. Acad. Sci. Paris Sér. I 316, 303–308.
Kerov, S.V., 1996. The boundary of Young lattice and random Young tableaux. Formal power series and algebraic
combinatorics. DIMACS Ser. Discrete Math. Theoret. Comput. Sci. 24, 133–158.
Ledoux, M., 2004. Differential operators and spectral distributions of invariant ensembles from the classical orthogonal polynomials. The discrete case. Preprint.
Okounkov, A., Pandharipande, R., 2003. Gromov–Witten theory, Hurwitz numbers, and matrix models, I. Preprint math.AG/0101147 at http://xxx.lanl.gov.
Philipp, W., Stout, W., 1975. Almost sure invariance principles for partial sums of weakly dependent random variables.
Mem. Amer. Math. Soc. 161.
Shao, Q.M., 1993. Almost sure invariance principles for mixing sequences of random variables. Stochastic Processes Appl. 48,
319–334.
Stout, W.F., 1974. Almost Sure Convergence. Academic Press, New York.