
Acta Mathematica Sinica, English Series
Jan., 2011, Vol. 27, No. 1, pp. 1–9
Http://www.ActaMath.com
On increasing subsequences of minimal Erdös-Szekeres permutations
Zhong Gen SU∗
Department of Mathematics, Zhejiang University
Hangzhou, 310027, China
E-mail : suzhonggen@zju.edu.cn
Abstract  Let π be a minimal Erdös-Szekeres permutation of 1, 2, …, n², and let l_{n,k} be the length of the longest increasing subsequence in the segment (π(1), …, π(k)). Under the uniform measure we establish an exponentially decaying bound on the upper tail probability of l_{n,k}, and as a consequence we obtain a complete convergence result, which improves a recent result of Romik. We also give a precise lower exponential tail for l_{n,k}.
Keywords Large deviation; Minimal Erdös-Szekeres permutation; Robinson-Schensted-Knuth correspondence; Young tableaux
MR(2000) Subject Classification  60C05, 60F10

1  Introduction and Main Results
In this paper, we consider a class of permutations of 1, 2, …, n² which have a certain minimality property with respect to the length of their monotone subsequences. The celebrated Erdös-Szekeres theorem implies that a permutation π = (π(1), π(2), …, π(n²)) must contain a monotone (either increasing or decreasing) subsequence π(i_1), π(i_2), …, π(i_n), i_1 < i_2 < … < i_n. Our interest is in those permutations whose longest increasing and longest decreasing subsequences both have length exactly n. Following Pittel and Romik [1], such permutations are called minimal Erdös-Szekeres permutations.
Denote by E_n the set of all minimal Erdös-Szekeres permutations of 1, 2, …, n². A remarkable result is

|E_n| = ( (n²)! / ( 1 · 2² · 3³ ⋯ n^n · (n+1)^{n−1} · (n+2)^{n−2} ⋯ (2n−1) ) )².    (1.1)

This can be seen from the well-known Robinson-Schensted-Knuth (RSK) correspondence and the hook formula (cf. Romik [2]).
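As a quick illustration (not part of the original argument), formula (1.1) can be checked numerically against a brute-force count for tiny n; the helper names below are our own, and the brute force is only feasible for n ≤ 3.

```python
from math import factorial
from itertools import permutations
from bisect import bisect_left

def minimal_es_count(n):
    """|E_n| via (1.1): the square of (n^2)! divided by the hook-length
    product of the n x n square, 1^1 * 2^2 * ... * n^n * ... * (2n-1)^1."""
    hook_product = 1
    for h in range(1, 2 * n):              # hook lengths 1, ..., 2n-1
        mult = h if h <= n else 2 * n - h  # multiplicity of hook length h in the square
        hook_product *= h ** mult
    return (factorial(n * n) // hook_product) ** 2

def lis_len(seq):
    """Length of the longest increasing subsequence (patience sorting)."""
    piles = []
    for v in seq:
        i = bisect_left(piles, v)
        piles[i:i + 1] = [v]               # replace a pile top, or start a new pile
    return len(piles)

def brute_force_count(n):
    """Directly count permutations of 1..n^2 whose LIS and LDS both equal n."""
    return sum(1 for p in permutations(range(1, n * n + 1))
               if lis_len(p) == n and lis_len([-v for v in p]) == n)

if __name__ == "__main__":
    for n in (1, 2, 3):
        print(n, minimal_es_count(n), brute_force_count(n))  # the two counts agree
```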
Now equip E_n with the uniform probability measure P_n. It seems interesting to study the behavior of the typical minimal Erdös-Szekeres permutation. In contrast with ordinary random permutations, there has been no systematic study of Erdös-Szekeres permutations in the literature. An initial step in this direction was taken in [1, 2]. As an application of their limit shape result, Pittel and Romik [1] proved a law of large numbers for the length of the longest increasing subsequence when just an initial segment of a random minimal Erdös-Szekeres permutation is read.

Received December 22, 2008, Accepted December 23, 2009
* Supported by NSFC grant No. 10671176 and ZJNSFC grant No. R6090034
Let 0 < α ≤ 1/2 and let k = k_n → ∞ in such a way that α_k := k/n² → α. For π ∈ E_n, let l_{n,k} be the length of the longest increasing subsequence in the segment π(1), …, π(k). Then for all η > 0,

P_n( π ∈ E_n : | l_{n,k}/n − 2√(α(1−α)) | > η ) → 0,  as n → ∞.    (1.2)

See [1] for a slightly stronger statement including a rate of convergence estimate.
The aim of this paper is to strengthen (1.2) to complete convergence by giving an exponentially decaying bound for the upper tail probability. Our main result reads as follows.

Theorem 1  Assume α < 1/2. Let τ_n = n^{−2/3+ε} for some 0 < ε < 2/3. Then for any θ > 0 and all sufficiently large n ≥ n_0 (independent of θ),

P_n( π ∈ E_n : l_{n,k} − 2√(α_k(1−α_k)) n ≥ θ n √τ_n ) ≤ 2 exp( −θ^{3/2} + θ/n^{ε/6} ).    (1.3)
We also establish a precise lower exponential tail for l_{n,k}. It is expected that l_{n,k} exhibits quite different behavior in its upper and lower tails. To state the result, we introduce some additional notation. For 0 < α < 1, let F_α be the class of weakly decreasing functions f : [0, 1] → [0, 1] such that ∫_0^1 f(x) dx = α. Set

I(f) = ∫_0^1 ∫_0^1 log | f(x) − y + f^{−1}(y) − x | dx dy.

Let, for 0 < c < 2√(α(1−α)),

H_α(c) = inf{ I(f) : f ∈ F_α, f(0) = 2√(α(1−α)) − c } + H(α) + C,

where H(α) = −α log α − (1−α) log(1−α) and C = 3/2 − 2 log 2.
Theorem 2  Under the above notation we have

lim_{n→∞} (1/n²) log P_n( π ∈ E_n : l_{n,k} ≤ (2√(α(1−α)) − c) n ) = −H_α(c).    (1.4)

As a consequence of Theorems 1 and 2, we immediately have

Corollary  Assume 0 < α < 1/2. Then for all η > 0,

Σ_{n=1}^∞ P_n( π ∈ E_n : | l_{n,k}/n − 2√(α(1−α)) | > η ) < ∞.    (1.5)
The proofs of Theorems 1 and 2 will be given in the next section. The basic idea is to use the Schensted algorithm [3] to express the probability distribution of l_{n,k} in terms of random square Young tableaux. From the RSK correspondence it follows that to each π ∈ E_n there corresponds a pair (T, T′) of square n × n Young tableaux. Moreover, if we let λ_T^k be the sub-diagram of the square comprised of those boxes where the entry of T is at most k, then by the Schensted algorithm the length l_{n,k} is equal to the length λ_T^k(1) of the first row of λ_T^k. Let T_n be the set of n × n square Young tableaux, and let P̄_n be the uniform probability measure on T_n. Then the distribution of l_{n,k} under (E_n, P_n) is equal to the distribution of λ_T^k(1) under (T_n, P̄_n), namely,

l_{n,k} =_d λ_T^k(1).    (1.6)
Based on such an elegant identity, we can, and do in the sequel, reformulate our main results
(1.3) and (1.4) in terms of λkT (1). Throughout Section 2, we will carry out our calculation only
with respect to probability P̄n and expectation Ēn .
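As a small, self-contained illustration of the identity (1.6) (the function names and the sample permutation are our own choices, not part of the paper), Schensted row insertion applied to an initial segment reproduces exactly the prefix LIS lengths:

```python
from bisect import bisect_left

def first_row_after_each_insertion(perm):
    """Schensted row insertion: for each k, the first-row length of the shape
    obtained after inserting perm[0], ..., perm[k-1]."""
    rows = []                          # insertion tableau as a list of increasing rows
    lengths = []
    for v in perm:
        x, placed = v, False
        for row in rows:
            i = bisect_left(row, x)
            if i == len(row):          # x exceeds every entry of this row: append it
                row.append(x)
                placed = True
                break
            row[i], x = x, row[i]      # bump the smallest entry larger than x to the next row
        if not placed:
            rows.append([x])           # the last bumped entry starts a new row
        lengths.append(len(rows[0]))
    return lengths

def lis_of_prefixes(perm):
    """Longest-increasing-subsequence length of every prefix (patience sorting)."""
    piles, lengths = [], []
    for v in perm:
        i = bisect_left(piles, v)
        piles[i:i + 1] = [v]
        lengths.append(len(piles))
    return lengths

if __name__ == "__main__":
    # A minimal Erdos-Szekeres permutation of 1..9 (LIS = LDS = 3):
    pi = (3, 2, 1, 6, 5, 4, 9, 8, 7)
    assert first_row_after_each_insertion(pi) == lis_of_prefixes(pi)
    print(lis_of_prefixes(pi))         # [1, 1, 1, 2, 2, 2, 3, 3, 3]
```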
The technique used in the proof of Theorem 1 is to estimate the moment generating function Ē_n exp(x λ_T^k(1)) by a conditioning device. This technique was already used by Kim [4] for increasing subsequences of ordinary random permutations. See also Pittel and Romik [1] for the upper bound of the expected value of λ_T^k(1). Interestingly, we need good control of the lower bound of λ_T^k(1) in the computation of the upper tail.

The proof of Theorem 2 is a direct application of Lemma 1 of Pittel and Romik [1], which estimates the probability that the sub-diagram λ_T^k has a given shape.
2  Proofs

Let us start with the proof of Theorem 1. To this end, we need a lemma to estimate the moment generating function of λ_T^k(1).

Let m = [τ_n n²], where τ_n is as in Theorem 1 and [·] stands for the integer part, and let δ = n^{−(1−ε)/3}. Define

a_{n,j} = 2 sinh(x/2) / ( n √( (j/n²)(1 − j/n²) ) ),    1 ≤ j ≤ m,

a_{n,j} = [ 2 sinh(x/2) / ( n √( (j/n²)(1 − j/n²) ) ) ] ( 1 − ( 2√( (j/n²)(1 − j/n²) ) − δ − 1/n )² )^{1/2},    m+1 ≤ j ≤ k,    (2.1)

and, for ĉ > 0 and κ > 1,

b_n = exp(−ĉ n^κ).    (2.2)
Lemma 1  For x > 0 such that max_{j≥1} a_{n,j} < 1, we have

Ē_n e^{x λ_T^j(1)} ≤ [ 1/(1 − a_{n,j}) ] Ē_n e^{x λ_T^{j−1}(1)},    1 ≤ j ≤ m,

Ē_n e^{x λ_T^j(1)} ≤ [ (1 + b_n)/(1 − a_{n,j}) ] Ē_n e^{x λ_T^{j−1}(1)},    m+1 ≤ j ≤ k.    (2.3)

Proof  Let, for j ≥ 1,

I_{n,j} = λ_T^j(1) − λ_T^{j−1}(1).

Trivially, I_{n,j} = 1 if λ_T^j is obtained from λ_T^{j−1} by adding a box to the first row, and I_{n,j} = 0 otherwise. Then using the total expectation formula, we have for any x,

Ē_n e^{x λ_T^j(1)} = Ē_n [ Ē_n( e^{x λ_T^j(1)} | λ_T^{j−1} ) ] = Ē_n [ e^{x λ_T^{j−1}(1)} Ē_n( e^{x I_{n,j}} | λ_T^{j−1} ) ].    (2.4)
Conditioning on λ_T^{j−1}, it obviously follows that

Ē_n( e^{x I_{n,j}} | λ_T^{j−1} ) = 1 + (e^x − 1) P̄_n( I_{n,j} = 1 | λ_T^{j−1} ).    (2.5)

Inserting (2.5) into (2.4) yields

Ē_n e^{x λ_T^j(1)} = Ē_n e^{x λ_T^{j−1}(1)} + (e^x − 1) Ē_n [ e^{x λ_T^{j−1}(1)} P̄_n( I_{n,j} = 1 | λ_T^{j−1} ) ].    (2.6)

In turn, it easily follows via the Cauchy-Schwarz inequality that

Ē_n [ e^{x λ_T^{j−1}(1)} P̄_n( I_{n,j} = 1 | λ_T^{j−1} ) ] ≤ ( Ē_n e^{x λ_T^{j−1}(1)} )^{1/2} ( Ē_n [ e^{x λ_T^{j−1}(1)} P̄_n²( I_{n,j} = 1 | λ_T^{j−1} ) ] )^{1/2}.    (2.7)
Next we shall estimate the second expectation on the right-hand side of (2.7), using an argument similar to Lemma 10 of Pittel and Romik [1]. Let Y_{n,j} be the set of Young diagrams of area j contained in the n × n square. For a diagram λ_0 ∈ Y_{n,j−1}, denote by λ_0^* the diagram obtained from λ_0 by adding a box to the first row. For a Young diagram λ, denote by d(λ) the number of Young tableaux of shape λ. Then we have

P̄_n²( I_{n,j} = 1 | λ_T^{j−1} = λ_0 ) P̄_n( λ_T^{j−1} = λ_0 )
  = P̄_n²( I_{n,j} = 1, λ_T^{j−1} = λ_0 ) P̄_n^{−1}( λ_T^{j−1} = λ_0 )
  = [ d²(λ_0) d²(□_n \ λ_0^*) / d²(□_n) ] × [ d(□_n) / ( d(λ_0) d(□_n \ λ_0) ) ]
  = [ d(λ_0^*) d(□_n \ λ_0^*) / d(□_n) ] × [ d(λ_0) d(□_n \ λ_0^*) / ( d(□_n \ λ_0) d(λ_0^*) ) ],

where □_n stands for the n × n square and □_n \ λ_0 is thought of as an ordinary diagram from the opposite corner of the square.

Now note the amusing identity due to Pittel and Romik [1] (see (71) therein):

d(λ_0) d(□_n \ λ_0^*) / ( d(□_n \ λ_0) d(λ_0^*) ) = ( n² − λ_0²(1) ) / ( j(n² − j + 1) ).

Then we have

Ē_n [ e^{x λ_T^{j−1}(1)} P̄_n²( I_{n,j} = 1 | λ_T^{j−1} ) ]
  = Σ_{λ_0 ∈ Y_{n,j−1}} e^{x λ_0(1)} P̄_n²( I_{n,j} = 1 | λ_T^{j−1} = λ_0 ) P̄_n( λ_T^{j−1} = λ_0 )
  = e^{−x} Σ_{λ_0 ∈ Y_{n,j−1}} e^{x λ_0^*(1)} [ d(λ_0^*) d(□_n \ λ_0^*) / d(□_n) ] × ( n² − λ_0²(1) ) / ( j(n² − j + 1) ).
This is nearly an average over Y_{n,j}; in fact, slightly less, since not every λ ∈ Y_{n,j} is of the form λ_0^* for some λ_0 ∈ Y_{n,j−1}. So we must have

Ē_n [ e^{x λ_T^{j−1}(1)} P̄_n²( I_{n,j} = 1 | λ_T^{j−1} ) ] ≤ [ e^{−x} / ( j(n² − j) ) ] Ē_n [ e^{x λ_T^j(1)} ( n² − (λ_T^j(1) − 1)² ) ].    (2.8)

For 1 ≤ j ≤ m, we use the trivial upper bound n² − (λ_T^j(1) − 1)² ≤ n² to get

Ē_n [ e^{x λ_T^{j−1}(1)} P̄_n²( I_{n,j} = 1 | λ_T^{j−1} ) ] ≤ [ n² / ( j(n² − j) ) ] e^{−x} Ē_n e^{x λ_T^j(1)}.    (2.9)
Combining (2.6), (2.7) and (2.9) yields

Ē_n e^{x λ_T^j(1)} ≤ Ē_n e^{x λ_T^{j−1}(1)} + [ 2 sinh(x/2) / ( n √( (j/n²)(1 − j/n²) ) ) ] Ē_n e^{x λ_T^j(1)},    (2.10)

from which we obtain (2.3) for 1 ≤ j ≤ m.

For j ≥ m + 1, we adapt a non-trivial lower bound for λ_T^j(1) given by Lemma 9 of Pittel and Romik [1]. Specifically,

P̄_n( T ∈ T_n : min_{τ_n n² ≤ j ≤ n²/2} ( λ_T^j(1) − 2n √( (j/n²)(1 − j/n²) ) ) ≤ −δn ) ≤ exp(−ĉ n^κ),    (2.11)

where κ > 1 and where, here and in the sequel, ĉ > 0 is a numerical constant whose value may change from line to line.
For simplicity, write A_n for the event on the left-hand side of (2.11). Then

Ē_n [ e^{x λ_T^j(1)} ( n² − (λ_T^j(1) − 1)² ) ]
  ≤ n² ( 1 − ( 2√( (j/n²)(1 − j/n²) ) − δ − 1/n )² ) Ē_n e^{x λ_T^j(1)} + Ē_n [ e^{x λ_T^j(1)} ( n² − (λ_T^j(1) − 1)² ) I_{A_n} ]
  ≤ n² ( 1 − ( 2√( (j/n²)(1 − j/n²) ) − δ − 1/n )² ) Ē_n e^{x λ_T^j(1)} + exp(−ĉ n^κ),

from which it easily follows that

( Ē_n [ e^{x λ_T^j(1)} ( n² − (λ_T^j(1) − 1)² ) ] )^{1/2}
  ≤ n ( 1 − ( 2√( (j/n²)(1 − j/n²) ) − δ − 1/n )² )^{1/2} ( Ē_n e^{x λ_T^j(1)} )^{1/2} + exp(−ĉ n^κ).    (2.12)
Analogously to (2.10), combining (2.6), (2.7) and (2.12) yields

Ē_n e^{x λ_T^j(1)} ≤ Ē_n e^{x λ_T^{j−1}(1)} + exp(−ĉ n^κ) Ē_n e^{x λ_T^{j−1}(1)}
  + [ 2 sinh(x/2) / ( n √( (j/n²)(1 − j/n²) ) ) ] ( 1 − ( 2√( (j/n²)(1 − j/n²) ) − δ − 1/n )² )^{1/2} Ē_n e^{x λ_T^j(1)}.

Hence we have, for j ≥ m + 1,

Ē_n e^{x λ_T^j(1)} ≤ [ (1 + b_n) / (1 − a_{n,j}) ] Ē_n e^{x λ_T^{j−1}(1)}.

This completes the proof of Lemma 1.
Proof of Theorem 1  Without loss of generality, we may and do assume 0 < θ ≤ n^{1/3−ε/2}, since λ_T^k(1) ≤ n. Let x = θ^{1/2}/(n τ_n^{1/2}). Obviously, the hypothesis of Lemma 1 is satisfied for all sufficiently large n.
Repeatedly using (2.3), we have

Ē_n e^{x λ_T^k(1)} ≤ ∏_{j=m+1}^k [ (1 + b_n)/(1 − a_{n,j}) ] Ē_n e^{x λ_T^m(1)} ≤ exp( k b_n − Σ_{j=1}^k log(1 − a_{n,j}) ).    (2.13)

It remains to estimate the logarithmic sum in the exponent on the right-hand side of (2.13). Note that max_{1≤j≤k} a_{n,j} ≤ 1/2 and use log(1 − z) ≥ −z − z² for 0 < z ≤ 1/2. We have

−Σ_{j=1}^k log(1 − a_{n,j}) ≤ Σ_{j=1}^k a_{n,j} + Σ_{j=1}^k a_{n,j}².    (2.14)
By the Euler sum formula,

Σ_{j=1}^m 1/√( (j/n²)(1 − j/n²) ) = n² ∫_0^{τ_n} dt/√( t(1 − t) ) + O(n) = 2n² arcsin √τ_n + O(n).

So, it follows from the definition of a_{n,j} over 1 ≤ j ≤ m that

Σ_{j=1}^m a_{n,j} = 2 sinh(x/2) ( 2n arcsin √τ_n + O(1) ).
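(As a purely numerical sanity check of this Euler-sum approximation, with illustrative values of n and ε chosen by us, one can compare the two sides directly; this sketch is not part of the proof.)

```python
import math

def riemann_sum(n, tau):
    """Left-hand side: sum over j = 1..m of 1 / sqrt((j/n^2)(1 - j/n^2)), m = [tau n^2]."""
    m = int(tau * n * n)
    return sum(1.0 / math.sqrt((j / n**2) * (1 - j / n**2)) for j in range(1, m + 1))

def integral_value(n, tau):
    """Right-hand side: n^2 * integral_0^tau dt / sqrt(t(1-t)) = 2 n^2 arcsin(sqrt(tau))."""
    return 2 * n**2 * math.asin(math.sqrt(tau))

if __name__ == "__main__":
    n, eps = 200, 0.3                    # arbitrary illustrative choices
    tau = n ** (-2.0 / 3.0 + eps)        # tau_n = n^{-2/3 + eps} as in Theorem 1
    s, v = riemann_sum(n, tau), integral_value(n, tau)
    print(s, v, abs(s - v))              # the discrepancy is of order n, far below n^2
```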
For the remaining sum of a_{n,j} over m + 1 ≤ j ≤ k, recall that α < 1/2 and

a_{n,j} = [ 2 sinh(x/2) / ( n √( (j/n²)(1 − j/n²) ) ) ] ( 1 − ( 2√( (j/n²)(1 − j/n²) ) − δ − 1/n )² )^{1/2},    m+1 ≤ j ≤ k.

Then, using (1 + x)^{1/2} ≤ 1 + x/2,

a_{n,j} ≤ [ 2 sinh(x/2) / ( n √( (j/n²)(1 − j/n²) ) ) ] ( (1 − 2j/n²)² + 4( δ + 1/n ) √( (j/n²)(1 − j/n²) ) )^{1/2}
  ≤ 2 sinh(x/2) [ (1 − 2j/n²) / ( n √( (j/n²)(1 − j/n²) ) ) + 2( δ + 1/n ) / ( n(1 − 2j/n²) ) ].
Again, by the Euler sum formula,

Σ_{j=m+1}^k (1 − 2j/n²) / √( (j/n²)(1 − j/n²) ) = n² ∫_{τ_n}^{α_k} (1 − 2t)/√( t(1 − t) ) dt + O( n^{2/3−ε/2} )
  = 2n² ( √( α_k(1 − α_k) ) − √( τ_n(1 − τ_n) ) ) + O( n^{2/3−ε/2} )    (2.15)

and

Σ_{j=m+1}^k 1/(1 − 2j/n²) = n² ∫_{τ_n}^{α_k} dt/(1 − 2t) + O(1) = −(n²/2) log(1 − 2α_k) + O(m).    (2.16)
Combining (2.15) and (2.16) gives

Σ_{j=m+1}^k a_{n,j} ≤ 2 sinh(x/2) [ 2n ( √( α_k(1 − α_k) ) − √( τ_n(1 − τ_n) ) ) − 2nδ log(1 − α_k) + O( n^{4ε/3} ) ].

By noting that τ_n = n^{−2/3+ε} → 0 and using a simple analysis, we have

Σ_{j=1}^k a_{n,j} = Σ_{j=1}^m a_{n,j} + Σ_{j=m+1}^k a_{n,j} ≤ 2 sinh(x/2) [ 2n √( α_k(1 − α_k) ) − 2nδ log(1 − α_k) + O( n^{3ε/2} ) ].    (2.17)
Turning to the square sum, it easily follows that

Σ_{j=1}^k a_{n,j}² ≤ ( 2 sinh(x/2) )² Σ_{j=1}^k 1/( j(1 − j/n²) ) ≤ 4 ( 2 sinh(x/2) )² log n.    (2.18)

Substituting (2.17) and (2.18) into (2.14), and then into (2.13), yields a key control of the moment generating function:

Ē_n e^{x λ_T^k(1)} ≤ exp( n² b_n + 2 sinh(x/2) [ 2n √( α_k(1 − α_k) ) − 2nδ log(1 − α_k) + O( n^{3ε/2} ) ] + 4 ( 2 sinh(x/2) )² log n ).
Applying the Markov inequality, we have

P̄_n( λ_T^k(1) − 2n √( α_k(1 − α_k) ) ≥ θ n √τ_n ) ≤ exp( −2xn √( α_k(1 − α_k) ) − xθ n √τ_n ) Ē_n e^{x λ_T^k(1)}.

Since x → 0 as n → ∞, 2 sinh(x/2) = x + O(x³). So simple algebra leads to

P̄_n( λ_T^k(1) − 2n √( α_k(1 − α_k) ) ≥ θ n √τ_n ) ≤ 2 exp( −θ^{3/2} + θ/n^{ε/6} )

for all sufficiently large n, as desired.
Proof of Theorem 2  Let us start with an estimate of the probability that the sub-diagram λ_T^k has a given shape. For any Young diagram λ = (λ(1), λ(2), …, λ(n)) (some of the parts may be 0) whose graph lies within the n × n square, define f_λ : [0, 1] → [0, 1] by

f_λ(x) = (1/n) λ(⌈nx⌉).

Then for any given diagram λ ⊆ □_n of size k, we have

P̄_n( T ∈ T_n : λ_T^k = λ ) = d(λ) d(□_n \ λ) / d(□_n) = exp( −(1 + o(1)) n² ( I(f_λ) + H(α_k) + C ) ),    (2.19)

where o(1) is uniform over all λ and all 1 ≤ k ≤ n².

Estimate (2.19) was proved by Pittel and Romik [1] through a careful computation with the help of the classic hook formula. Our proof will be a direct application of this asymptotic estimate. See Deuschel and Zeitouni [5] for similar arguments.
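The first equality in (2.19) is an exact identity, and the sketch below (function names are ours; it uses the convention from the proof of Lemma 1 that □_n \ λ is read as an ordinary diagram from the opposite corner of the square) evaluates it via the hook length formula for a small example.

```python
from math import factorial
from fractions import Fraction

def syt_count(shape):
    """Number of standard Young tableaux of a given shape (weakly decreasing
    row lengths), by the hook length formula."""
    rows = [r for r in shape if r > 0]
    if not rows:
        return 1
    cols = [sum(1 for r in rows if r > j) for j in range(rows[0])]
    hook_product = 1
    for i, r in enumerate(rows):
        for j in range(r):
            hook_product *= (r - j) + (cols[j] - i) - 1   # arm + leg + 1
    return factorial(sum(rows)) // hook_product

def subshape_probability(n, lam):
    """d(lam) * d(square minus lam) / d(square): under a uniform n x n square tableau,
    the probability that the entries 1..k (k the size of lam) occupy exactly the shape lam."""
    lam = list(lam) + [0] * (n - len(lam))                   # pad to n rows
    complement = sorted((n - r for r in lam), reverse=True)  # square minus lam, rotated 180 degrees
    return Fraction(syt_count(lam) * syt_count(complement), syt_count([n] * n))

if __name__ == "__main__":
    # Within a 3 x 3 square, the entries 1..4 form the shape (2, 2) with probability 2/7:
    print(subshape_probability(3, (2, 2)))   # 2/7
```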
First, by virtue of (2.19), it follows that

P̄_n( T ∈ T_n : λ_T^k(1) ≤ (2√( α_k(1 − α_k) ) − c) n )
  = Σ_{λ ∈ P_k(α_k, c)} P̄_n( T ∈ T_n : λ_T^k = λ )
  = Σ_{λ ∈ P_k(α_k, c)} exp( −(1 + o(1)) n² ( I(f_λ) + H(α_k) + C ) ),    (2.20)

where P_k(α_k, c) denotes the set of all partitions of k with largest summand at most (2√( α_k(1 − α_k) ) − c) n.

Let p(k) be the number of partitions of k. It is well known that p(k) ≤ exp( π √(2k/3) ). So (2.20) is bounded by

p(k) exp( −(1 + o(1)) n² min_{λ ∈ P_k(α_k, c)} ( I(f_λ) + H(α_k) + C ) ) ≤ p(k) exp( −(1 + o(1)) n² H_{α_k}(c) ).
Taking the logarithm and then letting n → ∞ gives

lim sup_{n→∞} (1/n²) log P̄_n( T ∈ T_n : λ_T^k(1) ≤ (2√( α(1 − α) ) − c) n ) ≤ −H_α(c).

Let us turn to proving the lower bound:

lim inf_{n→∞} (1/n²) log P̄_n( T ∈ T_n : λ_T^k(1) ≤ (2√( α(1 − α) ) − c) n ) ≥ −H_α(c).    (2.21)

Let f_0^c ∈ F_α be such that

I(f_0^c) = inf{ I(f) : f ∈ F_α, f(0) = 2√( α(1 − α) ) − c }.
Assume it has support [0, b_0(c)]. We construct a particular Young diagram out of f_0^c. For 1 ≤ i ≤ [b_0(c) n] =: i_max, set j(i) = [ f_0^c(i/n) n ]. Note that j(i) is a decreasing sequence, and

n² α − ĉ n ≤ m_n := Σ_{i=1}^{i_max} j(i) ≤ n² α.

The sequence {(i, j(i)), i = 1, …, i_max} defines a Young diagram λ_n of size m_n, as sketched below.
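(A small sketch of this discretization, with an arbitrary illustrative profile of our own choosing rather than the extremal f_0^c:)

```python
def diagram_from_profile(f, n, b0=1.0):
    """Discretize a weakly decreasing f : [0,1] -> [0,1] into a Young diagram:
    row i, for 1 <= i <= [b0 * n], receives j(i) = [f(i/n) * n] boxes."""
    i_max = int(b0 * n)
    rows = [int(f(i / n) * n) for i in range(1, i_max + 1)]
    return [r for r in rows if r > 0]       # drop empty rows

if __name__ == "__main__":
    # Illustrative profile with integral alpha = 1/4: f(x) = max(1 - 2x, 0), support [0, 1/2].
    f = lambda x: max(1.0 - 2.0 * x, 0.0)
    n = 100
    lam = diagram_from_profile(f, n, b0=0.5)
    print(len(lam), lam[0], sum(lam), 0.25 * n * n)   # the size m_n is within O(n) of alpha * n^2
```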
Applying (2.19) to λ_n yields

P̄_n( T ∈ T_n : λ_T^{m_n}(1) ≤ (2√( α(1 − α) ) − c) n ) ≥ P̄_n( T ∈ T_n : λ_T^{m_n} = λ_n )
  = exp( −(1 + o(1)) n² ( I(f_{λ_n}) + H(α) + C ) ).
This immediately gives

lim inf_{n→∞} (1/n²) log P̄_n( T ∈ T_n : λ_T^{m_n}(1) ≤ (2√( α(1 − α) ) − c) n ) ≥ −H_α(c).

The lower bound (2.21) follows from the continuity of H_α(c) in c. This concludes the proof.
A remarkable achievement in combinatorial probability at the end of the nineties is the work of Baik, Deift and Johansson [6] on the asymptotic distribution of the longest increasing subsequence of uniformly random permutations. By virtue of the RSK correspondence, they equivalently obtained the asymptotic distribution of the first row of standard Young tableaux under the Plancherel measure. Similar results hold for the first m rows, where m is an arbitrary fixed integer. Recently, Bogachev and Su [7] proved a central limit theorem in the bulk. It is natural to ask whether one can find normalizing constants and a limit distribution for λ_T^k(m) as n tends to infinity, where m may be fixed or depend on n. See Pittel and Romik [1] for the limit shape and more details. Honestly, the present bounds for λ_T^k(1) are only a small step toward this goal.
Acknowledgements The author wishes to express his gratitude to L. Bogachev and B. Pittel
for their insightful comments.
References
[1] Pittel, B. G., Romik, D.: Limit shapes for random square Young tableaux. Adv. Appl. Math., 38, 164–209 (2007)
[2] Romik, D.: Permutations with short monotone subsequences. Adv. Appl. Math., 37, 501–510 (2006)
[3] Schensted, C.: Longest increasing and decreasing subsequences. Canadian J. Math., 13, 179–191 (1961)
[4] Kim, J. H.: On increasing subsequences of random permutations. J. Combin. Theory Ser. A, 76, 146–155 (1996)
[5] Deuschel, J. D., Zeitouni, O.: On increasing subsequences of I.I.D. samples. Combinatorics, Probability and Computing, 8, 247–263 (1999)
[6] Baik, J., Deift, P., Johansson, K.: On the distribution of the length of the longest increasing subsequence of random permutations. J. Amer. Math. Soc., 12, 1119–1178 (1999)
[7] Bogachev, L. V., Su, Z. G.: Gaussian fluctuations of Young diagrams under the Plancherel measure. Proc. Roy. Soc. A, 463, 1069–1080 (2007)