CRITICAL SETS OF RANDOM LINEAR COMBINATIONS OF EIGENFUNCTIONS
LIVIU I. NICOLAESCU

ABSTRACT. Given a compact, connected Riemann manifold without boundary $(M,g)$ of dimension $m$ and a large positive constant $L$, we denote by $U_L$ the subspace of $C^\infty(M)$ spanned by the eigenfunctions of the Laplacian corresponding to eigenvalues $\leq L$. We equip $U_L$ with the standard Gaussian probability measure induced by the $L^2$-metric on $U_L$, and we denote by $N_L$ the expected number of critical points of a random function in $U_L$. We prove that $N_L \sim C_m \dim U_L$ as $L \to \infty$, where $C_m$ is an explicit positive constant that depends only on the dimension $m$ of $M$ and satisfies the asymptotic estimate $\log C_m \sim \frac{m}{2}\log m$ as $m \to \infty$.
CONTENTS

1. Introduction
2. An integral formula
3. The proof of Theorem 1.1
4. The proof of Theorem 1.2
Appendix A. Gaussian measures and Gaussian random fields
Appendix B. Gaussian random symmetric matrices
References
1. INTRODUCTION
Suppose that $(M,g)$ is a smooth, compact Riemann manifold of dimension $m>1$. We denote by $|dV_g|$ the volume density on $M$ induced by $g$. For any $u,v\in C^\infty(M)$ we denote by $(u,v)_g$ their $L^2(M)$ inner product,
$$(u,v)_g := \int_M u(x)\,v(x)\,|dV_g(x)|.$$
The $L^2$-norm of a smooth function $u$ is then $\|u\| := \sqrt{(u,u)_g}$. Let $\Delta_g: C^\infty(M)\to C^\infty(M)$ denote the scalar Laplacian defined by the metric $g$. For $L>0$ we set
$$U_L = U_L(M,g) := \bigoplus_{\lambda\in[0,L]}\ker(\lambda - \Delta_g),\qquad d(L) := \dim U_L.$$
We equip $U_L$ with the Gaussian probability measure
$$d\gamma_L(u) := (2\pi)^{-\frac{d(L)}{2}}\, e^{-\frac{\|u\|^2}{2}}\,|du|.$$

Date: Started January 17, 2011. Completed on January 26, 2011. Last modified on February 1, 2011.
2000 Mathematics Subject Classification. Primary 15B52, 42C10, 53C65, 58K05, 60D05, 60G15, 60G60.
Key words and phrases. Morse functions, critical points, Kac-Rice formula, random matrices, Gaussian random processes, spectral function.
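The random model just described can be simulated directly whenever an explicit eigenbasis is available. The sketch below is an illustration only, under the assumption that $M = S^1$ with the flat metric, where the eigenfunctions are $\cos(nx), \sin(nx)$ with eigenvalues $n^2$; it draws one random element of $U_L$ and counts its critical points by locating sign changes of the derivative on a fine grid.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_random_eigenfunction_sum(L, x):
    """Draw u = sum of Gaussian coefficients times the Laplace eigenfunctions
    cos(n x), sin(n x) on the circle, over all eigenvalues n^2 <= L.
    Returns the values of u and of its derivative u' on the grid x."""
    n_max = int(np.sqrt(L))
    u = np.zeros_like(x)
    du = np.zeros_like(x)
    for n in range(1, n_max + 1):
        a, b = rng.normal(size=2)
        u += a * np.cos(n * x) + b * np.sin(n * x)
        du += n * (b * np.cos(n * x) - a * np.sin(n * x))
    return u, du

# Count critical points of one sample: sign changes of u' around the circle.
x = np.linspace(0.0, 2.0 * np.pi, 40001, endpoint=False)
_, du = sample_random_eigenfunction_sum(100.0, x)
n_crit = int(np.sum(np.sign(du) != np.sign(np.roll(du, -1))))
print(n_crit)
```

Since $u'$ is a trigonometric polynomial of degree at most $\sqrt{L}$, the count is an even number between $2$ and $2\sqrt{L}$ for every nonconstant sample.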
arXiv:1101.5990v1 [math.DG] 31 Jan 2011
For any $u\in U_L$ we denote by $N_L(u)$ the number of critical points of $u$. If $L$ is sufficiently large, then almost all $u\in U_L$ are Morse functions, so $N_L(u)$ is almost surely finite. We obtain in this fashion a random variable $N_L$, and we denote by $E(N_L)$ its expectation,
$$E(N_L) := \int_{U_L} N_L(u)\,d\gamma_L(u).$$
In this paper we investigate the behavior of $E(N_L)$ as $L\to\infty$ and prove the following result.

Theorem 1.1. There exists a positive constant $C = C(m)$ that depends only on the dimension of $M$, such that
$$E(N_L) \sim C(m)\,\dim U_L \quad\text{as } L\to\infty. \tag{1.1}$$

The constant $C(m)$ can be expressed in terms of certain statistics on the space $S_m$ of symmetric $m\times m$ matrices. We denote by $d\sigma$ the Gaussian measure on $S_m$ given by
$$d\sigma(X) = \frac{1}{(2\pi)^{\frac{m(m+1)}{4}}\sqrt{\gamma_m}}\, e^{-\frac14\left(\operatorname{tr}X^2-\frac{1}{m+2}(\operatorname{tr}X)^2\right)}\,|dX|,$$
where $|dX|$ is the volume density induced by the inner product $(X,Y) = \operatorname{tr}(XY)$ and $\gamma_m$ is the normalization constant that makes $d\sigma$ a probability measure. Then
$$C(m) = \Gamma\Big(\frac m2+1\Big)\Big(\frac{2}{m+4}\Big)^{\frac m2} I_m, \qquad I_m := \int_{S_m}|\det X|\,d\sigma(X). \tag{1.2}$$
We can say something about the behavior of $C(m)$ as $m\to\infty$.

Theorem 1.2.
$$\log C(m) \sim \log I_m \sim \frac m2\log m \quad\text{as } m\to\infty. \tag{1.3}$$

The proof of (1.1) relies on a Kac-Rice type integral formula proved by the author in [15] that expresses the expected number of critical points of a function in $U_L$ as an integral
$$E(N_L) = \int_M \rho_L(x)\,|dV_g(x)|.$$
Using some basic ideas from random field theory we reduce the large $L$ asymptotics of $\rho_L$ to questions concerning the asymptotics of the spectral function of the Laplacian. Fortunately, these questions were recently settled by X. Bin [4] by refining the wave kernel method of L. Hörmander, [11]. We actually prove a bit more. We show that
$$\lim_{L\to\infty} L^{-\frac m2}\rho_L(x) = C(m)\,\frac{\omega_m}{(2\pi)^m},\quad\text{uniformly in } x\in M, \tag{1.4}$$
where $\omega_m$ denotes the volume of the unit ball in $\mathbb R^m$. Using the classical Weyl estimates (3.2) we see that (1.4) implies (1.1).

The equality (1.4) has an interesting interpretation. We can think of $\rho_L(x)\,|dV_g(x)|$ as the expected number of critical points of a random function in $U_L$ inside an infinitesimal region of volume $|dV_g(x)|$ around the point $x$. From this point of view we see that (1.4) states that for large $L$ we expect the critical points of a random function in $U_L$ to be uniformly distributed.

We refer to Appendix B for a detailed description of a 3-parameter family of Gaussian measures $d\sigma_{a,b,c}$ on $S_m$ that includes $d\sigma$ as $d\sigma = d\sigma_{3,1,1}$.
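The integral $I_m$ can be estimated by Monte Carlo once one knows how to sample $d\sigma$. The sketch below assumes the covariance structure that Appendix B attaches to $d\sigma = d\sigma_{3,1,1}$: diagonal entries with variance $3$ and pairwise covariance $1$, off-diagonal entries independent with variance $1$; it is an illustration, not part of the paper's argument.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_sigma_311(m):
    """Symmetric m x m matrix with Var(x_ii) = 3, Cov(x_ii, x_jj) = 1 (i != j),
    Var(x_ij) = 1 (i < j), and all remaining pairs of entries uncorrelated."""
    G = rng.normal(size=(m, m))
    X = (G + G.T) / np.sqrt(2.0)     # off-diagonal entries ~ N(0, 1)
    t = rng.normal()                 # common mode shared by all diagonal entries
    X[np.diag_indices(m)] = np.sqrt(2.0) * rng.normal(size=m) + t  # Var 2+1=3, Cov 1
    return X

def estimate_Im(m, n_samples=50000):
    """Monte Carlo estimate of I_m = E|det X| under the measure above."""
    return float(np.mean([abs(np.linalg.det(sample_sigma_311(m)))
                          for _ in range(n_samples)]))

# Sanity check in the degenerate case m = 1 (used only as a numerical test):
# X = (x_11) with x_11 ~ N(0, 3), so E|det X| = sqrt(2 * 3 / pi).
print(estimate_Im(1), np.sqrt(6.0 / np.pi))
```

For $m=1$ the density $e^{-\frac14(\operatorname{tr}X^2-\frac13(\operatorname{tr}X)^2)} = e^{-x^2/6}$ indeed has variance $3$, which is what the test exploits.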
This rather vague statement could be made more precise if we had estimates for the variance $V(N_L)$ of $N_L$. We are inclined to believe that as $L\to\infty$ the ratio
$$q_L := \frac{V(N_L)}{E(N_L)}$$
has a finite limit $q(M,g)$. In [15] we proved that on $S^1$ the corresponding ratio has a finite limit. Such a result would show that the random variable $N_L$ is highly concentrated near its mean value, since by Chebyshev's inequality the typical fluctuations of $N_L$ would then have size $\sqrt{E(N_L)}$, much smaller than $E(N_L)$ itself.

We obtain the asymptotics of $C(m)$ by relying on a technique used by Y.V. Fyodorov [9] in a related context. This reduces the asymptotics of the integral $I_m$ to known asymptotics of the 1-point correlation function in random matrix theory, more precisely, Wigner's semi-circle law.

The paper is structured as follows. Section 2 contains a description of the integral formula alluded to above, including several reformulations in the language of random processes. Section 3 contains the proof of the asymptotic estimate (1.1), while Section 4 contains the proof of the estimate (1.2). For the reader's convenience we have included in Appendix A a brief survey of the main facts about Gaussian measures and Gaussian processes used in the proof, while Appendix B contains a detailed description of a family of Gaussian measures on the space $S_m$ of real, symmetric $m\times m$ matrices. These measures play a central role in the proof of (1.1), and we could not find an appropriate reference for the mostly elementary facts discussed in that appendix.
2. AN INTEGRAL FORMULA
A key component in the proof of Theorem 1.1 is an integral formula that we proved in [15]. We recall it in this section, and then we formulate it in the language of random fields à la [1].
Suppose $M$ is a compact manifold without boundary. Set $m := \dim M$. For any nonnegative integer $k$, any point $p\in M$ and any $f\in C^\infty(M)$ we will denote by $j_k(f,p)$ the $k$-th jet of $f$ at $p$. Fix a finite dimensional vector space $U\subset C^\infty(M)$. Set $N := \dim U$. We have an evaluation map
$$\mathrm{ev} = \mathrm{ev}_U: M\to U^\vee := \operatorname{Hom}(U,\mathbb R),\qquad p\mapsto \mathrm{ev}_p,$$
where for any $p\in M$ the linear map $\mathrm{ev}_p: U\to\mathbb R$ is given by $\mathrm{ev}_p(u) = u(p)$, $\forall u\in U$. If $k$ is a nonnegative integer then we say that $U$ is $k$-ample if for any $p\in M$ and any $f\in C^\infty(M)$ there exists $u\in U$ such that $j_k(u,p) = j_k(f,p)$. In the remainder of this section we will assume that $U$ is 1-ample. This implies that the evaluation map is an immersion. Moreover, as explained in [14, §1.2], the 1-ampleness condition also implies that almost all functions $u\in U$ are Morse functions and thus have finite critical sets. For any $u\in U$ we denote by $N(u)$ the number of critical points of $u$. We fix an inner product $h$ on $U$ and we denote by $|\cdot|_h$ the resulting Euclidean norm. We will refer to the pair $(U,h)$ as the sample space. We set
$$S_h(U) := \big\{\, u\in U:\ |u|_h = 1 \,\big\}.$$
Using the metric $h$ we can regard the evaluation map as a smooth map $\mathrm{ev}: M\to U$.
We define the expected number of critical points of a function in $U$ to be the quantity
$$N(U,h) := \int_U N(u)\,\underbrace{(2\pi)^{-\frac N2} e^{-\frac{|u|_h^2}{2}}\,|du|}_{=:d\gamma_h(u)} = \frac{1}{\sigma_{N-1}}\int_{S_h(U)} N(u)\,|dA_h(u)|, \tag{2.1}$$
where $\sigma_{N-1}$ denotes the "area" of the unit sphere in $\mathbb R^N$, $|dA_h(u)|$ denotes the "area" density on $S_h(U)$, and $|du|$ denotes the volume density on $U$ determined by the metric $h$; the two integrals agree because $N(u)$ is invariant under rescaling of $u$. A priori, the expected number of critical points could be infinite, but in any case, it is independent of any choice of metric on $M$.

The integral formula needed in the proof of Theorem 1.1 expresses $N(U,h)$ as the integral of an explicit density on $M$. To describe this formula it is convenient to fix a metric $g$ on $M$. We will express $N(U,h)$ as an integral
$$N(U,h) = \int_M \rho(p)\,|dV_g(p)|.$$
The function $\rho$ does depend on $g$, but the density $\rho(p)\,|dV_g(p)|$ is independent of $g$. The concrete description of $\rho(p)$ relies on several fundamental objects naturally associated to the triplet $(U,h,g)$. For any $p\in M$ we set
$$U^0_p := \big\{\, u\in U:\ du(p) = 0 \,\big\}.$$
The 1-ampleness assumption on $U$ implies that for any $p\in M$ the subspace $U^0_p$ has codimension $m$ in $U$, so that $\dim U^0_p = N - m$. The differential of the evaluation map at $p$ is a linear map $A_p: T_pM\to U$, and we denote by $J_g(p)$ its Jacobian, i.e., the norm of the induced map $\Lambda^m T_pM\to \Lambda^m U$. Equivalently, if $(e_1,\dots,e_m)$ is a $g$-orthonormal basis of $T_pM$, then
$$J_g(p)^2 = \det\big[\, h(A_pe_i, A_pe_j)\,\big]_{1\le i,j\le m}.$$
Since $\mathrm{ev}_U$ is an immersion we have $J_g(p)\ne 0$ for all $p\in M$. For any $p\in M$ and any $u\in U$, the Hessian of $u$ at $p$ is a well defined symmetric bilinear form on $T_pM$ that can be identified via the metric $g$ with a symmetric endomorphism of $T_pM$. We denote this symmetric endomorphism by $\operatorname{Hess}_p(u,g)$. In [15] we proved that
$$N(U,h) = (2\pi)^{-\frac m2}\int_M \frac{1}{J_g(p)}\underbrace{\left((2\pi)^{-\frac{N-m}{2}}\int_{U^0_p}|\det\operatorname{Hess}_p(u,g)|\, e^{-\frac{|u|_h^2}{2}}\,|dV_h(u)|\right)}_{=:I_p}\,|dV_g(p)|. \tag{2.2}$$
This formula looks hopeless in a general context for two immediately visible reasons. The Jacobian $J_g(p)$ seems difficult to compute, and the integral $I_p$ in (2.2) may be difficult to compute since the domain of integration $U^0_p$ may be impossible to pin down.
We will deal with these difficulties simultaneously by relying on some probabilistic principles inspired from [1]. For the reader's convenience we have gathered in Appendix A the basic probabilistic notions and facts needed in the sequel.
We denote by $\sigma = \sigma_U$ the pullback of the metric $h$ on $U$ via the evaluation map. We will refer to it as the stochastic metric associated to the sample space $(U,h)$. It is convenient to have a local description of the stochastic metric.

Fix an orthonormal basis $\Psi_1,\dots,\Psi_N$ of $U$. The evaluation map $\mathrm{ev}_U: M\to U$ is then given by
$$M\ni x\mapsto \sum_{k=1}^N \Psi_k(x)\,\Psi_k\in U.$$
If $p\in M$ and $O$ is an open coordinate neighborhood of $p$ with coordinates $x=(x^1,\dots,x^m)$, then
$$\sigma_p(\partial_{x^i},\partial_{x^j}) = \sum_{k=1}^N \partial_{x^i}\Psi_k(p)\,\partial_{x^j}\Psi_k(p),\qquad 1\le i,j\le m. \tag{2.3}$$
Note that if the collection $(\partial_{x^i})_{1\le i\le m}$ forms a $g$-orthonormal frame of $T_pM$, then
$$J_g(p)^2 = \det\big[\,\sigma_p(\partial_{x^i},\partial_{x^j})\,\big]_{1\le i,j\le m}. \tag{2.4}$$
To the sample space $(U,h)$ we associate in a tautological fashion a Gaussian random field on $M$ as follows. The measure $d\gamma_h$ in (2.1) is a probability measure, and thus $(U,d\gamma_h)$ is naturally a probability space. We have a natural map
$$\xi: M\times U\to\mathbb R,\qquad M\times U\ni(p,u)\mapsto \xi_p(u) := u(p).$$
The collection of random variables $(\xi_p)_{p\in M}$ is a random field on $M$. As explained in Appendix A, this is in fact a Gaussian random field. Using the orthonormal basis $(\Psi_k)$ of $U$ we obtain a linear isometry
$$\mathbb R^N\to U,\qquad t=(t^1,\dots,t^N)\mapsto u_t = \sum_k t^k\Psi_k,$$
with inverse $u\mapsto(t^k = h(u,\Psi_k))$. The covariance kernel of the field is then
$$E(p,q) = E(\xi_p\,\xi_q) = \sum_{j,k=1}^N\Psi_j(p)\Psi_k(q)\int_{\mathbb R^N} t^jt^k\,d\gamma_N(t) = \sum_{k=1}^N\Psi_k(p)\,\Psi_k(q), \tag{2.5}$$
where $d\gamma_N$ is the canonical Gaussian measure on $\mathbb R^N$. If $p\in M$ and $O$ is an open coordinate neighborhood of $p$ with coordinates $x=(x^1,\dots,x^m)$ such that $x(p)=0$, then we can rewrite (2.3) in terms of the covariance kernel alone,
$$\sigma_p(\partial_{x^i},\partial_{x^j}) = \frac{\partial^2E(x,y)}{\partial x^i\partial y^j}\Big|_{x=y=0}. \tag{2.6}$$
Note that any vector field $X$ on $M$ determines a Gaussian random field $u\mapsto Xu$, the derivative of $u$ along $X$. For vector fields $X, Y$ we obtain Gaussian random variables $u\mapsto Xu(p)$, $u\mapsto Yu(p)$,
and we have
$$\sigma_p(X,Y) = E\big(Xu(p)\cdot Yu(p)\big). \tag{2.7}$$
The last equality justifies the attribute stochastic attached to the metric $\sigma$. We denote by $\nabla$ the Levi-Civita connection of the metric $g$. The Hessian of a smooth function $f: M\to\mathbb R$ with respect to the metric $g$ is the symmetric $(0,2)$-tensor $\nabla^2f$ on $M$ defined by the equality
$$\nabla^2f(X,Y) := XYf - (\nabla_XY)f,\qquad \forall X,Y\in\operatorname{Vect}(M).$$
If $p$ is a critical point of $f$ then $\nabla^2_pf$ is the usual Hessian of $f$ at $p$. More generally, if $(x^i)$ are $g$-normal coordinates at $p$ then
$$\nabla^2_pf(\partial_{x^i},\partial_{x^j}) = \partial^2_{x^ix^j}f(p),\qquad 1\le i,j\le m.$$
For any $p\in M$ and any $f\in C^\infty(M)$ we identify $\nabla^2_pf$, using the metric $g$, with an element of $S(T_pM)$, the vector space of symmetric endomorphisms of the Euclidean space $(T_pM, g_p)$. For any $p\in M$ we have two random Gaussian vectors
$$U\ni u\mapsto du(p)\in T^\vee_pM,\qquad U\ni u\mapsto \nabla^2_pu\in S(T_pM).$$
Note that the expectations of both random vectors are trivial, while (2.6) shows that the covariance form of $du(p)$ is the metric $\sigma_p$. To proceed further we need to make an additional assumption on the sample space $U$. Namely, in the remainder of this section we will assume that it is 2-ample. In this case the map $U\ni u\mapsto\nabla^2_pu\in S(T_pM)$ is surjective, so the Gaussian random vector $\nabla^2_pu$ is nondegenerate. A simple application of the coarea formula shows that the integral $I_p$ in (2.2) can be expressed as a conditional expectation
$$I_p = E\big(\,|\det\nabla^2_pu|\ \big|\ du(p)=0\,\big).$$
Using the natural inner products on $S(T_pM)$ and $T_pM$ defined by $g_p$, we can identify the covariance form of the pair $(\nabla^2_pu, du(p))$ with a linear operator $T_pM\to S(T_pM)$, and the covariance forms of $\nabla^2_pu$ and $du(p)$ with symmetric nonnegative operators
$$S_{\nabla^2_pu}: S(T_pM)\to S(T_pM),\qquad S_{du(p)}: T_pM\to T_pM.$$
Using the regression formula (A.4) we deduce that
$$I_p = E\big(|\det Y_p|\big), \tag{2.8}$$
where $Y_p$ is a Gaussian random vector valued in $S(T_pM)$ with mean value zero and covariance operator
$$S_{Y_p} = S_{\nabla^2_pu} - \operatorname{Cov}\big(\nabla^2_pu, du(p)\big)\, S_{du(p)}^{-1}\, \operatorname{Cov}\big(du(p), \nabla^2_pu\big). \tag{2.9}$$
When $S_{Y_p}$ is invertible we have
$$E\big(|\det Y_p|\big) = (2\pi)^{-\frac{\dim S(T_pM)}{2}}\big(\det S_{Y_p}\big)^{-\frac12}\int_{S(T_pM)}|\det Y|\, e^{-\frac12\big(S_{Y_p}^{-1}Y,\,Y\big)}\,|dY|. \tag{2.10}$$
Observing that
$$J_g(p) = \big(\det S_{du(p)}\big)^{\frac12}, \tag{2.11}$$
we deduce that when $U$ is 2-ample we have
$$N(U,h) = \frac{1}{(2\pi)^{\frac m2}}\int_M\big(\det S_{du(p)}\big)^{-\frac12}\, E\big(|\det Y_p|\big)\,|dV_g(p)|. \tag{2.12}$$
Fix $g$-normal coordinates $(x^1,\dots,x^m)$ near $p$. We can then view the random variable $du(p)$ as a random vector
$$D_p: U\to\mathbb R^m,\qquad (D_pu)_i = \partial_{x^i}u(p),$$
and the random variable $\nabla^2_pu$ as a Gaussian random symmetric matrix
$$H_p: U\to S_m,\qquad H_p(u)_{ij} = \partial^2_{x^ix^j}u(p).$$
The covariance operator of the random vector $D_p$ is given by the symmetric $m\times m$ matrix with entries
$$\frac{\partial^2E(x,y)}{\partial x^i\partial y^j}\Big|_{x=y=0}. \tag{2.13}$$
To compute the covariance form of the random matrix $H_p$ we observe first that we have a canonical basis $(\xi_{ij})_{1\le i\le j\le m}$ of $S_m^\vee$, where $\xi_{ij}$ associates to a symmetric matrix $A$ the entry $a_{ij}$ located in the position $(i,j)$. Then
$$E\big(H_p(u)_{ij}\,H_p(u)_{k\ell}\big) = \frac{\partial^4E(x,y)}{\partial x^i\partial x^j\partial y^k\partial y^\ell}\Big|_{x=y=0}. \tag{2.14}$$
Similarly, the covariance of the pair $(H_p, D_p)$ is computed by third order derivatives,
$$E\big(H_p(u)_{ij}\,(D_pu)_k\big) = \frac{\partial^3E(x,y)}{\partial x^i\partial x^j\partial y^k}\Big|_{x=y=0}.$$
To identify these covariance forms with operators it suffices to observe that $(\partial_{x^k})$ is a $g$-orthonormal basis of $T_pM$, while the collection $(\widehat E_{ij})$ described in Appendix B is an orthonormal basis of $S_m$.
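Formula (2.13) simply says that covariances of derivatives of the field are mixed partial derivatives of the covariance kernel. A quick numerical sanity check on a random Fourier series (an illustration with an ad hoc one-dimensional basis, not the manifold setting):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 40                   # basis functions cos(k x), sin(k x), k = 1..N
ks = np.arange(1, N + 1)
x0 = 0.7                 # the evaluation point p

# Sample u'(x0) for u = sum_k c_k cos(k x) + s_k sin(k x), c_k, s_k ~ N(0, 1).
n_samples = 100000
c = rng.normal(size=(n_samples, N))
s = rng.normal(size=(n_samples, N))
du = -(c * ks) @ np.sin(ks * x0) + (s * ks) @ np.cos(ks * x0)

# The covariance kernel is E(x, y) = sum_k cos(k(x - y)), hence
# d^2 E / dx dy |_{x=y} = sum_k k^2, independently of the point x0.
var_empirical = float(np.var(du))
var_from_kernel = float(np.sum(ks.astype(float) ** 2))
print(var_empirical, var_from_kernel)
```

The empirical variance of the derivative matches the second mixed derivative of the kernel, which is the content of (2.13) in this toy case.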
3. THE PROOF OF THEOREM 1.1

We fix an orthonormal basis of $U_L(M,g)$ consisting of eigenfunctions of the Laplacian,
$$\Delta_g\Psi_n = \lambda_n\Psi_n,\quad n=0,1,\dots,\qquad 0=\lambda_0\le\lambda_1\le\cdots,$$
so that the covariance kernel of the associated Gaussian random field is the spectral function
$$E_L(p,q) = \sum_{\lambda_n\le L}\Psi_n(p)\,\Psi_n(q).$$
The classical Weyl estimates state that
$$E_L(p,p) = \frac{\omega_m}{(2\pi)^m}\,L^{\frac m2} + O\big(L^{\frac{m-1}{2}}\big)\quad\text{as } L\to\infty, \tag{3.1}$$
uniformly with respect to $p\in M$, where $\omega_m$ denotes the volume of the unit ball in $\mathbb R^m$. This implies immediately that
$$\dim U_L = \frac{\omega_m\,\mathrm{vol}_g(M)}{(2\pi)^m}\,L^{\frac m2} + O\big(L^{\frac{m-1}{2}}\big). \tag{3.2}$$
Recently (2004), X. Bin [4] used Hörmander's approach to produce substantially refined asymptotic estimates of the behavior of the spectral function in an infinitesimal neighborhood of the diagonal.
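On a flat torus the spectrum is explicit and the Weyl estimate (3.2) becomes a lattice point count, which is easy to probe numerically. A sketch for $m=2$ and the torus $\mathbb R^2/(2\pi\mathbb Z)^2$ (an illustration assumed here, with $\mathrm{vol} = (2\pi)^2$, so the prediction is $\dim U_L \approx \pi L$):

```python
import numpy as np

def dim_UL_flat_torus(L):
    """dim U_L on R^2 / (2 pi Z)^2: the Laplacian eigenvalues are |k|^2
    with k in Z^2, so dim U_L counts lattice points with |k|^2 <= L."""
    r = int(np.floor(np.sqrt(L)))
    ks = np.arange(-r, r + 1)
    kx, ky = np.meshgrid(ks, ks)
    return int(np.sum(kx * kx + ky * ky <= L))

# Weyl's law: dim U_L = omega_2 vol(T^2) (2 pi)^{-2} L + O(sqrt(L)) = pi L + O(sqrt(L))
L = 10000.0
ratio = dim_UL_flat_torus(L) / (np.pi * L)
print(ratio)
```

The error term here is the classical Gauss circle problem remainder, well within the $O(L^{1/2})$ allowance of (3.2).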
To formulate these estimates we set $\lambda := L^{\frac12}$. Fix a point $p$ and normal coordinates $x = (x^1,\dots,x^m)$ at $p$. Note that $x(p) = 0$. For any multi-indices $\alpha,\beta\in\mathbb Z^m_{\ge0}$ we set
$$E^L_{\alpha,\beta}(p) := \partial^\alpha_x\partial^\beta_y E_L(x,y)\big|_{x=y=0}.$$
X. Bin proved that for any multi-indices $\alpha,\beta\in\mathbb Z^m_{\ge0}$ we have
$$E^L_{\alpha,\beta}(p) = C_m(\alpha,\beta)\,\lambda^{m+|\alpha|+|\beta|} + O\big(\lambda^{m+|\alpha|+|\beta|-1}\big), \tag{3.3}$$
where
$$C_m(\alpha,\beta) = \frac{(-1)^{|\beta|}\, i^{|\alpha|+|\beta|}}{(2\pi)^m}\int_{B^m} x^{\alpha+\beta}\,|dx|, \tag{3.4}$$
and $B^m\subset\mathbb R^m$ denotes the unit ball; in particular $C_m(\alpha,\beta) = 0$ whenever $|\alpha|+|\beta|$ is odd. The estimates (3.3) are uniform in $p\in M$. Using (A.6) we deduce (compare with (B.13))
$$C_m(\alpha,\beta) = \frac{(-1)^{|\beta|}\, i^{|\alpha|+|\beta|}}{(2\pi)^m\,\Gamma\big(\frac{m+|\alpha|+|\beta|}{2}+1\big)}\int_{\mathbb R^m} x^{\alpha+\beta}\, e^{-|x|^2}\,|dx|.$$
We set $K_m := C_m(\alpha,\alpha)$ for $|\alpha|=1$, so that
$$K_m = \frac{1}{(2\pi)^m\,\Gamma\big(\frac m2+2\big)}\int_{\mathbb R^m} x_i^2\, e^{-|x|^2}\,|dx| = \frac{1}{2\,(4\pi)^{\frac m2}\,\Gamma\big(\frac m2+2\big)}.$$
For any $i\le j$ define $\gamma_{ij}\in\mathbb Z^m_{\ge0}$ so that $x^{\gamma_{ij}} = x_ix_j$, and for $i\le j$, $k\le\ell$ set $C_m(i,j;k,\ell) := C_m(\gamma_{ij},\gamma_{k\ell})$. Then
$$C_m(i,i;i,i) = \frac{1}{(2\pi)^m\Gamma\big(\frac m2+3\big)}\int_{\mathbb R^m} x_i^4\, e^{-|x|^2}\,|dx| = \frac{3}{4\,(4\pi)^{\frac m2}\,\Gamma\big(\frac m2+3\big)} =: 3c_m,$$
and for $i<j$
$$C_m(i,i;j,j) = C_m(i,j;i,j) = \frac{1}{(2\pi)^m\Gamma\big(\frac m2+3\big)}\int_{\mathbb R^m} x_i^2x_j^2\, e^{-|x|^2}\,|dx| = \frac{1}{4\,(4\pi)^{\frac m2}\,\Gamma\big(\frac m2+3\big)} = c_m,$$
while $C_m(i,j;k,\ell) = 0$ for all the remaining pairs with $(i,j)\ne(k,\ell)$, $k\le\ell$. We denote by $\sigma^L$ the stochastic metric on $M$ determined by the sample space $U_L$.
As explained in the previous section, the covariance form of the random vector $U_L\ni u\mapsto du(p)\in T^\vee_pM$ is
$$\sigma^L_p(\partial_{x^i},\partial_{x^j}) = \frac{\partial^2E_L(x,y)}{\partial x^i\partial y^j}\Big|_{x=y=0} = K_m\,\lambda^{m+2}\,\delta_{ij} + O(\lambda^{m+1})\quad\text{as } L\to\infty, \tag{3.5}$$
uniformly in $p$. In particular, if $S^L_{du(p)}$ denotes the covariance operator of the random vector $du(p)$, then we deduce from the above equality that
$$S^L_{du(p)} = K_m\,\lambda^{m+2}\,\mathbb 1_m + O(\lambda^{m+1}),\quad\text{uniformly in } p, \tag{3.6}$$
and invoking (2.11) we deduce
$$J^L_g(p) = \big(\det S^L_{du(p)}\big)^{\frac12} = K_m^{\frac m2}\,\lambda^{\frac{m(m+2)}{2}} + O\big(\lambda^{\frac{m(m+2)}{2}-1}\big),\quad\text{uniformly in } p. \tag{3.7}$$
Denote by $H^L_p$ the covariance form of the random matrix $U_L\ni u\mapsto\nabla^2_pu\in S(T_pM) = S_m$. Using (2.14) and (3.3) we deduce
$$H^L_p = c_m\,\lambda^{m+4}\,\Lambda_{3,1,1} + O(\lambda^{m+3}),\quad\text{uniformly in } p, \tag{3.8}$$
where the positive definite, symmetric bilinear form $\Lambda_{3,1,1}: S^\vee_m\times S^\vee_m\to\mathbb R$ is described by the equalities (B.2a) and (B.2b). We denote by $d\sigma_{3,1,1}$ the centered Gaussian measure on $S_m$ with covariance form $\Lambda_{3,1,1}$. The third order derivatives of $E_L$ control the covariance of the pair $(\nabla^2_pu, du(p))$; since the leading coefficients $C_m(\alpha,\beta)$ vanish when $|\alpha|+|\beta| = 3$ is odd, we deduce that
$$\operatorname{Cov}\big(\nabla^2_pu,\,du(p)\big) = O(\lambda^{m+2}),\quad\text{uniformly in } p. \tag{3.9}$$
)
U
si
n
g
(3
.6
),
(3
.8
)
a
n
d
(3
.9
)
w
e
d
e
d
uc
e
th
at
th
e
co
va
ri
a
nc
e
o
p
er
at
or
d
efi
n
e
d
as
in
(2
.9
)
sa
tis
fie
s
th
e
es
ti
m
at
e
);
as
L!
1;
u
nif
or
ml
y
in
p,
(3
.1
0)
w
h
er
e3
;1;
1i
s
th
e
co
va
ri
a
nc
e
o
p
er
at
or
as
so
ci
at
e
d
to
th
e
co
va
ri
a
nc
e
fo
r
m
a
n
d
it
is
d
es
cri
b
e
d
ex
pli
cit
ly
in
(B
.3
).
If
w
e
d
e
n
ot
e
by
dL
th
e
G
a
us
si
a
n
m
e
as
ur
e
o
n
S
wi
th
co
va
ri
a
nc
e
o
p
er
at
or
,
w
e
d
e
d
uc
e
th
at
b
Q
Lp
=
c
m
m
m
+
4
b
Q
3
;
1
;
1
(2 )dL(Y) = 1N m2
|
{z
}
3;1;1
Lp
+ O( m+2
(det
L )p 1 2e
p Y;Y)
2 (
i j) Y m 12(22 L
jdYj
dyij;
where $N_m := \dim S_m = \frac{m(m+1)}{2}$. Let us observe that $|dY|$ is the Euclidean volume element on $S_m$ defined by the natural inner product on $S_m$, $(X,Y) = \operatorname{tr}(XY)$. We set
$$c_L := c_m\,\lambda^{m+4},\qquad Q^L_p := \frac{1}{c_L}\, S^L_p.$$
Using (A.7) we deduce that
$$E\big(|\det Y_p|\big) = \frac{c_L^{\frac m2}}{(2\pi)^{\frac{N_m}{2}}\big(\det Q^L_p\big)^{\frac12}}\int_{S_m}|\det Y|\, e^{-\frac12\big((Q^L_p)^{-1}Y,\,Y\big)}\,|dY|. \tag{3.11}$$
From the estimate (3.10) we deduce that $Q^L_p\to\widehat Q_{3,1,1}$ as $L\to\infty$, uniformly in $p$. We conclude that
$$E\big(|\det Y_p|\big) \sim c_L^{\frac m2}\int_{S_m}|\det Y|\,d\sigma_{3,1,1}(Y),$$
where
$$d\sigma_{3,1,1}(Y) = \frac{1}{(2\pi)^{\frac{N_m}{2}}\sqrt{\gamma_m}}\, e^{-\frac14\big(\operatorname{tr}Y^2-\frac{1}{m+2}(\operatorname{tr}Y)^2\big)}\,|dY|,\qquad \gamma_m := \det\widehat Q_{3,1,1},$$
is given by (B.12). Using (2.12), (3.7) and (3.11) we deduce that
$$E(N_L) \sim \frac{1}{(2\pi)^{\frac m2}}\Big(\frac{c_m}{K_m}\Big)^{\frac m2} L^{\frac m2}\,\mathrm{vol}_g(M)\int_{S_m}|\det Y|\,d\sigma_{3,1,1}(Y) \sim \underbrace{\frac{(2\pi)^{\frac m2}}{\omega_m}\Big(\frac{c_m}{K_m}\Big)^{\frac m2}\int_{S_m}|\det Y|\,d\sigma_{3,1,1}(Y)}_{=:C(m)}\ \dim U_L.$$
This completes the proof of (1.1) and (1.4). ⊓⊔
4. THE PROOF OF THEOREM 1.2

Observe that
$$\frac{c_m}{K_m} = \frac{2\,(4\pi)^{\frac m2}\,\Gamma\big(\frac m2+2\big)}{4\,(4\pi)^{\frac m2}\,\Gamma\big(\frac m2+3\big)} = \frac{1}{2\big(\frac m2+2\big)} = \frac{1}{m+4},$$
so that, with $\omega_m = \pi^{\frac m2}/\Gamma\big(\frac m2+1\big)$,
$$C(m) = \Gamma\Big(\frac m2+1\Big)\Big(\frac{2}{m+4}\Big)^{\frac m2} I_m,\qquad I_m := \int_{S_m}|\det X|\, d\sigma_{3,1,1}(X).$$
We begin by describing the large $m$ behavior of the integral
$$I_m = \frac{1}{(2\pi)^{\frac{m(m+1)}{4}}\sqrt{\gamma_m}}\int_{S_m}|\det X|\, e^{-\frac14\big(\operatorname{tr}X^2-\frac{1}{m+2}(\operatorname{tr}X)^2\big)}\,|dX|.$$
We follow the strategy in [9]; see also [8, §1.5]. Recall first the classical equality
$$\int_{\mathbb R} e^{-(at^2+bt+c)}\,|dt| = \sqrt{\frac{\pi}{a}}\; e^{\frac{\Delta}{4a}},\qquad \Delta := b^2-4ac,\quad a>0,$$
which follows from the well known identity
$$at^2+bt+c = a\Big(t+\frac{b}{2a}\Big)^2 - \frac{\Delta}{4a}.$$
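The completed square can be verified numerically; the sketch below compares a trapezoidal approximation of the left-hand side with the closed form (the particular values of a, b, c are arbitrary).

```python
import numpy as np

# Check: int_R exp(-(a t^2 + b t + c)) dt = sqrt(pi / a) * exp((b^2 - 4 a c) / (4 a))
a, b, c = 2.0, -1.5, 0.7
t = np.linspace(-40.0, 40.0, 800001)
f = np.exp(-(a * t * t + b * t + c))
lhs = float(np.sum(f[1:] + f[:-1]) * 0.5 * (t[1] - t[0]))   # trapezoid rule
rhs = float(np.sqrt(np.pi / a) * np.exp((b * b - 4.0 * a * c) / (4.0 * a)))
print(lhs, rhs)
```

The trapezoid rule is spectrally accurate here because the integrand decays to zero well inside the truncated interval.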
For any real numbers $u,v,w$ we have
$$ut^2 + v\operatorname{tr}\big((X+wt\,\mathbb 1_m)^2\big) = (u+mvw^2)\,t^2 + 2vw\,t\operatorname{tr}X + v\operatorname{tr}X^2.$$
We seek $u,v,w$ such that, after integrating out $t$ using the classical equality above, this quadratic expression reproduces, up to a constant factor, the weight $e^{-\frac14\big(\operatorname{tr}X^2-\frac{1}{m+2}(\operatorname{tr}X)^2\big)}$ appearing in $I_m$; this amounts to the constraints
$$v\operatorname{tr}X^2\ \text{term} = \tfrac14\operatorname{tr}X^2,\qquad \frac{(2vw)^2}{4(u+mvw^2)} = \frac{1}{4(m+2)}.$$
A direct computation shows that these constraints can be solved, with $w$ and $u$ determined explicitly in terms of $m$. Performing the resulting substitution, rescaling $X$, and collecting all the normalization constants into a single constant $A_m$, we conclude that
$$I_m = A_m\int_{\mathbb R}\bigg(\int_{S_m}\big|\det(X - s\,\mathbb 1_m)\big|\, e^{-\frac12\operatorname{tr}X^2}\,|dX|\bigg)\, e^{-2s^2}\,ds.$$
For any $O(m)$-invariant function $f: S_m\to\mathbb R$ we have a Weyl integration formula (see [2, 8, 13]),
$$\int_{S_m} f(X)\,|dX| = \frac{1}{Z_m}\int_{\mathbb R^m} f(\lambda)\,\big|\Delta_m(\lambda)\big|\,|d\lambda|,\qquad \Delta_m(\lambda) := \prod_{1\le i<j\le m}(\lambda_j-\lambda_i),$$
where $f(\lambda)$ denotes the common value of $f$ on the symmetric matrices with eigenvalues $\lambda_1,\dots,\lambda_m$,
and the constant $Z_n$ is defined by the equality
$$Z_n := \frac{\displaystyle\int_{\mathbb R^n} e^{-\frac12|\lambda|^2}\,|\Delta_n(\lambda)|\,|d\lambda|}{\displaystyle\int_{S_n} e^{-\frac12\operatorname{tr}X^2}\,|dX|}.$$
The denominator is equal to $(2\pi)^{\frac{\dim S_n}{2}}$, while the numerator is given by ([13, §17.6])
$$\mathcal Z_n := \int_{\mathbb R^n} e^{-\frac12|\lambda|^2}\,|\Delta_n(\lambda)|\,|d\lambda| = (2\pi)^{\frac n2}\prod_{j=1}^n\frac{\Gamma\big(1+\frac j2\big)}{\Gamma\big(\frac32\big)} = 2^{\frac{3n}{2}}\prod_{j=1}^n\Gamma\Big(1+\frac j2\Big),$$
so that
$$Z_n = \frac{2^{\frac{3n}{2}}}{(2\pi)^{\frac{\dim S_n}{2}}}\prod_{j=1}^n\Gamma\Big(1+\frac j2\Big). \tag{4.1}$$
Now observe that for any $\lambda_0\in\mathbb R$ we have
$$\int_{\mathbb R^m} e^{-\frac12|\lambda|^2}\prod_{j=1}^m|\lambda_j-\lambda_0|\;|\Delta_m(\lambda)|\,|d\lambda| = e^{\frac{\lambda_0^2}{2}}\,\frac{\mathcal Z_{m+1}}{m+1}\,\rho_{m+1}(\lambda_0),$$
where
$$\rho_n(x) := \frac{n}{\mathcal Z_n}\int_{\mathbb R^{n-1}} e^{-\frac12(x^2+|\lambda|^2)}\,\big|\Delta_n(x,\lambda_1,\dots,\lambda_{n-1})\big|\,|d\lambda|.$$
The function $\rho_n(x)$ is known in random matrix theory as the 1-point correlation function of the Gaussian orthogonal ensemble of symmetric $n\times n$ matrices, [6, §4.4.1], [13, §4.2]; it satisfies $\int_{\mathbb R}\rho_n(x)\,dx = n$. We conclude that
$$I_m = B_m\int_{\mathbb R}\rho_{m+1}(x)\, e^{-\frac{3x^2}{2}}\,dx, \tag{4.2}$$
where $B_m$ is an explicit positive constant, a ratio of Gamma factors and powers of $2$, $\pi$ and $m+2$ assembled from $A_m$, $\mathcal Z_{m+1}$ and $Z_m$ via (4.1).
To proceed further we use as guide Wigner's theorem, [2, Thm. 2.1.1], stating that the sequence of probability measures $\bar\rho_n(x)\,dx$,
$$\bar\rho_n(x) := \frac{1}{\sqrt n}\,\rho_n\big(\sqrt n\,x\big),$$
converges weakly to the semi-circle probability measure³ $\rho_\infty(x)\,dx$,
$$\rho_\infty(x) = \begin{cases}\dfrac1\pi\sqrt{2-x^2}, & |x|\le\sqrt2,\\[1mm] 0, & |x|>\sqrt2.\end{cases} \tag{4.3}$$
We deduce
$$\int_{\mathbb R}\rho_n(x)\, e^{-\frac{3x^2}{2}}\,dx = n\int_{\mathbb R}\bar\rho_n(s)\, e^{-\frac{3ns^2}{2}}\,ds = \sqrt{\frac{2\pi n}{3}}\int_{\mathbb R}\bar\rho_n(s)\,\underbrace{\Big(\frac{3n}{2\pi}\Big)^{\frac12} e^{-\frac{3ns^2}{2}}}_{=:w_n(s)}\,ds. \tag{4.4}$$
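Wigner's theorem in the normalization (4.3) can be probed numerically. The sketch below assumes the GOE convention matching (4.3): symmetric matrices with off-diagonal variance $1/2$ and diagonal variance $1$, whose rescaled eigenvalues fill the semicircle on $[-\sqrt2,\sqrt2]$; the second moment of (4.3) is $1/2$.

```python
import numpy as np

rng = np.random.default_rng(3)

def goe_spectrum(n):
    """Eigenvalues of X / sqrt(n), X symmetric with Var(x_ii) = 1 and
    Var(x_ij) = 1/2 for i < j."""
    A = rng.normal(size=(n, n))
    X = (A + A.T) / 2.0
    return np.linalg.eigvalsh(X / np.sqrt(n))

eigs = np.concatenate([goe_spectrum(400) for _ in range(20)])
# Second moment of the semicircle density (1/pi) sqrt(2 - x^2) on |x| <= sqrt(2)
# equals 1/2; compare with the empirical spectrum.
second_moment = float(np.mean(eigs * eigs))
print(second_moment)
```

The empirical second moment concentrates very quickly, so even this modest sample resolves the semicircle normalization.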
We observe that the Gaussian measures $w_n(s)\,ds$ converge to the Dirac delta measure concentrated at the origin. One can express the 1-point correlation function $\rho_n(x)$ in terms of Hermite polynomials, [13, Eq. (7.2.32)]. Set
$$\varphi_n(x) := \frac{1}{\big(2^n n!\sqrt\pi\big)^{\frac12}}\, e^{-\frac{x^2}{2}} H_n(x),\qquad H_n(x) := (-1)^n e^{x^2}\frac{d^n}{dx^n}\big(e^{-x^2}\big),$$
and
$$\varepsilon(x) := \begin{cases}\ \ \tfrac12, & x>0,\\ \ \ 0, & x=0,\\ -\tfrac12, & x<0.\end{cases}$$
Then
$$\rho_n(x) = \underbrace{\sum_{k=0}^{n-1}\varphi_k(x)^2}_{=:k_n(x)} + r_n(x), \tag{4.5}$$
where the correction term $r_n(x)$ is built from $\varphi_{n-1}$ and integrals of the form $\int_{\mathbb R}\varepsilon(x-t)\varphi_n(t)\,dt$, its precise shape depending on the parity of $n$. From the Christoffel-Darboux formula [16, Eq. (5.5.9)] we deduce
$$k_n(x) = \frac{e^{-x^2}}{\sqrt\pi}\sum_{k=0}^{n-1}\frac{1}{2^kk!}H_k(x)^2 = \frac{e^{-x^2}}{2^n(n-1)!\sqrt\pi}\Big(H_n'(x)H_{n-1}(x) - H_n(x)H_{n-1}'(x)\Big).$$
Using the recurrence formula $H_{n+1}(x) = 2xH_n(x) - 2nH_{n-1}(x)$ and the identity $H_n'(x) = 2nH_{n-1}(x)$ we deduce
$$k_n(x) = \frac{e^{-x^2}}{2^n(n-1)!\sqrt\pi}\Big(H_n(x)^2 - H_{n-1}(x)H_{n+1}(x)\Big).$$

³The difference between our definition (4.3) of the semicircle measure and the one in [2, Thm. 2.1.1] is due to the fact that the covariances of our random matrices differ by a factor from those in [2]. Our conventions agree with those in [13].
FIGURE 1. The graph of $k_{16}(x)$, $|x|\le 8$.
We set
$$\bar k_n(x) := \frac{1}{\sqrt n}\, k_n\big(\sqrt n\,x\big).$$
The integrals $\int_{\mathbb R}\bar\rho_n(s)\,w_n(s)\,ds$ entering (4.4) can be given a more explicit description using the generating function, [16, Eq. (5.5.6)],
$$\sum_{n\ge0}\frac{H_n(x)}{n!}\,w^n = e^{2xw-w^2}.$$
Using the refined asymptotic estimates for Hermite polynomials [7] and [16, Thm. 8.22.9, Thm. 8.91.3] we deduce that
$$\lim_{n\to\infty}\int_{\mathbb R}\bar\rho_n(s)\,w_n(s)\,ds = \lim_{n\to\infty}\int_{\mathbb R}\bar k_n(s)\,w_n(s)\,ds = \rho_\infty(0) = \frac{\sqrt2}{\pi}.$$
Using the above equality in (4.2) and (4.4) we deduce that, as $m\to\infty$,
$$I_m \sim \sqrt{\frac{2\pi(m+1)}{3}}\cdot\frac{\sqrt2}{\pi}\; B_m.$$
We now invoke Stirling's formula,
$$\Gamma(x+1) \sim \sqrt{2\pi x}\,\Big(\frac xe\Big)^x\quad\text{as } x\to\infty,$$
to conclude from the Gamma factors entering $B_m$ through $A_m$, $\mathcal Z_{m+1}$ and $Z_m$ that
$$\log I_m \sim \frac m2\log m\quad\text{as } m\to\infty. \tag{4.6}$$
From (1.2) we deduce that
$$\log C(m) = \log I_m + \log\Gamma\Big(\frac m2+1\Big) - \frac m2\log\frac{m+4}{2},$$
and Stirling's formula shows that the last two terms cancel each other up to an error of size $O(m)$, which is negligible compared with $\frac m2\log m$.
Stirling’s formula and (4.6) imply that logC(m) logImas m!1: This proves (1.3). ut
APPENDIX A. GAUSSIAN MEASURES AND GAUSSIAN RANDOM FIELDS
For the reader's convenience we survey here a few basic facts about Gaussian measures. For more details we refer to [5]. A Gaussian measure on $\mathbb R$ is a Borel measure $\gamma_{m,\sigma}$ of the form
$$\gamma_{m,\sigma}(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(x-m)^2}{2\sigma^2}}\,dx.$$
The scalar $m$ is called the mean, while $\sigma$ is called the standard deviation. We allow $\sigma$ to be zero, in which case $\gamma_{m,0} = \delta_m$ = the Dirac measure on $\mathbb R$ concentrated at $m$.

Suppose that $V$ is a finite dimensional vector space. A Gaussian measure on $V$ is a Borel measure $\gamma$ on $V$ such that, for any $\xi\in V^\vee$, the pushforward $\xi_*(\gamma)$ is a Gaussian measure on $\mathbb R$, $\xi_*(\gamma) = \gamma_{m(\xi),\sigma(\xi)}$. One can show that the map $V^\vee\ni\xi\mapsto m(\xi)\in\mathbb R$ is linear, and thus can be identified with a vector $m_\gamma\in V$, called the barycenter or expectation of $\gamma$, that can be alternatively defined by the equality
$$m_\gamma = \int_V v\,d\gamma(v).$$
Moreover, there exists a nonnegative definite, symmetric bilinear map $\Sigma: V^\vee\times V^\vee\to\mathbb R$ such that $\Sigma(\xi,\xi) = \sigma(\xi)^2$. The Gaussian measure is uniquely determined by its Fourier transform $\widehat\gamma: V^\vee\to\mathbb C$,
$$\widehat\gamma(\xi) = e^{\,i m(\xi) - \frac12\Sigma(\xi,\xi)}.$$
The form $\Sigma$ is called the covariance form, and it can be identified with a linear operator $S: V^\vee\to V$ such that
$$\langle\xi, S\eta\rangle = \Sigma(\xi,\eta) = \int_V\langle\xi, v-m_\gamma\rangle\,\langle\eta, v-m_\gamma\rangle\,d\gamma(v),\qquad \forall\xi,\eta\in V^\vee,$$
where $\langle-,-\rangle: V^\vee\times V\to\mathbb R$ denotes the natural bilinear pairing between a vector space and its dual. The operator $S$ is called the covariance operator. The Gaussian measure is said to be nondegenerate if $\Sigma$ is nondegenerate, and it is called centered if $m_\gamma = 0$. A nondegenerate Gaussian measure on $V$ is uniquely determined by its covariance form and its barycenter.

Example A.1. Suppose that $U$ is an $n$-dimensional Euclidean space with inner product $(-,-)$. We use the inner product to identify $U$ with its dual $U^\vee$. If $A: U\to U$ is a symmetric, positive definite operator, then
$$d\gamma_A(u) = \frac{1}{(2\pi)^{\frac n2}\sqrt{\det A}}\, e^{-\frac12(A^{-1}u,u)}\,|du| \tag{A.1}$$
is a centered Gaussian measure on $U$ with covariance form described by the operator $A$. ⊓⊔
If $V$ is a finite dimensional vector space equipped with a Gaussian measure $\gamma$ and $L: V\to U$ is a linear map, then the pushforward $L_*\gamma$ is a Gaussian measure on $U$ with barycenter $m_{L_*\gamma} = L(m_\gamma)$ and covariance form
$$\Sigma_{L_*\gamma}: U^\vee\times U^\vee\to\mathbb R,\qquad \Sigma_{L_*\gamma}(\xi,\eta) = \Sigma_\gamma\big(L^\vee\xi, L^\vee\eta\big),$$
where $L^\vee: U^\vee\to V^\vee$ is the dual (transpose) of the linear map $L$. Observe that if $\Sigma_\gamma$ is nondegenerate and $L$ is surjective, then $\Sigma_{L_*\gamma}$ is also nondegenerate.

Suppose $(S,\mu)$ is a probability space. A Gaussian random vector on $(S,\mu)$ is a (Borel) measurable map $X: S\to V$, $V$ a finite dimensional vector space, such that $X_*\mu$ is a Gaussian measure on $V$. We denote this measure by $\gamma_X$, and by $\Sigma_X$ (respectively $S_X$) its covariance form (respectively operator). The expectation of $\gamma_X$ is precisely the expectation of $X$. The random vector is called nondegenerate if its associated Gaussian measure is such.

Suppose that $X_j: S\to V_j$, $j=1,2$, are two Gaussian random vectors such that the direct sum $X_1\oplus X_2: S\to V_1\oplus V_2$ is also a Gaussian random vector. We obtain a bilinear form
$$\operatorname{cov}(X_1,X_2): V_1^\vee\times V_2^\vee\to\mathbb R,\qquad \operatorname{cov}(X_1,X_2)(\xi_1,\xi_2) = E\big(\langle\xi_1, X_1-E(X_1)\rangle\,\langle\xi_2, X_2-E(X_2)\rangle\big),$$
called the covariance form. The random vectors $X_1, X_2$ are independent if and only if they are uncorrelated, i.e., $\operatorname{cov}(X_1,X_2) = 0$. We can form the random vector $E(X_1|X_2)$, the conditional expectation of $X_1$ given $X_2$. If $X_1$ and $X_2$ are independent then $E(X_1|X_2) = E(X_1)$, while at the other end we have $E(X_1|X_1) = X_1$. To find the formula in general we fix Euclidean metrics $(-,-)_{V_j}$ on $V_j$. We can then identify $\operatorname{cov}(X_1,X_2)$ with a linear operator $\operatorname{Cov}(X_1,X_2): V_2\to V_1$ via the equality
$$\operatorname{cov}(X_1,X_2)(\xi_1,\xi_2) = \big(\xi_1^\dagger,\operatorname{Cov}(X_1,X_2)\,\xi_2^\dagger\big)_{V_1},$$
where $\xi^\dagger$ denotes the vector metric dual to $\xi$. If $X_2$ is nondegenerate, then we have the regression formula
$$E(X_1|X_2) = E(X_1) + C\big(X_2 - E(X_2)\big), \tag{A.2}$$
where $C$ is the linear operator described in (A.3) below. To prove this we follow the elegant argument in [3, Prop. 1.2]. Observe first that it suffices to prove the equality in the special case when $E(X_1) = 0$ and $E(X_2) = 0$, which is what we will assume in the sequel.
We seek a linear operator $C: V_2\to V_1$ such that the random vector $Y = X_1 - CX_2$ is independent of $X_2$. If such an operator exists then
$$E(X_1|X_2) = E(Y|X_2) + E(CX_2|X_2) = E(Y) + CX_2 = CX_2.$$
Since the random vector $X_1 - CX_2$ is Gaussian, the operator $C$ must satisfy the constraint
$$\operatorname{cov}(X_1-CX_2, X_2) = 0 \iff 0 = \operatorname{Cov}(X_1-CX_2,X_2) = \operatorname{Cov}(X_1,X_2) - \operatorname{Cov}(CX_2,X_2).$$
Identifying $V_2$ with $V_2^\vee$ via the Euclidean metric $(-,-)_{V_2}$, we can regard $S_{X_2}$ as a linear, symmetric nonnegative operator $V_2\to V_2$, and we deduce
$$\operatorname{Cov}(CX_2,X_2) = C\,S_{X_2} = \operatorname{Cov}(X_1,X_2),$$
which shows that
$$C = \operatorname{Cov}(X_1,X_2)\, S_{X_2}^{-1}. \tag{A.3}$$
The conditional probability density of $X_1$ given that $X_2 = x_2$ is the function
$$p_{(X_1|X_2=x_2)}(x_1) = \frac{p_{X_1\oplus X_2}(x_1,x_2)}{p_{X_2}(x_2)},$$
and for a measurable function $f: V_1\to\mathbb R$ we set
$$E\big(f(X_1)\,\big|\, X_2=x_2\big) = \int_{V_1} f(x_1)\, p_{(X_1|X_2=x_2)}(x_1)\,|dx_1|.$$
Again, if $X_2$ is nondegenerate, then we have the regression formula
$$E\big(f(X_1)\,\big|\, X_2=x_2\big) = E\big(f(Y + Cx_2)\big), \tag{A.4}$$
where $Y: S\to V_1$ is a Gaussian vector with
$$E(Y) = E(X_1) - C\,E(X_2),\qquad S_Y = S_{X_1} - \operatorname{Cov}(X_1,X_2)\, S_{X_2}^{-1}\,\operatorname{Cov}(X_2,X_1), \tag{A.5}$$
and $C$ is given by (A.3).

Let us point out that if $X: S\to U$ is a Gaussian random vector and $L: U\to V$ is a linear map, then the random vector $LX: S\to V$ is also Gaussian. Moreover
$$E(LX) = L\,E(X),\qquad \Sigma_{LX}(\xi,\eta) = \Sigma_X\big(L^\vee\xi, L^\vee\eta\big).$$
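For jointly Gaussian scalars, the operator in (A.3) is just the ratio $c_{12}/\operatorname{Var}(X_2)$, and the regression formula can be compared against an empirical conditional mean. A minimal sketch (the covariance numbers are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)

# Centered Gaussian pair (X1, X2) with prescribed covariance matrix.
v1, v2, c12 = 2.0, 1.5, 0.9
Sigma = np.array([[v1, c12], [c12, v2]])
Z = rng.normal(size=(500000, 2)) @ np.linalg.cholesky(Sigma).T
X1, X2 = Z[:, 0], Z[:, 1]

C = c12 / v2            # the operator C = Cov(X1, X2) S_{X2}^{-1} of (A.3)
x2 = 0.8
mask = np.abs(X2 - x2) < 0.05      # crude empirical conditioning on X2 ~ x2
empirical = float(np.mean(X1[mask]))
predicted = C * x2                 # regression formula (A.2): E(X1 | X2 = x2) = C x2
print(empirical, predicted)
```

The agreement improves as the conditioning window shrinks and the sample grows, exactly as (A.2) predicts.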
A random field on a set $T$ is a family $(\xi_t)_{t\in T}$ of random variables parameterized by the set $T$. For simplicity we will assume that all these random variables have finite second moments. For any $t\in T$ we denote by $\bar\xi_t$ the expectation of $\xi_t$. The covariance function or kernel of the field is the function $C: T\times T\to\mathbb R$ defined by
$$C(t_1,t_2) = E\big((\xi_{t_1}-\bar\xi_{t_1})(\xi_{t_2}-\bar\xi_{t_2})\big) = \int_S\big(\xi_{t_1}(s)-\bar\xi_{t_1}\big)\big(\xi_{t_2}(s)-\bar\xi_{t_2}\big)\,d\mu(s).$$
The field is called Gaussian if for any finite subset $F\subset T$ the random vector
$$S\ni s\mapsto\big(\xi_t(s)\big)_{t\in F}\in\mathbb R^F$$
is a Gaussian random vector. Almost all the important information concerning a Gaussian random field can be extracted from its covariance kernel.

Here is a simple method of producing Gaussian random fields on a set $T$. Choose a finite dimensional space $U$ of real valued functions on $T$. Once we fix a Gaussian measure $d\gamma$ on $U$ we obtain tautologically a random field
$$\xi: T\times U\to\mathbb R,\qquad (t,u)\mapsto \xi_t(u) = u(t).$$
This is a Gaussian field, since for any finite subset $F\subset T$ the random vector $U\ni u\mapsto(u(t))_{t\in F}$ is Gaussian: the map is linear, and thus the pushforward of $d\gamma$ under it is a Gaussian measure on $\mathbb R^F$. For more information about random fields we refer to [1, 3, 10].

In the conclusion of this section we want to describe a few simple integral formulas.
Proposition A.2. Suppose that U is an Euclidean space of dimension N, f : U → R is a locally integrable, positively homogeneous function of degree k ≥ 0, and A : U → U is a positive definite symmetric operator. Denote by B(U) the unit ball of U centered at the origin, and by S(U) its boundary. Then the following hold:

  ∫_{B(U)} f(u) |du| = 1/(k+N) ∫_{S(U)} f(u) |dA(u)| = 1/Γ( (k+N)/2 + 1 ) ∫_U f(u) e^{−|u|²} |du|,   (A.6)

  ∫_U f(u) dγ_{tA}(u) = t^{k/2} ∫_U f(u) dγ_A(u),  ∀t > 0,   (A.7)

where dγ_A is the Gaussian measure defined by (A.1).
Proof. Using polar coordinates and the homogeneity of f we have

  ∫_{B(U)} f(u) |du| = ∫_0^1 t^{k+N−1} dt ∫_{S(U)} f(u) |dA(u)| = 1/(k+N) ∫_{S(U)} f(u) |dA(u)|.

On the other hand,

  ∫_U f(u) e^{−|u|²} |du| = ∫_0^∞ t^{k+N−1} e^{−t²} dt ∫_{S(U)} f(u) |dA(u)| = 1/2 Γ( (k+N)/2 ) ∫_{S(U)} f(u) |dA(u)| = Γ( (k+N)/2 + 1 ) ∫_{B(U)} f(u) |du|.

This proves (A.6). The equality (A.7) follows by using the change in variables u = t^{1/2} v. ⊓⊔
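A quick numerical check of (A.6) (mine, not the paper's): take N = 2 and f(u) = u_1², which is homogeneous of degree k = 2, so all three quantities in (A.6) should equal π/4.

```python
import math
import numpy as np

rng = np.random.default_rng(2)
k, N = 2, 2

# Ball term: Monte Carlo over the bounding square [-1, 1]^2.
u = rng.uniform(-1.0, 1.0, size=(2_000_000, 2))
inside = (u ** 2).sum(axis=1) <= 1.0
ball = 4.0 * float(np.mean(np.where(inside, u[:, 0] ** 2, 0.0)))

# Sphere term: (1/(k+N)) * integral of cos^2 over the unit circle.
theta = np.linspace(0.0, 2.0 * math.pi, 100_000, endpoint=False)
sphere = float(np.mean(np.cos(theta) ** 2)) * 2.0 * math.pi / (k + N)

# Gaussian term: int u_1^2 e^{-|u|^2} du = pi * E[x^2] with x ~ N(0, 1/2).
x = rng.normal(0.0, math.sqrt(0.5), size=2_000_000)
gauss = math.pi * float(np.mean(x ** 2)) / math.gamma((k + N) / 2 + 1)

print(ball, sphere, gauss, math.pi / 4)
```

All three estimates cluster around π/4 ≈ 0.7854, as the chain of equalities in (A.6) predicts.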
APPENDIX B. GAUSSIAN RANDOM SYMMETRIC MATRICES

We want to describe in some detail a 3-parameter family of centered Gaussian measures on S_m, the vector space of real symmetric m × m matrices, m > 1. For any 1 ≤ i ≤ j ≤ m we define ξ_ij ∈ S_m^∨ so that for any A ∈ S_m we have

  ξ_ij(A) = a_ij = the (i, j) entry of the matrix A.

The collection (ξ_ij)_{1≤i≤j≤m} is a basis of the dual space S_m^∨. We denote by (E_ij)_{1≤i≤j≤m} the dual basis of S_m. More precisely, E_ij is the symmetric matrix whose (i, j) and (j, i) entries are 1, while all the other entries are equal to zero. For any A ∈ S_m we have

  A = Σ_{i≤j} ξ_ij(A) E_ij.
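The basis (E_ij) and the reconstruction A = Σ_{i≤j} ξ_ij(A) E_ij can be checked mechanically; the following sketch is mine, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(3)
m = 4

def E(i, j):
    """E_ij: 1 in slots (i, j) and (j, i), zero elsewhere."""
    M = np.zeros((m, m))
    M[i, j] = M[j, i] = 1.0
    return M

A = rng.standard_normal((m, m))
A = (A + A.T) / 2                    # random symmetric matrix

# xi_ij(A) is just the (i, j) entry of A; sum over i <= j recovers A.
recon = sum(A[i, j] * E(i, j) for i in range(m) for j in range(i, m))
err = float(np.max(np.abs(recon - A)))
print(err)
```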
The space S_m is equipped with an inner product

  (−, −) : S_m × S_m → R,  (A, B) = tr(AB),  ∀A, B ∈ S_m.

This inner product is invariant with respect to the action of SO(m) on S_m. We set

  Ê_ij := E_ij if i = j,  Ê_ij := (1/√2) E_ij if i < j.

The collection (Ê_ij)_{i≤j} is an orthonormal basis of S_m. We denote by (ξ̂_ij)_{i≤j} the orthonormal basis of S_m^∨ dual to (Ê_ij), i.e.,

  ξ̂_ij := ξ_ij if i = j,  ξ̂_ij := √2 ξ_ij if i < j.

To any real numbers a, b, c satisfying the inequalities

  a − b, c, a + (m−1)b > 0   (B.1)

we will associate a centered Gaussian measure Γ_{a,b,c} on S_m uniquely determined by its covariance form

  Ξ = Ξ_{a,b,c} : S_m^∨ × S_m^∨ → R,

defined as follows:

  Ξ(ξ_ii, ξ_ii) = a,  Ξ(ξ_ii, ξ_jj) = b,  ∀i ≠ j,   (B.2a)
  Ξ(ξ_ij, ξ_ij) = c,  Ξ(ξ_ij, ξ_kℓ) = 0,  ∀i < j, k ≤ ℓ, (i, j) ≠ (k, ℓ).   (B.2b)
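That (Ê_ij)_{i≤j} really is orthonormal for (A, B) = tr(AB) is quick to verify; again a sketch of mine, not part of the paper.

```python
import numpy as np

m = 3

def E(i, j):
    M = np.zeros((m, m))
    M[i, j] = M[j, i] = 1.0
    return M

pairs = [(i, j) for i in range(m) for j in range(i, m)]
# Ehat_ij = E_ij on the diagonal, E_ij / sqrt(2) off the diagonal.
Ehat = [E(i, j) / (1.0 if i == j else np.sqrt(2.0)) for (i, j) in pairs]

# Gram matrix in the tr(AB) inner product; it should be the identity.
G = np.array([[np.trace(P @ Q) for Q in Ehat] for P in Ehat])
err = float(np.max(np.abs(G - np.eye(len(pairs)))))
print(err)
```

The factor 1/√2 is exactly what compensates for tr(E_ij²) = 2 when i < j.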
To see that Ξ_{a,b,c} is positive definite if a, b, c satisfy (B.1), we decompose S_m^∨ as a direct sum of subspaces

  S_m^∨ = D_m^∨ ⊕ O_m^∨,  D_m^∨ = span{ ξ_ii ; 1 ≤ i ≤ m },  O_m^∨ = span{ ξ_ij ; 1 ≤ i < j ≤ m },  dim O_m^∨ = m(m−1)/2.

With respect to this decomposition, and the corresponding bases of these subspaces, the matrix Q describing Ξ_{a,b,c} has a direct sum decomposition

  Q = G_m(a, b) ⊕ c 𝟙_{m(m−1)/2},

where G_m(a, b) is the m × m symmetric matrix whose diagonal entries are equal to a, while all the off-diagonal entries are equal to b. The spectrum of G_m(a, b) consists of two eigenvalues: (a − b), with multiplicity (m − 1), and the simple eigenvalue a + (m−1)b. Indeed, if C denotes the m × m matrix with all entries equal to 1, then

  G_m(a, b) = (a − b) 𝟙_m + b C.

The matrix C has rank 1 and a single nonzero eigenvalue, equal to m, with multiplicity 1. This proves that Q is positive definite since its spectrum is positive.
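The spectral claim about G_m(a, b) is easy to confirm numerically; this sketch is mine, with sample values of a, b satisfying (B.1).

```python
import numpy as np

m, a, b = 5, 3.0, 1.0                            # a - b, a + (m-1)b > 0
G = (a - b) * np.eye(m) + b * np.ones((m, m))    # G_m(a,b) = (a-b)*1 + b*C
eig = np.sort(np.linalg.eigvalsh(G))

# Expected: a-b with multiplicity m-1, plus the simple eigenvalue a+(m-1)b.
expected = np.sort(np.array([a - b] * (m - 1) + [a + (m - 1) * b]))
err = float(np.max(np.abs(eig - expected)))
print(eig)
```

For m = 5, a = 3, b = 1 the spectrum is {2, 2, 2, 2, 7}, matching (a − b) = 2 with multiplicity 4 and a + (m−1)b = 7.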
We denote by dΓ_{a,b,c} the centered Gaussian measure on S_m with covariance form Ξ_{a,b,c}. Since S_m is equipped with an inner product, we can identify Ξ_{a,b,c} with a symmetric, positive definite bilinear form Q̂ = Q̂_{a,b,c} on S_m. We would like to compute the matrix of Q̂ with respect to the orthonormal basis (Ê_ij)_{i≤j}. We have

  Q̂(Ê_ii, Ê_ii) = Ξ(ξ̂_ii, ξ̂_ii) = a,  Q̂(Ê_ii, Ê_jj) = b,  ∀i ≠ j,
  Q̂(Ê_ij, Ê_ij) = Ξ(ξ̂_ij, ξ̂_ij) = 2 Ξ(ξ_ij, ξ_ij) = 2c,  ∀i < j.

Thus

  Q̂_{a,b,c} = G_m(a, b) ⊕ 2c 𝟙_{m(m−1)/2}.   (B.3)

If |−|_{a,b,c} denotes the Euclidean norm on S_m determined by Q̂_{a,b,c}, then for

  A = Σ_{i≤j} a_ij E_ij = Σ_i a_ii Ê_ii + √2 Σ_{i<j} a_ij Ê_ij

we have

  |A|²_{a,b,c} = a Σ_i a_ii² + 2b Σ_{i<j} a_ii a_jj + 4c Σ_{i<j} a_ij²
    = (a − b − 2c) Σ_i a_ii² + b ( Σ_i a_ii )² + 2c ( Σ_i a_ii² + 2 Σ_{i<j} a_ij² )
    = (a − b − 2c) Σ_i a_ii² + b (tr A)² + 2c tr A².
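The two closed forms for |A|²_{a,b,c} above can be cross-checked on a random symmetric matrix; a sketch of mine, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(7)
m, a, b, c = 4, 3.0, 1.0, 1.0
A = rng.standard_normal((m, m))
A = (A + A.T) / 2

d = np.diag(A)
off2 = sum(A[i, j] ** 2 for i in range(m) for j in range(i + 1, m))
cross = sum(d[i] * d[j] for i in range(m) for j in range(i + 1, m))

# Entrywise form vs. trace form of |A|^2_{a,b,c}.
form1 = a * np.sum(d ** 2) + 2 * b * cross + 4 * c * off2
form2 = ((a - b - 2 * c) * np.sum(d ** 2)
         + b * np.trace(A) ** 2 + 2 * c * np.trace(A @ A))
err = float(abs(form1 - form2))
print(form1, form2)
```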
Observe that when

  a − b = 2c   (B.4)

we have

  |A|²_{a,b,c} = b (tr A)² + 2c tr A²,   (B.5)

so that the norm |−|_{a,b,c} and the Gaussian measure dΓ_{a,b,c} are SO(m)-invariant. Let us point out that the space S_m equipped with the Gaussian measure dΓ_{2,0,1} is the well known GOE, the Gaussian orthogonal ensemble. Observe also that

  det Q̂_{a,b,c} = (a − b)^{m−1} ( a + (m−1)b ) (2c)^{m(m−1)/2}.   (B.6)

To obtain a more concrete description of dΓ_{a,b,c} we first identify Q̂_{a,b,c} with a symmetric operator Q̂_{a,b,c} : S_m → S_m. Using (B.3) we deduce that

  Q̂_{a,b,c} = G_m(a, b) ⊕ 2c 𝟙_{m(m−1)/2},  Q̂⁻¹_{a,b,c} = G_m(a′, b′) ⊕ 2c′ 𝟙_{m(m−1)/2},   (B.7)

where 2c′ = 1/(2c) and the real numbers a′, b′ are determined from the linear system

  a′ − b′ = 1/(a − b),  a′ + (m−1) b′ = 1/( a + (m−1)b ).   (B.8)
We then have

  dΓ_{a,b,c}(X) = (2π)^{−m(m+1)/4} (det Q̂_{a,b,c})^{−1/2} e^{−(1/2)(Q̂⁻¹_{a,b,c} X, X)} Π_{i≤j} dx_ij,   (B.9)

where

  (Q̂⁻¹_{a,b,c} X, X) = ( a′ − b′ − 1/(2c) ) Σ_i x_ii² + b′ (tr X)² + 1/(2c) tr X².   (B.10)

The special case b = c > 0, a = 3c is particularly important for our considerations. We denote by (−, −)_c and respectively dΓ_c the inner product and respectively the Gaussian measure on S_m corresponding to the covariance form Ξ_{3c,c,c}. If we set Q̂_c := Q̂_{3c,c,c}, then we deduce from (B.7) that

  Q̂⁻¹_c = G_m(a′, b′) ⊕ 2c′ 𝟙_{m(m−1)/2},

where

  a′ − b′ = 1/(2c),  a′ + (m−1) b′ = 1/( (m+2)c ),  2c′ = 1/(2c).

We deduce

  m b′ = 1/( (m+2)c ) − 1/(2c) = −m/( 2c(m+2) ),  so that  b′ = −1/( 2c(m+2) ).

In particular, since a′ − b′ = 1/(2c), the first term in (B.10) vanishes. Note that the invariance condition (B.4) is automatically satisfied. Using (B.6) and (B.9) we deduce

  dΓ_c(X) = (2π)^{−m(m+1)/4} (det Q̂_c)^{−1/2} e^{−(1/2)(Q̂⁻¹_c X, X)} |dX|,   (B.11)

where

  (Q̂⁻¹_c X, X) = 1/(2c) ( tr X² − 1/(m+2) (tr X)² ).   (B.12)
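In this special case the operator Q̂_c and its inverse have simple closed forms: polarizing (B.5) with a = 3c, b = c one reads off Q̂_c X = 2cX + c(tr X)𝟙, and (B.12) corresponds to Q̂⁻¹_c X = (1/2c)( X − (tr X)/(m+2) 𝟙 ). A numerical confirmation (my sketch, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(9)
m, c = 4, 0.7
X = rng.standard_normal((m, m))
X = (X + X.T) / 2
I = np.eye(m)

def Qc(Y):
    # Operator of the quadratic form |Y|^2_{3c,c,c} = c (trY)^2 + 2c tr Y^2.
    return 2 * c * Y + c * np.trace(Y) * I

def Qc_inv(Y):
    # Candidate inverse, matching (B.12).
    return (Y - np.trace(Y) / (m + 2) * I) / (2 * c)

err_inv = float(np.max(np.abs(Qc(Qc_inv(X)) - X)))
quad = float(np.trace(Qc_inv(X) @ X))
claim = float((np.trace(X @ X) - np.trace(X) ** 2 / (m + 2)) / (2 * c))
print(err_inv, abs(quad - claim))
```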
The inner product (−, −)_c has the following description:

  (A, B)_c = I_c(A, B) := 4c π^{−m/2} ∫_{R^m} (Ax, x)(Bx, x) e^{−|x|²} |dx|,  ∀A, B ∈ S_m.   (B.13)

To verify (B.13) it suffices to show that

  I_c(E_ii, E_ii) = 3c,  I_c(E_ii, E_jj) = c,  I_c(E_ij, E_ij) = 4c,  ∀1 ≤ i < j ≤ m,
  I_c(E_ij, E_kℓ) = 0,  ∀1 ≤ i < j ≤ m, k ≤ ℓ, (i, j) ≠ (k, ℓ).

To achieve this we use the classical identity

  ∫_{R^m} x_1^{2p_1} ··· x_m^{2p_m} e^{−|x|²} |dx| = Π_{k=1}^m ∫_R x_k^{2p_k} e^{−x_k²} dx_k = Π_{k=1}^m Γ( p_k + 1/2 ).

Observe that

  ∫_{R^m} (E_ii x, x)(E_jj x, x) e^{−|x|²} |dx| = ∫_{R^m} x_i² x_j² e^{−|x|²} |dx|
    = Γ(5/2) Γ(1/2)^{m−1} if i = j,  Γ(3/2)² Γ(1/2)^{m−2} if i ≠ j,

that is, π^{m/2}/4 times 3 if i = j and times 1 if i ≠ j. Next, if i < j we have

  ∫_{R^m} (E_ij x, x)(E_ij x, x) e^{−|x|²} |dx| = 4 ∫_{R^m} x_i² x_j² e^{−|x|²} |dx| = π^{m/2}.

Finally, if i < j, k ≤ ℓ and (i, j) ≠ (k, ℓ), then the quartic polynomial (E_ij x, x)(E_kℓ x, x) is odd with respect to the reflection that changes the sign of the coordinate x_p, for some p ∈ {i, j, k, ℓ}, and thus

  ∫_{R^m} (E_ij x, x)(E_kℓ x, x) e^{−|x|²} |dx| = 0.
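The identity (B.13) can also be confirmed by Monte Carlo: under the probability measure e^{−|x|²}|dx|/π^{m/2} the coordinates are iid N(0, 1/2), so I_c(A, B) = 4c E[(Ax, x)(Bx, x)]. A sketch of mine:

```python
import numpy as np

rng = np.random.default_rng(10)
m, c = 3, 0.5
# Coordinates iid N(0, 1/2), i.e. the normalized measure e^{-|x|^2} dx / pi^{m/2}.
x = rng.normal(0.0, np.sqrt(0.5), size=(2_000_000, m))

def E(i, j):
    M = np.zeros((m, m))
    M[i, j] = M[j, i] = 1.0
    return M

def Ic(A, B):
    qa = np.einsum('ni,ij,nj->n', x, A, x)
    qb = np.einsum('ni,ij,nj->n', x, B, x)
    return 4 * c * float(np.mean(qa * qb))

v_diag  = Ic(E(0, 0), E(0, 0))   # expected 3c
v_mixed = Ic(E(0, 0), E(1, 1))   # expected c
v_off   = Ic(E(0, 1), E(0, 1))   # expected 4c
v_zero  = Ic(E(0, 1), E(0, 2))   # expected 0
print(v_diag, v_mixed, v_off, v_zero)
```

The four estimates reproduce the values 3c, c, 4c, 0 required in the verification of (B.13).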
REFERENCES

[1] R. J. Adler, J. E. Taylor: Random Fields and Geometry, Springer Monographs in Mathematics, Springer Verlag, 2007.
[2] G. W. Anderson, A. Guionnet, O. Zeitouni: An Introduction to Random Matrices, Cambridge University Press, 2010.
[3] J.-M. Azaïs, M. Wschebor: Level Sets and Extrema of Random Processes, John Wiley & Sons, 2009.
[4] X. Bin: Derivatives of the spectral function and Sobolev norms of eigenfunctions on a closed Riemannian manifold, Ann. Global Anal. Geom., 26(2004), 231-252.
[5] V. I. Bogachev: Gaussian Measures, Mathematical Surveys and Monographs, vol. 62, American Mathematical Society, 1998.
[6] P. Deift, D. Gioev: Random Matrix Theory: Invariant Ensembles and Universality, Courant Lecture Notes, vol. 18, Amer. Math. Soc., 2009.
[7] A. Erdélyi: Asymptotic forms for Laguerre polynomials, J. Indian Math. Soc., 24(1960), 235-250.
[8] P. J. Forrester: Log-Gases and Random Matrices, London Math. Soc. Monographs, Princeton University Press, 2010.
[9] Y. V. Fyodorov: Complexity of random energy landscapes, glass transition, and absolute value of the spectral determinant of random matrices, Phys. Rev. Lett., 92(2004), 240601; Erratum: 93(2004), 149901.
[10] I. I. Gikhman, A. V. Skorohod: Introduction to the Theory of Random Processes, Dover Publications, 1996.
[11] L. Hörmander: On the spectral function of an elliptic operator, Acta Math., 121(1968), 193-218.
[12] L. Hörmander: The Analysis of Linear Partial Differential Operators III, Springer Verlag, 1994.
[13] M. L. Mehta: Random Matrices, 3rd Edition, Elsevier, 2004.
[14] L. I. Nicolaescu: An Invitation to Morse Theory, Springer Verlag, 2007.
[15] L. I. Nicolaescu: Critical sets of random smooth functions on compact manifolds, arXiv: 1008.5085.
[16] G. Szegő: Orthogonal Polynomials, Colloquium Publ., vol. 23, Amer. Math. Soc., 2003.
DEPARTMENT OF MATHEMATICS, UNIVERSITY OF NOTRE DAME, NOTRE DAME, IN 46556-4618.
E-mail address: nicolaescu.1@nd.edu
URL: http://www.nd.edu/~lnicolae/