
Random Matrices
Hieu D. Nguyen
Rowan University
Rowan Math Seminar
12-10-03
Historical Motivation
Statistics of Nuclear Energy Levels
- Excited states of an atomic nucleus
Level Spacings
$\{E_1, E_2, E_3, \ldots\}$ – successive energy levels
$S_i = E_{i+1} - E_i$ – nearest-neighbor level spacings
Wigner’s Surmise
Level Sequences of Various Number Sets
Basic Concepts in Probability and Statistics
Statistics
$\{x_1, x_2, x_3, \ldots, x_N\}$ – data set of values
$\mu = \dfrac{1}{N}\sum_{i=1}^{N} x_i$ – mean
$\sigma^2 = \dfrac{1}{N}\sum_{i=1}^{N} (x_i - \mu)^2$ – variance
Probability
$x$ – continuous random variable on $[a,b]$
$f(x) \ge 0$ – probability density function (p.d.f.)
$\int_a^b f(x)\,dx = 1$ – total probability equals 1
Examples of P.D.F.
Uniform: $f(x) = 1$ on $[0,1]$
Gaussian: $f(x) = \dfrac{1}{\sqrt{2\pi}}\,e^{-x^2/2}$ on $(-\infty,\infty)$
[Plots of the uniform and standard Gaussian densities]
$P(a \le x \le b) = \displaystyle\int_a^b f(x)\,dx$ – probability of choosing $x$ between $a$ and $b$
$\mu = \displaystyle\int_{-\infty}^{\infty} x\,f(x)\,dx$ – mean
$\sigma^2 = \displaystyle\int_{-\infty}^{\infty} (x-\mu)^2 f(x)\,dx$ – variance
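A quick numerical illustration of these formulas (not from the original slides; a sketch assuming SciPy is available) for the standard Gaussian p.d.f., whose total probability, mean, and variance come out to 1, 0, and 1:

```python
# Sketch: check total probability, mean, and variance of the standard
# Gaussian p.d.f. by numerical integration.
import numpy as np
from scipy.integrate import quad

f = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

total, _ = quad(f, -np.inf, np.inf)                            # total probability
mu, _ = quad(lambda x: x * f(x), -np.inf, np.inf)              # mean
var, _ = quad(lambda x: (x - mu)**2 * f(x), -np.inf, np.inf)   # variance
print(total, mu, var)   # ~1.0, ~0.0, ~1.0
```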
Wigner’s Surmise
Notation
$\{E_1, E_2, E_3, \ldots\}$ – successive energy levels
$S_i = E_{i+1} - E_i$ – nearest-neighbor level spacings
$D = \langle S_i \rangle$ – mean spacing
$s_i = S_i / D$ – relative spacings
Wigner’s P.D.F. for Relative Spacings
$$p_W(s) = \frac{\pi s}{2}\,\exp\!\Big(-\frac{\pi}{4}s^2\Big), \qquad s = \frac{S}{D}$$
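Wigner's surmise comes from the 2 x 2 case. A minimal sketch (my own, not from the slides, assuming NumPy/Matplotlib) that samples 2 x 2 real symmetric Gaussian matrices and compares the histogram of relative spacings with $p_W(s)$:

```python
# Sketch: 2 x 2 real symmetric Gaussian matrices vs. Wigner's surmise p_W(s).
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
spacings = []
for _ in range(20000):
    g = rng.normal(size=(2, 2))
    h = (g + g.T) / 2                        # real symmetric 2 x 2 matrix
    e = np.linalg.eigvalsh(h)                # sorted eigenvalues
    spacings.append(e[1] - e[0])
s = np.array(spacings) / np.mean(spacings)   # relative spacings s = S/D

grid = np.linspace(0, 4, 200)
p_wigner = (np.pi * grid / 2) * np.exp(-np.pi * grid**2 / 4)
plt.hist(s, bins=60, density=True, alpha=0.5, label="sampled spacings")
plt.plot(grid, p_wigner, label="Wigner surmise")
plt.legend(); plt.show()
```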
Are Nuclear Energy Levels Random?
Poisson Distribution (Random Levels)
Distribution of 1000 random numbers in [0,1]
[Two histograms of the nearest-neighbor spacings of the 1000 random numbers]
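A short sketch (my own, not from the slides) reproducing this experiment: the normalized spacings of sorted uniform random numbers follow an exponential (Poisson) law rather than Wigner's p.d.f.

```python
# Sketch: nearest-neighbor spacings of 1000 uniform random numbers in [0,1].
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
levels = np.sort(rng.uniform(0, 1, size=1000))
spacings = np.diff(levels)
s = spacings / spacings.mean()           # relative spacings

grid = np.linspace(0, 5, 200)
plt.hist(s, bins=50, density=True, alpha=0.5, label="spacings")
plt.plot(grid, np.exp(-grid), label="exp(-s)")
plt.legend(); plt.show()
```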
How should we model the statistics of nuclear energy
levels if they are not random?
Distribution of first 1000 prime numbers
[Histogram of the gaps between consecutive primes]
Distribution of Zeros of Riemann Zeta Function
$$\zeta(z) = \sum_{n=1}^{\infty} \frac{1}{n^z} \qquad (\operatorname{Re} z > 1)$$
Fun Facts
1. $\zeta(2) = \dfrac{1}{1^2} + \dfrac{1}{2^2} + \dfrac{1}{3^2} + \cdots = \dfrac{\pi^2}{6}$
2. $\zeta(3)$ is irrational (Apéry's constant)
3. $\zeta(z)$ can be analytically continued to all $z \ne 1$
4. $\zeta(1-z) = 2(2\pi)^{-z}\cos(\pi z/2)\,\Gamma(z)\,\zeta(z)$ (functional equation)
5. Zeros of $\zeta(z)$
   Trivial zeros: $z \in \{-2, -4, \ldots\}$
   Non-trivial zeros (RH): $z = \tfrac{1}{2} + i\gamma_n$ (critical line)
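A small numerical sanity check of facts 1 and 4 (my own sketch, assuming the mpmath library):

```python
# Sketch: check zeta(2) = pi^2/6 and the functional equation with mpmath.
import mpmath as mp

mp.mp.dps = 30                           # working precision: 30 digits
print(mp.zeta(2), mp.pi**2 / 6)          # should agree

z = mp.mpc(3, 2)                         # an arbitrary test point
lhs = mp.zeta(1 - z)
rhs = 2 * (2 * mp.pi)**(-z) * mp.cos(mp.pi * z / 2) * mp.gamma(z) * mp.zeta(z)
print(abs(lhs - rhs))                    # ~0
```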
Distribution of Zeros and Their Spacings
[Histograms for the first 105 zeros and for the first 200 zeros: distribution of the zeros and of their spacings]
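A sketch (my own, assuming mpmath's zetazero function) that computes the ordinates of the first zeros on the critical line and histograms their normalized spacings:

```python
# Sketch: ordinates of the first nontrivial zeta zeros and their spacings.
import mpmath as mp
import numpy as np
import matplotlib.pyplot as plt

n_zeros = 200
gammas = np.array([float(mp.zetazero(n).imag) for n in range(1, n_zeros + 1)])
spacings = np.diff(gammas)
s = spacings / spacings.mean()           # relative spacings

plt.hist(s, bins=30, density=True)
plt.title("Spacings of the first 200 zeta zeros")
plt.show()
```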
Asymptotic Behavior of Spacings for Large Zeros
Question: Is there a Hermitian matrix H which has the zeros of $\zeta(z)$ as its eigenvalues?
Model of The Nucleus
Quantum Mechanics
H – Hamiltonian (Hermitian operator)
$H\psi_i = E_i\,\psi_i$
$\psi_i$ – bound state (eigenfunction)
$E_i$ – energy level (eigenvalue)
Statistical Approach
$H$ – Hermitian matrix $(H^{*} = H)$
$H\psi_i = E_i\,\psi_i$ (matrix eigenvalue problem)
Basic Concepts in Linear Algebra
Matrices
$$A = (a_{jk}) = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix} \qquad \text{($n \times n$ square matrix)}$$
Special Matrices
Symmetric: $A^{T} = A$, e.g. $A = \begin{pmatrix} 1 & 2 \\ 2 & 1 \end{pmatrix}$
Hermitian: $A^{*} = A$, e.g. $A = \begin{pmatrix} 1 & i \\ -i & 1 \end{pmatrix}$
Orthogonal: $A^{T} A = I$ (so $A^{-1} = A^{T}$), e.g. $A = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$
Eigensystems
$Ax = \lambda x$, where $\lambda$ – eigenvalue, $x$ – eigenvector
Example: $A = \begin{pmatrix} 1 & 2 \\ 2 & 1 \end{pmatrix}$:
$\lambda_1 = -1,\ x_1 = \begin{pmatrix} 1 \\ -1 \end{pmatrix}; \qquad \lambda_2 = 3,\ x_2 = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$
Similarity Transformations (Conjugation)
$A \mapsto A' = U A U^{-1}$
Diagonalization
$U A U^{-1} = D(\lambda_1, \lambda_2, \ldots, \lambda_n)$
Gaussian Orthogonal Ensembles (GOE)
$H = (h_{jk})$ – random $N \times N$ real symmetric matrix
Distribution of eigenvalues of 200 real symmetric
matrices of size 5 x 5
[Histograms of the eigenvalues and of the level spacings]
Entries of each matrix are chosen randomly and independently from a Gaussian distribution with $\mu = 0$, $\sigma = 1$.
500 matrices of size 5 x 5
[Histograms of eigenvalues and spacings]
1000 matrices of size 5 x 5
[Histograms of eigenvalues and spacings]
10 x 10 matrices
[Histograms of eigenvalues and spacings]
20 x 20 matrices
[Histograms of eigenvalues and spacings]
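A sketch (my own, not from the slides) reproducing these experiments: sample random real symmetric matrices with independent $N(0,1)$ entries and histogram the eigenvalues and their nearest-neighbor spacings.

```python
# Sketch: eigenvalues and spacings of random real symmetric matrices
# with independent N(0,1) entries.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n, trials = 5, 1000                      # try 10 or 20 as well
eigs, spacings = [], []
for _ in range(trials):
    g = rng.normal(size=(n, n))
    h = np.triu(g) + np.triu(g, 1).T     # symmetric, every entry ~ N(0,1)
    e = np.linalg.eigvalsh(h)            # sorted eigenvalues
    eigs.extend(e)
    spacings.extend(np.diff(e))

fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.hist(eigs, bins=40); ax1.set_title("Eigenvalues")
ax2.hist(spacings, bins=40); ax2.set_title("Level spacings")
plt.show()
```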
Why Gaussian Distribution?
Uniform P.D.F. ($\mu = 0$) vs. Gaussian P.D.F. ($\mu = 0$, $\sigma = 1$)
[Histograms of eigenvalues and spacings for matrix entries drawn from each distribution]
Statistical Model for GOE
$H = (h_{jk})$ – random $N \times N$ real symmetric matrix
Assumptions
1. Probability of choosing H is invariant under
orthogonal transformations
2. Entries of H are statistically independent
Joint Probability Density Function (j.p.d.f.) for H
$f_{jk}(h_{jk})$ – p.d.f. for choosing $h_{jk}$
$p(H) = \prod_{j \le k} f_{jk}(h_{jk})$ – j.p.d.f. for choosing $H$
Lemma (Weyl, 1946)
All invariant functions of an (N x N) matrix H
under nonsingular similarity transformations
$H \mapsto H' = A H A^{-1}$
can be expressed in terms of the traces of the first N
powers of H.
Corollary
Assumption 1 implies that P(H) can be
expressed in terms of $\operatorname{tr}(H)$, $\operatorname{tr}(H^2)$, …, $\operatorname{tr}(H^N)$.
Observation
$\operatorname{tr}(H) = \sum_{i=1}^{N} \lambda_i$ (sum of the eigenvalues of $H$)
$\operatorname{tr}(H^2) = \sum_{i=1}^{N} \lambda_i^2$
$\vdots$
$\operatorname{tr}(H^N) = \sum_{i=1}^{N} \lambda_i^N$
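A quick numerical illustration of this observation (my own sketch):

```python
# Sketch: tr(H^k) equals the sum of the k-th powers of the eigenvalues.
import numpy as np

rng = np.random.default_rng(0)
g = rng.normal(size=(5, 5))
h = (g + g.T) / 2                        # random symmetric matrix
lam = np.linalg.eigvalsh(h)

for k in range(1, 6):
    print(np.trace(np.linalg.matrix_power(h, k)), np.sum(lam**k))
```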
Statistical Independence
Assume $H \mapsto H' = U H U^{-1}$, where
$$U = \begin{pmatrix} \cos\theta & \sin\theta & 0 & \cdots & 0 \\ -\sin\theta & \cos\theta & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \end{pmatrix}$$
Then
$$\frac{\partial H'}{\partial \theta}
= \frac{\partial U}{\partial \theta}\,H\,U^{T} + U\,H\,\frac{\partial U^{T}}{\partial \theta}
= \Big(\frac{\partial U}{\partial \theta}U^{T}\Big)H' + H'\Big(U\frac{\partial U^{T}}{\partial \theta}\Big)
= A H' + H' A^{T} \qquad (*)$$
where
$$A = \frac{\partial U}{\partial \theta}\,U^{T}
= \begin{pmatrix} 0 & 1 & 0 & \cdots & 0 \\ -1 & 0 & 0 & \cdots & 0 \\ 0 & 0 & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 0 \end{pmatrix}$$
Now, P(H) being invariant under U means that its
derivative should vanish:
$$p(H) = \prod_{j \le k} f_{jk}(h_{jk}), \qquad
\frac{\partial p}{\partial \theta} = 0
\;\Longrightarrow\;
\sum_{j \le k} \frac{1}{f_{jk}}\,\frac{\partial f_{jk}}{\partial h_{jk}}\,\frac{\partial h_{jk}}{\partial \theta} = 0$$
We now apply (*) to the equation immediately above to 'separate variables', i.e. to split it into groups of expressions that depend on mutually exclusive sets of variables, namely $\{h_{11}, h_{12}, h_{22}\}$ and $\{h_{1k}, h_{2k}\}$ for $k = 3, \ldots, N$:
$$\left(\frac{1}{f_{11}}\frac{\partial f_{11}}{\partial h_{11}} - \frac{1}{f_{22}}\frac{\partial f_{22}}{\partial h_{22}}\right)(2h_{12})
- (h_{11} - h_{22})\,\frac{1}{f_{12}}\frac{\partial f_{12}}{\partial h_{12}}
+ \sum_{k=3}^{N}\left(\frac{1}{f_{1k}}\frac{\partial f_{1k}}{\partial h_{1k}}\,h_{2k} - \frac{1}{f_{2k}}\frac{\partial f_{2k}}{\partial h_{2k}}\,h_{1k}\right) = 0$$
It follows that, say,
$$\frac{1}{f_{1k}}\frac{\partial f_{1k}}{\partial h_{1k}}\,h_{2k} - \frac{1}{f_{2k}}\frac{\partial f_{2k}}{\partial h_{2k}}\,h_{1k} = C_k \quad \text{(constant)},$$
or, dividing by $h_{1k} h_{2k}$,
$$\frac{1}{h_{1k} f_{1k}}\frac{\partial f_{1k}}{\partial h_{1k}} - \frac{1}{h_{2k} f_{2k}}\frac{\partial f_{2k}}{\partial h_{2k}} = \frac{C_k}{h_{1k} h_{2k}}.$$
It can be proven that $C_k = 0$. This allows us to separate variables once again:
$$\frac{1}{h_{1k} f_{1k}}\frac{\partial f_{1k}}{\partial h_{1k}} = -2a = \frac{1}{h_{2k} f_{2k}}\frac{\partial f_{2k}}{\partial h_{2k}} \quad \text{(constant)}.$$
Solving these differential equations yields our desired result:
$$f_{1k}(h_{1k}) \propto \exp(-a\,h_{1k}^2), \qquad f_{2k}(h_{2k}) \propto \exp(-a\,h_{2k}^2) \quad \text{(Gaussian)}$$
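A small symbolic check (my own sketch, using SymPy) that this differential equation indeed forces a Gaussian:

```python
# Sketch: solve f'(h) = -2*a*h*f(h) symbolically; the solution is Gaussian.
import sympy as sp

h, a = sp.symbols("h a", positive=True)
f = sp.Function("f")
ode = sp.Eq(f(h).diff(h), -2 * a * h * f(h))
print(sp.dsolve(ode, f(h)))              # f(h) = C1*exp(-a*h**2)
```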
Theorem
Assumption 2 implies that P(H) can be expressed
in terms of $\operatorname{tr}(H)$ and $\operatorname{tr}(H^2)$, i.e.
$$p(H) = \exp\big(-a\,\operatorname{tr}(H^2) + b\,\operatorname{tr}(H) + c\big)$$
J.P.D.F. for the Eigenvalues of H
$$D = U^{T} H U, \qquad U^{T} U = I, \qquad
D = \begin{pmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_N \end{pmatrix}$$
Change of variables for the j.p.d.f.:
$$F:\; \mathbb{R}^N \times O(N) \to \operatorname{Sym}(N), \qquad (\lambda, U) \mapsto H = U D U^{T}, \qquad \lambda = (\lambda_1, \ldots, \lambda_N)$$
$$\operatorname{Jac}(F) = \frac{\partial(H_{11}, H_{12}, \ldots, H_{NN})}{\partial(\lambda_1, \ldots, \lambda_N, U_{11}, U_{12}, \ldots, U_{NN})}$$
$$P(H \in \Delta) = \int_{\Delta} p(H)\,dH, \qquad \Delta \subset \operatorname{Sym}(N)$$
$$= \int_{F^{-1}(\Delta)} p(\lambda)\,\lvert \operatorname{Jac}(F) \rvert\,dU\,d\lambda
= \int \Big( p(\lambda) \int_{O(N)} \lvert \operatorname{Jac}(F) \rvert\,dU \Big) d\lambda
= \int p_N(\lambda)\,d\lambda$$
Joint P.D.F. for the Eigenvalues
$$p_N(\lambda) = p(\lambda) \int_{O(N)} \lvert \operatorname{Jac}(F) \rvert\,dU$$
Lemma
$$\operatorname{Jac}(F) = g(U) \prod_{j<k} (\lambda_k - \lambda_j)$$
Corollary
$$p_N(\lambda_1, \ldots, \lambda_N) = \exp\Big(-a\sum_i \lambda_i^2 + b\sum_i \lambda_i + c\Big) \prod_{j<k} \lvert \lambda_k - \lambda_j \rvert$$
Standard Form
$$\lambda_j = \frac{1}{\sqrt{2a}}\,x_j + \frac{b}{2a}$$
$$p_N(x_1, \ldots, x_N) = C_N \exp\Big(-\frac{1}{2}\sum_i x_i^2\Big) \prod_{j<k} \lvert x_k - x_j \rvert$$
Density of Eigenvalues
Level Density
We define the probability density of finding a level (regardless of labeling) around $x$, the positions of the remaining levels being unobserved, to be
$$\rho_N(x) = R_1(x) = N \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} p_N(x, x_2, \ldots, x_N)\,dx_2 \cdots dx_N$$
Asymptotic Behavior for Large N (Wigner, 1950s)
$$\rho_N(x) \approx \begin{cases} \dfrac{1}{\pi}\sqrt{2N - x^2}, & \lvert x \rvert \le \sqrt{2N} \\[4pt] 0, & \lvert x \rvert > \sqrt{2N} \end{cases}$$
[Histogram of the eigenvalues of 20 x 20 matrices]
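A sketch (my own) comparing the eigenvalue histogram with this semicircle law; here the matrices are normalized to match the standard form above (diagonal variance 1, off-diagonal variance 1/2), for which the spectrum edge sits at $\sqrt{2N}$:

```python
# Sketch: eigenvalue histogram of 20 x 20 symmetric Gaussian matrices
# against rho_N(x) = sqrt(2N - x^2)/pi.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n, trials = 20, 2000
eigs = []
for _ in range(trials):
    g = rng.normal(size=(n, n))
    h = (g + g.T) / 2                    # diag variance 1, off-diag variance 1/2
    eigs.extend(np.linalg.eigvalsh(h))

x = np.linspace(-np.sqrt(2 * n), np.sqrt(2 * n), 200)
rho = np.sqrt(2 * n - x**2) / np.pi      # integrates to N
plt.hist(eigs, bins=50, density=True)
plt.plot(x, rho / n)                     # divide by N to get a density
plt.show()
```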
Two-Point Correlation
We define the probability density of finding a level (regardless of labeling) around each of the points $x_1$ and $x_2$, the positions of the remaining levels being unobserved, to be
$$R_2(x_1, x_2) = \frac{N!}{(N-2)!} \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} p_N(x_1, \ldots, x_N)\,dx_3 \cdots dx_N$$
We define the probability density for finding two consecutive levels inside an interval $(-\theta, \theta)$ to be
$$A_N(\theta; x_1, x_2) = \frac{N!}{2!\,(N-2)!} \int_{\text{out}} \cdots \int_{\text{out}} p_N(x_1, \ldots, x_N)\,dx_3 \cdots dx_N,$$
where $x_j \in (-\theta, \theta)$ for $j = 1, 2$ and $x_j \notin (-\theta, \theta)$ for $j = 3, \ldots, N$.
Level Spacings
Limiting Behavior (Normalized)
We define the probability density that in an infinite
series of eigenvalues (with mean spacing unity)
an interval of length 2t contains exactly two levels
at positions around the points y1 and y2 to be
$$B(t; y_1, y_2) = \lim_{N \to \infty} D^2 A_N(\theta; x_1, x_2), \qquad t = \theta/D, \quad y_j = x_j/D, \quad D = \text{mean spacing}$$
P.D.F. of Level Spacings
We define the probability density of finding a
level spacing s = 2t between two successive levels
y1 = -t and y2 = t to be
$$p(s) = 2\,B(t; -t, t)$$
Multiple Integration of $p_N(x_1, \ldots, x_N)$
$$p_N(x_1, \ldots, x_N) = C_N \exp\Big(-\frac{1}{2}\sum_i x_i^2\Big) \prod_{j<k} \lvert x_k - x_j \rvert$$
Key Idea
Write $p_N(x_1, \ldots, x_N)$ as a determinant:
$$\exp\Big(-\frac{1}{2}\sum_i x_i^2\Big) \prod_{j<k} (x_k - x_j) = c \cdot \det\big(\varphi_i(x_j)\big)$$
$$\varphi_j(x) = \frac{1}{\sqrt{2^j\, j!\, \sqrt{\pi}}}\; e^{-x^2/2}\, H_j(x) \quad \text{(oscillator wave functions)}$$
$$H_j(x) = (-1)^j\, e^{x^2}\, \frac{d^j}{dx^j}\big[e^{-x^2}\big] \quad \text{(Hermite polynomials)}$$
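A sketch (my own, assuming SciPy's physicists' Hermite polynomials) that builds these oscillator wave functions and checks their orthonormality numerically:

```python
# Sketch: oscillator wave functions phi_j(x) and an orthonormality check.
import numpy as np
from scipy.special import eval_hermite, factorial
from scipy.integrate import quad

def phi(j, x):
    norm = 1.0 / np.sqrt(2.0**j * factorial(j) * np.sqrt(np.pi))
    return norm * np.exp(-x**2 / 2) * eval_hermite(j, x)

# <phi_i, phi_j> should be 1 if i == j and 0 otherwise.
for i in range(3):
    for j in range(3):
        val, _ = quad(lambda x: phi(i, x) * phi(j, x), -np.inf, np.inf)
        print(i, j, round(val, 6))
```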
Harmonic Oscillator (Electron in a Box)
$$\Big( -\frac{\hbar^2}{2m}\frac{d^2}{dx^2} + \frac{1}{2} m\omega^2 x^2 \Big)\psi_n(x) = E_n\,\psi_n(x), \qquad E_n = \Big(n + \frac{1}{2}\Big)\hbar\omega$$
NOTE: Energy levels are quantized (discrete)
Formula for Level Spacings?
$$p(s) = \frac{d^2}{ds^2} \prod_i \big(1 - 2\lambda_i\big)$$
$\lambda_i$ – eigenvalues of a matrix whose entries are integrals of functions involving the oscillator wave functions
The derivation of this formula is very complicated!
Wigner’s Surmise
$$p_W(s) = \frac{\pi s}{2}\,\exp\!\Big(-\frac{\pi}{4}s^2\Big), \qquad s = \frac{S}{D}$$
Random Matrices and Solitons
Korteweg-de Vries (KdV) equation
$$u_t + 6uu_x + u_{xxx} = 0$$
Soliton Solutions
$$u(x,t) = 2\,\frac{\partial^2}{\partial x^2}\log\det\big(I + A(x,t)\big)$$
$$A(x,t) = \left[\frac{c_m c_n}{k_m + k_n}\,e^{-(k_m + k_n)x + 4(k_m^3 + k_n^3)t}\right]_{m,n=1}^{N}$$
Cauchy Matrices
$$A = \left[\frac{1}{k_m + k_n}\right]_{m,n=1}^{N} \qquad (k_n > 0)$$
- Cauchy matrices are symmetric and positive definite
Example with $k = (2, 3, 5)$:
$$A = \begin{pmatrix} \frac{1}{2+2} & \frac{1}{2+3} & \frac{1}{2+5} \\[2pt] \frac{1}{3+2} & \frac{1}{3+3} & \frac{1}{3+5} \\[2pt] \frac{1}{5+2} & \frac{1}{5+3} & \frac{1}{5+5} \end{pmatrix}$$
Eigenvalues of $A$: 0.502136, 0.0142635, 0.000267133
Logarithms of the eigenvalues: −0.688884, −4.25005, −8.22776
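A quick verification of this example (my own sketch):

```python
# Sketch: eigenvalues of the 3x3 Cauchy matrix with k = (2, 3, 5)
# and their natural logarithms.
import numpy as np

k = np.array([2.0, 3.0, 5.0])
A = 1.0 / (k[:, None] + k[None, :])
eig = np.linalg.eigvalsh(A)[::-1]        # largest first
print(eig)                               # ~[0.502136, 0.0142635, 0.000267133]
print(np.log(eig))                       # ~[-0.688884, -4.25005, -8.22776]
```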
Level Spacings of Eigenvalues of Cauchy Matrices
Assumption
The values $k_n$ are chosen randomly and independently on the interval $[0,1]$ using a uniform distribution.
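A sketch (my own, not from the slides) of this experiment; compare with the histograms summarized below.

```python
# Sketch: spacings of the eigenvalues (and of their logarithms) of random
# 4 x 4 Cauchy matrices with k_n drawn uniformly from [0,1].
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n, trials = 4, 1000
spacings, log_spacings = [], []
for _ in range(trials):
    k = rng.uniform(0, 1, size=n)
    A = 1.0 / (k[:, None] + k[None, :])            # Cauchy matrix
    e = np.sort(np.abs(np.linalg.eigvalsh(A)))     # abs() guards tiny roundoff
    spacings.extend(np.diff(e))
    log_spacings.extend(np.diff(np.log(e)))

fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.hist(spacings, bins=40); ax1.set_title("Spacings")
ax2.hist(log_spacings, bins=40); ax2.set_title("Log spacings")
plt.show()
```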
1000 matrices of size 4 x 4
[Histograms: distribution of spacings and log distribution]
Level Spacings
First-Order Log Spacings
[Histograms of first-order log spacings for 1000 matrices and for 10,000 matrices of size 4 x 4]
Second-Order Log Spacings
[Histograms of second-order log spacings]
Open Problem
Mathematically describe the distributions of
these first- and higher-order log spacings
References
1. Random Matrices, M. L. Mehta, Academic Press, 1991.