Adaptive annealing: a near-optimal connection between sampling and counting

Daniel Štefankovič (University of Rochester)
Santosh Vempala, Eric Vigoda (Georgia Tech)
Advertisement: if you want to count using MCMC, then statistical physics is useful.
Outline
1. Counting problems
2. Basic tools: Chernoff, Chebyshev
3. Dealing with large quantities
(the product method)
4. Statistical physics
5. Cooling schedules (our work)
6. More…
Counting
independent sets
spanning trees
matchings
perfect matchings
k-colorings
Compute the number of spanning trees
Kirchhoff’s Matrix Tree Theorem:

# spanning trees = det (D - A)vv

(the minor of D - A with row and column v deleted;
D = diagonal matrix of degrees, A = adjacency matrix)

Example (the 4-cycle):

    | 2 0 0 0 |        | 0 1 0 1 |
D = | 0 2 0 0 |,   A = | 1 0 1 0 |
    | 0 0 2 0 |        | 0 1 0 1 |
    | 0 0 0 2 |        | 1 0 1 0 |

                    |  2 -1  0 |
det (D - A)vv = det | -1  2 -1 | = 4
                    |  0 -1  2 |
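A minimal sketch of this computation in Python (the 4-cycle example above; numpy assumed available):

# Spanning trees of the 4-cycle via the Matrix Tree Theorem:
# delete one row/column of the Laplacian L = D - A and take the det.
import numpy as np

A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])           # adjacency matrix of the 4-cycle
D = np.diag(A.sum(axis=1))             # diagonal matrix of degrees
L = D - A                              # graph Laplacian
minor = L[1:, 1:]                      # delete row and column of vertex v
print(round(np.linalg.det(minor)))    # -> 4 spanning trees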
Compute the number of spanning trees

graph G → polynomial-time algorithm → number of spanning trees of G
Counting

independent sets
spanning trees ✓
matchings
perfect matchings
k-colorings

What about the others?
Compute the number of independent sets
(hard-core gas model)

independent set = subset S of the vertices of a graph
such that no two vertices in S are neighbors

# independent sets of the example graph = 7
# independent sets for the paths G1, G2, G3, ..., Gn:

G1: 2,   G2: 3,   G3: 5,   ...,   Gn-2: Fn-1,   Gn-1: Fn,   Gn: Fn+1

(Fibonacci numbers)

# independent sets = 5598861
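A brute-force sketch of the definition (exponential in the number of vertices, so only for the tiny examples on these slides; the edge list for the 3-vertex path is an illustrative choice):

# Count independent sets by checking every subset of vertices.
from itertools import combinations

def count_independent_sets(n, edges):
    count = 0
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            S = set(subset)
            if all(not (u in S and v in S) for u, v in edges):
                count += 1                 # no edge inside S
    return count

print(count_independent_sets(3, [(0, 1), (1, 2)]))   # path G3 -> 5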
Compute the number of independent sets

graph G → polynomial-time algorithm ? → number of independent sets of G

Unlikely: graph G → # independent sets in G is #P-complete
(#P is the counting analogue of NP, as FP is of P), and it stays
#P-complete even for 3-regular graphs (Dyer, Greenhill, 1997).
graph G → # independent sets in G ?

approximation? randomization? which is more important?

My world-view:

(true) randomness is important conceptually but NOT computationally
(i.e., I believe P = BPP).

Approximation makes problems easier
(i.e., I believe #P ≠ BPP).
We would like to know Q.

Goal: a random variable Y such that

P( (1-ε)Q ≤ Y ≤ (1+ε)Q ) ≥ 1-δ

“Y gives a (1±ε)-estimate”

FPRAS (fully polynomial randomized approximation scheme):

(G, ε, δ) → polynomial-time algorithm → Y
(time polynomial in |G|, 1/ε, ln(1/δ))
Outline
1. Counting problems
2. Basic tools: Chernoff, Chebyshev
3. Dealing with large quantities
(the product method)
4. Statistical physics
5. Cooling schedules (our work)
6. More...
We would like to know Q.

1. Get an unbiased estimator X, i.e., E[X] = Q.
2. “Boost the quality” of X:

Y = (X1 + X2 + ... + Xn) / n
The Bienaymé-Chebyshev inequality:

P( Y gives a (1±ε)-estimate ) ≥ 1 - (1/ε²) · V[Y]/E[Y]²

(V[Y]/E[Y]² = the squared coefficient of variation, SCV)

For Y = (X1 + X2 + ... + Xn) / n:

V[Y]/E[Y]² = (1/n) · V[X]/E[X]²
The Bienaymé-Chebyshev inequality:

Let X1,...,Xn,X be independent, identically distributed
random variables, Q = E[X]. Let

Y = (X1 + X2 + ... + Xn) / n

Then

P( Y gives a (1±ε)-estimate of Q ) ≥ 1 - (1/n) · (V[X]/E[X]²) · (1/ε²)
Chernoff’s bound:

Let X1,...,Xn,X be independent, identically distributed
random variables, 0 ≤ X ≤ 1, Q = E[X]. Let

Y = (X1 + X2 + ... + Xn) / n

Then

P( Y gives a (1±ε)-estimate of Q ) ≥ 1 - e^{-ε² n E[X] / 3}
Number of samples to achieve precision ε with confidence δ:

Chebyshev:             n = (V[X]/E[X]²) · (1/ε²) · (1/δ)       ← 1/δ is BAD

Chernoff (0 ≤ X ≤ 1):  n = (1/E[X]) · (3/ε²) · ln(1/δ)         ← ln(1/δ) is GOOD
Median “boosting trick”:

n = (1/E[X]) · (4/ε²),   Y = (X1 + X2 + ... + Xn) / n

By Bienaymé-Chebyshev:

P( (1-ε)Q ≤ Y ≤ (1+ε)Q ) ≥ 3/4
Median trick – repeat 2T times:

by Bienaymé-Chebyshev, each Y lands in [(1-ε)Q, (1+ε)Q] with
probability ≥ 3/4;

by Chernoff, P( more than T of the 2T land in the interval ) ≥ 1 - e^{-T/4}

⇒ P( the median is in [(1-ε)Q, (1+ε)Q] ) ≥ 1 - e^{-T/4}
Chebyshev + median trick:   n = (V[X]/E[X]²) · (32/ε²) · ln(1/δ)

Chernoff (0 ≤ X ≤ 1):       n = (1/E[X]) · (3/ε²) · ln(1/δ)
Creating an “approximator” from X
(ε = precision, δ = confidence):

n = O( (V[X]/E[X]²) · (1/ε²) · ln(1/δ) )  samples
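A minimal sketch of the two-stage recipe (Chebyshev-sized batches, then the median trick); the Bernoulli estimator and all parameters below are toy choices, not from the talk:

# Median of 2T batch means: each batch mean is a (1±ε)-estimate with
# probability >= 3/4; the median then fails with prob. <= e^{-T/4}.
import random, statistics

def median_of_means(sample_X, n, T):
    batch_means = [sum(sample_X() for _ in range(n)) / n
                   for _ in range(2 * T)]
    return statistics.median(batch_means)

# toy unbiased estimator: X ~ Bernoulli(0.3), so Q = E[X] = 0.3
est = median_of_means(lambda: random.random() < 0.3, n=1000, T=10)
print(est)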
Outline
1. Counting problems
2. Basic tools: Chernoff, Chebyshev
3. Dealing with large quantities
(the product method)
4. Statistical physics
5. Cooling schedules (our work)
6. More...
(approx) counting ⇔ sampling

Valleau, Card ’72 (physical chemistry), Babai ’79 (for matchings and
colorings), Jerrum, Valiant, V. Vazirani ’86

The outcome of the JVV reduction: random variables X1, X2, ..., Xt
such that

1) E[X1 X2 ... Xt] = “WANTED”

2) the Xi are easy to estimate:

V[Xi]/E[Xi]² = O(1)   (squared coefficient of variation, SCV)
(approx) counting ⇔ sampling

1) E[X1 X2 ... Xt] = “WANTED”
2) the Xi are easy to estimate:  V[Xi]/E[Xi]² = O(1)

Theorem (Dyer-Frieze ’91):
O(t²/ε²) samples in total (O(t/ε²) from each Xi) give a
(1±ε)-estimator of “WANTED” with probability ≥ 3/4.
JVV for independent sets

GOAL: given a graph G, estimate the number of independent sets of G.

# independent sets = 1 / P( S = ∅ )

for S a uniformly random independent set of G.
JVV for independent sets

P( S = ∅ ) = P(v1 ∉ S) · P(v2 ∉ S | v1 ∉ S) · P(v3 ∉ S | v1, v2 ∉ S) · ...
           =     X1    ·        X2          ·          X3            · ...

(using P(A ∩ B) = P(A) P(B|A))

Xi ∈ [0,1] and E[Xi] ≥ 1/2   ⇒   V[Xi]/E[Xi]² = O(1)
Self-reducibility for independent sets

P( v ∉ S ) = 5/7

(the example graph has 7 independent sets; the 5 that avoid v are
exactly the independent sets of the smaller graph G - v)

⇒   # independent sets of G = (7/5) · # independent sets of G - v

Recursing:

7 = (7/5) · 5 = (7/5) · (5/3) · 3 = (7/5) · (5/3) · (3/2) · 2 = 7

Each ratio is estimated by sampling, and the product telescopes to
the count.
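A toy sketch of this telescoping product, estimating each ratio 1/P(v ∉ S) by sampling uniform independent sets (a brute-force sampler stands in here; JVV would use an efficient sampler oracle):

# Estimate #IS(G) via self-reducibility:
# #IS(G) = (1/P(v ∉ S)) * #IS(G - v), down to the empty graph.
import random
from itertools import combinations

def independent_sets(vertices, edges):
    vs = sorted(vertices)
    return [set(S) for r in range(len(vs) + 1)
            for S in combinations(vs, r)
            if all(not (u in S and v in S) for u, v in edges)]

def estimate_count(vertices, edges, samples=2000):
    vertices, edges = set(vertices), list(edges)
    estimate = 1.0
    while vertices:
        v = next(iter(vertices))
        sets = independent_sets(vertices, edges)   # stand-in sampler
        p = sum(v not in random.choice(sets)
                for _ in range(samples)) / samples
        estimate /= p                              # multiply by 1/P(v ∉ S)
        vertices.remove(v)                         # recurse on G - v
        edges = [e for e in edges if v not in e]
    return estimate

print(estimate_count({0, 1, 2}, [(0, 1), (1, 2)]))  # ≈ 5 for the path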
JVV: If we have a sampler oracle:

graph G → SAMPLER ORACLE → random independent set of G

then FPRAS using O*(n²) samples.
ŠVV: If we have a sampler oracle:

(β, graph G) → SAMPLER ORACLE → set from gas-model Gibbs distribution at β

then FPRAS using O*(n) samples.
Application – independent sets

O*( |V| ) samples suffice for counting.

Cost per sample (Vigoda ’01, Dyer-Greenhill ’01):
time = O*( |V| ) for graphs of degree ≤ 4.

Total running time: O*( |V|² ).
Other applications (total running time):

matchings            O*(n²m)             (using Jerrum, Sinclair ’89)
spin systems, e.g.
  Ising model        O*(n²) for β < βC   (using Marinelli, Olivieri ’95)
k-colorings          O*(n²) for k > 2Δ   (using Jerrum ’95)
Outline
1. Counting problems
2. Basic tools: Chernoff, Chebyshev
3. Dealing with large quantities
(the product method)
4. Statistical physics
5. Cooling schedules (our work)
6. More…
easy = hot
hard = cold

Hamiltonian

(picture: the big set Ω, with configurations at H-values 4, 2, 1, 0)

Hamiltonian H : Ω → {0,...,n}

Goal: estimate |H⁻¹(0)|

|H⁻¹(0)| = E[X1] · ... · E[Xt]
Distributions between hot and cold

β = inverse temperature
β = 0   ⇒ hot  ⇒ uniform on Ω
β = ∞   ⇒ cold ⇒ uniform on H⁻¹(0)

μβ(x) ∝ exp(-β·H(x))      (Gibbs distributions)
Distributions between hot and cold

μβ(x) = exp(-β·H(x)) / Z(β)

normalizing factor = partition function

Z(β) = Σx∈Ω exp(-β·H(x))
Partition function

Z(β) = Σx∈Ω exp(-β·H(x))

have:  Z(0) = |Ω|
want:  Z(∞) = |H⁻¹(0)|
Partition function – example

Z(β) = Σx∈Ω exp(-β·H(x))

(picture: 16 configurations, with 1, 4, 4, 7 of them at
H-values 4, 2, 1, 0)

Z(β) = 1·e^{-4β} + 4·e^{-2β} + 4·e^{-β} + 7·e^{0}

have:  Z(0) = 16
want:  Z(∞) = 7
Assumption: we have a sampler oracle for μβ

μβ(x) = exp(-β·H(x)) / Z(β)

(graph G, β) → SAMPLER ORACLE → subset of V distributed as μβ

Draw W from μβ and let

X = exp( H(W)·(β - α) )

Then

E[X] = Σs μβ(s) X(s) = Z(α)/Z(β)

so we can obtain the ratio Z(α)/Z(β).
Our goal restated

Partition function:  Z(β) = Σx∈Ω exp(-β·H(x)),   Z(∞) = |H⁻¹(0)|

Goal: estimate

Z(∞) = [Z(β1)/Z(β0)] · [Z(β2)/Z(β1)] · ... · [Z(βt)/Z(βt-1)] · Z(β0)

0 = β0 < β1 < β2 < ... < βt = ∞
Our goal restated

Z(∞) = [Z(β1)/Z(β0)] · [Z(β2)/Z(β1)] · ... · [Z(βt)/Z(βt-1)] · Z(β0)

Cooling schedule:   0 = β0 < β1 < β2 < ... < βt = ∞

How to choose the cooling schedule?

Minimize the length t while satisfying

V[Xi]/E[Xi]² = O(1),   where   E[Xi] = Z(βi)/Z(βi-1)
Outline
1. Counting problems
2. Basic tools: Chernoff, Chebyshev
3. Dealing with large quantities
(the product method)
4. Statistical physics
5. Cooling schedules (our work)
6. More...
Parameters: A and n

Z(β) = Σx∈Ω exp(-β·H(x))

Z(0) = A,   H : Ω → {0,...,n}

Z(β) = Σk=0..n ak e^{-βk},   where ak = |H⁻¹(k)|
Parameters

Z(0) = A,   H : Ω → {0,...,n}

                       A       n
independent sets       2^V     E
matchings              ≈ V!    V
perfect matchings      V!      V
k-colorings            k^V     E

(perfect matchings = # ways of marrying so that there is no unhappy
couple; marry ignoring “compatibility”, and the hamiltonian = number
of unhappy couples)
Previous cooling schedules

Z(0) = A,   H : Ω → {0,...,n}

0 = β0 < β1 < β2 < ... < βt = ∞

“Safe steps” (Bezáková, Štefankovič, Vigoda, V. Vazirani ’06):

β → β + 1/n
β → β (1 + 1/ln A)
ln A → ∞

Cooling schedules of length

O( n ln A )
O( (ln n)(ln A) )     (Bezáková, Štefankovič, Vigoda, V. Vazirani ’06)
“Safe steps”   (Bezáková, Štefankovič, Vigoda, V. Vazirani ’06)

β → β + 1/n:   for W from μβ and X = exp(H(W)(β - α)), α = β + 1/n,

Z(β) = Σk=0..n ak e^{-βk}   and   1/e ≤ X ≤ 1

⇒   V[X]/E[X]² ≤ 1/E[X] ≤ e
“Safe steps”:  ln A → ∞

Z(∞) = a0 ≥ 1,   Z(ln A) ≤ a0 + 1

⇒   E[X] = Z(∞)/Z(ln A) ≥ 1/2
“Safe steps”:  β → β (1 + 1/ln A)

for W from μβ and X = exp(H(W)(β - α)), α = β(1 + 1/ln A):

E[X] ≥ 1/(2e)
Previous cooling schedules

“Safe steps” (Bezáková, Štefankovič, Vigoda, V. Vazirani ’06):
β → β + 1/n,   β → β(1 + 1/ln A),   ln A → ∞

e.g.:  1/n, 2/n, 3/n, ..., (ln A)/n, ..., ln A, ∞

Cooling schedules of length
O( n ln A )
O( (ln n)(ln A) )     (Bezáková, Štefankovič, Vigoda, V. Vazirani ’06)
No better fixed schedule possible

Z(0) = A,   H : Ω → {0,...,n}

THEOREM: a schedule that works for all

Za(β) = (A/(1+a)) · (1 + a e^{-βn})     (with a ∈ [0, A-1])
has LENGTH ≥ Ω( (ln n)(ln A) )

Parameters

Z(0) = A,   H : Ω → {0,...,n}

Our main result: can get an adaptive schedule of length O*( (ln A)^{1/2} )

Previously: non-adaptive schedules of length Ω*( ln A )
Related work

can get an adaptive schedule of length O*( (ln A)^{1/2} )

Lovász-Vempala: volume of convex bodies in O*(n⁴);
schedule of length O(n^{1/2})
(non-adaptive cooling schedule, using specific properties of the
“volume” partition functions)
Existential part

Lemma: for every partition function there exists a cooling schedule
of length O*((ln A)^{1/2}).
Cooling schedule (definition refresh)

Z(∞) = [Z(β1)/Z(β0)] · [Z(β2)/Z(β1)] · ... · [Z(βt)/Z(βt-1)] · Z(β0)

Cooling schedule:   0 = β0 < β1 < β2 < ... < βt = ∞

Minimize the length t while satisfying

V[Xi]/E[Xi]² = O(1),   E[Xi] = Z(βi)/Z(βi-1)
Express the SCV using the partition function (going from β to α):

W from μβ,   X = exp(H(W)(β - α)),   E[X] = Z(α)/Z(β)

E[X²]/E[X]² = V[X]/E[X]² + 1 = Z(2α-β) Z(β) / Z(α)²

So the condition  E[X²]/E[X]² ≤ C  becomes, with f(β) = ln Z(β):

(f(2α-β) + f(β))/2 ≤ C’ + f(α),   C’ = (ln C)/2

i.e., on the graph of f, the midpoint of the chord from β to 2α-β
lies at most C’ above the graph.
Properties of partition functions

f(β) = ln Z(β)

f is decreasing
f is convex
f’(0) ≥ -n
f(0) ≤ ln A

f(β) = ln Σk=0..n ak e^{-βk}

f’(β) = (ln Z)’(β) = Z’(β)/Z(β)
      = - [ Σk=0..n ak k e^{-βk} ] / [ Σk=0..n ak e^{-βk} ]
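A quick numeric sanity check of these properties on the toy partition function from earlier (its a_k values, with n = 4 and A = 16):

# f(β) = ln Z(β): decreasing, convex, f'(0) >= -n, f(0) <= ln A.
import math

a = {0: 7, 1: 4, 2: 4, 4: 1}

def f(beta):
    return math.log(sum(ak * math.exp(-beta * k) for k, ak in a.items()))

def fprime(beta):                       # f'(β) = -E_{μβ}[H]
    num = sum(ak * k * math.exp(-beta * k) for k, ak in a.items())
    den = sum(ak * math.exp(-beta * k) for k, ak in a.items())
    return -num / den

bs = [0.1 * i for i in range(60)]
assert all(f(x) > f(y) for x, y in zip(bs, bs[1:]))            # decreasing
assert all(fprime(x) < fprime(y) for x, y in zip(bs, bs[1:]))  # convex
print(fprime(0.0), f(0.0))   # -1.0 >= -n = -4,  f(0) = ln 16 = ln A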
GOAL: proving the Lemma:
for every partition function there exists a cooling schedule
of length O*((ln A)^{1/2})

Proof:

f is decreasing, f is convex, f’(0) ≥ -n, f(0) ≤ ln A.

In each step, either f or f’ changes a lot:

let K := the drop in f over the step; then Δ(ln |f’|) ≥ 1/K.
Proof:

c := (a+b)/2,   Δ := b - a;   suppose the chord condition is tight:

f(c) = (f(a) + f(b))/2 – 1

f is convex, so

(f(a) – f(c)) / (c - a) ≤ |f’(a)|,    (f(c) – f(b)) / (b - c) ≥ |f’(b)|

⇒   f’(b)/f’(a) ≤ 1 - 1/K ≤ e^{-1/K}

A convex decreasing f : [a,b] → R can therefore be “approximated”
using O( ( (f(a) - f(b)) · ln( f’(a)/f’(b) ) )^{1/2} ) segments.
Technicality: getting to 2α - β

Proof: the chord condition involves the point 2α - β beyond α;
approach it through intermediate steps βi+1, βi+2, βi+3, ...,
at the cost of only ln ln A extra steps.
Existential ⇒ Algorithmic

there exists an adaptive schedule of length O*( (ln A)^{1/2} )
⇓
we can construct an adaptive schedule of length O*( (ln A)^{1/2} )
Algorithmic construction

Our main result: using a sampler oracle for

μβ(x) = exp(-β·H(x)) / Z(β),

we can construct a cooling schedule of length

≤ 38 (ln A)^{1/2} (ln ln A)(ln n)

Total number of oracle calls:

≤ 10⁷ (ln A) (ln ln A + ln n)⁷ ln(1/δ)
Algorithmic construction

current inverse temperature β;
ideally move to α such that

B1 ≤ E[X²]/E[X]² ≤ B2,   E[X] = Z(α)/Z(β)

* the upper bound keeps X “easy to estimate”
* the lower bound (B1 > 1) means we make progress

Need to construct a “feeler” for

E[X²]/E[X]² = [Z(β)/Z(α)] · [Z(2α-β)/Z(α)]

(a naive estimator of these ratios is a bad “feeler”: the relevant
probabilities can be too small, see below)
Estimator for Z(β)/Z(α):

Z(β) = Σk=0..n ak e^{-βk}

For W from μβ:   P(H(W) = k) = ak e^{-βk} / Z(β)
For U from μα:   P(H(U) = k) = ak e^{-αk} / Z(α)

If H = k is likely at both β and α, this gives an estimator:

P(H(U) = k) / P(H(W) = k) = e^{k(β-α)} · Z(β)/Z(α)

PROBLEM: P(H(W) = k) can be too small.
Rough estimator for Z(β)/Z(α):
use an interval instead of a single value.

Z(β) = Σk=0..n ak e^{-βk}

For W from μβ:   P(H(W) ∈ [c,d]) = Σk=c..d ak e^{-βk} / Z(β)
For U from μα:   P(H(U) ∈ [c,d]) = Σk=c..d ak e^{-αk} / Z(α)
Rough estimator for Z(β)/Z(α):

If |β - α| · |d - c| ≤ 1 then

(1/e) · Z(β)/Z(α) ≤ e^{c(α-β)} · P(H(U) ∈ [c,d]) / P(H(W) ∈ [c,d]) ≤ e · Z(β)/Z(α)

since

P(H(U) ∈ [c,d]) / P(H(W) ∈ [c,d])
   = e^{c(β-α)} · [ Σk=c..d ak e^{-α(k-c)} / Σk=c..d ak e^{-β(k-c)} ] · Z(β)/Z(α)

We also need P(H(U) ∈ [c,d]) and P(H(W) ∈ [c,d]) to be large.
We will: split {0,1,...,n} into h ≤ 4 (ln n) ln A intervals

[0], [1], [2], ..., [c, c(1 + 1/ln A)], ...

For any inverse temperature β there exists an interval I with
P(H(W) ∈ I) ≥ 1/(8h).   We say that I is HEAVY for β.
Algorithm:

repeat
   find an interval I which is heavy for the current
      inverse temperature β
   see how far I stays heavy (until some β*)
   use the interval I for the feeler  Z(β)/Z(α) · Z(2α-β)/Z(α)

ANALYSIS: in each iteration we either
* make progress, or
* eliminate the interval I, or
* make a “long move”
(picture: the distribution of H(X) for X from μβ, with I = a heavy
interval at β)

As the inverse temperature grows, I may stop being heavy: I is heavy
on [β, β*] and NOT heavy just beyond β*; a different interval may be
heavy at some β’ > β*.

Use binary search to find β* (to within 1/(2n)).
For I = [a,b], the feeler may step by δ = min{ 1/(b-a), ln A }.
How do you know that you can use binary search?

A priori the set of β where I is heavy could be broken up
(NOT heavy / heavy / NOT heavy / heavy).

Lemma: the set of temperatures for which I is h-heavy is an interval.

I is h-heavy at β   ⇔   P(H(X) ∈ I) ≥ 1/8h for X from μβ, i.e.,

Σk∈I ak e^{-βk}  ≥  (1/8h) Σk=0..n ak e^{-βk}
How do you know that you can use binary search?

Σk∈I ak e^{-βk} - (1/8h) Σk=0..n ak e^{-βk}  ≥  0

Substitute x = e^{-β}: this is a polynomial inequality

c0 x⁰ + c1 x¹ + c2 x² + ... + cn xⁿ ≥ 0

with ck ≥ 0 for k ∈ I and ck ≤ 0 otherwise; since I is an interval
of consecutive k’s, the coefficient sequence has at most 2 sign
changes (- ... - + ... + - ... -).

Descartes’ rule of signs:
number of positive roots ≤ number of sign changes.

⇒ the set of β where I is h-heavy is an interval.
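A numeric illustration of the lemma on the toy example (h and the interval I are arbitrary choices): scanning β, the heavy indicator switches at most twice, so the heavy region is one interval.

import math

a = {0: 7, 1: 4, 2: 4, 4: 1}
h, I = 4, {1, 2}                         # hypothetical parameters

def heavy(beta):
    w = {k: ak * math.exp(-beta * k) for k, ak in a.items()}
    return sum(w[k] for k in I) >= sum(w.values()) / (8 * h)

flags = [heavy(0.05 * i) for i in range(400)]        # β in [0, 20)
print(sum(x != y for x, y in zip(flags, flags[1:]))) # <= 2 switches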
I = [a,b], heavy on [β, β*]:

we can roughly compute the ratio Z(γ)/Z(γ’) for γ, γ’ ∈ [β, β*]
whenever |γ - γ’| · |b - a| ≤ 1.

Three outcomes:
1. success (move to a good new α)
2. eliminate the interval I
3. long move: find the largest α such that

Z(β) Z(2α-β) / Z(α)²  ≤  C
If we have sampler oracles for μβ, then we can get an adaptive
schedule of length t = O*( (ln A)^{1/2} ).

independent sets    O*(n²)              (using Vigoda ’01, Dyer-Greenhill ’01)
matchings           O*(n²m)             (using Jerrum, Sinclair ’89)
spin systems, e.g.
  Ising model       O*(n²) for β < βC   (using Marinelli, Olivieri ’95)
k-colorings         O*(n²) for k > 2Δ   (using Jerrum ’95)
Outline
1. Counting problems
2. Basic tools: Chernoff, Chebyshev
3. Dealing with large quantities
(the product method)
4. Statistical physics
5. Cooling schedules (our work)
6. More...
Outline
6. More…
a) proof of Dyer-Frieze
b) independent sets revisited
c) warm starts
Appendix – proof of:

1) E[X1 X2 ... Xt] = “WANTED”
2) the Xi are easy to estimate:  V[Xi]/E[Xi]² = O(1)

Theorem (Dyer-Frieze ’91):
O(t²/ε²) samples (O(t/ε²) from each Xi) give a (1±ε)-estimator of
“WANTED” with probability ≥ 3/4.
How precise do the Xi have to be?

First attempt – term by term. Main idea:

(1 ± ε/t)(1 ± ε/t) ... (1 ± ε/t)  ≈  1 ± ε

n = O( (V[X]/E[X]²) · (t²/ε²) · ln(1/δ) )

each term needs Θ(t²) samples   ⇒   Θ(t³) total
How precise do the Xi have to be?

Analyzing the SCV is better (Dyer-Frieze ’91).

X = X1 X2 ... Xt,   GOAL: SCV(X) ≤ ε²/4

(squared coefficient of variation:  SCV(X) = V[X]/E[X]²)

P( X gives a (1±ε)-estimate )  ≥  1 - (1/ε²) · V[X]/E[X]²  ≥  3/4
Main idea:

SCV(Xi) ≤ ε²/(4t)   ⇒   SCV(X) < ε²/4

proof: for independent X1, X2 we have E[X1 X2] = E[X1] E[X2], and
X1², X2² are independent as well, hence

SCV(X1 X2) = (1 + SCV(X1))(1 + SCV(X2)) - 1

and by induction, for X = X1 X2 ... Xt,

SCV(X) = (1 + SCV(X1)) · ... · (1 + SCV(Xt)) - 1

(using SCV(X) = V[X]/E[X]² = E[X²]/E[X]² - 1)

Each term needs O(t/ε²) samples   ⇒   O(t²/ε²) total.
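A quick Monte Carlo check of the product rule for the SCV (the uniform distributions below are arbitrary choices):

# Verify SCV(X1 X2) = (1 + SCV(X1))(1 + SCV(X2)) - 1 for independent X1, X2.
import random

def scv(xs):
    m = sum(xs) / len(xs)
    v = sum((x - m) ** 2 for x in xs) / len(xs)
    return v / m ** 2

N = 200000
x1 = [random.uniform(0.5, 1.5) for _ in range(N)]
x2 = [random.uniform(0.5, 1.5) for _ in range(N)]

print(scv([u * v for u, v in zip(x1, x2)]))      # empirical SCV of product
print((1 + scv(x1)) * (1 + scv(x2)) - 1)         # product formula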
Outline
6. More…
a) proof of Dyer-Frieze
b) independent sets revisited
c) warm starts
Hamiltonian

(picture: configurations at H-values 4, 2, 1, 0)

Hamiltonian – many possibilities

(picture: hard-core lattice gas model, H-values 2, 1, 0)
What would be a natural hamiltonian for planar graphs?
H(G) = number of edges

Natural MC: pick u, v uniformly at random (one of the n(n-1)/2 pairs);
with probability 1/(1+λ) try G - {u,v},
with probability λ/(1+λ) try G + {u,v}

P(G, G’) = [λ/(1+λ)] / (n(n-1)/2)   for G’ = G + {u,v}
P(G’, G) = [1/(1+λ)] / (n(n-1)/2)

π(G) ∝ λ^{number of edges of G}

satisfies the detailed balance condition

π(G) P(G, G’) = π(G’) P(G’, G)          (λ = exp(-β))

(check, for G with m edges: λ^m · λ/(1+λ) = λ^{m+1} · 1/(1+λ))
Outline
6. More…
a) proof of Dyer-Frieze
b) independent sets revisited
c) warm starts
(example: n = 3)

Mixing time:      τmix = smallest t such that |μt - π|TV ≤ 1/e    [Θ(n ln n)]
Relaxation time:  τrel = 1/(1 - λ2)                               [Θ(n)]

τrel ≤ τmix ≤ τrel · ln(1/πmin)

(the discrepancy may be substantially bigger for, e.g., matchings)
Estimating π(S)

METHOD 1: take independent samples X1, X2, X3, ..., Xs
(each from a separate run of the chain) and set

Y = 1 if X ∈ S, 0 otherwise;   E[Y] = π(S)

METHOD 2 (Gillman ’98, Kahale ’96, ...): take X1, X2, X3, ..., Xs
from a single run of the chain and average.
Further speed-up

|μt - π|TV  ≤  exp(-t/τrel) · ( Σx π(x) (μ0(x)/π(x) - 1)² )^{1/2}

If the last factor is small, μ0 is called a warm start.
Further speed-up

A sample at β can be used as a warm start for β’

⇒ consecutive steps of the cooling schedule need only ≈ τrel
(rather than τmix) steps of the chain, using METHOD 2
(Gillman ’98, Kahale ’96, ...).
m = O( (ln n)(ln A) )

β0, β1, β2, β3, ..., βm   with “well mixed” states at each

Run our cooling-schedule algorithm with METHOD 2,
using the “well mixed” states as starting points.
Output of our algorithm: k = O*( (ln A)^{1/2} ) inverse temperatures
β0, β1, ..., βk, estimated with METHOD 2 from runs X1, X2, X3, ..., Xs.

Small augmentation (so that a sample from the current β can be used
as a warm start at the next); the schedule stays O*( (ln A)^{1/2} ).

Use an analogue of Dyer-Frieze for independent samples from vector
variables with slightly dependent coordinates.
If we have sampler oracles for μβ, then we can get an adaptive
schedule of length t = O*( (ln A)^{1/2} ).

independent sets    O*(n²)              (using Vigoda ’01, Dyer-Greenhill ’01)
matchings           O*(n²m)             (using Jerrum, Sinclair ’89)
spin systems, e.g.
  Ising model       O*(n²) for β < βC   (using Marinelli, Olivieri ’95)
k-colorings         O*(n²) for k > 2Δ   (using Jerrum ’95)