Vempala - IAS Video Lectures

Cool with a Gaussian:
An 𝑂*(𝑛³) volume algorithm
Ben Cousins and Santosh Vempala
The Volume Problem
Given a measurable, compact set K in n-dimensional space, find a
number A such that:
(1 − πœ–) vol(𝐾) ≤ 𝐴 ≤ (1 + πœ–) vol(𝐾)
K is given by
 a point π‘₯₀ ∈ 𝐾, s.t. π‘₯₀ + 𝐡ₙ ⊆ 𝐾
 a membership oracle: answers YES/NO to “π‘₯ ∈ 𝐾?”
Volume: first attempt

Divide and conquer:

Difficulty: number of parts grows exponentially in n.
More generally: Integration
Input: an integrable function f: 𝑅ⁿ → 𝑅₊ specified by an
oracle, a point x, and an error parameter ε.
Output: a number A such that:
(1 − πœ–) ∫𝑓 ≤ 𝐴 ≤ (1 + πœ–) ∫𝑓
Volume is the special case when f is a 0-1 function.
High-dimensional problems
 Integration (volume)
 Optimization
 Learning
 Rounding
 Sampling
All hopelessly intractable in general, even to approximate.
High-dimensional problems
Input:
 A set of points S in n-dimensional space 𝑅ⁿ,
or a distribution in 𝑅ⁿ
 A function f that maps points to real values (could be the
indicator of a set)

What is the complexity of computational problems as the
dimension grows?

Dimension = number of variables

Typically, the size of the input is a function of the dimension.
Structure
Q. What structure makes high-dimensional problems
computationally tractable? (i.e., solvable with polynomial
complexity)

Convexity and its extensions appear to be the
frontier of polynomial-time solvability.
Volume: second attempt: Sandwiching
Thm (John). Any convex body K has an ellipsoid E s.t.
𝐸 ⊆ 𝐾 ⊆ 𝑛𝐸.
E = maximum volume ellipsoid contained in K.
Thm (KLS95). For a convex body K in isotropic position,
√((𝑛+1)/𝑛) 𝐡ₙ ⊆ 𝐾 ⊆ √(𝑛(𝑛+1)) 𝐡ₙ.
Also a factor-n sandwiching, but with a different ellipsoid.
Isotropic position and sandwiching

For any convex body K (in fact any set/distribution with
bounded second moments), we can apply an affine
transformation so that for a random point x from K:
𝐸(π‘₯) = 0,  𝐸(π‘₯π‘₯α΅€) = 𝐼ₙ.

Thus K “looks like a ball” up to second moments.

How close is it really to a ball?
K lies between two balls with radii within a factor of n.

Volume via Sandwiching

The John ellipsoid can be approximated using the Ellipsoid
algorithm, s.t.
𝐸 ⊆ 𝐾 ⊆ 𝑛^{1.5} 𝐸

The Inertial ellipsoid can be approximated to within any
constant factor (we’ll see how)

Using either one,
𝐸 ⊆ 𝐾 ⊆ 𝑛^{𝑂(1)} 𝐸, so vol(𝐸) ≤ vol(𝐾) ≤ 𝑛^{𝑂(𝑛)} vol(𝐸).

Polytime algorithm, 𝑛^{𝑂(𝑛)} approximation.
Can we do better?
Complexity of Volume Estimation
Thm [E86, BF87]. For any deterministic algorithm that uses at
most 𝑛^π‘Ž membership calls to the oracle for a convex body K
and computes two numbers A and B such that A ≤ vol K ≤ B,
there is some convex body for which the ratio B/A is at least
(𝑐𝑛 / (π‘Ž log 𝑛))^{𝑛/2}
where c is an absolute constant.
Thm [DF88]. Computing the volume of an explicit polytope
𝐴π‘₯ ≤ 𝑏 is #P-hard, even for a totally unimodular matrix A and
rational b.
Complexity of Volume Estimation
Thm [BF]. For deterministic algorithms:
(Figure: tradeoff between # oracle calls and approximation factor.)
Thm [Dadush-V.13]. Matching upper bound of (1 + πœ–)ⁿ approximation
in time (1/πœ–)^{𝑂(𝑛)} · poly(𝑛).
Randomized Volume/Integration
[DFK89]. Polynomial-time randomized algorithm that
estimates volume to within relative error 1 + πœ– with
probability at least 1 − 𝛿 in time poly(n, 1/πœ–, log(1/𝛿)).
[Applegate-K91]. Polytime randomized algorithm to
estimate the integral of any (Lipschitz) logconcave function.
Volume Computation: an ongoing adventure
Algorithm               Power of n   New aspects
Dyer-Frieze-Kannan 89   23           everything
Lovász-Simonovits 90    16           localization
Applegate-K 90          10           logconcave integration
L 90                    10           ball walk
DF 91                   8            error analysis
LS 93                   7            multiple improvements
KLS 97                  5            speedy walk, isotropy
LV 03,04                4            annealing, wt. isoper.
LV 06                   4            integration, local analysis
Cousins-V. 13,14        3            Gaussian cooling
Does it work?
 [Lovász-Deák 2012] implemented the [LV] 𝑛⁴ algorithm:
worked for cubes up to dimension 9,
but too slow after that.
 [CV13] Matlab implementation of a new algorithm
(Figure: experimental results on rotated cubes.)
Volume: third attempt: Sampling

Pick random samples from a ball/cube containing K.
Compute the fraction c of the sample in K.
Output c · vol(outer ball).

Need too many samples! The fraction of the outer ball occupied
by K can be exponentially small in n.


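To make the difficulty concrete, here is a minimal sketch of this rejection estimator in Python (the membership oracle in_K and the sample budget are assumptions of the sketch). For K = 𝐡ₙ inside the cube [−1, 1]ⁿ, the acceptance fraction vol(𝐡ₙ)/2ⁿ is about 2.5·10⁻⁸ already at n = 20, so essentially every run returns 0.

```python
import numpy as np

def rejection_volume(in_K, n, R, num_samples=100_000, seed=0):
    """Naive rejection estimator: sample the cube [-R, R]^n, count hits in K.

    in_K: hypothetical membership oracle, in_K(x) -> bool.
    Returns (fraction of hits) * vol(cube).
    """
    rng = np.random.default_rng(seed)
    hits = sum(in_K(rng.uniform(-R, R, size=n)) for _ in range(num_samples))
    return (hits / num_samples) * (2.0 * R) ** n

if __name__ == "__main__":
    # Unit ball inside the unit cube: the hit rate decays exponentially in n.
    for n in (2, 5, 10, 20):
        print(n, rejection_volume(lambda x: x @ x <= 1.0, n, R=1.0))
```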
Volume via Sampling [DFK89]
𝐡 ⊆ 𝐾 ⊆ 𝑅𝐡.
Let 𝐾ᡒ = 𝐾 ∩ 2^{𝑖/𝑛}𝐡, 𝑖 = 0, 1, …, π‘š = 𝑛 log 𝑅.
vol(𝐾) = vol(𝐡) · (vol(𝐾₁)/vol(𝐾₀)) · (vol(𝐾₂)/vol(𝐾₁)) ⋯ (vol(πΎβ‚˜)/vol(πΎβ‚˜₋₁)).
Estimate each ratio with random samples.
Volume via Sampling
𝐾ᡒ = 𝐾 ∩ 2^{𝑖/𝑛}𝐡, 𝑖 = 0, 1, …, π‘š = 𝑛 log 𝑅.
vol(𝐾) = vol(𝐡) · (vol(𝐾₁)/vol(𝐾₀)) · (vol(𝐾₂)/vol(𝐾₁)) ⋯ (vol(πΎβ‚˜)/vol(πΎβ‚˜₋₁)).
Claim. vol(𝐾ᡒ₊₁) ≤ 2 · vol(𝐾ᡒ).
Total #samples = π‘š · π‘š/πœ–² = 𝑂*(𝑛²).
But, how to sample?
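Given a uniform sampler for each 𝐾ᡒ, the telescoping estimator is a few lines; the sampler is exactly the missing piece. A sketch, with sample_K_i as a hypothetical sampling oracle (the random walks discussed below are what would implement it):

```python
import math
import numpy as np

def dfk_volume(sample_K_i, n, R, samples_per_ratio=1_000):
    """Telescoping product of [DFK89]: K_i = K ∩ 2^{i/n} B, K_0 = B, K_m = K.

    sample_K_i(i, num): hypothetical oracle returning `num` uniform points
    of K_i as a (num, n) array -- the piece supplied by the random walks.
    """
    m = math.ceil(n * math.log2(R))
    # log vol(B_n) = (n/2) log(pi) - log Gamma(n/2 + 1)
    log_vol = (n / 2) * math.log(math.pi) - math.lgamma(n / 2 + 1)
    for i in range(1, m + 1):
        pts = sample_K_i(i, samples_per_ratio)
        # Fraction of K_i lying in K_{i-1}; at least 1/2 by the claim above,
        # so each ratio is estimated reliably from few samples.
        p = np.mean(np.linalg.norm(pts, axis=1) <= 2 ** ((i - 1) / n))
        log_vol -= math.log(p)
    return math.exp(log_vol)
```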
Sampling
 Generate
a uniform random point from a compact set S,
or with density proportional to a function f.
 Numerous applications in diverse areas: statistics,
networking, biology, computer vision, privacy, operations
research, etc.
Sampling
Input: a function f: 𝑅ⁿ → 𝑅₊ specified by an oracle,
a point x, and an error parameter ε.
Output: a point y from a distribution within distance ε
of the distribution with density proportional to f.
Logconcave functions
 𝑓: 𝑅ⁿ → 𝑅₊ is logconcave if for any π‘₯, 𝑦 ∈ 𝑅ⁿ,
𝑓(πœ†π‘₯ + (1 − πœ†)𝑦) ≥ 𝑓(π‘₯)^πœ† 𝑓(𝑦)^{1−πœ†}
 Examples:
Indicator functions of convex sets are logconcave
Gaussian density function
exponential function
 Level sets of f, 𝐿ₜ = { π‘₯ : 𝑓(π‘₯) ≥ 𝑑 }, are convex.
 Product, minimum, and convolution preserve logconcavity.
Algorithmic Applications
Given a blackbox for sampling logconcave densities, we get
efficient algorithms for:
 Rounding
 Convex Optimization
 Volume Computation/Integration
 some Learning problems
Rounding via Sampling
1. Sample m random points from K;
2. Compute the sample mean and sample covariance matrix:
𝑧 = 𝐸(π‘₯),  𝐴 = 𝐸((π‘₯ − 𝑧)(π‘₯ − 𝑧)α΅€).
3. Output 𝐡 = 𝐴^{−1/2}.
B(K − z) is nearly isotropic.
Thm. C(πœ–)·n random points suffice to get 𝐸‖𝐴 − 𝐼‖₂ ≤ πœ–.
[Adamczak et al.; improving on Bourgain, Rudelson]
I.e., for any unit vector v,
1 − πœ– ≤ 𝐸(𝑣α΅€π‘₯)² ≤ 1 + πœ–.
How to Sample?
Ball walk:
At x,
- pick random y from π‘₯ + 𝛿𝐡ₙ
- if y is in K, go to y
Hit-and-Run:
At x,
- pick a random chord L through x
- go to a random point y on L
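Both walks take only a few lines given the membership oracle. A hedged sketch (in_K is the oracle; for hit-and-run, the chord endpoints are found by bisection, valid since a line meets a convex K in an interval):

```python
import numpy as np

rng = np.random.default_rng(0)

def ball_walk_step(x, in_K, delta):
    """One ball-walk step: propose y uniform in x + delta*B_n, move if y in K."""
    n = len(x)
    u = rng.standard_normal(n)
    u *= rng.uniform() ** (1 / n) / np.linalg.norm(u)  # uniform point of B_n
    y = x + delta * u
    return y if in_K(y) else x

def hit_and_run_step(x, in_K, r_max=1e6, tol=1e-8):
    """One hit-and-run step: random chord through x, uniform point on it."""
    d = rng.standard_normal(len(x))
    d /= np.linalg.norm(d)

    def reach(sign):  # how far K extends from x in direction sign*d
        lo, hi = 0.0, r_max
        while hi - lo > tol:
            mid = (lo + hi) / 2
            lo, hi = (mid, hi) if in_K(x + sign * mid * d) else (lo, mid)
        return lo

    t = rng.uniform(-reach(-1.0), reach(+1.0))
    return x + t * d
```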
Complexity of Sampling
Thm. [KLS97] For a convex body, the ball walk with an M-warm
start reaches an (independent) nearly random point in poly(n, R,
M) steps.
𝑀 = sup_𝑆 𝑄₀(𝑆)/𝑄(𝑆)  or  𝑀 = 𝐸_{𝑄₀}(𝑄₀(π‘₯)/𝑄(π‘₯))
Thm. [LV03]. The same holds for arbitrary logconcave density
functions. From a warm start, the complexity is 𝑂*(𝑛²𝑅²).

 The isotropic transformation makes R = O(√𝑛).
KLS volume algorithm: 𝑛 × 𝑛 × 𝑛³ = 𝑛⁵
Markov chains
 State space K, with a set of measurable subsets that form a
𝜎-algebra (closed under complements and countable unions)
 A next-step distribution 𝑃_𝑒 associated with each point
u in the state space
 A starting point
The walk 𝑀₀, 𝑀₁, …, π‘€β‚–, … satisfies
𝑃(π‘€β‚– ∈ 𝐴 | 𝑀₀, 𝑀₁, …, π‘€β‚–₋₁) = 𝑃(π‘€β‚– ∈ 𝐴 | π‘€β‚–₋₁)
Convergence
Stationary distribution Q; the ergodic “flow” of a subset A is:
Φ(𝐴) = ∫_𝐴 𝑃_𝑒(𝐾∖𝐴) 𝑑𝑄(𝑒)
For any subset 𝐴, we have Φ(𝐴) = Φ(𝐾∖𝐴).
Conductance:
πœ™(𝐴) = ∫_𝐴 𝑃_𝑒(𝐾∖𝐴) 𝑑𝑄(𝑒) / min{𝑄(𝐴), 𝑄(𝐾∖𝐴)}
πœ™ = inf_𝐴 πœ™(𝐴)
The rate of convergence is bounded by 1/πœ™² [LS93, JS86].
Conductance
Arbitrary measurable subset S: how large is the conditional
escape probability from S?
Local conductance can be arbitrarily small for the ball walk:
β„“(π‘₯) = vol((π‘₯ + 𝛿𝐡ₙ) ∩ 𝐾) / vol(𝛿𝐡ₙ)
Conductance
Need:
 Nearby points have overlapping one-step distributions
 Large subsets have large boundaries [isoperimetry]:
πœ‹(𝑆₃) ≥ (𝑐/𝑅) 𝑑(𝑆₁, 𝑆₂) min{πœ‹(𝑆₁), πœ‹(𝑆₂)}
where 𝑅² = 𝐸_𝐾(|π‘₯|²)
Isoperimetry and the KLS conjecture
πœ‹(𝑆₃) ≥ (𝑐/𝑅) 𝑑(𝑆₁, 𝑆₂) min{πœ‹(𝑆₁), πœ‹(𝑆₂)}
𝐴 = 𝐸((π‘₯ − 𝐸π‘₯)(π‘₯ − 𝐸π‘₯)α΅€): covariance matrix of πœ‹
𝑅² = 𝐸_πœ‹ β€–π‘₯ − 𝐸π‘₯β€–² = Tr(𝐴) = Σᡒ πœ†α΅’(𝐴)
Thm. [KLS95].  πœ‹(𝑆₃) ≥ (𝑐/√Tr(𝐴)) 𝑑(𝑆₁, 𝑆₂) min{πœ‹(𝑆₁), πœ‹(𝑆₂)}
Conj. [KLS95].  πœ‹(𝑆₃) ≥ (𝑐/√πœ†₁(𝐴)) 𝑑(𝑆₁, 𝑆₂) min{πœ‹(𝑆₁), πœ‹(𝑆₂)}
KLS hyperplane conjecture
𝐴 = 𝐸(π‘₯π‘₯ 𝑇 )
Conj. [KLS95].
πœ‹ 𝑆3 ≥
𝑐
πœ†1 𝐴
𝑑(𝑆1 , 𝑆2 ) min πœ‹ 𝑆1 , πœ‹(𝑆2 )
• Could improve sampling complexity by a factor of n
• Implies well-known conjectures in convex geometry: slicing
conjecture and thin-shell conjecture
• But wide open!
KLS, Slicing, Thin-shell
Conjecture    Current bound
thin shell    𝑛^{1/3} [Guedon-Milman]
slicing       𝑛^{1/4} [Bourgain, Klartag]
KLS           ~𝑛^{1/3} [Bobkov; Eldan-Klartag]
All are conjectured to be O(1).
The conjectures are equivalent! [Ball, Eldan-Klartag]
Is rapid mixing possible?
 Ball walk can have bad starts, but
hit-and-run escapes from corners.
 Min-distance isoperimetry is too coarse.

Average distance isoperimetry
 How to average distance?
β„Ž(π‘₯) ≤ min{ 𝑑(𝑒, 𝑣) : 𝑒 ∈ 𝑆₁, 𝑣 ∈ 𝑆₂, π‘₯ ∈ β„“(𝑒, 𝑣) }
Thm. [LV04].
πœ‹(𝑆₃) ≥ 𝐸(β„Ž(π‘₯)) πœ‹(𝑆₁) πœ‹(𝑆₂)
Hit-and-run mixes rapidly
 Thm [LV04]. Hit-and-run mixes in polynomial time from any
starting point inside a convex body.
 Conductance = Ω(1/(𝑛𝐷))
 Along with the isotropic transformation, gives an 𝑂*(𝑛³) sampling
algorithm.
Simulated Annealing [LV03, Kalai-V.04]
To estimate ∫𝑓, consider a sequence 𝑓₀, 𝑓₁, 𝑓₂, …, π‘“β‚˜ = 𝑓
with ∫𝑓₀ being easy, e.g., a constant function over a ball.
Then,
∫𝑓 = ∫𝑓₀ · (∫𝑓₁/∫𝑓₀) · (∫𝑓₂/∫𝑓₁) ⋯ (βˆ«π‘“β‚˜/βˆ«π‘“β‚˜₋₁).
Each ratio can be estimated by sampling:
1. Sample X with density proportional to 𝑓ᡒ
2. Compute π‘Œ = 𝑓ᡒ₊₁(𝑋)/𝑓ᡒ(𝑋).
Then, 𝐸(π‘Œ) = ∫ (𝑓ᡒ₊₁(𝑋)/𝑓ᡒ(𝑋)) · (𝑓ᡒ(𝑋)/βˆ«π‘“α΅’(𝑋)𝑑𝑋) 𝑑𝑋 = ∫𝑓ᡒ₊₁ / βˆ«π‘“α΅’.
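A sketch of this estimator, assuming an idealized sampler for each density 𝑓ᡒ (the samplers are the expensive part, supplied by the random walks above):

```python
import numpy as np

def estimate_ratio(sampler_i, f_i, f_next, num_samples=100):
    """Empirical mean of Y = f_{i+1}(X)/f_i(X), X ~ density proportional to f_i.

    sampler_i(): hypothetical sampler for the density proportional to f_i.
    """
    ys = []
    for _ in range(num_samples):
        x = sampler_i()
        ys.append(f_next(x) / f_i(x))
    return float(np.mean(ys))

def anneal_integral(int_f0, fs, samplers, num_samples=100):
    """Telescoping product: ∫f = ∫f_0 * prod_i (∫f_{i+1} / ∫f_i)."""
    total = int_f0
    for i in range(len(fs) - 1):
        total *= estimate_ratio(samplers[i], fs[i], fs[i + 1], num_samples)
    return total
```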
Annealing [LV06]
 Define:
𝑓ᡒ(𝑋) = 𝑒^{−π‘Žα΅’β€–π‘‹β€–}
 π‘Ž₀ = 2𝑅, π‘Žα΅’₊₁ = π‘Žα΅’/(1 + 1/√𝑛), π‘Žβ‚˜ = πœ–/(2𝑅)
 π‘š ~ √𝑛 log(2𝑅/πœ–) phases
 ∫𝑓 = ∫𝑓₀ · (∫𝑓₁/∫𝑓₀) · (∫𝑓₂/∫𝑓₁) ⋯ (βˆ«π‘“β‚˜/βˆ«π‘“β‚˜₋₁)
 The final estimate could be 𝑛^{Ω(𝑛)}, so each ratio could be
𝑛^{√𝑛} or higher. How can we estimate it with a few
samples?!
Annealing [LV03, 06]
 𝑓ᡒ(𝑋) = 𝑒^{−π‘Žα΅’β€–π‘‹β€–}, π‘Ž₀ = 2𝑅, π‘Žα΅’₊₁ = π‘Žα΅’/(1 + 1/√𝑛), π‘Žβ‚˜ = πœ–/(2𝑅)
 Lemma. For π‘Œ = 𝑓ᡒ₊₁(𝑋)/𝑓ᡒ(𝑋), VAR(π‘Œ) < 4 𝐸(π‘Œ)².
 Although the expectation of Y can be large (exponential,
even), we need only a few samples to estimate it!
LoVe algorithm: √𝑛 × √𝑛 × 𝑛³ = 𝑛⁴
Variance of ratio estimator
Let 𝑓 π‘Žπ‘– , 𝑋 =
Then, 𝐸 π‘Œ = ∫
𝐸 π‘Œ
𝐸 π‘Œ
2
2
𝑓 π‘Žπ‘–+1 ,𝑋
, π‘Œ=
𝑓 π‘Žπ‘– ,𝑋
𝑓 π‘Žπ‘–+1 , 𝑋 𝑓 π‘Žπ‘– ,𝑋
.
𝑑𝑋
𝑓 π‘Žπ‘– , 𝑋
∫ 𝑓 π‘Žπ‘– , 𝑋
𝑒 −π‘Žπ‘– 𝑋
=
∫ 𝑓 π‘Žπ‘–+1 ,𝑋
∫ 𝑓 π‘Žπ‘– ,𝑋
2
=
𝑓 π‘Žπ‘–+1 , 𝑋
𝑓 π‘Žπ‘– , 𝑋
∫ 𝑓 π‘Žπ‘– , 𝑋
𝑑𝑋 ⋅
2
𝑓 π‘Žπ‘– , 𝑋
∫ 𝑓 π‘Žπ‘– , 𝑋
∫ 𝑓 π‘Žπ‘–+1 , 𝑋
=
∫ 𝑓 2π‘Žπ‘–+1 − π‘Žπ‘– , 𝑋 ∫ 𝑓 π‘Žπ‘– , 𝑋
∫ 𝑓 π‘Žπ‘–+1 , 𝑋
2
𝐹 π‘Ž 1−𝛼 𝐹 π‘Ž 1+𝛼
=
𝐹 π‘Ž 2
(would be at most 1, if F was logconcave…)
2
Variance of ratio estimator
Lemma. For any logconcave f and a > 0, the function
𝑍(π‘Ž) = π‘Žⁿ ∫ 𝑓(π‘Ž, 𝑋) 𝑑𝑋 is also logconcave.
So π‘Žⁿ𝐹(π‘Ž) is logconcave, and
(π‘Žⁿ 𝐹(π‘Ž))² ≥ (π‘Ž(1−𝛼))ⁿ 𝐹(π‘Ž(1−𝛼)) · (π‘Ž(1+𝛼))ⁿ 𝐹(π‘Ž(1+𝛼)).
Therefore:
𝐹(π‘Ž(1−𝛼)) 𝐹(π‘Ž(1+𝛼)) / 𝐹(π‘Ž)² ≤ (1/((1−𝛼)(1+𝛼)))ⁿ = (1/(1−𝛼²))ⁿ ≤ 4
for 𝛼 ≤ 1/√𝑛.
Volume Computation: an ongoing adventure
Algorithm               Power of n   New aspects
Dyer-Frieze-Kannan 89   23           everything
Lovász-Simonovits 90    16           localization
Applegate-K 90          10           logconcave integration
L 90                    10           ball walk
DF 91                   8            error analysis
LS 93                   7            multiple improvements
KLS 97                  5            speedy walk, isotropy
LV 03,04                4            annealing, wt. isoper.
LV 06                   4            integration, local analysis
Cousins-V. 13,14        3            Gaussian cooling
Gaussian sampling/volume
 Sample from a Gaussian restricted to K
 Compute the Gaussian measure of K

Anneal with a Gaussian
 Define
𝑓ᡒ(𝑋) = 𝑒^{−‖𝑋‖²/(2σᡒ²)}
 Start with 𝜎₀ small, ~1/√𝑛, and increase it in phases
 Compute ratios of integrals of consecutive phases: ∫𝑓ᡒ₊₁ / βˆ«π‘“α΅’
Gaussian sampling
 The KLS conjecture holds for a Gaussian restricted to any convex
body (via the Brascamp-Lieb inequality).
Thm. πœ‹(𝑆₃) ≥ (𝑐/𝜎) 𝑑(𝑆₁, 𝑆₂) min{πœ‹(𝑆₁), πœ‹(𝑆₂)}
 Not enough on its own, but can be used to show:
Thm. [Cousins-V. 13]. For 𝜎² = 𝑂(1), the ball walk applied to the
Gaussian 𝑁(0, 𝜎²𝐼ₙ) restricted to any convex body containing
the unit ball mixes in 𝑂*(𝑛²) time from a warm start.
Speedy walk: a thought experiment
 Take the sequence of points visited by the ball walk:
𝑀₀, 𝑀₁, 𝑀₂, 𝑀₃, …, 𝑀ᡒ, 𝑀ᡒ₊₁, …
 Subsequence of “proper” attempts that stay inside K
 This subsequence is a Markov chain and is rapidly mixing
from any point
 For a warm start, the total number of steps is only a
constant factor higher
Gaussian volume
 Theorem [Cousins-V.13]. The Gaussian volume of a
convex body K containing the unit ball can be estimated
in time 𝑂*(𝑛³).
 No need to adjust for isotropy!
 Each step samples a 1-d Gaussian from an interval
 Can we use this to compute the volume?
Gaussian Cooling
 𝑓ᡒ(𝑋) = 𝑒^{−‖𝑋‖²/(2σᡒ²)}
 𝜎₀² = 1/𝑛, πœŽβ‚˜² = 𝑂(𝑛)
 Estimate ∫𝑓ᡒ₊₁ / βˆ«π‘“α΅’ using samples drawn according to 𝑓ᡒ
 For σᡒ² ≤ 1, set σᡒ² = σᡒ₋₁²(1 + 1/√𝑛)
 For σᡒ² > 1, set σᡒ² = σᡒ₋₁²(1 + σᡒ₋₁/√𝑛)
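The schedule itself is straightforward to state in code; a sketch (start and end values as on the slide, loop structure mine):

```python
import math

def cooling_schedule(n):
    """Variance schedule of Gaussian cooling: sigma_0^2 = 1/n up to sigma_m^2 = n.

    Slow phase: multiply sigma^2 by (1 + 1/sqrt(n)) while sigma^2 <= 1.
    Accelerated phase: multiply by (1 + sigma/sqrt(n)) once sigma^2 > 1.
    """
    sigma_sq, schedule = 1.0 / n, [1.0 / n]
    while sigma_sq < n:
        if sigma_sq <= 1.0:
            sigma_sq *= 1.0 + 1.0 / math.sqrt(n)
        else:
            sigma_sq *= 1.0 + math.sqrt(sigma_sq) / math.sqrt(n)
        schedule.append(sigma_sq)
    return schedule

# ~ sqrt(n) log n phases to reach sigma^2 = 1, then only ~sqrt(n) more
# to reach sigma^2 = n, thanks to the accelerated update.
print(len(cooling_schedule(1000)))
```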
Gaussian Cooling
 𝑓ᡒ(𝑋) = 𝑒^{−‖𝑋‖²/(2σᡒ²)}
 For σᡒ² ≤ 1, we set σᡒ² = σᡒ₋₁²(1 + 1/√𝑛)
 Sampling time: 𝑛²; #phases and #samples per phase: √𝑛 each
 So, total time = 𝑛² × √𝑛 × √𝑛 = 𝑛³
Gaussian Cooling
 𝑓ᡒ(𝑋) = 𝑒^{−‖𝑋‖²/(2σᡒ²)}
 For σᡒ² > 1, we set σᡒ² = σᡒ₋₁²(1 + σᡒ₋₁/√𝑛)
 Sampling time: 𝜎²𝑛² (too much??)
 #phases to double 𝜎 is √𝑛/𝜎
 #samples per phase is also √𝑛/𝜎
 So, total time to double 𝜎 is √𝑛/𝜎 × √𝑛/𝜎 × 𝜎²𝑛² = 𝑛³ !!!
Variance of ratio estimator
 Why can we set σᡒ² as high as σᡒ₋₁²(1 + σᡒ₋₁/√𝑛)?
𝑓(𝜎², π‘₯) = 𝑒^{−‖π‘₯‖²/(2𝜎²)} for π‘₯ ∈ 𝐾,  𝐹(𝜎²) = ∫ 𝑓(𝜎², π‘₯) 𝑑π‘₯
Lemma. For π‘Œ = 𝑓(𝜎², 𝑋)/𝑓(𝜎²/(1+𝛼), 𝑋), with X drawn with density
proportional to 𝑓(𝜎²/(1+𝛼), ·):
𝐸(π‘Œ) = 𝐹(𝜎²) / 𝐹(𝜎²/(1+𝛼))
𝐸(π‘Œ²)/𝐸(π‘Œ)² = 𝐹(𝜎²/(1+𝛼)) 𝐹(𝜎²/(1−𝛼)) / 𝐹(𝜎²)² = 𝑒^{𝑂(𝛼²𝑛/𝜎²)} = 𝑂(1)
for 𝛼 = 𝑂(𝜎/√𝑛).
Variance of ratio estimator
𝐸(π‘Œ²)/𝐸(π‘Œ)² = 𝐹(𝜎²/(1+𝛼)) 𝐹(𝜎²/(1−𝛼)) / 𝐹(𝜎²)² = 𝑒^{𝑂(𝛼²𝑛/𝜎²)}
First use localization to reduce to a 1-d inequality,
for a restricted family of logconcave functions:
for 𝐾 ⊆ 𝑅 · 𝐡ₙ and −𝑅 ≤ 𝑙 ≤ 𝑒 ≤ 𝑅,
𝐺(𝜎²) = ∫_𝑙^𝑒 (𝑑+𝑏)^{𝑛−1} 𝑒^{−𝑑²/(2𝜎²)} 𝑑𝑑
𝐺(𝜎²/(1+𝛼)) 𝐺(𝜎²/(1−𝛼)) / 𝐺(𝜎²)² = 𝑒^{𝑂(𝛼²𝑅²/𝜎²)}
Variance of ratio estimator
𝐺(𝜎²/(1+𝛼)) 𝐺(𝜎²/(1−𝛼)) / 𝐺(𝜎²)² = 𝑒^{𝑂(𝛼²𝑅²/𝜎²)}  ⇔  𝑑𝐸(𝑑²)/π‘‘πœŽ² ≤ 𝑐
where
𝐸(𝑑²) = ∫_𝑙^𝑒 𝑑² (𝑑+𝑏)^{𝑛−1} 𝑒^{−𝑑²/(2𝜎²)} 𝑑𝑑 / ∫_𝑙^𝑒 (𝑑+𝑏)^{𝑛−1} 𝑒^{−𝑑²/(2𝜎²)} 𝑑𝑑
Warm start
 With σᡒ² = σᡒ₋₁²(1 + σᡒ₋₁/√𝑛), a random point from one
distribution gives a warm start for the next.
𝑓(σᡒ², π‘₯) = 𝑒^{−‖π‘₯‖²/(2σᡒ²)} for π‘₯ ∈ 𝐾,  𝐹(σᡒ²) = ∫ 𝑓(σᡒ², π‘₯) 𝑑π‘₯
𝑀 = 𝐸ᡒ(𝑄ᡒ(π‘₯)/𝑄ᡒ₊₁(π‘₯))
  = ∫ (𝑓ᡒ(π‘₯)/βˆ«π‘“α΅’(π‘₯)𝑑π‘₯) · (𝑓ᡒ(π‘₯)/𝑓ᡒ₊₁(π‘₯)) · (βˆ«π‘“α΅’₊₁(π‘₯)𝑑π‘₯/βˆ«π‘“α΅’(π‘₯)𝑑π‘₯) 𝑑π‘₯
  = 𝐹(𝜎²(1+𝛼)) · 𝐹(𝜎²(1+𝛼)/(1+2𝛼)) / 𝐹(𝜎²)² = 𝑂(1)
for 𝛼 = 𝑂(𝜎/√𝑛). Same lemma!
Gaussian Cooling [CV14]
 Accelerated annealing: 1 + 𝜎/√𝑛 is the best possible rate
 Thm. The volume of any well-rounded convex body K can
be estimated using 𝑂*(𝑛³) membership queries.
CV algorithm: √𝑛/𝜎 × √𝑛/𝜎 × 𝜎²𝑛² × log 𝑛 = 𝑂*(𝑛³)
Practical volume/integration
 Start with a concentrated Gaussian
 Run the algorithm till the Gaussian is nearly flat
 In each phase, flatten the Gaussian as much as possible while keeping
the variance of the ratio of integrals bounded
 The variance can be estimated with a small constant number of samples
 If the covariance is skewed (as seen by SVD of O(n) points), scale down
the high-variance subspace
 “Adaptive” annealing, also used in [Stefankovic-Vigoda-V.] for discrete
problems (see the sketch below)
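A sketch of the adaptive choice of the next variance (the sampler sample_current is a hypothetical stand-in for the random walk; the probe count and doubling rule are illustrative, not the authors' exact rule):

```python
import numpy as np

def next_sigma_sq(sigma_sq, sample_current, n, var_bound=4.0, probes=8):
    """Flatten the Gaussian as much as possible while the empirical relative
    variance of Y = f(sigma'^2, X)/f(sigma^2, X) stays bounded, where
    X ~ density proportional to exp(-|x|^2/(2 sigma^2)) on K.

    sample_current(): hypothetical sampler for the current phase.
    """
    xs = np.array([sample_current() for _ in range(probes)])
    sq_norms = np.sum(xs ** 2, axis=1)
    alpha, trial = 0.0, 1.0 / np.sqrt(n)   # conservative starting step
    while True:
        cand = sigma_sq * (1.0 + trial)
        # Y = f(cand, x)/f(sigma_sq, x) = exp(|x|^2/2 * (1/sigma^2 - 1/cand))
        ys = np.exp(0.5 * sq_norms * (1.0 / sigma_sq - 1.0 / cand))
        if np.mean(ys ** 2) > var_bound * np.mean(ys) ** 2:
            break                          # candidate step too aggressive
        alpha, trial = trial, 2.0 * trial  # keep it, try twice the step
    # Largest probed step that passed; if even the first failed, a real
    # implementation would shrink the step rather than stall.
    return sigma_sq * (1.0 + alpha)
```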
Open questions

How true is the KLS conjecture?
Open questions
 When to stop a random walk?
(How to decide if the current point is “random”?)
 Faster isotropy/rounding?
 How to get information before reaching stationarity?
To make isotropic:
run for N steps; transform using the covariance; repeat.
Open questions
 How efficiently can we learn a polytope given only random
points?
 With O(mn) points, we cannot “see” the structure, but there is
enough information to estimate the polytope! Algorithms?
For convex bodies:
 [KOS][GR] need 2^{Ω(√𝑛)} points
 [Eldan] need 2^{𝑛^𝑐} points even to estimate the volume!
Open questions
 Can we estimate the volume of an explicit polytope 𝐴π‘₯ ≤ 𝑏
in deterministic polynomial time?