>> Yuval Peres: Okay. Good morning, everyone. We're happy to have Ronen
Eldan from Tel Aviv and the Weizmann Institute tell us about the
Kannan-Lovasz-Simonovits conjecture and the variance conjecture.
>> Ronen Eldan: All right. Thank you very much for this invitation and this
opportunity. I hope I can make this talk work, despite the jet lag. So I want
to talk about the connection between the KLS or Kannan-Lovasz-Simonovits
conjecture and the so-called variance or thin shell conjecture.
So our main topic will be isoperimetric inequalities on convex sets, or rather on
uniform measures on convex sets, and we'll actually want to consider a slightly
bigger family of measures, namely log-concave measures. So I want to start with a
few definitions.
The first one is just the basic definition of a log-concave measure. That's just a
measure whose density has a logarithm which is a concave function. So a typical
example of a log-concave measure is just the uniform measure on some given convex
body. Another typical example is the Gaussian measure. If you're not comfortable
with log-concave measures, you can always imagine this thing is just the uniform
measure over some convex set.
And we'll need a certain normalization for measures. We'll see why in a while.
So we'll just normalize the measures to be isotropic. Namely, we'll define b(mu)
as the barycenter, the center of mass of the measure, and Cov(mu) as the
covariance matrix of the measure, and we'll say that the measure is isotropic if
it's centered at the origin and has the identity covariance matrix. It's an easy
exercise to see that any random vector, under a mild non-degeneracy condition,
can be made isotropic by some affine transformation.
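[Editor's note: since the board is not visible in the transcript, here is the standard definition being referred to, written out; the notation b(mu), Cov(mu) follows the talk.
\[
b(\mu)=\int_{\mathbb{R}^n} x\,d\mu(x),\qquad
\mathrm{Cov}(\mu)=\int_{\mathbb{R}^n}\big(x-b(\mu)\big)\big(x-b(\mu)\big)^{T}\,d\mu(x),
\]
and mu is isotropic if b(mu) = 0 and Cov(mu) = Id. For any mu whose support is not contained in a hyperplane, the image of mu under the affine map x -> Cov(mu)^{-1/2}(x - b(mu)) is isotropic.]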
All right. So we're talking about isoperimetric problems. For this, we want
to define the surface area of a set with respect to a measure. So let's
imagine our measure as the uniform measure on some convex body. And then we
have some subset of this convex body, let's call it T. Then the surface area
measure of -- okay, here it's called A. The surface area measure of A, mu-plus
of A, is just: you take a little extension of A --
>>: [inaudible].
2
>> Ronen Eldan: Yeah, I guess so. I just take an epsilon extension of A, you
look at the measure inside this extension, and then you take the ratio between
this and epsilon as epsilon goes to zero. This will be called the surface area
measure, and our main interest will be the following constant, which I also
defined here for your convenience. So we look at all possible isotropic
log-concave measures and all possible subsets of measure exactly one half -- so
we're talking about probability measures. You can just imagine this as the most
efficient way to cut a convex set into two parts of equal mass, and the minimal
surface area will be defined as GN to the minus one. Our main objective will be
to try to prove an upper bound for this constant GN. The significance of this
I'll talk about a bit later.
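[Editor's note: for reference, the constant on the slide is presumably of the following form, with A_epsilon denoting the epsilon-neighborhood of A:
\[
\mu^{+}(A)=\liminf_{\varepsilon\to 0^{+}}\frac{\mu(A_{\varepsilon})-\mu(A)}{\varepsilon},
\qquad
G_n^{-1}=\inf\Big\{\mu^{+}(A)\;:\;\mu\ \text{isotropic log-concave on }\mathbb{R}^n,\ \mu(A)=\tfrac12\Big\}.
\]
]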
Now let's try to understand why I want this normalization, the isotropicity.
Why do I ask my measure to be isotropic? So first of all, just notice that
if I don't have this constraint, then it's a bit pointless. Even if I want
my convex body to have volume one, say, I can consider bodies which look like
this, for example, very long and thin, and then I can always cut them into two
halves using a very small surface area, and, well, obviously, the definition
wouldn't make any sense. Now isotropicity -- here is a natural way to
understand it.
For those of you who know the Prekopa-Leindler theorem, it implies that
one-dimensional marginals of log-concave measures are also log-concave, and if
the measure is isotropic, the marginals will also be isotropic. Now, this family
of one-dimensional log-concave isotropic measures has some compactness. Well, I
won't formally define it, but they are compact enough, for example, for the
density at the median to be bounded between two universal constants. If you
think about the geometric meaning of this, it's just that if I cut my body using
a hyperplane, if my surface is flat, then I know that this surface area will be
between two universal constants. So another way to think about isotropicity is
that, instead of writing here isotropic, I could have written measures such that
all hyperplane cuts into two halves have surface area bounded from below by one,
say.
All right. Now, it's well known that this constant GN has many equivalent
definitions, equivalent up to a universal constant. So by works of Buser,
Cheeger, Gromov, Milman, Ledoux and E. Milman, some of the equivalent
definitions of GN are the following. It corresponds to the optimal spectral gap
of the Neumann Laplacian on an isotropic convex domain in R^N, and we know that
any Lipschitz function admits exponential concentration around its mean with
exponent equal to GN to the minus one.
So many proofs of facts using the concentration phenomena in high dimensions
actually rely on things related to this constant.
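[Editor's note: written out, these equivalences say that, up to universal constants and uniformly over isotropic log-concave mu,
\[
\mathrm{Var}_{\mu}(f)\;\lesssim\;G_n^{2}\int|\nabla f|^{2}\,d\mu
\qquad\text{and}\qquad
\mu\big(|f-\mathbb{E}_{\mu}f|>t\big)\;\le\;2e^{-ct/G_n}
\]
for every 1-Lipschitz function f.]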
The Neumann spectral gap is also, I guess all of you know, highly related to the
mixing time of random walks on convex bodies, and this is the reason why this
constant GN was also interesting for some computer scientists working on
algorithms related to high dimensional convex sets.
So I'll just mention that the first result, I think, where random walks on
convex sets were applied was the breakthrough result by Dyer, Frieze and
Kannan -- excuse me if I don't pronounce the name right -- who demonstrated
that, rather surprisingly, the volume of a convex body in high dimensions may be
approximated in only polynomial time. The way to do it relies on the fact that
the random walk in a convex body has polynomial mixing time; well, correctness
relies only on the fact that it's polynomial, but the complexity is actually a
direct consequence of the mixing time, and this is why it's interesting for us.
Okay. Because this is an interview, I was urged to mention here that one
question asked by Lovasz was whether such a polynomial algorithm is possible if
you're just given random points from the convex body. And in 2009, I proved
that it's actually impossible. So this algorithm relies on the fact that your
oracle to the body is the membership oracle: you give a point and it answers
yes or no to the question whether this point is in the body.
All right. Some other algorithms related to convex bodies also rely on these
mixing random walks. For example, if we want to maximize concave functions over
the body, or do a PCA, just calculate the covariance matrix of the body. These
rely on sampling, and the complexity of sampling is just a matter of how much
time it takes for a random walk to mix. So these are all related to this
constant GN.
All right. It was conjectured by Kannan, Lovasz and Simonovits that this GN is
actually bounded from above by some universal constant independent of the
dimension. In other words, what they conjecture is that in order to cut a
convex set into two halves, I can equivalently check only flat hypersurfaces:
if I know that all of those cuts have surface area bounded from below by some
constant, then the body actually satisfies an isoperimetric inequality with
essentially the same constant.
This is a pretty bold conjecture. Right now, it's known for very few classes
of convex bodies. In the original paper, they only prove this dependence not up
to some constant, but with an extra factor of square root of N, using some
localization methods I will talk a bit about later. But actually, if we believe
that GN is also equivalent to the optimal exponential concentration constant,
then this square-root-of-N bound can be proved rather easily. Just note that by
the definition of isotropicity, the expected norm squared is just equal to N,
and then we use Markov's inequality to say that most of the mass of the body is
at distance no more than, say, ten times root N from the origin, and then we use
a classical theorem of Borell, which is actually just a straightforward
application of the Brunn-Minkowski inequality, which says that once the mass
starts decaying, then it continues decaying exponentially. And, well, it
follows from this that any Lipschitz function will admit an exponential
concentration of this type. Of course, I have this factor root N here, which I
want to get rid of -- I mean, the KLS conjecture asserts that it's true without
it.
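[Editor's note: the easy square-root-of-N bound alluded to here goes, schematically, as follows:
\[
\mathbb{E}|X|^{2}=n
\;\Rightarrow\;
\mathbb{P}\big(|X|>10\sqrt{n}\big)\le\tfrac1{100}
\;\Rightarrow\;
\mathbb{P}\big(|X|>10t\sqrt{n}\big)\le Ce^{-ct}\quad(t\ge1),
\]
by Markov's inequality and then Borell's lemma, which gives exponential concentration at scale sqrt(n) and hence an isoperimetric constant of order sqrt(n).]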
Okay. I want to talk about a slightly weaker type of concentration on convex
bodies; namely, maybe a more relaxed isoperimetric inequality called thin-shell
concentration. So as we saw, when X is isotropic, the expectation of the norm
squared is just N. And we define this constant sigma N, which I also defined
here for your convenience, as the supremum of the standard deviation of the
norm.
And it turns out that the following thing is true. The standard deviation of
the norm is always much, much smaller than its expectation. This quantity, the
norm of X is always concentrated around its expectation. If you think about
the geometric interpretation of this thing, it just says that most of the mass
of my body is concentrated in a very thin, spherical shell, because this radius
here of the shell is presumably much bigger than its thickness, okay?
The first results of this type were proved by Bo'az Klartag, in relation with
the so-called central limit theorem for convex sets, which, unfortunately, I
don't have time to talk about. Several alternative proofs and improvements were
introduced by other authors. And currently, it is known that this constant
sigma N is smaller than N to the 1 over 3. So this distance is N to the one
half, and the thickness of the shell is N to the 1 over 3, which is much
smaller.
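[Editor's note: with the normalization used in the talk, the thin-shell constant is, up to the exact convention,
\[
\sigma_n^{2}=\sup_{X}\ \mathrm{Var}\big(|X|\big),
\]
the supremum running over isotropic log-concave random vectors X in R^n, and the bound quoted is sigma_n <= C n^{1/3}: the mass lives at radius about sqrt(n), inside a shell of width about n^{1/3}.]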
This might be surprising for people who are unfamiliar with high dimensional
convex geometry, but this is true, and it's actually conjectured, in two
different papers by Anttila, Ball and Perissinaki and by Bobkov and Koldobsky,
that the constant sigma N can also be bounded by some universal constant. This
is also known for several classes of convex bodies already.
And let's understand why I called this a relaxed version of the isoperimetric
inequality. So as I mentioned above, the isoperimetric inequality corresponds
to a spectral gap, which, in turn, gives us a Poincare inequality like this.
Now, if we use this Poincare inequality with -- forget this term here -- just
with the norm of X, then the gradient is always smaller than one. Here we get
GN squared and here we just get the variance of the norm.
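[Editor's note: in formulas, plugging f(x) = |x| into the Poincare inequality gives
\[
\sigma_n^{2}=\mathrm{Var}\big(|X|\big)\;\lesssim\;G_n^{2}\,\mathbb{E}\big|\nabla|x|\big|^{2}\;=\;G_n^{2},
\]
since the gradient of the norm has length one almost everywhere; so sigma_n is controlled by G_n.]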
So we see that, well, isoperimetry immediately implies thin shell
concentration. A nice way to think about it is just the following.
Isoperimetry means that the surface area of any surface cutting the body into
two halves is not too small, and thin shell just means that one specific
surface, namely the Euclidean sphere, taken with the radius that divides the
mass into two halves, doesn't have a too-small surface area. So thin shell is
just, well, isoperimetry for a specific case. Okay. So we just saw that the
KLS conjecture implies the thin shell conjecture -- or, as I mentioned, it's
also called the variance conjecture.
Now, what about the other way around? So using more or less the same lines as
the proof in the KLS paper itself, Sergei Bobkov managed to show that under
some thin shell assumption -- okay, actually, we get this kind of estimate.
So what's on the left side here is essentially the Cheeger constant; it can be
replaced by the optimal constant in the Poincare inequality. Here, we have the
thin shell constant. But we have this extra factor, which is equal to square
root of N if our random vector is isotropic. So this implies that we have this
relation: GN is smaller or equal to N to the one over four times the square
root of sigma N. And we just saw that sigma N is smaller or equal to GN.
6
So we do have some bound on GN in terms of sigma N already. And if the thin
shell hypothesis is proven, then we'll actually know that we're halfway to
attaining the KLS conjecture: from N to the one half, we get to N to the one
over four.
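[Editor's note: the chain of estimates quoted here is, up to universal constants,
\[
\sigma_n\;\lesssim\;G_n\;\lesssim\;\sqrt{\sigma_n\,\mathbb{E}|X|}\;\lesssim\;n^{1/4}\sqrt{\sigma_n},
\]
so a positive answer to the thin-shell conjecture (sigma_n = O(1)) would already give G_n of order n^{1/4}, halfway in the exponent between the trivial n^{1/2} and the conjectured constant.]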
The main theorem I want to talk about now is the following. We actually have
only a logarithmic gap between the two constants. So we actually prove
something slightly stronger, and maybe this is a nice way to formulate it: up
to a factor of square root of log N, the worst isoperimetric sets are
ellipsoids. It's conjectured by KLS that the worst isoperimetric sets are
half-spaces, so the worst surfaces are flat hyperplanes. The isoperimetric
problem is over all surfaces, and, well, according to this theorem, if we're
willing to give up this root log N factor, then the worst surfaces are actually
ellipsoids.
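[Editor's note: the theorem being described is, up to the precise form proved in the paper (which involves the thin-shell constants in all dimensions up to n), of the shape
\[
G_n\;\lesssim\;\sqrt{\log n}\;\sigma_n ,
\]
i.e. the KLS and thin-shell constants agree up to a factor of order sqrt(log n).]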
Now, combined with the best known thin shell bound, we also get some
improvement of the best known KLS bound using this theorem. It's important to
mention that the bound we get is actually global and not local. If we know
thin shell for a specific convex set, a specific log-concave measure, it
unfortunately does not imply an isoperimetric inequality for that measure. We
somehow have to use -- and we'll see this in the sketch of the proof soon --
thin shell for an entire family of log-concave measures in order to prove
isoperimetry.
This is a good place to mention that in a joint work with Bo'az Klartag, we
also showed that the thin shell conjecture implies the so-called hyperplane
conjecture, or slicing problem, which is another central problem in high
dimensional convex geometry. If you don't know it, it's just a conjecture
about the maximal -- sorry, minimal possible volume of a convex set in
isotropic position.
So, okay, just a sketch to give some order here. We have the KLS, thin shell
and hyperplane conjectures. By Cheeger's theorem, KLS implies thin shell.
This is Bobkov's theorem I just mentioned. This is the theorem we're talking
about now, which only works globally, unfortunately; together with Bo'az
Klartag, we know it also implies the hyperplane conjecture. And there's also a
direct reduction of the hyperplane conjecture to KLS by Ball and Nguyen, but
unfortunately with an exponential dependence. So currently, we have all these
connections known.
All right. So I want to start with sketching a first attempt to prove our
theorem. So this is in some way based on what KLS did in their paper. This is
not what we'll actually use, but let's try to go for a first attempt. How to
try to prove some isoperimetric inequality. So I'm starting with some convex
body K, and let's assume that it has some subset T of measure one half. And I
want to do the following process. I choose a random direction theta in the
unit sphere. Let's say theta points this way.
And what I do is I cut the body K into two halves through its very center with
a hyperplane whose normal is the random direction theta, and I ignore this half
and call the other half K1, and then I continue this process. I generate
another random direction, and then I cut the body K1 through its very center
again, only this time -- well, okay, instead of this formula, what you can try
to think about is that at every step, I don't generate the direction from the
unit sphere, but I generate it uniformly from the ellipsoid of inertia of the
body I'm currently looking at.
So this normalization here with the covariance matrix just makes my process
into a Markov process on convex bodies. In some sense, I'm always doing the
cut in the world where my current body is isotropic. And I continue this
process on and on; I cut again and again and again until I get a localized
version of my body, some much smaller body. And let's try to think what this
may teach us.
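[Editor's note: a minimal toy sketch of the cutting process just described, with a cloud of sample points standing in for the body; the sample-based representation, the jitter term and the choice to keep the half the direction points into are illustrative assumptions, not the speaker's construction.]

    import numpy as np

    def cut_localization(points, n_steps, seed=0):
        """Toy hard localization: repeatedly cut through the barycenter with a
        hyperplane whose normal is drawn from the current ellipsoid of inertia,
        and keep one half.  `points` is an (m, n) sample cloud for the body."""
        rng = np.random.default_rng(seed)
        K = points.copy()
        for _ in range(n_steps):
            center = K.mean(axis=0)                       # barycenter of the current body
            cov = np.cov(K, rowvar=False)                 # ellipsoid of inertia
            theta = rng.standard_normal(K.shape[1])
            theta /= np.linalg.norm(theta)                # uniform direction on the sphere
            # map the direction through the ellipsoid of inertia, i.e. work in the
            # coordinates where the current body is isotropic
            normal = np.linalg.cholesky(cov + 1e-9 * np.eye(K.shape[1])) @ theta
            K = K[(K - center) @ normal >= 0]             # keep the half theta points into
        return K

    # Example: localize the uniform measure on the cube [-1, 1]^10
    pts = np.random.default_rng(1).uniform(-1.0, 1.0, size=(20000, 10))
    print(cut_localization(pts, n_steps=8).shape)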
So one observation we can make is the following. So if I choose this direction
theta from the sphere, I can define this function F as a function on this
sphere, which just measures the relative volume of T with respect to K after I
make the cut.
Now, it's not so hard to see that this function will be, say, ten-Lipschitz,
where ten is a constant independent of the dimension. Now, it's well known
that ten-Lipschitz functions on the sphere are very concentrated around their
mean, and the mean is definitely one half. This means that I'll get something
like one half plus or minus ten over square root of N -- something very close
to one half. And this means that we can iterate this process more or less N
times: we can cut again and again and again, and still have some
non-negligible probability that the proportion of T in what we get is more or
less one half. Okay? It's not that we'll always end up with something located
here or here that doesn't see the boundary of T at all; we have a pretty good
probability of seeing the boundary of T after making many cuts. And this would
be pretty good, if we knew something about what the body K_N looked like, if
we knew something about what happens after N cuts. For example, if we knew
something about the renewal time of this Markov process, maybe this body
doesn't even depend on what we started with so much.
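[Editor's note: the concentration fact being used is, schematically: for an L-Lipschitz function F on the unit sphere S^{n-1},
\[
\mathbb{P}\big(|F(\theta)-\mathbb{E}F|>t\big)\;\le\;2e^{-cnt^{2}/L^{2}},
\]
so with L of order ten each cut moves the proportion of T by only about 1/sqrt(n), and after on the order of n cuts the accumulated fluctuation is of order one -- which is why there is still a non-negligible probability that the proportion stays close to one half.]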
Or if we knew some isoperimetric inequality for it, this would in turn imply
that the surface area of T is quite large. So we might have been able to say
something, but unfortunately, I can't find any way to say anything about those
bodies K_N. So let's try to consider a better localization -- at least better
for our purposes -- some slightly softer process.
Instead, so I pick some constant epsilon, which will be small. And instead of
truncating an entire half space every time, what I do is I just take my
measure, which I now consider as a measure in the class of log concave
measures, but not as a convex body. I take my measure and I multiply it by a
linear function, which is equal to one at the very center, and the gradient of
which is a random direction.
So instead of killing the entire half space, I just give a little more mass to
one of the half spaces, and I continue this process again and again. I
multiply by more and more linear functions, and I do it again in a more
covariant manner. I normalize my thing to be isotropic at each step.
How is this better? It turns out that here I can actually say something about
what I get after many iterations. So let's consider the one-dimensional case
for a second. In the one-dimensional case, after doing this thing twice, I get
some linear function, but I also get some quadratic term, right? It's the
product of two linear functions. And if we do something like one over epsilon
squared iterations, then what I get is the product of many such things, which
is roughly the exponential of some quadratic function. And in higher
dimensions, I expect my measure to be more or less multiplied by some Gaussian
density.
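[Editor's note: the one-dimensional heuristic, written out: as long as each factor is close to one,
\[
\prod_{i=1}^{m}\big(1+\varepsilon\langle\theta_i,x\rangle\big)
\;\approx\;
\exp\Big(\varepsilon\sum_{i=1}^{m}\langle\theta_i,x\rangle-\frac{\varepsilon^{2}}{2}\sum_{i=1}^{m}\langle\theta_i,x\rangle^{2}\Big),
\]
and for roughly isotropic random directions the second sum is about (m/n)|x|^2, so after m of order epsilon^{-2} steps the density has effectively been multiplied by a Gaussian factor whose quadratic part is no longer negligible.]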
Now, measures of this form -- log-concave measures times some Gaussian
density -- for those who know the Bakry-Emery criterion, for instance, which I
will mention later, those kinds of measures already have some spectral gap.
So I can already say something about them, provided that I know that my
quadratic term is large enough. So if I know that this matrix A is quite
large, I mean, it's bigger than, say, one over ten times the identity, then I
know that this measure already has some good concentration properties, and I'm
kind of done.
Okay. But unfortunately, I have no way to quantify what this -- to analyze
what this matrix A looks like, but what I can do is I can change this process
such that I will be able to say something.
So what I am going to do is define a continuous version of this process, and
maybe I'll write it down on the blackboard, because this is actually the main
tool in our proof. So I'm starting with a Brownian motion W_t. And then I
solve the following system of stochastic differential equations. F_0 of x is
just the function one -- at time zero I multiply by one -- and dF_t at a point
x will be F_t at the point x times some linear function.

So again, I have this random gradient, and I have this normalization, and my
function little f_t will just be my initial function times this function
capital F. So I get a one-parameter family of random functions, and here
little a_t is the barycenter of F_t and capital A_t is the covariance matrix
of F_t. So this is our process.
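[Editor's note: written out, the system on the board is, for each x (this matches the stochastic localization scheme of the paper, up to normalization conventions):
\[
F_0(x)=1,\qquad
dF_t(x)=F_t(x)\,\big\langle A_t^{-1/2}(x-a_t),\,dW_t\big\rangle,\qquad
f_t(x)=f(x)\,F_t(x),
\]
where W_t is a standard Brownian motion in R^n, and a_t and A_t are the barycenter and covariance matrix of the probability measure with density proportional to f_t.]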
And let's try to see how it can be helpful to us. Let's start with some basic
properties of it. So first of all, you see it's infinite -- I mean, it's a
system of SDEs: for every x, you have an SDE. But it can be shown that this
is actually solvable. It has a unique solution, and it's finite and
non-negative for any time T.
As you see here, F_t at x is a martingale -- it only has a martingale term
here -- which means that if I want to, for example, measure the surface area
of some set here, what I can do is take the expectation of the surface area at
some positive time T. It's also easy to see, thanks to the fact that we're
multiplying by something equal to one at the barycenter, that we'll always
have a probability measure. It's a simple observation.
And also, thanks to this normalization here, this is a semigroup, so again it
has this Markovian property: we can run it for one second and again one
second, and it's like running it for two seconds. Thanks to this fact,
essentially by using the same compactness ideas we've seen before, we can make
sure that if we run this process for a rather long time, the proportion of
some subset E out of the whole measure will stay kind of close to one half.
>>:
So the idea is this is sort of the continuous version of the previous --
>> Ronen Eldan:
Yes, yes, yes, yes.
>>: And is it exactly what you would get if you somehow took lots of
various --
>> Ronen Eldan: Yeah, well, if I look at this and I take epsilon to zero,
ideologically what I get is this, yes, yes, yes.
Okay. Now for a very nice property of this thing. If I apply Ito's formula to
the logarithm of F -- this is just a straightforward calculation -- what I get
is the following. I have this diffusion term, which is just a linear function,
and then I have this Ito term, which is a quadratic function. So what I
actually get now -- what we approximately guessed we would get in the
noncontinuous version -- is that at every time T, what we do is we actually
multiply our density by some Gaussian density, and as we said before, we
somehow want the quadratic term to be big. We want it to have high curvature
in some sense.

So here, we actually know exactly what our quadratic term will be. It just
turns out to be, from what you see here, the integral between zero and T of
the inverse of the covariance matrix of the measure we had along the process.
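[Editor's note: under the convention written above, Ito's formula gives
\[
\log F_t(x)=\int_0^{t}\big\langle A_s^{-1/2}(x-a_s),\,dW_s\big\rangle
-\frac12\int_0^{t}\big|A_s^{-1/2}(x-a_s)\big|^{2}\,ds,
\]
so at time t the original density has been multiplied by a factor of the form exp(<c_t, x> - x^T B_t x / 2) times a constant, with
\[
B_t=\int_0^{t}A_s^{-1}\,ds .
\]
In particular, keeping the covariance matrices A_s small along the process makes B_t large, which is the curvature one wants.]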
Now, as I mentioned before, by results of Buser, Brascamp-Lieb, Gross,
Bakry-Emery, Ledoux, et cetera -- I guess what we actually use is the
so-called Bakry-Emery criterion -- what it says is the following: if B_T is
larger than some multiple of the identity, then the function F_T, or actually
any function which is a log-concave function times this Gaussian factor,
satisfies an isoperimetric inequality. So maybe I'll write it down here. If
the function H of x is of the form some log-concave function -- I don't care
which, as long as it's log-concave -- times e to the minus alpha times the
norm of x squared over two, then the spectral gap of the measure with density
H is at least alpha.
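[Editor's note: the statement on the board is presumably the Brascamp-Lieb / Bakry-Emery criterion: if d nu is proportional to e^{-V(x)} dx with the Hessian of V at least alpha times the identity everywhere -- for instance a log-concave density multiplied by e^{-alpha |x|^2 / 2} -- then
\[
\mathrm{Var}_{\nu}(g)\;\le\;\frac1{\alpha}\int|\nabla g|^{2}\,d\nu
\]
for all smooth g, i.e. the spectral gap of nu is at least alpha.]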
So great. What we learn from this is that if B_T is rather large -- in other
words, if I manage to keep the covariance matrix along the process rather
small -- then for free, I get some isoperimetric inequality for the measure I
get after, say, one second.
So if we connect all of this together, what we get is the following. If we
call the measure along the process mu sub T, then the last equation together
with, say, the Bakry-Emery result shows that if the operator norm of the
covariance matrix is smaller than some constant alpha squared for a large
enough span of time, then the function F_T will satisfy this. So I will have
an isoperimetric inequality for any set which is, you know, not too small and
not too large; this can be ensured using the compactness I talked about
before. So all we really have to do is make sure that this operator norm
remains small for long enough. At this point, we've reduced the isoperimetric
inequality to just saying something about the norm of some matrix-valued
stochastic process.
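[Editor's note: in formulas, the reduction reads: if the operator norm satisfies \(\|A_s\|_{\mathrm{op}}\le\alpha^{2}\) for all s up to time 1, then
\[
B_1=\int_0^{1}A_s^{-1}\,ds\;\succeq\;\alpha^{-2}\,\mathrm{Id},
\]
so by the criterion above the random measure mu_1 has spectral gap at least alpha^{-2}; combined with the fact that mu_1(E) stays comparable to one half with non-negligible probability, this yields an isoperimetric bound for E with respect to the original measure.]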
And it turns out that actually, this process can be analyzed using some
stochastic calculus related tools. So I'll just -- I have ten more minutes,
right? I think I have 'til 11:25. Okay. So I'll just briefly go over how
this can be done. So this matrix A_t -- its entry at (i, j) is just the
integral of (x_i minus the i-th coordinate of a_t) times (x_j minus the j-th
coordinate of a_t) with respect to the measure.
I have this covariance matrix, and I want to analyze how its entries vary.
It's actually easy to see that they are Ito processes: in order to find this
differential, I can just differentiate inside the integral, and I get this
linear term. I actually have to also differentiate little a_t, but after
doing some calculations and estimates, what we get is that, roughly, the
differentials of the entries of the covariance matrix are just integrals of
third-degree monomials over a measure. This is not a very surprising fact,
just because what we do is multiply by linear functions.
And the nice thing is that, thanks to this normalization that we had -- I
won't get into the formulas -- what we get is that somehow we integrate
third-degree monomials over a measure which is always normalized to be
isotropic. Somehow we always normalize such that at any given time, the
measure we consider is an isotropic measure.
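[Editor's note: schematically, under the same convention as above, the computation alluded to gives
\[
dA_t=\Big(\int_{\mathbb{R}^n}(x-a_t)(x-a_t)^{T}\,\big\langle A_t^{-1/2}(x-a_t),\,dW_t\big\rangle\,d\mu_t(x)\Big)-A_t\,dt,
\]
so after the change of variables y = A_t^{-1/2}(x - a_t), the martingale part of each entry is indeed an integral of third-degree monomials in y against the isotropic image of mu_t.]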
And what we conclude in the end is that the differentials of the entries of
the matrix are just some vectors defined by integrals of third-degree
monomials over some isotropic measure. The isotropic measure is just our
original measure times some Gaussian density, normalized again to be
isotropic, and, well, these things can be estimated using everything we know
about isotropic log-concave measures.
So now, since we want to bound the operator norm, we can differentiate the
eigenvalues with respect to the entries. Well, more or less, up to some
cheating, we get that these things are close enough to Ito processes -- they
are Ito processes whenever all of the eigenvalues are distinct, actually, but
it's good enough for us. And they are Ito processes whose diffusion terms
correspond to those vectors -- the diagonal terms of this guy -- and we also
have some repulsion between the eigenvalues, whose magnitude can also be
bounded by vectors of this form.
And okay. So this is what I just said, and in order to bound the operator
norm, what we can do is we define an energy term like this, and this actually
turns out to be exactly an Ito process, and it turns out that the repulsion
between the eigenvalues is actually the significant thing. The diffusion gives
us some logarithmic term, and the repulsion is somehow, well, bigger unless we
knew something about the thin shell conjecture.
And what we get is that the drift of this guy can be bounded by an expression
like this. So, well, okay, at this point we see that we have already reduced
the KLS conjecture to saying something about the expectation of some monomials
of degree three. Where the KLS conjecture talked about all Lipschitz
functions, now we only have to know something about monomials of degree three.
And the thin shell conjecture is just saying something about the variance of
the Euclidean norm. So in order to know that, we have to know something about
monomials of degree four, right, because the Euclidean norm squared is just a
polynomial of degree two. I won't get into the details, but this guy here can
be rather easily bounded by sigma N, and this more or less concludes the
proof.
So I won't explain how to do it but I do want to talk about one maybe
algorithmic application of this method. So say I have a convex body, and I
want in polynomial time to say something about -- to say whether or not this
body attains a spectral gap.
So what this method can give us is a lower bound for the spectral gap in
polynomial time. Now, giving a lower bound is rather easy -- zero is always a
lower bound for the spectral gap, so that doesn't say much -- but what we
actually give is a lower bound such that if there is some convex body for
which there is a non-negligible chance that the algorithm returns something
very low, then we will know that the KLS conjecture is false.
So one thing you can do with such an algorithm is the following: if you have
some body and you think that this body is a counterexample to the KLS
conjecture, then now, instead of checking all possible Lipschitz functions or
all possible subsets, which is something that definitely needs exponential
time, you only need polynomial time to verify that the body you're holding in
your hands is, in fact, a counterexample to the KLS conjecture.
And, well, how do we do that based on the method we've just seen? So what we
saw is that in order to know that the body has a spectral gap, it's enough to
check that the operator norm of this matrix along the process is not too big.
So what we do is we discretize -- we run a discrete version of this
localization, and we track this matrix A_t along the process. And in order to
estimate this matrix A_t, we can actually do it in polynomial time. Why?
Because our measure is just some Gaussian density restricted to our convex
body, and we can sample from this measure by the standard means of a
reflecting random walk. Since we have a density, we also have this drift
term, but we can still do that. In this way, we can actually run this
localization on a computer and test what happens to this matrix, and this
gives us this theorem.
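[Editor's note: a rough sketch, in Python, of the kind of discretized localization test described; the Euler discretization, the crude Metropolis sampler, the parameter choices and the returned quantity (the smallest eigenvalue of B_T, per the curvature criterion above) are illustrative assumptions, not the speaker's actual algorithm.]

    import numpy as np

    def heuristic_gap_lower_bound(membership, dim, T=0.3, dt=0.05,
                                  n_samples=300, walk_steps=30, step=0.2, seed=0):
        """Discretized stochastic localization against a membership oracle.
        Tracks the Gaussian tilt exp(<c, x> - x^T B x / 2) restricted to K and
        returns the smallest eigenvalue of B_T as a heuristic gap lower bound."""
        rng = np.random.default_rng(seed)
        c = np.zeros(dim)                  # linear part of the tilt
        B = np.zeros((dim, dim))           # quadratic part of the tilt

        def log_density(x):                # log-density of mu_t on K, up to a constant
            return c @ x - 0.5 * x @ B @ x

        def move(x):                       # crude Metropolis walk inside K
            for _ in range(walk_steps):
                y = x + step * rng.standard_normal(dim)
                if membership(y) and rng.random() < np.exp(min(0.0, log_density(y) - log_density(x))):
                    x = y
            return x

        pts = np.zeros((n_samples, dim))   # start all chains at the origin (assumed inside K)
        t = 0.0
        while t < T:
            pts = np.array([move(p) for p in pts])
            a = pts.mean(axis=0)                              # barycenter a_t
            A = np.cov(pts, rowvar=False)                     # covariance A_t
            A_inv = np.linalg.inv(A + 1e-9 * np.eye(dim))
            dW = np.sqrt(dt) * rng.standard_normal(dim)
            c = c + np.linalg.cholesky(A_inv) @ dW + A_inv @ a * dt   # Euler step for the tilt
            B = B + A_inv * dt
            t += dt
        return np.linalg.eigvalsh(B).min()

    # Example: the cube [-1, 1]^5
    print(heuristic_gap_lower_bound(lambda x: bool(np.all(np.abs(x) <= 1.0)), dim=5))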
All right.
I'll end here.
>> Yuval Peres: Questions? In the last theorem, you need a sequence of convex
bodies, and you need to [indiscernible] --
>> Ronen Eldan: No, no, no. I'm given one convex body, but what I get is a
sequence of log-concave measures which are obtained by taking the uniform
measure on this convex body and multiplying it by some Gaussian measure. And
these measures are obtained randomly. I mean, I take a random
gradient, I multiply by some linear function, and if I do it enough times, I'll
get something which is approximately a sequence of some Gaussian measures
restricted to my body.
>>:
Assuming the KLS conjecture, is such an algorithm easy to find?
>> Ronen Eldan: Well, okay, if I assume the KLS conjecture, then the algorithm
can just return one or something like that. But because the KLS conjecture
just gives -- well, okay, not exactly, because I don't know that the body is
isotropic. But what the algorithm would do is check the position of the body,
calculate its covariance matrix, and then just output the smallest eigenvalue
or something like that.
>>:
Can you shut off the [indiscernible].
>> Ronen Eldan: Yeah, yeah, yeah. But I'm actually glad you asked me that,
because even if the KLS conjecture is proven, it's not hard to see that this
algorithm would work not only for convex bodies, but also for measures which
are not too far from log-concave: any measure whose logarithm has Hessian
bounded from above by some matrix which may be positive but not too large --
this algorithm also works on it. So, well, it actually works for a larger
class of measures, for which there's no hope of proving an isoperimetric
inequality in general. So this can still be useful.
>>: So even without the KLS conjecture, the [indiscernible] can give an
approximation for the [indiscernible]?
>> Ronen Eldan: Well, the thing is that it kind of -- if K doesn't have a
spectral gap, you will know it. But if it does, it might still give you some
answer that seems like it doesn't have a spectral gap, and this is because one
of these measures along the process doesn't have a spectral gap. So somehow,
what it gives you is the spectral gap of a certain family of measures, which
kind of implies the spectral gap of K.
I mean, well, if you really want some approximation, then you also will need an
upper bound somehow. And this I don't know how to prove.
>>: Also, you mentioned in the [indiscernible] some impossibility results.
Can you say?
>> Ronen Eldan: Okay. So this has nothing to do with this. The impossibility
result is: just take a convex body K and take N to the ten random points from
K. Based on these N to the ten points, it was not known whether you can say
what the volume of K is -- say, up to a multiplicative constant of, I don't
know, 50. And the impossibility result is just that this is impossible. There
are two bodies with very different volumes such that, if you look at the total
variation distance between the random point samples you get from these bodies,
it's very small. No algorithm can distinguish between the two.
>>: In other words, this learning problem is only solvable with a super
[indiscernible].
>> Ronen Eldan: Exactly. So yeah. I mean, there are many supervised cases.
We're talking about the specific oracle where you give a point to your black
box, say, and it tells you yes or no whether it's in the body or not. Yeah.
>> Yuval Peres:
More questions?
Thank him again.