>> Yuval Peres: Good afternoon, everyone. So we're happy to hear the second installment in the talks this week, and we'll hear about Talagrand's convolution conjecture, or regularization of L1 functions.

>> Ronen Eldan: Thanks. Okay. So yeah, this is more or less a direct continuation of James's talk yesterday. James was talking more about the discrete space, whereas today's talk will focus on the Gaussian version of the conjecture James stated. So just to set up, we're talking about Gaussian space, so R^N with the Gaussian density. And we consider the convolution operator, which just says: take a function, and to evaluate the new function at a point x, we take the expectation of F at -- this is just a rescaling of x -- we add a Gaussian to the point and take the expectation over this Gaussian. So this is just convolution with a Gaussian, up to some rescaling. This is sometimes called the Ornstein-Uhlenbeck operator. And another way to define this operator is just to see that this is actually the expected value of F at a Brownian motion at time 1, conditioned on the Brownian motion at time 1 minus t.

Okay. And as James mentioned yesterday, this operator admits some smoothing or regularization properties. So if we take a function and apply this operator, we expect it to become more smooth in a sense. One sense is this hypercontractivity property, which just says that if I know that my function F has some L^p norm, then for this p and for any t, I can stretch p to some q which is bigger than p and have an upper bound for the L^q norm, which is of course bigger than the L^p norm of a function by Hölder, given only that I smooth this function with this operator. Yeah?

>>: So what does the gamma in the LP [indiscernible]?

>> Ronen Eldan: Gamma is the Gaussian measure. It's L^p over the Gaussian measure. And okay, this is by now a very fundamental fact in analysis and has applications to concentration and isoperimetric inequalities. And a question that Talagrand raised is whether -- so this is a quantitative smoothing result, but somehow, it completely -- I mean, we can't really say anything if we only know that our function is in L1. But note that even for a delta function, there is some smoothing going on. If I take a delta and convolve it with a little Gaussian, I do get something kind of regular, and it's not clear what the correct statement, or the correct quantitative statement, is just for L1 functions.
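For reference, in one standard normalization (the rescaling on the slide may differ), the operator and the hypercontractivity statement being referred to read:

\[ P_t f(x) = \mathbb{E}\big[f\big(e^{-t}x + \sqrt{1 - e^{-2t}}\,\gamma\big)\big], \quad \gamma \sim N(0, \mathrm{Id}_n), \qquad \|P_t f\|_{L^q(\gamma)} \le \|f\|_{L^p(\gamma)} \ \text{ for } 1 < p \le q \le 1 + (p-1)e^{2t}. \]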

And Talagrand asked a very specific question. So the question is the following. Also James mentioned it yesterday. So let's assume, just normalize our function to have expectation one. Gamma is a standard Gaussian vector, and we know that, of course, this variable F of gamma satisfies Markov's inequality: the probability to be bigger than alpha is smaller than one over alpha. And what Talagrand asks is the following: Is it true that after I do this smoothing, I can improve over just Markov's inequality by saying that I get some C over alpha -- C depends on this convolution, on, I mean, how big the Gaussian I convolve with is -- times some function that goes to zero as alpha goes to infinity.
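Written out, with P_t the smoothing operator from before, the question is roughly the following (the exact decay psi is exactly what is being asked for):

\[ f \ge 0,\ \int f\,d\gamma = 1 \;\;\Longrightarrow\;\; \gamma\big(P_t f \ge \alpha\big) \;\le\; \frac{C(t)}{\alpha}\,\psi(\alpha), \qquad \psi(\alpha) \to 0 \ \text{ as } \alpha \to \infty\,? \]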

So I just want to demonstrate some slight improvement over just Markov. And basically the only result -- so as James said, Talagrand asked this in 1989, but in a discrete setting; the only result known, in either the discrete or the Gaussian setting, that doesn't just follow from what you get from hypercontractivity, which is not much at all, is the result by these five guys -- five or four -- five guys in 2010, which basically gave a bound that depends on the dimension too.

Now, it seems -- I'll try to convince you later that half spaces should be the worst examples, the sharp examples, for the best inequality you can find here. And this is a typical thing for the Gaussian measure: most estimates, you would expect them to be dimension-free. So I mean, this dependence on the dimension is kind of bad. It's also exponential.

>>: [Indiscernible].

>> Ronen Eldan: Oh, right. So there is also a title, but -- okay. I'll try to -- I hope you understand the talk nevertheless.

>>: It's a fixed title or it changes per slide?

>> Ronen Eldan: It changes per slide. Yes. So this slide's title has the letter G in it. [Laughter]. But it's okay, I guess. You won't miss much.

So this is the result James and I show. Basically it says the following:

Given a function which I know is not too log-concave -- if I consider the Hessian of the log, it's bigger than minus some constant times the identity. A good way to understand this condition: if I do some gradient optimization and, okay, if the function increases, then it has to keep increasing for a while. The gradient cannot turn from increasing to decreasing very quickly. If you imagine a set, then the set cannot be very, very narrow. It can't be that we're climbing up and immediately climbing down again. Then we have the following result.

So we assume that the expectation of F is one, but if we take the expectation of F restricted to some level set of this sort, somewhere between some value S and 2S, then this is not only smaller than one, but it's smaller than something going to zero as S goes to infinity. So basically, what you should imagine is that such functions cannot have constant mass on a very high level set. It cannot be that half the mass of the function is between 1 billion and 2 billion. Okay. It has to be pretty small on every such multiplicative scale, and --

>>: You put 1-7 just in case you made a calculation error?

>> Ronen Eldan: No. This is just the log log. It's so -- the power here doesn't matter. Okay. The focus of the lecture is to demonstrate little o of one.
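Schematically, the statement on the slide is of the following shape, where o_S(1) denotes a quantity going to zero as S goes to infinity at a rate depending on the constant beta in the Hessian bound (the exact rate comes at the end of the talk):

\[ \nabla^2 \log f \succeq -\beta\,\mathrm{Id}, \quad \int f\,d\gamma = 1 \;\;\Longrightarrow\;\; \int_{\{S \le f \le 2S\}} f\,d\gamma \;=\; o_S(1). \]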

>>: And you didn't want to have an S as well?

>> Ronen Eldan: Where?

>>: There's an S on the left because F is S.

>> Ronen Eldan: I'm not sure I understand the question.

>>: F of gamma is S in this.

>> Ronen Eldan: Yeah. F of gamma is typically S.

>>: So the quantity would be one over S.

>> Ronen Eldan: No, no, no, no. So this is just --

>>: [Indiscernible].

>> Ronen Eldan: Yeah, yeah, yeah. Without this, this is just one. If I restrict it to a level set, it becomes little o of one. And I claim that this is equivalent to an improvement over Markov's inequality. And the reason is that I can just say that, okay, so I guess this thing is more or less S, because I restrict to between S and 2S, so this means that S times the probability that F of gamma is between S and 2S is little o of one. So if I move S to the other side, then this is little o of one over S, and then this just means that the probability that F of gamma is bigger than some alpha is just the sum over K of the probability that F is between alpha times two to the K and alpha times two to the K plus one. And by using this, each term is smaller than two to the minus K times little o of one over alpha. And then I sum them all up and I get one over alpha times little o of one.
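Spelled out, the dyadic summation being described is just this, using the level-set bound with S = alpha 2^k:

\[ \mathbb{P}\big(f(\gamma) \ge \alpha\big) \;=\; \sum_{k \ge 0} \mathbb{P}\big(f(\gamma) \in [\alpha 2^{k},\, \alpha 2^{k+1})\big) \;\le\; \sum_{k \ge 0} \frac{o(1)}{\alpha\, 2^{k}} \;=\; \frac{o(1)}{\alpha}. \]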

And also the other implication is pretty easy, but we'll just need this one, together with this, which I'll explain in a second, to show that this is actually stronger than the question that I introduced earlier. So what else do we need?

So we know that this implies something better than Markov. And I claim that this is implied by taking the convolution with something. So note that everything that's convolved with a Gaussian is just a mixture of translated Gaussians. Every translated Gaussian is just an exponential function times the standard Gaussian. Right? And basically a mixture of exponential functions is known to be log-convex; this is sometimes called Artin's theorem. In this setting, it's basically just Cauchy-Schwarz. So it means that, well, everything convolved with a Gaussian cannot be too log concave, and we have exactly this condition.
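In formulas, for convolution with a standard Gaussian G (the constant in the Hessian bound just rescales with the variance of the Gaussian we convolve with):

\[ (f * G)(x) \;\propto\; e^{-\|x\|^2/2} \int e^{\langle x, y\rangle}\, f(y)\, e^{-\|y\|^2/2}\, dy, \]

and since the second factor is a mixture of exponentials, whose logarithm is convex by Cauchy-Schwarz/Hölder, this gives \nabla^2 \log (f * G) \succeq -\mathrm{Id}.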

All right. So okay. So this is a bit annoying already.

So let's go over an idea for how one could prove such a thing. First of all, we had the Gaussian measure gamma, and let's now define a new measure, which is called mu. Define d mu to be just F d gamma; we change the measure with F. We fix some big value S. Let's just imagine this is a billion. And we consider the level set corresponding to S. So all of the -- this E will be all the points where F is between S and 2S.

And what we need to show is basically that this measure mu which is a probability measure cannot have a constant mass over this level set E. So how can we try to prove it?

What I'm going to do is I'll try to take E and perturb it in a way such that I'll try to discover new points. Let's say that S is a billion, so E is where F is between a billion and 2 billion. I'll try to perturb E to get to a new set where the values of F are between 2 billion and 4 billion. And I'll try to find another perturbation where the values of F are between 4 billion and 8 billion. So I'll try to find some sequence of sets, E1, E2, et cetera, that are disjoint. They correspond to different levels of F.

But if I manage to find a super-constant number of such sets, whose measure is roughly the same measure as that of E, it will contradict the fact that the measure of E is constant. Right? Just because the integral of mu is one.

So what's a natural such construction to do? So let's take a point X in E. And just follow the gradient. Look at the gradient of the logarithm of F at this point. I'll normalize it somehow, and I'll just take delta times this thing to get to some point Y.

Now, remember this property of F, that if it's increasing, it cannot begin decreasing too quickly. This exactly says that if I take the Taylor expansion of the log of F, then the log of F at Y is at least this thing. I mean, the log of F at X, which we assume to be S, plus this delta -- this is just the gradient of the log times this direction, which is exactly one -- plus something which is small if delta is not too large. It's small if delta is small relative to this thing.
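One way to write the Taylor bound being described, with one guess at the normalization of the gradient step (chosen so that the first-order term is exactly one, matching the remark above; the slide may normalize slightly differently), and with kappa the constant from the Hessian lower bound:

\[ y = x + \delta\,\frac{\nabla \log f(x)}{|\nabla \log f(x)|^{2}} \;\;\Longrightarrow\;\; \log f(y) \;\ge\; \log f(x) \;+\; \delta \;-\; O\!\Big(\frac{\kappa\,\delta^{2}}{|\nabla \log f(x)|^{2}}\Big). \]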

>>: [Indiscernible].

>> Ronen Eldan: Yeah. Well, it's actually minus, but it's big O, so absolute value of this. So it cannot -- the second order cannot contradict too much.

Now, for those of you who are familiar with these things, this thing is usually rather high. The expectation of this is the Fisher information, so we might -- it's not important if you do not understand it, but there is an obvious reason to expect this thing to be small.

So basically the only missing ingredient here is -- so we do know that the values of F here will be rather large if delta is not too big. But we don't have any idea as to what the Gaussian measure of this set is. Maybe we took the set E and the gradients look like this, and this E1 already looks like this. I have no control at all over the measure of this perturbation. So we have to try something more clever.

And what we're going to do is the following. Instead of perturbing the point X, we'll take a Brownian motion which ends at X, and we'll perturb this Brownian motion in an adaptive manner. So this has to do with what James talked about yesterday, and basically, the main tool I'm going to need in order to estimate my new sets is called Girsanov's theorem. So let's take a few minutes to understand what this thing says.

So we have a Brownian motion. We consider a standard Brownian motion BT. And to each point of BT, we add some drift V sub T. So you can think about this drift as just a jump that's predictable, so we know exactly which way we're pushing the Brownian motion given the past, and we get this new process WT, which is just BT plus the drift. And Girsanov's theorem says the following. So if we consider BT as some distribution over the space of paths, then there is an explicit change of measure between BT and WT.

Namely, if you take this distribution and multiply it by this, that change of density, then WT becomes itself a Brownian motion. Let me try to explain to you why this formula makes sense.

So if we just think of BT as distinct Gaussian jumps, then one Gaussian jump with a drift V can be imagined like this, so it's -- I guess this is the density, up to some normalizing constant, of a Gaussian which we just translate in the direction V.

But this is actually, this can be written like this. So it's E to the minus W squared over 2, plus W dot V, minus V squared over 2. And if we think of B, which corresponds to BT, as just W minus V, then this dot product is just B dot V plus V squared over 2.

So what we get is the following. We get E to the minus W squared over 2, which is just the Gaussian density, times E to the B dot V plus V squared over 2. This is all of these together. Which means that if we multiply by the inverse of this --

>>: [Indiscernible] the plus has no [indiscernible].

>> Ronen Eldan: Sorry. V squared of course. So we see that basically W is the Brownian motion if we only -- if we apply a change of measure which is the inverse of this. But if you look at this, this is exactly this formula, only we're multiplying many, many infinitesimal guys like this.

Right. We're just summing up B dot VT and minus one-half VT squared. So you can also, for some -- to get some intuition about this, you can think of it as: if V points in one direction, in order to make my thing a Brownian motion again, I want this thing to be large. I want my BT to cancel out this drift.
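In continuous time, one standard way to state what was just derived heuristically (assuming the drift V is adapted and has, say, bounded energy, so Novikov's condition holds):

\[ W_t = B_t + \int_0^t V_s\,ds, \qquad \frac{dQ}{dP}\bigg|_{\mathcal{F}_1} = \exp\Big(-\int_0^1 \langle V_t,\, dB_t\rangle - \frac{1}{2}\int_0^1 |V_t|^2\,dt\Big), \]

and under the new measure Q, the process (W_t) is itself a standard Brownian motion.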

Okay. So this is Girsanov's formula, and what's really nice about it is that it implies more or less the following thing. It says that if I have some set in Gaussian space such that, okay, can you see what I'm doing here? If I have a set in Gaussian space such that I can take a Brownian motion and force it to go into this set without applying too much of a drift, then basically this change of measure is not too large, which means that the Gaussian measure of this set cannot be too small. So it's a very nice way to lower bound the Gaussian measure of a set. I just have to construct some drift that goes inside it which is not too strong. Note that it's very important for this drift to be predictable. It cannot just do something that relies on what happens in the future. I cannot cheat.

Okay. So this is Girsanov's theorem, and the next thing I want to define, or actually remind you of from yesterday, is the so-called Föllmer drift. So this is the particular process VT that James defined yesterday. I'm going to write these formulas because I need them for later. So again, I have the Brownian motion BT, and I define the process WT with this equation: dWT is dBT plus VT dt. I take this Brownian motion plus this drift increment, where VT is the following. It's the gradient of log of P one minus t of F at the point WT. So this is exactly the formula we saw yesterday, which, okay, so I guess you remember we had the set E and we wanted to go in the direction of the gradient of the log. This drift doesn't exactly go in the direction of the gradient of the log of the function, but it somehow goes in the direction where it expects the gradient to be at time 1, right. So that kind of makes sense.
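Written out, with P_{1-t} denoting convolution with a Gaussian of variance 1-t (the heat semigroup run for the remaining time):

\[ dW_t = dB_t + V_t\,dt, \qquad V_t = \nabla \log \big(P_{1-t} f\big)(W_t), \qquad W_0 = 0. \]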

Moreover, if we define MT to be just the expectation, well, the convolution of F with the Gaussian at our current point, which means this is more or less what we expect the value of F to be when the Brownian motion reaches time 1, then this is going to be the only, like, heavy calculation done in this talk, so I'm going to go over it quickly. If you don't understand it, it's not so important. I'm going to give you an intuition about what's going on here later.

Basically, if I differentiate this process MT, I get -- so I just differentiate this formula. I get the partial derivative of this with respect to T. And then, because WT is a diffusion, I get the Laplacian of this thing. It's not hard to see that these two things exactly cancel out by the definition of PT. And then I get the two terms that are a result of basically where WT is going. So I get the gradient of this dot dWT, which is exactly these two terms, and if I take the logarithm on both sides and use Itô's formula again, I get that d of log of MT is exactly this thing, and if I go to the previous slide for a second, you see that I have exactly the same formula in Girsanov's formula. So what does this mean?
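The upshot of the calculation, writing M_t = (P_{1-t}f)(W_t), is roughly the following sketch (the heat-equation term cancels the Laplacian from Itô's formula, and what remains is):

\[ d\log M_t \;=\; \langle V_t,\, dB_t\rangle \;+\; \frac{1}{2}\,|V_t|^2\,dt, \qquad \text{so} \qquad f(W_1) \;=\; M_1 \;=\; \exp\Big(\int_0^1 \langle V_t, dB_t\rangle + \frac{1}{2}\int_0^1 |V_t|^2\,dt\Big), \]

using M_0 = P_1 f(0) = \int f\,d\gamma = 1. This is exactly the reciprocal of the Girsanov density from the previous slide.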

Let's take T equal to one here. I get this formula. And it means, according to Girsanov's theorem, that WT is a Brownian motion under the measure whose density is one over F of W1. This exactly means, to put it differently, it just says that B1 is distributed according to the Gaussian measure and W1 is distributed according to mu, basically F of W1 times the Gaussian measure. So James already kind of proved this yesterday in the discrete setting, that this drift VT exactly gives us the terminal distribution of F. And this is a proof of this thing. So somehow this drift is very, very natural. Also there's this minimality thing that James talked about yesterday that's going to somewhat help us later. So, okay.

So now we have a very strange way to recover this distribution mu, this F d gamma. And we can finally try to understand how one can perturb this thing, and the perturbation will be the following. So we fix a parameter delta, which you should imagine as the delta we had before. In the end we'll consider multiple values of delta, which will give us many different perturbations of an initial set. And now we take this process WT and we add some extra drift to it. So dXT delta will be dBT plus one plus delta times VT dt.

So this is the sadism that James talked about yesterday.

>>: [Indiscernible].

>> Ronen Eldan: What?

>>: Can you use a different --

>> Ronen Eldan: Oh, yes, sorry. I have a different -- yeah. So this is exactly the sadism that James talked about yesterday. WT is the process BT with some pain induced. We gave it some pain. We push it in some direction and it suffers a bit, or exactly enough to get to the distribution of F, and here we're being really sadistic: we give it even more pain than what we need to get to F. So basically, you can imagine, so if we have the Gaussian measure, this is the origin, we have some set, and we have some drift that we have to apply in order to get to this set, so now we do some overshooting. We give a little bit more of this drift. So okay. I guess this picture was pointless.

And now I want to try to analyze the change of measure that this process induces using Girsanov's formula. So let's see. Girsanov's formula tells us the following. I have an exact formula for the change of measure from the measure according to which B is the Brownian motion. So maybe I'll actually draw a little thing here. So we have the following processes.

We have the processes BT, WT, and XT delta. BT is the Brownian motion -- this is a Brownian motion under the measure P. WT is a Brownian motion under the measure Q, and XT delta is a Brownian motion under the measure Q delta. And we already saw -- okay. So I just plug this thing into Girsanov's formula and I get this, but remember that if delta is zero, then what's written in here is exactly the change of measure between Q and P. So -- sorry. So I can just factor out this one here and one here. And I'm only left with the delta terms, which basically say: okay, this is the energy I have, this is the measure I'm losing by pushing BT into WT, but I'm going to also lose the energy of this push. Now, I want to somehow approximate this thing. If we look at it for a second, this thing -- so the variance of an increment of this process is exactly VT squared, right? The variance of VT dBT is VT squared. And here we have the integral of VT squared. So we know that the variance of this thing is basically this. So the standard deviation of this thing, which is a martingale, is much smaller than this, which suggests that maybe this whole term can just be neglected.

Moreover, if delta is small and let's say that it's little O of one, then this delta square, let's also neglect this.
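Concretely, for the drift (1+delta) V_t the Girsanov density factors like this (the first factor on the right is the change of measure between Q and P from before; neglecting the martingale term and the delta-squared term leaves roughly exp of minus delta times the energy):

\[ \frac{dQ^{\delta}}{dP} \;=\; \exp\Big(-(1+\delta)\!\int_0^1\!\langle V_t, dB_t\rangle - \frac{(1+\delta)^2}{2}\!\int_0^1\!|V_t|^2\,dt\Big) \;=\; \frac{dQ}{dP}\cdot \exp\Big(-\delta\!\int_0^1\!\langle V_t, dB_t\rangle - \Big(\delta + \frac{\delta^2}{2}\Big)\!\int_0^1\!|V_t|^2\,dt\Big). \]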

>>: But the energy [indiscernible]?

>> Ronen Eldan: No, no, no. Actually, the integral of VT squared is super constant because we know if I go back to --

>>: [Indiscernible].

>> Ronen Eldan: -- the previous slide, that log F of W1, given that F -- so this is large, meaning that this is much smaller. This can be neglected.

Okay?

>>: [Indiscernible] large because --

>> Ronen Eldan: Because the variance of this is this. Typically.

>>: But the whole thing is large because the [indiscernible].

>> Ronen Eldan: Right. Exactly. Thank you. Yeah. Good point. So right.

So I'm going to neglect this thing, and this thing, and roughly I'm going to say that the only thing I'm paying to change the measure between this set E and E delta is something like this: E to the minus delta times this energy. And this is given that delta is small enough. Forget -- forget this exact one.

>>: Why can [indiscernible].

>> Ronen Eldan: Right. So here, the scaling is different. So it's -- okay.

We'll soon see that small deltas suffice because a little change of delta means a big change in the value of F. Okay. We'll see that actually in the next slide.

So at this point, we want to use our Hessian estimate. Remember, we know that the Hessian of the log is not too small. This exactly means that F at the point X1 delta, this perturbed process, is at least F at the non-perturbed process times -- and now here we consider the Taylor expansion of the logarithm. So the first derivative of the logarithm is just V1. You remember the definition of VT was just the gradient of the logarithm. So we get V1 dot whatever the drift was. This is just the integral of the drift. This is just X1 delta minus W1. Minus, we have this quadratic term, which is something times delta squared. And this thing, it's not so hard to see that it won't be so large. Let's forget that for now and assume that we can just approximate by this thing only. Okay.
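In symbols, since X_1^delta minus W_1 is delta times the integral of the drift, the Hessian bound gives roughly the following (I'm writing Z for the first-order gain, which I believe is the Z that appears on the next slide):

\[ \log f\big(X_1^{\delta}\big) \;\ge\; \log f(W_1) \;+\; \Big\langle V_1,\; \delta\!\int_0^1\! V_t\,dt\Big\rangle \;-\; O(\delta^2) \;=\; \log f(W_1) \;+\; \underbrace{\delta\!\int_0^1\!\langle V_1, V_t\rangle\,dt}_{Z} \;-\; O(\delta^2). \]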

So this is what we expect to gain in the increment of F by the push. Okay?

And at this point, it will be helpful to recall that the process VT is a martingale. So this is something James said and didn't prove. If I have time later, I'll -- the proof is just a straightforward calculation, but I'll say something about it later if I have time.

>>: You did the calculation [indiscernible].

>> Ronen Eldan: Not that VT is a martingale. I don't -- so okay. What we have is the following. We do this push. We get a change of measure, which was equal more or less to E to the Z. And we get -- right? No. We get a change of measure which is equal to E to the minus W, where this is W. And the gain we have in the value of F is E to the plus Z. Okay? The previous slide exactly says that the change of measure is E to the minus this thing. And here we have E to the plus this Z.

>>: [Indiscernible].

>> Ronen Eldan: Oh, delta. They both had deltas, of course. Thank you.

Yeah.

Now, the fact that VT is a martingale -- if you look at this definition for a second, you understand that the expectation of the difference between Z and W, which is what appears in the exponent, is just zero. Right? Because V -- okay. For each T, I'm taking the scalar product with V1, but the expectation of V1 conditioned on time T is just VT. So I get that the expectation of this is also the integral of VT squared, which makes sense, because recall that I'm pushing in the direction where I expect the gradient at time 1 to be. Okay.
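This is the step where the martingale property enters; schematically:

\[ \mathbb{E}\,\langle V_1, V_t\rangle \;=\; \mathbb{E}\,\big\langle \mathbb{E}[V_1 \mid \mathcal{F}_t],\, V_t\big\rangle \;=\; \mathbb{E}\,|V_t|^2 \qquad\Longrightarrow\qquad \mathbb{E}\Big[\delta\!\int_0^1\!\langle V_1, V_t\rangle\,dt\Big] \;=\; \mathbb{E}\Big[\delta\!\int_0^1\!|V_t|^2\,dt\Big], \]

that is, E[Z] = E[W], so E[Z - W] = 0.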

This is actually more or less also the explanation why VT is a martingale.

Okay.

So at least in expectation, this push should -- the contribution I get from the increment of F should exactly cancel out with the change of measure. The problem is that this is only true in expectation. I have no idea where the gradient is actually pointing. It could be that I expect the gradient to point over there, but when I get to time 1, it points like in a completely different direction. Okay.

So I -- it's not so clear that this is enough, but we can look at this thing like this. So we know that the expectation of F over a Gaussian is one, meaning that under this measure Q delta, the expectation of F of X1 delta is one. Now, we can instead take the expectation with respect to the measure, okay, that makes BT a Brownian motion, and then we get F of X1 delta times this change of measure, which is exactly what I wrote here. Now, so we know the expectation of the exponential -- basically the expectation of all of this is one. And the expectation of whatever is in the exponent is zero. This means that the variance of Z minus W cannot be too large, right? Otherwise E to the Z minus W would be more than one. So as long as delta is not too small, this variance is not too big. And finally --

>>: [Indiscernible].

>> Ronen Eldan: As long as delta is not too large, sorry, which means all our estimates from before, yeah.
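Schematically, the bookkeeping in this step, with all of the earlier approximations hidden in the approximate-equality signs:

\[ 1 \;=\; \mathbb{E}_{Q^{\delta}}\, f\big(X_1^{\delta}\big) \;=\; \mathbb{E}_P\Big[f\big(X_1^{\delta}\big)\,\frac{dQ^{\delta}}{dP}\Big] \;\approx\; \mathbb{E}_P\Big[f(W_1)\,e^{Z}\cdot \frac{1}{f(W_1)}\,e^{-W}\Big] \;=\; \mathbb{E}_P\, e^{\,Z - W}, \]

while E_P[Z - W] is approximately zero; together these keep the fluctuations of Z - W under control.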

So finally, we can write something like this. The expectation of -- so now we consider exactly the same thing as here. So this is just the integral of F with respect to the Gaussian measure, but now let's just restrict to one level set.

>>: It's not log of W1?

>> Ronen Eldan: Of course, it's log of W1. Thank you. It should be a log.

So let's just consider this integral restricted to this one level set. So we know that it's at least this thing. We just, again, replace this by the expectation, but now, look, so we know that this thing is quite concentrated around zero. We just established that. So even if we restrict to some subset whose measure is not too small, it should still be kind of close to zero, right? So if we -- remember, we assume by negation that this whole thing is not little o of one; it's just a constant. So if it's just a constant, it means that we restrict to something of probability, say, one over ten. So this thing cannot be too small under this restriction, which means that this expectation is of the same order as the probability that F of W1 is in this same interval. Just meaning that this thing is pretty close to one.

>>: Again log F W1.

>> Ronen Eldan: Again, log F W1, yeah. Sorry. Okay. So this basically is not too small, and then we just get the probability of this. Okay.

So let's -- I know you're confused. Let's try to understand what we achieved so far. So we consider this bad level set where F is between S and 2S -- 1 billion and 2 billion. We looked at the location of this perturbed process given that W1 was in E so --

>>: That was E to the billion?

>>: [Indiscernible].

>> Ronen Eldan: No. No. S is a billion. So we had a Brownian motion that ended up in this set E. We look at -- sorry, this is WT. It ended up in E. We look at this perturbed thing. So this is W1. This is X1 delta. It ends up somewhere. And what we saw is that given that W1 ends in E, the value of F at X1 delta is quite large. It's at least F of W1 times this thing. And we also know that Z is large. So we know we actually get some multiplicative gain by going from W1 to X1 delta. And we know that when we integrate over the paths such that W1 is in E, we still get a pretty decent measure. This exactly means that if we look at -- okay. So it doesn't -- it says something about the measure of paths, but intuitively, what it means is that if we look at the set where X delta terminates, given that W terminates in E, then this set is roughly the same measure as that of E. This is more or less what this equation says. It doesn't say something about -- so it says something about the measure of paths that end in this set, but, well, the measure of Brownian motions which end here is just the Gaussian measure of this set.

>>: Is this equation also true if the [indiscernible] is very small? Because I agree with [indiscernible] --

>> Ronen Eldan: Right. So you get some dependence, right. There is -- if probability goes to zero, then this estimate goes to infinity somehow, of course. But we're just trying to get a little O of one.

>>: [Indiscernible] one over S again?

>> Ronen Eldan: No, no, no, no. It's going to be something like one over log S to some power.

>>: When you change the measure, you get rid of this one over S, is now one over S, like epsilon over S is now epsilon, right?

>> Ronen Eldan: Yeah. But, yeah, but you agree that this contradicts. Of course you have to do something much more delicate to understand exactly how that dependence works.

Okay. So now we know that Z is more or less W, which is more or less this thing. And recall that F of W -- so we have this formula that says that F of W1 is roughly this thing. So we get that this thing is roughly log S, and if we plug that into this formula, we get that F of X1 delta is S, but we get some more multiplicative gain like this. So as long as delta is bigger than something like one over log S, we already get some multiplicative constant gain, and that's almost it. So we know that this thing is in a set where the values of F are bigger, but using Markov's inequality, we can also -- this is pretty easy but I'll skip the details -- we can also know that the values of F cannot increase too much, just because the change of measure wasn't too small. So it cannot be -- I can't just gain probability from nowhere. So just by Markov's --

>>: [Indiscernible] probability one?

>> Ronen Eldan: What's that?

>>: It's larger than S times one plus two [indiscernible].

>> Ronen Eldan: No, no, no. So both of these things are only with probability -- so this depends on this Z, on this variable Z, which is typically close to W, but not deterministic. So for most of the paths that end up here, the value of F here is different than the values of F here. And, well, as I said before, by assigning now to delta different values taken from a suitable arithmetic progression, I can take this level set E and discover new level sets -- E delta one, E delta two, E delta three -- such that they're all disjoint and they all have a measure comparable to that of E, and this exactly demonstrates that mu of E is little o of one.
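Putting the pieces together, the gain per perturbation is roughly the following, very schematically and ignoring all of the error terms:

\[ \int_0^1 |V_t|^2\,dt \;\approx\; \log f(W_1) \;\approx\; \log S, \qquad f\big(X_1^{\delta}\big) \;\gtrsim\; f(W_1)\, e^{\,\delta \log S} \;\approx\; S \cdot S^{\delta}, \]

so each delta of size about one over log S in the arithmetic progression moves the level up by a constant multiplicative factor, at the cost of only a constant factor in measure.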

How much time do I have? I have to finish at 4:30, or now?

>>: 4:35.

>> Ronen Eldan: Okay. So I won't even need so much. Okay.

Yeah. So this exactly means that we created these sets, and this kind of finishes the idea of the proof, and if we optimize over everything, we get that actually this -- this is the dependence we get, up to a log log factor.

Okay.

So I'll just talk about some possible future directions of research related to this, and then maybe I'll try to explain why VT is a martingale, because I think it's really a central point of what's happening here. Okay. So first of all, so the theorem -- so note that the main theorem didn't have this convolution in it. It just talked about functions which are not too log concave, and these functions -- they can be defined on basically any Riemannian manifold, and they even make sense on Riemannian manifolds with densities, so maybe it's reasonable to expect that this kind of result holds in a more general setting of spaces satisfying a Bakry-Émery curvature-dimension condition, okay. So I guess if you don't know what this is, just, okay, a natural question: does this thing hold on the sphere, for example, or on any positively curved Riemannian manifold, hopefully.

>>: What's the measure?

>> Ronen Eldan: So just the measure you get from the Riemannian metric.

>>: Volume metric? [Indiscernible].

>> Ronen Eldan: Yeah. Yeah.

>>: You had Gaussian measure and you [indiscernible] --

>> Ronen Eldan: Right. But if it's a compact --

>>: Okay. Okay.

>> Ronen Eldan: Yeah. So positively curved is --

>>: Strictly positive.

>> Ronen Eldan: Yeah. It's strictly positive.

>>: No, no, [indiscernible].

>> Ronen Eldan: Right. No negative and compact. Right. Yeah, yeah, yeah.

And I guess like the correct normalization, so we will need to know something about the diameter. I'm not sure.

>>: I don't even think we need compact. I mean, we're talking about [indiscernible]. So what's the measure? You need a probability measure and it should be stationary for some kind of diffusion process and then you need --

>> Ronen Eldan: Yeah. For the Brownian motion on the manifold. So just that. Yeah. I don't know if it has like -- it's the usual measure on a Riemannian manifold, on a compact Riemannian manifold. Yeah. Okay.

So what I'm saying is maybe this is like this type of inequality holds in a more natural setting. Okay.

One obvious direction: we get log to the minus one over six, and, well, at least it's natural to expect that a one-dimensional thing will demonstrate what the sharp dependence is, and for just translations of the Gaussian, we get log to the minus one over two, so is this the correct exponent?

>>: This is like [indiscernible]. Really just means F is constant on the half space.

>> Ronen Eldan: Yeah. Okay. Of course there is the discrete version of the question that James mentioned yesterday. So the convolution operator is just re-randomizing some bits on the cube and taking expectation, and the main reason this proof fails there is that we cannot just take a small perturbation of a point on the discrete cube. So we can consider these two processes, these two analogs of BT and WT which James described yesterday, but it's not really -- so we can do it, we can basically -- okay. It's not clear what this point, X1 delta, should be, right? Maybe you can get some random point that -- but, okay.

>>: So if N is not too big, you can apply this previous result [indiscernible]?

>> Ronen Eldan: Uh-huh.

>>: And then if it's very big, then the Gaussian thing is a good approximation. So shouldn't that give you a little O --

>> Ronen Eldan: If N is very big, then not clear to me why the --

>>: [Indiscernible] end result [indiscernible] Gaussian so it doesn't bite the cube at all. [Indiscernible] if you look at the cube but [indiscernible] dimension, then it's true [indiscernible].

>>: [Indiscernible] can only go up so high.

>>: Right. And if N is very large, then, you know, coupling to the Gaussian should --

>> Ronen Eldan: No. If this is -- this is true under some smoothness conditions of the function, I mean, the function is not --

>>: [Indiscernible].

>> Ronen Eldan: Yeah, but not merely --

>>: [Indiscernible] analysis of functions on the discrete cube, even in high dimensions, they don't look like Gaussian space. They can depend very heavily on a small number of coordinates. Or think about tribes --

>>: Yeah again, it depends very heavily [indiscernible].

>> Ronen Eldan: Okay. I guess the type of smoothness --

>>: That's [indiscernible]. Metric function which doesn't [indiscernible] anything in Gaussian space, right? You can't just go to a sub [indiscernible].

>> Ronen Eldan: So this, for those of you who know how to prove hypercontractivity using those semigroup proofs, they basically only use the second variation -- the first variation of what's happening: take the derivative with respect to the semigroup. And what we do here kind of looks also at the second variation. So this, on an intuitive level, when we tried to analyze -- so when we went from W1 to X1, we had this expected gradient, but we also had to scrutinize what happens with the variance of the gradient, and this can be -- this is, ideologically, so you have to look at the second derivatives of what's going on, and of course, an obvious question is: does this give anything more.

So should I take the five more minutes to just talk about this process VT or maybe we can do it off line.

>>: [Indiscernible].

>> Ronen Eldan: Yeah. But I don't want to -- it's just a calculation, and even I don't understand it. You know, something that actually gives you an intuition of what's going on. Okay.

So thank you.

[Applause]
