>> Yuval Peres: Alright, good afternoon everyone. We're happy to have Jordan Greenblatt update us on the latest in this topic of Maximal Inequalities on Cartesian Products, which was essentially started here a few years back when Alexandra [indiscernible] was a postdoc. Anyway, we're happy to hear Jordan.
>> Jordan Greenblatt: Thank you. Thank you for the invite and thanks everyone for coming. The topic is sort of a funny split of classical harmonic analysis and graph theory. Since I assume most people here have a better graph theory background, I want to explain some of the classical harmonic analysis context for this and lead into it, just to give some motivation.
One of the most important objects in all of harmonic analysis is the Hardy-Littlewood maximal operator. This will be the Euclidean one, on R^d. There are going to be some minor technical assumptions on f, but since we're not really talking about Euclidean space I'm just going to completely brush those over.
This is defined as the maximal average, over all balls centered at the point x, of the function f. Incidentally, all our functions are going to be non-negative, but I'll try to remember to put in absolute values. If I don't, just assume they're in as far in as they can be. There are some silly conventions, but it doesn't matter; for all we care they can all be non-negative. I'm going to use M for maximal functions throughout; if it's ambiguous because there's more than one in a section, I'll just use subscripts or something like that.
Okay, so, yeah, this is the maximal average, or supremal average, over all balls centered at a point. For example, in dimension one, take f to be the indicator of an interval. f looks like, you know, just a step, and then Mf, the maximal function, looks a bit different. In the interior here it's going to equal one, because you can just take a radius small enough that the entire ball is contained in the interval. On the boundary, sorry, on the boundary it's a half; it's pretty easy to check. Then it decays linearly, by which I mean it's bounded above and below by constant multiples of one over |x|.
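In symbols, with the technical assumptions brushed over, the definition and this one-dimensional example read roughly as follows (a sketch, not the board verbatim):

    M f(x) = \sup_{r > 0} \frac{1}{|B(x,r)|} \int_{B(x,r)} |f(y)| \, dy,

and for f = \mathbf{1}_{[-1,1]} on the line: M f = 1 on (-1,1), M f(\pm 1) = 1/2, and M f(x) \asymp 1/|x| for large |x|.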
>>: That’s what you call the.
>> Jordan Greenblatt: What did you say?
>>: Inverse linear.
>> Jordan Greenblatt: Inverse linear, sorry. I would have said grows otherwise I didn’t want a double
negative. Okay, so it grows inverse linearly. Okay, I’m done. I’ll get back to the actual talk.
Two things to notice here. One is that it's an L^∞ contraction: if f is bounded, then no matter what f is, all of its averages share the same bound, so the maximal function does too. However, not only is it not an L^1 contraction, it's not even bounded on L^1, right. This is a nice indicator of an interval; it has total mass two. But the maximal function has only this inverse-linear decay, and as a result it decays too slowly at infinity to be integrable.
A natural question is: what about for p in between? This is the first classical theorem we're going to talk about, the Hardy-Littlewood maximal theorem, which says that for all p greater than one (this is actually a specific part of it, but it's the part that's relevant to us) the maximal function is bounded up to a constant in L^p by f. This dependence on p is inevitable; as that picture would suggest, as p goes down to one the bounds go to infinity.
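Written out, the part of the theorem being used is roughly:

    \| M f \|_{L^p(\mathbb{R}^d)} \le C_{p,d}\, \| f \|_{L^p(\mathbb{R}^d)} \qquad (1 < p \le \infty),

with C_{p,d} \to \infty as p goes down to one, as the indicator example suggests.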
I'll get back to this dimensional dependence in just a moment. This was later strengthened by Stein, I think rather remarkably, to the spherical maximal operator, which is exactly what you'd expect: here we're looking at spheres centered at x, surfaces of spheres rather than interiors of balls. Keep in mind, by the way, that this operator pointwise dominates the Hardy-Littlewood maximal operator, simply because an average over a ball is just a weighted average of averages over spheres, so this is a much more dangerous operator, I guess, for lack of a better way to say it.
What he showed is that for all d greater than or equal to three (this was later extended to two by [indiscernible]) and all p greater than d over d minus one (test on a delta to see why that's necessary), we have an L^p bound, again. The constant will depend on p. Originally the dependence was very bad. Oh, sorry, no, the dependence is very bad. Right, so there's bad dependence on p.
But remarkably, as long as p is greater than this threshold, we do have this bound. Also, the constants in the inequality can be taken to be non-increasing in d: once the dimension is high enough for a given p that it crosses the threshold, these constants are non-increasing. Simply because you can view the averaging process over a sphere in dimension d plus one as two random processes: picking a random direction in the sphere of dimension d plus one, then averaging over the cross section, picking a random element in the cross section. Those two combined, it inherits the bound.
It's a very elegant method called the method of rotations. Oh, this is also Stein. He's an important guy but it still seems weird. In particular this lets us get rid of this dimensional dependence. The original dimensional dependence from the classical proof is actually exponential; it's a nice proof but it has ugly dimensional dependence. But here, for a fixed p greater than one, up until d crosses the threshold and cuts under p, we can just use the classical constants, which is fine in many cases. Then for all higher dimensions we can just use the constant that comes from the spherical operator, since it dominates the ball operator.
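As a formula, the spherical maximal bound being described here is roughly:

    M_{\mathrm{sph}} f(x) = \sup_{r > 0} \int_{S^{d-1}} |f(x - r\omega)| \, d\sigma(\omega),
    \qquad
    \| M_{\mathrm{sph}} f \|_{L^p(\mathbb{R}^d)} \le C_{p,d}\, \| f \|_{L^p(\mathbb{R}^d)} \quad \text{for } p > \tfrac{d}{d-1},

with \sigma the normalized surface measure; d \ge 3 is Stein's range, d = 2 is the later extension just mentioned, and for fixed p the constants can be taken non-increasing in d once the threshold is crossed.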
This is the sort of thing we're going to be talking about today: dimensional constants for spheres in the setting of finite graphs rather than Euclidean space. But before I step forward, I want to say, since I imagine this is new to some people, that it's not just contrived; there are applications. In fact this is, like I said, one of the most important tools in analysis. One reason is for pointwise convergence questions.
The first thing I learned, and I think a lot of people learn, as a corollary (which, as you've all pointed out, was not the original proof, but it's implicit in the original proof and is in the actual original proofs of a lot of other pointwise convergence questions) is the Lebesgue differentiation theorem. It says that if we average the function f over a ball centered at x and send the radius to zero, then this limit exists and is equal to f of x for almost every x, or, if you like, almost surely over any reasonable probability on Euclidean space.
This is sort of remarkable. If f is continuous this is the fundamental theorem of calculus. If f is, you know, hideously jagged, that's not at all obvious. I mean for almost every x, sorry. This uniform control that comes from the maximal bound is what gives us this.
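In symbols, the statement being invoked is:

    \lim_{r \to 0} \frac{1}{|B(x,r)|} \int_{B(x,r)} f(y)\, dy = f(x) \qquad \text{for almost every } x \in \mathbb{R}^d,

and the standard route to it goes through the maximal bound just discussed.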
Now, in a discrete setting, you know, pointwise convergence is less meaningful, but probabilistic bounds are not. Here's where I'll introduce our first non-classical theorem, which is the work of [indiscernible] that Yuval alluded to earlier. This is on the hypercube, which we're going to write as K_2^n, because we're generally doing it for cliques, but we'll use this funny notation. On K_2^n we define P_k to be the spherical averaging operator: the expectation of the function over the sphere of radius k around x. Here k has to be an integer from zero to n because that's the diameter of the hypercube. Our spherical maximal operator is again what you would expect, the maximum over k.
What they proved is that there is a dimension-free L^2 bound. In other words, for a fixed hypercube it's a finite-dimensional space, so it's not particularly interesting; but asymptotically there's no dependence on this dimension n, which is a rather remarkable and non-trivial phenomenon.
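For concreteness, here is a minimal numerical sketch (not part of the talk, and the function names are my own) that builds the spherical averaging operators P_k on the hypercube {0,1}^n and probes the ratio of the L^2 norm of the spherical maximal function to that of f for random non-negative test functions. Random test functions only give lower estimates; this does not certify the dimension-free constant.

    import itertools
    import numpy as np

    def spherical_operators(n):
        """Averaging matrices P_0, ..., P_n over Hamming spheres of {0,1}^n."""
        verts = list(itertools.product((0, 1), repeat=n))
        dist = np.array([[sum(a != b for a, b in zip(u, v)) for v in verts] for u in verts])
        ops = []
        for k in range(n + 1):
            mask = (dist == k).astype(float)
            ops.append(mask / mask.sum(axis=1, keepdims=True))  # row-stochastic: average over the k-sphere
        return ops

    def spherical_maximal(f, ops):
        """M f(x) = max_k (P_k f)(x)."""
        return np.max(np.stack([P @ f for P in ops]), axis=0)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        for n in range(2, 9):
            ops = spherical_operators(n)
            f = rng.random(2 ** n)  # non-negative test function
            ratio = np.linalg.norm(spherical_maximal(f, ops)) / np.linalg.norm(f)
            print(n, round(ratio, 3))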
The way this came up, just to give some motivation, was in the context of Alexandra looking at the UGC, the Unique Games Conjecture rather, here at Microsoft, right. Without going into the full details of the original problem, I'll give some intuition for how it came up, how this sort of thing might come up naturally.
The image she had was, you know, imagine you have a hypercube and there's some information spread out around the vertices. The way you can find out information is by starting at any seed point on the hypercube and working outward. You poll all of the neighbors of your seed point and get the information there. If you get good information you can use that to examine the sphere of radius two, which you can use to examine the sphere of radius three, and so on.
Now, imagine there's an adversary who wants to obscure this information. They have a limitation on the amount of bad information. Take these terms with a grain of salt; I'm trying not to be too quantitative in the interest of saving time. But you as the player have control over how much bad information they can spread, thinking of it in terms of mass, sort of vaguely.
What they want to do is spread bad information; here I'll use these chunks to represent it. They want to spread it in such a way that eventually, starting at a given seed point, when you survey a sphere you get a large portion of this bad information, and it ruins the process for the rest of your search of the graph radially outward.
You know, imagine we start at this seed point. Here there's a little bad information, but it's not enough to throw us off our track. Here on this second one there's a little more, but maybe it still doesn't hit that threshold. But here on the sphere of radius three we see a ton of bad information, so much that when we look to the sphere of radius four and onward it just ruins any hope of getting the right signal.
Meanwhile, since you as the player are allowed to pick any seed point, our adversary's challenge here is to spread the small amount of mass around in such a way that no matter what seed point you choose, eventually there's a sphere, generally not of the same radius, but eventually there's a sphere that has a high probability of bad information.
Say at this point, in the sphere of radius one, we have this massive glob of bad information. Since the sphere of radius one is small, that ruins that seed point. The adversary's goal is to spread the limited amount of bad information in such a way that, radially out from any seed point, it will eventually pile up. But this theorem says that it can't. Because as the player we can pick the amount of bad information, the mass of the bad information, low enough such that, with f here being the function describing the, you know, mass of bad information at a given vertex, this L^2 norm will be small, which means this L^2 norm of the maximal function will be uniformly small, right, so we can pick the same parameter for all the hypercubes, which is part of the problem. That means by Chebyshev that there has to be some point, actually a lot of points, with small maximal function, which means that the averages over all spheres there are pretty small.
What this did in practice was eliminate a bad research direction. It did serve a practical purpose, but it didn't give a positive result; it just narrowed the search. On the other hand, I mention this partially because I think it's very believable, because, you know, the idea of signals emanating radially is a very natural one. This gives some idea that this could be useful in other contexts.
Oh, and I forgot to follow through on what we're doing today. Later on in the same year this was published, in thirteen, my colleague Ben Krause showed that for all p greater than one we have a dimension-free bound; well, the weak-type endpoint fails, but I'm not going to mention weak (1,1) anymore because I realize that's a whole can of worms. But he showed that there is a dimension-free bound, depending on p, and it will explode as p goes to one.
Then, together with Alexandra and Ben, I joined and we extended this to arbitrary cliques, where the bound will depend on the size of the clique and on p, but it won't depend on dimension. Throughout the rest of the talk we're going to think of the size of the base clique as fixed; what we're really trying to deal with is the dimension going off to infinity.
Okay, now, I'd already mentioned Stein's work; Stein looms very large in this. The sort of framework for the whole theorem came from an old paper of his, I think from sixty-one, the paper on the maximal ergodic theorem. It's sort of a variant on ergodic theorems. If T is a self-adjoint Markov operator and M is the maximal operator given by taking, at a given x, the largest value over the powers of T, then we have a dimension-free, or not dimension-free, a universal L^p bound.
I think this is kind of a remarkable result, given that it is the framework for the theorems we're talking about today. We would wish that for every spherical average there's some corresponding index such that we have a pointwise bound of the form, for non-negative functions: P_k of f is bounded up to a constant by the corresponding power of a self-adjoint Markov operator, which here we'll take to be the stochastic adjacency operator. This is just a random walk; this is kind of the canonical choice here.
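The Stein result being leaned on, and the hoped-for pointwise bound, can be sketched as follows (the index notation j(k) is mine):

    M f(x) = \sup_{j \ge 0} T^{j} f(x), \qquad \| M f \|_{L^p} \le C_p\, \| f \|_{L^p} \quad (1 < p \le \infty),

for T a self-adjoint Markov operator, with C_p depending only on p; the wish would be a pointwise bound P_k f \le C\, T^{j(k)} f for non-negative f and a suitable index j(k).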
It would be great if we could say that this spherical average is pointwise dominated by this, but unfortunately it's not actually true, and it's not actually that hard to see why. Each of these operators is given by the expectation of f over a radial probability mass, radial with respect to x, that is. We can see what these probability masses are, how they act on the spheres.
Keep in mind, by the way, that since the clique power is vertex transitive it doesn't actually matter that this delta is centered at x; all spheres of a given radius are the same up to automorphism. This one is pretty simple: it's just a delta, right; it gives mass one to the sphere of radius k and mass zero to all other spheres.
What this guy will do.
>>: Can you explain again what is this guidance of p one k two.
>> Jordan Greenblatt: This?
>>: Yeah.
>> Jordan Greenblatt: What I’m imagining here is just a random walk of some length.
>>: Okay.
>> Jordan Greenblatt: Here k [indiscernible] is standing in for a random walk of any length. I want to show that it's not going to be possible, I mean show weakly that it's not going to be possible, to find a length such that if we take a random walk of that length, that probability mass is going to dominate this one up to a constant. Maybe it'll be clear after I draw the next graph. If not, please ask.
We'll get something with some, you know, central-limiting, Gaussian-looking structure from a random walk. It is possible, if big K is small enough, that we could actually take a long enough random walk that this peaks at that sphere, right, that the expectation of this process is this radius. But even so, we're going to be capped at giving mass on the order of one over root K to that sphere. Does that clarify what I'm getting at?
>>: Yes.
>> Jordan Greenblatt: Okay, cool, thank you. The point is, if you have a function with a lot of mass concentrated at the K-sphere, this will see that mass much more than this probability mass will, so this is hopeless. But we are inspired to compare averages. It turns out that if we compare averages of these with something like averages of these, we can get such a domination.
On the other hand, it's not clear why dominating Cesàro means, which I'll define in a second, of the spherical averages would tell us anything about the spherical averages themselves. We're going to come back to these probabilistic bounds in a moment. But first we're going to see how to split the spherical averages into a smoothed-out part given by Cesàro means and a part that can be handled with spectral techniques.
For a fixed big K, we take the Cesàro mean of the first K spherical averages and look at the difference. After the Abel summation formula and a little application of Cauchy-Schwarz, Cauchy-Schwarz in k (this is still a pointwise bound in x; x is fixed throughout this computation), we get that this is equal to a sum from k equals one to big K, written as a first factor, I'm writing it in this form so you can see exactly how you would derive it using Cauchy-Schwarz, times the square function from k equals one to big K with weights of root k.
This first factor is actually less than one, not even up to a constant. The upshot of this is that we can get, for all big K, the uniform bound that the spherical average is pointwise less than the Cesàro mean plus the square function, which is a pain to rewrite.
In particular we can get rid of the dependence on big K on the right-hand side. Right, if we take the supremum over big K here we get, I'll define these in a second, the smooth maximal operator, and since these are all non-negative summands there's no harm in sending big K up to n, where we have that the smooth maximal operator is simply the average of the Cesàro means.
>>: The maximum.
>> Jordan Greenblatt: Did I say average?
>>: Yes.
>> Jordan Greenblatt: Thank you, that would be the second Cesàro mean. Yes, the maximum of the Cesàro means over big K. This square function down here is the same thing extended up to n, from k equals one.
Now, at the moment it might not be, which might be an understatement, it's probably not evident why this is a useful way to break it up. Hopefully it will be by the end. But the motivation is that these smooth kernels are much easier to bound from this probabilistic standpoint; we will be able to do that in the way that this failed. Whereas the square functions play very nicely with L^2, as all square functions want to do, and we'll be able to use spectral methods there. Between them we'll be able to get, taking a supremum over K, that the spherical maximal operator is bounded in L^2, just taking this pointwise bound to an L^2 bound by the smooth part plus the square function. Let's put the square function off to the side right now; we'll come back to it at the end, and finish up with the smooth maximal operator over here. Yeah, so I want to leave these up.
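Reconstructed in symbols (a sketch of the computation just described; the notation C_K for the Cesàro mean is mine), with C_K f = \frac{1}{K}\sum_{k=1}^{K} P_k f and P_0 = \mathrm{Id}:

    P_K f - C_K f = \frac{1}{K} \sum_{k=2}^{K} (k-1)\,(P_k f - P_{k-1} f)
    \;\le\; \Bigl( \sum_{k=1}^{n} k\, |P_k f - P_{k-1} f|^{2} \Bigr)^{1/2} =: S f,

pointwise in x, by Cauchy-Schwarz in k (the leftover factor \frac{1}{K}\bigl(\sum_{k \le K}(k-1)\bigr)^{1/2} is less than one). Taking the maximum over K gives \sup_K P_K f \le \max_K C_K f + S f.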
>>: Do you have this k coefficient [indiscernible]?
>> Jordan Greenblatt: What about the k coefficient? Yeah, the k coefficient should be there.
>>: Okay, because you didn’t have it on this, on the left [indiscernible].
>> Jordan Greenblatt: Yeah, it's split as root k times root k. We got a k from the Abel summation formula; we put half of it into the factor that disappeared and half of it here. By the way, if you want a reference on that, that whole thing is in the Stein maximal ergodic theorem paper. It's in [indiscernible]; they have reviews of a lot of this stuff.
Okay, now, before we go any further, I just want to draw out the radial probability mass induced by these smooth kernels, which looks pretty simple. It's just uniform, but I want to view it as on the order of one over big K, since that's all I care about: it's just a uniform selection among the first K spheres, and then zero after that.
Okay, this leads us to what I'd said earlier, that we would be sort of borrowing Stein's techniques, if not black-boxing his result. We start with the same seed bound that he does, which is a much older theorem of Hopf, Dunford, and Schwartz. This is a continuous version; there's a classical discrete version, and this follows just by looking at Riemann sums and limits.
Suppose we have a semigroup of Markov operators. This means, if you're not familiar, just that the operator at time t plus s is equal to the operator at time t composed with the operator at time s. If you want, just think of this as a time evolution: right, if you freeze time after s seconds, start it again, and then freeze it after t more seconds, that's no different than just letting it run for s plus t seconds.
That is sort of the intuition we have for our semigroup. If we have a continuous semigroup of Markov operators, I didn't say continuous; this is trivial for us because all of our spaces are, I mean, they're getting larger, but they're all finite in and of themselves, so for soft reasons continuity is unimportant. But I'll leave it up there.
If we have this family and we define the following maximal function, this time-averaged maximal operator, then predictably we have an L^p inequality, sorry, with the constant depending on p, for all p greater than one. This could almost be viewed as a corollary of the Stein proof from earlier, except these don't have to be self-adjoint, but that's dumb, because that's like saying you can build house lamps because you can build lasers. This is the low-tech result used to prove that; that's why I bothered to restate it.
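As a formula, the seed bound is roughly:

    M_{*} f(x) = \sup_{t > 0} \frac{1}{t} \int_{0}^{t} T_s f(x)\, ds, \qquad \| M_{*} f \|_{L^p} \le C_p\, \| f \|_{L^p} \quad (1 < p \le \infty),

for (T_s)_{s \ge 0} a continuous semigroup of Markov operators, with C_p depending only on p.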
Our goal then is to compare this probability mass to a probability mass generated by something like this. The way we do this is by introducing the noise semigroup. I can't think what I should erase; I'll erase over here. The noise semigroup, this is just on the clique, on the base graph, is computationally given by e to the negative t [indiscernible], with the convention that the [indiscernible] is positive definite; I've seen both conventions. But more geometrically, intuitively, we can view this as a very natural process, a diffusion process. Right, so if we look at K_4, right, this is the mass that's induced by placing a delta at one of the vertices and then just letting it diffuse naturally. It treats all of the other vertices equally because it's a clique; this won't be true if we look at other graphs. But in this case it has a nice simple expression.
We're actually going to linearize it. For, say, t less than some small enough c, we can basically write this, up to harmless constants, as a convex combination of the identity and this here, this being the spherical averaging operator of radius one on the clique, the stochastic adjacency if you like. Linearizing at zero, we get something we can use as the noise operator.
Keep in mind, by the way, that when we look at the smooth maximal operator, this is what we're trying to bound in L^p now, it's good enough to do this up to c n. The reason being that if we look at the max over K greater than or equal to c n, we can just bound this crudely. Let's say we look at the p norm; this denominator takes care of everything. Right, it is less than or equal to, well, here we have n plus one copies of something that is an L^p contraction. It's, yeah, so there's no harm in truncating our smooth maximal operator at a multiple of n; past that it becomes trivial. That's why this linearization really does no harm.
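In symbols, with A the stochastic adjacency of the base clique and \Delta = I - A (an assumed sign convention), the noise semigroup, its linearization, and the crude truncation bound read roughly:

    N_t = e^{-t\Delta} \approx (1-t)\, I + t\, A \quad (0 \le t \le c),
    \qquad
    \max_{c n \le K \le n} C_K f(x) \le \frac{1}{c\, n} \sum_{k=0}^{n} P_k f(x),
    \qquad
    \Bigl\| \max_{c n \le K \le n} C_K f \Bigr\|_{L^p} \le \frac{n+1}{c\, n}\, \| f \|_{L^p}.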
When we look at what our noise operator does, when we apply a noise operator to a clique power, we're just thinking of tensor powers of this operator. Right, so if we're looking at, say, K_3 to the five, we have five components, each of them with a K_3 in it. We start at a point x, which is some product point. Then, you know, start the clock; as t increases, all we're doing is steadily moving the mass from here to here, independently in each component.
In particular, the radius of the outcome is a sum of i.i.d. variables. This induces the following probability mass. Here we're going to say, for now, just fix little k less than or equal to big K; we were thinking about fixing a big K here, and the spheres that matter for this probability mass, the spheres I care about in my bound, are those with radius less than this big K marker.
If t is close to k over n, in the sense of being within root k over n of k over n, we get something like this, something Gaussian-looking that peaks, and peaks close to k, so that k lies within up to a constant one standard deviation. In particular, so this is root k, in particular this means that it gives probability on the order of one over root k to that sphere.
The significance of this is what happens when we integrate, when we take the time average up to big K over n, oh sorry, not the Cesàro mean, the time integral. We want to see what this does, what this probability distribution, so let's put a delta in here, does to the sphere S_k for an arbitrary little k less than or equal to big K. This is bigger than what we get by restricting the integral to where it gives significant mass: right, up to a constant it gives mass on the order of one over root k there, or maybe not quite that much, but up to a constant.
It will give mass one over root k for a duration of root k over n to the sphere we care about, and so, up to a constant, the average is at least one over big K. The upshot of this is that if we average these noise operators, we will have a probability mass that, up to a constant, dominates the smooth kernels.
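As an estimate (a sketch under the heuristics just described, with constants suppressed), writing N_t for the tensorized noise operator:

    \frac{n}{K} \int_{0}^{K/n} (N_t \delta_x)(S_k(x))\, dt
    \;\gtrsim\; \frac{n}{K} \cdot \frac{\sqrt{k}}{n} \cdot \frac{1}{\sqrt{k}}
    \;=\; \frac{1}{K} \qquad (1 \le k \le K),

so the time average of the noise kernels gives each sphere of radius at most K mass on the order of 1/K, which is the mass the smooth (Cesàro) kernel gives it.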
In particular, we saw, right, we have the semigroup bound, and I had said that the reparameterization was harmless. What I meant by that is that if we define our noise maximal operator by the supremum over zero less than [indiscernible] less than or equal to c of these time means, then it inherits the L^p bounds.
>>: Vaguely this is like the opposite direction for it to go because instead of concentrating your
averaging over a bigger interval, right, you’re averaging over all.
>> Jordan Greenblatt: You mean instead of going to the more singular spherical?
>>: Right.
>> Jordan Greenblatt: We've already taken that leap. The reason that this is useful is that here we lose basically half a power of k when we average, whereas here we get back a power of k when we average, so in the long run the smooth kernel is going to basically gain more by smoothing out than the noise operator does.
That might be a little too pithy. But this bounds the smooth operator. Now we still have to deal with the square function, and if you got lost in the last part that's fine, because we're sort of starting over with the spectral portion.
Okay, so we have our square function over there. What remains to show is that it's bounded in L^2; well, actually, let's square both sides and bound by the norm of f squared; I'll put a question mark here. Let's take a look at these expressions. Well, this is the sum over x in the clique power and over k from one up to n; that's what the [indiscernible] marker is now, okay. Yeah, rearranging the sum, and this is sort of the whole point of square functions, we can bring an L^2 norm all the way in here. What we get is the sum over k of the L^2 norm squared of this difference.
If you're not used to this computation, it's pretty quick. But the point is, these spherical averages on the clique power are self-adjoint and they commute. If you don't believe me, we'll see why in a moment. But they do, so they can be simultaneously and orthonormally diagonalized. Geez, yeah, take an orthonormal eigenbasis for the spherical averaging operator family; then we can write this L^2 norm as the sum over all eigenfunctions in this basis of this expression. I'll explain this expression in a moment, although you will likely figure it out from context.
I'm borrowing some notation from Fourier analysis; this really is discrete Fourier analysis. By this I mean the eigenvalue of this eigenfunction associated with the [indiscernible] operator, and here I mean the inner product between the function f and the function v; oh sorry, this is k here. The upshot is that I can bound this by the sum over k of the sup, or max I guess, over v of these eigenvalues squared, times the two-norm of f squared, right. This is an orthonormal basis, so the sum comes out to the two-norm squared; I'm just pulling out the largest of these coefficients.
All of a sudden we've gotten rid of f. Now this is just a question about the spherical averaging operators and their spectral properties. Right, so now we're going to have to say more about the eigenvalues and eigenvectors. But we've gotten it to a nicer form, I would say.
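Filled out in symbols (a sketch; v runs over a simultaneous orthonormal eigenbasis, \lambda_k(v) is the eigenvalue of P_k on v, and P_0 = \mathrm{Id}):

    \| S f \|_{2}^{2}
    = \sum_{k=1}^{n} k\, \| (P_k - P_{k-1}) f \|_{2}^{2}
    = \sum_{k=1}^{n} k \sum_{v} |\lambda_k(v) - \lambda_{k-1}(v)|^{2}\, |\langle f, v \rangle|^{2}
    \le \Bigl( \sum_{k=1}^{n} k\, \max_{v} |\lambda_k(v) - \lambda_{k-1}(v)|^{2} \Bigr) \| f \|_{2}^{2},

so the question reduces to bounding the k-sum of the worst-case eigenvalue differences by a constant.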
Notice, because of the structure of the clique, really because the clique has diameter one, that there's a very nice relationship between averaging operators on the clique powers and on the underlying clique. Which is that averaging over the k-sphere on the clique power is the same as picking uniformly at random a k-subset of the n components, and applying the operator that's the stochastic adjacency of the clique in all of those chosen components, and the identity everywhere else.
Right, so P_k of a delta is just: pick k components and change those and only those. The eigenvectors here are just tensor products of eigenvectors of the adjacency, the stochastic adjacency, of the clique, simply because, I mean, these are, based on this observation, convex combinations of tensor products of the clique adjacency and the identity. Certainly we can orthonormally diagonalize that this way.
What we get is, up to a normalizing factor, that our basis is the basis of functions of this form, with y in K_m^n. What I mean by this is that zeta here is a primitive m-th root of unity, we identify vertices with the numerical labels zero through m minus one, this is just more for easy notation than anything else, and then this is just a numerical dot product. But these are the tensor products of eigenvectors.
What we see is that the more of the components of a given y that are not zero, right, the more components that are not the constant eigenvector... So we see zeta to the y dot x: on the clique this has eigenvalue one if the coordinate of y equals zero, right, then it's just a constant vector and this is a Markov operator, and negative one over m minus one otherwise. This is a pretty straightforward computation.
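In symbols (a reconstruction; the name r for the number of oscillating coordinates is introduced here), with \zeta = e^{2\pi i / m} and y \in \{0, \dots, m-1\}^n:

    v_y(x) = \zeta^{\, y \cdot x}, \qquad
    \lambda_k(v_y) = \mathbb{E}_{|S| = k} \Bigl[ \bigl( -\tfrac{1}{m-1} \bigr)^{|S \cap \mathrm{supp}(y)|} \Bigr],

where S is a uniformly random k-subset of the n coordinates, since the stochastic adjacency of the base clique has eigenvalue 1 on the constant character and -\tfrac{1}{m-1} on the others; r = |\mathrm{supp}(y)| counts the oscillating components.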
What we see is that the eigenvectors, sorry, the eigenvalues here are governed by the probability that, if you take a random component in this spherical averaging operator, that operator is the adjacency rather than the identity on the clique, and then you take a random component in this eigenvector and it is an oscillating vector, not the constant. That probability determines the eigenvalue, because if and only if there's a collision do you get a non-one factor in the eigenvalue.
This plays into some massive sum. But the two observations to be had here are these. First of all, if v has a very low frequency, right, that is, v has very few oscillating components, then these eigenvalues are going to be very similar. They're not going to be small; they'll be close to one, but they will also be close to one another. If v is, you know, all ones with one oscillating component, the difference in probability here, and hence the decay, is just on the order of one over n, right, because it's the difference in the probabilities that the first component here and the first component here is the adjacency rather than the identity.
As a result this gives us, with a grain of salt, that if v has r non-constant components, then the difference is equal to, with some shifts of indices, but for all intents and purposes equal to, r over n times the eigenvalue of P_k; right, so the lower the frequency, the more gain we get from that difference. If k is bigger, sorry, if the product of k and the frequency is large, that means a high probability of collision, which means a high probability of cancellation in the case of the hypercube, or contraction by powers of m in the case of higher cliques. We have that this thing is bounded by e to the negative Omega of r k over n. This corresponds to the probability, in some sense, of a collision between an adjacency in this operator and an oscillating component in the eigenvector.
As a result this thing has to be smaller than, I guess I should put quotes around here too if I put an equals, has to be smaller than k times (r over n) squared times e to the negative r k over n, and basic summation tricks give you that the sum is bounded up to a constant by one. If you like, I mean, think of this as an integral in k and integrate by parts. That does it. That bounds this thing in L^2. I mean, a lot of steps, but.
>>: How does the bound depend on the size of the clique?
>> Jordan Greenblatt: How does the bound depend on the size of the clique? The linearization process shouldn't care much, because as the clique gets larger it just gets more and more like a diagonal. The, I mean, larger m means quicker decay in k. I'm not sure, is the short answer.
But I haven't, I mean, I've been so busy trying to actually get the dimension-free bounds for other graphs that I haven't thought much about how the actual constants behave within a class of graphs. It's an interesting question, and at some point I'll probably go back through and check.
>>: [indiscernible] and you get matching lower bounds for specific functions.
>> Jordan Greenblatt: You get matching lower bounds.
>>: Do you, I mean, do you find the right rate of dependence as the clique grows?
>> Jordan Greenblatt: Yeah.
>>: Do these constants depend on the clique [indiscernible]? You know, from the argument, and you know [indiscernible] mentioned lower bounds.
>> Jordan Greenblatt: I can't remember off the top of my head, but I can get back to you. Yeah, that's the basic proof. I'm a little over time, but I'll just say where the project is going, because I may as well.
My main thing now is trying to figure out to what class of graphs this phenomenon extends, and for what class it fails. For instance, for powers of trees we have exponentially bad bounds. It's not that hard to see; I have a proof on my website, my UCLA website, if you'd like.
>>: Bad bounds in L^p, what does that mean?
>> Jordan Greenblatt: The L^p bounds of the operator grow exponentially in dimension for trees.
>>: They're really, so it's not just the bounds, but they actually. The maximal function, there are functions themselves where [indiscernible]…
>> Jordan Greenblatt: Funny I can…
>>: It's not that the bounds are bad, the actual truth is bad.
>> Jordan Greenblatt: Sorry, say that again.
>>: It’s not just that the bounds you can prove are bad. But the actual behavior is bad.
>> Jordan Greenblatt: Yes, yep. The point is that if you take very large radius spheres in a tree, it really only sees leaves. If you just put mass on tuples of leaves, then any point sees that mass. But I don't want to go into more detail for time reasons.
The main result I have now is that the five-cycle graph has L^2 bounds for its ball maximal operator, which is a far cry from what I really want, but it deals with a lot of the main issues caused by losing the clique's nice property of a simple relationship between averaging in the clique powers and averaging in the clique.
I'm optimistic that that will move forward and we'll be able to get small p. Yuval had a nice suggestion for a reference that was an improvement on Stein's original paper that this was sort of following, to maybe get small p's through simpler means and to get explicit bounds in p, which would be nice.
But that's where it's going, and I'm hoping that I can eventually prove that at least for Cayley graphs of Abelian groups you can get dimension-free bounds. Then, if that's true, try to expand it as far as I can and see where it stops. Thank you very much.
[applause]
>>: What’s your guess on what’s the right condition on a graph, what?
>> Jordan Greenblatt: At the moment, so, the techniques clearly require, the spectral techniques anyway clearly require, that the spherical averaging operators in the base graph commute with one another, because that's the necessary and sufficient condition for them to commute in the graph powers. I think there is a decent chance that that is sufficient for dimension-free bounds, but that's definitely not obvious.
>>: What, so that's true in cliques, that you need Abelian, so that's, right. What is the assumption if I'm in a Cayley graph, what is it?
>> Jordan Greenblatt: A Cayley graph of an Abelian group will have that property.
>>: Right, but if it’s not Abelian then…
>> Jordan Greenblatt: It may or may not; it depends on whether it can be, you know, constructed otherwise. It would include distance-regular graphs; it would include Cayley graphs of Abelian groups. I'm trying to get a sense, in the process, of what that condition means geometrically. I have some guesses at that, but those are things that I don't want to say because I haven't explored them enough. They might be stupid guesses.
>>: That is in turn about the technique. But say the base graph is any transitive graph, then do you have a counter[indiscernible]? The trees have these very different, the leaves are so different from the center. But if you have a transitive graph, where all vertices are versions of the same, will it be enough?
>> Jordan Greenblatt: Yeah, it’s a good question.
>>: I mean, won't P_k and P_{k-1} commute in this case, if it's vertex transitive?
>> Jordan Greenblatt: It's possible, but I don't think that's obvious. I mean, I don't think it's obvious for non-Abelian groups, which are a special case of that, but.
>>: Why is it not true?
>> Jordan Greenblatt: Yeah, so it might be; it's an odd condition. Right now it looks very algebraic and contrived. I'm hoping that there's a more natural way to phrase that condition. But at the moment my main focus is getting this five-cycle bound and moving onward, because I think it will also become easier to characterize when I have a better sense of the proofs for wider classes of graphs.
>>: Okay.
>> Jordan Greenblatt: Thank you very much.
>> Yuval Peres: Any other questions?
>>: Just maybe p, I mean you only showed p equals two...
>> Jordan Greenblatt: Yeah, I meant to say this partway through. I decided to only show p equals two because, and this is something you all have alluded to, the proof for p less than two uses techniques that are totally different and very technical. It's interpolation theory, and most of the juicy stuff is in the L^2 bound. You know, it's possible that I'll be able to present the L^p bounds if Yuval's suggestion pans out, but at the moment it's just not worth the time, because it's pages of stuff. There's a good reference in Nevo and Stein's paper, and in the Stein maximal ergodic theorem paper that [indiscernible].
>>: Stein himself [indiscernible] first starts with two and then…
>> Jordan Greenblatt: Right, I mean, the proof does start this way; this is the beginning of the proof, so it's a convenient stopping point. But the rest of it is just, like, very arithmetic, and complex analysis. Yeah.
>> Yuval Peres: Alright, so no more questions. Let’s…
[applause]
>> Jordan Greenblatt: Thank you.